					Black Ops of DNS



     Dan Kaminsky
          Introduction
Who am I?
 Senior Security Consultant, Avaya
  Enterprise Security Practice
 Author of “Paketto Keiretsu”, a collection of
  advanced TCP/IP manipulation tools
 Speaker at Black Hat Briefings
     Black Ops of TCP/IP series
     Gateway Cryptography w/ OpenSSH

   Protocol Geek
What’s On The Plate for Today?
/* char descrip[256] = "You'll see"; */
               What is DNS
DNS: Domain Name System
   Mechanism for translating human-readable names
    into machine routable addresses
“Like 411 for the Internet”
   As 411 usually but not always yields simple phone
    numbers, DNS usually but not always yields IP
    addresses
        A: Given name, find IP
        MX: Given name, find Mail
        PTR: Given IP, find name
        TXT: Given name, find “stuff”
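For concreteness, a rough sketch of those four lookup types in Python, assuming the third-party dnspython (>= 2.0) library; the queried names are just placeholders:

# Sketch only: assumes the third-party dnspython (>= 2.0) library.
import dns.resolver
import dns.reversename

resolver = dns.resolver.Resolver()

# A: given name, find IP
for rr in resolver.resolve("www.doxpara.com", "A"):
    print("A  ", rr.address)

# MX: given name, find mail
for rr in resolver.resolve("doxpara.com", "MX"):
    print("MX ", rr.preference, rr.exchange)

# PTR: given IP, find name
for rr in resolver.resolve(dns.reversename.from_address("66.33.213.202"), "PTR"):
    print("PTR", rr.target)

# TXT: given name, find "stuff"
for rr in resolver.resolve("doxpara.com", "TXT"):
    print("TXT", b"".join(rr.strings))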
“Useful” Traits of DNS (Very Very Abridged)
Hierarchical
   .com says where to find addresses in .doxpara.com, and
    .doxpara.com says where to find addresses in
    foo.doxpara.com
Recursive vs. Iterative Lookups
   Iterative Lookup: Ask a server a question, it tells you where
    to go to find out the answer
   Recursive Lookup: Ask a server, it goes out and finds out
    the answer for you, and tells you
        It queries the hierarchy…which you may control
Caching
   Responses contain a TTL – Time To Live – within which
     future requests don't require another message to be sent
Primary Research Areas for DNS
Exploitation
  1999-2000 were filled with exploits against
   BIND, the most common DNS server
  Not terribly vulnerable now

DNS Spoofing
    Returning false addresses = hijack
      people's outgoing net connections
DNS Tunneling
         DNS Tunneling [1]
How
   Client -> Server
        What's the information for BATCH-OF-ENCODED-
         DATA.doxpara.com?
   Server -> Client
        The information? Why, it's “HERES-THAT-DATA-YOU-
         WERE-LOOKING-FOR”
Why?
   DNS is extremely permeable – it will route through
    architectures where often nothing else will
        Captive portals for Wireless Internet
        “More” ;-)
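A minimal sketch of that exchange from the client side, assuming dnspython and a hypothetical tunnel zone t.doxpara.com whose authoritative server puts the downstream payload in a TXT answer:

# Sketch only: assumes dnspython and a cooperating authoritative server
# for the hypothetical tunnel zone t.doxpara.com.
import base64
import dns.resolver

def tunnel_exchange(upstream: bytes) -> bytes:
    # Client -> Server: the upstream data rides inside the query name.
    # Base32, since names are case-insensitive (real tunnels must also
    # chunk into 63-byte labels; see the Problems slide).
    label = base64.b32encode(upstream).decode().strip("=").lower()
    answer = dns.resolver.resolve(label + ".t.doxpara.com", "TXT")
    # Server -> Client: the "answer" is really the downstream payload.
    return base64.b64decode(b"".join(answer[0].strings))

# e.g. downstream = tunnel_exchange(b"SSH-2.0-OpenSSH")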
       Starting Simple:
      DNS Tunneling [0]
Who?
   NSTX most popular
      Creates a “virtual network device” that routes IP
       (actually, Ethernet frames) over DNS
      Linux Only

   Rumors of various botnets / malware using
    DNS as a covert channel
     DNS Tunneling[2]:
    Entering Userspace
Starting “Simple”
   NSTX requires kernel cooperation to get at IP
   Let's make something that doesn't require the
    kernel, but still allows remote networking
        Remote Networking: “I'm on this network, but all my
         traffic is routed through that network over there –
         preferably securely”
        Normally done with VPNs (also kernel level)
        SSH Dynamic Forwarding allows secure remote
         networking over a single TCP port (Poor Man's VPN)
        So let's start with SSH over DNS
         DNS Tunneling[3]:
            Problems
DNS is not TCP
   TCP moves bytestreams, DNS moves records
        Blocks of data
   TCP lets either side speak first, while in DNS, the
    server can only talk if the client asks something
   TCP is 8 bit clean, while DNS can only move a
    limited set of characters in each direction
    (Base64)
   This seems so familiar…
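A sketch of what the adaptation layer ends up doing: make the bytestream DNS-safe and slice it into labels of at most 63 characters, keeping each full name under 255 bytes. The t.doxpara.com zone is a placeholder:

# Sketch only: turn an 8-bit bytestream into DNS-safe query names.
# t.doxpara.com is a placeholder tunnel zone.
import base64

MAX_LABEL = 63          # DNS limit per label
LABELS_PER_QUERY = 3    # keeps the whole name comfortably under 255 bytes

def bytestream_to_qnames(data: bytes, domain: str = "t.doxpara.com"):
    # Base32 survives case-insensitive resolvers; a sequence number keeps
    # the record-oriented transport in order.
    encoded = base64.b32encode(data).decode().strip("=").lower()
    step = MAX_LABEL * LABELS_PER_QUERY
    for seq, start in enumerate(range(0, len(encoded), step)):
        piece = encoded[start:start + step]
        labels = [piece[i:i + MAX_LABEL] for i in range(0, len(piece), MAX_LABEL)]
        yield "%d." % seq + ".".join(labels) + "." + domain

# e.g. list(bytestream_to_qnames(b"SSH-2.0-OpenSSH_3.8p1"))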
      DNS Tunneling[4]:
         Mini-HTTP
The semantics of DNS are surprisingly similar to
those of HTTP
Many tools have been written with the “let's tunnel
everything over HTTP” methodology because it gets
through firewalls more easily (see first point)
Those tools that support small message sizes (like
GNU httptunnel) can be quickly modified to use DNS
as an alternate transport
   Must use separate streams for upstream vs. downstream,
    since downstream reflects all data from upstream (similar to
    HTTP, but on a per packet basis)
But DNS has a feature HTTP servers don't…
    DNS Tunneling[5]:
    Recursive Redux
Recursive lookup: Ask another host to
iterate through the hierarchy to find your
answer
 Why, it's as if every web server was also a
  web proxy...
 Simple Trick: Bounce your traffic off any
  DNS server (like, for instance, a captive
  portal's DNS for free WiFi)
 But there's better…
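The bounce itself is one line of client configuration; a sketch assuming dnspython, with 10.0.0.1 standing in for the captive portal's resolver:

# Sketch only: route the tunnel through somebody else's recursive server.
# 10.0.0.1 stands in for the captive portal's DNS.
import dns.resolver

bounce = dns.resolver.Resolver(configure=False)
bounce.nameservers = ["10.0.0.1"]

# Every query now recurses through that server on our behalf; the tunnel
# endpoint authoritative for t.doxpara.com still sees (and answers) it.
answer = bounce.resolve("payload-goes-here.t.doxpara.com", "TXT")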
         DNS Tunneling[6]:
            Set 'em up…
Some DNS servers are dual hosted
   External interface is sending names out
   Internal interface is bringing names in
   Two interfaces because one is behind the firewall
    and one is in front
Subdomain Delegation
   It is possible to claim that “foo.doxpara.com” is
    hosted at an arbitrary name server with an
    arbitrary IP address
        …even if you yourself can't route to that address…
        …someone else can…
 DNS Tunneling[7]:
…and knock 'em down
Incoming SSH to Protected Networks via
Recursive DNS
   1. Wrap SSHD in dns-ized httptunnel
   2. Request that remote DNS daemon look up a subdomain
    of a domain you control. Tell that DNS it'll find the answer it
    seeks at the internal server with the SSHD-DNS.
   3. SSHD-DNS receives forwarded request, responds to it.
    Remote DNS daemon forwards that response back to you.
   4. Continue with normal SSH over DNS semantics.
Note: This is actually a bad thing
Changing the Gameplan
Most tricks require a DNS server under the
requester's control
   The client and the final server conspire against the
    recursive server in the middle
But what if there is no DNS server under the
client's control?
   What can a client do with queries alone?
   Can two clients communicate with each other
    through a DNS server?
                  DNS Cache
                 Modulation[0]
DNS stores the results of queries, along with a TTL (Time To
Live)
    Well known Information Leakage: If someone else looked up a site
     first, the TTL is different.
         Example:
         root@bsd:~# dig @129.210.8.1 mail.layerone.info; sleep 100; dig
          @129.210.8.1 mail.layerone.info
               First Reply: mail.layerone.info. 4H IN A  66.33.213.202
               Second Reply: mail.layerone.info. 3h58m19s IN A 66.33.213.202
         Destructive – If nobody else made a particular query, then the probe
          itself becomes the standard bearer for the maximum TTL's worth of seconds.
    Not well known: Two hosts can communicate with each other
     through the state change introduced by a particular query
         Possibly the lowest bandwidth channel available, at 1 bit per query
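A sketch of the information-leak probe, assuming dnspython; the resolver and name are the ones from the dig example above:

# Sketch only: infer whether anyone asked this resolver for a name recently
# by comparing the cached TTL against the zone maximum. Assumes dnspython.
import dns.resolver

def probe_ttl(resolver_ip: str, name: str) -> int:
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [resolver_ip]
    return r.resolve(name, "A").rrset.ttl

ttl = probe_ttl("129.210.8.1", "mail.layerone.info")
# TTL at the zone maximum (4h above): we were the first asker (destructive!)
# TTL already decremented: somebody else asked before us
print(ttl)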
             DNS Cache
            Modulation[1]
Temporal Bit Mapping
   The sender requests a low traffic, preferably low TTL name
    from the server.
   The receiver requests the same name, and sees it at some
    decremented TTL value. It thus knows at approximately
    what time the cache entry will expire.
   Within some “sender window” after expiration, the sender
    either does (1) or does not (0) issue another request for the
    same name
   Within some “receiver window”, the receiver checks the
    name after the window expiration. If the TTL is max, then
    the bit was 0. If the TTL is not max, then the bit was 1.
   Capacity = 1 bit per Max TTL period (slowwww)
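A sketch of that one-bit-per-period channel, assuming dnspython; MAX_TTL, the probe name, and the shared resolver are placeholders, and window timing is omitted:

# Sketch only: one bit per Max TTL period through a shared cache.
# MAX_TTL, NAME and RESOLVER are placeholders; window timing is omitted.
import dns.resolver

MAX_TTL = 300
NAME, RESOLVER = "lowtraffic.doxpara.com", "129.210.8.1"

def lookup_ttl() -> int:
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [RESOLVER]
    return r.resolve(NAME, "A").rrset.ttl

def send_bit(bit: int) -> None:
    # Inside the sender window: query (1) or stay silent (0).
    if bit:
        lookup_ttl()

def recv_bit() -> int:
    # Inside the receiver window, just after the old entry expired:
    # a full-TTL answer means nobody re-primed the cache (0); anything
    # already decremented means the sender queried first (1).
    return 0 if lookup_ttl() == MAX_TTL else 1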
              DNS Cache
             Modulation[2]
“Spatial Bit Mapping”
   Spread the load across multiple names, probably
    each retrieved off a wildcard server
        Wildcard – 00000001.doxpara.com still resolves
   Bits are “set” within a setting window
   Bits are destructively read within a reading
    window, using an identical TTL strategy.
   Next series of bits may be sent when TTL of last
    sent request by the sender expires.
   Capacity = 1 bit per name per Max TTL period
    (slow)
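The same idea spread across names, sketched for one byte; the wildcard names and resolver are placeholders:

# Sketch only: one bit per wildcard name per Max TTL period.
# doxpara.com (wildcarded) and the shared resolver are placeholders.
import dns.resolver

MAX_TTL, WIDTH = 300, 8
RESOLVER, DOMAIN = "129.210.8.1", "doxpara.com"

def _ttl(name: str) -> int:
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [RESOLVER]
    return r.resolve(name, "A").rrset.ttl

def send_byte(value: int) -> None:
    for bit in range(WIDTH):                 # set bits by priming the cache
        if value & (1 << bit):
            _ttl("%08d.%s" % (bit, DOMAIN))  # wildcard: any label resolves

def recv_byte() -> int:                      # destructive read
    value = 0
    for bit in range(WIDTH):
        if _ttl("%08d.%s" % (bit, DOMAIN)) != MAX_TTL:
            value |= 1 << bit
    return value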
               DNS Cache
              Modulation[3]
Bidirectionality
   Simply use two separate channels, or timeshare one
    channel
   It's just a shared medium
Adaptability
   Anything that can be changed by one and seen by another
    can be used to send data
        Web Counters
        IP ID counters in IP Stacks
        Whether a car was washed
   “Did it increment or not? Were there sharp spikes between
    6:01 and 6:02?”
   “You can always send a bit” – can we send more?
History of DNS Storage
Long history of storing data in DNS
   Means
        TXT records
        AXFR Zone Transfer (can be arbitrarily large, but doesn't
         work from almost any restricted network since it's TCP)
   Downsides
        Very inefficient – packet size (w/o AXFR) is locked below
         512 bytes, and you only get a little more than half of the
         packet filled with a payload
        DNS can get really slow, especially under load
   Upsides
        Everyone has a DNS server, and it caches
                   KDNS[0]
What do we have?
 A Very Dynamic DNS server
 A Desire to Send More Than A Bit
        Fine, I'll go host a name on a server
   A challenge to send something new
What do we get?
                 KDNS[1]
Voice over DNS – TXT w/ Streaming Audio
Speex codec supports Voice compression at
~2kbps (best public codec)
   Ends up (with headers) being about 356
    bytes/second
   We can traffic 356 bytes per second through even
    extremely slow DNS servers
Power to the Caching People
   Use a TTL of WINDOW seconds (~20s)
   All listeners behind the same DNS server will split
    the same “stream”
                    KDNS[2]
Server HOWTO
   <timestamp>.server.com
        TTL=WINDOW
        TXT (or MX) = 1.0s or 0.5s of audio
   latest.server.com
        CNAME to <window>.<timestamp>.server.com
        TTL=0
        This may be broken by resolvers with minimum TTL
         requirements (implemented to fight Dynamic DNS server
         loads)
                     KDNS[3]
Client HOWTO
   Retrieve latest.server.com, learn
    <window>.<timestamp>.server.com
   Retrieve audio samples from <timestamp-
    window>.server.com up through
    <timestamp>.server.com
        Prefer to start playing at a packet that has spent some
         time decrementing in the cache – binary search for oldest
         sample within the window
        Add random jitter for the start sample – this way,
         everyone‟s at a different point in the stream, and if the
         “lead looker” drops, someone else will be next up to be
         first in line
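A sketch of the client loop, assuming dnspython and the naming scheme from the two HOWTO slides; server.com, the shared resolver, and the Speex hand-off are placeholders:

# Sketch only: pull the shared audio stream out of a caching resolver.
# Assumes dnspython; server.com, the resolver IP and the Speex hand-off
# are placeholders.
import base64
import dns.resolver

r = dns.resolver.Resolver(configure=False)
r.nameservers = ["129.210.8.1"]             # any caching server listeners share

# 1. Learn the newest sample name; TTL=0 on the CNAME keeps it uncached.
latest = str(r.resolve("latest.server.com", "CNAME")[0].target)
window, timestamp = latest.split(".")[0:2]

# 2. Walk the window and fetch each second of audio from the cache.
#    (The slides add a binary search for the oldest cached sample plus
#    random jitter; omitted here.)
for t in range(int(timestamp) - int(window), int(timestamp) + 1):
    txt = b"".join(r.resolve("%d.server.com" % t, "TXT")[0].strings)
    speex_frame = base64.b64decode(txt)     # hand off to a Speex decoder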
                KDNS[4]
Alternate Implementation: Proxy-Friendly
HTTP Streaming
   Abandon DNS entirely; simply distribute 3-10s
    chunks of audio over HTTP and let proxy servers
    share them out
   Requires client cooperation, and greatly increases
    latency, but solves the (bandwidth) problem that
     proxies don't cache streams
   Proxies don't support TTLs, though – a server side
    script would need to monitor that
   HTTP of course supports much higher bandwidth!
                 Crossroads
Everything we've done with DNS is slow and
low bandwidth?
   DNS servers store little data, and may return it
    relatively slowly
Can't we go any faster? Is there no DNS
“solution” with capacity?
   Many hands make light work
   There are many many many DNS servers
        And almost all of them cache.
            DomainCast[0]
The basic concept
   Normal DNS operation: Talk to your own server; it
    retrieves data on demand from the official server
    upstream
   Domaincast DNS operation: Talk to servers with
    different blocks preloaded into their cache; each of
    those blocks refers to the location of other cached
    blocks. Upstream server has actually preloaded
    data (through directed queries) into all of the
    servers.
        This is possible. This is not necessarily a good idea.
         Don’t try this at home.
               DomainCast[1]
Sidestepping Limited Resources
   Capacity
        20K/server = ~80 records @ 256bytes
        700MB = Knoppix, a full Linux distribution
        700MB / 20K = 35,000 servers required
              Number of DNS servers detected in a single class A:
               +140,000 (More on this later)
   Speed
        1KB/s * 35,000 = 35MB/s
              Would require upload of ~3.5MB/s
              ~50% packet / formatting overhead = 17MB/s
        Not going to achieve peak bandwidth (it'd shred
         reliability) – but not going to get 1k/s either
               DomainCast[2]
Packet Formats
   Request: <offset>.<filename>.server.com
   Reply – Either TXT or MX
        TXT – Base64 Representation of Fixed Size Struct
         (describing file size, byte offset, data, and other servers)
        MX – Can also represent data as mail server addresses
              Requires dots every 64 characters
              Responses are reordered randomly – must sort by
               “precedence”
              Some headaches with extra information being
               “volunteered”
              MX may stress BIND servers more (triggers search tree)
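The slide only names the fields, so the layout below is an assumption; a sketch of packing one block into the Base64 TXT payload:

# Sketch only: field order and sizes here are assumptions; the slide just
# says "file size, byte offset, data, and other servers".
import base64
import socket
import struct

BLOCK = 256

def pack_block(file_size, offset, data, next_servers):
    ips = b"".join(socket.inet_aton(ip) for ip in next_servers[:4])
    payload = (struct.pack("!IIB", file_size, offset, len(next_servers[:4]))
               + ips + data[:BLOCK])
    return base64.b64encode(payload)         # this string rides in the TXT record

# e.g. pack_block(700 * 2**20, 0, b"\x00" * 256, ["10.1.2.3", "10.4.5.6"])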
      DomainCast[3]
Basic Mode – All packets retrieved from
master server, provide linked list
pointers to same host
 0.file.server.com = first block, here for
  second
 256.file.server.com = second block, here
  for third
 Etc.
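A sketch of the matching client side in basic mode, assuming dnspython and the pack_block() layout assumed above:

# Sketch only: basic mode, every block on the master server, each one
# pointing at the next offset on the same host. Assumes dnspython and the
# pack_block() layout assumed above.
import base64
import struct
import dns.resolver

def fetch_file(filename, domain="server.com"):
    r = dns.resolver.Resolver()
    out, offset, size = b"", 0, None
    while size is None or offset < size:
        txt = b"".join(r.resolve("%d.%s.%s" % (offset, filename, domain),
                                 "TXT")[0].strings)
        raw = base64.b64decode(txt)
        size, offset, nserv = struct.unpack("!IIB", raw[:9])
        data = raw[9 + 4 * nserv:]
        out += data
        offset += len(data)                  # next name: <new offset>.<file>.<domain>
    return out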
             Domaincast[4]:
Population via Brute Force
Simplest Approach:
    Find enough servers that recurse
         DNS servers don't move rapidly -> Scan amortizes well
          across many populations
     Plan out who's going to store which blocks
    Populate servers with data, references to planned
     other servers
    Easy to validate caching, though not easy to fix a
     broken reference
         Can populate out of order – 64 populating, verifying
          streams
          Domaincast[5][0]:
Reverse Serial Propagation
 The problem: To populate the first server, you need
 the address of the second server. But to populate
 the second, you need the addr of the third. What to
 do?
 Answer: Populate backwards. The last server
 points nowhere. The second to last points to the last.
 The third to last points to the second to last.
    Multi-server – last server can equal “last server group”.
         Server groups may be larger than packet capacity to list
          servers – just include a random set
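The ordering trick itself, sketched; populate() stands in for whatever directed query loads a block into a given server's cache:

# Sketch only: plan the blocks, then populate last-to-first so every record
# can already name the server(s) holding the block after it. populate()
# stands in for the directed query that primes a given server's cache.
def populate_reverse(blocks, servers, populate):
    next_group = []                          # the last block points nowhere
    for data, server in zip(reversed(blocks), reversed(servers)):
        populate(server, data, next_group)
        next_group = [server]                # the previous block will reference this
                                             # (a real run would keep a server group)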
         Domaincast[5][1]:
Reverse Serial Propagation
Can be quickly and statelessly deployed
   Scan networks with generic recursive probe
   For each incoming request seeking to service the
    probe, return whatever(TTL=0) and probe with an
    actual block request
        If a block request comes back from the recurser,
         populate the server
        If the population packet drops, the upstream should
         retransmit
   Move back through the file after each server group
    fills up
   Can be much slower to populate!
         Large Scale DNS
           Scanning[0]
Modding the scanrand codebase…
   Basic scanrand concept: One half spews traffic, the other
    sees what comes back. Little to no communication between
    the two halves = fast!
   Scanrand manages TCP; statelessness is a bit of a novelty
     to it. DNS is built on UDP, raw packet manipulation isn't
    even necessary (it was useful, though).
   Reflection: Stateless operation depends on sending packets
    out and forgetting, only to have the necessary data returned
    back in the response packet
        TCP barely reflects back 48 bits worth of data
        DNS reflects back whatever was in the query = thousands of
         bits if you want 'em!
        Crypto signatures can be embedded in the reflection
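A sketch of that reflection idea, assuming dnspython: stamp each query with an HMAC over the target so any response can be validated without keeping per-probe state:

# Sketch only: stateless probing. The query name itself carries the target,
# authenticated with an HMAC, so the listening half can validate any reply
# without remembering the probe. Assumes dnspython; server.com is a placeholder.
import hashlib
import hmac
import dns.message

KEY = b"scan-session-secret"

def make_probe(target_ip: str) -> dns.message.Message:
    tag = hmac.new(KEY, target_ip.encode(), hashlib.sha1).hexdigest()[:16]
    qname = "%s.%s.server.com" % (tag, target_ip.replace(".", "-"))
    return dns.message.make_query(qname, "A")     # fire and forget

def verify_reply(qname: str) -> bool:
    tag, ip = qname.split(".")[0:2]
    good = hmac.new(KEY, ip.replace("-", ".").encode(),
                    hashlib.sha1).hexdigest()[:16]
    return hmac.compare_digest(tag, good)         # no per-probe state needed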
         Large Scale DNS
           Scanning[1]
Floodable DNS Queries implemented by
miname
   Is anyone there, recursive or not?
        Many hosts that don't support recursion will at least
         provide a “NXDOMAIN”, but not all will
        We need a generic query that appears to almost always
         work
             1.0.0.127.in-addr.arpa PTR -> 127.0.0.1 = localhost
                 Everyone is localhost

   Will you recurse back to me?
        BIG_COOKIE.server.com
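Both probes sketched as direct queries, assuming dnspython; BIG_COOKIE and server.com are the placeholders from the slide:

# Sketch only: the two generic probes as direct queries, assuming dnspython.
import dns.flags
import dns.message
import dns.query

def probe(server_ip: str):
    # "Is anyone there?" -- almost everyone will answer for localhost.
    q1 = dns.message.make_query("1.0.0.127.in-addr.arpa", "PTR")
    r1 = dns.query.udp(q1, server_ip, timeout=2)

    # "Will you recurse back to me?" -- a cookie under a zone we control;
    # if it reaches our authoritative server, this host recurses for us.
    q2 = dns.message.make_query("BIG_COOKIE.server.com", "A")
    r2 = dns.query.udp(q2, server_ip, timeout=2)
    return bool(r1.answer), bool(r2.flags & dns.flags.RA)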
        Large Scale DNS
          Scanning[2]
Returned Results
 Direct: You requested, they responded.
 Forward Lookup: You requested, they
  requested back to you, you responded.
       Often the server that asks you back is not the
        one you originally queried
   Reverse Lookup: You requested…and
    someone wondered who you were asking
    such questions
         Large Scale DNS
           Scanning[3]
More on Reverse Auditing
   Not being too secretive
        root@bsd:~# host 64.81.64.164
        164.64.81.64.IN-ADDR.ARPA domain name pointer
         mail.dan.at.doxpara.com
   L0pht's Antisniff project – locate sniffers and IDS's by
    watching reverse DNS traffic on the LAN
   Miname w/ Co-opted Servers: Watch sniffers and IDS's
    across the entire internet
        TTL=0 on PTR replies means they continually look you up
         when you pass by them
        Hard, but not impossible, to associate a reverse lookup with the
         node that was scanned to attract it
              Temporally associated w/ scan
              Multiple scans, out of order, should provide the required correlation
              Traceroutes may help too – we know who's looking us up
  Rendering The Flood
How much data could it be?
   [root@fire root]# ls -l dns.log
    -rw-r--r--  1 root  root  360215644 May 28 12:42 dns.log
How are we going to visualize this?
   3D space – could use the tools we used to
    visualize the complexity of arbitrary data
Phentropy w/ OpenQVIS
                 3D Viz[0]
3D Space provides a potentially
valuable perspective on extremely
complex datasets
Volume tools have gotten more mature
   Volsuite: Free, Open Source, Full Color,
    Cross Platform, Fast
        Video games = New Absurdly High Speed
         Graphics Chipsets
                     3D Viz[1]
“Volumetric” – Texture Based
      Stacks of transparent photos
      Arbitrary complexity data – it all gets blurred in
            Limits positional accuracy
      Totally static – you can't animate the population
“Particle” – Vertex Based
      Dots! Dots everywhere!
      Arbitrary positional accuracy – floating point coordinates
       can be zoomed into
            Limits number of particles
      Very easy to make dynamic
            Dynamic particles can express higher dimensional data
                    3D Viz[2]
Dimensions
   Static: XYZ, Color
   Dynamic
        Shape
        Direction
        Color Shift
        Trajectory Shift
        Brightness / “Flare”
   Surprising aspect of dynamics: Motion away from
    source point appears to bolster memory /
    perception of that point
                3D Viz[3]
Under Development – As yet unnamed
Generic 3D Plotter
 Coordinates are streamed in, perhaps with
  information about how to render each
  particle
 A more generic version of the “Spinning
  Cube of Potential Doom”
        Parallel development
        Conclusion
Stuff = Cool
More Stuff Soon

				