

Denial-of-Service Attacks and Defenses
Jinyang Li
            DoS in a nutshell
• Goal: overwhelm a victim site with huge amounts of traffic or work
• How?
  – Workload amplification
    • Exploit protocol design oversights
    • Small work for attackers → lots of work at victims
  – Brute-force flooding
    • Command botnets
           Lecture overview
• A “tour” of attacks on protocol bugs at many layers
  – Link, IP, TCP/UDP, DNS, applications
• In-network DoS mitigations
  – IP Traceback [Savage et al. SIGCOMM '00]
  – TVA [Yang et al. SIGCOMM '05]
    Link layer: attacks on 802.11
        [Bellardo, Savage, USENIX Security '03]

• De-authentication attack
  – De-auth packets not authenticated
  – Attackers forge de-auth packets to AP
  – De-authenticated clients lose communications

• Trick a victim into setting a large NAV (max = 2^15)
  – NAV is meant for (honest) nodes to reserve channel
    (in RTS/CTS mode)

  Small work for attacker, lots of overhead at the victim
             Smurf ping attack
       Ping: dst=broadcast src=victim

attacker                                victim

• Every host on the broadcast subnet replies to the victim's forged address
   DNS traffic amplification
• Attackers send DNS queries with forged
  source address
• DNS servers send response to victims
• 50X traffic amplification
  – DNS query size (60-byte UDP)
  – DNS reply (w/ extensions) 3000 bytes
         TCP SYN flood
             SYN: SNC, daddr, dport

client                   server

                 TCP SYN flood


      attacker     SYN (forged src C1, C2, C3, …)     server

• Attacker floods SYNs with forged source addresses
• Victim keeps connection state while awaiting the ACK
• Legitimate connections are rejected once the backlog is full
                 DoS via state exhaustion
            TCP SYN Floods
                OS                 backlog queue size
                Linux 2.6                1024
                FreeBSD 2.1.5            128
                WinNT 4.0                  6

   Backlog lingers (3 minutes): victims must re-send SYN-ACKs

• Real world attacks:
  The Blaster worm (2003) launched a SYN flood against windowsupdate.com
             Defense: SYN cookies
• Keep no state until connection is fully set up
• Encode state in SNS during 3-way handshake

                      SYN: SNC

    client                                 server
                 SYN-ACK: SNC,SNS
                                           SNS = T || 24-bit hash, no state

                             ACK: SNS
                                           Check SNS validity:
                                           if valid, allocate state

             T=5-bit counter
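The scheme above can be sketched in a few lines. This is a hedged illustration, not Linux's exact on-the-wire format: the secret key, the 64-second counter granularity, and the SHA-1 truncation are assumptions (real implementations also encode the client's MSS in the cookie).

```python
# Sketch of a SYN cookie: the server encodes a 5-bit time counter T plus a
# 24-bit keyed hash into its initial sequence number SNS, so it keeps no
# per-connection state until a valid ACK arrives.
import hashlib, time

SECRET = b"server-secret"  # assumed server-side key

def _mac24(src, sport, dst, dport, t):
    """24-bit truncated keyed hash over the connection 4-tuple and counter."""
    msg = f"{src}:{sport}:{dst}:{dport}:{t}".encode()
    return int.from_bytes(hashlib.sha1(SECRET + msg).digest()[:3], "big")

def make_cookie(src, sport, dst, dport, now=None):
    """SNS = T (5 bits) || mac (24 bits)."""
    t = int(now if now is not None else time.time()) // 64 & 0x1F
    return (t << 24) | _mac24(src, sport, dst, dport, t)

def check_cookie(sns, src, sport, dst, dport):
    """Recompute the mac from the ACK's 4-tuple; allocate state only if valid."""
    t = (sns >> 24) & 0x1F
    return (sns & 0xFFFFFF) == _mac24(src, sport, dst, dport, t)
```

A flooded server can thus discard every SYN after replying; only the final ACK, which proves the client received the SYN-ACK, costs the server any memory.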
  Inferring SYN Flood activity
• Backscatter: victim servers send SYN-ACK to
  random IPs
• Monitor unused IP address space
  – Unsolicited SYN-ACKs could be from SYN floods
• Found 400 SYN attacks/week [Moore, Voelker, Savage, USENIX Security '01]
        Higher level DoS

• Flood the victim's DNS server
• Send HTTP requests for large files
• Make victims perform expensive ops
  – SSL servers must decrypt first message
• Requests for expensive DB operations
     End-system solutions

Idea: increase clients' workloads
1. Computational client puzzle
                Client puzzles
• Make clients consume CPU before service
• Example puzzle:
  – Find X s.t. the rightmost n bits of SHA-1(C || X) are 0
  – Clients take O(2^n) time to find an answer
  – Servers take O(1) time to check (one hash)
    • n is tunable, depending on attack volume
• Server checks the solution before doing work
  – SSL: server checks solution before decryption
• Make clients solve puzzles only during attack
• Make clients devote “human” resources

• Make clients solve CAPTCHAs before performing
  DB ops during an attack [Killbots NSDI '05]
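The hash-reversal puzzle described above is easy to sketch. The 8-byte encoding of X and the use of SHA-1 via Python's hashlib are illustrative choices; any encoding the client and server agree on works.

```python
# Client puzzle: find X such that SHA-1(C || X) ends in n zero bits.
# Solving takes O(2^n) expected hashes; verifying takes one hash.
import hashlib

def last_n_bits_zero(digest: bytes, n: int) -> bool:
    return int.from_bytes(digest, "big") & ((1 << n) - 1) == 0

def solve(challenge: bytes, n: int) -> int:
    """Brute-force search the client must perform before getting service."""
    x = 0
    while True:
        d = hashlib.sha1(challenge + x.to_bytes(8, "big")).digest()
        if last_n_bits_zero(d, n):
            return x
        x += 1

def verify(challenge: bytes, x: int, n: int) -> bool:
    """O(1) server-side check: a single hash."""
    d = hashlib.sha1(challenge + x.to_bytes(8, "big")).digest()
    return last_n_bits_zero(d, n)
```

Raising n by one doubles the client's expected work while leaving the server's check unchanged, which is what makes the difficulty tunable to attack volume.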
DoS mitigations inside the network
    Network-level solution:
     source identification
• Goal: block attacks at the source
• Problem: attack traffic forges source IPs
• Possible solution: ingress filtering
  – ISP should only forward packets with
    legitimate source IP
   Requires all ISPs (in all countries) to perform filtering
   IP traceback               [Savage et al. SIGCOMM '00]

• Goal: Determine paths to attack sources based on
  attack packets
• Insights:
  – Routers record info in packets
  – Victim assembles info from large amounts of attack packets
• Assumptions:
  –   Attackers might generate any packet (with markings)
  –   There could be more than one attack path
  –   Routers are not compromised
  –   Paths are stable during attack
Caveat: traceback is approximate
[Figure: attackers A1–A3 reach victim V through a tree of routers
 (R5–R7 feeding into R3–R4). Ideal traceback recovers the exact attack
 paths; approximate but robust traceback recovers a superset of path
 edges, e.g. a reconstructed sequence such as R10 R9 R3 R6 R3 R2 V.]
       Potential solutions:
        #1 node append
• A router appends its address to each packet
 + Each packet contains the entire attack path

 – Not enough space in packets
 – Can be expensive to implement
          Potential solutions:
          #2 node sampling
• A router records its address in a single
  field with probability p
• Victim orders recorded routers into a path
  – Place routers with fewer samples farther away
 + Easy to implement
 – p must be > 0.5, so forged markings cannot change the path order
 – Converges slowly
 – Not robust against multiple attackers
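The slow convergence can be quantified. A received packet carries the mark of the router at distance d only if that router marked it (probability p) and none of the d−1 downstream routers overwrote the mark (probability 1−p each), so sample fractions fall off geometrically with distance:

```python
# Why node sampling converges slowly: the fraction of packets carrying the
# mark of the router at distance d from the victim is p * (1-p)^(d-1).
def mark_fraction(p: float, d: int) -> float:
    """Probability that a received packet carries the router at distance d."""
    return p * (1 - p) ** (d - 1)

# With p just above 0.5 (required so attackers cannot reorder the path),
# a router 20 hops away is sampled in well under 10^-6 of the packets,
# so ordering routers by sample counts needs millions of attack packets.
```

This is exactly the tension on the slide: p > 0.5 is needed for robust ordering, but large p starves distant routers of samples.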
            Traceback solution:
              Edge sampling
• Record an edge with probability p in packets
  // pkt.start, pkt.end encode an edge
  // pkt.distance is the distance from the edge to the victim
  r = random(0,1)
  if (r < p)
      pkt.start = self
      pkt.distance = 0
  else
      if (pkt.distance == 0)
          pkt.end = self
      pkt.distance = pkt.distance + 1
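The marking rule can be exercised end to end in a small simulation. The router names, path length, and marking probability p = 0.2 below are made up for illustration:

```python
# Runnable sketch of edge sampling: each router on the path applies mark();
# the victim collects (start, end, distance) triples that identify edges
# at known distances from itself.
import random

class Pkt:
    def __init__(self):
        self.start = None    # upstream end of the sampled edge
        self.end = None      # downstream end of the sampled edge
        self.distance = 0    # hops traveled since the edge was sampled

def mark(router, pkt, p=0.2):
    """The edge-sampling rule each router applies to a forwarded packet."""
    if random.random() < p:
        pkt.start = router   # start a new edge sample
        pkt.end = None
        pkt.distance = 0
    else:
        if pkt.distance == 0:
            pkt.end = router  # complete the edge (start, end)
        pkt.distance += 1

random.seed(42)
path = ["R4", "R3", "R2", "R1"]   # attacker side first, victim side last
edges = set()
for _ in range(2000):
    pkt = Pkt()
    for r in path:
        mark(r, pkt)
    if pkt.start is not None:
        edges.add((pkt.start, pkt.end, pkt.distance))
# Sorting the collected edges by distance rebuilds the path:
# (R2, R1, 1), (R3, R2, 2), (R4, R3, 3)
```

With enough attack packets every edge appears at its true distance, which is why the victim needs a large volume of traffic but no router cooperation beyond the marking.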
          Reduce space usage
• Record edge-id = start ⊕ end
  – Work backwards to construct path

      a⊕b         b⊕c          c⊕d            d
  a           b            c            d            V

• Record one of k fragments of edge-id at a time
• Include hash(edge-id) to verify edge-id
  correctness after reconstruction
Result: overload 16-bit IP identifier field to mark packets
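The XOR trick above can be checked with toy router ids (the integer ids below are made up): the router nearest the victim is marked alone, and each farther router is recovered by XORing its edge-id with the already-recovered neighbor.

```python
# Recovering the path from XOR-ed edge-ids: edge-id = start XOR end, and
# (a XOR b) XOR b == a, so the victim peels off one router at a time.
ids = {"a": 1, "b": 2, "c": 4, "d": 8}   # toy 4-router path a-b-c-d-V

# markings received, keyed by distance from the victim
marks = {0: ids["d"],
         1: ids["c"] ^ ids["d"],
         2: ids["b"] ^ ids["c"],
         3: ids["a"] ^ ids["b"]}

path = [marks[0]]                  # nearest router is stored un-XORed
for d in range(1, len(marks)):
    path.append(marks[d] ^ path[-1])   # XOR cancels the known neighbor
# path == [8, 4, 2, 1], i.e. d, c, b, a from victim outward
```

Halving the per-packet space this way is what makes the 16-bit IP identifier field (plus fragmenting and hashing the edge-id) sufficient.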
      Limitations: reflectors
• Reflectors are nodes that respond to traffic
  – Attackers forge the source address so responses are sent to the victim
• Examples:
  – DNS servers
  – Web servers
  – …
• Traceback cannot follow an attack across a reflector:
  the request-to-response mapping happens at higher levels
          Limitations: DDoS

• Numerous sources
  and reflectors attack
• Traceback needs
  O(m^k) work to find
  each valid edge-id
    Capabilities based solution
• Traceback: detects sources after an attack has begun
• Capabilities: prevent nodes from sending
  unwanted traffic
   – let receivers explicitly specify what traffic they want
• [Yang et al. SIGCOMM '05] [Yaar et al. IEEE S&P '04]
  [Anderson et al. SIGCOMM '04]
     Sketch of network capabilities

              source —[cap]→ destination

1.       Source requests permission to send.
2.       Destination authorizes source for a limited transfer, e.g., 32KB
         in 10 secs
     •      A capability is the proof of a destination's authorization.
3.       Source places capabilities on packets and sends them.
4.       Network filters packets based on capabilities.
          TVA Challenges
1. Counter flooding attack
  – Flood initial requests
  – Flood (mistakenly) permitted packets
2. Design unforgeable capabilities
3. Make capability verification efficient
          Challenge #1
     Problem: Request floods

• Requests do not carry capabilities
Solution: rate-limit requests


• A further problem: attackers' requests
  can crowd out good requests
       Solution: fair queue
    requests based on path id
                                         Per path-id queues


• Routers insert path identifier tags [Yaar03].
• Fair queue requests using the most recent tags.
Problem: Flood using
  allowed packets

 Solution: Fair queue permitted
   packets w.r.t. destinations

           cap   cap

           cap   cap

• Per-destination queues
• TVA bounds the number of queues (later)
               Challenge #2:
              Capability design

                     pre1                    pre2

                                                    cap1 cap2

• Routers stamp pre-capabilities on request packets
   – (timestamp, hash(src, dst, key, timestamp))
• Destinations return fine-grained capabilities
   – (N, T, timestamp, hash(pre-cap, N, T))
   – send N bytes in the next T seconds, e.g., 32KB in 10 secs
       Validating capabilities
                  N, T, timestamp, hash(pre-cap, N, T)

               cap1 cap2 data


• Each router verifies hash correctness
   – Check expiration: reject if timestamp + T < now
   – Check byte bound: reject if bytes_sent + pkt_len > N
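The two-step construction and the router-side checks fit in a short sketch. Field sizes, the SHA-1 truncation, and the key handling below are illustrative assumptions, not TVA's wire format:

```python
# Sketch of pre-capabilities and capabilities: a router stamps a keyed
# pre-capability on the request, the destination binds it to limits (N, T),
# and routers later verify the hash, the expiry, and the byte bound.
import hashlib

ROUTER_KEY = b"router-secret"  # assumed per-router secret

def _h(*parts):
    """Short keyless hash over mixed str/int/bytes parts (illustrative)."""
    enc = [p if isinstance(p, bytes) else str(p).encode() for p in parts]
    return hashlib.sha1(b"|".join(enc)).hexdigest()[:16]

def pre_capability(src, dst, ts):
    # (timestamp, hash(src, dst, key, timestamp))
    return (ts, _h(src, dst, ROUTER_KEY, ts))

def capability(pre_cap, n_bytes, t_secs):
    ts, pre = pre_cap
    # (N, T, timestamp, hash(pre-cap, N, T)) -- e.g. 32KB in 10 seconds
    return (n_bytes, t_secs, ts, _h(pre, n_bytes, t_secs))

def router_accepts(cap, src, dst, now, bytes_sent, pkt_len):
    n, t, ts, mac = cap
    _, pre = pre_capability(src, dst, ts)  # router re-derives its pre-cap
    if mac != _h(pre, n, t):
        return False                       # forged capability
    if now > ts + t:
        return False                       # expired
    return bytes_sent + pkt_len <= n       # byte bound
```

Because the router can re-derive the pre-capability from its own key, it verifies capabilities statelessly; only flows that pass the checks consume counting state.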
               Challenge #3:
Efficiently count with bounded state

• Create counting state only for fast flows
  – Fast flows: capabilities sending faster than the minimum rate N/T
• A link with capacity C has < C/(N/T) fast flows
  – min N/T = 3.2 Kbps → 312,500 records at routers
    with a 1 Gbps link

• Implementation: expire state at rate N/T,
  reuse expired state
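The arithmetic behind the bound is a one-liner and worth making explicit:

```python
# A link of capacity C can carry at most C / (N/T) flows whose rate exceeds
# the minimum tracked rate N/T, which bounds the router's counting state.
C = 1_000_000_000        # 1 Gbps link capacity, in bits/sec
min_rate = 3_200         # minimum tracked rate N/T = 3.2 Kbps
max_fast_flows = C // min_rate   # bound on per-flow records at this router
```

With the slide's numbers this gives 312,500 records on a 1 Gbps link, a fixed budget the router can provision in advance.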
       Efficiency: bound queues
Queue on the most recent tags:
  requests        → per path-identifier queues
  regular packets → validate capability → per-destination queues
  legacy packets  → low-priority queue

A destination gets its own queue only if it
receives faster than a threshold rate R

  • Tag space bounds the number of request queues.
  • The number of destination queues is bounded by C/R.
  Efficiency: reduce capabilities'
          packet overhead
• A sender associates a nonce with a capability
• Routers cache nonce to <src,dst> mapping
  – nonce is found in cache => permitted packet
• Caveat: if nonce is evicted, packets are
  treated as legacy and put on slow paths
      Other DoS defenses
• Pushback filtering
  – Iteratively “push back” traffic filters towards attack
    sources along the paths
  – [Mahajan et al. CCR '02, Ioannidis et al. NDSS '02,
    Argyraki USENIX '05]
• Overlay filtering
  – Offline authenticators determine who can send
  – Overlay performs admission control using these
    authenticators
  – [Keromytis, SIGCOMM '02, Andersen USITS '03]
            Conclusions
• One must design protocols with DoS
  attacks in mind
• The current Internet is ill-suited for
  coping with DDoS attacks
• Many good proposals for detection,
  mitigation, and prevention
     Project administrivia
• Dec 10: In-class presentation/demo
• Dec 11: CS department poster/demo
• Dec 17: Final report
     Project presentation
• 8 groups
  – 10 min presentation + 5 min Q&A
  – Demo is preferred
• 10 minutes → ≈10 content slides
      System-based projects
• (2) Explain motivation
  –   What is the problem you are tackling?
  –   Why is it interesting or important?
  –   What are existing designs/systems?
  –   Why are they not good enough?
• (4) Explain your design
  – Give a strawman
  – Specify key challenges
  – Explain your solutions
• (4) Convince with results
  – Did your system tackle the stated problem?
  – Did your design prove to be essential?
   Measurement-based projects
• Explain goal
  – What problem? Similar studies not sufficient?
  – Your study will be useful for …?
    • Designing new protocols
    • Debunking old myths
• Explain measurement methodology
  – A list of experiments and hypotheses
  – How each experiment proves/disproves a hypothesis
• Discuss results
• What is your proudest technical nugget?
  – Convince others it's cool
• What did you learn from your project?
  – Share your lessons with others
• Demo helps
  – Seeing is believing
  How can you do a good job?
• Prepare, prepare, prepare
• Practice talks to non-group members
  – Did they understand your problem & solution?
• Discuss your slides with me
          Project poster demo
• Why do it?
  –   Seek a wider audience; publicize your work
  –   Diverse feedback
  –   Check out what others have done
  –   Mingle with people
• CS department wide: graphics, vision etc.
• Reuse talk slides for poster
• Demo helps
          Project report
• 8 page maximum
• Same flow as the talk but has room
  for details
• Email me by Dec 17 (Mon)
  – The PDF report
  – A bundle of your source code
