Introduction to Backtrack by Anujak124


A Project Report on BackTrack, introducing the BackTrack Linux operating system


Keep away from people who try to belittle your ambitions. Small people always do
that, but the really great make you feel that you, too, can become great.

                                                            -   Mark Twain

              I take this opportunity to express my sincere thanks and deep gratitude to all
those people who extended their wholehearted co-operation and helped me in
completing this project successfully.

              First of all, I would like to thank Mr. Ankit Verma for creating the opportunity to
take me under this project, and for providing valuable guidance that was a great help in
completing it. I would also like to thank my parents, friends and the lab faculty of APPIN
TECHNOLOGIES LAB, NOIDA for guiding and encouraging me throughout the duration of the
project.

      In all, I feel the completion of this project will mark a new beginning for me in the
coming days.

                                                                                   Neelam Yadav

BackTrack is a Linux distribution, distributed as a Live CD, which resulted from the merger
of WHAX (previously Whoppix) and the Auditor Security Collection, both of which were used
for penetration testing.
The BackTrack project was created by Mati Aharoni and Max Moser and is a collaborative
effort involving the community.

   1) BackTrack 2 released March 6, 2007
      (includes over 300 security tools)
   2) Beta version of BackTrack 3 released Dec. 14, 2007
      (focus was to support more and newer hardware, as well as provide more
      flexibility and modularity)
   3) BackTrack 3 released June 19, 2008
      (new additions include SAINT and Maltego)
   4) BackTrack 4 Beta released Feb 11, 2009
      (move to Debian)
   5) BackTrack 4 pre-release released June 19, 2009
   6) BackTrack 4 Final released January 11, 2010

BackTrack is the world’s leading penetration testing and information security
auditing distribution. With hundreds of tools preinstalled and configured to run out
of the box, BackTrack 4 provides a solid penetration testing platform ‐ from Web
application hacking to RFID auditing – it’s all working in one place.


Company/Developer          Mati Aharoni, Emanuele Gentili, and others
OS family                  Unix-like
Working state              Current
Source model               Open source
Latest stable release      4.0 / January 9, 2010
Latest unstable release    4 Pre-Final / June 19, 2009
Kernel type                Monolithic
Default user interface     Bash, KDE, Fluxbox
License                    Various
Official website 

BackTrack Base

There have been many changes introduced into BackTrack 4, most notably the
move to an Ubuntu Intrepid base. BackTrack maintains its own full repositories with
modified Ubuntu packages, in addition to its own penetration testing tools.

Another significant change is the updated kernel version. The new kernel brought an
onset of internal changes, which have greatly changed the structure of the
distribution.

Earlier kernels carried LZMA-enabled SquashFS patches for the live filesystem;
dropping them on the one hand results in a larger ISO size, but on the other hand
frees the project from having to maintain its own kernel patches.

      BackTrack 4 now uses squashfs-tools version 4.0 (which is not backward
compatible with previous versions) together with the inbuilt SquashFS kernel module.
AUFS (aufs2.x) is used as the unification filesystem.

Several wireless driver injection optimization patches have been applied to the kernel, as
well as a bootsplash patch. These patches can be found in the kernel source
packages (/usr/src/linux/patches). These changes mean that much of what you were used
to in BackTrack 2/3 has changed in terms of boot cheat codes and the like, as this kernel
shift also means the live-linux scripts are no longer used to create the images (casper
is used instead).


BackTrack focuses its central idea on the needs of penetration testers. The inclusion of
Live CD and Live USB functionality enables any user to simply insert the respective
medium and boot up BackTrack.

Direct hard disk installation (2.7 GB uncompressed) can also be completed from within the
Live DVD (1.5 GB compressed) environment using the basic graphical installation wizard,
with no restart required after installation. BackTrack further continues its commitment to
accessibility and internationalization by including support for Japanese input, with
reading and writing in hiragana, katakana and kanji.


   1) Metasploit integration
   2) RFMON injection-capable wireless drivers
   3) Kismet
   4) AutoScan-Network (a network discovery and management application)
   5) Nmap
   6) Ettercap
   7) Wireshark (formerly known as Ethereal)
   8) BeEF (Browser Exploitation Framework)

BackTrack’s functionality further increases with the arrangement of each tool into 11
categories. The tool categories are as follows.

       1) Information Gathering
       2) Network Mapping
       3) Vulnerability Identification
       4) Web Application Analysis
       5) Radio Network Analysis (802.11, Bluetooth, RFID)
       6) Penetration (Exploits & Social Engineering Toolkit)
       7) Privilege Escalation
       8) Maintaining Access
       9) Digital Forensics
       10) Reverse Engineering
       11) Voice over IP

In relation to basic software packages, BackTrack includes some ordinary desktop
programs such as Mozilla Firefox, Pidgin, K3b and XMMS.

Packages and Repositories
One of the most significant changes introduced in BackTrack 4 is the Debian-like
set of repositories, which are frequently updated with security fixes and new
tools. This means that if you choose to install BackTrack to disk, you will be able to
get package maintenance and updates by using apt-get.
Our BackTrack tools are arranged by parent categories. These are the categories
that currently exist:
        1) BackTrack - Enumeration
        2) BackTrack - Tunneling
        3) BackTrack - Bruteforce
        4) BackTrack - Spoofing
        5) BackTrack - Passwords
        6) BackTrack - Wireless
        7) BackTrack - Discovery
        8) BackTrack - Cisco
        9) BackTrack - Web Applications
        10) BackTrack - Forensics
        11) BackTrack - Fuzzers
        12) BackTrack - Bluetooth
        13) BackTrack - Misc
        14) BackTrack - Sniffers
        15) BackTrack - VOIP
        16) BackTrack - Debuggers
        17) BackTrack - Penetration
        18) BackTrack - Database
        19) BackTrack - RFID
        20) BackTrack - Python
        21) BackTrack - Drivers
        22) BackTrack - GPU


Keeping BackTrack up to date is relatively simple, using the apt-get commands:

    apt-get update :        Synchronizes your package list with the repository.
    apt-get upgrade :       Downloads and installs all the updates available.
    apt-get dist-upgrade :  Downloads and installs all new upgrades.

       Meta Packages
       A nice feature that arises from the tool categorization is that we can now
       support “BackTrack meta packages”.
       A meta package is a dummy package which pulls in several other packages.
       For example, the meta package “backtrack-web” would include all the web
       application penetration testing tools BackTrack has to offer.

       Meta Meta Packages
       There are two “meta meta packages”: backtrack-world and backtrack-desktop.
       backtrack-world contains all the BackTrack meta packages, while backtrack-
       desktop contains backtrack-world, backtrack-networking and backtrack-
       multimedia. The latter two meta packages are select applications imported from
       the Ubuntu repositories.

Working with BackTrack

BackTrack 4 contains an “imposed” KDE3 repository, alongside the KDE4 Ubuntu
Intrepid repositories.

Updating tools manually
The BackTrack repositories will always strive to keep up with the latest
versions of tools, with the exception of a select few. These “special” tools get
updated by their authors very frequently, and often include significant updates. We
felt that creating static binaries for these types of tools would not be beneficial,
and that users were better off keeping these tools synced with their respective SVN
versions. These tools include MSF, W3AF, Nikto, etc.


BackTrack comes as a live CD, so to run it, you simply insert it into the CD drive and
boot the system. At the prompt, log in as root with the password toor, then set up the
GUI with xconf. After you have completed the setup, simply type startx to launch the
GUI. If an error occurs, try gui as a workaround for launching the graphical interface.
If you need to, you can type dhcpcd to request an IP address from the DHCP server;
BackTrack does not do this automatically. BackTrack’s KDE-based menu system
provides access to dozens of security tools and other forensic-analysis applications (see
Figure 1). Browsing the BackTrack menu is a little like browsing the many menus and
submenus of a games distribution; only, instead of a bunch of games, the GUI is stocked
with sniffers, spoofers, scanners, and other utilities to assist you with security testing.

Creating your own Live CD – Method 1
Creating your own flavor of BackTrack is easy.
1. Download and install the bare bones version of BackTrack
2. Use apt‐get to install required packages or meta packages.
3. Use remastersys to repackage your installation.

Creating your own Live CD – Method 2
Download the BackTrack 4 iso. Use the customization script to update and modify
your build as shown here:

Installing BackTrack to USB
The easiest method of getting BackTrack 4 installed to a USB key is by using the
unetbootin utility (present in BackTrack in /opt/).

                              INSTALLED FEATURES
DNStracer determines where a given Domain Name Server (DNS) gets its information
from, and follows the chain of DNS servers back to the servers which know the data.

        dnstracer [options] name


    Options are:

        -c   Disable local caching.

        -C   Enable negative caching.

        -o   Enable overview of received answers at the end.

        -q queryclass
             Change the query-class, default is A. You can either specify a number of the
     type (if you're brave) or one of the following strings: a, aaaa, a6, soa, cname,
     hinfo, mx, ns, txt and ptr.

        -r retries
             Number of retries for DNS requests, default 3.

        -s server
             DNS server to use for the initial request, default is acquired from the
     system. If a dot is specified (.), A.ROOT-SERVERS.NET will be used.

        -v   Be verbose about what is sent or received.

       -4 Use only IPv4 servers, don't query IPv6 servers (only available when IPv6
    support hasn't been disabled)

        -S sourceaddress
            Use this as source-address for the outgoing packets.

      It sends the specified name-server a non-recursive request for the name.

       Non-recursive means: if the name-server knows it, it will return the data
    requested. If the name-server doesn't know it, it will return pointers to name-
    servers that are authoritative for the domain part in the name, or it will return
    the addresses of the root name-servers.

       If the name server returns an authoritative answer for the name, the next
    server is queried. If it returns a non-authoritative answer for the name, the name
    servers in the authority records will be queried.

        The program stops when all name-servers have been queried.
   Make sure the server you're querying doesn't forward requests towards other
servers, as dnstracer is not able to detect this for you.

    It detects so-called lame servers, which are name-servers that have been told
to have information about a certain domain, but don't have this information.
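The chain-following behaviour described above can be sketched in a few lines of Python. This is an illustrative simulation, not dnstracer itself: the server table below is invented, and a real trace would send non-recursive DNS queries over the network instead of looking answers up in a dictionary.

```python
# Illustrative simulation of dnstracer-style chain following.
# Each "server" either knows the answer or refers us onward,
# as a real name-server would in a non-recursive reply.
# The server names and addresses here are hypothetical.

SERVERS = {
    "root":       {"referral": ["tld-ns"]},
    "tld-ns":     {"referral": ["example-ns"]},
    "example-ns": {"answer": "192.0.2.10"},
}

def trace(server, name, depth=0):
    """Follow referrals from `server` until some server answers for `name`."""
    print("  " * depth + server)          # show the chain, dnstracer-style
    entry = SERVERS[server]
    if "answer" in entry:                 # this server knows the data
        return [(server, entry["answer"])]
    results = []
    for nxt in entry["referral"]:         # chase every referred server
        results.extend(trace(nxt, name, depth + 1))
    return results

answers = trace("root", "www.example.com")
print(answers)
```

The walk stops once every referred server has been visited, mirroring the "program stops when all name-servers are queried" behaviour above.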

Search for the A record of a name on your local nameserver:

   dnstracer <name>

   Search for the MX record of a name on the root-nameservers:

   dnstracer -s . -q mx <name>

   Search for the PTR record (hostname) of an address:

   dnstracer -q ptr <address>

   And for IPv6 addresses:

   dnstracer -q ptr -s . -o <address>

tcptraceroute is a traceroute implementation using TCP packets. The more traditional
traceroute(8) sends out either UDP or ICMP ECHO packets with a TTL of one, and
increments the TTL until the destination has been reached. By printing the gateways that
generate ICMP time exceeded messages along the way, it is able to determine the path
packets are taking to reach the destination. The problem is that with the widespread use of
firewalls on the modern Internet, many of the packets that traceroute(8) sends out end up
being filtered, making it impossible to completely trace the path to the destination.
However, in many cases, these firewalls will permit inbound TCP packets to specific ports
that hosts sitting behind the firewall are listening for connections on. By sending out TCP
SYN packets instead of UDP or ICMP ECHO packets, tcptraceroute is able to bypass the most
common firewall filters.

It is worth noting that tcptraceroute never completely establishes a TCP connection with
the destination host. If the host is not listening for incoming connections, it will respond
with an RST indicating that the port is closed. If the host instead responds with a SYN|ACK,
the port is known to be open, and an RST is sent by the kernel tcptraceroute is running on
to tear down the connection without completing the three-way handshake. This is the same
half-open scanning technique that nmap(1) uses when passed the -sS flag.

To trace the path to a web server listening for connections on port 80:

tcptraceroute webserver

To trace the path to a mail server listening for connections on port 25:
tcptraceroute mailserver 25

Nmap ("Network Mapper") is a utility for network exploration or security auditing.
Many systems and network administrators also find it useful for tasks such as network
inventory, managing service upgrade schedules, and monitoring host or service uptime.
Nmap uses raw IP packets in novel ways to determine what hosts are available on the
network, what services (application name and version) those hosts are offering, what
operating systems (and OS versions) they are running, what type of packet filters/firewalls
are in use, and dozens of other characteristics. It was designed to rapidly scan large
networks, but works fine against single hosts. Nmap runs on all major computer operating
systems, and official binary packages are avalable for Linux, Windows, and Mac OS X.

Command: nmap -v -A targethost

Nmap features include:

   - Host Discovery - Identifying hosts on a network, for example listing the hosts which
     respond to pings, or which have a particular port open
   - Port Scanning - Enumerating the open ports on one or more target hosts
   - Version Detection - Interrogating network services listening on remote
     devices to determine the application name and version number
   - OS Detection - Remotely determining the operating system and some hardware
     characteristics of network devices.
   - Scriptable interaction with the target - using the Nmap Scripting Engine (NSE)
     and the Lua programming language, customized queries can be made
Typical uses of Nmap:

   - Auditing the security of a device, by identifying the network connections which can be
     made to it
   - Identifying open ports on a target host in preparation for auditing
   - Network inventory, network mapping, maintenance, and asset management
   - Auditing the security of a network, by identifying unexpected new servers

Nmap is used to discover computers and services on a computer network, thus creating a
“map” of the network. Just like many simple port scanners, Nmap is capable of discovering
passive services on a network despite the fact that such services aren’t advertising
themselves with a service discovery protocol. In addition Nmap may be able to determine
various details about the remote computers. These include operating system, device type,
uptime, software product used to run a service, exact version number of that product,
presence of some firewall techniques and, on a local area network, even vendor of the
remote network card.

By default, Nmap performs a SYN Scan, which works against any compliant TCP stack,
rather than depending on idiosyncrasies of specific platforms. It can be used to quickly scan
thousands of ports, and it allows clear, reliable differentiation between ports in open,
closed and filtered states.

To perform a SYN scan (-sS) on a host, use the general command syntax:

nmap [Scan Type(s)] [Options] {target specification}
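The open/closed decision a scan makes can be illustrated without nmap at all. A SYN scan needs raw-socket privileges, so this sketch uses a full connect() scan (the -sT analog) with only the Python standard library, against a listener it starts itself on localhost; the port numbers are whatever the kernel assigns, not assumptions about any real host.

```python
import socket

def connect_scan(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection
    (a full connect() scan -- nmap's -sT, not the half-open -sS)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex() returns 0 on success (open), an errno otherwise
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Demo against a listener we control on localhost.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # kernel picks a free port
listener.listen(5)
open_port = listener.getsockname()[1]

# A port we bound and immediately released is (almost certainly) closed now.
tmp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tmp.bind(("127.0.0.1", 0))
closed_port = tmp.getsockname()[1]
tmp.close()

found = connect_scan("127.0.0.1", [open_port, closed_port])
print(found)
listener.close()
```

Only the listening port should be reported open; the released port is refused, which is exactly the RST-means-closed behaviour described for tcptraceroute and nmap above.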


TARGET SPECIFICATION:
-iL <inputfilename>        Input from list of hosts/networks
-iR <num hosts>            Choose random targets
--exclude <host1[,host2],...>  Exclude hosts/networks
--excludefile <exclude_file>   Exclude list from file


HOST DISCOVERY:
-sL                    List Scan - simply list targets to scan
-sP                    Ping Scan - go no further than determining if host is online
-P0                    Treat all hosts as online -- skip host discovery
-PS/PA/PU [portlist]   TCP SYN/ACK or UDP discovery probes to given ports
-PE/PP/PM              ICMP echo, timestamp, and netmask request discovery probes
-n/-R                  Never do DNS resolution/Always resolve [default: sometimes]
--dns-servers <serv1[,serv2],...>  Specify custom DNS servers
--system-dns           Use OS's DNS resolver


SCAN TECHNIQUES:
-sS/sT/sA/sW/sM        TCP SYN/Connect()/ACK/Window/Maimon scans
-sN/sF/sX              TCP Null, FIN, and Xmas scans
--scanflags <flags>    Customize TCP scan flags
-sI <zombie host[:probeport]>  Idle scan
-sO                    IP protocol scan
-b <ftp relay host>    FTP bounce scan

PORT SPECIFICATION AND SCAN ORDER:
-p <port ranges>       Only scan specified ports
                       Ex: -p22; -p1-65535; -p U:53,111,137,T:21-25,80,139,8080
-F                     Fast - scan only the ports listed in the nmap-services file
-r                     Scan ports consecutively - don't randomize


SERVICE/VERSION DETECTION:
-sV                    Probe open ports to determine service/version info
--version-intensity <level>  Set from 0 (light) to 9 (try all probes)
--version-light        Limit to most likely probes (intensity 2)
--version-all          Try every single probe (intensity 9)
--version-trace        Show detailed version scan activity (for debugging)


OS DETECTION:
-O                     Enable OS detection
--osscan-limit         Limit OS detection to promising targets
--osscan-guess         Guess OS more aggressively


TIMING AND PERFORMANCE:
Options which take <time> are in milliseconds, unless you append 's' (seconds), 'm'
(minutes), or 'h' (hours) to the value (e.g. 30m).

-T[0-5]                        Set timing template (higher is faster)
--min-hostgroup <size>         Parallel host scan group sizes
--min-parallelism <numprobes>  Probe parallelization
--min-rtt-timeout <time>       Specifies probe round trip time
--max-retries <tries>          Caps number of port scan probe retransmissions
--host-timeout <time>          Give up on target after this long
--scan-delay/--max-scan-delay <time>  Adjust delay between probes


FIREWALL/IDS EVASION AND SPOOFING:
-f; --mtu <val>            Fragment packets (optionally w/given MTU)
-D <decoy1,decoy2[,ME],...>  Cloak a scan with decoys
-S <IP_Address>            Spoof source address
-e <iface>                 Use specified interface
-g/--source-port <portnum> Use given port number
--data-length <num>        Append random data to sent packets
--ttl <val>                Set IP time-to-live field
--spoof-mac <mac address/prefix/vendor name>  Spoof your MAC address
--badsum                   Send packets with a bogus TCP/UDP checksum


OUTPUT:
-oN/-oX/-oS/-oG <file>  Output scan in normal, XML, s|<rIpt kIddi3, and Grepable
                        format, respectively, to the given filename.
-oA <basename>          Output in the three major formats at once
-v                      Increase verbosity level (use twice for more effect)
-d[level]               Set or increase debugging level (up to 9 is meaningful)
--packet-trace          Show all packets sent and received
--iflist                Print host interfaces and routes (for debugging)
--log-errors            Log errors/warnings to the normal-format output file
--append-output         Append to rather than clobber specified output files
--resume <filename>     Resume an aborted scan
--stylesheet <path/URL> XSL stylesheet to transform XML output to HTML
--webxml                Reference stylesheet from Insecure.Org for more portable XML
--no-stylesheet         Prevent associating of XSL stylesheet w/XML output


MISC:
-6                     Enable IPv6 scanning
-A                     Enables OS detection and Version detection
--datadir <dirname>    Specify custom Nmap data file location
--send-eth/--send-ip   Send using raw ethernet frames or IP packets
--privileged           Assume that the user is fully privileged
-V                     Print version number

     nmap -P0

     Running the above port scan on the Computer Hope IP address would give
     information similar to the example below. Keep in mind that in the above command
     it is -P<zero>, not the letter O.

     Interesting ports on (
     Not shown: 1019 filtered ports, 657 closed ports
     21/tcp open ftp
     80/tcp open http
     113/tcp open auth
     443/tcp open https
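Output like the listing above is easy to post-process. The sketch below parses the port lines from this section's sample listing, embedded as a string; against a real scan you would more likely feed nmap's -oG (greppable) output to a script instead.

```python
# Parse "port/proto state service" lines from nmap's normal output.
# The sample text is the listing shown above.
sample = """\
21/tcp open ftp
80/tcp open http
113/tcp open auth
443/tcp open https
"""

open_services = {}
for line in sample.splitlines():
    portproto, state, service = line.split()
    if state == "open":
        port, proto = portproto.split("/")
        open_services[int(port)] = service

print(open_services)   # maps port number -> service name
```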
SPIKE is written in C and exposes an API for quickly and efficiently developing network
protocol fuzzers. SPIKE utilizes a novel technique for representing and thereafter fuzzing
network protocols. Protocol data structures are broken down and represented as blocks,
also referred to as a SPIKE, which contain both binary data and the block size. Block-based
protocol representation allows for abstracted construction of various protocol layers with
automatic size calculations. To better understand the block-based concept, consider the
following simple example from the whitepaper "The Advantages of Block-Based Protocol
Analysis for Security Testing":

s_block_size_binary_bigendian_word("somepacketdata");
s_block_start("somepacketdata");
s_binary("01020304");
s_block_end("somepacketdata");

This basic SPIKE script (SPIKE scripts are written in C) defines a block named
somepacketdata, pushes the four bytes 0x01020304 into the block and prefixes the block
with the block length. In this case the block length would be calculated as 4 and stored as a
big endian word. Note that most of the SPIKE API is prefixed with either s_ or spike_.
The s_binary() API is used to add binary data to a block and is quite liberal with its
argument format, allowing it to handle a wide variety of copied and pasted inputs, such
as the string "4141 \x41 0x41 41 00 41 00". Although simple, this example demonstrates the
basics and overall approach of constructing a SPIKE. As SPIKE allows blocks to be
embedded within other blocks, arbitrarily complex protocols can be easily broken down
into their smallest atoms. Expanding on the previous example:

s_block_size_binary_bigendian_word("somepacketdata");
s_block_start("somepacketdata");
        s_binary("01020304");
        s_blocksize_halfword_bigendian("innerdata");
        s_block_start("innerdata");
                s_binary("00 01");
                s_binary_bigendian_word_variable(0x02);
                s_string_variable("SELECT");
        s_block_end("innerdata");
s_block_end("somepacketdata");

In this example, two blocks are defined, somepacketdata and innerdata. The latter block is
contained within the former block and each individual block is prefixed with a size value.
The newly defined innerdata block begins with a static two-byte value (0x0001), followed
by a four-byte variable integer with a default value of 0x02, and finally a string variable
with a default value of SELECT.
The s_binary_bigendian_word_variable() and s_string_variable() APIs will loop through a
predefined set of integer and string variables (attack heuristics), respectively, that have
been known in the past to uncover security vulnerabilities. SPIKE will begin by looping
through the possible word variable mutations and then move on to mutating the string
variable. The true power of this framework is that SPIKE will automatically update the
values for each of the size fields as the various mutations are made. To examine or expand
the current list of fuzz variables, look at SPIKE/src/spike.c. Version 2.9 of the framework
contains a list of almost 700 error-inducing heuristics.

Using the basic concepts demonstrated in the previous example, you can begin to see how
arbitrarily complex protocols can be modeled in this framework. A number of additional
APIs and examples exist. Refer to the SPIKE documentation for further information.
Sticking to the running example, the following code excerpt is from an FTP fuzzer
distributed with SPIKE. This is not the best showcase of SPIKE's capabilities, as no blocks
are actually defined, but it helps to compare apples with apples.

s_string("HOST ");
s_string("USER ");
s_string("PASS ");
s_string("SITE ");
s_string("ACCT ");
s_string("CWD ");
s_string("SMNT ");
s_string("PORT ");

The Goals of SPIKE

      Find new vulnerabilities by
         ● Making it easy to quickly reproduce a complex binary protocol
         ● Developing a base of knowledge within SPIKE about different kinds of
             bug classes affecting similar protocols
         ● Testing old vulnerabilities on new programs
         ● Making it easy to manually mess with protocols

How the SPIKE API works

      Unique SPIKE data structure supports lengths and blocks
         ● s_block_start(), s_block_end(), s_blocksize_halfword_bigendian();
       SPIKE utility routines make dealing with binary data, network code, and common
       marshalling routines easy
          ● s_xdr_string()

       SPIKE fuzzing framework automates iterating through all potential problem spots
          ● s_string(“Host: “); s_string_variable(“localhost”);

       A SPIKE is a kind of First In, First Out queue or “buffer class”

       A SPIKE can automatically fill in “length fields”
          ● s_size_string(“post”, 5);
          ● s_block_start(“post”);
          ● s_string_variable(“user=bob”);
          ● s_block_end(“post”);
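The length-field mechanism above can be mimicked in a few lines of Python. This is a toy re-implementation of the concept, not SPIKE's actual C API: a block records where its size placeholder and payload begin, and the big-endian size word is back-patched when the block closes, just as SPIKE recalculates sizes after each mutation.

```python
import struct

class BlockBuilder:
    """Toy version of SPIKE's block/size mechanism: reserve a size
    field, write data, then back-patch the size when the block ends."""
    def __init__(self):
        self.buf = bytearray()
        self.pending = {}                    # block name -> bookkeeping

    def block_size_word(self, name):
        """Reserve a 4-byte big-endian size field for `name`."""
        self.pending[name] = len(self.buf)   # offset of the size field
        self.buf += b"\x00\x00\x00\x00"      # placeholder, patched later

    def block_start(self, name):
        self.pending[name] = (self.pending[name], len(self.buf))

    def binary(self, hexstr):
        """Append binary data given as a hex string, spaces allowed."""
        self.buf += bytes.fromhex(hexstr.replace(" ", ""))

    def block_end(self, name):
        off, start = self.pending.pop(name)
        size = len(self.buf) - start         # bytes written inside the block
        self.buf[off:off + 4] = struct.pack(">I", size)

b = BlockBuilder()
b.block_size_word("somepacketdata")
b.block_start("somepacketdata")
b.binary("01020304")
b.block_end("somepacketdata")
print(b.buf.hex())   # size word 00000004 followed by the payload 01020304
```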

Httprint is a web server fingerprinting tool. It relies on web server characteristics to
accurately identify web servers, despite the fact that they may have been obfuscated by
changing the server banner strings, or by plug-ins such as mod_security or servermask.
Httprint can also be used to detect web enabled devices which do not have a server banner
string, such as wireless access points, routers, switches, cable modems, etc. httprint uses
text signature strings and it is very easy to add signatures to the signature database.
Source: Httprint
To get the CLI use:
#cd /pentest/enumeration/www/httprint_301/linux
# httprint

First things first: you should update your signature file, signatures.txt, which
lives in the httprint directory. To update it, fetch the latest signatures, save
them as a .txt file, and overwrite the signatures.txt found earlier.

Next, let's set up the input.txt file (located in the same place as before). This
is the second file we want to work with, so open it up in your favorite text
editor. You should see something like:
# inputs for httprint can be:
# - individual IP addresses (default port 80)
# - http://servername:[port]/
# - https://servername:[port]/
# - IP range xx.xx.xx.xx-yy.yy.yy.yy
#http://www.apache.org/

Uncomment (or add) the target line so the file looks like this:

# inputs for httprint can be:
# - individual IP addresses (default port 80)
# - http://servername:[port]/
# - https://servername:[port]/
# - IP ranges xx.xx.xx.xx-yy.yy.yy.yy
http://www.apache.org/
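The raw material a fingerprinting tool works from is simply the server's responses. The sketch below grabs the response line and Server banner from a throwaway local http.server instance started just for the demo; httprint itself goes much further, comparing many response characteristics against its signature database precisely because the banner alone can be faked.

```python
import http.client
import http.server
import threading

# Start a throwaway local web server to query (port 0 = pick a free port).
srv = http.server.HTTPServer(("127.0.0.1", 0),
                             http.server.SimpleHTTPRequestHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
port = srv.server_address[1]

# Grab the status and headers -- the banner is just one more header.
conn = http.client.HTTPConnection("127.0.0.1", port, timeout=2)
conn.request("GET", "/")
resp = conn.getresponse()
banner = resp.getheader("Server")
resp.read()           # drain the body
conn.close()
srv.shutdown()

print(resp.status, banner)
```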

dsniff - password sniffer The ability to access the raw packets on a network interface
(known as network sniffing), has long been an important tool for system and network
administrators. For debugging purposes it is often helpful to look at the network traffic
down to the wire level to see exactly what is being transmitted. Dsniff, as the name implies,
is a network sniffer - but designed for testing of a different sort. dsniff is a package of
utilities that includes code to parse many different application protocols and extract
interesting information, such as usernames and passwords, web pages being visited,
contents of email, and more. Additionally, it can be used to defeat the normal behaviour of
switched networks and cause network traffic from other hosts on the same network
segment to be visible, not just traffic involving the host dsniff is running on.

It also includes new programs to launch man-in-the-middle attacks on the SSH and HTTPS
protocols, which would allow viewing of the traffic unencrypted, and even the possibility of
taking over interactive SSH sessions.

dsniff [-c] [-d] [-m] [-n] [-i interface | -p pcapfile] [-s snaplen] [-f services]
       [-t trigger[,...]] [-r|-w savefile] [expression]




-c
        Perform half-duplex TCP stream reassembly, to handle asymmetrically routed
        traffic (such as when using arpspoof(8) to intercept client traffic bound for
        the local gateway).
-d
        Enable debugging mode.
-m
        Enable automatic protocol detection.
-n
        Do not resolve IP addresses to hostnames.

-i interface
        Specify the interface to listen on.
-p pcapfile
        Rather than processing the contents of packets observed upon the network process
        the given PCAP capture file.
-s snaplen
        Analyze at most the first snaplen bytes of each TCP connection, rather than the
        default of 1024.
-f services
        Load triggers from a services file.
-t trigger[,...]
        Load triggers from a comma-separated list, specified as port/proto=service
        (e.g. 80/tcp=http).
-r savefile
        Read sniffed sessions from a savefile created with the -w option.
-w file
        Write sniffed sessions to savefile rather than parsing and printing them out.
expression
        Specify a tcpdump(8) filter expression to select traffic to sniff.

On a hangup signal dsniff will dump its current trigger table to dsniff.services.

       Files:
       dsniff.services        Default trigger table
       dsniff.magic           Network protocol magic

Dsniff contains several powerful new network tools, written for use in penetration testing.
arpredirect is a very effective way of sniffing traffic on a switch by forging ARP replies.
findgw determines the local gateway of an unknown network via passive sniffing, and can
be used in conjunction with arpredirect to intercept all outgoing traffic on a switch.
macof floods the network with random MAC addresses, causing some switches to fail open
in repeating mode, facilitating sniffing. dsniff is a simple password sniffer which parses
passwords from many protocols, saving only the "interesting" bits. mailsnarf is a fast and
easy way to violate the Electronic Communications Privacy Act of 1986. urlsnarf outputs all
requested URLs from HTTP traffic. webspy sends URLs sniffed from a client to your local
Netscape browser for display, updated in real time (as the target surfs, your browser surfs
along with them, automagically).
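What dsniff does with reassembled streams can be illustrated on a text fragment: scan application-layer data for the "interesting" bits. The sample stream below is fabricated for the demo (an FTP login plus an HTTP Basic-auth header); dsniff itself parses dozens of real protocols from live packets rather than regexes over a string.

```python
import base64
import re

# Fabricated application-layer data, as it might appear in a
# reassembled TCP stream: an FTP login and an HTTP Basic auth header.
stream = (
    "USER alice\r\nPASS s3cret\r\n"
    "GET / HTTP/1.0\r\nAuthorization: Basic "
    + base64.b64encode(b"bob:hunter2").decode()
    + "\r\n"
)

creds = []
# FTP sends credentials in the clear as USER/PASS commands.
for user, pw in re.findall(r"USER (\S+)\r\nPASS (\S+)", stream):
    creds.append((user, pw))
# HTTP Basic auth is merely base64-encoded "user:password".
for b64 in re.findall(r"Authorization: Basic (\S+)", stream):
    user, _, pw = base64.b64decode(b64).decode().partition(":")
    creds.append((user, pw))

print(creds)   # [('alice', 's3cret'), ('bob', 'hunter2')]
```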

Bluetooth is meant to be a wireless replacement for some of the functions USB fulfills,
while Wi-Fi is more of a wireless replacement for Ethernet. Many high-end phones,
laptops, PDAs, car stereos and other electronics now ship with Bluetooth
capability so they can communicate with one another wirelessly.

root@slax:~# hciconfig hci0 up
root@slax:~# hciconfig
hci0: Type: USB
BD Address: 00:0A:3A:52:69:8C ACL MTU: 192:8 SCO MTU: 64:8
RX bytes:148 acl:0 sco:0 events:17 errors:0
TX bytes:65 acl:0 sco:0 commands:17 errors:0

root@slax:~# hcitool scan
Scanning ...
00:02:72:CA:14:6D TestTop
3proxy is a universal proxy server. It can be used to provide internal users with fully
controllable access to external resources, or to provide external users with access to
internal resources. 3proxy is not developed to replace squid(8), but it can extend the
functionality of an existing caching proxy. It can be used to route requests between
different types of clients and proxy servers. Think of it as an application-level gateway
with configuration like a hardware router has for the network layer. It can establish
multiple gateways with HTTP and HTTPS proxying with FTP-over-HTTP support, SOCKS v4,
v4.5 and v5, POP3 proxying, and UDP and TCP portmappers. Each gateway is started from the
configuration file like an independent service (proxy(8), socks(8), pop3p(8), tcppm(8),
udppm(8), ftppr(8), dnspr), but 3proxy is not a kind of wrapper or superserver for these
daemons. It just has the same code compiled in, but provides much more functionality. The
SOCKSv5 implementation allows 3proxy to be used with any UDP- or TCP-based client
application designed without proxy support (with SocksCAP, FreeCAP or another client-side
redirector under Windows, or with a socksification library under Unix). So you can play
your favourite games, listen to music, exchange files and messages and even accept
incoming connections behind a proxy server.

  dnspr does not exist as an independent service. It is a caching DNS proxy (it
requires nscache and nserver to be set in the configuration). Only A records are cached. Please
note that this caching is mostly a 'hack' and has nothing to do with a real DNS server, but it
works perfectly for SOHO networks.

   3proxy supports access control lists (ACLs), like a network router. Source and destination
networks and destination ports can be specified. In addition, usernames and the gateway action
(for example, GET or POST) can be used in ACLs. In order to filter requests on a username
basis, the user must be authenticated somehow. There are a few authentication types, including
password authentication and authentication by NetBIOS name for Windows clients (much like
ident authentication). Depending on the ACL action, a request can be allowed, denied, or
redirected to another host, to another proxy server, or even to a chain of proxy servers.
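The directives described above can be combined into a small configuration file. The sketch below shows a plausible minimal 3proxy.cfg; the user name, password and port numbers are invented for illustration:

```
# minimal 3proxy.cfg sketch; user, password and ports are examples
nscache 65536            # enable DNS caching
users admin:CL:s3cret    # one user, cleartext password
auth strong              # require authentication
allow admin              # ACL: admin may use the gateways below
deny *                   # everyone else is refused
proxy -p3128             # HTTP proxy on port 3128
socks -p1080             # SOCKS gateway on port 1080
```

The ACL lines accumulate and apply to every gateway declared after them, which is why the allow/deny pair comes before the proxy and socks services.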

   It supports different types of logging: to log files, to syslog(3) (Unix only), or to an
ODBC database. The logging format is tunable to provide compatibility with existing log-file
parsers, which makes it possible to use 3proxy with IIS, ISA, Apache or Squid log parsers.

config_file
       Name of the config file. See 3proxy.cfg(3) for the configuration file format. Under
       Windows, if config_file is not specified, 3proxy looks for a file named 3proxy.cfg in
       the default location (in the same directory as the executable file, and in the current
       directory). Under Unix, if no config file is specified, 3proxy reads its configuration
       from stdin. This makes it possible to use a 3proxy.cfg file as an executable script,
       just by setting +x mode and putting a shebang line that points to the 3proxy binary
       as the first line of 3proxy.cfg.

--install
       (Windows NT family only) install 3proxy as a system service.

--remove
       (Windows NT family only) remove 3proxy from the system services.

Under Unix, 3proxy catches a few signals (see kill(1)). Depending on the signal it will:
      clean up connections and exit
      stop accepting new connections; on a second signal, start again and re-read the configuration
      start accepting new connections
      reload the configuration

Under Windows, if 3proxy is installed as a service, you can use standard service management
commands to start, stop, pause and continue the 3proxy service, for example:
net start 3proxy
net stop 3proxy
net pause 3proxy
net continue 3proxy

  The web admin service can also be used to reload the configuration; wget can be used to automate this.

/usr/local/3proxy/3proxy.cfg (3proxy.cfg)
       3proxy configuration file
How to open a port (for example, a SOCKS gateway on TCP port 28800):
socks -p28800

Cryptcat is a simple Unix utility which reads and writes data across network connections,
using the TCP or UDP protocol, while encrypting the data being transmitted. It is designed to
be a reliable "back-end" tool that can be used directly or easily driven by other programs and
scripts. At the same time, it is a feature-rich network debugging and exploration tool, since
it can create almost any kind of connection you would need and has several interesting
built-in capabilities.

As a powerful back-end tool it also lets a user hide his IP address and establish a connection
the victim would not know about. A hacker would also be able to run commands on your computer
through the connection. If you look through the features of Cryptcat listed in this article
again, you will find that it can easily switch ports and slow down the data-sending process,
so that you will never suspect you are being hacked until you find that, perhaps, your
passwords, account information and credit-card numbers have been stolen.

To sum up, Cryptcat is a powerful and versatile networking tool. On the one hand, it can
provide security and protect your information; on the other hand, any experienced hacker has
it installed, and not only for security purposes.

Cryptcat is the standard netcat enhanced with Twofish encryption.

        Machine A: cryptcat -l -p 1234 < testfile
        Machine B: cryptcat <machine A IP> 1234

This is identical to the normal netcat options for doing exactly the same thing. However, in
this case the data transferred is encrypted.

Vulnerability Note VU#165099 - cryptcat does not encrypt data communications when -e
command argument is used

Encrypting Data with Cryptcat

Cryptcat has the same syntax and functions as netcat, but with encrypted data transfer.
Encrypting the data means that:
- an attacker's sniffer cannot compromise your information (unless your passphrase is compromised)
- encryption nearly eliminates the risk of data contamination or injection


cryptcat -k secret [-options] hostname port[s] [ports]
cryptcat -k secret -l -p port [-options] [hostname] [port]


Cryptcat can act as a TCP or UDP client or server, connecting to or listening on a socket
while otherwise working like the standard Unix command cat(1).

cryptcat takes a password as a salt to encrypt the data being sent over the connection.
Without a specified password, cryptcat will default to the hardcoded password ‘‘metallica’’.
Needless to say, failure to specify a different password makes the connection as good as
unencrypted.


This program does not follow the usual GNU command line syntax of long options starting with
two dashes ('--'). A summary of the options specific to cryptcat is included below.

-h

         Show summary of options.

-k secret password

         Change the shared secret password to be used to establish a connection.


This version of cryptcat does not support the -e command line option available in some
versions of nc.

AIR - Automated Image and Restore

AIR (Automated Image and Restore) is a GUI front-end to dd/dc3dd designed for easily creating
forensic images.

auto-detection of IDE and SCSI drives, CD-ROMs, and tape drives

choice of using either dd or dc3dd

image verification between source and copy via MD5 or SHA1/256/384/512

image compression/decompression via gzip/bzip2

image over a TCP/IP network via netcat/cryptcat
supports SCSI tape drives

wiping (zeroing) drives or partitions

splitting images into multiple segments

detailed logging with date/times and complete command-line used
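Several of these features are thin wrappers around standard pipelines. The sketch below reproduces the compression, splitting and verification steps using plain dd, gzip and split on a scratch file standing in for a real drive; all file names here are invented for illustration, not AIR's actual output:

```shell
# Work in a throwaway directory; a 1 MiB random file stands in for a drive.
cd "$(mktemp -d)"
dd if=/dev/urandom of=drive.raw bs=1024 count=1024 2>/dev/null

# Compressed image, as AIR's gzip option would produce
dd if=drive.raw bs=4096 2>/dev/null | gzip > drive.dd.gz

# Split the compressed image into 256 KiB segments (drive.dd.gz.aa, .ab, ...)
split -b 262144 drive.dd.gz drive.dd.gz.

# Verification: the reassembled, decompressed stream must hash like the source
cat drive.dd.gz.* | gunzip | md5sum
md5sum < drive.raw
```

The two md5sum lines should print identical hashes; that equality is exactly what AIR's image-verification step checks.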
What is Automated Image & Restore

Automated Image & Restore (AIR) is an open source application that provides a GUI front
end to the dd/dcfldd (Dataset Definition (dd)) command. AIR is designed to easily create
forensic disk/partition images. It supports MD5/SHAx hashes, SCSI tape drives, imaging
over a TCP/IP network, splitting images, and detailed session logging. To date, the AIR
utility has only been developed for use on Linux distributions. In its simplest form, AIR
provides a convenient interface to execute the dd set of commands. It eliminates the risk of
"fat fingering" an error in the shell terminal and ultimately makes using the dd command
more user-friendly for those who are not as experienced. Please note that using the AIR
front end still requires some basic knowledge of how the dd (or dcfldd) commands work.

The dd command has been around for quite a while. It is well known throughout the Unix/Linux
community, well documented, and, as one can imagine, extensively used. A dd image is a
bit-by-bit image of a source device or file. The uses for dd range from creating and
maintaining system backups and restore images to the forensic application of imaging evidence
that will be returned to the lab and examined.

This tutorial is not designed to teach the use of the dd command; this is well documented
and a simple internet search will yield a plethora of results. Instead, the intent of this mini
"how-to" is to introduce users to the AIR front end application, increase overall awareness
of the utility, and provide a brief example of creating a dd image using this tool.

Setting up AIR

The first thing you will want to do is download and install the latest version of the AIR
application from the project site. Once you have downloaded the files to your system,
decompress, extract, and install the application. (In this example, I have downloaded the
.tar.gz package and will show the commands for this particular file type.)

-- Make sure you are in a root shell

sudo -s

-- Check your current directory to make sure you are in the right location to access the
package you downloaded


-- Decompress and extract ("untar") the AIR files

tar -zxvf /path/air-1.2.8.tar.gz

-- If you desire, this is a good time to read the README.txt file

-- Switch to your AIR directory

cd /path/air-1.2.8

-- Run the install script



Note that AIR does not work on all Linux distributions. Refer to the project information and
the README.txt file for a list of known supported distributions; I am using Ubuntu, which is
not among them. Ubuntu can still run AIR, but some functionality is unavailable. Now that you
have successfully downloaded and installed the application, run AIR in a root shell by typing
"air" in the terminal. AIR will run through a series of checks and the GUI will launch
automatically.

Take a moment to familiarize yourself with the AIR GUI. Note how the buttons and options
relate to various dd commands that can be used in the terminal.

Creating a dd Image Using AIR

For this exercise, we will create a dd image of a .jpg in the root folder and copy it to a CD-
ROM. AIR will run the commands behind the scenes that will create the image and copy it
to the CD-ROM. (In a real scenario, this .jpg could very easily represent a compromised
hard drive or other piece of evidence).
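Behind the GUI, the imaging and verification AIR performs reduces to commands along these lines. The scratch directory and file names below are stand-ins for the exercise's .jpg, not AIR's actual output:

```shell
# Throwaway directory with a small file standing in for the evidence .jpg
cd "$(mktemp -d)"
printf 'pretend this is a JPEG' > evidence.jpg

# Bit-for-bit image of the source, as AIR drives dd behind the scenes
dd if=evidence.jpg of=evidence.dd bs=4096 2>/dev/null

# AIR's verification step: source and image must produce the same hash
md5sum evidence.jpg evidence.dd
```

Matching hashes confirm the image is an exact copy; a mismatch would mean the image cannot be trusted as evidence.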

The OllyDbg debugger is a machine-level debugger created by Oleh Yuschuk for the 80x86
architecture, and it works with a variety of different assemblers.

VoIPER is a VoIP security testing toolkit incorporating several VoIP fuzzers and auxiliary
tools to assist the auditor. It can currently generate over 200,000 SIP tests, and H.323/IAX
modules are in development. The primary goal of VoIPER is to create a toolkit with all
required testing functionality built in, and to minimise the amount of effort an auditor
has to put into testing the security of a VoIP code base.

Run `python -h` to get an explanation of any of the command line options. To get a
list of valid fuzzer names run `python -l`, and for information on what a particular
fuzzer does run `python -l FUZZERNAME`.

Versions 0.06 and under have an issue with protocol-based crash detection (-c 1 or -c 2). As
a result, you have the following options if you don't want to be plagued with false positives:

      Use level 0 (-c 0). This turns off crash detection and leaves it up to you to check
       which request killed the device if a crash occurs.
      Use level 1 (-c 1). This uses the same type of in-band, protocol-based crash detection
       as level 2, but instead of pausing the fuzzer it just logs the crash details when a
       crash occurs and keeps fuzzing. This avoids having to restart the fuzzer when a false
       positive occurs, but it also means the fuzzer won't be paused when an actual crash
       occurs. Every request sent to a dead target will then be logged (so basically
       thousands of crash log files); you can discern which request caused the actual crash,
       as it will be the earliest one logged in the continuous linear sequence of crash
       log files.
      Use level 3 (-c 3). This is what I always use if possible. It uses out-of-band,
       process-based crash detection and is not susceptible to false positives. On the
       downside, it requires a crash detection script to be running on the target computer,
       but that is just a case of running a single command and passing a few extra
       parameters to VoIPER.

This issue is resolved in version 0.07, but some VoIP applications have an annoying habit of
not responding as they should while being fuzzed. As a result, it is recommended to use
process-based crash detection (-c 3) whenever possible.

Macchanger - MAC Changer

macchanger [options] device


macchanger is a Linux utility for viewing/manipulating the MAC address of network interfaces.

macchanger accepts the following options:
-h, --help
        Show summary of options.
-V, --version

       Show version of program.

-e, --ending

       Don't change the vendor bytes.

-a, --another

       Set random vendor MAC of the same kind.

-A

       Set random vendor MAC of any kind.

-r, --random

       Set fully random MAC.

-l, --list[=keyword]
         Print known vendors (with keyword in the vendor's description string)

-m, --mac XX:XX:XX:XX:XX:XX
       Set the MAC XX:XX:XX:XX:XX:XX


macchanger -A eth1
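macchanger is usually present only on distributions like BackTrack, but the kind of address its -r option produces can be sketched in plain shell: six random bytes, with the first octet adjusted to be locally administered (bit 0x02 set) and unicast (bit 0x01 clear). The `random_mac` helper below is a hypothetical illustration, not part of macchanger:

```shell
# Generate a random, locally-administered, unicast MAC address,
# mirroring the kind of address `macchanger -r` assigns.
random_mac() {
  # six random bytes, formatted as colon-separated hex pairs
  raw=$(od -An -N6 -tx1 /dev/urandom |
        awk '{ printf "%s:%s:%s:%s:%s:%s", $1, $2, $3, $4, $5, $6 }')
  # first octet: set the locally-administered bit, clear the multicast bit
  first=$(( (0x${raw%%:*} & 0xFE) | 0x02 ))
  printf '%02x%s\n' "$first" "${raw#??}"
}

random_mac
```

Actually applying the address still requires root and a real tool, e.g. `macchanger -m "$(random_mac)" eth1` on BackTrack, or `ip link set eth1 address "$(random_mac)"` on any modern Linux.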

Wireshark is the network analyzer. This very powerful tool provides network- and upper-layer
protocol information about data captured in a network. Like a lot of other network programs,
Wireshark uses the pcap network library to capture packets.

Wireshark's strength comes from:
- its ease of installation.
- the simplicity of its GUI.
- the very high number of features available.

Wireshark was called Ethereal until 2006, when the main developer decided to change its name
because of trademark issues with the Ethereal name, which was registered by the company he
decided to leave in 2006. Install everything that comes with it. WinPcap is a driver that
Wireshark needs in order to capture packets on Windows; it will be installed automatically
when you install Wireshark. You can find more information about WinPcap on its website.

Now that we have Wireshark installed, let's open it up so I can show you how to use it.
Wireshark should have made a folder somewhere in your start menu called Wireshark. Go ahead
and run Wireshark.

Wireshark lets you
"see" the data that is traveling across your network.

You can "see" what ports a program is using.

You can basically see all the traffic on your network.

You can see what comes in and what is going out of your router.

You can see so much that it becomes a problem: you end up getting too much data. To fix this,
Wireshark comes with two very useful filters that we will go over here. The filters allow you
to sort the traffic that you have captured, making it much easier to read. Let's start by
clicking the Capture link at the top of your screen, then click Options in the menu that
drops down.

This is the window that allows you to define how to start capturing data with Wireshark.
You can use the Interface drop-down box to select which network card to capture data from.
There will only be one option here if you have a single Ethernet card. Later on we will
modify this page a bit. Now we need to tell Wireshark what to capture. Click on the
Capture Filter button.

Put First Capture Filter into the Filter Name box. Then enter host followed by your IP
address into the Filter String box (for example, host <your-ip-address>).
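Capture filter strings use the standard libpcap filter syntax. A few common expressions, for reference (the addresses shown are illustrative placeholders):

```
host <your-ip-address>
        traffic to or from a single address
host <your-ip-address> and tcp port 80
        only web traffic to or from that host
not arp
        everything except ARP chatter
src net 192.168.1.0/24
        packets originating on the local subnet
```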

We are telling Wireshark to capture everything coming from and going to your ip address.
So we will get a log of all the traffic that is coming from or going to your computer. When
you have finished those two changes click the Ok button at the bottom of this page.
You should now be back at the Capture Options window. Then click the Start button at the
bottom of the screen.
You will now see packets as they are sent to and from your computer. You might see a lot of
traffic or just a little, depending on how much is going on on your network. If you do not
see any packets, try opening a web page. If you still do not see captured data, then you
probably have the wrong interface selected in the Capture Options window. When you have
captured a couple of packets, click the Capture option at the top of the screen and then the
Stop option in the menu that drops down.

       Wireshark has captured some data, as you can see on your screen. There are three
frames here, labeled Frame 1, Frame 2, and Frame 3 in the picture above. Frame 1 shows an
overview of what packets came into and went out of your network. Frame 2 shows more detailed
information about a selected packet. Frame 3 shows the hex data of the packet. We only really
care about Frame 1.

       The source column tells us where the data was coming from, and the destination column
tells us where it was going. Both of these columns will always contain IP addresses. The
protocol column tells us which protocol the packet was sent with, which is useful when trying
to figure out what ports/protocols a program uses. The info box contains the information that
we really need: it lists specific requests made over the network, and the ports the data
traveled on.
       Notice that every time a port is listed, it is listed as a pair of ports. Data always
travels on ports: it is sent out of the source IP address on one port and received at the
destination IP address on another. These ports are rarely the same. Keeping that in mind, it
is easy to see why there are two ports listed in the info box. The first port is the source
port. Notice the >, which you can think of as the word "to".

       From the first port > to the second port. I hope that I have explained enough to give
you a general feel for the program. Check out the help section of the program for more
capture filter options. Notice that there is also a filter box above the data you have
captured. This is the display filter. It works like the capture filter, but allows you to
filter data that has already been captured. Click the help button in the display filter
window for examples of how to use it.
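Display filters use Wireshark's own comparison syntax rather than the capture filter syntax. A few examples (the address is an illustrative placeholder):

```
ip.addr == <your-ip-address>
        any packet to or from that address
tcp.port == 80
        web traffic on the standard HTTP port
http.request
        only HTTP requests
dns && !(udp.port == 5353)
        DNS traffic, excluding multicast DNS
```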

Snort (IDS/IPS) is a free and open source network intrusion prevention system (NIPS) and
network intrusion detection system (NIDS) capable of performing packet logging and real-time
traffic analysis on IP networks. Snort was written by Martin Roesch and is now developed by
Sourcefire, of which Roesch is the founder and CTO. Integrated enterprise versions with
purpose-built hardware and commercial support services are sold by Sourcefire.

       Snort performs protocol analysis and content searching/matching, and is commonly
       used to actively block or passively detect a variety of attacks and probes, such as
       buffer overflows, stealth port scans, web application attacks, SMB probes, and OS
       fingerprinting attempts, amongst other features. The software is mostly used for
       intrusion prevention purposes, dropping attacks as they take place. Snort can be
       combined with other free software such as Sguil, OSSIM, and the Basic Analysis and
       Security Engine (BASE) to provide a visual representation of intrusion data.

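A Snort rule pairs a header (action, protocol, addresses, ports) with options in parentheses. The rule below is a minimal illustrative sketch, not taken from any official ruleset (the SID is from the local-use range):

```
alert icmp any any -> $HOME_NET any (msg:"Sketch: inbound ICMP echo request"; itype:8; sid:1000001; rev:1;)
```

Read left to right: alert on ICMP from anywhere to the home network, matching ICMP type 8 (echo request), logging the given message.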
Konqueror is a web browser and file manager that provides file-viewer functionality to a
wide variety of things: local files, files on a remote ftp server and files in a disk image. It is
designed as a core part of the KDE desktop environment. It is developed by volunteers and
can run on most Unix-like operating systems and on Windows systems, too. Konqueror,
along with the rest of the components in the KDEBase package, is licensed and distributed
under the GNU General Public License version 2.

The name "Konqueror" is a reference to the two primary competitors at the time of the
browser's first release: "first comes the Navigator, then Explorer, and then the Konqueror".
It also follows the KDE naming convention: the names of most KDE programs begin with
the letter K.

Konqueror came with the version 2 of KDE, released on October 23, 2000. It replaces its
predecessor, KFM (KDE file manager).

Konqueror uses a very capable HTML rendering engine called KHTML. This engine is implemented
as a KPart and, as such, can easily be used by other KDE programs. KHTML also served as the
basis for WebKit, the engine used by Apple's Safari browser.

Features of the HTML rendering component in KDE 3.4:

   1. HTML 4.01 compliance.
   2. ECMAscript 262 support (JavaScript). Notice that ECMAscript can still give problems
      because websites can detect browsers and choose to ignore Konqueror. Spoofing as
      another browser will often make sites work anyway.
   3. Ability to house Java applets.
   4. Cascading Style Sheets:
          o CSS 1: supported
          o CSS 2.1: supported (paged media only partially supported)
          o CSS 3 Selectors: supported
          o CSS 3 (other): Details about the visual media support can be found here.
   5. DOM1, DOM2 and partially DOM3 support in ECMAScript and native C++ bindings.
   6. Full support for bidirectional scripts (Arabic and Hebrew).
   7. SSL support (requires OpenSSL).

Konqueror provides all the functionality one would expect from a modern file manager,
including navigation of the filesystem, file/folder copying, renaming, deletion and creation,
and application launching.

It is also able to display graphic image files and generate an image-gallery web page from
them. In addition, Konqueror is a standards-compliant web browser, perfectly capable of
browsing the Web: just enter the address of a website in Konqueror's location bar.

The most obvious advantage of Konqueror (for people using KDE) is the great integration
with the rest of KDE.

KHTML does, of course, support XHTML. And the common complaint that Konqueror is not only a
browser but "a file manager, a web browser, a universal document viewer and a fully
customizable application" is flawed: Konqueror is actually just a shell for various KParts
(comparable to plugins). These KParts have specific tasks (e.g. the KHTML part renders HTML,
there is a file manager part, there are multiple document viewer parts, etc.), and this makes
Konqueror a lightweight but still very versatile application.

BeEF is the Browser Exploitation Framework, a professional tool to demonstrate the real-time
impact of browser vulnerabilities. Development has focused on creating a modular structure,
making new module development a trivial process, with the intelligence residing within BeEF.
Current modules include the first public inter-protocol exploit, a traditional browser
overflow exploit, port scanning, keylogging, clipboard theft and more. The modules aim to be
a representative set of current browser attacks, with the notable exception of launching
cross-site scripting viruses. You can download BeEF from the project website.
