Switching Concepts

Overview



  LAN design has evolved. Network designers until very recently used hubs and bridges to build networks.
  Now switches and routers are the key components in LAN design, and the capabilities and performance
  of these devices continue to improve.


  This module describes the roots of modern Ethernet LANs with an emphasis on the evolution of
  Ethernet/802.3, the most commonly deployed LAN architecture. A look at the historical context of LAN
  development and various network devices that can be utilized at different layers of the OSI model will
  help students better understand the reasons why network devices have evolved as they have.


  Until recently, repeaters were used in most Ethernet networks. Network performance suffered as too
  many devices shared the same segment. Network engineers then added bridges to create multiple
  collision domains. As networks grew in size and complexity, the bridge evolved into the modern switch
  which allows microsegmentation of the network. Modern networks are now built with switches and
  routers, often with both functionalities in one device.


  Many modern switches are capable of performing varied and complex tasks in the network. This module
  will provide an introduction to network segmentation and will describe the basics of switch operation.


  Switches and bridges perform much of the heavy work in LANs where they make nearly instantaneous
  decisions when frames are received. This module describes in detail how switches learn the physical
  addresses of nodes, and how switches transmit and filter frames. This module also describes the
  principles of LAN segmentation and collision domains.


  Switches are Layer 2 devices that are used to increase available bandwidth and reduce network
  congestion. A switch can segment a LAN into microsegments, which are segments with only a single
  host. Microsegmentation creates multiple collision-free domains from one large domain. As a Layer 2
  device, the LAN switch increases the number of collision domains, but all hosts connected to the switch
  are still part of the same broadcast domain.


  This module covers some of the objectives for the CCNA 640-801 and ICND 640-811 exams.


  Students who complete this module should be able to perform the following tasks:


          Describe the history and function of shared, or half-duplex Ethernet
          Define collision as it relates to Ethernet networks
          Define CSMA/CD
          Describe some of the key elements that affect network performance
          Describe the function of repeaters
          Define network latency
          Define transmission time
          Define network segmentation with routers, switches, and bridges
          Define Ethernet switch latency
          Explain the differences between Layer 2 and Layer 3 switching
          Define symmetric and asymmetric switching
          Define memory buffering
          Compare and contrast store-and-forward and cut-through switching
          Understand the differences between hubs, bridges, and switches
          Describe the main functions of switches
          List the major switch frame transmission modes
          Describe the process by which switches learn addresses
          Identify and define forwarding modes
          Define LAN segmentation
          Define microsegmentation with the use of switches
          Describe the frame-filtering process
          Compare and contrast collision and broadcast domains
          Identify the cables needed to connect switches to workstations
          Identify the cables needed to connect switches to other switches



4.1       Introduction to Ethernet/802.3 LANs
4.1.1 Ethernet/802.3 LAN development



This page will review the devices that are found on a network.


The earliest LAN technologies used either thick Ethernet or thin Ethernet infrastructures.




It is important to understand the limitations of these infrastructures, as shown in Figure , in order to
understand the advancements in LAN switching.


The addition of hubs or concentrators into the network offered an improvement on thick and thin Ethernet
technology. A hub is a Layer 1 device and is sometimes referred to as an Ethernet concentrator or a
multi-port repeater. Hubs allow better access to the network for more users. Hubs regenerate data
signals which allows networks to be extended to greater distances. Hubs do not make any decisions
when data signals are received. Hubs simply regenerate and amplify the data signals to all connected
devices, except for the device that originally sent the signal.


Ethernet is fundamentally a shared technology where all users on a given LAN segment compete for the
same available bandwidth. This situation is analogous to a number of cars that try to access a one-lane
road at the same time. Since the road has only one lane, only one car can access it at a time.




As hubs were added to the network, more users competed for the same bandwidth.


Collisions are a by-product of Ethernet networks. If two or more devices try to transmit at the same time,
a collision occurs. This situation is analogous to two cars that try to merge into a single lane and cause a
collision. Traffic is backed up until the collision can be cleared. Excessive collisions in a network result in
slow network response times. This indicates that the network is too congested or has too many users
who need to access the network at the same time.


Layer 2 devices are more intelligent than Layer 1 devices. Layer 2 devices make forwarding decisions
based on Media Access Control (MAC) addresses contained within the headers of transmitted data
frames.


A bridge is a Layer 2 device used to divide, or segment, a network. Bridges collect and selectively pass
data frames between two network segments. In order to do this, bridges learn the MAC address of
devices on each connected segment. With this information, the bridge builds a bridging table and
forwards or blocks traffic based on that table. This results in smaller collision domains and greater
network efficiency.




Bridges do not restrict broadcast traffic. However, they do provide greater traffic control within a network.


A switch is also a Layer 2 device and may be referred to as a multi-port bridge. Switches make
forwarding decisions based on MAC addresses contained within transmitted data frames. Switches learn
the MAC addresses of devices connected to each port and this information is entered into a switching
table.


Switches create a virtual circuit between two connected devices that want to communicate. When the
virtual circuit is created, a dedicated communication path is established between the two devices. The
implementation of a switch on the network provides microsegmentation. This creates a collision free
environment between the source and destination, which allows maximum utilization of the available
bandwidth. Switches are able to facilitate multiple, simultaneous virtual circuit connections.




This is analogous to a highway that is divided into multiple lanes and each car has its own dedicated
lane.


The disadvantage of Layer 2 devices is that they forward broadcast frames to all connected devices on
the network. Excessive broadcasts in a network result in slow network response times.


A router is a Layer 3 device. Routers make decisions based on groups of network addresses, or classes,
as opposed to individual MAC addresses. Routers use routing tables to record the Layer 3 addresses of
the networks that are directly connected to the local interfaces and network paths learned from neighbor
routers.


The following are functions of a router:


          Examine inbound packets of Layer 3 data
          Choose the best path for the data through the network
          Route the data to the proper outbound port


Routers do not forward broadcasts unless they are programmed to do so. Therefore, routers reduce the
size of both the collision domains and the broadcast domains in a network. Routers are the most
important devices to regulate traffic on large networks. Routers enable communication between two
computers regardless of location or operating system.




LANs typically employ a combination of Layer 1, Layer 2, and Layer 3 devices. Implementation of these
devices depends on factors that are specific to the particular needs of the organization.


The Interactive Media Activity will require students to match network devices to the layers of the OSI
model.


The next page will discuss network congestion.


4.1   Introduction to Ethernet/802.3 LANs
4.1.2 Factors that impact network performance



This page will describe some factors that cause LANs to become congested and overburdened. In
addition to a large number of network users, several other factors have combined to test the limits of
traditional LANs:




          The multitasking environment present in current desktop operating systems such as Windows,
           Unix/Linux, and Mac OS X allows for simultaneous network transactions. This increased
           capability has led to an increased demand for network resources.
          The use of network intensive applications such as the World Wide Web has increased.
           Client/server applications allow administrators to centralize information and make it easier to
           maintain and protect information.
          Client/server applications do not require workstations to maintain information or provide hard
           disk space to store it. Given the cost benefit of client/server applications, such applications are
           likely to become even more widely used in the future.


The next page will discuss Ethernet networks.


4.1       Introduction to Ethernet/802.3 LANs
4.1.3 Elements of Ethernet/802.3 networks



This page will describe some factors that can have a negative impact on the performance of an Ethernet
network.


Ethernet is a broadcast transmission technology. Therefore network devices such as computers,
printers, and file servers communicate with one another over a shared network medium. The
performance of a shared medium Ethernet/802.3 LAN can be negatively affected by several factors:




       The data frame delivery of Ethernet/802.3 LANs is of a broadcast nature.
       The carrier sense multiple access/collision detect (CSMA/CD) method allows only one station to
        transmit at a time.
       Multimedia applications with higher bandwidth demand such as video and the Internet, coupled
        with the broadcast nature of Ethernet, can create network congestion.
       Normal latency occurs as frames travel across the network medium and through network
        devices.


Ethernet uses CSMA/CD and can support fast transmission rates. Fast Ethernet, or 100BASE-T,
provides transmission speeds up to 100 Mbps. Gigabit Ethernet provides transmission speeds up to
1000 Mbps and 10-Gigabit Ethernet provides transmission speeds up to 10,000 Mbps. The goal of
Ethernet is to provide a best-effort delivery service and allow all devices on the shared medium to
transmit on an equal basis.




Collisions are a natural occurrence on Ethernet networks and can become a major problem.


The next page will describe half-duplex networks.



4.1    Introduction to Ethernet/802.3 LANs
4.1.4 Half-duplex networks



This page will explain how collisions occur on a half-duplex network.


Originally Ethernet was a half-duplex technology. Half-duplex allows hosts to either transmit or receive at
one time, but not both. Each host checks the network to see whether data is being transmitted before it
transmits additional data. If the network is already in use, the transmission is delayed. Despite
transmission deferral, two or more hosts could transmit at the same time. This results in a collision.
When a collision occurs, the host that detects the collision first sends out a jam signal to the other hosts.
When a jam signal is received, each host stops data transmission, then waits for a random period of time
to retransmit the data.




The back-off algorithm generates this random delay. As more hosts are added to the network, collisions
are more likely to occur.
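
The random wait described above follows a truncated binary exponential back-off. The short Python sketch below illustrates the general pattern; the slot time and attempt limit shown are the usual 10-Mbps Ethernet values, but real NICs perform this entirely in hardware, so treat this as an illustration rather than the exact IEEE 802.3 procedure.

    import random

    SLOT_TIME_US = 51.2    # 10-Mbps Ethernet slot time, in microseconds
    MAX_ATTEMPTS = 16      # the frame is dropped after 16 failed attempts

    def backoff_delay(collision_count):
        """Return a random back-off delay (microseconds) after the nth collision."""
        if collision_count >= MAX_ATTEMPTS:
            raise RuntimeError("excessive collisions, frame dropped")
        k = min(collision_count, 10)             # exponent is capped at 10
        slots = random.randint(0, 2 ** k - 1)    # wait 0 .. 2^k - 1 slot times
        return slots * SLOT_TIME_US

    # Example: delays chosen after the first three collisions on a frame
    for n in (1, 2, 3):
        print(f"after collision {n}: wait {backoff_delay(n):.1f} microseconds")
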


Ethernet LANs become saturated because users run network intensive software, such as client/server
applications, which cause hosts to transmit more often and for longer periods of time. The network
interface card (NIC), used by LAN devices, provides several circuits so that communication among
devices can occur.


The next page will discuss some other factors that cause network congestion.


4.1    Introduction to Ethernet/802.3 LANs
4.1.5 Network congestion



This page will discuss some factors that create a need for more bandwidth on a network.


Advances in technology produce faster and more intelligent desktop computers and workstations. The
combination of more powerful workstations and network intensive applications has created a need for
greater network capacity, or bandwidth.




All of these factors place a strain on networks with 10 Mbps of available bandwidth, which is why many
networks now provide 100 Mbps of bandwidth on their LANs.




The following types of media have increased the volume of traffic transmitted over networks:


       Large graphics files
       Full-motion video
       Multimedia applications


There is also an increase in the number of users on a network. As more people utilize networks to share
larger files, access file servers, and connect to the Internet, network congestion occurs. This results in
slower response times, longer file transfers, and less productive network users.




To relieve network congestion, either more bandwidth is needed or the available bandwidth must be
used more efficiently.


The next page will discuss network latency.


4.1       Introduction to Ethernet/802.3 LANs
4.1.6 Network latency



This page will help students understand the factors that increase network latency.


Latency, or delay, is the time a frame or a packet takes to travel from the source station to the final
destination. It is important to quantify the total latency of the path between the source and the destination
for LANs and WANs. In the specific case of an Ethernet LAN, it is important to understand latency and its
effect on network timing, because total latency determines whether CSMA/CD will work properly.


Latency has at least three sources:


          First, there is the time it takes the source NIC to place voltage pulses on the wire and the time it
           takes the destination NIC to interpret these pulses. This is sometimes called NIC delay, typically
           around 1 microsecond for a 10BASE-T NIC.
          Second, there is the actual propagation delay as the signal takes time to travel through the
           cable. Typically, this is about 0.556 microseconds per 100 m for Cat 5 UTP. Longer cable and
           slower nominal velocity of propagation (NVP) results in more propagation delay.
          Third, latency is added based on network devices that are in the path between two computers.
           These are either Layer 1, Layer 2, or Layer 3 devices.
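
As a rough illustration of how these three sources add up, the sketch below sums them for a single path. The NIC and propagation figures restate the approximate values quoted above; the per-device delays passed in are hypothetical placeholders, not measured values.

    # Rough one-way latency estimate, in microseconds (illustrative only).
    NIC_DELAY_US = 1.0              # roughly 1 microsecond per 10BASE-T NIC
    PROP_DELAY_US_PER_100M = 0.556  # roughly 0.556 microseconds per 100 m of Cat 5 UTP

    def path_latency_us(cable_m, device_delays_us):
        """Sum NIC, propagation, and per-device delays along a simple path."""
        nic = 2 * NIC_DELAY_US                        # source NIC plus destination NIC
        propagation = PROP_DELAY_US_PER_100M * cable_m / 100
        return nic + propagation + sum(device_delays_us)

    # Example: 200 m of cable through two devices assumed to add 10 microseconds each
    print(path_latency_us(200, [10, 10]))   # 23.112
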


Latency does not depend solely on distance and number of devices. For example, if three properly
configured switches separate two workstations, the workstations may experience less latency than if two
properly configured routers separated them.




This is because routers conduct more complex and time-intensive functions. A router must analyze Layer
3 data.


The next page will discuss transmission time.


4.1    Introduction to Ethernet/802.3 LANs
4.1.7 Ethernet 10BASE-T transmission time



This page will explain how transmission time is determined for 10BASE-T.


All networks have what is called a bit time. Many LAN technologies, such as Ethernet, define the bit time
as the basic unit of time in which one bit can be sent.
to recognize a binary one or zero, there must be some minimum duration during which the bit is on or off.


Transmission time equals the number of bits to be sent times the bit time for a given technology. Another
way to think about transmission time is the interval between the start and end of a frame transmission, or
between the start of a frame transmission and a collision. Small frames take a shorter amount of time.




Large frames take a longer amount of time.


Each 10-Mbps Ethernet bit has a 100 ns transmission window. This is the bit time. A byte equals eight
bits. Therefore, 1 byte takes a minimum of 800 ns to transmit. A 64-byte frame, which is the smallest
10BASE-T frame that allows CSMA/CD to function properly, has a transmission time of 51,200 ns or
51.2 microseconds. Transmission of an entire 1000-byte frame from the source requires 800
microseconds. The time at which the frame actually arrives at the destination station depends on the
additional latency introduced by the network. This latency can be due to a variety of delays including all
of the following:


          NIC delays
          Propagation delays
          Layer 1, Layer 2, or Layer 3 device delays


The Interactive Media Activity will help students determine the 10BASE-T transmission times for different
frame sizes.
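
The bit-time arithmetic above can be checked with a short calculation. The sketch below simply restates the 100 ns bit time of 10-Mbps Ethernet; it ignores preamble, interframe gap, and the latency sources just listed.

    BIT_TIME_NS = 100   # one bit time on 10-Mbps Ethernet

    def transmission_time_us(frame_bytes):
        """Time needed to place a frame of the given size on the wire, in microseconds."""
        return frame_bytes * 8 * BIT_TIME_NS / 1000

    print(transmission_time_us(64))     # 51.2  -> the minimum 10BASE-T frame
    print(transmission_time_us(1000))   # 800.0 -> a 1000-byte frame
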


The next page will describe the benefits of repeaters.


4.1       Introduction to Ethernet/802.3 LANs
4.1.8 The benefits of using repeaters



This page will explain how a repeater can be used to extend the distance of a LAN.


The distance that a LAN can cover is limited due to attenuation. Attenuation means that the signal
weakens as it travels through the network. The resistance in the cable or medium through which the
signal travels causes the loss of signal strength. An Ethernet repeater is a physical layer device on the
network that boosts or regenerates the signal on an Ethernet LAN. When a repeater is used to extend
the distance of a LAN, a single network can cover a greater distance and more users can share that
same network. However, the use of repeaters and hubs adds to problems associated with broadcasts
and collisions. It also has a negative effect on the overall performance of the shared media LAN.




The Interactive Media Activity will teach students about the Cisco 1503 Micro Hub.


The next page will discuss full-duplex technology.


4.1   Introduction to Ethernet/802.3 LANs
4.1.9 Full-duplex transmitting



This page will explain how full-duplex Ethernet allows the transmission of a packet and the reception of a
different packet at the same time. This simultaneous transmission and reception requires the use of two
pairs of wires in the cable and a switched connection between each node. This connection is considered
point-to-point and is collision free. Because both nodes can transmit and receive at the same time, there
are no negotiations for bandwidth. Full-duplex Ethernet can use a cable infrastructure already in place,
as long as the medium meets the minimum Ethernet standards.


To transmit and receive simultaneously, a dedicated switch port is required for each node. Full-duplex
connections can use 10BASE-T, 100BASE-TX, or 100BASE-FX media to create point-to-point
connections. The NICs on all connected devices must have full-duplex capabilities.


The full-duplex Ethernet switch takes advantage of the two pairs of wires in the cable and creates a
direct connection between the transmit (TX) at one end of the circuit and the receive (RX) at the other
end. With the two stations connected in this manner, a collision-free environment is created as the
transmission and receipt of data occur on separate, non-competitive circuits.


Ethernet can usually only use 50 to 60 percent of the available 10 Mbps of bandwidth because of
collisions and latency. Full-duplex Ethernet offers 100 percent of the bandwidth in both directions.




This produces a potential 20 Mbps throughput, which results from 10 Mbps TX and 10 Mbps RX.


The Interactive Media Activity will help students learn the different characteristics of two full-duplex
Ethernet standards.


This page concludes this lesson. The next lesson will introduce LAN switching. The first page describes
LAN segmentation.


4.2    Introduction to LAN Switching
4.2.1 LAN segmentation



This page will explain LAN segmentation.


A network can be divided into smaller units called segments.




Figure shows an example of a segmented Ethernet network. The entire network has fifteen computers.
Of the fifteen computers, six are servers and nine are workstations. Each segment uses the CSMA/CD
access method and maintains traffic between users on the segment.


Each segment is its own collision domain.


Segmentation allows network congestion to be significantly reduced within each segment. When data is
transmitted within a segment, the devices within that segment share the total available bandwidth. Data
that is passed between segments is transmitted over the backbone of the network through a bridge,
router, or switch.


The next page will discuss bridges.


4.2    Introduction to LAN Switching
4.2.2 LAN segmentation with bridges



This page will describe the main functions of a bridge in a LAN.


Bridges are Layer 2 devices that forward data frames based on the MAC address. Bridges read the
source MAC address of the data packets to discover the devices that are on each segment. The MAC
addresses are then used to build a bridging table.




This allows bridges to block packets that do not need to be forwarded from the local segment.


Although bridges are transparent to other network devices, the latency on a network increases by ten to
thirty percent when a bridge is used. The increased latency is because of the decisions that bridges
make before the packets are forwarded. A bridge is considered a store-and-forward device. Bridges
examine the destination address field and calculate the cyclic redundancy check (CRC) in the Frame
Check Sequence field before the frame is forwarded.




If the destination port is busy, bridges temporarily store the frame until that port is available.


The next page will discuss routers.


4.2    Introduction to LAN Switching
4.2.3 LAN segmentation with routers



This page will explain how routers are used to segment a LAN.


Routers provide network segmentation which adds a latency factor of twenty to thirty percent over a
switched network. The increased latency is because routers operate at the network layer and use the IP
address to determine the best path to the destination node.




Figure shows a Cisco router.


Bridges and switches provide segmentation within a single network or subnetwork. Routers provide
connectivity between networks and subnetworks.




Routers do not forward broadcasts while switches and bridges must forward broadcast frames.


The Interactive Media Activities will help students become more familiar with the Cisco 2621 and 3640
routers.


The next page will discuss switches.


4.2   Introduction to LAN Switching
4.2.4 LAN segmentation with switches



This page will explain how switches are used to segment a LAN.


Switches decrease bandwidth shortages and network bottlenecks, such as those between several
workstations and a remote file server.




Figure shows a Cisco switch. Switches segment LANs into microsegments which decreases the size of
collision domains.




However, all hosts connected to a switch are still in the same broadcast domain.


In a completely switched Ethernet LAN, the source and destination nodes function as if they are the only
nodes on the network. When these two nodes establish a link, or virtual circuit, they have access to the
maximum available bandwidth. These links provide significantly more throughput than Ethernet LANs
connected by bridges or hubs.




This virtual network circuit is established within the switch and exists only when the nodes need to
communicate.


The next page will explain the role of a switch in a LAN.


4.2   Introduction to LAN Switching
4.2.5 Basic operations of a switch



This page will discuss the basic functions of a switch in a LAN.


Switching is a technology that decreases congestion in Ethernet, Token Ring, and Fiber Distributed Data
Interface (FDDI) LANs. Switches use microsegmentation to reduce collision domains and network traffic.
This reduction results in more efficient use of bandwidth and increased throughput. LAN switches often
replace shared hubs and are designed to work with cable infrastructures already in place.




The following are the two basic operations that switches perform:


       Switch data frames - The process of receiving a frame on a switch interface, selecting the
        correct forwarding switch port(s), and forwarding the frame.
       Maintain switch operations - Switches build and maintain forwarding tables. Switches also
        construct and maintain a loop-free topology across the LAN.




Figures through show the basic operations of a switch.


The next page will discuss latency.


4.2   Introduction to LAN Switching
4.2.6 Ethernet switch latency



This page will explain how Ethernet switches contribute to latency.




Switch latency is the period of time from when a frame enters a switch to when the frame exits the
switch. Latency is directly related to the configured switching process and the volume of traffic.


Latency is measured in fractions of a second. Network devices operate at incredibly high speeds so
every additional nanosecond of latency adversely affects network performance.


The next page will describe Layer 2 and Layer 3 switching.


4.2   Introduction to LAN Switching
4.2.7 Layer 2 and Layer 3 switching



This page will show students how switching occurs at the data link and the network layer.


There are two methods of switching data frames, Layer 2 switching and Layer 3 switching. Routers and
Layer 3 switches use Layer 3 switching to switch packets. Layer 2 switches and bridges use Layer 2
switching to forward frames.


The difference between Layer 2 and Layer 3 switching is the type of information inside the frame that is
used to determine the correct output interface. Layer 2 switching is based on MAC address information.
Layer 3 switching is based on network layer addresses, or IP addresses. The features and functionality
of Layer 3 switches and routers have numerous similarities. The only major difference between the
packet switching operation of a router and a Layer 3 switch is the physical implementation. In general-
purpose routers, packet switching takes place in software, using microprocessor-based engines,
whereas a Layer 3 switch performs packet forwarding using application specific integrated circuit (ASIC)
hardware.


Layer 2 switching looks at a destination MAC address in the frame header and forwards the frame to the
appropriate interface or port based on the MAC address in the switching table.




The switching table is contained in Content Addressable Memory (CAM). If the Layer 2 switch does not
know where to send the frame, it floods the frame out all ports to the network. When a reply is
returned, the switch records the new address in the CAM.




Layer 3 switching is a function of the network layer. The Layer 3 header information is examined and the
packet is forwarded based on the IP address.


Traffic flow in a switched or flat network is inherently different from the traffic flow in a routed or
hierarchical network. Hierarchical networks offer more flexible traffic flow than flat networks.


The next page will discuss symmetric and asymmetric switching.


4.2    Introduction to LAN Switching
4.2.8 Symmetric and asymmetric switching



This page will explain the difference between symmetric and asymmetric switching.


LAN switching may be classified as symmetric or asymmetric based on the way in which bandwidth is
allocated to the switch ports. A symmetric switch provides switched connections between ports with the
same bandwidth.




An asymmetric LAN switch provides switched connections between ports of unlike bandwidth, such as a
combination of 10-Mbps and 100-Mbps ports.




Asymmetric switching enables more bandwidth to be dedicated to the server switch port in order to
prevent a bottleneck. This allows smoother traffic flows where multiple clients are communicating with a
server at the same time. Memory buffering is required on an asymmetric switch. The use of buffers
keeps the frames contiguous between different data rate ports.


The next page will discuss memory buffers.


4.2   Introduction to LAN Switching
4.2.9 Memory buffering



This page will explain what a memory buffer is and how it is used.



An Ethernet switch may use a buffering technique to store and forward frames. Buffering may also be
used when the destination port is busy. The area of memory where the switch stores the data is called
the memory buffer.




This memory buffer can use two methods for forwarding frames, port-based memory buffering and
shared memory buffering.


In port-based memory buffering, frames are stored in queues that are linked to specific incoming ports. A
frame is transmitted to the outgoing port only when all the frames ahead of it in the queue have been
successfully transmitted. It is possible for a single frame to delay the transmission of all the frames in
memory because of a busy destination port. This delay occurs even if the other frames could be
transmitted to open destination ports.


Shared memory buffering deposits all frames into a common memory buffer which all the ports on the
switch share. The amount of buffer memory required by a port is dynamically allocated. The frames in
the buffer are linked dynamically to the destination port. This allows the packet to be received on one
port and then transmitted on another port, without moving it to a different queue.


The switch keeps a map of frame to port links showing where a packet needs to be transmitted. The map
link is cleared after the frame has been successfully transmitted. The memory buffer is shared. The
number of frames stored in the buffer is restricted by the size of the entire memory buffer, and not limited
to a single port buffer. This permits larger frames to be transmitted with fewer dropped frames. This is
important to asymmetric switching, where frames are being exchanged between different rate ports.
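
The difference between the two buffering methods comes down to whether a frame waiting on a busy port can block frames headed for free ports. The toy model below sketches the shared-buffer approach described above; the class name, capacity handling, and frame-to-port map are illustrative assumptions, since real switches implement this in hardware with fixed-size buffer cells.

    from collections import deque

    class SharedBufferSketch:
        """Toy model of shared memory buffering: one pool, frames linked to ports."""

        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used = 0
            self.pending = deque()       # (destination_port, frame) in arrival order

        def enqueue(self, dest_port, frame):
            """Accept a frame if the shared pool has room; otherwise drop it."""
            if self.used + len(frame) > self.capacity:
                return False             # pool exhausted, frame dropped
            self.pending.append((dest_port, frame))
            self.used += len(frame)
            return True

        def transmit_ready(self, free_ports):
            """Forward buffered frames whose destination port is now free.

            Frames waiting on busy ports stay queued but do not block the others,
            which is the key difference from port-based buffering.
            """
            sent, waiting = [], deque()
            for port, frame in self.pending:
                if port in free_ports:
                    sent.append((port, frame))
                    self.used -= len(frame)
                else:
                    waiting.append((port, frame))
            self.pending = waiting
            return sent
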


The next page will describe two switching methods.


4.2     Introduction to LAN Switching
4.2.10 Two switching methods



This page will introduce store-and-forward and cut-through switching.




The following two switching modes are available to forward frames:


       Store-and-forward - The entire frame is received before any forwarding takes place. The
        destination and source addresses are read and filters are applied before the frame is forwarded.
        Latency occurs while the frame is being received. Latency is greater with larger frames because
        the entire frame must be received before the switching process begins. The switch is able to
        check the entire frame for errors, which allows more error detection.
       Cut-through - The frame is forwarded through the switch before the entire frame is received. At a
        minimum the frame destination address must be read before the frame can be forwarded. This
        mode decreases the latency of the transmission, but also reduces error detection.




The following are two forms of cut-through switching:


          Fast-forward - Fast-forward switching offers the lowest level of latency. Fast-forward switching
           immediately forwards a packet after reading the destination address. Because fast-forward
           switching starts forwarding before the entire packet is received, there may be times when
            packets are relayed with errors. This occurs infrequently, and the destination network
            adapter will discard the faulty packet upon receipt. In fast-forward mode, latency is measured
           from the first bit received to the first bit transmitted.
          Fragment-free - Fragment-free switching filters out collision fragments before forwarding begins.
           Collision fragments are the majority of packet errors. In a properly functioning network, collision
            fragments must be smaller than 64 bytes. Anything 64 bytes or larger is a valid packet and is
           usually received without error. Fragment-free switching waits until the packet is determined not
           to be a collision fragment before forwarding. In fragment-free mode, latency is also measured
           from the first bit received to the first bit transmitted.


The latency of each switching mode depends on how the switch forwards the frames. To accomplish
faster frame forwarding, the switch reduces the time for error checking. However, reducing the error
checking time can lead to a higher number of retransmissions.
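
The latency difference between the two modes can be made concrete with bit-time arithmetic. The sketch below uses the 10-Mbps bit time and assumes, for illustration, that a cut-through switch can begin forwarding as soon as the 6-byte destination address has arrived; it ignores preamble and internal lookup time, so the numbers are indicative only.

    BIT_TIME_NS = 100   # 10-Mbps Ethernet

    def store_and_forward_latency_us(frame_bytes):
        """The whole frame must be received before forwarding can begin."""
        return frame_bytes * 8 * BIT_TIME_NS / 1000

    def cut_through_latency_us():
        """Forwarding can begin once the 6-byte destination address is in."""
        return 6 * 8 * BIT_TIME_NS / 1000

    print(store_and_forward_latency_us(64))     # 51.2   for a minimum-size frame
    print(store_and_forward_latency_us(1518))   # 1214.4 for a maximum-size frame
    print(cut_through_latency_us())             # 4.8    regardless of frame size
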


This page concludes this lesson. The next lesson will describe Ethernet Switches. The first page will
explain the main functions of switches.


4.3       Switch Operation
4.3.1 Functions of Ethernet switches



This page will review the functions of an Ethernet switch.



A switch is a device that connects LAN segments using a table of MAC addresses to determine the
segment on which a frame needs to be transmitted.




Both switches and bridges operate at Layer 2 of the OSI model.


Switches are sometimes called multiport bridges or switching hubs. Switches make decisions based on
MAC addresses and therefore, are Layer 2 devices.




In contrast, hubs regenerate the Layer 1 signals out of all ports without making any decisions. Since a
switch has the capacity to make path selection decisions, the LAN becomes much more efficient.
Usually, in an Ethernet network the workstations are connected directly to the switch. Switches learn
which hosts are connected to a port by reading the source MAC address in frames. The switch opens a
virtual circuit between the source and destination nodes only. This confines communication to those two
ports without affecting traffic on other ports. In contrast, a hub forwards data out all of its ports so that all
hosts see the data and must process it, even if that data is not intended for them.




High-performance LANs are usually fully switched:




       A switch concentrates connectivity, making data transmission more efficient. Frames are
        switched from incoming ports to outgoing ports. Each port or interface can provide the full
        bandwidth of the connection to the host.
       On a typical Ethernet hub, all ports connect to a common backplane or physical connection
        within the hub, and all devices attached to the hub share the bandwidth of the network. If two
        stations establish a session that uses a significant level of bandwidth, the network performance
        of all other stations attached to the hub is degraded.
       To reduce degradation, the switch treats each interface as an individual segment. When stations
        on different interfaces need to communicate, the switch forwards frames at wire speed from one
        interface to the other, to ensure that each session receives full bandwidth.


To efficiently switch frames between interfaces, the switch maintains an address table. When a frame
enters the switch, it associates the MAC address of the sending station with the interface on which it was
received.


The main features of Ethernet switches are:


       Isolate traffic among segments
       Achieve greater amount of bandwidth per user by creating smaller collision domains


The first feature, isolate traffic among segments, provides for greater security for hosts on the network.
Each segment uses the CSMA/CD access method to maintain data traffic flow among the users on that
segment.




Such segmentation allows multiple users to send information at the same time on the different segments
without slowing down the network.


By using segments in the network, fewer users and/or devices share the same bandwidth when
communicating with one another. Each segment has its own collision domain.




Ethernet switches filter traffic by redirecting datagrams to the correct port or ports based on Layer 2
MAC addresses.


The second feature is called microsegmentation. Microsegmentation allows the creation of dedicated
network segments with one host per segment. Each host receives access to the full bandwidth and
does not have to compete for available bandwidth with other hosts. Popular servers can then be placed
on individual 100-Mbps links. Often in networks of today, a Fast Ethernet switch will act as the backbone
of the LAN, with Ethernet hubs, Ethernet switches, or Fast Ethernet hubs providing the desktop
connections in workgroups.




As demanding new applications such as desktop multimedia or video conferencing become more
popular, certain individual desktop computers will have dedicated 100-Mbps links to the network.


The next page will introduce three frame transmission modes.


4.3       Switch Operation
4.3.2 Frame transmission modes



This page will describe the three main frame transmission modes:




          Cut-through - A switch that performs cut-through switching only reads the destination address
            when receiving the frame. The switch begins to forward the frame before the entire frame
            arrives. This mode decreases the latency of the transmission, but has poor error detection.
           There are two forms of cut-through switching:
               1. Fast-forward switching - This type of switching offers the lowest level of latency by
                   immediately forwarding a packet after receiving the destination address. Latency is
                   measured from the first bit received to the first bit transmitted, or first in first out (FIFO).
                   This mode has poor LAN switching error detection.
                2. Fragment-free switching - This type of switching filters out collision fragments, which are
                   the majority of packet errors, before forwarding begins. Usually, collision fragments are
                   smaller than 64 bytes. Fragment-free switching waits until the received packet has been
                   determined not to be a collision fragment before forwarding the packet. Latency is also
                   measured as FIFO.
          Store-and-forward - The entire frame is received before any forwarding takes place. The
           destination and source addresses are read and filters are applied before the frame is forwarded.
           Latency occurs while the frame is being received. Latency is greater with larger frames because
           the entire frame must be received before the switching process begins. The switch has time
           available to check for errors, which allows more error detection.
          Adaptive cut-through - This transmission mode is a hybrid mode that is a combination of cut-
           through and store-and-forward. In this mode, the switch uses cut-through until it detects a given
           number of errors. Once the error threshold is reached, the switch changes to store-and-forward
           mode.
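
The adaptive behaviour in the last item can be pictured as a simple error counter. The sketch below is an illustration only: the threshold value is an arbitrary assumption, and the CRC result is passed in rather than computed, but it shows the mode change the text describes.

    class AdaptiveCutThroughSketch:
        """Toy model: start in cut-through, fall back once errors exceed a threshold."""

        def __init__(self, error_threshold=10):      # threshold chosen for illustration
            self.error_threshold = error_threshold
            self.errors_seen = 0
            self.mode = "cut-through"

        def handle_frame(self, crc_ok):
            """Record the CRC result for a forwarded frame and return the current mode."""
            if not crc_ok:
                self.errors_seen += 1
                if self.errors_seen >= self.error_threshold:
                    self.mode = "store-and-forward"
            return self.mode
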




The Interactive Media Activity will help students understand the three main switching methods.


The next page will explain how switches learn about the network.


4.3       Switch Operation
4.3.3 How switches and bridges learn addresses



This page will explain how bridges and switches learn addresses and forward frames.


Bridges and switches only forward frames that need to travel from one LAN segment to another. To
accomplish this task, they must learn which devices are connected to which LAN segment.


A bridge is considered an intelligent device because it can make decisions based on MAC addresses. To
do this, a bridge refers to an address table. When a bridge is turned on, broadcast messages are
transmitted asking all the stations on the local segment of the network to respond. As the stations return
the broadcast message, the bridge builds a table of local addresses. This process is called learning.


Bridges and switches learn in the following ways:


       Reading the source MAC address of each received frame or datagram
       Recording the port on which the MAC address was received


In this way, the bridge or switch learns which addresses belong to the devices connected to each port.


The learned addresses and associated port or interface are stored in the addressing table. The bridge
examines the destination address of all received frames. The bridge then scans the address table
searching for the destination address.


The switching table is stored using Content Addressable Memory (CAM). CAM is used in switch
applications to perform the following functions:


       To take out and process the address information from incoming data packets
       To compare the destination address with a table of addresses stored within it


The CAM stores host MAC addresses and associated port numbers. The CAM compares the received
destination MAC address against the CAM table contents.




If the comparison yields a match, the port is provided, and the switch forwards the packet to the correct
port and address.


An Ethernet switch can learn the address of each device on the network by reading the source address
of each frame transmitted and noting the port where the frame entered the switch. The switch then adds
this information to its forwarding database. Addresses are learned dynamically. This means that as new
addresses are read, they are learned and stored in CAM. When a source address is not found in CAM, it
is learned and stored for future use.


Each time an address is stored, it is time stamped. This allows for addresses to be stored for a set period
of time. Each time an address is referenced or found in CAM, it receives a new time stamp. Addresses
that are not referenced during a set period of time are removed from the list. By removing aged or old
addresses, CAM maintains an accurate and functional forwarding database.


The processes followed by the CAM are as follows:


    1. If the address is not found, the bridge forwards the frame out all ports except the port on which it
        was received. This process is called flooding. The address may also have been deleted by the
        bridge because the bridge software was recently restarted, ran short of address entries in the
        address table, or deleted the address because it was too old. Since the bridge does not know
        which port to use to forward the frame, it will send it out all ports, except the one from which it
        was received. It is clearly unnecessary to send it back to the same cable segment from which it
        was received, since any other computer or bridges on this cable must already have received the
        packet.
    2. If the address is found in an address table and the address is associated with the port on which it
        was received, the frame is discarded. It must already have been received by the destination.
    3. If the address is found in an address table and the address is not associated with the port on
        which it was received, the bridge forwards the frame to the port associated with the address.
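
The learn, flood, filter, and forward rules listed above map directly onto a small forwarding-table sketch. The data structures, aging time, and example addresses below are illustrative assumptions; a real switch keeps this table in CAM hardware.

    import time

    AGING_SECONDS = 300   # a typical default aging time, used here as an assumption

    class LearningSwitchSketch:
        """Toy MAC address table implementing learn, age, flood, filter, and forward."""

        def __init__(self, ports):
            self.ports = ports
            self.table = {}   # MAC address -> (port, timestamp)

        def _age_out(self):
            now = time.time()
            self.table = {mac: (port, ts) for mac, (port, ts) in self.table.items()
                          if now - ts < AGING_SECONDS}

        def receive(self, in_port, src_mac, dst_mac):
            """Return the list of ports the frame should be sent out of."""
            self._age_out()
            self.table[src_mac] = (in_port, time.time())   # learn the source address

            entry = self.table.get(dst_mac)
            if entry is None:
                # Unknown destination: flood out every port except the incoming one.
                return [p for p in self.ports if p != in_port]
            out_port, _ = entry
            if out_port == in_port:
                return []          # destination is on the same segment: filter (discard)
            return [out_port]      # known destination on another port: forward

    # Example
    sw = LearningSwitchSketch(ports=[1, 2, 3, 4])
    print(sw.receive(1, "AA:AA:AA:AA:AA:AA", "BB:BB:BB:BB:BB:BB"))   # flood: [2, 3, 4]
    print(sw.receive(2, "BB:BB:BB:BB:BB:BB", "AA:AA:AA:AA:AA:AA"))   # forward: [1]
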




The next page will describe the process that is used to filter frames.


4.3       Switch Operation
4.3.4 How switches and bridges filter frames



This page will explain how switches and bridges filter frames. In this discussion, the terms “switch” and
“bridge” are synonymous.


Most switches are capable of filtering frames based on any Layer 2 frame field. For example, a switch
can be programmed to reject, not forward, all frames sourced from a particular network. Because link
layer information often includes a reference to an upper-layer protocol, switches can usually filter on this
parameter. Furthermore, filters can be helpful in dealing with unnecessary broadcast and multicast
packets.


Once the switch has built the local address table, it is ready to operate. When it receives a frame, it
examines the destination address. If the frame address is local, the switch ignores it. If the frame is
addressed for another LAN segment, the switch copies the frame onto the second segment.


          Ignoring a frame is called filtering.
          Copying the frame is called forwarding.


Basic filtering keeps local frames local and sends remote frames to another LAN segment.


Filtering on specific source and destination addresses performs the following actions:


          Stopping one station from sending frames outside of its local LAN segment
          Stopping all "outside" frames destined for a particular station, thereby restricting the other
           stations with which it can communicate


Both types of filtering provide some control over internetwork traffic and can offer improved security.


Most Ethernet switches can now filter broadcast and multicast frames. Bridges and switches that can
filter frames based on MAC addresses can also be used to filter Ethernet frames by multicast and
broadcast addresses. This filtering is achieved through the implementation of virtual local-area networks
or VLANs. VLANs allow network administrators to prevent the transmission of unnecessary multicast and
broadcast messages throughout a network. Occasionally, a device will malfunction and continually send
out broadcast frames, which are copied around the network. This is called a broadcast storm and it can
significantly reduce network performance. A switch that can filter broadcast frames makes a broadcast
storm less harmful.


Today, switches are also able to filter according to the network-layer protocol. This blurs the demarcation
between switches and routers. A router operates on the network layer using a routing protocol to direct
traffic around the network. A switch that implements advanced filtering techniques is usually called a
brouter. Brouters filter by looking at network layer information but they do not use a routing protocol.




The next page will explain how switches are used to segment a LAN.


4.3    Switch Operation
4.3.5 Why segment LANs?



This page will explain the two main reasons to segment a LAN.


There are two primary reasons for segmenting a LAN. The first is to isolate traffic between segments.




The second reason is to achieve more bandwidth per user by creating smaller collision domains.


Without LAN segmentation, LANs larger than a small workgroup could quickly become clogged with
traffic and collisions.


LAN segmentation can be implemented through the utilization of bridges, switches, and routers. Each of
these devices has particular pros and cons.




With the addition of devices like bridges, switches, and routers the LAN is segmented into a number of
smaller collision domains. In the example shown, four collision domains have been created.


By dividing large networks into self-contained units, bridges and switches provide several advantages.
Bridges and switches will diminish the traffic experienced by devices on all connected segments,
because only a certain percentage of traffic is forwarded.




Bridges and switches reduce the collision domain but not the broadcast domain.


Each interface on the router connects to a separate network. Therefore the insertion of the router into a
LAN will create smaller collision domains and smaller broadcast domains. This occurs because routers
do not forward broadcasts unless programmed to do so.


A switch employs "microsegmentation" to reduce the collision domain on a LAN. The switch does this by
creating dedicated network segments, or point-to-point connections. The switch connects these
segments in a virtual network within the switch.


This virtual network circuit exists only when two nodes need to communicate. This is called a virtual
circuit as it exists only when needed, and is established within the switch.


The next page will discuss microsegmentation.


4.3   Switch Operation
4.3.6 Microsegmentation implementation



This page will explain the functions that a switch performs in a LAN as a result of microsegmentation.


LAN switches are considered multi-port bridges with no collision domain, because of microsegmentation.




Data is exchanged at high speeds by switching the frame to its destination. By reading the Layer 2
destination MAC address, switches can achieve high-speed data transfers, much like a bridge
does. This process leads to low latency levels and a high rate of speed for frame forwarding.


Ethernet switching increases the bandwidth available on a network. It does this by creating dedicated
network segments, or point-to-point connections, and connecting these segments in a virtual network
within the switch. This virtual network circuit exists only when two nodes need to communicate.




This is called a virtual circuit because it exists only when needed, and is established within the switch.


Even though the LAN switch reduces the size of collision domains, all hosts connected to the switch are
still in the same broadcast domain.




Therefore, a broadcast from one node will still be seen by all the other nodes connected through the LAN
switch.


Switches are data link layer devices that, like bridges, enable multiple physical LAN segments to be
interconnected into a single larger network. Similar to bridges, switches forward and flood traffic based
on MAC addresses. Because switching is performed in hardware instead of in software, it is significantly
faster. Each switch port can be considered a micro-bridge that acts as a separate bridge and gives the
full bandwidth of the medium to each host.


The next page will discuss collisions.


4.3   Switch Operation
4.3.7 Switches and collision domains



This page will discuss collisions, which are a major disadvantage of Ethernet 802.3 networks.


A major disadvantage of Ethernet 802.3 networks is collisions. Collisions occur when two hosts transmit
frames simultaneously. When a collision occurs, the transmitted frames are corrupted or destroyed in the
collision. The sending hosts stop sending further transmissions for a random period of time, based on the
Ethernet 802.3 rules of CSMA/CD.




Excessive collisions cause networks to be unproductive.


The network area where frames originate and collide is called the collision domain. All shared media
environments are collision domains.




When a host is connected to a switch port, the switch creates a dedicated connection. This connection is
considered to be an individual collision domain.




For example, if a twelve-port switch has a device connected to each port then twelve collision domains
are created.


A switch builds a switching table by learning the MAC addresses of the hosts that are connected to each
switch port.




When two connected hosts want to communicate with each other, the switch looks up the switching table
and establishes a virtual connection between the ports. The virtual circuit is maintained until the session
is terminated.



In Figure , Host B and Host C want to communicate with each other. The switch creates the virtual
connection which is referred to as a microsegment. The microsegment behaves as if the network has
only two hosts, one host sending and one receiving, which provides maximum utilization of the available
bandwidth.


Switches reduce collisions and increase bandwidth on network segments because they provide
dedicated bandwidth to each network segment.


The next page will discuss three methods of data transmission in a network.


4.3   Switch Operation
4.3.8 Switches and broadcast domains



This page will describe the three methods of data transmission that are used in a network.


Communication in a network occurs in three ways. The most common way of communication is by
unicast transmissions. In a unicast transmission, one transmitter tries to reach one receiver.




Another way to communicate is known as a multicast transmission. Multicast transmission occurs when
one transmitter tries to reach only a subset, or a group, of the entire segment.


The final way to communicate is by broadcast transmission. In a broadcast, one transmitter tries to reach
all the receivers in the network. The sending station transmits one message, and every device on that
segment receives it.


When a device wants to send out a Layer 2 broadcast, the destination MAC address in the frame is set
to all ones. A MAC address of all ones is FF:FF:FF:FF:FF:FF in hexadecimal. By setting the destination
to this value, all devices will accept and process the broadcast frame.
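
The three transmission types can be distinguished just by inspecting the destination MAC address. The
short Python sketch below is illustrative (the function name is an assumption): a broadcast address is
all ones, and a multicast address has the low-order bit of the first octet set to 1.

    def classify_destination(mac):
        # Parse a MAC address string such as "ff:ff:ff:ff:ff:ff".
        octets = [int(part, 16) for part in mac.split(":")]
        if all(octet == 0xFF for octet in octets):
            return "broadcast"      # accepted and processed by every device
        if octets[0] & 0x01:
            return "multicast"      # group bit set: a subset of devices
        return "unicast"            # a single receiver

    print(classify_destination("ff:ff:ff:ff:ff:ff"))   # broadcast
    print(classify_destination("01:00:5e:00:00:fb"))   # multicast (example group address)
    print(classify_destination("00:1a:2b:3c:4d:5e"))   # unicast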


The broadcast domain at Layer 2 is referred to as the MAC broadcast domain. The MAC broadcast
domain consists of all devices on the LAN that receive frames broadcast by a host to all other machines
on the LAN.


A switch is a Layer 2 device. When a switch receives a broadcast frame, it forwards the frame out every
port except the port on which the frame arrived. Each attached device must process the broadcast frame.
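
As a minimal sketch of that flooding behavior, the following illustrative Python function (the name and
port representation are assumptions, not taken from the course) returns the ports a broadcast frame is
copied to:

    def flood_ports(ingress_port, switch_ports):
        # A broadcast frame is copied to every port on the switch
        # except the one it arrived on.
        return [port for port in switch_ports if port != ingress_port]

    # A broadcast arriving on port 1 of a 12-port switch is sent out ports 2-12,
    # and every attached device must receive and process it.
    print(flood_ports(1, range(1, 13)))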




This leads to reduced network efficiency, because available bandwidth is used for broadcasting
purposes.




When two switches are connected, the broadcast domain is increased. In this example a broadcast
frame is forwarded to all connected ports on Switch 1. Switch 1 is connected to Switch 2. The frame is
propagated to all devices connected to Switch 2.


The overall result is a reduction in available bandwidth. This happens because all devices in the
broadcast domain must receive and process the broadcast frame.


Routers are Layer 3 devices. Routers do not propagate broadcasts. Routers are used to segment both
collision and broadcast domains.


The next page will explain how a workstation connects to a LAN.


4.3       Switch Operation
4.3.9 Communication between switches and workstations



This page will explain how switches learn about workstations in a LAN.


When a workstation connects to a LAN, it is unconcerned about the other devices that are connected to
the LAN media. The workstation simply transmits data frames using a NIC to the network medium.


The workstation could be attached directly to another workstation using a crossover cable. Crossover
cables are used to connect the following devices:




          Workstation to workstation
          Switch to Switch
          Switch to hub
          Hub to hub
          Router to router
          Router to PC



Straight-through cables are used to connect the following devices; a short sketch summarizing both cable
rules follows this list:


       Switch to router
       Switch to workstation or server
       Hub to workstation or server
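
Both lists follow one underlying rule: devices whose Ethernet ports are wired the same way (switches and
hubs on one side; workstations, servers, routers, and PCs on the other) are joined with a crossover
cable, while devices wired differently use a straight-through cable. The Python sketch below encodes that
rule; the device categories and function name are illustrative assumptions, and it ignores modern ports
that adjust their wiring automatically.

    # Switch and hub ports use MDI-X wiring; workstations, servers, routers,
    # and PCs use MDI wiring.
    MDI_X_DEVICES = {"switch", "hub"}

    def cable_type(device_a, device_b):
        # Like wiring on both ends -> crossover; unlike wiring -> straight-through.
        a_is_mdix = device_a.lower() in MDI_X_DEVICES
        b_is_mdix = device_b.lower() in MDI_X_DEVICES
        return "crossover" if a_is_mdix == b_is_mdix else "straight-through"

    print(cable_type("switch", "hub"))           # crossover
    print(cable_type("router", "pc"))            # crossover
    print(cable_type("switch", "workstation"))   # straight-through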


Switches are Layer 2 devices that use intelligence to learn the MAC addresses of the devices that are
attached to the ports of the switch. This data is entered into a switching table. Once the table is
complete, the switch can read the destination MAC address of an incoming data frame on a port and
immediately forward it.




Until a device transmits, the switch does not know its MAC address.


Switches provide significant scalability on a network and may be directly connected to each other.




The figure illustrates one scenario of frame transmission utilizing a multi-switch network.


 This page concludes this lesson. The next page will summarize the main points from this module.


Summary



 This page summarizes the topics discussed in this module.




 Ethernet is the most common LAN architecture and it is used to transport data between devices on a
 network. Originally Ethernet was a half-duplex technology. Using half-duplex, a host could either transmit
 or receive at one time, but not both. When two or more Ethernet hosts transmit at the same time on a
 shared medium, the result is a collision. The time a frame or a packet takes to travel from the source
 station to the final destination is known as latency or delay. The three sources of latency include NIC
 delay, actual propagation delay, and delay due to specific network devices.


Bit time is the basic unit of time in which one bit can be sent. There must be some minimum duration
during which the bit is on or off in order for a device to recognize a binary one or zero.


Attenuation means that a signal will weaken as it travels through the network. This limits the distance that
 a LAN can cover. A repeater can extend the distance of a LAN but it also has a negative effect on the
 overall performance of a LAN.


Full-duplex transmission between stations is achieved by using point-to-point Ethernet connections. Full-
duplex transmission provides a collision-free transmission environment. Both stations can transmit and
receive at the same time, and there are no negotiations for bandwidth. The existing cable infrastructure
can be utilized as long as the medium meets the minimum Ethernet standards.


Segmentation divides a network into smaller units to reduce network congestion and enhance security.
The CSMA/CD access method on each segment maintains traffic between users. Segmentation with a
Layer 2 bridge is transparent to other network devices but latency is increased significantly. The more
work done by a network device, the more latency the device will introduce into the network. Routers
provide segmentation of networks but can add a latency factor of 20% to 30% over a switched network.
This increased latency is because a router operates at the network layer and uses the IP address to
determine the best path to the destination node. A switch can segment a LAN into microsegments which
decreases the size of collision domains. However all hosts connected to the switch are still in the same
broadcast domain.


Switching is a technology that decreases congestion in Ethernet, Token Ring, and Fiber Distributed Data
Interface (FDDI) LANs. Switching is the process of receiving an incoming frame on one interface and
delivering that frame out another interface. Routers use Layer 3 switching to route a packet. Switches
use Layer 2 switching to forward frames. A symmetric switch provides switched connections between
ports with the same bandwidth. An asymmetric LAN switch provides switched connections between ports
of unlike bandwidth, such as a combination of 10-Mbps and 100-Mbps ports.


A memory buffer is an area of memory where a switch stores data. A switch can use one of two methods
to buffer frames: port-based memory buffering or shared memory buffering.
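
A very rough illustration of the difference (Python, with hypothetical names; real buffering is done in
switch hardware): in port-based buffering each port has its own queue, so a frame can wait behind traffic
queued for a busy port, while in shared buffering all ports draw from one common memory pool.

    from collections import deque

    class PortBufferedSwitch:
        # One queue per port: memory sitting idle on one port cannot be
        # used to absorb a burst of frames on a busier port.
        def __init__(self, ports):
            self.queues = {port: deque() for port in ports}

        def enqueue(self, port, frame):
            self.queues[port].append(frame)

    class SharedBufferedSwitch:
        # One common pool: frames destined for all ports share the same memory.
        def __init__(self):
            self.pool = deque()   # holds (egress_port, frame) pairs

        def enqueue(self, port, frame):
            self.pool.append((port, frame))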


There are two modes used to forward frames. Store-and-forward switching receives the entire frame
before forwarding it, while cut-through switching begins forwarding the frame as it is received, which
decreases latency. Fast-forward and fragment-free are two types of cut-through forwarding.
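
The practical difference between the modes is how many bytes of a frame must arrive before forwarding
starts. The sketch below is illustrative (the dictionary and function are assumptions); the 6-byte figure
is the destination MAC address and 64 bytes is the minimum legal Ethernet frame size.

    # Bytes of the frame that must be received before forwarding can begin.
    FORWARDING_MODES = {
        "store-and-forward": None,   # whole frame, which also allows a CRC check
        "fast-forward": 6,           # just the destination MAC address
        "fragment-free": 64,         # past the window in which collisions occur
    }

    def bytes_before_forwarding(mode, frame_length):
        threshold = FORWARDING_MODES[mode]
        return frame_length if threshold is None else min(threshold, frame_length)

    for mode in FORWARDING_MODES:
        print(mode, bytes_before_forwarding(mode, frame_length=1518))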



