White Paper

The Challenge for Next Generation Network Processors
Abstract

Networking hardware manufacturers face the dual demands of supporting ever increasing bandwidth requirements while also delivering new features, such as the ability to implement Quality of Service (QoS) and Service Level Agreement (SLA) monitoring. As a result, hardware vendors increasingly need a network processor solution that allows them to meet these demands while shortening time-to-market cycles and thus gaining a competitive advantage.

This paper will provide an overview of how network processor technology has evolved and why current processor technology is insufficient to meet future demands. Agere’s Wicked Smart approach will be shown to offer vendors new hope for delivering a new class of product that combines the performance benefits of ASICs with the ease of configuration of software-based devices.

THE HISTORY OF NETWORK PROCESSORS
Software-Based Routers
Up until the late 1990s, most network routers were based on an architecture similar to a personal computer. A central processing unit performed tasks such as forwarding table lookups, access control filtering, and processing of routing updates. The central processor received instructions from the router operating system that ran in random access memory (RAM), along with basic instructions sometimes stored in read only memory (ROM).

The advantage of this architecture was that all instructions were stored in software; therefore new features could be added simply by upgrading the system software, much like a PC. Routers could also easily be designed with additional interfaces, such as support for V.35 or HSSI (High Speed Serial Interface), without having to touch the basic processor architecture. For vendors, this meant that their basic router architecture could support a variety of configurations as well as different versions of the same basic operating system. This made it possible for vendors to quickly develop new products with a short time-to-market window, products that were often geared to very small segments of the market.


A perfect example of this approach was (and still is) the Cisco 2500 product line. This software-based router uses a central processing unit to perform routing instructions based on software configurations stored in nonvolatile RAM. The Cisco 2500 series is available in a variety of specialized hardware configurations depending on specific application requirements, such as multiple or single serial ports, multiple or single AUI (attachment unit interface) ports, multiple or single Token Ring ports, or various types of serial interfaces. On the software side, dozens of different software versions are available to support highly specific individual requirements.

The drawback to software-based routers is their limited ability to scale to support the demands of higher bandwidth and additional features. For example, most software-based routers currently available can only support wire speed throughput for less than a single OC-3 at 155 Mbps or, in some extreme cases, up to a single OC-12 at 622 Mbps. For many software-based routers, delivering nonblocking performance for more than a handful of Fast Ethernet ports is simply out of the question. When these same routers are asked to perform complex traffic filtering, policy-based routing, or collection of traffic statistics, their performance suffers further, greatly reducing their maximum throughput.

Software-based routers are typically deployed in a collapsed backbone architecture, with local LAN segments connecting to Ethernet (or Token Ring) ports on the router. The router is responsible for forwarding traffic between segments and for controlling the transmission of broadcast packets. However, as networks have grown, this architecture has become expensive to maintain, and the router is often a performance bottleneck. New, higher speed technologies such as Gigabit Ethernet and Packet over SONET can quickly overwhelm the ability of traditional software-based routers to process packets fast enough. In addition, most software-based routers still require complex, manual configuration, which adds to network management complexity.

The Emergence of ASICs
In 1990 a new device called the EtherSwitch appeared, and a new market was born. Developed by Kalpana1, the EtherSwitch was essentially a Layer 2 multi-port bridge2. Like a bridge, the EtherSwitch learned which devices were connected to each port and forwarded unicast traffic only to the port to which the destination device was connected.
1. Kalpana, Inc., is considered to be the inventor of Ethernet switching, and designed and manufactured internetworking products that increased the throughput of Ethernet networks. Kalpana was acquired by Cisco Systems in 1994.
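To make the learning and forwarding behavior just described concrete, the C sketch below shows how a software bridge might maintain a forwarding table keyed by MAC address. It is a minimal illustration of the general technique under stated assumptions (a fixed-size table and integer port numbers), not a description of the EtherSwitch internals.

#include <stdint.h>
#include <string.h>

#define MAX_ENTRIES 1024
#define FLOOD (-1)   /* send out every port except the ingress port */
#define DROP  (-2)   /* destination is on the ingress segment: filter */

struct fdb_entry { uint8_t mac[6]; int port; };

static struct fdb_entry fdb[MAX_ENTRIES];
static int fdb_count;

/* Learn the source address of a frame, then decide where to send it:
 * a known destination maps to a single port, an unknown one is flooded. */
static int switch_frame(const uint8_t dst[6], const uint8_t src[6], int in_port)
{
    int out = FLOOD;
    int src_known = 0;

    for (int i = 0; i < fdb_count; i++) {
        if (memcmp(fdb[i].mac, dst, 6) == 0)
            out = fdb[i].port;
        if (memcmp(fdb[i].mac, src, 6) == 0) {
            fdb[i].port = in_port;        /* refresh if the station moved */
            src_known = 1;
        }
    }
    if (!src_known && fdb_count < MAX_ENTRIES) {
        memcpy(fdb[fdb_count].mac, src, 6);
        fdb[fdb_count].port = in_port;
        fdb_count++;
    }
    return (out == in_port) ? DROP : out;
}

A hardware switch performs the same lookup and learning steps, but in dedicated silicon and at wire speed for every frame.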


Many Layer 2 switches employed custom Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), or Reduced Instruction Set Computing (RISC)-based processors to forward and control traffic in hardware. In some products, ASICs were combined with RISC processors to allow the switch to support additional features as code was developed. In some ways, this approach combines the best of both worlds between software- and hardware-based processing.

The use of ASICs was a major paradigm shift for hardware manufacturers. Rather than using software-based processing and improving performance by increasing the speed of the central processor, hardware manufacturers found that they could achieve tremendous performance improvements by creating specialized chips that were manufactured with embedded instructions and that could, therefore, perform forwarding decisions directly in hardware. This created a new market for chip manufacturers as well, as they began to market off-the-shelf chips that hardware manufacturers could quickly integrate into their new products in order to slash time-to-market cycles.

While early switching products from companies like Kalpana could only make traffic forwarding decisions based on information contained at Layer 2, it wasn't long before switches capable of making forwarding decisions based on information at Layers 3 and above arrived on the market. The Layer 3 switch was initially marketed by companies such as Bay Networks (now Nortel Networks) and many of the Gigabit Ethernet startup companies (Yago, Rapid City, Packet Engines, etc.). These switches combined the speed and scalability of ASIC-based traffic processing with the intelligence of traditional software-based routers. Most of these Layer 3 switches were actually hybrids of software routers and ASIC-based switches that improved performance by using ASICs for frame forwarding but general purpose CPUs for control functions (e.g., calculating routes).

Another example of this approach is Cisco's NetFlow technology. In a NetFlow network, distributed multilayer LAN switches (Catalyst 5000s) provide the frame forwarding function and are dependent upon a software-based router for all route calculations. The router forwards path information across the network to each of the multilayer switches, which the switches then store so that the router can be bypassed for future communications.

With very high packet forwarding rates (on the order of tens of millions of packets per second), Layer 3 switches have also become sufficiently inexpensive that their use is becoming commonplace in the backbone of enterprise building/campus networks.
2. Layer 2 refers to layer two of the OSI networking model. The Open Systems Interconnection (OSI) networking model was created by the ISO (International Organization for Standardization), a worldwide federation that promotes international standards.


Many Layer 3 switches now provide support for standard routing protocols such as OSPF and RIP, high speed interfaces such as Packet over SONET (POS) and Gigabit Ethernet, and even wide area network T1/T3 interfaces. With this support, Layer 3 switches are often used to replace existing networks of software-based routers and Layer 2 switches. Layer 3 switches provide tremendous benefits in performance and ease of management, especially if VLANs or Layer 2 traffic filtering are currently in use.

While ASIC-based switching has allowed for a new generation of very high speed routers and switches, there is a downside to this approach: once instructions or logic have been embedded into silicon, it is difficult to change them to add new features or to improve performance. This means that manufacturers must replace the ASICs to enable new functionality, unlike traditional software-based routers and switches in which new features can be added by a simple upgrade to the operating system. Errors in the ASIC design during product development can result in substantial time-to-market delays, since it sometimes takes months to get a new set of ASICs produced by a silicon foundry. Also, implementation of many advanced features such as complex quality of service routing, identification of upper layer flows, gathering of accounting information, or access control filtering still requires traffic to be processed in software, reducing the performance benefits of the ASIC-based architecture.

Current Network Evolution
The rise of low cost, ASIC-based Layer 2 switches has had a direct impact on the evolution of the modern network architecture. Before Layer 2 switches, the standard LAN architecture consisted of shared 10 Mbps connections to each PC or server, with learning bridges used to reduce unicast traffic and to segment collision domains. With the advent of low cost Layer 2 switches (as well as inexpensive 10/100 NICs), coupled with the invention of 100 Mbps Ethernet, the standard LAN architecture has become switched 100 Mbps to each desktop. New technologies such as Gigabit Ethernet (and Gigabit Ethernet over copper) are causing the demand for bandwidth to increase at an even more rapid rate.

Within the WAN, new technologies such as Packet over SONET and Dense Wave Division Multiplexing (DWDM) are rapidly increasing the amount of bandwidth that can be supported by a single strand of fiber, and the amount of potential bandwidth that network service providers can offer to their customers. Many network service providers are migrating their backbones to support individual channel speeds as high as OC-48 (2.5 Gbps) or even OC-192 (10 Gbps). Even greater speeds loom on the horizon. New local loop technologies such as Digital Subscriber Line (DSL) and cable modems are removing the 56 Kbps barrier for home users, while offering new options for business users.

To meet these new demands for bandwidth, new companies such as Qwest, Frontier Communications, IXC, Level 3, and Enron are laying vast new fiber networks based on DWDM that will eventually provide so much bandwidth that many analysts are forecasting a bandwidth glut in the not so distant future.


With bandwidth rapidly becoming a commodity that many consumers buy based on price, carriers see the need to implement advanced features to differentiate their services. Examples of these new carrier services, available now or soon to be available, include:

❏ Offering multiple quality of service levels to provide lower latency and delay to mission critical traffic

❏ Service level guarantees of up to 99.999% availability, and/or

❏ Integrated services such as the ability to carry voice, video, and data over a single connection.

Carriers want to be able to provide these new services as quickly as possible and at as low a cost as possible. To provide this support, carriers must be able to support new technologies within their networks such as Multi Protocol Label Switching (MPLS), H.323 voice over IP, and integration with legacy ATM infrastructures. Carriers must also implement complex billing and security systems as well as systems to support SLA verification and management. Many carriers are now deploying devices at the customer edge that are capable of interfacing with a variety of different types of traffic such as IP, ATM, DSL, SONET, frame relay, and analog voice. Carriers have rapidly discovered that the days of deploying a specialized device for each type of service are going away. Not only do they want devices that can rapidly support emerging technologies, they also want devices that can scale to meet ever increasing demands for higher bandwidth.

The Definition Of Network Processors
In the days of software-based routers and bridges there was no need for a specialized network processor. A physical layer interface device (PHY) decoded the incoming packet from the network and passed data on to the central processor, which made traffic forwarding decisions based on instructions provided within complex software code. However, in order to provide acceptable performance for higher speed technologies such as Fast and Gigabit Ethernet, specialized network processors are needed. These processors are highly specialized integrated circuits that handle the wire speed data path and perform protocol classification and analysis. Network processors sit on the data path between the physical interface processor and the backplane. Typical functions performed by network processors include:

Segmentation and Reassembly (SAR) — Frames are disassembled, processed, and then reassembled for forwarding.

Protocol Recognition and Classification — Frames are identified based on information such as protocol type, port number, destination URL, or other application or protocol specific information.


Queuing and Access Control — Once frames have been identified, they are placed in appropriate queues for further processing (e.g., prioritization or traffic shaping). Frames are also checked against security access policy rules to see if they should be forwarded or discarded.

Traffic Shaping and Engineering — Some protocols or applications require that, as traffic is released to the outgoing wire or fiber, it is shaped to ensure that it meets required delay or delay variation (jitter) requirements. Other requirements specify the priority of traffic between different channels or message types.

Quality of Service (QoS) — In addition to appropriately shaping traffic for QoS, frames may need to be tagged for fast processing by subsequent devices within the network (e.g., 802.1p or IP TOS).
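As a software illustration of the classification and queuing steps listed above, the C sketch below maps an IPv4/TCP frame onto one of three queues using its destination port. The offsets follow the standard Ethernet and IPv4 header layouts, but the three-queue policy and the port-to-queue mapping are illustrative assumptions, not the behavior of any particular network processor.

#include <stdint.h>
#include <stddef.h>

/* Illustrative queue identifiers; a real device would offer many more. */
enum queue_id { Q_EXPEDITED = 0, Q_ASSURED = 1, Q_BEST_EFFORT = 2 };

/* Classify an untagged Ethernet frame into a queue from its IPv4/TCP headers. */
static int classify(const uint8_t *frame, size_t len)
{
    if (len < 14 + 20)                      /* Ethernet + minimum IPv4 header */
        return Q_BEST_EFFORT;

    const uint8_t *ip = frame + 14;
    uint8_t ihl   = (ip[0] & 0x0F) * 4;     /* IPv4 header length in bytes   */
    uint8_t proto = ip[9];                  /* Layer 4 protocol number       */

    if (proto != 6 || len < (size_t)(14 + ihl + 4))   /* not TCP, or truncated */
        return Q_BEST_EFFORT;

    const uint8_t *tcp = ip + ihl;
    uint16_t dport = (uint16_t)(tcp[2] << 8 | tcp[3]);

    if (dport == 443)                       /* e.g., e-commerce (HTTPS)      */
        return Q_EXPEDITED;
    if (dport == 80)                        /* ordinary web traffic          */
        return Q_ASSURED;
    return Q_BEST_EFFORT;
}

A real classifier would match on many more fields (VLAN tags, TOS bits, source addresses, even URL strings) and would be driven by configurable tables rather than hard-coded tests, which is precisely why programmability matters, as the next section discusses.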

The Basic Requirements Of A Network Processor
From a network equipment manufacturer's point of view, the network processor is a key component that can be used to differentiate its products from those of its competitors. By delivering products with advanced network processor capabilities, manufacturers can offer features and performance enhancements that make their products superior to those of their competitors. Therefore, hardware vendors will actively seek out network processor technologies that can provide them with this advantage. Some of the differentiating factors among network processors include programmability, performance, management, and routing, as described below.

Programmability
The network processor must be easily programmable in order to support customization of feature sets and the rapid integration of new and existing technologies. For example, a vendor wishing to provide a product that can classify traffic flows based on Layer 4 information will demand a network processor that allows them to support this functionality. As time-to-market cycles are critical to the success of any product, programmability tools that speed customization of network processor feature sets are essential. Hardware manufacturers don't want to find themselves in a situation where they are still debugging or testing processors when their competitors have already brought competing products to market. In order to meet this demand, network processor manufacturers must strive to supply programming and testing tools that are as simple as possible to use. These programming tools should be based on a simple programming language that allows for reuse of code wherever possible. In addition, programming tools must provide extensive testing capabilities that provide intelligent debugging features, such as descriptive codes and definitions, as well as code level statistics for optimization. Testing tools must be able to simulate real world conditions and provide accurate measurements of throughput and other performance measurements.


Programming tools should be based on a graphical user interface (GUI) that integrates with familiar environments such as Windows NT or the UNIX-based Common Desktop Environment (CDE). This working environment should also provide analysis and simulation tools so manufacturers can perform extensive real world simulation testing. And as with any programmable device, the network processor must be accompanied by an extensive software developer's kit that contains detailed documentation for all APIs (application programming interfaces).

Performance
For the last several decades, the development of integrated circuits has followed Moore's Law, which states that the number of gates, and hence the processing power, of an integrated circuit doubles roughly every eighteen months. With the increasing deployment of high speed bandwidth technologies such as Gigabit Ethernet and DWDM, this rate of improvement might not be fast enough to support future performance requirements. Therefore, one of the primary requirements for network processors is to be able to rapidly scale their performance to support ever increasing bandwidth requirements: Gigabit Ethernet / OC-12 today, OC-48 in the near future, and OC-192 / 10 Gigabit Ethernet and higher down the road.

In addition, network processors must evolve to be able to track and support thousands of simultaneous connections in order to support such technologies as MPLS and quality of service for the H.323 standard3 for voice over IP. Support for large numbers of connections is also a requirement for features such as ATM or frame relay switched virtual circuits. Network processors must be able to support a variety of protocols such as ATM, IP, and AAL5, and control mechanisms such as MPLS. In many cases, network processors must also provide support for legacy protocols such as IPX and SNA.

Finally, network processors must be able to support large bandwidth connections, multiple protocols, and advanced features without becoming a performance bottleneck. That is, network processors must be able to provide wire speed, nonblocking performance regardless of the size of the pipe, the type of protocol, or the features that are enabled.

3. H.323 uses dynamic port mappings for call data, requiring devices to track these ports in order to apply a quality of service level.
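To put the wire speed requirement described above into perspective, the short C program below estimates the per-packet processing budget at OC-48. The 40-byte minimum packet size and the 200 MHz processor clock are illustrative assumptions; the point is simply that only a few tens of clock cycles are available per packet under these assumptions, far fewer than a general purpose processor needs to execute a complex feature set in software.

#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions: OC-48 line rate with worst-case
     * 40-byte (minimum IPv4/TCP) packets arriving back to back. */
    const double link_bps = 2.488e9;   /* OC-48 line rate             */
    const double pkt_bits = 40 * 8;    /* minimum-size packet in bits */
    const double clock_hz = 200e6;     /* assumed processor clock     */

    double pkts_per_sec   = link_bps / pkt_bits;       /* ~7.8 million */
    double cycles_per_pkt = clock_hz / pkts_per_sec;   /* ~26 cycles   */

    printf("packets per second: %.2e\n", pkts_per_sec);
    printf("cycle budget per packet: %.1f\n", cycles_per_pkt);
    return 0;
}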


Management
Network processors must provide support for services such as SLA management and enforcement (as described above). Network processors must be able to gather performance and traffic flow statistics that can then be collected by a billing or accounting system using common protocols such as RMON (Remote Monitoring) and SNMP (Simple Network Management Protocol).

Network processors also play a key role in enforcing the various classes of service that service providers may offer. For example, a service provider may offer a tiered service whereby mission critical traffic such as e-commerce has a guaranteed delay of less than 80 milliseconds, while all other traffic is delivered with a best effort guarantee. To enforce this policy, traffic must be identified and classified as it enters the service provider's network. These tasks are typically performed within the network processor. Hence, the network processor must be able to classify packets into multiple classes of service, each with its own quality of service requirements. In most networks this classification occurs at the network boundary, where traffic either leaves or enters the network.

Another function that typically occurs at this boundary is traffic filtering using access control lists or some other policy enforcement mechanism. For example, Internet service providers typically block incoming traffic with a source IP address that is part of the range of private IP addresses specified within RFC 1918 (RFC 1918 sets aside a group of IP addresses that should never be routed on the public Internet). If the traffic is coming from the private range of IP addresses, it is discarded. Much like the identification and classification process for managing quality of service, this process should ideally take place within the network processor. By performing management tasks such as identification, classification, and accounting within the network processor, hardware vendors can take advantage of the specialized nature of these processors to provide a large performance boost to their products.
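The RFC 1918 boundary filter described above reduces to a comparison against three address ranges. The C sketch below expresses the check in software; a network processor would perform the same comparison in its classification and policy lookup stages. The function names are illustrative, not taken from any product.

#include <stdint.h>
#include <stdbool.h>

/* Return true if a source address (host byte order) falls inside one of
 * the RFC 1918 private ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16. */
static bool is_rfc1918(uint32_t src)
{
    if ((src & 0xFF000000u) == 0x0A000000u)  /* 10.0.0.0/8     */
        return true;
    if ((src & 0xFFF00000u) == 0xAC100000u)  /* 172.16.0.0/12  */
        return true;
    if ((src & 0xFFFF0000u) == 0xC0A80000u)  /* 192.168.0.0/16 */
        return true;
    return false;
}

/* At the network boundary, discard traffic sourced from private address space. */
static bool should_forward(uint32_t src)
{
    return !is_rfc1918(src);
}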

Routing
After identifying, classifying, and accounting for traffic flows, network processors must be able to make forwarding decisions based on preprogrammed information. For example, an organization may have a policy that all HTML traffic destined for its e-commerce servers should receive the highest priority through the network. As this traffic enters the network, it is identified, classified, and accounted for by the network processor at the edge of the network. As the classified traffic moves through the network, other network processors must be able to quickly identify the traffic type and recognize that it needs to be forwarded ahead of all other traffic such as e-mail or simple web surfing. Enforcing this policy requires not only network processors that are capable of looking deep within a traffic flow to determine application and destination URL information, but also the ability to quickly tag these packets for subsequent processing by other network processors.
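As an illustration of the tagging step, the C sketch below rewrites the precedence bits of the IPv4 TOS byte on an already classified packet and patches the header checksum incrementally, in the style of RFC 1624. The helper function and its parameters are assumptions for illustration only.

#include <stdint.h>

/* Set the IP precedence bits (top three bits of the TOS byte) on an IPv4
 * header and patch the header checksum incrementally.
 * 'hdr' points at the start of the IPv4 header; 'prec' is 0-7. */
static void set_ip_precedence(uint8_t *hdr, uint8_t prec)
{
    /* Old and new values of the 16-bit word holding version/IHL and TOS. */
    uint16_t old_word = (uint16_t)(hdr[0] << 8 | hdr[1]);
    uint8_t  new_tos  = (uint8_t)((hdr[1] & 0x1F) | (prec << 5));
    uint16_t new_word = (uint16_t)(hdr[0] << 8 | new_tos);

    hdr[1] = new_tos;

    /* Incremental checksum update: HC' = ~(~HC + ~m + m'). */
    uint16_t csum = (uint16_t)(hdr[10] << 8 | hdr[11]);
    uint32_t sum  = (uint32_t)(uint16_t)~csum + (uint16_t)~old_word + new_word;
    sum = (sum & 0xFFFF) + (sum >> 16);      /* fold carries */
    sum = (sum & 0xFFFF) + (sum >> 16);
    csum = (uint16_t)~sum;

    hdr[10] = (uint8_t)(csum >> 8);
    hdr[11] = (uint8_t)(csum & 0xFF);
}

Once the packet is marked, downstream devices can queue it on the precedence bits alone, without repeating the deep classification performed at the network edge.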


Approaches
When designing new products, network hardware manufacturers may choose among a variety of different technologies to handle network processing. These choices include ASICs, RISC, parallel processing, and hybrid approaches, each of which is discussed below.

Application Specific Integrated Circuit (ASIC)
The application specific integrated circuit, as its name implies, is designed from the start for a very specific application. This is in contrast to a typical microprocessor, which is designed with the flexibility to perform a variety of functions. Since an ASIC is designed for a very specific set of tasks, its design can be optimized to carry out those tasks as efficiently as possible. Unlike traditional microprocessors, ASICs don't have to be designed around the lowest common denominator in order to be able to perform a wide variety of tasks. ASICs therefore typically perform the tasks they are designed for at performance levels much greater than a general purpose microprocessor.

However, ASICs are not well suited to situations where a variety of tasks must be performed. In the networking world, ASICs are typically used to forward traffic at very high rates of speed, but they are usually unable to perform additional traffic management tasks such as classification, access control, and/or accounting. In most current implementations, a separate central processor typically performs those types of tasks.

Once an ASIC is designed and manufactured, it is very difficult to change. Unlike traditional microprocessor architectures, ASICs cannot be modified with a simple software upgrade to the operating system; functions are embedded into the chip during the design and manufacturing process. Upgrading or adding new features to ASIC-based components typically requires the replacement of the ASICs themselves. This can lead to longer time-to-market cycles for subsequent changes.

Reduced Instruction Set Computing (RISC)
Microprocessors such as the Intel 286/386/486/Pentium series have been based on the Complex Instruction Set Computing (CISC) model. This approach embeds a large, complex set of instructions within the processor to make the processor extremely versatile to program. Programmers can take advantage of the large instruction set to create complex, custom tailored programs that fit a variety of applications. However, due to the complex instructions that must be processed, CISC-based processors are difficult to scale to meet the increased performance demands of network applications.

The alternative approach to CISC is called Reduced Instruction Set Computing, or RISC. RISC processors are designed to support a much simpler and smaller set of instructions, which allows them to perform processing tasks at speeds much greater than their CISC counterparts, while still preserving much of the support for flexible programming that applications require.


Compared to ASICs, RISC processors are easily programmable. A RISC-based network processor follows the traditional software-based model and therefore retains the advantages (a high level of programmability and flexibility) while also retaining the disadvantages (slower performance, since decisions are made in software) of that model. The performance of RISC processors has improved over the years by taking advantage of Moore's Law (the doubling of transistor density every 18 months); the decreased transistor geometries allow both higher transistor counts and increased clock rates.

Most modern software-based routers use a RISC processing core (although in some configurations tasks are off-loaded to ASIC-based line cards). This RISC architecture makes these routers capable of supporting a large variety of protocols and features; however, they do not scale well as additional features are added or additional bandwidth requirements are imposed. For example, activating features such as Access Control Lists (ACLs) or IP accounting on a RISC-based router under heavy load can cause the product to quickly reach its maximum processor utilization level, hurting traffic throughput. For more details on the architectural limitations of RISC network processors, please see the Agere white paper entitled Building Next Generation Network Processors.

Parallel Processing
Most current CPUs are based upon the von Neumann architecture, in which the processor executes instructions serially, one at a time. Research continues into parallel processing, in which processors can perform multiple instructions simultaneously. For more details on the architectural limitations of parallel network processors, please see the Agere white paper entitled Building Next Generation Network Processors.

Hybrid Approaches
For hardware vendors, the Holy Grail is the flexibility of RISC with the performance of ASICs. One approach that hardware vendors have tried in order to accomplish this is to use ASIC- and RISC-based processors within the same product. In these types of devices, a central RISC-based processor typically acts as the core processor and specific tasks are moved out to ASIC-based line cards. An example of this is the Cisco Express Forwarding model, in which the central processor calculates routes and downloads a complete copy of this routing information into the line interface cards. The interface cards then use ASICs to switch traffic between themselves at very high rates of speed.

However, this approach runs into the limitations previously described for ASIC-based architectures, namely that ASIC-based line cards can't be easily upgraded, nor can they easily support new features. This continues to leave hardware manufacturers in a quandary. The basic requirement remains: how to combine the flexibility of RISC with the performance of ASICs within a single platform. Fortunately, Agere has a solution.


WICKED SMART
Agere’s Functional Processor
Agere's functional processor truly combines the flexibility of RISC with the performance of ASICs. The unique PayloadPlus architecture uses a patented technology called Pattern Matching Optimization to achieve greater than 5x performance improvement over even advanced RISC processors. This performance reaches the level of fixed function ASICs while retaining the flexibility and programmability of RISC. The PayloadPlus architecture is able to realize these performance gains by using less overhead, fewer clock cycles, and more data processing per clock cycle than augmented RISC. Compared to competitive products based on advanced RISC processors, only Agere's PayloadPlus architecture is capable of supporting bandwidth speeds of OC-192 and beyond while still maintaining the flexibility and programmability of a traditional RISC processor.

The Agere functional processor is segmented into two components, the Fast Pattern Processor and the Routing Switch Processor. The Fast Pattern Processor (FPP) takes data from the PHY chip and performs protocol recognition and classification as well as reassembly. The FPP can classify traffic based on information contained at Layers 2 through 7. Based on parallel processing algorithms, the FPP can provide ATM reassembly and can support table lookups with millions of entries and variable entry lengths. From the FPP, traffic moves to the Routing Switch Processor (RSP), which handles queuing, packet modification, traffic shaping, application of quality of service tagging, and segmentation.

The FPP and RSP interface with the Agere System Interface (ASI). The ASI is the management component of the processor and provides support for RMON and tracks state information. The ASI also controls the movement of data between the FPP and RSP to ensure that it moves at wire speed rates.

Application Of Agere’s Functional Processor
In a typical system architecture, functions are partitioned into two types of processors. A general purpose RISC processor can be used to develop and maintain the Routing Information Base (RIB) – the RISC processor calculates routes and handles OSPF, MPLS LDP, or RSVP packets and signaling. These are control functions where high processing rates are not required.


In contrast, the Agere functional processor is best applied to the data fast path, and thus helps network equipment providers to offer products capable of true wire speed performance. In a typical system, the RISC processor will forward RIB information to a Forwarding Information Base (FIB) located in each of the line cards. Using Agere's high performance FPP/RSP technology, these line cards can enforce policies and forward packets at very high speeds. However, unlike ASICs, Agere functional processors are programmable and can flexibly accommodate product changes.

Agere’s Functional Programming Language
In order to allow hardware manufacturers to quickly develop and build solutions based on Agere's functional processor, Agere provides a rich development environment called the Functional Programming Language (FPL). This language is a true fourth generation programming language that is designed to be highly intuitive. Instructions are coded like a protocol definition language, with interspersed action statements and the ability to embed routing table information directly into the protocol code. Agere's FPL provides developers with an enormous advantage over complex-to-develop RISC code and allows developers to shorten time-to-market cycles, thus reducing life cycle development expenses. Through its functional approach, FPL provides a high level of code reuse across multiple applications.

Agere's programming environment also delivers a full suite of testing and simulation tools that allow developers to fully test applications before deployment. Agere's Integrated Development Environment (IDE) provides developers with a complete software developer's kit, including traffic generation tools for real world simulation and testing. For more information on the advantages of FPL, see the Agere white paper entitled The Case for a Classification Language.

Summary
The demands placed on network processors are expected to rapidly scale over the next several years due to the demand for bandwidth as well as demands by carriers to provide new services. Current network processor technology is ill equipped to adequately meet these demands. Current ASIC processors aren't flexible enough to support a rapidly changing marketplace, while RISC and augmented RISC processors will not be able to scale to support the throughput rates required for complex features at speeds of GbE, OC-48, and above, and they require complex programming methods. Only Agere's Wicked Smart approach provides network processors that combine the performance of ASICs with the flexibility and feature support of RISC, exploited through a comprehensive and simple development environment. This unique combination will allow network equipment manufacturers to quickly bring new products to market that can scale to support ever increasing bandwidth demands, while supporting complex feature sets for Quality of Service and SLA management.


By exploiting the advantages of Agere's approach, manufacturers will be able to shorten time-to-market cycles, create new products that scale to meet future performance demands, and quickly add new features as market conditions warrant. Using Agere's network processors, equipment manufacturers will realize a tremendous competitive advantage over competitors using alternate approaches. For more information, contact Agere at www.agere.com.
