BigIron 8000
An NSS Group White Paper by Steve Broadhead

The aims of this test were as follows:

- To prove whether or not the BigIron 8000 is capable of supporting high levels of mixed traffic types over a prolonged period of time
- To define, test and compare Layer 2, Layer 3 and Layer 4 (QoS) performance
- To test the suitability of the BigIron 8000, and Gigabit Ethernet in general, for carrying real-time (streaming video) traffic in addition to high levels of mixed data traffic
- To test the interoperability of the BigIron 8000 with other members of the Foundry product family
- To define suitable roles and applications for the BigIron 8000 and prove that it can perform in those roles
- To evaluate the ease with which relatively complex networks can be created and managed using the BigIron 8000
- To examine the full feature set of the BigIron 8000 and evaluate its effectiveness in a real-world environment

N.B. The testing was not intended to saturate the switch but to work within typical and realistic - albeit still high - traffic levels and to evaluate the functionality of the BigIron 8000 within this environment. Saturation testing has already been carried out in the US by Business Communications and others, as a result of which the 8000 has proved its ability to withstand extreme levels of generic IP and IPX traffic. We have taken this as the starting point of our testing and added real application tests to see how the switch copes in a "real world", rather than laboratory, environment.

We would like to extend our thanks to Foundry Networks, Hewlett-Packard, NetCom, Peapod Distribution and Bluecurve (Dynameasure is a trademark of Bluecurve Inc.) for assistance with the testing and the supply of equipment and benchmarking products. Finally, a big thank you to Chevin Software for the use of its CNA Pro network monitor/analyser software, which enabled us to monitor the exact state of the network at all times and to configure the hardware accurately based on knowledge of the traffic streams it analysed.

Introduction: What role for Gigabit Ethernet?

It all sounds logical enough: take a tried and tested technology and increase the bandwidth ten-fold. Such was the thinking behind Fast Ethernet, when moving on from the traditional 10Mbps standard, and again when moving from 100Mbps to 1Gbps to produce Gigabit Ethernet.

What Fast Ethernet, at least in switched rather than shared format, gave us was a way around basic bottlenecks at the server and client. It also gave us a new problem - huge amounts of traffic flooding the network backbone. Even an old i486-based PC or early Pentium model is capable of pushing out around 20Mbps, so imagine the situation with high-end (say PII 400/450MHz) but still PC-based servers capable of pushing in excess of 200Mbps of traffic from their PCI bus out onto the network. Suddenly a 100Mbps backbone - the classic backbone of many companies throughout the '90s - was no longer sufficient.

Here, then, is an obvious role for Gigabit Ethernet: as a replacement technology for older Ethernet and software-based router technology at the heart of the network. It also happens to be a role that ATM has - for some years now - claimed to play very well. ATM has also moved forward from its 155Mbps limits to 622Mbps and beyond, so the bandwidth argument is not necessarily won by either technology.
More important are two very different arguments: the relative complexity of the technology - and therefore the ease with which it can be deployed - and the cost involved in acquiring, implementing and owning it. Here, Ethernet argues a strong case. It is essentially a simple, low-cost technology, and one whose severe limitations in shared format are resolved in switched form - the only format Gigabit Ethernet supports. ATM, on the other hand, is relatively complex and relatively expensive. It was not designed to run classic networking protocols and applications, and this shows in the number of elements - LANE, PNNI, MPOA and so on - required to get an ATM switch to do the job of an Ethernet router. However, ATM was designed from day one to support real-time traffic such as video; Ethernet was not.

Extending this argument, it could also be argued that running a gigabit of traffic down an Ethernet pipe could lead to total chaos if no traffic management is in place. For this reason, Gigabit Ethernet vendors introduced a number of proprietary and industry-standard measures for controlling and prioritising traffic flows in an attempt to nullify the ATM argument. Logically, then, Gigabit Ethernet has a role to play in enabling the use of new-wave applications such as video, which require dedicated bandwidth at all times. Another natural role for Gigabit Ethernet is to create a fat pipe between a server farm and the network. This also points to a role for 10Gbps Ethernet in the future, supporting multiple Gigabit streams from high-end servers and interconnecting switches. Foundry Networks notably markets its BigIron 8000 as "10Gig Ready", which might sound like marketing hype but ultimately might prove extremely valuable in avoiding the classic "forklift upgrade" scenario in the mid-term future.

The BigIron 8000 sits at the top of Foundry's range of Gigabit Ethernet switches and routing switches and is classified as a "switching router". Put simply, it acts as both a switch and a router and can be set up as either or both, working at Layers 2, 3 and 4 and - shortly - Layer 7. The chassis has eight slots available, which can be filled with any of the following modules:
- 4-port Gigabit Ethernet management module
- 8-port Gigabit Ethernet fibre module
- 24-port 10/100Mbps Ethernet module

At the heart of the product is what Foundry calls a "parallel cross-point switch fabric", an ASIC-based architecture which supports a claimed switching capacity of up to 100 million packets per second, a claim recently verified by Business Communications and The Tolly Group in the US. Each interface module has an 8Gbps full-duplex data path to the cross-point fabric, providing separate priority queues for each module destination. In theory, then, a fully populated BigIron 8000 chassis can deliver up to 256Gbps of total switching capacity. More important still is the relative port density. With eight Gigabit blades in place the 8000 provides 64 Gigabit ports - genuinely class-leading.

Another noteworthy point: despite the excellent port density, the BigIron 8000 is not actually physically "big" at all. This is significant, as it means it takes up relatively little space in what are becoming increasingly crowded machine rooms. It is literally half the size of some competing switches we have had in the labs, while offering twice the number of Gigabit ports per module. This implies that the product has been well designed and well engineered, which is a good starting point.

So what exactly can it do? Much is currently said about the difference between Layer 2, Layer 3 and Layer 4 switching, and most of it is completely incomprehensible. Yet Foundry claims that the 8000 can carry out all three at wire speed. Here is the Foundry definition as applied to the BigIron 8000 under test:

- Layer 2 switching is simply MAC-layer, multi-port, store-and-forward bridging, in line with other vendors.
- Layer 3 is full routing, not merely route acceleration via offloading data and switching.
- Layer 4 comes in two varieties where Foundry is concerned: in the switching router products it is defined as traffic prioritisation, while in the ServerIron specifically it is TCP/UDP port switching.

Note that Foundry is also on the verge of introducing Layer 7 switching in the form of URL-based switching, initially for the ServerIron product and for the BigIron at a later date.

What all this means is that, in practice, the 8000 is very flexible. VLANs can be set up in a number of different ways to suit specific traffic flows (see the test configuration below). For example, you can assign VLANs on a per-port, protocol, sub-net or 802.1q tagged basis, the latter enabling the creation of VLANs that cross switch boundaries.

Much is also said in the networking industry about QoS - "Quality of Service" - and, again, much of it is marketing hype with little substance. So what does the BigIron 8000 actually offer in terms of real QoS support? The answer is a simple but effective option to switch QoS on or off and, if it is on, to choose from eight levels of prioritisation, which map onto four true prioritisation queues. Within IP it is possible to prioritise TCP or UDP traffic flows specifically, or to prioritise packets based on a combination of destination address and destination port number. The aim is to make sure that specific applications receive the bandwidth and priority they need in order to function correctly, regardless of what other traffic - and how much of it - is on the network. The QoS works across switch boundaries using industry-standard IEEE 802.1q/p VLAN tagging and prioritisation respectively.
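To make the prioritisation mechanics more concrete, here is a minimal sketch in Python - purely illustrative, and emphatically not Foundry's implementation, which runs in ASIC hardware at wire speed - of how eight 802.1p priority levels can collapse onto four forwarding queues and how a flow might be classified by destination address and port before being carried in an 802.1q tag. The policy table, addresses, port number and VLAN ID are invented for the example.

```python
import struct

# Illustrative mapping of the eight 802.1p priority levels (0-7) onto four
# forwarding queues. The exact pairing is an assumption for this sketch.
PRIORITY_TO_QUEUE = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3}

# Hypothetical policy: (destination IP, destination TCP/UDP port) -> priority.
POLICY = {
    ("10.1.1.20", 1755): 7,   # e.g. give a streaming video server's flow top priority
}

def classify(dst_ip: str, dst_port: int) -> int:
    """Return the 802.1p priority for a flow, defaulting to best effort (0)."""
    return POLICY.get((dst_ip, dst_port), 0)

def dot1q_tag(priority: int, vlan_id: int) -> bytes:
    """Build the 4-byte 802.1q tag: TPID 0x8100, then PCP(3 bits) | DEI(1) | VID(12)."""
    tci = (priority & 0x7) << 13 | (vlan_id & 0xFFF)
    return struct.pack("!HH", 0x8100, tci)

if __name__ == "__main__":
    prio = classify("10.1.1.20", 1755)
    queue = PRIORITY_TO_QUEUE[prio]
    tag = dot1q_tag(prio, vlan_id=10)
    print(f"priority={prio} queue={queue} tag={tag.hex()}")
```

The point of the sketch is simply the relationship between the eight priority levels, the four queues and the tag that carries the priority across switch boundaries.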
Other key features include the following:

- IP multicast support in the form of IGMP (Internet Group Management Protocol), reducing traffic by forwarding a single copy of a transmission only to requesting ports.
- Layer 3/4 filtering to enable the building of firewalls to prevent unauthorised network access. Permit/deny filters can be created based on IP source and destination addresses, protocol type and port number.
- DHCP (Dynamic Host Configuration Protocol) assist, to ensure that a DHCP server managing multiple IP sub-nets can recognise the requester's IP sub-net even when that server is not on the client's local LAN segment. It does this by "stamping" the correct gateway IP address into a DHCP discovery packet on behalf of the router (see the sketch following the installation notes below).
- Multi-protocol support, enabling the BigIron to act as a front-end for existing routers. This is an important feature: vendors place so much emphasis on IP, yet in the real world many other protocols are still in use. It means that the 8000 can offload the router in the same way an FEP (Front-End Processor) offloads traffic from a mainframe host's CPU, or even replace the router entirely. Protocol support includes IP, RIP, OSPF, IPX/RIP/SAP, AppleTalk, IGMP, DVMRP, PIM, BGP4 and Virtual Router Redundancy Protocol (VRRP).
- Multi-level redundancy, with up to four hot-swappable PSUs, hot-swappable blades and no slot/module restrictions, plus redundant management modules - importantly, with the option to combine these with eight Gigabit ports so that no module space is wasted.

Installation and Configuration

Given the diversity and range of options available within the 8000, it is easy to assume that it is a complex beast to get up and running, but this is not the case. If all you want is a Layer 2 (flat network) switch, it is simply a case of plugging in the cables and away you go. However, few customers would want to do this and ignore all the features. These are configured using one of three management options: a command line interface (CLI), modelled on the classic Cisco interface and accessed via a terminal session; a Web browser-based manager; or IronView, a Windows-based management console supplied with the Foundry products, from which you can manage multiple Foundry devices. You can also Telnet into the boxes, either locally or remotely. For mixed or multi-vendor environments, IronView is also available to run under HP OpenView on most platforms.
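Returning to the DHCP-assist feature listed above, the following is a minimal, hypothetical Python sketch of the underlying idea: a relay function writes its own interface address into the giaddr field of a client's DHCP DISCOVER (RFC 2131) so that a central DHCP server can tell which sub-net to allocate from. It is illustrative only, not Foundry's code, and the relay address used is invented.

```python
import socket

GIADDR_OFFSET = 24  # giaddr occupies bytes 24-27 of a BOOTP/DHCP message (RFC 2131)

def stamp_gateway(dhcp_discover: bytes, relay_ip: str) -> bytes:
    """Return a copy of a DHCP DISCOVER with the relay's address written into giaddr.

    A DHCP server receiving the relayed packet uses giaddr to work out which
    IP sub-net the request originated on, even though the server itself sits
    on a different LAN segment.
    """
    msg = bytearray(dhcp_discover)
    # Only stamp the field if no earlier relay has already filled it in.
    if msg[GIADDR_OFFSET:GIADDR_OFFSET + 4] == b"\x00\x00\x00\x00":
        msg[GIADDR_OFFSET:GIADDR_OFFSET + 4] = socket.inet_aton(relay_ip)
    return bytes(msg)

if __name__ == "__main__":
    discover = bytes(240)  # minimal all-zero BOOTP body, enough to show the field being set
    stamped = stamp_gateway(discover, "192.168.10.1")  # hypothetical gateway address
    print(socket.inet_ntoa(stamped[GIADDR_OFFSET:GIADDR_OFFSET + 4]))  # -> 192.168.10.1
```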
Using the Windows manager, setting up VLANs can be as simple as pointing and clicking at the ports on each module you want to combine in a VLAN, then assigning the VLAN type, QoS and so on. The real art, however, is in knowing the actual traffic flows across the network so that you can assign prioritisation accurately and create VLANs that really are effective. A good quality network monitor/analyser is therefore a more than useful tool to have around when setting up the 8000, though some trace and debug options are provided with the CLI management. In most cases making changes does not require the switch to be rebooted, though in some circumstances a reboot is required. What is impressive is the speed of the boot process. This includes a fast-boot option which is close to instantaneous and makes a mockery of the NT workstations we were running to carry out the testing. In general the 8000 is relatively easy to configure and - despite the large number of options - it is very easy to familiarise yourself quickly with the IronView manager.

For the testing we used a combination of traffic generators, benchmarks and applications in order to create a relatively stressed but real-life network scenario.

Test Network Configuration

In total we added three Foundry switches to the network: the BigIron 8000, a ServerIron and a FastIron Workgroup switch. The BigIron 8000 was configured with a full complement of blades - six 8-port Gigabit fibre modules and two 24-port 10/100Mbps modules. The FastIron had twenty-four 10/100 ports and two Gigabit fibre ports, while the ServerIron was configured with sixteen 10/100 ports.

To these we connected a total of 24 Pentium II PC clients running NT Workstation (V4.0 SP3) or Windows 95. Twenty-one of these connected into the 8000 and ServerIron via 100Mbps (200Mbps full duplex) links, one was connected to the Workgroup switch at 200Mbps and the final two connected into the 8000 via Gigabit fibre links. In addition, four NT Servers (V4.0 SP3), all connecting into the 8000 via Gigabit ports, performed the roles of benchmark server, applications server, video server and mail server. All the PCs had 3Com Etherlink NICs installed, in either 10/100 or Gigabit format. Our network was split into two sub-nets and, combined with connections from the NetCom SmartBits packet blasters, spread across the entire 8000 chassis and out onto the ServerIron and FastIron Workgroup switches. The latter was connected to the 8000 via a 4Gbps (2x2Gbps) trunk, while the ServerIron connection was made with an 800Mbps (4x200Mbps) trunk.

Benchmarking and Test System Components

For background traffic we used NetCom SmartBits packet blasters. The NetCom units were configured to provide six Gigabit connections and twenty 100Mbps connections into the 8000, all of which were saturated. Running in full duplex mode, this provided us with up to 16Gbps of background traffic load on the switch at all times, running a combination of IP, IPX and Layer 2 traffic.

Figure 2 - SmartBits Manager

The second layer of traffic was generated using NSS-developed tests from the Bluecurve Dynameasure benchmarking and capacity planning application suite. Using the suite we set up two application datasets. The first was a transaction processing (TP) application using a combination of dataset types (.bmp, .txt, .dat, .bin - compressed and uncompressed) with file sizes ranging between 1KB and 100MB. The second application was a messaging dataset using MS Exchange Server 5.5 and MS Outlook clients.
Each client had to carry out a series of operations - 34 in all - such as reads, sends, replies, copies, deletes and other typical email actions. The Bluecurve software enabled us to create up to 25 virtual clients on each PC, using the multi-threading capabilities of NT, with each virtual client then sending and receiving live data during the tests. For the primary testing we split the clients between the TP and messaging tests. However, during the test period, Dynameasure application tests were also run across all the PC clients as a pure TP application, with the NetCom SmartBits creating 16Gbps of background traffic throughout. No bottleneck problems arose, other than - predictably - at the applications server itself, where throughput peaked at 226.67Mbps.

In addition to the NetCom and Dynameasure data traffic, we also ran a live backup across the network between an NT Server and an NT Workstation client configured with a CD writer, the two PCs sitting on different sub-nets. Finally we added live video streaming using MS NetShow Theater Server software and clients. In line with the limits of the NetShow software running on a single server, we were able to run a maximum of six video sessions across the network. One session was streamed across the trunk between the 8000 and the Workgroup switch, to an NT Workstation client on the FastIron Workgroup switch, with the NetCom SmartBits traffic fully loading that trunked connection in both directions.

For the tests we created multiple, port-based VLANs and ran QoS against the trunked connection out to a port on the FastIron Workgroup switch, where an NT client was attached playing back video streamed across the network from the NetShow server. The server was itself attached to the 8000, on a separate sub-net to the FastIron client, thereby testing true Layer 3 and Layer 4 switching capabilities between the two nodes.

Throughout the testing, the NetCom devices blasted up to 16Gbps of packets across the network, for a total of 120 hours non-stop. Adding in the Dynameasure benchmarks for around eight hours a day (a working day), we saw no problems arise on the network with respect to either the TP or messaging applications. In addition to this and the backup between sub-nets, we then introduced streamed video, with up to five sessions running simultaneously on NT clients attached to the 8000, plus a sixth session running across the saturated link to the NT Workstation client connected to the Workgroup switch.

Figure 3 - Microsoft NetShow Theater Server Administrator

Without any form of QoS in place, this latter application would simply have been impossible, with the NetCom traffic winning hands down. By using Foundry's QoS against this traffic stream, however, we were able to run live video across the connection without the picture breaking up. In fact, the quality was cinema standard and exactly on a par with the video streams being sent to NT clients attached to the 8000 itself. Again, bear in mind this was running in tandem with all the data traffic being produced by the NetCom devices and the benchmark applications. We ran the video sessions on the 8000 overnight several times in tandem with the NetCom and Dynameasure traffic, with only two session dropouts, both traced back to NetShow Server problems.
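As a quick sanity check on the offered-load figure quoted throughout these tests, the 16Gbps of background traffic is simply the sum of the saturated full-duplex SmartBits connections. The short Python sketch below reproduces the arithmetic; it is illustrative only, using the link counts stated above.

```python
# Background load offered by the SmartBits units: six Gigabit links and
# twenty 100Mbps links into the BigIron 8000, all saturated, full duplex.
gig_links, fast_links = 6, 20
load_gbps = gig_links * 1.0 * 2 + fast_links * 0.1 * 2  # full duplex doubles each link
print(load_gbps)  # 16.0 - matching the 16Gbps background load quoted in the text
```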
Overall, the BigIron 8000 did everything we asked of it, with capacity to spare, and confirmed the following:

- It is capable of supporting a heavy and constant traffic load at the same time as running multiple, different office applications in a multi-protocol environment, plus real-time video traffic. This makes it equally applicable as a backbone router replacement technology for corporates and service providers alike.
- Layer 2, 3 and 4 traffic was all handled equally efficiently. This kind of flexibility means that whichever way you want to design the network, the 8000 is capable of supporting that particular requirement. Importantly, it was also very simple to configure.
- The QoS function enabled us to run time-critical, live video streams across a saturated connection, proving the 8000's capability as a reliable core backbone switch for service provider applications such as video on demand combined with multiple data services. The addition of Layer 7 URL-based switching will further enhance the claims of the BigIron 8000 in the ISP market. This arguably gives it a clear lead over ATM in most environments.
- The amount of traffic flow control available with the 8000 banished any old views of Gigabit Ethernet as big on bandwidth but unmanageable. Every packet of data can be managed, as we proved.
- Interoperability with other Foundry devices extends all the way to QoS as well as tagged VLANs.