F5 Networks Web Content Delivery Solution Under Test
by Steve Broadhead

Aims of the Test
F5, once known as a 'one-product company', now has what can only be described as a very comprehensive product portfolio - especially when considered in the context of web content delivery and distribution, the focus of this report. Whereas, to date, most companies looking to provide such a solution have been forced to bolt together a number of different vendor components in the hope of getting a result, or to buy in a 'solution' from several different source vendors, F5 now offers a complete solution in its own right.

The importance of this single-vendor approach cannot be over-emphasised, as Appendix A explains in more detail. Put simply, trying to combine multi-vendor solutions for web content delivery has created serious headaches in the NSS labs in the past. Speaking with service providers, we have found that they have relied heavily - and very expensively - on continuous external consultancy in order to try and create an optimised solution, both at the back end in the data centre and out at the remote sites. And the problems of interoperability and optimisation never stop. Hence the interest in F5's in-house approach to the problem.

So, to put this 'all-in-one' solution to the test, within the NSS labs we brought together the complete product set and created a truly realistic web architecture for the F5 products to slot into. While the report focuses on the core product, the BIG-IP 5000 IP application switch, we also evaluated the other component products from F5 - the 3-DNS site load balancer, the EDGE-FX cache and the Edge-FX Content Controller web content distributor - as essential parts of what is really under test: the total web content delivery solution. Specifically, our aims in testing were as follows:

To determine how comprehensive F5's web content and service delivery solution really is.
To test the individual components of the F5 solution for resilience and range of features within a multi-site, web content delivery scenario where speed, accuracy and reliability are critical.
To test the ability of the BIG-IP 5000 switch to support intensive web content demand in a 'big hit' scenario.
To test the cache 'awareness' and server/cache load balancing capabilities of the solution as a whole.
To test the ease of configuration, management and homogeneous nature of the F5 products.
To consider the possible applications for web content delivery and judge the business potential of this service.

Making the Business Case: Web Content - Hype or Real Business Opportunity?

In a world where the search seems to be forever on to find the 'killer application', it is easy to become cynical to the point of believing that everything an IT product vendor or consultant tells you is hype with no substance. So when the idea of 'web content' being the next great business opportunity presented by IT comes along, there needs to be a solid and viable business case behind it if it is not to be dismissed as just another piece of vendor hype. But there are always exceptions to the rule, and this is one such example.

Computing develops in waves, both in terms of the technology and the applications. First we had the mainframe wave, with large-scale administrative business applications such as payroll and personnel predominant. Then the PC and networking came along. With them, data became more flexible and more distributable, and so a new wave of applications was born, based around moving information between different systems, different offices and different individuals.
Everything from simple diary and contact databases through to fully-blown customer relationship management (CRM) and enterprise resource planning (ERP) applications fits into this late-'80s-onwards model. At the same time, the old 'office automation' application set became the de facto method of working at the PC or workstation - Microsoft's Office suite typifying this approach. Then the Internet exploded, and the browser view of the computing world gave users a new interface, the application developer new tools, and the networking product vendors new headaches and new opportunities.

Web content is fundamentally different to the kind of computer-based data that was around before. Thanks to standards such as HTML being in place, web content - or data - was easily created and easily distributed across the Internet. Consequently, the Internet has been flooded with vast quantities of worthless information, as well as lots of incredibly useful material. But the essence of web content remains regardless: it is an incredible medium for delivering information across the world quickly, cheaply and easily. As such, promoting the merits of web content as a business lever - a means to improve business performance, whatever the nature of that business - is not mere hype but simple common sense. Whether the nature of your business dictates that IT is a means to an end - simply a way of improving its efficiency - or an end in itself, whereby you actually create profit by actively selling IT-related products or services, the ability to move web content around your business and out to your customers is directly related to improving the bottom line: profitability. And that, for any company, regardless of its business, is what really counts.

The F5 Networks Web Content Delivery Solution Reviewed

The BIG-IP 5000 marked F5's first venture into switching products. Prior to its launch, the BIG-IP concept was labelled as an Internet appliance, with a simple, gateway-esque, 'in one end, out the other' approach. With the 5000, F5 has retained the basic BIG-IP concept and code, though the latter is updated constantly, and created what it calls an IP application switch.
With 24 10/100 and four Gigabit Ethernet ports, plus integrated SSL acceleration, the 5000 is still not trying to play the role of a 'me too' high-port-density Layer 3 Ethernet switch, but is intended for very specific roles, such as providing many different types of load balancing: server, cache or firewall, for example. The key aims of the product can be summarised as follows:

It provides local load balancing for servers, caches, firewalls, VPN gateways, terminal servers and other specialised devices, with full Layer 2 through 7 switching support.
It provides intelligent content switching that is conservatively claimed to reduce bandwidth costs and server overhead by up to 20%.
It allows applications to directly control network traffic, pre-emptively avoiding application failures, using F5's iControl application and device integration software.
It provides SSL processing and integrated load balancing in a single appliance.
It provides multiple levels of failover and redundancy to ensure services stay up and running at all times.

To explain the feature set we need to examine each feature area in more detail. An important point to bear in mind here is that features that matter greatly to one customer - even in a specific web content delivery scenario - may matter less to a different customer with a slightly different set of requirements. So we are not looking to prioritise one set of features over another here, but simply to explain what they are and what they do.

Load Balancing Scenarios

The BIG-IP 5000 is designed to load balance a number of different devices on the network. In addition to classic server load balancing, the 5000 can equally load balance any type of application server or directory server, including multimedia/streaming servers, Internet servers, firewalls, routers, cache devices, proxy servers and VPN gateways.

Load Balancing Methodologies

In addition to the 'industry standard' round robin and weighted round robin/ratio modes, there are a number of more complex dynamic alternatives, which are described below. As with all the BIG-IP 5000 features, these are selected using a browser-based GUI which we can only describe as the best we have ever seen on any product in our labs - but more on the interface later.
Static Modes

Round Robin - In Round Robin mode, the BIG-IP distributes connections evenly across the nodes that it manages. Each time a new connection is requested, the BIG-IP passes the connection to the next node in line.
Ratio - Ratio mode allows you to assign weights to each node. Over time, the total number of connections for each node is in proportion to the specified weights.
Priority - In Priority mode, you create groups of nodes and assign a priority level to each group. The BIG-IP distributes connections in a round robin fashion to all nodes in the highest-priority group. Should all the nodes in the highest-priority group go down, the BIG-IP begins to pass connections on to nodes in the next lower-priority group.

Dynamic Modes

Least Connections - The BIG-IP passes a new connection to the node with the lowest number of current connections.
Fastest - Fastest mode passes a new connection to the node with the fastest measured response time of all currently active nodes. Response time is determined by measuring the time that elapses between sending each packet to the node and receiving each packet from the node.
Observed - This mode is a combination of Least Connections and Fastest: nodes are ranked on a combination of the number of current connections and the response time.
Predictive - In Predictive mode, the BIG-IP analyses the trend of the Observed ranking over time, determining whether a node's performance is currently improving or declining. The node with the best performance ranking that is currently improving, rather than declining, receives the next connection.
Dynamic Ratio - Using this mode, the BIG-IP can receive information directly from application servers such as Windows 2000, RealServer and other SNMP-capable systems and change the load-balancing ratios accordingly, without human intervention. Clever.
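Since these modes are described here only in prose, the short Python sketch below may help show the kind of decision each one makes. It is emphatically not F5's code: the node pool, the response-time figures and the weighting used for the 'observed' ranking are invented for illustration, and the Predictive and Dynamic Ratio modes are omitted for brevity.

```python
import itertools
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    connections: int = 0      # current open connections (dynamic modes)
    response_ms: float = 0.0  # last measured response time (dynamic modes)
    ratio: int = 1            # administrative weight (static Ratio mode)

# Hypothetical node pool - the names and figures are illustrative only.
pool = [Node("web1", 12, 40.0, 3), Node("web2", 7, 55.0, 1), Node("web3", 9, 35.0, 1)]

# Static: round robin simply cycles through the pool.
_rr = itertools.cycle(pool)
def round_robin():
    return next(_rr)

# Static: ratio repeats each node in proportion to its assigned weight.
_ratio_cycle = itertools.cycle([n for n in pool for _ in range(n.ratio)])
def ratio():
    return next(_ratio_cycle)

# Dynamic: least connections picks the node with the fewest open connections.
def least_connections():
    return min(pool, key=lambda n: n.connections)

# Dynamic: fastest picks the node with the lowest measured response time.
def fastest():
    return min(pool, key=lambda n: n.response_ms)

# Dynamic: observed ranks nodes on a blend of connections and response time
# (the blend used here is an arbitrary illustration, not F5's formula).
def observed():
    return min(pool, key=lambda n: n.connections + n.response_ms / 10.0)

if __name__ == "__main__":
    for pick in (round_robin, ratio, least_connections, fastest, observed):
        print(pick.__name__, "->", pick().name)
```

In practice, of course, the interesting part is where the connection counts and response times come from: on the switch they are measured live, and in Dynamic Ratio mode they are fed back from the servers themselves.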
One of the real advances in traffic routing and control to emerge in recent times is content routing and switching; that is, the ability for a device to make an intelligent decision about how to route a packet based on that packet's contents. As such, it is particularly applicable to HTTP-based web traffic, where so much information is available from the URL or even within a cookie. This is essentially how content routing works within the context of the 5000: it reads the information in a request header (such as the type of content requested and the IP address of the sender) and, using that information, it can then route the request to the server or web cache, for example, best able to fulfil that demand.

As part of the content routing engine, the BIG-IP 5000 uses what F5 calls 'OneConnect' technology. This eliminates the need for clients to open a separate TCP connection for each of the objects that make up a web page. As a result, it accelerates page-load performance while also reducing bandwidth - in this case by a claimed maximum of 20 percent. Another content routing feature is 'Client Aggregation'. This takes requests from separate users and consolidates them on the 5000. Consolidated traffic is then sent to the appropriate web servers, minimising the number of connections to each server and enabling more efficient use of existing servers and network resources.

Cache management is also part of the content routing story. The BIG-IP 5000, on reading a request header, can intelligently determine whether content is cacheable or not - for example, a static versus an active web server page - and then directs only the appropriate content to the cache. It also recognises 'hot content', based on the number of hits a particular web object is receiving, and uses the available caching accordingly, load balancing as necessary. Groups of caches can also be created and assigned to specific functions, such as handling a particular service.

Another way in which the BIG-IP 5000 can create differentiated service levels is with its support for QoS and ToS standards. Policies can be created based on specified QoS and ToS values and these priorities then applied to the appropriate resources. Alternatively, it can set the priority identifier on the traffic from specific web resources so that an external policy manager can then manage this traffic. In common with most contemporary network traffic-handling devices, the BIG-IP 5000 supports VLANs. This allows the BIG-IP to simultaneously load balance traffic for clients belonging to different logical and secure VLANs. You can define a single VLAN for each IP address, apply it to any or all of the BIG-IP 5000 interfaces (ports) and declare these as 'internal' or 'external'.

One of the big selling points of the BIG-IP products has always been their range of fault-tolerant features, and the 5000 is no different in this respect. It includes a number of features to prevent downtime in the event of a hardware failure, the common approach in each case being to eliminate the classic single point of failure. In a configuration with dual BIG-IP 5000 switches, as tested here, it is possible to set up the systems in a number of different ways to support fail-over from the first to the second 5000, in both stateful and persistence modes. A watchdog card is supplied with every switch, so in a redundant pair these cards are connected to provide a claimed fail-over time of less than 0.07 seconds.

Should a single server or all servers fail, the BIG-IP 5000 allows automatic redirection of traffic to a different server or site. In the event of a back-end server or service failure or unavailability, the server or service is removed from the availability table as soon as the user-specified time-out interval expires. The server or service is then continually checked for availability at a user-specified interval and is brought back into the availability table once it is back online. No manual intervention is required. With the BIG-IP 5000 you can also assign priority levels to servers in a group to create a set of standby servers: if a certain number of higher-priority servers in the group fail, a lower-priority group will automatically be added to handle the load. The basic fail-over configuration options are as follows.

Active/Active Mode

When engaged, this allows both switches to simultaneously manage traffic for different virtual addresses, so you can take advantage of the throughput of both devices at once. In the event of a failure on one of the switches, the remaining active switch assumes the virtual servers of the failed device.

Mirroring Connection Information for Fail-Over

Mirroring provides seamless fail-over of client connections and persistence records from an active BIG-IP 5000 to a standby switch. This allows a user session to continue even if the primary BIG-IP 5000 fails.
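To make the standby-priority behaviour described above a little more concrete, here is a small illustrative sketch - not F5's implementation - in which a lower-priority group is only brought into service when the number of available higher-priority members drops below a (hypothetical) minimum.

```python
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    priority: int     # higher number = preferred group
    available: bool   # as maintained by health checks

# Hypothetical pool: two primary servers plus a lower-priority standby.
members = [
    Member("web1", priority=10, available=True),
    Member("web2", priority=10, available=True),
    Member("standby1", priority=5, available=True),
]

MIN_ACTIVE = 2  # illustrative threshold, not an F5 default

def active_group(members):
    """Return the members that should receive traffic right now.

    Walk the priority groups from highest to lowest, accumulating available
    members until the minimum active count is reached - mirroring the
    'lower priority group is added to handle the load' behaviour described
    in the text.
    """
    selected = []
    for prio in sorted({m.priority for m in members}, reverse=True):
        selected += [m for m in members if m.priority == prio and m.available]
        if len(selected) >= MIN_ACTIVE:
            break
    return selected

print([m.name for m in active_group(members)])   # primaries only
members[0].available = False                     # simulate a server failure
print([m.name for m in active_group(members)])   # the standby joins in
```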
The 5000 also supports individual port mirroring, as well as the Spanning Tree Protocol.

Network-Based Fail-Over

Network-based fail-over allows you to configure a redundant BIG-IP 5000 to use a network connection to determine the status of the active switch. It can be used in addition to, or in place of, hard-wired fail-over. This is a significant feature because it gives the network manager more flexibility: with network-based fail-over, redundant BIG-IP 5000s are not limited to the physical proximity imposed by the 25-foot serial fail-over cable, which makes it ideal for the WAN scenarios we tested here.

Definition

The BIG-IP's handling of persistence is another key selling point for the product range as a whole. But what exactly does F5 mean by 'persistence'? Basically, persistence is necessary when a server holds data associated with a user and that data is not dynamically shared with the other servers. A classic example is online Internet shopping. Take the scenario where a customer builds a 'shopping cart' of goods at a web site and then leaves the site before completing the transaction. If, upon the customer's return to the site, the BIG-IP 5000 directs the request to a different server, that new server may not know about the user and his or her shopping cart. Of course, if all the servers stored the user information and selected goods in a single back-end database server, this would not be a problem. But if the site is not designed this way, the specific shopping cart data resides on just one server. In this case, the BIG-IP 5000 must select the same server that the user was directed to in the past in order to process the user's request seamlessly.

Persistence Modes

With the BIG-IP 5000, F5 offers six modes of persistence: Source, Server, VIP, SSL, Cookie and Destination Address Affinity. This is more than we have seen from any other vendor. Moreover, cookie persistence itself has three modes - Rewrite, Insert and Passive - one of which requires no change to the web host application, making it easier to configure. Cookie persistence uses cookie information stored by a client to direct the client connection to the appropriate server. The primary difference between the BIG-IP cookie persistence modes and SSL persistence is that with cookie persistence the data is stored at the client, not in the BIG-IP, so the resources of the client are used: cookie persistence persists on the HTTP cookie, and the information is stored on the client's disk drive.

As part of the BIG-IP 5000 configuration, an SSL accelerator card is included, terminating SSL sessions at the BIG-IP rather than at the web server and therefore speeding up SSL processing. The ability to accelerate up to 100 SSL transactions per second is included with the product, which equates to 6,000 SSL transactions per minute. By extending the licence, this can be increased to a maximum of 800 SSL transactions per second - 48,000 transactions per minute. This SSL termination can be combined with SSL persistence, as described above.
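As an illustration of the cookie persistence idea (closest in spirit to the Insert mode described above), the sketch below records the chosen server in a cookie and honours that cookie on subsequent requests. The cookie name, the server names and the random fallback load-balancing step are assumptions made for the example, not BIG-IP internals.

```python
import random
from http.cookies import SimpleCookie
from typing import Optional, Tuple

SERVERS = ["web1", "web2", "web3"]   # hypothetical server pool
COOKIE_NAME = "server_persist"       # illustrative cookie name, not an F5 default

def pick_server(cookie_header: Optional[str]) -> Tuple[str, str]:
    """Return (chosen_server, cookie_to_set) for one request.

    If the client presents a persistence cookie naming a known server, that
    server is reused; otherwise a server is chosen by ordinary load balancing
    (a random choice here, for brevity) and recorded in a cookie so that
    later requests stick to the same server.
    """
    if cookie_header:
        jar = SimpleCookie()
        jar.load(cookie_header)
        if COOKIE_NAME in jar and jar[COOKIE_NAME].value in SERVERS:
            server = jar[COOKIE_NAME].value
            return server, f"{COOKIE_NAME}={server}"
    server = random.choice(SERVERS)
    return server, f"{COOKIE_NAME}={server}"

# First request arrives with no cookie: a server is picked and a cookie set.
server, cookie = pick_server(None)
# A later request returns that cookie, so the same server is selected again.
assert pick_server(cookie)[0] == server
```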
One issue with the first implementation of terminating SSL at the BIG-IP device was that the SSL sessions were then unencrypted between the BIG-IP and the web server. This was an inevitable trade-off - performance against security - but F5 now offers a configuration whereby the SSL session is re-encrypted after termination at the BIG-IP, so, at least in a fashion, satisfying both requirements.

Other than the SSL accelerator, the BIG-IP 5000 has a number of integrated security features. One example concerns NAT (Network Address Translation) support, which, while very useful and very popular, is not in itself secure. To address this, the BIG-IP 5000 offers Secure NAT (SNAT). This provides clients with a secure outbound connection to servers external to the switch, or to an internal server array through a load-balanced virtual server. SNAT connection requests can only come from IP addresses recognised by a BIG-IP device. In addition to NAT/SNAT, the BIG-IP 5000 is designed to protect itself from attacks, such as denial of service attacks, as well as providing protection for the servers behind the device. Basic functionality includes:
Network Management and Monitoring

The BIG-IP 5000 features an extensive range of management functions, both within the device and as part of an external management package, SEE-IT Network Manager. This is a data collection manager that enables you to monitor network performance and traffic patterns, and it includes a number of components that allow you to monitor different parts of the network. The BIG-IP 5000 itself includes features to monitor application servers, called health monitors. These are a collection of predefined scripts for testing the health and availability of the servers and applications that make up your web system - given that applications and servers of any kind can and do fail. So, for example, you can use the health monitors to check whether a server is responding to web, FTP, LDAP or other requests, or to verify that an application (web, database, credit card verification and so on) is operating properly before sending traffic to that server.
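The health-monitor concept is simple enough to sketch: poll each back-end at a fixed interval, mark it down when it stops answering and bring it back once it responds again. The sketch below uses HTTP checks only, with placeholder addresses and an invented interval and timeout; real monitors (FTP, LDAP, database checks and so on) follow the same pattern. It is an illustration of the idea, not F5's monitoring code.

```python
import time
import urllib.request

# Hypothetical back-end health-check URLs - placeholders, not real hosts.
NODES = {
    "web1": "http://192.0.2.11/healthz",
    "web2": "http://192.0.2.12/healthz",
}
CHECK_INTERVAL = 5   # seconds between sweeps (illustrative)
TIMEOUT = 2          # per-request timeout in seconds (illustrative)

availability = {name: True for name in NODES}   # the 'availability table'

def check(url: str) -> bool:
    """Return True if the node answers an HTTP GET with a 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False

def sweep():
    """One monitoring pass: update the availability table for every node."""
    for name, url in NODES.items():
        up = check(url)
        if up != availability[name]:
            print(f"{name} is now {'UP' if up else 'DOWN'}")
        availability[name] = up

if __name__ == "__main__":
    for _ in range(3):          # a few sweeps for demonstration
        sweep()
        time.sleep(CHECK_INTERVAL)
```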
Configuring and Managing a BIG-IP 5000

While there is a standard browser-based GUI, mentioned earlier, for almost all BIG-IP product operations - also shared with other F5 products such as the 3-DNS site load balancer - exactly how you configure a number of BIG-IP 5000s depends on exactly what you intend them to do. In order to test several different scenarios within the labs we created multiple configurations. These are very easy to save and switch between, adding to the flexibility of the product. There is, however, a basic approach in all cases that we can describe here to give an idea of what installing and configuring the devices entails.

The first point to refer back to is that all BIG-IP devices 'see' an inside world and an outside world. In our scenarios, the inside world was the bank of web servers simulating that of a service provider, or of a company providing its own web-based applications and services to its users - what we used to call an intranet. The outside world is that of the users, whether via an Internet connection or an external LAN/WAN. Separate Ethernet management and serial console ports are provided on the BIG-IP 5000. Once a basic IP address has been set up for the two different IP networks - internal and external - simply pointing a browser at one of these addresses enables you to configure the device fully from any PC on the same IP network. The switch ships with a default address, so it can be configured via a browser straight out of the box. Alternatively, management via a CLI or SSH (Secure Shell) is equally available. Each to his or her own. With a redundant configuration, as under test, the BIG-IP 5000s can also use what F5 calls a MAC Masquerade - a virtual MAC address which dramatically speeds up fail-over when switching from one BIG-IP device to another.

Using the Web-Based Configuration Tool

As we said, it is possible to simply point a web browser at the BIG-IP 5000s and use the web management on the devices themselves to create the required configurations. This consists of a number of configuration wizards, plus a main screen with a menu down the left-hand side and all information to the right. The menu works in 'explorer-esque' fashion, whereby clicking on any option with a plus sign expands that option to reveal sub-menu options, and vice versa for minus signs. The information section of the screen is in tabbed-page format, so clicking on a tab brings up the related screen. At all points, online help - and extensive at that - is available by clicking on the help option in a menu that sits at the top of the screen.

We have to say up front that this is absolutely the best web-based device management tool we have ever seen, and by some distance. It is simply excellent, and makes what might at first be a daunting task - due to the sheer number of options available - relatively straightforward. This is helped by the wizards, which are genuinely useful - as opposed to the usual token-gesture versions seen on many management tools - and can be used to carry out much of the overall configuration.

The basic concept of the BIG-IP view of the world revolves around the creation of 'virtual servers', each of which has a unique IP address. These front-end whatever that virtual server may consist of; in each case, it will include a pool of networking devices - in our labs configuration, web servers. The pools are created individually by combining one or more devices into a pool. Each pool is then governed by a number of settings - the load balancing mode in use, persistence settings, mirroring, and rules that direct traffic based on particular conditions (such as 'if GIF traffic then direct to server pool x') - all of which combine to create the virtual server. Just how many different virtual servers you create depends on how many different functions you want the BIG-IPs to carry out. For any configuration involving terminating SSL sessions at the BIG-IP it is also necessary to create a proxy server and generate a certificate; this then validates the SSL session as it enters the BIG-IP and terminates it at that point.

All the key features of the BIG-IP 5000 are accessible from the web-based configuration tool, though there is also a complete CLI, called BIGpipe, available either at the device itself or via the web-based tool. This gives you access to every possible command and function on the device, and you can combine the two approaches without even running up a telnet session. Once up and running, the sheer amount of networking statistics available from the BIG-IP 5000 is impressive. Using the health monitor functions described earlier, it is also possible to check on the status of key devices on the network at all times, and log files provide a view of a range of historical events. Overall, the configuration and management aspects of the BIG-IP 5000 have been extremely well thought out and could scarcely be improved upon in our view.
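To illustrate the virtual server/pool/rule model described above - including the 'if GIF traffic then direct to server pool x' style of rule - here is a short sketch. The pool names, rules and URLs are invented for the example; on a real BIG-IP this logic would be expressed through the GUI or the BIGpipe CLI rather than in Python.

```python
from urllib.parse import urlparse

# Hypothetical pools sitting behind one virtual server.
POOLS = {
    "image_pool": ["img1", "img2"],
    "cache_pool": ["edgefx1"],
    "default_pool": ["web1", "web2", "web3"],
}

# Rules are checked in order; the first matching condition decides the pool.
# These conditions are illustrative only.
RULES = [
    (lambda path: path.endswith((".gif", ".jpg")), "image_pool"),   # 'if GIF traffic...'
    (lambda path: path.endswith((".html", ".css")), "cache_pool"),  # static, cacheable content
]

def select_pool(url: str) -> str:
    """Return the name of the pool a request for this URL would be sent to."""
    path = urlparse(url).path.lower()
    for condition, pool in RULES:
        if condition(path):
            return pool
    return "default_pool"

for url in ("http://example.test/logo.gif",
            "http://example.test/index.html",
            "http://example.test/cgi-bin/cart"):
    print(url, "->", select_pool(url))
```

The point of the model is that everything hangs off the virtual server: the pools, the rules and the persistence and mirroring settings are all just attributes of it, which is why saving and swapping whole configurations is so straightforward.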
The 3-DNS Controller is defined as a 24x7 availability and intelligent load balancing solution for geographically distributed Internet sites and data centres, sharing the same load balancing options as the BIG-IP 5000. It manages and distributes Internet requests across multiple, redundant server sites, regardless of the platform type or combination, and without requiring additional software on the servers. End-user requests are distributed according to data centre and network conditions such as round-trip time, packet loss and other QoS metrics. The aim is to ensure the highest possible availability for any web site. The primary features of the 3-DNS are as follows:
The EDGE-FX is a dedicated cache engine, designed by F5 but based on Inktomi's well-established Traffic Server cache engine. Key features include:
The Edge-FX Content Controller is a web content distribution engine that can automatically move and synchronise content on the 'edge' servers and caches, complete with version control on the application servers. For example, if you update the content you can deliver it into the network from any location, to support multiple distributed publishers or departments, so localised content creation and deployment is easily supported. By providing synchronised content at multiple sites, it delivers true redundancy, ensuring instant content availability in conjunction with 3-DNS global load balancing, as put to the test here.
Its key features are as follows:
Putting the F5 Networks Products to the Test

First we needed to run some basic validation tests on three key strengths of the BIG-IP 5000 - SSL acceleration, persistence and resilience - as well as on basic throughput. For these we used a combination of the Spirent SmartBits 6000 (see Appendix B) for high-density traffic generation and a farm of web servers running ZDNet WebBench 4.0 benchmarks and Microsoft's WCAT web content benchmarks.

Test: Validating SSL Acceleration Technology

This was not an exhaustive SSL performance benchmarking exercise; it was aimed at proving that the SSL acceleration can significantly improve performance in a real-world environment with minimal configuration requirements. To this end we created a virtual server on the BIG-IP with a pool of six servers, all pointing at port 443, so terminating SSL traffic at the servers. We then created a further virtual server and a proxy server to terminate all SSL traffic at the BIG-IP. We then created a WebBench SSL script that generated new session IDs from 500 virtual clients over a period of 500 seconds. We first ran the test against the virtual server terminating at the web servers, then against the virtual server/proxy combination terminating at the BIG-IP, and compared the results.
We saw a very significant improvement in performance when terminating at the BIG-IP: on average, 598.7 new SSL session IDs per second were terminated by the accelerator card, compared with 383.7 at the web servers.

Test: Validating Persistence

Here we were looking to prove that the BIG-IP supports persistent connections between client and server using the different persistence modes available within the product. We focused on Simple, Cookie (Passive) and SSL persistence modes, as we deemed these would be the most popular. For Simple persistence we looked to persist on the client IP addresses of those taking part in a series of WCAT benchmarks, which generated a series of HTML 'GETs' from the web server pool. This involved adding Simple persistence to a server pool to generate a revised virtual server.
We then pointed the benchmarks at this virtual server and monitored persistent connections. Within the BIG-IP web-based configuration manager it is possible to see, at any point, which persistent connections are current, so simple monitoring of the statistics screens was all that was necessary to confirm the 'stickiness' and that the persistence was working (see Figure 8). As you can see, we had no problems confirming the persistence between client and server.

The next test was to confirm cookie persistence. Of the three modes available - which include the BIG-IP generating cookies itself - we wanted to test the ability to read cookie information from an incoming packet and persist on that. So we ran more WCAT benchmarks, this time cookie-based, and were able to identify the cookie name, which we could then add to the BIG-IP configuration using Passive cookie persistence. We were then able to monitor the persistence between client and server and confirm the cookie persistence, as can be seen in Figure 9.

Test: Basic Resilience and Throughput

In order to test the basic resilience of the BIG-IP 5000 we attached a Spirent SmartBits traffic generator to the four Gigabit ports of the switch and repeatedly ran SmartApps Throughput and Frame Loss tests across two pairs of ports, simulating a client-server combination. In theory this fully saturates the 'backplane' bandwidth of the BIG-IP at 8Gbps in each direction.
On every occasion we observed 100% throughput, zero frame loss and no downtime from the switch during the prolonged 'assaults' on the Gigabit ports of the BIG-IP, as indicated in the highlighted section of the screenshot that follows.

Using a combination of F5's BIG-IP 5000 web content switches, Edge-FX Content Controller, Edge-FX cache devices and 3-DNS global site balancer, we then created a two-site network across a simulated routed WAN link.
The sites were fully replicated so that, at any one time, one site was fully able to support user demand while the other was being updated or (forcibly) taken down. While each product is a true standalone device, able to work within any generic networking environment, we were looking to see how much extra value could be delivered when using them in combination, as a total solution. In this way, we were looking to maximise the features made available through the tight product integration the F5-only solution offers.

The test bed for the F5 product test consisted of a combination of real and virtual web servers and clients to simulate a complete Internet/service provider environment; the test scenario is equally applicable to an enterprise with a large staff and/or customer base. Using Caw Networks WebAvalanche and WebReflector client and server traffic generators (see Appendix C for more details), we created a live HTTP environment simulating thousands of users and billions of transactions. This enabled us to test the resilience and capacity of the F5 solution while equally testing the 'intelligence' of the system, ensuring it was capable of delivering the web content in line with our requirements and stated aims. With this configuration in place we were able to carry out a series of tests, building up to a 'live' simulation of a complete web content delivery, distribution and updating session while still maintaining heavy 'Internet' traffic. All the tests were run repeatedly in order to judge consistency of results and to test for overall resilience. In total, the F5 Networks solution was under test for eight weeks - a long enough period to provide a true test of the products in every sense.

Test: Capacity and Redundancy with Web Content

The next test focused on seeing how the BIG-IP 5000 would cope with a long and intense series of hits, typical either of a mass web-site frenzy resulting from a major news story breaking, or of a deliberate assault such as a denial of service attack. To simulate this scenario we modelled our own NSS web site, then used the Caw Networks HTTP traffic generators to create an increasing amount of traffic and connections sent to a BIG-IP 5000, configured to fail over to a remote site in the event of a failure, the twin-site configuration being managed by an F5 3-DNS site balancer. Behind the BIG-IP 5000 sat a farm of virtual servers holding 'content' based around our own web site pages. The test ran for over five and a half hours, during which time we recorded a total of almost 215 million transactions (214,975,028), with 641.1 Gigabits of data into and 1.7 Terabits of data out of the BIG-IP 5000, a total of 19.1 million connections, and over 9,000 transactions per second at peak.
This equates to an average of almost 4 million connections per hour and data handling in excess of 100 Gigabits in and 300 Gigabits out per hour - all from a single BIG-IP 5000, and one set to switch to a redundant site in the event of a failure. We repeated the test several times; results varied only marginally, and well within accepted limits from a statistical point of view.

So what happened when we deliberately induced a failure? The traffic was immediately re-routed to the redundant site by the 3-DNS, the second BIG-IP 5000 now hosting the simulated NSS domain. We were able to confirm this both with the monitoring and statistical tools within the F5 management software and manually, just to make sure. From a regular user's perspective, sat at their PC with Explorer or Netscape loaded, only those 'savvy' enough to notice the change of IP address of the web site as the home page loads would be aware that any change had taken place.

The next test in line for the F5 solution was to check the accuracy of the BIG-IP's ability to work with the EDGE-FX cache.
The modelled NSS web site used for the tests contains a significant number of cacheable web pages. These were incorporated into the tests alongside non-cacheable content to create a ratio of approximately 55:45, cacheable versus non-cacheable. So all the non-cacheable content requests should be routed directly to the servers, and the rest directly to the cache pool. The results show that, over the complete test period, 49% of all content requests were sent to the cache. Allowing for the cache having to build up during the early part of the test run, this is an impressive figure.

Having established that the BIG-IP 5000 switches and Edge-FX cache engines were performing correctly and to the desired levels, we then needed to put the final piece of the jigsaw into place: the Edge-FX Content Controller. So what exactly is the Edge-FX Content Controller? Think in terms of a software distribution server optimised for web content delivery and you are getting the idea. We set up a staging server to hold new web content that the Edge-FX Content Controller could then grab every time there was an update and deliver across our test network to both sites, where real web servers resided alongside the 'virtual' ones. Importantly, the Edge-FX Content Controller manages the updates of the cache as well as the web servers, so the aim of the test was to see whether it could accurately update both the servers and the Edge-FX cache engines at each site in turn, while pointing HTTP traffic to whichever site was not being updated. In this way the users - virtual in our case - would never be sent to an out-of-date site or served an out-of-date cached web page.

Having made changes to the web site on the staging server, we then created a 'new edition' on the Edge-FX Content Controller to update the server/cache combinations as described above. We then set a Caw WebAvalanche benchmark running to simulate live traffic, pointed at the copy of our NSS web site on the test network. The Edge-FX Content Controller application (browser-based) keeps you up to date on how the content delivery is progressing. Then, using the 3-DNS and BIG-IP 5000 management statistics screens - which are truly excellent, by the way - we were able to monitor exactly what traffic was going where, and when.
As the first site was being updated by the Edge-FX Content Controller, the web traffic being created by the Caw WebAvalanche was routed to the second BIG-IP site - exactly what we wanted to see. Importantly, there was also no impact on web traffic performance, the benchmark producing the same results as when the Edge-FX Content Controller was not updating the network with new web content.
In practice this means that you can maintain a 24x7x365 online service without compromising on web site content updates. Have we discovered WWW.Utopia at last?

We think it is important to make the point that, within the NSS labs, we were really pushing out the boundaries of what had been achieved with the F5 products to date. Web content delivery systems are the new kids on the block. Yes, there have been solutions of sorts around for a couple of years, but these have really been cobbled-together systems made up of many different products (and usually from multiple vendors), and we have already highlighted the problems with such an approach. So we were really hoping that F5 Networks' in-house solution would do the job and make web content delivery much more manageable and easier than before. And we were not let down. The combination of tests we ran shows that the F5 approach really does work and, importantly, that it is very easy to set up and manage thereafter - in truth, it almost runs itself. Overall, then, we strongly recommend that anyone looking to deliver web content and services across a global network easily, quickly and reliably should take a look at the F5 Networks solution. It is by far the most comprehensive and integrated solution we have seen in the labs to date.

Appendix A: The Big Challenge - Control Versus Raw Throughput

How many times do you try to access a web site, only to find that it is down or, at best, working in slow motion? Reliability is key to successful web sites and services, yet there is a rule in life that some people hold on to as if it were the first law of survival, and which is scarcely applicable when creating resilient web content and service delivery mechanisms: more is better. There is no clearer example of this rule than that old IT chestnut, the adage that 'throwing more bandwidth' at a problem is the best cure. If this were true, then in the present day of 10Gbps Internet backbones, multi-gigabit LAN backbones and ever-cheaper per-port costs on switches, NICs and just about every other networking component, surely the rule makes more sense than ever? Except that it has never made sense and still does not.

In France, for example, there is a passenger train service called the TGV (Train à Grande Vitesse). This is a very high speed (300km/h) train which would be a disaster waiting to happen were it not for the network it runs on - tracks, junctions, stations - being carefully designed in advance, specifically to cope with not just one TGV but a complete national service of TGVs running alongside their much slower brethren, a.k.a. regular trains. The idea is that the TGVs get priority without impacting on the ability of the slower trains to still get through to their destinations. The railway junctions and signalling are used to balance the traffic so that, ideally, there are never any delays, though in some cases this means slowing the TGVs for short periods in order to get all the traffic through. In practice, of course, there are delays, but as railway systems go it is about as good as it gets, considering all the geographical issues and the problems of balancing low- and high-speed traffic on a national network. All of which sounds remarkably like the Internet in many ways, or maybe even more like a large-scale intranet/extranet environment. If we take such an environment, be it within a corporate or a service provider, the average network makes a national railway network look idyllically simplistic.
We are talking about a heady mix of routers, switches, firewalls (and other security devices), web caches, packet shapers, load balancers, web servers, application servers... the list goes on and on. So how exactly do you go about simply 'throwing bandwidth' at a contemporary network when there are so many devices in the way that need to be optimised, balanced and made resilient? The answer is: you don't. Instead you optimise the network through intelligence, not brute force. And it helps if, as with the solution under test here, all the critical components come from a single vendor.

Not that performance isn't important - it is. But predictable, reasonable performance with built-in reliability beats manic but unreliable performance every time, especially when you are dealing with something as unpredictable as web traffic. Witness the many and various famously documented outages that major service providers and portals have suffered from time to time when a huge story breaks on the Internet, and you see why control and reliability are key to ensuring connections stay up and tempers stay cool - and to ensuring that service providers of all types stay in business. To this end, over the past couple of years a new wave of Ethernet products has emerged - F5's BIG-IP 5000 among them - which focus not so much on pure throughput as on intelligently managing the data passing through them and directing it to its destination as efficiently as possible. So is this another groundbreaking moment in the evolution of Ethernet-based networking? Certainly you can look at it that way: finally Ethernet gets intelligent, and intelligence should never be under-rated.

The Evolution of Ethernet: Adding Intelligence to the Bandwidth

Within the history of Ethernet technology there have been several 'groundbreaking' moments. One example is when the Ethernet switch was introduced; others were the introduction of Fast Ethernet and, more recently, Gigabit Ethernet. Ten Gigabit Ethernet is around the corner, so when will this bandwidth explosion stop? The answer is that, one way or the other, it probably won't. And as Ethernet moves onto a higher bandwidth platform each time, so the price per megabit of that bandwidth falls, often dramatically. But is it really as simple as just buying bandwidth as cheaply as possible, or is there more to consider when creating a contemporary Ethernet network? Well, yes, there is. The old classic 'throw more bandwidth at a problem and it'll go away' has never worked in the medium - let alone the long - term, and it will certainly become less and less effective as the applications themselves get smarter.

For too long networking was a black art, a science dedicated to hiding the reality: that networking hardware is essentially plumbing, albeit complex plumbing at that. Even domestic plumbing would be of no use if 'applications' such as the delivery of hot and cold water, central heating and air-conditioning around the home and office were not required. And so it is with the networking variety. There is no use in 'plumbing in' gigabits of networking capacity unless there are applications in demand that take advantage of this capacity. In the same way that you don't base the central heating for a small house on a boiler designed to heat an office block just to guarantee that it works, networking should be based around intelligent use of what you have and what you need, not on over-engineering a solution at great expense just to be on the safe side.
The important factor now is to be able to deliver a broad range of diverse services and applications as efficiently and reliably as possible. This is especially so in the Internet and intranet worlds, where content is hugely varied and impacts significantly on performance if not handled correctly. With the product set we have evaluated in this report, F5 has taken the approach that intelligence is the key to modern Internet/intranet/extranet traffic control and to the delivery of the kinds of services and applications that the end-user world really wants. Adding real intelligence into the network - real network traffic management - appears to be the best way forward, and to this end F5 appears to have got it right.

Appendix B: Spirent Communications SmartBits 6000B

Spirent Communications' SmartBits 6000B (SMB-6000B) chassis is claimed to be the industry's highest-port-density network performance analysis test system. Each chassis can support up to 24 Gigabit Ethernet ports, 96 10/100Mbps Ethernet ports, 12 POS (Packet over SONET) ports, 12 SmartMetrics (more on this later) Gigabit Ethernet ports, or a mixture of these port types. SmartBits 6000Bs can be daisy-chained together to achieve very high port density, enabling users to perform automated large-scale testing in quality control and high-volume production environments.
The SMB-6000B is controlled by a PC through a 10/100Mbps Ethernet connection and uses a Windows-based interface. On this platform, Spirent has made a wide range of applications available for use with the 6000B, covering everything from classic throughput testing to complex QoS/CoS tests. The latter is achieved with the SmartMetrics modules, which offer a layered approach to performance testing, designed specifically to address the often complex set of optimization and prioritization methods at all network layers. It offers the ability to measure:
SmartMetrics gives you the ability to measure and analyse every aspect of your network, from the performance of each network port, to the performance of millions of IP flows, to the effect of opening and closing thousands of TCP or multicast sessions. The optimization of traffic can take place at Layer 2, Layer 3, Layer 4 and even up to Layer 7. IEEE 802.1p prioritizes traffic at Layer 2 (data link); DiffServ and ToS optimize, and MPLS and RSVP manage, resources at Layer 3 (network). A multitude of new QoS products, such as server load balancers and traffic shaping/access control services, optimize traffic based on criteria including TCP or User Datagram Protocol (UDP) port number at Layer 4 (transport). Even higher-layer criteria such as Uniform Resource Locator (URL) or application type can be used, so it is vital to be able to measure performance in these areas. SmartMetrics addresses these needs by measuring how well devices and networks optimize, prioritize and segment traffic, using an expanded set of metrics based on per-flow, per-connection, per-network-application and per-access-device measurements, across technologies.

Appendix C: Caw Networks WebAvalanche and WebReflector Web Traffic Generators

Internet architectures are becoming increasingly complex. Whether you are building network equipment or providing a service, you must deliver consistent performance under all conditions. Until now, capacity assessment at high loads has been a costly and complex process. For this reason Caw Networks introduced the WebAvalanche and WebReflector appliances to assist with the challenge. At NSS we have taken these capacity planning products and integrated them into our test bed, simulating real-life Internet conditions - those that the average user experiences daily.
WebAvalanche is described by Caw as a capacity assessment product that challenges any computing infrastructure or network device to stand up to the real-world load and complexity of the Internet or intranets. The system determines the architectural effectiveness, points of failure and performance capabilities of a network or system. Using WebAvalanche to generate Internet user traffic and WebReflector to emulate large clusters of data servers, you can simulate even the world's largest customer environments. The system provides invaluable information about a site's architectural effectiveness, points of failure, modes of performance degradation, robustness under critical load and potential performance bottlenecks. It is able to set up, transfer data over, and tear down connections at rates of more than 20,000 per second - all while handling cookies, IP masquerading for large numbers of addresses, and traversal of tens of thousands of URLs.

WebAvalanche initiates and maintains more than a million concurrent connections, each appearing to come from a different IP address. This allows realistic and accurate capacity assessment of routers, firewalls, load-balancing switches, and web, application and database servers, and helps identify potential bottlenecks from the router connection all the way to the database. This accuracy is especially critical for gauging Layer 4-7 performance. The ability to additionally simulate error conditions such as HTTP aborts, packet loss and TCP/IP stack idiosyncrasies can help anticipate - and avoid - significant and previously unknown impacts on performance. To enable more accurate load simulations across multi-tiered web site architectures, the system also supports extremely realistic user modelling behaviours such as think times, click streams and HTTP aborts that cause web servers to terminate connections while back-end application servers continue to process requests. Configuration is simple, as both WebAvalanche and WebReflector can be driven directly from a desktop browser to set up tests, review feedback in real time and easily reconfigure test parameters.
WebAvalanche also supports browser cookies, HTML forms, HTTP posts and SSL-encrypted traffic. The system therefore gives you the flexibility to specify data sources and to mix and match data sets to recreate accurate user behaviour at very high performance levels. It also simulates SSL loads that can stress the world's most sophisticated secure e-commerce platforms, and includes configurable cipher suites that enable you to emulate different types of browsers. WebAvalanche includes a high-accuracy delay factor that mimics latencies in users' connections by simulating the long-lived connections that tie up networking resources. Long-lived, slow links can have a far more detrimental effect on performance than a large number of short-lived connections, so this approach delivers more realistic test results.

While WebAvalanche focuses on client activity, WebReflector realistically simulates the behaviour of large web, application and data server environments. Combined with WebAvalanche, it therefore provides a total solution for recreating the world's largest server environments. By generating accurate and consistent HTTP responses to WebAvalanche's high volume of realistic Internet user requests, WebReflector tests to capacity any equipment or network you connect between the two systems. Its protocol-level accuracy helps you assure the stability and performance of switches, routers, load balancers, firewalls, caches and other Layer 4-7 devices. The system is ideal for helping infrastructure service providers validate, enforce and maintain service level agreements (SLAs).