Appendix B - The Test Equipment

Spirent Communications SmartBits SMB-6000/SMB-600

Spirent Communications (www.spirentcom.com) provides network and communications testing solutions for developers, manufacturers, evaluators and operators of communications equipment, networks and services. The SmartBits 6000 (and its smaller sibling, the SmartBits 600) are multi-port network performance analysis systems designed to measure the performance limits of complex network configurations and network devices.
The SmartBits 6000 is a high port density network performance analysis test system. Each chassis can hold up to 12 SmartModules in various combinations to support up to 48 Gigabit Ethernet ports, 96 10/100 Mbps Ethernet ports, 24 POS (Packet over SONET) ports, 24 SmartMetrics Gigabit ports, or a mixture of these port types. Multiple SmartBits 6000 chassis can also be daisy-chained together to achieve even higher port densities. The SmartBits 6000 is controlled from a Windows-based PC or a UNIX workstation through a 10/100 Mbps Ethernet connection. Control is via the "soft front panel" SmartWindow application, and the system also includes SmartApplications software, which automates industry-standard performance tests as defined in RFC 1242 and RFC 2544.

Spirent's SmartBits SMB-600 chassis is a portable and compact version of the SMB-6000, providing the same features while holding up to two modules. It can support up to 8 Gigabit Ethernet ports, 16 10/100 Mbps Ethernet ports, 4 POS ports, 4 SmartMetrics Gigabit ports, or a mixture of these port types.

Spirent has recently introduced a new generation of SmartBits network and Internet test systems called TeraMetrics. The TeraMetrics open architecture is the foundation for a new family of SmartBits test systems designed to meet the accelerating demands, complexity, increased speeds and scalability of terabit-class switching, with interface speeds of up to 10 gigabits per second.

The range of SmartBits products also includes a set of software tools that allow SmartBits systems to be used for a variety of applications, ranging from industry-standard tests to specific applications for new and emerging Internet and data technologies. Those used extensively within NSS include:

SmartWindow - SmartBits virtual front panel.
Within SmartWindow, the test engineer simply needs to select a protocol, set class of service parameters, and then test any of the following: NIC cards, servers, bridges, cable modems, xDSL modems, switches, routers, VLANs, firewalls, live networks, or multimedia scenarios.

SmartApplications - Provides automated performance analysis for bridges, switches, and routers per RFC 1242 (Benchmarking Terminology for Network Interconnection Devices) and RFC 2544 (Benchmarking Methodology for Network Interconnect Devices). Tests are available for Ethernet, ATM, and Frame Relay.
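RFC 2544 defines throughput as the highest offered load at which no frames are lost, and automated tools typically locate it with a binary search. The following is a minimal sketch of that search, assuming a hypothetical send_at_rate hook that stands in for the tester hardware and returns the number of frames lost at a given percentage of line rate:

```python
# Sketch of the RFC 2544 throughput search that tools like SmartApplications
# automate: find the highest offered load with zero frame loss.
# send_at_rate() is a hypothetical stand-in for the tester hardware; it
# returns the number of frames lost at the offered rate (percent of line rate).

def rfc2544_throughput(send_at_rate, resolution=0.1):
    """Binary-search the zero-loss rate between 0 and 100% of line rate."""
    lo, hi = 0.0, 100.0          # percent of line rate
    best = 0.0
    while hi - lo > resolution:
        rate = (lo + hi) / 2
        if send_at_rate(rate) == 0:   # no frames lost: try a higher rate
            best, lo = rate, rate
        else:                         # loss observed: back off
            hi = rate
    return best

# Toy device under test that starts dropping frames above 42% of line rate:
print(rfc2544_throughput(lambda r: 0 if r <= 42.0 else 1))
```

The resolution parameter controls how finely the search converges; real testers also repeat each trial for a configured duration per RFC 2544, which this sketch omits.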
SmartFlow - Tests line-rate QoS, enabling both forwarding and policy tests. Analyses each incoming stream to test a device's (or network's) ability to forward very large numbers of flows, and its ability to correctly handle policies implemented in the network or device under test.

SmartTCP - Tests load balancer performance, measuring the TCP session performance of server load balancer devices that make forwarding decisions based on Layer 4-7 information. SmartTCP benchmarks both the rate and connection capacity of the device under test as it establishes, maintains, and tears down TCP sessions.

WebSuite/Firewall - Designed to simulate real-world traffic loads in order to support the testing of content delivery and network equipment. Gauges the performance of firewalls performing NAT (Network Address Translation), determines maximum application transaction capacity, measures application throughput with TCP acting as the transport agent, and evaluates an in-line device's ability to deal with DoS (Denial of Service) attacks.

TeraVPN - Designed to measure the network performance of IP Virtual Private Networks. Determines IP-VPN tunnel creation capacity using IPSec protocols, and also generates UDP or HTTP traffic over each tunnel and measures data performance characteristics such as packet loss, latency, and response time.

SmartWindow and SmartFlow are used to generate background traffic for the UDP tests in this report, while WebSuite is one of the tools used to generate high-volume DoS attacks. In general, the Spirent software is not particularly easy to use, lacking a consistent look-and-feel across the range, which makes it difficult to switch from one product to another. Nor do all of the software packages run on all of the SmartModules, making it difficult to select the exact combination of hardware and software required to perform a range of tests.
However, the hardware is solid and reliable, and provides a means to generate high volumes of layer 2/3 traffic at up to multi-Gigabit speeds.

Spirent Communications Avalanche and Reflector

Whether you are building network equipment or providing a service, you must deliver consistent performance under all conditions. Until now, capacity assessment at high loads has been a costly and complex process, and for this reason Spirent Communications introduced the Avalanche and Reflector appliances. At NSS we have taken these capacity planning products and integrated them into our test-bed to aid in simulating real-life Internet conditions - the sort of conditions the average user experiences daily.
Avalanche is described by Spirent as a capacity assessment product that challenges any computing infrastructure or network device to stand up to the real-world load and complexity of the Internet or intranets. The system generates simulated network traffic featuring real-world characteristics such as connection speed, packet loss, browser emulation, user think-time and aborted transactions. This helps provide invaluable information about a site's architectural effectiveness, points of failure, modes of performance degradation, robustness under critical load, and potential performance bottlenecks.

Using Avalanche to generate Internet user traffic and Reflector to emulate large clusters of data servers, it is possible to simulate the largest customer environments. Each unit sports up to four copper or fibre Gigabit Ethernet ports, load-balanced equally between dual Intel processors when generating traffic, to achieve in excess of 2Gbps of traffic per Avalanche/Reflector pair. Between them they can set up, transfer data over, and tear down connections at rates of more than 45,000 requests per second (HTTP 1.0 with no persistence) and over 60,000 requests per second (HTTP 1.1 with persistence). They can sustain over 4,000 HTTPS requests per second with no SSL session ID re-use, generate more than 30,000 streaming requests, and simulate more than 2 million simultaneously connected users with unique IP addresses - all while handling cookies, IP masquerading for large numbers of addresses, traversing tens of thousands of URLs and operating under a realistic mix of traffic.

This allows realistic and accurate capacity assessment of routers, firewalls, in-line security appliances (IDS/IPS/UTM), load-balancing switches, and Web, application, and database servers. It helps identify potential bottlenecks from the router connection all the way to the database, or can simply be used to generate a background test load of realistic traffic.
Load can be specified in a number of ways: user sessions, user sessions per second, transactions, transactions per second, connections or connections per second. Protocols supported include HTTP/1.0, HTTP/1.1 and HTTPS (including persistence and simultaneous connection settings); RTSP/RTP (QuickTime and Real Networks); Microsoft Media Streaming; FTP; SMTP (including attachments) and POP3; DNS; and Telnet traffic. It also supports SSL V2, V3 and TLS V1, with control over SSL protocol parameters (version selection, cipher suites and session ID re-use), as well as allowing generation of a range of simulated Distributed Denial of Service (DDoS) attacks.

The system also allows modelling of user behaviour, supporting such actions as use of proxies and proxy caches, use of multiple browser types, multi-level HTTP redirects, user think times, click streams, and HTTP aborts ("click-aways"). Support is provided for dynamic content sites, cookies, session IDs, HTML forms, HTTP posts, and HTTP basic and proxy authentication, and the tester can specify a list of URLs and data object parameters that can be changed on a per-transaction basis.

Avalanche includes a high-accuracy delay factor that mimics latencies in users' connections by simulating the long-lived connections that tie up networking resources. Long-lived, slow links can have a completely different effect on performance than a large number of short-lived connections, so this approach provides the ability to finely tune the test scenario for more realistic results. So does the ability to introduce conditions that can seriously affect real-world performance, such as packet loss levels, TCP/IP stack characteristics (with control over maximum segment size, slow start/congestion avoidance, VLAN tagging, IP fragmentation, and TCP timeout behaviour) and, of course, line speed.
User profiles can be created which enable Avalanche to mix different user types in a single test - perhaps one group of users running over a GSM link with high latency and heavy packet loss, while another group runs over a 64K ISDN line, and yet another over a T1 connection.

While Avalanche focuses on client activity, Reflector realistically simulates the behaviour of large Web, application, and data server environments. Combined with Avalanche, it therefore provides a total solution for recreating the world's largest server environments. By generating accurate and consistent HTTP responses to Avalanche's high volume of realistic Internet user requests, Reflector tests to capacity any equipment or network connected between the two systems.

The operating system for both units is proprietary - Unix-like in appearance - and is loaded from disk at boot time. Luckily, it is rarely necessary to get to grips with the underlying OS, since all configuration for both Avalanche and Reflector is performed via a Web-based graphical interface.

An Avalanche test consists of a sequence of phases, each of which is defined in the Test Specification. The operator configures the number of sessions, connections or transactions initiated (either per second, or throughout the entire test), the maximum number of active simultaneous user sessions, and the duration of each phase. The Test Specification consists of several sub-categories, including Load Profiles, Network Profiles and Interface Profiles. The Load Profile settings control how traffic is generated during a test, while the Network Profile options enable special routing, DNS and TCP functionality of Avalanche.
Interface Profiles allow different Subnet Profiles, User Profiles, Load Profiles and URL Lists to be allocated to each physical interface on Avalanche. This can be used to simulate a wide variety of user behaviour, as well as to combine different protocols and DDoS attacks within the same test, running on different ports. User Profiles control a user's actions, such as the period of time for which they view a Web page (think time) and how often they abandon a slow-loading page (click-away), as well as the URLs targeted, form data submitted, and cookies used during a user session. Connection Properties settings determine the amount of packet loss and the connection line speed. Subnet Profiles specify the IP address ranges used by the clients. Browser Emulation settings control HTTP protocols, HTTP headers, SSL configuration, and user authorisation. Finally, the URL Lists specify the requests to be made to the servers emulated by Reflector.

A similar process is required on the Reflector unit, where a test consists of Test Specifications with a defined number of servers (Server Profiles) handling specified types of transactions (Transaction Profiles). Server Profiles consist of connection properties and server emulation settings. Transaction Profiles control things like status codes, data and MIME types, HTTP headers, response size, placement of operator-defined response data, and response latency.

Test Specifications are extremely complex things to create, though extensive assistance is available in the form of context-sensitive help in the console, wizards to step you through the process, and good hard-copy documentation. The most recent releases of the software have moved away from the older "monolithic" test cases to a more modular approach, with the majority of the profiles mentioned above stored in separate files.
However, far from making them reusable as you would expect from this type of architecture, the way they are grouped together in Test Cases means that, for example, even if you only ever use one set of client subnet addresses, you need to create a separate Subnet Profile for every interface (four of them on the 2500) in every Test Case. This is bad enough, but there is no "test copy" facility, meaning that if you want to duplicate a test in order to make slight modifications, you actually have to manually copy every profile before you can make the changes. This is one area which still needs improvement.

It is also very difficult to create tests that provide exactly the sort of network traffic you may be looking for. For example, a delicate balance needs to be struck between the connections per second and the number or type of URLs to be retrieved. If an attempt is made to alter the mix of packet sizes returned by Reflector, the operator could suddenly find that the traffic becomes very bursty, or that the number of connections per second increases to unacceptable levels. This balancing act is very difficult to master, and it is one area where the documentation is not much help.

The operation of the GUI has improved significantly from release to release, each new release providing a noticeable increase in speed of response and making the user experience much more enjoyable. The new hardware platform also provides a welcome increase in both GUI responsiveness and traffic generation performance. Note that NSS currently uses V6.0.44 for all IPS testing - though a later version (V6.51) is available with an even more flexible and powerful user interface, V6.0.44 continues to provide the most stable and repeatable traffic generation for our IPS testing needs.
Once the tests are running, there is an excellent real-time display available at the console which provides detailed information on the progress of the test, transactions, network traffic, sessions, response times and use of resources. As each test is completed, results are written to a CSV file, which can then be imported into a third party reporting package - such as Excel - for further analysis. An extensive analysis package is also available for separate download.
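Because the results land in plain CSV, they can also be post-processed directly in a script rather than Excel. A minimal sketch with Python's standard csv module, noting that the column names used here ("elapsed", "conns_per_sec", "failed_conns") are hypothetical - real Avalanche exports use their own headers:

```python
# Sketch of post-processing an Avalanche-style results CSV in Python.
# The column names are illustrative, not the actual Avalanche export format.
import csv
import io

sample = """elapsed,conns_per_sec,failed_conns
1,45000,0
2,46200,3
3,44800,0
"""

rows = list(csv.DictReader(io.StringIO(sample)))
peak = max(int(r["conns_per_sec"]) for r in rows)       # best one-second rate
failures = sum(int(r["failed_conns"]) for r in rows)    # failures over the run
print(f"peak rate: {peak}/s, total failed connections: {failures}")
```

In practice io.StringIO would be replaced by open() on the exported results file.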
The Spirent Avalanche/Reflector equipment is one of only a handful of devices capable of performing this type of "real world" testing concentrating on layers 4 to 7, and this type of test tool is essential when attempting to replicate high levels of real-life background traffic in order to adequately test today's sophisticated network security products. The ability to generate over 2Gbps of traffic and 2 million simultaneous users in a single chassis (or two, if you want to make use of the matched Reflector unit) makes Avalanche an essential and permanent part of our standard test rig.

Spirent Communications AX/4000

The AX/4000 is a modular, multi-port system that can currently test four different transmission technologies (IP, ATM, Ethernet, and Frame Relay) simultaneously at speeds of up to 10 Gbps. Unlike software-based testing solutions, the AX/4000's FPGA hardware-based architecture is fast enough to provide more than one million full-rate measurements and statistics continuously and in real time.
The AX/4000 Generator and Generator/Analyser modules include tools for creating unlimited traffic variations and detail. Set-up wizards and logical functional blocks allow you to build complex traffic streams quickly and easily. When injected onto the network, these traffic streams can be "shaped" (to simulate constant or bursty traffic) and can even introduce error conditions. The controller software, available in both Windows and UNIX versions, has a very intuitive graphical user interface. Test set-up is logical and quick, and when tests are running the software displays real-time data and statistics that are thorough and easy to understand.

With an Ethernet control module installed in the AX/4000 chassis, the system can be connected to an Ethernet-based LAN for access by remote users. Because every test module in the chassis has its own address on the network, users can access the modules they need and leave the remainder for others to use, enabling multiple users to access the same chassis simultaneously across a network.

The AX/4000 is available with a 16-slot mainframe or a four-slot portable chassis. Both are functionally identical except for the number of available slots, and all AX/4000 components will operate in either chassis. Spirent currently produces a range of test modules to support different test requirements and speeds, including ATM, Frame Relay, Ethernet, and IP. The AX/4000 uses plug-in port interfaces to provide the physical interface for test modules, and these cards are interchangeable, allowing a single test module to perform tests with a variety of physical connections and speeds. Despite the advanced traffic generation capabilities, it is for its high-speed packet capture and network monitoring capabilities that the AX/4000 (with both fibre Gigabit and copper 10/100 ports) finds itself in the NSS test rig.
In addition to providing live statistics, the analyser can also capture traffic at full wire-rate for further analysis or protocol decoding. Captures can be triggered manually, or automatically based on specific events or errors, and can include packets or cells received before, after, or both before and after the trigger event. The AX/4000 can maintain over 125,000 simultaneous QoS measurements per port at full rate and in real time. All statistics can be saved on disk for further analysis and for printing detailed test reports.

Cisco Catalyst 6500 Series Switches

Cisco describes the Catalyst 6500 Series Switch as its premier intelligent multilayer modular switch, designed to deliver secure, converged services from the wiring closet to the core, and from the data centre to the WAN edge. Depending on the model chosen, the 6500 Series chassis can support up to 576 10/100/1000 Mbps or 1152 10/100 Mbps Ethernet ports. High-capacity network cores support multiple Gigabit and 10-Gbps trunks, and forwarding rates of up to 400 million packets per second (Mpps).
Operational consistency is provided by 3-, 6-, 9-, and 13-slot chassis configurations sharing a common set of modules, Cisco IOS Software, Cisco Catalyst Operating System Software, and network management tools.
Network security is provided by integrating existing Cisco security solutions, including intrusion detection, firewall, VPN, and Secure Sockets Layer (SSL), into existing networks. It is worth noting that, despite the multi-Gigabit throughput claims for this device, there is actually an 8Gbps throughput limit per blade/slot. This means that any blade with more than eight Gigabit ports is oversubscribed, and this fact should be taken into account when designing the network to ensure that a single blade does not become a bottleneck.

Blade Software Informer Suite

One of the perennial problems facing anyone attempting to test IDS/IPS systems - whether it is the in-depth testing NSS performs in its labs or a quick check to confirm that the new IDS/IPS sensor you just installed is picking up Sasser - is how to force the sensor to raise alerts. Hopefully, your network is not normally a hacker's playground, and so the first thing that usually happens following the installation of a new device is... nothing! And then come the false positives: the confusing (at least initially) alerts caused by legitimate traffic acting against a poorly tuned rule set. At this point the novice administrator will begin to wonder if his sensor is working correctly at all.

What he needs to do is launch some known exploits on his network to make sure the device detects them. But he is no hacker, and doesn't fancy unleashing the Sasser worm just to test his IDS/IPS implementation. So he is stuck. Or perhaps he might know enough to download and run some of the attack scripts he has located on the Internet - only to find that the scripts are launching simple "trigger packets" onto the network, which are blithely ignored by his stateful IDS/IPS as being of no consequence. Still no alerts... This is where the Informer Suite, from Blade Software (www.blade-software.com), comes in.
IDS Informer and Firewall Informer are essentially exploit replay tools, in the same vein as the open source tcpreplay and Tomahawk tools. The product is installed on a PC with two network cards - one connected to the internal interface of the product being tested, and the other connected to the external interface. The idea is that a capture file of exploit traffic (such as might be created with tcpdump or Ethereal) is taken by the Informer product and divided into the two halves of the "conversation" - one consisting of those packets sent by the client, and the other those packets sent by the server. Each packet is then replayed in the correct sequence through the correct network card in order to arrive at the appropriate interface of the sensor. As packets are placed on the wire, the IP addresses and port numbers are overwritten (if specified in the Protocol Scan File - they can be left at the original values if preferred), and the checksums are recalculated to ensure the packet contents remain valid.

Providing the capture file has been created correctly, the sensor will see the correct SYN on the correct interface, followed by the SYN ACK on the opposite interface, followed by the ACK on the first interface, and so on throughout the capture file. Thus, in theory, the sensor will see the attack as it was originally played across the wire. This does work most of the time, but it is important to realise that the Informer Suite does not actually implement a complete replacement stack, and thus with certain exploits, or with badly-crafted capture files, things can go wrong, resulting in invalid or out-of-sequence packets on the wire. When this happens, depending on where it happens in the replay, the sensor may quite legitimately ignore the "exploit" and no alert will be raised. This makes it difficult to determine whether the fault lies with the Informer Suite or with the IDS/IPS device being tested.
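The rewrite step described above - overwriting addresses and recalculating checksums so the packets remain valid - can be illustrated in a few lines of Python. This is a sketch of the general technique, not Blade Software's implementation; the addresses are arbitrary, and only the IPv4 header checksum is shown (TCP/UDP checksums would need the same treatment):

```python
# Sketch: patch the source/destination fields of a raw IPv4 header, then
# recompute the header checksum so the rewritten packet stays valid.
import ipaddress
import struct

def ipv4_checksum(header) -> int:
    """Ones'-complement sum over 16-bit words (checksum field zeroed)."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total > 0xFFFF:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def rewrite_addresses(header, new_src: str, new_dst: str) -> bytearray:
    header = bytearray(header)
    header[12:16] = ipaddress.IPv4Address(new_src).packed   # source address
    header[16:20] = ipaddress.IPv4Address(new_dst).packed   # destination address
    header[10:12] = b"\x00\x00"                             # zero old checksum
    struct.pack_into("!H", header, 10, ipv4_checksum(header))
    return header

# A minimal 20-byte IPv4 header from 172.16.5.5 to 172.16.5.4:
hdr = bytearray.fromhex("4500003c1c4640004006") + b"\x00\x00" \
      + ipaddress.IPv4Address("172.16.5.5").packed \
      + ipaddress.IPv4Address("172.16.5.4").packed
patched = rewrite_addresses(hdr, "10.10.107.106", "10.10.107.1")
print(ipaddress.IPv4Address(bytes(patched[12:16])))   # new source address
```

A receiver verifies the header by summing all words including the checksum field; a valid header folds to zero, which is also a convenient self-check after rewriting.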
The Informer Suite, therefore, is no panacea - you still need a reasonable knowledge of vulnerabilities, exploits, and what a "good" capture file should look like in order to get the most out of a product like this. In this respect, however, it is not alone, suffering the same shortcomings as all other replay tools. Nor is it applicable for all your exploit testing (it cannot handle badly fragmented traffic or capture files containing multiple sessions, for example). However, for those who prefer a Windows-based GUI to the Unix command line, the Informer Suite offers a much more usable interface than the open source offerings for basic attack recognition testing with well-crafted capture files (such as those included in the Informer exploit library).

The open source offerings provide their own advantages. Both Tomahawk and tcpreplay, for example, provide much higher transmission rates than is possible with the Informer Suite running on Windows, and Tomahawk also provides the ability to spawn multiple sessions and thus bombard the sensor with multiple attacks (and genuine traffic, if required) simultaneously. Open source tools, however, do not come with a well-stocked, ready-to-run library of exploits.
Thus we would recommend using the Informer Suite as just one of several such tools when performing IDS/IPS testing. At the end of the day, with some "unusual" exploits which do not play nicely with capture files and replay tools (and especially when testing evasion techniques involving heavy TCP/IP/RPC fragmentation), there is simply no substitute for running the original exploit (either using actual exploit code, or via tools such as the Metasploit Framework). This is what NSS does in its own tests, running a mixture of replay tools such as Firewall Informer, tcpreplay and Tomahawk with our own library of exploits, together with tools such as Metasploit, and often resorting to live exploit code where necessary. There is no one-size-fits-all shortcut or magic-bullet approach to testing signature recognition capabilities. It is not easy to do well - tools such as Informer simply make it easier.

IDS Informer was originally designed to test passive IDS devices, and Firewall Informer to test in-line devices such as firewalls and IPS. However, the line of demarcation has blurred somewhat with recent releases - IDS Informer now supports dual-NIC hosts, for example, and so is also capable of testing in-line devices. But it is Firewall Informer which we use in the NSS labs, since it retains a more useful architecture for testing in-line devices. Whereas IDS Informer allows the creation of groups of exploits, Firewall Informer allows the administrator to create Protocol Scan Files. These are collections of exploits - much like the IDS Informer groups - but within Firewall Informer it is possible to define unique source and destination IP addresses and ports for every exploit in the test. This not only ensures that different IP quads are used for each exploit run, but also that the resultant alerts are easier to spot in the IDS/IPS logs.
Vendor descriptions of alerts rarely match up to each other, to the CVE reference, or to Blade Software's own descriptions, so it makes life much easier to be able to determine that the alert logged from address 10.10.107.106 was actually generated by a particular exploit in the Protocol Scan File. Firewall Informer is actually a very simple tool in operation. A Windows-based GUI allows the administrator to define the network configuration - including source and destination IP addresses, source and destination MAC addresses, default gateway(s), and which network cards to designate as internal/external.
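The idea behind per-exploit addressing is easy to sketch: assign every exploit its own source address so that an alert in the IDS/IPS log maps straight back to the exploit that triggered it. The exploit names and the 10.10.107.0/24 subnet below (echoing the address in the example above) are purely illustrative, not the real Protocol Scan File format:

```python
# Sketch of per-exploit address assignment for log correlation.
# Exploit names and subnet are illustrative only.
import ipaddress

exploits = ["ms03-026_dcom", "slammer_udp1434", "codered_ida"]
pool = ipaddress.ip_network("10.10.107.0/24").hosts()   # .1, .2, .3, ...

# Map each exploit to a unique source address, in order.
scan_file = {name: str(next(pool)) for name in exploits}
for name, src in scan_file.items():
    print(f"{src:<15} -> {name}")
```

An alert logged from 10.10.107.1 can then be attributed unambiguously to the first exploit in the list, regardless of how the vendor chooses to name the signature.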
Once that has been done, one or more Protocol Scan Files can be created using the exploits provided as part of the Informer Suite package, or your own exploits if preferred (more on that later). Each of the attacks included with the product is a genuine exploit, most of them executed against vulnerable servers, with the traffic captured and packaged into an Informer Suite attack file. The clever part is that the software then allows you to specify your own source and target IP address and port number via the entries in the Protocol Scan File. The packets are then rebuilt on the fly using the new information, and the checksums recalculated, before being sent on their way across the network. Each entry in the file can also be given a time limit and an expected result (succeed or fail) for later comparison with the results report.

As the exploits are run, a count of the number which succeeded (i.e. where all the packets made it through the device) or failed (i.e. the exploit was blocked successfully by the device under test) is displayed on screen. A summary of which exploits succeeded and which failed is then available via the Reports option. By switching from Audit Logging to Packet Logging in the Preferences screen, it is possible to see not only which exploits failed, but which packets within the capture file were blocked by the device under test - a very useful feature.

Because the entire session was captured during the original exploit, Firewall Informer transmits both sides of the conversation exactly as it was originally seen (except using the IP, port and MAC addresses specified by the user). Thus stateful in-line products are able to track the complete session and will alert on the exploit. It is also possible to utilise "normal" network traffic captures (such as simple Web transactions), mixing them with exploits in the same scan file, but for that you will need to create your own.
In the standard product, it is not possible to create custom attack files. For that, you need a modicum of knowledge, your own exploits, a third-party packet capture program (we use the ubiquitous tcpdump or Ethereal), and the optional Attack Developer Kit (ADK). The hard part is executing the attack and capturing it cleanly to create a good capture file in the first place - generating the attack file for Informer is easy. The ADK simply takes the capture file, adds some checksums and a user-defined description, and generates a .DLL file which can then be copied to the appropriate Informer Suite attack subdirectory. Multiple attack libraries can be created and placed in separate subdirectories, from where they can be selected at run-time within Firewall Informer. In this way, it is easy to switch between different libraries - NSS uses this capability to switch between the different libraries used for each of our group test reports, for example.

So why use a replay tool at all if it is necessary to run the live exploits first to create capture files for the attack library? Firstly, many users may be perfectly satisfied using the pre-packaged exploit library provided by Blade Software as part of the product, in which case they would never have to worry about creating their own. Most IDS/IPS products will have signatures for the majority of the Informer Suite exploits these days, and so as a tool to verify that your IDS/IPS appliance is detecting, logging and alerting correctly, the Informer Suite is incredibly useful. But even for those of us who need to create our own exploits from scratch, the Informer Suite offers some key advantages - mainly in the areas of ease of use and repeatability. Some of the exploits we run as part of our standard IDS/IPS test suite are fairly complex to set up - perhaps requiring unusual combinations of operating systems and services - and it can be painful to recreate them over and over again for each product we test.
Using a packet capture tool and the ADK, we now only need to run the exploits once before converting them to Firewall Informer attack files. After that, they can be replayed quickly and easily (via a user-friendly GUI) time and time again, and we know that every device we test sees exactly the same traffic as the last one tested. We also occasionally have useful trace files sent to us by third parties. The problem with these, of course, is that the IP addresses could be anything, and often they are unusable as they stand, since the addresses used might fall outside the range allowed by the licence key of the product we are testing. The Informer Suite gives us the opportunity to use those trace files since, once they are converted using the ADK, the IP and MAC addresses can be replaced as the attacks are run.

However, the product does have a number of disadvantages (which will vary in importance or relevance depending on the user): it cannot handle fragmented traffic; badly constructed capture files can cause "misbehaviour" when replaying them; it costs money (tcpreplay and Tomahawk are free); it uses proprietary attack files (you need the ADK if you want to convert your own PCAPs to Informer DLLs); and the built-in attack library has an excess of backdoors and Trojans (many of them very obscure). The library also contains a lot of older exploits. While this is not too much of an issue when performing basic verification that an IPS/IDS is working correctly, it would be nice to see more regular updates of the library to take into account the latest reported vulnerabilities. Since Blade Software was recently acquired by RedSeal Systems, we have our doubts that this will actually happen, unfortunately. The biggest drawback with the Informer Suite, however, is the very availability of its exploit library.
It is almost impossible to use the built-in library on its own as a test for signature coverage in a competitive environment (whether it is our group tests or your own internal bake-off) because every vendor has access to it, and most of them have already ensured that they have signatures to cover it. Thus, as a comparison between products, it serves little purpose (unless you come across a "rogue" product that cannot detect most of the Informer Suite exploits, which would be odd in itself). This is why we, as a testing organisation, cannot rely on the Blade Software library entirely, and thus end up producing our own capture files for each of our group tests. Note that this is not a fault of the library itself, just the way it is being used. However, as proof that your own device is working correctly following installation, policy configuration and/or product updates, it serves a valuable purpose. If a test tool provider could produce a tool like this and release regular updates to the exploit library in the manner of the average anti-virus software vendor, administrators would finally have the capability to audit and verify the efficacy of the latest signature pack from their IDS/IPS vendor, and such a test tool would really come into its own. Unfortunately, since the acquisition of Blade Software by RedSeal Systems, it would appear that this is not going to happen with the Informer Suite. However, we are currently evaluating a similar product called TrafficIQ from Karalon (www.karalon.com), which is promising regular updates to its exploit library in the manner described above. We will be using this product for our next round of testing, and an evaluation will be included in the next edition of this report.

Other replay tools used in the NSS Group labs include tcpreplay (tcpreplay.sourceforge.net) and Tomahawk (tomahawk.sourceforge.net).
It is worth noting that most of the same caveats and problems mentioned previously regarding the Informer Suite also apply to these products, since many of those issues are common to any replay tool. While neither product offers a simple GUI interface, and neither comes with a pre-built exploit library, both tcpreplay and Tomahawk provide certain advantages over the Informer Suite which make them worthy of inclusion in your security tool-box. At the end of the day, it is worth reiterating that very often the best - and sometimes only - way to trigger an alert on your IDS/IPS is to run the live exploit.

Tomahawk is a tool for testing the performance and in-line blocking capabilities of IPS devices. Tomahawk is run on a machine with three NICs: one for management and two for testing. The two test NICs (eth0 and eth1, by default) are typically connected through a switch, crossover cable, or network-based IPS device. Briefly, Tomahawk divides a packet trace (pcap) into two parts: the packets generated by the client and those generated by the server. Tomahawk parses the packet trace one packet at a time. The first time an IP address is seen in a file, it is associated with the client if it appears in the IP source address field of the packet, or with the server if it appears in the destination field. For example, consider a pcap consisting of a standard three-way TCP handshake that contains three packets:

Packet 1 (SYN):     ip.src = 172.16.5.5  ip.dest = 172.16.5.4
Packet 2 (SYN-ACK): ip.src = 172.16.5.4  ip.dest = 172.16.5.5
Packet 3 (ACK):     ip.src = 172.16.5.5  ip.dest = 172.16.5.4

When Tomahawk reads the first packet, the address 172.16.5.5 is encountered for the first time in the source field, and the address 172.16.5.4 is encountered for the first time in the destination field. The address 172.16.5.5 is therefore associated with the client, while the address 172.16.5.4 is associated with the server.
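The first-seen classification rule described above can be sketched in a few lines of Python (a hypothetical helper for illustration, not Tomahawk's actual code):

```python
def classify_endpoints(packets):
    """Assign each IP address to 'client' or 'server' by first appearance.

    Each packet is a (src_ip, dst_ip) tuple, in capture order. The first
    time an address is seen, it becomes the client if it appears in the
    source field, or the server if it appears in the destination field.
    """
    roles = {}
    for src, dst in packets:
        if src not in roles:
            roles[src] = "client"
        if dst not in roles:
            roles[dst] = "server"
    return roles

# The three-way handshake from the example above:
handshake = [
    ("172.16.5.5", "172.16.5.4"),  # Packet 1 (SYN)
    ("172.16.5.4", "172.16.5.5"),  # Packet 2 (SYN-ACK)
    ("172.16.5.5", "172.16.5.4"),  # Packet 3 (ACK)
]
print(classify_endpoints(handshake))
# {'172.16.5.5': 'client', '172.16.5.4': 'server'}
```

Note that the rule is purely positional: an address seen for the first time in a destination field is a server forever after, which is why multiple unrelated sessions in one pcap can confuse replay tools of this kind.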
When the system replays the attack, server packets are transmitted on eth1, and client packets are transmitted on eth0. To replay the sequence above, Tomahawk begins by sending packet 1 (a client packet) over eth0. When this packet arrives on eth1, it sends packet 2 on eth1 and waits for packet 3 to arrive on eth0. When the packet arrives, Tomahawk sends packet 3 on eth0. When the last packet arrives on eth1, Tomahawk outputs a message that it has completed the pcap.

If a packet is lost, the sender retries after a timeout period. The sender infers that the packet is lost if it does not receive the next packet in sequence within the timeout. For example, if Tomahawk sends packet 2 on eth1 and does not receive it on eth0 within the timeout, it resends packet 2. If progress is not made after a specified number of retransmissions, the session is aborted and Tomahawk outputs a message indicating that the session has timed out.

To ensure that the packet is correctly routed through the switches, the Ethernet MAC addresses are rewritten when the packet is sent. In addition, the IP addresses are also rewritten and the packet's checksums updated accordingly. Thus, in the example above, when Tomahawk sends packet 1, the IP source address of the packet that appears on the wire is 10.0.0.1, and the IP destination address is 10.0.0.2 (the start IP address for a session can be specified on the command line). When the replay is finished, either because all the packets made it through or because the number of retransmissions was exceeded, Tomahawk reports whether the replay completed or timed out. When testing an IPS, if Tomahawk reports that the pcap containing the attack has timed out, then the IPS has blocked the attack. If Tomahawk reports that the pcap has completed successfully, then the IPS has missed the attack, regardless of what its log indicates.
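The send-wait-retry logic and the resulting verdict amount to a simple state machine. The sketch below is a hypothetical illustration of that logic (not Tomahawk's source): each packet is retried up to a limit, and exhausting the retries turns into a "timed out" (i.e. blocked) verdict:

```python
def replay_session(send, max_retries=3):
    """Replay a session's packets in order, retrying lost packets.

    `send(pkt)` is an assumed callback returning True if the packet
    made it through the device under test. Returns 'completed' if
    every packet got across within the retry budget, or 'timed out'
    otherwise - which, for an in-line IPS test, means the device
    blocked the session, regardless of what its log says.
    """
    for pkt in ["SYN", "SYN-ACK", "ACK"]:  # the handshake from the example
        for _attempt in range(max_retries + 1):
            if send(pkt):
                break                      # packet arrived; next packet
        else:
            return "timed out"             # no progress: IPS blocked it
    return "completed"                     # all packets passed: IPS missed it

# An "IPS" that silently drops the SYN-ACK blocks the session:
print(replay_session(lambda p: p != "SYN-ACK"))  # timed out
print(replay_session(lambda p: True))            # completed
```

The key design point, as the text notes, is that the verdict comes from observed traffic rather than from the device's own logging.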
To ramp up the bandwidth, Tomahawk can replay multiple copies of the same pcap in parallel (each copy is given its own block of IP addresses). This allows the tool to be used for more than straight exploit detection testing (as noted, Tomahawk does not come with a pre-built library of exploits - you have to produce your own from scratch). It can also be used for reliability, repeatability and performance testing. By instructing Tomahawk to run multiple packet traces simultaneously, it is possible to generate significant traffic loads from a single Tomahawk server. Bear in mind that it is possible to use clean, non-exploit packet traces too, such as simple HTTP transactions, which allows the generation of normal background traffic for your tests.

Repeatability testing ensures the IPS is deterministic. If we take a sample set of, say, 50 attacks, an IPS will perhaps block 40 and miss 10 (you can include "normal" non-exploit packet traces in the mix to ensure that some traffic will pass through the device). As a single test, that does not tell us much. But if we were to run those same 50 attacks 100,000 times each, we would expect to see 4 million sessions blocked and 1 million allowed through. In this test, however, the device is under pressure for a much greater amount of time, and this is where you may begin to see evidence of "leakage" (where some exploits are allowed through the device in error - very bad), or blocking of legitimate traffic (which may or may not be serious, depending on the amount of traffic that was blocked and the tolerance in your own environment for this condition).

Tomahawk currently has its problems and some inherent limitations. The most obvious limitation is that it can only operate across a layer 2 network.
Thus, unlike the Informer Suite, it is not possible to have the generated traffic pass through routers - instead, the device under test must be connected directly to the NICs of the Tomahawk host, or connected to them via switches. Like the Informer Suite, it cannot handle PCAPs containing badly fragmented traffic, and multiple sessions in the same PCAP can sometimes confuse it. The current version also contains a rather more serious bug: should an IPS send TCP reset packets to client and server when it drops a session, Tomahawk takes the RST arriving on the internal interface to be a packet from its own sending NIC (because it appears to come from the client), and duly reports that it has seen traffic from the client. This makes it appear as though the device failed to block the exploit when, in fact, it blocked it correctly. Currently, the only workaround is to disable RST transmission on the IPS (not always possible, depending on the device being tested).

Originally written by Matt Undy of Anzen Computing, and more recently maintained by Matt Bing of NFR and Aaron Turner, tcpreplay is a tool designed to replay saved tcpdump files at arbitrary speeds. It provides a variety of features for replaying traffic to both passive sniffer devices and in-line devices such as routers, firewalls and IPS. tcpreplay 2.x includes the following tools:

tcpreplay - the tool for replaying capture files
tcpprep - used to pre-process the capture file, performing all the calculations and packet re-writing necessary to replay the capture file across two interfaces.
The results are written to a cache file which is subsequently used by tcpreplay - the original PCAP remains untouched.
capinfo - a tool for printing statistics about capture files
pcapmerge - a tool for merging pcap files into one larger one
flowreplay - a tool for replaying connections

Although originally designed to support a single interface (for passive IDS testing), recent versions have added multiple interface support, such that tcpreplay offers similar functionality to Tomahawk and the Informer Suite when it comes to testing in-line devices. Where tcpreplay really scores, however, is in its post-processing options. IP addresses can be rewritten or randomised, MAC addresses can be rewritten, packets truncated by tcpdump can be "repaired", transmission speeds can be tightly controlled, and specific packets or ranges of packets in the pcap file can be replayed alone, ignoring the rest. Very few of these options (with the obvious exception of rewriting IP and MAC addresses) are currently supported by other tools such as Tomahawk or the Informer Suite.

One other very interesting direction being taken by the tcpreplay author is the flowreplay tool. This is intended to provide the ability to test servers and host IDS/IPS products by playing only the client side of a pcap against a real service on the target host. Although still in the early days of development, this capability is unique amongst replay tools at the moment, and one in which we will be maintaining a close interest.
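The idea behind flowreplay can be sketched as follows: send only the client payloads from a capture over a real TCP connection, and let the live service on the target generate the server side itself. This is a hypothetical illustration of the concept, not flowreplay's actual code:

```python
import socket

def replay_client_side(client_payloads, host, port, timeout=5.0):
    """Replay only the client half of a captured session against a
    live service. The server responses come from the real target,
    not the pcap, so the exchange exercises the actual server (and
    any host IDS/IPS protecting it).
    """
    with socket.create_connection((host, port), timeout=timeout) as s:
        for payload in client_payloads:
            s.sendall(payload)
            try:
                s.recv(65536)  # consume whatever the real server replies
            except socket.timeout:
                break          # server went quiet; stop replaying

# e.g. replay_client_side([b"GET / HTTP/1.0\r\n\r\n"], "10.0.0.2", 80)
```

Unlike a two-interface replay, this approach only makes sense when the protocol exchange does not depend on the exact server responses captured in the original pcap - one reason the tool is harder to get right than a straight replay.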