IDS Performance Testing

In addition to thoroughly evaluating each product in a controlled environment that was as close to a real-life network as we could make it, we performed extensive tests against the Network IDS products, and more limited testing against the Host-based IDS products. We evaluated each IDS product carefully, paying particular attention to a range of key features. These are the sort of questions you should be asking your IDS vendor:
We put these questions directly to each of the vendors, and their replies are reproduced unedited in Appendix A.

The Tests

Our standard test bed for this project consisted of Pentium III 1000MHz PCs, each with 768MB RAM, running Windows 2000 SP2, FreeBSD 4.4, or Red Hat Linux 6.2/7.1 (depending on the requirements of the product under test). All installations were on "clean" machines, restored between tests from a standard Symantec Ghost image. The network was 100Mbit Ethernet with CAT 5 cabling, Intel NetStructure 480T Routing Switches, and Intel auto-sensing 10/100 network cards installed in each host. In all cases, the Intel drivers provided with the network cards were used during the tests.

We installed one network agent or one system agent on a dual-homed PC (the "sensor") on the target subnet, or installed the IDS appliance as provided by the vendor. There was no firewall protecting the target subnet. The IDS sensors were bound to one network interface in "stealth mode" (i.e. with no IP address) wherever that was supported, and the second interface was used to connect the IDS sensor to the management console on a private subnet. This ensured that the IDS sensor and console could communicate even when the target subnet was subjected to heavy loads, and it also prevented attacks on the console itself. Multiple remote IDS sensors were installed across multiple subnets, behind a router and an open firewall, in order to test deployment and management features. Note that the latest signature pack was acquired from the vendor in each case, and sensors were deployed with all available signatures enabled.

IDS Test 1 - Attack Recognition

A range of common exploits and scans was run using various commercial and "underground" utilities (including boping/bosting, targa, nmap, netcat, hping, Aggressor, Nessus, etc.) in addition to custom-written C programs, shell scripts and replays of captured network traffic (a replay sketch in this spirit follows below). Our attacks covered the following areas:
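As a rough illustration of the replay-driven part of this mix, the sketch below uses scapy to re-inject a previously captured exploit session onto the target segment. The capture file name and interface name are assumptions for illustration, not the actual test harness.

```python
# Minimal sketch of replaying a captured attack session onto the target segment.
# Assumptions: scapy is available, "captured_exploit.pcap" holds a previously
# recorded exploit session, and eth0 is the interface facing the target subnet.
from scapy.all import rdpcap, sendp

packets = rdpcap("captured_exploit.pcap")   # read the recorded attack traffic
sendp(packets, iface="eth0", inter=0.005)   # replay the frames onto the wire
```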
All attacks were aimed at a range of machines on the target subnet (excluding the IDS sensor itself), each with different operating systems and applications installed. The machine running the exploits was on the same subnet (except when running fragrouter attacks) in order to provide maximum performance. As a baseline to determine how effective the products were at detecting a basic set of attacks, all attacks were run with no load on the network and no IP fragmentation. We were careful to ensure that the appropriate target applications and servers were installed on the network against which to launch our exploits, since purely "scripted" attacks would otherwise be unlikely to trigger stateful IDS products.

We would expect all attacks to be reported in as straightforward and clear a manner as possible (i.e. an "RDS MDAC attack" should be reported as such, rather than as a "Generic IIS Attack"). It should not be necessary for the administrator to deduce which attacks have taken place by examining obscure log entries.

Note that there is no way we can test the effectiveness of every signature on every IDS product, and it is becoming very hard to determine a realistic subset of exploits that gives an accurate indication of the overall effectiveness of a particular IDS product. For these tests, we selected a range of commonly available exploits where either the code or the idea could easily be acquired from the Internet, or where the functionality exists in commonly available programs (such as nmap or Nessus). Future tests are likely to focus on the appropriate vulnerabilities specified in the latest SANS Top 20 and/or the ICAT Top 10 vulnerability lists, in an attempt to determine how IDS products handle the most common forms of exploit and information-gathering techniques faced by security administrators "in the wild". The methodology will also be enhanced to highlight the accuracy of attack detection and the resulting alerts/reports, together with the ability to minimise false positives wherever possible.

IDS Test 2 - Performance Under Load

We utilised a simple BackOrifice ping program called boping to produce a stream of 10,000 BackOrifice pings to the target server. This server had bosting installed, a "listener" service for boping whose sole purpose is to count the total number of BackOrifice pings received. We then compared the figure returned by bosting with the number of boping attacks detected by the IDS sensor, which allowed us to relate detection figures accurately to the number of attacks actually seen on the wire at varying network loads (bear in mind that at high network loads, even our attacking programs would have problems inserting packets onto the wire). A minimal sketch of this calculation appears after the traffic descriptions below. We expect all IDS products under test to be able to detect all the boping packets at 0 per cent network load.

The tests were then repeated with varying levels of background traffic, as follows:

Small (64 byte) packets with valid source/destination IP addresses and ports. This is our "torture test", designed to indicate the raw sniffing capability of the IDS sensor. If a sensor detects 100 per cent of attacks at 100 per cent load in this test, it can handle anything that is likely to be thrown at it. Note, however, that an inability to achieve 100 per cent across the board in this test is not necessarily indicative of a poor product, since this type of load should never be seen on a live network; as mentioned earlier, this test is mainly an indication of raw sniffing speed. Tests were repeated at the following loads:
"Real world" packet mix based on extensive analysis of our own network. This comprises a mix of packet sizes (from 64 to 1514 bytes) and a range of protocols (predominantly TCP, but with some UDP and ICMP). All packets contain valid payload and address data, and this test provides a reasonable representation of a live network at various network loads. We would hope that each IDS sensor would return good performance figures across the board here: certainly in excess of a 90 per cent detection rate at 100 per cent load. Tests were repeated at the following loads:
Large (1514 byte) packets containing valid payload and address data. This test is the complete opposite of the 64 byte packet test, in that we would expect every single product to be capable of returning 100 per cent detection rates across the board when using only 1514 byte packets. We have included this test mainly to demonstrate how easy it is to achieve good results using large packets; beware of test results that quote performance figures only for similar packet sizes. Tests were repeated at the following loads:
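Two numbers underpin the results of this test: the packet-per-second rate needed to generate a given background load at a given frame size, and the proportion of the 10,000 boping pings (as counted by bosting) that the sensor actually reported. The sketch below works through both; the alert counts near the end are illustrative assumptions, not measured results.

```python
# Minimal sketch of the Test 2 arithmetic. The line rate and Ethernet framing
# overhead are standard figures; the counts further down are assumptions used
# purely to show how the detection rate is derived.

LINE_RATE_BPS = 100_000_000   # 100Mbit Ethernet
FRAME_OVERHEAD = 20           # 8-byte preamble + 12-byte inter-frame gap per frame

def pps_for_load(frame_bytes, load_percent):
    """Packets per second required to generate a given load at a given frame size."""
    bits_per_frame = (frame_bytes + FRAME_OVERHEAD) * 8
    return LINE_RATE_BPS * (load_percent / 100.0) / bits_per_frame

# 100 per cent load is close to 149,000 pps with 64-byte frames,
# but only around 8,150 pps with 1514-byte frames.
for size in (64, 512, 1514):
    print("%4d-byte frames: %8.0f pps at 100%% load" % (size, pps_for_load(size, 100)))

# Detection rate: alerts logged by the sensor versus pings counted by bosting.
pings_counted_by_bosting = 10_000   # assumption: every ping reached the wire
alerts_logged_by_sensor = 9_650     # assumption: value read from the IDS console
rate = 100.0 * alerts_logged_by_sensor / pings_counted_by_bosting
print("detection rate: %.1f%%" % rate)
```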
Packet generation was accomplished using an Adtech AX/4000 Broadband Test System with a 10/100Mbps module, and a SmartBits SMB6000 with LAN-3101A 10/100Mbps SmartMetrics and LAN-3301A 10/100/1000Mbps TeraMetrics cards installed. A constant stream of the appropriate mix of packets was injected onto the target segment during our tests, and the percentage load and pps figures were verified with two independent network monitoring tools before each test began. Multiple tests were run and averages taken where necessary.

Future tests will continue to enhance the "real world" packet mix (perhaps combining CAIDA (Cooperative Association for Internet Data Analysis) research with our own) and expand the scope of the background traffic by including complete sessions, as well as varying the number of hosts incrementally. The ultimate aim is to create an environment in the lab that is as close to a real-world scenario as possible, whilst allowing us to ensure complete repeatability from test to test.

IDS Test 3 - IDS Evasion Techniques

We ran a subset of our basic common attacks across a router with normal IP forwarding in place to establish a baseline. The tests were then repeated, running the attacks through fragrouter and employing various IDS evasion techniques, including (but not limited to):
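One technique in this family can be sketched at the packet level: splitting an attack datagram into ordered 8-byte IP fragments so that a sensor which does not reassemble fragments never sees the contiguous signature string. The scapy-based example below is purely illustrative (it is not fragrouter itself), and the target address, port and payload are assumptions.

```python
# Illustrative sketch of tiny-fragment evasion (not the actual fragrouter tool).
# Assumptions: scapy is available and 192.168.1.10 is a web server on the target subnet.
# Note: a real exploit would ride an established TCP session; this only shows how the
# payload can be scattered across many small IP fragments on the wire.
from scapy.all import IP, TCP, Raw, fragment, send

attack = IP(dst="192.168.1.10") / TCP(dport=80) / Raw(b"GET /cgi-bin/test-cgi HTTP/1.0\r\n\r\n")
tiny_frags = fragment(attack, fragsize=8)   # ordered 8-byte IP fragments
send(tiny_frags)                            # the end host reassembles them as normal
```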
Secondly, we ran a basic WWW CGI scan of the target machine using Whisker to verify that the IDS could detect such an attack. We then repeated the test using various IDS evasion techniques, including (but not limited to):
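A rough sketch of one tactic of this kind, with the target address and CGI path as assumptions: the requested URL is hex-encoded so that a naive string match on the CGI name fails, while the web server still decodes the request normally.

```python
# Illustrative sketch of URL hex-encoding evasion during a CGI scan (not Whisker itself).
# Assumptions: 192.168.1.10 runs the target web server and /cgi-bin/test-cgi exists.
import socket

target = ("192.168.1.10", 80)
path = "/cgi-bin/test-cgi"

# Encode every letter as %xx; the server decodes it, but a literal signature match fails.
encoded = "".join("%%%02x" % ord(c) if c.isalpha() else c for c in path)

request = "GET %s HTTP/1.0\r\n\r\n" % encoded
sock = socket.create_connection(target)
sock.sendall(request.encode())
print(sock.recv(4096).decode(errors="replace"))
sock.close()
```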
IDS Test 4 - Stateful Operation Test

We used tools such as Stick and Snot to generate large numbers of false alerts on the protected subnet, using valid source and destination addresses and a range of protocols. During the attack, we also launched a subset of our basic common exploits to determine whether the IDS sensor would continue to detect and alert on "real" attacks during a Stick/Snot flood. The effect on overall sensor performance and logging capability was noted. Future tests will attempt to stress the stateful architecture directly by overloading the buffers used to track valid sessions.

IDS Test 5 - Host Performance

We ran a series of attacks against a host-based system which were designed to generate a large number of alerts. CPU and memory usage were monitored during this phase in order to gauge the impact the host engine had on system performance. Network load was also monitored as the host engine reported its findings to the central console, in order to gauge the impact the host engine has on network performance.
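A minimal sketch of this kind of measurement, assuming psutil and a hypothetical agent process name of "ids_agent"; the actual tests relied on the platforms' own performance monitors rather than a script like this.

```python
# Illustrative sketch of sampling host CPU and memory while the host-based agent
# handles an alert flood. psutil and the process name "ids_agent" are assumptions.
import psutil

def find_agent(name="ids_agent"):
    """Locate the host IDS agent process by name, if it is running."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == name:
            return proc
    return None

agent = find_agent()
samples = []
for _ in range(60):                              # one sample per second for a minute
    cpu = psutil.cpu_percent(interval=1.0)       # system-wide CPU during the attack run
    mem = psutil.virtual_memory().percent        # overall memory utilisation
    agent_cpu = agent.cpu_percent() if agent else None
    samples.append((cpu, mem, agent_cpu))

print("peak system CPU: %.1f%%" % max(s[0] for s in samples))
print("peak memory use: %.1f%%" % max(s[1] for s in samples))
```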