Snort 2.0 Results

Section 1 - Attack Recognition

Test 1.1 - Attack Recognition | Attacks | Default ARR | Custom ARR
Test 1.1.1 - Backdoors | 5 | 3 | n/a
Test 1.1.2 - DNS | 2 | 1 | n/a
Test 1.1.3 - DOS | 11 | 9 | n/a
Test 1.1.4 - False negatives (modified exploits) | 7 | 5 | n/a
Test 1.1.5 - Finger | 4 | 3 | n/a
Test 1.1.6 - FTP | 4 | 3 | n/a
Test 1.1.7 - HTTP | 35 | 26 | n/a
Test 1.1.8 - ICMP | 2 | 1 | n/a
Test 1.1.9 - Reconnaissance | 10 | 10 | n/a
Test 1.1.10 - RPC | 2 | 2 | n/a
Total | 82 | 63 / 82 (Note 1) | n/a
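As a quick sanity check, the Default Attack Recognition Rate quoted later in this review ("almost 77 per cent") can be recomputed directly from the Test 1.1 table:

```python
# Recompute the Default ARR from the Test 1.1 results above:
# 63 of 82 attacks detected with the default rule set.
detected = 3 + 1 + 9 + 5 + 3 + 3 + 26 + 1 + 10 + 2   # Default ARR column
attacks  = 5 + 2 + 11 + 7 + 4 + 4 + 35 + 2 + 10 + 2  # Attacks column

assert detected == 63 and attacks == 82
arr = 100.0 * detected / attacks
print(f"Default ARR: {detected}/{attacks} = {arr:.1f}%")  # ~76.8 per cent
```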
Test 1.2 - Resistance to False Positives | Pass/Fail
Test 1.2.1 - Audiogalaxy FTP traffic | PASS
Test 1.2.2 - Normal directory traversal (below Web root) | FAIL
Test 1.2.3 - MDAC heap overflow using GET instead of POST | FAIL
Test 1.2.4 - Retrieval of Web page containing "suspicious" URLs | PASS
Test 1.2.5 - MSTREAM communications using invalid commands | FAIL
Test 1.2.6 - Normal NetBIOS copy of "suspicious" files | PASS
Test 1.2.7 - Normal NetBIOS traffic | PASS
Test 1.2.8 - POP3 e-mail containing "suspicious" URLs | PASS
Test 1.2.9 - POP3 e-mail with "suspicious" DLL attachment | PASS
Test 1.2.10 - POP3 e-mail with "suspicious" Web page attachment | PASS
Test 1.2.11 - SMTP e-mail transfer containing "suspicious" URLs | PASS
Test 1.2.12 - SMTP e-mail transfer with "suspicious" DLL attachment | PASS
Test 1.2.13 - SMTP e-mail transfer with "suspicious" Web page attachment | PASS
Test 1.2.14 - SNMP V3 packet with invalid request ID | FAIL
Total Passed | 10 / 14
Section 2 - NIDS Performance Under Load

Test 2.1 - UDP traffic to random valid ports | 25Mbps | 50Mbps | 75Mbps | 100Mbps | Max
Test 2.1.1 - 64 byte packet test - max 148,000pps | 100% | 85% | 46% | 26% | 35Mbps
Test 2.1.2 - 440 byte packet test - max 26,000pps | 100% | 100% | 100% | 100% | 100Mbps
Test 2.1.3 - 1514 byte packet test - max 8,172pps | 100% | 100% | 100% | 100% | 100Mbps
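The "max pps" figures in Test 2.1 are roughly the theoretical line-rate limits of 100Mbps Ethernet. As a back-of-envelope check (assuming the usual 20 bytes of per-frame preamble and inter-frame gap; NSS does not publish its exact calculation), the rates can be reproduced to within a few per cent:

```python
# Back-of-envelope check of the maximum packet rates quoted in Test 2.1.
# On Ethernet, each frame also consumes ~20 bytes on the wire
# (8-byte preamble + 12-byte inter-frame gap).
LINE_RATE = 100_000_000  # 100Mbps, in bits per second
OVERHEAD = 20            # preamble + inter-frame gap, in bytes

for size in (64, 440, 1514):
    pps = LINE_RATE / ((size + OVERHEAD) * 8)
    print(f"{size:>4} byte packets: ~{pps:,.0f} pps")
```

This yields roughly 149,000, 27,000 and 8,100 pps respectively, closely matching the table's quoted maxima.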
Test 2.2 - HTTP "maximum stress" traffic with no transaction delays | 25Mbps | 50Mbps | 75Mbps | 100Mbps | Max
Test 2.2.1 - Max 250 connections per second - ave packet size 1200 bytes - max 10,000 packets per second | 100% | 100% | 100% | 100% | 100Mbps
Test 2.2.2 - Max 500 connections per second - ave packet size 540 bytes - max 23,000 packets per second | 100% | 100% | 100% | 100% | 100Mbps
Test 2.2.3 - Max 1000 connections per second - ave packet size 440 bytes - max 28,000 packets per second | 100% | 100% | 100% | 100% | 100Mbps
Test 2.2.4 - Max 2000 connections per second - ave packet size 350 bytes - max 36,000 packets per second | 100% | 100% | 100% | 100% | 100Mbps
Test 2.3 - HTTP "maximum stress" traffic with transaction delays | 25Mbps | 50Mbps | 75Mbps | 100Mbps | Max
Test 2.3.1 - Max 500 connections per second - ave packet size 540 bytes - max 23,000 packets per second - 10 sec delay - max 5,000 open connections | 100% | 100% | 100% | 100% | 100Mbps
Test 2.3.2 - Max 1000 connections per second - ave packet size 440 bytes - max 10,000 packets per second - 10 sec delay - max 5,000 open connections | 100% | 100% | 100% | 100% | 100Mbps
Test 2.4 - Protocol mix | 25Mbps | 50Mbps | 75Mbps | 100Mbps | Max
Test 2.4.1 - 72% HTTP (540 byte packets) + 20% FTP + 4% UDP (256 byte packets). Max 38 connections per second - ave packet size 555 bytes - max 2,200 packets per second - max 14 open connections | 100% | 100% | 100% | 100% | 100Mbps
Test 2.5 - Real World traffic | 25Mbps | 50Mbps | 75Mbps | 100Mbps | Max
Test 2.5.1 - Pure HTTP (simulated browsing session on NSS Web site). Max 10 connections per second - 3 new users per second - ave packet size 1000 bytes - max 11,000 packets per second | 100% | 100% | 100% | 100% | 100Mbps
Section 3 - Network IDS Evasion

Test 3.1 - Evasion Baselines | Detected?
Test 3.1.1 - NSS Back Orifice ping | YES
Test 3.1.2 - Back Orifice connection | YES
Test 3.1.3 - FTP CWD root | YES
Test 3.1.4 - Fragroute baseline (test-cgi probe using HEAD) | YES
Test 3.1.5 - ISAPI printer overflow | YES
Test 3.1.6 - Showmount export lists | YES
Test 3.1.7 - Test CGI probe (/cgi-bin/test-cgi) | YES
Test 3.1.8 - PHF remote command execution | NO
Test 3.1.9 - Whisker baseline (test-cgi probe using HEAD) | YES
Total | 8 / 9
Test 3.2 - Packet Fragmentation/Stream Segmentation | Detected? | Decoded?
Test 3.2.1 - IP fragmentation - ordered 8 byte fragments | YES | YES
Test 3.2.2 - IP fragmentation - ordered 24 byte fragments | YES | YES
Test 3.2.3 - IP fragmentation - out of order 8 byte fragments | YES | YES
Test 3.2.4 - IP fragmentation - ordered 8 byte fragments, duplicate last packet | YES | YES
Test 3.2.5 - IP fragmentation - out of order 8 byte fragments, duplicate last packet | YES | YES
Test 3.2.6 - IP fragmentation - ordered 8 byte fragments, reorder fragments in reverse | YES | YES
Test 3.2.7 - IP fragmentation - ordered 16 byte fragments, fragment overlap (favour new) | YES | YES
Test 3.2.8 - IP fragmentation - ordered 16 byte fragments, fragment overlap (favour old) | YES | YES
Test 3.2.9 - TCP segmentation - ordered 1 byte segments, interleaved duplicate segments with invalid TCP checksums | YES | NO
Test 3.2.10 - TCP segmentation - ordered 1 byte segments, interleaved duplicate segments with null TCP control flags | YES | NO
Test 3.2.11 - TCP segmentation - ordered 1 byte segments, interleaved duplicate segments with requests to resync sequence numbers mid-stream | YES | NO
Test 3.2.12 - TCP segmentation - ordered 1 byte segments, duplicate last packet | YES | NO
Test 3.2.13 - TCP segmentation - ordered 2 byte segments, segment overlap (favour new) | NO | NO
Test 3.2.14 - TCP segmentation - ordered 1 byte segments, interleaved duplicate segments with out-of-window sequence numbers | YES | NO
Test 3.2.15 - TCP segmentation - out of order 1 byte segments | YES | NO
Test 3.2.16 - TCP segmentation - out of order 1 byte segments, interleaved duplicate segments with faked retransmits | YES | NO
Test 3.2.17 - TCP segmentation - ordered 1 byte segments, segment overlap (favour new) | YES | NO
Test 3.2.18 - TCP segmentation - out of order 1 byte segments, PAWS elimination (interleaved duplicate segments with older TCP timestamp options) | YES | NO
Test 3.2.19 - IP fragmentation - out of order 8 byte fragments, interleaved duplicate packets scheduled for later delivery | YES | YES
Total | 18 / 19 | 9 / 19
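Most of the fragmentation cases in Test 3.2 map directly onto fragroute directives. As a hedged illustration only (directive syntax per the fragroute man page; the exact configurations NSS used are not published), Test 3.2.3 - out-of-order 8 byte fragments - could be reproduced with a config along these lines:

```
# fragroute.conf sketch for an out-of-order 8-byte fragment run (cf. Test 3.2.3)
ip_frag 8        # split outbound IP traffic into 8-byte fragments
order random     # transmit the fragments out of order
print            # log each packet as it leaves
```

The TCP cases use `tcp_seg` in the same way (e.g. `tcp_seg 1 new` for 1-byte segments favouring newer data on overlap, as in Test 3.2.17).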
Test 3.3 - URL Obfuscation | Detected? | Decoded?
Test 3.3.1 - URL encoding | YES | YES
Test 3.3.2 - /./ directory insertion | YES | YES
Test 3.3.3 - Premature URL ending | YES | YES
Test 3.3.4 - Long URL | YES | NO
Test 3.3.5 - Fake parameter | YES | YES
Test 3.3.6 - TAB separation | YES | YES
Test 3.3.7 - Case sensitivity | YES | YES
Test 3.3.8 - Windows \ delimiter | YES | YES
Test 3.3.9 - Session splicing | YES | YES
Total | 9 / 9 | 8 / 9
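To make the Test 3.3 categories concrete, here is a minimal sketch (our own illustration, not NSS test code) of four of these obfuscation techniques applied to a test-cgi probe. An IDS must canonicalise each variant back to the same path before signature matching, which is exactly what the "Decoded?" column measures:

```python
from urllib.parse import unquote

# Four of the URL obfuscation techniques from Test 3.3, applied to a
# /cgi-bin/test-cgi probe (illustrative sketch only).
variants = {
    "URL encoding (3.3.1)":        "/%63%67%69%2d%62%69%6e/test-cgi",
    "/./ insertion (3.3.2)":       "/./cgi-bin/./test-cgi",
    "case sensitivity (3.3.7)":    "/CGI-BIN/TEST-CGI",
    "Windows \\ delimiter (3.3.8)": "\\cgi-bin\\test-cgi",
}

def canonicalise(url: str) -> str:
    """Minimal normalisation: percent-decode, unify separators, strip /./ and case."""
    url = unquote(url)            # undo %xx encoding
    url = url.replace("\\", "/")  # treat backslash as a path delimiter
    url = url.replace("/./", "/")  # remove self-referencing directories
    return url.lower()

for name, url in variants.items():
    print(f"{name:30} {url:34} -> {canonicalise(url)}")
```

All four variants canonicalise to `/cgi-bin/test-cgi`.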
Test 3.4 - Miscellaneous Obfuscation Techniques | Detected? | Decoded?
Test 3.4.1 - Altering default ports | YES | YES
Test 3.4.2 - Inserting spaces in FTP command lines | YES | YES
Test 3.4.3 - Inserting non-text Telnet opcodes in FTP data stream | YES | YES
Test 3.4.4 - Altering protocol and RPC PROC numbers | YES | YES
Test 3.4.5 - RPC record fragging | YES | YES
Test 3.4.6 - Polymorphic mutation (ADMmutate) | YES | YES
Total | 6 / 6 | 6 / 6
Section 4 - Stateful Operation Test

Test 4.1 - Attack Replay | Alerts? | DOS? | Notes
Test 4.1.1 - Snot Traffic | 861 | NO | Mainly spp_stream4 stealth activity, plus invalid headers, UDP + ICMP alerts, DDOS and backdoor - poor performance
Test 4.1.2 - Stick Traffic | 923 | NO | Mainly spp_stream4 stealth activity, plus invalid headers, UDP + ICMP alerts, DDOS and backdoor - poor performance
Test 4.2 - Simultaneous Open Connections (default settings)
Number of open connections | 10,000 | 25,000 | 50,000 | 100,000 | 250,000 | 500,000 | 1,000,000
Test 4.2.1 - Attack Detection | PASS | PASS | PASS | PASS | PASS | PASS | PASS
Test 4.2.2 - State Preservation | n/a (Note 2) | n/a (Note 2) | n/a (Note 2) | n/a (Note 2) | n/a (Note 2) | n/a (Note 2) | n/a (Note 2)
Test 4.3 - Simultaneous Open Connections (after tuning)
Number of open connections | 10,000 | 25,000 | 50,000 | 100,000 | 250,000 | 500,000 | 1,000,000
Test 4.3.1 - Attack Detection | PASS | PASS | PASS | PASS | PASS | PASS | PASS
Test 4.3.2 - State Preservation | n/a (Note 2) | n/a (Note 2) | n/a (Note 2) | n/a (Note 2) | n/a (Note 2) | n/a (Note 2) | n/a (Note 2)
Notes:

1. Using default snort.conf and rule sets as downloaded from snort.org, plus the BACKDOOR and SHELLCODE rules, which we enabled.
2. We were unable to run the open connections tests, since there was no signature to handle the exploit used in these tests. The default is 3,000 open connections; Sourcefire claims Snort has been tested to 1 million connections (given enough RAM).

We installed one Snort sensor with the latest signature pack downloaded from the snort.org Web site. We used all the default settings in snort.conf, apart from enabling the BACKDOOR and SHELLCODE rule sets (see Note 1).
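For readers wanting to raise the 3,000 open connection default mentioned in Note 2, the relevant knobs live in the stream4 preprocessor lines of snort.conf. The following is a sketch only - the values are illustrative (not from this review) and option support varies between Snort 2.x releases, so check the comments in your own snort.conf:

```
# snort.conf - stream4 tuning sketch (values illustrative)
# Raise the memory ceiling so stream4 can track many more concurrent
# sessions, keeping the default scan detection enabled.
preprocessor stream4: detect_scans, disable_evasion_alerts, memcap 134217728, timeout 60

# Reassemble client-side streams on all ports for signature matching.
preprocessor stream4_reassemble: clientonly, ports all
```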
The platform was FreeBSD 4.5 running on a SuperMicro SuperServer 6012P-6 with dual 1.8GHz Pentium 4 processors and 2GB RAM. The built-in Intel Pro/100 network interface was used for sniffing, running the default FreeBSD drivers.

As you would expect, the raw nature of Snort's alerting made it less straightforward than with the other IDS products under test to determine exactly how many attacks were detected during our performance testing. This is a typical problem with Snort "out of the box" that can be solved by installing one of a number of third-party reporting/alerting packages.

Centralised management and policy deployment capabilities are also non-existent in the basic product; once again, solutions can be found in both the Open Source and commercial space to alleviate this problem. Unless you are conversant with Open Source software, Snort itself, and at least one supported OS distribution (with its compilers and tools), basic Snort is not for you, and you should be looking at the various bundled appliance offerings from commercial sources.

Attack recognition was very good out of the box - much better than with previous versions - with a Default Attack Recognition Rate of almost 77 per cent. It would not be too difficult to improve this further with some custom signatures. Snort was the only product where we did not have the option to allow the vendor to produce a custom signature set for our test cases, so we have shown only the Default ARR in the tables above.

The standard signature set has shown a huge increase in coverage and overall quality since Sourcefire took Snort down the commercial route, and this has improved further with the increased features provided by version 2.0 and above. We noted a significant reduction in false positives during this round of testing, and although Snort still shows a higher tendency to produce false positives than many commercial offerings, it could in no way be considered unmanageable.
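To illustrate the kind of custom signature referred to above, a simple local rule for the test-cgi probe used in our evasion baselines might look like the following. This rule is our own illustration, not part of the distributed snort.org rule set; by convention, sid values of 1,000,000 and above are reserved for local rules:

```
# Illustrative local rule (not from the snort.org rule set)
alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS 80 \
    (msg:"LOCAL WEB-CGI test-cgi access"; flow:to_server,established; \
     uricontent:"/cgi-bin/test-cgi"; nocase; \
     classtype:attempted-recon; sid:1000001; rev:1;)
```

Note the use of `uricontent` rather than `content`: it matches against the normalised URI produced by the http_decode preprocessor, which is what defeats simple URL obfuscation.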
Also, where genuine attacks are detected, the alert descriptions tend to be very accurate - there are not many "generic" alerts in the Snort signature set. Almost all of our "false negative" (modified exploit) cases were detected correctly, demonstrating that many of the new Snort signatures are designed to detect the underlying vulnerability rather than a specific exploit.

One thing we have mentioned before, but are still hoping to see in a future release, is a preprocessor to detect common flood attacks (which depend on x packets arriving over y seconds), such as ping floods, SYN floods, and so on. This is a sad omission in a commercial-grade IDS.

The product demonstrated reasonable resistance to most of our evasion techniques. Whisker was fairly well covered, and most of the fragroute attacks were detected, although the product had difficulty decoding some of them successfully. This actually seems to be a step backwards for this release, and the main issue appears to be with TCP segment reassembly. All of our other attempted evasion techniques - including ADMmutate, RPC record fragging, and so on - were handled well.

The one test we were not able to complete was our open connections test. Due to the lack of a signature for the specially crafted exploit we use in this test, we were unable to determine the maximum number of open connections across which Snort is capable of maintaining state, though the product was certainly capable of detecting new exploits at all levels up to 1 million connections.

In terms of detection rates, Snort performed reasonably well in our small packet tests, and demonstrated excellent performance in our real world tests, achieving a clean 100 per cent across the board.
This is another huge improvement over Snort 1.8, and demonstrates the increased efficiency of the new Snort architecture, especially when handling large volumes of HTTP traffic - an area which caused significant problems in previous versions of the product.

It is worth pointing out that these high levels of performance were achieved by running Snort on a well-tuned OS and a well-specified hardware platform. Because Snort is free, many people try to implement an IDS sensor using any old PC they happen to have lying around, with the first network card they come across in the cupboard under the stairs. This is fine if all you want to protect is a dial-up or DSL connection at home or in a small business, and there is even a configuration option that provides a low-memory operating mode for lower-powered hardware.

For those who are serious about protecting a corporate network at 100Mbps or beyond, however, you need to be looking at using at least the level of hardware we used for this test - i.e. a server-class chip set, reasonably fast up-to-date processors, plenty of RAM, and a good network card (we would use nothing but Intel for sniffing on 100Mbps networks). The cost of this was in the region of $2000-2500. We also prefer BSD over Linux for running Snort, having seen huge performance improvements in the past simply by moving Snort from Linux to FreeBSD on the same hardware.

At this point, we should point out that we have not tested Snort on more recent Linux releases using ring-buffered pcap, and this could make a significant difference to performance. This is certainly a test we would like to run for ourselves in the future, but for now, FreeBSD is the OS of choice for Snort within our organisation - YMMV!

The Open Source Snort product has continued to improve at a rapid rate since its creators took the product commercial with the launch of Sourcefire.
The promised performance enhancements in release 2.0 seem to have materialised in spectacular fashion, transforming Snort from a useful, if restricted, offering into a full-blown commercial-grade IDS sensor, especially when teamed with a thoughtfully and sympathetically specified hardware platform.

On the kind of hardware platform we used for these tests, we would be quite happy to rate Snort 2.0 as a 100Mbps product, and believe it would be possible to handle even greater bandwidth with the right hardware.