
NFR NID-310 V3.2.1 Test Results

Section 1 - Detection Engine

Test 1.1 - Attack Recognition

Test                                             | Attacks | Default ARR | Custom ARR
-------------------------------------------------|---------|-------------|-----------
Test 1.1.1 - Backdoors                           | 5       | 2           | 5 [1]
Test 1.1.2 - DNS                                 | 2       | 1           | 2
Test 1.1.3 - DOS                                 | 11      | 4 [2]       | 10 [2]
Test 1.1.4 - False negatives (modified exploits) | 7       | 4           | 6
Test 1.1.5 - Finger                              | 4       | 4           | 4
Test 1.1.6 - FTP                                 | 4       | 4           | 4
Test 1.1.7 - HTTP                                | 35      | 24          | 33
Test 1.1.8 - ICMP                                | 2       | 2           | 2
Test 1.1.9 - Reconnaissance                      | 10      | 6 [2]       | 6 [2]
Test 1.1.10 - RPC                                | 2       | 1           | 2
Total                                            | 82      | 52 / 82     | 74 / 82

 

Test 1.2 - Resistance to False Positives

Test                                                                     | Pass/Fail
-------------------------------------------------------------------------|----------
Test 1.2.1 - Audiogalaxy FTP traffic                                     | PASS
Test 1.2.2 - Normal directory traversal (below Web root)                 | PASS
Test 1.2.3 - MDAC heap overflow using GET instead of POST                | PASS
Test 1.2.4 - Retrieval of Web page containing "suspicious" URLs          | PASS
Test 1.2.5 - MSTREAM communications using invalid commands               | FAIL
Test 1.2.6 - Normal NetBIOS copy of "suspicious" files                   | PASS
Test 1.2.7 - Normal NetBIOS traffic                                      | PASS
Test 1.2.8 - POP3 e-mail containing "suspicious" URLs                    | PASS
Test 1.2.9 - POP3 e-mail with "suspicious" DLL attachment                | PASS
Test 1.2.10 - POP3 e-mail with "suspicious" Web page attachment          | PASS
Test 1.2.11 - SMTP e-mail transfer containing "suspicious" URLs          | PASS
Test 1.2.12 - SMTP e-mail transfer with "suspicious" DLL attachment      | PASS
Test 1.2.13 - SMTP e-mail transfer with "suspicious" Web page attachment | PASS
Test 1.2.14 - SNMP V3 packet with invalid request ID                     | PASS
Total Passed                                                             | 13 / 14

Section 2 - NIDS Performance Under Load

Test 2.1 - UDP traffic to random valid ports

Test                                              | 25Mbps | 50Mbps | 75Mbps | 100Mbps | Max
--------------------------------------------------|--------|--------|--------|---------|--------
Test 2.1.1 - 64 byte packet test (max 148,000pps) | 100%   | 78%    | 51%    | 26%     | 45Mbps
Test 2.1.2 - 440 byte packet test (max 26,000pps) | 100%   | 100%   | 100%   | 100%    | 100Mbps
Test 2.1.3 - 1514 byte packet test (max 8,172pps) | 100%   | 100%   | 100%   | 100%    | 100Mbps
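The per-size packet-rate ceilings above follow from Ethernet framing arithmetic: every frame carries a fixed 20 bytes of wire overhead (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte inter-frame gap; these are standard Ethernet figures, not from the report). A quick sketch:

```python
def max_pps(frame_bytes: int, link_bps: int = 100_000_000) -> int:
    """Theoretical maximum packets per second on an Ethernet link.

    Each frame occupies an extra 20 bytes on the wire: the 7-byte
    preamble, 1-byte start-of-frame delimiter and 12-byte inter-frame gap.
    """
    wire_bits = (frame_bytes + 20) * 8
    return link_bps // wire_bits

print(max_pps(64))   # 148809, i.e. the "max 148,000pps" ceiling in Test 2.1.1
```

The 440- and 1514-byte ceilings quoted in the table are in the same region as, but not identical to, what this formula gives, so they were presumably measured or rounded rather than computed this way.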

 

Test 2.2 - HTTP "maximum stress" traffic with no transaction delays

Test                                                                                                     | 25Mbps | 50Mbps | 75Mbps | 100Mbps | Max
---------------------------------------------------------------------------------------------------------|--------|--------|--------|---------|--------
Test 2.2.1 - Max 250 connections per second - ave packet size 1200 bytes - max 10,000 packets per second | 100%   | 100%   | 100%   | 100%    | 100Mbps
Test 2.2.2 - Max 500 connections per second - ave packet size 540 bytes - max 23,000 packets per second  | 100%   | 100%   | 100%   | 100%    | 100Mbps
Test 2.2.3 - Max 1000 connections per second - ave packet size 440 bytes - max 28,000 packets per second | 100%   | 100%   | 100%   | 100%    | 100Mbps
Test 2.2.4 - Max 2000 connections per second - ave packet size 350 bytes - max 36,000 packets per second | 100%   | 100%   | 100%   | 100%    | 100Mbps

 

Test 2.3 - HTTP "maximum stress" traffic with transaction delays

Test                                                                                                                                                 | 25Mbps | 50Mbps | 75Mbps | 100Mbps | Max
-----------------------------------------------------------------------------------------------------------------------------------------------------|--------|--------|--------|---------|--------
Test 2.3.1 - Max 500 connections per second - ave packet size 540 bytes - max 23,000 packets per second - 10 sec delay - max 5,000 open connections  | 100%   | 100%   | 100%   | 100%    | 100Mbps
Test 2.3.2 - Max 1000 connections per second - ave packet size 440 bytes - max 10,000 packets per second - 10 sec delay - max 5,000 open connections | 100%   | 100%   | 100%   | 100%    | 100Mbps

  

Test 2.4 - Protocol mix

Test                                                                                                                                                                                                 | 25Mbps | 50Mbps | 75Mbps | 100Mbps | Max
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------|--------|--------|---------|--------
Test 2.4.1 - 72% HTTP (540 byte packets) + 20% FTP + 4% UDP (256 byte packets). Max 38 connections per second - ave packet size 555 bytes - max 2,200 packets per second - max 14 open connections   | 100%   | 100%   | 100%   | 100%    | 100Mbps

 


 

Test 2.5 - Real World traffic

Test                                                                                                                                                                       | 25Mbps | 50Mbps | 75Mbps | 100Mbps | Max
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------|--------|--------|---------|--------
Test 2.5.1 - Pure HTTP (simulated browsing session on NSS Web site). Max 10 connections per second - 3 new users per second - ave packet size 1000 bytes - max 11,000 packets per second | 100%   | 100%   | 100%   | 100%    | 100Mbps

Section 3 - Network IDS Evasion

Test 3.1 - Evasion Baselines

Test                                                       | Detected?
-----------------------------------------------------------|----------
Test 3.1.1 - NSS Back Orifice ping                         | YES
Test 3.1.2 - Back Orifice connection                       | YES
Test 3.1.3 - FTP CWD root                                  | YES
Test 3.1.4 - Fragroute baseline (test-cgi probe using HEAD)| YES
Test 3.1.5 - ISAPI printer overflow                        | YES
Test 3.1.6 - Showmount export lists                        | YES
Test 3.1.7 - Test CGI probe (/cgi-bin/test-cgi)            | YES
Test 3.1.8 - PHF remote command execution                  | YES
Test 3.1.9 - Whisker baseline (test-cgi probe using HEAD)  | YES
Total                                                      | 9 / 9

 

Test 3.2 - Packet Fragmentation/Stream Segmentation

Test                                                                                                                                  | Detected? | Decoded?
--------------------------------------------------------------------------------------------------------------------------------------|-----------|---------
Test 3.2.1 - IP fragmentation - ordered 8 byte fragments                                                                              | YES       | YES
Test 3.2.2 - IP fragmentation - ordered 24 byte fragments                                                                             | YES       | YES
Test 3.2.3 - IP fragmentation - out of order 8 byte fragments                                                                         | YES       | YES
Test 3.2.4 - IP fragmentation - ordered 8 byte fragments, duplicate last packet                                                       | YES       | YES
Test 3.2.5 - IP fragmentation - out of order 8 byte fragments, duplicate last packet                                                  | YES       | YES
Test 3.2.6 - IP fragmentation - ordered 8 byte fragments, reorder fragments in reverse                                                | YES       | YES
Test 3.2.7 - IP fragmentation - ordered 16 byte fragments, fragment overlap (favour new)                                              | YES       | YES
Test 3.2.8 - IP fragmentation - ordered 16 byte fragments, fragment overlap (favour old)                                              | YES       | YES
Test 3.2.9 - TCP segmentation - ordered 1 byte segments, interleaved duplicate segments with invalid TCP checksums                    | YES       | YES
Test 3.2.10 - TCP segmentation - ordered 1 byte segments, interleaved duplicate segments with null TCP control flags                  | YES       | YES
Test 3.2.11 - TCP segmentation - ordered 1 byte segments, interleaved duplicate segments with requests to resync sequence numbers mid-stream | YES | YES
Test 3.2.12 - TCP segmentation - ordered 1 byte segments, duplicate last packet                                                       | YES       | YES
Test 3.2.13 - TCP segmentation - ordered 2 byte segments, segment overlap (favour new)                                                | YES       | YES
Test 3.2.14 - TCP segmentation - ordered 1 byte segments, interleaved duplicate segments with out-of-window sequence numbers          | YES       | YES
Test 3.2.15 - TCP segmentation - out of order 1 byte segments                                                                         | YES       | YES
Test 3.2.16 - TCP segmentation - out of order 1 byte segments, interleaved duplicate segments with faked retransmits                  | YES       | YES
Test 3.2.17 - TCP segmentation - ordered 1 byte segments, segment overlap (favour new)                                                | YES       | YES
Test 3.2.18 - TCP segmentation - out of order 1 byte segments, PAWS elimination (interleaved dup segments with older TCP timestamp options) | YES | YES
Test 3.2.19 - IP fragmentation - out of order 8 byte fragments, interleaved duplicate packets scheduled for later delivery            | YES       | YES
Total                                                                                                                                 | 19 / 19   | 19 / 19
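The "favour old" versus "favour new" distinction in Tests 3.2.7/3.2.8 refers to which copy of overlapping fragment data a reassembler keeps; an IDS must mirror the policy of the protected host's IP stack or it will decode a different stream than the target sees. A minimal sketch of the two policies (our illustration, not NFR's implementation):

```python
def reassemble(fragments, favour="old"):
    """Rebuild a byte stream from (offset, data) fragments.

    favour="old": the first copy of an overlapping byte wins.
    favour="new": a later copy overwrites earlier data.
    """
    buf = {}
    for offset, data in fragments:
        for i, byte in enumerate(data):
            pos = offset + i
            if favour == "new" or pos not in buf:
                buf[pos] = byte
    return bytes(buf[pos] for pos in sorted(buf))

# Overlapping fragments decode differently under each policy:
frags = [(0, b"GET /cgi-bin/harmless"), (13, b"test-cgi")]
print(reassemble(frags, "old"))   # b'GET /cgi-bin/harmless'
print(reassemble(frags, "new"))   # b'GET /cgi-bin/test-cgi'
```

Under the wrong policy the sensor would see the innocuous request while the server receives the test-cgi probe, which is exactly what these overlap tests are designed to expose.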

  


 

Test 3.3 - URL Obfuscation

Test                                 | Detected? | Decoded?
-------------------------------------|-----------|---------
Test 3.3.1 - URL encoding            | YES       | YES
Test 3.3.2 - /./ directory insertion | YES       | YES
Test 3.3.3 - Premature URL ending    | YES       | YES
Test 3.3.4 - Long URL                | YES       | YES
Test 3.3.5 - Fake parameter          | YES       | YES
Test 3.3.6 - TAB separation          | YES       | YES
Test 3.3.7 - Case sensitivity        | NO        | NO
Test 3.3.8 - Windows \ delimiter     | NO        | NO
Test 3.3.9 - Session splicing        | YES       | YES
Total                                | 7 / 9     | 7 / 9
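Most of the techniques in Test 3.3 defeat naive string matching but fall to a canonicalisation pass performed before signature matching. A simplified sketch of such a pass (hypothetical code, not NFR's decoder), covering the two cases the NID-310 missed (case sensitivity and the Windows \ delimiter) along with %-encoding and /./ insertion:

```python
from urllib.parse import unquote

def canonicalise(url_path: str) -> str:
    """Reduce an obfuscated URL path to a canonical form for matching."""
    path = unquote(url_path)           # undo %-encoding (Test 3.3.1)
    path = path.replace("\\", "/")     # Windows \ delimiter (Test 3.3.8)
    path = path.lower()                # case sensitivity (Test 3.3.7)
    while "/./" in path:               # /./ directory insertion (Test 3.3.2)
        path = path.replace("/./", "/")
    return path

print(canonicalise("/CGI-BIN/.%2FTEST-CGI"))   # /cgi-bin/test-cgi
```

A production decoder would also need to handle double encoding, UTF-8 escapes and the other Whisker tricks in the table, but the principle is the same: normalise first, match second.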

 

Test 3.4 - Miscellaneous Obfuscation Techniques

Test                                                               | Detected? | Decoded?
-------------------------------------------------------------------|-----------|---------
Test 3.4.1 - Altering default ports                                | NO [1]    | NO [1]
Test 3.4.2 - Inserting spaces in FTP command lines                 | YES       | YES
Test 3.4.3 - Inserting non-text Telnet opcodes in FTP data stream  | YES       | YES
Test 3.4.4 - Altering protocol and RPC PROC numbers                | YES       | YES
Test 3.4.5 - RPC record fragging                                   | YES       | YES
Test 3.4.6 - Polymorphic mutation (ADMmutate)                      | YES       | YES
Total                                                              | 5 / 6     | 5 / 6

Section 4 - Stateful Operation Test

Test 4.1 - Attack Replay

Test                       | Alerts? | DOS?   | Notes
---------------------------|---------|--------|------------------------------------------------------------------------------------------
Test 4.1.1 - Snot traffic  | YES     | NO [4] | 77 alerts - mainly Backdoor/Trojan with UDP, DNS, ICMP, SNMP and DDOS. Excellent performance
Test 4.1.2 - Stick traffic | YES     | NO [4] | 49 alerts - mainly ICMP with some port scan and DDOS. Excellent performance

 

Test 4.2 - Simultaneous Open Connections (default settings)

Number of open connections      | 10,000   | 25,000   | 50,000   | 100,000 | 250,000 | 500,000 | 1,000,000
--------------------------------|----------|----------|----------|---------|---------|---------|----------
Test 4.2.1 - Attack Detection   | PASS     | PASS     | PASS     | PASS    | PASS    | PASS    | PASS
Test 4.2.2 - State Preservation | FAIL [3] | FAIL [3] | FAIL [3] | FAIL    | FAIL    | FAIL    | FAIL

 

Test 4.3 - Simultaneous Open Connections (after tuning)

Number of open connections      | 10,000   | 25,000   | 50,000   | 100,000 | 250,000 | 500,000 | 1,000,000
--------------------------------|----------|----------|----------|---------|---------|---------|----------
Test 4.3.1 - Attack Detection   | PASS     | PASS     | PASS     | PASS    | PASS    | PASS    | PASS
Test 4.3.2 - State Preservation | FAIL [3] | FAIL [3] | FAIL [3] | FAIL    | FAIL    | FAIL    | FAIL

 

Notes:

  1. Backdoor detection on default or specified ports only. BO2K detection on any port is available, but at a cost in performance.

  2. The Port Scan, SYN Flood and Generic Trojan detection packages are extremely heavy on system resources, and were disabled during testing for performance reasons. Enabling them (for Port Scan and SYN Flood protection, or generic detection of Trojans on all ports) would raise the ARR to 79/82 (96%), but would reduce detection rates in all Section 2 tests by an average of 20 per cent across the board.

  3. Our normal open connections test runs for approximately two minutes. NFR is capable of maintaining state for up to 50,000 open connections, but only for a maximum of around 30-60 seconds.

  4. It is possible to overwhelm the AI console with a flood of genuine alerts (NFR is not susceptible to flooding tools such as Stick and Snot): with 20,000+ alerts in the flat-file database, for example, queries take an unacceptably long time. Queries can be stopped mid-operation and the filters adjusted to refine the selection, but this might not help if the flooding attack is ongoing.

We installed one NFR NID-310 Sensor with the latest packages downloaded from the NFR Web site, reporting to a Central Management Server. There is no "standard" policy provided with the NFR product, so some manual tuning was required for this test: all recorders and auditing packages were disabled (as for all sensors in our tests), as were the SYN Flood, Port Scan and Generic Trojan detectors.

The latter were disabled for performance reasons on the advice of NFR system engineers who felt that these packages should be disabled on very heavily-utilised networks.  

Naturally, this had an adverse effect on the attack recognition tests - the NID-310 would have achieved an ARR of 96 per cent with these packages enabled, compared with the 90 per cent recorded in these tests. However, when we ran comparison tests with these packages enabled, we noted a general decrease in performance of around 20 per cent across all tests in Section 2. 

Centralised management and alerting capabilities are good, with the CMS providing a scalable three-tier architecture. However, the existing GUI Console looks dated compared with the competition, despite the introduction of multi-sensor management capabilities that are very easy to use. Policy management and deployment remain relatively poor, with no ability to define multiple policies for distribution to sensors with different roles. Analysis and reporting are also very basic.

Attack recognition was good out of the box, and increased to around 90 per cent after NFR was given the chance to respond with updated packages. As mentioned previously, this figure would have been closer to 96 per cent had we enabled port scan and SYN flood detection packages. 

Resistance to false positives also appeared to be very good, and most alerts were accurate, with few raising multiple alerts for a single exploit. Almost all our "false negative" (modified exploit) cases were detected correctly, demonstrating that the NFR signatures are designed to detect the underlying vulnerability rather than a specific exploit.

Resistance to known evasion techniques was very good, although two of the Whisker techniques - Case Sensitivity and Windows \ Delimiter - are still successful in the current version. Fragroute, ADMmutate and even RPC record fragging were all handled effectively. 

Stateful operation has changed significantly with the current release, unfortunately not for the better. The new architecture incorporates what the engineers call a "pressure system": instead of relying on fixed parameters to determine how much memory is allocated to critical resources such as state tables, the NID-310 allocates memory dynamically as needed, releasing it should other subsystems require it as the system comes under different types of load. The effect in our tests (it should be stressed that it may not operate this way on a user's network) was that connections were aged out far too quickly, causing failure in all our standard open connections tests.

When we altered these tests to run to completion much more quickly (in under 30 seconds), we demonstrated that the NID-310 was capable of maintaining state for up to 50,000 connections. When our open connections tests ran as normal, however, the connection on which we launched the attack was always aged out too quickly for the attack to be detected once it completed. NFR is looking into this problem at the time of writing.
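The behaviour described above is consistent with a state table that sheds its oldest entries whenever a memory budget is exceeded or reclaimed. A hypothetical sketch of such a "pressure" scheme (our illustration of the reported behaviour, not NFR's actual code):

```python
from collections import OrderedDict

class StateTable:
    """Connection state table that sheds its oldest entries under memory pressure."""

    def __init__(self, budget: int):
        self.budget = budget               # how many connections we can afford
        self.table = OrderedDict()         # conn id -> state, oldest first

    def track(self, conn_id, state):
        self.table[conn_id] = state
        self.table.move_to_end(conn_id)    # most recently seen goes last
        while len(self.table) > self.budget:
            self.table.popitem(last=False)  # age out the oldest connection

    def shrink(self, new_budget: int):
        """Another subsystem claimed memory: lower the budget and evict."""
        self.budget = new_budget
        while len(self.table) > self.budget:
            self.table.popitem(last=False)

t = StateTable(budget=3)
for cid in ("a", "b", "c", "d"):
    t.track(cid, {})
print(list(t.table))   # ['b', 'c', 'd'] - connection 'a' was aged out
```

An attack completing on a connection that has already been evicted (like "a" here) can no longer be matched against its stream state, which mirrors the aged-out-too-quickly failures recorded in Tests 4.2.2 and 4.3.2.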

Resistance to Stick and Snot was excellent: the sensor raised only a small number of alerts and never succumbed to a DoS condition or suffered from flooding at the Console.

In terms of detection rates, the NFR NID-310 showed that it could quite easily handle a full 100Mbps of normal traffic, showing a clean 100 per cent detection rate in every test apart from the 64 byte packet UDP test (the "sniffing torture test"). Although we would normally expect to see slightly higher detection rates in the small packet test, this did not appear to adversely affect the sensor under any of our "normal" load conditions, and we would have no hesitation in rating this device at 100Mbps.


Copyright © 1991-2006 The NSS Group Ltd.
All rights reserved.