
Summary

The Market

Eighteen months on from our first IPS Group Test report, the market continues to grow.

The major analyst firms are beginning to take serious notice of the IPS product space, and some of the more enlightened ones are taking into account the type of testing we are doing to prove the viability of these products (see “Seven Key Selection Criteria for Network IPS”, G. Young, Gartner, Inc., Nov 1, 2004). 

Edition 1 of our IPS Group Test included five products - the entire complement of the test, and probably the entire market at that time. Almost immediately the market began to grow, and nine vendors signed up for Edition 2, although four of them failed our stringent tests, leaving only five to pick up NSS Approved awards. For this latest edition, ten vendors submitted a total of twelve products for testing, and eight of these passed to receive NSS Approved awards. Already we have nine vendors signed up for our new Multi-Gigabit IPS report, for which we start testing later this year. This is definitely not a stagnant market! 

But lest you get carried away with the idea that IPS will solve all your security problems, it is worth taking a step back and considering all of the threats which beset the average network. 

For some time now, the firewall has been considered essential for those connecting their networks to the Internet. Even home users with DSL connections are beginning (finally) to realise that a simple firewall is the very least of the available security tools they should be considering. I do believe that eventually some form of perimeter gateway device may well perform all our firewall, IDS, IPS and AV functions - whether it will be called a Deep Inspection Firewall, an Intrusion Prevention System, a Perimeter Security Gateway or a UTM appliance, I don’t know. The terminology does not matter - the functionality is key. 

For now, at least at the high-end, the functionality is best divided amongst several different devices for reasons of performance, ease of deployment (who is going to rip out their firewalls and replace them with something else at this stage?), and ease of management. Deep inspection firewalls - at least those which offer adequate performance for high-end deployments - are still a good year or two away.  

This brings us back to that old chestnut, defence in depth. Layered defences are the way to go for now (and the foreseeable future). From the outside in (with some of the inside ones in no particular order) we have: 

Rate-Based Intrusion Prevention Systems (Attack Mitigators) - These are Intrusion Prevention Systems (IPS) designed with one main purpose in mind - to mitigate DoS and DDoS attacks where this cannot be accomplished successfully by the firewall. They are also often capable of performing packet-shaping and rate-limiting functions.
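
By way of illustration, here is a minimal sketch of the kind of per-source token-bucket rate limiting such a device might apply. The rates, burst sizes and per-source granularity are our own hypothetical choices, not those of any product tested.

    import time

    class TokenBucket:
        """Token-bucket rate limiter of the sort a rate-based IPS might
        apply per source address (hypothetical parameters throughout)."""

        def __init__(self, rate_pps, burst):
            self.rate = rate_pps    # packets allowed per second
            self.capacity = burst   # maximum burst size
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Refill tokens in proportion to elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True         # forward the packet
            return False            # drop - source has exceeded its allowed rate

    # One bucket per source IP: 100 packets/sec sustained, bursts of up to 50.
    buckets = {}
    def handle_packet(src_ip):
        bucket = buckets.setdefault(src_ip, TokenBucket(rate_pps=100, burst=50))
        return bucket.allow()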

Firewalls - These are perimeter-based policy enforcement devices. They are designed to match traffic against a set of access control or policy enforcement rules, and accept or deny traffic based on those rules. For example, a firewall could allow all FTP traffic to one particular server on the DMZ, but deny FTP to any other machine; or it could prevent the use of Telnet altogether; or it could enable Web access for all internal users.
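
To make that rule-matching model concrete, here is a minimal first-match sketch mirroring the examples above. The addresses are hypothetical, and the trailing default-deny is our own assumption rather than a description of any particular firewall.

    # Each rule: (protocol, destination, action); first match wins.
    # Addresses are hypothetical (192.0.2.10 stands in for the DMZ FTP server).
    RULES = [
        ("ftp",    "192.0.2.10", "accept"),  # FTP allowed to the DMZ server...
        ("ftp",    "*",          "deny"),    # ...and denied to any other machine
        ("telnet", "*",          "deny"),    # Telnet prevented altogether
        ("http",   "*",          "accept"),  # Web access enabled for all users
    ]

    def filter_packet(protocol, dst_ip):
        for proto, dst, action in RULES:
            if proto == protocol and dst in ("*", dst_ip):
                return action
        return "deny"  # assumed default: drop anything not explicitly allowed

    assert filter_packet("ftp", "192.0.2.10") == "accept"
    assert filter_packet("ftp", "192.0.2.99") == "deny"
    assert filter_packet("telnet", "192.0.2.10") == "deny"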

Content-Based Intrusion Prevention Systems (Network IPS) - Placed in line between the firewall and the internal network (or between two internal subnets), IPS devices are designed to detect potential exploit traffic rather than enforce policy (although several devices also include policy-enforcement firewall filters).

For example, where our firewall has already allowed through the FTP traffic destined for the FTP server, the IPS device now watches that traffic for suspicious patterns. Whenever an IPS device detects suspicious traffic, it is capable of dropping the offending packet immediately and blocking the rest of the flow, thus preventing malicious traffic from entering the protected network.
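
A minimal sketch of that inline decision follows. The “signature” here - an implausibly long FTP command argument suggesting a buffer overflow - is a deliberately crude hypothetical; real signatures are far more precise.

    import re

    # Hypothetical signature: an FTP command argument long enough to
    # suggest a buffer-overflow attempt.
    SUSPICIOUS_FTP = re.compile(rb"^(USER|PASS|CWD) .{200,}")

    blocked_flows = set()  # flows we have decided to block outright

    def inspect(flow_id, payload):
        """Return True to forward the packet, False to drop it."""
        if flow_id in blocked_flows:
            return False                  # the rest of the flow is blocked
        if SUSPICIOUS_FTP.match(payload):
            blocked_flows.add(flow_id)    # block all subsequent packets too
            return False                  # drop the triggering packet immediately
        return True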

Network Intrusion Detection Systems (NIDS) - These are passive devices, designed for detection and analysis rather than prevention. Installed on one or more internal subnets, IDS systems behave as auditing devices - gathering information on potentially bad traffic which might be considered “information only” for a Network IPS; collecting and correlating detailed forensic information (some IPS devices concentrate more on stopping attacks than gathering extensive information about them); and monitoring those subnets which are not deemed important enough to warrant protection by a more expensive IPS sensor. And, of course, the IDS is still necessary to catch malicious traffic from an infected laptop which has been connected inside our expensive perimeter defences. Intrusion prevention only works if the traffic passes through the IPS sensor - sometimes detection is the best we can hope to achieve.

Host-Based Intrusion Prevention / Detection Systems (HIDS/HIPS) - Some exploits are better detected and/or prevented on the host being attacked rather than on the network. Both Host IDS and Host IPS systems rely on agents installed directly on the systems being protected to provide direct protection for that host alone. As with Network IDS, the Host IDS/IPS is sometimes the only way to detect and/or prevent exploits launched inadvertently from rogue laptops connected within the perimeter defences.

Content Filtering - This is a gateway device which is designed to monitor Web and mail traffic for inappropriate content. Unlike IDS/IPS, this device is less concerned with attempting to detect malicious traffic such as buffer overflows and other network-level exploits, although it will usually scan mail attachments and Web pages for suspicious content that could be virus or Trojan related.

In addition to scanning for malicious content at an application level, this product can also be used as a policy enforcement device, just like the firewall. For example, it can block Web requests to inappropriate sites (porn, violence, job-seeking, etc.); enforce corporate e-mail policy (inserting standard headers, footers, disclaimers, copyright notices, etc.); and prevent transmission of inappropriate content via e-mail. 
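
The sketch below illustrates both roles - category-based Web blocking and e-mail policy stamping. The category list, domain names and disclaimer text are all hypothetical; real products ship large, vendor-maintained URL databases.

    # Hypothetical policy data - real products use vendor-maintained databases.
    BLOCKED_CATEGORIES = {"porn", "violence", "job-seeking"}
    URL_CATEGORIES = {
        "badsite.example": "porn",
        "jobs.example": "job-seeking",
    }

    DISCLAIMER = "\n--\nThis e-mail is subject to the corporate acceptable-use policy."

    def allow_web_request(host):
        # Block the request if the destination falls into a banned category.
        return URL_CATEGORIES.get(host) not in BLOCKED_CATEGORIES

    def stamp_outgoing_mail(body):
        # Policy enforcement: append the standard corporate footer.
        return body + DISCLAIMER

    assert allow_web_request("jobs.example") is False
    assert allow_web_request("intranet.example") is True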

Anti-Virus - Installed on a gateway security device, a mail server, or the user’s desktop (or all three), the AV scanner will monitor e-mail attachments, Web content and data files for malicious content.

Anti-Spam - Not as destructive as a virus, but to many people almost as annoying, spam has become the bane of our lives - whether you are an e-mail user or an administrator. As with the AV scanner, the anti-spam scanner can be installed on a gateway security device, a mail server, or the user’s desktop (or all three), to monitor e-mail transmissions for spam content.

Patching and hardening - At the lowest level, we need to ensure that our operating systems, server-based software and applications are configured from the moment they are deployed with security in mind (why open that port if it is not used anywhere on your network?). They should also be maintained as far as possible with the latest security updates and patches.

Patching is important, but as a prevention mechanism on its own it can never be sufficient. Quite apart from the sheer difficulty of locating all of those “hidden” operating systems, Web servers and SSH servers that populate your network printers, network fax devices, routers, and so on, it can often prove impossible to deploy patches across an enterprise network in a timely manner. 

Consider the common scenario where Microsoft releases an update or Service Pack to address known security issues in one of its mainstream products. This is a product that is installed on every one of your mission-critical servers. Are you going to blindly install that patch, trusting that Microsoft has got it right first time? Or are you going to test it thoroughly on non-production servers first?  

It has to be the latter - yet whilst you diligently apply your patching/updating procedure, there is a window of opportunity where the vulnerability can be exploited on your public-facing servers. Intrusion Prevention Systems are designed to close that window of opportunity.  

As with all of the other technologies mentioned above (including deep inspection firewalls), they should not be considered a silver bullet. They should, however, be considered as a worthwhile addition to your security toolbox - you never know when you might need them. 

Note that the above is not intended to be a definitive list, merely a set of examples of the types of technologies that can be used in the implementation of a layered defence. In addition, we have not attempted to define every single location where those technologies may be applied - anti-virus can be installed on a desktop or mail server or gateway device, for example; firewalls can be installed at the perimeter, between internal subnets, or as software on desktop PCs; Intrusion Prevention Systems can be installed at the perimeter or between internal subnets; and so on.

Interestingly, despite the performance implications, we are already beginning to see the appearance of Unified Threat Management (UTM) - or Multi-Function - appliances. These can combine three or more of the above technologies (generally we would expect to see firewall/VPN, AV and IPS/IDS at a minimum) in a single device designed to be installed at the gateway of a SME or branch-office network. The NSS Group is launching the first Multi-Function Gateway Appliance test later this year to examine this interesting new development. 

All of the above can be considered intrusion prevention systems - the distinction in capitalisation (or lack thereof) is deliberate. Intrusion Prevention Systems (the marketing term) are just as valid a means of preventing intrusions as any of the others - don’t let the marketing hype blind you to their usefulness. 

The Products

As ever, testing IPS products proved to be extremely challenging. Our job is to simulate a real-world environment as closely as possible in our labs, whilst keeping the tests completely repeatable from run to run and from product to product. Our latest methodology, under development for the last eighteen months, mixes elements of “traditional” switch testing (IPS devices need to offer switch-like performance), pure stress testing at an application level, and elements of real-world use, whilst stressing as many different features of each IPS as is feasible.

As with our previous two IPS tests, testing in-line devices reliably at Gigabit speeds and beyond presented us with interesting challenges and pushed our testing equipment to the limits. Luckily, Spirent has continued to improve the performance of the Avalanche/Reflector product line, and the use of the new Avalanche/Reflector 2500 allowed us to reduce the number of devices from three pairs to two in order to produce the levels of background traffic we required. At the end of the day, the environment we created was one of the toughest ever likely to be faced by the average IPS product, whilst remaining fair in that certain tests represented an extremely busy “normal” network in terms of background traffic. 

It was interesting to note that, whereas last year we were seeing top speeds of 1-2Gbps, this year we are starting to see devices that can go well beyond that limit and which are looking over-engineered for Gigabit environments. The NSS Group is launching a Multi-Gigabit IPS test later this year to look specifically at these kinds of enterprise/carrier-class devices.  

At the same time, we are seeing more fractional-Gigabit and 100Mbps devices appearing - the majority of the devices tested in this edition of the report, as it happens - which indicates a growing acceptance of this type of technology in low-end environments, such as SME and branch-office scenarios. 

Pushing the products under test to their limits in a heavily-utilised network certainly produced some interesting results, and posed problems for some vendors. A third of the products (four out of twelve) submitted for this year’s test failed at some point - only the eight products included in this report were awarded NSS Approved. This is a much improved success ratio over Edition 2. 

Note, however, that at these levels, performance can be affected significantly by changes in the make-up of the traffic being monitored. If your network has a different average packet size to ours, or a different average HTTP response size, or different connection rates, or is predominantly FTP traffic, then your mileage may vary. 

What makes summarising the results so difficult is the fact that each vendor submitted a very different product in terms of bandwidth, capabilities, number of ports, and so on. We therefore encourage detailed study of the Test Results section of this report which includes not only the complete benchmark results in tabular form, but extensive explanations of the results together with our informed conclusions. 

There is a wide range of products on show in this report, rated from 100Mbps all the way up to 1Gbps. All of the IPS devices included in this report proved themselves capable of handling real-world (“typical” network) traffic up to their rated speeds and, in some notable cases, beyond. It is good to see a trend of products being over-engineered for their rated speed rather than struggling to match marketing claims with real-world performance. 

In terms of general signature coverage, it was a tough one to call, as always. It is important to recognise that our signature recognition test is not intended to provide an exhaustive audit of each vendor’s signature database - to do that would be impossible. It is also important to recognise that we cannot guarantee to test all devices against the very latest (within the last week or month, for example) exploits.  

The reason for this is simple. To be fair to all participants, we need to test every vendor against the same exploit test suite. Since the tests for this particular report and the one prior to it ran from July 2004 to July 2005 (and the test suite had to be finalised well before we started testing), then clearly even the latest exploits we use will be 12-15 months old at the point of publication.  

The most important thing to recognise, however, is that none of this actually matters! We try to include in our test suite as many cases as we can where a particular vulnerability has manifested itself in numerous exploits, and we include all the variants we can find. We also take “standard” exploits and alter key elements (changing the NOP sled, altering banners, modifying key “patterns” within the code) whilst ensuring that the exploit still functions as intended. That way we can give an opinion on whether signatures have been written for a specific exploit or for the underlying vulnerability. 
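
That distinction can be illustrated with two hypothetical signatures for the same imaginary FTP overflow - one written against a single published exploit, the other against the underlying vulnerability. Neither reflects any vendor’s actual signature logic.

    # Two hypothetical ways of writing a signature for the same
    # (imaginary) FTP USER-command buffer overflow.

    def exploit_signature(payload):
        # Matches one published exploit verbatim - trivially bypassed by
        # changing the NOP sled or the payload's identifying strings.
        return (b"\x90" * 32 in payload) and (b"owned-by-example" in payload)

    def vulnerability_signature(payload):
        # Matches the underlying flaw - any USER argument long enough to
        # overflow the vulnerable buffer, however the exploit is dressed up.
        return payload.startswith(b"USER ") and len(payload) > 256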

We also try to produce “normal” traffic that has potentially suspicious content which is out of context in order to gain insight on a product’s susceptibility to false positives. 

Possibly the most important objective of our signature recognition test, however, is the comparison of the out-of-the-box total versus the total after the vendor has provided us with a signature pack update. Each vendor is allowed 48 hours maximum to produce an update after being informed of which exploits they missed in the first run. This is a key comparison, because it provides insight as to how quickly and accurately a vendor can respond when informed that they have missed a particular exploit. 

Finally, the signature recognition test helps us to ensure that the vendor has not surreptitiously disabled entire categories of signatures in order to improve the performance of their device in the remainder of the test suite - in other words, it helps to enforce our “default security policy”. 

But the only true test of signature coverage and accuracy is to install the device in your own network, run your own traffic past it (the only way to determine what kinds of false positives you are likely to see), and acquire your own test tools and up-to-date exploits to verify that the vendor is catching those exploits which were announced in the SANS newsletter this week. Admittedly, none of this is easy, but testing in your own live environment is a must - there is no way we can help you with that from within a report such as this. All we can provide is some help in creating a short-list of products you might want to test in this way. 

In general, one of the things that impressed us in this round of tests was the obvious effort that had been put in by all the vendors to reduce noise and spurious alerts - it was good to note that vendors are paying much more attention to the quality of signatures rather than just quantity or breadth of coverage.  

Once the general auditing signatures were disabled we noted few false positives in most of the devices tested, and very few misidentified alerts - something which is even more important with IPS sensors because they operate in-line. We also noted relatively few instances of exploits generating multiple alerts where some of the alerts were “noise”.  

Where signature recognition is concerned, most of the devices performed very well, with Symantec, Juniper and Cisco deserving particular mention.

Compared to the others tested, Radware was below average in both breadth of coverage and quality, with the highest number of false positives out of the box. 

Resistance to evasion techniques was excellent across the board for most devices, with all of our fragroute, Whisker, RPC and other miscellaneous techniques being detected with few problems. With RPC exploits being in the news so much recently, however, it was disappointing to see both Radware and Westline fail to handle RPC record fragmentation successfully in our tests. 
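
For reference, RPC record fragmentation abuses the record-marking scheme of ONC RPC over TCP: each fragment is preceded by a 4-byte header whose high bit flags the last fragment and whose remaining 31 bits give the fragment length. A sensor must reassemble the record before matching signatures against it, along the lines of this simplified sketch (which assumes the complete record is already buffered):

    import struct

    def reassemble_rpc_record(stream):
        """Reassemble an ONC RPC-over-TCP record from its record-marking
        fragments, so that signatures can be matched against the whole
        message rather than against individual fragments."""
        record, offset = b"", 0
        while offset + 4 <= len(stream):
            (marker,) = struct.unpack(">I", stream[offset:offset + 4])
            last = bool(marker & 0x80000000)   # high bit: last fragment
            length = marker & 0x7FFFFFFF       # low 31 bits: fragment length
            record += stream[offset + 4:offset + 4 + length]
            offset += 4 + length
            if last:
                break
        return record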

Those products which raised too many false positives or which failed to handle too many of the evasion techniques simply failed the test completely and did not receive NSS Approved (they do not appear in this report).  

In terms of price/performance, the figures make interesting reading, and we have gathered the relevant data together in the table in Figure 1. Note that the table is arranged purely in order of bandwidth, and then alphabetically. These costs include the basic product, plus all required management software (one management console for all sensors) and hardware. Note that no prices were available for the Intoto product, since this is provided as an OEM offering only, and thus the final price will depend on the integrator. 

Costs are based on list prices of the devices as tested in this report, and pricing was provided by each participant in the questionnaires (see Appendix A). Where management hardware is required but not included, we have assumed $6,000 for a well-specified platform to monitor a Gigabit network, and $3,000 for a sub-1Gbps network. Note, however, that this is a one-off cost, and so its effect on the TCO diminishes as more sensors are added. 

Figure 1 - IPS cost comparison (including hardware and management console)

Where it is not mandatory to purchase additional management software (i.e. where it is possible to manage the device using the software provided) we have not included additional software costs in the above figures. However, where users are considering purchase of multiple devices (or where they simply demand more advanced management and reporting capabilities), they may need to factor in the cost of additional software (where noted in the table above) in order to manage them effectively.  

On top of the straight purchase costs, there is also the need to pay for ongoing maintenance, signature updates, and so on.

If we factor these costs into the equation over a one-year and a three-year period, the following figures provide a rough idea of the actual Total Cost of Ownership (TCO), not including staff costs to manage the devices. 

Figure 2 - IPS TCO comparison (including hardware, management and maintenance)
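
To show how such a TCO figure is built up, the arithmetic below uses entirely hypothetical numbers - a $25,000 sensor, our assumed $6,000 management platform, and annual maintenance at 20 per cent of list price. Refer to Figures 1 and 2 for the actual data.

    # All figures hypothetical - see Figures 1 and 2 for the real data.
    SENSOR_LIST_PRICE   = 25000   # per sensor
    MANAGEMENT_HARDWARE = 6000    # one-off console cost, shared by all sensors
    MAINTENANCE_RATE    = 0.20    # assumed 20% of sensor list price per year

    def tco(sensors, years):
        purchase = sensors * SENSOR_LIST_PRICE + MANAGEMENT_HARDWARE
        maintenance = sensors * SENSOR_LIST_PRICE * MAINTENANCE_RATE * years
        return purchase + maintenance

    print(tco(sensors=1, years=3))   # 46000 - the console is 13% of the total
    print(tco(sensors=4, years=3))   # 166000 - the console falls below 4%

Note how the one-off management cost shrinks as a proportion of the TCO as sensors are added, which is why we treat it separately above.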

We have not attempted to draw any firm comparisons or conclusions from these figures since each device is very different in maximum bandwidth and number of ports provided in the tested configuration, and final price will often be determined by the customer’s relationship with the vendor. Clearly the reader needs to consider the maximum bandwidth to be supported and over how many segments (in-line port pairs), as well as costs for additional management software/hardware and annual maintenance charges. 

Most of the devices rated at 500Mbps and above are priced similarly to each other, and are all - in our opinion - priced realistically. Below these, it is nice to finally see some realistically-priced entry-level devices. Cisco, in particular, deserves special mention here for the IPS-4240, with NFR also producing a well-priced entry-level offering. 

When it comes to usability, policy management and alert handling are key features in devices of this type. The ability to fine-tune individual signatures or entire policies and then deploy them to multiple sensors at the click of a button is essential, as is the ability to consolidate alerts from multiple sensors and to perform detailed alert analysis with extensive drill-down.  

After some of the poor management offerings we witnessed in Edition 2 of this report, it was nice to see the trend reversed in this round of testing. Cisco, Juniper, NFR and Symantec all provided management software which impressed us greatly.  

Every system had its good points and bad points, and we will not list them again here - they have been examined in detail in the individual product evaluations. We would also encourage the reader to study the vendor questionnaires (Appendix A) carefully, since these provide excellent feature-by-feature comparisons between the products on test.
