Although the majority of networks attached to the Internet will be protected by firewalls, it is apparent that these devices are not always completely effective against many intrusion attempts. The average firewall is designed to deny clearly suspicious traffic - such as an attempt to telnet to a device when corporate security policy forbids telnet access completely - but is also designed to allow some traffic through - Web traffic to an internal Web server, for example.

The problem is that many exploits attempt to take advantage of weaknesses in the very protocols that are allowed through our perimeter firewalls, and once the Web server has been compromised, it can often be used as a springboard to launch additional attacks on other internal servers. Once a “rootkit” or “back door” has been installed on a server, the hacker has ensured that he will have unfettered access to that machine at any point in the future.

Firewalls are also typically employed only at the network perimeter. However, many attacks, intentional or otherwise, are launched from within an organisation. Virtual private networks, laptops and wireless networks all provide access to the internal network that often bypasses the firewall.

Intrusion Detection Systems may be effective at detecting suspicious activity, but they do not provide protection against attacks. Worms such as Slammer and Blaster have such fast propagation speeds that by the time an alert is generated, the damage is done and spreading fast.

Intrusion Prevention Systems (IPS)

The inadequacies inherent in legacy network defences are what drive the development of a new breed of security products known as Intrusion Prevention Systems (IPS). The term has provoked some controversy in the industry, since some firewall and IDS vendors believe it has been “hijacked” and used as a marketing label rather than as a description of any genuinely new technology.

Whilst it is true that firewalls, routers, IDS devices and even AV gateways all include intrusion prevention technology in some form, we believe that there are sufficient grounds to create a new market sector for true Intrusion Prevention Systems. These are proactive defence mechanisms designed to detect malicious packets within normal network traffic (something that the current breed of firewalls does not actually do) and stop intrusions dead, blocking the offending traffic automatically before it does any damage, rather than simply raising an alert as, or after, the malicious payload has been delivered.

Within the IPS market place there are two main categories of product: Host IPS and Network IPS, with the latter being further sub-divided into Content-Based and Rate-Based (or Attack Mitigation) systems.

As with Host IDS, the Host IPS relies on agents installed directly on the system being protected. It binds closely with the operating system kernel and services, monitoring and intercepting system calls to the kernel or APIs in order to prevent attacks as well as log them.
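To make this concrete, here is a minimal sketch of the intercept-and-block idea in Python, using the interpreter's audit-hook facility to stand in for the kernel-level interposition a real Host IPS agent performs. The protected paths and the policy are purely illustrative.

```python
# A minimal sketch of the host-IPS idea: intercept sensitive calls and
# block those that violate policy, logging the attempt. Real HIPS agents
# hook the OS kernel; Python's audit hooks stand in for that layer here.
import sys

BLOCKED_PATHS = ("/etc/shadow", "C:\\Windows\\System32\\config")   # illustrative policy

def hips_hook(event, args):
    # 'open' events carry (path, mode, flags); block access to protected files.
    if event == "open" and args and isinstance(args[0], str):
        if any(args[0].startswith(p) for p in BLOCKED_PATHS):
            raise RuntimeError(f"HIPS: blocked open of {args[0]}")

sys.addaudithook(hips_hook)

try:
    open("/etc/shadow")            # intercepted before the OS call is made
except RuntimeError as err:
    print(err)                     # the attempt is prevented as well as logged
```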
It may also monitor data streams and the environment specific to a particular application (file locations and Registry settings for a Web server, for example) in order to protect that application from generic attacks for which no “signature” yet exists. One potential disadvantage of this approach is that, given the necessarily tight integration with the host operating system, future OS upgrades could cause problems.

Since a Host IPS agent intercepts all requests to the system it protects, it has certain prerequisites: it must be very reliable, must not impact performance negatively, and must not block legitimate traffic. Any Host IPS that does not meet these minimum requirements should never be installed on a host, no matter how effectively it blocks attacks.

The Network IPS (NIPS) combines features of a standard IDS, an IPS and a firewall, and is sometimes known as an In-line IDS or Gateway IDS (GIDS). The next-generation firewall - the deep inspection firewall - also exhibits a similar feature set, though we do not believe that the deep inspection firewall is ready for mainstream deployment just yet.

As with a typical firewall, the NIPS has at least two network interfaces, one designated as internal and one as external. As packets appear at either interface they are passed to the detection engine, at which point the IPS device functions much as any IDS would in determining whether or not the packet being examined poses a threat. If it detects malicious traffic, however, then in addition to raising an alert it will discard the packet and mark that flow as bad; as the remaining packets that make up that particular TCP session arrive at the IPS device, they are discarded immediately. Legitimate packets are passed through to the second interface and on to their intended destination.

A useful side effect of some NIPS products is that, as a matter of course - in fact as part of the initial detection process - they provide “packet scrubbing” functionality to remove protocol inconsistencies resulting from varying interpretations of the TCP/IP specification (or from intentional packet manipulation). Thus fragmented packets, out-of-order packets, or packets with overlapping IP fragments will be re-ordered and “cleaned up” before being passed to the destination host, and illegal packets can be dropped completely.

One thing to watch out for: do not let the “reactive” IDS vendors convince you that they have intrusion prevention capabilities just because they can send TCP Reset commands or re-configure a firewall when they detect an attack (a worrying piece of FUD that we have noticed in some IDS marketing literature in the past). The problem is that unless the attacker is operating on a 2400 baud modem, the likelihood is that by the time the IDS has detected the offending packet, raised an alert and transmitted the TCP Resets - and especially by the time the two ends of the connection have received the Reset packets and acted on them (or the firewall or router has had time to activate new rules to block the remainder of the flow) - the payload of the exploit has long since been delivered. Game over. Our guess is that there are not many crackers using 2400 baud modems these days.

A true IPS device, however, sits in-line: all packets have to pass through it. Therefore, as soon as a suspicious packet has been detected - and before it is passed to the internal interface and on to the protected network - it can be dropped.
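The following sketch shows that forwarding logic in Python: packets belonging to flows already flagged as bad are dropped outright, newly detected malicious packets are dropped and their flow flagged, and everything else passes through to the internal interface. The packet fields, flow key and toy inspect() rule are assumptions made for illustration.

```python
# Simplified in-line NIPS forwarding loop: once a flow is judged malicious,
# all of its remaining packets are discarded with minimal processing.

bad_flows = set()                                  # TCP sessions already flagged as bad

def flow_key(pkt):
    # Identify the session by its 5-tuple.
    return (pkt["src_ip"], pkt["src_port"],
            pkt["dst_ip"], pkt["dst_port"], pkt["proto"])

def inspect(pkt):
    # Stand-in for the detection engine (signature matching, protocol decode, ...).
    return b"/bin/sh" in pkt["payload"]            # toy detection rule

def forward(pkt, send_to_internal_if):
    key = flow_key(pkt)
    if key in bad_flows:
        return                                     # remainder of a bad session: drop at once
    if inspect(pkt):
        bad_flows.add(key)                         # mark the whole flow as bad
        print(f"ALERT: blocked flow {key}")        # raise an alert as well as dropping
        return                                     # drop the offending packet
    send_to_internal_if(pkt)                       # legitimate traffic passes through
```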
Not only that, but now that the flow has been flagged as suspicious, all subsequent packets that are part of that session can also be dropped with very little additional processing.

Some products are also capable of sending TCP Resets or ICMP Unreachable messages to the attacking host, as well as providing more intelligent handling of SMTP sessions. The latter can cause problems should an IPS simply block a session, since the SMTP protocol will ensure that the e-mail which caused the problem is re-sent at regular intervals, often over an extended period of time. The more advanced IPS, therefore, is capable of accepting the entire e-mail before discarding it and sending appropriate responses to the sending server to ensure that the message is not retransmitted.

Rate-Based IPS (Attack Mitigator)

Most NIPS products are basically IDS engines that operate in-line, and are thus dependent on protocol analysis or signature matching to recognise malicious content within individual packets (or across groups of packets). These can be classed as Content-Based IPS systems.

There is, however, a second breed of Network IPS that ignores packet content almost completely, instead monitoring for anomalies in network traffic that might characterise a flood attempt, a scan attempt, and so on. These devices are capable of monitoring traffic flows in order to determine what is considered “normal”, and of applying various techniques to determine when traffic deviates from that norm. This is not always as simple as watching for high volumes of a specific type of traffic in a short space of time, since they must also be capable of detecting “stealth” attacks, such as low-rate connection floods and slow port scan attempts. Since these devices are concerned more with anomalies in traffic flow than with packet contents, they are classed as Rate-Based IPS systems - and are also known as Attack Mitigators, since they are so effective against DOS and DDOS attacks.

At one time, most Network IDS/IPS products based their alerts purely on matching packet contents against a database of known signatures. Then came a new breed of offerings that approached the problem in a completely different way - by performing a full protocol analysis on the data stream. Others began to use heuristics or anomaly-based analysis to determine when an attempted attack had taken place. Today, most IDS/IPS products employ a mixture of these detection methods in a single product, though some are more biased towards one method than another. According to Cisco, there are five main methods of attack identification (source: Cisco Systems, The Science of Intrusion Detection System Attack Identification).

Pattern matching in its most basic form is concerned with the identification of a fixed sequence of bytes in a single packet. In addition to the tell-tale byte sequence, most IPS products will also match various combinations of the source and destination IP address or network, source and destination port or service, and the protocol. It is also often possible to tune the signature further by specifying a start and end point for inspection within the packet, or a particular combination of TCP flags. The more specific these parameters can be, the less inspection needs to be carried out against each packet on the wire.
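A toy version of such a signature might look like the following, where the cheap header qualifiers (protocol and destination port) are checked before any bytes are scanned, and inspection is confined to a window within the payload. The signature definition and packet representation are illustrative only.

```python
# A basic content signature: fixed byte sequence, qualified by protocol,
# destination port and an inspection window within the payload.
from dataclasses import dataclass

@dataclass
class Signature:
    pattern: bytes         # tell-tale byte sequence
    proto: str             # e.g. "tcp"
    dst_port: int          # service the exploit targets
    start: int = 0         # inspect payload[start:end] only
    end: int = 1500

def matches(sig, pkt):
    # Cheap header checks first, so most packets are never byte-scanned.
    if pkt["proto"] != sig.proto or pkt["dst_port"] != sig.dst_port:
        return False
    return sig.pattern in pkt["payload"][sig.start:sig.end]

# Illustrative rule: "cmd.exe" in the first 400 bytes of HTTP requests.
cmd_exe = Signature(pattern=b"cmd.exe", proto="tcp", dst_port=80, end=400)
pkt = {"proto": "tcp", "dst_port": 80,
       "payload": b"GET /scripts/..%c0%af../cmd.exe HTTP/1.0"}
print(matches(cmd_exe, pkt))       # True
```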
However, this approach can make it more difficult for systems to deal with protocols that do not live on well-defined ports - in particular Trojans and their associated traffic, which can usually be moved to other ports at will.

Although it is often quite simple to define a signature for a particular exploit, basic pattern matching can be too specific, sometimes requiring multiple signatures to be defined for minor variations of an exploit. It is also prone to false positives, since legitimate traffic can often contain the relatively small set of criteria used to determine when an attack is taking place. Furthermore, this method is usually limited to inspection of a single packet and therefore does not apply well to the stream-based nature of network traffic such as HTTP sessions. This limitation gives rise to easily implemented evasion techniques.

Stateful pattern matching offers a slightly more sophisticated approach, since it takes the context of the established session into account, rather than basing its analysis on a single packet. Stateful IPS products must consider the arrival order of packets in a TCP stream and should handle pattern matching across packet boundaries. Thus, if the exploit string to be matched is foobar, and the exploit is split across two packets, with foo in one and bar in another, a simple packet-matching IPS will miss the attack, since it will never see the complete string. A stateful IPS, however, will maintain the session context and reassemble the traffic stream, once again making the complete string available to the detection engine, as the sketch below shows.

This requires more resources than simple pattern matching, since the IPS now has to allocate significant amounts of memory and processing power to track a potentially large number of open sessions for as long as possible. It does, however, make IPS evasion that much more difficult, though far from impossible.
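A minimal sketch of that reassembly idea follows, assuming a simple per-session buffer; a real engine must also handle sequence numbers, retransmissions and overlapping segments.

```python
# Match a pattern split across packet boundaries by keeping a small
# per-session tail of previously seen bytes.

EXPLOIT = b"foobar"
streams = {}                                   # session key -> buffered tail bytes

def stateful_match(session, payload):
    buf = streams.get(session, b"") + payload
    # Keep only as much history as the pattern could possibly span.
    streams[session] = buf[-(len(EXPLOIT) - 1):]
    return EXPLOIT in buf

session = ("10.0.0.1", 31337, "10.0.0.2", 80)
print(stateful_match(session, b"...foo"))      # False: no match yet
print(stateful_match(session, b"bar..."))      # True: matched across two packets
```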
Direction of traffic is also important here, both in terms of quality of detection and in terms of performance. Client-to-server traffic inspection is the process of applying detection mechanisms to the “request side” of a communication - in HTTP, for example, this could be the “GET” request coming from a client. Client-to-server inspection is typically activated for all traffic, whether internally or externally generated, and since this traffic is relatively small in terms of byte count, the processing load placed on the IPS is lower. Server-to-client traffic inspection is the process of finding an attack in the “response side” of a communication - in HTTP, this could be the Web page and content returned by the server as the result of a “GET” request. Server-to-client traffic is often much larger than client-to-server traffic in terms of byte count, and as a result the processing load placed on an IPS is correspondingly greater.

Some vendors do not implement server-to-client signatures at all. Often this is for performance reasons, but sometimes it is a design decision by those vendors who also offer Host IPS products, which are often better placed to detect the types of exploit carried by malicious response traffic. Other vendors do include server-to-client signatures, but recommend that they are disabled when performance is paramount. Bi-directional detection can have a significant impact on performance in some cases - those products which can handle it with zero or minimal impact on performance are worth closer inspection (although this level of performance often comes at a higher price).

It should be noted that there are situations where disabling server-to-client signatures is reasonably safe, and - happily - these are usually the situations where the highest levels of performance are demanded. Typically, this would be where an IPS is deployed within the network perimeter, where purely internal HTTP response traffic is unlikely to be malicious. Perimeter defences would normally be deployed with both client-to-server and server-to-client signatures enabled, but perimeter devices rarely have the same performance requirements as internal ones.

Another point to bear in mind with server-to-client exploits is that, by their very nature, they lend themselves to a wide range of evasion techniques which are extremely simple to implement. This makes the detection process - already somewhat resource hungry - even more difficult and prone to performance problems, and some vendors will implement only the most basic of signatures and make no attempt to handle the possible evasion techniques at all.

Protocol decode IPS products take a radically different approach to simple pattern matching - though sometimes not quite as radically different as the marketing folks would have you believe. With this technique, the detection engine performs a full protocol analysis, decoding and processing the packet contents in the same way that the target client or server application would. It also tends to be stateful. Although this may seem like using a sledgehammer to crack a nut, it has the advantage of highlighting anomalies in packet contents much more quickly than an exhaustive search of a signature database. It also has the benefit of greater flexibility in capturing attacks that would be very difficult - if not impossible - to catch using pure pattern-matching techniques, as well as new variations of old attacks: attacks which, although changing only slightly from variant to variant, would normally require a new signature in the database for the “traditional” IPS architecture, but which are detected automatically by a complete protocol analysis.

One of the first things the protocol decode engine does is to apply the rules defined by the appropriate RFCs to look for violations. This can help to detect certain anomalies such as binary data in an HTTP request, or a suspiciously long piece of data where it should not be - a sign of a possible buffer overflow attempt. One simple example of how this might work concerns searching Telnet login strings for one of the many well-known login names that rootkits tend to leave behind on a system. A pattern matching system might scan all Telnet traffic for all of these patterns, in which case the more patterns you add, the slower it becomes (not always the case, but a reasonable assumption for the purposes of this example). In contrast, a protocol analysis system will decode the Telnet protocol and extract the login name. It can then perform an efficient search of a binary-search tree or hash table for just that login name, which should scale much better as new signatures are added.

In theory, therefore, protocol decoding should offer more efficient processing of traffic and improved scalability as more signatures are added, compared with a pure pattern matching solution.
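The sketch below illustrates the difference for the Telnet example, assuming a trivially simplified decode step. The account names shown are examples of well-known rootkit backdoor logins; a real decode would be considerably more involved.

```python
# Protocol decode approach: rather than byte-scanning every Telnet packet
# for every known rootkit login, decode the protocol, extract the login
# name, and perform a single hash-table lookup.

ROOTKIT_LOGINS = {"r00t", "rewt", "wh00t", "lrkr0x"}     # hash table of "signatures"

def extract_login(session_bytes):
    # Stand-in for a real Telnet decode: take the first line the client
    # sends in response to the "login:" prompt as the account name.
    return session_bytes.split(b"\r\n")[0].decode(errors="replace").strip()

def check_login(session_bytes):
    login = extract_login(session_bytes)
    return login in ROOTKIT_LOGINS        # O(1), however many names are listed

print(check_login(b"rewt\r\npassword123\r\n"))   # True: well-known backdoor login
```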
In reality, pattern matching solutions rarely opt for a “brute force” approach (there are some extremely intelligent and efficient pattern matching mechanisms available), and so the differences are not always as marked as the marketing people would like us to believe. Note also that pattern matching and protocol decoding are not mutually exclusive: a protocol analysis IPS can only go so far with its protocol decodes before it, too, is forced to perform some kind of pattern matching, albeit against a theoretically smaller subset of “signatures”.

One major downside, of course, is that if a completely new type of exploit surfaces, it is likely that the developer will have to write new protocol decode code to handle it, whereas the pattern matching approach allows the administrator to develop a custom signature much more quickly on site. Protocol decoding does offer a number of advantages, however. It minimises the chance of false positives if the protocol is well defined and enforced (although false positives can be higher where the RFC is ambiguous), and signatures can be broader and more general, allowing the IPS to detect minor variations of an exploit without separate signatures having to be implemented for each.

You may see this technique referred to in several different ways - protocol decode, protocol validation, or protocol anomaly detection, for example.
Each of these terms, strictly applied, could mean a slightly different approach to the problem. For example, we would expect a protocol decode engine to perform the sort of additional pattern matching and length checking mentioned above on the field contents, in order to detect specific exploits or buffer overflows. Pure protocol validation or Protocol Anomaly Detection engines, however, might go no further than decoding just enough to determine whether the packet follows the RFC to the letter. If it does not, they will raise an alert - but in allowing a packet to pass, they cannot be sure that its contents do not contain a means of exploit that happens to conform to the RFC. Beware the marketing hype in this particular area - no matter what architecture is used, the performance figures and detection rates in a live deployment will speak for themselves.

Heuristic-based signatures use some kind of algorithmic logic on which to base their alarm decisions. These algorithms are often statistical evaluations of the type of traffic being presented. A good example of this type of signature is one used to detect a port sweep: the signature looks for the presence of a threshold number of unique ports being touched on a particular machine. It may further restrict itself by specifying the types of packets it is interested in (SYN packets, for instance). Additionally, there may be a requirement that all the probes originate from a single source, and even that valid SYN ACK packets are seen to be returned by the host being probed. Signatures of this type will react differently on different networks and can be a significant source of false positives if not tuned correctly, requiring some threshold manipulation to make them conform to the utilisation patterns of the network they are monitoring. This type of signature may also be used to look for far more complex relationships than the simple statistical example given here.
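A minimal sketch of such a port-sweep signature follows; the threshold, time window and packet representation are illustrative values of the kind an administrator would tune to the local network.

```python
# Heuristic port-sweep signature: count distinct destination ports touched
# by SYN packets from a single source within a sliding time window.
import time
from collections import defaultdict

PORT_THRESHOLD = 20            # distinct ports before we call it a sweep
WINDOW_SECONDS = 60.0

seen = defaultdict(list)       # (src_ip, dst_ip) -> [(timestamp, dst_port), ...]

def on_syn(src_ip, dst_ip, dst_port, now=None):
    now = time.time() if now is None else now
    history = [(t, p) for (t, p) in seen[(src_ip, dst_ip)] if now - t < WINDOW_SECONDS]
    history.append((now, dst_port))
    seen[(src_ip, dst_ip)] = history
    if len({p for (_, p) in history}) >= PORT_THRESHOLD:
        # A real engine would de-duplicate these alerts.
        print(f"ALERT: possible port sweep of {dst_ip} from {src_ip}")

# A scanner probing sequential ports trips the threshold:
for port in range(1, 25):
    on_syn("192.0.2.99", "10.0.0.5", port)
```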
The final approach is to forget about trying to identify attacks directly, and to concentrate instead on ignoring everything that is considered “normal”. This is known as “anomaly-based” detection, and the basic principle is that, having identified what can be considered “normal” traffic on a network, anything that falls outside those bounds can be considered an “intrusion” - or at the very least, something worthy of note. Given its propensity for false positives, this approach is generally better suited to passive IDS than to in-line IPS devices. The primary strength of anomaly detection is its ability to recognise previously unseen attacks, since it is no longer concerned with knowing what an attack looks like - merely with knowing what does not constitute normal traffic. Its drawbacks include the necessity of training the system to separate noise and natural changes in normal network traffic (the installation of a new - perfectly legitimate - application somewhere on the network, for example). Changes in standard operations may cause false alarms, while intrusive activities that appear normal may be missed altogether. It is also difficult for these systems to name the types of attack they detect, and this technology has a long way to go before it could be considered ready for “prime time”.

Which Detection Method Is The Best?

Which detection method to choose is a difficult question and, in all honesty, not one with which most of those evaluating these products should concern themselves. Adequate performance to handle the traffic to which the sensor will be exposed, accuracy of alerts, a low incidence of false positives, and centralised management and reporting/analysis tools are far more important than how the packets are processed.

In some instances, the lines blur between methodologies to the point where they become almost indistinguishable. For example, most protocol decode engines alert the user to the presence of protocol violations that are not directly related to any known attack but are simply “anomalous” (length-based buffer overflow detection, for example) - in this instance, the engine has attributes of an anomaly-based system. As we have already mentioned, most protocol analysis systems are also reduced to performing some form of pattern matching following the protocol decode. Likewise, even the most basic pattern-matching systems perform some form of protocol analysis, even if only for a limited range of protocols. In truth, almost all Network IPS systems are already adopting a hybrid architecture. By and large, therefore, the pattern-matching versus protocol decode debate is one of religion - something for the marketing departments to shout about. Why should the average user care what happens under the hood, as long as the product does what it claims to do: detect and prevent intrusions?

There are a number of challenges to the implementation of an IPS device that do not have to be faced when deploying passive-mode IDS products. These challenges all stem from the fact that the IPS device is designed to work in-line, presenting a potential choke point and single point of failure. If a passive IDS fails, the worst that can happen is that some attempted attacks go undetected. If an in-line device fails, however, it can seriously impact the performance of the network. Perhaps latency rises to unacceptable values, or perhaps the device fails closed, in which case you have a self-inflicted Denial of Service condition on your hands. On the bright side, there will be no attacks getting through - but that is of little consolation if none of your customers can reach your e-commerce site.

Even if the IPS device does not fail altogether, it still has the potential to act as a bottleneck, increasing latency and reducing throughput as it struggles to keep up with a Gigabit or more of network traffic. Devices using off-the-shelf hardware will certainly struggle to keep up with a heavily loaded Gigabit or multi-Gigabit network, especially with a substantial signature set loaded. This is a major concern for the network administrator - who could see carefully crafted network response times go through the roof when a poorly designed IPS device is placed in-line - as well as for the security administrator, who will have to fight tooth and nail to persuade the network administrator to place this unknown quantity amongst his high-performance routers and switches.

As an integral element of the network fabric, the Network IPS device must perform much like a network switch. It must meet stringent network performance and reliability requirements as a prerequisite to deployment, since very few customers are willing to sacrifice network performance and reliability for security.
A NIPS that slows down traffic, stops good traffic, or crashes the network is of little use. Dropped packets are also an issue, since if even one of the dropped packets is part of an exploit data stream, it is possible that the entire exploit will be missed. Most high-end IPS vendors get around this problem by using custom hardware, populated with advanced FPGAs and ASICs - indeed, it is necessary to design the product to operate as much like a switch as an intrusion detection and prevention device. Those vendors who took the software-only route, initially running their products on standard Intel hardware, are now in a position to migrate to dedicated, custom-designed, generic IPS platforms which incorporate sophisticated network processing hardware and run an operating system and driver set designed for high-speed packet processing applications. This is finally giving them the opportunity to compete on a more level playing field with those vendors who opted from the outset to design their own hardware, and the inevitable comparisons between the two approaches over the coming months will be interesting to follow.

It is very difficult for any security administrator to characterise the traffic on his network with a high degree of accuracy. What is the average bandwidth? What are the peaks? Is the traffic mainly one protocol or a mix? What are the average packet size and the rate of new connections established every second - both critical parameters that can have detrimental effects on some IDS/IPS engines? If your IPS hardware is operating “on the edge”, all of these questions need to be answered as accurately as possible in order to prevent performance degradation. The alternative is to play it safe and select an IPS product that is clearly over-engineered for your particular environment.

Another potential problem is the good old false positive. The bane of the security administrator’s life (along with the script kiddie, of course), the false positive rears its ugly head when an exploit signature is not crafted carefully enough, such that legitimate traffic can cause it to fire accidentally. Whilst merely annoying in a passive IDS device, consuming time and effort on the part of the security administrator, the results can be far more serious and far-reaching in an in-line IPS appliance. Once again, the result is a self-inflicted Denial of Service condition, as the IPS device first drops the “offending” packet and then potentially blocks the entire data flow from the suspected hacker. If the traffic that triggered the false positive was part of a customer order, you can bet that the customer will not wait around for long as his entire session is torn down and all subsequent attempts to reconnect to your e-commerce site (if he decides to bother retrying at all, that is) are blocked by the well-meaning IPS.

Another potential problem with any Gigabit or multi-Gigabit IDS/IPS product is, by its very nature and capabilities, the amount of alert data it is likely to generate. On such a busy network, how many alerts will be generated in one working day? Or even one hour? Even at a relatively low alert rate of ten per second, that amounts to 36,000 alerts every hour - 864,000 alerts each and every day. The ability to tune the signature set accurately is essential in order to keep the number of alerts to an absolute minimum.
Once the alerts have been raised, however, it becomes essential to be able to process them effectively. Advanced alert handling and forensic analysis capabilities - including detailed exploit information and the ability to examine packet contents and data streams - can make or break a Gigabit/multi-Gigabit IDS/IPS product. Of course, one point in favour of IPS when compared with IDS is that, because it is designed to prevent attacks rather than just detect and log them, the burden of examining and investigating alerts - and especially the problem of rectifying damage done by successful exploits - is reduced considerably.

Requirement for effective prevention

Having pointed out the potential pitfalls facing anyone deploying these devices, what features should we look for to help avoid such problems?
The NSS Intrusion Prevention Test

The NSS Group conducted the first comprehensive IPS test of its kind, now updated in this latest testing round with a completely revised, more rigorous and extensive methodology. As part of its extensive IPS/Attack Mitigator test methodologies (see the Testing Methodology section later in this report for full details, updated for this latest test), The NSS Group subjects each product to a brutal battery of tests that verify the stability and performance of each IPS tested, determine the accuracy of its security coverage, and ensure that the device will not block legitimate traffic. If a particular IPS has been designated as NSS Approved, customers can be confident that the device will not significantly impact network/host performance, cause network/host crashes, or otherwise block legitimate traffic.

To assess the complex matrix of IPS/Attack Mitigator performance and security requirements, The NSS Group has developed a specialised lab environment that is able to exercise every facet of an IPS product. The test suite contains over 1500 individual tests that evaluate IPS products in three main areas: performance and reliability, security accuracy, and usability. This thorough review should give readers a complete perspective of the capabilities, maturity and suitability of the products tested for their particular needs.

Detection Accuracy & Breadth

This group of tests verifies that the NIPS will not block legitimate traffic (Accuracy) and is capable of detecting and blocking a wide range of common exploits (Breadth). Although breadth is extremely important, accuracy is critical, because a NIPS that blocks legitimate traffic will not remain in-line for long.

NSS has a huge library of trace files of recent exploits, including multiple variations of each exploit with different payloads, using different attack vectors, and so on. In-the-wild exploits and common attack tools, such as Metasploit and Core Impact, are also used to run real-time test cases against live vulnerable servers. NSS carefully selects test cases to fall into different categories of severity based on: whether the exploit provides root/administrator access on a widely deployed operating system or application; whether it imposes a DOS condition with no risk of system compromise; whether it is aimed at a system which is not widely deployed or not Internet-facing; whether it is purely a reconnaissance technique designed to gather information for a subsequent attack attempt; and so on.

To test false negative performance, we take a number of common exploits and alter them in subtle ways to change their superficial appearance on the wire, whilst ensuring that each exploit still performs the intended system compromise. To test false positive performance, we make use of our huge library of trace files of normal traffic - some including “suspicious” content which is not malicious - together with a number of “neutered” exploits that have been rendered completely ineffective. Whilst it is not possible to validate the entire signature set of any IPS completely, these tests demonstrate how accurately the IPS detects and blocks a wide range of common exploits and their variants, port scans and Denial of Service attempts, whilst remaining resistant to false positive alerts.

All detection tests are repeated twice.
The first run is with the sensor deployed in-line in blocking mode, using the default policy/recommended settings provided out of the box by the vendor. This is the way most of these devices will be deployed initially, and the number of test cases detected and blocked in each category is recorded. The second run is performed after the policy has been tuned to enable any low-priority or audit-only signatures which may be disabled by default. No product or signature updates are allowed during the tests.

Naturally, Rate-Based IPS devices will not respond to the same attack traffic as Content-Based devices. For those devices, therefore, the Detection Accuracy tests involve detecting and mitigating a wide range of rate-based attacks such as port scans, SYN floods, connection floods, and so on. We note which of these are mitigated completely, which are mitigated partially, and which require the use of built-in firewall capabilities.

Resistance To Evasion Techniques

These tests verify that the IPS is capable of detecting and blocking basic exploits when they are subjected to various common evasion techniques. An IPS that cannot detect attacks subjected to these “script kiddie” evasion techniques is easily bypassed. The tests consist of eight parts, of which only the final section is applicable to Rate-Based devices.
For each of the evasion techniques, we note whether (i) the attempted attack is blocked successfully (the primary aim of any IPS device); (ii) the attempted attack is detected and an alert raised in any form; and (iii) the exploit is successfully “decoded” to provide an accurate alert relating to the original exploit, rather than alerting purely on the anomalous traffic produced by the evasion technique itself.

Stateful Operation

If the IPS is tracking TCP session state, it has the potential to introduce a denial of service condition when the session table becomes full (too many connections) or when it cannot keep up with the creation of new sessions (too many connections per second). As with latency and bandwidth, the number of connections supported by the IPS and its connections-per-second rate should be matched to the network. For example, a fully saturated Gigabit Ethernet link can handle around 22,000 5KByte transfers per second. Assuming each connection lasts 20 seconds, the IPS should be able to handle roughly 440,000 simultaneous connections. These numbers scale proportionately for slower networks. Any IPS that does not offer these capabilities will impact the performance of Web or e-commerce servers.
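This arithmetic is easy to re-run for a specific network. The sketch below makes the assumptions explicit; the ~10 per cent allowance for protocol overhead is our own, chosen to approximate the figures quoted above.

```python
# Concurrent connection estimate for a saturated link.
link_bps = 1_000_000_000             # fully saturated Gigabit Ethernet
transfer_bytes = 5 * 1024            # 5KByte transfer
overhead = 1.10                      # assumed ~10% TCP/IP framing overhead
lifetime_secs = 20                   # assumed lifetime of each connection

transfers_per_sec = link_bps / (transfer_bytes * 8 * overhead)
concurrent = transfers_per_sec * lifetime_secs
print(f"~{transfers_per_sec:,.0f} transfers/s, ~{concurrent:,.0f} concurrent connections")
# ~22,195 transfers/s, ~443,893 concurrent connections - roughly the
# 22,000 and 440,000 figures quoted above.
```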
The aim of this section is to determine whether the IPS is capable of monitoring stateful sessions established through the device at various traffic loads without either losing state or incorrectly inferring state. An IPS that does not maintain TCP session state can flood the management console with false positive alerts. Although this should not directly impact the IPS blocking function, it can make it very hard to perform forensic analysis of the attacks. In addition, if the default behaviour of the sensor is to block all traffic for which it does not believe there is a current connection in place, an inability to maintain state under extreme conditions could result in the sensor blocking legitimate traffic by mistake.

In the first part of this test, we determine the theoretical maximum number of concurrent TCP connections that can be supported using various HTTP response sizes. We then test whether the sensor is capable of preserving state across increasing numbers of open connections up to, and exceeding, that maximum. The tests also ensure that the device continues to detect and block new exploits, while not blocking legitimate traffic, when the state tables are filled. Needless to say, the passing of any malicious traffic at any point in the tests results in an automatic FAIL.

In the final tests, we transmit a number of packets taken from capture files of valid exploits, but without first establishing a valid session with the target server. This determines resistance to stateless attack tools such as Stick and Snot. In order to receive a PASS in this test, no alerts should be raised for any of the actual exploits; however, each packet should be blocked if possible, since it represents a “broken” or “incomplete” session.

Any IPS is expected to be reliable (not to crash), never to block legitimate traffic, and not to affect network or host system performance unduly. The latency and throughput of a Network IPS or Attack Mitigation device must be on a par with the other equipment in the network in which it is deployed; in this respect, an in-line NIPS must strive to perform much more like a switch than a typical passive security device, especially where it is necessary to install more than one NIPS in the same data path.

Detection/Blocking Performance Under Load

This group of tests verifies that the IPS does not adversely impact legitimate traffic, even when new TCP connections are being created rapidly. We also verify that the sensor is capable of detecting and blocking exploits when subjected to increasing loads of background traffic, up to the maximum bandwidth claimed by the vendor, using a range of HTTP response sizes and packet sizes. An IPS that misses attacks under load can be evaded; an IPS that adversely affects legitimate background traffic will not stay in-line for long.

A fixed number of exploits is launched with zero background traffic to ensure the sensor is capable of detecting our baseline attacks. Once that has been established, increasing levels of varying types of background traffic are generated through the IPS device in order to determine the point at which the sensor begins to miss attacks. All tests are repeated with 25 per cent, 50 per cent, 75 per cent and 100 per cent loads of background traffic, up to the maximum rated throughput of the device. The tests are conducted with UDP, HTTP and mixed-protocol traffic, and include very high packet rates and TCP connection rates designed to stress the device, as well as to determine its likely performance on a “typical” live network.

Latency & User Response Times

In any network environment latency is important. Latency may impose an upper bound on throughput, and it also has an impact on interactive applications, thus affecting user response time. As such, it is important to understand the latency introduced by a NIPS and to determine the maximum acceptable delay, which will be different for each network.

There is a direct relationship between the latency introduced by a networking device and the maximum throughput allowed by that device on a single TCP connection. There is a critical value for the round trip time (RTT) of a packet in each network: if the latency is below this critical value, TCP throughput is unaffected - the line speed of the underlying network is the bottleneck instead. Above this critical value, however, TCP throughput is negatively impacted. Specifically, the maximum throughput achievable for any given TCP connection in a zero-loss network is expressed as:

throughput = window / RTT

where window is the maximum TCP window size (64 KBytes by default) and RTT is the round trip time in the network. This equation tells us that the throughput of a TCP connection is inversely proportional to network latency: if you double latency, you halve throughput. (Note that this is the TCP throughput of one connection - aggregate bandwidth is not affected by latency.)

Consider adding a NIPS to an internal Gigabit network where the RTT is 200 microseconds. The critical value for RTT in a Gigabit network is around 500 microseconds (above which it may no longer be possible to achieve 1Gbps of throughput), which means the NIPS can add a maximum of 300 microseconds to the RTT without affecting the network.
In this particular case, therefore, for an internal, high-speed deployment, the administrator may determine that his chosen IPS device needs to be capable of sub-300 microsecond latency under normal traffic loads.

Of course, the latency of an IPS device may vary significantly based on packet size, protocol complexity, the presence of attack traffic, or simply the make-up of the normal traffic passing through it. Moreover, Gigabit segments will rarely carry only a single TCP connection. Rather, a saturated Gigabit segment could be supporting hundreds, if not thousands, of TCP connections, and this multiplexing eases the impact of latency on the overall throughput of the segment. Although each of these connections carries only a fraction of the total throughput, a few connections tend to dominate, and the maximum acceptable latency for a NIPS is then determined by the utilisation of the fastest connection. For example, in a Gigabit Ethernet segment carrying 10,000 TCP connections, the fastest connection might have a throughput of 250Mbps; in this case, the critical value for round trip latency is as high as 2 milliseconds. Assuming the latency without the NIPS is 300 microseconds, an administrator may therefore determine that his chosen NIPS device must add no more than 1700 microseconds of round trip latency (850 microseconds in each direction).

Such critical value calculations matter when TCP connections achieve maximum throughput, which is the case for large data transfers. For smaller data transfers, and for non-TCP applications such as NFS, latency has a more direct impact on the user experience: response time is directly proportional to latency, so doubling latency doubles response time. In these situations, the latency of the network in which a NIPS is deployed determines the acceptable latency of the NIPS.
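Both critical values quoted above follow directly from the throughput = window / RTT relationship; a short sketch, assuming the default 64KByte window:

```python
# Critical RTT: the round trip time above which the TCP window, rather
# than the wire speed, caps a single connection's throughput.
WINDOW_BITS = 64 * 1024 * 8            # default 64KByte TCP window

def critical_rtt(throughput_bps):
    return WINDOW_BITS / throughput_bps

print(f"1 Gbps:   {critical_rtt(1e9) * 1e6:.0f} us")    # ~524 us (the ~500 us above)
print(f"250 Mbps: {critical_rtt(250e6) * 1e3:.1f} ms")  # ~2.1 ms (the ~2 ms above)
```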
Consider, then, deploying a hypothetical NIPS with 1 millisecond of one-way latency: its suitability depends entirely on the network around it. To protect networks that are accessed over the public Internet, one-way NIPS latencies in the 1-2 millisecond range would be acceptable, whereas for NIPS deployments on MAN/WAN links, latencies of well under 1 millisecond would be essential. As we have already mentioned, for deployments on internal networks, where latencies are a few hundred microseconds, NIPS latencies of less than 300 microseconds would be more appropriate. The latency of a NIPS must therefore always be evaluated in the context of the network in which it is deployed.

Network administrators have laboured long and hard to reduce latency within the corporate network to an absolute minimum. Core network devices such as switches are frequently chosen as much on their performance - packet loss and latency under all load conditions - as on any other feature. Given that Network IPS devices operate in-line, it is not surprising that they will be evaluated in a similar way. For this reason, part of The NSS Group methodology uses very similar testing techniques to those we would normally employ when testing switches (in order to determine packet latency), in addition to measuring application latency. This group of tests determines the effect the IPS sensor has on the traffic passing through it under various load conditions: high packet latency will lower TCP throughput, while high application latency will create a poor user experience.

Bi-directional network latency for a range of differently-sized UDP packets is measured under three test conditions: with no load, with HTTP traffic at half the maximum rated load of the device, and while the device is under a heavy SYN flood attack (up to 10 per cent of the rated throughput of the sensor). Spirent Avalanche and Reflector devices are also used to generate HTTP sessions through the device in order to gauge how any increase in latency impacts the user experience in terms of failed connections and increased Web response times. This “application latency” is measured both with no background load and while the device is under attack.

Stability & Reliability

These tests verify the stability of the IPS device under various extreme conditions. Long-term stability is critical for an in-line IPS device, where failure can produce network outages. In the first part of this test, we expose the external interface of the sensor to a constant stream of attacks over an extended period of time. The device is configured to block and alert, and thus this test provides an indication of the effectiveness of both the blocking and the alert handling mechanisms. A continuous stream of exploits mixed with some legitimate sessions is transmitted through the sensor at a maximum rate of 90 per cent of the claimed throughput of the device for eight hours, with no additional background traffic. The device is expected to remain operational and stable throughout this test, blocking 100 per cent of recognisable exploits, raising an alert for each, and passing as close to 100 per cent of legitimate traffic as possible. If any recognisable exploits are passed - whether because of the volume of traffic or because the IPS device fails open for any reason - this results in a FAIL.
If an excessive amount of legitimate traffic is blocked - whether because of the volume of traffic or because the IPS device fails closed for any reason - this also results in a FAIL.

In the second part of the test, we stress the protocol stack of the device under test by exposing it to malformed traffic from the ISIC test tool for eight hours. The device is expected to remain operational, and capable of detecting and blocking exploits, throughout the test in order to attain a PASS.

We also scan the management interface for open ports and active services and report on known vulnerabilities, and we stress the protocol stack of the management interface of the NIPS by exposing it to malformed traffic from the ISIC test tool. The device is expected to remain (a) operational and capable of detecting and blocking exploits, and (b) capable of communicating in both directions with the management server/console, throughout the test in order to attain a PASS. We also note whether the sensor detects the ISIC attacks even though they are targeted at the management port.

After quantitatively evaluating the network performance and security effectiveness of the IPS, we qualitatively evaluate the features and usability of the product. This evaluation provides the reader with valuable insight into the product's features and into how easy it is to install the IPS and perform common, day-to-day operations from the management console. Areas evaluated include installation, configuration, policy editing, alert handling, and reporting and analysis. Key test criteria in each of these areas are specified in the test methodology, and these are used as the basis for the evaluation.