Web Application Firewall Testing Methodology

The aim of this procedure is to provide a thorough test of all the main components of a Web Application Firewall device in a controlled and repeatable manner, and in the most “real world” environment that can be simulated in a test lab.

The network is 100/1000Mbit Ethernet with CAT 5e cabling and a mix of Allied Telesyn AT-9816GB and AT-9812T switches (these have a mix of fibre and copper Gigabit interfaces). All devices are expected to be provided as appliances - if software-only, the supplier pre-installs the software on the recommended hardware platform. The sensor is configured as an in-line device during testing (where possible). There is no firewall protecting the target subnet.

Traffic generation equipment - such as the machines generating exploits, the Spirent Avalanche and the Spirent SmartBits transmit port - is connected to the “external” network, whilst the “receiving” equipment - such as the “target” hosts for the exploits, the Spirent Reflector and the SmartBits receive port - is connected to the internal network. The device under test is connected between two “gateway” switches - one at the edge of the external network, and one at the edge of the internal network. All “normal” network traffic, background load traffic and exploit traffic is therefore transmitted through the device under test, from external to internal.

The same traffic is mirrored to a single SPAN port of the external gateway switch, to which an Adtech network monitoring device is connected. The Adtech AX/4000 monitors the mirrored traffic to ensure that the total amount of traffic never exceeds 1Gbps (which would invalidate the test run).

The management interface (where available) is used to connect the appliance to the management console on a private subnet. This ensures that the sensor and console can communicate even when the target subnet is subjected to heavy loads, in addition to preventing attacks on the console itself.
The aim of this section is to verify that the sensor is capable of detecting and blocking a wide range of common attacks accurately, whilst remaining resistant to false positives. All tests in this section are completed with no background network load.

Whilst it is not possible to validate completely the entire range of attacks which can be detected by any sensor, this test attempts to demonstrate how accurately the sensor detects and blocks a wide range of common attacks as detailed in the Open Web Application Security Project (OWASP) Top Ten Most Critical Web Application Security Vulnerabilities (www.owasp.org). All attacks are run with no load on the network and are not subjected to any evasion techniques.

To demonstrate these vulnerabilities and provide a vulnerable application for the device to protect, we use a mixture of applications developed in-house specifically for this test, and the WebGoat application. WebGoat is a full J2EE web application designed to teach web application security lessons, and is available from the OWASP Web site. Most of the tests we devised for this methodology can be simulated or approximated using WebGoat.

Our attack suite contains over 30 basic attacks (plus variants) covering the following areas:
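As an illustration of how such probes are delivered to the back-end application, the sketch below builds the form-encoded body for a SQL-injection attempt against a WebGoat-style login lesson. The field names and payloads here are illustrative assumptions, not the methodology's actual attack suite.

```python
from urllib.parse import urlencode

# Representative probe strings from two OWASP Top Ten categories
# (SQL injection and cross-site scripting); the real suite contains
# over 30 such attacks plus variants.
SQLI_PAYLOAD = "' OR '1'='1' --"
XSS_PAYLOAD = "<script>alert('xss')</script>"

def build_login_probe(username_payload, password="x"):
    """Return a form-encoded POST body carrying an injection payload.

    The field names ("username"/"password") are hypothetical - a real
    target such as a WebGoat lesson defines its own form fields.
    """
    return urlencode({"username": username_payload,
                      "password": password})

body = build_login_probe(SQLI_PAYLOAD)
```

A protected application should never pass such a body through to its database layer unaltered; the device under test is expected to block the request and raise an alert identifying the injection attempt.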
Each of these attacks is proven to be effective against the back-end applications, and we expect the device under test to prevent all of them and raise appropriate alerts. Tuning of the application policy and/or signatures is allowed in order to ensure that the device is able to protect the application completely.

The aim of this section is to verify that the sensor is capable of detecting and blocking basic HTTP attacks when subjected to various common evasion techniques. A number of common HTTP exploits are launched whilst applying various URL obfuscation techniques made popular by the Whisker Web server vulnerability scanner, including:
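Two widely documented Whisker obfuscations - hex (percent) encoding of URL characters, and the insertion of self-referencing directories - can be sketched as follows. The example path is a placeholder, not a real target.

```python
def hex_encode(path):
    """Percent-encode every character of each path segment (Whisker's
    URL-encoding evasion); slashes are left intact so the path still
    resolves to the same resource."""
    return "/".join("".join("%%%02x" % ord(c) for c in seg)
                    for seg in path.split("/"))

def self_reference(path):
    """Insert "./" self-referencing directories before each segment -
    the path is unchanged semantically but no longer matches a naive
    string signature."""
    segments = [s for s in path.split("/") if s]
    return "/" + "/".join("./" + s for s in segments)

# Example obfuscations of a placeholder exploit URL:
# hex_encode("/cgi-bin/test.cgi")     -> "/%63%67%69%2d%62%69%6e/..."
# self_reference("/cgi-bin/test.cgi") -> "/./cgi-bin/./test.cgi"
```

A well-implemented detection engine normalises the URL before signature matching, so both forms should still be recognised as the original exploit.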
For each of the evasion techniques, we note (i) whether the attempted attack is blocked successfully, (ii) whether the attempted attack is detected and an alert raised in any form, and (iii) whether the exploit is successfully “decoded” to provide an accurate alert relating to the original exploit, rather than alerting purely on anomalous traffic detected as a result of the evasion technique itself.

Section 3 - Performance Under Load (No Security Policies Applied)

The aim of this section is to verify the maximum raw processing performance of the sensor. This is achieved by measuring packet processing performance and HTTP performance with no security policies applied.

Test 3.1 - UDP Traffic To Random Valid Ports

This test uses UDP packets of varying sizes generated by a SmartBits SMB6000 with LAN-3301A 10/100/1000Mbps TeraMetrics cards installed.

A constant stream of the appropriate mix of packets - with variable source IP addresses and ports transmitting to a single fixed IP address/port - is transmitted through the sensor (bi-directionally, maximum of 1Gbps). Each packet contains dummy data, and is targeted at a valid port (not port 80) on a valid IP address on the target subnet. The percentage load and packets per second (pps) figures are verified by the Adtech Gigabit network monitoring tool before each test begins. Multiple tests are run and averages taken where necessary.

This traffic does not attempt to simulate any form of “real world” network condition. The aim of this test is purely to determine the raw packet processing capability of the sensor, and its effectiveness at passing “useless” packets quickly in order to pass potential attack packets to the detection engine.
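The kind of stream Test 3.1 generates in hardware can be imitated in software, at much lower rates, as follows. The target address and port are placeholders, and binding a fresh socket per packet is simply a cheap way to vary the source port.

```python
import socket

def make_dummy_payload(size):
    """Dummy data of the requested size, as carried by the fixed-size
    UDP frames in this test."""
    return bytes(size)

def send_udp_stream(target_ip, count=1000, size=512):
    """Send `count` UDP packets of `size` bytes from varying source
    ports to a single fixed address/port - a pale software imitation
    of the SmartBits stream (addresses here are placeholders)."""
    dest_port = 5001              # an arbitrary "valid port (not port 80)"
    payload = make_dummy_payload(size)
    for _ in range(count):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", 0))        # OS picks a fresh ephemeral source port
        sock.sendto(payload, (target_ip, dest_port))
        sock.close()
```

A hardware generator is of course required to approach line rate; the point of the sketch is only to show the shape of the traffic (single destination, varying sources, dummy payload).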
Test 3.2 - Maximum Capacity HTTP Traffic

The use of multiple Spirent Communications Avalanche 2500 and Reflector 2500 devices allows us to create true “real world” traffic at speeds of up to 4.2 Gbps as a background load for our tests. Our Avalanche configuration is capable of simulating over 5 million users, with over 5 million concurrent sessions, and over 200,000 HTTP requests per second.

By creating genuine session-based traffic with varying session lengths, this provides a test environment that is as close to “real world” as it is possible to achieve in a lab environment, whilst ensuring absolute accuracy and repeatability. The aim of this test is to stress the HTTP detection engine to determine the maximum capacity (connections per second and Mbits per second) at varying packet sizes and using varying sizes of HTTP response.

Each transaction consists of a single HTTP GET request (thus connections per second = transactions per second in all these tests) and there are no transaction delays (i.e. the Web server responds immediately to all requests). All packets contain valid payload (a mix of binary and ASCII objects) and address data, though there are no embedded links to other pages or elements which need to be defined within a security context. This test provides an excellent representation of a live network (albeit one biased towards “sterile” HTTP traffic) at various network loads.
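The one-GET-per-connection measurement model can be sketched in miniature against a local server; the handler, response size, and request count below are assumptions for illustration, nothing like the Avalanche's multi-gigabit rates.

```python
import http.client
import http.server
import threading
import time

class _OneObjectHandler(http.server.BaseHTTPRequestHandler):
    """Serve a fixed-size body for every GET, like the single-object
    transactions in this test (the 1 KB size is a placeholder)."""
    def do_GET(self):
        body = b"x" * 1024
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep per-request logging quiet
        pass

def measure_connections_per_second(n=100):
    """One GET per connection, so connections/sec == transactions/sec,
    exactly as in this test."""
    srv = http.server.ThreadingHTTPServer(("127.0.0.1", 0),
                                          _OneObjectHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    port = srv.server_address[1]
    start = time.perf_counter()
    for _ in range(n):
        conn = http.client.HTTPConnection("127.0.0.1", port)
        conn.request("GET", "/")
        conn.getresponse().read()
        conn.close()
    elapsed = time.perf_counter() - start
    srv.shutdown()
    return n / elapsed
```

Because each connection carries exactly one transaction with no delays, the figure returned is directly comparable to the connections-per-second metric reported for the device under test.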
Section 4 - Performance Under Load (Security Policies Applied)

The aim of this section is to verify the ability of the sensor to handle varying loads of HTTP traffic whilst being expected to inspect the contents of that traffic according to the security policies applied.

Test 4.1 - Standard Spirent Avalanche Traffic

The default traffic generated by the Spirent Avalanche/Reflector devices is very realistic and provides for very repeatable tests. However, it is also very “sterile”, inasmuch as it does not contain a variety of elements which must be inspected and acted upon by the device under test. These tests, therefore, are expected to provide the best results under all test conditions.
Test 4.2 - NSS Home Page Only - No Images

In order to introduce a realistic workload for the device under test, we switch out the Spirent Reflector and replace it with a high-specification Web server running IIS and containing a copy of the NSS Group Web site. The dual-processor Dell PowerEdge 1750 server is capable of 1Gbps throughput, so as not to act as a bottleneck for the device under test.
Test 4.3 - NSS Home Page + Ten Associated Images

The aim of this test is the same as for Test 4.2, but this time we add a number of images to the transaction. This is intended to provide a more realistic - and easier - test scenario for the device under test, since it does not have to inspect or process the image data; thus, although the transaction response size is greater, the processing overhead is lower than in the previous test.

It is not often that the device under test will be faced with a high load of transactions consisting purely of HTML or application code which all needs to be inspected and processed - most Web pages contain a mixture of data types. Thus, this test is intended to replicate a typical “real world” situation.
Test 4.4 - Maximum Open Connections

It is important that the device under test can not only support a sufficiently high rate of connection set-up and tear-down, but that it can also support a sufficiently large number of simultaneous connections. This test determines its maximum capacity.
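The open-and-hold approach behind such a measurement can be sketched as below; the host and port are placeholders for a server sitting behind the device under test, and a real test ramps the count far higher than this.

```python
import socket

def open_simultaneous_connections(host, port, target):
    """Open up to `target` TCP connections and hold them all open,
    stopping at the first failure - the count reached is the observed
    maximum simultaneous-connection capacity of the path."""
    held = []
    for _ in range(target):
        try:
            held.append(socket.create_connection((host, port), timeout=2))
        except OSError:
            break           # device (or local OS) limit reached
    return held
```

Note that connections are deliberately not closed inside the loop - tear-down rate is what Tests 4.1-4.3 exercise; this test cares only about how many can be held open at once.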
Section 5 - Latency & User Response Times

The aim of this section is to determine the effect the sensor has on the traffic passing through it under various load conditions. Should a device impose an unacceptably high degree of latency on the packets passing through it, a network or security administrator would need to think carefully before installing such a device in front of a high-capacity Web server or server farm.

We use Spirent SmartFlow software and the SmartBits SMB6000 with Gigabit TeraMetrics cards to create multiple traffic flows through the appliance and measure the basic throughput, packet loss, and latency through the sensor. This test - whilst not indicative of real-life network traffic - provides an indication of how much the sensor affects the traffic flow through it. This data is particularly useful for network administrators who need to gauge the effect of any form of in-line device which is likely to be placed at critical points within the corporate network.

SmartFlow runs through several iterations of the test, varying the traffic load from 250Mbps to 1Gbps bi-directionally (or up to the maximum rated throughput of the device should this be less than 1Gbps) in steps of 250Mbps. This is repeated for a range of packet sizes (256 bytes, 550 bytes and 1000 bytes) of UDP traffic with variable IP addresses and ports. At each iteration of the test, SmartFlow records the number of packets dropped, together with average and maximum latency.
Section 6 - Stability & Reliability

These tests attempt to verify the stability of the device under test under various extreme conditions. Long-term stability is particularly important for an in-line protection device, where failure can produce network outages.
Section 7 - Management and Configuration

The aim of this section is to determine the features of the management system, together with the ability of the management port on the device under test to resist attack. Clearly the ability to manage the alert data collected by the sensor is a critical part of any firewall system. For this reason, an attacker could decide that it is more effective to attack the management interface of the device than the detection interface. Given access to the management network, this interface is often more visible and more easily subverted than the detection interface, and with the management interface disabled, the administrator has no means of knowing his network is under attack.