
Appendix A - The Test Equipment

Spirent Communications SmartBits SMB-6000/SMB-600

Spirent Communications (www.spirentcom.com) provides network and communications testing solutions for developers, manufacturers, evaluators and operators of communications equipment, networks and services.

The SmartBits 6000 (and its smaller sibling the SmartBits 600) are multi-port network performance analysis systems designed to measure the performance limits of complex network configurations and network devices.


Figure 1 - Spirent: SmartBits SMB-6000

The SmartBits 6000 is a high port density network performance analysis test system. Each chassis can hold up to 12 SmartModules in various combinations to support up to 48 Gigabit Ethernet ports, 96 10/100 Mbps Ethernet ports, 24 POS (Packet over SONET) ports, 24 SmartMetrics Gigabit ports, or a mixture of these port types. Multiple SmartBits 6000 chassis can also be daisy-chained together to achieve even higher port densities.

The SmartBits 6000 is controlled from a Windows-based PC or a UNIX workstation through a 10/100 Mbps Ethernet connection. Control is via a “soft front panel” SmartWindow application, and the system also includes SmartApplications software, which automates industry standard performance tests as defined in RFC 1242 and RFC 2544.
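To make the automated procedure concrete, the following is a minimal Python sketch of the binary-search logic that underlies an RFC 2544 zero-loss throughput test. The run_trial function is a hypothetical placeholder for driving the actual test hardware, and the 0.5 per cent resolution is an arbitrary assumption:

  # Minimal sketch of an RFC 2544-style throughput search: find the highest
  # offered load (as a percentage of line rate) at which the device under
  # test forwards every frame without loss.

  def run_trial(offered_load_pct: float) -> bool:
      """Placeholder: offer traffic at the given load for a fixed duration
      and return True if no frames were lost."""
      raise NotImplementedError

  def rfc2544_throughput(resolution_pct: float = 0.5) -> float:
      low, high = 0.0, 100.0
      best = 0.0
      while high - low > resolution_pct:
          mid = (low + high) / 2
          if run_trial(mid):      # zero loss: try a higher load
              best, low = mid, mid
          else:                   # loss seen: back off
              high = mid
      return best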

Spirent’s SmartBits SMB-600 chassis is a portable and compact version of the SMB-6000, providing all the same features and holding up to two modules. It can support up to 8 Gigabit Ethernet ports, 16 10/100 Mbps Ethernet ports, 4 POS (Packet over SONET) ports, 4 SmartMetrics Gigabit ports, or a mixture of these port types.

Spirent has recently introduced a new generation of SmartBits network and Internet test systems called TeraMetrics. The TeraMetrics open architecture is the foundation for a new family of SmartBits test systems designed to meet the accelerating demands, complexity, increased speeds and scalability of terabit switching (with interface speeds of up to 10 gigabits per second).

SmartBits Applications

The range of SmartBits products also includes a set of software tools that allow SmartBits systems to be used for a variety of applications, ranging from industry standard tests to specific applications for new and emerging Internet and data technologies.

Those used extensively within NSS include:

SmartWindow - SmartBits virtual front panel. Within SmartWindow, the test engineer simply needs to select a protocol, set class of service parameters, and then test any of the following: NIC cards, servers, bridges, cable modems, xDSL modems, switches, routers, VLANs, firewalls, live networks, or multimedia scenarios.

SmartApplications - Provides automated performance analysis for bridges, switches, and routers per RFC 1242 (Benchmarking Terminology for Network Interconnection Devices) and RFC 2544 (Benchmarking Methodology for Network Interconnect Devices). Tests are available for Ethernet, ATM, and Frame Relay.


Figure 2 - Spirent: SmartFlow results screen

SmartFlow - Tests line rate QoS. Enables both forwarding and policy tests. Analyses each incoming stream to test a device's (or network's) ability to forward very large numbers of flows. Analyses the device's ability to correctly handle policies implemented in the network or device under test.

SmartTCP - Tests load balancer performance. Tests measure the TCP session performance of server load balancer devices that make forwarding decisions based on Layer 4-7 information. SmartTCP benchmarks the rate at which the device under test can establish, maintain, and tear down TCP sessions, as well as its total connection capacity.
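As a rough illustration of the metric involved, here is a single-threaded Python sketch that measures the TCP session setup/teardown rate through a device in the path. The target address is illustrative, and a real tester such as SmartTCP drives many such sessions in parallel in hardware:

  # Crude single-threaded measure of TCP sessions per second through a DUT.
  import socket
  import time

  TARGET = ("192.0.2.10", 80)   # illustrative DUT/VIP address

  def connections_per_second(duration: float = 5.0) -> float:
      done = 0
      deadline = time.monotonic() + duration
      while time.monotonic() < deadline:
          s = socket.create_connection(TARGET, timeout=2.0)  # 3-way handshake
          s.close()                                          # teardown
          done += 1
      return done / duration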

WebSuite/Firewall - Designed to simulate real-world traffic loads in order to support the testing of content delivery and network equipment. Gauges the performance of firewalls performing NAT (Network Address Translation). Determines maximum application transaction capacity. Measures application throughput with TCP acting as the transport agent. Evaluates an in-line device’s ability to deal with DoS (Denial of Service) attacks.

TeraVPN - Designed to measure the network performance of IP Virtual Private Networks. Determines IP-VPN tunnel creation capacity using IPSec protocols. Also generates UDP or HTTP traffic over each tunnel and measures data performance characteristics like packet loss, latency, and response time.


Figure 3 - Spirent: WebSuite

SmartWindow and SmartFlow are used to generate background traffic for the UDP tests in this report. WebSuite is one of the tools used to generate high-volume DoS attacks.

In general, the Spirent software is not particularly easy to use, lacking a consistent look-and-feel across the range, which makes it difficult to switch from one product to another. Not all of the software packages run on all of the SmartModules either, making it difficult to select the exact combination of hardware and software required to perform a range of tests.

However, the hardware is solid and reliable, and provides a means to generate high volumes of layer 2/3 traffic up to multi-Gigabit speeds.

Spirent Communications Avalanche

Whether you are building network equipment or providing a service, you must deliver consistent performance under all conditions.

Until now, capacity assessment at high loads has been a costly and complex process. For this reason, Spirent Communications introduced the Avalanche appliance to assist with the challenge.

At NSS we have taken a number of these capacity planning products and integrated them into our test-bed to aid in simulating real-life Internet conditions - the sort of conditions that the average user experiences daily.


Figure 4 - Spirent: Avalanche 2500

Avalanche is described by Spirent as a capacity assessment product that challenges any computing infrastructure or network device to stand up to the real-world load and complexity of the Internet or intranets.

The system generates simulated network traffic that features real-world characteristics such as connection speed, packet loss, browser emulation, user think-time and aborted transactions. This helps provide invaluable information about a site's architectural effectiveness, points of failure, modes of performance degradation, robustness under critical load, and potential performance bottlenecks.

Using Avalanche to generate Internet user traffic and the matching Reflector to emulate large clusters of data servers, it is possible to simulate the largest customer environments. Each one sports up to four copper or fibre Gigabit Ethernet ports, which are load-balanced equally between dual Intel processors when generating traffic, achieving in excess of 2Gbps of traffic per Avalanche/Reflector pair.

Between them they can set up, transfer data over, and tear down connections at rates of more than 50,000 requests per second (HTTP 1.0 with no persistence) and over 60,000 requests per second (HTTP 1.1 with persistence). They can sustain over 6,000 HTTPS requests per second with no SSL session ID re-use, generate more than 10,000 streaming requests, and simulate 1.7 million simultaneously connected users with unique IP addresses.

All this while handling cookies, IP masquerading for large numbers of addresses, traversing tens of thousands of URLs and operating under a realistic mix of traffic.

This allows realistic and accurate capacity assessment of routers, firewalls, in-line security appliances (IDS/IPS/UTM), load-balancing switches, and Web, application, and database servers.

It helps identify potential bottlenecks from the router connection all the way to the database, or can simply be used to generate a background test load of realistic traffic. Load can be specified in a number of ways, using user sessions, user sessions per second, transactions, transactions per second, connections or connections per second.

Protocols supported include HTTP/1.0, HTTP/1.1 and HTTPS (including persistence and simultaneous connection settings); RTSP/RTP (QuickTime and Real Networks); Microsoft Media Streaming; FTP; SMTP (including attachments) and POP3; DNS; voice (SIP); 802.1Q VLAN tagging; IPSec; PPPoE; and Telnet. It also supports SSL V2, SSL V3 and TLS V1, and SSL protocol parameters (version selection, cipher suites and session ID re-use), as well as allowing generation of a range of simulated Distributed Denial of Service (DDoS) attacks and replay of packet capture files.

The system also allows modelling of user behaviour, supporting such actions as use of proxies and proxy caches, use of multiple browser types, multi-level HTTP redirects, user think times, click streams, and HTTP aborts (“click-aways”). Support is provided for dynamic content sites, cookies, session IDs, HTML forms, HTTP posts, and HTTP basic and proxy authentication, and the tester can specify a list of URLs and data object parameters that can be changed on a per-transaction basis.


Figure 5 - Spirent: Avalanche Analyzer performance graphs

Avalanche includes a high-accuracy delay factor that mimics latencies in users' connections by simulating the long-lived connections that tie up networking resources. Long-lived, slow links can have a completely different effect on performance than a large number of short-lived connections, so this approach provides the ability to finely tune the test scenario for more realistic results.

As does the ability to introduce conditions that can seriously affect real-world performance such as packet loss levels, TCP/IP stack characteristics (with control over maximum segment size, slow start/congestion avoidance, VLAN tagging, IP fragmentation, and TCP timeout behaviour) and, of course, line speed.
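As a flavour of what such stack-level controls amount to on a client, here is a small Python sketch that constrains the MSS and receive window before connecting. The socket options shown are Linux-specific assumptions, and the target address is illustrative:

  # Sketch: constraining client TCP behaviour before connecting.
  import socket

  s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 536)  # cap the MSS
  s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8192)   # small receive window
  s.connect(("192.0.2.10", 80))                             # illustrative target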

User profiles can be created which enable Avalanche to mix different user types in a single test - perhaps one group of users could be running over a GSM link with high latency and heavy packet loss, whilst another group could be running over a 64K ISDN line, and yet another over a T1 connection.

While Avalanche focuses on the client activity, Reflector realistically simulates the behaviour of large Web, application, and data server environments. Combined with Avalanche it therefore provides a total solution for recreating the world's largest server environments. By generating accurate and consistent HTTP responses to Avalanche's high volume of realistic Internet user requests, Reflector tests to capacity any equipment or network connected between the two systems.

One of the most useful features of the latest release is the ability to upload custom content which can be used in HTTP requests or as e-mail body/attachments. This allows the tester to create completely real-world traffic by utilising actual Web and mail content rather than the random content generated by the default Avalanche application. In addition, it provides the means to use virus-infected or spam content to more thoroughly test Anti Virus or Anti Spam gateway devices. The ability to replay pre-prepared packet capture files also provides the means to replay exploit traffic at high speeds, in order to more thoroughly test IDS/IPS devices.


Figure 6 - Spirent: Uploading custom virus and spam content

The operating system for both units is proprietary - Unix-like in appearance - and is loaded from disk at boot time. Luckily, it is rarely necessary to get to grips with the underlying OS, since all configuration for both Avalanche and Reflector is performed via a Java-based graphical interface called Commander.

This interface is new with version 6.5 of Avalanche (NSS currently uses version 6.51), and is a huge improvement over previous releases in terms of usability and speed.

The architecture of the product changed too with release 6.5 - the operating system is now identical on both Avalanche and Reflector appliances, allowing each appliance to perform as either a client or a server (but not both at the same time - so you will always need a matched pair).

Device ports are allocated within a test specification and the test parameters (including all custom content) are uploaded to the appropriate appliances at the start of each test run. Although this makes it slower to start a test, it is an extremely flexible feature, since it allows the user to switch the client/server functionality from one side of a Device Under Test (DUT) to the other as required, without having to re-cable everything.

All test data and results are now stored on the host PC used for the Commander application rather than on the Avalanche/Reflector appliances. This provides the means to copy and backup tests and results more easily, as well as allowing the user to modify tests off-line without being connected to an Avalanche appliance.

Tests are now grouped together as Projects, and each Project shares common content, subnet, user profile and server profile information, allowing re-use where required. By creating new Projects, however, it is a simple matter to ensure that there are no clashes in content, subnet address ranges, and so on.


Figure 7 - Spirent: Creating new Tests

Tests can be copied within Projects but not, unfortunately, between them. Projects can be exported and imported (either an entire Project or selected Tests within it), providing the means to backup, restore, or duplicate Projects. All in all, test management is a huge improvement over previous versions.

An Avalanche Test consists of a sequence of phases, each of which is defined in the Test Specification. The Test Specification for the client-side consists of several sub-categories, including Load Profiles, Actions, Network Profiles, Subnets, PPP, PPPoE, Ports and Associations.

Each of these is configured via a number of tabs along the top of the screen, and where applicable, these tabs are duplicated for the server-side as well. Thus, all the old Avalanche (client) and Reflector (server) parameters are configured from the same place.

The Load Profile settings control how traffic is generated during a test. This tab allows the user to configure the required bandwidth, or number of simulated users, connections or transactions initiated (per millisecond, second, minute or hour), along with the maximum number of active simultaneous user sessions, and the duration of each phase.
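A hypothetical sketch of what such a load profile boils down to - phases with a target rate and a duration, expanded into a per-second schedule. The phase names and figures here are invented for illustration:

  from dataclasses import dataclass

  @dataclass
  class Phase:
      name: str
      sessions_per_sec: int
      duration_s: int

  profile = [
      Phase("ramp-up",    100,  30),
      Phase("steady",     500, 120),
      Phase("ramp-down",   50,  30),
  ]

  # One (phase, target-rate) entry per second of the test.
  schedule = [(p.name, p.sessions_per_sec)
              for p in profile
              for _ in range(p.duration_s)]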

The Actions tab is where the user specifies exactly what will happen during the Test - HTTP GETs, SMTP transfers, DNS requests, and so on. The “language” used to define these actions is fairly straightforward, but is getting more extensive and sophisticated with each release. The use of assigned variables and content taken from lists allows the user to dynamically alter the actions throughout the test, making for a much more realistic traffic mix.

The ability to “match” returned content against variables also allows analysis of that content during the test, which can be reported via the URL Analyzer utility. For example, NSS uses this to check for when virus-infected content from the server has been replaced by harmless content and a warning message by the DUT, thus ensuring that viruses have been detected and eliminated.

The Profile tab allows the user to create individual user profiles, specifying user actions such as the period of time for which they view a Web page (think time), how often they abandon a slow-loading page (click-away), browser type, SSL configuration, protocol used, and so on. Multiple user and server profiles can be used throughout a test.

The Network and Subnets tabs configure proxy parameters, low-level TCP parameters (MSS, fragmentation, receive window, etc.), address ranges, routing information, and even emulated line speed and packet loss for added realism.

Ports describes the physical ports to be used in the Test, and one of the great features of the recent release is the ability to use multiple ports distributed across multiple Avalanche appliances, and have Commander automatically distribute the load across those ports throughout the test. This makes Avalanche extremely scalable.

The final tab is Associations, and this enables the user to pull together all of the various profiles, actions, networks and ports, and combine them into a single Test.

Each Association can be given a different weighting, and traffic is generated according to that weighting. So, for example, it is a simple matter to have 90 per cent of the HTTP traffic from a particular network be valid requests, and 10 per cent of the traffic be infected with virus content. This can be used to simulate a wide variety of user behaviour, as well as to combine different protocols and DDoS attacks within the same Test, but running on different ports.
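The weighting scheme itself is simple to picture. A minimal Python sketch of the 90/10 clean/infected mix described above (the association names are invented for illustration):

  import random

  # Association names and weights are illustrative only.
  associations = [("clean_http", 90), ("infected_http", 10)]

  def pick_association() -> str:
      names, weights = zip(*associations)
      return random.choices(names, weights=weights, k=1)[0]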

Test Specifications are complex things to create, though there is extensive assistance available in the form of context-sensitive help in Commander, extremely useful Wizards to step you through the process, and good documentation.

Once the tests are running, there is an excellent real-time display available at the Commander console which provides detailed information on the progress of the test, transactions, network traffic, sessions, response times and use of resources.

As each test is completed, results are written to several CSV files on the local hard drive of the PC hosting Commander, and the Avalanche Analyzer utility is now fully integrated into the Commander interface. This provides extensive graphical analysis tools in a single utility, together with the ability to compare multiple Test runs on a single set of graphs. Custom graphs can be created and easily exported, and the print option provides the user with excellent finished reports.


Figure 8 - Spirent: Avalanche Analyzer output

The Spirent Avalanche/Reflector equipment is one of only a handful of devices capable of performing this type of “real world” testing concentrating on layers 4 to 7, and this type of test tool is essential when attempting to replicate high levels of real-life background traffic in order to adequately test today’s sophisticated network security products.

The operation of the GUI has improved significantly from release to release, and each new release provides a significant increase in speed of response, making the user experience much more enjoyable - the new Commander utility provides the most flexible, yet easy to use, incarnation of the software to date. The new hardware platform of the Avalanche appliances also provides a welcome increase in traffic generation performance.

The ability to generate over 2Gbps of traffic and almost 2 million simultaneous users in a single chassis (or two if you want to make use of the matched Reflector unit) makes Avalanche an essential and permanent part of our standard test rig.

Adtech AX/4000

The AX/4000 (also from Spirent Communications) is a modular, multi-port system that can currently test four different transmission technologies (IP, ATM, Ethernet, and Frame Relay) simultaneously at speeds of up to 10 Gbps. Unlike software-based testing solutions, the AX/4000’s FPGA hardware-based architecture is fast enough to provide more than one million full-rate measurements and statistics continuously and in real time.

The AX/4000 Generator and Generator/Analyser modules include tools for creating unlimited traffic variations and detail. Set-up “wizards” and logical functional blocks allow you to build complex traffic streams quickly and easily. When injected onto the network, these traffic streams can be “shaped” (to simulate constant or bursty traffic), and error conditions can even be introduced.

The controller software, available in both Windows and UNIX versions, has a very intuitive graphical user interface. Test set-up is logical and quick, and when tests are running, the software displays real-time data and statistics that are thorough and easy to understand.

With an Ethernet control module installed in the AX/4000 chassis, the system can be connected to an Ethernet-based LAN for access by remote users. Because every test module in the chassis has its own address on the network, users can access the modules they need and leave the remainder for others to use. This enables multiple users to access the same chassis simultaneously across a network.


Figure 9 - Spirent: Adtech AX/4000 monitoring screen

The AX/4000 is available with a 16-slot mainframe or a four-slot portable chassis. Both are functionally identical except for the number of available slots, and all AX/4000 components will operate in either chassis. Spirent currently produces a range of different test modules to support different test requirements and speeds, including ATM, Frame Relay, Ethernet, and IP.

The AX/4000 uses plug-in port interfaces to provide the physical interface for test modules, and these cards are interchangeable, allowing a single test module to perform tests with a variety of physical connections and speeds.

Despite the advanced traffic generation capabilities, it is for its high-speed packet capture and network monitoring capabilities that the AX/4000 (with both fibre Gigabit and copper 10/100 ports) finds itself in the NSS test rig.

In addition to providing live statistics, the analyser can also capture traffic at full wire-rate for further analysis or protocol decoding. Captures can be triggered manually or automatically based on specific events or errors and can include packets or cells received before, after, or both before and after the trigger event. The AX/4000 can maintain over 125,000 simultaneous QoS measurements per port at full rate and in real time. All statistics can be saved on disk for further analysis and for printing detailed test reports.

Assurent VRS

In order to support the extensive test suites created by NSS it is necessary to develop a high quality library of current exploits. This activity takes a disproportionate amount of time in an area outside what is considered NSS core business - security testing and certification. The solution, therefore, was to locate a partner capable of meeting our extremely high standards in terms of vulnerability research and exploit production.

The Vulnerability Research Service (VRS) from Assurent Secure Technologies, a TELUS company, provides security product vendors with timely, in-depth engineering analysis on the top five to eight security vulnerabilities that emerge each week.

Vendors use the VRS to supplement their own internal research efforts, to help improve both quality and scope of coverage, increasing the quantity of security issues addressed and range of platforms covered.

Assurent performs continuous monitoring of approximately 200 sources of information on emerging vulnerabilities (including commercial alerting feeds; vendor sources; mailing lists such as Bugtraq, NTBugtraq, Vuln-Dev and Full-Disclosure; and sources within the hacker “underground”).

Each reported vulnerability is ranked for impact and severity using the SANS CVA formula, and prioritised on this basis. Vulnerabilities are then subjected to full differential analysis (including reproduction of the vulnerability with respect to known-vulnerable, suspected-vulnerable, known-non-vulnerable, and suspected-non-vulnerable targets).

Unlike services which consolidate the fragments of information made available by vendors and individual vulnerability disclosures, Assurent's Vulnerability Research Team performs in-depth engineering analysis, with the goal of developing a complete understanding of the mechanism, preconditions, triggering conditions, and set of exposures created by each vulnerability.

Detailed engineering reports are produced within a 24-hour window when a vulnerability is ranked critical relative to the SANS CVA formula. Each report includes, but is not limited to, the following:

  • All relevant identifiers (CVE/CAN, SFID, CERT ID, CVA REF, etc.)
  • Severity and impact analysis
  • Affected product(s)
  • Problem location (executable, DLL, shared library, function or method, parameter or property, data object(s))
  • Problem mechanism (technical mechanism, and source-code level walkthrough when applicable)
  • Triggering conditions and prerequisites
  • Protocol flow diagram(s)
  • Packet decodes (both attack cases and normal traffic cases)
  • Behaviour of target during/following attack
  • Vulnerability detection mechanisms (remote identification)
  • Attack detection mechanisms (network-based detection of generic attacks and of known exploits)
  • Exploit status (published, underground, and rumoured exploits)
  • Exploit reproduction (usually including sample code)

Each report is delivered within hours of the emergence of a new issue, and provides sufficient information to permit a vendor to rapidly script a VA probe, IDS signature, or IPS filter of high quality (e.g. a signature which is able to detect all possible attempts to exercise the given vulnerability, rather than simply matching the known exploits).
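As a hedged illustration of what “scripting a VA probe” from such a report might look like, here is a minimal banner-grabbing check in Python. The service, port and version strings are entirely hypothetical:

  import socket

  # Hypothetical vulnerable version strings, as might be taken from a report.
  VULNERABLE_BANNERS = (b"ExampleFTPd 2.1", b"ExampleFTPd 2.2")

  def probe(host: str, port: int = 21) -> bool:
      """Return True if the service banner matches a known-vulnerable version."""
      with socket.create_connection((host, port), timeout=3.0) as s:
          banner = s.recv(256)
      return any(v in banner for v in VULNERABLE_BANNERS)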

In addition to the “proof of concept” exploits provided with the VRS engineering reports, Assurent produces full remote code execution (shell code) exploits for a subset of the vulnerabilities covered by the VRS service, focused on the highest-severity remotely-exploitable vulnerabilities.

In addition to delivery of research materials via e-mail, a full Web portal is provided to registered users allowing extensive search for vulnerabilities on a range of criteria, and subsequent download of research material and exploits on demand. NSS also uses Assurent’s Spyware Research Service, which provides similar research information for Spyware and Malware.

To date, we have found the quality of the research material to be second to none, and the supplied exploits and packet captures to be invaluable in IDS and IPS product testing.

Karalon Traffic IQ Pro

Traffic IQ Pro is essentially a packet replay tool, in a similar vein to the open source tcpreplay and Tomahawk tools, which can be used to verify the operation and detection capabilities of a typical IDS/IPS device (operating either in passive or in-line mode). It can also be used to validate non-proxy based packet filtering devices such as routers and firewalls.

The product is installed on a PC with a minimum of two network cards - one connected to the internal interface of the product being tested, and the other connected to the external interface. The idea is that a capture file of network traffic (normal or malicious - such as might be created from tcpdump or Ethereal) is taken by Traffic IQ and divided into the two halves of the “conversation” - one consisting of those packets sent by the client, and the other those packets sent by the server.

Each packet is then replayed in the correct sequence through the correct network card in order to arrive at the appropriate interface of the sensor. Traffic IQ can be configured to handle transmissions with or without default gateways, and with or without NAT (internal and external) as required.
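The split-and-replay idea can be sketched in a few lines of Python using scapy. Note that this naive version fires packets blindly, whereas Traffic IQ sequences them properly; the filenames and interface names are illustrative:

  from scapy.all import rdpcap, sendp, IP

  packets = rdpcap("exploit.pcap")          # illustrative capture file
  client_ip = packets[0][IP].src            # first packet's source = client

  for pkt in packets:
      # Client packets leave via eth0, server packets via eth1, so each
      # arrives at the appropriate interface of the device under test.
      iface = "eth0" if pkt[IP].src == client_ip else "eth1"
      sendp(pkt, iface=iface, verbose=False)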


Figure 10 - Traffic IQ: Configuring settings

As packets are placed on the wire, the user has the option of overwriting source/destination IP addresses, source/destination port numbers, source/destination MAC addresses and TTL values. This allows the user to replay capture files from third parties using IP address ranges which match the test network exactly, although any combination of these can be left as the originally-captured values if preferred.

As these values are overwritten, Traffic IQ checks the HTTP headers and FTP commands to ensure that any original IP addresses and/or port numbers located there (such as HTTP referrer field, FTP port command, etc.) are also replaced to match the changed addresses. An Advanced Settings tab allows this behaviour to be suppressed, leaving the original values untouched. Naturally, since a significant amount of data is being changed in each packet, all sequence numbers and checksums are recalculated using the new data before the packets are sent on their way.

Multiple sessions contained within a single capture file are handled correctly, with intelligent replacement of the IP addresses for each session.

During the replay operation, the sensor will see the correct SYN on the correct interface, followed by the SYN ACK on the opposite interface, followed by the ACK on the first interface, and so on throughout the capture file. Thus, in theory, the sensor will see the attack as it was originally played across the wire.

This does work most of the time, but it is important to realise that Traffic IQ does not actually implement a complete replacement stack, and thus with certain exploits or with badly-crafted capture files things can go wrong, resulting in invalid or out-of-sequence packets on the wire. When this happens, depending on where it happens in the replay, the sensor may quite legitimately ignore the “exploit” and no alert will be raised.

This occasionally makes it difficult to determine if the fault lies with Traffic IQ or the IPS/IDS device being tested. Traffic IQ, therefore, is no panacea - you still need a reasonable knowledge of vulnerabilities, exploits, and what a “good” capture file should look like in order to get the most out of a product like this.

In this respect, however, it is not alone, and suffers from the same shortcomings as all other replay tools. It is thus not applicable for all your exploit testing (it cannot handle badly fragmented/segmented traffic, for example). However, for those who prefer a Windows-based GUI to the Unix command line, Traffic IQ offers a much more usable user interface for basic attack recognition testing with well-crafted capture files (such as those included in the Traffic IQ exploit library) than typical open source offerings.

The open source offerings do provide their own advantages, however. Both Tomahawk and tcpreplay, for example, provide much higher transmission rates than is possible with Traffic IQ running on Windows. On the other hand, open source tools do not come with a well-stocked, ready-to-run, library of exploits.

Thus we would recommend using Traffic IQ as just one of several such tools when performing IDS/IPS testing. At the end of the day, with some “unusual” exploits which do not play nicely with capture files and replay tools (and especially when testing certain evasion techniques involving heavy fragmentation/segmentation) there is simply no substitute for running the original exploit (either using actual exploit code, or via tools such as Metasploit Framework or CORE IMPACT).

This is what NSS does in its own tests, running a mixture of replay tools such as Traffic IQ Pro, tcpreplay and Tomahawk with our own library of exploits, together with tools such as Metasploit and CORE IMPACT, and often resorting to using live exploit code where necessary. There is no one-size-fits-all shortcut or magic bullet approach to testing signature recognition capabilities. It is not easy to do well - tools such as Traffic IQ simply make it easier.

In addition to replaying individual capture files, Traffic IQ allows the administrator to create Groups and Traffic Scan Lists. These are collections of exploits which can be replayed individually or as collections, but whereas Groups are quicker to create, they utilise the same IP address, port numbers and direction (client-to-server or server-to-client) for all replayed captures, which is not always convenient.

Traffic Scan Lists, however, allow the user to define unique settings (IP address, port, direction, time limit, expected result, number of repeats, location on disk) for each capture file in the list.

This not only ensures that different IP quads are used for each exploit run, but that the resultant alerts are easier to spot in the IDS/IPS logs.

Vendor descriptions of alerts rarely match up to each other, to the CVE reference, or to Karalon’s own descriptions, and thus it makes life much easier to be able to determine that the alert entered in the log files from address 10.10.107.106 was actually generated by a particular exploit in the Traffic Scan List.


Figure 11 - Traffic IQ: Creating Traffic Scan Lists

An extensive right-click menu is available when editing Traffic Scan Lists, allowing bulk changes of IP addresses, ports, traffic direction, expected results, and so on, as well as applying auto-increments on port numbers and specified quads of the IP addresses. This is a very powerful feature which makes it extremely easy to create large Traffic Scan Lists from scratch. Once created, these lists can be saved for later recall and/or edit.

Another excellent feature is the Traffic Editor, which allows the user to edit any capture file which has been imported into Traffic IQ. The user is provided with an Ethereal-like display of the packets, and can type directly into the packet buffer to make changes. Selecting the fields in the hierarchical packet menu as required highlights the necessary bytes in the packet buffer to make changes easier. A search and replace/fill option provides the means to make mass changes within a packet or across all packets - particularly useful for foiling simple pattern-based, exploit-specific signatures by changing the overflow buffer fill character from all ‘A’s to a random mix of characters, for example. Obviously, checksums are recalculated where necessary as the capture file is saved to keep the traffic “legitimate”.
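The fill-character trick is easy to sketch with scapy: replace a run of ‘A’ padding with random printable bytes and delete the stale checksums so they are recalculated on write. The filenames and the 64-byte run length are illustrative assumptions:

  import random
  from scapy.all import rdpcap, wrpcap, IP, TCP, Raw

  packets = rdpcap("overflow.pcap")
  for pkt in packets:
      if Raw in pkt and b"A" * 64 in pkt[Raw].load:
          filler = bytes(random.randrange(0x21, 0x7e) for _ in range(64))
          pkt[Raw].load = pkt[Raw].load.replace(b"A" * 64, filler)
          del pkt[IP].chksum            # stale checksums recomputed on write
          if TCP in pkt:
              del pkt[TCP].chksum
  wrpcap("overflow_random.pcap", packets)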

As the exploits in a Traffic Scan List are run, a count of the number which succeeded (i.e. where all the packets made it through the device) or failed (i.e. the exploit was blocked successfully by the device under test) is displayed on screen. A summary of which exploits succeeded and which failed is then available via the Reports option. By switching between Audit Logging and Packet Logging in the Report Options, it is not only possible to see which exploits failed, but which packets within the capture file were blocked by the device under test. This is a very useful feature in determining whether an IPS blocked an exploit in time to prevent the actual payload from being delivered.

The Import feature allows the user to create Traffic IQ files from standard PCAPs. Nothing of note is done during the conversion process (a header is added, a checksum is created, and the PCAP is encrypted), but it is necessary - standard PCAPs cannot be run within Traffic IQ. During the import process, a description of the capture file can be added (NSS usually includes the CVE reference, BID reference and exploit description, for example), and this is displayed whenever a capture file is selected at any point in the Traffic IQ Pro interface.

Multiple attack libraries can be created and placed in separate subdirectories, from where they can be selected at run-time within Traffic IQ Pro. In this way, it is easy to switch between different libraries - NSS uses this capability to switch between the different libraries used for each of our group test reports, for example.

Many users will be satisfied using the pre-packed exploit library provided by Karalon as part of the product, in which case they would never have to worry about creating and importing their own. Most IDS/IPS products will have signatures for the majority of the Karalon exploits these days, and so as a tool to verify that your IDS/IPS appliance is detecting, logging and alerting correctly, Traffic IQ is incredibly useful.


Figure 12 - Traffic IQ: Traffic editor

However, even for those who shun the built-in library and need to create capture files from scratch from live exploits, Traffic IQ offers some key advantages - mainly in the area of ease of use and repeatability. Some of the exploits NSS runs as part of the standard IDS/IPS test suite are very complex to set up - perhaps requiring unusual combinations of operating systems and services - and it can be a painful process to recreate them over and over again for each product tested.

Using a packet capture tool (such as Ethereal or tcpdump) and the import capability of Traffic IQ, it is now only necessary to run the exploits once, before converting them to Traffic IQ traffic files. After that, they can be replayed quickly and easily (via a user-friendly GUI or command line interface) time and time again, and it is certain that every device tested sees exactly the same traffic as the rest.

NSS also receives useful trace files from our vulnerability research partner, Assurent. The problem with these, of course, is that the IP addresses could be anything.

Often they are unusable as they stand since the addresses used might fall outside the range allowed by the license key of the product being tested. Traffic IQ gives us the opportunity to use those trace files since, once they are imported, the IP and MAC addresses can be replaced as the attacks are run.

However, the product does have a number of disadvantages (which will vary in importance or relevance depending on the user): it cannot handle heavily fragmented/segmented traffic; badly constructed capture files can cause “misbehaviour” when replaying them (though as the product becomes more and more sophisticated this is an increasingly rare occurrence); it costs money (open source tools are free); and the built-in attack library has an excess of Backdoors and Trojans (many of them very obscure).

The library is constantly improving, however, with many newer attacks taken from the Metasploit Framework, obviating the need for users to obtain Metasploit and run the exploits against live servers themselves.

The biggest drawback with Traffic IQ for major testing projects is the very availability of its exploit library. It is almost impossible to use the built-in library on its own as a test for signature coverage in a competitive environment (whether it is an NSS group test or your own internal bake-off) because every vendor has access to it, and most of them will ensure that they have signatures to cover it.

Thus, as a comparison between products, it serves little purpose (unless you come across a “rogue” product that actually cannot detect most of the Karalon exploits, which would be odd in itself). This is why NSS, as a testing organisation, cannot rely on the Karalon library entirely, and is thus forced to produce its own custom capture files for each group test. Note that this is not a fault of the library itself, just the way it is being used.

However, as proof that your own device is working correctly following installation, policy configuration and/or product updates, it serves a valuable purpose.

If a test tool provider could produce a tool like this and provide regular updates to the exploit library in the manner of the average Anti Virus software vendor, then administrators would finally have the capability to audit and verify the efficacy of the latest signature pack from their IDS/IPS vendor, and such a test tool would really come into its own.

Updates to the Karalon library are appearing more frequently at the time of writing than under previous releases. Overall, the Karalon attack library is very useful for testing against both older and more recent exploits, and it also provides a wide range of “normal” traffic files (HTTP, SSH, FTP, and so on).

Regardless of whether you are using the built-in library or custom attack files, Traffic IQ Pro is an extremely useful tool for replaying these across your network in a consistent and flexible manner, making it far easier than it has ever been to test IDS/IPS products for effective attack coverage.

CORE IMPACT

Organisations are increasingly looking to penetration testing to effectively determine the risk to their network assets. CORE IMPACT, from Core Security Technologies (www.coresecurity.com) is a tool that aims to help in the process by automating as much of it as possible.

When it comes to penetration testing, most security professionals will start with various port scanners (such as nmap) and vulnerability scanners (such as Nessus) to gain some idea of what vulnerabilities exist on a network. Taking it further than the information gathering phase, however - by attempting to exploit the vulnerabilities discovered and hack the network under investigation - generally requires specialist knowledge and is not something the average corporate security administrator would attempt.
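For a sense of scale, the information-gathering starting point can be as simple as this Python connect() scan sketch - a tiny subset of what nmap reports, with an illustrative target address:

  import socket

  def scan(host: str, ports=(21, 22, 25, 80, 139, 443, 445)):
      """TCP connect() scan over a handful of common ports."""
      open_ports = []
      for port in ports:
          try:
              with socket.create_connection((host, port), timeout=1.0):
                  open_ports.append(port)
          except OSError:
              pass
      return open_ports

  print(scan("192.0.2.10"))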

CORE IMPACT is frequently mis-labelled as a vulnerability scanner, but to call it that is to do it a great injustice. In fact, as a straight vulnerability assessment tool it falls short of the likes of Nessus - but this is by design. For those who want a full-blown vulnerability assessment, IMPACT integrates fully with eEye’s Retina, Nessus, and GFI Languard.

IMPACT is, in fact, an automated penetration testing tool, which scans a range of hosts looking for vulnerabilities for which it has effective exploits. These exploits can then be launched against the vulnerable hosts to attempt to gain access (or, perhaps, create a Denial of Service condition). Having gained access to a vulnerable host, IMPACT can install Agents which provide varying levels of remote access (including directory listing, uploading and downloading files, and so on). It is even possible to use a compromised host to launch new penetration tests against other hosts on the network which may not have been visible on the initial scan. This way, the penetration tester can move from host to host within the compromised network.

CORE IMPACT thus allows the user to safely exploit vulnerabilities in the network, replicating the kinds of access an intruder could achieve, and proving actual paths of attacks that must be eliminated. The product features the CORE IMPACT Rapid Penetration Test (RPT), a step-by-step automation of the penetration testing process. From the initial information gathering phase to production of the final report, the penetration testing steps within CORE IMPACT can be run completely autonomously. The steps in this process include:

  • Information Gathering
  • Attack and Penetration
  • Local Information Gathering
  • Privilege Escalation
  • Clean Up
  • Report Generation

The Windows-based GUI provides a multi-pane view into the available penetration tests, exploit and information gathering modules, scanned hosts, detected vulnerabilities, detailed module information, module output (results), executed modules and entity properties (details of each host detected).

When all of these windows have been populated the screen can look somewhat busy, but the default layout is actually very useful and is not hard to get used to.

You can, of course, alter the layout to suit your own working methods and save it as the default, as well as perform extensive customisation throughout the package by editing the core XML files.

When starting a penetration test, the first thing the user will do is download the latest exploit modules from the Core Web site. These are produced on a regular basis - though don’t expect to find every single one reported by Bugtraq.

Having done that, a Workspace is created for the test (or an existing Workspace can be opened and added to). This is an encrypted repository for all information gathered throughout a test, and allows one machine to be used for several projects (for example, by a consultant working at several different client sites) without compromising confidentiality.

Each of the six processes listed previously is available as a Wizard in the Rapid Penetration Test window. By following each of them in turn, the average user will follow the typical “hacker methodology” recommended by every generic hacker’s handbook available on Amazon, and be able to complete a very comprehensive penetration test without recourse to experts or outside consultants. Of course, experts and consultants will also find this tool incredibly useful in their day-to-day work.


Figure 13 - CORE IMPACT: Running a penetration test

The Information Gathering step uses tools such as nmap to determine the operating system and available services, as well as full service enumeration techniques where possible, and this information is used to determine which exploits in the database may be effective against each host. It appears to identify servers correctly even when running on non-default ports, although it cannot identify specific applications (Apache vs. IIS, for example).

The ability to launch simultaneous, multiple attacks improves the speed and ease with which users can evaluate their network defences, and the user gets to specify how aggressive he wants IMPACT to be when running exploits. For example, it is possible to exclude all those exploits which would leave services in an unsafe condition, as well as exclude those tests which tend to take an excessive amount of time to complete (such as brute force password cracks).

If any of the exploits succeeds in compromising the target host, a small memory-resident Agent can be installed on the host, which is then accessible from the IMPACT console. A range of remote-control options is then available, including the ability to escalate privileges, grab passwords, install key-logging software, gather additional information about the host and its user accounts and domain memberships, take screen shots, perform directory listings, download files, upload files, delete files, execute OS commands, and so on. Once you have finished, the Agent can be remotely uninstalled leaving no trace of its - or your - presence.

Client-side attacks can be accomplished by IMPACT simulating a malicious server (Web server, for example) and serving exploit code to remote clients which connect to it.

Once finished, a wealth of information is available both on the IMPACT console and via the excellent reports. Four reports are available:

  • Executive - provides a summary of all activities
  • Activity - details all modules executed
  • Host - provides details about all hosts tested
  • Vulnerability - details all vulnerabilities successfully exploited on hosts

Reports can be created as HTML, PDF, Microsoft Word and other popular formats so that content can be easily customised and shared with auditors and other parts of the organisation.


Figure 14 - CORE IMPACT: Running exploits

For those - like the NSS team - who are looking to use this tool to create custom packet captures for replay using tools such as Traffic IQ, it is also possible to run individual modules as required. When creating your own PCAPs in this way you should run every variation of each exploit (i.e. with different target OS and different payloads) and create a PCAP for each to ensure that you are not testing only one possible attack vector with your replay tool.

Of course, the nice thing about this sort of tool is that whenever you have an IDS/IPS which fails to detect traffic replayed using a particular PCAP as malicious, you can resort to using the live exploit within IMPACT to make sure it was a real miss and not a problem with the trace file.

At the end of the day, if you run a CORE IMPACT exploit and you gain a shell on the target host without raising an alert or being blocked, your IDS/IPS has definitely failed.

Overall, we are impressed with the scope and quality of the exploits, and the product is in almost daily use in the NSS labs. For those who wish to perform penetration tests or run live exploits on a regular basis, CORE IMPACT is an essential tool.

Cisco Catalyst 6500 Series Switches

Cisco describes the Cisco Catalyst 6500 Series Switch as its premier intelligent multilayer modular switch, designed to deliver secure, converged services from the wiring closet to the core, and from the data centre to the WAN edge.

Depending on the model chosen, the 6500 Series chassis can support up to 576 10/100/1000-Mbps or 1152 10/100-Mbps Ethernet ports. High capacity network cores support multiple Gigabit and 10-Gbps trunks, and 400 million packets per second (Mpps).


Figure 15 - Cisco: Catalyst 6500 Series Switches

Operational consistency is provided by 3-, 6-, 9-, and 13-slot chassis configurations sharing a common set of modules, Cisco IOS Software, Cisco Catalyst Operating System Software, and network management tools. The Catalyst 6500 Series product portfolio includes:

  • Cisco Catalyst 6500 Series Supervisor Engine 720
  • Cisco Catalyst 6500 Supervisor Engine 32, offering next-generation feature consistency with the Supervisor Engine 720, targeted for the access layer
  • High density 4-port 10 Gigabit Ethernet module
  • 48-port 10/100/1000 Ethernet module for access and data centre
  • 24- and 48-port gigabit Ethernet module for data centre deployments
  • 48-port 10/100 with integrated TDR support and optional, field-upgradeable 802.3af Power over Ethernet (PoE) support
  • 96-port 10/100 RJ-21 module with optional, field-upgradeable 802.3af PoE support
  • Enhanced 48-port 10/100/1000 supporting jumbo frames and optional, field-upgradeable 802.3af PoE support
  • High density 48-port 100 FX Small Form-Factor Pluggable-based module
  • 6000W power supply and chassis enhancements for high-density PoE deployment support

Network security is provided by integrating existing Cisco security solutions - including intrusion detection, firewall, VPN, and Secure Sockets Layer (SSL) - into existing networks.

It is worth noting that, despite the multi-Gigabit throughput claims for this device, careful choice of blades and supervisor modules is required in order to realise these claims. For example, the 6148 cards provide 48 copper Gigabit ports, but there are only six buffer groups with 8 ports in each group, giving a 6Gbps limit on this card no matter how many ports are used.

Stepping up to the 6748 cards, however, provides individual port buffers, allowing a possible 48Gbps per card, although the total bandwidth from the 6748 cards to the backplane is limited to 40Gbps.

Likewise, the SUP2 Supervisor module offers a much lower throughput in terms of packets per second across the backplane than the SUP720, which provides up to 30 million packets per second throughput. Mixing cards and Supervisor modules should also be done with care - adding a 6148 card to a chassis with a SUP720 effectively reduces the throughput to 15 million packets per second.

Capacity planning should be performed with care. The current NSS configuration utilises multiple 6506 chassis with SUP720 Supervisor cards, multiple 6748 line cards (for copper connections), 6516 cards (for fibre connections) and dual 3000W power supplies per-chassis, to provide a maximum throughput of 30 million pps and 40Gbps across the backplane of each switch.

Open Source Tools

Other replay tools which are used in the NSS Group labs include tcpreplay (tcpreplay.sourceforge.net) and Tomahawk (tomahawk.sourceforge.net). It is worth noting that most of the same caveats/problems mentioned previously regarding Traffic IQ also apply to these products, since many of those issues are common to any replay tool.

While neither product offers a simple GUI interface, and neither comes with a pre-built exploit library, both tcpreplay and Tomahawk do provide certain advantages over Traffic IQ which make them worthy of inclusion in your security tool-box.

At the end of the day, it is worth reiterating that very often the best - and sometimes only - way to trigger an alert on your IDS/IPS is to run the live exploit.

For this reason, NSS also makes extensive use of penetration testing tools such as CORE IMPACT (www.coresecurity.com) and its closest open source equivalent, Metasploit Framework (www.metasploit.org).

Tomahawk

Tomahawk is a tool for testing the performance and in-line blocking capabilities of IPS devices. Tomahawk is run on a machine with three NICs: one for management and two for testing. The two test NICs (eth0 and eth1, by default) are typically connected through a switch, crossover cable, or Network-based IPS device.�

Briefly, Tomahawk divides a packet trace (pcap) into two parts: those generated by the client and those generated by the server. Tomahawk parses the packet trace one packet at a time. The first time an IP address is seen in a file, the IP address is associated with the client if it is in the IP source address field of the packet, or associated with the server if it is in the destination field. For example, consider a pcap consisting of a standard three-way TCP handshake that contains three packets:

Packet 1 (SYN)       ip.src = 172.16.5.5   ip.dest = 172.16.5.4

Packet 2 (SYN-ACK)   ip.src = 172.16.5.4   ip.dest = 172.16.5.5

Packet 3 (ACK)       ip.src = 172.16.5.5   ip.dest = 172.16.5.4

When Tomahawk reads the first packet, the address 172.16.5.5 is encountered for the first time in the source field, and the address 172.16.5.4 is encountered for the first time in the destination field. The address 172.16.5.5 is therefore associated with the client, while the address 172.16.5.4 is associated with the server.

When the system replays the attack, server packets are transmitted on eth1, and client packets are transmitted on eth0. To replay the sequence above, Tomahawk begins by sending packet 1 (a client packet) over eth0. When this packet arrives on eth1, it sends packet 2 on eth1 and waits for packet 3 to arrive on eth0. When the packet arrives, Tomahawk sends packet 3 on eth0. When the last packet arrives on eth1, Tomahawk outputs a message that it has completed the pcap.
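The association rule is easy to express in code. A minimal Python sketch, using the handshake above as input; the waiting/retry machinery and the verdict rule (completed means missed, timed out means blocked - see below) are noted in comments rather than implemented:

  def classify(packets):
      """packets: (src_ip, dst_ip) tuples in capture order."""
      clients, servers = set(), set()
      for src, dst in packets:
          if src not in clients | servers:
              clients.add(src)      # first seen as a source: client side
          if dst not in clients | servers:
              servers.add(dst)      # first seen as a destination: server side
      return clients, servers

  handshake = [("172.16.5.5", "172.16.5.4"),   # SYN
               ("172.16.5.4", "172.16.5.5"),   # SYN-ACK
               ("172.16.5.5", "172.16.5.4")]   # ACK
  print(classify(handshake))  # ({'172.16.5.5'}, {'172.16.5.4'})
  # Replay then sends client packets on eth0 and server packets on eth1,
  # retrying on timeout; a session that completes means the IPS missed the
  # attack, while one that times out means the IPS blocked it.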

If a packet is lost, the sender retries after a timeout period. The sender infers that the packet is lost if it does not receive the next packet in sequence within the timeout. For example, if Tomahawk sends packet 2 on eth1 and does not receive it on eth0 within the timeout, it resends packet 2.

If progress is not made after a specified number of retransmissions, the session is aborted and Tomahawk outputs a message indicating that the session has timed out.

To ensure that the packet is correctly routed through the switches, the Ethernet MAC addresses are rewritten when the packet is sent. In addition, the IP addresses are also rewritten and the packet’s checksums updated accordingly. Thus, in the example above, when Tomahawk sends packet 1, the IP source address of the packet that appears on the wire is 10.0.0.1, and the IP destination address is 10.0.0.2 (the start IP address for a session can be specified on the command line).

When the replay is finished, either because all the packets made it through or because the number of retransmissions was exceeded, Tomahawk reports whether the replay completed or timed out. When testing an IPS, if Tomahawk reports that the pcap containing the attack has timed out, then the IPS has blocked the attack. If Tomahawk reports that the pcap has completed successfully, then the IPS missed the attack, regardless of what the log indicates.

To ramp up the bandwidth, Tomahawk can replay multiple copies of the same pcap in parallel (each copy is given its own block of IP addresses). This allows the tool to be used for more than straight exploit detection testing (as noted, Tomahawk does not come with a pre-built library of exploits - you have to produce your own from scratch). It can also be used for reliability, repeatability and performance testing.

By instructing Tomahawk to run multiple packet traces simultaneously, it is possible to generate significant traffic loads from a single Tomahawk server. Bear in mind that it is possible to use clean, non-exploit packet traces too, such as simple HTTP transactions, which allows the generation of normal background traffic for your tests.

Repeatability testing ensures the IPS is deterministic. If we take a sample set of, say, 50 attacks, an IPS will perhaps block 40 and miss 10 (you can include “normal” non-exploit packet traces in the mix to ensure that some traffic will pass through the device). As a test, that does not tell us much.

But if we were to run those same 50 attacks 100,000 times each, then we expect to see 4 million sessions blocked, and 1 million allowed through. In this test, however, the device is under pressure for a greater amount of time, and this is where you may begin to see evidence of “leakage” (where some exploits are allowed through the device in error - very bad), or blocking of legitimate traffic (which may or may not be serious depending on the amount of traffic that was blocked and how great the tolerance in your own environment for this condition).

Tomahawk currently has its problems and some inherent limitations. The most obvious limitation is the fact that it can only operate across a layer 2 network. Thus, unlike Traffic IQ, it is not possible to have the generated traffic pass through routers - instead, the device under test must be connected directly to the NICs of the Tomahawk host, or connected to them via switches.

Like Traffic IQ, it cannot handle PCAPs containing badly fragmented/segmented traffic, and multiple sessions in the same PCAP can sometimes confuse it. The current version also contains a rather more serious bug.

Should an IPS send TCP reset packets to client and server when it drops the session, Tomahawk believes the RST on the internal interface to be a packet from its own sending NIC (because it appears to come from the client) and it duly reports that it has seen traffic from the client. This will appear as though the device failed to block the exploit when, in fact, it blocked it correctly. Currently, the only workaround is to disable RST transmission on the IPS (not always possible, depending on the device being tested).

tcpreplay

Originally written by Matt Undy of Anzen Computing, and more recently maintained by Matt Bing of NFR and Aaron Turner, tcpreplay is a tool designed to replay saved tcpdump files at arbitrary speeds. It provides a variety of features for replaying traffic for both passive sniffer devices as well as in-line devices such as routers, firewalls, and IPS.

tcpreplay 2.x includes the following tools:

  • tcpreplay - the tool for replaying capture files
  • tcpprep - this can be used to pre-process the capture file, performing all the calculations and packet re-writing necessary to replay the capture file across two interfaces. The results are written to a cache file which is subsequently used by tcpreplay - the original PCAP remains untouched
  • capinfo - tool for printing statistics about capture files
  • pcapmerge - a tool for merging pcap files into one larger one
  • flowreplay - a tool for replaying connections

Although originally designed to support a single interface (for passive IDS testing), recent versions have added multiple interface support such that tcpreplay offers similar functionality to Tomahawk and Traffic IQ when it comes to testing in-line devices.

Where tcpreplay really scores, however, is in its post-processing options. IP addresses can be rewritten or randomised, MAC addresses can be rewritten, packets truncated by tcpdump can be “repaired”, transmission speeds can be tightly controlled, and specific packets or ranges of packets in the pcap file can be replayed alone, ignoring the rest. Very few of these options (with the exception of rewriting IP and MAC addresses, obviously) are currently supported by other tools such as Tomahawk or Traffic IQ.

One other very interesting direction being taken by the tcpreplay author is the flowreplay tool. This is intended to provide the ability to test servers and Host IDS/IPS products by playing only the client-side of a pcap against a real service on the target host. Although in the early days of development, this feature is unique amongst replay tools at the moment, and is one capability in which we will be maintaining a close interest.

Metasploit Framework

The Metasploit Framework is an advanced open-source platform for developing, testing, and using exploit code. This project initially started off as a portable network game and has evolved into a powerful tool for penetration testing, exploit development, and vulnerability research.

The Framework was written in the Perl scripting language and includes various components written in C, assembler, and Python.

The widespread support for the Perl language allows the Framework to run on almost any Unix-like system under its default configuration. A customised Cygwin environment is provided for users of Windows-based operating systems. The project core is dual-licensed under the GPLv2 and Perl Artistic Licenses, allowing it to be used in both open-source and commercial projects.

This project can be roughly compared to commercial offerings such as CORE IMPACT. The major difference between Metasploit and commercial products, however, is the focus - while the commercial products need to provide the latest exploits and an intuitive GUI, Metasploit was designed to facilitate research and experimentation with new technologies.

A text-based console provides access to the exploits and payloads, and by following the documentation it is not difficult to have your first remote shell prompt blinking away in front of you.


Figure 16 - Metasploit: Running exploits via the Web console

Different targets are often available for each exploit, allowing the user to attack different versions of Windows, for example (since different exploit code may be required for each). In addition, numerous payloads are available, providing a choice of remote cmd.exe shell, installing VNC, adding new admin accounts to the victim host, and so on.

For those who prefer the comfort of a graphical interface, a Web console is available, providing a point-and-click means of running the same exploits. With this, instead of issuing several text commands one after the other to configure and launch an exploit, all that is necessary is to complete the configuration form on screen (target port, target IP, payload required, and so on) and click on the Launch button.

One of the great things about Metasploit is the regularity of the exploit updates - it is not unusual to see zero-day exploits appear on the Web site. If there is a down side to this product, it is that not all of the exploits are written with the same amount of care, and they can occasionally prove to be unreliable.

Overall, however, Metasploit is an excellent tool, and should be considered essential for any penetration tester worth his salt.
