
Gigabit IDS Performance Testing

The aim of this procedure (based on V3.0 of the NSS Group IDS Testing Methodology) is to provide a thorough test of all the main components of a Gigabit IDS device in a controlled and repeatable manner, and in the most “real world” environment which it is possible to simulate in a test lab. 

The Test Environment

The network is 100/1000Mbit Ethernet with CAT 5e cabling and Cisco Catalyst 6500-Series switches (these have a mix of fibre and copper Gigabit interfaces). All devices are expected to be provided as appliances - if software-only, the supplier pre-installs the software on the recommended hardware platform. There is no firewall protecting the target network. 

Traffic generation equipment - such as the machines generating exploits, the Spirent Avalanche and the Spirent SmartBits transmit port - is connected to the “external” network, whilst the “receiving” equipment - such as the “target” hosts for the exploits, the Spirent Reflector and the SmartBits receive port - is connected to the internal network.  

All “normal” network traffic, background load traffic and exploit traffic crossing the switches is mirrored to two SPAN ports on the Catalyst 6503 (the same traffic is mirrored to both simultaneously). The sensor’s detection interface is connected to one SPAN port, whilst an Adtech network monitoring device monitors the same mirrored traffic via the second SPAN port to ensure that the total amount of traffic never exceeds 1Gbps (which would invalidate the test run).  

The sensor is bound to the Gigabit network interface in “stealth mode” wherever that is supported (i.e. no IP address) and a separate interface is used to connect the sensor to the management console on a private subnet. This ensures that the sensor and console can communicate even when the target subnet is subjected to heavy loads, in addition to preventing attacks on the console itself.  

Section 1 - Detection Engine

The aim of this section is to verify that the sensor is capable of detecting and logging a wide range of common exploits accurately, whilst remaining resistant to false positives. All tests in this section are completed with no background network load. The latest signature pack is acquired from the vendor, and sensors are deployed with all available attack signatures enabled (some audit/informational signatures may be disabled). 

Test 1.1 - Attack Recognition

Whilst it is not possible to validate completely the entire signature set of any product, this test attempts to demonstrate how accurately the sensor detects and logs a wide range of common exploits, port scans, and Denial of Service attempts. All exploits are run with no load on the network and no IP fragmentation.  

Our attack suite contains over 100 basic exploits (plus variants) covering the following areas:

Test 1.1.1 - Backdoors (standard ports and random ports)

Test 1.1.2 - DNS/WINS

Test 1.1.3 - DoS

Test 1.1.4 - False negatives (common exploits which have been modified to remove or alter obvious “triggers” - this ensures that the signatures are coded for the underlying vulnerability rather than a particular exploit)

Test 1.1.5 - Finger

Test 1.1.6 - FTP

Test 1.1.7 - HTTP

Test 1.1.8 - ICMP (including unsolicited ICMP response)

Test 1.1.9 - Reconnaissance

Test 1.1.10 - RPC

Test 1.1.11 - SSH

Test 1.1.12 - Telnet

Test 1.1.13 - Database

Test 1.1.14 - Mail

Test 1.1.15 - Voice 

A wide range of vulnerable target operating systems and applications are used, and the majority of the attacks are successful, gaining root shell or administrator privileges on the target machine. 

We expect all the attacks to be reported in as straightforward and clear a manner as possible (i.e. an “RDS MDAC attack” should be reported as such, rather than a “Generic IIS Attack”). Wherever possible, attacks should be identified by their assigned CVE reference. It will also be noted when a response to an exploit is considered too “noisy”, generating multiple similar or identical alerts for the same attack. 

The “default” Attack Recognition Rating (ARR) is expressed as the percentage of exploits detected against the total number of exploits launched, using the default signature set as received by NSS - this demonstrates how effective the sensor can be when simply deploying the default configuration.  
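
Expressed as a calculation, the ARR is simply the detection count divided by the number of exploits launched. A minimal sketch in Python (the counts shown are hypothetical, purely to illustrate the arithmetic):

```python
def attack_recognition_rating(detected: int, launched: int) -> float:
    """Percentage of launched exploits that the sensor detected and logged."""
    return 100.0 * detected / launched

# Hypothetical counts: 87 alerts for 100 exploits launched gives a
# default ARR of 87.0%.
print(f"default ARR = {attack_recognition_rating(87, 100):.1f}%")
```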

Following the initial test run, each vendor is provided with a list of CVE references of the attacks missed, and is then allowed 48 hours to produce an updated signature set. This updated signature set must be released to the general public as a standard signature/product update before the report is published - this ensures that vendors do not attempt to code signatures just for this test.  

The sensor is then exposed to a second round of identical tests and the “custom” ARR is determined. This demonstrates how effective the vendor is at responding to a requirement for new or updated signatures. 

Both the default and custom ARR figures are reported.  

Test 1.2 - Resistance To False Positives

The aim of this test is to demonstrate how likely it is that a sensor raises a false positive alert.

We have a number of trace files of normal traffic with “suspicious” content, together with several “neutered” exploits which have been rendered completely ineffective. If a signature has been coded for a specific piece of exploit code rather than the underlying vulnerability, or if it relies purely on pattern matching, the sensor may raise false alarms on this harmless traffic.  

The device attains a “PASS” for each test case if it does not raise an alert. Raising an alert on any of these test cases is considered a “FAIL”, since none of the “exploits” used in this test represents a genuine threat.  

Test 1.2.1 - False positives 

Section 2 - Evasion

The aim of this section is to verify that the sensor is capable of detecting and logging basic exploits when subjected to varying common evasion techniques. 

Test 2.1 - Baselines

The aim of this test is to establish that the sensor is capable of detecting a number of common basic attacks (our baseline suite) in their normal state, with no evasion techniques applied. 

Test 2.1.1 - Baseline attack replay 

Test 2.2 - Packet Fragmentation and Stream Segmentation

The baseline HTTP attacks are repeated, running them through fragroute using various evasion techniques (a short illustrative sketch follows this list), including: 

Test 2.2.1 - IP fragmentation - ordered 8 byte fragments

Test 2.2.2 - IP fragmentation - ordered 24 byte fragments

Test 2.2.3 - IP fragmentation - out of order 8 byte fragments

Test 2.2.4 - IP fragmentation - ordered 8 byte fragments, duplicate last packet

Test 2.2.5 - IP fragmentation - out of order 8 byte fragments, duplicate last packet

Test 2.2.6 - IP fragmentation - ordered 8 byte fragments, reorder fragments in reverse

Test 2.2.7 - IP fragmentation - ordered 16 byte fragments, fragment overlap (favour new)

Test 2.2.8 - IP fragmentation - ordered 16 byte fragments, fragment overlap (favour old)

Test 2.2.9 - TCP segmentation - ordered 1 byte segments, interleaved duplicate segments with invalid TCP checksums

Test 2.2.10 - TCP segmentation - ordered 1 byte segments, interleaved duplicate segments with null TCP control flags

Test 2.2.11 - TCP segmentation - ordered 1 byte segments, interleaved duplicate segments with requests to resync sequence numbers mid-stream

Test 2.2.12 - TCP segmentation - ordered 1 byte segments, duplicate last packet

Test 2.2.13 - TCP segmentation - ordered 2 byte segments, segment overlap (favour new)

Test 2.2.14 - TCP segmentation - ordered 1 byte segments, interleaved duplicate segments with out-of-window sequence numbers

Test 2.2.15 - TCP segmentation - out of order 1 byte segments

Test 2.2.16 - TCP segmentation - out of order 1 byte segments, interleaved duplicate segments with faked retransmits

Test 2.2.17 - TCP segmentation - ordered 1 byte segments, segment overlap (favour new)

Test 2.2.18 - TCP segmentation - out of order 1 byte segments, PAWS elimination (interleaved dup segs with older TCP timestamp options)

Test 2.2.19 - IP fragmentation - out of order 8 byte fragments, interleaved duplicate packets scheduled for later delivery

Test 2.2.20 - TCP segmentation - ordered 16 byte segments, segment overlap (favour new (Unix)) 

For each of the evasion techniques, we note if (i) the attempted attack is detected and an alert raised in any form, and (ii) if the exploit is successfully “decoded” to provide an accurate alert relating to the original exploit, rather than alerting purely on anomalous traffic detected as a result of the evasion technique itself. 
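
The tests themselves use fragroute to mangle the traffic; the Scapy sketch below merely illustrates what two of the simpler techniques look like on the wire (the address and payload are placeholders, not one of our baseline exploits):

```python
import random
from scapy.all import IP, TCP, fragment, send

# Placeholder packet - the real tests replay the baseline HTTP attacks.
exploit = IP(dst="192.0.2.10") / TCP(dport=80) / b"GET /vuln.cgi HTTP/1.0\r\n\r\n"

frags = fragment(exploit, fragsize=8)  # ordered 8-byte fragments (Test 2.2.1)
send(frags)

random.shuffle(frags)                  # out-of-order delivery (Test 2.2.3)
send(frags)
```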

Test 2.3 - URL Obfuscation

The baseline HTTP attacks are repeated, this time applying various URL obfuscation techniques made popular by the Whisker Web server vulnerability scanner (a short illustrative sketch follows this list), including: 

Test 2.3.1 - URL encoding

Test 2.3.2 - /./ directory insertion

Test 2.3.3 - Premature URL ending

Test 2.3.4 - Long URL

Test 2.3.5 - Fake parameter

Test 2.3.6 - TAB separation

Test 2.3.7 - Case sensitivity

Test 2.3.8 - Windows \ delimiter

Test 2.3.9 - Session splicing 

For each of the evasion techniques, we note if (i) the attempted attack is detected and an alert raised in any form, and (ii) if the exploit is successfully “decoded” to provide an accurate alert relating to the original exploit, rather than alerting purely on anomalous traffic detected as a result of the evasion technique itself. 
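
As an illustration of what several of these techniques do to a request URI, the Python sketch below applies three of them to a placeholder path (the path itself is not one of our baseline exploits):

```python
def url_encode(path: str) -> str:
    """Test 2.3.1: hex-encode each character except the separators."""
    return "".join(c if c == "/" else f"%{ord(c):02X}" for c in path)

def self_reference(path: str) -> str:
    """Test 2.3.2: insert /./ before every path component."""
    return path.replace("/", "/./")

def windows_delimiter(path: str) -> str:
    """Test 2.3.8: use the Windows backslash separator."""
    return path.replace("/", "\\")

uri = "/cgi-bin/test.cgi"               # placeholder request path
for obfuscate in (url_encode, self_reference, windows_delimiter):
    print(f"{obfuscate.__name__}: {obfuscate(uri)}")
```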

Test 2.4 - Miscellaneous Evasion Techniques

Certain baseline attacks are repeated, and are subjected to various protocol- or exploit-specific evasion techniques, including: 

Test 2.4.1 - Altering default ports/passwords for backdoors

Test 2.4.2 - Inserting spaces in FTP command lines

Test 2.4.3 - Inserting non-text Telnet opcodes in FTP data stream

Test 2.4.4 - Polymorphic mutation (ADMmutate)

Test 2.4.5 - Altering protocol and RPC PROC numbers

Test 2.4.6 - RPC record fragging (MS-RPC and Sun)

Test 2.4.7 - HTTP exploits to non-standard port 

For each of the evasion techniques, we note if (i) the attempted attack is detected and an alert raised in any form, and (ii) if the exploit is successfully “decoded” to provide an accurate alert relating to the original exploit, rather than alerting purely on anomalous traffic detected as a result of the evasion technique itself. 

Section 3 - Stateful Operation

The aim of this section is to be able to determine whether the sensor is capable of monitoring stateful sessions established across the network at various traffic loads without either losing state or incorrectly inferring state. 

Test 3.1 - Stateless Attack Replay (Mid-Flows)

This test determines whether the sensor is resistant to stateless attack flooding tools - these utilities are used to generate large numbers of false alerts on the protected subnet using valid source and destination addresses and a range of protocols.  

The main characteristic of many flooding tools is that they generate single packets containing “trigger” patterns without first attempting to establish a connection with the target server. Whilst this can be effective in raising alerts for stateless protocols such as UDP and ICMP, such tools should never be capable of raising an alert for exploits based on stateful protocols such as FTP and HTTP. 

In this test, we transmit a number of packets taken from capture files of valid exploits, but without first establishing a valid session with the target server. We also remove the session tear-down and acknowledgement packets so that the sensor cannot “infer” that a valid connection was made.  

In order to receive a “PASS” in this test, no alerts should be raised for any of the actual exploits (although “mid-flow” alerts are permitted).  
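
A minimal sketch of how such a “mid-flow” replay might be produced, assuming Scapy and a placeholder capture file: only data-bearing packets are retransmitted, with the handshake and tear-down stripped.

```python
from scapy.all import rdpcap, sendp, TCP

packets = rdpcap("captured_exploit.pcap")     # placeholder capture file
SYN, FIN, RST = 0x02, 0x01, 0x04

for pkt in packets:
    if TCP in pkt:
        flags = int(pkt[TCP].flags)
        if flags & (SYN | FIN | RST):         # strip handshake and tear-down
            continue
        if len(pkt[TCP].payload) == 0:        # strip bare ACKs too
            continue
    sendp(pkt)                                # replay data packets only
```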

Test 3.1.1 - Stateless attack replay 

Test 3.2 - Simultaneous Open Connections (default settings)

This test determines whether the sensor is capable of preserving state across increasing numbers of open connections, as well as continuing to detect and log new exploits when the state tables are filled. This test is run using the default sensor settings (no tuning of sensor parameters). 

A legitimate HTTP session is opened and the first packet of a two-packet exploit is transmitted. The Spirent Avalanche (on the “external” network) then opens various numbers of TCP sessions from 10,000 to 1,000,000 (one million) with the Spirent Reflector (on the “internal” network) and the sensor will be expected to track each of these legitimate sessions. The initial HTTP session is then completed with the second half of the exploit and the session is closed. If the sensor is still maintaining state on the first session established, the exploit will be recorded. If the state tables have been exhausted, the exploit string will be seen as a non-stateful attack, and will thus be ignored.

Both halves of the exploit are required to trigger an alert - a device will fail the test if it fails to generate an alert after the second packet is transmitted, or if it raises an alert on either half of the exploit on its own.  
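
The failure mode this test probes can be illustrated with a toy state table that evicts its oldest entry when full (the capacity and session counts below are illustrative, not those of any real sensor):

```python
from collections import OrderedDict

class StateTable:
    """Toy model: a fixed-size flow table that drops the oldest flow when full."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.flows = OrderedDict()

    def open_flow(self, flow_id: str) -> None:
        if len(self.flows) >= self.capacity:
            self.flows.popitem(last=False)    # evict the oldest flow
        self.flows[flow_id] = "ESTABLISHED"

table = StateTable(capacity=100_000)          # illustrative capacity
table.open_flow("exploit-session")            # first half of the exploit sent
for i in range(1_000_000):                    # Avalanche opens 1M sessions
    table.open_flow(f"background-{i}")

# The exploit session's state has been evicted, so the second half of
# the exploit arrives "mid-flow" and the attack is missed.
print("exploit-session" in table.flows)       # -> False
```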

At each step, we ensure that the sensor is still capable of detecting freshly-launched exploits once all the connections are open.  

We then launch further exploits whilst the Avalanche/Reflector devices “churn” connections at the maximum level set, ensuring that the sensor is still capable of detecting and logging freshly-launched exploits as old connections are torn down and new ones recreated constantly.  

Test 3.2.1 - Attack Detection: This test ensures that the sensor continues to detect new exploits as the number of open sessions is increased in stages from 10,000 to 1,000,000

Test 3.2.2 - State Preservation: This test ensures that the sensor maintains the state of pre-existing sessions as the number of open sessions is increased in stages from 10,000 to 1,000,000 

Test 3.3 - Simultaneous Open Connections (after tuning)

Test 3.2 is repeated after any tuning recommended by the vendor (if applicable) to increase the size of the state tables.  

Test 3.3.1 - Attack Detection: As Test 3.2.1 following tuning

Test 3.3.2 - State Preservation: As Test 3.2.2 following tuning

Section 4 - Detection Performance Under Load

The aim of this section is to verify that the sensor is capable of detecting and logging exploits when subjected to increasing loads of background traffic up to the maximum bandwidth supported as claimed by the vendor.  

The latest signature pack is acquired from the vendor, and sensors are deployed with all available attack signatures enabled (some audit/informational signatures may be disabled). Each sensor is configured to detect and log suspicious traffic - no session-termination techniques are employed (i.e. RST packets from the sensor). 

Our “attacker” host launches a fixed number of exploits at a target host on the subnet being monitored by the sensor. The Adtech network monitor is configured to monitor the same traffic on a second switch SPAN port (consisting of normal, exploit and background traffic), and is capable of reporting the total number of exploit packets seen on the wire as verification. 

A fixed number of exploits are launched with zero background traffic to ensure the sensor is capable of detecting our baseline attacks. Once that has been established, increasing levels of varying types of background traffic are generated across the network in order to determine the point at which the sensor begins to miss attacks - all tests are repeated with 250Mbps, 500Mbps, 750Mbps and 1000Mbps of background traffic (or up to the maximum rated throughput of the device should this be less than 1Gbps).

At all stages, the Adtech network monitor verifies both the overall traffic loading and the total number of exploits seen on the target subnet. An additional confirmation is provided by the target host which reports the number of exploits which actually made it through. 

The Attack Detection Rate (ADR) at each background load is expressed as a percentage of the number of exploits detected by the sensor against the number verified by the Adtech network monitor and target host.  
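
As with the ARR, the arithmetic is straightforward; the sketch below uses hypothetical counts purely to show how the ADR is derived at each load level:

```python
verified = 100                                      # exploits confirmed on the wire
detected = {250: 100, 500: 98, 750: 91, 1000: 74}   # hypothetical alerts per load (Mbps)

for load, hits in detected.items():
    print(f"{load} Mbps background load: ADR = {100.0 * hits / verified:.0f}%")
```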

Test 4.1 - UDP Traffic To Random Valid Ports

This test uses UDP packets of varying sizes generated by a SmartBits SMB6000 with LAN-3301A 10/100/1000Mbps TeraMetrics cards installed. A constant stream of the appropriate mix of packets - with variable source IP addresses and ports transmitting to a single fixed IP address/port - is transmitted across the network protected by the sensor. Each packet contains dummy data, and is targeted at a valid port on a valid IP address on the target subnet. The percentage load and packets per second (pps) figures are verified by the Adtech Gigabit network monitoring tool throughout each test. Multiple tests are run and averages taken where necessary. 

This traffic does not attempt to simulate any form of “real world” network condition, and the aim of this test is purely to determine the raw packet processing capability of the sensor, and its effectiveness at passing “useless” packets quickly in order to pass potential attack packets to the detection engine. 
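
Line-rate generation requires the SmartBits hardware, but the shape of the traffic is simple to describe; the Scapy sketch below illustrates it (all addresses are placeholders):

```python
import random
from scapy.all import IP, UDP, Raw, send

DST, DPORT, SIZE = "192.0.2.20", 53, 256    # fixed target, 256-byte packets

def random_packet():
    """One fixed-size UDP packet: random source IP/port, dummy payload."""
    src = f"198.51.100.{random.randint(1, 254)}"
    sport = random.randint(1024, 65535)
    pkt = IP(src=src, dst=DST) / UDP(sport=sport, dport=DPORT)
    pad = SIZE - len(pkt)                   # pad out to the target size
    return pkt / Raw(b"\x00" * pad)

send([random_packet() for _ in range(1000)])
```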

Test 4.1.1 - 256 byte packets - maximum 453,000 packets per second: This test is roughly equivalent to a 40,000 connections per second test in our HTTP stress tests (in terms of packet size and packets per second rate), and has been included to provide an indication of the packet processing performance under the most extreme conditions for most devices - it is unlikely that any real-life network will ever see network loads of over 450,000 256-byte packets per second unless under severe DoS conditions. Repeated with 250Mbps, 500Mbps, 750Mbps and 1000Mbps of background traffic.

Test 4.1.2 - 550 byte packets - maximum 220,000 packets per second: This test has been included to provide a comparison with our “real world” packet mixes, since the average packet size is similar. No sessions are created during this test and there is very little for the detection engine to do in the way of protocol analysis. This test provides a reasonable indication of the ability of a device to process packets from the wire on an “average” network, and we would expect all products to demonstrate good performance levels. Repeated with 250Mbps, 500Mbps, 750Mbps and 1000Mbps of background traffic.

Test 4.1.3 - 1000 byte packets - maximum 122,000 packets per second: This test is the complete opposite of the 256 byte packet test, in that we would expect every single product to be capable of returning 100 per cent detection rates across the board when using only 1000 byte packets. We have included this test mainly to demonstrate how easy it is to achieve good results using large packets - beware of test results that only quote performance figures using similar (or larger) packet sizes. Repeated with 250Mbps, 500Mbps, 750Mbps and 1000Mbps of background traffic.
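
The quoted maximum rates follow directly from Gigabit line speed and per-frame Ethernet overhead, assuming the stated sizes are full frame sizes and each frame also costs an 8-byte preamble plus a 12-byte inter-frame gap on the wire:

```python
LINE_RATE = 1_000_000_000           # bits per second
OVERHEAD = 20                       # preamble (8) + inter-frame gap (12), bytes

for size in (256, 550, 1000):
    pps = LINE_RATE / ((size + OVERHEAD) * 8)
    print(f"{size:4d}-byte frames: {pps:,.0f} packets/sec")
# -> ~452,899, ~219,298 and ~122,549 pps, matching the figures above.
```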

Test 4.2 - HTTP “Maximum Stress” Traffic With No Transaction Delays

HTTP is the most widely used protocol on most normal networks, as well as being one of the most widely exploited. The sheer number of potential exploits for the protocol makes a pure HTTP network something of a torture test for the average sensor.  

The use of multiple Spirent Communications Avalanche 2500 and Reflector 2500 devices allows us to create true “real world” traffic at speeds of up to 4.2 Gbps as a background load for our tests. Our Avalanche configuration is capable of simulating over 5 million users, with over 5 million concurrent sessions, and over 200,000 HTTP requests per second.  

By creating genuine session-based traffic with varying session lengths, the sensor is forced to track valid sessions, thus ensuring a higher workload than for simple packet-based background traffic. This provides a test environment that is as close to “real world” as it is possible to achieve in a lab environment, whilst ensuring absolute accuracy and repeatability. 

The aim of this test is to stress the HTTP detection engine and determine how the sensor copes with detecting and logging exploits under network loads of varying average packet size and varying connections per second.  

Each transaction consists of a single HTTP GET request and there are no transaction delays (i.e. the Web server responds immediately to all requests). All packets contain valid payload (a mix of binary and ASCII objects) and address data, and this test provides an excellent representation of a live network (albeit one biased towards HTTP traffic) at various network loads. 

Test 4.2.1 - Max 2,500 new connections per second - average packet size 1000 bytes - maximum 120,000 packets per second. Repeated with 250Mbps, 500Mbps, 750Mbps and 1000Mbps of background traffic. With relatively low connection rates and large packet sizes, we expect all sensors to achieve 100% detection rates throughout this test.

Test 4.2.2 - Max 5,000 new connections per second - average packet size 540 bytes - maximum 225,000 packets per second. Repeated with 250Mbps, 500Mbps, 750Mbps and 1000Mbps of background traffic. With average connection rates and average packet sizes, this is a good approximation of a real-world production network, and we expect all sensors to perform well in this test.

Test 4.2.3 - Max 10,000 new connections per second - average packet size 440 bytes - maximum 275,000 packets per second. Repeated with 250Mbps, 500Mbps, 750Mbps and 1000Mbps of background traffic. With average packet sizes coupled with very high connection rates, this is a strenuous test for any sensor, and represents a very heavily used production network.

Test 4.2.4 - Max 20,000 new connections per second - average packet size 360 bytes - maximum 320,000 packets per second. Repeated with 250Mbps, 500Mbps, 750Mbps and 1000Mbps of background traffic. With small packet sizes and extremely high connection rates this is an extreme test for any sensor. Not many sensors will perform well at all levels of this test.

Test 4.3 - HTTP “Maximum Stress” Traffic With Transaction Delays

This test is identical to Test 4.2 except that we introduce a 10 second delay in the server response for each transaction. This has the effect of maintaining a high number of open connections throughout the test, thus forcing the sensor to utilise additional resources to track those connections. 
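
The open-connection maxima quoted below follow directly from the connection rate multiplied by the response delay:

```python
# Connection rate x transaction delay = steady-state open connections.
for conns_per_sec in (5_000, 10_000):
    print(f"{conns_per_sec:,} conn/s x 10 s delay = "
          f"{conns_per_sec * 10:,} open connections")
# -> 50,000 and 100,000 open connections (Tests 4.3.1 and 4.3.2)
```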

Test 4.3.1 - Max 5,000 new connections per second - average packet size 540 bytes - maximum 225,000 packets per second - 10 second transaction delay - maximum 50,000 open connections. Repeated with 250Mbps, 500Mbps, 750Mbps and 1000Mbps of background traffic. With average connection rates and average packet sizes, this is a good approximation of a real-world production network, and we expect all sensors to perform well in this test.

Test 4.3.2 - Max 10,000 new connections per second - average packet size 440 bytes - maximum 275,000 packets per second - 10 second transaction delay - maximum 100,000 open connections. Repeated with 250Mbps, 500Mbps, 750Mbps and 1000Mbps of background traffic. With average packet sizes coupled with very high connection rates, this is a strenuous test for any sensor, and represents a very heavily used production network. 

Test 4.4 - Protocol Mix Traffic

Whereas 4.2 and 4.3 provide a pure HTTP environment with varying connection rates and average packet sizes, the aim of this test is to simulate more of a “real world” environment by introducing additional protocols whilst still maintaining a precisely repeatable and consistent background traffic load (something rarely seen in a real world environment).  

The result is a background traffic load that, whilst less stressful than previous tests, is closer to what may be found on a heavily-utilised “normal” production network. 

Test 4.4.1 - 72% HTTP traffic (540 byte packets) + 20% FTP traffic + 6% UDP traffic (256 byte packets). Max 4000 new connections per second - average packet size 540 bytes - maximum 215,000 packets per second - maximum 750 open connections. Repeated with 250Mbps, 500Mbps, 750Mbps and 1000Mbps of background traffic. With lower connection rates, average packet sizes and a common protocol mix, this is a good approximation of a heavily-used production network, and we expect all sensors to perform well throughout this test. 

Test 4.5 - “Real World” Traffic

This is as close as it is possible to come to a true “real world” environment under lab conditions. For this test we eliminate the Reflector device and substitute an IIS Web server installed on a dual Xeon server with Gigabit interface and 4GB RAM. This server holds a copy of The NSS Group Web site, and is capable of handling a full 1Gbps of traffic. We then capture a typical client browsing session on the NSS Group Web site, accessing a mixture of menu pages, lengthy text-based reports and multiple graphical images (screen shots) and have Avalanche replay multiple identical sessions from up to 20 new users per second.

It should be noted that whereas the goal of the previous tests is a very predictable, consistent and repeatable background load that never varies, the nature of this test means that traffic is slightly more “bursty” in nature. 

Test 4.5.1 - Pure HTTP Traffic (simulated browsing session on NSS Web site): Max 4700 new connections per second - 20 new users per second - average packet size 560 bytes - maximum 210,000 packets per second. Repeated with 250Mbps, 500Mbps, 750Mbps and 950Mbps of background traffic. With genuine server responses to genuine browser sessions consisting of multiple transactions per session, this is a typical “real world” background load, albeit pure HTTP. Although the Web server and the network are extremely busy at the higher traffic loads, the “normal” connection rates and packet sizes should enable most sensors to perform well at all load levels in this test.

Test 4.5.2 - Protocol Mix: 72% HTTP traffic (simulated browsing sessions as in 4.5.1) + 20% FTP traffic + 6% UDP traffic (256 byte packets). Max 3700 new connections per second - average packet size 560 bytes - maximum 205,000 packets per second - maximum 1,500 open connections. Repeated with 250Mbps, 500Mbps, 750Mbps and 950Mbps of background traffic. With genuine server responses to genuine browser sessions consisting of multiple transactions per session, mixed with FTP and UDP traffic, this is a typical “real world” background load. Although the Web server and the network are extremely busy at the higher traffic loads, the “normal” connection rates and packet sizes should enable most sensors to perform well at all load levels in this test. 

To gauge the effects of varying (smaller) packet sizes, connection rates and transaction delays, the results of tests 4.2 - 4.4 should be examined. 

Section 5 - Stability & Reliability

These tests attempt to verify the stability of the device under test under various extreme conditions.  

Test 5.1.1 - ISIC/ESIC/TCPSIC/UDPSIC/ICMPSIC: This test attempts to stress the protocol stack of the device under test by exposing it to traffic from the ISIC test tool. The ISIC test tool host is connected to the external network, and the ISIC target is located on the internal network protected by the sensor. ISIC traffic is transmitted across the network and the effects noted. Traffic load is a maximum of 350Mbps and 60,000 packets per second (average packet size is 690 bytes). Results are presented as a simple PASS/FAIL - the device is expected to remain operational and capable of detecting and logging exploits throughout the test to attain a PASS.  

Section 6 - Management and Configuration

The aim of this section is to determine the features of the management system, together with the ability of the management port on the device under test to resist attack.

Test 6.1 - Management Port

Clearly the ability to manage the alert data collected by the sensor is a critical part of any IDS/IPS system. For this reason, an attacker could decide that it is more effective to attack the management interface of the device than the detection interface. 

Given access to the management network, this interface is often more visible and more easily subverted than the detection interface, and with the management interface disabled, the administrator has no means of knowing his network is under attack. 

Test 6.1.1 - Open ports: We will scan the open ports and active services on the management interface and report on known vulnerabilities.
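
The methodology does not mandate a particular scanner; a minimal TCP connect() sweep of the well-known ports might look like this (the management address is a placeholder):

```python
import socket

MGMT_IP = "10.0.0.1"    # placeholder management interface address

for port in range(1, 1025):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.2)
        if s.connect_ex((MGMT_IP, port)) == 0:
            print(f"port {port}/tcp open")
```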

Test 6.1.2 - ISIC/ESIC/TCPSIC/UDPSIC/ICMPSIC: This test attempts to stress the protocol stack of the management interface of the device under test by exposing it to traffic from the ISIC test tool. The ISIC test tool host is connected directly to the management interface of the sensor, and that interface is also the target. ISIC traffic is transmitted to the management interface of the sensor (without passing through any other network equipment) and the effects noted. Traffic load is a maximum of 350Mbps and 60,000 packets per second (average packet size is 690 bytes). Results are presented as a simple PASS/FAIL - the device is expected to remain (a) operational and capable of detecting and logging exploits, and (b) capable of communicating in both directions with the management server/console throughout the test to attain a PASS.

Test 6.1.3 - We note whether the ISIC attacks themselves are detected by the sensor even though targeted at the management port. 
