
Real Time Networking

A couple of years ago this feature would probably have been titled something along the lines of "Multimedia on the LAN", and we would have talked about esoteric applications involving LAN-based audio and video data, and perhaps even some video conferencing.

Today, however, we are calling it "Real Time Networking" – a slightly more "businessy" sort of title – and whilst we are still talking about running audio and video applications over the network, the scope has widened to include the WAN and, of course, the ubiquitous Internet. Probably the biggest change, however, is in the attitude of end users and network administrators to these applications. They are no longer considered quite so esoteric as they were, and with every man and his dog downloading Real Audio and Microsoft NetShow players, the concept of high quality audio and video streams to the desktop does not seem nearly as outlandish these days.

TV On Your PC

When we spoke to Chris Gable of Cabletron, he suggested that real-time networking applications fall into two broad areas. "You have the broadcast technologies such as NetShow or Real Audio", he says, "where users ‘tune in’ to specific channels to receive fixed content. You then have interactive technologies which ship multimedia content in both directions, typified by video conferencing applications".

Both of these approaches are finding niches within today’s corporate environment. Broadcasting is ideal for distance learning or for recorded messages from the Chairman which have to be viewed around the world at different times. Video conferencing, of course, has already proved its worth in making internal communications that much more effective and reducing the requirement to travel in some circumstances. While video conferencing over the LAN or WAN is not so widely employed at the moment, some organisations are already finding uses for it, particularly where it is necessary to organise multi-party conferences on an infrequent basis, thus rendering the purchase of a dedicated Multipoint Conferencing Unit less than cost-effective.

Bandwidth Blues

In both cases, however, the end result can be very similar in terms of bandwidth usage, with existing Ethernet networks struggling under the increased load. The answer must therefore be to throw some extra bandwidth at it, and we already have Fast Ethernet (100Mbps) and Gigabit Ethernet (1000Mbps) to take up the challenge.

By design, Ethernet has a very fair-minded approach to allowing access to network bandwidth – it simply lets everyone on the segment have unlimited free access and leaves them to fight it out amongst themselves. The problem is that on any given Ethernet segment at any point in time there can only ever be ONE conversation taking place (say between a file server and a single end node). Ethernet uses the trusty CSMA/CD (Carrier Sense Multiple Access with Collision Detection) method of marshalling traffic on the LAN, which means that whenever two nodes attempt to transmit at the same time, a collision is detected, the packets are thrown away and the two nodes concerned wait a random amount of time before attempting to retransmit.

This all sounds terribly inefficient, but since it happens extremely quickly the whole thing works pretty well. However, the overhead associated with collisions and retransmissions on a heavily loaded network means that your theoretical limit of 10Mbps is soon reduced to around 6Mbps or less. It also means that we can never actually guarantee how long it will take a packet to get to its destination, since it may take multiple transmission attempts to get it there.
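
To make that unpredictability concrete, here is a minimal sketch (in Python, with an assumed fixed collision probability standing in for a real traffic model) of the truncated binary exponential backoff that 10Mbps Ethernet applies after each collision:

import random

SLOT_TIME_US = 51.2        # one slot = 512 bit times at 10Mbps
MAX_BACKOFF_DOUBLINGS = 10 # the backoff window stops growing after 10 collisions
MAX_ATTEMPTS = 16          # after 16 failed attempts the frame is simply discarded

def backoff_delay(collisions):
    # After the nth collision, wait a random number of slot times
    # between 0 and 2^min(n,10) - 1.
    k = min(collisions, MAX_BACKOFF_DOUBLINGS)
    return random.randint(0, 2 ** k - 1) * SLOT_TIME_US

def send_frame(collision_probability):
    # Returns the total delay in microseconds, or None if the frame is dropped.
    delay = 0.0
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if random.random() >= collision_probability:
            return delay                      # transmission succeeded
        delay += backoff_delay(attempt)       # collision: back off and retry
    return None                               # too many collisions: frame discarded

# On a busy segment the per-frame delay varies wildly from one frame to the
# next, which is exactly what audio and video streams cannot tolerate.
print([send_frame(0.4) for _ in range(5)])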

This is not particularly useful when we are attempting to deliver multimedia audio and video applications, since these require a guaranteed amount of bandwidth and fixed latency to operate well, presenting some significant challenges to the basic Ethernet protocol. Imagine, for instance, that you were trying to ship 20 frames per second of video across your LAN and every third frame was delayed because of the network load. The result would be jerky video with out-of-sync audio – in short, unwatchable.

The Quality of ATM

Of course the ATM lobby will point out that its chosen technology provides both the required bandwidth and the Quality of Service (QoS) guarantees necessary to mix data, voice and video on the same network.

ATM has a highly efficient architecture that can handle almost any type of traffic. It uses a small, fixed cell size of just 53 bytes (48 bytes of data with a 5-byte header), coupled with a switching mechanism which establishes Switched Virtual Circuits (SVCs) from end to end – much like a telephone network. Given that an Ethernet frame can range in size from 64 to 1518 bytes, you can see immediately how ATM’s small fixed cell size can reduce both latency in switches and variable delays in the network, thus contributing to a more efficient delivery of time-sensitive data.
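
As a rough illustration of the arithmetic – assuming a 155Mbps ATM link against 10Mbps Ethernet, and considering serialisation time only – the wire times work out something like this:

def wire_time_us(size_bytes, link_mbps):
    # Serialisation time only, ignoring switching and propagation delays.
    return size_bytes * 8 / link_mbps         # bits divided by Mbps gives microseconds

print(wire_time_us(53, 155))    # one ATM cell at 155Mbps:          ~2.7 microseconds
print(wire_time_us(64, 10))     # smallest Ethernet frame at 10Mbps:  51.2 microseconds
print(wire_time_us(1518, 10))   # largest Ethernet frame at 10Mbps:  ~1214 microseconds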

Also, because a fixed circuit is negotiated between two nodes, it is possible to determine some QoS parameters, whereby the network guarantees a certain service level, and the end-station guarantees that it will only send the amount of traffic that has been negotiated.

There are four service classes, each with different QoS levels:

Class A – Constant Bit Rate (CBR) – This is a "reserved bandwidth" service for traffic which generates a continuous, steady stream of bits. It is particularly suited to traffic which is time sensitive or intolerant of cell loss, such as interactive voice or video (video conferencing), or to circuit emulation (PABX) applications.

Class B – Variable Bit Rate Real Time (VBR-RT) – Like Class A, VBR is also a "reserved bandwidth" service where the network allocates the necessary resources to perform the required level of service. However, VBR establishes a peak rate, sustainable rate, and maximum burst size, making it suitable for compressed audio, or video applications other than video conferencing.

Class C – Variable Bit Rate Non Real Time (VBR-NRT) – Similar to VBR-RT but more suited to applications where a slight delay may be more acceptable, such as video playback or transaction processing.

Class D – Unspecified Bit Rate (UBR) and Available Bit Rate (ABR) – Both of these are non-reserved, "best effort" types of service with no QoS guarantee (though ABR adds management capabilities and yields a lower cell loss rate). Typical applications include data entry, data transfer or remote terminal applications, with ABR being more suitable for LAN interconnect and LAN Emulation (LANE).

The device requesting a connection asks for one of these four classes of service, and the network will either agree to this service level and permit the connection, or deny it. The traffic contract places demands on the call-initiating device too: if it tries to send more traffic than originally stated, the excess may be discarded by the network.
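
To see how the traffic contract works in principle, here is a deliberately simplified sketch – not the real UNI signalling exchange, and with each contract pared down to a single peak cell rate – of admission and policing:

# A toy model of the ATM traffic contract -- purely illustrative.
class TrafficContract:
    def __init__(self, service_class, peak_cells_per_sec):
        self.service_class = service_class        # "CBR", "VBR-RT", "VBR-NRT" or "UBR/ABR"
        self.peak_cells_per_sec = peak_cells_per_sec

class AtmNetwork:
    def __init__(self, capacity_cells_per_sec):
        self.capacity = capacity_cells_per_sec
        self.reserved = 0

    def request_connection(self, contract):
        # Reserved-bandwidth classes are admitted only if they still fit;
        # otherwise the connection is denied outright.
        if contract.service_class in ("CBR", "VBR-RT", "VBR-NRT"):
            if self.reserved + contract.peak_cells_per_sec > self.capacity:
                return False                      # connection denied
            self.reserved += contract.peak_cells_per_sec
        return True                               # UBR/ABR rides on whatever is left over

    def police(self, contract, offered_cells_per_sec):
        # Traffic beyond the negotiated rate may simply be discarded.
        return min(offered_cells_per_sec, contract.peak_cells_per_sec)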

Ethernet QoS

Unfortunately, Ethernet in its native form cannot distinguish between time-sensitive packets such as those forming part of a video broadcast, and "ordinary" network traffic such as database updates. Even if it could distinguish between the two, it would be unable to offer any sort of QoS guarantee.

If your video conference requires uninterrupted access to network bandwidth, then it would have to run on a segment all by itself – that is the only sort of bandwidth reservation you will get under standard Ethernet! As soon as you get another application wanting to pinch some of that valuable bandwidth, there is nothing to stop it doing so. Given the vagaries of Ethernet packet sizes – at 10Mbps it takes anywhere from 51.2 microseconds to 1.2 milliseconds to transmit a single frame onto the cable – our time-critical applications will not only face delays, but those delays will be constantly variable.

However, there is a huge installed base of traditional Ethernet out there, and there is a strong argument for sticking with it, even as we seek to implement higher bandwidth solutions in the form of switching, Fast Ethernet and even Gigabit Ethernet. ATM may be the bee’s knees as far as the vendors are concerned, but for the average network administrator there are a number of serious upgrade issues which make it less than attractive.

Alongside such bandwidth improvements in the Ethernet stable come other advances, such as the Resource Reservation Protocol (RSVP), the Real-time Transport Protocol (RTP) and IP Multicast, which are intended to provide QoS facilities.

Répondez, S’il Vous Plaît

To accommodate both real-time and "standard" streams of traffic on the same network, RSVP is used to allocate some fixed portion of the available bandwidth to the real-time traffic, while leaving some bandwidth available for the regular LAN data traffic. In addition to a bandwidth reservation, RSVP lets real-time traffic reserve the network resources necessary for consistent latency. To do this, routers sort and prioritise packets before transmission.

RSVP is designed to provide a similar end-to-end connection to that offered by ATM, over which can be guaranteed a given level of bandwidth and service. All this on boring old Ethernet? Sounds too good to be true doesn’t it?

There is a catch, of course, in that all devices in the circuit must support RSVP, which could mean a significant investment for many organisations.

At the user end, for instance, the request for a specific service level must be made by an RSVP-compliant application, and in the Windows world, these requests are made via WinSock 2. Such applications may well use RTP to deliver the time-sensitive data, since this is an application-layer protocol that uses time stamps and sequence information in its header to recover from delay variations and packet loss.
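
The RTP header itself is refreshingly simple: a version field, a payload type, a 16-bit sequence number, a 32-bit timestamp and a source identifier, all in twelve bytes. A minimal sketch of building one (the field values are arbitrary examples):

import struct

def build_rtp_header(payload_type, sequence, timestamp, ssrc, marker=False):
    # Packs a basic 12-byte RTP header (no CSRC list, no header extension).
    byte0 = 2 << 6                                # version 2, no padding, no extension, CSRC count 0
    byte1 = (int(marker) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1,
                       sequence & 0xFFFF,         # sequence number: reveals loss and reordering
                       timestamp & 0xFFFFFFFF,    # timestamp: lets the receiver smooth out jitter
                       ssrc & 0xFFFFFFFF)         # synchronisation source identifier

header = build_rtp_header(payload_type=96, sequence=1, timestamp=0, ssrc=0x12345678)
print(len(header))                                # 12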

Unlike ATM, RSVP supports two service models: guaranteed service and controlled-load service. The guaranteed model ensures that the delay restrictions requested by a host originating an RSVP call are met. The controlled-load service model, however, makes no guarantees, but admits new RSVP connections only up to the point where service starts to deteriorate. Beyond that, new connection requests are denied.
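
To illustrate the difference, here is a toy sketch of controlled-load style admission – the 70 per cent threshold and the interface are invented for illustration, not taken from the RSVP specification:

# Toy admission control in the spirit of the controlled-load service.
class ControlledLoadLink:
    def __init__(self, capacity_kbps, admit_up_to=0.7):
        self.threshold_kbps = capacity_kbps * admit_up_to
        self.admitted_kbps = 0

    def request(self, flow_kbps):
        # Admit new reservations only until service would start to deteriorate.
        if self.admitted_kbps + flow_kbps > self.threshold_kbps:
            return False                          # request denied
        self.admitted_kbps += flow_kbps
        return True

link = ControlledLoadLink(capacity_kbps=10000)    # a 10Mbps segment
print(link.request(3000))                         # True
print(link.request(3000))                         # True
print(link.request(3000))                         # False -- the link is near its comfort limit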

Cisco is one of the earliest adopters of RSVP, and managed to demonstrate a controlled-load service at a recent Networld+Interop show to support Intel’s ProShare video conferencing system. Cisco’s enthusiasm is hardly surprising given the general similarities between RSVP and Cisco’s Tag Switching approach to IP Switching, and RSVP support is now included as part of its IOS (Internetwork Operating System) router software.

RSVP itself is not a routing protocol, of course. It merely uses routes already calculated by the underlying router software in order to determine the next device in sequence to which it should deliver packets. Its main problem is that where a connection spans multiple LANs, it is not always possible to guarantee response times or service levels in the neighbouring networks. For this reason it works best with point-to-point links.

Nor is it possible to assign different priorities to different applications on the same access line. This means that you can only have one active RSVP priority for your T1 link to the Internet, so you cannot ensure that your video conferencing app gets a higher priority than your Real Audio transmissions. Only when applications run on different lines can RSVP arrange them by class of service.

IP Multicast

Of course, one of the biggest problems with unicasting (point-to-point) or broadcasting streams of audio and video data across the network is that you can frequently end up with identical streams of data going to two different users who may even be sitting next to each other – in other words you are duplicating traffic unnecessarily across much of the network.

IP Multicast was adopted in 1992 by the Internet Engineering Task Force for building multicast applications on the Net. It runs on Mbone, the virtual Internet backbone for multicast IP that serves as the international test-bed for multicast applications. An estimated 3,000 interconnected networks on the Internet make up the Mbone, and many of the major backbone providers offer some level of Mbone connectivity.

Multicasting allows us to send out a single stream of multimedia or time-sensitive data that needs to be received by subscribers. Providing both hosts and routers are multicast-enabled, the network can then automatically replicate the server's packets and route them to each subscriber in the multicast group via the most efficient path. This is done using multicast protocols such as DVMRP (Distance Vector Multicast Routing Protocol) and MOSPF (Multicast Open Shortest Path First), and demonstrates a huge advantage over pure broadcast transmissions, which are not routable at all.

A new member becomes part of a multicast group by sending a "join" message to a nearby router, following which the distribution tree is adjusted to include the new route. Multicast services mean that servers can send a single packet that will be replicated and forwarded through the internetwork to the multicast group on an as-needed basis, thus conserving both server and network resources – the transmission is not sent to segments where hosts have not registered to receive the multicast, for example. Although multicast applications are available for IPv4, new developments in the IP world extend multicasting capabilities even further under IPv6, to the point where the IPv4 broadcasting capability is superseded.
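
At the host end, joining a group is straightforward with the standard sockets API; the group address and port below are merely example values (239.x.x.x is the administratively scoped range):

import socket
import struct

GROUP = "239.1.1.1"    # example administratively-scoped multicast group
PORT = 5004            # example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group causes the host to send an IGMP membership report --
# the "join" message the local router is listening for.
membership = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

# From here on, anything the server sends to 239.1.1.1:5004 is replicated
# by the network and delivered to this subscriber.
data, sender = sock.recvfrom(2048)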

One vendor which has gone the IP Multicast route is Cabletron, which incorporates both Layer 3 Switching and IP Multicast technologies in the SecureFast architecture used in the company's high-performance SmartSwitches.

Cabletron was actually the first networking vendor to announce support for IP Multicast across Gigabit Ethernet and ATM switched backbones, using the standards-based Internet Group Management Protocol (IGMP). It also demonstrated real-time networking capabilities at a recent Networld+Interop, where Microsoft’s NetShow multimedia server could be seen running across a Cabletron network. The company also announced support for the Multipoint-to-Point video funnel protocol used in the full screen video service for NetShow.

IPv6

Another company taking the Multicast route in preference to RSVP is Digital, which is setting its stall out with ATM, Gigabit Ethernet and IPv6, the next generation of the IP protocol.

In recent months, many of the discussions about a new Internet protocol have focused on the fact that we will sooner or later run out of Network Layer addresses, due to IPv4's outdated 32-bit address space. But this is certainly not the only driving force behind IPv6. Along with the increased address space and vastly improved security come some impressive QoS-type features.

As well as the unicast and multicast modes of transmission already mentioned, IPv6 also offers something called anycast, which could be thought of as a cross between the two. With anycast, two or more network interfaces are designated as an anycast group, and a packet addressed to the group's anycast address is then delivered to the "nearest" interface in the group (determined by whichever routing protocol is being used). Nodes in an anycast group are then specially configured to recognise anycast addresses, which are drawn from the unicast address space. This is in contrast with multicast services, which deliver packets to all members of the multicast group.

Taking this a step further, it is apparent that anycast technology actually has a wider range of applications than just efficient delivery of multimedia traffic. For instance, if multiple critical servers were each given the same anycast address, it could provide the means for efficient load balancing and even redundancy.
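
Conceptually the selection is nothing more than "pick the group member with the lowest route metric". A sketch of the idea, with invented names and metrics:

# Conceptual anycast delivery: one address, several interfaces, and the
# routing metric decides which member actually receives the packet.
anycast_group = {"server-a": 5, "server-b": 2, "server-c": 9}

def deliver_anycast(group):
    # Deliver to the "nearest" member, much as a router effectively does.
    return min(group, key=group.get)

print(deliver_anycast(anycast_group))   # server-b; if it vanishes, the
                                        # next-nearest member takes over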

The IPv6 packet format also contains a new 24-bit traffic-flow identification field that will be of great value to vendors who implement quality-of-service network functions. These flow labels can be used to identify to the network a stream of packets that needs special handling above and beyond the default, best-effort forwarding. Flow-based routing could give internetworks some of the deterministic characteristics associated with connection-oriented switching technology and telephony virtual circuits.
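
As a bit-level illustration of that layout – a 4-bit version, a 4-bit priority field and the 24-bit flow label making up the first 32-bit word of the header, as in the original IPv6 specification – the label might be set like this (the values are arbitrary examples):

def ipv6_first_word(priority, flow_label):
    # 4-bit version | 4-bit priority | 24-bit flow label.
    assert flow_label < (1 << 24), "the flow label is only 24 bits wide"
    return (6 << 28) | (priority << 24) | flow_label

print(hex(ipv6_first_word(priority=1, flow_label=0x00ABCD)))   # 0x6100abcd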

The Truth Is Out There!

Of course the fact remains that we still need the "killer applications" that are going to really drive forward the implementation of these technologies, some of which require a hefty investment. Larger companies may well be able to make use of real-time video and audio transmissions for company meetings or training sessions, of course, and the availability of such technologies may well kick start the LAN-based videoconferencing market. But in the short term (at least until the whole world moves to IPv6) there is relatively limited scope for implementation at the intranet level.

It is almost certain, therefore, that the biggest push will come from the content providers – the broadcasters, radio stations, TV stations and publishing houses – which are only now beginning to explore ways of offering us their content over the Internet. This will put additional pressure on the ISPs to embrace the new technologies of Multicast – and IPv6 in particular – which provide ideal vehicles to get that data to the end user.

With recent and ongoing innovations in the fields of security, e-commerce and smart card technology, those providers will also have the means to charge us for accessing their wares. Once that starts to happen, the quality of the content will soar even further – who knows, instead of seeing reruns of the Two Ronnies on your PC screen you could be getting the latest episodes of the X-Files.
