The Router Is Dead - Long Live The Router
Just like the mainframe and the COBOL programmer, the router finds itself the perennial
target of the doom merchants who would have us believe that it is on the way out. But just
like the mainframe and the COBOL programmer, the router is likely to be with us for quite
some time yet - although its role may well change.
The main problem in a lot of traditionally routed networks is that the routers are
suddenly being identified as serious bottlenecks. Why should such a change occur in a
network which has been well designed and maintained, one which has worked perfectly well
until now? The answer is simple - the Intranet.
In the past, network designers have segmented their networks and placed nodes on specific
subnets guided by a simple rule of thumb often referred to as the "80/20 rule".
Simply speaking, this rule specifies that 80 per cent of all the traffic on a segment
should remain local, with 20 per cent or less traversing routers or bridges to other
subnets. The thinking behind it, of course, is fairly straightforward: routers are not the
fastest devices in the world, so the less traffic you can put through them, the better
your network performance. Whilst not always easy to achieve, it was nevertheless possible
in the past to stick by this rule in the majority of cases by careful placement of major
file servers and other key network resources.
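If you want to check whether a segment still obeys the rule, the arithmetic is trivial. Here is a quick sketch in Python (the traffic figures and the helper function are invented purely for illustration):

def local_traffic_ratio(local_bytes, remote_bytes):
    """Fraction of a segment's traffic that stays on the local subnet."""
    total = local_bytes + remote_bytes
    return local_bytes / total if total else 1.0

# Say 800 MB stayed local and 200 MB crossed a router to other subnets
ratio = local_traffic_ratio(800e6, 200e6)
print(f"{ratio:.0%} local - {'fine' if ratio >= 0.8 else 'breaks the 80/20 rule'}")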
The growth of client-server applications resulted in fairly consistent traffic patterns
between - you guessed it - certain clients and key application servers. Even though this occasionally meant that major data files required replicating across multiple servers in order to keep inter-segment and WAN-link traffic to a minimum, the segment loads were
fairly predictable and the 80/20 rule was adhered to wherever possible.
However, client-server applications have evolved into distributed applications with
presentation logic, business rules and data often split across multiple servers. This
meant that the 80/20 rule was often pushed to breaking point, and once the Internet caught
on, it was shattered into a million pieces.
One of the reasons that the Internet is so popular as a means of information dissemination
is that it is so unstructured. One minute you can be reading plain text on one server, and
then a click on a hypertext link can have you retrieving a graph from the other side of
the world, and another click could see you streaming audio or video data to your desktop
from the next office. The apparently random nature of the placement of data on the
Internet - and now the corporate intranet - makes it virtually impossible to predict
traffic loading patterns accurately. With a peer model like this, a user on one subnet can
be pulling data from a different subnet with every click of the mouse, and all of a sudden
IP needs to be routed everywhere - to the desktop, the LAN and the backbone. The result in
many networks is chaos.
To Switch, Or Not To Switch
Switching was initially touted as the answer to all our bottleneck problems, with
management issues alleviated by the creation of Virtual LANs (VLANs). But in
some circumstances, this offers few advantages.
Firstly, the intranet scenario described above causes just as many problems on a VLAN
network as it does on a routed one, with inter-segment traffic reducing efficiency and all but rendering the concept of VLANs redundant. Secondly, traditional switching
works only at the Data Link layer of the OSI stack, and thus cannot cope with the multiple
protocols which are in use in many large networks. Even though the Internet vendors would
like us all to move to TCP/IP today, it remains a fact that there is an awful lot of SNA,
IPX/SPX and NetBIOS/NetBEUI out there (amongst others). These all need routing at the
Network layer.
Finally, switching can actually compound the problem, since these wonderful high-speed
devices can suddenly start throwing millions of packets per second at our backbone
routers, which are beginning to choke under the load.
Ideally, what we need is a magic box which can route like a layer 3 device, but with the
performance of a layer 2 device. This is not a trivial issue, since routing is basically
very hard to do quickly - hence the performance issues we are facing now. Because a router works one layer up from MAC-layer switching, it has to unwrap the layer 2 "packaging",
figure out what sort of packet it is dealing with (TCP/IP, IPX/SPX, etc.), do a few
lookups in routing tables, and then send the packet on its way. Even with everything in
silicon, it is virtually impossible to achieve anything like wire speed performance.
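To see why, consider a stripped-down sketch in Python of just the route look-up a router performs once it has unwrapped the layer 2 framing (our own simplification, not any vendor's forwarding code):

import ipaddress

# A toy routing table: destination prefix -> outgoing interface
ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("10.2.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"): "eth0",   # default route
}

def route_lookup(dst_ip):
    """Longest-prefix match: the most specific matching route wins."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in ROUTES if dst in net]
    return ROUTES[max(matches, key=lambda net: net.prefixlen)]

print(route_lookup("10.2.3.4"))   # -> eth2

Every packet has to go through something like this - plus the unwrapping and re-framing either side of it - which is exactly where the cycles go.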
Layer Two Routing, Or Layer Three Switching?
This is where much of the development effort is going at the moment, with the usual rash
of new products all following their own set of "standards" and causing no end of
confusion in the market place. One way or another, switching is the future, and is
destined to be the technology of choice for the core of the network, pushing the router
further and further out to the edge.
This explains why the big router manufacturers have done everything they can to purchase
switching technology. The likes of Bay and Cisco recognise that they are losing the centre
of the network to switching, and naturally wish to maintain revenue streams wherever
possible.
The new wave of switching is generally known as Layer 3 switching, or IP Switching. The IP
Switching tag in particular seems to have stuck, though this has more to do with the marketing appeal of anything connected with the Internet than any real desire to limit the technology to a single protocol.
Here we look at the strategies of the major players in this burgeoning market, as they
fight it out to ensure that the winning technology is theirs, no matter the cost in casualties along the way (decimated IT budgets, and wrecked careers for those who step in too soon and get their fingers burned).
Despite all the marketing hype you are going to hear in the coming months, the various
schemes on offer can be boiled down to five main categories:
Multigigabit Routers
Here we are talking about a new breed of router which is designed from the outset to route
IP. Ageing bus-based backplanes are replaced by high-speed cross-point switching matrices
to whizz packets on their way between multiple interface cards, and route look-ups are now
handled by dedicated hardware rather than general purpose CPUs running routing
software. Other tricks can be employed too, such as holding a full copy of the routing
table at each interface rather than just part of it.
The result is a beast which can process IP packets at blistering speeds, in the millions
of packets per second range, rather than the paltry 500,000 pps or less achievable with
traditional routers. Ascend already have a box which can process 2.8 million pps, whilst
BBN is promising 20 to 30 million pps from its offering which should be available early
next year. Naturally enough, Cisco is also in this camp (amongst others), and is planning
to ship boxes in the latter half of this year.
The upside to this technology is that a router is a router, so these boxes should just slot straight into existing networks; the downside is that they will have to be networks with deep pockets - performance benefits like this will be priced accordingly.
And with the world going switch crazy, could it be that the long term future for routers
such as this is looking shaky, no matter what the speed?
Layer 3 Switching
These are also lightning-fast routers which handle route look-ups in hardware - scaled-down versions of their multigigabit big brothers, with a price tag which is a bit more palatable to most. Lacking refinements such as cross-point switching matrices, layer 3 switches tend to work at speeds in the region of 100,000 to 400,000 packets per second and will
usually only support a single protocol - IP, as you may have guessed - or perhaps two at
the most (with IP and IPX being a favourite combination). With such scaled down
performance and price, these devices are aimed squarely at the LAN arena rather than
backbone or carrier-class applications for which the multigigabit devices would be better
suited.
Bay, Case (now Intel), and Madge (amongst others) are all playing at this end of the
market. Bay is claiming wire speed performance for its Switch Node routing switch (which
supports both IP and IPX), and has introduced a new feature called IP AutoLearn.
Independent of the IP routing protocol running on the network, IP AutoLearn automatically
builds forwarding tables for subnets and VLANs that are connected directly to the
Switch Node. As the forwarding tables are built, packets can be switched at layer 2 and
traffic is gradually off-loaded from the router, thus relieving the bottleneck.
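The general idea of learning a forwarding table from the traffic itself can be sketched in a few lines of Python (our illustration of the principle, not Bay's actual algorithm):

forwarding_table = {}    # destination IP address -> port that reaches it

def observe(src_ip, in_port):
    """Every packet seen teaches the switch which port its source sits behind."""
    forwarding_table[src_ip] = in_port

def forward(dst_ip):
    """Known destinations are switched directly; unknown ones still go via the router."""
    return forwarding_table.get(dst_ip, "uplink to router")

observe("10.1.1.5", in_port=3)
print(forward("10.1.1.5"))    # switched on port 3, bypassing the router
print(forward("10.9.9.9"))    # not yet learned, so still punted to the router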
Peer-to-Peer Multilayer Mapping
These schemes use route processors to monitor the network topology and calculate the best
path from point A to point B (plus a few others, of course), just like an ordinary router.
Standard routing protocols such as OSPF (Open Shortest Path First), RIP (Routing
Information Protocol) and BGP (Border Gateway Protocol) are even used to exchange
information about network topology in a peer-to-peer fashion.
What happens next is that the switch can use the paths calculated by the route processor
to set up a virtual connection between two end-points, thus effectively mapping layer 3 IP
addresses on to layer 2 destination addresses (MAC or VCI/VPI) - multilayer mapping, see?
The basic idea behind this is to replace the connectionless approach of the traditional
router with a switched virtual circuit from source to destination node, thus removing the
bottleneck-inducing multi-hop scenario associated with pure router-based networks. If an
end-to-end connection cannot be determined, a session is not established, theoretically providing deterministic latency and hence Quality Of Service (QOS).
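Stripped of the vendor detail, the mechanism can be sketched like this (a generic Python illustration, not any particular scheme - the VC numbers are invented):

virtual_circuits = {}    # (src_ip, dst_ip) -> virtual circuit identifier
_next_vci = 100

def route_processor(src_ip, dst_ip):
    """Stand-in for the OSPF/RIP/BGP-driven path calculation."""
    global _next_vci
    _next_vci += 1
    return _next_vci         # in reality a VPI/VCI pair or a MAC-level path

def send(src_ip, dst_ip):
    key = (src_ip, dst_ip)
    if key not in virtual_circuits:                    # first packet is routed...
        virtual_circuits[key] = route_processor(src_ip, dst_ip)
    return f"switched on VC {virtual_circuits[key]}"   # ...the rest are switched

print(send("10.1.1.5", "10.2.7.9"))   # sets up the virtual circuit
print(send("10.1.1.5", "10.2.7.9"))   # reuses it, with no router hops involved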
Sounds simple in theory, doesn't it? But like all these things, there are a million
ways of achieving the same end, and Cabletron (Secure Fast Virtual Networking), Cascade
(IP Navigator), Cisco (Tag Switching), DEC (IP Packet Switching), Frame Relay Technologies
(FrameNet Virtual LAN Switching), IBM (Aggregate Route-based IP Switching), Toshiba (Cell
Switched Router) and Ipsilon (IP Switching) have each come up with their own.
The one you have probably heard most about is Cisco's Tag Switching, which uses routers (surprise surprise) at the edge of a switched network to assign tags to packets, which tell the tag switches how to deal with each packet at layer 2. This is a technology
which is clearly aimed at the Internet and the wide area rather than the campus, though it
should work quite well with MPOA to provide that route to the desktop.
IP Navigator addresses a potential scalability issue with large switched networks - that of an ever-increasing number of virtual circuits to be tracked as more and more switches are added - by using something called multipoint-to-point tunnelling. IP multicast addresses
are used to chart a packet's path through the network, meaning that switches need only maintain tables of multicast addresses rather than details of all possible virtual circuits. This allows IP Navigator to support a huge number of logical routes, and has the
added advantage that no edge devices are required.
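A bit of back-of-an-envelope arithmetic shows why that matters (the figures here are ours, purely for illustration):

edge_nodes = 200    # an invented size for a large switched network

full_mesh_vcs = edge_nodes * (edge_nodes - 1)   # one circuit per ingress/egress pair
mpt_entries = edge_nodes                        # one multicast address per egress point

print(f"Full mesh of virtual circuits: {full_mesh_vcs}")   # 39800 to keep track of
print(f"Multipoint-to-point tunnels:   {mpt_entries}")     # just 200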
ARIS (from IBM) and FrameNet take a similar approach, but aggregate traffic from multiple
Virtual Circuits (VCs) into a single circuit whilst traversing the network. Edge
devices are then used to split out the separate circuits again when leaving the network.
Whilst these systems are intended for the core of the network, SFVN (from Cabletron) and
IP Packet Switching (from DEC) take it all the way to the desktop. These make use of ARP
(Address Resolution Protocol) requests from clients to determine where a packet is destined. The requests are intercepted by the switches - layer 3 devices with knowledge of the network - and the switch replies to the client either with the correct MAC
address for a device on the local subnet, or with the MAC address of the outgoing port
that can reach the destination.
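In outline, the ARP trick looks something like this (our generic sketch, not Cabletron's or DEC's actual implementation - the addresses are invented):

LOCAL_HOSTS = {"10.1.1.20": "00:a0:c9:11:22:33"}   # a host on the client's own subnet
OUTGOING_PORT_MAC = "00:a0:c9:ff:00:01"            # the port that reaches everything else

def arp_reply(requested_ip):
    """The switch intercepts the client's ARP request and answers on the target's behalf."""
    if requested_ip in LOCAL_HOSTS:
        return LOCAL_HOSTS[requested_ip]   # genuine MAC on the local subnet
    return OUTGOING_PORT_MAC               # MAC of the port that can reach the destination

print(arp_reply("10.1.1.20"))   # local device: the real MAC
print(arp_reply("10.5.5.5"))    # remote device: the MAC of the outgoing port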
This all sounds a little convoluted, and it is difficult to see where the real performance advantages will come from, especially in large networks. There are also some management issues
with these last two technologies which will see them restricted purely to the campus and
local area network environment.
The final two technologies in this section move away from the topology-based
implementations discussed so far, instead opting for a flow-based scheme, where the switch
attempts to consider the characteristics of an application in deciding whether or not to
establish an end-to-end connection. These architectures attempt to identify IP flows which
are carrying real-time traffic, flows with QOS requirements, or flows likely to have a
long holding time. Such flows are handled most efficiently by mapping them to virtual
circuits and switching them over an ATM network. Short-duration flows and database queries
are better handled by connectionless, hop-by-hop packet forwarding between IP routers
using shared, pre-established ATM connections between those routers.
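The decision the switch has to make can be sketched roughly as follows (a generic illustration with an invented threshold, not Ipsilon's or Toshiba's actual classifier):

def classify(flow):
    """flow is a dict of observed characteristics for a source/destination pair."""
    if flow.get("realtime") or flow.get("qos_required"):
        return "map to a dedicated ATM virtual circuit"
    if flow.get("packets_seen", 0) > 10:           # looks like a long-lived flow
        return "map to a dedicated ATM virtual circuit"
    return "forward hop-by-hop via the routers"

print(classify({"realtime": True}))      # video stream: switched end to end
print(classify({"packets_seen": 2}))     # short query: stays on the routed path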
Coming in with an all-ATM solution is a bold move, but then new-kid-on-the-block Ipsilon
(with IP Switching) and Toshiba (CSR) do not have a large installed base to protect, and
are aiming mainly at new markets such as ISPs anyway. There are a couple of major
differences between the two flow-based solutions. The first is that Ipsilon uses an edge
device to set up flows, whilst CSR can use any switch on the network. The second is that
Toshiba's solution can support both CSR and native ATM switching, whereas Ipsilon
supports only IP Switching.
Server-Based
This architecture combines traditional layer 3 routers at the edge of the network, with
layer 2 switches at the core, and ties them together with some form of route server whose
job it is to calculate paths through the network. Proponents of this technology include
Hughes (Streaming/Radiant), IBM (Zero-Hop Routing), Newbridge (Vivid) and 3Com (Fast IP).
Both IBM and 3Com expect the router to field initial session requests between clients and
servers on the network, with the resulting connection being initiated at the desktop.
Using NHRP (Next Hop Resolution Protocol), the end node determines a route to the
destination point based on MAC addresses plus IEEE 802.1Q compliant VLAN tags which carry
additional information. Once the route servers have calculated the route, a direct
connection is established between the two end-points and from then on all the data is
switched. Once the virtual circuit has been established, subsequent transmissions do not
have to pass through the route servers.
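In outline the exchange looks something like this (our simplified sketch, not 3Com's or IBM's actual protocol machinery - the MAC address and VLAN tag are invented):

route_server_cache = {}    # destination IP -> (next-hop MAC, VLAN tag)

def nhrp_resolve(dst_ip):
    """Stand-in for the NHRP query answered by the route server."""
    if dst_ip not in route_server_cache:
        route_server_cache[dst_ip] = ("00:60:08:aa:bb:cc", 42)   # calculated once
    return route_server_cache[dst_ip]

def send(dst_ip):
    mac, vlan = nhrp_resolve(dst_ip)
    return f"switched direct to {mac} on VLAN {vlan}"   # route server now out of the path

print(send("10.3.3.3"))   # the first call resolves; later calls just switch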
3Com believes the key to Fast IP is that it is standards based - important bearing in mind
the deal with Cascade and IBM. In truth, the "standards" being employed are
still in the draft stage at present.
The Newbridge and Hughes solutions work in a similar fashion, though Hughes' Streaming
relies on plain NHRP, and Vivid makes use of a "standard" (again still in draft)
Multi Protocol Over ATM (MPOA) implementation.
IP Learning
This is a technology which requires the least change to an existing network (working only
with Ethernet and Fast Ethernet), so the cynics amongst you might think that it is hardly
surprising that the big boys have eschewed IP Learning in favour of more lucrative (and
scaleable) models. This leaves this end of the market to the smaller companies such as RND
(Power IP) and Nbase Communications (DirectIP).
Instead of having a router as a default gateway, the client now uses the IP Learning
switch, which responds to the client's ARP request as if it were the target. When the
initiating client then sends data to the intended recipient, the IP Learning switch
forwards it to the destination, and uses an ICMP redirect to fool the source into thinking
that the destination machine is on the same subnet. From then on, packets between source
and destination are switched over this virtual circuit as if they were on the same subnet,
with the IP Learning switch spoofing both source and destination addresses.
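The trick can be sketched as follows (our illustration, not RND's or Nbase's code - the MAC address is invented):

SWITCH_MAC = "00:90:27:de:ad:01"   # the IP Learning switch itself

def handle_arp(requested_ip):
    """Step 1: the switch answers the client's ARP request as if it were the target."""
    return SWITCH_MAC

def handle_first_packet(src_ip, dst_ip):
    """Step 2: relay the packet, then send an ICMP redirect back to the source so it
    believes dst_ip sits on its own subnet and addresses it directly from then on."""
    return f"forwarded to {dst_ip}; ICMP redirect sent to {src_ip}"

print(handle_arp("10.4.4.4"))                        # client now sends via the switch
print(handle_first_packet("10.1.1.5", "10.4.4.4"))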
Because Power IP supports routing standards such as RIP and OSPF, it works dynamically and
can operate with any standard router. DirectIP switches are proprietary, however, and do
not support any routing protocols, meaning that layer 3 addressing tables must be manually
configured.
Summary
Bet you didn't realise there was so much going on in this sector of the networking
industry, huh? For most network managers wondering where to go next with their routed
networks, the choice is bewildering - where to start?
First thing to decide is how scaleable a solution you need. The differences between the
multigigabit and layer 3 switches are fairly clear, with the former aimed squarely at the carrier or ISP market, or perhaps at very large enterprise Wide Area Networks. Layer 3 switches, like IP Learning switches, are aimed at the campus environment instead.
When it comes to the remaining technologies, things are not quite so clear. As a general
rule of thumb, you could say that most of the peer-to-peer multilayer mapping solutions
are aimed at the wide area, with the exception of Cabletron's SFVN and DEC's IP
packet switching. Of the server-based solutions, it is Newbridge and Hughes who are
pitching at the high end, whilst the 3Com and IBM solutions are tailored more to the local
area.
At the end of the day, however, most of these solutions require a huge investment in new
infrastructure, and there are very few network managers who will be prepared to throw out
their existing routed network, even for the sorts of performance increase promised by
these technologies. It will be a gradual change - switched networks may require
significant protocol reconfiguration for the real benefits of a switched infrastructure to
be recognised.
And because it will be a gradual change, interoperability will be key. But with
"standards-based" solutions appearing before the standards are even ratified,
users are faced with a difficult choice. Add to this the fact that many of the
"announced" products will not be appearing until later this year or early next
year, and the vendors should not be surprised to see a slow take-up of their
"bleeding edge" products for some time yet.