
The Optimisation Issue for Server Providers

by Steve Broadhead

Is there a day-to-day computer user who doesn’t now regularly liaise with the Internet?

Whether directly through browsing, or indirectly via email or another Internet-based service, most users have daily experience of the Internet and of the ISPs who typically provide that service. And as outsourcing increases at all company levels, the ASP, or Application Service Provider, is taking on a broader role, hosting many applications via the Internet, often transparently to the end user. The NSP, or Network Service Provider, has also come online, hosting entire network operations for companies.

What this all means is that the service provider, in any context, is becoming a vital element of many companies’ IT strategies. A survey carried out by outsourcing specialist Milgo Solutions showed that the key areas of interest for companies were indeed service-oriented: 60% were already working with Intranets, over 50% were investing heavily in remote access services and, interestingly, almost 40% were investing in managed WAN services. It is noticeable too that the WAN, rather than the LAN, is currently the primary area of concern. This is hardly surprising when you consider the interest in “Extranet” and potential E-commerce activities, then set that against the WAN bandwidth most companies currently have at their disposal. While LAN performance has exploded, the WAN has effectively got slower, thanks to the amount of traffic the Internet and other services are creating. This is therefore a problem area for many companies, at least in the short to mid term, until the full capacity of all the fibre cable currently being laid across the world becomes available.

It seems clear, then, that end users are willing to put their faith in the hands of third-party specialists to run part or all of their IT operations for them, and the reasoning behind this is equally clear. What most users will not be aware of is the sheer complexity of the network at the service provider’s end of the chain, required to support what is, in some cases, millions of online sessions per day. Without getting into the technological detail of which device does what exactly, consider that, on leaving an end user’s computer, a request to access a web site on the Internet may pass through any or all of the following (a simple illustrative sketch of this path follows the list):

  • one or maybe several backbone routers, en route to the ISP and the host web site
  • a front-end router at the ISP itself and maybe several more internal routers
  • one or maybe several firewalls
  • packet-shaping devices
  • web cache engines
  • web/application content switches
  • load balancing switches for firewalls, cache and servers
  • a fibre channel switch as part of the SAN (Storage Area Network) controlling the host disk subsystems
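
To make the scale of that chain concrete, here is a minimal, purely illustrative Python sketch of a request traversing such a path. The hop names, per-hop latencies and the idea that a cache hit answers the request early are assumptions for illustration only, not measurements from any real service provider network.

```python
# Purely illustrative sketch (hypothetical names and latencies) of the chain of
# devices a web request may traverse between an end user and a hosted server.
# Each hop either forwards the request or, like a web cache with a hit, answers it.

from dataclasses import dataclass


@dataclass
class Hop:
    name: str
    latency_ms: float              # assumed per-hop delay, for illustration only
    answers_locally: bool = False  # e.g. a cache engine serving the object itself


REQUEST_PATH = [
    Hop("backbone router(s)", 5.0),
    Hop("ISP front-end router", 2.0),
    Hop("firewall", 1.5),
    Hop("packet shaper", 1.0),
    Hop("web cache engine", 0.5, answers_locally=True),  # a cache hit short-circuits
    Hop("web/application content switch", 0.5),
    Hop("server load-balancing switch", 0.5),
    Hop("fibre channel switch / SAN disk subsystem", 3.0),
]


def trace(path):
    """Walk the chain, accumulating delay until some hop answers the request."""
    total = 0.0
    for hop in path:
        total += hop.latency_ms
        print(f"{hop.name:45s} cumulative {total:5.1f} ms")
        if hop.answers_locally:
            print(f"-> answered at the {hop.name}; remaining hops never see the request")
            return total
    print("-> answered by the origin server and its disk subsystem")
    return total


if __name__ == "__main__":
    trace(REQUEST_PATH)
```

Even this toy model makes the point: every extra device in the chain adds delay, and a single well-placed device such as a cache can remove most of the path from the equation.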

If we consider that basic networking – the means of connecting several computers together in a single, shared environment via the equivalent of a junction box and a few wires – was deemed a challenge only a few years ago, then it is not difficult to conceive of the complexities a service provider’s network environment holds. Were it simply a case of connecting device after device in daisy-chain fashion, a little like hanging several hard disks, CD-ROM drives and tape backup units off a single SCSI controller, the complexity level would be low: data simply passes through to the next device along the chain, without interruption. But in a contemporary service provider network environment life isn’t so simple, largely because the networking devices themselves are no longer simple “black boxes” but intelligent devices.

The problem is, networking is never simple. Even when the “network” consists of a few dozen PCs connected to a stack of Ethernet hubs, there are potential complications. It may seem simple in principle, but once you start adding in shared printers, scanners, remote access to the Internet, firewalls, networked fax, IP telephony… and then there are the common applications to worry about – it all gets very complicated. So imagine hundreds of thousands of users instead and it is easy to see why networking cannot be described as “simple”. Worse still, networking is getting ever more complicated, for a number of reasons. The Internet, of course, is partly to blame for the current cacophony that is the state of the networking industry, and it has made companies look at wide area networking in a new light.

The promise of low-cost services across the Internet such as voice (IP telephony), remote access and management of office networks from almost anywhere in the world and the possibilities of E-commerce and E-business have further magnified the “Internet effect”. In many cases this has meant a radical reappraisal of service provider networking strategies from two to three years ago, both in terms of basic network design and hardware, as well as applications and services being offered.

The result is that service providers are being offered a new generation of intelligent networking devices, where the software within the device is capable of making significant decisions about how it should handle traffic on the network. In a situation where one intelligent device, playing the role of traffic policeman, tells the simpler devices around it what to do, the potential problems are not significant. The only issue is setting that intelligent device up correctly in the first place, though even this is not trivial. The real situation now, however, is that several devices on the network have intelligence built in and, if configured in isolation, are as likely to fight each other for control of the network traffic as they are to complement each other and optimise network performance.

And here is the irony of the matter. Intelligent networking devices – load-balancing Ethernet switches, web content switches, packet shapers and web cache engines – are all designed with one aim in mind: to optimise traffic flow and, therefore, performance across the network. Independently of each other, that is. So when you get several devices all trying to be clever at the same time, there is more chance of them competing for the data than co-operating to optimise performance. Given that very few service provider networks consist of products from a single vendor, the issue of multi-vendor interoperability once again raises its ugly head. A web cache engine from one vendor, for example, may be intelligent in its own right and can therefore cache suitable data to speed up web server access; but that “intelligence” does not extend to being aware of other “intelligent” devices, such as web content and load-balancing switches, on the same network.
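
As a hypothetical illustration of that last point, the Python sketch below (with invented server and device names) shows a content switch that pins each URL to one server to keep its cached copy warm, and an independently configured load balancer that round-robins regardless, quietly undoing the affinity. Neither device is wrong in isolation; together they work against each other.

```python
# Hypothetical illustration: two "intelligent" devices configured in isolation.
# A content switch pins each URL to a server for cache affinity, while an
# independent load balancer round-robins across the same servers, breaking it.

import zlib
from itertools import count

SERVERS = ["server-1", "server-2", "server-3"]   # invented names


def content_switch(url: str) -> str:
    """Pick a server by hashing the URL, so repeat requests hit a warm cache."""
    return SERVERS[zlib.crc32(url.encode()) % len(SERVERS)]


_round_robin = count()


def load_balancer() -> str:
    """Spread load evenly in turn, unaware of any affinity decision upstream."""
    return SERVERS[next(_round_robin) % len(SERVERS)]


if __name__ == "__main__":
    for url in ["/index.html", "/index.html", "/logo.gif", "/index.html"]:
        wanted = content_switch(url)
        actual = load_balancer()
        note = "affinity kept" if wanted == actual else "affinity broken"
        print(f"{url:12s} switch wants {wanted}, balancer sends {actual} ({note})")
```

In a co-ordinated setup the balancer would honour the switch’s choice, or only one device would be making it; configured in isolation, each is “optimising” and yet the cache-hit rate still falls.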

The problem is that, for all concerned, new ground is being broken daily and there are no set guidelines for optimising the contemporary, multi-vendor, multi-device networks that ISPs and ASPs must use. Phil Wainewright, managing editor at ASP media specialist ASPnews.com, explained:

“Maintaining high performance when hosting servers in an Internet data centre is still something of a black art. We see a lot of vendors investing in helping ASPs and hosting providers set up their data centres simply so that they can understand the issues better and start to define some best practice guidelines.”

Outsourcing Reinvented

There is nothing new about outsourcing, and in the past its success has been questioned, rightly so. Towards the end of the ‘80s many large companies chose to outsource their entire IT function, partly as a result of having their fingers burnt when the economy collapsed, only to have them burnt again and end up returning to running their own networks. In many cases this was unarguably a matter of handing over too much responsibility, for far too much money, to service providers who were simply incapable of doing the job properly. Back then it was a case of expensive outsourcing contracts for big companies or nothing, with little in the way of support for smaller companies that didn’t want to spend millions of pounds a year. Now the outsourcing concept has both come down the scale to apply to the SME (Small/Medium Enterprise) market, where around 95% of companies sit, and broadened to allow a company to tailor its outsourcing requirements to its own needs. So it is no longer an all-or-nothing deal, nor is it necessarily expensive. Many companies are afraid to give up complete control of their network but would like assistance with its day-to-day management, as well as expert but impartial advice on planning the way forward and on acquisition policies.

According to international analyst firm Gartner Group, selective outsourcing of business and IT functions will quickly become the norm, not the exception. In a recently presented five-year vision of the future of IT, Gartner claimed that there is a major requirement for change in the way the IT function is traditionally handled within companies, leading to more reliance on external service providers and less permanent in-house staffing. However, according to the analysts, this does not mean the end is in sight for the IT manager! While Gartner sees some companies completely outsourcing their IT functions, over 85% will maintain internal IT services, albeit changed in size and scope.


Copyright © 1991-2003 The NSS Group Ltd.
All rights reserved.
