
Internet/Intranet Security

Introduction

I have seen it written recently that, thanks to the Internet "the world is a global village, and all the doors are unlocked!"

It is an interesting thought - and largely true - but the analogy has one weak point. In the past (and even today in some parts of the country) people in small villages left their doors unlocked because they trusted everyone else, and knew that nothing bad was going to come of their lack of security.

In the case of the Internet, however, people know that something bad is likely to happen to them if they do not lock their doors. They know that there are a number of individuals out there who have nothing better to do than to attempt to break into their computer systems just for the hell of it - perhaps trashing a great deal of important data in the process. Yet despite this frightening knowledge they have little choice but to go ahead and make the connection. These brave - or foolhardy - pioneers can be grouped into four categories:

  • Those who have no locks on their doors - they are bewildered and confused at the choice.
  • Those who have fitted locks but do not know how, or cannot be bothered, to use them - the locks are there for show, but are completely ineffective.
  • Those who have fitted the equivalent of single-lever locks, not realising that the average intruder can open them with nothing more sophisticated than a screwdriver. These users are probably in a worse position than those in category 2, because they believe themselves to be secure, whilst in actual fact they are not. They are thus more inclined to leave valuables in plain sight.
  • Those with five-lever locks, chains and deadbolts which have all been tested and proven to keep out intruders. They also ask for identification before opening the door to anyone - this group is relatively safe.

The Growth of the Internet

In the last ten years, the Internet has grown from 50 to well over 10,000 networks. In 1988, the National Science Foundation estimated that there were over half a million users; today, that organisation estimates 20 million people around the world use the Internet.

In the not too distant future, this massive network is destined to become a cornerstone of the much hyped "information superhighway." This has led to an increasing number of commercial organisations scrambling to get a stake, and since mid-1990 the number of businesses connecting to the Internet has increased thirty-fold, to 11,000 organisations, according to one expert.

Many of these are using the Internet as a source of information and reference material, browsing the WWW and Newsgroups for the latest gossip or hard facts on a whole range of issues. Others rely on it for transfer of information, sending messages and data files via e-mail, File Transfer Protocol (FTP) or Gopher sites.

Web commerce is also set to become big news, and many businesses are setting up electronic "shop fronts" on the Internet in anticipation of secure payment mechanisms becoming widely available followed, hopefully, by an increase in consumer confidence.

Some of those businesses are also using Internet technologies, such as Web browsers and Web servers, to provide easy and widespread access to corporate information to internal users only. Such "private Internets", sometimes with no connection at all to the outside world, are known as "intranets". Despite the fact that intranets are for internal use only, the challenges are no different to those involved with the Internet itself. Indeed, building intranets and connecting corporate networks to the Internet is not a simple task.

Companies are turning to the Internet as an established, easily available, yet cost-effective resource that will hopefully allow them to gain a competitive edge. The benefits of adopting Internet technology range from lower communications costs (transporting data across the Internet can cost much less than using a private network) to greatly improved communication - but there are many different risks involved.

Security Issues

Less than two years ago, people were asking whether the virus threat was real or merely a scare tactic dreamed up by vendors of anti-virus software. As reports of "in the wild" viruses began to hit the press, the threat was finally accepted as having a basis in fact, but it was still thought that, as long as we were careful, "it could never happen to us".

But, of course, it can - and it does. Address a room full of people today and ask how many have suffered from any form of virus attack, and a good sixty per cent or more will respond in the affirmative. Why? The simple answer is the advent of the Macro virus, which can be transmitted via innocent-looking Word documents or Excel spreadsheets.

People are generally wary of downloading executable files from the Internet, and would rightly be suspicious of an unsolicited executable attachment to an e-mail message. But the majority of us are heavy users of e-mail, and Internet mail is designed to transmit documents. We are never as careful when we open a document attachment to an e-mail message because we are so used to receiving information that way. Add to that the fact that the corporate e-mail server could have broadcast an identical infected message to every single employee in a matter of minutes and you have a very powerful, yet simple method of effecting a virus attack.

The same scepticism has, to a certain extent, been evident in the face of the Internet hacking threat. Is there really a problem? Once again, it has taken one or two well-publicised attacks on major organisations to bring home just how vulnerable we are when we take up residence in the Global Village. In August 1996, for example, the US Department of Justice was the victim of a notorious Web site hacking. In a protest against the Communications Decency Act, the perpetrator gained access to the DoJ Web server and made extensive changes to the pages there, posting some less-than-savoury replacement material. Just one month later, the CIA (of all people) became the victim of a similar attack.

Following this was the instance when a major US-based Internet Service Provider (ISP) was virtually closed down for a week - preventing access to all Web servers and e-mail - due to what has since become known as a Denial of Service (DoS) attack.

One in five respondents to a recent survey admitted that intruders had broken into, or had tried to break into, their corporate networks, via the Internet, during the preceding twelve months.

This is even more worrying than it sounds, since most experts agree that the majority of break-ins go undetected. For example, attacks by the Defense Information Systems Agency (DISA) on 9,000 US Department of Defense computer systems had an 88 per cent success rate, but were detected by fewer than one in twenty of the target organisations. Of those organisations, only five per cent actually reacted to the attack (Source: NCSA).

The threat, therefore, is demonstrably real, and the network administrator is in the unenviable position of securing the corporate networks against similar events.

The use of Internet technology in private networks has led to a "blurring" of the boundaries between the public and private portions of those networks. When purely internal networks take on the "look and feel" of the Internet, and when the public Internet is used to create a "Virtual Private Network" on an ad hoc basis, the goal is to make both public and private portions of the network appear as a seamless whole as far as the end-user is concerned.

Such mixed access is not without its problems, however, since the security systems chosen must be flexible enough to meet the needs of both types of network connection whilst remaining completely transparent to the user.

But effective network security is not just about buying the electronic equivalent of the five-lever lock and deadbolt.

Security Policy

Whilst many organisations are still trying to get to grips with producing a formal IT strategy document, we must now ask them to look at producing a written Security Policy too.

Too many organisations go shopping for a firewall, install it with all the default settings, and then sit back in the belief (usually mistaken) that they are fully protected. A firewall is purely a means to an end - a means of implementing a corporate Security Policy. The firewall is not the policy itself - different companies have different demands, for instance, and some may be able to accept more risks than others. Implementing a firewall involves making a number of often difficult choices - which services to allow or disallow, for instance - and these choices are driven by your own Policy.

This document should cover such things as network service access, physical access, limits of acceptable behaviour, specific responses to security violations (i.e. disciplinary offence, instant dismissal, etc.), and who is responsible for the maintenance and enforcement of the policy.

There are two levels of network policy that directly influence the design, installation and use of a firewall system:

Network Service Access Policy: a high-level, issue-specific policy which defines those services that will be allowed or explicitly denied from the restricted network, plus the way in which these services will be used, and the conditions for exceptions to this policy.

Firewall Design Policy: a lower-level policy which describes how the firewall will actually go about restricting the access and filtering the services as defined in the network service access policy.

The network service access policy should simply be an extension of a strong site security policy, and of an overall policy regarding the protection of information resources within the company. This will cover everything from document shredders, through virus scanners, to floppy disk tagging. At the highest level, this document will address the limits of Internet connectivity, often represented by the "four P's" of Internet security:

  • Paranoid - No Internet connection. Everything is forbidden (even, perhaps, that which should be allowed).
  • Prudent - Everything is forbidden except that which is explicitly allowed.
  • Permissive - The philosophical opposite of prudent. Everything is allowed except that which is explicitly forbidden.
  • Promiscuous - Everything is allowed (including those things that ought to be forbidden).

You would be surprised how many organisations unwittingly implement a firewall along the lines of the "promiscuous" model, believing themselves to be completely protected simply because they have the firewall. As I have already mentioned, such a false sense of security is possibly more dangerous than having no protection at all. In most cases, you would not require a firewall for either the first or the last category - the "promiscuous" would end up disabling most of the firewall’s features, whilst the "paranoid" would simply not have a direct Internet connection.

Most policies implemented by organisations wanting to make efficient use of the Internet therefore fall into either the "permissive" or "prudent" categories. The first allows all services to pass into the site by default, with the exception of those that the service-access policy has identified as disallowed. A firewall that implements the "prudent" policy, by contrast, denies all services by default, but then passes those services that have been identified as allowed. This follows the classic access model used in all areas of information security.

The "permissive" policy is less desirable, since it offers more avenues for getting around the firewall. For example, users could access new services not currently denied (or even addressed) by the policy, including services which are denied elsewhere but which can be found running at non-standard TCP/UDP ports. Certain services, such as X Windows, FTP, Archie, and RPC, are difficult to filter, and for this reason they may be better accommodated by a firewall that implements a "permissive" policy. Also, whilst the "prudent" approach is stronger and safer, it is more difficult to implement and more restrictive for users - services such as those just mentioned may have to be blocked or heavily curtailed.
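
The practical difference between the "prudent" and "permissive" models can be sketched in a few lines of Python. The service names and rule sets below are purely illustrative:

```python
# Sketch of default-deny ("prudent") vs default-allow ("permissive")
# service-access policies. Service names are hypothetical examples.

ALLOWED = {"http", "smtp", "dns"}      # prudent: explicit allow list
DENIED  = {"x11", "rpc", "telnet"}     # permissive: explicit deny list

def prudent(service: str) -> bool:
    """Deny everything except that which is explicitly allowed."""
    return service in ALLOWED

def permissive(service: str) -> bool:
    """Allow everything except that which is explicitly denied."""
    return service not in DENIED

# A new service nobody has thought about yet is blocked under the
# prudent model, but silently allowed under the permissive one.
print(prudent("gopher"))      # False - blocked by default
print(permissive("gopher"))   # True  - slips through by default
```

This illustrates why the permissive model leaks: any service not yet named in the policy passes by default.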

Another requirement of the Security Policy is the classification of data. This is alien to many organisations, since it requires that they define the relative value of the various types of information used within the company. This can range from low value - such as price lists, product specifications, and other public information which may be placed on a Web server - to high value - such as brand new product designs and other commercially sensitive information.

There are three characteristics that an organisation should consider concerning the classification of important data:

  • Confidentiality - Whilst some corporate data is for public consumption, the vast majority of it should remain private.
  • Integrity - What (if any) data can be amended by external sources. In most cases, corporate data should remain unchanged by third parties, so the system should be capable of ensuring that only authorised personnel can effect changes. Integrity also concerns the subject of non-repudiation - once an order is received, for instance, the customer should not be able to claim later that it did not come from him. Digital signatures allow us to verify the originator, as well as ensuring that data has not been tampered with in transit.
  • Availability - What data needs to be available continually, compared to data which can be "off line" for limited periods.
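
As a minimal sketch of the integrity property, a keyed hash (HMAC) lets the recipient detect tampering in transit. Note, however, that because sender and recipient share the key, HMAC alone does not provide non-repudiation - that requires an asymmetric digital signature. The key and messages below are illustrative only:

```python
# Integrity check with a keyed hash (HMAC over SHA-256).
# Caveat: both parties hold the same key, so this proves integrity
# and origin between them, but NOT non-repudiation to a third party.
import hmac
import hashlib

key = b"shared-secret-key"   # illustrative only - not a real secret

def tag(message: bytes) -> bytes:
    """Compute the authentication tag for a message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(tag(message), mac)

order = b"order: 100 units, part #A7"
mac = tag(order)
print(verify(order, mac))                           # True
print(verify(b"order: 900 units, part #A7", mac))   # False - tampered
```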

The Security Policy must be part of an overall organisational security scheme by which everyone abides from the Chairman down to the janitor.

The focus for such a policy must come from the top - it must have the unwavering support of the Chairman and the Board of Directors and those people must be seen to be practising what they preach.

For instance, it is not acceptable for the Managing Director to turn off virus scanning because he finds it inconvenient. This is often the real test of an organisation’s commitment to its security policy - when it involves money. If there are a number of slow workstations still in use which take an inordinately long time to perform the virus scan, will management sanction capital expenditure to upgrade that hardware or will it tolerate a lapse in security by allowing users to disable the scanning operation?

The short-sighted view is to suspend certain aspects of the Security Policy - perhaps initially only on a "temporary" basis until the upgrades can be effected. This, however, can backfire seriously since a security breach could result in data loss which could cost far more to recover from than the proposed hardware upgrades.

Implementing Security Policy - The Firewall

Having defined our Security Policy, the next step is to implement it. There are various tools around to provide protection against unwanted intruders into our corporate networks, but the most widely-known, and widely-used, is the firewall.

There are a number of definitions of the firewall, but perhaps the simplest is "a mechanism used to protect a trusted network from an untrusted network". A firewall is a system, or group of systems that enforces an access control policy between two networks, and thus should be viewed as an implementation of policy. The bottom line, therefore, is that a firewall is only as good as the Security Policy it supports.

One thing to bear in mind right from the outset is that a firewall is not simply for protecting a corporate network from unauthorised external access via the Internet, it can also be used internally to prevent unauthorised access to a particular subnet, workgroup or LAN within a corporate network. Figures from the FBI suggest that 70 per cent of all security problems originate from inside an organisation. Thus, for example, if your Research & Development department has its own server, you could protect it and the department’s workstations behind a firewall, whilst still allowing them to remain a part of the corporate-wide network.

Traditional Firewall Architectures

The road to the current state of firewall technology has progressed through three generations. The "traditional" technologies take the form of:

Packet Filters - The earliest firewalls were capable of permitting or denying traffic based on simple field-level filters which could examine such things as the source or destination packet address, or the protocol being used. They have the advantage of being fast (imposing minimal processor overhead), transparent and inexpensive. Unfortunately, they lack any form of application-level awareness (since they operate purely at and below the Network layer of the OSI stack), are relatively easy to "spoof", and are difficult to configure and manage effectively.

Packet filter firewalls, however, remain the most common type in use at the moment - if not the most common type currently being purchased - by virtue of the fact that many routers include such capabilities as a standard part of the system code. Even the later generations of firewalls are capable of performing simple packet-filtering functions if required.
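
The first-match behaviour of such a filter can be sketched as follows. The addresses and rule set are hypothetical, and a real filter would also match on network prefixes and port ranges:

```python
# Sketch of a first-match packet filter of the kind early routers
# provided, matching only on address and protocol fields.
# All addresses and rules below are illustrative examples.

RULES = [
    # (source, destination, protocol/port, action); "*" matches anything
    ("*",        "198.51.100.25", "tcp/25", "allow"),  # inbound SMTP to mail host
    ("10.1.1.5", "*",             "tcp/80", "allow"),  # outbound HTTP from one host
    ("*",        "*",             "*",      "deny"),   # default deny
]

def filter_packet(src: str, dst: str, proto: str) -> str:
    """Return the action of the first rule matching the packet."""
    for r_src, r_dst, r_proto, action in RULES:
        if all(p in ("*", v) for p, v in
               ((r_src, src), (r_dst, dst), (r_proto, proto))):
            return action
    return "deny"   # fail closed if no rule matches

print(filter_packet("203.0.113.7", "198.51.100.25", "tcp/25"))  # allow
print(filter_packet("203.0.113.7", "10.1.1.5", "tcp/23"))       # deny
```

Note that nothing here inspects payload: the filter has no idea what the application is doing, which is precisely the weakness described above.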

Proxy servers - These generally fall into two types. Application-Level Gateways establish a connection to a remote system on behalf of specific applications. This type of firewall is usually a collection of application proxies, with a one-to-one relationship between the application used and its proxy.

Circuit-Level Gateways, however, provide proxy/relay capabilities in a generalised form which is not limited to specific applications. Their primary advantage over Application-Level Gateways is that they do not require a specific application proxy for each new custom application that needs to be routed outside the internal network.

Although very secure, proxy servers typically suffer from a programming overhead (which can result in a delay in release of new proxies for Application-Level Gateways) and tend to be very CPU intensive, which can impact network performance.
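
The heart of a circuit-level gateway is little more than a byte relay between two connections, with no knowledge of the application protocol. A minimal sketch, using in-memory streams in place of real sockets so it is self-contained:

```python
# Sketch of the core of a circuit-level relay: once the gateway has
# authorised a connection, it simply pumps bytes between the inside
# and outside endpoints. io.BytesIO objects stand in for sockets here.
import io

def relay(inside, outside, bufsize=4096):
    """Copy bytes from the inside stream to the outside stream.

    Returns the total number of bytes relayed. A real gateway would
    run this in both directions and enforce its access policy first.
    """
    total = 0
    while True:
        chunk = inside.read(bufsize)
        if not chunk:
            break
        outside.write(chunk)
        total += len(chunk)
    return total

client = io.BytesIO(b"GET / HTTP/1.0\r\n\r\n")
server = io.BytesIO()
print(relay(client, server))   # 18 bytes relayed, content untouched
```

Because the relay never parses the data, it works for any TCP application - which is exactly why no per-application proxy is needed.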

New Firewall Architectures

The "third generation" of firewall technology brings us two competing architectures:

Stateful multi-layer inspection (SMLI) - SMLI is similar to Application Gateways in the sense that all levels of the OSI model are examined - from the network wire to the IP application layer.

Whereas proxies "stand in" for the application when talking to the outside world, thus imposing a significant processing overhead, SMLI merely examines each packet and compares it against known states (i.e. bit patterns) of "friendly" packets.

The "stateful" tag implies that the firewall is capable of remembering the state of each ongoing conversation across it, thus allowing it to effectively screen all packets for unauthorised access whilst maintaining high security, even with connectionless protocols such as UDP.

Such a firewall is completely transparent to both users and applications, and does not impose such a high processing overhead on the host CPU. Because SMLI is rules based, it has the disadvantage that new applications may require new rules to be written. However, the burden is far lower than that involved in writing new proxies, thus allowing SMLI vendors to support new Internet applications almost overnight.
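
The state-table idea can be sketched for a connectionless protocol such as UDP: an outbound datagram creates an entry, and an inbound datagram is accepted only if it matches an existing conversation. The addresses below are illustrative:

```python
# Sketch of stateful screening of UDP, a connectionless protocol.
# An outbound datagram records a conversation; only matching replies
# are allowed back in. All addresses are illustrative examples.

state = set()   # entries: (inside_addr, inside_port, outside_addr, outside_port)

def outbound(src, sport, dst, dport):
    """Record the conversation and let the datagram out."""
    state.add((src, sport, dst, dport))
    return "allow"

def inbound(src, sport, dst, dport):
    """Accept only replies to a recorded outbound datagram."""
    # a reply reverses the source and destination of the original packet
    if (dst, dport, src, sport) in state:
        return "allow"
    return "deny"

outbound("10.1.1.5", 5353, "198.51.100.53", 53)        # DNS query out
print(inbound("198.51.100.53", 53, "10.1.1.5", 5353))  # allow - a reply
print(inbound("203.0.113.9", 53, "10.1.1.5", 5353))    # deny - unsolicited
```

A production firewall would also expire entries after a timeout; without that, the table grows without bound.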

SOCKS

As we have seen, each different type of network security protects data at a different layer of the OSI model. Implicit in the notion of a firewall at any level, however, is the idea of controlled traversal, usually governed by a set of configurable rules applied to traffic passing through the firewall.

SOCKS is an open, industry-standard protocol advanced by the Authenticated Firewall Traversal working group of the IETF (Internet Engineering Task Force). It defines a protocol which allows TCP applications to traverse firewalls in a secure and controlled manner, gaining authenticated access through that server to an external network. On a multi-homed TCP/IP server, it can also be used to construct a firewall in its own right.

SOCKS is networking middleware: a circuit-level gateway, acting as a proxy at the session layer to mediate client/server (or host to host) connections and transactions on an intranet or the Internet. Because SOCKS operates at the session layer, rather than at the application layer, it is application-independent, applying security services on a generic session-by-session basis.

The current SOCKS specification is version 5, which is backward compatible with previous versions and adds key features such as authentication, encryption, support for UDP, DNS and Version 6 IP addressing.
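
The SOCKS version 5 wire format (defined in RFC 1928) is simple enough to construct by hand. A sketch of the client greeting and a CONNECT request, with an example host and port:

```python
# Building SOCKS v5 messages by hand, per RFC 1928.
# The host and port below are examples only.
import struct

def greeting(methods=(0x00, 0x02)):
    """Client greeting: VER, NMETHODS, METHODS...

    0x00 = no authentication required, 0x02 = username/password.
    """
    return bytes([0x05, len(methods), *methods])

def connect_request(host: str, port: int) -> bytes:
    """CONNECT request: VER, CMD=0x01, RSV, ATYP=0x03 (domain name),
    address length, address, then the port as 2 bytes big-endian."""
    name = host.encode("ascii")
    return (bytes([0x05, 0x01, 0x00, 0x03, len(name)])
            + name + struct.pack(">H", port))

req = connect_request("example.com", 80)
print(req.hex())
```

The server's reply echoes the version byte and carries a status code; 0x00 means the request was granted.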

The main problem with SOCKS has hitherto been its lack of transparency to software developers, and even to users.

Implementation requires a change to all existing client-based software to use the SOCKS libraries, a process known as "socksifying". On the face of it, therefore, SOCKS offers only a marginal advantage over traditional proxy servers given that both solutions require regular programming.

US-based start-up company Aventail, however, has developed a product known as AutoSOCKS, which allows existing Windows-based TCP/IP applications to negotiate a secure client connection with a SOCKS V5 server based on any of the supported authentication and encryption methods - and without making any changes to the client software.

SOCKS technology, therefore, combines the powerful features of circuit-level proxies without the programming overhead of traditional application-level firewalls. A number of companies, including IBM, DEC, Cyberguard, and Sterling, have commercial firewall products employing the SOCKS protocol.

Summary

The implementation of a corporate Security Policy should cover all aspects of corporate security from physical access to the site, through storage and disposal of confidential documents, and obviously including network and Internet access.

A good security technology should be powerful enough to support the features administrators need, including rules validation to inform the administrator of potential security back doors, automatic incident reporting to inform administrators when a security breach has occurred, and secure management of the firewall itself so hackers cannot reconfigure the firewall and create security problems. Such security technology should also be inexpensive, easy to implement, and transparent to end users.

As we move forward into the world of Internet commerce, it is important to remember that there have always been barriers to any kind of commerce. The pirates, highwaymen, bank robbers and confidence tricksters of the past are all paralleled by the modern-day "electronic highwayman" - the hacker.

Whatever the risks, business practices must continue to evolve. In order to move forward, we must accept some of those risks, whilst doing our utmost to minimise them as far as is humanly - and technologically - possible.

Copyright © 1991-2005 The NSS Group Ltd.
All rights reserved.