
Internet Security

"The Modern Day Gold Rush"

In the last ten years, the Internet has grown from 50 to well over 10,000 networks. In 1988, the National Science Foundation estimated that there were over half a million users; today, that organisation estimates 20 million people around the world use the Internet.

One of the Internet's key strengths is that no one agency or organisation is responsible for its overall management, leaving it largely free of bureaucratic control and burdensome regulation. From a security standpoint, however, this could also be viewed as one of its weaknesses.

Early in the Internet's development, responsibility for managing and securing host computers was given to end users - the host sites, such as college campuses and government agencies, that owned and operated them. It was believed that the host sites were in the best position to manage and determine a level of security appropriate for their systems. Thus, each of the Internet's thousands of networks maintains operational control over its own network, whether it is a backbone network, regional network or local-area network.

Of course, in those early days, the main users of the Internet were educational and research establishments which were using this convenient new medium to ensure that their data reached as wide an audience as possible. Commercial concerns were largely ignored.

Today, however, this massive network is destined to become a cornerstone for the much hyped "information superhighway." This has led to an increasing number of commercial organisations scrambling to get a stake in the Internet, and since mid-1990, the number of businesses connecting to the Internet has increased thirty-fold, to 11,000 organisations, according to one expert. Many of these are using the Internet as a source of information and reference material, browsing the World Wide Web (WWW) and Newsgroups for the latest gossip or hard facts on a whole range of issues. Others rely on it for transfer of information, sending messages and data files via e-mail, File Transfer Protocol (FTP) or Gopher sites.

But the Internet is nothing if not an equal opportunity environment, and any company can set up a Web server (or rent space on someone else’s) in order to publish information about their own company or services. As well as simply giving out information, a well-designed Web site can collect information from its "visitors" - perhaps customer feedback or suggestions - and even take orders for products. This has led to the development of Web Commerce, and many businesses are setting up electronic "shop fronts" on the Internet in anticipation of secure payment mechanisms becoming widely available during 1996.

Internet Security Issues

As the number of networks, hosts and users on the Internet has mushroomed, so too have security incidents. The Computer Emergency Response Team, or CERT, an Internet security watchdog organisation, calculates that the number of security breaches has risen from 180 in 1990 to over 1,300 in 1993. Put another way, security breaches, which occurred at the rate of one every other day in 1990, exploded to nearly four a day in 1993.

Unfortunately, commercial organisations are notoriously tight-lipped about security breaches of any kind, thus making it difficult to track and stop security problems in the increasingly overburdened Internet. It is easy to imagine, however, what some of the potential consequences of a security breach via the Internet might have on a commercial operation. Given the enormous reliance most organisations now place on information systems technology, unauthorised tampering with those systems or the theft of the information they contain could have serious financial impact. In addition to data loss, organisations are faced with network downtime, lost productivity and the possibility of negative publicity in the marketplace.

Given the current interest in Web commerce from both the consumer - who wants an easier way to shop - and the supplier - who is keen to exploit any new market opportunity - it is unfortunate that the lack of adequate security is also hindering many organisations with Internet protocol networks that could otherwise conduct business on the Internet. The Internet Society estimates that about half of all networks are not tied into the Internet mainly because their network administrators are too fearful of the security risks.

In short, the Internet cannot be trusted for business until companies can control and audit data completely. There are two key areas which need to be addressed: access (keeping out the bad guys) and transit (ensuring that data gets to its intended destination safely).

Access

At the most basic level, all that is required to connect to the Internet is a modem attached to your PC and a connection to an Internet Service Provider (ISP). However, whilst the most straightforward means of connection is undoubtedly the direct modem connection from desktop to ISP, it is not necessarily the most cost-effective. Because an increasing number of companies are recognising the Internet as an ideal source of information and free technical support - as well as the perfect medium for advertising - the requirement for an Internet connection to every desk is becoming a reality. Nevertheless, the cost of a modem and a telephone connection for all employees in even a small company can make the exercise cost-prohibitive. And it certainly is not the best use of resources which are already available.

We are talking, of course, about the Local Area Network (LAN). Most LANs were installed in the first place to leverage investment in hardware and software by allowing a number of users to share scarce - and often expensive - central resources such as database servers, printers, scanners, and communications links.

Unfortunately, the most obvious problem with Internet security is that as soon as you connect your network to the Internet, you are effectively opening a data pipe to the outside world. This is necessary to provide outbound connections for all your network clients, but is just as likely to allow unwelcome intruders to wander around your confidential data if you are not careful. In order to prevent this, effective hardware and software barriers can be put in place - in the form of firewalls and password/authentication schemes.

Firewalls

Essentially, a firewall should be thought of as a gap between two networks - in our case an internal network and the Internet - occupied by a mechanism that lets only a few selected forms of traffic through.

At its simplest level, a firewall can be nothing more than a screening router configured to filter out unwanted TCP/IP packets - perhaps restricting inbound connections to known sites, for example. The plus side is that most organisations already have routers in place, thus eliminating the requirement for additional capital expenditure. The down side is that this approach is not particularly flexible, particularly in environments which need to permit wide-ranging access (i.e. to anyone who wants to purchase products), yet restrict visitors to only a tiny portion of the network (i.e. the Web server only). In general, routers cannot be customised to specific network environments, do not authenticate users, and have no audit capability. If not properly set up, the firewall may thus have trapdoors through which intruders can surreptitiously enter.
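As a rough illustration of such screening logic, the sketch below applies an ordered rule list to incoming packets. The networks, ports and rule table are hypothetical examples for illustration only, not a recommended configuration.

```python
# Minimal sketch of screening-router packet filtering. The rule table,
# addresses and ports are illustrative assumptions, not real policy.
from ipaddress import ip_address, ip_network

# Ordered rule list: (action, source network, destination port or None = any)
RULES = [
    ("allow", ip_network("192.0.2.0/24"), 80),     # known partner -> Web server
    ("allow", ip_network("198.51.100.0/24"), 25),  # partner relay -> mail
    ("deny",  ip_network("0.0.0.0/0"), None),      # default: drop everything else
]

def filter_packet(src: str, dst_port: int) -> str:
    """Return the action of the first matching rule, as a screening router would."""
    addr = ip_address(src)
    for action, net, port in RULES:
        if addr in net and (port is None or port == dst_port):
            return action
    return "deny"

assert filter_packet("192.0.2.7", 80) == "allow"    # permitted site and port
assert filter_packet("192.0.2.7", 23) == "deny"     # permitted site, wrong port
assert filter_packet("203.0.113.5", 80) == "deny"   # unknown site
```

Note that the filter sees only addresses and ports; as the text observes, it has no idea *who* is behind the packet, which is why routers alone cannot authenticate users.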

For this reason, most organisations who are serious about Internet connectivity will invest in a full proxy-based Internet Firewall. Also known as a "dual-homed gateway", this is a system with two network interfaces that sits on both the protected network and the public network. Since the gateway can communicate with both networks it is an ideal place to install software for carrying data back and forth. These agents are called "proxies", and one is required for each service you wish to provide. For instance, a WWW proxy will manage user connections to the Internet, and will ensure that incoming data packets are for a valid recipient - otherwise they will not be passed through the firewall.

One of the biggest advantages of effective firewalls is that they present just a single IP address to the outside world, thus hiding the real structure of your network from prying eyes. They will also usually provide full auditing and reporting facilities. Unfortunately, the administrative burden is high, since the network administrator must create and maintain the security architecture, programming for every possible exposure. It is also true to say that a completely secure firewall is not always transparent to the user.

It is important to recognise that the most secure configuration is to place your Web server on the outside of the firewall’s protection, making it a part of the external Internet. This obviously leaves the Web server itself open to attack, but maintains the integrity of the internal network completely. Protection of the Web server itself - as well as your internal network - often comes down to how secure your password and authentication mechanism is. This brings us on to our next topic - passwords.

Passwords

Passwords have long been the front line of defence in protecting information systems and networks. Unfortunately, they are usually the first thing a hacker will try to "break" to gain access to your system, and even well-composed passwords are vulnerable to being intercepted and "stolen" by today's more sophisticated system attackers.

The main problem is that most systems today rely on reusable passwords, so once a valid password has been captured electronically from the network cable, or simply guessed, it can be used over and over again to gain illegal access to a supposedly protected system.

As a means of eliminating reliance on stand-alone passwords, today's information systems managers may choose from several access control technologies, such as dial-back systems, biometric devices, and a series of "token technologies" including challenge/response "calculators", smart cards that require card readers, and time-synchronised "super" smart cards that can be used without a card reader.

Each of these authentication systems offers its own unique advantages. However, dial-back systems can't authenticate users on the road and can be rendered useless by convenient telephone features like call forwarding. Dial-back systems quite simply are not designed to secure Internet access and are therefore not a comprehensive solution, often authenticating terminals only - not users. And while biometric devices may be highly effective in authenticating user identity, their cost and lack of portability may preclude their use in today's mobile computing environments.

Information systems that deploy challenge-response technology require that the user accurately respond to a challenge or request for a password from the host computer. The response or password is usually generated by a small "calculator-type" device carried by the user, though the generation process itself can sometimes be time-consuming and lead to user frustration. Smart cards requiring card readers restrict those users who travel, and may require expensive host-end and application software support.

In contrast, the time-synchronised super smart card contains a microprocessor that generates and displays a new password every time it is used or within a predetermined period of time (usually every 60 seconds). Once the smart card and the host system are synchronised, the card provides a relatively secure and convenient means of challenge/response authentication.

The premise of using a smart card for security applications is based on a long recognised notion that there are three ways for a user to authenticate himself or herself:

  • Something the user knows, such as a PIN or reusable password
  • Something the user has, such as a smart card or a token
  • Something specific to the user, such as his fingerprint or voice

More advanced security technologies employ at least two of these three factors of user authentication and identification. Factor one is a memorised personal identification number, and factor two is a smart card with its displayed code generated at a programmed interval. The two factors combine to produce a one-time password which, having been used once, is rendered inactive, and thus useless to the hacker on subsequent occasions.
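The time-synchronised scheme can be sketched as follows. This assumes an HMAC-based derivation in the spirit of modern time-synchronised tokens; the shared secret and 60-second interval are illustrative, and real cards use their own vendor-specific algorithms.

```python
# Sketch of a time-synchronised one-time password: card and host share a
# secret and a clock, and each derives the same short-lived code.
# The secret value here is an illustrative assumption.
import hashlib
import hmac
import struct

def one_time_password(secret: bytes, t: int, interval: int = 60) -> str:
    """Derive a 6-digit code from a shared secret and the current time slot."""
    counter = struct.pack(">Q", t // interval)       # both sides compute the same slot
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

# Card and host agree as long as they are in the same 60-second slot:
assert one_time_password(b"shared-secret", 120) == one_time_password(b"shared-secret", 150)
```

Combined with a memorised PIN (factor one), the displayed code (factor two) yields a password that is worthless to an eavesdropper once the time slot has passed.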

Transit

Having ensured that our network is protected from illegal access, the next step is to ensure that data transmitted over the public network is safe from prying eyes - particularly of the electronic variety! There are two means of ensuring this - encryption (where data is securely encoded), and authentication (where the identity of the sender and the integrity of the data can be verified).

Encryption

Encryption is the transformation of data into a form unreadable by anyone without a secret decryption key. Its purpose is to ensure privacy by keeping the information hidden from anyone for whom it is not intended, even those who can see the encrypted data. For example, one may wish to encrypt files on a hard disk to prevent an intruder from reading them.

In a multi-user setting, encryption allows secure communication over an insecure channel, such as the Internet. Traditional, or "secret-key", cryptography is based on the sender and receiver of a message knowing and using the same secret key: the sender uses the secret key to encrypt the message, and the receiver uses the same secret key to decrypt the message. The main problem is getting the sender and receiver to agree on the secret key without anyone else finding out.

With public-key cryptography, however, each person gets a pair of keys. These are known as the public key and the private key, and each person's public key is published while the private key is kept secret. The need for sender and receiver to share secret information is eliminated, since all communications involve only public keys, and no private key is ever transmitted or shared.
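The relationship between the two keys can be illustrated with textbook RSA and deliberately tiny numbers. Real keys use primes hundreds of digits long plus careful padding; this is a sketch of the arithmetic only.

```python
# Toy public/private key pair (textbook RSA, tiny numbers, no padding).
# Offers no real security; it only shows how the two keys relate.
p, q = 61, 53
n = p * q                   # modulus, published as part of the public key
phi = (p - 1) * (q - 1)     # Euler's totient of n
e = 17                      # public exponent, chosen coprime to phi
d = pow(e, -1, phi)         # private exponent: modular inverse of e mod phi

public_key, private_key = (n, e), (n, d)
assert (e * d) % phi == 1   # the two exponents undo each other

# Anyone may encrypt with the public key; only the private key decrypts:
m = 42
c = pow(m, e, n)            # encrypt with the public key
assert pow(c, d, n) == m    # decrypt with the private key
```

The private exponent `d` never leaves its owner, which is exactly the property the paragraph above describes: all communication involves only the published values `n` and `e`.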

The primary advantage of public-key cryptography is increased security, since the private keys never need to be transmitted or revealed to anyone. In a secret-key system, by contrast, there is always a chance that a hacker could discover the secret key while it is being transmitted.

In practice, however, a public-key system such as RSA is combined with a secret-key cryptosystem, such as DES, to encrypt a message by means of an RSA digital envelope.

Suppose Alice wishes to send an encrypted message to Bob. She first encrypts the message with DES, using a randomly chosen DES key. Then she looks up Bob's public key and uses it to encrypt the DES key. The DES-encrypted message and the RSA-encrypted DES key together form the RSA digital envelope and are sent to Bob. Upon receiving the digital envelope, Bob decrypts the DES key with his private key, then uses the DES key to decrypt the message itself.
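The same three steps can be sketched with a toy textbook RSA key pair and a hash-based stream cipher standing in for DES. None of this offers real security; it merely shows the structure of the envelope.

```python
# Toy sketch of the RSA digital envelope. The tiny RSA key and the
# hash-derived keystream are illustrative stand-ins for RSA and DES.
import hashlib
import os

# Bob's toy RSA key pair: n = 61 * 53, public e, private d
N, E, D = 3233, 17, 2753

def keystream_encrypt(key: bytes, data: bytes) -> bytes:
    """Stand-in for DES: XOR the data with a hash-derived keystream."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# 1. Alice picks a random session key and encrypts the message with it.
session_key = os.urandom(16)
ciphertext = keystream_encrypt(session_key, b"Meet at noon")

# 2. She encrypts the session key with Bob's PUBLIC key (byte by byte,
#    since our toy modulus is tiny). Ciphertext + wrapped key = the envelope.
wrapped_key = [pow(b, E, N) for b in session_key]

# 3. Bob unwraps the session key with his PRIVATE key, then decrypts.
recovered_key = bytes(pow(c, D, N) for c in wrapped_key)
plaintext = keystream_encrypt(recovered_key, ciphertext)  # XOR is its own inverse
assert plaintext == b"Meet at noon"
```

The design point survives the toy scale: the fast symmetric cipher carries the bulk of the data, while the slower public-key operation protects only the short session key.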

Authentication

Authentication in a digital setting is a process whereby the receiver of a digital message can be confident of the identity of the sender and/or the integrity of the message. Authentication protocols can be based on either conventional secret-key cryptosystems like DES or on public-key systems like RSA, which uses digital signatures.

Suppose Alice wishes to send a signed message to Bob using RSA. She uses a "hash function" to create a concise, effectively unique version of the original text - known as a "message digest" - which serves as a "digital fingerprint" of the message. She then encrypts the message digest with her RSA private key, and this becomes the digital signature, which she sends to Bob along with the message itself.

Bob, upon receiving the message and signature, decrypts the signature with Alice's public key to recover the message digest. He then hashes the message with the same hash function Alice used and compares the result to the message digest decrypted from the signature.

If they are exactly equal, the signature has been successfully verified and he can be confident that the message did indeed come from Alice. If, however, they are not equal, then the message either originated elsewhere or was altered after it was signed, and he rejects the message. Note that for authentication, the roles of the public and private keys are converse to their roles in encryption, where the public key is used to encrypt and the private key to decrypt.
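Alice's sign-and-verify steps can be sketched with a toy textbook RSA key. This is illustrative only: real signatures use full-size keys and properly padded digests.

```python
# Toy hash-and-sign sketch: Alice signs a message digest with her PRIVATE
# key; Bob verifies with her PUBLIC key. Tiny key, illustrative only.
import hashlib

# Alice's toy RSA key pair: n = 61 * 53, public e, private d
N, E, D = 3233, 17, 2753

def digest(message: bytes) -> list[int]:
    """Message digest, split into bytes small enough for the toy modulus."""
    return list(hashlib.sha256(message).digest())

def sign(message: bytes) -> list[int]:
    # Alice encrypts the digest with her private exponent
    return [pow(b, D, N) for b in digest(message)]

def verify(message: bytes, signature: list[int]) -> bool:
    # Bob decrypts the signature with Alice's public exponent and
    # compares the result against a fresh digest of the message
    return [pow(s, E, N) for s in signature] == digest(message)

sig = sign(b"Pay Bob $100")
assert verify(b"Pay Bob $100", sig)        # genuine message verifies
assert not verify(b"Pay Bob $999", sig)    # altered message is rejected
```

As the text notes, the key roles are the converse of encryption: here the private key "encrypts" (signs) and the public key "decrypts" (verifies).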

One or more certificates - containing a public key and a name - may accompany a digital signature. A certificate is a signed document attesting to the identity and public key of the person signing the message, its purpose being to prevent someone from impersonating someone else, using a false key pair. If a certificate is present, the recipient (or a third party) can check the authenticity of the public key, assuming the certifier's public key is itself trusted. This means that digitally signed messages can be proved authentic to a third party, such as a judge, thus allowing such messages to be legally binding.

Web Security Protocols - SSL and S-HTTP

Netscape Communications has designed and specified a protocol for providing data security layered between application protocols (such as HTTP, Telnet, NNTP, or FTP) and TCP/IP. This security protocol, called Secure Sockets Layer (SSL), provides data encryption, server authentication, message integrity, and optional client authentication for a TCP/IP connection.

SSL provides a security "handshake" that is used to initiate the TCP/IP connection. This handshake results in the client and server agreeing on the level of security they will use, and fulfils any authentication requirements for the connection. Thereafter, SSL will encrypt and decrypt the byte stream of the application protocol being used (for example, HTTP, NNTP, or Telnet). This means that all the information in both the HTTP request and the HTTP response is fully encrypted, including the URL the client is requesting, any submitted form contents (including things like credit card numbers), any HTTP access authorisation information (user names and passwords), and all the data returned from the server to the client.
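This handshake-then-encrypted-byte-stream model survives in Python's standard `ssl` module (built on TLS, the modern descendant of Netscape's SSL), and the client side can be sketched as below. The host name is a hypothetical example, and the function is shown but not run.

```python
# Client-side sketch of the SSL/TLS model: handshake first, then the
# application protocol flows through an encrypted byte stream.
import socket
import ssl

# Server authentication is required by default in a default context.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED

def fetch_page(host: str) -> bytes:
    """Handshake with `host`, then speak plain HTTP over the encrypted stream.

    The host name passed in is an assumption of the caller; this sketch
    performs real network I/O, so it is defined here but not executed.
    """
    with socket.create_connection((host, 443)) as raw:
        # wrap_socket performs the SSL/TLS handshake, including
        # certificate-based server authentication
        with context.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
            return tls.recv(4096)  # response arrives already decrypted
```

Everything `sendall` carries - the request line, headers, and any form data - is encrypted on the wire, exactly as the paragraph above describes.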

S-HTTP is a security-enhanced variant of HTTP, though the two have different motivations. SSL layers security beneath application protocols like HTTP (the language of the Web), NNTP, and Telnet, whereas S-HTTP adds message-based security to HTTP specifically. SSL and S-HTTP are not mutually exclusive - rather, they can easily co-exist in a complementary fashion by layering S-HTTP on top of SSL.

Conclusion

There are means available to combat the unwelcome visitor. But the biggest challenge of all is staying one step ahead of the hacker. Because the tools to break in change as fast as the tools in the guard house, staying abreast of current technology is a full-time occupation. This certainly raises several questions over the efficacy of DIY approaches versus the implementation of Independent Software Vendor (ISV) developed security products with good support and regular updates.

With regard to commerce on the Internet, it is unlikely to take off significantly until secure end-to-end authentication and encryption schemes are standardised and adopted world wide. One method of providing this is by the use of proprietary encrypted pipes, enabled by new combined router/firewall products with built-in encryption technology. Unfortunately, this is still some way off, since the Internet Engineering Task Force (IETF) is still working on the IP Security Protocol (IPSec) standard, the encryption/authentication scheme for TCP/IP. Until then, Virtual Private Networks over encrypted network links will be rife with incompatibilities.

In the short term, therefore, applications-based encryption is the answer. With the backing of IBM/Prodigy, Netscape, EIT, CompuServe, RSA and AOL, it is US-based Terisa’s intention to deliver a single developers tool-kit that combines the Web security protocols SSL and S-HTTP. Terisa’s de facto leadership should signal the beginning of next-generation Web products, ultimately resulting in a secure "World-wide Transaction Web".

At the end of the day, however, one hundred per cent security can never be guaranteed, no matter how much you spend on firewalls and authentication mechanisms, so the question is how much of a loss is acceptable?

A recent report by Forrester Research puts the potential loss via Internet fraud at just £1 per £1000 of transactions (Source: Forrester Network Strategy Service, June 1995). Forrester believes that as a measure of risk versus return, this 0.1 per cent loss can be tolerated. In fact, it compares exceptionally well with other forms of business such as MasterCard (a loss of £1.41 per £1000) and cellular communications (£19.83 per £1000).


Copyright © 1991-2006 The NSS Group Ltd.
All rights reserved.