
NSS Group logo

Tonic 1.8

by Bob Walder

Table of Contents

Introduction
Application Availability
The Internet
Web Applications
Web Servers
Back-end systems
The Problem of Distributed Applications
Tonic 1.8
Architecture
Installation
Configuration and Use
The Console
Documentation
Job Definition
Scripts
Event Handling
Reporting and Analysis
Verdict

INTRODUCTION

Web applications are often victims of their own success. Unplanned increases in web traffic create server outages, application failures, and long wait times between transactions. This results in lost sales, disappointed customers, and damaged reputations - all of which, in turn, can result in lost revenue.

To make things worse, today’s web applications are really “meta-apps” that combine multiple back-end applications and systems into a single interface for the user, with many of these applications and systems distributed across an organisation or even across the globe. This added layer of complexity makes web applications extremely difficult to manage with a single tool.

Application Availability

The goal of all on-line services is to be available to customers 24 hours a day, seven days a week. Designing web sites for high availability means companies must try to overcome many points of failure, including hardware, software, in-house applications, external applications (such as business partner sites), networks, and human involvement (both deliberate attacks and human error). Failure at any of these points can make Web sites inaccessible, resulting in damage to the business.

The reality is that there will always be some part of the web application that will not perform as designed or expected. For companies that are betting their business on the Internet, a web applications management platform that can detect, diagnose, and fix a problem quickly pays for itself.

Brief discussions of four potential problem areas for availability appear below.

The Internet

Network availability is a common point of failure for many sites. Losing connections to your ISP can mean your site will not be available for customer use. Slow connections to a site frustrate users who often abandon one site to explore a more responsive competitor site. Even problems with DNS routing tables can render sites unavailable because users are not able to reach the site they want.

Web Applications

Web applications must function as expected. For example, the traditional definition of availability sounds like this: “Our servers are up 99.99 per cent of the time and maintain an eight-second or better response time to requests.”

However, this definition means that if you log onto an application as “John Doe” and erroneously receive personalised information meant for “Pat Paulsen,” as long as you receive the response within eight seconds the application is technically available.

For the Internet, a working definition for availability must include the delivery – still in a timely manner, of course – of the right information to the right user.

Web Servers

Web servers have many points of failure. CPU(s), memory, disk controllers, hard disks, network cards, and power supplies can all bring down a system if they fail to perform adequately. To mitigate the risks of a downed system, many companies place redundant machines in clusters with load-balancers to distribute load among active servers. This means that if one system fails there are others ready to handle traffic. Many load-balancing schemes are designed to route traffic to the system with the best response time.

However, imagine the case in which a web application is down and the web server software reports Error 505: System not responding. If the load-balancer receives this error message in two seconds from the problem server and it takes four seconds to get a response from the servers with working web applications, then the load-balancer will divert all traffic to the bad server because it has a better response time. Again, content integrity must be considered when developing a load-balancing solution – we may be getting a rapid response, but is the returned content valid?
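The failure mode described above can be sketched in a few lines. This is a hypothetical illustration (server names, timings and status codes are invented, not taken from any real load-balancer): a policy that routes purely on response time will happily pick the server that fails fastest, while a content-aware policy filters out invalid responses first.

```javascript
// Naive policy: the lowest response time wins; the content of the
// response is ignored entirely.
function pickFastest(servers) {
  return servers.reduce((a, b) => (a.responseMs < b.responseMs ? a : b));
}

// Content-aware policy: discard servers returning errors before
// comparing response times.
function pickFastestValid(servers) {
  const healthy = servers.filter((s) => s.status === 200);
  return healthy.length ? pickFastest(healthy) : null;
}

// Invented example fleet: web1 has a broken application, so it
// "responds" quickly - with an error.
const servers = [
  { name: "web1", responseMs: 2000, status: 505 }, // fails fast
  { name: "web2", responseMs: 4000, status: 200 },
  { name: "web3", responseMs: 4500, status: 200 },
];
```

With this data, `pickFastest` diverts all traffic to the broken `web1`, exactly the pathology the review describes, while `pickFastestValid` correctly selects `web2`.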

Back-end Systems

Today’s web applications are often integration points for multiple back-end systems inside and outside an organisation. For example, a typical shopping cart might combine an internal inventory database, an in-house application for collecting contact information and placing orders, and a third-party application at another site that calculates shipping costs and delivery dates. If a web application is unable to connect to any of these back-end systems, the customer will be unable to complete their transaction online. To maintain availability, you must be able to determine which back-end systems are problematic and be able to take corrective action.

The Problem of Distributed Applications

Web sites are rarely monolithic. That is, what appears to be a single site is often an amalgam of commercial and in-house applications and partner web sites. The interdependence of today’s web applications means that your company’s web applications are only as available as your partner’s applications.

Most organisations lack the ability to verify content integrity or to generate transactions against web applications and make sure that the information the application returns is complete and accurate. If information on a page is missing and that information derives from a partner site, then it is necessary to take corrective action and notify the partner of the problem as soon as possible. A common example is a site using credit card verification. Most sites subscribe to third-party services from companies like CyberCash to instantly verify credit card transactions before approving a purchase. If the link to the authorising site goes down, customers will have gone through all the steps needed to purchase online, yet they will be unable to buy.

Because nearly half of all visitors to e-Commerce sites abandon the sites if they grow impatient, the most likely scenario at this point is that the customer will click to a competitor site and conduct the same transaction there. On-line businesses thus need some means of constantly monitoring the status of Web applications from beginning to end in order to verify that all the components needed to complete an on-line sale are working, and thus preserve customer relationships.

Tonic 1.8

Tonic is a first-generation solution in the newly-defined Web Applications Management (WAM) market space.

It is designed to be able to assess the content integrity, functional integrity, and scalability of Web-based applications throughout the application life-cycle, as well as to monitor applications after they are deployed to ensure they continue to function as expected from the customer’s perspective.

Where Tonic discovers that an application is not performing as expected, it has capabilities that go well beyond simply notifying operations staff. Specifically, its capabilities cover the three key monitoring requirements as defined by WAM:

Problem Definition: The discovery of operational or web page content faults that stop the application functioning as intended.

Root-Cause Analysis: Identification of the error source, through analysis of the HTTP protocol, page content and other system data (for example, specific technical data relating to the current state of the components that make up the web application). Fault diagnosis may involve Tonic running other, pre-determined tests in order that the source can be isolated.

Automated Corrective Action: The automatic application of appropriate corrective procedures to correct the identified fault and keep the application running. This may be as simple as interfacing to existing Enterprise Systems Management products to allow the fault to be addressed, or may require Tonic itself to execute a complex set of tasks in order to correct problems with other systems.

To some degree, the difference between web testing (tends to be more reactive) and WAM (proactive) is a matter of timing. Being able to predict, diagnose, and fix problems before they occur is critical. And that’s where Tonic comes in.

The target market is Global 2000 companies, ISPs and ASPs – in fact, any organisation with a transactional or business-critical Web site. Tonic is designed to be of use throughout the life cycle of a Web application:

Development - Stress testing and capacity planning

QA – Assessing and validating application functionality and performance levels

Deployment – Performance validation in a live environment, regular monitoring and automated corrective action. In addition to ensuring the performance of your Web applications, WAM tools like Tonic can also be used to enforce Service Level Agreements (SLAs) with external hosting companies and ISPs.

Architecture

The Tonic platform is built around a modular, scalable engine, which combines service response, event management, data validation, and protocol implementation.

A lightweight HTTP server provides access to all Tonic services, allowing a Tonic administrator to use the platform from either a web browser or from the Java interface provided with Tonic.

Tonic Servers (which can be run as a stand-alone application or as a native NT service) also communicate with each other using the HTTP server, which allows them to communicate through a firewall without the need for opening non-standard ports. This is an important and useful feature given that Web-based applications can often span systems from several different companies (as in the case of external credit card validation and charging systems), forcing transactions to traverse several different firewalls.

Testing and monitoring tools therefore need to be capable of following transactions along the same route that they would take from a user’s perspective if they are to model an entire transaction as accurately as possible.

The Security Manager provides authentication services that restrict access to Tonic servers, currently via a simple user name and password mechanism, though full role-based authentication is planned for a future release.

The Job Manager controls all the transactions the Tonic server conducts with Web servers or other Tonic servers.


Figure 1 - Tonic architecture

The Protocol Engine provides built-in support for standard Internet protocols such as HTTP and HTTPS. However, due to the plug-in architecture, other protocols – such as wireless protocols like WAP, protocols for iDTV, and even customer-specific protocols – can be easily added as demand dictates.

The Event Engine watches for a change of state in the web application environment. Changes can be negative (such as a system down, HTTP error messages, or sluggish application response times) or positive (such as system up or excellent response times).

Corporate business policies can be used to determine what changes are noteworthy and should be registered with Tonic as actionable events, and what actions should be taken once those alerts are raised.

Tonic ships with the following commonly used event handlers:

  • ODBC
  • SMTP
  • SNMP (via NT Event Logging)
  • NT Event logging
  • ASCII File logging
  • SQL logging
  • Forwarding events to another Tonic Server
  • Forwarding events to a Tivoli Enterprise Console (TEC) server
  • System calls for custom event processing (it is possible to extend the capabilities of the event engine to create unique event handlers for environments by using scripts or executables written to the Tonic event handler specification.)
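The fan-out from a raised event to its attached handlers might look something like the following sketch. This is not Tonic's actual API (the registry, function names and handler signatures are all invented for illustration); it only shows the general shape of a pluggable event-handler scheme like the one listed above.

```javascript
// Hypothetical handler registry: maps a handler name to a function
// that processes an event object.
const handlers = {};

function registerHandler(name, fn) {
  handlers[name] = fn;
}

// Raising an event fans it out to every handler attached to it;
// missing handlers are reported rather than silently dropped.
function raiseEvent(event, attached) {
  return attached.map((name) =>
    name in handlers ? handlers[name](event) : `no handler: ${name}`);
}

// Register two toy handlers, echoing the list above (names invented).
registerHandler("smtp", (e) => `mail: ${e.message}`);
registerHandler("asciiFile", (e) => `log: ${e.message}`);
```

A job that detects a fault would then call something like `raiseEvent({ message: "HTTP 500 from web1" }, ["smtp", "asciiFile"])`, and each configured handler deals with the same event in its own way.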

Tonic uses a plug-in architecture to allow tight integration with external database systems, application servers, ESM tools, and any other applications with open APIs.

This integration can provide root-cause analysis for detected problems and automate corrective action. When Tonic detects an event condition, it can notify the applications so they can take corrective action.

Installation

Installation of the software is very straightforward (the product currently runs on Windows NT/2000 platforms only). The usual Windows installer provides “Typical” and “Custom” options – with “Typical” you get all the modules, and all the default settings (port 5463 and the Tonic Server running as an NT Service).

With “Custom” you get to choose which modules you want to install, which port is used for management communications, and whether the Tonic Server runs as a Service or an application.

Where a distributed environment is being used with multiple Tonic Servers, each installation must be performed individually on the host machine. There is no centralised distribution and installation mechanism within Tonic at this time.

Prior to installation, however, the customer will typically have experienced a pre-sales phase that includes a comprehensive proof of concept report, where Tonic personnel (or personnel from one of the company’s distributors) will examine the customer’s existing Web applications and attempt to model a subset of key transactions in order to demonstrate Tonic’s effectiveness.

The installation itself is also unlikely to be left to the customer alone. Tonic (or its distributor) will provide a systems engineer to install the product and provide assistance in creating the scripts necessary to provide complete monitoring, stress testing, root cause analysis and automated corrective action for the customer’s particular environment.

Not that the product is inherently difficult to use – it is not. Certainly it is easy enough to be up and running with basic monitoring tasks within minutes of installing the software. But without guidance, it would be very easy to overlook some of the more powerful and useful features offered by Tonic.

Configuration and Use

The Console

Once installed, one or more Tonic Servers can be configured and controlled from a single central Console. This Console can be either browser-based – where a connection is made directly to the lightweight HTTP module in the Tonic Server from any standard Web browser – or via the Java-based Console.

The Java Console provides a more “traditional” look and feel to the seasoned Windows user, and a far more intuitive interface. The first task is to log on – a Tonic administrator account is created during installation – with a user name and password. This is the only authentication mechanism at present, and there is no granularity of access rights based on user credentials. In other words, if you have a user account and password, you can do anything and everything.

The next release of the product promises the first phase of a role-based administration model, whereby individual users and/or groups can be assigned specific permissions. Some users, for example, will have full administrator rights, whilst others may be restricted to running jobs, but not allowed to create new jobs. A third group will only be allowed to view job results. This promises to be a very useful feature – at present, you need to be careful just who you allow to access the Tonic Console. Note also that in the current release, communications between the Console and the Tonic Server are not encrypted. This, too, should be corrected in a future release.

Figure 2 - The "Tonic Today" view in the Java-based Console

Once logged in, you are presented with a three-pane view. The left-hand pane contains the menu icons from where you can control Tonic Servers, Jobs, Events and Scripts. The upper right-hand pane is the action window, the contents of which change according to which menu icon is selected. An iconised toolbar runs along the bottom of this window, the contents of which are also unique to each main menu icon.

For example, when the Job Manager is selected the action window contains a list of Job definitions, and the toolbar contains icons to allow the creation, modification, deletion and running of those definitions. The Tonic Today menu icon, on the other hand, provides an overall summary of the state of the system, providing brief details of all Tonic Servers, Job definitions, Events, and Job instances defined to the system.

Documentation

The lower right-hand pane is the documentation window. This contains hyperlinks to the on-line documentation: Getting Started Guide, User Manual, and Technical Reference. The documentation is excellent, providing extensive information on all aspects of using and configuring the Tonic software. Unfortunately, it is only available on-line, at present, although hard copy versions are promised for a future release.

Figure 3 - Viewing the Tonic documentation on-line

Both the menu and documentation windows can be collapsed at the click of a button to provide more room for the action window.

Job Definition

Any number of Tonic Servers can be defined to a single Console, and once they have been defined, they can have Jobs allocated to them. For all Jobs, the Tonic Server essentially behaves just like a normal Web browser as far as the remote site is concerned. It generates HTTP/HTTPS requests according to the Job parameters, and responds just as a normal browser would to anything the remote site might request of it. This allows Tonic to follow a Web-based transaction from beginning to end, just as a normal external user would.

The simplest type of Job is the URL Test. This only requires a target URL to be defined along with a transaction method (GET, HEAD or POST). GET is the most common, acting as a simple site “ping” to ensure that a site, or a particular page, is available at the time the Job is run.

If POST is selected, the administrator needs to define the data to be posted to the URL on the Job form. If the target site is expecting a cookie to be available on the “user’s” local machine, then the cookie data can also be defined within the Tonic Job. As far as the remote Web server is concerned, it should appear as if it is talking to a genuine Web browser, rather than a test tool.
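The request such a job emits has to be indistinguishable from a real browser's. The sketch below builds one as a plain string so the idea is easy to see; the header names are standard HTTP, but the function, its parameters, and the example values are invented for illustration and have nothing to do with Tonic's internals.

```javascript
// Build a browser-like HTTP/1.1 request string: method, cookies and
// POST body are all configurable, as in a URL Test Job definition.
function buildRequest({ method, path, host, cookies = {}, body = "" }) {
  const lines = [
    `${method} ${path} HTTP/1.1`,
    `Host: ${host}`,
    `User-Agent: Mozilla/4.0 (compatible)`, // pose as a real browser
  ];
  // Serialise cookie data the way a browser would send it back.
  const cookie = Object.entries(cookies)
    .map(([k, v]) => `${k}=${v}`)
    .join("; ");
  if (cookie) lines.push(`Cookie: ${cookie}`);
  if (method === "POST") {
    lines.push("Content-Type: application/x-www-form-urlencoded");
    lines.push(`Content-Length: ${body.length}`);
  }
  return lines.join("\r\n") + "\r\n\r\n" + body;
}
```

A POST job with form data and a session cookie would then be something like `buildRequest({ method: "POST", path: "/order", host: "shop.example.com", cookies: { session: "abc123" }, body: "item=42" })` (hostname and fields invented).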

Once the Job has been given a name, it can be saved, at which point it becomes available for selection in the Job Definition window. Once there, it can be edited, deleted, or run. Clicking on the Run button assigns the Job to the Tonic Server which the administrator is currently logged on to – you can only be logged on to one Server at a time, and there is no obvious way to dynamically assign a Job to another Server without logging on to that Server. As it happens, it is possible to work around this restriction by using the Load Generation capability (which does allow you to specify which Servers will be used to run the Jobs), and only running one instance of the Job on one Server, but it would be nice to see this made more intuitive.


Figure 4 - Monitoring Job instances and viewing Job results

Once the Job is “detached” from the Console it comes under the control of the Tonic Server, at which point its progress can be monitored in the Job Instances window, which contains a list of Jobs that are running, completed and aborted. Unfortunately, Jobs are listed in this Window in order of the randomly-allocated Job ID, rather than in chronological order. This, along with an inability to easily clear batches of Jobs from this Window (Job results have to be deleted individually) renders the Job monitoring capabilities of Tonic very difficult to use indeed.

As well as running Jobs on demand, it is also possible to schedule them for regular unattended running using the built-in Scheduler. All that is required is to define the Job to be run, the Server on which to run it (this is another way around the Job-to-Server allocation restriction we mentioned earlier) and the interval (from 1 minute upwards), and the Job will run until the Schedule entry is deleted.

Once again, we had issues with the basic nature of the user interface here, in that there is no way to pause a Job temporarily (the Scheduler entry has to be deleted completely to prevent it from running, and recreated from scratch to restart it) and there is no means of specifying a start time and date.

To be honest, there are many interface-related niggles like the Scheduler and the poor Job instance screen (mentioned above) which we could bring out in this report. In general, Tonic gives the impression that all the development has been put into the functionality and the interface design has been a very low priority – the user interface in the Java Console in particular has something of an “unfinished” feel to it. Of course, this is certainly preferable to a slick interface with no substance behind it, so it would be unfair of us to pick too much at these areas, which certainly don’t impact on the usefulness of the product. We are also promised that most, if not all, of the problem areas we have identified should be fixed by the next release.

After the URL Test, the next type of Job is the Site Traversal. Give it a starting URL, a depth to traverse and a maximum load time for each page it finds, and Tonic will follow every single link from the starting page, and from every page it finds below, down to the specified depth. Even forms can be traversed by providing a set of default values in the Job definition dialogue.

Figure 5 - Adding a Site Traversal Job

Load times in excess of the maximum time specified are reported back, as are broken links or other errors, and all these results are returned in a simple form which can be accessed from the Job instance window once the job has completed (see Figure 4). The Site Traversal Job provides the ideal means to verify the integrity of a Web site, as well as providing feedback on performance characteristics. For example, schedule a Site Traversal to run at regular intervals, and the page load times can be compared for different times of the day, and different days of the week.
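The core of such a traversal is a depth-limited walk that flags slow pages and broken links as it goes. The sketch below runs over a mocked site supplied as plain data (page URLs, links and load times are invented), so it needs no network; it is an illustration of the technique, not Tonic's implementation.

```javascript
// Depth-limited site traversal: report pages slower than maxLoadMs
// and links that resolve to no page at all.
function traverse(site, startUrl, maxDepth, maxLoadMs) {
  const results = [];
  const seen = new Set(); // avoid revisiting pages via multiple links

  (function visit(url, depth) {
    if (depth > maxDepth || seen.has(url)) return;
    seen.add(url);
    const page = site[url];
    if (!page) {
      results.push({ url, error: "broken link" });
      return;
    }
    if (page.loadMs > maxLoadMs) {
      results.push({ url, error: "slow", loadMs: page.loadMs });
    }
    for (const link of page.links) visit(link, depth + 1);
  })(startUrl, 0);

  return results;
}

// Invented three-page site: "/a" is slow, "/b" links to a missing page.
const site = {
  "/":  { loadMs: 300,  links: ["/a", "/b"] },
  "/a": { loadMs: 9000, links: [] },
  "/b": { loadMs: 200,  links: ["/missing"] },
};
```

`traverse(site, "/", 2, 8000)` returns exactly the two faults: the slow `/a` and the broken `/missing` link, which is the kind of result list the Job instance window presents after a Site Traversal completes.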

All Job definition dialogues have one additional page which we have not discussed so far – Load Generation. Load generation creates multiple concurrent transactions from the same Job to simulate large numbers of “virtual users”. Tonic is thus capable of identifying precisely how scalable a Web application is, and providing insight into how performance degrades across all of the components of the application as user load increases.

Figure 6 - Creating a Load Generation Job

All that is required to start a Load Generation Job is to specify how many iterations of the Job you wish to run, which Tonic Servers should host the “virtual users”, and any delay between iterations. To more accurately simulate typical varied user response times, it is also possible to place random delays between individual operations in the Job scripts. Once the Job has been run, the appropriate number of threads are spawned on each of the specified Tonic Servers, and each thread runs a complete transaction.

Unfortunately, it is necessary to manually copy the appropriate Scripts to, and create the necessary Events on, each of the Tonic Servers specified as part of the Load Generation task. It would be nice if all the files and parameters necessary to run the Job could be packaged up at the central Console and shipped to each of the Servers automatically. This feature is promised for a future release.

As part of this test we did not have time to perform extensive scalability and performance testing on the Load Generation aspect of Tonic. However, as a general rule of thumb, we found that memory, rather than CPU power, was the restrictive factor when running on reasonably powerful Pentium III processors. We found that a typical Tonic “virtual user” consumed approximately 400-500KB of RAM, which is a much smaller footprint than most of Tonic’s competitors.

On our test systems, for example – 700MHz Pentium III processors with 320MB RAM – we noted the upper limit of Tonic processes to be approximately 600. Thus, with a rack of just five machines, it would be possible to simulate 3000 “real” users, making it much more feasible than ever before to perform extensive scalability testing on Web applications – even from geographically remote locations.
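The capacity arithmetic behind those figures can be checked with a one-line calculation. The reserved-for-OS headroom below is our own assumption (the review does not state one); the 400-500KB-per-user figure comes from the measurements above.

```javascript
// Estimate how many virtual users one machine can host when memory is
// the bottleneck. reservedMB (assumed, not from the review) is RAM
// kept back for the OS and the Tonic Server itself.
function virtualUserCapacity(ramMB, perUserKB, reservedMB = 64) {
  return Math.floor(((ramMB - reservedMB) * 1024) / perUserKB);
}

// 320MB machine at the midpoint of the measured 400-500KB per user:
const perMachine = virtualUserCapacity(320, 450);
const fleetOfFive = perMachine * 5;
```

With these assumptions `perMachine` comes out in the high 500s, consistent with the roughly 600 processes observed per test machine, and five machines give around the 3000 simulated users quoted above.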

Whilst one group of Tonic Servers is performing the Load Generation, another group can be running normal Site Traversal Jobs to provide an instant measurement of how a particular site – or a particular transaction – behaves when the system is placed under severe load. This has a number of benefits.

For example, by comparing performance levels under increasing degrees of load, system administrators can begin to recognise if a site is slowing down before it hits threshold levels that will launch performance alerts.

Tonic also enables administrators to determine application and Internet performance by combining local and remote Tonic servers to provide baseline and comparative data. By deploying multiple servers on different ISPs, it is possible to determine the performance real users see, based on which provider they use for their Internet connections. This can also help in enforcing Service Level Agreements (SLAs) with ISPs or external Web-hosting organisations.

Scripts

Although the basic URL Test and Site Traversal Jobs can have you up and running – and Tonic monitoring your Web site - very quickly, the real power of Tonic is unleashed via Scripts. Whilst a programmer will get the most out of the scripting environment, it is amazing just how far you can get with little or no programming knowledge, simply by using the AutoScript Recorder feature within Tonic.

Once started, the AutoScript Recorder prompts for a name for the Script, an initial URL from which to begin recording, and a Script type. There are two types of Script within Tonic: HTTPScript and TonicScript.

HTTPScript is a very simple scripting language that even non-programmers will not find too daunting. However, it does not offer the power and flexibility of TonicScript, and is thus only included in the later Tonic releases for backwards compatibility. There are no plans to develop HTTPScript further in future releases.

TonicScript is based on Microsoft’s JScript, which in turn is based on ECMAScript, the industry standard definition of JavaScript. JScript is the Microsoft implementation of the ECMA 262 language specification (ECMAScript Edition 3), and with only a few minor exceptions (to maintain backwards compatibility), JScript is a full implementation of the ECMA standard. TonicScript thus offers much more in the way of features and flexibility than HTTPScript, although it requires a little more programming effort to exploit its features to the full.

Having said that, it should be stressed that the AutoScript Recorder produces complete Scripts which are ready to run. Once the recording session is initiated, a browser window is opened at the initial URL as specified, and the user can then walk through a Web site or application just as they normally would.

Every keystroke and every click of the mouse is recorded – along with the delays introduced as a result of the user’s “thinking time”, which are a vital part of any accurate transaction simulation – and saved in the Script file.

The resulting file is then used as the basis for the third, and final, type of Tonic Job – the Custom Job. Whilst most of the Job parameters remain the same as for URL Tests and Site Traversals, the Custom Job actually runs a pre-prepared Script on a specific Tonic Server when executed. The result is that it is possible to record a number of transactions as Scripts, and have these replayed individually, or as part of a Load Generation sequence, under the control of Tonic.

Scripting – even when only using the AutoScript Recorder – allows the administrator or Web developer to model transactions completely from the end-user’s point of view. This provides the capability to emulate an actual user, including the most complex transactions involving cookie data or session IDs (even running across multiple distributed systems through multiple firewalls), thus ensuring that applications operate as they should under all conditions.

Even complex forms can be modelled using Tonic scripts, allowing every possible combination of data input to be tested, thus ensuring that an application will always behave as intended, and making it easier to test updates and modifications to Web applications on a level playing field.

Figure 7 - An example of TonicScript

It is also possible for site traversal jobs to generate a script automatically, thus providing a “baseline” traversal for comparisons against future runs. Within minutes of installing the product, therefore, it is possible to be running your own Scripts – a significant advantage over some of the competition.

Of course, if you are prepared to spend some time and effort on the programming side, TonicScript provides a rich environment where much more can be achieved than mere testing of static links.

For example, TonicScript provides the ability to handle sites that use dynamic URLs. Dynamic URLs come in many flavours, but examples are HTTP 302 redirections, time and session ID data coded into the URL string (not as URL parameters), and dynamically generated pages that actually change the complete URL underlying a piece of visible content on a regular basis. Scripts can now be created that can deal with these aspects of web applications, without alteration.

Sites with this kind of dynamic content have more points of failure than sites using static HTML. Tonic's content checking capabilities can ensure that content appears in the right place at the right time. Tonic validates three content types:

Static content – Tonic can traverse a site, capture baseline information about all of the pages and graphics, and alert the administrator to changes in content, page size, modification date, load time, and more.

Dynamic content – To verify content created and supplied on-the-fly by back office systems, Tonic can identify regions on a page and verify the content that appears in each region. For example, the product can ensure that a stock ticker feed appears as it should, or that a user's account number or bank balance is correct.

Invisible content – The product can monitor cookie values, session IDs, URL parameters, and hidden fields.
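Region-based checking of dynamic content, the second category above, boils down to extracting a named area of the page and comparing what appears there with what should appear. The sketch below does this with a regular expression; the function, the markup and the example values are all invented for illustration, not Tonic's own mechanism.

```javascript
// Verify the content of one region of a page: pull it out with a
// regex whose first capture group is the region body, then compare.
function checkRegion(html, regionRegex, expected) {
  const m = html.match(regionRegex);
  if (!m) return { ok: false, reason: "region missing" };
  const actual = m[1].trim();
  return actual === expected
    ? { ok: true }
    : { ok: false, reason: `got "${actual}"` };
}

// Invented example: an account-balance region on a banking page.
const page = '<html><div id="balance">1,234.56</div></html>';
const balanceRegion = /<div id="balance">([^<]*)<\/div>/;
```

A monitoring script would raise an event whenever `checkRegion(page, balanceRegion, expectedBalance).ok` is false, whether because the region is absent (the back-office feed is down) or because the value is wrong.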

Dynamically changing content within a web site can cause problems for many Web application testing tools on the market, but TonicScript can handle this by allowing navigation through the site using rules based on discovered content. For example, a web application may offer for sale a “product of the day” - this product changes daily, and so, therefore, does the underlying URL. In order to monitor this site and not have to re-write the monitoring script every day, TonicScript allows navigation by rules, such as:

Load the front page

Using TonicScript’s regular expression parsing, find the content that indicates the HTML page area representing “product of the day”

Using regular expressions again, pick out the provided URL attached to the content that has been isolated.

Load the URL into a TonicScript variable and use it to ‘follow’ the link and load the “product of the day” offer page.

Using similar techniques to those above, buy the product.

This means the script does not have to be altered for changes in the product of the day, or for changes in graphics or page layout.
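The regular-expression steps above can be sketched in plain ECMAScript, the language family TonicScript is built on. The page markup, the comment markers delimiting the region, and the function name are all invented for illustration; the point is that the rule survives daily changes to the URL, the product and the page layout.

```javascript
// Rule-based navigation: isolate the "product of the day" area of the
// front page, then extract whatever URL it happens to contain today.
function findOfferUrl(frontPage) {
  // Step 1: find the region by its surrounding markers (invented).
  const region = frontPage.match(
    /<!-- product-of-the-day -->([\s\S]*?)<!-- end -->/);
  if (!region) return null;
  // Step 2: pick out the link attached to that content.
  const link = region[1].match(/href="([^"]+)"/);
  return link ? link[1] : null;
}

// Two "days" of the same front page with different offers.
const monday  = 'x <!-- product-of-the-day --><a href="/offers/widget-7">Widget</a><!-- end --> y';
const tuesday = 'x <!-- product-of-the-day --><a href="/offers/gadget-3">Gadget</a><!-- end --> y';
```

The same unaltered rule extracts `/offers/widget-7` on Monday and `/offers/gadget-3` on Tuesday; a script built this way would then load the extracted URL and continue the purchase steps without ever being edited.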

Finally, Tonic’s Scripting capabilities allow it to “intelligently” diagnose faults and their sources, and hence invoke applicable corrective procedures. Since it is based on JScript, Tonic provides access to the operating environment in such areas as:

ODBC Compliant databases

File and Directory services

Office 2000 components

WBEM (allowing control of WMI resources such as drive mappings, permissions, system properties, and so on).

Other system components classified as Automation Objects.

Any VB Automation Objects created by developers or systems management staff.

Thus a Tonic Script can query numerous system resources when it discovers an error. For example, if it finds that dynamic page content does not load correctly, it could determine that a link to an ODBC database is the cause of the problem, and then query the database directly to ensure that the data is not corrupt at source. If the data is intact, it could force a refresh of the Web server’s cache to try to clear the problem without raising an alert to the administrator.
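That diagnostic pattern can be sketched as follows, with sqlite3 standing in for an ODBC connection; the table, column, and function names are invented for illustration:

```python
import sqlite3

def diagnose_content_failure(conn, product_id):
    """When dynamic page content fails to load, check the data source
    directly before raising an alert to the administrator."""
    row = conn.execute(
        "SELECT name, price FROM products WHERE id = ?", (product_id,)
    ).fetchone()
    if row is None or row[1] is None:
        # The data really is missing or corrupt at source: escalate.
        return "alert: data corrupt or missing at source"
    # Data is intact, so the fault is likely downstream: try a cache refresh.
    return "refresh web server cache"
```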

Event Handling

Each Job can have one or more Events associated with it. At the simplest level, an Event can be created in the Tonic Console, and then associated with a particular Job when that Job is defined. If any problems are found when the Job runs, an Event flag is raised at the Tonic Console. Once the Job has completed, the individual Event details can be examined in the Event Classes window of the Console.

Figure 8 - Viewing Events

Note that this is the least granular method of Event handling, in that whatever kind of error is discovered – lengthy page loads, incorrect CRC on a page, and so on – an instance of the same Event is raised. If a more granular method of Event handling is required, then individual Events can be raised for each type of error by modifying the Script.

Events Tonic can detect include:

1. Incorrect content, such as:

  • CRC values for fixed objects (for example, Java applets or GIFs or static HTML pages)
  • Size range for dynamic objects (for example, HTML pages with changing banners)
  • Cookie, URL parameter, and hidden HTML parameter values
  • HTML table values
  • Dynamically produced content within an HTML page
  • Meta and user defined tag values

2. Functional breakdowns, such as:

  • Broken links to data sources like back-end databases or partner web applications
  • HTTP error return codes
  • Problems with load-balancers incorrectly distributing traffic among servers
  • Failed logons to web applications

3. Scalability problems, such as:

  • Excessive page and object load times
  • Load balancers failing to distribute web application requests

This allows Tonic to detect any condition that falls outside configurable business policies, such as Web page objects that take too long to load, Web requests resulting in abnormal return codes, or Web content that was modified when it should not have been.
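The mapping from a failed policy check to one of the three Event categories above can be sketched as follows (the check fields and thresholds are hypothetical business-policy values, not Tonic defaults):

```python
def classify_event(check):
    """Map a check that failed against business policy into one of the
    three event categories Tonic distinguishes."""
    if check.get("http_status", 200) >= 400:
        return "functional breakdown"      # e.g. HTTP error return codes
    if check.get("load_time", 0.0) > check.get("max_load_time", 5.0):
        return "scalability problem"       # e.g. excessive load times
    if check.get("crc") != check.get("expected_crc"):
        return "incorrect content"         # e.g. CRC mismatch on a fixed object
    return "ok"
```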

The default action is simply to report the Event instance to the Tonic Console, but it is also possible to log events and forward them to other Tonic servers or an integrated third-party event system, such as the NT event log or an Enterprise Systems Management product. Where it is not desirable that Tonic Servers external to the corporate firewall initiate connections to internal Servers in order to report Events, Tonic can be configured to store Events on external Servers, and have one or more internal Servers poll and retrieve these on a regular basis. Note that in the current release, communications between Tonic Servers are not encrypted. Other notification options include SNMP, SMTP and running a shell program (which could provide access to SMS–based alerts, for example).
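The store-and-poll arrangement for Tonic Servers outside the corporate firewall can be sketched with a simple in-memory store; this is an illustration of the pattern only, not the Tonic wire format:

```python
import json

class ExternalEventStore:
    """Stands in for a Tonic Server outside the firewall: events are
    held locally until an internal server polls for them, so the
    external server never initiates a connection inwards."""
    def __init__(self):
        self._pending = []

    def record(self, event):
        self._pending.append(json.dumps(event))

    def drain(self):
        """Called by the internal server on its polling schedule;
        returns and clears all stored events."""
        events, self._pending = [json.loads(e) for e in self._pending], []
        return events
```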

When Events are coupled with automated corrective action via TonicScript, this can ensure minimal impact caused by unanticipated problems within complex Web applications.

Figure 9 - Browser-based Console interface

Note that all the functionality of the Java-based Tonic Console is also available through the browser-based HTML interface. This allows the administration of Tonic Servers from anywhere in the world over standard Internet connections, and without the requirement to install client-side software.

Reporting and Analysis

There is actually very little in the way of reporting and analysis capabilities built into the current release of Tonic.

On a Job-by-Job basis, Tonic returns data regarding the essential characteristics of Web pages accessed during the Job, including size, type, download time, modification date, return code, and changes since the last operation. This information is presented in a simple HTML table which can be accessed from the Job Instances screen in the Console (see Figure 4).

All of this Job history data is actually stored as an XML “database” which is streamed to disk during a Job run. At present, there is precious little on offer in terms of suitable XML report generators, so Tonic has provided a more conventional means of collecting data for subsequent reporting and analysis.

The ODBC plug-in engine provides the means to replicate most of the Job data into an ODBC-compliant database such as Access, SQL Server, and so on.
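The sort of query a third-party reporting tool might run against the replicated Job data can be sketched as follows (sqlite3 stands in for the ODBC database; the schema is invented for illustration):

```python
import sqlite3

def average_load_times(conn):
    """Produce the data behind a 'Performance Summary' style report:
    average load time per page across all Job instances."""
    return dict(conn.execute(
        "SELECT page, AVG(load_time) FROM job_results GROUP BY page"
    ).fetchall())
```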

Figure 10 - Viewing the Performance Summary report

If you want to use anything other than Access at present, you are on your own when it comes to reporting. If, however, you already have third-party reporting tools for ODBC databases, then you have a wealth of information stored within the Tonic database to play with.

For those without such facilities available to them, Tonic has included a simple – though unsupported – reporting tool which can query an Access database and produce eight different report types:

Performance Trend - line graph of average load times over time

Performance Summary - pie/bar chart of average load times

Performance Detail - bar graph of page load times per Job

Performance Table - text table of page load times per Job

Transaction Detail - bar graph of page load times per transaction

Link Integrity Trend - line graph of HTTP return codes over time

Link Integrity Summary - pie/bar chart of HTTP return codes

Link Integrity Table - text table of HTTP return codes/errors

These reports actually cover most of the basic needs of the average Tonic user, and each one can restrict data displayed based on date, time and Job instance.

This reporting capability (or something very similar in terms of functionality, at least, since it is unlikely to be based on Access) will be fully integrated into the Tonic Console in the next release of the product, making reporting and analysis much more straightforward “out of the box”.

Verdict

If the ultimate test of a Web site is customer loyalty, it is imperative to increase the chances that end users will return to it by eliminating problems before they find them. Failure is not an option. As a superset of several categories, WAM has elements that are an extension of traditional Web Testing and Enterprise Systems Management.

Tonic is a tool which falls neatly into the new WAM market space, offering real-time, continuous, cross-platform enterprise system diagnostic information to keep Web applications running.

The odd user interface “niggle” and rather basic built-in reporting capabilities notwithstanding, Tonic is a fine product. The ability to be up and running, scripting transactions and monitoring Web sites within minutes of installing the software speaks volumes for the ease of use of the product. The on-line documentation is excellent when required, but most of the time the interface is intuitive enough to simply get on with monitoring your Web applications.

One of Tonic's greatest strengths is its automated script recording capability, which allows even non-technical users to record typical transactions and then wrap those in a Job for Tonic to replay on demand, or via the Scheduler. The simplicity of creating and running these simulated transactions belies the power behind the system.

With just the bare minimum of programming, those same scripts can begin to check static content for excessive page load times, content errors and changes in size. The latter can also be considered a useful security feature, given that it is capable of warning the administrator when unauthorised changes have been made to Web pages or images.

Finally, those with a programming bent can release the real power of Tonic, as it not only performs content and integrity checking on static pages, but also on dynamically changing content.

Complex systems can introduce a multitude of strange errors, and TonicScript programs are capable of performing root cause analysis to determine the exact source of the error, followed by automated corrective action in an attempt to eliminate the problem without involving the administrator.

All of this makes for an extremely powerful system that any organisation running business-critical Web applications should consider.

Contact: Tonic Software Inc.
Phone: +44 (0) 1256 698048
Web: http://www.tonic.com


Copyright © 1991-2003 The NSS Group Ltd.
All rights reserved.
