NTSim-Exchange and NetIQ AppManager Integration - A Perfect Marriage?

An NSS Group White Paper

by Steve Broadhead

Table of Contents

Introduction: The Role of Application Management and Capacity Planning Management
Capacity Planning Tools: Realtime v Simulation?
Making The Case For a Combined, Best of Breed Solution for Application Management
NetIQ AppManager - An Overview
NTSim-Exchange - An Overview
Summary

INTRODUCTION: The Role of Application Management and Capacity Planning Management

Management: funny how it’s a word closely associated with “network” but not necessarily with “applications”, despite the average network being full of them. Network management has become an accepted, fundamental element of any company’s strategic computing plans, but when it comes to applications it’s a different story. When the mainframe was king, performance management and capacity planning were de rigueur, both for hardware and applications, but the networked, PC-based alternative has largely been ignored when it comes to real application management and planning. Yet NT server-based networks have grown out of all recognition over the years, in terms of size and complexity, and many larger organisations now have a number of distributed, mission-critical applications running on Microsoft NT and, increasingly, Windows 2000 platforms.

Managing NT has proved to be a nightmare at the best of times, especially when servers are geographically dispersed. So how can a network administrator be expected to handle all those distributed servers layered with complex applications such as Exchange Server, SQL Server, IIS and so on without some kind of assistance? Enter the application manager, such as NetIQ’s AppManager featured here – a tool, or set of tools, which provides constant monitoring of server-based applications and delivers useful data on just how those applications are performing. But this is only part of the story. Few would deny that the real art of network management is in forward planning – ensuring that, regardless of the “network component” in question, there is sufficient capacity to cope with user demands. Nowhere is this more evident than in the areas of email and messaging in general. Take Microsoft’s Exchange Server, for example. What started as an unlikely alternative to Lotus Notes and even Novell’s GroupWise has acquired a huge share of the LAN-based messaging server market. Its OS platforms, Windows NT and – shortly – Windows 2000, are not exactly renowned for being entirely trouble-free when it comes to scalability. All of which adds up to a greater than ever need to plan ahead carefully if disaster isn’t to lie in wait.

What we’re talking about here is capacity planning – the process of determining which resources are needed to satisfy projected future requirements. While this is certainly applicable to networking and communications in general, the huge increase in the use of messaging, and Exchange in particular, makes it a top priority to plan email server sites in great detail. A key part of employing capacity planning is bringing planning of the messaging infrastructure right down into the application planning phase, rather than waiting until the service has been up and running – or not – for some time. It has been suggested that this can take the form of both stress/load testing in a prototype environment and full-scale simulation or analytical modelling. Clearly, however, this could be a very expensive exercise unless the modelling tools in question are efficient, accurate and easy to use.
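
To make the idea concrete, here is a back-of-the-envelope projection of the kind of calculation capacity planning involves – a minimal sketch in which every figure (user count, growth rate, per-server capacity) is purely illustrative rather than drawn from any product or real deployment:

```python
# Illustrative capacity projection: when does projected messaging load
# outgrow the current server estate? All figures are invented.

MESSAGES_PER_USER_PER_DAY = 40      # assumed average per-user workload
SERVER_CAPACITY_PER_DAY = 250_000   # assumed messages/day one server sustains
MONTHLY_GROWTH = 0.08               # assumed 8% user growth per month

users = 5_000.0
servers = 3

for month in range(1, 25):          # 24-month planning horizon
    users *= 1 + MONTHLY_GROWTH
    daily_load = users * MESSAGES_PER_USER_PER_DAY
    if daily_load > servers * SERVER_CAPACITY_PER_DAY:
        print(f"Month {month}: {daily_load:,.0f} msgs/day exceeds the "
              f"{servers}-server capacity - plan the upgrade now.")
        break
else:
    print("Current estate should cope for the full 24-month horizon.")
```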

Which brings us to another question – should capacity planning tools be “real-time” or simulation based?

Capacity Planning Tools: Realtime v Simulation?

You only have to mention the word “simulation” to some IT professionals and they immediately switch off.

The belief among many is that there's far too much hype in words like "virtual" and "simulated" and that only "actual" and "real-time", when applied to networking, delivers anything that is actually meaningful. So when it comes to offline network simulation and modelling there are still a great number of sceptics out there. And not without good reason if we look at the genre from an historical perspective. Network modelling tools have been around in a minor way for years but have never become popularised in the same way that real-time tools such as network monitor and analyser products have.

But there are reasons for this that need no longer apply. One of the problems historically with simulators is that they simply haven't allowed the user to model their own network accurately enough. However, we found recently when evaluating NTSim Inc.’s NTSim-Exchange product, the joint subject of this follow-on white paper, that such limitations can be addressed. The product does this by providing both the flexibility and the attention to detail to model every element of the application server down to the nth degree, and by allowing the user either to create a model from scratch or to import actual data captures from their own network of servers. The latter is obviously the closest you can get to real-time network capacity planning and evaluation without actually loading up your own network, and means that effectively you get something approaching the best of both worlds.

This is a significant point, in that real-time capacity planning and modelling tools do exist and attempt to offer approximately the same kind of features and net information as a simulation tool. Indeed, we actually use such tools within our network test labs at the NSS Group. But a key characteristic of such tools is that, while providing genuinely real-world measurements, they also place a huge load on the network – exactly what we want in test lab conditions, but hardly something to burden an already busy live network with.

So with real-time tools the only answer is to test out of office hours – that is, on the few networks nowadays which don’t already use up all 24 hours of the day, whether on regular “user-based” work, automated backup, file updates, data consolidation, server and database synchronisation, overnight ERP runs or any number of other “round the clock” applications that now exist and, in some cases, always have. The whole essence of computers is that, unlike humans, they don’t need eight hours’ sleep every night. They increasingly get used all day and all night – true 24x7x365 operations. For this very reason the concept of using “slack” network time to carry out real-time testing is now largely inapplicable. One alternative is to build a dedicated test network, but this is equally infeasible for most companies and still doesn’t equate to the actual network you are looking to plan capacity for.

Which leaves a better alternative – an accurate simulator.

Making The Case For a Combined, Best of Breed Solution for Application Management

So what have we established to date in this white paper?

First, it is clear that application management and application server capacity planning are both crucial strategic elements of overall systems management, if your network is to perform at anything like optimal levels. It is also clear that simulation tools for capacity planning, if developed accurately, are the only realistic option on today’s 24x7x365 networks, at least in the mid if not long term. But what is equally clear is what simulators cannot do and that is to let you know what is happening on your applications servers day in, day out, or even right now. For this, then, you need real-time application management tools, such as NetIQ’s AppManager.

The question is: do you really need both? Well the simple answer is yes. In terms of general network management, for years now it has been accepted that you need a number of different tools to work in combination, in order to provide total management. There are network device mapping and fault detection – typically SNMP based – tools, real-time monitoring and analysis tools, traffic shaping tools, what-if network planning tools… the list is almost endless. With application management however, it really is a case of just two requirements – real-time monitoring and offline planning. But you do need both, otherwise you’re only ever seeing part of the whole picture.

It’s all very well having a detailed breakdown of application server performance and usage at any one moment in time, but what it doesn’t tell you is how those parameters will change when you add another 1,000 users in 12 months’ time, for example. Likewise, there is little point in planning ahead – years, months or even days – if you don’t actually know the state and status of your application servers right now. It’s like a football club choosing to increase the capacity of its ground at great cost, without knowing whether the ground is ever full at current capacity, or is ever likely to be in the future.

Given this line of argument, then, is it fair to say that either tool is effectively useless without the other? The answer here is no. An application manager at least lets you know of any immediate potential problems, as well as providing useful breakdowns of usage patterns which can be used to make a – not very – educated guess at future performance issues. Likewise, a good simulator needs accurate background data in the first place in order to provide a baseline appraisal, from which it can then make accurate predictions about the future. But an application manager does not in any way offer the same flexibility of modelling that a simulator can, and – likewise – a baseline estimate built into a simulator in no way replaces pulling in real-time data from the application server itself.

Is the answer, then, to find a product suite which does both of these jobs – and maybe more? Well, in years of evaluating products in the NSS labs, the sad truth is that we have yet to find a “jack of all trades” product that makes a really good job of any one aspect of its specified capabilities. Such products exist in all areas of network management, and yet we find that a combination of products from different vendors is the only realistic option if you want a real solution.

The ideal scenario, then, is to combine two best-of-breed products – one from each of the application management and capacity planning camps – in order to maximise your application management and capacity planning strategies. But life isn’t necessarily that simple. What if the two chosen products don’t actually integrate very well, or even at all? And what if the user interface and mode of working with those tools is so radically different in each case that training costs prove enormous and compatibility issues impact on usability now and into the future?

Clearly there has to be some common ground between the two chosen products, and such is the case here with AppManager and NTSim-Exchange. From an integration aspect, it could not be much simpler, given that the two products are from different vendors. NTSim-Exchange directly supports Exchange Server data gathered using NetIQ’s AppManager monitoring software. NTSim Inc. provides an AppManager script with its NTSim-Exchange package which you add to the NetIQ installation (a case of simply copying a file into the appropriate folder) and which can then be called up to capture traffic specifically for NTSim. The resulting data file can then be imported directly into NTSim-Exchange using a wizard-based importer, as shown below.


Figure 1 – NetIQ Data Import Tool
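
For the curious, that one-off install step can be sketched in a few lines. Every path and file name below is hypothetical, chosen purely for illustration – the NTSim-Exchange documentation gives the real locations for the supplied script:

```python
# Sketch of the install step described above: copying the vendor-supplied
# AppManager script into the appropriate NetIQ folder. Paths and file
# names are hypothetical.
import shutil
from pathlib import Path

NTSIM_SCRIPT = Path(r"D:\NTSim-Exchange\NTSimExchange.qml")          # hypothetical
APPMANAGER_DIR = Path(r"C:\Program Files\NetIQ\AppManager\scripts")  # hypothetical

def install_knowledge_script(script: Path, target: Path) -> Path:
    """Copy the vendor-supplied Knowledge Script into AppManager's folder."""
    if not script.exists():
        raise FileNotFoundError(f"Knowledge Script not found: {script}")
    target.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(script, target))   # returns the installed path

if __name__ == "__main__":
    print(f"Installed: {install_knowledge_script(NTSIM_SCRIPT, APPMANAGER_DIR)}")
```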

While the current integration, from a data perspective, is impressive, it is set to get better with the release of an extension to the NetIQ range in the form of AppAnalyzer for Microsoft Exchange Server. This is a tool designed to help messaging administrators and IT managers understand and monitor usage of Exchange Server in more detail than before. It displays comprehensive data about Exchange Servers located across an enterprise, including message traffic analysis, message delivery times, historical and current mailbox and public folder storage data, chargeback information, and message content filtering. It is based around a data-mining concept, so a manager can drill down into the system to document enterprise-wide message delivery performance, understand system usage trends and plan for system enhancements. The latter point is key, since it is the broader range of data being generated which in turn enables NTSim to import a more detailed view of the network and therefore produce a more accurate simulation than before. So improvements in the application server manager are reflected in the capacity-planning simulator!

From a user perspective too, the integration is tight. Both products use Microsoft’s MMC management console as their interface, so the basic mode of operation is very similar, though NetIQ’s product is the more complex of the two to master (see the reviews that follow). This means there is no great learning curve required in order to master the two products: if you can use one, then you can find your way around the other. The shared interface platform also bodes well for future interoperability. Indeed, it’s not unreasonable to argue that NTSim could be fired up from within NetIQ’s console, as a simple addition to the menu options.

NetIQ AppManager – An Overview

NetIQ AppManager Suite is designed as a comprehensive solution for managing and monitoring the performance and availability of distributed NT and Windows 2000 systems, applications and hardware.

It allows system administrators to view all of their Windows NT and Windows 2000 servers and workstations from a single, central console and to monitor computer resources and application configuration, check for potential system problems, initiate responsive actions, and gather performance data for real-time and historical reporting.

In truth, AppManager is not a simple, out-of-the-box solution, since it consists of a number of different product options that can be used together or separately, depending on your requirements. Each product option in the Suite is specially focussed to perform specific management tasks for a particular application or resource. For instance, in the Windows NT/2000 category there are agents to cover the base operating system, Citrix WinFrame, Active Directory, Cluster Server, Network Load Balancing, and Terminal Server. There are also out-of-the-box monitoring scripts providing the ability to identify CPU bottlenecks and terminate runaway processes, check if key services are down and auto start them, track disk space usage, reboot downed servers, determine if running low on DHCP leases, and so on.

Here, however, we’re focusing on messaging. AppManager covers Lotus Domino and Microsoft Message Queue Server, as well as our absolute focus here – Exchange Server. For each it performs such tasks as monitoring e-mail connectivity and response time, reporting on e-mail traffic flow, identifying top e-mail senders and receivers, monitoring e-mail disk space usage, and restarting services or tasks that have failed.

Basically, AppManager uses a number of pre-defined rules – known as Knowledge Scripts – in order to perform one or more management tasks, such as monitoring CPU or disk usage, verifying connectivity between messaging hosts, or detecting whether or not a critical service is up and running. Depending on the task, Knowledge Scripts can collect performance data (such as CPU usage), monitor systems for simple or complex events (such as whether a device or service is down), and respond with one or more actions (such as paging or e-mailing the administrator, or automatically re-starting a failed service). New Knowledge Scripts can be created by the administrator if required, or created by third parties and added in, as is the case with the NTSim-Exchange integration. Script parameters can be quickly and easily changed to reflect new or changing requirements.
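
The collect/monitor/respond pattern is easy to picture in sketch form. Real Knowledge Scripts run inside AppManager’s own scripting environment, not Python; the threshold and stand-in functions below are invented purely to mirror the shape of such a rule:

```python
# A minimal sketch of the collect/monitor/respond pattern a Knowledge
# Script follows. The threshold value and both functions are stand-ins.
import random

CPU_THRESHOLD = 90.0  # percent - illustrative event condition

def collect_cpu_usage() -> float:
    """Stand-in for the agent's real data collection on the host."""
    return random.uniform(0.0, 100.0)

def respond(message: str) -> None:
    """Stand-in for an action: page/e-mail the admin, restart a service."""
    print(f"EVENT: {message}")

def run_job(samples: int = 10) -> list[float]:
    """One monitoring job: collect data, evaluate the rule, respond."""
    history = []
    for _ in range(samples):
        cpu = collect_cpu_usage()
        history.append(cpu)              # kept for historical reporting
        if cpu > CPU_THRESHOLD:          # simple event detection
            respond(f"CPU usage {cpu:.1f}% exceeds {CPU_THRESHOLD}%")
    return history

if __name__ == "__main__":
    run_job()
```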


Figure 2 – Checking in the NTSim-Exchange Knowledge Script

The AppManager multi-tiered architecture is designed to be scalable, in order to provide central monitoring and management for an entire organisation. At the lowest level are the Agents, comprising Windows NT/2000 services that perform the actual monitoring of the host systems, communicating their results to the Management Server. The Management Server manages the event-driven communication from the Agents and stores the results in the Repository, a SQL Server database where all AppManager data resides for alerting and reporting purposes. Finally, there is the AppManager Console, the graphical interface where the administrator can configure the monitoring services and view alerts and reports (via the AppManager Report Manager). Web versions of the Console and Management Server are also available.
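
The tiers and the direction of data flow can be caricatured in a few classes. All class and method names below are invented for illustration and bear no relation to any actual NetIQ API; the point is simply the Agent-to-Management-Server-to-Repository-to-Console pipeline:

```python
# Caricature of the tiered data flow: Agent -> Management Server ->
# Repository -> Console. Names are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Event:
    host: str
    detail: str

@dataclass
class Repository:
    """Stand-in for the SQL Server database holding all AppManager data."""
    events: list = field(default_factory=list)

class ManagementServer:
    """Receives event-driven communication from the Agents."""
    def __init__(self, repo: Repository) -> None:
        self.repo = repo

    def receive(self, event: Event) -> None:
        self.repo.events.append(event)   # stored for alerting and reporting

class Agent:
    """Runs on each monitored host and reports results upstream."""
    def __init__(self, host: str, server: ManagementServer) -> None:
        self.host, self.server = host, server

    def report(self, detail: str) -> None:
        self.server.receive(Event(self.host, detail))

class Console:
    """The administrator's view, reading alerts out of the Repository."""
    def __init__(self, repo: Repository) -> None:
        self.repo = repo

    def show_alerts(self) -> None:
        for e in self.repo.events:
            print(f"[{e.host}] {e.detail}")

repo = Repository()
server = ManagementServer(repo)
Agent("EXCH01", server).report("Information Store service restarted")
Console(repo).show_alerts()
```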

The Console presents what can, at first, appear to be an overly complicated multi-paned view to the administrator. After an initial network discovery process, during which all active Agents are located, it consists of a hierarchical Explorer-like display of managed objects, along with all the system resources and applications located on each of those objects. So it is possible to expand the entry for an individual machine and examine critical resources such as CPU activity and disk usage, as well as application-specific data such as Exchange log file usage. It is also possible to select a number of alternative application-oriented views which display all objects that are running Exchange Server, IIS, SQL Server, and so on, and individual resources can be arranged into logical groups for ease of administration.

Jobs are created by dragging and dropping Scripts onto the managed objects in the AppManager Console, where they can be scheduled for once-only or regular runs. One parent job can spawn any number of “child” processes, making it easy to change job parameters for hundreds of machines at a time and then apply them in a single operation. Network traffic is kept to a minimum by ensuring that only key exceptions and final results are reported back to the Management Server, rather than maintaining constant communication. Clicking on any managed resource will show the jobs currently running or scheduled to run in the future on that object, and results can be displayed in an easy-to-understand graph or report format.

While AppManager is necessarily complex – given that it is designed to monitor and manage complex network environments – once you have it configured and running it makes the day-to-day management of an NT/Windows 2000-based network relatively simple. What it still does not give within an Exchange environment is truly extensive web-based reporting and analysis, which is why NetIQ is now introducing a second product, AppAnalyzer for Microsoft Exchange Server, as explained earlier.

NTSim-Exchange – An Overview

An obvious – but still much asked – question when it comes to simulation tools is: what do they actually simulate? In the case of a tool like NTSim the answer is typically anything that impacts on server performance, service and application availability, and the capacity planning thereof. But there can be a problem with this kind of modelling. Traditional simulation tools use a general approach. They analyse the performance of a process, disk or processor, but cannot typically model what causes them to perform the way they do. To do this the user needs to understand the operation of the system as a whole. Additionally, the data produced by simulation engines is generally large in volume and complex in nature, meaning that a skilled capacity planning professional is still needed at the end of the day to accurately interpret the results. Self-defeating, or what?

So what NTSim does is to take a mainframe-derived engine – the company has decades of performance management and capacity planning modelling knowledge – and effectively put a template on top of it, in this case for Exchange Server, to do the interpretation automatically and run it on a standalone Windows 98/2000/NT machine. Hence we have the NTSim-Exchange product. As can easily be derived from this, the Exchange module is just the first variant from NTSim Inc. based on the generic NTSim engine. Others will follow soon, notably a module for Microsoft IIS web server software. Similarly, the modelling engine could be applied to other server-based applications such as unified messaging servers, ERP and CRM systems, and E-business and E-commerce hosts – basically anything which follows the format of a multi-server base.

NTSim itself highlights what it calls application targeting as having three main benefits. First, anyone who knows the application can make use of modelling techniques that would normally require extensive training in capacity planning. So the power of simulation becomes available to technical sales staff, application consultants and system administrators. Second, NTSim is able to construct models of a new system without the need for captured data. Providing the operator understands the expected workload, systems can be created from scratch, modelled and tested entirely in software. Third, a whole Exchange site can be processed, not just a specific server.

In its Exchange flavour, then, NTSim takes the traditional capacity planning questions and answers them in Exchange-specific form. Examples include: how many servers are required in order to prevent bottleneck issues reducing service quality? What is the expected life cycle of the current NT-based network, given the planned company growth rate? What is the likely effect of adding more users or changing the servers themselves? In a standard modelling environment the answers returned apply to the network in general and so still need a certain amount of “manual” interpretation. But with specific applications the calculations should need little in the way of interpretation – merely acknowledgement, and action taken as a result of the specific recommendations the simulation throws up. This makes for a far more efficient – and easier to use – method of modelling than the classic, all-encompassing model. It gives a huge speed advantage when anticipating future needs, and the ability to be proactive in making overall systems changes – the key element, let us reiterate, of managing a network.

There are three basic elements to NTSim-Exchange – Servers, User Groups and Scenarios. Within the Servers sub-node in MMC you can create the server hardware and software you wish to model. A full range of Compaq and Dell servers is currently pre-defined, but you can also define your own servers, component by component: processor type, number and speed; amount, speed and type of memory; disk subsystem (controller, disk types and RAID levels, as well as partition details); and NIC type. A wizard option provides a very easy way of creating new server entries.

Software-wise, little needs to be added: which version of Exchange is being used and mailbox/folder details are the key entries. However, if you have the data to hand from traffic captured using the PerfMon monitoring tool, it is possible to add in figures for the Information Store and Directory Cache Hit Rates. Additionally, thread counts can be entered into the system for the Information Store, Directory, RPC (which controls concurrent user requests) and Gateway processes. Where no figures are entered for these processes, NTSim uses a combination of default values and the available memory in the servers being modelled to create an estimate.
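
A rough sketch shows the kind of definition involved. The field names and the memory-based fallback heuristic below are invented for illustration – NTSim-Exchange’s actual model is far more detailed than this:

```python
# Sketch of a server definition of the kind described above. Field names
# and the fallback heuristic are invented; the real model is richer.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServerModel:
    name: str
    cpu_count: int
    cpu_mhz: int
    memory_mb: int
    raid_level: int
    exchange_version: str
    is_threads: Optional[int] = None   # Information Store thread count

    def effective_is_threads(self) -> int:
        """Use the entered figure if given; otherwise estimate from the
        available memory, echoing NTSim's defaults-plus-memory approach."""
        if self.is_threads is not None:
            return self.is_threads
        return max(8, self.memory_mb // 64)   # invented heuristic

server = ServerModel("EXCH01", cpu_count=2, cpu_mhz=550, memory_mb=1024,
                     raid_level=5, exchange_version="5.5")
print(server.effective_is_threads())   # -> 16, the memory-based estimate
```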

User Groups describe the traffic patterns of the different levels of user group you wish to base the model around. Five groups, ranging from “very light” to “heavy” users, are provided as standard, with usage attributes defined to match the expected usage levels. As well as these pre-set groups it is also possible to set up a user group yourself. Here you can be very specific, identifying, for example, how many messages each group member generates, the distribution of sizes of those messages and where they are sent, and a weighting which determines the chance of a message being received by a user within that group, rather than outside of it.
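
Such a custom group might look something like the sketch below – attribute names, figures and the size distribution are all illustrative assumptions, not NTSim-Exchange’s real schema:

```python
# Sketch of a custom user group: message rate, a size distribution and
# an in-group weighting. Attribute names and figures are illustrative.
import random
from dataclasses import dataclass

@dataclass
class UserGroup:
    name: str
    users: int
    messages_per_user_per_day: int
    size_dist_kb: list          # (size in KB, probability) pairs
    internal_weighting: float   # chance a message stays within the group

    def sample_message_size_kb(self) -> int:
        """Draw one message size from the group's distribution."""
        sizes, weights = zip(*self.size_dist_kb)
        return random.choices(sizes, weights)[0]

heavy = UserGroup("heavy", users=200, messages_per_user_per_day=60,
                  size_dist_kb=[(2, 0.6), (50, 0.3), (500, 0.1)],
                  internal_weighting=0.7)
print(heavy.sample_message_size_kb())
```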

The newly created groups can then be used instead of, or along with, the pre-defined groups NTSim provides as standard. In this way you are able to create a model consisting of many different levels of user types, ideal for “what-if” type analysis based on what happens if a certain type of user – for example, heavy – is increased in number, rather than another type – for example, light. Scenarios is used to create the model itself and control its execution. Here you can control what combination of servers and user groups enters the model and what kind of “what-if” changes you want to model, via a Wizard-based procedure or manually.

Each scenario also has three sub-nodes that combine to create the complete model scenario: Iteration Rules, Termination Rules and Results. Using these you effectively create a modelling sequence. The Iteration Rules tell the system what to change and when, while the Termination Rules tell it when to stop. Iteration Rules are used to perform a series of change actions on either or both of Server Components and User Group properties. Typically this would be to gauge what happens if the number or percentage of certain user types increases over a period of time, or alternatively what effect a gradual server hardware upgrade would have on end-user performance. Termination Rules apply a limit to the sequence of actions. If only Set rules are used in the Iteration Rules then there is no need for a Termination Rule to be defined, as the system will perform just a baseline and a single changed iteration. However, if Increment or Decrement rules are used then you need to set a “Terminate After n Iterations” rule, or a default of six iterations is used instead. A classic example here would be to set up the model to increase user email activity by 20% every month over a year-long period, ending up with 12 iterations of the model.
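
That classic example reduces to a simple loop. The sketch below caricatures the Iteration/Termination Rule sequence – shrinking the model to a single load figure is an illustrative simplification, not how NTSim-Exchange’s engine works internally:

```python
# The classic example above as a loop: an Increment rule of +20% per
# month plus a "Terminate After 12 Iterations" rule.
DEFAULT_ITERATIONS = 6   # used when no Termination Rule is defined

def run_scenario(baseline_load: float, increment: float = 0.20,
                 terminate_after: int = DEFAULT_ITERATIONS) -> list[float]:
    results = [baseline_load]           # iteration 0: the baseline run
    load = baseline_load
    for _ in range(terminate_after):    # Termination Rule: stop after n
        load *= 1 + increment           # Iteration Rule: +20% activity
        results.append(load)
    return results

for month, load in enumerate(run_scenario(100_000.0, terminate_after=12)):
    print(f"Iteration {month:2d}: {load:,.0f} messages/day")
```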

Of course, where NTSim-Exchange provides real flexibility is in its ability to use real data, pulled from the applications server via AppManager. You simply select the correct .EBS file generated from AppManager and add it into NTSim-Exchange via the import tool described earlier.


Figure 3 – Selecting the NTSim-Exchange EBS File to Import

Regardless of the data source, the execution of the scenarios is the same. This produces a series of results which, by default, differ depending on what NTSim-Exchange views as being of key importance. The results produced come in three flavours – Graphs, Reports and Scenarios. These, in turn, are grouped into different result categories under a specific heading, such as Perceived Response Times or Issues, and exactly what mix of reports and graphs you receive depends on the nature of the simulation being run and what the outcome was. For example, Issues only appear when NTSim recognises that problems have arisen within your scenario that require further investigation and at this point the system might suggest alternative scenarios to run.

What you get is not simply trend analysis of a given element of the model over time – which NTSim does extremely well – but also comparison of the behaviour of one component against another over a given period of time. So it is very easy to see where the problems lie, right down to the specific component within a given server. In this way you can build up a picture of what will happen if certain strategies are adopted, and build a definitive application server strategy accordingly.

The great thing here is that, since NTSim-Exchange runs off-line on a standalone PC, you can carry out as many “what-ifs” as you wish without impacting on the network at all. You can even take your modelling home with you in the evening on a laptop and let it run through the night.


Figure 4 – Graph Showing Utilisation over a 12-Month Period

Summary

Clearly both the NetIQ and NTSim Inc. products are very worthy in their own right. With AppAnalyzer to follow AppManager, NetIQ is establishing a heavy-duty base for gathering data from application servers that can be used in a number of ways. Meantime, NTSim Inc. is heading in a very interesting direction with NTSim-Exchange. Historically, too many modelling and simulation tools have been vague – both in operation and in results – but NTSim, by taking a specific application to model, is very direct and very usable as a result. It scores on flexibility too, in every sense of the word. You don’t have to be an expert in capacity planning to use it; equally, it is a tool for capacity planning experts with only a rudimentary knowledge of the systems being modelled, and here it really provides a unique selling point.

Put the two together and what you have is a truly comprehensive means of creating an applications server management strategy and applying it accurately in the long term. While NTSim Inc. is currently supporting only Exchange Server, in the future NTSim will become a platform for supporting many different application servers, so the fit between AppManager-AppAnalyzer and NTSim becomes even tighter. As such we highly recommend an evaluation of the two products in tandem if application server management is your requirement.


Copyright © 1991-2003 The NSS Group Ltd.
All rights reserved.
