
A short tour through UXMon



The methodology for determining whether an IT system is actually usable from the perspective of the person operating it is conventionally called end user experience monitoring. Usability in this sense is generally defined by the system's availability and performance: in other words, the ability to carry out a given task correctly and within an adequate timeframe. These methods reflect actual usability by collecting data from the system users themselves, far away from the server rooms.
This approach seems unusual at first and may take many an IT manager out of their comfort zone: after all, didn't we always strive to document every aspect of our IT systems by collecting thousands of measurements to prove that they were working correctly? In the spirit of Laplace, the goal was always to collect ever more measurement data at ever shorter intervals to describe the system in ever greater detail. Unfortunately, even a quick look at the components involved relegates this determinism – long since obsolete in physics as well – to insignificance. Unmoved by the diversity of available measurement values, end users simply rate their IT system as either "It doesn't work!" or "It's slow" – verdicts that map exactly onto the availability and performance aspects observed in User Experience Monitoring. So the concept of measuring and verifying precisely what the user or customer wants and understands isn't far off the mark.
In terms of technical implementation, there are basically two different points of departure: The more apparent method attempts – put simply – to "look over the shoulder" of the user and to send measurement data about the observed transactions to a central evaluation server. Depending on the configuration and manufacturer, this can take place permanently (in the case of monitoring) or, in the event of an error, for analysis purposes only. To counteract the feeling that employees are being placed under scrutiny, these solutions provide a number of sophisticated anonymization and security settings. If this bitter aftertaste persists or you're worried about this kind of measuring structure gaining widespread acceptance in your company, there is an alternative approach, which also provides decisive technical advantages.


The approach adopted by SAP User Experience Monitoring (UXMon, formerly EEM) dispenses with human users as a source of data, relying instead on a network of artificial helpers who carry out transactions on site in the respective regions and report the availability and performance of the IT systems used to SAP Solution Manager.
These UXMon helpers – known as "robots" – are installed on inexpensive desktop computers and operate in the system landscape like genuine employees or customers. They open portal pages, check shopping baskets, search databases, and complete SAPGUI forms, because the scripts they execute were recorded from precisely these activities, which are otherwise reserved for human users. On the system side, these UXMon robot activities are therefore completely unobtrusive and difficult to distinguish from those of normal users. They are processed on an equal footing and are therefore a representative indicator.
As the name "UXMon robots" suggests, it is the advantages provided by automatic load generation that more than make up for the not inconsiderable initial outlay for creating the scripts. Just like industrial robots, the UXMon robots go about their work resolutely, tirelessly, flawlessly, and without interruption.
Instead of simply waiting for a problem to arise, they proactively monitor all transactions without exception, even if no real user is currently using the function in that particular region – whether due to the local time difference or because the transaction is used only in infrequent but extremely urgent cases. So in an emergency, you save valuable time and, depending on the error, the application is up and running again before your colleagues abroad have even started their breakfast.
One of the robot strategy's principal benefits is that script executions are reproducible and therefore comparable, either locally or between different regions, using various robots as the data source. This makes it relatively easy to assess whether a problem is localized, i.e. observable in only one location, or whether business processes are disrupted globally for all UXMon robots, which points instead to an issue with the central IT system. In other words, you monitor the behavior of one script across numerous UXMon robots.
The opposite approach – examining one particular robot and the different scripts executed there – makes it easy to distinguish between a generic network problem and more specific causes. If several scripts whose content is entirely unrelated are affected by malfunctions simultaneously, you don't have to be a mind-reader to suspect that the cause of that branch's problem is generic and technical in nature.
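The two comparison directions described above can be sketched as a small decision over a matrix of script results. This is not SAP code – the function, robot names, and script names are all invented for illustration:

```python
# Sketch of the comparison logic: results indexed by (robot, script).
def classify_failures(results):
    """results: dict mapping (robot, script) -> True if the execution succeeded."""
    robots = {robot for robot, _ in results}
    scripts = {script for _, script in results}

    # A script that fails on every robot points to the central IT system.
    central_issues = sorted(
        script for script in scripts
        if all(not results[(robot, script)] for robot in robots
               if (robot, script) in results)
    )
    # A robot on which every script fails points to a local/network problem.
    local_issues = sorted(
        robot for robot in robots
        if all(not results[(robot, script)] for script in scripts
               if (robot, script) in results)
    )
    return central_issues, local_issues

executions = {
    ("Berlin", "Login"): True,  ("Berlin", "CheckCart"): True,
    ("Tokyo",  "Login"): False, ("Tokyo",  "CheckCart"): False,
}
central, local = classify_failures(executions)
# Every script fails in Tokyo while Berlin is fine -> a local problem in Tokyo.
```

Here every script fails in Tokyo while Berlin succeeds, so the pattern suggests a local problem at that location rather than a fault in the central system.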
So as you can see, with its broadly based measuring setup and simple status comparisons, SAP User Experience Monitoring provides important indications of the cause of a problem without requiring a detailed understanding of the script contents.
Naturally, UXMon Monitoring also shows the individual steps of each script and detailed error notifications for each step. Consequently, not only the implemented business process as a whole but also its individual steps can be evaluated and analyzed against the criteria of availability and performance. Put simply, a script represents a business transaction and each script step a single user interaction, such as a button being pressed or details being entered on a request screen by the user or robot.
Knowing at which step a script failed, and with which error status, considerably narrows the range of possible causes. If the system returns "Wrong Password or Username" in the first step, the error search will probably involve something more obvious than an intensive check of lock tables or heap dumps.
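The way a step-level status narrows the search can be shown with a toy example. The step names and statuses below are invented for illustration and are not the actual UXMon status values:

```python
# Illustrative only: the first failing step of a script run narrows the search.
steps = [
    ("Open logon page", "OK"),
    ("Log on",          "Wrong Password or Username"),
    ("Open cart",       "NOT EXECUTED"),
]

# The earliest non-OK step is the natural starting point of the analysis.
first_failure = next((name, status) for name, status in steps if status != "OK")
# ("Log on", "Wrong Password or Username") -> check credentials or user locks
# first, rather than lock tables or heap dumps.
```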


Unfortunately, the expectations of an ideal business transaction are diametrically opposed for productive human users and for User Experience Monitoring. From a UXMon perspective, optimal evaluations call for the most linear approach possible, taken in the smallest possible steps. The better you can split subtasks into separate steps, the easier it is to pinpoint the component responsible for an incident.
A user-friendly application, on the other hand, is designed to relieve the operator of complexity, carrying out as many activities as possible in the background – which is exactly what the user wants. This digital equivalent of Aladdin's lamp is a nightmare for pure User Experience Monitoring: just one click of the mouse and everything happens in the background, as if by magic. Merely pushing a button triggers a frenzy of activity behind the scenes as dozens of RFC connections to a variety of databases and systems are used, data is consolidated, operations wait for work processes, and lock entries are written and deleted.
From the user's point of view – and, unfortunately, also from the UXMon robot's perspective – all you get is the monotonous rotating hourglass until the result is displayed. Or not. So the measured values that the UXMon robot can send to SAP Solution Manager in this situation are of little help in narrowing the problem down.
At this point, it may comfort you to know that the measurement still provides a localized, objective quantification of the bottleneck, and that this information is delivered proactively, even before a real user has had to report the problem. But a really satisfactory solution must go a step further, look behind the curtain, and shed some light on the hidden procedures going on behind the scenes. For this, SAP User Experience Monitoring uses SAP Passport technology, which is also used by E2E Trace Analysis.
Every message sent by the UXMon robot to the IT system carries an "SAP Passport" attachment. The SAP Passport contains a unique ID number and specifies which information the receiving component should retain for analysis purposes while the actual request is being processed. If processing has to be continued in the background on another component, the SAP Passport is forwarded together with the request, and that system is likewise instructed to retain information about the processing.
To continue the metaphor: the robot still cannot look behind the curtain itself, but it can now slide a business card underneath it, requesting that a record be kept of all activities taking place behind the scenes and that everyone involved pass this request along. In UXMon Monitoring, script executions and all their assigned steps are reported by a UXMon robot and evaluated in terms of availability and performance as before. In a downstream process, SAP Solution Manager addresses the IT system's involved components and requests the information stored there under the relevant SAP Passport ID number. The UXMon Monitoring UI now shows the individual script steps in greater detail and lists, for example, the involved SAP systems, RFC times, client times, and HTTP times. By the time you switch to E2E Trace Analysis at the latest, the curtain is fully raised and, depending on the configured level of detail, a bottleneck can be accurately analyzed, for example using ABAP, Wily, or SQL traces.
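The mechanism behind the passport is the well-known correlation-ID pattern. The real SAP Passport is a binary structure transported with each request; the sketch below is a generic illustration with simplified, invented field names, not the SAP format:

```python
# Generic illustration of the correlation-ID pattern behind SAP Passport.
import uuid

def new_passport(trace_level="low"):
    return {
        "root_context_id": uuid.uuid4().hex,  # unique per script step
        "trace_level": trace_level,           # how much each component records
    }

trace_store = []  # stands in for the records each component retains locally

def process(component, passport, downstream=()):
    # Each component records its processing under the shared ID...
    trace_store.append({"component": component,
                        "id": passport["root_context_id"]})
    # ...and forwards the unchanged passport to the components it calls.
    for next_component in downstream:
        process(next_component, passport)

passport = new_passport()
process("portal", passport, downstream=["ERP backend", "database"])
# All three records now share one ID, so a central system could later
# collect them and correlate them with the robot's script step.
```

Because every component files its record under the same ID, the records can later be collected centrally and matched to the originating script step, which is exactly what the downstream collection by SAP Solution Manager achieves.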
So that you don't have to choose between minimal tracing overhead on the IT system and the option of in-depth analysis, SAP User Experience Monitoring offers three ways of increasing the level of detail when required. You can manually trigger a one-off script execution with a freely configured level of detail whenever you like, without permanently changing the regular execution configuration. Alternatively, you can increase the level of detail for a fixed period of time, after which the script returns to its normal settings automatically. The third option automatically repeats a measurement immediately with a freely configurable level of detail whenever a measured runtime exceeds its threshold. The latter two methods are particularly well suited to getting to the bottom of phenomena that occur sporadically, can seldom be reproduced on demand, and generally generate frustration among users and support personnel.
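The third option – re-measuring with full tracing as soon as a threshold is breached – can be sketched as follows. The function name, trace levels, and threshold are all invented for illustration, not the SAP API:

```python
# Sketch: if a measured runtime exceeds its threshold, repeat the
# measurement immediately with a higher, freely configurable trace level.

THRESHOLD_MS = 2000

def measure(run_script, threshold_ms=THRESHOLD_MS):
    """run_script(trace_level) -> runtime in milliseconds."""
    runtime = run_script("low")           # regular, low-overhead execution
    if runtime > threshold_ms:
        runtime = run_script("high")      # immediate re-run with detailed trace
    return runtime
```

The benefit is that the expensive, detailed trace is only paid for when a slow execution has just been observed, which is exactly the moment a sporadic problem is most likely to be reproducible.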
Who hasn't experienced this: "Murphy's Law" ensures that users first have to convince the support staff that there is a problem before it is taken seriously. Having laboriously convinced the person responsible that there really is a problem, that individual then repeatedly executes the transaction successfully while similar complaints gradually accumulate. With SAP User Experience Monitoring, the UXMon robot takes care of the arduous detective work and reports its findings to UXMon Monitoring.


The UXMon Monitoring UI is the central analysis platform for SAP User Experience Monitoring data. Based on SAP UI5, this application is accessible via the System and Application Monitoring group in the SAP Solution Manager Fiori Launchpad. It mainly comprises pages that query a selected group of script executions in certain locations from Solution Manager over a specific period of time. Or to put it more simply: You specify which scripts you're interested in, which robots are to focus on them, and how far into the past you wish to look.
In the next step, you decide how the requested data should be displayed by choosing one or more views for the page. You can choose from a number of options, including tree structures, pie charts, curve diagrams, and tile views. Depending on the task at hand, these are of varying suitability for providing an overview or comparing different executions in detail. Filtering capabilities let you either have all views on a page share the same set of data or let each view display its own.
If the requested data is no longer in the local data buffer, the UXMon Monitoring UI requests consolidated data from Business Warehouse. So taking a quick look at the previous year's data doesn't force you to switch to an unfamiliar BW Web template environment. There are countless options and workflows on the UXMon Monitoring UI, but most are intuitive to learn.


Globally monitoring the usability of business transactions in real time and, if required, being able to carry out a detailed technical analysis is a fascinating opportunity. But if you're beginning to feel like you'll be sitting in the Kennedy Space Center control room, you're in for a disappointment: although UXMon Monitoring is aesthetically appealing and functional, it won't constantly be the focus of attention, and it will only rarely be your initial point of access to User Experience Monitoring. Generally, the UXMon robots will be left to do their work in the background while you concentrate on more pressing matters.
Here, you can totally rely on the alert infrastructure of Solution Manager. If an UXMon robot measures unexpectedly long response times or unearths functional deficits, an alert event is created in the Alert Inbox, the person responsible is informed by text message or e-mail, and, depending on the configuration, an incident can also be generated. A direct link to UXMon Monitoring then enables you to investigate this immediately. A sophisticated algorithm also prevents a problem that is already reported but still needs time to solve from attracting too much attention with a constant stream of alert events and text messages, thus obscuring other events.


The final data sink in SAP Solution Manager is Business Warehouse, and UXMon data from the local data buffer is also eventually stored there in consolidated form. You can access this overview data using UXMon Monitoring. The criteria used to assess whether measured response times meet expectations or exceed critical thresholds are transferred directly from the alerting configuration, so reporting corresponds exactly with the data displayed in the Monitoring application.
However, if BW is used for verification purposes within the context of a service level agreement (SLA), this threshold philosophy can quickly lead to a conflict of interests. On the one hand, threshold values and their associated alerts are important indicators for quickly identifying anomalies and, if possible, rectifying them before they reach a really critical level. On the other hand, the service level agreement precisely defines the threshold values, reducing the advance warning time to almost zero. Here, SAP User Experience Monitoring helps with an independent set of threshold values for service level agreements. This enables adequate advance warning via appropriate alerts while at the same time providing accurate reports for the service level agreement. In this context, "accurate" means that an agreement has either been upheld or broken, so only one threshold value, clearly defining this limit, needs to be specified.
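The split between an internal early-warning threshold and the single contractual SLA threshold can be illustrated with made-up numbers:

```python
# Illustrative numbers only: a lower alert threshold preserves advance warning,
# while the single SLA threshold defines "kept" vs. "broken" for reporting.
ALERT_MS = 1500   # internal early warning: alerts fire here
SLA_MS   = 3000   # contractual limit agreed with the customer

runtimes_ms = [900, 1200, 1800, 3500, 1000]

alerts_raised = sum(r > ALERT_MS for r in runtimes_ms)                        # 2
sla_kept_pct = 100 * sum(r <= SLA_MS for r in runtimes_ms) / len(runtimes_ms)
# 4 of 5 executions within the SLA limit -> 80.0 % compliance
```

The run of 1800 ms already raises an internal alert well before the contract is at risk, yet still counts as "kept" in the SLA report – which is precisely the point of maintaining the two threshold sets independently.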
To display the collected SLA data, a dedicated view is available in UXMon Monitoring. For each of the "availability" and "performance" categories, you can immediately see the percentage of cases in which the defined thresholds were met. The green and red color coding shows whether these percentages meet the SLA's specifications, and the previous month's data is also displayed graphically. Detailed knowledge of the underlying threshold values is no longer necessary for interpretation. Reporting in the field of service level agreements is aimed primarily at external and internal customers of an IT solution, who take only limited interest in the technical background as long as usability can be guaranteed and documented.

This page is part of the Application Operations Wiki. Note that Application Operations itself is a use case of SAP Solution Manager.
