
Enhancing Application Manageability Part 1


When we build enterprise applications based on either a J2EE-compatible application server or an XML Web services platform, we tend to leave the manageability of our application as a problem for the base platform to solve. We therefore may not do any work in our business logic to enhance the manageability of our application in production. This is not a good policy and can lead to an application that is more difficult to manage than it should be when it is deployed.

J2EE application servers, a foundation for XML/SOAP Web services applications, give us tools for monitoring our software servers and, to an extent, our applications. However, software developers need to exploit these facilities to give more application-level manageability, as opposed to platform-level manageability. The difference between these two can be compared to knowing the number of items in my shopping cart (an object at the application or business logic level) as opposed to what is happening in the BEA WebLogic Server (WLS) message queues that are supporting it.

The absence of manageability can cause serious problems. The application software can become an unmanageable entity once it is deployed because the operations staff doesn't understand it. Lack of attention to manageability raises the cost of supporting and maintaining the software product significantly. Software developers can be engaged in solving post-deployment management problems that could have been handled in steps taken earlier in the software life cycle. Those steps will be described here.

This article describes a range of choices for the developer, including:

  • Writing messages from the application code to human-readable files, and having those messages appropriately processed by a management tool
  • Encapsulating some of these logs in management templates
  • Using prebuilt Java Management Extensions (JMX)-based managed bean objects and tools that manipulate them
  • Building and using your own JMX managed bean objects for specific purposes

    The concept behind these JMX managed bean objects will be explained later.

    These steps can be viewed as incrementally increasing the levels of manageability in an application. They may be followed separately or all together for one application. The article positions the various technologies to help the developer integrate them and build a better application. The good news is that much of the hard work has already been taken care of by the application servers and by management tools. However, there is still some work to be done in the application.

    As developers, we sometimes believe we have done enough work towards making an application manageable if we have produced messages in a text log file for our application, and no more than that. This can be sufficient for the management of very simple application deployments. Should two copies, or instances, of the application be required to run in parallel, then the log file method will need more design attention.

    Application manageability is not limited, however, to knowing simply whether the application is alive or not, or whether the application has produced an error message in its log file. For certain applications we may well need to know how many user transactions the application processed over the past hour or two, how long on average each transaction took, and what percentage of them failed to complete. Sophisticated applications have queues of requests, some of which are outstanding and some are being processed. The system manager in such cases needs to know the condition of each queue, so that he or she can respond to problem situations appropriately. Should a queue of incoming requests be causing delays to the end user, for example, the administrator may wish to allocate some of those requests to another identical process, thus avoiding a problem.

    We need a framing definition of what the term "manageability" means in order to judge how much of it to build into any one application. This "level of manageability" determines what technologies we use and how much effort we expend on this aspect of the application design.

    Definition of Manageability
    We define the term "manageability" as the ability to exercise administrative and supervisory actions on a component, and to receive the information relevant to such actions.

    Manageability is further broken down into three main pieces:
    1.   Monitoring: The ability to capture runtime and historical events from a particular component, for reporting and notification.
    2.   Tracking: The ability to observe aspects of a single unit of work or thread of execution across multiple components (e.g., tracking messages from senders to receivers).
    3.   Control: The ability to alter the runtime behavior of a managed component (e.g., changing the logging level of an application).

    Manageability covers the activity of ensuring that the application is alive and functioning normally, as well as checking that an application's performance is as expected. When a problem occurs, manageability tools enable the troubleshooting engineer to find the problem. These aspects of manageability are used to assess the various technologies.

    Further, manageability covers the monitoring, tracking, and control of more than one instance of the same software. For example, applications built on J2EE application servers are frequently replicated on several computers for load balancing and fault tolerance reasons. Operators need to be able to see and control the levels of activity of each one of these instances at all times. They can then reroute requests or take other actions to allay problems and maintain service level agreements.

    Key Design Decisions in Manageability
    The decisions the developer must make in applying manageability functionality to his or her software are:

  • What level of information is most useful to the operator of the software?
  • What actions can be taken on the software as a result of this information?
  • What level of detail (fine-grained access to individual objects, or coarser-grained access to subsystems, modules, or otherwise) is best for application manageability?
  • Is the management software to be allowed to interact synchronously with the application code or must all actions be taken "after the fact"?

    Approaches to Building Manageability into Applications
    This section describes a set of technology approaches to building manageability into an application. They are presented in ascending order of "work to do" on the part of the developer. They are not mutually exclusive and in many applications the right level of manageability is achieved through a combination of several of these techniques. Most application developers will recognize the first option, writing messages to a log file, as something they would normally consider doing with any application. This means that legacy applications that produce log files will be easy to arm with this level of manageability, which fits in the "monitoring" category above.

    Application Log Files
    Many applications are designed to produce a set of textual, human-readable error messages in their log files or to the screen using standard programming I/O mechanisms. These log files are either viewed by an operations person for identification of issues, or scanned by a separate process that takes critical messages and displays them in some form on a management console. Management infrastructures such as HP's OpenView Operations (OVO), with its message sourcing and processing functions, allow for this type of capability and also allow sophisticated filtering of those messages.

    Direct Logging from the Application to the Management Console
    Each application takes its own approach to solving this problem. The "opcmsg" command in the HP OpenView management suite, for example, allows the direct logging of messages to OpenView management consoles; similar C and Java APIs offer programmatic access to the console. Programmers choose whether to build calls to this logging API into their program based on how urgently a message must reach the console without further filtering. Labeling the text messages in a log file with severity levels or categories such as "informational", "warning", and "critical" helps a management tool interpret those messages. These labels are essential for operators who don't know the internals of the software: the management tools will pick out "critical" messages from a potentially large set and highlight them on the display device.
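    As a sketch of this style of instrumentation, the following example writes severity-labeled, human-readable messages to a log file using the standard java.util.logging package (the logger name, file name, and messages are hypothetical); a tool such as OVO could then encapsulate the resulting file as a message source:

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class OrderServiceLogging {
    private static final Logger LOG = Logger.getLogger("com.example.orders");

    public static void main(String[] args) throws IOException {
        // Append human-readable messages to a file that a management
        // tool (or an OVO message template) can later scan and filter.
        FileHandler handler = new FileHandler("orders.log", true);
        handler.setFormatter(new SimpleFormatter());
        LOG.addHandler(handler);

        LOG.log(Level.INFO, "Order 1001 accepted");            // informational
        LOG.log(Level.WARNING, "Order queue depth above 100"); // warning
        LOG.log(Level.SEVERE, "Payment gateway unreachable");  // critical
        handler.close();
    }
}
```

    The java.util.logging levels map naturally onto the "informational", "warning", and "critical" categories that a management template would filter on.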

    Indirect Logging from the Application
    However, the application programmer does not have to embed calls to the console logging APIs, mentioned earlier, in the application. The application developer can concentrate on logging messages to a file or other persistent storage and later use the management tools to selectively display those messages.

    In the OVO toolkit are utilities for encapsulating application log files (and other sources of data, such as binary files and SNMP traps) as message sources. Further, there are screens for setting up "templates" or patterns for processing these messages at a centrally located management server, and then distributing these templates to the managed nodes so that all instances of an application can be monitored in this way. Such processing patterns may define that certain message types be excluded from appearing at management consoles, for example. The processing template may restructure the format of the messages, or may aggregate a set of messages into one, to avoid overloading the operator with too many messages. The choices for the template designer are many and are fully documented in the OVO administrator's guide. All the developer or IT operations person needs to do is:
    1.   Identify the file or other source of messages (such as SNMP or the event log in Windows).
    2.   Choose an existing message template from the supplied set, or create a new one.
    3.   Attach a template for message processing to the message source.
    4.   Distribute this message template to the locations (machines, application sites) where it is needed.

    The developer can choose the monitoring options for such log files and make decisions on factors such as:

  • The interval at which the log files should be read
  • Whether to read from the beginning of the file or from the last scanned position
  • Whether to close the file after each access, among other factors
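    The snippet below is a minimal sketch of how a monitor might honor those options (it is not OVO's actual implementation): it resumes reading from the last scanned position and closes the file after each access, so the application remains free to rotate its logs.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

/** Sketch of a log-file monitor: poll at an interval, resume from the
 *  last scanned position, close the file between reads. */
public class LogFileMonitor {
    private final String path;
    private long lastPosition = 0; // resume point between polls

    public LogFileMonitor(String path) { this.path = path; }

    /** Reads and returns any text appended since the previous poll. */
    public String poll() throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            if (file.length() < lastPosition) {
                lastPosition = 0; // file truncated or rotated: start over
            }
            file.seek(lastPosition);
            byte[] buf = new byte[(int) (file.length() - lastPosition)];
            file.readFully(buf);
            lastPosition = file.length();
            return new String(buf, StandardCharsets.UTF_8);
        } // closing the file here is the "close after each access" choice
    }
}
```

    A caller would invoke poll() on a timer whose period corresponds to the read interval chosen above.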

    Issues in Log File Encapsulation and Message Template Processing
    Filtering messages: The message logging approach can easily become subject to the problem of producing too many messages to give the operator a meaningful picture of what is happening with the software. Careful attention to labeling each message as "Informational", "Warning", "Critical", and possibly other categories can help with this. Developers should decide with the operators whether they want to suppress all noncritical messages.

    Filtering of the messages to highlight those that are of interest in certain situations can be achieved. These filters can then feed messages into a subsequent notification command or piece of logic. Tools exist in the management platform that can take these messages and demand that the operator respond in some fashion to the message.

    Correlation: Message correlation is another task that requires the user or software designer to work harder with log files. Either the end user/operator has to correlate by hand the messages that apply to one situation, or a special piece of functionality has to be written to perform the correlation. The OpenView Operations templates take care of this issue for the developer.

    Further details on log file handling and message processing can be found in the OpenView Operations Concepts Guide (OVO-Concepts) and in the OpenView Operations Administrators Guide (OVO-Admin).

    Log Files Summary
    This message file encapsulation approach is recommended as a minimal starting level for building manageability into an application. It is a necessary, but not sufficient, condition for adequate application manageability. With this approach, we have achieved part of the "monitoring" aspect of manageability described in the earlier definition section. That is, we can visualize at least those parts of the application that the developer takes care to tell us about, but we cannot exercise control over it as yet. We tackle the "control" subject in the next section.

    The Java Management Extensions Standard
    The Java Management Extensions (JMX) have become the accepted industry standard for managing Java applications. Within the Java Community Process (JCP), JMX is specified by a Java Specification Request (JSR) named JSR3. A separate specification, JSR77, which is part of the J2EE 1.4 specification, describes the "model" of objects that each application server must expose through JMX. This version of the J2EE specification, 1.4, was still in preparation at the time of this writing. Further detail on these specification requests can be found at JSR3 and JSR77, respectively. The JMX specification provides a standard way to build the manageability aspect of Java applications.

    The JMX environment is composed of three levels of software:

    • Instrumentation
    • Agent
    • Distributed services
    These levels are shown in Figure 1 (the distributed services layer is the topmost one).

    At the Instrumentation level, there are managed bean objects, or MBeans. These are Java objects that conform to one of the following:

  • A simple style of object construction in the Java world called JavaBeans (for standard MBeans)
  • A JMX standard called "DynamicMBean", for more flexible management
  • The JMX "Model MBeans", an extension of Dynamic MBeans that provides more generic templates for management instrumentation

    Conforming to the JavaBeans format is straightforward. It implies that an object is serializable and has a public no-argument constructor. It is common practice to implement "getter" and "setter" methods for each of the object's properties or data members (class variables) that should be visible to external management tools. Creating MBeans in this style means that, once they are registered with the JMX server, called the MBean server, they can be reached for monitoring and control purposes by the tools at the distributed services layer at the top of the diagram.
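    A minimal sketch of an MBean in this JavaBeans style, using the shopping-cart example from earlier (the class and attribute names are hypothetical; in a real deployment both types would be public):

```java
import java.io.Serializable;

// The management interface: by JMX convention it is named after the
// implementation class with the "MBean" suffix.
interface ShoppingCartMBean {
    int getItemCount();        // exposed as read-only attribute "ItemCount"
    boolean isCheckedOut();    // exposed as read-only attribute "CheckedOut"
}

// A hypothetical business class, here acting as its own standard MBean:
// serializable, with a no-argument constructor and getters, JavaBeans style.
class ShoppingCart implements ShoppingCartMBean, Serializable {
    private int itemCount;
    private boolean checkedOut;

    public ShoppingCart() { }

    public void addItem() { itemCount++; }
    public void checkOut() { checkedOut = true; }

    public int getItemCount() { return itemCount; }
    public boolean isCheckedOut() { return checkedOut; }
}
```

    The getters alone make ItemCount and CheckedOut visible to management tools; omitting setters keeps the attributes read-only.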

    JMX requires the use of an MBean server shown at the Agent Level in Figure 1. MBeans are required to register their presence with an MBean server in order to be noticed by the management framework. The MBean server handles the management messages that are flowing to and from objects that have been previously registered as MBeans. Certain properties of the object that conform to the MBean interface can then be viewed by management tools and, in some cases, the behavior of the object can be changed using the functionality of management consoles. This is done through the connectors and other adapters that allow a management console to extract data from a JMX server, not through direct manipulation of the MBeans themselves.
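    The registration step can be sketched against the JDK's built-in platform MBean server (the object name and the Counter MBean are hypothetical; inside WebLogic you would obtain the server's own MBeanServer instead):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class RegisterExample {
    // Standard MBean: the management interface must be public and named
    // after the implementation class with the "MBean" suffix.
    public interface CounterMBean {
        int getCount();
        void reset();
    }

    public static class Counter implements CounterMBean {
        private int count;
        public void increment() { count++; }
        public int getCount() { return count; }
        public void reset() { count = 0; }
    }

    public static void main(String[] args) throws Exception {
        // The platform MBean server ships with the JDK; an application
        // server such as WebLogic exposes its own MBean server instead.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        Counter counter = new Counter();
        ObjectName name = new ObjectName("com.example:type=Counter,id=1");
        server.registerMBean(counter, name);

        counter.increment();
        // A console reads the attribute through the server, not the object:
        System.out.println(server.getAttribute(name, "Count"));
    }
}
```

    Once registered under its ObjectName, the Counter is reachable by any tool connected to that MBean server, such as a JMX console, without the tool holding a direct reference to the object.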

    Key Issues with MBeans for the JMX Developer
    The MBean mechanism is powerful in the sense that it provides an industry-standard method for "wrapping" a business object with another MBean object, the latter being dedicated to manageability.

    Trade-offs in Using the MBean Interfaces Directly
    The specification for JMX does not determine whether a business object may itself be an MBean. This is possible, since the MBean hierarchy is made up of interfaces. The disadvantage of this approach is that the business object then contains both business logic and manageability logic, making it more complex. The benefit is that the reference between the business object and its peer MBean object can be eliminated when the two are combined into one. This will be discussed later.

    The developer has a choice of making their business object conform to the appropriate JMX-style interfaces or building separate JMX bean objects that are specifically for the management of their business objects. Why would this separation be interesting? Well, both techniques have their advantages.

    Using the inheritance method, where business objects implement their JMX management interface directly, every such business object instance must be registered with the MBean server - and there could be thousands of them. With a separate manageability object, the one conforming to the JMX interfaces, it alone registers with the MBean server, keeping the number of registered entries small. This single management object is then responsible for managing a group of business objects; there may even be a need for only one management object for all instances of a business class. This is a trade-off, and there is no right answer for all applications. The developer must make a design decision at this point.

    Communication and Linkage
    The JMX specification does not specify how the communication is achieved between the business object and the MBean. This is entirely under the application developer's control.

    Key decisions therefore must be made by the developer on questions such as

  • Are all or just some of the attributes (member variables) of the business object exposed to management tools?
  • Are these attributes read-only, or also writable?
  • At what frequency will updates be carried to the business object from the MBean and vice-versa?
  • Will there be an MBean for a set of business objects or for each business object?

    The application developer has full control over the amount of communication between these business object and MBean pairs, and can choose a one-way reference from one to the other or a bidirectional reference between them.
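    One possible linkage (the class names here are hypothetical) is sketched below: the business object pushes monitoring data to its MBean, while the MBean keeps a back-reference so that a writable attribute can alter the business object's runtime behavior.

```java
// The MBean interface exposes one readable attribute (monitoring)
// and one writable attribute (control).
interface WorkerManagerMBean {
    int getProcessedCount();
    void setPaused(boolean paused);
}

class WorkerManager implements WorkerManagerMBean {
    private final Worker worker;              // back-reference for control
    private volatile int processedCount;      // pushed from the business object

    WorkerManager(Worker worker) { this.worker = worker; }

    void onProcessed(int count) { this.processedCount = count; }

    public int getProcessedCount() { return processedCount; }
    public void setPaused(boolean paused) { worker.setPaused(paused); }
}

// Hypothetical business class holding the forward reference to its MBean.
class Worker {
    private final WorkerManager manager = new WorkerManager(this);
    private volatile boolean paused;
    private int processed;

    public WorkerManager getManager() { return manager; }
    public void setPaused(boolean paused) { this.paused = paused; }

    public void process() {
        if (paused) return;                 // control exercised via the MBean
        processed++;
        manager.onProcessed(processed);     // monitoring data pushed out
    }
}
```

    How often onProcessed fires, per request here, but possibly batched, is exactly the update-frequency decision listed above.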

    In this first of a two-part article, I started from the position that the manageability of the application determines its acceptance in production. I looked at a definition of application manageability and at some options for adding it into your Java/J2EE application, from the very simple message logging to the more comprehensive JMX approach. These approaches should be seen as complementary rather than competing with each other. In the second part of this article I'll look at the management tools and how they help us build more manageability into our applications, whether through applying existing templates or through writing code.


    References

  • BEA JMX: http://edocs.bea.com/wls/docs70/javadocs/index.html
  • JSR3: www.jcp.org/en/jsr/detail?id=3
  • JSR77: www.jcp.org/en/jsr/detail?id=77
  • OVO-Metrics: HP OpenView Operations - Metrics Guide
  • OVO-Admin: HP OpenView Operations - Administrators Guide
  • OVO-Concepts: HP OpenView Operations - Concepts Guide
  • OVTA: HP OpenView Transaction Analyzer: www.openview.hp.com/products/transaction_analyzer/index.asp
  • Poole: HP OpenView Architectures for Managing Network Based Services
    About the Author

    Justin Murray is a technical consultant in the application development resources organization at HP. He has worked for HP in various consulting and teaching roles, consulting at a technical level on customer projects involving Java, J2EE, and performance management, and specializing in application performance tuning for HP-UX. He has published several technical papers on these subjects. He can be reached at [email protected]
