Enhancing Application Manageability Part 1

When we build enterprise applications based on either a J2EE-compatible application server or an XML Web services platform, we tend to leave the manageability of our application as a problem for the base platform to solve. We therefore may not do any work in our business logic to enhance the manageability of our application in production. This is not a good policy and can lead to an application that is more difficult to manage than it should be when it is deployed.

J2EE application servers, a foundation for XML/SOAP Web services applications, give us tools for monitoring our software servers and, to an extent, our applications. However, software developers need to exploit these facilities to give more application-level manageability, as opposed to platform-level manageability. The difference between these two can be compared to knowing the number of items in my shopping cart (an object at the application or business logic level) as opposed to what is happening in the BEA WebLogic Server (WLS) message queues that are supporting it.

The absence of manageability can cause serious problems. The application software can become an unmanageable entity once it is deployed because the operations staff doesn't understand it. Lack of attention to manageability raises the cost of supporting and maintaining the software product significantly. Software developers can be engaged in solving post-deployment management problems that could have been handled in steps taken earlier in the software life cycle. Those steps will be described here.

This article describes a range of choices for the developer, including:

  • Writing messages from the application code to human-readable files, and having those messages appropriately processed by a management tool
  • Encapsulating some of these logs in management templates
  • Using prebuilt Java Management Extensions (JMX)-based managed bean objects and tools that manipulate them
  • Building and using your own JMX managed bean objects for specific purposes

    The concept behind these JMX managed bean objects will be explained later.

    These steps can be viewed as incrementally increasing the levels of manageability in an application. They may be followed separately or all together for one application. The article positions the various technologies to help the developer integrate them and build a better application. The good news is that much of the hard work has already been taken care of by the application servers and by management tools. However, there is still some work to be done in the application.

    Background
    As developers, we sometimes believe we have done enough work towards making an application manageable if we have produced messages in a text log file for our application, and no more than that. This can be sufficient for the management of very simple application deployments. Should two copies, or instances, of the application be required to run in parallel, then the log file method will need more design attention.

    Application manageability is not limited, however, to knowing simply whether the application is alive or not, or whether the application has produced an error message in its log file. For certain applications we may well need to know how many user transactions the application processed over the past hour or two, how long on average each transaction took, and what percentage of them failed to complete. Sophisticated applications have queues of requests, some of which are outstanding and some are being processed. The system manager in such cases needs to know the condition of each queue, so that he or she can respond to problem situations appropriately. Should a queue of incoming requests be causing delays to the end user, for example, the administrator may wish to allocate some of those requests to another identical process, thus avoiding a problem.

    We need a framing definition of what the term "manageability" means in order to judge how much of it to build into any one application. This "level of manageability" determines what technologies we use and how much effort we expend on this aspect of the application design.

    Definition of Manageability
    We define the term "manageability" as the ability to exercise administrative and supervisory actions and receive information that is relevant to such actions on a component.

    Manageability is further broken down into three main pieces:
    1.   Monitoring: The ability to capture runtime and historical events from a particular component, for reporting and notification.
    2.   Tracking: The ability to observe aspects of a single unit of work or thread of execution across multiple components (e.g., tracking messages from senders to receivers).
    3.   Control: The ability to alter the runtime behavior of a managed component (e.g., changing the logging level of an application).

    Manageability covers the activity of ensuring that the application is alive and functioning normally, as well as checking that an application's performance is as expected. When a problem occurs, manageability tools enable the troubleshooting engineer to find the problem. These aspects of manageability are used to assess the various technologies.

    Further, manageability covers the monitoring, tracking, and control of more than one instance of the same software. For example, applications built on J2EE application servers are frequently replicated on several computers for load balancing and fault tolerance reasons. Operators need to be able to see and control the levels of activity of each one of these instances at all times. They can then reroute requests or take other actions to allay problems and maintain service level agreements.

    Key Design Decisions in Manageability
    The decisions the developer must make in applying manageability functionality to his or her software are:

  • What level of information is most useful to the operator of the software?
  • What actions can be taken on the software as a result of this information?
  • What level of detail (fine-grained access to individual objects, or coarser-grained access to subsystems or modules) is best for application manageability?
  • Is the management software to be allowed to interact synchronously with the application code or must all actions be taken "after the fact"?

    Approaches to Building Manageability into Applications
    This section describes a set of technology approaches to building manageability into an application. They are presented in ascending order of "work to do" on the part of the developer. They are not mutually exclusive and in many applications the right level of manageability is achieved through a combination of several of these techniques. Most application developers will recognize the first option, writing messages to a log file, as something they would normally consider doing with any application. This means that legacy applications that produce log files will be easy to arm with this level of manageability, which fits in the "monitoring" category above.

    Application Log Files
    Many applications are designed to produce a set of textual, human-readable error messages in their log files or to the screen using standard programming I/O mechanisms. These log files are either viewed by an operations person for identification of issues, or scanned by a separate process that takes critical messages and displays them in some form on a management console. Management infrastructures such as HP's OpenView Operations (OVO), with its message sourcing and processing functions, allow for this type of capability and also allow sophisticated filtering of those messages.

    Direct Logging from the Application to the Management Console
    Each application takes its own approach to solving this problem. The "opcmsg" command in the HP OpenView management suite, for example, allows direct logging of messages to OpenView management consoles, and similar C and Java APIs provide programmatic access to the console. Programmers choose whether to build calls to this logging API into their program based on how urgently a given message must reach the console, without further filtering. Labeling the various text messages in a log file with severity levels or categories such as "informational", "warning", and "critical" helps a management tool interpret those messages. These labels are essential for operators who do not know the internals of the software. The management tools can then pick out the "critical" messages from a potentially large set and highlight them on the display device.
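    As a minimal sketch (not the opcmsg API itself, whose exact programmatic signature is HP-specific and therefore omitted here), the standard java.util.logging package can attach such severity levels to application messages so that a log-scanning management tool can filter on them. The logger name, log file name, and messages below are illustrative assumptions only.

    import java.util.logging.FileHandler;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;

    public class OrderService {
        // One logger per subsystem keeps messages attributable to a component.
        private static final Logger log = Logger.getLogger("myapp.orders");

        public static void main(String[] args) throws Exception {
            // Write human-readable records to a file that a management tool
            // (for example, an OVO log file template) can later scan and filter.
            FileHandler handler = new FileHandler("orders.log", true);
            handler.setFormatter(new SimpleFormatter());
            log.addHandler(handler);

            log.info("Order subsystem started");                    // "informational"
            log.warning("Order queue length above 100");            // "warning"
            log.severe("Cannot reach payment gateway: giving up");  // "critical"
        }
    }

    The mapping shown in the comments between java.util.logging levels and the "informational", "warning", and "critical" categories is one reasonable convention, not a requirement of any particular tool.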

    Indirect Logging from the Application
    However, the application programmer does not have to embed calls to the console logging APIs, mentioned earlier, in the application. The application developer can concentrate on logging messages to a file or other persistent storage and later use the management tools to selectively display those messages.

    In the OVO toolkit are utilities for encapsulating application log files (and other sources of data, such as binary files and SNMP traps) as message sources. Further, there are screens for setting up "templates" or patterns for processing these messages at a centrally located management server, and then distributing these templates to the managed nodes so that all instances of an application can be monitored in this way. Such processing patterns may define that certain message types be excluded from appearing at management consoles, for example. The processing template may restructure the format of the messages, or may aggregate a set of messages into one, to avoid overloading the operator with too many messages. The choices for the template designer are many and are fully documented in the OVO administrator's guide. All the developer or IT operations person needs to do is:
    1.   Identify the file or other source of messages (such as SNMP or the event log in Windows).
    2.   Choose an existing message template from the supplied set, or create a new one.
    3.   Attach a template for message processing to the message source.
    4.   Distribute this message template to the locations (machines, application sites) where it is needed.

    The developer can choose the monitoring options for such log files and make decisions on factors such as:

  • The interval at which the log files should be read
  • Whether to read from the beginning of the file or from the last scanned position
  • Whether to close the file after each access, among other factors
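    To make these choices concrete, the following is a minimal sketch of the kind of log file monitor that such options configure; it is illustrative only and not the actual OVO agent implementation. It polls at a fixed interval, reads only from the last scanned position, and closes the file after each access. The class name, file name, and the simple severity match are assumptions.

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class LogFileMonitor {
        private long lastPosition = 0;

        public void poll(String path) throws IOException {
            RandomAccessFile file = new RandomAccessFile(path, "r");
            try {
                if (file.length() < lastPosition) {
                    lastPosition = 0;                 // file was rotated or truncated
                }
                file.seek(lastPosition);              // resume from the last scanned position
                String line;
                while ((line = file.readLine()) != null) {
                    if (line.indexOf("SEVERE") >= 0 || line.indexOf("critical") >= 0) {
                        forwardToConsole(line);       // e.g., raise a console message
                    }
                }
                lastPosition = file.getFilePointer();
            } finally {
                file.close();                         // close the file after each access
            }
        }

        private void forwardToConsole(String message) {
            System.out.println("FORWARD: " + message);
        }

        public static void main(String[] args) throws Exception {
            LogFileMonitor monitor = new LogFileMonitor();
            while (true) {
                monitor.poll("orders.log");
                Thread.sleep(30 * 1000L);             // read interval: 30 seconds
            }
        }
    }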

    Issues in Log File Encapsulation and Message Template Processing
    Filtering messages: The message logging approach can easily produce too many messages for the operator to form a meaningful picture of what is happening with the software. Careful attention to labeling each message as "Informational", "Warning", "Critical", and possibly other categories can help with this. Developers should decide with the operators whether to suppress all noncritical messages.

    Filtering of the messages to highlight those that are of interest in certain situations can be achieved. These filters can then feed messages into a subsequent notification command or piece of logic. Tools in the management platform can also take these messages and require the operator to acknowledge or otherwise respond to them.

    Correlation: Message correlation is another task that requires the user or software designer to work harder with log files. Either the end user/operator has to correlate by hand the messages that apply to one situation, or a special piece of functionality has to be written to perform that correlation. The OpenView Operations templates take care of this issue for the developer.

    Further details on log file handling and message processing can be found in the OpenView Operations Concepts Guide (OVO-Concepts) and in the OpenView Operations Administrators Guide (OVO-Admin).

    Log Files Summary
    This message file encapsulation approach is recommended as a minimal starting level for building manageability into an application. It is a necessary, but not sufficient, condition for adequate application manageability. With this approach, we have achieved part of the "monitoring" aspect of manageability described in the earlier definition section. That is, we can visualize at least those parts of the application that the developer takes care to tell us about, but we cannot exercise control over it as yet. We tackle the "control" subject in the next section.

    The Java Management Extensions Standard
    The Java Management Extensions (JMX) have become the accepted industry standard for managing Java applications. The specification for JMX within the Java Community Process (JCP) is a Java Specification Request (JSR) named JSR3. A separate specification, JSR77, which is part of the J2EE 1.4 specification, describes the "model" of objects that each application server must expose through JMX. This version of the J2EE specification, 1.4, was still in preparation at the time of this writing. Further detail on these specification requests can be found at JSR3 and JSR77, respectively. The JMX specification provides for the construction of the manageability aspect of Java applications in a standard way.

    The JMX environment is composed of three levels of software:

  • Instrumentation
  • Agent
  • Distributed services

    These levels are shown in Figure 1 (the distributed services layer is the topmost one).

    At the Instrumentation level, there are managed bean objects, or MBeans. These are Java objects that conform to one of the following:

  • A simple style of object construction in the Java world called JavaBeans (for simple MBeans)
  • A JMX standard called "DynamicMBean", for more flexible management
  • The JMX "Model MBeans", an extension to Dynamic MBeans that provides more generic templates for management instrumentation

    Conforming to the JavaBeans format is straightforward. It implies that an object is serializable and has a no-argument constructor. It is common practice to implement "getter" and "setter" methods on all of that object's properties or data members (instance variables) that need to be visible to external management tools. Creating MBeans in this style means that once the MBeans are registered with the JMX server, called the MBean server, they can be reached for monitoring and control purposes by the tools at the distributed services layer at the top of the diagram.

    JMX requires the use of an MBean server shown at the Agent Level in Figure 1. MBeans are required to register their presence with an MBean server in order to be noticed by the management framework. The MBean server handles the management messages that are flowing to and from objects that have been previously registered as MBeans. Certain properties of the object that conform to the MBean interface can then be viewed by management tools and, in some cases, the behavior of the object can be changed using the functionality of management consoles. This is done through the connectors and other adapters that allow a management console to extract data from a JMX server, not through direct manipulation of the MBeans themselves.
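    As a minimal sketch of this mechanism, the listing below defines a standard MBean following the naming convention (a ShoppingCart class whose management interface is named ShoppingCartMBean) and registers it with an MBean server obtained from the JMX MBeanServerFactory. The shopping cart class, its attributes, and the "myStore" object name domain are illustrative assumptions; inside WebLogic Server the container's own MBean server would normally be used rather than one created by the application.

    // ShoppingCartMBean.java - the management interface; by JMX convention its
    // name is the managed class name with "MBean" appended.
    public interface ShoppingCartMBean {
        int getItemCount();      // read-only attribute "ItemCount"
        String getOwner();       // read-only attribute "Owner"
        void empty();            // a management operation
    }

    // ShoppingCart.java - the business object implementing its management interface.
    public class ShoppingCart implements ShoppingCartMBean {
        private final String owner;
        private int itemCount;

        public ShoppingCart(String owner) { this.owner = owner; }

        public void addItem()     { itemCount++; }
        public int getItemCount() { return itemCount; }
        public String getOwner()  { return owner; }
        public void empty()       { itemCount = 0; }
    }

    // CartRegistration.java - creates an MBean server and registers the cart.
    import javax.management.MBeanServer;
    import javax.management.MBeanServerFactory;
    import javax.management.ObjectName;

    public class CartRegistration {
        public static void main(String[] args) throws Exception {
            MBeanServer server = MBeanServerFactory.createMBeanServer();

            ShoppingCart cart = new ShoppingCart("jdoe");
            cart.addItem();

            // The object name is how management tools locate this MBean.
            ObjectName name = new ObjectName("myStore:type=ShoppingCart,owner=jdoe");
            server.registerMBean(cart, name);

            // A console would read the attribute remotely through a connector or
            // adapter; here we query the MBean server directly to show the round trip.
            System.out.println("Items: " + server.getAttribute(name, "ItemCount"));
        }
    }

    Once registered, the ItemCount and Owner attributes and the empty operation become visible to tools at the distributed services layer without any further code in the business logic.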

    Key Issues with MBeans for the JMX Developer
    The MBean mechanism is powerful in the sense that it provides an industry-standard method for "wrapping" a business object with another MBean object, the latter being dedicated to manageability.

    Trade-offs in Using the MBean Interfaces Directly
    The specification for JMX does not determine whether a business object may itself be an MBean. This is possible, since the MBean hierarchy is made up of interfaces. The disadvantage of this approach is that the business object now contains both business logic and manageability logic, making it more complex. The benefit is that the reference or pointer between the business object and a peer MBean object is eliminated, since the two are combined in one object. This will be discussed later.

    The developer has a choice of making their business object conform to the appropriate JMX-style interfaces or building separate JMX bean objects that are specifically for the management of their business objects. Why would this separation be interesting? Well, both techniques have their advantages.

    Using the interface method, where each business object implements its JMX management interface, every such business object instance must be registered with the MBean server - and there could be thousands of them. With a separate object for manageability - the one conforming to the JMX interfaces - only that object registers with the MBean server, reducing the number of registered entries. This single management object is then responsible for managing a group of business objects, and there may be a need for only one management object for all instances of a business class. This is a trade-off and there is no right answer for all applications. The developer must make a design decision at this point. This will be discussed later.
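    A minimal sketch of the second approach follows, reusing the hypothetical ShoppingCart class from the earlier listing. A single CartManager object is the only thing registered with the MBean server; the individual carts are ordinary business objects that are never registered themselves. The class names and the particular aggregations chosen are assumptions for illustration.

    // CartManagerMBean.java - the management interface for a whole group of carts.
    public interface CartManagerMBean {
        int getCartCount();        // how many carts are currently live
        int getTotalItemCount();   // items summed across all carts
        void emptyAllCarts();      // a control operation applied to the group
    }

    // CartManager.java - the only object registered with the MBean server; it
    // holds references to the business objects it manages.
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class CartManager implements CartManagerMBean {
        private final List carts = new ArrayList();    // ShoppingCart instances

        public void track(ShoppingCart cart) { carts.add(cart); }

        public int getCartCount() { return carts.size(); }

        public int getTotalItemCount() {
            int total = 0;
            for (Iterator i = carts.iterator(); i.hasNext();) {
                total += ((ShoppingCart) i.next()).getItemCount();
            }
            return total;
        }

        public void emptyAllCarts() {
            for (Iterator i = carts.iterator(); i.hasNext();) {
                ((ShoppingCart) i.next()).empty();
            }
        }

        public void register(MBeanServer server) throws Exception {
            // One registration covers the entire population of carts.
            server.registerMBean(this, new ObjectName("myStore:type=CartManager"));
        }
    }

    With this design the MBean server holds one entry regardless of how many carts exist, at the price of the manager having to keep its list of carts accurate.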

    Communication and Linkage
    The JMX specification does not specify how the communication is achieved between the business object and the MBean. This is entirely under the application developer's control.

    Key decisions therefore must be made by the developer on questions such as:

  • Are all or just some of the attributes (member variables) of the business object exposed to management tools?
  • Are these attributes read-only, or are they also writable?
  • At what frequency will updates be carried to the business object from the MBean and vice versa?
  • Will there be an MBean for a set of business objects or for each business object?

    The application developer has full control over the amount of communication going on between these business object and MBean pairs. The application developer can choose to have a reference from one to the other or a bidirectional reference between them.
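    As one illustrative option (an assumption for this article, not a prescribed pattern), the sketch below uses a one-way "push" linkage: the business object holds a reference to its MBean counterpart and pushes a new value whenever its own state changes, so management reads are answered from a cached copy without a synchronous call into the business logic. The class names are hypothetical, and registration of the MBean is omitted since it follows the earlier listings.

    // OrderQueueStatsMBean.java - the management view of one queue.
    public interface OrderQueueStatsMBean {
        int getQueueDepth();
    }

    // OrderQueueStats.java - the MBean; it simply caches the last value pushed to it.
    public class OrderQueueStats implements OrderQueueStatsMBean {
        private volatile int queueDepth;                       // cached for management reads
        public int getQueueDepth() { return queueDepth; }
        public void update(int depth) { queueDepth = depth; }  // called by the business object
    }

    // OrderQueue.java - the business object; it holds the reference to its MBean.
    public class OrderQueue {
        private final OrderQueueStats stats;
        private final java.util.LinkedList pending = new java.util.LinkedList();

        public OrderQueue(OrderQueueStats stats) { this.stats = stats; }

        public synchronized void enqueue(Object order) {
            pending.addLast(order);
            stats.update(pending.size());                      // push the new depth immediately
        }

        public synchronized Object dequeue() {
            Object order = pending.removeFirst();
            stats.update(pending.size());
            return order;
        }
    }

    A pull model is equally valid: the MBean holds the reference and reads the queue only when a management request arrives, trading some staleness for less coupling in the business code.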

    Conclusion
    In this first of a two-part article, I started from the position that the manageability of the application determines its acceptance in production. I looked at a definition of application manageability and at some options for adding it into your Java/J2EE application, from the very simple message logging to the more comprehensive JMX approach. These approaches should be seen as complementary rather than competing with each other. In the second part of this article I'll look at the management tools and how they help us build more manageability into our applications, whether through applying existing templates or through writing code.

    References

  • BEA JMX: http://edocs.bea.com/wls/docs70/javadocs/index.html
  • JSR3: www.jcp.org/en/jsr/detail?id=3
  • JSR77: www.jcp.org/en/jsr/detail?id=77
  • OVO-Metrics: HP OpenView Operations - Metrics Guide
  • OVO-Admin: HP OpenView Operations - Administrators Guide
  • OVO-Concepts: HP OpenView Operations - Concepts Guide
  • OVTA: HP OpenView Transaction Analyzer: www.openview.hp.com/products/transaction_analyzer/index.asp
  • Poole: HP OpenView Architectures for Managing Network Based Services

    About the Author

    Justin Murray is a technical consultant in the application development resources organization in HP. He has worked for HP in various consulting and teaching roles. Justin has consulted at a technical level on customer projects involving Java, J2EE, and performance management, as well as specializing in application performance tuning for HP-UX. Justin has published several technical papers on these subjects. He can be reached at [email protected].
