


The Evolution Continues


As developers rapidly embraced component-based architectures, the role of application servers in production expanded from hosting relatively simple servlet-based applications to exploiting Enterprise JavaBeans (EJB) and the Java Message Service (JMS) to build robust e-business applications.

The proliferation of new, online business applications has sparked technology innovations in what was once a completely separate industry - performance management solutions. In the past 12-18 months, two previously unrelated areas, development and IT operations, have been forced to work together with new monitoring and management products that can pinpoint performance issues down to an individual Java method. These production-oriented management products take advantage of technologies such as management APIs, instrumentation, and Java profiling, which have been incorporated with more traditional management architectures such as server-side agents. Regardless of the granularity that these tools offer, the management of these increasingly complex n-tier infrastructures is still applied in a disparate, stovepipe approach, with multiple departments owning the operations responsible for various layers. This often results in reactive finger-pointing and departmental blame when performance issues arise.
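The management APIs mentioned above are concrete and standard: since J2SE 1.5, any JVM exposes health statistics through the `java.lang.management` package, the same mechanism production monitoring agents build on. A minimal sketch, using only the standard platform MXBeans (class and method names here are this sketch's own, not any vendor's product):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.RuntimeMXBean;

// Minimal sketch: reading JVM health statistics through the standard
// java.lang.management API, the foundation many server-side agents use.
public class JvmStats {
    // Bytes of heap currently in use by the running JVM.
    public static long heapUsedBytes() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        return mem.getHeapMemoryUsage().getUsed();
    }

    // Milliseconds since the JVM started.
    public static long uptimeMillis() {
        RuntimeMXBean rt = ManagementFactory.getRuntimeMXBean();
        return rt.getUptime();
    }

    public static void main(String[] args) {
        System.out.println("heap used: " + heapUsedBytes() + " bytes");
        System.out.println("uptime: " + uptimeMillis() + " ms");
    }
}
```

An agent would sample such values periodically and forward them to a central console; application-server vendors expose far richer statistics through the same MBean mechanism.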

Beyond App Server Management
In less than two years, a relatively new market niche (performance management of J2EE application servers) has become saturated by a variety of vendors. Each solution is slightly different, but all identify essentially the same data - hundreds of statistics for various application servers and their components. The opportunity now exists for management software vendors to take the next step and provide both a view into, and management of, the actual application topology. This visibility will extend beyond the application server and address management of the complete business process infrastructure with monitoring of transaction dependencies and the paths available to execute a transaction.

New management solutions are leading the evolution from traditional systems management to what AMR Research refers to as Business Services Management. "Many different pieces of technology are assembled to deliver a business service, including Web servers, application servers, and database servers. Monitoring and managing all of this technology in the context of the business processes it supports is the cornerstone of Business Services Management."

While yesterday's tools provide massive amounts of data for the various layers, often resulting in too much information (a condition known as a "data glut"), IT still struggles to gain a view of the paths each transaction could potentially take through this complex architecture. At the same time, business owners are looking to IT for details on the transaction ecosystem to help measure the success or failure of their online initiatives.

Moreover, as more and more companies look to maximize their application server infrastructures by launching new, online business initiatives, IT lacks visibility into the disconnected or distributed applications that many transactions depend upon to execute. There is still a need to track the real-time transaction dependencies to understand what each component requires to execute and how those components behave.

Market research firm Hurwitz Group underscores this growing management requirement, "Hurwitz Group believes that application management folks will have to borrow some network management concepts (such as topology mapping and route tracing) to address this problem. We are looking for a solution that can create a software topology - a map of the various software components in the clustered production environment that can make up transactions. The solution should also be able to trace the particular route that a transaction is taking across the software topology. Only then can administrators match individual transactions with the specific components that were used - the uncertainty disappears and problem diagnosis can begin."

Mapping the Application Logic
Today, new technologies are being introduced to monitor the real-time J2EE application logic. Mapping the individual business processes, or "Transaction Path Mapping," is the next step in Web application management solutions. Correlating component-level performance data with the system, database, application, and business-related metrics allows IT operations to measure the true performance of an e-business system and translate it into relevant, business-related information. This look into the application logic can ultimately bring management functionality more in line with the business and online revenue goals of the organization.

While load balancing, firewalls, and clustering are essential to the scalability and performance of a Web application, they also add complexities that pose new IT obstacles. Transaction Path Mapping identifies the components required to execute an application and creates a topology view of the potential paths available for transactions to take. The cross-functional view incorporates each element or layer that was previously monitored in its own "stovepipe" view, enabling IT to drill down to the exact source of the problem causing a transaction failure.

By correlating the various data points monitored, the overall health of a transaction path can be displayed, allowing for transactions to be routed to a healthier path, thereby maintaining or increasing transaction success rates. Additionally, the transaction map can help isolate the location of a transaction failure within an infrastructure. More importantly, this level of monitoring enables IT operations to communicate to both development and line-of-business managers the detail they require.
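The health-based routing described above can be sketched in a few lines. This is an illustrative model, not any vendor's implementation: component names and health scores are invented, and a path's health is taken as the health of its weakest component.

```java
import java.util.*;

// Hypothetical sketch: score each candidate transaction path and route
// new transactions to the healthiest one. A path is an ordered list of
// component names; its health is the minimum health of any component on
// it, since a chain is only as strong as its weakest link.
public class PathRouter {
    public static int pathHealth(List<String> path, Map<String, Integer> componentHealth) {
        int min = 100;
        for (String c : path) {
            min = Math.min(min, componentHealth.getOrDefault(c, 0));
        }
        return min;
    }

    public static List<String> healthiestPath(List<List<String>> paths,
                                              Map<String, Integer> health) {
        return Collections.max(paths,
                Comparator.comparingInt((List<String> p) -> pathHealth(p, health)));
    }

    public static void main(String[] args) {
        Map<String, Integer> health = new HashMap<>();
        health.put("web1", 95);
        health.put("app1", 40);  // degraded app-server instance
        health.put("app2", 90);
        health.put("db", 85);
        List<String> pathA = Arrays.asList("web1", "app1", "db");
        List<String> pathB = Arrays.asList("web1", "app2", "db");
        // Routes around the degraded instance: prints [web1, app2, db]
        System.out.println(healthiestPath(Arrays.asList(pathA, pathB), health));
    }
}
```

In a real product the health scores would themselves be derived from the correlated system, application, and database metrics discussed above.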

New Concept, Different Approaches
Similar to the various component-monitoring solutions, vendors are developing different approaches to transaction mapping. One popular method requires a manual mapping process to outline an application diagram. Typically, this takes advantage of the developer's application architecture and requires IT to select the dependencies of each transaction. This methodology is merely a representation of the infrastructure. It fails to monitor the real-time application logic and requires constant maintenance to the dependency map as new components are added or deleted. Other tools will require C or C++ development to actually hard-code the mapping into a model, sacrificing flexibility and requiring resources that are typically outside of IT. While this method may provide an accurate blueprint of the original, intended architecture, the continual changes required to update the mapping of evolving component-based applications may be costly.

Another approach to identifying transaction paths is the use of transaction labels or tags. This method involves the monitoring solution labeling a transaction and tracking the actual path it takes through all layers of the infrastructure. It is ideally suited for preproduction environments and offers detailed component information, down to individual SQL statements and method calls, but requires a significant amount of JVM and byte-code instrumentation. Combined with the added overhead from the label applied to each transaction, it can have a significant adverse effect on performance in a production environment.
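The core of the labeling approach is a correlation identifier attached to each transaction at its entry point and read by every instrumented layer it crosses. A minimal sketch, assuming a single-JVM, thread-per-request model (real products also propagate the tag across process boundaries, which is where the overhead accumulates):

```java
import java.util.UUID;

// Illustrative sketch of transaction labeling: a correlation ID is bound
// to the current thread at the entry point; each instrumented downstream
// component reads the same ID, so the full path can be reassembled later.
public class TransactionTag {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    // Called at the transaction entry point (e.g., a servlet filter).
    public static String begin() {
        String id = UUID.randomUUID().toString();
        CURRENT.set(id);
        return id;
    }

    // Called by any instrumented component on the same thread.
    public static String current() {
        return CURRENT.get();
    }

    // Called when the transaction completes, to avoid leaking the tag.
    public static void end() {
        CURRENT.remove();
    }

    public static void main(String[] args) {
        begin();
        System.out.println("servlet layer sees: " + current());
        System.out.println("EJB layer sees:     " + current());
        end();
    }
}
```

Each layer logs its timings against the shared ID, and the monitoring console stitches the records into a per-transaction path.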

The most effective approach to transaction mapping will likely be a combination of a monitoring agent for each system, with the ability to monitor the entire Web stack (Web server, application server, and database) and some form of instrumentation (controlled by IT) or API communication for component-level statistics. By leveraging a "bottom-up" approach with real-time data, not only can IT establish a live view of each transaction path, but the view can be dynamic, to ensure an accurate representation of the infrastructure. Any changes to the topology can be tracked without the need for development time or resources.

Beginning with a transaction entry point at the presentation layer, typically a Web page served by a servlet or Active Server Page, IT can auto-discover each servlet, EJB, or COM+ object, along with EJB transactional methods, JDBC connections, and the other resource pools that make up the business logic. This information can then be used to dynamically map the real-time flow of each path and represent it visually in a portal-style view.
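The bottom-up assembly step can be sketched as follows. This is a hypothetical model (the entry-point and component names are invented): each agent reports which components it observes a request touching, and the mapper accumulates them into a per-entry-point topology that updates as the application changes.

```java
import java.util.*;

// Hypothetical sketch of bottom-up transaction path mapping: monitoring
// agents report observed (entry point -> component) invocations, and the
// mapper assembles them into a discovered topology per entry point.
public class PathMapper {
    private final Map<String, Set<String>> topology = new LinkedHashMap<>();

    // Called by an agent each time it sees entryPoint invoke component.
    // Repeated observations are idempotent; new components simply appear.
    public void observe(String entryPoint, String component) {
        topology.computeIfAbsent(entryPoint, k -> new LinkedHashSet<>())
                .add(component);
    }

    public Set<String> pathFor(String entryPoint) {
        return topology.getOrDefault(entryPoint, Collections.emptySet());
    }

    public static void main(String[] args) {
        PathMapper mapper = new PathMapper();
        mapper.observe("/checkout", "CheckoutServlet");
        mapper.observe("/checkout", "OrderEJB");
        mapper.observe("/checkout", "JDBC:ordersPool");
        // Prints the discovered components behind the /checkout entry point.
        System.out.println(mapper.pathFor("/checkout"));
    }
}
```

Because the map is rebuilt from live observations rather than a hand-drawn diagram, a component added or removed in production shows up without anyone editing a model.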

This methodology offers maximum visibility and detail for data correlation while avoiding costly overhead on production systems. It also provides the flexibility required to support rapidly evolving Web architectures, which may add, delete, or modify production-side applications and components on a regular basis. More importantly, this approach allows the mapping solution to invoke various actions automatically, based on defined thresholds. These can be corrective measures to address poor performance, or quality-of-service actions that prioritize resources based on a particular transaction trait (e.g., a preferred customer or a large transaction value).
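A threshold rule of the kind described above reduces to a simple evaluation. The sketch below is illustrative only: the action names and the single response-time metric are placeholders for whatever metrics and actions a given product exposes.

```java
// Illustrative sketch of a threshold-driven action: when a monitored
// metric crosses its limit, choose a quality-of-service action based on
// a transaction trait. Action names here are invented placeholders.
public class ThresholdRule {
    public static String evaluate(double responseTimeMs,
                                  double limitMs,
                                  boolean preferredCustomer) {
        if (responseTimeMs > limitMs) {
            // High-value traffic gets resources first; the rest is
            // rerouted to a healthier path.
            return preferredCustomer ? "PRIORITIZE" : "REROUTE";
        }
        return "OK";
    }

    public static void main(String[] args) {
        System.out.println(evaluate(1200, 500, true));
        System.out.println(evaluate(1200, 500, false));
        System.out.println(evaluate(200, 500, false));
    }
}
```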

With IT operations and development forced to collaborate when addressing Web application performance issues, products that address the needs of both groups are more apt to be embraced. Because these solutions can display more detailed performance information in a manner that is familiar to IT, the need to involve development in less complex issues is reduced. The result is significantly increased productivity from both operations and development.

As e-business applications become the foundation for delivering business services online, the ability to measure, monitor, and proactively manage all aspects of the service becomes increasingly important. In fact, the introduction of reliable transaction management technologies will likely help drive the adoption of new online initiatives. This new approach to Web application performance management will set the stage for management technology innovations in the near future.


  • Gaughan, Dennis (March 11, 2002). "Business Services Management: Managing an ECM Infrastructure." AMR Research.
  • Hurwitz TrendWatch (March 15, 2002). "Web Application's Uncertainty Principle." Hurwitz Group.
About the Author

Frank Moreno is the product marketing manager for Dirig Software, a developer of enterprise performance management solutions. He has over 10 years of experience in product marketing, product management, and strategic alliances in the networking and software industries, and has written multiple articles on e-business performance management.

