The Evolution Continues

As developers have rapidly embraced component-based architectures, the role of application servers in production has expanded from hosting relatively simple, servlet-based applications to exploiting Enterprise JavaBeans (EJBs) and the Java Message Service (JMS) to build robust e-business applications.

The proliferation of new, online business applications has sparked technology innovations in what was once a completely separate industry - performance management solutions. In the past 12-18 months, two previously unrelated areas, development and IT operations, have been forced to work together, aided by new monitoring and management products that can pinpoint performance issues down to an individual Java method. These production-oriented management products take advantage of technologies such as management APIs, instrumentation, and Java profiling, incorporated alongside more traditional management architectures such as server-side agents. Regardless of the granularity these tools offer, the management of these increasingly complex n-tier infrastructures is still applied in a disparate, stovepipe fashion, with multiple departments owning the operations responsible for the various layers. This often results in reactive finger-pointing and departmental blame when performance issues arise.

Beyond App Server Management
In less than two years, a relatively new market niche (performance management of J2EE application servers) has become saturated by a variety of vendors. Each solution is slightly different, but all identify essentially the same data - hundreds of statistics for various application servers and their components. The opportunity now exists for management software vendors to take the next step and provide both a view into, and management of, the actual application topology. This visibility will extend beyond the application server and address management of the complete business process infrastructure with monitoring of transaction dependencies and the paths available to execute a transaction.

New management solutions are leading the evolution from traditional systems management to what AMR Research refers to as Business Services Management. "Many different pieces of technology are assembled to deliver a business service, including Web servers, application servers, and database servers. Monitoring and managing all of this technology in the context of the business processes it supports is the cornerstone of Business Services Management."

While yesterday's tools provide massive amounts of data for the various layers - often resulting in too much information, known as a "data glut" - IT still struggles to gain a view of the paths each transaction could potentially take throughout this complex architecture. At the same time, business owners are looking to IT for details on the transaction ecosystem to help measure the success or failure of their online initiatives.

Moreover, as more and more companies look to maximize their application server infrastructures by launching new, online business initiatives, IT lacks visibility into the disconnected or distributed applications that many transactions depend upon to execute. There is still a need to track the real-time transaction dependencies to understand what each component requires to execute and how those components behave.

Market research firm Hurwitz Group underscores this growing management requirement, "Hurwitz Group believes that application management folks will have to borrow some network management concepts (such as topology mapping and route tracing) to address this problem. We are looking for a solution that can create a software topology - a map of the various software components in the clustered production environment that can make up transactions. The solution should also be able to trace the particular route that a transaction is taking across the software topology. Only then can administrators match individual transactions with the specific components that were used - the uncertainty disappears and problem diagnosis can begin."

Mapping the Application Logic
Today, new technologies are being introduced to monitor the real-time J2EE application logic. Mapping the individual business processes, or "Transaction Path Mapping," is the next step in Web application management solutions. Correlating component-level performance data with the system, database, application, and business-related metrics allows IT operations to measure the true performance of an e-business system and translate it into relevant, business-related information. This look into the application logic can ultimately bring management functionality more in line with the business and online revenue goals of the organization.
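
To make the translation from component-level data into business-related information concrete, here is a minimal, purely illustrative sketch; the metric, conversion rate, and order value are invented for the example rather than drawn from any particular product.

```java
// Hypothetical example: expressing a servlet's failures in revenue terms.
public class BusinessImpact {

    // Failed requests that would otherwise have converted into orders,
    // multiplied by the average order value, estimate the revenue at risk.
    public static double revenueAtRisk(long failedRequests, double conversionRate,
                                       double avgOrderValue) {
        return failedRequests * conversionRate * avgOrderValue;
    }

    public static void main(String[] args) {
        long failed = 420;          // failed checkout requests in the last hour (invented)
        double conversion = 0.05;   // 5% of requests normally become orders (invented)
        double orderValue = 80.0;   // average order value in dollars (invented)

        System.out.printf("Estimated revenue at risk: $%.2f%n",
                revenueAtRisk(failed, conversion, orderValue));
    }
}
```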

While load balancing, firewalls, and clustering are essential to the scalability and performance of a Web application, they also add complexities that pose new IT obstacles. Transaction Path Mapping identifies the components required to execute an application and creates a topology view of the potential paths available for transactions to take. The cross-functional view incorporates each element or layer that was previously monitored in its own "stovepipe" view, enabling IT to drill down to the exact source of the problem causing a transaction failure.
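
As a rough illustration of what such a topology might look like in code, the sketch below models monitored components as nodes in a directed graph and enumerates the potential paths from an entry point. The component names are hypothetical, and this is a sketch of the concept rather than any vendor's implementation.

```java
import java.util.*;

// Components (servlets, EJBs, connection pools) are nodes; an edge means
// one component can invoke the next. Names below are invented examples.
public class TransactionTopology {

    private final Map<String, Set<String>> edges = new HashMap<>();

    // Register that 'from' (e.g., a servlet) may call 'to' (e.g., an EJB).
    public void addDependency(String from, String to) {
        edges.computeIfAbsent(from, k -> new HashSet<>()).add(to);
    }

    // Enumerate every potential path from an entry point down to a leaf
    // resource (e.g., a JDBC connection pool) by depth-first search.
    public List<List<String>> pathsFrom(String entryPoint) {
        List<List<String>> paths = new ArrayList<>();
        walk(entryPoint, new ArrayList<>(), paths);
        return paths;
    }

    private void walk(String node, List<String> current, List<List<String>> paths) {
        current.add(node);
        Set<String> next = edges.getOrDefault(node, Collections.emptySet());
        if (next.isEmpty()) {
            paths.add(new ArrayList<>(current));   // reached a leaf resource
        } else {
            for (String n : next) {
                if (!current.contains(n)) {        // skip cycles
                    walk(n, current, paths);
                }
            }
        }
        current.remove(current.size() - 1);
    }

    public static void main(String[] args) {
        TransactionTopology topo = new TransactionTopology();
        topo.addDependency("CheckoutServlet", "OrderBean");
        topo.addDependency("OrderBean", "InventoryBean");
        topo.addDependency("OrderBean", "OrdersJdbcPool");
        topo.addDependency("InventoryBean", "InventoryJdbcPool");
        topo.pathsFrom("CheckoutServlet").forEach(System.out::println);
    }
}
```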

By correlating the various data points monitored, the overall health of a transaction path can be displayed, allowing for transactions to be routed to a healthier path, thereby maintaining or increasing transaction success rates. Additionally, the transaction map can help isolate the location of a transaction failure within an infrastructure. More importantly, this level of monitoring enables IT operations to communicate to both development and line-of-business managers the detail they require.
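
One way such routing might be scored, assuming a simple "weakest component" scheme; the scheme and the metric scale are assumptions made for illustration only. In practice the candidate paths would come from the topology map and the health values from the monitoring agents.

```java
import java.util.*;

// Each component reports a health value between 0.0 (failing) and 1.0
// (healthy), e.g., derived from response time, error rate, and pool usage.
public class PathHealth {

    private final Map<String, Double> componentHealth = new HashMap<>();

    public void report(String component, double health) {
        componentHealth.put(component, health);
    }

    // A path is only as healthy as its weakest component.
    public double pathHealth(List<String> path) {
        return path.stream()
                   .mapToDouble(c -> componentHealth.getOrDefault(c, 0.0))
                   .min()
                   .orElse(0.0);
    }

    // Pick the healthiest of the candidate paths for routing a transaction.
    public List<String> healthiestPath(List<List<String>> candidatePaths) {
        return candidatePaths.stream()
                             .max(Comparator.comparingDouble(this::pathHealth))
                             .orElse(Collections.emptyList());
    }
}
```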

New Concept, Different Approaches
Similar to the various component-monitoring solutions, vendors are developing different approaches to transaction mapping. One popular method requires a manual mapping process to outline an application diagram. Typically, this takes advantage of the developer's application architecture and requires IT to select the dependencies of each transaction. This methodology is merely a static representation of the infrastructure: it fails to monitor the real-time application logic and requires constant maintenance of the dependency map as components are added or deleted. Other tools require C or C++ development to hard-code the mapping into a model, sacrificing flexibility and requiring resources that are typically outside of IT. While this method may provide an accurate blueprint of the original, intended architecture, the continual changes required to update the mapping of evolving component-based applications may be costly.

Another approach to identifying transaction paths is the use of transaction labels or tags. This method involves the monitoring solution labeling a transaction and tracking the actual path it takes through all layers of the infrastructure. It is ideally suited for preproduction environments and offers detailed component information, down to individual SQL statements and method calls, but requires a significant amount of JVM and byte-code instrumentation. Combined with the added overhead from the label applied to each transaction, it can have a significant adverse effect on performance in a production environment.
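
For illustration only, the sketch below shows the labeling idea at the servlet layer: a filter attaches a correlation tag to each request so downstream activity can be matched to the same transaction. The header and attribute names are invented, and products of this kind typically apply the tag inside the JVM via bytecode instrumentation rather than a filter.

```java
import java.io.IOException;
import java.util.UUID;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Attaches a per-transaction tag at the entry point of the Web tier.
public class TransactionTagFilter implements Filter {

    // Thread-local copy so code deeper in the call path can read the tag.
    private static final ThreadLocal<String> TAG = new ThreadLocal<>();

    public static String currentTag() {
        return TAG.get();
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        String tag = http.getHeader("X-Transaction-Tag");   // reuse an upstream tag if present
        if (tag == null) {
            tag = UUID.randomUUID().toString();              // otherwise mint a new one
        }
        TAG.set(tag);
        req.setAttribute("transaction.tag", tag);
        try {
            chain.doFilter(req, res);                        // business logic runs under this tag
        } finally {
            TAG.remove();                                    // avoid leaking across pooled threads
        }
    }

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void destroy() { }
}
```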

The most effective approach to transaction mapping will likely combine a monitoring agent on each system, able to monitor the entire Web stack (Web server, application server, and database), with some form of instrumentation (controlled by IT) or API communication for component-level statistics. By leveraging a "bottom-up" approach with real-time data, IT can not only establish a live view of each transaction path but also keep that view dynamic, ensuring an accurate representation of the infrastructure. Any changes to the topology can be tracked without the need for development time or resources.
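
As an example of the "API communication" half of this approach, the sketch below pulls component-level statistics over JMX. The service URL, MBean name pattern, and attribute name are placeholders; each application server exposes its own management objects, so these would need to be adjusted for the server in use.

```java
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// A tiny "agent" that polls connection-pool statistics from a remote server.
public class ComponentStatsAgent {

    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint for the monitored server instance.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://appserver.example.com:9010/jmxrmi");

        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Find every MBean that looks like a JDBC data source
            // (the pattern is an assumption; it varies by server product).
            Set<ObjectName> pools =
                    mbs.queryNames(new ObjectName("*:type=DataSource,*"), null);

            for (ObjectName pool : pools) {
                // The attribute name is also server-specific.
                Object active = mbs.getAttribute(pool, "NumActive");
                System.out.println(pool + " active connections: " + active);
            }
        }
    }
}
```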

Beginning with a transaction entry point at the presentation layer - typically a Web page served by a servlet or Active Server Page - IT can auto-discover each servlet, EJB or COM+ object, EJB transactional method, JDBC connection, or other resource pool that makes up the business logic. This information can then be used to dynamically map the real-time flow of each path and represent it visually in a portal-style view.
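
To picture the dynamic mapping step, the sketch below folds each observed caller-to-callee invocation into a live call graph, so newly deployed servlets, EJBs, or pools appear in the map without manual modeling. It is illustrative only, with hypothetical component names; a real product would feed recordCall from its instrumentation or management API.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// A live transaction map built bottom-up from observed calls.
public class LiveTransactionMap {

    private final ConcurrentMap<String, Set<String>> callGraph = new ConcurrentHashMap<>();

    // Invoked by the monitoring layer whenever caller -> callee is observed,
    // e.g., CheckoutServlet -> OrderBean, or OrderBean -> OrdersJdbcPool.
    public void recordCall(String caller, String callee) {
        callGraph.computeIfAbsent(caller, k -> ConcurrentHashMap.newKeySet()).add(callee);
    }

    // Point-in-time copy used to render the portal-style topology view.
    public Map<String, Set<String>> snapshot() {
        Map<String, Set<String>> copy = new HashMap<>();
        callGraph.forEach((caller, callees) -> copy.put(caller, new HashSet<>(callees)));
        return copy;
    }

    public static void main(String[] args) {
        LiveTransactionMap map = new LiveTransactionMap();
        map.recordCall("CheckoutServlet", "OrderBean");
        map.recordCall("OrderBean", "OrdersJdbcPool");
        System.out.println(map.snapshot());
    }
}
```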

This methodology offers maximum visibility and detail for data correlation while avoiding costly overhead on production systems. It also provides the flexibility required to support rapidly evolving Web architectures, which may add, delete, or modify production-side applications and components on a regular basis. More importantly, this approach allows the mapping solution to invoke various actions automatically, based on defined thresholds. These can be corrective measures to address poor performance, or even quality-of-service actions to prioritize resources based on a particular transaction trait (e.g., a preferred customer or a large transaction value).
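
A minimal sketch of the threshold-driven action idea; the metric names, threshold values, and actions are invented for the example.

```java
import java.util.function.Consumer;

// Fires a corrective or quality-of-service action when a monitored value
// crosses a defined threshold.
public class ThresholdAction {

    private final String metricName;
    private final double threshold;
    private final Consumer<Double> action;

    public ThresholdAction(String metricName, double threshold, Consumer<Double> action) {
        this.metricName = metricName;
        this.threshold = threshold;
        this.action = action;
    }

    // Called with each new sample from the monitoring agent.
    public void onSample(double value) {
        if (value > threshold) {
            System.out.println(metricName + " = " + value + " breached threshold " + threshold);
            action.accept(value);
        }
    }

    public static void main(String[] args) {
        // Corrective measure: shed load when checkout response time degrades.
        ThresholdAction slowCheckout = new ThresholdAction(
                "CheckoutServlet.avgResponseMillis", 2000,
                v -> System.out.println("Rerouting checkout traffic to a healthier path"));

        // Quality-of-service: reserve resources when the pool backs up.
        ThresholdAction poolPressure = new ThresholdAction(
                "OrdersJdbcPool.waiters", 10,
                v -> System.out.println("Reserving connections for preferred customers"));

        slowCheckout.onSample(3200);   // fires
        poolPressure.onSample(4);      // does not fire
    }
}
```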

With IT operations and development forced to collaborate when addressing Web application performance issues, products that address the needs of both groups are more apt to be embraced. Because these solutions can display more detailed performance information in a manner that is familiar to IT, the need to involve development in less complex issues is reduced. The result is significantly increased productivity from both operations and development.

As e-business applications become the foundation for delivering business services online, the ability to measure, monitor, and proactively manage all aspects of the service becomes increasingly important. In fact, the introduction of reliable transaction management technologies will likely help drive the adoption of new online initiatives. This new approach to Web application performance management will set the stage for management technology innovations in the near future.

References

  • Gaughan, Dennis. "Business Services Management: Managing an ECM Infrastructure." AMR Research, March 11, 2002.
  • "Web Application's Uncertainty Principle." Hurwitz TrendWatch, March 15, 2002.

Frank Moreno is the product marketing manager for Dirig Software, a leading developer of award-winning enterprise performance management solutions. Frank has over 10 years of experience in product marketing, product management, and strategic alliances in the networking and software industries, and has written multiple articles on e-business performance management.
