A Service-Oriented Management Approach for Service-Oriented Architecture

Much has been written about service-oriented architecture (SOA) and the many technology and business benefits of adopting this approach. Poised to change the computing landscape once again, SOA has progressive IT departments, software vendors, and service providers eager to embrace its concepts - concepts familiar to anyone acquainted with the many past attempts to represent applications and IT infrastructure as modular, reusable services.

Contrary to the views of some, SOA is not about .NET or J2EE or any specific platform or standards, although the continued adoption and early successes of Web services implementations will likely galvanize the industry around its standards. Rather, SOA is an application architecture approach to building distributed systems that deliver application functionality as services, either to end-user applications or to other services.

Because SOA not only represents a philosophical shift for developers but also has great implications for IT operations, understanding the operational aspects of managing and monitoring SOA with what I will call a "service-centric, end-to-end approach" is critical. It offers IT professionals a great opportunity to finally get it right throughout the application life cycle, from the development process through the operational management aspects.

Background
Service-oriented architecture is an approach to loosely coupled, standards-based, and protocol-independent distributed computing, where coarse-grained software resources and functions are made accessible via the network. In an SOA, these software resources are considered "services," which are well defined, self-contained, and ideally do not depend on the state or context of other services. Services have a published interface and communicate with each other. Services that utilize Web services standards (WSDL, SOAP, UDDI) are the most popular type of services available today.

Many believe that SOA, by leveraging new and existing applications and abstracting them as modular, coarse-grained services that map to discrete business functions, represents the future enterprise technology solution that can deliver the flexibility and agility that business users want. These coarse-grained services can be organized/orchestrated and reused to facilitate the ongoing and changing needs of business.

Advantages of an SOA
Implementing SOA provides both technical and business advantages. From a technical point of view, the task of building business processes is faster and cheaper with SOA, because existing services can more easily be reused and combined to define the business processes themselves. Applications can expose their services in a standard way and, hence, to more diverse clients and consumers. From a business perspective, IT staff can communicate more easily with business people, who understand services. Because business processes become explicit, they can be understood and improved with greater ease. Additionally, applications or business processes can be managed internally more easily or outsourced, because they're well-defined and discrete. As business changes and new requirements are generated, IT can reuse services to meet new demands in a much more efficient and timely manner.

The value and ultimate success of SOA is based on the assumption that everything enterprise IT does is ultimately manifested in the service of some business process. Given this assumption, SOA is about making business processes better, cheaper to create, and easier to change and manage.

New Operational Challenges for Managing SOA
Ironically, IT operations staff tasked with managing and monitoring SOA face the same major challenge as developers: the fundamental philosophical shift that SOA represents. Operations staff currently manage IT assets from a technology perspective; with SOAs in place, the focus needs to shift to service centricity. Managing technology from the perspective of services has been difficult for IT operations, which have struggled to understand and define services in general. In the absence of clear definitions of business services, IT operations have traditionally managed and monitored all of the individual tiers of technology separately, without understanding their interactions and interdependencies or how they impact the services provided.

However, when you consider the adoption of SOA, many obvious operational questions arise:

  • Who is going to own the management of business services?
  • How will the health, performance, and capacity of these services be monitored?
  • When a problem arises, how will operations personnel be able to relate coarse-grained business service degradation to infrastructure bottlenecks?
  • What enabling technologies or techniques must be made available so that personnel across multiple departments (development, QA/Test, operations support, etc.) can work together in real time to prevent service failures or performance degradations?
  • Do the current technology-segregated IT processes work in an SOA-enabled environment?
The answers lie in a best-practice approach that manages the interaction between the services and the underlying infrastructure as one cohesive and integrated solution. This management is done from an end-to-end perspective, using measurements of capacity, availability, and performance (CAP) to integrate and simplify management functions.

Best Practices for Managing SOA
Using such an end-to-end approach offers IT operations far more flexibility and adaptability in an SOA environment than traditional, more piecemeal management of underlying systems or of services and their interfaces.

Measurements of CAP at the services layer should act as a trigger for all other management functions and actions so that the proper focus on services and service quality can be maintained throughout an organization. The advent of clearly articulated business services via SOA can and should drive all operational management functions from the perspective of service quality, expressed as measurements of service capacity, service availability, and service performance. This could eliminate once and for all the finger-pointing and ambiguities we all encounter in operations when finding and fixing problems during runtime. For the purpose of this best practice approach to service-oriented management, Web services standards are implied.

Figure 1 depicts a simple service. It is presumed that Web services standards are used to abstract and integrate the functions of two existing applications on different platforms, written in different languages and residing in different locations. The services of both the producer and the consumer become interoperable via Web services standards, including SOAP for messaging, XML for message and data format, WSDL for description of services, and UDDI for service discovery. With the application's services clearly articulated and defined, the opportunity exists to coherently "instrument" the service and make its measurements available for runtime management.

Applying CAP metrics and measurements to the Web service enables operations to clearly understand the behavior of the service and its interactions (a minimal monitoring sketch follows this list). For example:

  • Capacity/load metrics: Is the number of connections, sessions, and requests/responses within the intended design limits? Is the number of connections/requests within the defined service levels for capacity?
  • Availability metrics: Is the service accessible and functioning? Is it returning the expected results? Is it operating within the defined service levels for availability?
  • Performance metrics: Is the response time within an acceptable range? Is response being impacted by load? Is the response time within the defined service levels for performance?
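
To make these questions concrete, here is a minimal sketch of how such CAP metrics might be collected for a single service. It is not taken from any product; the class and method names are illustrative, and a real implementation would live in the monitoring layer rather than in the service itself.

    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative CAP metrics holder for one Web service; names are hypothetical.
    public class ServiceCapMetrics {
        private final int designCapacity;                 // intended connection/request limit
        private final AtomicInteger activeRequests = new AtomicInteger();
        private final AtomicLong totalRequests = new AtomicLong();
        private final AtomicLong failedRequests = new AtomicLong();
        private final AtomicLong totalResponseMillis = new AtomicLong();

        public ServiceCapMetrics(int designCapacity) {
            this.designCapacity = designCapacity;
        }

        // Call when a request arrives.
        public void requestStarted() {
            activeRequests.incrementAndGet();
            totalRequests.incrementAndGet();
        }

        // Call when the response is sent (or the call fails).
        public void requestFinished(long elapsedMillis, boolean success) {
            activeRequests.decrementAndGet();
            totalResponseMillis.addAndGet(elapsedMillis);
            if (!success) failedRequests.incrementAndGet();
        }

        // Capacity: current load as a fraction of the design limit.
        public double capacityUtilization() {
            return (double) activeRequests.get() / designCapacity;
        }

        // Availability: fraction of requests that returned the expected result.
        public double availability() {
            long total = totalRequests.get();
            return total == 0 ? 1.0 : 1.0 - (double) failedRequests.get() / total;
        }

        // Performance: mean response time in milliseconds.
        public double averageResponseMillis() {
            long total = totalRequests.get();
            return total == 0 ? 0.0 : (double) totalResponseMillis.get() / total;
        }
    }
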
Instrumentation Techniques: Getting the Service Measurements
Two fundamental principles can be applied to accurately and proactively measuring and monitoring services - active and passive monitoring. Active monitoring implies creating "synthetic transactions" that actively test a service by executing periodically at specified intervals. Passive monitoring looks at the transactions and interactions as they occur. Active monitoring is inherently proactive in that it doesn't wait for an error or degradation to occur before detecting it, even though it does not reflect the actual interactions between services. Experience shows that using both of these techniques together produces the best result.
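
A sketch of the active side follows, assuming a hypothetical endpoint URL and a fixed probe interval. A real synthetic transaction would send a representative SOAP request and validate the response body, not just the HTTP status.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Active monitoring: periodically execute a synthetic transaction against a service.
    public class SyntheticProbe {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Hypothetical service endpoint; substitute the real WSDL port address.
            final String endpoint = "http://example.com/services/OrderService";

            scheduler.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    long start = System.currentTimeMillis();
                    try {
                        HttpURLConnection conn =
                            (HttpURLConnection) new URL(endpoint).openConnection();
                        conn.setConnectTimeout(5000);
                        conn.setReadTimeout(5000);
                        int status = conn.getResponseCode();
                        long elapsed = System.currentTimeMillis() - start;
                        // Availability: did the service answer? Performance: how fast?
                        System.out.println("status=" + status + " elapsedMs=" + elapsed);
                    } catch (Exception e) {
                        // A failed probe is detected immediately, before any user is affected.
                        System.out.println("probe failed: " + e.getMessage());
                    }
                }
            }, 0, 60, TimeUnit.SECONDS);  // probe once per minute
        }
    }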

Web services enable powerful instrumentation without the need to modify applications. Definitions exist for inserting instrumentation that reveals the characteristics of a transaction: its start time, its stop time, its transaction type, and the service with which it is communicating.

Instrumentation techniques fall into two categories: proxy and native instrumentation. The proxy method involves modifying the IP address to intercept messages between service providers and service consumers. The native approach uses available exits in the SOAP processors contained in both the service providers and consumers. Of course, both techniques involve tradeoffs. The proxy method lets you remain SOAP-processor neutral, but you take a performance hit by being in-line with all messages. The native method avoids that penalty, but it ties you to a particular SOAP processor.
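
As an illustration of the native approach, a JAX-RPC message handler is one common "exit" in SOAP processors of this generation. The sketch below timestamps each request/response pair without touching application code; the handler would be registered in the processor's handler chain, and the property name used here is an arbitrary choice.

    import javax.xml.namespace.QName;
    import javax.xml.rpc.handler.GenericHandler;
    import javax.xml.rpc.handler.MessageContext;

    // Native instrumentation: a JAX-RPC handler in the SOAP processor's handler
    // chain; it measures each request/response without modifying the service.
    public class TimingHandler extends GenericHandler {

        public boolean handleRequest(MessageContext ctx) {
            // Record the start time on the message context ("mon.start" is illustrative).
            ctx.setProperty("mon.start", new Long(System.currentTimeMillis()));
            return true;  // continue down the handler chain
        }

        public boolean handleResponse(MessageContext ctx) {
            Long start = (Long) ctx.getProperty("mon.start");
            if (start != null) {
                long elapsed = System.currentTimeMillis() - start.longValue();
                // In a real deployment this would feed the CAP metrics store.
                System.out.println("service call took " + elapsed + " ms");
            }
            return true;
        }

        public QName[] getHeaders() {
            return new QName[0];  // this handler processes no SOAP headers
        }
    }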

Managing Interactions Between Services
Because Web services applications are likely to have many producers and consumers active at any time, the interactions between them must be managed. With this increased complexity (see Figure 2) come increased concerns about service availability and performance. Being able to perform both active and passive monitoring of Web services and the interactions between them becomes paramount. From an operational perspective, managing interactions with external services (across enterprises) also represents an added complexity. Service-level agreements (SLAs) need to be in place in order to clearly define and monitor the expected performance characteristics of the service.
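
A minimal sketch of what encoding and checking such an SLA could look like; the service name and thresholds are hypothetical, and a real SLA would cover capacity as well.

    // Illustrative SLA record for an external service; values are hypothetical.
    public class ServiceSla {
        private final String serviceName;
        private final double minAvailability;     // e.g., 0.999
        private final long maxResponseMillis;     // e.g., 2000 ms

        public ServiceSla(String serviceName, double minAvailability, long maxResponseMillis) {
            this.serviceName = serviceName;
            this.minAvailability = minAvailability;
            this.maxResponseMillis = maxResponseMillis;
        }

        // Compare measured CAP values against the agreed levels.
        public boolean isViolated(double measuredAvailability, long measuredResponseMillis) {
            return measuredAvailability < minAvailability
                || measuredResponseMillis > maxResponseMillis;
        }

        public static void main(String[] args) {
            ServiceSla sla = new ServiceSla("CreditCheck", 0.999, 2000);
            if (sla.isViolated(0.995, 1500)) {
                System.out.println("SLA violated for CreditCheck: raise an operations alert");
            }
        }
    }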

Applying SOA Concepts to Infrastructure
As we have seen, a best-practice approach for managing SOA-enabled business services requires managing the interaction between the services and the underlying infrastructure from the perspective of capacity, availability, and performance, as one integrated solution. Using the right technology and approach, it is possible to manage application stacks end-to-end and provide "coarse-grained" representations of CAP at each infrastructure tier, instead of monitoring the capacity, availability, and performance of IT assets in a piecemeal fashion. Unlike current methods, this approach enables IT operations to quickly locate emerging problems.

However, from a service-support perspective, operations must be able to make sense of the torrent of information and events they receive from a myriad of monitoring and analysis tools that neither abstract low-level measurements into cohesive, easy-to-understand information nor provide a contextual reference for interpretation. A solution to this problem is to limit the number of monitoring and analysis tools and insist that they automate the analysis process out of the box. Such tools should not require operations staff to set thousands of static thresholds that manually define how an alarm/event is generated. Modern management solutions should come configured to automatically detect abnormal conditions in the environment, and they should provide a mechanism to aggregate and convert low-level events/alarms into coarse-grained, humanly understandable measurements of CAP.
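
One way such automatic detection can work, sketched here under simple statistical assumptions: maintain a running baseline of each measurement and flag samples that deviate from it by more than a few standard deviations, rather than hand-tuning thousands of static thresholds.

    // Dynamic baselining sketch: flag a metric sample as abnormal when it deviates
    // from its running mean by more than k standard deviations (Welford's method).
    public class Baseline {
        private long count;
        private double mean;
        private double m2;          // sum of squared deviations from the mean
        private final double k;     // sensitivity, e.g., 3.0

        public Baseline(double k) { this.k = k; }

        public boolean isAbnormal(double sample) {
            boolean abnormal = false;
            if (count > 30) {  // only judge once the baseline has enough history
                double stddev = Math.sqrt(m2 / (count - 1));
                abnormal = Math.abs(sample - mean) > k * stddev;
            }
            // Update the running statistics with the new sample.
            count++;
            double delta = sample - mean;
            mean += delta / count;
            m2 += delta * (sample - mean);
            return abnormal;
        }

        public static void main(String[] args) {
            Baseline responseTime = new Baseline(3.0);
            for (int i = 0; i < 100; i++) responseTime.isAbnormal(200 + (i % 7));  // normal traffic
            System.out.println(responseTime.isAbnormal(950));  // prints true: a spike
        }
    }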

In order to manage IT assets in the context of business services, all underlying infrastructure measurement and monitoring technology related to the service needs to be standardized into a unified taxonomy. Figure 3 depicts how individual infrastructure elements can be monitored and analyzed across tiers in real time. Individual metrics (informational events, alarms, etc.) from each managed element must be abstracted into overall measures of capacity, availability, and performance in order for them to be humanly consumable and then to enable automated and "standardized" monitoring across tiers of infrastructure.
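
A sketch of that abstraction step, under the assumption that each managed element's metrics have already been normalized to a common 0.0-1.0 scale: element-level figures are rolled up into a single capacity, availability, and performance score per tier.

    import java.util.ArrayList;
    import java.util.List;

    // Roll up element-level metrics into one coarse-grained CAP measure per tier.
    // Element names and the 0.0-1.0 normalization are illustrative assumptions.
    public class TierCapRollup {
        static class ElementMetric {
            final String element;
            final double capacity, availability, performance;  // 0.0 (bad) to 1.0 (good)
            ElementMetric(String element, double c, double a, double p) {
                this.element = element; capacity = c; availability = a; performance = p;
            }
        }

        // The tier is only as healthy as its weakest element, so take the minimum.
        static double[] rollUp(List<ElementMetric> tier) {
            double cap = 1.0, avail = 1.0, perf = 1.0;
            for (ElementMetric m : tier) {
                cap = Math.min(cap, m.capacity);
                avail = Math.min(avail, m.availability);
                perf = Math.min(perf, m.performance);
            }
            return new double[] { cap, avail, perf };
        }

        public static void main(String[] args) {
            List<ElementMetric> appServerTier = new ArrayList<ElementMetric>();
            appServerTier.add(new ElementMetric("jvm-heap", 0.9, 1.0, 0.8));
            appServerTier.add(new ElementMetric("jdbc-pool", 0.4, 1.0, 0.7));  // pool near limits
            double[] cap = rollUp(appServerTier);
            System.out.println("tier CAP = " + cap[0] + "/" + cap[1] + "/" + cap[2]);
        }
    }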

Figure 3 is not meant to be "anatomically correct," but rather to illustrate that a cohesive, end-to-end CAP monitoring strategy is not only possible but also necessary as infrastructure stacks become more complex and dynamic. Like the SOA concept of abstracting fine-grained application functions into coarse-grained services, end-to-end application and infrastructure stack component measurements (Web server, application server, database, etc.) could be abstracted into higher-level measurements of CAP. New service-oriented management systems will leverage these standardized measurements and provide a means to aggregate and correlate them to the services that they provision. Imagine being able to categorize and trace a capacity or performance bottleneck down to at least the level of an element or component. How are application server performance measurements impacted by network performance measurements, and what impact do they have on the service layer? In my experience, most IT shops do not have this down to a science, but it is possible - and in an SOA-enabled enterprise, where services provide business differentiation, service quality will be increasingly important.

Bridging the Gap Between Web Services and Application and Infrastructure Management
Having standardized measurements of both Web services and the supporting application stack makes it possible to tie downstream infrastructure stack analysis and alerting to the measurements of Web service quality, as described earlier. If the Web service layer measurements and monitors detect a performance problem, they can automatically trigger downstream analysis in order to determine like-kind (performance) infrastructure problems that may have been occurring at the time the service degraded. Figure 4, a high-level diagram of an actual project to provide end-to-end proactive monitoring for a BEA WebLogic Integration 8.1 SOA platform, depicts a Web services-centric operational management view along with a simple SLA-based analysis workflow for correlated problem detection. BEA made this easier than usual by publishing Web services statistics via JMX, and by publishing performance statistics of Web services organized into business processes via its Workshop product.
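
For readers who want to experiment with this, the sketch below shows generic JSR-160 JMX polling of the kind such a project relies on. The connector URL, credentials, MBean object name, and attribute name are all placeholders, not WebLogic's actual names.

    import java.util.HashMap;
    import java.util.Map;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // Poll a server's JMX agent for a published Web services statistic.
    public class JmxStatsPoller {
        public static void main(String[] args) throws Exception {
            // Hypothetical connector address and credentials.
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://appserver:9999/jmxrmi");
            Map<String, Object> env = new HashMap<String, Object>();
            env.put(JMXConnector.CREDENTIALS, new String[] { "monitor", "password" });

            JMXConnector connector = JMXConnectorFactory.connect(url, env);
            try {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                // Hypothetical MBean and attribute; substitute the server's real names.
                ObjectName stats = new ObjectName("example:type=WebServiceStats,name=OrderService");
                Object avgResponse = mbs.getAttribute(stats, "AverageResponseTime");
                System.out.println("OrderService avg response = " + avgResponse + " ms");
            } finally {
                connector.close();
            }
        }
    }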

Summary
The SOA trend - already pronounced across the industry - will, in my view, only accelerate over the next several years. The promise of enhanced flexibility, adaptability, and agility in the context of "everything services" will win in the end. However, the complete value of SOA will be fully realized only when all parties involved in IT service delivery and service support, across the entire application life cycle, work together with the common goal of designing, coding, testing, deploying, and managing services around the common objectives of the business.

This is an exciting time for IT developers and operations. In an SOA world, they are both seated at the head table as trusted advisors to the business and as critical partners for any key revenue-generating or cost-reduction objectives. By taking a unified, service-oriented approach to designing, deploying, and managing business services, they have a wonderful opportunity to get it right.

More Stories By Franco R. Negri

Franco Negri is the founder, CTO, and chief strategist of PANACYA. In a 23-year career with leading suppliers and consumers of advanced management technology, Franco developed a keen understanding of market needs and a strong vision for the next generation of management technology. He was most recently VP, Product Marketing and VP, Research & Development at Computer Associates, where he was responsible for Unicenter TNG - CA's flagship Enterprise Systems Management product line.


