Building Infrastructure-Aware Applications


Just as service-based applications leverage a shared set of application resources, infrastructure-aware applications leverage a shared infrastructure that can adapt to meet the needs of the application. This article will introduce the basics of a shared infrastructure, look at the Blue Titan Network Director and how it interoperates with WebLogic Enterprise Platform to enable infrastructure-aware application development, and provide a glimpse into the future of adaptive applications.

Infrastructure-Aware Applications
Applications have traditionally been stand-alone islands of functionality. Web services are changing this by providing a natural way to decompose applications into granular pieces, allowing them to be recomposed and recombined across multiple systems for different uses. This granularity, along with the ability to compose higher-level applications out of individual pieces, provides developers with new facilities to build extremely sophisticated applications with a high degree of reuse and efficiency.

A number of emerging standards and technologies will extend this granularity and reusability down into the application infrastructure itself, allowing infrastructure-aware applications to interact with the application backbone in real time to serve broader service-level or reliability requirements. For example, if the latency of a Web service increases to an unacceptable level due to a traffic spike, an infrastructure-aware application would be able to increase capacity by adding another service instance or, alternatively, limit access to only priority customers. Either of these actions would ensure that performance stays within the bounds of service-level agreements (SLAs) established with the most important customers.

Blue Titan gives developers a practical way to create infrastructure-aware applications by providing a shared infrastructure that virtualizes core functions such as SLA enforcement, routing, failover, access control, logging, and prioritization into a common set of shared services that can be reused across any number of Web service applications. This clean separation of application from infrastructure functionality allows developers to focus on creating core business logic without concern for deployment details, while also allowing administrators to define and enforce consistent, enterprise-wide control policies for all Web services.

Shared Infrastructure
The benefits of shared infrastructure are numerous and clear. From a development perspective, developers are no longer burdened with building infrastructure into every application. Instead, they simply utilize a set of shared services that are executed in the network. From an administration perspective, administrators can define and apply uniform infrastructure practices to all of the applications they manage, and maintain a well-defined, common set of services and infrastructure facilities. From a resource utilization perspective, infrastructure can be managed and scaled as a cohesive set of resources to meet broad performance or reliability requirements, rather than as a disparate set of isolated islands.

To create an adaptive application backbone, a shared infrastructure must have four key attributes:

  • A set of shared network nodes that monitor and route Web service traffic in accordance with established infrastructure policies: These nodes proxy all Web service traffic and are generally managed by the enterprise IT organization.
  • A set of shared infrastructure services that can be provisioned across an organization to be utilized by multiple developer groups: These services either run on the shared network nodes or on specialized enterprise systems (for example, a shared authentication service would probably run on an existing identity system).
  • The ability to define and enforce unified policies across the entire shared infrastructure: These policies govern such shared functions as authentication and prioritization of requestors, logging of inbound messages, and compensation for breached SLAs.
  • An open, service-based interface that provides access to infrastructure control functions and service metadata.

With these four pieces in place, it's possible to create an adaptive feedback loop between real-time service performance and the underlying application infrastructure.

In Figure 1, a continuous compliance cycle runs in the background of every application to verify that all associated infrastructure policies are met. If all policy requirements are met, this cycle just hums away unnoticed. When a policy is breached (unacceptable latency, multiple unauthorized requests, etc.), an event is generated that triggers a set of compensation services. These services adapt operational properties of the shared infrastructure to return the service to a compliant state.
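The decision logic of that loop can be sketched in a few lines of Java. This is purely illustrative: the class, the method names, and the latency threshold are hypothetical stand-ins, not Network Director's actual API.

```java
// Illustrative sketch of the compliance/compensation cycle described above.
// All names here (ComplianceCycle, checkAndCompensate, the 500ms threshold)
// are hypothetical stand-ins, not the Network Director API.
public class ComplianceCycle {
    /** A hypothetical SLA policy: maximum acceptable latency in ms. */
    static final long MAX_LATENCY_MS = 500;

    /** Returns true when the observed latency satisfies the policy. */
    static boolean isCompliant(long observedLatencyMs) {
        return observedLatencyMs <= MAX_LATENCY_MS;
    }

    /** Checks compliance; on a breach, triggers compensation services. */
    static String checkAndCompensate(long observedLatencyMs) {
        if (isCompliant(observedLatencyMs)) {
            return "compliant"; // the cycle hums away unnoticed
        }
        // In a real deployment this step would invoke infrastructure Web
        // services (e.g., add a service instance or restrict access).
        return "compensation-triggered";
    }

    public static void main(String[] args) {
        System.out.println(checkAndCompensate(450)); // compliant
        System.out.println(checkAndCompensate(800)); // compensation-triggered
    }
}
```

The key design point is that the compensation branch is just another Web service invocation, which is what makes the loop extensible.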

A Practical Implementation of Shared Infrastructure
Blue Titan Network Director is enterprise software that enables the control and coordination of Web services. It provides a shared infrastructure of Control Points (network nodes) and Fabric Services (shared infrastructure services). When used in conjunction with the BEA WebLogic Platform, Network Director enables a practical path toward infrastructure-aware applications.

Services created in BEA WebLogic Workshop are registered with the Network Director, where they become manageable network resources. Once registered, services are assigned specific security, management, and monitoring policies that are enforced across the enterprise in a consistent, unified way. Developers are responsible for creating services with well-defined interfaces and WSDLs, and IT handles the rest. This way, if policies change (say a new logging requirement is imposed), the service itself remains valid and usable.

Fabric Services
Currently Blue Titan provides a set of over 80 shared, reusable services out of the box. These Fabric Services allow developers and administrators to interact with the Network Director programmatically, without custom coding or scripting. They give programmatic access to service metadata, service management functions, and control functions. This lets IT build infrastructure policies that can affect traffic flow, capacity, and access properties of the network to meet operational requirements.

Indirect Addressing Through Control Points
Blue Titan Control Points execute and enforce Web service control policies across an enterprise. Control Points are servlet-based components that run on BEA WebLogic Server and are deployed across a network as dictated by traffic, security, or reliability requirements. Upon registration with the Network Director, all Web service interfaces are assigned a virtual address that references a Control Point. All requests for the service are routed first to a Blue Titan Control Point and then to the corresponding service provider. This indirect addressing enables all of the control, coordination, and metadata functions provided by Network Director. As requests and responses pass through the Control Point, policy is enforced (access, failover, etc.) and operational metadata (response time, uptime percentage, etc.) is gathered and stored in logs. This information can then be consumed by any Web service-compatible application, including WebLogic Workshop or Portal.
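The indirect-addressing idea can be modeled in plain Java. This is a toy sketch under stated assumptions: a real Control Point is a servlet on WebLogic Server enforcing full policy, not a hash map, and the class name and addresses below are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of indirect addressing through a Control Point.
// Class name, addresses, and data structures are hypothetical; a real
// Control Point is a servlet-based component, not an in-memory map.
public class ControlPointRouter {
    // virtual address -> actual service provider endpoint
    static final Map<String, String> routes = new HashMap<>();
    // operational metadata gathered as traffic passes through
    static final List<String> accessLog = new ArrayList<>();

    /** Registration assigns a virtual address that fronts the provider. */
    static void register(String virtualAddr, String providerAddr) {
        routes.put(virtualAddr, providerAddr);
    }

    /** Requests hit the virtual address first; the Control Point enforces
     *  policy and logs metadata before forwarding to the provider. */
    static String route(String virtualAddr) {
        String provider = routes.get(virtualAddr);
        if (provider == null) {
            throw new IllegalArgumentException("no such registered service");
        }
        accessLog.add(virtualAddr + " -> " + provider);
        return provider;
    }

    public static void main(String[] args) {
        register("/virtual/tradeStatus", "http://host1:7001/tradeStatus");
        System.out.println(route("/virtual/tradeStatus"));
    }
}
```

The point of the indirection is that consumers bind only to the virtual address, so providers can be moved, replicated, or failed over without breaking any caller.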

Infrastructure-Aware Application Examples
Using the BEA WebLogic Enterprise Platform and the Blue Titan Network Director, developers and enterprise IT can collaborate to create an adaptive infrastructure. Given that all visibility and control functions are Web services themselves, the flexibility of the resulting infrastructure is almost unlimited. The examples below demonstrate a few simple ways that a developer can utilize the shared infrastructure to make more reliable and responsive applications.

Analyzing Service Metadata at Build Time
Using the Blue Titan Network Director and BEA WebLogic Workshop, a developer can query up-to-the-minute performance information for a specific service. With this information, the developer can decide whether to use that service or an alternative with a better performance and reliability history (see Figure 2).

Using WebLogic Workshop, a developer would call the Blue Titan Fabric Service listConsumableServices to display which services are available for use. In this example, assume that there are two versions of a service: ServiceA.1 and ServiceA.2. Then the Fabric Services getLatencyByService, getUptimeByService, and getUsageByService would be called to view the metadata for ServiceA.1 and ServiceA.2. The resulting information would then be used to decide which service is better suited to the application (see Figure 2).
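The selection logic behind this comparison can be modeled in a few lines of Java. The Metadata class and the pickBetter heuristic below are hypothetical; in practice the latency and uptime figures would come back from the getLatencyByService and getUptimeByService Fabric Services named above.

```java
// Hypothetical sketch of choosing between service versions by metadata.
// The Metadata holder and the selection heuristic are illustrative only;
// real figures would be returned by the Fabric Services.
public class ServiceSelector {
    static class Metadata {
        final String name;
        final long latencyMs;    // from getLatencyByService
        final double uptimePct;  // from getUptimeByService
        Metadata(String name, long latencyMs, double uptimePct) {
            this.name = name;
            this.latencyMs = latencyMs;
            this.uptimePct = uptimePct;
        }
    }

    /** One plausible heuristic: prefer higher uptime, break ties on latency. */
    static Metadata pickBetter(Metadata a, Metadata b) {
        if (a.uptimePct != b.uptimePct) {
            return a.uptimePct > b.uptimePct ? a : b;
        }
        return a.latencyMs <= b.latencyMs ? a : b;
    }

    public static void main(String[] args) {
        Metadata v1 = new Metadata("ServiceA.1", 320, 99.1);
        Metadata v2 = new Metadata("ServiceA.2", 180, 99.9);
        System.out.println(pickBetter(v1, v2).name); // ServiceA.2
    }
}
```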

Note that this sequence of Fabric Service calls could easily be aggregated into a single coarse-grained Web service named listServiceMetadata. This service could then be provisioned to developers for future use, made into a Workshop Control, or hooked into BEA WebLogic Portal to provide a graphical interface. The result is a streamlined process for discovering and acting upon service metadata at build time.

Deploying Additional Service Instances Based on SLA Breach
An infrastructure-aware application can interact with the underlying WebLogic Platform to fulfill service-level agreement (SLA) requirements. Assume that a certain high-value service, tradeStatus, has a maximum latency constraint of 500ms. In other words, the service provider has guaranteed that tradeStatus will execute within 500ms or the usage fee will be returned to the consumer. To enforce this constraint the service provider utilizes the Blue Titan Network Director.

Look again at the flowchart in Figure 1. The compliance cycle runs continuously to monitor the operational metrics of tradeStatus. A usage surge causes latency to exceed 500ms, and an event is generated that kicks off the compensation cycle. Given that the compensation cycle simply calls other (infrastructure) Web services, it can perform any number of actions in response to the event. In this case, the compensation cycle will call two Workshop Controls, deployWebService and registerWithBlueTitan, which will bring an additional instance of tradeStatus online, increasing capacity and reducing latency (see Figures 3 and 4).

The Workshop Control deployWebService invokes a previously defined set of Ant tasks that automatically build an EAR in Workshop and deploy it to an available WebLogic Server. Then the Workshop Control registerWithBlueTitan invokes a series of Fabric Services that register the new WSDL with the Network Director and assign the appropriate policy. Once that is done, the Web service will be accessed automatically by consumers and included in the compliance cycle. Should the latency remain out of bounds, the process can be repeated.
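This two-step sequence can be sketched as follows. The names deployWebService and registerWithBlueTitan come from the text, but their bodies here are simplified stand-ins: the real Controls invoke Ant build tasks and Fabric Service calls that are not shown in this article.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the two compensation Controls described above.
// The method names come from the article; the bodies are illustrative
// placeholders for the actual Ant tasks and Fabric Service invocations.
public class Compensation {
    static final List<String> instances = new ArrayList<>();

    /** Real Control: run Ant tasks to build an EAR in Workshop and
     *  deploy it to an available WebLogic Server. Here we just record
     *  a new instance name. */
    static String deployWebService(String serviceName) {
        String instance = serviceName + "-" + (instances.size() + 1);
        instances.add(instance);
        return instance;
    }

    /** Real Control: register the new WSDL with the Network Director
     *  and assign the appropriate policy. Here we just confirm the
     *  instance exists. */
    static boolean registerWithBlueTitan(String instance) {
        return instances.contains(instance);
    }

    public static void main(String[] args) {
        // Compensation cycle: deploy a new tradeStatus instance,
        // then register it so consumers and the compliance cycle see it.
        String inst = deployWebService("tradeStatus");
        System.out.println(inst + " registered: " + registerWithBlueTitan(inst));
    }
}
```

Because both steps are themselves service invocations, the cycle can simply repeat them if latency remains out of bounds.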

Conclusion
BEA and Blue Titan are actively collaborating to provide a best-in-class Web service infrastructure. The WebLogic Platform is purpose-built to compose and run business-critical Web services. Blue Titan Network Director augments these capabilities by providing a common control infrastructure to ensure that all Web services are utilized in a consistent way across an enterprise. Today this offers customers a practical way to operate reliable and scalable Web service applications. In the near future, as customers utilize conversations to support asynchronous processes, BEA and Blue Titan will extend the flexibility of Web services into event-driven, stateful processes that span multiple systems or domains. The end result is a 100% standards-based approach to enterprise computing and a dramatic reduction in the cost of building, integrating, and operating business applications.

About the Author
Robert Shear is the director of product management for Blue Titan Software. He has more than 10 years of experience in the high-tech industry. Before joining Blue Titan, Robert was director of marketing for CommerceRoute, a pioneer in XML appliances, and a development engineer for the Scripps Institution of Oceanography. He holds a B.S. in physics from UC San Diego and an MBA from the Haas School of Business.

