Building Infrastructure-Aware Applications

Just as service-based applications leverage a shared set of application resources, infrastructure-aware applications leverage a shared infrastructure that can adapt to meet the needs of the application. This article will introduce the basics of a shared infrastructure, look at the Blue Titan Network Director and how it interoperates with WebLogic Enterprise Platform to enable infrastructure-aware application development, and provide a glimpse into the future of adaptive applications.

Infrastructure-Aware Applications
Applications have traditionally been stand-alone islands of functionality. Web services are changing this by providing a natural way to decompose applications into granular pieces, allowing them to be recomposed and recombined across multiple systems for different uses. This granularity, along with the ability to compose higher-level applications out of individual pieces, provides developers with new facilities to build extremely sophisticated applications with a high degree of reuse and efficiency.

A number of emerging standards and technologies will extend this granularity and reusability down into the application infrastructure itself, allowing infrastructure-aware applications to interact with the application backbone in real time to serve broader service-level or reliability requirements. For example, if the latency of a Web service increases to an unacceptable level due to a traffic spike, an infrastructure-aware application would be able to increase capacity by adding another service instance or, alternatively, limit access to priority customers only. Either action would ensure that performance stays within the bounds of the service-level agreements (SLAs) established with your most important customers.

Blue Titan gives developers a practical way to create infrastructure-aware applications by providing a shared infrastructure that virtualizes core functions such as SLA enforcement, routing, failover, access control, logging, and prioritization into a common set of shared services that can be reused across any number of Web service applications. This clean separation of application from infrastructure functionality allows developers to focus on creating core business logic without concern for deployment details, while also allowing administrators to define and enforce consistent, enterprise-wide control policies for all Web services.

Shared Infrastructure
The benefits of shared infrastructure are numerous and clear. From a development perspective, developers are no longer burdened with building infrastructure into every application. Instead, they simply utilize a set of shared services that are executed in the network. From an administration perspective, administrators can define and apply uniform infrastructure practices to all of the applications they manage, and maintain a well-defined, common set of services and infrastructure facilities. From a resource utilization perspective, infrastructure can be managed and scaled as a cohesive set of resources to meet broad performance or reliability requirements, rather than as a disparate set of isolated islands.

To create an adaptive application backbone, a shared infrastructure must have four key attributes:

  • A set of shared network nodes that monitor and route Web service traffic in accordance with established infrastructure policies: These nodes proxy all Web service traffic and are generally managed by the enterprise IT organization.
  • A set of shared infrastructure services that can be provisioned across an organization to be utilized by multiple developer groups: These services either run on the shared network nodes or on specialized enterprise systems (for example, a shared authentication service would probably run on an existing identity system).
  • The ability to define and enforce unified policies across the entire shared infrastructure: These policies govern such shared functions as authentication and prioritization of requestors, logging of inbound messages, and compensation for breached SLAs.
  • An open, service-based interface that provides access to infrastructure control functions and service metadata.

With these four pieces in place, it's possible to create an adaptive feedback loop between real-time service performance and the underlying application infrastructure.

In Figure 1, a continuous compliance cycle runs in the background of every application to verify that all associated infrastructure policies are met. If all policy requirements are met, this cycle just hums away unnoticed. When a policy is breached (unacceptable latency, multiple unauthorized requests, etc.), an event is generated that triggers a set of compensation services. These services adapt operational properties of the shared infrastructure to return the service to a compliant state.
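As a rough illustration of this loop, the sketch below polls a service's operational metadata and hands off to a compensation handler when a latency policy is breached. The PolicyMonitor and CompensationService interfaces, the threshold, and the polling interval are illustrative assumptions, not part of the Blue Titan or WebLogic APIs.

// Minimal sketch of the compliance/compensation cycle in Figure 1.
// PolicyMonitor and CompensationService are hypothetical stand-ins for
// calls to the shared infrastructure services.
interface PolicyMonitor {
    long getAverageLatencyMs(String serviceName) throws Exception;
}

interface CompensationService {
    void onPolicyBreach(String serviceName, long observedLatencyMs) throws Exception;
}

public class ComplianceCycle implements Runnable {

    private static final long MAX_LATENCY_MS = 500;     // example policy threshold
    private static final long POLL_INTERVAL_MS = 10000; // check every 10 seconds

    private final String serviceName;
    private final PolicyMonitor monitor;
    private final CompensationService compensation;

    public ComplianceCycle(String serviceName, PolicyMonitor monitor,
                           CompensationService compensation) {
        this.serviceName = serviceName;
        this.monitor = monitor;
        this.compensation = compensation;
    }

    public void run() {
        while (true) {
            try {
                long latency = monitor.getAverageLatencyMs(serviceName);
                if (latency > MAX_LATENCY_MS) {
                    // Policy breached: hand off to the compensation services, which
                    // adapt the shared infrastructure (add capacity, restrict access,
                    // and so on) to return the service to a compliant state.
                    compensation.onPolicyBreach(serviceName, latency);
                }
                Thread.sleep(POLL_INTERVAL_MS);
            } catch (InterruptedException e) {
                return; // stop the cycle when the monitoring thread is interrupted
            } catch (Exception e) {
                e.printStackTrace(); // in practice, log the fault and keep monitoring
            }
        }
    }
}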

A Practical Implementation of Shared Infrastructure
Blue Titan Network Director is enterprise software that enables the control and coordination of Web services. It provides a shared infrastructure of Control Points (network nodes) and Fabric Services (shared infrastructure services). When used in conjunction with the BEA WebLogic Platform, Network Director enables a practical path toward infrastructure-aware applications.

Services created in BEA WebLogic Workshop are registered with the Network Director, where they become manageable network resources. Once registered, services are assigned specific security, management, and monitoring policies that are enforced across the enterprise in a consistent, unified way. Developers are responsible for creating services with well-defined interfaces and WSDLs, and IT handles the rest. This way, if policies change (say a new logging requirement is imposed), the service itself remains valid and usable.

Fabric Services
Currently Blue Titan provides a set of over 80 shared, reusable services out of the box. These Fabric Services allow developers and administrators to interact with the Network Director programmatically, without writing custom infrastructure code or scripts. They give programmatic access to service metadata, service management functions, and control functions. This lets IT build infrastructure policies that can affect traffic flow, capacity, and access properties of the network to meet operational requirements.

Indirect Addressing Through Control Points
Blue Titan Control Points execute and enforce Web service control policies across an enterprise. Control Points are servlet-based components that run on BEA WebLogic Server and are deployed across a network as dictated by traffic, security, or reliability requirements. Upon registration with the Network Director, all Web service interfaces are assigned a virtual address that references a Control Point. All requests for the service are routed first to a Blue Titan Control Point and then to the corresponding service provider. This indirect addressing enables all of the control, coordination, and metadata functions provided by Network Director. As requests and responses pass through the Control Point, policy is enforced (access, failover, etc.) and operational metadata (response time, uptime percentage, etc.) is gathered and stored in logs. This information can then be referenced by any Web service-compatible application, including WebLogic Workshop or Portal.
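The snippet below sketches what indirect addressing looks like from the consumer's side: the client posts its SOAP request to the virtual address on a Control Point rather than to the provider itself. The URLs and the request payload are hypothetical; the point is simply that consumers only ever see the Control Point endpoint.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class IndirectAddressingExample {
    public static void main(String[] args) throws Exception {
        // Consumers are given only the virtual address, which resolves to a
        // Blue Titan Control Point rather than to the service provider itself.
        // (The real provider endpoint, e.g. an internal WebLogic Server URL,
        // stays private to the infrastructure.)
        URL virtualAddress = new URL("http://controlpoint.example.com/bt/ServiceA");

        // Hypothetical SOAP request payload.
        String soapRequest =
            "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
            + "<soapenv:Body><getStatus/></soapenv:Body></soapenv:Envelope>";

        HttpURLConnection conn = (HttpURLConnection) virtualAddress.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setDoOutput(true);
        OutputStream out = conn.getOutputStream();
        out.write(soapRequest.getBytes("UTF-8"));
        out.close();

        // The Control Point enforces policy (access, failover, logging) and
        // forwards the request to the provider; the consumer sees one endpoint.
        System.out.println("Control Point responded with HTTP " + conn.getResponseCode());
    }
}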

Infrastructure-Aware Application Examples
Using the BEA WebLogic Enterprise Platform and the Blue Titan Network Director, developers and enterprise IT can collaborate to create an adaptive infrastructure. Given that all visibility and control functions are Web services themselves, the flexibility of the resulting infrastructure is almost unlimited. The examples below demonstrate a few simple ways that a developer can utilize the shared infrastructure to make more reliable and responsive applications.

Analyzing Service Metadata at Build Time
Using the Blue Titan Network Director and BEA WebLogic Workshop, a developer can query up-to-the-minute performance information for a specific service. With this information, decisions can be made on whether to use a specific service or an alternative with a better performance and reliability history.

Using WebLogic Workshop, a developer would call the Blue Titan Fabric Service listConsumableServices to display which services are available for use. In this example, assume that there are two versions of a service: ServiceA.1 and ServiceA.2. Then the Fabric Services getLatencyByService, getUptimeByService, and getUsageByService would be called to view the metadata for ServiceA.1 and ServiceA.2. The resulting information would then be used to decide which service is better suited to the application (see Figure 2).
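A sketch of that build-time check is shown below. The FabricServiceClient stub and its method signatures are assumptions (in practice a client would be generated from the Fabric Service WSDLs); only the Fabric Service names are taken from the scenario above, and the scoring rule is purely illustrative.

// Hypothetical client stub for the Fabric Services named above; the return
// types and units are assumptions.
interface FabricServiceClient {
    String[] listConsumableServices() throws Exception;
    double getLatencyByService(String serviceName) throws Exception; // average latency in ms (assumed)
    double getUptimeByService(String serviceName) throws Exception;  // uptime percentage (assumed)
    long getUsageByService(String serviceName) throws Exception;     // request count (assumed)
}

public class ServiceSelectionExample {

    // Compare ServiceA.1 and ServiceA.2 and return the better candidate.
    public static String chooseServiceA(FabricServiceClient fabric) throws Exception {
        String[] services = fabric.listConsumableServices();
        String bestService = null;
        double bestScore = 0;

        for (int i = 0; i < services.length; i++) {
            String name = services[i];
            if (!name.startsWith("ServiceA")) {
                continue; // only interested in the two versions of ServiceA
            }
            double latencyMs = fabric.getLatencyByService(name);
            double uptimePct = fabric.getUptimeByService(name);
            long usage = fabric.getUsageByService(name);

            // Illustrative scoring only: favor high uptime and low latency.
            double score = uptimePct - (latencyMs / 10.0);
            System.out.println(name + ": latency=" + latencyMs + "ms, uptime="
                    + uptimePct + "%, usage=" + usage);

            if (bestService == null || score > bestScore) {
                bestService = name;
                bestScore = score;
            }
        }
        return bestService;
    }
}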

Note that this sequence of Fabric Service calls could easily be aggregated into a single coarse-grained Web service named listServiceMetadata. This service could then be provisioned to developers for future use, made into a Workshop Control, or hooked into BEA WebLogic Portal to provide a graphical interface. The result is a streamlined process for discovering and acting upon service metadata at build time.
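A minimal sketch of that coarse-grained aggregation, reusing the hypothetical FabricServiceClient stub from the previous listing, might look like this (the ServiceMetadata value object is likewise illustrative):

// One coarse-grained call in place of three fine-grained Fabric Service calls.
public class ServiceMetadataFacade {

    private final FabricServiceClient fabric;

    public ServiceMetadataFacade(FabricServiceClient fabric) {
        this.fabric = fabric;
    }

    public ServiceMetadata listServiceMetadata(String serviceName) throws Exception {
        ServiceMetadata metadata = new ServiceMetadata();
        metadata.serviceName = serviceName;
        metadata.latencyMs = fabric.getLatencyByService(serviceName);
        metadata.uptimePct = fabric.getUptimeByService(serviceName);
        metadata.usageCount = fabric.getUsageByService(serviceName);
        return metadata;
    }
}

// Illustrative value object returned by the coarse-grained service.
class ServiceMetadata {
    String serviceName;
    double latencyMs;
    double uptimePct;
    long usageCount;
}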

Deploying Additional Service Instances Based on SLA Breach
An infrastructure-aware application can interact with the underlying WebLogic Platform to fulfill service-level agreement (SLA) requirements. Assume that a certain high-value service, tradeStatus, has a maximum latency constraint of 500ms. In other words, the service provider has guaranteed that tradeStatus will execute within 500ms or the usage fee will be returned to the consumer. To enforce this constraint the service provider utilizes the Blue Titan Network Director.

Look again at the flowchart in Figure 1. The compliance cycle runs continuously to monitor the operational metrics of tradeStatus. A usage surge causes latency to exceed 500ms and an event is generated that kicks off the compensation cycle. Given that the compensation cycle simply calls other (infrastructure) Web services, it can perform any number of actions in response to the event. In this case, the compensation cycle will call two Workshop Controls, deployWebService and registerWithBlueTitan, which will bring an additional instance of tradeStatus online, increasing capacity and reducing latency (see Figures 3 and 4).

The Workshop Control deployWebService invokes a previously defined set of Ant tasks that automatically build an EAR in Workshop and deploy it to an available WebLogic Server. Then the Workshop Control registerWithBlueTitan invokes a series of Fabric Services that register the new WSDL with the Network Director and assign appropriate policy. Once that is done the Web service will be accessed automatically by consumers and included in the compliance cycle. Should the latency remain out of bounds, the process can be repeated.
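The plain-Java sketch below shows how this compensation step might be wired together. In Workshop the two controls would be declared as control fields; here they are modeled as hypothetical interfaces, and the method names, return types, and policy name are assumptions made for illustration.

// Hypothetical stand-in for the deployWebService Workshop Control: runs the
// predefined Ant tasks that build the EAR and deploy it to an available
// WebLogic Server, returning the WSDL URL of the new instance (assumed).
interface DeployWebServiceControl {
    String deployNewInstance(String serviceName) throws Exception;
}

// Hypothetical stand-in for the registerWithBlueTitan Workshop Control: calls
// the Fabric Services that register the new WSDL with the Network Director
// and assign the appropriate policy.
interface RegisterWithBlueTitanControl {
    void register(String wsdlUrl, String policyName) throws Exception;
}

public class TradeStatusCompensation {

    private final DeployWebServiceControl deployControl;
    private final RegisterWithBlueTitanControl registerControl;

    public TradeStatusCompensation(DeployWebServiceControl deployControl,
                                   RegisterWithBlueTitanControl registerControl) {
        this.deployControl = deployControl;
        this.registerControl = registerControl;
    }

    // Invoked by the compensation cycle when tradeStatus breaches its 500ms SLA.
    public void onSlaBreach(String serviceName, long observedLatencyMs) throws Exception {
        System.out.println(serviceName + " latency " + observedLatencyMs
                + "ms exceeds SLA; bringing another instance online.");

        // 1. Build and deploy an additional instance of the service.
        String newWsdl = deployControl.deployNewInstance(serviceName);

        // 2. Register the new instance so the Network Director can route traffic
        //    to it and include it in the compliance cycle.
        registerControl.register(newWsdl, "tradeStatus-sla-policy");
    }
}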

BEA and Blue Titan are actively collaborating to provide a best-in-class Web service infrastructure. The WebLogic Platform is purpose-built to compose and run business-critical Web services. Blue Titan Network Director augments these capabilities by providing a common control infrastructure to ensure that all Web services are utilized in a consistent way across an enterprise. Today this offers customers a practical way to operate reliable and scalable Web service applications. In the near future, as customers utilize conversations to support asynchronous processes, BEA and Blue Titan will extend the flexibility of Web services into event-driven stateful processes that span multiple systems or domains. The end result is a 100% standards-based approach to enterprise computing and a dramatic reduction in the cost of building, integrating, and operating business applications.

About the Author

Robert Shear is the director of product management for Blue Titan Software. He has more than 10 years of experience in the high-tech industry. Before joining Blue Titan, Robert was director of marketing for CommerceRoute, a pioneer in XML appliances, and a development engineer for the Scripps Institution of Oceanography. He holds a B.S. in physics from UC San Diego and an MBA from the Haas School of Business.
