Autonomic Management Architectures for Cloud Platforms

Discussing differing approaches for managing cloud environments

The platform services segment of cloud is multi-faceted... to say the least. Lately, likely spurred on by announcements like IBM Workload Deployer and VMware Cloud Foundry, I have been thinking quite a bit about one of those facets: environment management. To be clear, I'm not talking about management tools for end-users, though that topic is worthy of many discussions. Rather, I'm talking about the autonomic management capabilities for deployed environments.

Put simply, I define autonomic management capabilities as anything that happens without the user having to explicitly tell the system to do it. The user may define policies or specify directives that shape the system's behavior, but when it comes time to actually take action, it happens as if it were magic. Cloud users, specifically platform services users, are steadily coming to expect a certain set of management actions, such as elasticity and health management, to be autonomic. Increasingly, we see platform service providers responding to these expectations to create more intelligent, self-aware platform management capabilities for the cloud.
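
To make that concrete, here is a minimal sketch, in Python, of the kind of policy a user might declare and the kind of control loop a platform might run on the user's behalf. All of the names here (ElasticityPolicy, get_avg_cpu, instance_count, scale_to) are hypothetical illustrations, not any particular vendor's API:

    # A minimal sketch: the user declares a policy, and the platform runs the
    # control loop on the user's behalf. ElasticityPolicy, get_avg_cpu,
    # instance_count, and scale_to are hypothetical, not a real vendor API.
    import time
    from dataclasses import dataclass

    @dataclass
    class ElasticityPolicy:
        min_instances: int = 2
        max_instances: int = 10
        cpu_scale_up: float = 0.80    # add an instance above 80% average CPU
        cpu_scale_down: float = 0.30  # remove an instance below 30% average CPU

    def autonomic_loop(platform, deployment_id, policy, interval_seconds=60):
        """The user never calls scale_to directly; the platform does."""
        while True:
            cpu = platform.get_avg_cpu(deployment_id)
            count = platform.instance_count(deployment_id)
            if cpu > policy.cpu_scale_up and count < policy.max_instances:
                platform.scale_to(deployment_id, count + 1)
            elif cpu < policy.cpu_scale_down and count > policy.min_instances:
                platform.scale_to(deployment_id, count - 1)
            time.sleep(interval_seconds)

The user's side of the contract is the policy; everything below that line happens without the user's involvement.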

Now, it may be tempting to say that users do not need to know much about how these autonomic management techniques work. Beyond knowing what capabilities their platform provider offers and how to take advantage of them, the end user can be blissfully unaware. That's the point, right? I agree, up to a point. The user probably does not need to know much about the algorithms and inner workings that carry out the autonomic actions. However, I do think the user should be aware of the basic architectural approach used to deliver this kind of functionality. After all, the architectural approach can affect costs (in terms of resources used), and it will certainly shape how you go about debugging system failures.

When it comes to architectural approaches for providing these self-aware management capabilities, a few different philosophies prevail in the current state of the art. First, there is the isolated management approach. Here, separate processes, often running in their own virtual containers, manage one or many deployed environments. The main benefit of this approach is that the containers running the application environment do not compete for resources with the processes managing that environment. The management processes are completely separate; they observe from afar and take action as necessary. Of course, there are drawbacks as well. Chief among them is the fact that the management and workload components scale separately. As the workload components scale up, the management components will have to scale up too (surely not at a 1:1 ratio, but there is an upper bound to what a single management process can manage). Additionally, you have to manage the availability of the management and workload components separately. All of these factors can increase resource usage and management overhead.
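
To see the operational consequences, consider a minimal sketch of the isolated approach. The deployment objects and their fetch_metrics and restart_instance methods are hypothetical stand-ins, not a real management API:

    # Sketch of the isolated approach: one external manager, running in its own
    # container or VM, watches many deployments without sharing their resources.
    # The deployment objects and their fetch_metrics/restart_instance methods
    # are hypothetical stand-ins for a real management interface.
    import time

    class ExternalManager:
        def __init__(self, deployments, max_managed=200):
            # A single manager can only watch so many environments; past that,
            # the management tier itself has to scale and be kept available.
            if len(deployments) > max_managed:
                raise ValueError("too many deployments for one manager")
            self.deployments = deployments

        def run(self, interval_seconds=30):
            while True:
                for d in self.deployments:
                    metrics = d.fetch_metrics()      # observe from afar
                    if not metrics.get("healthy", True):
                        d.restart_instance()         # take action as necessary
                time.sleep(interval_seconds)

The max_managed ceiling is the crux: once you exceed it, you are running, scaling, and protecting a second tier of infrastructure whose only job is to watch the first.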

Another architectural approach in this arena is the self-sustaining approach. Here, the system embeds management capabilities and processes into each deployed environment. This removes the need for an external observer, means that management capabilities scale with your deployments, and eliminates the need to manage availability for separate components. These facts can contribute to reducing the overall management overhead. The main drawback in this case is the fact that the management processes can potentially compete with your application processes for resources. If the platform service solution you are using takes this approach to delivering management functionality, my advice is simple: do not assume anything. That means don't assume that management processes will adversely affect the performance of your application workload and don't assume they will not. Test, test, test, test!
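
A correspondingly minimal sketch of the self-sustaining approach, again with hypothetical hooks (check_local_health and remediate), might embed the management loop directly in the deployed process:

    # Sketch of the self-sustaining approach: the management logic lives inside
    # the deployed environment itself (here, a daemon thread in the same
    # process), so it scales and fails with the deployment but also shares its
    # CPU and memory. check_local_health and remediate are hypothetical hooks.
    import threading
    import time

    def start_embedded_manager(check_local_health, remediate, interval_seconds=30):
        def loop():
            while True:
                status = check_local_health()   # runs alongside the workload
                if not status.get("healthy", True):
                    remediate(status)
                time.sleep(interval_seconds)
        t = threading.Thread(target=loop, daemon=True, name="embedded-manager")
        t.start()
        return t

There is no external tier to operate here, but every cycle the loop spends checking and remediating is a cycle taken from the same environment that runs your workload, which is exactly why you should measure rather than assume.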

As always, there is a middle ground, a hybrid approach if you will. In this case, every deployment creates a running application environment with some amount of embedded management capability. The embedded management components can take some actions on their own, but rely on an external component for other actions or for raw information. This is quite popular in virtualized environments, since it can be tough for processes running in a guest virtual machine to get details about overall resource usage on the underlying physical host. Instead, processes running in the virtual machine call out to an external component that has visibility into resource usage at the physical level. This approach offers a compromise between the higher management overhead of the first approach and the dueling processes of the second, though it also inherits some of the drawbacks of both.
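
A minimal sketch of the hybrid approach, with an illustrative endpoint and JSON shape rather than any real API, might look like this:

    # Sketch of the hybrid approach: an in-guest agent decides and acts locally,
    # but asks an external component for host-level facts a guest cannot see on
    # its own. The URL, JSON shape, and act_locally callback are illustrative.
    import json
    import time
    import urllib.request

    def hybrid_agent(deployment_id, host_info_url, act_locally, interval_seconds=60):
        while True:
            # The external component has visibility into physical host usage.
            url = f"{host_info_url}/hosts/{deployment_id}"
            with urllib.request.urlopen(url) as resp:
                host = json.load(resp)
            # The embedded side combines that view with what it sees in-guest.
            if host.get("cpu_contention", 0.0) > 0.9:
                act_locally("throttle-background-work")
            time.sleep(interval_seconds)

The in-guest agent stays small because the heavy observation work lives elsewhere, but you still have an external dependency to run and keep available.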

I am not necessarily advocating for one architectural approach over the others. Some approaches certainly fit certain scenarios better, but I do not see a silver-bullet answer here. I simply think users should be aware of what their particular solution does in this respect and plan (and test!) accordingly.

More Stories By Dustin Amrhein

Dustin Amrhein joined IBM as a member of the development team for WebSphere Application Server. While in that position, he worked on the development of Web services infrastructure and Web services programming models. In his current role, Dustin is a technical specialist for cloud, mobile, and data grid technology in IBM's WebSphere portfolio. He blogs at http://dustinamrhein.ulitzer.com. You can follow him on Twitter at http://twitter.com/damrhein.
