
Simplifying Infrastructure Software

Adaptive computing, self-healing systems, Grid and on-demand computing, autonomic computing.... Vendors from all sides are throwing buzzwords around, a new one every day, or so it seems.

This month we'll try to make sense of it all by looking at what is here today, what will be here tomorrow, and what is mere science fiction. More important, we'll examine how these new ideas impact your ability to develop and deploy applications.

In tough economic times, IT managers are asked to do more with less. That means driving down the cost of everything from laptops to enterprise-wide applications. The cost of deploying an application lies primarily in two areas: hardware and people. Standards-based software platforms such as BEA WebLogic have been instrumental in reducing cost and time-to-market of applications by providing a reliable base and a set of built-in services.

Increased Complexity
Standard platforms also enable a much higher degree of application interoperability by allowing applications to talk to each other using standard protocols such as RMI, JCA, or SOAP (Web services). That move from silo architectures to interconnected, high-level architectures has increased the complexity of applications considerably. From client/server we've gone to Web client-Web server-business tier-database, with the business tier potentially connecting to many other applications, including legacy systems and Web services across the Internet. We have Web site objects, workflow objects, database connections, and JCA connections. The platform has kept up with this increased complexity, but only barely. Taking applications to the next level will require a new set of technologies that radically simplifies their development, deployment, and maintenance. We group these technologies under the label "Adaptive Computing" because in that model the infrastructure adapts itself to the application: it optimizes, provisions, and heals itself without the intervention of a developer or administrator.

From Grid to Adaptive Computing
Grid computing is among the first ideas for sharing and optimizing computing resources, and comes straight from the halls of academia. The idea behind Grid is to take some large chunk of work, say the mapping of the human genome, and break it up into many small chunks spread over many computers. The Search for Extraterrestrial Intelligence at Home (SETI@home) project at Berkeley is an example of this. Its goal is to analyze radio telescope data to detect specific patterns indicative of extraterrestrial intelligence. The amount of data collected is so enormous that no single computer could possibly process it by itself, so researchers devised a scheme in which anyone with an Internet connection and a PC could participate by running a screen-saver program capable of analyzing a small chunk of data. When your computer is idle, the program uses the idle CPU cycles for the project. To date, the SETI@home project has had 4,257,524 users who contributed a total of 1,336,810.852 years of computer time. So far no one has found an extraterrestrial. (Note: This may or may not be entirely true. At least one signal matching the target profile was recorded by the Big Ear radio telescope at Ohio State University on the night of August 15, 1977. It was never detected again.) Since then many other projects have emulated SETI@home to solve hard scientific problems, from breaking cryptographic keys to finding a smallpox vaccine.
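The work-splitting at the heart of a SETI@home-style grid can be sketched in a few lines of Java. This is a hypothetical illustration of the chunking idea only; the class and method names are ours, not code from any grid toolkit:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: divide a large job into fixed-size work units,
// the way a grid hands each volunteer PC a small chunk of data to analyze.
public class WorkSplitter {
    // Returns [start, end) index pairs covering totalItems in chunks of chunkSize.
    public static List<int[]> split(int totalItems, int chunkSize) {
        List<int[]> units = new ArrayList<>();
        for (int start = 0; start < totalItems; start += chunkSize) {
            int end = Math.min(start + chunkSize, totalItems);
            units.add(new int[] {start, end});
        }
        return units;
    }
}
```

Each work unit can then be dispatched to an idle machine independently, which is what makes this style of problem (embarrassingly parallel, no shared state between chunks) such a natural fit for the grid model.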

More recently, the Globus project has developed the Globus Toolkit, an open-source implementation of a Grid infrastructure, written in C. The toolkit is a "bag of services" that can be used to develop Grid applications and programming tools. While some companies are talking about using Globus in an enterprise setting, Globus is really designed for the scientific and engineering problems we just described rather than the problems found in corporate IT.

Think Globally, Act Locally
Like many academic ideas, Grid needs to be refined before it can be used in the real world. The vast majority of businesses, enterprises, and government organizations don't want to spread their data or their applications all over the Internet, or even across computers they don't control completely. While interacting with computers and services on a different network or across the Internet is common practice, sending one's applications and data is not. IT professionals want to maintain administrative control over their IT infrastructure. IT departments want to make the most efficient use of their hardware and don't want idle CPUs. The solution is to evolve and broaden the Grid to the more powerful concept of Adaptive Computing. Adaptive Computing is an umbrella term for a far more intelligent application infrastructure. Such an infrastructure makes better use of resources through dynamic provisioning, self-healing, and self-tuning.

Better Provisioning
IT departments must often allocate enough machines to handle peak demand for a particular application, leaving most of their boxes idle most of the time. Traffic at e-commerce sites such as Amazon.com or FedEx may be highest in the weeks leading up to Christmas, but lowest after New Year's. A CRM application may peak during the day when customers call in while the inventory application could make use of the same hardware at night, when no one is calling in. Upcoming versions of application infrastructure will let applications share hardware and other resources effectively to minimize duplication and hardware costs.

While saving on hardware costs can generate large savings, development and maintenance costs dominate the cost of deploying an enterprise application. Companies like Microsoft and BEA have focused on reducing the cost of development with tools like BEA WebLogic Workshop and Visual Studio .NET. The cost of testing, optimization, management, and administration, however, is still too high. This is where so-called "self-tuning" and "self-healing" applications can save an enterprise a lot of money.

Imagine, if you will, a system that notices that the performance of one of its processes is slowly degrading over time. After running a diagnostic procedure, it concludes that one of the applications running in the JVM is leaking memory. At that point it will notify an operator and take action by itself: it may quiesce the application in question (i.e., instruct the application not to take any new requests, complete all outstanding requests, and shut it down) while leaving the other applications alone and the process running. Or it may quiesce all applications and either 1) restart the process minus the offending application or 2) restart the process with a bigger heap, until the application is fixed. The BEA WebLogic Platform provides robust self-healing features, including fail-over and automatic connection pool resizing, but this is just the beginning, and you'll be seeing much more in that area.
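The quiesce sequence described above (refuse new requests, drain outstanding ones, then shut down) can be sketched in plain Java. The names here are illustrative assumptions, not actual WebLogic APIs:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of quiescing a single application:
// stop accepting new requests, let in-flight work finish, then return.
public class QuiescingApp {
    private volatile boolean accepting = true;
    private final AtomicInteger inFlight = new AtomicInteger();

    // Runs the work if the application is still accepting requests;
    // returns false once quiescing has begun.
    public boolean tryHandleRequest(Runnable work) {
        if (!accepting) return false;
        inFlight.incrementAndGet();
        try {
            work.run();
        } finally {
            inFlight.decrementAndGet();
        }
        return true;
    }

    // Refuse new requests, then wait for all outstanding requests to drain.
    public void quiesce() {
        accepting = false;
        while (inFlight.get() > 0) {
            Thread.yield();
        }
    }
}
```

The point of the pattern is that the offending application winds down gracefully while the process and its sibling applications keep running.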

While self-healing is the ability to deal with exceptional conditions gracefully, self-tuning is about improving the application's performance under normal conditions. In other words, self-tuning is the ability of the platform to optimize itself for a particular application. Platforms such as BEA WebLogic have hundreds if not thousands of configuration and tuning knobs. Today a typical application is tuned in a testing lab by a developer with a load simulator in one hand and a tuning guide in the other. A developer or an administrator can adjust many parameters, including memory heap size, the number of execution threads, the number of I/O threads, the size of EJB caches, or the size of a JMS queue. The idea behind self-tuning is to let the infrastructure monitor the application, gather and analyze the data, and, based on that data, optimize the application automatically. This has the twin benefits of making the infrastructure easier to use and improving application performance. As with self-healing, the BEA WebLogic Platform has been leading the pack with self-tuning features. Its J2EE JDBC drivers, the software that lets Java applications connect to databases, have long been self-tuning. Here again, there is much more we can do.
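The monitor-analyze-adjust loop behind self-tuning can be illustrated with a toy controller that resizes a worker-thread target from an observed request backlog. This is a minimal sketch of the feedback-loop idea under assumed thresholds, not how any real platform tunes itself:

```java
// Hypothetical self-tuning sketch: grow the thread target when the
// backlog per thread is high, shrink it when threads sit mostly idle.
public class SelfTuner {
    private int threads;
    private final int min;
    private final int max;

    public SelfTuner(int initial, int min, int max) {
        this.threads = initial;
        this.min = min;
        this.max = max;
    }

    // One control-loop step. The 2.0 / 0.5 thresholds are arbitrary
    // illustrative values: above the band add a thread, below it remove one.
    public int adjust(int backlog) {
        double perThread = (double) backlog / threads;
        if (perThread > 2.0 && threads < max) {
            threads++;
        } else if (perThread < 0.5 && threads > min) {
            threads--;
        }
        return threads;
    }
}
```

A real implementation would feed many such signals (heap pressure, cache hit rates, queue depths) into the same kind of loop, which is exactly the data-gathering and automatic optimization the paragraph describes.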

Easier Deployment
The idea behind dynamic provisioning, or the sharing of hardware resources, is to treat a large pool of computers (a distributed system) as if it were just one computer, much like a mainframe. We call this the "Virtualized Mainframe." BEA WebLogic has pioneered the most advanced and robust implementation of the key concepts needed to do this, such as clustering, load balancing, and fail-over. There is much left to do, however, and you should look for some exciting improvements in the coming years. These include distributed application deployment, so that deployment, undeployment, and quiescing of applications across a domain become seamless, and application containment, so that a specific application can be granted a specific amount of resources, but no more. Containment is key to ensuring that no one application can take down an entire IT domain, either by mistake through a programming error, or by design through a virus or a Trojan horse.
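Application containment boils down to enforcing a hard per-application budget for a shared resource. A minimal sketch, with hypothetical names and a byte budget standing in for whatever resource is being capped:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of application containment: a per-application
// quota that refuses allocations past a hard cap, so no one application
// can exhaust a resource shared by the whole domain.
public class ResourceQuota {
    private final long capBytes;
    private final AtomicLong used = new AtomicLong();

    public ResourceQuota(long capBytes) {
        this.capBytes = capBytes;
    }

    // Grants the allocation only if it fits under the cap;
    // rolls back and refuses otherwise.
    public boolean tryAllocate(long bytes) {
        long newUsed = used.addAndGet(bytes);
        if (newUsed > capBytes) {
            used.addAndGet(-bytes);
            return false;
        }
        return true;
    }

    public void release(long bytes) {
        used.addAndGet(-bytes);
    }
}
```

The containment guarantee comes from the refusal path: a leaking or malicious application hits its own cap and fails, while its neighbors keep their full share.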

Reducing Complexity
All of these features have one aspect in common: automation. Automation, or letting the infrastructure do more and the administrator do less, is the only viable way to reduce complexity. Managing, optimizing, understanding, and debugging the applications of the future will only be possible through radical simplification. This is what Adaptive Computing is about.

More Stories By Benjamin Renaud

Benjamin Renaud is a strategist in the Office of the CTO at BEA. In that role he helps set BEA's technical vision and guide its execution. He came to BEA via the acquisition of WebLogic, where he was a pioneer in Java and Web application server technology. Prior to joining WebLogic, Benjamin worked on the original Java team for Sun Microsystems, where he helped create Java 1.0, 1.1 and 1.2.

Reproduced with permission from BEA Systems
