
Simplifying Infrastructure Software


Adaptive computing, self-healing systems, Grid and on-demand computing, autonomic computing... Vendors from all sides are throwing buzzwords around, a new one every day, or so it seems.

This month we'll try to make sense of it all by looking at what is here today, what will be here tomorrow, and what is mere science fiction. More important, we'll examine how these new ideas impact your ability to develop and deploy applications.

In tough economic times, IT managers are asked to do more with less. That means driving down the cost of everything from laptops to enterprise-wide applications. The cost of deploying an application lies primarily in two areas: hardware and people. Standards-based software platforms such as BEA WebLogic have been instrumental in reducing cost and time-to-market of applications by providing a reliable base and a set of built-in services.

Increased Complexity
Standard platforms also allow a much higher degree of application interoperability by letting applications talk to each other using standard protocols such as RMI, JCA, or SOAP (Web services). That move from silo architectures to interconnected, high-level architectures has increased the complexity of applications considerably. From client/server we've gone to Web client-Web server-business tier-database, with the business tier potentially connecting to many other applications, including legacy systems and Web services across the Internet. We have Web site objects, workflow objects, database connections, and JCA connections. The platform has kept up with this increased complexity, but only barely. To take applications to the next level, a new set of technologies that radically simplify the development, deployment, and maintenance of applications will be required. We group these technologies under the label "Adaptive Computing" because in that model the infrastructure adapts itself to the application. It optimizes, provisions, and heals itself without the intervention of a developer or administrator.

From Grid to Adaptive Computing
Grid computing is among the first ideas for the sharing and optimizing of computing resources, and comes straight from the halls of academia. The idea behind Grid is to take some large chunk of work, say the mapping of the human genome, and break it up into many small chunks spread over many computers. The Search for Extraterrestrial Intelligence at Home (SETI@home) project at Berkeley is an example of this. Its goal is to analyze radio telescope data to detect specific patterns indicative of extraterrestrial intelligence. The amount of data collected is so enormous that no single computer could possibly process it by itself, so researchers devised a scheme whereby anyone with an Internet connection and a PC could participate by running a screen-saver program capable of analyzing a small chunk of data. When your computer is idle, the program uses the idle CPU cycles for the project. To date, the SETI@home project has had 4,257,524 users who contributed a total of 1,336,810.852 years of computer time. So far no one has found an extraterrestrial. (Note: This may or may not be entirely true. At least one signal matching the target profile was recorded by the Big Ear radio telescope at Ohio State University on the night of August 15, 1977. It was never detected again.) Since then many other projects have emulated SETI to solve hard scientific problems, from breaking cryptographic keys to finding a smallpox vaccine.
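The core Grid pattern described above can be sketched in a few lines: split one large, divisible job into small, independent chunks and hand each chunk to whatever worker is free. This is a minimal illustration in plain Java, not the actual SETI@home client; the class and method names are invented for the example, and the "analysis" is just a sum so the sketch stays self-contained.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal sketch of the Grid idea: break one large job into many small,
// independent chunks and process them on whatever workers are available.
class GridSketch {
    // Split the range [0, total) into chunks of at most chunkSize.
    static List<int[]> split(int total, int chunkSize) {
        List<int[]> chunks = new ArrayList<>();
        for (int start = 0; start < total; start += chunkSize) {
            chunks.add(new int[] { start, Math.min(start + chunkSize, total) });
        }
        return chunks;
    }

    // Each "worker" analyzes its chunk independently; here the analysis
    // is just a sum, standing in for the real per-chunk computation.
    static long analyze(int[] chunk) {
        long sum = 0;
        for (int i = chunk[0]; i < chunk[1]; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        List<Future<Long>> results = new ArrayList<>();
        for (int[] chunk : split(1_000_000, 100_000)) {
            results.add(workers.submit(() -> analyze(chunk)));
        }
        long total = 0;
        for (Future<Long> r : results) total += r.get();
        workers.shutdown();
        System.out.println(total); // same result a single machine would compute
    }
}
```

The essential property that makes this work at Internet scale is that the chunks share no state, so a worker needs nothing but its own chunk and can fail or disappear without affecting the others.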

More recently, the Globus project has developed the Globus Toolkit, an open-source implementation of a Grid infrastructure, written in C. The toolkit is a "bag of services" that can be used to develop Grid applications and programming tools. While some companies are talking about using Globus in an enterprise setting, Globus is really designed for the scientific and engineering problems we just described rather than the problems found in corporate IT.

Think Globally, Act Locally
Like many academic ideas, Grid needs to be refined before it can be used in the real world. The vast majority of businesses, enterprises, and government organizations don't want to spread their data or their applications all over the Internet, or even across computers they don't control completely. While interacting with computers and services on a different network or across the Internet is common practice, sending one's applications and data is not. IT professionals want to maintain administrative control over their IT infrastructure. IT departments want to make the most efficient use of their hardware and don't want idle CPUs. The solution is to evolve and broaden the Grid to the more powerful concept of Adaptive Computing. Adaptive Computing is an umbrella term for a far more intelligent application infrastructure. Such an infrastructure makes better use of resources through dynamic provisioning, self-healing, and self-tuning.

Better Provisioning
IT departments must often allocate enough machines to handle peak demand for a particular application, leaving most of their boxes idle most of the time. Traffic at e-commerce sites such as Amazon.com or FedEx may be highest in the weeks leading up to Christmas, but lowest after New Year's. A CRM application may peak during the day when customers call in while the inventory application could make use of the same hardware at night, when no one is calling in. Upcoming versions of application infrastructure will let applications share hardware and other resources effectively to minimize duplication and hardware costs.

While saving on hardware costs can generate large savings, development and maintenance costs dominate the cost of deploying an enterprise application. Companies like Microsoft and BEA have focused on reducing the cost of development with tools like BEA WebLogic Workshop and Visual Studio .NET. The cost of testing, optimization, management, and administration, however, is still too high. This is where so-called "self-tuning" and "self-healing" applications can save an enterprise a lot of money.

Imagine, if you will, a system that notices that the performance of one of its processes is slowly degrading over time. After running a diagnostic procedure, it concludes that one of the applications running in the JVM is leaking memory. At that point it will notify an operator and take action by itself: it may quiesce the application in question (i.e., instruct the application not to take any new requests, complete all outstanding requests, and shut down) while leaving the other applications alone and the process running. Or it may quiesce all applications and either 1) restart the process minus the offending application or 2) restart the process with a bigger heap, until the application is fixed. The BEA WebLogic Platform provides robust self-healing features, including fail-over and automatic connection pool resizing, but this is just the beginning and you'll be seeing much more coming in that area.
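The quiesce sequence described above (refuse new requests, drain the outstanding ones, then shut down) can be sketched with two counters. This is an illustrative sketch only; the class and method names are invented for the example and are not WebLogic APIs.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of quiescing an application: stop accepting new requests,
// let in-flight requests complete, and only then shut down.
class QuiescingApp {
    private final AtomicBoolean accepting = new AtomicBoolean(true);
    private final AtomicInteger inFlight = new AtomicInteger();

    // Returns false once the application has been quiesced.
    boolean handleRequest(Runnable work) {
        if (!accepting.get()) return false;          // step 1: refuse new requests
        inFlight.incrementAndGet();
        try {
            work.run();                              // do the actual request work
        } finally {
            inFlight.decrementAndGet();
        }
        return true;
    }

    // Step 2: drain all outstanding requests; the caller may then shut down.
    void quiesce() {
        accepting.set(false);
        while (inFlight.get() > 0) Thread.yield();   // wait for in-flight work
    }
}
```

The point of the pattern is that shutdown becomes invisible to clients already being served: no request is dropped mid-flight, only new arrivals are turned away (and, in a clustered deployment, routed to another instance).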

While self-healing is the ability to deal with exceptional conditions gracefully, self-tuning is about improving the application's performance under normal conditions. In other words, self-tuning is the ability of the platform to optimize itself for a particular application. Platforms such as BEA WebLogic have hundreds if not thousands of configuration and tuning knobs. Today a typical application is tuned in a testing lab by a developer with a load simulator in one hand and a tuning guide in the other. A developer or an administrator can adjust many parameters, including memory heap size, the number of execution threads, the number of I/O threads, the size of EJB caches, or the size of a JMS queue. The idea behind self-tuning is to let the infrastructure monitor the application, gather and analyze the data, and, based on that data, optimize the application automatically. This has the twin benefits of making the infrastructure easier to use and of improving application performance. As with self-healing, the BEA WebLogic Platform has been leading the pack with self-tuning features. Its J2EE JDBC drivers, the software that lets Java applications connect to databases, have long been self-tuning. Here again, there is much more we can do.
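At its simplest, a self-tuning loop is "observe a metric, nudge a knob, repeat." The toy function below adjusts one such knob, a thread count, based on the observed average queue wait. The thresholds, step size, and names are invented for illustration; they are not WebLogic defaults, and a real tuner would damp oscillation and weigh several metrics at once.

```java
// Toy self-tuning step: one iteration of "observe, then adjust a knob."
// Thresholds (10ms / 100ms) and the +/-1 step are illustrative only.
class SelfTuner {
    static int tuneThreadCount(int threads, double avgQueueWaitMs, int min, int max) {
        if (avgQueueWaitMs > 100.0 && threads < max) {
            return threads + 1;   // requests are queuing up: grow the pool
        }
        if (avgQueueWaitMs < 10.0 && threads > min) {
            return threads - 1;   // pool is over-provisioned: shrink it
        }
        return threads;           // within the target band: leave it alone
    }
}
```

Run in a loop against live measurements, a rule like this converges on a pool size the load actually needs, which is exactly the job the developer with the load simulator and tuning guide does by hand today.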

Easier Deployment
The idea behind dynamic provisioning, or the sharing of hardware resources, is to treat a large pool of computers (a distributed system) as if it were just one computer, much like a mainframe. We call this the "Virtualized Mainframe." BEA WebLogic has pioneered the most advanced and robust implementations of the key concepts needed to do this, such as clustering, load balancing, and fail-over. There is much left to do, however, and you should look for some exciting improvements in the coming years. These include distributed application deployment, so that deployment, undeployment, and quiescing of applications across a domain become seamless. They also include application containment, so that a specific application can be granted a specific amount of resources, but no more. This is key to ensuring that no one application can take down an entire IT domain, either by mistake through a programming error, or by design through a virus or a Trojan horse.

Reducing Complexity
All of these features have one aspect in common: automation. Automation, or letting the infrastructure do more and the administrator do less, is the only viable way to reduce complexity. Managing, optimizing, understanding, and debugging the applications of the future will only be possible through radical simplification. This is what Adaptive Computing is about.

More Stories By Benjamin Renaud

Benjamin Renaud is a strategist in the Office of the CTO at BEA. In that role he helps set BEA's technical vision and guide its execution. He came to BEA via the acquisition of WebLogic, where he was a pioneer in Java and Web application server technology. Prior to joining WebLogic, Benjamin worked on the original Java team for Sun Microsystems, where he helped create Java 1.0, 1.1 and 1.2.

Reproduced with permission from BEA Systems

