
Avoiding Middle-Aged Spread for Your WebLogic Infrastructure

Don't get crushed in the pizza-box rush

I have been knocking around the computer industry for a while now, and I've noticed some changes in my contemporaries and myself... For one thing, the buttons around the stomach of those old shirts that have eluded capture by my wife are looking a bit more strained than they did in the shirts' heyday.

Another thing: I can't run down the road to the shops without spending five minutes glowing a vivid purple hue. I'm sure that kind of thing used to be much easier. I suppose that the increasing demands on my time - having a young family, more responsibility at work, etc., mean that all that time I used to spend playing badminton and going to the gym is being consumed by other activities, and what spare time I do get is generally used up exercising my right arm by lifting 568 g of brown liquid to my lips...

The other thing that I have noticed is that the same thing has happened to many computer systems I have known over the years. Those once-lithe applications so keenly tuned and lovingly slotted into production are getting a little tired too. Gone are the hours of tuning and tweaking that kept them at their svelte best; now, tired operations folk gaze blankly at a wall of CPU meters creeping ever upward, occasionally brushing off a cobweb or two to ring a bell when the utilization hits 90 percent for the umpteenth time.

The applications have gone the same way as the folk who created them. They, too, are handling different workloads than they did in the early days of their lives - as workloads have increased, extra machines have been thrown at the applications in an attempt to keep them useful. Eventually an application will be unable to consume any more machine resources due to some architectural constraint or another, at which point the unfortunate thing will be taken out back and put out of its misery, and a new project will be spawned to create a replacement more fit for the purpose at hand. Maybe that's what they mean when they say that J2EE is a mature server-side platform...

Luckily, application servers, with their clustering technology, make deploying new capacity easier than it was in the two-tier client/server days, when the only option was to find an even bigger hunk of tin and hope that the application could make use of all of it. Application servers have thereby augmented the traditional "scale up" option for adding more power to the middle tier with a "scale out" alternative - just rack up another Linux blade, and Bob's your uncle. The flavor-of-the-month solution is to scale out with blades, since the acquisition cost of a really big server machine is generally rather high. However, the lifetime cost of blades is also very high: they consume a lot of power, which in turn means they require a lot of cooling, and they demand a lot of management. "Will you just apply that security patch to the OS please?" is the tip of a pretty unpleasant iceberg when there are a few tens of machines to "just" upgrade.

Rather than continue to bark up this tree, which feels more and more wrong, it is worth taking a step back and considering what drives the need for this incredible quantity of machines. The answer turns out to be - at least partially - the need to overprovision to provide headroom for demand spikes. That order-processing system that manages the sales of Christmas wrapping paper idles along utilizing 5 percent of a UNIX server for most of the year, just so it has 75 percent in its back pocket for the Christmas rush. And so it goes on, across the whole application estate, since each application generally has its own dedicated hardware on which to execute. It would be too difficult a configuration management job to have all of the applications on one machine - imagine if one of them needed an OS upgrade that the rest weren't compatible with. Therefore, in an effort to make configuration management manageable, lots of CPU headroom is wasted, complete with the associated wasted power and cooling.

It is this issue that the Azul appliance is designed to address. The appliance runs Java Virtual Machine-based applications on behalf of existing UNIX servers. An application that used to leave the UNIX CPU 15 percent idle under load will leave the same CPU 75 percent idle under the same load once it is mounting the appliance's compute capacity. Suddenly there is much more headroom on the UNIX box - maybe you can take the 10 servers you have already and replace them with just three that handle the same load and still provide redundancy for failover. The appliances providing this compute capacity are themselves deployed in a pool, which means they are redundant and highly available too. It also means that extra capacity can be brought online administratively, without making any changes at all to the UNIX application environments; the next time a Java application launches, any additional capacity will be available to service its needs.
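The 10-to-3 claim above is just arithmetic, and it can be sketched as a back-of-the-envelope model. This is an illustrative calculation only, not an Azul sizing tool; the class and method names, and the 85 percent target utilization cap, are assumptions invented for this example.

```java
// Back-of-the-envelope consolidation arithmetic (hypothetical model).
public final class CapacityMath {

    /** Residual UNIX-side work, in host-equivalents, once the heavy
     *  lifting has moved to the appliance pool. */
    public static double residualWork(int hosts, double busyAfterOffload) {
        return hosts * busyAfterOffload;
    }

    /** Hosts needed to carry that residual work while keeping each box
     *  below a target peak utilization (leaving failover headroom). */
    public static int hostsNeeded(double residualWork, double targetUtilization) {
        return (int) Math.ceil(residualWork / targetUtilization);
    }

    public static void main(String[] args) {
        // Ten servers whose application now leaves each CPU 75% idle
        // (i.e., 25% busy) under the same load:
        double residual = residualWork(10, 0.25);   // 2.5 host-equivalents
        int needed = hostsNeeded(residual, 0.85);   // cap each box at 85% busy
        System.out.println(needed + " UNIX hosts suffice"); // prints "3 UNIX hosts suffice"
    }
}
```

With 2.5 host-equivalents of residual work and an 85 percent ceiling per box, three hosts carry the load with room to spare - matching the consolidation described above.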

Finally, because the UNIX tier still exists to provide the configuration management control it is so good at, the appliances can be shared among multiple applications, so one pool of appliances can serve the Christmas wrapping paper application, the Easter Bunny application, and the Halloween gifts application. At any one time, the peaks and troughs will balance out, so the compute pool can be kept much more highly utilized than any one of the UNIX boxes ever could have been. In one fell swoop, the Network Attached Processing approach provided by the appliance pool has allowed consolidation of applications and provided the flexibility to add capacity on demand, whilst preserving the existing configuration management systems.

This is clearly a recipe for teaching some of the "old dog," middle-aged applications some new tricks, and thereby preserving the investment made in them for some years to come.

However, for the hopeless cases, the appliance approach brings other benefits: new applications can be written to take advantage of a 96GB heap without fear of stop-the-world garbage collection pauses stretching into infinity. They can use easy-to-maintain, coarse-grained lock structures - since the appliance provides optimistic Java lock concurrency - and they can be written to exploit large numbers of active threads, because even the smallest appliance provides 96 CPU cores, each of which can run a thread in parallel.
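The coarse-grained style described above can be sketched in ordinary Java. This is a minimal illustration, not Azul sample code; the class, method names, and workload are invented for this example. The point is that one monitor guards the whole structure - code that stays simple to maintain, and that optimistic lock concurrency is claimed to keep scalable when the contending threads touch different keys.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical example: one coarse lock over a shared map, driven by
// many threads. On a conventional JVM every caller serializes on the
// monitor; optimistic lock concurrency aims to let non-conflicting
// callers proceed in parallel under the same simple code.
public final class OrderBook {
    private final Map<String, Integer> stock = new HashMap<>();

    // One coarse-grained lock guards all access to the map.
    public synchronized void add(String sku, int qty) {
        stock.merge(sku, qty, Integer::sum);
    }

    public synchronized int level(String sku) {
        return stock.getOrDefault(sku, 0);
    }

    public static void main(String[] args) throws InterruptedException {
        OrderBook book = new OrderBook();
        Thread[] workers = new Thread[96]; // one thread per appliance core
        for (int i = 0; i < workers.length; i++) {
            final String sku = "sku-" + (i % 8); // 12 threads per SKU
            workers[i] = new Thread(() -> {
                for (int n = 0; n < 1_000; n++) book.add(sku, 1);
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(book.level("sku-0")); // 12 threads x 1,000 adds = 12000
    }
}
```

Writing this with fine-grained per-key locks would be faster on a small SMP box but far harder to get right; the appliance's pitch is that the simple version above is the one you get to keep.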

All in all, the Network Attached Processing solution provides a powerful way to combat mid-tier bloat, while reducing management costs and providing new opportunities for innovative Java architectures. After all, since the advent of Storage Area Networks, when was the last time you worried how much spare disk capacity there was in your machine? Why shouldn't compute become commoditized in the same way?

More Stories By Peter Holditch

Peter Holditch is a senior presales engineer in the UK for Azul Systems. Prior to joining Azul he spent nine years at BEA Systems, starting as one of their first Professional Services consultants in Europe and finishing up as a principal presales engineer. He has an R&D background (having originally worked on BEA's Tuxedo product) and his technical interests are in high-throughput transaction systems. Off the pitch, Peter likes to brew beer, build furniture, and undertake other ludicrously ambitious projects - but (generally) not all at the same time!
