
Measuring the Value of Software Infrastructure

What do you get for your license fee?

As I write, the noise generated around open source application servers and their claimed arrival in the world of enterprise computing continues unabated. In my view, the main reason the noise travels so far and seems so loud has nothing to do with the reality of the situation and everything to do with the media's love of controversy. That said, to take a position it is necessary to understand why you stand where you stand. Given that, and since my feet don't seem to want to walk me anywhere, I had better start charting my position... break out the sextant and get ready for some enterprise cartography!

So, What Is WebLogic?
A quick peek at my laptop shows me that it's a 32.3MB JAR file, with supporting JARs that bring it up to about 73MB of Java. So, presumably, to work out what your license fee buys, you divide the per-CPU cost by those 73MB and you have the value of the product per megabyte. Sounds like a rough-and-ready measure perhaps, but not totally crazy. So, I download my favorite open source alternative and I see a load of JAR files totaling about 66MB. Since the license cost is effectively zero and there is broadly the same amount of "product" there, what are you waiting for? The choice is clear!

A static analysis of this type, done on any software product, will clearly not yield a very meaningful result. Open source is a good mechanism for generating code bulk, and it is obviously cheaper to take a handout from an altruistic developer than to buy the fruits of an employed development team.

There must be something wrong with this logic... Otherwise, why does BEA Systems, which does nothing more than sell infrastructure licenses, have a market cap of over $3 billion, and why (despite all the noisy protestations to the contrary) are open source application servers not rampaging across the mission-critical environments of the world?

There is clearly intrinsic value to software beyond the lines of code that constitute its makeup.

So, is it support? BEA offers 24x7 mission-critical support to its customers, providing them with the assurance that if one of their critical systems fails, there are technicians on hand around the clock to diagnose problems, whether they lie inside or outside of the middleware layer, and to repair them if they lie within. Clearly, this is important - it cannot be cost effective for an organization such as a bank to employ enough people with enough in-depth technical skill to diagnose arbitrary problems on a 24x7 basis. Having the neck of the product author available to choke where necessary (and having the assurance that the vendor's engineers will be online to fix problems as needed) is a necessary part of the effective running of a technology-enabled business (i.e., any business at all).

Here pure open source software presents a problem - development is done on an ad hoc basis by a community of enthusiasts. They are clearly not going to be able or willing to stop their world when a user of their software hits a problem. As a result, an organization whose developers have used an open source infrastructure must ensure that it has the internal resources to provide this level of support.

Not only is this not cost effective (as noted above), but skilled systems developers are pretty thin on the ground anyway - as evidenced by the move to simplify Enterprise Java programming over the last few years in the face of Microsoft's low-skill entry point alternative - and it is only the most highly skilled developers who will be up to this kind of task. Better to underpin your business with a business relationship than to become dependent on a set of high-cost specialist individuals who, at the very least, will live in perpetual fear of falling under the proverbial bus...

Live in Perpetual Fear of the Proverbial Bus...
That all said, however, software support is a recurring charge, separate from the license fee. So while its availability, underpinned by a technology-specialist business, provides a very good set of reasons why licensed infrastructure software is good, it cannot be the whole story: despite the emergence of organizations that undertake to provide production support for open source software infrastructures, there is still no visible migration of mission-critical applications to open source software platforms.

I believe that the answer to this riddle does lie in support, but not in the kind of production support provided under a traditional production support contract. Allow me to take a step back to describe what I mean...

If you look at the software architecture of a typical production system, you will see many moving parts. Typically these systems consist of a set of business components that receive messages for processing (usually from incoming reliable queues, or from end user-facing applications). The processing will typically consist of applying some business rules (by the way, these are the principal value of the system!) and then updating one or more databases and potentially dispatching some more reliable messages. Usually, these messages represent flows of large amounts of business value (money for a bank, stock for a manufacturer, shipments for a logistics operator). Because of the value of these things, care is needed to ensure that each message is processed only once and that once processed, the results of the processing are not lost and are consistent from wherever they are viewed. The provision of these guarantees is usually the job of the infrastructure (often the transaction manager - I bet you wondered if I was ever going to mention that in this issue?!). After all, as already stated, the value of the application from a business perspective is the business rules - the business will just assume that the technology works.
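To make those guarantees concrete, here is a toy Java sketch of the pattern. The names and the in-memory queue, account map, and snapshot-and-restore "transaction" are all hypothetical stand-ins for this illustration; in a real deployment the application server's transaction manager would coordinate the message queue and the databases via two-phase commit rather than anything hand-rolled like this.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of the message-processing architecture described above.
public class ExactlyOnceSketch {
    static Deque<String> inbound = new ArrayDeque<>();      // stands in for the incoming queue
    static Map<String, Integer> accounts = new HashMap<>(); // stands in for the database
    static List<String> outbound = new ArrayList<>();       // stands in for outgoing messages

    // Process one message atomically: either every effect happens, or none does,
    // and the message is consumed exactly once.
    static boolean processOne() {
        String msg = inbound.peek();                 // don't consume until "commit"
        if (msg == null) return false;
        Map<String, Integer> snapshot = new HashMap<>(accounts);
        int outboundMark = outbound.size();
        try {
            String[] parts = msg.split(":");         // message format: "account:amount"
            String account = parts[0];
            int amount = Integer.parseInt(parts[1]); // the business rule: credit the account
            accounts.merge(account, amount, Integer::sum);
            outbound.add("credited " + account);
            inbound.poll();                          // "commit": consume the message
            return true;
        } catch (RuntimeException e) {
            accounts = snapshot;                     // "rollback": undo all effects
            while (outbound.size() > outboundMark) outbound.remove(outbound.size() - 1);
            return false;                            // message stays on the queue
        }
    }

    public static void main(String[] args) {
        inbound.add("alice:100");
        inbound.add("bogus-message");                // will fail the business rule
        processOne();
        processOne();
        System.out.println(accounts.get("alice"));   // 100
        System.out.println(inbound.size());          // 1: the bad message is not lost
        System.out.println(outbound.size());         // 1: no spurious outgoing message
    }
}
```

The snapshot-and-restore here is only a stand-in for what a real transaction manager provides across genuinely independent resources, where no single in-memory rollback is possible.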

To take this architecture into the realm of concrete examples, let's imagine that the incoming messages are arriving on an MQSeries queue, that the application data is held in an Oracle database, and that the customer reference information (which can occasionally be updated) is held in Sybase.

Our infrastructure software not only has to work flawlessly within itself, but the interfaces to MQ, Oracle, and Sybase have to work too.

Okay, you say, that's fine. As a developer, I will build and test this application from end to end before I put it into production, and then I will know that it works. What's the difference between an open source application server and WebLogic? The difference, it turns out, is huge. Imagine your test plan. It must include all of the business use cases, to make sure they complete successfully. It must include all of the edge cases in the business data, to make sure edges and errors are handled gracefully. So, now we have a test plan that will give us an appropriate level of confidence that the application meets our business requirements. We're done, right? Well... yes, if we are using WebLogic and supported versions of the databases and MQSeries. You know that BEA has tested this from an infrastructure perspective and that it all works, because that's what the documentation says (http://e-docs.bea.com/platform/suppconfigs/configs81/81_over/supported_db.html#1129093).

If you use an infrastructure platform that isn't explicitly certified against your external systems, you need to do this testing yourself. But, you say, didn't we do the testing earlier? Well, no - not from an infrastructure perspective. Given that we require transactional access to three resources, our application-level testing has shown that all three can commit the work in the success cases and roll it back in the failure cases. But we also need to test that if {MQSeries|Sybase|Oracle} fails to prepare, the transaction is rolled back; that if {Sybase|Oracle|MQSeries} fails after a successful prepare but before the transaction is logged, the resources are rolled back correctly; and that if {Oracle|Sybase|MQSeries} prepares and then fails after the transaction is logged, the resources are correctly rolled forward on recovery. That's quite a test matrix.

And if you change versions of one of these external systems, you will have to run through at least a subset of these tests again. This is clearly very time-consuming, and that's before you factor in how technically tricky it can be to simulate failures at these various points in the transaction processing life cycle. This is a lot of work added onto your project plan - or a lot of risk introduced into your deployment, if you simply choose to assume that this stuff will work.

And that assumes the tests all pass. What if they don't? Well, you can resort to whatever developer support you might get from the open source community, but if your problem is specific to your version of Sybase, you'd better hope that whoever understands the code out there has access to that same version too... and that they have time to reproduce and fix your problem. But never mind - you have access to the source, you can fix it yourself! Good luck!
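To illustrate why the matrix grows so quickly, here is a minimal two-phase commit simulation with fault injection at the prepare step. FlakyResource and the run() coordinator are hypothetical stand-ins invented for this sketch; a real test harness would have to wrap actual javax.transaction.xa.XAResource implementations and provoke the failures against live systems, which is exactly the hard part.

```java
import java.util.List;

// Minimal two-phase commit sketch illustrating one row of the failure matrix:
// a resource voting "no" at prepare forces everything else to roll back.
public class TwoPhaseCommitSketch {
    static class FlakyResource {
        final String name;
        final boolean failOnPrepare;  // fault injection point
        String state = "active";
        FlakyResource(String name, boolean failOnPrepare) {
            this.name = name;
            this.failOnPrepare = failOnPrepare;
        }
        boolean prepare() {
            if (failOnPrepare) { state = "rolledback"; return false; }
            state = "prepared";
            return true;
        }
        void commit()   { state = "committed"; }
        void rollback() { state = "rolledback"; }
    }

    // Phase 1: ask every resource to prepare. Phase 2: commit only if all
    // voted yes (a real TM would force the commit decision to its log first);
    // otherwise roll back every resource that hasn't already rolled back.
    static void run(List<FlakyResource> resources) {
        boolean allPrepared = resources.stream().allMatch(FlakyResource::prepare);
        if (allPrepared) {
            resources.forEach(FlakyResource::commit);
        } else {
            resources.stream()
                     .filter(r -> !r.state.equals("rolledback"))
                     .forEach(FlakyResource::rollback);
        }
    }

    public static void main(String[] args) {
        // Case: "Sybase" fails to prepare -> "MQSeries" and "Oracle" must roll back.
        FlakyResource mq  = new FlakyResource("MQSeries", false);
        FlakyResource ora = new FlakyResource("Oracle", false);
        FlakyResource syb = new FlakyResource("Sybase", true);
        run(List.of(mq, ora, syb));
        System.out.println(mq.state + " " + ora.state + " " + syb.state);
        // all three end up rolled back

        // Case: all prepare -> all commit.
        FlakyResource mq2 = new FlakyResource("MQSeries", false);
        run(List.of(mq2));
        System.out.println(mq2.state);
    }
}
```

Even in this toy form, every additional resource and failure point multiplies the scenarios to cover, and each one also needs a crash-during-recovery variant that this sketch doesn't even attempt.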

These problems are notoriously difficult to track down and complex to fix. Is that really what your employers want you to be spending your time on? If you don't believe me, take a look at some of the settings that WebLogic offers to work around XA implementation issues in various database drivers - do you fancy finding and fixing one of these (http://e-docs.bea.com/wls/docs81/jta/thirdpartytx.html#1038034)? Not to mention what happens when you start introducing embellishments like Oracle RAC - how long do you think it would take you to test and debug your way to the conclusions documented in the WebLogic Server manual?

This effect ripples forward too - if you do meet problems in the production system, not only will you have access to the engineers, etc., in the way that we talked about before, but they will in turn have access to and experience with the databases and other external resources necessary to reproduce and solve your problem for you.

For any organization to invest in this level of testing of an open source framework (as opposed to just agreeing to handle production issues) would be a significant commitment of time and effort, and therefore money. And what would be the payback? If this fictional organization spent its life testing the open source product and contributing fixes back, who would pay for it? Everyone would benefit from its endeavors, but how would they show their gratitude? Or maybe the organization would keep the fixes to itself and charge end users who wanted the additional assurance its fix packs brought. Oh, but isn't that a software license?!

It is worth reflecting that none of the open source application servers I am aware of ships with a fully functional transaction manager. I suspect that the kinds of issues raised in this discussion might account for that - transaction managers, like all other software, are after all only code...

So, to conclude: open source is an excellent way of generating code - lots of the world runs on open source. Every time you Google, your search runs on Linux; a good proportion of the J2EE Web app world runs on Struts; much of the Java world on Log4j, XMLBeans, Xerces, etc.; and an increasing number of applications on open source databases. But all of these open source projects have two things in common. First, they interface to a bounded number of stable external systems: Linux to the hardware and BIOS (how often does that change?!), Struts to the stable and mature servlet API, and the rest simply to the Java platform itself. Second, with the exception of Linux and the database engines (both implementations of well-understood systems whose requirements have been stable for over a decade), they are developer-oriented toolkits that do not lie in the critical path of transaction processing. Application infrastructure has to interface to a relatively unbounded set of external systems - queues, databases, clusters, and so on - and by definition is of most value when it is managing the transactional critical path.

For these reasons, I think open source application servers will continue to exist to host low-volume, low-risk systems, but when the stakes are high, their licensed cousins will continue enabling operations staff and business owners to sleep at night, knowing that they were designed from the outset to expect and manage the unexpected.

More Stories By Peter Holditch

Peter Holditch is a senior presales engineer in the UK for Azul Systems. Prior to joining Azul he spent nine years at BEA Systems, starting as one of their first Professional Services consultants in Europe and finishing up as a principal presales engineer. He has an R&D background (having originally worked on BEA's Tuxedo product) and his technical interests are in high-throughput transaction systems. Off the pitch, Peter likes to brew beer, build furniture, and undertake other ludicrously ambitious projects - but (generally) not all at the same time!

