
Measuring the Value of Software Infrastructure

What do you get for your license fee?

As I write, noise continues to be generated around open source application servers and their claims to be entering the world of enterprise computing. In my view, the main reason the noise travels so far and seems so loud has nothing to do with the reality of the situation and everything to do with the media's love of controversy. That said, to take a position it is necessary to understand why you stand where you stand. Given that, and since my feet don't seem to want to walk me anywhere, I had better start charting my position... break out the sextant and get ready for some enterprise cartography!

So, What Is WebLogic?
A quick peek at my laptop shows me that it's a 32.3MB JAR file with supporting JARs that bring it up to about 73MB of Java. So, presumably, to work out the value of the product you divide those 73MB by the per-CPU license cost and arrive at a megabytes-per-dollar figure. Sounds like a rough-and-ready measure perhaps, but not totally crazy. So, I download my favorite open source alternative and I see a load of JAR files totaling about 66MB. Since the license cost is effectively zero and there is broadly the same amount of "product" there, what are you waiting for? The choice is clear!

Clearly, a static analysis of this type will not yield a very meaningful result for any software product. Open source is a good mechanism for generating code bulk, and it is obviously cheaper to take a handout from an altruistic developer than to buy the fruits of an employed development team.

There must be something wrong with this logic... Otherwise, why does BEA Systems, which does nothing more than sell infrastructure licenses, have a market cap of over $3 billion, and why (despite all the noisy protestations to the contrary) are open source application servers not rampaging across the mission-critical environments of the world?

There is clearly intrinsic value to software beyond the lines of code that constitute its makeup.

So, is it support? BEA offers 24x7 mission-critical support to its customers, providing them with the assurance that if one of their critical systems fails, there are technicians on hand around the clock to diagnose problems, whether they lie inside or outside of the middleware layer, and to repair them if they lie within. Clearly, this is important - it cannot be cost effective for an organization such as a bank to employ enough people with enough in-depth technical skill to diagnose arbitrary problems on a 24x7 basis. Having the neck of the product author available to choke where necessary (and having the assurance that the vendor's engineers will be online to fix problems as needed) is a necessary part of the effective running of a technology-enabled business (i.e., any business at all).

Here pure open source software presents a problem - development is done on an ad hoc basis by a community of enthusiasts. They are clearly not going to be able or willing to stop their world when a user of their software hits a problem. As a result, an organization whose developers have used an open source infrastructure must ensure that it has the internal resources to provide this level of support. Not only is this not cost effective (as noted above), but skilled systems developers are pretty thin on the ground anyway - as evidenced by the move to simplify Enterprise Java programming over the last few years in the face of Microsoft's low-skill entry point alternative - and it is only the most highly skilled developers who will be up to this kind of task. Better to underpin your business with a business relationship than to become dependent on a set of high-cost specialist individuals who, at the very least, will live in perpetual fear of falling under the proverbial bus...

Live in Perpetual Fear of the Proverbial Bus...
That all said, however, software support is a recurring charge, separate from the license fee. So while its availability, underpinned by a technology-specialist business, provides a very good set of reasons why licensed infrastructure software is good, it cannot be the whole story: despite the emergence of organizations who undertake to provide production support for open source software infrastructures, there is still no visible migration of mission-critical applications to open source platforms.

I believe that the answer to this riddle does lie in support, but not in the kind of production support provided under a traditional production support contract. Allow me to take a step back to describe what I mean...

If you look at the software architecture of a typical production system, you will see many moving parts. Typically these systems consist of a set of business components that receive messages for processing (usually from incoming reliable queues, or from end user-facing applications). The processing will typically consist of applying some business rules (by the way, these are the principal value of the system!) and then updating one or more databases and potentially dispatching some more reliable messages. Usually, these messages represent flows of large amounts of business value (money for a bank, stock for a manufacturer, shipments for a logistics operator). Because of the value of these things, care is needed to ensure that each message is processed only once and that once processed, the results of the processing are not lost and are consistent from wherever they are viewed. The provision of these guarantees is usually the job of the infrastructure (often the transaction manager - I bet you wondered if I was ever going to mention that in this issue?!). After all, as already stated, the value of the application from a business perspective is the business rules - the business will just assume that the technology works.

To take this architecture into the realm of concrete examples, let's imagine that the incoming messages are arriving on an MQSeries queue, that the application data is held in an Oracle database, and that the customer reference information (which can occasionally be updated) is held in Sybase.

Our infrastructure software not only has to work flawlessly within itself; its interfaces to MQSeries, Oracle, and Sybase have to work too.
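To make the discussion concrete, here is a minimal sketch, in the EJB 2.x style of the WebLogic 8.1 era, of what such a business component might look like: a message-driven bean whose onMessage() work runs inside a single container-managed XA transaction spanning the incoming queue and both databases. The bean name, the JNDI names, the SQL, and the message format are all hypothetical, and the deployment descriptor wiring (a Required transaction attribute and XA-enabled connection pools) is assumed rather than shown.

import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;

// Hypothetical bean: consumes a payment message from the MQSeries queue and,
// inside one container-managed XA transaction, consults the Sybase reference
// data and updates the Oracle application database. If anything fails, the
// container rolls back all three resources and the message is redelivered.
public class PaymentProcessorBean implements MessageDrivenBean, MessageListener {
    private MessageDrivenContext ctx;

    public void setMessageDrivenContext(MessageDrivenContext ctx) { this.ctx = ctx; }
    public void ejbCreate() {}
    public void ejbRemove() {}

    public void onMessage(Message msg) {
        try {
            InitialContext jndi = new InitialContext();
            // Both pools must be XA-enabled for the guarantees discussed above.
            DataSource oracle = (DataSource) jndi.lookup("jdbc/OracleAppDS"); // assumed JNDI name
            DataSource sybase = (DataSource) jndi.lookup("jdbc/SybaseRefDS"); // assumed JNDI name

            String payload = ((TextMessage) msg).getText();

            Connection ref = sybase.getConnection();
            // ... apply the business rules against the customer reference data ...
            ref.close();

            Connection app = oracle.getConnection();
            PreparedStatement ps = app.prepareStatement(
                "INSERT INTO PAYMENTS (PAYLOAD) VALUES (?)"); // illustrative SQL
            ps.setString(1, payload);
            ps.executeUpdate();
            ps.close();
            app.close();
            // No explicit commit: the container commits the XA transaction,
            // atomically acknowledging the message and the database updates.
        } catch (Exception e) {
            // Mark the transaction for rollback; the message will be redelivered.
            ctx.setRollbackOnly();
        }
    }
}

Notice that there is no transaction code in sight: the exactly-once guarantee is entirely the infrastructure's job, which is precisely why the infrastructure's interfaces to the three external systems matter so much.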

Okay, you say, that's fine. As a developer, I will build and test this application from end to end before I put it into production, and then I will know that it works. What's the difference between an open source application server and WebLogic? The difference, it turns out, is huge... Imagine your test plan. It must include all of the business use cases to make sure they complete successfully. It must include all of the edge cases in the business data to make sure edges and errors are handled gracefully. So, now we have a test plan that will give us an appropriate level of confidence that the application meets our business requirements. We're done, right? Well... yes, if we are using WebLogic and supported versions of the databases and MQSeries. You know that BEA has tested this from an infrastructure perspective and that it all works, because that's what the documentation says (http://e-docs.bea.com/platform/suppconfigs/configs81/81_over/supported_db.html#1129093).

If you use an infrastructure platform that isn't explicitly certified against your external systems, you need to do this testing yourself. But, you say, didn't we do the testing earlier? Well, no. Not from an infrastructure perspective. Given that we require transactional access to three resources, we need to test that all three can commit the work in the success cases and roll it back in the failure cases - and we have done that with our application-level testing - but we also need to test that if {MQSeries|Sybase|Oracle} fails to prepare, we roll back the transaction; that if {Sybase|Oracle|MQSeries} fails after a successful prepare but before the transaction is logged, they are rolled back correctly; and that if {Oracle|Sybase|MQSeries} prepares and then fails after the transaction is logged, they are correctly rolled forward on recovery. That's quite a test matrix. And if you change versions of one of these external systems, you will have to run through at least a subset of these tests again.

This is clearly very time-consuming, and that's before you factor in how technically tricky it can be to simulate failures at these various points during the transaction processing life cycle. This is a lot of work added onto your project plan, or a lot of risk introduced into your deployment if you simply choose to assume that this stuff will work... and that assumes that the tests all pass. What if they don't? Well, you can resort to whatever developer support you might get from the open source community, but if your problem is specific to your version of Sybase, you'd better hope that whoever understands the code out there has access to that same version too... and that they have time to reproduce and fix your problem. But never mind... you have access to the source, you can fix it yourself! In that case, good luck!
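To see why the matrix is so awkward, it helps to look at the shape of the two-phase commit sequence the transaction manager drives against each enlisted resource. The sketch below uses the standard javax.transaction.xa API directly, purely to mark where the failure windows described above sit; it is not how application code would ever do this, and a real coordinator interleaves transaction logging and recovery around these calls.

import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

// Illustrative only: the bare XA call sequence behind the test matrix above.
// Each element of "resources" stands for one enlisted resource manager
// (MQSeries, Oracle, Sybase); "xid" is the global transaction identifier.
public class TwoPhaseCommitSketch {
    void commitAll(XAResource[] resources, Xid xid) throws XAException {
        // Phase 1: ask every resource to prepare (vote).
        // A failure ANYWHERE in this loop means rolling everything back.
        for (int i = 0; i < resources.length; i++) {
            resources[i].end(xid, XAResource.TMSUCCESS);
            int vote = resources[i].prepare(xid);
            if (vote != XAResource.XA_OK && vote != XAResource.XA_RDONLY) {
                throw new XAException(XAException.XA_RBROLLBACK);
            }
        }

        // The coordinator force-writes its commit decision to the transaction
        // log HERE. A resource that fails before this point must be rolled
        // back on recovery; one that fails after it must be rolled forward.

        // Phase 2: tell every resource to commit. (A real coordinator would
        // skip resources that voted XA_RDONLY, retry heuristic outcomes, etc.)
        for (int i = 0; i < resources.length; i++) {
            resources[i].commit(xid, false); // false = two-phase, not one-phase
        }
    }
}

Every branch of that sequence, multiplied by three resources and by the before-log and after-log failure windows, is a case that either your infrastructure vendor or your own test plan has to own.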

These problems are notoriously difficult to track down and complex to fix. Is that really what your employers want you to be spending your time on? If you don't believe me, take a look at some of the settings that WebLogic offers to work around XA implementation issues in various database drivers - do you fancy finding and fixing one of these (http://e-docs.bea.com/wls/docs81/jta/thirdpartytx.html#1038034)? Not to mention what happens when you start introducing embellishments like Oracle RAC - how long do you think it would take you to test and debug your way to the conclusions about that documented in the WebLogic Server manual?

This effect ripples forward too - if you do meet problems in the production system, not only will you have access to the engineers, etc., in the way that we talked about before, but they will in turn have access to and experience with the databases and other external resources necessary to reproduce and solve your problem for you.

For any organization to invest in this level of testing of an open source framework (as opposed to just agreeing to handle production issues) would be a significant commitment of time and effort, and therefore money. And what would be the payback? If this fictional organization spent its life testing the open source product and contributing the fixes back, who would pay for it? Everyone would benefit from its endeavors and be grateful, but how would they show their gratitude? Or maybe this organization would keep the fixes to itself and charge end users who wanted the additional assurances its fix packs brought. Oh, but isn't that a software license?!

It is worth reflecting that none of the open source application servers I am aware of ships with a fully functional transaction manager. I suspect that the kinds of issues raised in this discussion might account for that - transaction managers, like all other software, are after all only code...

Conclusion
So to conclude: open source is an excellent way of generating code, and lots of the world runs on it - every time you use Google, your search runs on Linux; a good proportion of the J2EE Web application world runs on Struts; much of the Java world on Log4j, XMLBeans, Xerces, and the like; and an increasing number of applications on open source databases. But all of these open source projects have two things in common. First, they interface to a bounded number of stable external systems: Linux to the hardware and BIOS (how often does that change?!), Struts to the stable and mature servlet API, and the rest simply to the Java platform itself. Second, with the exception of Linux and the database engines (both implementations of well-understood systems whose requirements have been stable for over a decade), they are developer-oriented toolkits that do not lie in the critical path of transaction processing. Application infrastructure has to interface to a relatively unbounded set of external systems - queues, databases, clusters, and so on - and by definition is of most value when it is managing the transactional critical path.

For these reasons, I think open source application servers will continue to exist to host low-volume, low-risk systems, but when the stakes are high, their licensed cousins will continue enabling operations staff and business owners to sleep at night, knowing that they were designed from the outset to expect and manage the unexpected.

More Stories By Peter Holditch

Peter Holditch is a senior presales engineer in the UK for Azul Systems. Prior to joining Azul he spent nine years at BEA Systems, starting as one of their first Professional Services consultants in Europe and finishing up as a principal presales engineer. He has an R&D background (having originally worked on BEA's Tuxedo product) and his technical interests are in high-throughput transaction systems. Off the pitch, Peter likes to brew beer, build furniture, and undertake other ludicrously ambitious projects - but (generally) not all at the same time!
