
Measuring the Value of Software Infrastructure

What do you get for your license fee?

As I write, noise continues to be generated around open source application servers and their claimed arrival in the world of enterprise computing. In my view, the main reason the noise travels so far and seems so loud has nothing to do with the reality of the situation and everything to do with the media's love of controversy. That said, to take a position it is necessary to understand why you stand where you stand. Given that, and since my feet don't seem to want to walk me anywhere, I had better start charting my position... break out the sextant and get ready for some enterprise cartography!

So, What Is WebLogic?
A quick peek at my laptop shows me that it's a 32.3MB JAR file with supporting JARs that bring it up to about 73MB of Java. So, presumably, to work out the value of the product you divide the per-CPU license cost by those 73MB and arrive at a price per megabyte. A rough-and-ready measure perhaps, but not totally crazy. So, I download my favorite open source alternative and I see a load of JAR files, totaling about 66MB. Since the license cost is effectively zero and there is broadly the same amount of "product" there, what are you waiting for? The choice is clear!

Clearly, a static analysis of this type, done on any software product, will not yield a very meaningful result. Open source is a good mechanism for generating code bulk, and it is obviously cheaper to take a handout from an altruistic developer than to buy the fruits of an employed development team.

There must be something wrong with this logic... Otherwise, why does BEA Systems, which does nothing more than sell infrastructure licenses, have a market cap of over $3 billion? And why (despite all the noisy protestations to the contrary) are open source application servers not rampaging across the mission-critical environments of the world?

There is clearly intrinsic value to software beyond the lines of code that constitute its makeup.

So, is it support? BEA offers 24x7 mission-critical support to its customers, providing them with the assurance that if one of their critical systems fails, there are technicians on hand around the clock to diagnose problems, whether they lie inside or outside of the middleware layer, and to repair them if they lie within. Clearly, this is important - it cannot be cost effective for an organization such as a bank to employ enough people with enough in-depth technical skill to diagnose arbitrary problems on a 24x7 basis. Having the neck of the product author available to choke where necessary (and having the assurance that the vendor's engineers will be online to fix problems as needed) is a necessary part of the effective running of a technology-enabled business (i.e., any business at all).

Here pure open source software presents a problem - development is done on an ad hoc basis by a community of enthusiasts, who are clearly not going to be able or willing to stop their world when a user of their software hits a problem. As a result, an organization whose developers have used an open source infrastructure must ensure that it has the internal resources to provide this level of support. Not only is this not cost effective (as noted above), but skilled systems developers are pretty thin on the ground anyway - as evidenced by the move to simplify Enterprise Java programming over the last few years in the face of Microsoft's low-skill entry point alternative - and only the most highly skilled developers will be up to this kind of task. Better to underpin your business with a business relationship than to become dependent on a set of high-cost specialist individuals who, at the very least, will live in perpetual fear of falling under the proverbial bus...

Live in Perpetual Fear of the Proverbial Bus...
That all said, however, software support is a recurring charge, separate from the license fee. So while its availability, underpinned by a technology-specialist business, provides a very good set of reasons why licensed infrastructure software is good, it cannot be the whole story: despite the emergence of organizations that undertake to provide production support for open source software infrastructures, there is still no visible migration of mission-critical applications to open source platforms.

I believe that the answer to this riddle does lie in support, but not in the kind of production support provided under a traditional production support contract. Allow me to take a step back to describe what I mean...

If you look at the software architecture of a typical production system, you will see many moving parts. Typically these systems consist of a set of business components that receive messages for processing (usually from incoming reliable queues, or from end user-facing applications). The processing will typically consist of applying some business rules (by the way, these are the principal value of the system!) and then updating one or more databases and potentially dispatching some more reliable messages. Usually, these messages represent flows of large amounts of business value (money for a bank, stock for a manufacturer, shipments for a logistics operator). Because of the value of these things, care is needed to ensure that each message is processed only once and that once processed, the results of the processing are not lost and are consistent from wherever they are viewed. The provision of these guarantees is usually the job of the infrastructure (often the transaction manager - I bet you wondered if I was ever going to mention that in this issue?!). After all, as already stated, the value of the application from a business perspective is the business rules - the business will just assume that the technology works.
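
To make that concrete in code, here is a minimal sketch of such a business component, written as an EJB 3-style message-driven bean for brevity (the bean, its class name, and its business-rule method are all hypothetical). The point is that the container starts the XA transaction before the component runs, so the message receipt and everything the component touches commit or roll back as one unit.

import javax.ejb.MessageDriven;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Hypothetical business component: the container starts an XA transaction
// before onMessage() runs, so the message receipt, the database updates, and
// any outgoing messages commit or roll back as one unit.
@MessageDriven
public class PaymentProcessor implements MessageListener {

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void onMessage(Message message) {
        try {
            String payload = ((TextMessage) message).getText();
            applyBusinessRules(payload); // the principal value of the system!
            // Database updates and outgoing messages issued here enlist in
            // the same transaction; if anything throws, the whole unit rolls
            // back and the message is redelivered - exactly-once processing
            // from the business's point of view.
        } catch (Exception e) {
            // Force a rollback so the message is not lost.
            throw new RuntimeException(e);
        }
    }

    private void applyBusinessRules(String payload) {
        // ... business rules elided ...
    }
}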

To take this architecture into the realm of concrete examples, let's imagine that the incoming messages are arriving on an MQSeries queue, that the application data is held in an Oracle database, and that the customer reference information (which can occasionally be updated) is held in Sybase.

Our infrastructure software not only has to work flawlessly within itself; its interfaces to MQSeries, Oracle, and Sybase have to work too.
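
To see what "has to work" means here, consider a sketch of the unit of work involved (all JNDI names, table names, and SQL are hypothetical, and in a container you would more likely let the server demarcate the transaction, as above): one message consumed from MQSeries, application data written to Oracle, reference data read from Sybase, all under a single JTA transaction that the server must drive through two-phase commit.

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueReceiver;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class OrderProcessor {
    // All JNDI names below are hypothetical; the real ones depend on how the
    // server's XA connection factories and data sources are configured.
    public void processOne() throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction tx =
            (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        QueueConnectionFactory qcf =
            (QueueConnectionFactory) ctx.lookup("jms/MQXAFactory");
        Queue incoming = (Queue) ctx.lookup("jms/IncomingOrders");
        DataSource oracle = (DataSource) ctx.lookup("jdbc/OracleAppXA");
        DataSource sybase = (DataSource) ctx.lookup("jdbc/SybaseRefXA");

        tx.begin();
        QueueConnection qc = null;
        Connection appDb = null;
        Connection refDb = null;
        try {
            qc = qcf.createQueueConnection();
            qc.start();
            QueueSession qs =
                qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueReceiver receiver = qs.createReceiver(incoming);
            TextMessage msg = (TextMessage) receiver.receive(5000);
            if (msg == null) {        // nothing to process
                tx.rollback();
                return;
            }

            refDb = sybase.getConnection(); // customer reference reads
            appDb = oracle.getConnection(); // application data writes
            PreparedStatement insert = appDb.prepareStatement(
                "INSERT INTO orders (payload) VALUES (?)");
            insert.setString(1, msg.getText());
            insert.executeUpdate();
            insert.close();

            // One commit call; under the covers the transaction manager must
            // prepare and commit MQSeries, Oracle, and Sybase as one unit.
            tx.commit();
        } catch (Exception e) {
            tx.rollback();
            throw e;
        } finally {
            if (appDb != null) appDb.close();
            if (refDb != null) refDb.close();
            if (qc != null) qc.close();
        }
    }
}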

Okay, you say, that's fine. As a developer, I will build and test this application from end to end before I put it into production, and then I will know that it works. What's the difference between an open source application server and WebLogic? The difference, it turns out, is huge... Imagine your test plan. It must include all of the business use cases to make sure they complete successfully. It must include all of the edge cases in the business data to make sure edges and errors are handled gracefully. So, now we have a test plan that will give us an appropriate level of confidence that the application meets our business requirements. We're done, right? Well... Yes, if we are using WebLogic and supported versions of the databases and MQSeries. You know that BEA has tested this from an infrastructure perspective and that it all works, because that's what the documentation says (http://e-docs.bea.com/platform/suppconfigs/configs81/81_over/supported_db.html#1129093).
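
In miniature, that test plan looks something like this (the business rule and its figures are invented for illustration): functional tests that exercise the use cases and the edge cases, and that will pass identically on any application server, because they never leave the happy path through the infrastructure.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// A hypothetical business rule and its tests - the level of testing a
// functional test plan covers. Note that nothing here ever exercises the
// transaction manager's failure paths.
public class FeeCalculatorTest {

    // Stand-in business rule: charge 0.1% commission, minimum 5.00.
    static double commission(double tradeValue) {
        return Math.max(5.00, tradeValue * 0.001);
    }

    @Test
    public void largeTradePaysProportionalCommission() {
        assertEquals(100.00, commission(100000.00), 0.0001);
    }

    @Test
    public void smallTradePaysMinimumCommission() {
        assertEquals(5.00, commission(100.00), 0.0001);
    }
}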

If you use an infrastructure platform that isn't explicitly certified against your external systems, you need to do this testing yourself. But, you say, didn't we do the testing earlier? Well, no. Not from an infrastructure perspective. Given that we require transactional access to three resources, we need to test that all three can commit the work in the success cases and roll it back in the failure cases - and we have done that with our application-level testing - but we also need to test that if {MQSeries|Sybase|Oracle} fails to prepare, we roll back the transaction; that if {Sybase|Oracle|MQSeries} fails after a successful prepare but before the transaction is logged, it is rolled back correctly; and that if {Oracle|Sybase|MQSeries} prepares and then fails after the transaction is logged, it is correctly rolled forward on recovery. That's quite a test matrix. And if you change versions of one of these external systems, you will have to run through at least a subset of these tests again.

This is clearly very time-consuming, and that's before you factor in how technically tricky it can be to simulate failures at these various points in the transaction processing life cycle. This is a lot of work added onto your project plan - or a lot of risk introduced into your deployment, if you simply choose to assume that this stuff will work... and that assumes the tests all pass. What if they don't? Well, you can resort to whatever developer support you might get from the open source community, but if your problem is specific to your version of Sybase, you'd better hope that whoever understands the code out there has access to that same version too... and that they have time to reproduce and fix your problem. But never mind... you have access to the source, you can fix it yourself! In that case, good luck!
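
To put a number on "quite a test matrix", here is a sketch that simply enumerates it: every resource crossed with every failure point in the two-phase commit protocol, each with a distinct expected outcome that must be constructed, simulated, and verified.

// Enumerates the infrastructure failure matrix described above: nine
// scenarios per combination of resource versions - before you multiply by
// the versions of each product you must certify against.
public class XaFailureMatrix {

    enum Resource { MQSERIES, ORACLE, SYBASE }

    enum FailurePoint {
        DURING_PREPARE,              // expected: transaction rolled back
        AFTER_PREPARE_BEFORE_TX_LOG, // expected: rolled back on recovery
        AFTER_TX_LOG                 // expected: rolled forward on recovery
    }

    public static void main(String[] args) {
        int cases = 0;
        for (Resource r : Resource.values()) {
            for (FailurePoint p : FailurePoint.values()) {
                System.out.printf("simulate failure of %s %s%n", r, p);
                cases++;
            }
        }
        System.out.println(cases + " failure scenarios to build and verify");
    }
}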

These problems are notoriously difficult to track down and complex to fix. Is that really what your employers want you to be spending your time on? If you don't believe me, take a look at some of the settings that WebLogic offers to work around XA implementation issues in various database drivers - do you fancy finding and fixing one of those (http://e-docs.bea.com/wls/docs81/jta/thirdpartytx.html#1038034)? Not to mention what happens when you start introducing embellishments like Oracle RAC - how long do you think it would take you to test and debug your way to the conclusions about that documented in the WebLogic Server manual?

This effect ripples forward, too - if you do meet problems in the production system, not only will you have access to the vendor's engineers in the way we talked about before, but they in turn will have access to, and experience with, the databases and other external resources necessary to reproduce and solve your problem for you.

For any organization to invest in this level of testing of an open source framework (as opposed to just agreeing to handle production issues) would be a significant commitment of time and effort, and therefore money. And what would be the payback? If this fictional organization spent its life testing the open source product and contributing its fixes back, who would pay for it? Everyone would benefit from its endeavors, but how would they show their gratitude? Or maybe the organization would keep the fixes to itself and charge end users who wanted the additional assurance its fix packs brought. Oh, but isn't that a software license?!

It is worth reflecting that none of the open source application servers I am aware of ships with a fully functional transaction manager. I suspect that the kinds of issues raised in this discussion might account for that - transaction managers, like all other software, are after all only code...

Conclusion
So to conclude: open source is an excellent way to generate code, and much of the world runs on it - every time you search Google, your query runs on Linux; a good proportion of the J2EE Web application world runs on Struts; much of the Java world runs on Log4j, XMLBeans, Xerces, and the like; and an increasing number of applications run on open source databases. But all of these open source projects have two things in common. First, they interface to a bounded number of stable external systems: Linux to the hardware and BIOS (how often does that change?!), Struts to the stable and mature servlet API, and the rest simply to the Java platform itself. Second, with the exception of Linux and the database engines (both implementations of well-understood systems, whose requirements have been defined for over a decade and so are very stable), they are developer-oriented toolkits that do not lie in the critical path of transaction processing. Application infrastructure, by contrast, has to interface to a relatively unbounded set of external systems - queues, databases, clusters, and so on - and by definition is of most value when it is managing the transactional critical path.

For these reasons, I think open source application servers will continue to exist to host low-volume, low-risk systems. But when the stakes are high, their licensed cousins will continue enabling operations staff and business owners to sleep at night, knowing that their platform was designed from the outset to expect and manage the unexpected.

More Stories By Peter Holditch

Peter Holditch is a senior presales engineer in the UK for Azul Systems. Prior to joining Azul he spent nine years at BEA Systems, starting as one of their first Professional Services consultants in Europe and finishing up as a principal presales engineer. He has an R&D background (originally having worked on BEA's Tuxedo product) and his technical interests are in high-throughput transaction systems. Off the pitch, Peter likes to brew beer, build furniture, and undertake other ludicrously ambitious projects - but (generally) not all at the same time!
