And Now for Something Completely Different

Are you in the frame?

This issue, in an uncharacteristic attempt to fit in with the Zeitgeist, I propose to depart slightly from my well-trodden path to the transaction manager and take a look at frameworks. I expect you can guess which particular framework I am going to take a pass at, too.

For nearly as long as there have been microprocessors, there have been frameworks. J2EE application servers can be said to be frameworks, as could messaging systems or even operating systems. It is also pretty easily arguable that frameworks are the only elements of software that have succeeded in delivering the holy grail of reuse. While instances of truly reusable business logic remain hard to identify, frameworks abound (especially if you accept my definition of frameworks, which includes application servers and such - what could be seen as more reusable than the code comprising a widely adopted commercial product like BEA WebLogic Server?).

Frameworks exist in layers, each taking care of generic issues at its architectural level and thereby providing a higher level of abstraction to the consumers sitting in the layer above. The application server, for instance, takes care of low-level technical issues such as thread and socket management, leaving the programmer of the components that use it free to focus on the business functionality he or she is trying to implement.

Despite the raised level of abstraction that the application server so effectively provides, pretty much no application server customer I am aware of simply runs projects that produce a set of EJBs and servlets built from the ground up to implement some business function. In almost all cases, app server customers have written a framework of their own, and their application developers write code that lives within that bespoke framework. Typically, these frameworks perform such functions as reading data from centralized corporate repositories to configure static reference data and business rules. In many cases the frameworks also constrain the degrees of freedom afforded to the application developers - J2EE lets its users make many architectural decisions. Is my logic in a servlet, or an EJB? Is the EJB a session or an entity bean? Is the session bean stateful or stateless? The choices roll on and on. These choices provide a powerful toolkit for solving a wide range of server-side problems, but in the 80% case the problems application server customers are trying to solve are not wide ranging. Many applications follow very common usage patterns, and those patterns should drive a common set of J2EE usage patterns. By handing these powerful tools directly to application developers, you force every one of them to be capable of making wise decisions about how best to employ the toolkit.

What is the business value of an entire team of experienced J2EE architects? Why do you need to staff up with people capable of making these hard decisions? Moreover, how will you retain them? If they spend their lives solving problems that are all pretty much identical from a technical perspective, they will probably look for another job ("Brain the size of a planet, and here I am parking cars") - or maybe spend the vast majority of their time dreaming up and writing a new framework for you that you may or may not want, or benefit from. Even if you do have a team of experts, and they are disciplined enough not to go off on architectural off-road odysseys of this nature, the very fact that you have many talented individuals each doing their own thing will inevitably lead to multiple code styles and variations in practice. That becomes a maintenance cost when the next generation of programmers has to look after the portfolio of business J2EE applications sitting in production and quietly running the business. In production, regularity and conformity are your friends!

So, frameworks are needed (and not simply to keep the gurus happy!) to bring consistency to developments and to let "ordinary mortals" be productive, not just superheroes. However, most application server customers find a hidden peril in their bespoke frameworks. When they recruit new team members, those individuals cannot be immediately productive - the first few weeks of their employment are spent grappling with the bespoke framework they have to work within. Inevitably, documentation is scanty or nonexistent, so the new hires, however experienced in J2EE they may be, spend lots of time badgering the guru in the corner about how to get going. Not only does this waste their time, but the guru in the corner spends half his or her life answering basic questions rather than delivering the high-value code they are paid for. The problem gets worse if you look to outsource some of the development: developers in another organization, or perhaps on another continent, cannot bother the guru in the corner - he or she is not in their corner - with the obvious impact on productivity. The framework has brought us consistency and the ability to run a team with a mix of skill levels, but at the expense of skill transferability - which was the reason we adopted J2EE in the first place.

So, the net of all this is that while we need a framework to enable productivity, we have identified a set of requirements of any truly useful framework, namely:

  • It must be widely adopted and fully documented, to maximize developer transferability.
  • It must be easy to use, to minimize the need for developers to engage in the low-level details of the plumbing of server-side computing. This means it must be toolable.
Enter the Beehive Framework

Beehive is the next generation of the runtime framework powering today's BEA WebLogic Platform applications. It comprises three basic elements: Page Flows, controls, and JSR 181 Web services.
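To give a flavor of the first of these, here is a rough sketch of a minimal Page Flow controller, assuming the Apache Beehive NetUI annotation names; the class name, action, and JSP paths are invented for illustration:

    import org.apache.beehive.netui.pageflow.Forward;
    import org.apache.beehive.netui.pageflow.PageFlowController;
    import org.apache.beehive.netui.pageflow.annotations.Jpf;

    // A hypothetical page flow: the annotations declare the navigation rules,
    // and the runtime wires them into the underlying Struts machinery.
    @Jpf.Controller(
        simpleActions = { @Jpf.SimpleAction(name = "begin", path = "index.jsp") }
    )
    public class OrderController extends PageFlowController {

        @Jpf.Action(forwards = { @Jpf.Forward(name = "success", path = "confirm.jsp") })
        public Forward submitOrder() {
            // business logic only - no struts-config.xml to maintain by hand
            return new Forward("success");
        }
    }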

All three elements use code annotations to drive what is deployed at runtime - runtime behavior is set declaratively, confining most of the code-level developer interaction to the direct pursuit of implementing the business requirements. Code annotations on the controller let the page flow subsystem handle the minutiae of Struts; annotations at the class and method levels let things close to business objects be exposed easily as Web services; and annotations let developers make declarations about the reusable chunks of functionality that are delivered as controls. The best news of all: although the annotations live alongside the code they relate to and could be manipulated with the trusty vi editor, they lend themselves to intelligent manipulation by a Beehive-aware IDE (WebLogic Workshop being the first, with other tool vendors - not to mention the Apache Pollinate project - lining up to ensure it is not the only one). Such an IDE lets developers interact easily not only with the Beehive development model but with custom controls in custom ways. Take today's database control as an example: creating one in the IDE prompts you to pick a data source from your running server, and the IDE can even go and look at what tables are in the schema. This intelligent, context-sensitive prompting will clearly get developers up the learning curve fast, and unlike a wizard-based, code-generator approach, the novice user is never left with hundreds of lines of code and other artifacts that he or she didn't write, doesn't understand, and now has to maintain (anybody remember the 4GL COBOL generator tools?!).
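To make the class- and method-level annotation idea concrete, here is a minimal sketch of a JSR 181-style Web service using the standard javax.jws annotations; the service, operation, and helper names are invented for illustration:

    import javax.jws.WebMethod;
    import javax.jws.WebService;

    // Class- and method-level annotations are all that is needed to expose
    // this business object as a Web service; the deployment plumbing is
    // produced by the annotation-aware runtime rather than written by hand.
    @WebService(name = "OrderStatusService")
    public class OrderStatusService {

        @WebMethod(operationName = "getStatus")
        public String getStatus(String orderId) {
            return lookupStatus(orderId); // placeholder for a real repository call
        }

        private String lookupStatus(String orderId) {
            return "SHIPPED";
        }
    }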

Of course, BEA's existing customers benefit from this approach to development today, as documented in studies by Gartner and Crossvale, among others, but Captain Paranoia on their shoulder is whispering the dreaded "proprietary" word and muttering about vendor lock-in. For these people, Beehive represents the possibility of benefiting from BEA's innovation without committing to a lifetime of BEA infrastructure (although why that would scare anyone is beyond me!).

For the rest of the world, Beehive not only provides an open, proven, toolable development framework supported by a major vendor, but also brings ease of development to server-side Java, all in a way that is sympathetic to the current vogue for service-oriented architecture.

The final bastion of resistance to the tooled framework approach often comes from the J2EE developers themselves. They see their ability to chant the arcane incantations of a JNDI lookup on demand as their value: "if tools and abstraction make things so simple, then we will be out of a job!" I believe this is misguided for two reasons. First, all the code that does the lookups and so on is invariably cut and pasted from samples, or from "one I prepared earlier." In my days as a Tuxedo consultant, I lost count of the number of production systems whose comments claimed they put a string into upper case (even the comments were cut and pasted from the "simpapp" sample application). It is the same in the J2EE world - any J2EE developer who thinks his superior ability to cut and paste lines of code will guarantee him job security is, well, "misguided" must be the polite term. Moreover, frameworks don't mean projects can get by with no J2EE experts - an architect or two will always be required on every development. With a framework, there is now a clear distinction between J2EE architects and business developers; far from devaluing expertise, the framework actually provides a career path for developers, highlighting the true value of experience. Finally, if Java (and hence the market value of the associated skills) is to survive in the enterprise for the long term, it is imperative that the experts target their expertise wisely, and that the masses can mass-produce productively.
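For the avoidance of doubt, this is the sort of incantation in question - a sketch of the classic EJB home lookup that gets pasted from project to project; OrderManagerHome and the JNDI name are hypothetical:

    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.rmi.PortableRemoteObject;

    // The cut-and-paste ritual a framework hides: fetch an EJB home from
    // JNDI and narrow it. OrderManagerHome is an invented home interface.
    public class OrderManagerLocator {

        public static OrderManagerHome locateHome() throws NamingException {
            InitialContext ctx = new InitialContext();
            Object ref = ctx.lookup("java:comp/env/ejb/OrderManager");
            return (OrderManagerHome) PortableRemoteObject.narrow(ref, OrderManagerHome.class);
        }
    }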

So come one, come all... wake up and smell the honey! Beehive has arrived!

References

  • "Application Platform Suites An Architectural Cost Analysis": www.bea.com/content/news_events/white_papers/ Gartner_APS_Architectural_Cost_Analysis.pdf
  • "A Study in Enterprise Development Productivity": http://download.crossvale.com:88/wsad_vs_wlw/ J2EEDevelopment-Productivity-Analysis.pdf
About the Author

Peter Holditch is a senior presales engineer in the UK for Azul Systems. Prior to joining Azul he spent nine years at BEA Systems, starting as one of their first Professional Services consultants in Europe and finishing up as a principal presales engineer. He has an R&D background (originally having worked on BEA's Tuxedo product) and his technical interests are in high-throughput transaction systems. Off the pitch, Peter likes to brew beer, build furniture, and undertake other ludicrously ambitious projects - but (generally) not all at the same time!
