
The Attack of Oracle Guest

Last October I published a post that identified the features that both JBoss Data Grid and Oracle Coherence provide (link). My goal was to establish a baseline for the features that a data grid should provide. It was not to state that one data grid was better than the other. Little did I know an Oracle employee would respond by attacking Red Hat, its engineers, and me.

Is it fear? Is it hostility? I don’t know.

I have engaged in discussions with competitors before. Roman and I engaged in a competitive discussion in response to one of my posts comparing IBM WebSphere and JBoss EAP (link). However, we both conducted ourselves in a professional manner. I’ve engaged in competitive discussions with Spring evangelists, but we focused on the technology.

I always enjoy reading the discussions between Cameron, Nikita, and Nati on TheServerSide. I find their discussions to be insightful. They conduct themselves in a professional manner. To me, it looks like they respect each other and they respect each other’s products.

To be fair, this is just a single anonymous visitor. If they had not left a comment while connected to Oracle’s network, I would not have known that they work for Oracle. However, I would have inferred it.

Let the show begin.

Oracle Guest

I’m also interested in this question, RK. JDG lacks references that could corroborate claims of performance superiority over Coherence. Coherence, on the other hand, has a lot of public cases that show how scalable, reliable and fast it is. It is the world’s first in-memory computing platform, so this blog doesn’t offer credibility at all, mostly because Shane is a marketing guy from Red Hat.

He’s just using an old marketing technique to improve the reliability of their product by comparing it with another one which is the leader in its industry, like Coherence. Comparing with Coherence would convey the idea that “JDG is so good, just like Coherence, so instead of buying Coherence, buy from Red Hat”, but in fact it is not true. JDG would need to implement A LOT OF features to be comparable with Coherence.


Me

I published the results of a performance test (JDG 6.0.1) last December (link). I have written a technical white paper that includes the results of a number of performance tests (JDG 6.0.1); however, it is awaiting publication. I expect it to be made available via the Red Hat Customer Portal. In addition, I will be publishing the results of a few performance tests (JDG 6.1) executed on better hardware on How to JBoss within the next two weeks.

I executed the performance tests with RadarGun (link), an open source project for data grid performance testing. When I published the results, I provided both the RadarGun and the JDG configuration files. The best way for an organization to select a data grid based on performance and reliability is to configure and execute their own performance tests based on their own requirements in their own controlled environment.
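To give a sense of what such a test exercises, here is a minimal put / get micro-benchmark sketch. It is not RadarGun itself: a plain ConcurrentHashMap stands in for the cache, and the key space, value size, and read / write mix are placeholder assumptions that a real evaluation would replace with a JDG cache and its own workload parameters.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ThreadLocalRandom;

    public class PutGetMicroBenchmark {

        public static void main(String[] args) {
            // Stand-in for a data grid cache; a real test would target a JDG / Infinispan cache.
            Map<String, byte[]> cache = new ConcurrentHashMap<>();
            byte[] value = new byte[1024]; // 1 KB entries (placeholder value size)

            int keySpace = 100_000;
            int operations = 1_000_000;

            // Warm up the map and the JIT before measuring.
            for (int i = 0; i < keySpace; i++) {
                cache.put("key-" + i, value);
            }

            long start = System.nanoTime();
            for (int i = 0; i < operations; i++) {
                String key = "key-" + ThreadLocalRandom.current().nextInt(keySpace);
                if (i % 5 == 0) {
                    cache.put(key, value); // 20% writes
                } else {
                    cache.get(key);        // 80% reads
                }
            }
            long elapsed = System.nanoTime() - start;

            System.out.printf("%d operations in %.2f ms (%.2f ops/ms)%n",
                    operations, elapsed / 1_000_000.0,
                    operations / (elapsed / 1_000_000.0));
        }
    }

The same structure, pointed at two different data grids with identical parameters, is what makes the resulting numbers comparable.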

Oracle Guest

Those results compare JDG against Terracotta from Software AG, not against Coherence from Oracle. You cannot say at all that JDG is better than Coherence because you’ve never tested it. Again, not reliable statements coming from you. You’ve tried to use a Terracotta comparison to generalize JDG performance results. Let’s call Oracle, VMware, IBM, Gigaspaces and TIBCO to participate in the tests.

Me

I have never stated that JDG performs better than Oracle Coherence. Thus, I am unaware of these “not reliable statements”. Could you point them out? You are welcome to call Oracle, VMware, IBM, GigaSpaces, and TIBCO. They are welcome to use my RadarGun and JDG configuration files to configure and execute their own performance tests with RadarGun. This is essentially what I did after coming across the performance test results published by Terracotta. I simply used the parameters that they made available. As I mentioned previously, a number of organizations evaluating JDG are doing just that. They are executing performance tests against both JDG and Oracle Coherence using RadarGun or YCSB (Yahoo! Cloud Serving Benchmark).


Me

Tangosol Coherence was an innovative product in its day, but that day was several years ago. I do not question that it remains reliable. However, there have been a number of advancements in distributed systems over the past few years. JBoss Data Grid brings together the reliability of the previous generation of data grids and the innovation of the next generation of data grids.

Oracle Guest

What about some unique features of Coherence, like its non-blocking I/O TCP/IP network based on TCMP, which allows it to achieve better results with distributed transactions, fail-over detection (the fastest in the industry), WAN replication that handles the latency issues of geographical distribution, and HTTP session offload from app servers? Not to mention integration with A LOT OF app servers like WebLogic, GlassFish, WebSphere, Tomcat, IIS, Resin and even your JBoss AS. JDG only supports what is from Red Hat. What a nice example of being “open”, huh?! :)

Me

The TCMP features that you listed are not unique to TCMP. They are provided by JGroups as well. Those features include non-blocking I/O (NIO), failure detection, and cross site (WAN) replication. I would hope that Oracle Coherence*Web would support both Oracle WebLogic and Oracle GlassFish. Is that really a feature? If so, Red Hat provides it as well. JDG supports both JBoss EAP and JBoss EWS (Apache Tomcat). However, there is no reason for an organization to use JDG or Oracle Coherence*Web for session replication with IBM WebSphere.
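To make that concrete, here is a minimal JGroups sketch. The protocol stack file name (“tcp-nio.xml”) and cluster name are assumptions; a production stack would be tuned to enable the NIO transport, the desired failure detection protocols (e.g. FD_SOCK / FD_ALL), and RELAY2 for cross-site replication.

    import org.jgroups.JChannel;
    import org.jgroups.Message;
    import org.jgroups.ReceiverAdapter;
    import org.jgroups.View;

    public class GridNode {

        public static void main(String[] args) throws Exception {
            // The stack file name is an assumption; it would define the transport,
            // failure detection, and (optionally) cross-site relay protocols.
            JChannel channel = new JChannel("tcp-nio.xml");

            channel.setReceiver(new ReceiverAdapter() {
                @Override
                public void viewAccepted(View view) {
                    // Failure detection surfaces here: unresponsive members
                    // are removed from the cluster view.
                    System.out.println("Cluster view: " + view);
                }

                @Override
                public void receive(Message msg) {
                    System.out.println("Received: " + msg.getObject());
                }
            });

            channel.connect("data-grid-cluster");
            channel.send(new Message(null, "node " + channel.getAddress() + " joined"));

            Thread.sleep(10_000);
            channel.close();
        }
    }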


Me

Red Hat public references include both Chicago Board Options Exchange (CBOE) and Cisco, and they have both presented at Red Hat Summit / JBoss World. I can’t think of an environment with higher demands for both performance and reliability than financial trading. The Pentaho BI Platform / Server includes a plugin for Infinispan (link). There is no Oracle Coherence plugin.

Oracle Guest

Only this? Coherence has thousands of customer references, including mission-critical ones that for years NEVER, I mean, NEVER restarted their servers. Come on, you can do better than this. Red Hat (you) should be a little bit more humble when talking about leaders like Oracle. Someday Red Hat will be a huge company, I don’t doubt that, but that hasn’t happened so far and it will take some time.

Me

Can you point out a list that includes thousands of public customer references for Oracle Coherence? The only list that I found includes 39 customer references, and that list includes duplicates (link). You state that thousands of Oracle Coherence customers have NEVER restarted their servers. That is a bold claim. Do you have evidence to support it? After all, servers may be restarted to upgrade the hardware and / or operating system. That, and enterprise software typically has a finite life cycle. Has not even one of those thousands of Oracle Coherence customers upgraded from their original version to the latest version? I suspect that you and I have different interpretations of “huge company”. I find it ironic that you demand humility while showing disrespect.


Me

Are you stating that because the company you work for (Oracle) productized (well, acquired) a data grid before the company I work for (Red Hat), and because my role is now in marketing, I lack credibility in the data grid domain? I would advise against such a statement. My technical knowledge of data grids is second to none, and it is not derived from my role in marketing. In my previous role, I worked in a developer / architect capacity with a number of enterprise organizations in the financial, telecommunications, and media sectors to integrate data grids in demanding environments.

Oracle Guest

Oh yes? Give me examples of data grid technologies you’ve worked with, scenarios of data partitioning and JVM tuning you’ve implemented, entity domain versioning strategies you’ve designed, hashCode algorithm strategies you’ve proposed for a complex composite key, examples of KPIs that you retrieved from JMX and from the DG, and of course, examples of the following DG scenarios: average latency less than 600 microseconds, 5k TPS or higher considering a transaction with a minimum size of 15KB, client applications based on Java, C++, .NET and “the rest of the world” that could be accessed with REST or SOAP, projects with more than 20K hours of duration (real projects) instead of stupid POCs, and usage of at least three serious data grid technologies including Coherence, GemFire, WebSphere eXtreme Scale, Gigaspaces, TIBCO ActiveSpaces, etc.

Me

I do not question your knowledge and experience, nor am I going to. I am dumbfounded as to why you feel justified in questioning mine.

I look at it like this. You have pilots, and you have mechanics. You have users, and you have engineers. A pilot knows how the controls work; a mechanic knows how the parts work. When it comes to JDG, I have been a full-time pilot and a part-time mechanic. However, the activities that you have mentioned are those of a user, not of an engineer. Further, they are not specific to data grids. It’s one thing to talk about metrics, latency, and throughput. It’s another to talk about concurrency, algorithms, and how distributed systems work.

JVM tuning: I have posted a handful of notes on both OS and JVM tuning (link / link / link). Instead of talking about JVM tuning, let’s talk about implementations of ConcurrentMap (link).

JMX: I have monitored and analyzed the performance of JDG with JBoss Operations Network, in-house tools, and BTrace. Here is a list of JMX attributes and operations for JDG (link).

An average latency of 600 microseconds is not particularly impressive in the financial trading industry. Nor is 5,000 transactions per second. Did I mention that I collaborated with their engineers and co-presented with them at Red Hat Summit / JBoss World? Instead of talking about latency and throughput, let’s talk about data structures and eviction algorithms (link).

I’ll be honest: I have not worked on projects that required integration in a heterogeneous environment. Those that have, have done so with REST and memcached. Oracle Coherence doesn’t support the memcached protocol, does it? Instead of talking about REST and SOAP, let’s talk about local / remote transaction contexts and the number of remote procedure calls (RPC) required for optimistic / pessimistic locking.

Partitioning and hashing: JDG has implemented consistent hashing and virtual nodes, a modern solution. It uses an implementation of the excellent MurmurHash3 algorithm (link). It does not rely on a dated implementation based on centralized and / or manual hashing. Does Oracle Coherence? Instead of talking about hashing, let’s talk about vector clocks.
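On the JMX point above, here is a rough sketch of pulling cache statistics over a remote JMX connection. The service URL, the ObjectName pattern, and the attribute names (Hits, Misses, AverageReadTime) are assumptions that vary by JDG / Infinispan version; treat them as illustrative rather than authoritative.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class CacheStatsReader {

        public static void main(String[] args) throws Exception {
            // Host, port, and the MBean names below are placeholders.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");

            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                ObjectName pattern = new ObjectName(
                        "org.infinispan:type=Cache,component=Statistics,*");

                for (ObjectName name : mbsc.queryNames(pattern, null)) {
                    Object hits = mbsc.getAttribute(name, "Hits");
                    Object misses = mbsc.getAttribute(name, "Misses");
                    Object avgRead = mbsc.getAttribute(name, "AverageReadTime");
                    System.out.printf("%s hits=%s misses=%s avgReadTime=%s%n",
                            name, hits, misses, avgRead);
                }
            } finally {
                connector.close();
            }
        }
    }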

Let’s talk about rebalancing and push / pull implementations.
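Since hashing, virtual nodes, and rebalancing all came up, here is a minimal sketch of a consistent-hash ring with virtual nodes. It illustrates the general technique only: JDG’s actual implementation uses MurmurHash3 and its own ownership and rebalancing logic, while this sketch uses CRC32 and made-up node names purely to stay dependency-free.

    import java.nio.charset.StandardCharsets;
    import java.util.SortedMap;
    import java.util.TreeMap;
    import java.util.zip.CRC32;

    public class ConsistentHashRing {

        private static final int VIRTUAL_NODES_PER_SERVER = 100;
        private final TreeMap<Long, String> ring = new TreeMap<>();

        public void addServer(String server) {
            for (int i = 0; i < VIRTUAL_NODES_PER_SERVER; i++) {
                ring.put(hash(server + "#" + i), server);
            }
        }

        public void removeServer(String server) {
            for (int i = 0; i < VIRTUAL_NODES_PER_SERVER; i++) {
                ring.remove(hash(server + "#" + i));
            }
        }

        // The owner of a key is the first virtual node clockwise from its hash.
        public String ownerOf(String key) {
            SortedMap<Long, String> tail = ring.tailMap(hash(key));
            return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
        }

        private static long hash(String s) {
            CRC32 crc = new CRC32();
            crc.update(s.getBytes(StandardCharsets.UTF_8));
            return crc.getValue();
        }

        public static void main(String[] args) {
            ConsistentHashRing ring = new ConsistentHashRing();
            ring.addServer("node-a");
            ring.addServer("node-b");
            ring.addServer("node-c");
            System.out.println("key-42 owned by " + ring.ownerOf("key-42"));

            // Removing a node only remaps the keys it owned; the rest stay put,
            // which is what keeps rebalancing incremental.
            ring.removeServer("node-b");
            System.out.println("key-42 owned by " + ring.ownerOf("key-42"));
        }
    }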

Of course, that is the benefit of open source software. Users can be engineers. They can understand the implementation by studying the code. That is exactly what I did. I studied the code, I modified the code, I created and submitted patches, and I engaged in discussions with Red Hat engineers on implementation details. With proprietary software, users can only be users.

Asking how fast someone has flown will not reveal how much they know about planes.

Are you familiar with all of the projects that I have been on? I ask because I’m uncertain as to why you would describe them as “stupid POCs”. I do not think that the engineers at CBOE or any of the other organizations that I have collaborated with would appreciate you calling the work that they put into production “stupid POCs”. I know I don’t.


Me

How am I “improving the reliability” of JBoss Data Grid by identifying the functionality that both JBoss Data Grid and Oracle Coherence provide? Do you not believe that JBoss Data Grid has implemented A LOT OF features? The functionality described in this post represents nearly all of the features and benefits listed in the Oracle Coherence data sheet (link). JBoss Data Grid lacks a few features provided by Oracle Coherence. Oracle Coherence lacks a few features provided by JBoss Data Grid. Would you say that Oracle Coherence has not implemented A LOT OF features because it lacks a few features provided by JBoss Data Grid?

Oracle Guest

No! It just integrated a couple of existing open-source technologies into a new ecosystem and productized them at a minimum level to take some money from customers with subscriptions. Nothing really new, innovative, creative or respectable. The type of thing Red Hat likes to do: take existing technologies, combine them and make some money.

Me

What are these open-source, existing technologies that you are referring to? Could they be Infinispan? Of course they exist; Red Hat created them. It would be hard to productize something that does not exist. I find it both disrespectful and insulting to Red Hat engineers to describe their work as not new, innovative, creative, or respectable. You said “take existing technologies, combine them and make some money”. Interesting. Is that not what Oracle did with Tangosol Coherence? Oracle purchased its data grid. Red Hat created its data grid.

Oracle Guest

You really know how to play with words, starting with the usage of the word “nearly” :)

You forgot some key features that only Coherence has, like: Elastic Data (off-heap and SSD storage of data), distributed GC against any type of storage and cache layout, and the ability to handle thousands of GB, even terabytes of data. Don’t come tell me that with on-heap allocation and a regular JVM like HotSpot (or OpenJDK, which is even worse) you could allocate terabytes of data. Native SDKs for C/C++ and .NET, Continuous Queries, support for many app servers rather than only JBoss, integration with Java EE 6 using the @Resource annotation, monitoring and management capabilities both integrated with the product and with external tools like Enterprise Manager, integration with the CEP world to enrich events and act as the clustering mechanism for fail-over scenarios, and security features that can deal with authentication, authorization, SSL and load-balancer (e.g. BigIP) integration. Pre-built filters and a powerful query language that make it easier for developers to interact with the cache instead of forcing them to write Java code, support for Hibernate, TopLink, EclipseLink, GoldenGate, etc. Thousands of pre-implemented scenario patterns in the product and externally with the incubator strategy started by Tangosol and now owned by Oracle. Oh, and of course: support for a high performance serialization strategy and a highly scalable TCP/IP implementation like TCMP. Not to mention support for InfiniBand-based networks.

Me

I admit that off-heap storage is an interesting concept. However, I question how practical it is. I would not recommend storing a TB of data on a single node, with or without off-heap storage. I recommend partitioning physical servers into multiple virtual servers. It increases node portability while reducing the effects (e.g. rebalancing) of adding or removing nodes. JDG supports Java EE integration with both @Resource and @Inject. Does Oracle Coherence not support @Inject? JDG includes management and monitoring as well. JDG clients are smart clients; it is not practical to load balance their requests. Can you point me to a list of these “thousands of pre-implemented scenario patterns”? JDG supports both high performance serialization (JBoss Marshalling) and a highly scalable TCP / IP implementation (JGroups). However, unlike Oracle Coherence with its Portable Object Format (POF), JDG does not require developers to write additional code to use high performance serialization (link). I will give you InfiniBand, but it may not matter for long (link).
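As a rough sketch of what @Inject-style integration looks like on the JDG / Infinispan side, the CDI producer below exposes an embedded cache for injection. The cache name, scoping, and the producer approach itself are illustrative assumptions; JDG’s own CDI support can make this even more direct.

    import javax.annotation.PreDestroy;
    import javax.enterprise.context.ApplicationScoped;
    import javax.enterprise.inject.Produces;
    import javax.inject.Inject;

    import org.infinispan.Cache;
    import org.infinispan.manager.DefaultCacheManager;
    import org.infinispan.manager.EmbeddedCacheManager;

    // CDI producer that exposes an embedded cache to the application.
    @ApplicationScoped
    public class CacheProducer {

        private final EmbeddedCacheManager cacheManager = new DefaultCacheManager();

        @Produces
        public Cache<String, String> sessionCache() {
            return cacheManager.getCache("session-data"); // cache name is an assumption
        }

        @PreDestroy
        public void shutdown() {
            cacheManager.stop();
        }
    }

    // Elsewhere in the application, the cache can then simply be injected:
    class SessionService {

        @Inject
        private Cache<String, String> cache;

        void remember(String sessionId, String data) {
            cache.put(sessionId, data);
        }
    }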

Update: I thought that when you referred to off-heap storage you were referring to off-heap memory. I had not realized that “off-heap storage” is the new marketing term for “disk storage”. It turns out that Elastic Data is marketing for “overflow to disk” (link). This was a feature provided by Ehcache 10 years ago. It’s a feature provided by JDG. You mentioned 1TB of data. However, Elastic Data can only support up to 100GB per node. It is not a persistence solution. It does not support eviction. It should not be used with aggregation (i.e. map / reduce) or entry processors.

Oracle Guest

None of the “unique” features provided by JDG are considered by real customers or independent analysts like Gartner, Forrester and IDC as really important. They are features that just align with the Red Hat strategy to force its entrance into the Big Data world, which on the other hand is a terrible strategy, because for a real Big Data strategy Red Hat lacks A LOT OF technology stacks compared with real Big Data vendors like Oracle, EMC and IBM. Just an example: even Oracle does not consider Coherence as its Big Data strategy. When Oracle talks about Coherence, they’re talking about caching, grid and in-memory computing scenarios, which fits perfectly to elastic data grid technologies.

Me

I find it funny that you justify the lack of features by stating that they are not important to analysts. Will you go on record as stating that Oracle Coherence will never implement the JDG features that it lacks? Software evolves. New features become standard features. Personally, I think data grids will continue to incorporate features provided by NoSQL implementations. Eventual consistency comes to mind.

What is a “real” customer? Is there another kind of customer?

What do these features have to do with Red Hat’s big data strategy? Did you see the Red Hat big data announcement (link)? It was quite clear on what is and what is not our big data strategy. Just as Coherence is not Oracle’s big data strategy, JDG is not Red Hat’s big data strategy. We too place our data grid in the context of in-memory distributed data and parallel processing.

I would expect a data grid to “fit perfectly to elastic data grid technologies”; it is, after all, a data grid, and one of the defining characteristics of a data grid is that it is elastic. However, there is some overlap between in-memory data grids, NoSQL, and big data platforms. They all distribute data and implement parallel processing. They provide data locality. As such, in-memory data grids fit perfectly inside of broader big data solutions.



