JMS Performance Notes

It's almost impossible to address the performance of the JMS implementation in WebLogic Server in a generic fashion. Message size, acknowledge mode, persistence mode, and type of consumer are just a few of the things that can affect performance. Add the JVM, the operating system, and the hardware, which also play a part, and you begin to see why we can't generalize. With so many variables, you can't extrapolate from the performance of another JMS-based application, no matter how similar to yours it seems. The only way to understand JMS performance is to test your own application (or a proof of concept).

It's possible, however, to run a few tests to see the cost differentials of the various options, thus setting expectations for the general behavior of JMS. Since we can't test all possible combinations, we'll limit ourselves to a subset of options using a simple application, a stock ticker. This is the most popular example of publish-and-subscribe messaging.

A stock ticker works by continuously presenting all the trades that occur in a stock exchange, providing the name of the company, the number of shares traded, and the price of the trade. This can be cumbersome for those only interested in the trades of a few companies. Using the Pub/Sub messaging model, they can view only the trades of the companies of interest.

From the JMS perspective, a stock ticker application can be seen as one that sends an event to a topic for every trade that occurs in a stock exchange. Consumers receive only the desired events from selected companies by subscribing to the appropriate topic and specifying the desired companies. In this case, the message selector is the company name or symbol.
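As a sketch of what such a subscription might look like in code: a JMS message selector is a SQL-92-style string evaluated against message properties. The article doesn't show its plug-in code, so the property name `symbol` is an assumption; the helper below only builds the selector string, with the actual JMS call shown in a comment.

```java
import java.util.List;

// Illustrative only: builds the selector a trader would pass when creating its
// subscriber, e.g. session.createSubscriber(topic, selector, false);
// The property name "symbol" is an assumption, not taken from the article.
public class SelectorBuilder {

    /** Builds a JMS selector such as: symbol IN ('BEA', 'SUNW', 'IBM') */
    public static String forSymbols(List<String> symbols) {
        StringBuilder sb = new StringBuilder("symbol IN (");
        for (int i = 0; i < symbols.size(); i++) {
            if (i > 0) sb.append(", ");
            sb.append('\'').append(symbols.get(i)).append('\'');
        }
        return sb.append(')').toString();
    }
}
```

With a selector like this, the topic delivers only the trades for the listed companies; all other messages are filtered out on the server side.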

Before going into more detail on the test application, let's review how performance is measured.

A common problem associated with performance testing a messaging-based system is misunderstanding the performance metrics. Performance of asynchronous messaging systems is typically measured based on throughput. In this case, the most obvious throughput measurement is "messages per second" (MPS). However, you have to be very careful with this measurement because throughput is a measure of capacity, not speed. MPS tends to be interpreted as a measurement of speed, which is not the case. Consider, for example, consumers who can't process messages fast enough - the messages are just waiting in the corresponding queue or topic. Alternatively, when the message publishers can't produce a high enough rate of messages, the consumers are just waiting. In both examples, the messaging system handles messages at a pace imposed by external factors (message producers and consumers). It's important to measure the throughput for both the message producer(s) and the consumer(s) because each is heavily dependent on the other.
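A minimal sketch of such a measurement, assuming nothing more than counting messages against wall-clock time. The point is that the same meter is run separately on the producer and the consumer sides, because each side's rate is constrained by the other; this is not the article's Grinder plug-in code.

```java
// Illustrative throughput bookkeeping: one instance per producer or consumer.
public class ThroughputMeter {
    private long count = 0;
    private final long startNanos;

    public ThroughputMeter(long startNanos) {
        this.startNanos = startNanos;
    }

    /** Call once per message published (producer side) or received (consumer side). */
    public void record() {
        count++;
    }

    /** Throughput in messages per second (MPS) over the elapsed interval. */
    public double mps(long nowNanos) {
        double seconds = (nowNanos - startNanos) / 1e9;
        return count / seconds;
    }
}
```

Note that a high MPS figure here says only that messages moved through at that rate during the interval; it says nothing about how long any individual message waited in the topic.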

The Stock Ticker Application
This example uses an oversimplified version of a rather primitive stock ticker application where there's only one producer or publisher of trades in the stock exchange. Each trade is a message placed on a single topic. The single publisher continuously publishes events to the topic. The event in this case is a message of a particular type identified by a property in the message, which contains the symbol of the company for which the trade was done. There is one trade per company, and each company on the exchange trades in an orderly, sequential fashion (this is extremely simplistic, but still effective for testing purposes).

On the consumer side, each customer subscribes to a few of the companies that trade in the stock exchange. Some of the message consumers subscribe to listen to events (trades) of the same companies, so some subscribers will receive the same messages.

Note that because of the transient nature of the information, it doesn't make sense to have durable subscribers or persistent messaging. If the message detailing a particular trade is lost for any reason, by the time it's recovered from the server it will probably be obsolete because many other stock trades will have occurred in the meantime. In this kind of application it makes more sense for the subscriber to simply wait for the next message. With this in mind, our tests are limited to two non-persistent acknowledge modes: no acknowledge and multicast no acknowledge (bear in mind that this refers to the subscriber; the publisher's acknowledge mode is always AUTO_ACKNOWLEDGE).

Testing Environment
These tests use WebLogic Server 6.1 SP2 and JDK 1.3.1-b24 with a 256MB heap, running on a Sun Ultra 60 (dual UltraSPARC 450MHz, 512MB of memory). The load is generated using The Grinder (http://grinder.sf.net; see the related article in WLDJ, Vol. 1, issue 7). Custom plug-ins provide the functionality of the stock ticker publisher and the consumers.

In this example, the publisher creates 100 different types of messages (0-99), each 64 bytes, which is the approximate size of this kind of message. The stock exchange consists of 100 companies, where each company makes one trade at a time, always in a sequential fashion.

Think time isn't used for publishing the messages in these tests. However, the publisher does write a line to The Grinder log file for every message published, which makes the simulation more realistic.

On the other side of the messaging system, the subscriber plug-in simulates, in a very basic way, a trader that is subscribed to receive events of 25 of the possible 100 companies. Every trader runs on its own JVM and establishes a JMS connection and session before it starts the test run. During the test run it receives the messages it's subscribed to, in this case a range of 25 contiguous message types where the first type has been selected randomly. In real life, every trader would have subscribed to a number of companies that aren't likely to be a block of an alphabetically ordered list of companies; it's modeled this way for the sake of simplicity.
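The trader's subscription described above can be sketched as follows. The article doesn't reproduce the plug-in code, so the property name `companyType` and the exact selector form are assumptions; what the code captures is the stated behavior of a randomly placed block of 25 contiguous types out of 100.

```java
import java.util.Random;

// Sketch of how each simulated trader could pick its 25 contiguous message
// types out of the 100 companies. Property name "companyType" is hypothetical.
public class TraderSubscription {
    static final int COMPANIES = 100;
    static final int BLOCK = 25;

    /** Random block start in [0, 75], so the block of 25 stays within 0-99. */
    public static int randomStart(Random rnd) {
        return rnd.nextInt(COMPANIES - BLOCK + 1);
    }

    /** Selector for the contiguous block, e.g. "companyType BETWEEN 37 AND 61". */
    public static String selector(int start) {
        return "companyType BETWEEN " + start + " AND " + (start + BLOCK - 1);
    }
}
```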

The subscriber does nothing but write a line to The Grinder log file for every message received. This is very important because it has an impact on performance, and your application is likely to do an operation as time-consuming as writing to a file. The publisher and each consumer run on their own JVMs on four computers (Pentium III 600MHz, 256MB memory, SuSE Linux 7.0). Special care has been taken to ensure that no paging or swapping occurs during the execution of the test runs.

Test Runs
Publisher and subscriber performance (MPS) are investigated using the no acknowledge mode under various subscriber loads (see Table 1).

  • For the case of one subscriber, the publisher-to-subscriber performance exhibits a 1:1 relationship (remember that the consumer is subscribed to only 25% of the messages).
  • For other subscriber loads, the throughput is very stable, at around 25 MPS.
  • For other subscriber loads, the 1:1 relationship no longer holds. This happens because the consumers are waiting for the events they're subscribed to. If every consumer were subscribed to exactly the same block of companies, we'd expect to see a rate of about 47 MPS, but the beginning of the block is randomly selected. This means many consumers will be idle waiting for the first message of their block. This idle time lowers the average number of messages received.

We repeated the above test run using the multicast no acknowledge mode. We expected this mode to be faster, though it's less reliable. Figure 1 compares the publisher throughput for the two modes.

As expected, the throughput is substantially faster using multicast - a little more than three times faster. Figure 2 depicts the same comparison for subscriber performance.

Since we don't observe a similar trend for the subscriber performance, we must investigate why. First, we rule out the possibility that we're losing messages by looking at the network usage for 60 consumers (see Figure 3).

With a paltry 3.5% utilization, it's hard to imagine that messages are being lost because of high traffic. This is interesting because the network utilization here is less than one-half of that observed in the no acknowledge mode. We then check the CPU usage of the computer running the JMS topic in Figure 4.

Again, the activity is about half that observed using the no acknowledge mode. Thus, we're convinced we're not losing messages for these reasons.

Custom Grinder plug-ins provide us with the actual number of messages handled during the sample period, so we can proceed to analyze this. First, we examine the number of messages handled using no acknowledge mode. Table 2 shows the actual number of messages produced by the publisher and received by the consumer. Our expectation that each subscriber could handle, in the best case, 25% of the published messages can't be met because of the overlap between the various blocks of companies to which the consumers are subscribed. Thus, the differential of 40-50% seems reasonable. The differential increases as the number of consumers increases, which again seems reasonable. Next we perform the same analysis for the multicast no acknowledge test runs (see Table 3). The differential is almost double the expected 40-50%.
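The differential arithmetic can be made concrete. With each subscriber selecting 25 of the 100 message types, the best case is receiving 25% of the published messages; the differential is how far short of that best case the measured count falls. The counts below are hypothetical, for illustration only; the article's actual table values are not reproduced here.

```java
// Back-of-envelope check of the "differential" reported in Tables 2 and 3.
public class Differential {

    /** Percentage shortfall of the measured count versus the 25% best case. */
    public static double percentShortfall(long published, long receivedPerSubscriber) {
        double bestCase = published * 0.25; // subscribed to 25 of 100 types
        return 100.0 * (1.0 - receivedPerSubscriber / bestCase);
    }
}
```

For example, with 1,000 published messages the best case per subscriber is 250; receiving only 150 of them is a 40% differential, which is in the range the no acknowledge runs showed.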

There are a few things happening here. The messages aren't getting lost; they're in the topic. We proved this by stopping the producer of the messages; after a few minutes the consumer had picked up all the messages. More important, the consumers are already at their maximum speed; changing the transport mechanism from no ack to multicast no ack will not make things go faster.

To use an analogy: with the no acknowledge mode we're drinking water from a glass; with the multicast no acknowledge mode we're drinking water from a fireman's hose. A couple of test runs with 60 consumers illustrate this, this time using a sleep time of 2 milliseconds before publishing every message (see Table 4).

As you can see, messages are now published and consumed at about the same rate. This example illustrates that:

  • MPS is more a measure of throughput capacity than plain speed. Notice how the publisher MPS decreases from 195 MPS for no acknowledge mode to about 50 MPS with the addition of a 2-millisecond sleep time before publishing every message.
  • You have to be very careful when defining the throughput for your application and interpreting the results.
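The first point can be checked against a simple, assumed rate model: if publishing one message costs c milliseconds and the publisher sleeps s milliseconds between messages, the rate is 1000/(c+s) MPS. At 195 MPS the per-message cost is about 5.1 ms, so a literal 2 ms sleep would predict roughly 140 MPS; the observed ~50 MPS implies the effective pause was larger (timer granularity of Thread.sleep on that era's JVM/OS is a plausible cause, though the article doesn't say).

```java
// A deliberately simple publisher-rate model; it ignores contention and GC.
public class RateModel {

    /** Messages per second given a per-message cost and an inter-message sleep. */
    public static double mps(double costMs, double sleepMs) {
        return 1000.0 / (costMs + sleepMs);
    }

    /** Per-message cost in milliseconds implied by an observed MPS figure. */
    public static double perMessageCostMs(double mps) {
        return 1000.0 / mps;
    }
}
```

Working backwards, 50 MPS implies about 20 ms per message, i.e. an effective pause of roughly 15 ms on top of the ~5 ms publish cost.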

Conclusion
No matter how similar your application might look to another, you can't extrapolate performance results. Testing your application is the only way to really understand JMS performance.

Acknowledgements
This article is an extract from the book J2EE Performance Testing by Peter Zadrozny (Expert Press, June 2002). Thanks to Phil Aston for writing the custom plug-ins for The Grinder. The software used in these tests can be downloaded from www.expert-press.com.

About the Author

Peter Zadrozny is CTO of StrongMail Systems, a leader in digital messaging infrastructure. Before joining StrongMail he was vice president and chief evangelist for Oracle Application Server and prior to joining Oracle, he served as chief technologist of BEA Systems for Europe, Middle East and Africa.
