
JMS Performance Notes


It's almost impossible to address the performance of the JMS implementation in WebLogic Server in a generic fashion. Message size, acknowledge mode, persistence mode, and type of consumer are just a few of the things that can affect performance. Add the JVM, the operating system, and the hardware, which also play a part, and you begin to see why we can't generalize. With so many variables, you can't extrapolate performance from another JMS-based application, no matter how similar it seems to yours. The only way to understand JMS performance is to test your own application (or a proof of concept).

It's possible, however, to run a few tests to see the cost differentials of the various options, and so set expectations for the general behavior of JMS. Since we can't test all possible combinations, we'll limit ourselves to a subset of options using a simple application, a stock ticker, the most popular example of publish-and-subscribe messaging.

A stock ticker works by continuously presenting all the trades that occur in a stock exchange, providing the name of the company, the number of shares traded, and the price of the trade. This can be cumbersome for those only interested in the trades of a few companies. Using the Pub/Sub messaging model, they can view only the trades of the companies of interest.

From the JMS perspective, a stock ticker application can be seen as one that sends an event to a topic for every trade that occurs in a stock exchange. Consumers receive only the desired events from selected companies by subscribing to the appropriate topic and specifying the desired companies. In this case, the message selector is the company name or symbol.
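In JMS, that filtering is expressed as a message selector string. As a minimal, self-contained sketch (the property name `symbol` is an assumption; the article doesn't name the property), the selector for a handful of companies might be built like this:

```java
// Minimal sketch of building a JMS message selector over a set of
// stock symbols. The property name "symbol" is an assumption; in a
// real subscriber the resulting string would be passed to
// TopicSession.createSubscriber(topic, selector, false).
public class SelectorBuilder {
    public static String forSymbols(String... symbols) {
        StringBuilder sb = new StringBuilder("symbol IN (");
        for (int i = 0; i < symbols.length; i++) {
            if (i > 0) sb.append(", ");
            sb.append('\'').append(symbols[i]).append('\'');
        }
        return sb.append(')').toString();
    }

    public static void main(String[] args) {
        System.out.println(forSymbols("BEA", "SUNW", "IBM"));
        // → symbol IN ('BEA', 'SUNW', 'IBM')
    }
}
```

The `IN` form is standard JMS selector syntax; `symbol = 'BEA'` works equally well for a single company.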

Before going into more detail on the test application, let's review how performance is measured.

A common problem associated with performance testing a messaging-based system is misunderstanding the performance metrics. Performance of asynchronous messaging systems is typically measured based on throughput. In this case, the most obvious throughput measurement is "messages per second" (MPS). However, you have to be very careful with this measurement because throughput is a measure of capacity, not speed. MPS tends to be interpreted as a measurement of speed, which is not the case. Consider, for example, consumers who can't process messages fast enough - the messages are just waiting in the corresponding queue or topic. Alternatively, when the message publishers can't produce a high enough rate of messages, the consumers are just waiting. In both examples, the messaging system handles messages at a pace imposed by external factors (message producers and consumers). It's important to measure the throughput for both the message producer(s) and the consumer(s) because each is heavily dependent on the other.
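As a back-of-the-envelope illustration (not part of the article's test harness), the MPS figure for each side is simply a message count over an elapsed wall-clock interval, computed separately for producer and consumer:

```java
// Computes messages per second (MPS). Because throughput is a
// capacity figure, the producer side and the consumer side must
// each be measured with their own count and their own clock.
public class Throughput {
    public static double mps(long messages, long elapsedMillis) {
        if (elapsedMillis <= 0) {
            throw new IllegalArgumentException("elapsed time must be positive");
        }
        return messages * 1000.0 / elapsedMillis;
    }

    public static void main(String[] args) {
        // e.g. a publisher that pushed 11,700 messages in one minute:
        System.out.println(mps(11_700, 60_000)); // 195.0 MPS
    }
}
```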

The Stock Ticker Application
This example uses an oversimplified version of a rather primitive stock ticker application where there's only one producer or publisher of trades in the stock exchange. Each trade is a message placed on a single topic. The single publisher continuously publishes events to the topic. The event in this case is a message of a particular type identified by a property in the message, which contains the symbol of the company for which the trade was done. There is one trade per company, and each company on the exchange trades in an orderly, sequential fashion (this is extremely simplistic, but still effective for testing purposes).
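The publisher's sequencing can be sketched as a loop that cycles through the companies in order. The JMS publish call is replaced here by recording the message type, so the sketch runs without a broker (the 100-company count and 0-99 type scheme are from the article; everything else is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Self-contained sketch of the publisher's sequencing: 100 companies
// (message types 0-99) trade one after another, wrapping around. The
// actual JMS publish (a message carrying the type in a property) is
// stood in for by recording the type.
public class TickerPublisher {
    static final int COMPANIES = 100;

    // Returns the sequence of message types for n published trades.
    public static List<Integer> publish(int n) {
        List<Integer> types = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            types.add(i % COMPANIES); // strictly sequential, wraps at 100
        }
        return types;
    }

    public static void main(String[] args) {
        List<Integer> seq = publish(205);
        System.out.println(seq.get(0) + " " + seq.get(99) + " " + seq.get(100));
    }
}
```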

On the consumer side, each customer subscribes to a few of the companies that trade in the stock exchange. Some of the message consumers subscribe to listen to events (trades) of the same companies, so some subscribers will receive the same messages.

Note that because of the transient nature of the information, it doesn't make sense to have durable subscribers or persistent messaging. If the message detailing a particular trade is lost for any reason, by the time it's recovered from the server it will probably be obsolete, because many other trades will have occurred in the meantime. In this kind of application it makes more sense for the subscriber to simply wait for the next message. With this in mind, our tests are limited to two acknowledge modes, no acknowledge and multicast no acknowledge, with no persistence (bear in mind that this refers to the subscriber; the publisher's acknowledge mode is always AUTO_ACKNOWLEDGE).

Testing Environment
These tests use WebLogic Server 6.1 SP2, JDK 1.3.1-b24 with a heap of 256MB running on a Sun Ultra 60 (dual Ultra SPARC 450MHz, 512 MB of memory). The load is generated using The Grinder (http://grinder.sf.net; see the related article in WLDJ, Vol. 1, issue 7). There are special plug-ins for the functionality of the stock ticker publisher and the consumers.

In this example, the publisher creates 100 different types of messages (0-99), each 64 bytes, which is the approximate size of this kind of message. The stock exchange consists of 100 companies, where each company makes one trade at a time, always in a sequential fashion.

Think time isn't used for publishing the messages in these tests. However, the publisher does write a line to The Grinder log file for every message published, which makes the simulation more realistic.

On the other side of the messaging system, the subscriber plug-in simulates, in a very basic way, a trader that is subscribed to receive events of 25 of the possible 100 companies. Every trader runs on its own JVM and establishes a JMS connection and session before it starts the test run. During the test run it receives the messages it's subscribed to, in this case a range of 25 contiguous message types where the first type has been selected randomly. In real life, every trader would have subscribed to a number of companies that aren't likely to be a block of an alphabetically ordered list of companies; it's modeled this way for the sake of simplicity.
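A trader's subscription, a block of 25 contiguous message types out of 100 starting at a random type and wrapping past 99, can be sketched as follows (the resulting set would then drive the subscriber's message selector; the class and method names are illustrative):

```java
import java.util.LinkedHashSet;
import java.util.Random;
import java.util.Set;

// Sketch of one simulated trader's subscription: 25 contiguous
// message types out of 100, starting at a randomly chosen type and
// wrapping around past 99.
public class TraderSubscription {
    static final int COMPANIES = 100;
    static final int BLOCK = 25;

    public static Set<Integer> block(int start) {
        Set<Integer> types = new LinkedHashSet<>();
        for (int i = 0; i < BLOCK; i++) {
            types.add((start + i) % COMPANIES); // wrap around past 99
        }
        return types;
    }

    public static void main(String[] args) {
        int start = new Random().nextInt(COMPANIES);
        System.out.println("subscribed to types " + block(start));
    }
}
```

Because each trader's start is independent, the 60-consumer runs discussed below end up with heavily overlapping blocks.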

The subscriber does nothing but write a line to The Grinder log file for every message received. This is very important because it has an impact on performance, and your application is likely to do an operation as time-consuming as writing to a file. The publisher and each consumer run on their own JVMs on four computers (Pentium III 600MHz, 256MB memory, SuSE Linux 7.0). Special care has been taken to ensure that no paging or swapping occurs during the execution of the test runs.

Test Runs
Publisher and subscriber performance (MPS) are first investigated using the no acknowledge mode under various subscriber loads (see Table 1).

  • For one subscriber, the publisher-to-subscriber performance exhibits a 1:1 relationship (remember that the consumer is subscribed to only 25% of the messages).
  • For higher subscriber loads, the throughput is very stable, at around 25 MPS.
  • At those higher loads the 1:1 relationship no longer exists, because the consumers spend time waiting for the events they're subscribed to. If every consumer were subscribed to exactly the same block of companies, we'd expect a rate of about 47 MPS; but since the beginning of each block is selected randomly, many consumers sit idle waiting for the first message of their block, and this idle time lowers the average number of messages received.

We repeated the test run using the multicast no acknowledge mode, which we expected to be faster, though it's less reliable. Figure 1 compares the publisher throughput for the two modes.

As expected, the throughput is substantially faster using multicast - a little more than three times faster. Figure 2 depicts the same comparison for subscriber performance.

Since we don't observe a similar trend for the subscriber performance, we must investigate why. First, we rule out the possibility that we're losing messages by looking at the network usage for 60 consumers (see Figure 3).

With a paltry 3.5% utilization, it's hard to imagine that messages are being lost because of high traffic. This is interesting because the network utilization here is less than one-half of that observed in the no acknowledge mode. We then check the CPU usage of the computer running the JMS topic in Figure 4.

Again, the activity is about half that observed using the no acknowledge mode. Thus, we're convinced we're not losing messages for these reasons.

Custom Grinder plug-ins provide us with the actual number of messages handled during the sample period, so we can analyze this directly. First, we examine the no acknowledge mode: Table 2 shows the actual number of messages produced by the publisher and received by the consumers. Our expectation that each subscriber could handle, in the best case, 25% of the published messages isn't met because of the overlap between the various blocks of companies to which the consumers are subscribed, so the differential of 40-50% seems reasonable. The differential increases as the number of consumers increases, which again seems reasonable. Next we perform the same analysis for the multicast no acknowledge test runs (see Table 3). Here the differential is almost double the expected 40-50%.

There are a few things happening here. The messages aren't getting lost; they're in the topic. We proved this by stopping the producer of the messages; after a few minutes the consumer had picked up all the messages. More important, the consumers are already at their maximum speed; changing the transport mechanism from no ack to multicast no ack will not make things go faster.

Using an analogy: with the no acknowledge mode we're drinking water from a glass; with the multicast no acknowledge mode we're drinking from a fireman's hose. A couple of test runs with 60 consumers illustrate this, this time using a sleep time of 2 milliseconds before publishing every message (see Table 4).
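The throttling is the only change to the publisher: a short sleep before each publish call. A self-contained sketch, with the publish again stood in by a counter so it runs without a JMS provider:

```java
// Sketch of the throttled publisher: sleep 2 ms before every publish.
// The publish itself is stubbed with a counter. Note the sleep alone
// caps the rate at ~500 MPS; the article's observed ~50 MPS means the
// rest of the per-message work (logging, publish overhead) dominates.
public class ThrottledPublisher {
    public static int publishWithThinkTime(int n, long sleepMillis)
            throws InterruptedException {
        int published = 0;
        for (int i = 0; i < n; i++) {
            Thread.sleep(sleepMillis); // think time before each message
            published++;               // stand-in for publisher.publish(msg)
        }
        return published;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        int sent = publishWithThinkTime(50, 2);
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(sent + " messages in " + elapsed + " ms");
    }
}
```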

As you can see, messages are now published and consumed at about the same rate. This example illustrates that:

  • MPS is more a measure of throughput capacity than plain speed. Notice how the publisher MPS decreases from 195 MPS for no acknowledge mode to about 50 MPS with the addition of a 2-millisecond sleep time before publishing every message.
  • You have to be very careful when defining the throughput for your application and interpreting the results.

No matter how similar your application might look to another, you can't extrapolate performance results. Testing your application is the only way to really understand JMS performance.

This article is an extract from the book J2EE Performance Testing by Peter Zadrozny (Expert Press, June 2002). Thanks to Phil Aston for writing the custom plug-ins for The Grinder. The software used in these tests can be downloaded from www.expert-press.com.


Peter Zadrozny is CTO of StrongMail Systems, a leader in digital messaging infrastructure. Before joining StrongMail he was vice president and chief evangelist for Oracle Application Server and prior to joining Oracle, he served as chief technologist of BEA Systems for Europe, Middle East and Africa.
