
The Grinder: Load Testing for Everyone


The Grinder is an easy-to-use Java-based load generation and performance measurement tool that adapts to a wide range of J2EE applications. If you have a J2EE performance measurement requirement, The Grinder will probably fit the bill.

Paco Gómez developed the original version of The Grinder for Professional Java 2 Enterprise Edition with BEA WebLogic Server (Wrox Press, 2000). I took ownership of the source code at the end of 2000 and began The Grinder 2 stream of development. The Grinder is freely available under a BSD-style license.

This article will introduce only the basic features of The Grinder. I encourage you to download the tool and try it out. The recently published J2EE Performance Testing by Peter Zadrozny, Ted Osborne, and me (Expert Press, 2002) contains much more information about The Grinder.

Where to Obtain The Grinder
You can download The Grinder distribution from The Grinder home page at http://grinder.sourceforge.net. The examples in this article were run using The Grinder 2.8.3.

There are some mailing lists that you can join to become a part of The Grinder community:

  • grinder-announce: Low-volume notifications of new releases
  • grinder-use: The place to ask for help
  • grinder-development: For those interested in developing The Grinder
So, What Is The Grinder?
In short, The Grinder is a framework for generating load by simulating client requests to your application, and for measuring how your application copes with that load.

Typically, you will have begged, bought, or borrowed a number of test-client machines with which to test your application. You can use The Grinder console to control many processes across your test-client machines, each running many threads of control. The Grinder is a pure Java application, so there's a wide variety of platforms that you can use.

Three types of processes make up The Grinder:

  • Agent processes: A single agent process runs on each test-client machine and is responsible for managing the worker processes on that machine.

  • Worker processes: Created by The Grinder agent processes, they are responsible for performing the tests.

  • The console: Coordinates the other processes and collates statistics.

    Each of these processes is a Java Virtual Machine (JVM) and can be run on any computer with a suitable version of Java installed.

    To run a given set of tests, an agent process is started on each test-client machine; it creates a number of worker processes. Each worker process loads a plug-in component that determines the type of tests to run, then starts a number of worker threads, each of which uses the plug-in to execute tests. For example, with the provided HTTP plug-in each test corresponds to an HTTP request to a URL.

    The grinder.properties file is a configuration file that is read by the agent and worker processes, and the plug-in. This file contains all the information necessary to run a particular set of tests, such as the number of worker processes, the number of worker threads, and the plug-in to use. For most plug-ins, the file also specifies the tests to run and can be thought of as the "test script." For example, when using the HTTP plug-in, the grinder.properties file contains the URL for each test.

    The agent process and the worker processes read their configuration from grinder.properties when they are started (see Figure 1). I usually put the grinder.properties file on a shared network drive so I don't have to copy it to each of the test-client machines.

    The net effect of this scheme is to allow the easy configuration of many separate client contexts, each of which will run the same set of tests against your server or servers. Each context simulates an active user session. The number of contexts is given by the following formula:

    (Number of agent processes) x (Number of worker processes)
    x (Number of worker threads)
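For example, three test-client machines, each running two worker processes of five threads, simulate 30 user sessions. As a trivial sketch of the arithmetic (class and method names are mine, for illustration only):

```java
class ContextArithmetic {
    // One simulated user session per worker thread.
    public static int simulatedUsers(int agents, int processesPerAgent,
                                     int threadsPerProcess) {
        return agents * processesPerAgent * threadsPerProcess;
    }

    public static void main(String[] args) {
        // 3 test-client machines, 2 worker processes each, 5 threads each.
        System.out.println(simulatedUsers(3, 2, 5)); // prints 30
    }
}
```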

    The Console
    The Grinder console (see Figure 2) provides an easy way to control multiple test-client machines, display test results, and control test runs.

    The console is used to coordinate the actions of the worker processes by sending them start, reset, and stop commands. IP multicast is used to broadcast the commands simultaneously to processes running on many machines. The worker processes send statistics reports to the console, which combines these reports to produce graphs and tables showing test activity. The results of a particular test run can be saved for further analysis.

    The console also calculates and displays derived statistics. A key derived statistic that the console can calculate, but the individual worker processes cannot, is a combined transactions per second (TPS) figure for all the worker processes. This is because a rate, such as TPS, can't be calculated without a shared notion of the beginning and the end of the timing period. The console performs the required timekeeping function.
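The combined TPS figure amounts to summing each worker's transaction count over one shared sample interval and dividing by the interval length. A minimal sketch of that calculation (the class and method names are mine, not The Grinder's):

```java
class TpsSketch {
    // Sum the transaction counts reported by every worker process for one
    // console sample interval, then divide by the interval length. Only the
    // console can do this, because it owns the shared clock that defines
    // where the interval begins and ends.
    public static double combinedTps(long[] transactionsPerWorker,
                                     long intervalMillis) {
        long total = 0;
        for (long n : transactionsPerWorker) {
            total += n; // counts from all worker processes
        }
        return total * 1000.0 / intervalMillis;
    }

    public static void main(String[] args) {
        long[] reports = {120, 95, 110}; // one interval's counts, 3 workers
        System.out.println(combinedTps(reports, 5000)); // prints 65.0
    }
}
```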

    Getting Started
    Have I whetted your appetite? Let's try running The Grinder. In this example, we'll start both the console and an agent process on a single machine.

    Having expanded The Grinder distribution and set up your CLASSPATH appropriately (see the README file provided with The Grinder for details), you can start the console with the following command:

    $ java net.grinder.Console

    The console window should appear. Now change to a directory to hold the output of the worker processes and create a grinder.properties file:




    This particular file specifies that there will be two worker processes with five worker threads each, and that the HTTP plug-in will be used. It also defines two tests that involve accessing resources from the BEA e-docs site.
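The original listing is not reproduced above. As an illustration only, a grinder.properties along those lines might look like the following (the property names follow The Grinder 2 conventions; the URLs are placeholders for the e-docs resources, not the ones used in the article):

```properties
grinder.processes=2
grinder.threads=5

grinder.plugin=net.grinder.plugin.http.HttpPlugin

grinder.test0.description=e-docs index page
grinder.test0.parameter.url=http://edocs.bea.com/index.html

grinder.test1.description=WebLogic Server documentation page
grinder.test1.parameter.url=http://edocs.bea.com/wls/docs70/index.html
```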

    Start the agent process in the same directory as the grinder.properties file:

    $ java net.grinder.Grinder

    The console display will update to show the two tests. To instruct the worker processes to start the test run, select Start processes from the Action menu. After a short delay, the console display will show graphs of the incoming reports.

    Individual graphs will show the TPS for each test, and a full graph will show the total TPS. Alongside each graph, the mean transaction time, mean transactions per second, peak transactions per second, number of transactions, and number of errors recorded for each test are shown. The colors of the individual test graphs vary from yellow to red to indicate the tests that have the longest mean transaction times. The more red a test graph is, the longer the transactions for that test are taking.

    Try selecting the Results tab to see the results in a tabular form. You can also select the Sample tab to show the sum of all reports received during the current console sample interval.

    Note: If this example doesn't work the first time, it's usually something straightforward. Have a look through the documentation that comes with The Grinder, and if that doesn't help, ask on the grinder-use mailing list.

    Recording Test Scripts
    It's quite feasible to have HTTP plug-in grinder.properties test scripts containing hundreds or thousands of individual tests. The Grinder lets you specify the timing of each test. Additionally, the HTTP plug-in provides support for setting cookies, authentication information, dynamically generated requests, HTTPS, and other HTTP features. All of these are configured using properties in the grinder.properties file.

    Writing such test scripts by hand quickly becomes impractical. The Grinder ships with a tool, the TCP Sniffer, that can capture the HTTP requests a user makes with a browser and automatically generate the corresponding test-script entries. The TCP Sniffer sits between the user's browser and the target server, recording every request the browser makes before forwarding it on to the server; the responses it receives from the server are returned to the browser. (Technically the TCP Sniffer is a proxy, not a sniffer at all, but it's very useful despite being misnamed!)

    You can start the TCP Sniffer in a special mode in which it outputs a recording of the requests you make with the browser as a full grinder.properties test script. You can then take this test script and replay it using The Grinder.
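The core of this record-and-forward scheme can be sketched as a byte-copying loop. This is a simplification of my own, not The Grinder's code; the real TCP Sniffer also manages sockets, threads, and test-script generation around such a loop:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative core of a recording proxy: bytes from the browser are
// logged (to build a test script) and forwarded unchanged to the server.
// Responses flow back through a second relay in the opposite direction.
class RelaySketch {
    // Copies everything from 'from' to 'to', echoing a copy to 'log'.
    // Returns the total number of bytes relayed.
    public static long relay(InputStream from, OutputStream to,
                             OutputStream log) throws IOException {
        byte[] buffer = new byte[4096];
        long total = 0;
        int n;
        while ((n = from.read(buffer)) != -1) {
            log.write(buffer, 0, n); // record for script generation
            to.write(buffer, 0, n);  // forward to the real destination
            total += n;
        }
        to.flush();
        return total;
    }
}
```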

    More Than HTTP
    While the HTTP plug-in is the most commonly used, The Grinder can also be used in contexts other than Web and Web-services testing. Two other example plug-ins are shipped with The Grinder: a JUnit plug-in that lets you exercise a JUnit test case repeatedly from many threads, and a raw socket plug-in.

    It's also easy to write your own plug-in - you just provide a Java class that conforms to a simple interface. I often do this to test J2EE applications with EJB or a JMS interface.
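The real plug-in interface is defined in The Grinder's source; the sketch below uses hypothetical interface and class names of my own to show the general shape such a plug-in takes: initialize once from the test parameters, then run tests repeatedly from worker threads.

```java
import java.util.Properties;

// Hypothetical names: The Grinder's actual plug-in interface differs,
// but the shape is similar.
interface LoadTestPlugin {
    void initialize(Properties testParameters);
    boolean doTest(int testNumber) throws Exception; // true on success
}

// A stand-in for an EJB or JMS test: each "test" just sleeps for a
// configurable period, as if waiting on a remote call.
class SleepPlugin implements LoadTestPlugin {
    private long sleepMillis;

    public void initialize(Properties testParameters) {
        sleepMillis = Long.parseLong(
            testParameters.getProperty("sleepMillis", "10"));
    }

    public boolean doTest(int testNumber) throws InterruptedException {
        Thread.sleep(sleepMillis); // the worker thread times this call
        return true;
    }
}
```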

    The Grinder is already a powerful tool, but it can be improved. One of the key limitations is that each worker process executes the tests in the test script sequentially, in a fixed order. The Grinder 3 will address this by allowing tests to be specified using a variety of scripting languages, including Visual Basic, Jython, and JavaScript. Test scripts will allow arbitrary branching and looping, perhaps using the scripting languages' support for random variables. That's if I can find the hacking time.

    Happy grinding!

    I am grateful to Tony Davis and the Expert Press team for their permission to use material from J2EE Performance Testing. As well as full coverage of The Grinder, this book contains much practical information about J2EE performance and application benchmarking.

    I also wish to express gratitude to VA Software for the SourceForge site (http://sourceforge.net/). SourceForge is without doubt a great resource for the open source community and is responsible for the continued success of The Grinder and many other open-source projects.

