Is the Glass Half Full or Half Empty?

Threads, Amdahl, and a very big machine

In this column over the years, I have spent a considerable amount of time talking about contention and locking in the database tier. At the end of the day, the endless conversations about scaling the application tier amount to less than a hill of beans if the scaled application can't go any faster because the database has hit a limit.

Sometimes, the "database hitting a limit" involves a limit on the physical capacity of the database to service requests for data, which usually leads to the purchase of a larger database server, or to some kind of partitioning or tuning work. Other times, the database server is not too busy, but the data itself is the problem (or at least, the pattern of access to the data). Remembering back to ACID transactions and the rules for using them, the most golden of golden rules is that transactions should be as short in duration as possible, to ensure that the system does not grind to a halt. The reason a system with lots of long transactions might grind to a halt is that an ACID transaction is associated with a set of locks in the database; the longer a transaction is active, the higher the probability that several pieces of the application will try to obtain the same lock, leaving the application bogged down in contention for the same piece of data. In this situation no amount of hardware can help. As long as data accesses do not touch each other's locks, they can fly through the database as fast as the server can access its disks; as soon as multiple pieces of the application want to touch the same data, things are not so free and easy.
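To make the "keep it short" rule concrete, here is a minimal JDBC sketch (the class name, table, and schema are hypothetical, invented for illustration): the row lock taken by the UPDATE is held only for the instant between the statement and the commit, with no unrelated work in between.

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class ShortTransactionSketch {
        // Keep the transaction as short as possible: take the row lock
        // and release it immediately - no unrelated work in between.
        public static void debitAccount(Connection con, long accountId, long amount)
                throws Exception {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE account SET balance = balance - ? WHERE id = ?")) {
                ps.setLong(1, amount);
                ps.setLong(2, accountId);
                ps.executeUpdate();   // row lock acquired here...
                con.commit();         // ...and released here, straight away
            } catch (Exception e) {
                con.rollback();
                throw e;
            }
        }
    }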

This is all well-understood stuff at the database tier. A lot of effort has gone in over the years to remove lock contention wherever possible. In particular, optimistic locking has become a well-established technique for reducing data contention. With optimistic locking, the database gives its locks out to multiple accessors simultaneously, on the assumption that they will not make conflicting updates to the data. If it turns out at commit time that two accessors did actually trample on each other, one of the transactions is rolled back with an exception, since the database's optimism was unfounded and the access really needed to be serialized in the more traditional, pessimistic way. The success of this technique in increasing throughput is not magic; it works because what is usually being contended for is only the lock - not the data itself (this being the key optimistic assumption). If the actual data were contended for a majority of the time, optimistic locking would result in a net slowdown, brought about by the endless collisions and retrying.
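In application code, optimistic locking is most often implemented with a version column. Here is a minimal sketch, assuming a hypothetical account table carrying a version column; zero rows updated means somebody else committed first, and the caller must re-read and retry - the price of misplaced optimism.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class OptimisticUpdateSketch {
        // The update succeeds only if the version we read is still current.
        public static boolean updateBalance(Connection con, long id,
                long newBalance, int versionRead) throws SQLException {
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE account SET balance = ?, version = version + 1 " +
                    "WHERE id = ? AND version = ?")) {
                ps.setLong(1, newBalance);
                ps.setLong(2, id);
                ps.setInt(3, versionRead);
                // false => a concurrent writer won; re-read and retry.
                return ps.executeUpdate() == 1;
            }
        }
    }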

So why am I rambling at high speed through this résumé of transaction design and database lock management?

"You cannot change the laws of physics!"
Well, it turns out that the same principles apply in other tiers - surprise, surprise: these are "laws of physics," after all. So why haven't we heard the analogue of this tale of lock-related pain and trickery in the application tier? It all comes down to what there is a lot of, and what there is not much of. In the database view, there is just one database, and a desire for throughput that leads to many concurrent accessors; it is as you try to drive the concurrency up that you become more and more likely to hit locking issues in your database. So where is that pattern in the application tier? One of the features of the Java language is easy threading, so there are likely to be many threads. And anyone who has designed a Java application will know the "singleton" design pattern, because it is often used to represent something there is only one of in a given JVM - a log singleton, a hashmap of reference data, you name it... everywhere there are apps, there are singletons!

So, you say, apps are full of singletons; Java in general, and Java application servers in particular, have a capacity for running lots of threads - so why hasn't every highly threaded Java application ground to a halt long ago? Amdahl's law (the particular "law of physics" in question), after all, states that you can only parallelize an application to the degree that its threads can run independently. What gives?
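Amdahl's law says that if a fraction p of the work can run in parallel on n processors, the best possible speedup is 1 / ((1 - p) + p/n). A tiny illustrative calculation (the class name is invented here) shows how quickly the curve flattens:

    public class Amdahl {
        // Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
        // parallelizable fraction and n is the number of processors.
        static double speedup(double p, int n) {
            return 1.0 / ((1.0 - p) + p / n);
        }

        public static void main(String[] args) {
            System.out.printf("8 CPUs:   %.1fx%n", speedup(0.95, 8));   // ~5.9x
            System.out.printf("384 CPUs: %.1fx%n", speedup(0.95, 384)); // ~19.1x
        }
    }

Even at 95% parallel, eight CPUs yield under 6x - and piling on more CPUs only pays off if the serial fraction (often lock contention) shrinks too.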

The answer turns out to lie in the hardware that runs the virtual machine. In a typical environment, eight CPUs constitute a pretty large machine, and it is more common to deploy many smaller servers with one or two CPUs each. In this environment, if you code a Java application that spawns a number of threads that all access a single hashmap as part of their work, you will not see much difference in throughput, with or without the hashmap, as you increase the number of threads from, say, 50 to 5,000. The reason is that even if you spawn 5,000 threads, they are still being scheduled over the eight CPUs (or however many you have), so the level of parallelism you achieve is much lower than you might expect - only eight threads can physically run at once. This points up a fundamental scaling issue with Java on traditionally architected machines: the "thing there is not much of" - the place where contention will limit your throughput - is not the hashmap, it's the CPU itself. This is one of the reasons why Java applications typically end up deployed over a pretty large set of machines; that way you get enough CPUs to really run things in parallel and have a hope of meeting your throughput goals and other SLAs. It is a shame about the running costs of the resulting complex, partitioned system.
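As an illustrative sketch (class and key names invented for the example), this is the shape of the code in question - thousands of threads funnelled through one synchronized singleton map. On a two- or eight-CPU box the lock is rarely the visible bottleneck, because only that many threads are ever physically running:

    import java.util.HashMap;
    import java.util.Map;

    public class SingletonContentionSketch {
        // One singleton map, shared by every thread in the JVM.
        private static final Map<String, String> CACHE = new HashMap<>();

        static String lookup(String key) {
            synchronized (CACHE) {   // every thread serializes here
                return CACHE.getOrDefault(key, "miss");
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Thread[] workers = new Thread[5000];
            for (int i = 0; i < workers.length; i++) {
                workers[i] = new Thread(() -> {
                    for (int j = 0; j < 10_000; j++) lookup("reference-data");
                });
                workers[i].start();
            }
            for (Thread t : workers) t.join();
        }
    }

On a 384-way machine, by contrast, hundreds of those threads really do run at once, and that synchronized block becomes exactly the serial fraction Amdahl punishes.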

This latter partitioning problem is one of those that Azul is addressing by manufacturing 384-processor SMP boxes that allow truly big Java virtual machines to run - removing the complexity of having to artificially break the application up into small pieces. "AHA!" you cry - now that lock in the singleton will kill you! You now have many truly concurrent threads all contending for those singletons. It turns out that there is a solution to this problem supported in the hardware (here's an advantage of designing hardware explicitly to execute virtual machines), and it's called Optimistic Thread Concurrency, or OTC for short.

As the name suggests, the system's behavior using OTC is the same as (or at least analogous to) the behavior of a database doing optimistic locking: the virtual machine allows many threads to pass through a lock on the assumption that bad things won't happen (that's the optimism part). If bad things do happen and data (rather than just the lock) is actually contended for, then the contending thread is reverted to the state it was in before it acquired the lock and it proceeds forward again - none the wiser, with the integrity of the data intact, and all with no change to the application code itself.
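For a flavor of this read-validate-retry pattern expressed in ordinary application code, here is a sketch using the JDK's java.util.concurrent.locks.StampedLock - a software analogue only, added to Java long after this idea appeared; OTC itself does all of this transparently, in hardware, for plain monitors, with no change to the source.

    import java.util.concurrent.locks.StampedLock;

    public class OptimisticReadSketch {
        private final StampedLock lock = new StampedLock();
        private double x, y;

        double distanceFromOrigin() {
            long stamp = lock.tryOptimisticRead();  // no blocking, just a stamp
            double curX = x, curY = y;              // speculative reads
            if (!lock.validate(stamp)) {            // did a writer intervene?
                stamp = lock.readLock();            // fall back to pessimism, retry
                try {
                    curX = x;
                    curY = y;
                } finally {
                    lock.unlockRead(stamp);
                }
            }
            return Math.hypot(curX, curY);
        }
    }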

The trick is knowing which locks to be optimistic about. If a particular lock protects a piece of data that is nearly always written, all that rolling back will hurt, not help, performance. For data that is read very frequently but seldom written, such as reference data, optimism will be the best policy. To tune the OTC system for these different eventualities (which will doubtless coexist in a single app), Azul's virtual machine implements three types of lock internally. Thick locks are the usual pessimistic locks through which access is serialized; thin locks are the cheapest form of lock. If a thin lock experiences contention, it must be promoted to either a thick lock or a speculative lock - the type that sits between thin and thick and allows multiple threads through, using the hardware to check that there is no contention and, importantly, to drive the rollbacks if contention occurs. Throughout the life of a virtual machine, data is kept about all the locks: locks that are frequently contended will be thick, those that are never contended will be thin, and those that are "mostly" uncontended will be speculative. As usage patterns change, heuristics in the VM move individual locks between these three states, providing optimum parallelization, and hence throughput, for the application at run time.
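To make the promotion scheme easier to picture, here is a purely illustrative state-machine sketch - not Azul's actual implementation, and the thresholds are invented for the example:

    public class LockStateSketch {
        enum State { THIN, SPECULATIVE, THICK }

        private State state = State.THIN;
        private long acquisitions, collisions;

        // Called (conceptually) on each acquisition of this lock.
        void recordAcquisition(boolean contended) {
            acquisitions++;
            if (contended) collisions++;
            switch (state) {
                case THIN:
                    // Any contention promotes the cheap thin lock.
                    if (contended) state = State.SPECULATIVE;
                    break;
                case SPECULATIVE:
                    // If rollbacks dominate, optimism is costing us: go pessimistic.
                    if (acquisitions > 100 && collisions * 10 > acquisitions)
                        state = State.THICK;
                    break;
                case THICK:
                    // A long quiet spell can earn back some optimism.
                    if (acquisitions > 1000 && collisions * 100 < acquisitions)
                        state = State.SPECULATIVE;
                    break;
            }
        }
    }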

Not only does this scheme improve parallelism with no code changes, it also allows simpler code to be written in the first place. Programmers in search of the ultimate throughput often end up coding complex nested locking schemes, which are difficult to write and even more difficult to debug and maintain. With OTC, the locking can be coded in a coarse-grained manner (and therefore be simple to code and maintain) without hurting throughput - for once, the developers get to have their cake and eat it too.
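For instance, a coarse-grained ledger like the (hypothetical) sketch below is trivially correct and easy to maintain; the fine-grained alternative - one lock per account, always acquired in a canonical order to dodge deadlock - is exactly the kind of code OTC lets you avoid writing.

    import java.util.HashMap;
    import java.util.Map;

    public class CoarseGrainedLedger {
        private final Map<String, Long> balances = new HashMap<>();

        // One coarse lock for everything: simple to read, simple to maintain.
        public synchronized long balance(String acct) {
            return balances.getOrDefault(acct, 0L);
        }

        public synchronized void transfer(String from, String to, long amount) {
            balances.merge(from, -amount, Long::sum);
            balances.merge(to, amount, Long::sum);
        }
    }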

So, fill your glass with OTC. You may find that that old glass of yours holds more than you think, if you deploy it on Azul! For more details about OTC, you can download the whitepaper from the Azul Web site, at www.azulsystems.com/products/whitepaper_abstracts.html.

More Stories By Peter Holditch

Peter Holditch is a senior presales engineer in the UK for Azul Systems. Prior to joining Azul, he spent nine years at BEA Systems, starting as one of their first Professional Services consultants in Europe and finishing up as a principal presales engineer. He has an R&D background (having originally worked on BEA's Tuxedo product) and his technical interests are in high-throughput transaction systems. Off the pitch, Peter likes to brew beer, build furniture, and undertake other ludicrously ambitious projects - but (generally) not all at the same time!
