
Dear Slashdot: You Get What You Pay For

Open Source SSL Accelerator solution not as cost effective or well-performing as you think

o3 Magazine has a write-up on building an SSL accelerator out of Open Source components. It's a compelling piece, to be sure, and it was picked up by Slashdot and discussed extensively.

If o3 had stuck to its original goal - building an SSL accelerator on the cheap - it might have had better luck making its arguments. But it wanted to compare an Open Source solution to a commercial solution. That makes sense: the author was trying to show the value of Open Source and that you don't need to shell out big bucks to achieve similar functionality. The problem is that there are very few - if any - commercial SSL accelerators on the market today. SSL acceleration has long since been subsumed by load balancers/application delivery controllers, so a direct comparison between o3's Open Source solution and any commercially available standalone SSL accelerator would have been irrelevant; comparing apples to chicken is a pretty useless thing to do.

To the author's credit, he recognized this and therefore offered a complete Open Source solution that could more fairly be compared to existing commercial load balancers/application delivery controllers; specifically, he chose the BIG-IP 6900. The hardware platform was chosen, I assume, based on its SSL TPS rates to ensure a fairer comparison. Here's the author's description of the "full" Open Source solution:

The Open Source SSL Accelerator requires a dedicated server running Linux. Which Linux distribution does not matter, Ubuntu Server works just as well as CentOS or Fedora Core. A multi-core or multi-processor system is highly recommended, with an emphasis on processing power and to a lesser degree RAM. This would be a good opportunity to leverage new hardware options such as Solid State Drives for added performance. The only software requirement is Nginx (Engine-X) which is an Open Source web server project. Nginx is designed to handle a large number of transactions per second, and has very well designed I/O subsystem code, which is what gives it a serious advantage over other options such as Lighttpd and Apache. The solution can be extended by combining a balancer such as HAproxy and a cache solution such as Varnish. These could be placed on the Accelerator in the path between the Nginx and the back-end web servers.
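For readers who want to picture what that looks like in practice, the core of such an accelerator is little more than an Nginx virtual server that terminates SSL and proxies the decrypted traffic to the back-end web servers. Here's a minimal sketch; the hostnames, addresses, and certificate paths are hypothetical, and this is a fragment of the http context in nginx.conf, not a complete configuration:

    # Nginx as an SSL terminator (illustrative only)
    upstream backend_pool {
        server 10.0.0.11:80;
        server 10.0.0.12:80;
    }

    server {
        listen              443 ssl;
        server_name         www.example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        location / {
            proxy_pass       http://backend_pool;   # decrypted traffic to the back end
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }

Extending the solution as the author describes means pointing proxy_pass at a local Varnish or HAproxy instance instead of the back-end pool, which is exactly where the proxy chaining discussed below comes into play.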

o3 specs out this solution as running around $5000, which is less than 10% of the listed cost of a BIG-IP 6900. On the surface, this seems to be quite the deal. Why would you ever purchase a BIG-IP, or any other commercial load balancer/application delivery controller, based on the features/price comparison offered?

Turns out there are quite a few reasons; reasons that were completely ignored by the author.

CHAINING PROXIES vs INTEGRATED SOLUTIONS
While the moving parts cited by the author (Nginx, Apache, HAproxy, Varnish) are individually fine solutions, he suggests combining them to assemble a more complete application delivery solution that provides caching, Layer 7 inspection and transformation, and other advanced functionality. Indeed, combining these solutions does produce a deployment that is closer to the features offered by a commercial application delivery controller such as BIG-IP.

Unfortunately, none of these Open Source components are integrated with one another. That necessitates an architecture based on chaining proxies, whether they are deployed on the same hardware (as suggested by the author) or on separate devices; either way, they sit in the data path as distinct, separate proxies.

Chaining proxies incurs latency and overhead at every point in the process, in the form of:

  • TCP connection setup and teardown processing
  • Inspection of application data (layer 7 inspection is rarely computationally inexpensive)
  • Execution of functionality (caching, security, acceleration, etc...)
  • Transfer of data between proxies (when deployed on the same device this is minimized)
  • Multiple log files

This network sprawl degrades response time by adding latency at every hop and actually defeats the purposes for which the proxies were deployed. The gains in performance achieved by offloading SSL to Nginx are almost immediately lost when multiple proxies are chained in order to provide the functionality required to match a commercial application delivery controller.

A chained proxy solution adds complexity, obscures visibility (impairing the ability to troubleshoot), and makes audit paths more difficult to follow. Aggregated logging is never mentioned, but it is a serious consideration, especially where regulatory compliance enters the picture. The issue of multiple log files is one that has long plagued IT departments everywhere, as the files often require manual aggregation and correlation - which costs time and money. A third-party solution is often required to support troubleshooting and transactional monitoring, which incurs additional costs in the form of acquisition, maintenance, and management not considered by the author.
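To put the logging point concretely, each component in the chain keeps its own configuration and its own logs. The locations below are typical package defaults and will vary by distribution, but the takeaway is the same: correlating a single user transaction means stitching together several different formats from several different places.

    /etc/nginx/nginx.conf     ->  /var/log/nginx/access.log and error.log
    /etc/varnish/default.vcl  ->  shared-memory log, read out with varnishlog or varnishncsa
    /etc/haproxy/haproxy.cfg  ->  syslog (e.g. /var/log/haproxy.log)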

Soft costs, too, are ignored by the author. The multiple Open Source intermediaries required to match a commercial solution must each be configured individually, often by manually editing configuration files. Commercial solutions - and specifically BIG-IP - reduce the time and effort required to configure such solutions by offering myriad management options - a standards-based API, scripting, command line, GUI, application templates and wizards, a central management system, and integration with other standard data center management systems.

COMPRESSION SHOULD NEVER BE A BINARY CONFIGURATION
The author correctly identifies that offloading compression duties from back-end servers to an intermediary can result in improved application performance and greater efficiency on the servers. Nginx supports industry-standard gzip compression.

The problem with this - and there is a problem - is that it is not always beneficial to apply compression. Years of extensive experience and testing show that the use of compression can actually degrade performance. Factors such as the size of the application payload, the type of content, and the speed of the network over which the application data will be transferred should all be considered when deciding whether or not to compress.

This intelligence, this context-awareness, is not offered by the Open Source solution. o3's solution is on or off, with nothing in between. In situations where images are being delivered over a LAN, for example, compression will not provide any significant performance benefit and in fact will likely degrade performance. Certainly Nginx could be configured to ignore images, but that does not solve the underlying problem: it is simply not useful to compress content that is traversing a LAN and/or falls under a certain size.
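To illustrate, Nginx's compression behavior is governed by a handful of static directives; the values below are hypothetical, but notice that nothing in this vocabulary says "compress only when the client is on a slow link," which is precisely the context-awareness at issue:

    # Static gzip settings on the accelerator (illustrative values)
    gzip              on;
    gzip_min_length   1000;                               # skip very small responses
    gzip_types        text/plain text/css text/xml application/x-javascript;
    gzip_disable      "MSIE [1-6]\.";                     # skip clients known to mishandle gzip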

SECURITY
Another overlooked item is security. Not just application security, but full TCP/IP stack security. The Open Source solution could easily add mod_security to the list to achieve parity with the application security features available in commercial solutions, but that does not address the underlying stack security. The author suggests running on any standard Linux platform. To be sure, anyone building such a solution for deployment in a production environment will harden the base OS, potentially using SELinux to further lock down the system. No need to argue about this; it's assumed good administrators will harden such solutions.

But what will not be done - and can't be done - is securing the system against network and application attacks: simple DoS, ARP poisoning, SYN floods, cookie tampering. The list of potential attacks against a system designed to sit in front of web and application servers is far longer than this, but even these commonly seen attacks will not be addressed by o3's Open Source solution. By comparison, protection against these types of attacks is part and parcel of BIG-IP; no additional modules or functionality are necessary.
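To be fair, an administrator can blunt the simplest volumetric attacks with kernel tuning; a sketch of the sort of sysctl hardening that would typically be applied is below (the exact values are a judgment call). But that is about the extent of it - application-layer attacks such as cookie tampering remain out of reach for the base platform.

    # /etc/sysctl.conf fragment (illustrative)
    net.ipv4.tcp_syncookies = 1                  # basic SYN flood mitigation
    net.ipv4.conf.all.rp_filter = 1              # drop packets with spoofed source addresses
    net.ipv4.icmp_echo_ignore_broadcasts = 1     # ignore smurf-style broadcast pings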

Furthermore, the performance numbers provided by o3 for the solution seem to indicate that testing was accomplished using 512-bit key certificates. A single Opteron core can only process around 1,500 1024-bit RSA operations per second. This means an 8-core system could only perform approximately 12,000 1024-bit RSA ops per second - assuming that's all it was doing. 512-bit keys run around five times faster than 1024-bit keys. The author states: "The system had no problems handling over 26,590 TPS", which, based on the processors' capacity for RSA operations, seems to indicate the test was not using the industry-standard 1024-bit key. In fact, 512-bit certificates are no longer supported by most CAs due to their weak key strength.
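Working the back-of-the-envelope math with the author's own numbers makes the point:

    1,500 RSA ops/second per core x 8 cores   =  ~12,000 TPS ceiling with 1024-bit keys
    ~12,000 TPS x ~5 (512-bit speedup)        =  ~60,000 TPS ceiling with 512-bit keys
    Reported result: 26,590 TPS - well above the 1024-bit ceiling, plausible only with shorter keys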

Needless to say, if the testing used to determine the SSL TPS rating for BIG-IP had used 512-bit keys, you'd see a marked increase in the SSL TPS numbers on its data sheet.

YOU GET WHAT YOU PAY FOR
Look, o3 has put together a fairly cool and cheap solution that accomplishes many of the same tasks as a commercial application delivery controller. That's not the point. The point is that trying to compare a robust, integrated application delivery solution with a cobbled-together set of components designed to mimic similar functionality is silly.

Not only that, but the logic that claims it is more cost-efficient is flawed.

Is the o3 solution cheaper? Sure - as long as we look only at acquisition cost. If we look at the cost to application performance, and the cost to maintain, troubleshoot, and manage the solution, then no, it isn't. You're trading immediate CAPEX savings for long-term OPEX outlays.

And as is always the case, in every market, you get what you pay for. A $5,000 car isn't going to last as long or perform as well as a $50,000 car, and it isn't going to come with the same warranties and support, either. It will do what you want, at least for a while, but you're on your own when you take the cheap route.

That said, you are welcome to do so. It is your data center, after all. Just be aware of what you're sacrificing and the potential issues with choosing the road less expensive.




More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
