Virtual Infrastructure in Cloud Computing Just Passes the Buck

I’m not arguing against virtual infrastructure in theory

There are many good reasons to go down the virtual infrastructure road. The illusion that it’s cheaper than dedicated hardware solutions is not one of them.

I was reading an interesting predictive article on WAN optimization that contends that virtualized WAN optimization controllers (WOCs) are, well, better than sliced bread. One of the reasons the author gave was the great benefit of linear, horizontal scalability in cloud computing environments.

“Savings and scalability. This approach ensures that there is no need for dedicated hardware to support WAN optimization, saving on CAPEX and OPEX. Cost savings will also be realized through virtual scalability. As enterprises add more services or applications to be accessed by additional remote workers via the cloud, the virtualized WAN optimization model will be able to scale linearly.”

The implication here is clear: WAN optimization via virtual solutions saves CAPEX and OPEX over dedicated hardware, and additional savings are achieved through virtual scalability. But that ignores the fact that the initial investment cost is simply shifted from CAPEX to longer-term OPEX once scalability enters the picture – not just scalability of the solution, but the impact of application and virtual infrastructure scalability on the solution as well.


Back in the old days we deployed all our infrastructure as software. As you needed more compute resources, you deployed bigger, beefier servers on which to run those solutions. That’s vertical scalability. Today we prefer the cloud computing model: horizontal scalability. Pay as you grow, compute resources on demand. Whatever you want to call it, the appeal is certainly in the perception that it’s easier and, perhaps more importantly, cheaper than traditional hardware-based scalability. But it’s not accurate to equate this model with what is essentially “cheaper” scalability. The operational expenses associated with management, the cost of additional licenses, integration, and the hourly fees of the cloud computing environment in question must all be factored into the equation, lest we fall prey to the hype that encircles cloud computing today.

One of the reasons you see cost savings in cloud computing is that the costs of the hardware – the physical servers – are shared. You only pay a “nominal” fee per hour for using that hardware. The cost of that hardware is shared across hundreds of other customers, all seeking the same reduction in operating and capital expenditures. So far, so good. Sharing the physical hardware certainly does spread the cost around and results in a cheaper operating environment – at least for the customer.
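The amortization argument is easy to see with numbers. A minimal sketch – every figure here (hardware cost, lifespan, customer count) is an illustrative assumption, not any provider’s actual pricing:

```python
# Illustrative only: the hardware cost, lifespan, and customer count are
# assumptions made up for this sketch, not real provider numbers.
def per_customer_monthly_cost(hardware_cost, lifespan_months, customers):
    """Amortized share of a physical server's cost, per customer per month."""
    return hardware_cost / lifespan_months / customers

# A $20,000 server amortized over 36 months, shared vs. dedicated:
shared = per_customer_monthly_cost(20_000, 36, customers=100)
dedicated = per_customer_monthly_cost(20_000, 36, customers=1)
print(f"shared: ${shared:.2f}/mo vs dedicated: ${dedicated:.2f}/mo")
```

Same server, two orders of magnitude difference in per-customer cost – which is exactly why the shared-hardware model looks so attractive.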

But when you start virtualizing the infrastructure (as in virtual software equivalents) you generally don’t get to share the costs of the solution, and you never share the costs of management. Most of the time you share only the same cost you do for any other generic virtual image: the underlying physical hardware. You’re also forced to scale horizontally based on the capacity constraints inherent in the virtual image. The provider and/or solution vendor sets the RAM and compute resources available for the virtual instance, and if you need more resources once you’ve reached the largest configuration, you’ll have to start scaling horizontally – whether you want to or not. The second image incurs the same management costs as well as the hourly fees. Likely, too, you’re paying for licensing, because virtual versions of solutions aren’t free, after all, unless you’re leveraging open source solutions that are.
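The scale-out math described above can be sketched in a few lines. Every number here is hypothetical – real per-instance capacity caps, hourly fees, and license costs vary by vendor and provider – but the shape of the cost curve is the point: each added instance repeats the hourly, license, and management costs in full.

```python
import math

# Hypothetical figures throughout; the point is that per-instance caps
# force horizontal scaling, and each instance duplicates the fixed costs.
def instances_needed(peak_demand_mbps, per_instance_cap_mbps):
    """Capacity ceilings on the virtual image force horizontal scaling."""
    return math.ceil(peak_demand_mbps / per_instance_cap_mbps)

def monthly_cost(instances, hourly_fee, license_fee, mgmt_cost):
    """Each added instance repeats the hourly, license, and management costs."""
    hours_per_month = 24 * 30
    return instances * (hourly_fee * hours_per_month + license_fee + mgmt_cost)

n = instances_needed(2_500, per_instance_cap_mbps=1_000)  # 3 instances
total = monthly_cost(n, hourly_fee=0.50, license_fee=300, mgmt_cost=200)
print(n, total)
```

Note that none of these per-instance costs are shared with other customers – they scale linearly with you, and only with you.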

You don’t share those costs with anyone. They are yours, and yours alone. The buck passes from CAPEX to OPEX. CAPEX is reduced, yes, but OPEX? Not so much. Perhaps that’s better from an accounting point of view, but from a total cost perspective it doesn’t really change much.


You can, of course, choose the largest image and thus avoid horizontal scaling. But that increases the overall cost of the solution. Consider that the virtual equivalent of an application delivery controller delivered via Amazon EC2 on its largest (quadruple large) image runs $4.80/hour (based on pricing listed by Zeus Technologies for its virtual solution on Amazon). It is unlikely you’ll have any hour in which that solution is not used; assuming even one request handled per hour, every hour, every day, you’re looking at more than $42,000 per year. Don’t forget, too, that you will likely incur additional charges for bandwidth – both ingress and egress. Not nearly as “inexpensive” as purported.

You could start smaller, but that makes it more likely you’ll need to “upgrade” midstream. This is far easier to do with virtual infrastructure than with hardware, at least from a physical deployment perspective, but it is just as disruptive a process and may push you onto the horizontal scalability path earlier rather than later, because it is so easy to simply “add another instance” compared to “upgrade to a new image.” Consider, too, that deploying virtual infrastructure means it is not integrated with the rest of the environment. That may not sound bad, until you realize that automatic scalability means new instances of applications – and perhaps other infrastructure solutions – may be popping up that you need to manage via the infrastructure. How is the infrastructure going to know about them? Either you manually manage this process or you are going to be doing some integration work. That’s yet another soft cost of “scalability” that isn’t factored into the equation when comparing hardware to virtual infrastructure.
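The back-of-the-envelope math behind that annual figure is straightforward (bandwidth charges excluded):

```python
# A $4.80/hour instance running around the clock for a year,
# bandwidth charges excluded.
hourly_rate = 4.80
annual_cost = hourly_rate * 24 * 365
print(f"${annual_cost:,.2f}")  # just over $42,000 per year
```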

Contrast that to a model in which services are provided via shared hardware infrastructure solutions. The cost of the hardware is not nominal. But like the rest of the physical infrastructure its costs are shared across all customers. Providing traditional network and application network solutions as services is inherently better suited to a cloud computing environment in that it allows the management costs to be shared (the provider manages the solution, not the customer) and is completely on-demand. Scalability is not the concern of the customer and generally speaking the limitations on RAM/compute resources do not exist in the same way they exist in virtual solutions. Bandwidth in both scenarios can be limited or unlimited, depending on requirements and implementation. Integration should also be taken care of by virtue of the fact that it’s a part of the cloud computing environment and the provider likely wants to ensure that they are billed properly for services rendered.

The current method of deploying virtual infrastructure actually breaks the “shared resources, shared costs” model of cloud computing and negates the savings from eliminating hardware CAPEX by replacing them with the OPEX costs of management, integration, licensing, and a more constrained operating environment that ultimately leads to the need to scale out sooner than would otherwise be required. Certainly a shared model could be implemented via virtualized software solutions, but that model faces the same implementation roadblocks that keep hardware-based services from being implemented today. Virtual infrastructure shifts many of the management and maintenance burdens offloaded by the public cloud computing model back onto the organization, and requires more vigilance and dedication to ensuring the overall architecture operates as expected.
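The integration burden mentioned above is concrete: when automatic scaling launches a new application instance, something has to tell the virtual infrastructure about it. A hypothetical sketch – the pool name and payload schema here are invented for illustration and match no real ADC’s management API; it only shows the kind of glue code the customer, not the provider, ends up writing:

```python
import json

# Hypothetical sketch: the pool name and payload schema are invented for
# illustration and do not correspond to any real ADC management API.
def registration_payload(pool, instance_ip, port):
    """Build the request body that would add a new instance to an ADC pool."""
    return json.dumps({"pool": pool, "member": f"{instance_ip}:{port}"})

# In practice this payload would be POSTed to the controller's management
# endpoint by the same automation that launched the instance.
print(registration_payload("web-pool", "10.0.1.27", 8080))
```

Writing, running, and maintaining that automation is part of the OPEX that the “it’s cheaper” argument leaves out.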


Today, virtualized infrastructure may be the only option for an organization to obtain the control and choice that are lacking in today’s cloud computing environments. Deploying hardware solutions and the associated services requires an investment on the provider’s part, plus additional time and investment in developing the means by which customers can take advantage of those solutions as services. While most providers invest in hardware solutions without pause, they rarely take the next step of integrating their offerings as services for customers. This means that if you need specific infrastructure components – application acceleration, WAN optimization, web application security – you’ll likely need to go the virtual infrastructure route. That’s not all bad; this path leads to control and isolation of implementation and configuration, which can be a requirement for conforming to organizational security policies. Organizations concerned about the impact of other customers sharing infrastructure resources (they already do, but a service-based model brings this to the fore) will almost certainly want to take advantage of the isolation afforded by a virtualized infrastructure implementation.

I’m not arguing against virtual infrastructure in theory, or against the control and choice it offers customers. There are challenges with such implementations, mind you, but that’s not really the point today. I’m simply arguing against the “it’s cheaper” mantra, which is patently false because it fails to take into consideration all the variables in the equation and instead focuses only on the most tangible ones.

There are certainly benefits realized from both deployment models and it is up to the organization to decide which model is right for them. But don’t fall into the trap of thinking virtual infrastructure is a “cheaper” solution, because when you step back and take a look at the entire cost of a solution, that’s just not the case and in fact a services-enabled infrastructure may be a much more financially advantageous solution – except for the provider.

Which may be the real reason the only option you ever have is a virtual one.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
