
Virtual Infrastructure in Cloud Computing Just Passes the Buck

I’m not arguing against virtual infrastructure in theory


There are many good reasons to go down the virtual infrastructure road. The illusion that it’s cheaper than dedicated hardware solutions is not one of them.

I was reading an interesting predictive article on WAN optimization that contends that virtualized WAN optimization controllers (WOCs) are, well, the best thing since sliced bread. One of the reasons the author offered was the great benefit of linear, horizontal scalability in cloud computing environments.

"Savings and scalability. This approach ensures that there is no need for dedicated hardware to support WAN optimization, saving on CAPEX and OPEX. Cost savings will also be realized through virtual scalability. As enterprises add more services or applications to be accessed by additional remote workers via the cloud, the virtualized WAN optimization model will be able to scale linearly."

The implication here is clear: WAN optimization via virtual solutions saves CAPEX and OPEX over dedicated hardware, and additional savings are achieved through virtual scalability. But that ignores the fact that the initial investment cost is simply shifted from CAPEX to longer-term OPEX when scalability enters the picture – not just scalability of the solution, but the impact of application and virtual infrastructure scalability on the solution as well.


Back in the old days we used to deploy all our infrastructure as software. As you needed more compute resources, you deployed bigger, beefier servers on which to run those solutions. That's vertical scalability. Today we prefer the cloud computing model: horizontal scalability. Pay as you grow, compute resources on demand. Whatever you want to call it, the appeal is certainly in the perception that it's easier and, perhaps more importantly, cheaper than traditional hardware-based scalability. But it's not accurate to equate this model with what is essentially "cheaper" scalability. The operational expenses associated with management, the cost of additional licenses, integration, and the hourly fees of the cloud computing environment in question must all be factored into the equation, lest we fall prey to the hype that encircles cloud computing today.

One of the reasons you see cost savings in cloud computing is that the costs of the hardware – the physical servers – are shared. You only pay a "nominal" fee per hour for using that hardware. The cost of that hardware is shared across hundreds of other customers, all seeking the same reduction in operating and capital expenditures. So far, so good. Sharing the physical hardware certainly does spread the cost around and results in a cheaper operating environment – at least for the customer.
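The amortization effect described above can be sketched in a few lines. All of the figures below are invented assumptions for illustration, not real provider pricing:

```python
# Hedged sketch: sharing one physical server's cost across many tenants.
# The hardware cost, amortization window, and tenant count are all
# illustrative assumptions, not real provider numbers.

def monthly_cost_per_tenant(hardware_cost, amortization_months, tenants):
    """Amortized hardware cost each tenant effectively carries per month."""
    return hardware_cost / amortization_months / tenants

# A dedicated box vs. the same box shared by 100 cloud customers.
dedicated = monthly_cost_per_tenant(36_000, amortization_months=36, tenants=1)
shared = monthly_cost_per_tenant(36_000, amortization_months=36, tenants=100)

print(f"dedicated: ${dedicated:,.2f}/mo")  # $1,000.00/mo
print(f"shared:    ${shared:,.2f}/mo")     # $10.00/mo
```

The hardware line item shrinks by exactly the tenant count – which is why, for this one cost, the shared model genuinely is cheaper.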

But when you start virtualizing the infrastructure (as in virtual software equivalents) you generally don't get to share the costs of the solution, and you never share the costs of management. Most of the time you share only the same cost you do for any other generic virtual image: the underlying physical hardware. You're also forced to scale horizontally based on the capacity constraints inherent in the virtual image. The provider and/or solution vendor sets the RAM/compute resources available for the virtual instance, and if you need more resources once you've reached the largest configuration you'll have to start scaling horizontally – whether you want to or not. The second instance incurs the same management costs as well as the hourly fees. Likely, too, you're paying for licensing, because virtual versions of solutions aren't free, after all, unless you're leveraging open source solutions.
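That dynamic can be made concrete. In the sketch below, every capacity and price is an illustrative assumption (no real vendor's numbers): once a fixed-capacity virtual appliance is saturated, you add a second instance, and the license, management, and hourly fees repeat per instance rather than being shared:

```python
import math

# Hedged sketch: horizontal scaling of a capacity-capped virtual appliance.
# Capacity, hourly fee, license, and management figures are invented
# assumptions for illustration only.

def instances_needed(demand_mbps, capacity_per_instance_mbps):
    """Instances required once per-instance capacity is the hard limit."""
    return math.ceil(demand_mbps / capacity_per_instance_mbps)

def monthly_opex(demand_mbps, capacity_mbps=500, hourly_fee=1.20,
                 license_per_instance=400, mgmt_per_instance=250, hours=730):
    """Total monthly OPEX: every cost line repeats per instance."""
    n = instances_needed(demand_mbps, capacity_mbps)
    return n * (hourly_fee * hours + license_per_instance + mgmt_per_instance)

# Doubling demand doubles instances -- and every unshared cost with them.
print(monthly_opex(500))   # one instance
print(monthly_opex(1000))  # two instances: exactly twice the OPEX
```

The point of the sketch is the shape of the curve, not the numbers: because none of these costs are shared, OPEX scales linearly with instance count.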

You don’t share those costs with anyone. They are yours, and yours alone. The buck passes from CAPEX to OPEX. CAPEX is reduced, yes, but OPEX? Not so much. Perhaps that’s better from an accounting point of view, but from a total cost perspective it doesn’t really change much.


You can, of course, choose the largest image and thus avoid horizontal scalability, but that increases the overall cost of the solution. Consider that the virtual equivalent of an application delivery controller delivered via Amazon EC2 on its largest (quadruple large) image runs $4.80 per hour (based on pricing listed by Zeus Technologies for its virtual solution on Amazon). It is unlikely you'll have any hour in which that solution is not used; assuming even one request handled per hour, every hour, every day, you're looking at more than $42,000 per year. Don't forget, too, that you will likely incur additional charges for bandwidth – both ingress and egress. Not nearly as "inexpensive" as purported.

You could start smaller, but that makes it more likely you'll need to "upgrade" midstream. This is far easier to do with a virtual infrastructure than with hardware, at least from a physical deployment perspective, but it is just as disruptive a process and may push you onto the horizontal scalability path earlier rather than later, because it is so easy to simply "add another instance" compared to "upgrade to a new image."

Consider, too, that deploying virtual infrastructure means it is not integrated with the rest of the environment. That may not sound bad, until you realize that automatic scalability means new instances of applications – and perhaps other infrastructure solutions – may be popping up that you need to manage via the infrastructure. How is the infrastructure going to know about them? Either you manage this process manually or you do some integration work. That's yet another soft cost of "scalability" that isn't factored into the equation when comparing hardware to virtual infrastructure.
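The annualized figure above is easy to verify. The $4.80/hour rate is the one quoted in the article; the rest is plain arithmetic:

```python
# Back-of-the-envelope check: a virtual ADC billed at $4.80/hour,
# running every hour of the year (bandwidth charges excluded).
HOURLY_RATE = 4.80         # per-hour instance fee quoted in the article
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

annual = HOURLY_RATE * HOURS_PER_YEAR
print(f"${annual:,.2f}/year")  # $42,048.00/year -- "more than $42,000"
```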

Contrast that to a model in which services are provided via shared hardware infrastructure solutions. The cost of the hardware is not nominal. But like the rest of the physical infrastructure its costs are shared across all customers. Providing traditional network and application network solutions as services is inherently better suited to a cloud computing environment in that it allows the management costs to be shared (the provider manages the solution, not the customer) and is completely on-demand. Scalability is not the concern of the customer and generally speaking the limitations on RAM/compute resources do not exist in the same way they exist in virtual solutions. Bandwidth in both scenarios can be limited or unlimited, depending on requirements and implementation. Integration should also be taken care of by virtue of the fact that it’s a part of the cloud computing environment and the provider likely wants to ensure that they are billed properly for services rendered.

The current method of deploying a virtual infrastructure actually breaks the "shared resources, shared costs" model of cloud computing: the cost savings from eliminating hardware CAPEX are negated by the OPEX costs of management, integration, licensing, and a more constrained operating environment that ultimately forces you to scale out sooner than would otherwise be required. Certainly a shared model could be implemented via virtualized software solutions, but that model faces the same implementation roadblocks that keep hardware solutions from being offered as services today. Virtual infrastructure shifts many of the management and maintenance burdens offloaded by a public cloud computing model back onto the organization, and requires more vigilance and dedication to ensuring the overall architecture is operating as expected.


Today, virtualized infrastructure may be the only option for an organization to obtain the control and choice that are lacking in today's cloud computing environments. Deploying hardware solutions and associated services requires an investment on the part of the provider, plus additional time and investment in developing the means by which customers can take advantage of the solution via services. While most providers invest in hardware solutions without pause, they rarely take the next step of integrating those offerings as services for customers. This means that if you need specific infrastructure components – application acceleration, WAN optimization, web application security – you'll likely need to go the virtual infrastructure route. That's not all bad; this path leads to control and isolation of implementation and configuration, which can be a requirement for conforming to organizational security policies. Organizations concerned about the impact of other customers sharing infrastructure resources (they already do, but a service-based model brings this to the fore) will almost certainly want to take advantage of the isolation afforded by a virtualized infrastructure implementation.

I'm not arguing against virtual infrastructure in theory or against the control and choice it offers customers. There are challenges with such implementations, mind you, but that's not really the point today. I'm simply arguing against the "it's cheaper" mantra, which is patently false and fails to take into consideration all the variables in the equation, focusing instead only on the most tangible ones.

There are certainly benefits to be realized from both deployment models, and it is up to the organization to decide which model is right for it. But don't fall into the trap of thinking virtual infrastructure is a "cheaper" solution, because when you step back and look at the entire cost of a solution, that's just not the case; in fact, a services-enabled infrastructure may be a much more financially advantageous solution – except for the provider.

Which may be the real reason the only option you ever have is a virtual one.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
