The Virtual Public-Private Cloud Connection

Secure, optimized tunnels to a remote site, e.g., the cloud. Haven’t we been here before?

In the continuing discussion around Business Intelligence in the cloud comes a more better (yes I did, in fact, say that) discussion of the reasons why you’d want to put BI in the cloud and, appropriately, some of the challenges. As previously mentioned, BI data sets are, as a rule, huge. Big. Bigger than big. Ginormous, even. One of the considerations, then, if you’re going to leverage a cloud-based business intelligence offering – or any offering in which very, very large data sets/files are required – would be: how the heck are you going to transfer all that data to the cloud in a timely fashion?
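Just how untimely are we talking? A quick back-of-the-envelope calculation makes the point. Here’s a minimal Python sketch – the data set sizes and link speeds are illustrative assumptions, not figures from any particular provider:

# Rough transfer-time math for moving a large BI data set to the cloud.
# Sizes and link speeds are illustrative assumptions.

def transfer_days(size_tb, link_mbps, efficiency=0.8):
    """Days to move size_tb terabytes over a link_mbps link at the given efficiency."""
    bits = size_tb * 8 * 10**12                 # decimal terabytes -> bits
    seconds = bits / (link_mbps * 10**6 * efficiency)
    return seconds / 86400

for size_tb in (1, 10, 50):                     # TB
    for link_mbps in (45, 100, 1000):           # T3, fast Ethernet, gigabit
        print(f"{size_tb:>3} TB over {link_mbps:>4} Mbps:"
              f" {transfer_days(size_tb, link_mbps):7.1f} days")

Ten terabytes over a 100 Mbps link at 80 percent efficiency works out to roughly eleven and a half days of continuous transfer. Keep that number in mind.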

The answer is, apparently, to ship a disk to the provider. Seriously. I couldn’t make this stuff up if I tried.

Ann All, in “Pros and Cons of Business Intelligence in the Cloud,” paraphrases a post on the subject by two guys from Persistent Systems, Mukund Deshpande and Shreekanth Joshi.

Moving large data sets to the cloud could get costly. They recommend shipping disks, an approach they say is often recommended by cloud providers like Amazon.

The original authors cite network costs as a reason to choose that “never gets old sneaker network” option.

Moving data to the cloud – Large data sets in silos sitting on premises need to get to the cloud before they can be completely used. This is an expensive proposition due to the network costs. The cheapest option is to ship disks, and this is often recommended by cloud providers like Amazon. Although this introduces latencies, it is often acceptable for most BI options.

It’s not just latency that’s the problem – though latency measured in days is certainly not a good thing – it’s that you’re shipping off disks and taking the risk that they’ll get “lost” en route, damaged, or misplaced somewhere along the way.

Think I’m kidding about that risk?

WHY ARE WE REINVENTING THE WHEEL?

Actually, the basic problem is our perception of “the cloud” as an external, completely separate entity – a perception cloud providers share. The cloud is “over here” and your data is “over there,” and ne’er the twain shall meet. That makes it costly and time-consuming to transfer extremely large files to the cloud. Hence the suggestion, by cloud providers and bloggers alike, to “ship some disks” instead.

But as Christofer Hoff has pointed out in the past, why can’t there be “private, external clouds” or, at a minimum, “private, secure access” to external clouds that, as a benefit of such a point-to-point connection, also employs WAN optimization and application acceleration techniques to improve the performance of large data transfers across the Internet?

Haven’t we been here before? Haven’t we already addressed this exact scenario when organizations realized that using the Internet to connect with partners and remote offices was less expensive than dedicated lines? Didn’t we go through the whole PKI / SSL VPN / WAN optimization controller thing and solve this particular problem already?

Seems we did, and there’s no reason that we can’t apply what we’ve already learned and figured out to this problem. After all, if you treat the cloud as the “headquarters” and its customers as “remote offices” you have a scenario that is not unlike those dealt with by organizations across the world every day.

In fact, Amazon just announced its version of a cloud VPN, and Google’s SDC (Secure Data Connector) has been around for quite some time, providing essentially the same kind of secure tunnel/access-to-the-cloud functionality.
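For a sense of what provisioning such a connection involves on the Amazon side, here’s a rough sketch using the boto3 SDK (tooling that postdates this post; the ASN, public IP, and VPC ID are placeholder values, and an existing VPC is assumed):

# Sketch: provisioning a site-to-site IPsec tunnel into Amazon's cloud via boto3.
# The ASN, public IP, and VPC ID below are placeholders for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The customer gateway describes your end of the tunnel.
cgw = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="203.0.113.12", Type="ipsec.1"
)["CustomerGateway"]

# The VPN gateway is the provider's end, attached to your VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpcId="vpc-0abc1234", VpnGatewayId=vgw["VpnGatewayId"])

# The VPN connection ties the two together; the configuration it returns is
# what gets loaded into the device on the customer's side of the connection.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Type="ipsec.1",
)["VpnConnection"]
print(vpn["VpnConnectionId"])

Note that even in this sketch there are two ends to configure, which is exactly where the trouble starts.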

BUT IT’S NOT THE SAME

The scenario is almost identical, and yet in important ways it isn’t. First, organizations have control – physical control – over remote offices. They can determine what products/solutions/services will be implemented and where, and they can deploy them as they see fit. This is not true of the cloud, from either the provider’s perspective or the customer’s.

After all, the solution to the pain point of hefty data transfer from organization to cloud is a combination of WAN optimization and secure remote access, but what we don’t want is the traditional always-on, point-to-point secure tunnel. A cloud provider has (or will have, hopefully) more customers than it can possibly support with such a model. And the use of such secure tunnels is certainly sporadic; there’s no need for an “always on” connection between organizations and their cloud provider.

What’s needed is a dynamic, secure, optimized tunnel that can be created and torn down on demand. The cloud provider needs to ensure that only authorized organizations are allowed to create such a tunnel, and it needs to be deployed on a platform that can be integrated into the provisioning process, so that managing such external connectivity and access doesn’t end up consuming human operational cycles.
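In code, the lifecycle the provider has to automate looks something like this. It’s a hypothetical sketch – the broker object and its methods are invented names standing in for whatever provisioning API a given provider actually exposes:

# Hypothetical on-demand tunnel lifecycle: authorize, create, use, tear down.
# The broker object and its methods are invented names for illustration only.
from contextlib import contextmanager

@contextmanager
def on_demand_tunnel(broker, customer_id, credentials):
    """Hold a secure, optimized tunnel open only for the duration of a transfer."""
    if not broker.is_authorized(customer_id, credentials):
        raise PermissionError(f"{customer_id} is not authorized to open a tunnel")
    tunnel = broker.create_tunnel(customer_id)   # created on demand, not always-on
    try:
        yield tunnel
    finally:
        broker.teardown(tunnel)                  # no idle tunnels consuming capacity

# Usage: the tunnel exists only while the data set is in flight.
# with on_demand_tunnel(broker, "acme-corp", credentials) as tunnel:
#     tunnel.transfer("/data/warehouse_extract.bin")

The important bit is the teardown in the finally clause: tying the tunnel’s lifetime to the transfer itself is what keeps thousands of customers from becoming thousands of standing tunnels.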

But Lori, you say, looking at the eye-candy diagram, it looks like such a solution requires hardware at both ends of the connection.

Yes, yes it does. Or at least it requires a solution at the cloud provider and a solution at each customer site. Maybe that’s hardware, maybe it’s not. But you aren’t going to get around the fact that a secure, encrypted, accelerated on-demand session that enables a more efficient and secure transfer of large data sets or virtual machine images across the Internet is going to require some network- and application-level optimization and acceleration. We’ve been down this road before, and we know where it ends: point-to-point encrypted, optimized, and accelerated tunnels.
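One concrete reason the optimization has to live inside the secure session rather than be bolted on afterward: compression must happen before encryption, because well-encrypted bytes look random and won’t compress. A toy illustration in Python, using zlib and the third-party cryptography package – a real WAN optimization controller does far more than this (deduplication, caching, protocol-level acceleration):

# Toy demonstration of why compression must precede encryption in the tunnel.
# Real WAN optimization does far more (dedup, caching, protocol acceleration).
import zlib
from cryptography.fernet import Fernet   # third-party: pip install cryptography

payload = b"SELECT region, SUM(sales) FROM orders GROUP BY region\n" * 10_000

f = Fernet(Fernet.generate_key())

compress_then_encrypt = f.encrypt(zlib.compress(payload))
encrypt_then_compress = zlib.compress(f.encrypt(payload))   # ciphertext won't shrink

print(len(payload))                 # original size
print(len(compress_then_encrypt))   # a tiny fraction of the original
print(len(encrypt_then_compress))   # roughly the original size, or larger

Fewer bytes on the wire means fewer days in transit, which is the whole point of pairing acceleration with the tunnel instead of treating them as separate problems.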

You’ll note that Amazon and Google already figured this one out, and yes, it’s going to be proprietary and it’s going to require software/hardware/something on both ends of the connection.

The difference with cloud – and it is a big difference, make no mistake about that – is that a cloud provider needs to support hundreds or perhaps thousands of periodic sessions with remote sites. That means it needs to be on-demand and not always-on as most site-to-site tunnels are today.

Your next observation will be that if this requires a solution on both sides of the relationship, then who gets to decide what that solution is? Good question. Probably the cloud provider. Again, you’ll note that Amazon and Google have already decided – at least in the short term. There are no standards around point-to-point connectivity of this kind. There are IPsec VPNs and SSL VPNs, of course, but there are no standards around WAN optimization and no way to connect product A to product B, so the short-term solution is a single-vendor solution. The long-term solution would be to adopt standards around WAN optimization, but before that can happen WAN optimization controllers need to support an on-demand model. Most, unfortunately, do not.

You can, of course, continue to use a sneakernet. It’s an option. You can also continue to transfer over the Internet in the clear, which may or may not be an acceptable method of data transfer for your organization. But there is another solution out there and it’s not nearly as difficult to implement as you might think – as long as you have a solution capable of providing such a service in the first place and your cloud provider is willing to offer or, apparently, reinvent it.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
