vCenter Operations Manager - Monitoring Messiah or VMware Monopolisation?


Are VMware slowly becoming the new Oracle? That was the question being asked after the backlash to their initial vRAM licensing model at the launch of vSphere 5. VMware's swift retraction ensured that any unsavoury Larry Ellison comparisons were quickly put to bed, but the episode was still a signal of intent and an indication of VMware's recognition of its ever growing influence and clout. Now, as VMware make serious manoeuvres into the PaaS space, a VMware-based Cloud monitoring solution is a must. So with this in mind, what is to be made of the huge marketing push for their customers to adopt their VM monitoring tool, vCenter Operations 5.0?

Back when VMware was only considered ideal for virtualising test and development environments, the native alarms and performance graphs of the then-termed Virtual Center were more than adequate for general monitoring purposes. As the vSphere revolution began, with the average customer running VMs numbering in the hundreds, VMware generously allowed a plethora of third-party performance and capacity tools to plug into vCenter via its SDK. Suddenly every subsequent VMworld trade show grew bigger, not just in the number of attendees but in the number of VM monitoring companies and tools: vKernel vOPS, Veeam Monitor, VMTurbo Operations Manager, Quest vFoglight and Xangati VI, to name just a few. So when VMware eventually did enter the monitoring space in February 2011 with the purchase of Integrien's Alive, later relaunched and rebranded as vCenter Operations Manager, there was little to distinguish it from what were already mature and, in most cases, cheaper solutions. More than a year, a huge marketing campaign and a revamped version later, vCenter Operations 5.0 is slowly gaining traction amongst end users as the VM monitoring tool of choice. But how much of this is down to its actual capabilities, as opposed to VMware driving an agenda to monopolise a market segment that is clearly profitable?

To answer this, the first thing to do is to assess whether there is a need for such a tool, whether it's any good and what distinction, if any, it brings over the competition. The truth is that anyone who has had to troubleshoot a VMware environment, or gauge its capacity or performance, will testify that regardless of the size of the infrastructure the default tools are simply not sufficient. Add the fact that more and more business-critical applications are now virtualised, with virtual environments growing at an immense rate, and an enterprise-grade performance, capacity and monitoring tool becomes a necessity.

So looking at the vCenter Operations Manager vApp (from now on referred to as VCOPs), the first thing to note is that it collects data not only from VMware's vCenter Server but also from vCenter Configuration Manager, as well as third-party data sources such as SNMP. This collected data is then processed by the vCenter Operations Manager Analytics VM, which presents the results through the rather colourful-looking GUI. Compared to its predecessor, the most notable change in the VCOPs 5.0 GUI is its integration with vCenter's navigation/inventory pane. This small yet effective change makes it look much more like a VMware product, as opposed to the bolt-on appearance that both Integrien Alive and previous VCOPs versions possessed.
vCenter Operations Manager now incorporates the vCenter navigation pane 
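To make that data flow concrete, here's a minimal Python sketch of the collect-then-analyse pattern described above: several collectors feed raw metrics into a single analytics stage that merges them into one view. All function and object names here are illustrative stand-ins, not the actual VCOPs API.

```python
def collect_vcenter():
    # Stand-in for metrics pulled from vCenter Server via its SDK
    return {"vm-01": {"cpu_pct": 85, "mem_pct": 60}}

def collect_snmp():
    # Stand-in for metrics arriving from a third-party SNMP data source
    return {"switch-01": {"port_errors": 3}}

def analyse(collectors):
    """Merge all collected metrics into one inventory view,
    much as the Analytics VM consolidates its data sources."""
    merged = {}
    for collect in collectors:
        merged.update(collect())
    return merged

inventory = analyse([collect_vcenter, collect_snmp])
# inventory now holds objects from both sources in a single view
```

The point of the design is that the GUI only ever talks to the merged, analysed view, never to the individual data sources directly.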

Using the themes/badges of Health, Risk and Efficiency, the GUI organises the view of an entire infrastructure onto a main dashboard that can be drilled down to root causes and further details. Utilising a green/yellow/red scheme, where green means good and red is bad, the badges are a quick indication of areas of concern or that require investigating. When something shows red, a couple of clicks and a simple drill-down will show you the relevant VMs and their affected hosts, as well as any shared or affected datastores. Furthermore, each badge carries a score: a high number is good for the Health and Efficiency badges but potentially detrimental for the Risk badge, as a low risk is optimum for your environment. All of this enables quicker troubleshooting in large VM environments, as issues can be pinpointed from a very high-level view down to the granular detail in just seconds.
vCenter Operations enables easy drill downs to granular details from high level overviews
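The score-to-colour logic, including the inverted scale for Risk, can be sketched in a few lines of Python. The thresholds below are illustrative guesses for the sake of the example, not VCOPs' actual defaults.

```python
def badge_colour(score, invert=False):
    """Map a 0-100 badge score to the traffic-light scheme.
    Pass invert=True for the Risk badge, where a LOW score is good.
    Band thresholds are illustrative, not VCOPs' real cut-offs."""
    if invert:
        score = 100 - score          # flip the scale for Risk
    if score >= 75:
        return "green"
    if score >= 25:
        return "yellow"
    return "red"

badge_colour(90)               # a Health score of 90 -> "green"
badge_colour(90, invert=True)  # a Risk score of 90 -> "red"
```

The single inversion flag is what lets one dashboard treat all three badges uniformly while preserving "low risk is good".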

The Health badge identifies current problems in the system and highlights issues that require immediate resolution. Using a heatmap, the end user gets a quick health overview of all parent and child objects, such as virtual machines and hosts, which can also be rewound by up to six hours to track back trends. The Risk badge does exactly what its name suggests, drawing on data for infrastructure stress, time remaining and capacity remaining; it identifies potential issues that could impact the infrastructure's performance and can be trended back over seven days' worth of data. Finally the Efficiency badge, which takes advantage of the now-integrated CapacityIQ tool, is used for capacity planning, where CPU, memory and disk space resource metrics are used to identify overprovisioned, under-utilised or optimally resourced VMs.
A single dashboard can highlight issues from numerous objects such as datastores, clusters and VMs
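A crude version of that Efficiency-badge classification can be expressed as a utilisation check per VM. The 20%/80% thresholds here are assumptions for illustration only; CapacityIQ's real heuristics are considerably more sophisticated.

```python
def classify_vm(cpu_pct, mem_pct, low=20, high=80):
    """Rough Efficiency-style classification from peak utilisation.
    Thresholds are illustrative, not CapacityIQ's actual defaults."""
    peak = max(cpu_pct, mem_pct)
    if peak < low:
        return "overprovisioned"   # resources largely idle: reclaim them
    if peak > high:
        return "stressed"          # undersized: risk of contention
    return "optimal"

classify_vm(5, 10)    # barely used -> "overprovisioned"
classify_vm(90, 40)   # CPU-bound  -> "stressed"
```

Run across an estate of hundreds of VMs, even this naive rule surfaces reclaimable capacity, which is precisely the value the Efficiency badge sells.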

As well as the badges and their drill-down details, VCOPs also has several menu tabs, such as Operations, Planning, Alerts, Analysis and Reports. Of most interest in the Operations tab is the Environment section, where a visual representation of objects such as the associated vCenter Server, datacenters, datastores, hosts and virtual machines is presented alongside their scores and relationships. This is an excellent feature that enables the end user to quickly drill down to, identify and investigate more granular objects of concern and their health status. The Planning tab also contains a very useful summary section that provides a visual overview, in graphs and tables, of capacity for any selected object, enabling you to easily switch between deployed and remaining capacity. Here VCOPs can extend its forecasts of remaining capacity several months ahead, an essential value-add, especially as environments grow at such a radical pace.
The integration of CapacityIQ and VCOPs' predictive analytics enables forecasted capacity planning capabilities
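The simplest form of such a forecast is a linear trend over historical usage: fit a line to daily samples and project when it crosses the capacity ceiling. The sketch below is a deliberately minimal stand-in for VCOPs' predictive analytics, which go well beyond a straight-line fit.

```python
def forecast_days_remaining(history, capacity):
    """Least-squares linear trend over daily usage samples.
    Returns days until usage reaches `capacity`, or None if usage
    is flat or shrinking. A simplified stand-in for VCOPs' analytics."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
    den = sum((x - x_mean) ** 2 for x in xs)
    slope = num / den                     # growth per day
    if slope <= 0:
        return None
    return (capacity - history[-1]) / slope

# Usage growing ~10 GB/day towards a 500 GB datastore:
days = forecast_days_remaining([400, 410, 420, 430, 440], 500)
```

Here `days` comes out at 6.0: at the current growth rate the datastore fills in under a week, exactly the kind of early warning a "time remaining" metric exists to give.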

In addition to the capacity planning and forecasting features, it's also good to see VCOPs incorporate what-if scenarios. Now becoming common amongst several VM monitoring tools, what-if scenarios are a useful addition to any VM environment, as they allow you to foresee the impact of capacity and workload changes on your virtual environment prior to making any actual changes.
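In essence, a what-if check subtracts the proposed workload from current headroom before anything is deployed. The sketch below uses made-up dictionary keys rather than any real VCOPs schema, purely to show the arithmetic.

```python
def what_if(cluster, proposed_vms):
    """Evaluate a hypothetical placement before committing it.
    Keys are illustrative, not a real VCOPs data model."""
    cpu_needed = sum(vm["cpu_ghz"] for vm in proposed_vms)
    mem_needed = sum(vm["mem_gb"] for vm in proposed_vms)
    cpu_left = cluster["cpu_ghz"] - cluster["cpu_used_ghz"] - cpu_needed
    mem_left = cluster["mem_gb"] - cluster["mem_used_gb"] - mem_needed
    return {"fits": cpu_left >= 0 and mem_left >= 0,
            "cpu_headroom_ghz": cpu_left,
            "mem_headroom_gb": mem_left}

cluster = {"cpu_ghz": 100, "cpu_used_ghz": 70,
           "mem_gb": 512, "mem_used_gb": 400}
result = what_if(cluster, [{"cpu_ghz": 10, "mem_gb": 64},
                           {"cpu_ghz": 10, "mem_gb": 64}])
```

In this example the two proposed VMs fit on CPU but overshoot memory by 16 GB, so the scenario reports that the placement doesn't fit, before any change has touched production.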

Finally, the area in which VCOPs really stands out from the competition is its unique new vCenter Infrastructure Navigator feature. Recognising that monitoring solutions which look at the performance of applications, as opposed to just the infrastructure, are far more attractive to any business, VMware has introduced vCenter Infrastructure Navigator to automatically discover application services and map their dependencies and relationships. One of the main benefits of knowing the application and virtual infrastructure's interdependencies is that it immediately helps reduce MTTR (mean time to resolution) by either eliminating or implicating the infrastructure as the cause of application slowdowns. Furthermore, as key applications and their underlying infrastructure are constantly identified and monitored, the end user can quickly ensure that the right level of resources is allocated and that priority is given to those VMs that actually need it.
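The value of a dependency map is easiest to see with a toy example: given which VMs each application runs on, and which host each VM lives on, a host problem can be traced straight back to the applications it implicates. The names below are entirely made up; Infrastructure Navigator discovers these relationships automatically.

```python
# Toy dependency map in the spirit of Infrastructure Navigator:
# application -> the VMs it runs on, VM -> the host it lives on.
app_to_vms = {"webshop": ["web-01", "db-01"], "reporting": ["db-01"]}
vm_to_host = {"web-01": "esx-a", "db-01": "esx-b"}

def apps_affected_by(host):
    """Walk the map backwards: which applications depend on
    any VM running on this host?"""
    vms_on_host = {vm for vm, h in vm_to_host.items() if h == host}
    return sorted(app for app, deps in app_to_vms.items()
                  if vms_on_host & set(deps))

apps_affected_by("esx-b")  # both apps share the database VM on esx-b
```

A slowdown on `esx-b` implicates both applications, while `esx-a` touches only the web shop: exactly the eliminate-or-implicate reasoning that shortens MTTR.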

When you put this in the context of disaster recovery, and more specifically VMware's latest version of Site Recovery Manager, end users now have the opportunity to create recovery plans and protection groups that are aligned to the applications residing on their VCOPs-monitored VMs. This is a far cry from the competition, whose equivalent disaster recovery solutions still don't allow you to automatically fail back, or even fail over multiple VMs simultaneously. Using VCOPs' metrics and mapping of application interdependencies with VMs and underlying hosts, the level of sophistication in disaster recovery planning is raised significantly, in that it's now related to what matters most to the business, namely the apps.
vCenter Infrastructure Navigator provides VCOPs a unique visibility of application interdependency mappings 

So while this all sounds great, does VCOPs really spell the end of other VM monitoring solutions and a consequent reduction of third-party stalls and their scantily clad glamour models at VMworld? Does it really constitute a comprehensive Cloud monitoring solution? At present, probably not. VCOPs is still more expensive than most of its competitors with its per-VM pricing model and still has some limitations, most significantly its inability to monitor physical servers in the same way it monitors VMs. It also has to win market share by going up against popular, seasoned solutions that already have established end users and champions. That said, this is only VCOPs 5.0, and merely the beginning.

Looking firstly at the price challenge: VCOPs is software, and it would be foolhardy not to expect the pricing model to change and become more attractive to new customers, or even be bundled in with new hypervisor purchases. Looking at the bigger picture, VMware are clearly focusing further up the stack with a PaaS offering, and it would be equally short-sighted to think VMware only see VCOPs as a single-entity product that just monitors the infrastructure space. If anything, it's an investment in what will be an integral component of a comprehensive Cloud monitoring package that enables successful migrations to VMware's PaaS offerings. In such a scenario it would be ideal for VMware to have a PaaS offering already built on an IaaS monitoring, management and orchestration solution that they themselves have developed. Furthermore, should the next version of VCOPs, or the package it comes with, include analytics that incorporate physical blades, we could well have an integrated monitoring tool that's impossible to compete with. Just imagine being able to run a what-if scenario on a physical blade prior to virtualising it, sizing up the resources accurately based not just on current metrics but also on analysed and predicted growth.

So, taking a step back and looking at the whole VMware portfolio, it seems that the heavy investment in and marketing of VCOPs is more a ploy to eventually tie many of their separate solutions into a single comprehensive management, orchestration and Cloud monitoring package, managed single-handedly via the vCenter interface. Currently VMware's portfolio is littered with separate solutions, including vCloud Director and vCenter Chargeback Manager, but fully integrated with VCOPs they would make a tasty introduction package for those looking to deploy a Private Cloud. Then there's VMware's Hyperic, which has the ability to close the aforementioned physical gap, as it can monitor the physical environment underlying vSphere, hence providing performance management of applications in both physical and virtual environments. Therefore it's not impossible, even today, for a Cloud infrastructure's components to be monitored with a bolted-together Hyperic and VCOPs solution: Hyperic monitoring the applications and the processes running on vCloud Director, which in turn is conveyed to VCOPs, which monitors the VMs and consequently other components such as vShield Manager.
Using VCOPs in conjunction with vCloud Director, vShield Manager & Hyperic may provide a Cloud monitoring solution but currently it's the sum of parts as opposed to a fully integrated and seamless stack


But for VMware to be successful in the PaaS sphere they need to enter and engage with a market segment they've had little exposure to, i.e. the application owners. Offering VMware's vFabric Application Performance Manager (driven by AppInsight) as part of a fully integrated package of VCOPs, vCloud Director, Hyperic etc. could be the key that opens the door to application owners. It would also provide VMware with a true Cloud monitoring solution offering real-time visibility and control from application to infrastructure via a single management interface.

Integration with vFabric AppInsight could provide a comprehensive Cloud monitoring solution that looks from infrastructure all the way up to the application layer 

Ultimately this requires a lot of development work and effort from VMware, and will eventually bring them into competition with a new breed of vendors that specialise in Cloud management and monitoring. The point is, as good as VCOPs is, and as good as the competitors are, it's important not to get blinkered and miss the bigger picture of what VCOPs may eventually become part of. Either way, VCOPs' current competitors need to up their game quickly or find alternative features for their solutions. To survive in this game it's clear: you're either big or niche.


More Stories By Archie Hendryx

SAN, NAS, Back Up / Recovery & Virtualisation Specialist.
