

Containers Expo Blog: Article

Virtualization for Deeply Embedded Applications

Virtualization has penetrated far into the enterprise; now it's begun the march into portable electronics:

In networking applications, primarily on multi-core devices, virtualization offers considerable advantages. For example, it allows for considerably more efficient load balancing, as it is now possible to move virtual machines, and their hosted processes, from core to core dynamically as conditions change. The same mechanism can drive power savings: during low-traffic periods, processing can be consolidated on fewer cores and the unused cores shut down. Higher uptime is also possible, since updated firmware can be downloaded in the background, the new image validated, and processes migrated to the new firmware, all without taking the system offline. In systems that must support many different firmware versions this capability is enormously compelling.
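The consolidation idea above can be sketched as a toy bin-packing scheduler. This is a minimal illustration only, not any real hypervisor API; the `consolidate` function and all of its names are hypothetical:

```python
# Toy model of consolidating virtual machines onto fewer cores during
# low-traffic periods, so that idle cores can be powered down.
# All names here are illustrative, not a real hypervisor interface.

def consolidate(vm_loads, core_capacity):
    """Pack VMs onto as few cores as possible using first-fit
    decreasing, returning a mapping of core index -> list of VM names."""
    cores = []  # each entry: [remaining_capacity, [vm names]]
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for core in cores:
            if core[0] >= load:          # VM fits on an existing core
                core[0] -= load
                core[1].append(vm)
                break
        else:                            # no room anywhere: power up a core
            cores.append([core_capacity - load, [vm]])
    return {i: vms for i, (_, vms) in enumerate(cores)}

# During a low-traffic period, four lightly loaded VMs fit on one core,
# so the remaining cores can be shut down to save power.
placement = consolidate({"vm0": 0.2, "vm1": 0.1, "vm2": 0.3, "vm3": 0.2}, 1.0)
```

First-fit decreasing stands in here for whatever placement policy a real hypervisor would use; the point is only that migrating whole virtual machines, rather than individual threads, is what makes this kind of consolidation straightforward.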
In highly secure environments it is now possible to add a secure processing element to an SoC without a separate security processor. The Payment Card Industry PIN Entry Device (PCI-PED) certification imposes an extremely rigorous set of requirements on manufacturers for separating the user interface from the PIN entry device. With virtualization, what previously required two devices can now be accomplished with a single physical device: a hypervisor hosting multiple secure execution environments, one for the user interface and one for PIN entry.
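The two-domain separation described above can be modeled in miniature: every cross-domain interaction goes through the hypervisor, which copies explicit messages rather than sharing memory. The `Hypervisor` and `Domain` classes below are illustrative assumptions, not the PCI-PED specification or any real product interface:

```python
# Toy model of hypervisor-enforced separation between a user-interface
# domain and a PIN-entry domain on one physical device.
# Class and method names are illustrative only.

class Domain:
    def __init__(self, name):
        self.name = name
        self._memory = {}        # private to this domain

class Hypervisor:
    """Mediates all communication; one domain can never touch
    another domain's memory directly."""
    def __init__(self):
        self.domains = {}
        self.mailboxes = {}

    def create_domain(self, name):
        self.domains[name] = Domain(name)
        self.mailboxes[name] = []
        return self.domains[name]

    def send(self, src, dst, message):
        # Only the explicit message crosses the boundary; the UI domain
        # never receives a reference into the PIN domain's memory.
        self.mailboxes[dst].append((src, message))

    def receive(self, name):
        return self.mailboxes[name].pop(0)

hv = Hypervisor()
ui = hv.create_domain("ui")
ped = hv.create_domain("pin_entry")

ped._memory["pin"] = "1234"                 # stays inside the PIN domain
hv.send("pin_entry", "ui", "PIN accepted")  # only a verdict crosses over
src, verdict = hv.receive("ui")
```

The design point this sketch illustrates is that the user-interface domain only ever learns the verdict, never the PIN itself, which is the separation the PCI-PED requirements are driving at.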
In applications where there is concern about how best to preserve proprietary IP while still benefiting from open source code released under the GPL, virtualization provides a way of isolating the two domains. Integrate GPL code with your proprietary IP and, under the terms of the license, you have to release the full source. With virtualization it's now possible to compartmentalize the GPL code and control the amount of proprietary code that must be released to the public. (http://www.trango-vp.com/dynamic/front_downloadFile.php?fileName=TGO-TEC-0340-TRANGO_GPL.pdf — registration required)
Key Criteria in Selecting a Hypervisor
There are numerous ways of creating virtual machines for embedded applications. While just assigning a name to a particular approach does very little to illuminate the critical issues, it is important to understand the foundation upon which a product design is undertaken as it quite often has substantial impacts on the design’s final character. 
We’ve labeled the most typical approaches to virtualization that we run across in our day-to-day work as microscheduler, microkernel, and ‘nanokernel’ (I’ll explain the quotes later). After a quick once-over of each approach I’ll try to focus on the key attributes that customers should be aware of.
In a microkernel, an OS kernel is stripped down to its bare essence by removing every service not strictly required for the microkernel itself to run. This leaves thread management, interprocess communication, scheduling, and address-space management. Hooks and catches are then put in place that allow designers to add those services back at user level. What this means in practice is that the user-mode/kernel-mode separation is maintained, so a high level of security and robustness is similarly achieved. But, due to the nature of the originating kernel architecture, there are architectural preferences in the nature of the hosted OS. In other words, a Linux-derived microkernel will have an affinity for hosting Linux as a guest OS.
A microscheduler is closely related to a microkernel, but while the scheduler itself runs in kernel mode (the highest privilege level of the system), as with a microkernel, guest operating systems are also allowed to run at that same privilege level. What this means in practice is that a guest operating system must be well behaved from both a performance and a security perspective. This partially eliminates one of the key strengths of virtualization: security. Robustness is also compromised, as a crash in a privileged guest OS or application can still do extensive damage, since it runs on “bare metal” and can bypass the protections available in a fully virtualized processing environment.
Another approach to creating a hypervisor is to build a hardware abstraction layer, or HAL, and add services such as time management, memory management, and interprocess communication to make a useful hypervisor. “Nanokernel” is a term I use with some fear and trepidation, as the word seems to have been coined more to separate modern, streamlined microkernel implementations from first-generation implementations such as Mach. While the term may be imprecise, it will have to do until a more precise way of describing this approach comes along. “HAL-like” really doesn’t do it justice and, full disclosure, this is the approach that Trango subscribes to. The key practical difference between this approach and that of typical microkernels is this: because the HAL is built on the underlying SoC, rather than on an OS port that just happened to target that SoC, the hypervisor is typically thinner and lighter, and less ‘picky’ about the specific details of a hosted OS. In other words, the approach tends to be more OS-agnostic and a better reflection of the underlying hardware.
The good news is that there are lots of good choices out there, and the technology has enormous capabilities.  It’s all a matter of looking at the CPU as one of many virtual devices rather than as unitary and fixed and of keeping an eye out for applications for embedded device programming’s newest tool.

More Stories By Frank Altschuler

Frank Altschuler is in charge of marketing for Trango Virtual Processors, a leading provider of embedded virtualization IP. He recently joined Trango from Newisys, where he was in charge of marketing for their x86 scaling solutions. He previously held marketing positions at Starcore LLC, a DSP intellectual property firm, and Cirrus Logic, a fabless semiconductor company. Prior to moving into marketing, Altschuler spent 15 years in engineering design and development in areas such as communications and electro-optics.
He earned a bachelor's degree in electrical engineering from North Carolina State University. For more information on Trango Virtual Processors, please visit http://www.trango-vp.com or email [email protected]

