WebLogic Security Framework

WebLogic Server 7.0 offers a new, integrated approach to solving the overall security problem for enterprise applications. With this framework, application security becomes a function of the application infrastructure and is separate from the application itself. Any application deployed on WebLogic Server (WLS) can be secured through the security features included with the server out of the box, by extending the open Security Service Provider Interface with a custom security solution, or by plugging in specialized security solutions from the major security vendors on which the customer's enterprise standardizes.

This article defines the major requirements for an integrated application security solution, and explains how WebLogic Server 7.0 Security Framework delivers them to your application.

Requirements
The goals of application security are simple: (1) enforce business policies concerning which people should have access to which resources, and (2) don't let attackers access any information. Goal (1) causes a problem because it seems natural to enforce business policies inside business logic. That approach is misplaced because policies become much harder to change when their enforcement is woven into application code. Consider the analogy to a secure physical filing system. You don't take a document and rewrite it when a security policy changes; you put it in a different filing cabinet. Different filing cabinets have different keys, and a security officer controls their distribution. Similarly, application developers should not have to change business logic when security policy changes. A security administrator should simply alter the protection given to the affected components.

Moreover, mixing security code with business logic compromises both goals if developers make mistakes. When the security code in a component has a defect, people may accidentally access information they shouldn't, and attackers may exploit the defect to gain unauthorized access. Of course, mistakes are unavoidable; that's why we test software. But it's much harder to test the security of every application component individually than to test a security system as a whole. The difference is somewhat analogous to reading every document in our hypothetical filing system for its fidelity to security policies rather than simply testing the integrity of the locked filing cabinets. However, we shouldn't blame application developers for mixing security code and business logic. We should blame middleware security models. Most of them simply do not support the types of policies many enterprises have, such as allowing only an account holder to access his own account. Unless these security models begin supporting much more dynamic security, developers really have no choice.

Middleware security models also fail enterprises in goal (2). Keeping attackers out requires a united front from all the elements in a distributed system. Cooperation is the key to this united front. Middleware sits between front-end processors and back-end databases. The middleware security system must be prepared to accept as much information as it can from the front-end processors about the security context of their requests and must be prepared to offer as much information as it can to back-end databases about the context of its requests. Moreover, it must be prepared to cooperate with special security services that work to coordinate the efforts of all these tiers. Middleware security models offer little, if anything, to support such cooperation. This failing affects many aspects of application security.

Authentication
Authentication is the first line of defense. Knowing the identity of requesters enables the application layer to decide whether to grant their requests and poses a barrier to attackers. All authentication schemes work in fundamentally the same way. They offer a credential to establish identity and provide a means to verify that credential. However, there is a wide variation in the form of credentials and verification mechanisms. Each enterprise's choices of authentication schemes depend on a number of factors, including the sensitivity of protected resources, expected modes of attack, and solution life cycle cost. In most cases, enterprises already have one or more authentication schemes in place, so middleware must work with them by accepting their credentials and engaging their verification mechanisms. Without this cooperation, the enterprise must use a lowest common denominator scheme like passwords, potentially limiting the use of such middleware to low-value applications.

The problem of Web single sign-on (SSO) is even more difficult. The motivation for SSO stems from the distributed nature of Web applications. From the user perspective, a single application may actually encompass different software components running on different servers and operated by different organizations. Users don't want to resubmit credentials every time they click a link that happens to take them to a page running in a different location. Their experiences should be seamless. The previous problem of working with existing authentication schemes requires only understanding credential formats and integrating with verification mechanisms. However, with Web SSO users don't even want to provide credentials in many circumstances. Establishing a user's identity without seeing his credentials requires sophisticated behind-the-scenes communication between the two servers involved in handing off a user session. There are a number of proprietary solutions and some emerging standards for this communication, but it is likely that a given application may have to support multiple approaches for the foreseeable future, so an open model is necessary.

Working with other Web application components involves cooperation on the front end, but middleware infrastructure must also cooperate on the back end. Databases have been around a long time, and enterprises take database security very seriously. They really don't trust the front-end and middleware layers. If an attacker were to compromise either of these layers, he could potentially issue a sequence of database requests that would return a large fraction of all the data the database maintains. Also, if the front-end or middleware components have defects, they could unintentionally request data for the wrong user, resulting in an embarrassing disclosure of private information. Therefore, many enterprises want to bind each database request to a particular end user, including the appropriate credentials that establish the user's identity. Applications must be prepared to propagate this information.

Authorization
Once an application has established the requester's identity, it must decide whether the set of existing security policies allows it to grant the request. Typically, middleware infrastructure such as J2EE uses a static, role-based system. During user provisioning, security administrators explicitly assign roles to users and then update these assignments as conditions require. During component deployment, security administrators indicate the roles allowed to access the component. At runtime, if a request comes from a user with the necessary roles, the application grants the request. This static approach ignores the dynamic nature of many business policies. Consider the policies governing bank accounts, expense reports, and bank tellers. For bank accounts, customers should only be able to access their own accounts. For expense reports, a manager can provide an approval only up to a set amount and never for his own expenses. Bank tellers fulfill the teller role only when they're on duty. In even more sophisticated policies, authorization depends on the combination of roles assigned to a user, as well as the content of the request. Middleware infrastructure must explicitly support these dynamic policies or at least provide enough context to specialized security services that do.
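
To make the problem concrete, here is a minimal Java sketch of the kind of check that ends up inside business logic when the middleware only supports static roles. The AccountService and Account classes are hypothetical, invented purely for illustration:

import java.security.Principal;

// Hypothetical session facade: the "only the account holder may see the
// balance" rule is hand-coded next to the business logic because a static
// role-based model cannot express it.
public class AccountService {

    public double getBalance(Principal caller, Account account) {
        // Security check mixed into business logic: the pattern the
        // article argues against.
        if (!account.getOwner().equals(caller.getName())) {
            throw new SecurityException("Caller does not own this account");
        }
        return account.getBalance();
    }

    // Minimal account representation for the sketch.
    public static class Account {
        private final String owner;
        private final double balance;

        public Account(String owner, double balance) {
            this.owner = owner;
            this.balance = balance;
        }

        public String getOwner() { return owner; }
        public double getBalance() { return balance; }
    }
}

A deployment descriptor can restrict getBalance to a Customer role, but only code like this can express the owner-only rule, which is why such checks keep migrating into business methods.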

The need for dynamic authorization raises the issue of administration. We definitely don't want to force security administrators to become experts in programming languages like Java. Certainly there will be unusual situations that require some custom programming, but routinely updating the dollar threshold for expense report authorization shouldn't require it. At a more mundane level, we don't want them to dig through XML-formatted deployment descriptors and then redeploy components to update role assignments. Security administrators need a well-designed graphical user interface that lets them perform all of their routine tasks and most of their nonroutine ones at runtime. Managing user lists and their assigned roles, changing the level of protection for components, and configuring dynamic constraints should all require just a few moments.

A more complicated headache for security administrators comes in migrating from one authorization service to another. Due to the complexity of authorization decisions, many enterprises rely on specialized services and all applications delegate such decisions to them. When it comes time to perform a major version upgrade or switch to a different service, administrators face a quandary. When do they switch over to the new provider? The concern lies with defects or configuration problems in the new service. They don't want to switch over only to experience a massive case of improper authorizations or mistaken rejections. What they'd really like is to use both systems simultaneously and note when the old and the new service differ in their decisions, but this approach requires an even greater ability for the middleware infrastructure to cooperate with the rest of the security ecology.

Auditing
If an application could simultaneously use two different authorization services, a difference of opinion would be a noteworthy event and administrators would want to know about it. Unfortunately, most middleware infrastructure neglects this type of security auditing. Proper auditing is not simply a matter of writing information to disk somewhere. To support their duties to verify, detect, and investigate, administrators need records of all security events in a single location, active notification of certain especially important events, and the ability to quickly search the records.

Security administrators are responsible for ensuring the enforcement of the enterprise policies regarding information access. Obviously, they must first specify these policies, hopefully using a productive interface as described above. Then they must verify the actual enforcement of these policies by periodically inspecting the audit trail. Government regulations or commercial contracts may require such audits. Administrators sample a representative set of transactions and track their paths through various application components to ensure the correct enforcement of security policies at each step. They need a consolidated audit trail or they'll have to spend a significant effort on manually assembling logs from different locations. They need detailed records or they won't be able to determine full compliance.

Responding to potential breaches is the other primary responsibility of security administrators. Responses involve two steps, detection and investigation. First, they need the ability to specify conditions under which the security system will actively notify them. These conditions could involve transaction values, such as transfers over a million dollars, or a pattern of events, such as a spike in the number of clients connecting using weak encryption when accessing sensitive functions. Once they receive notification, administrators must be able to quickly search the logs to determine if there has been an actual breach and the extent of any damage. These searches may involve complex criteria and must execute against the live audit trail so they can track a particular attack as it unfolds. These requirements make the auditing subsystem a substantial piece of software in its own right that middleware providers must devote a substantial effort to perfecting.

As we've seen, there are many challenges in application security. But like the essence of application security itself, the solution is also rather simple.

  1. A clean, elegant abstraction between security policy and business logic
  2. A simple, declarative interface for managing security policies in real time
  3. An open, flexible architecture for integrating with security services

These three elements avoid the problem of mixing security and business logic, streamline security administration, and enable cooperation with the rest of the security ecology. The WebLogic Security Framework delivers these critical capabilities.

Security Framework Architecture
Overview

The goal of the WebLogic Security Framework is to deliver an approach to application security that is comprehensive, flexible, and open. Unlike the security realms available in earlier versions of WLS, the new framework applies to all J2EE objects, including JSPs, servlets, EJBs, JCA Adapters, JDBC connection pools, and JMS destinations. It complies with all of the J2EE 1.3 security requirements, such as JAAS for objects related to authentication and authorization, JSSE for communication using SSL and TLS, and the SecurityManager class for code-level security.
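
For reference, the standard JAAS authentication sequence that such compliance builds on looks roughly like the following. This is generic J2SE/J2EE usage rather than WLS-specific code, and the login configuration entry name "Sample" and the credentials are placeholders:

import java.io.IOException;
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class JaasLoginExample {
    public static void main(String[] args) throws LoginException {
        CallbackHandler handler = new CallbackHandler() {
            public void handle(Callback[] callbacks)
                    throws IOException, UnsupportedCallbackException {
                for (int i = 0; i < callbacks.length; i++) {
                    if (callbacks[i] instanceof NameCallback) {
                        ((NameCallback) callbacks[i]).setName("demo-user");
                    } else if (callbacks[i] instanceof PasswordCallback) {
                        ((PasswordCallback) callbacks[i])
                                .setPassword("demo-password".toCharArray());
                    } else {
                        throw new UnsupportedCallbackException(callbacks[i]);
                    }
                }
            }
        };

        // "Sample" must match an entry in the JAAS login configuration.
        LoginContext lc = new LoginContext("Sample", handler);
        lc.login();                        // runs the configured LoginModules
        Subject subject = lc.getSubject(); // carries the authenticated Principals
        System.out.println("Principals: " + subject.getPrincipals());
    }
}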

The heart of the architecture is the separation of security and business logic. Business logic executes in an appropriate container, be it a JSP, servlet, or EJB. When the container receives a request for an object it contains, it delegates the complete request and its entire context to the Security Framework. The framework returns a yes or no decision on whether to grant the request. This approach takes business logic out of the security equation by providing the same information to the security system that is available to the target object. They each use this information to fulfill their dedicated responsibility: the framework enforces security policy and the object executes business logic.
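
The following sketch illustrates that delegation pattern in Java. The SecurityFramework interface and the container code are invented for illustration; they are not the actual WLS classes:

import java.security.Principal;
import java.util.Map;

// Invented stand-in for the framework's decision point.
interface SecurityFramework {
    boolean isAccessAllowed(Principal caller, String resource,
                            String operation, Map context);
}

// Invented stand-in for a container: it hands the full request context to
// the framework and only runs business logic if the answer is yes.
class Container {

    interface BusinessMethod {
        Object execute();
    }

    private final SecurityFramework security;

    Container(SecurityFramework security) {
        this.security = security;
    }

    Object invoke(Principal caller, String resource, String operation,
                  Map context, BusinessMethod businessLogic) {
        if (!security.isAccessAllowed(caller, resource, operation, context)) {
            throw new SecurityException(
                "Access denied to " + resource + "." + operation);
        }
        return businessLogic.execute();
    }
}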

When the Security Framework receives a delegated request, it manages security processing (see Figure 1). This processing is very flexible, with fine-grained steps not found in many systems, such as dynamic role mapping, dynamic authorization, and adjudication of multiple authorizers. At each step, it delegates processing to an included third-party or custom provider through the corresponding service provider interface (SPI). This architecture enables WLS to route the information necessary to each service provider so that applications can take full advantage of specialized security services.

Service Provider Integration
The Security Framework per se manages security processing. Each step requires execution by a service provider. WLS 7.0 includes providers for every step, but they simply use the framework SPIs. Any other provider has access to the same facilities. These SPIs include:

  • Authentication: Handles the direct verification of requester credentials. The included provider supports username/password and certificate authentication via HTTPS.
  • Identity Assertion: Handles requests where an external system vouches for the requester. The included provider supports X.509 certificates and CORBA IIOP CSIv2 tokens. Because the Security Framework can dispatch requests to different providers based on the type of assertion, you can support a new external system by simply adding a provider for that system type.
  • Role Mapping: Handles the assignment of roles to a user for a given request. The included provider supports dynamic assignment based on username, group, and time.
  • Authorization: Handles the decision to grant or deny access to a resource. In the future, WLS will support many dynamic features, such as the evaluation of request parameter values. The Security Framework supports simultaneous use of multiple authorizers coordinated by an adjudicator.
  • Adjudication: Handles conflicts when using multiple authorization providers. When all the authorization providers return their decisions, the included provider determines whether to grant the original request based on either the rule "all must grant" or the rule "none can deny."
  • Credential Mapping: Handles the mapping of application principals to backend system credentials. As shown in Figure 1, it's not part of the process leading to an access decision because it's invoked when an object makes a request rather than when an object receives a request. The included provider supports username/password credentials and is used internally for J2EE calls and Web SSO.
  • Auditing: Handles the logging of security operations. As shown in Figure 1, it is slightly different from the other SPIs because it is invoked whenever a provider of any kind executes a function. The included provider supports reporting based on thresholds and writes all reported events to a log file. The Security Framework supports simultaneous use of multiple auditors, making it easy to integrate with external logging systems.

These clean SPIs make it possible to plug and unplug different providers as the security ecology evolves, benefiting everyone involved. BEA can individually upgrade the providers included with WLS. Specialist security vendors can easily make their services available to J2EE applications by coding their products to the appropriate SPIs and many have already done so. Moreover, enterprises can quickly implement customized security processing where necessary. Instead of adapting your security posture to suit the middleware, the middleware adapts its security processing to you.
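
As a rough illustration of what coding to such an SPI looks like, here is a hypothetical authorization provider. The AuthorizationProvider interface and its PERMIT/DENY/ABSTAIN constants are invented stand-ins; the real WLS SPI classes differ in name and detail:

import java.security.Principal;
import java.util.Map;
import java.util.Set;

// Invented stand-in for an authorization SPI.
interface AuthorizationProvider {
    int PERMIT = 1;
    int DENY = 2;
    int ABSTAIN = 3;

    int isAccessAllowed(Principal caller, Set roles, String resource,
                        String operation, Map context);
}

// A custom provider that protects only the resources it knows about and
// abstains on everything else, leaving those decisions to other providers
// and the adjudicator.
class PayrollAuthorizer implements AuthorizationProvider {
    public int isAccessAllowed(Principal caller, Set roles, String resource,
                               String operation, Map context) {
        if (!resource.startsWith("ejb:Payroll")) {
            return ABSTAIN;
        }
        return roles.contains("PayrollClerk") ? PERMIT : DENY;
    }
}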

From an administrator's perspective, selecting from the available providers is simply a matter of pointing and clicking. Using the WLS console, you expand the Realms node and then expand the Providers node. For a given provider type, you select one of the available provider instances and configure its properties.

Backwards Compatibility
As described above, the WebLogic Security Framework revolutionizes application layer security. However, you may have invested a substantial effort in configuring the security realms used in WLS 6.x. You might not want to upgrade your security model immediately, so the framework offers a realm adapter for backwards compatibility. This adapter is the complete security subsystem from WLS 6.x, and the framework treats it just like any other service provider that implements the authentication and authorization SPIs. At server startup, the adapter extracts access control definitions from the deployment descriptor just as before. At runtime, it accepts authentication and authorization requests delegated from the framework through the corresponding SPI. From your perspective, WLS 7.0 security behaves just like 6.x security. From the server's perspective, the realm adapter is fully integrated into the 7.0 Security Framework. Once you decide to upgrade, you can easily import the security information from 6.x definitions. You can even perform simultaneous authorization with the realm adapter and the Security Framework's native provider to verify proper behavior of the upgrade.

In some cases, you may be using the 6.x realm's integration with the distributed user management systems in Unix or Windows NT. In these cases, you might want to continue using this integration for authentication but want the benefits of the dynamic authorization from the Security Framework. Therefore, the framework has an option to use the 6.x realm adapter only as an authentication provider. It's interesting to note how the flexibility of the framework's SPI architecture cleanly addresses what might otherwise be a very tricky backwards compatibility issue.

Security Integration Scenarios
The Security Framework offers a lot of flexibility. Let's look at a few specific examples of this. In some cases, the solution may not be completely finished, but the planned design demonstrates the superiority of an open framework approach.

Perimeter Authentication
In many cases, a party other than WLS's own authenticator vouches for the identity of a requester. It may be the SSL layer of WLS. It may be a Kerberos system. It may be an intermediary Web service. In these cases, the third party provides a token that the application can verify. As long as it trusts the third party, it can accept a verified token as if it were the original user credential.

The Security Framework employs a straightforward mechanism for working with such systems. All a third party has to do is put its token in an HTTP header. The Security Framework examines the token and dispatches an appropriate service provider based on the token type. If an X.509 certificate from mutual SSL authentication comes in, the framework dispatches a provider that can verify the certificate chain to a root certificate authority and perhaps check the current validity of the certificate using the Online Certificate Status Protocol. If a Kerberos ticket or WS-Security token comes in, the appropriate provider decodes the token and performs the necessary verification.

Once this verification is done, the provider maps the identity in the credential to a local user. The framework calls back into JAAS with this local user, which then populates the Principal object as specified in J2EE 1.3. This approach is fully compliant with the appropriate standards yet still offers flexibility. A third-party provider or enterprise development team can integrate any authentication technology with WLS as long as they can populate an HTTP header. Integrating WLS applications with Web SSO solutions is easy because most of them, including SAML-based solutions, already use cookies or HTTP headers.
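
The following sketch shows the general shape of such an identity-assertion step: pull the token out of an HTTP header, verify it, and map the external identity to a local user that JAAS can turn into a Principal. The class names, the TokenVerifier interface, and the mapping table are all hypothetical:

import java.util.Map;

// Invented verification hook: returns the external identity carried by the
// token, or null if verification fails (bad signature, expired, untrusted issuer).
interface TokenVerifier {
    String verify(String tokenType, String tokenValue);
}

class PerimeterIdentityAsserter {
    private final TokenVerifier verifier;
    private final Map externalToLocalUser; // e.g. "CN=alice,O=Example" -> "alice"

    PerimeterIdentityAsserter(TokenVerifier verifier, Map externalToLocalUser) {
        this.verifier = verifier;
        this.externalToLocalUser = externalToLocalUser;
    }

    // headerValue is the token the trusted third party placed in the HTTP header.
    String assertIdentity(String tokenType, String headerValue) {
        String externalId = verifier.verify(tokenType, headerValue);
        if (externalId == null) {
            throw new SecurityException("Token verification failed");
        }
        String localUser = (String) externalToLocalUser.get(externalId);
        if (localUser == null) {
            throw new SecurityException("No local user mapped to " + externalId);
        }
        // The framework would now run JAAS with this local user to populate
        // the Principal, as described above.
        return localUser;
    }
}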

Role Associations
Most application security models employ the concept of roles, which provide a layer of indirection between users and resources that increases the ease of administration. Roles are like Groups, but more dynamic. Typically, a security administrator assigns a user to a group upon provisioning and then changes this assignment only when the user's job responsibilities change. Roles change more often, perhaps even from request to request based on specific conditions. The Security Framework supports both Groups and Roles.

Administrators can set up roles so that they embody a logical identity such as Teller or name a set of logical permissions such as Deposit, Withdraw, and Transfer. It's really a matter of design. The first approach is more focused on the logical role of the user and the second approach is more focused on the logical role of the resource. The Security Framework distinguishes between globally scoped roles, which apply to all resources in an installation, and resource-scoped roles, which apply only to specific resources. Globally scoped roles are intended primarily for managing different levels of administrative privileges. Administrators will configure and manage resource-scoped roles.

The Security Framework enables service providers to dynamically assign roles based on context (see Figure 2). The included provider can take into account username, group, and time. For example, suppose a bank had an AccountManagement EJB restricted to users with a Teller role. A user would fulfill the Teller role if he or she were a member of the DayTeller group and the local time was between 8 a.m. and 6 p.m. Administrators could also use this time feature to set up role assignments that automatically expire, which might be especially useful for very sensitive information such as human resources data. Figure 3 shows how easy it is to set up this dynamic assignment. The role-mapping SPI actually supports the use of additional information, such as the parameters of the method call. Therefore, custom providers could offer even more flexible role mapping. Consider the case of a CFO and expense accounts. A custom provider could grant the CFO an Approver role unless the Employee parameter of the request were the CFO himself. That way, he couldn't approve his own expenses.
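
A minimal sketch of the Teller example, with invented class and method names, might look like this:

import java.util.Calendar;
import java.util.HashSet;
import java.util.Set;

// Hypothetical role mapper: grants the Teller role only to members of the
// DayTeller group, and only between 8 a.m. and 6 p.m.
class TellerRoleMapper {

    Set mapRoles(String user, Set groups, Calendar requestTime) {
        Set roles = new HashSet();
        int hour = requestTime.get(Calendar.HOUR_OF_DAY);
        boolean businessHours = hour >= 8 && hour < 18;

        if (groups.contains("DayTeller") && businessHours) {
            roles.add("Teller");
        }
        return roles;
    }
}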

Credential Mapping
As discussed above, enterprises often want to tie each request made to a back-end database, packaged application, or legacy system to the ultimate user. Therefore, when a J2EE object accesses a back-end system on behalf of a user, it has to supply the appropriate credentials to the system. The basic problem is mapping a J2EE Principal to back-end system credentials. The included service provider solves this problem for the most common case of username/password credentials. Each WLS instance has an embedded LDAP directory in which it can store the encrypted username/password pair for every valid combination of Principal and back-end system.
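
A stripped-down sketch of that mapping follows. The SimpleCredentialMapper class and its in-memory store are hypothetical stand-ins for the included provider, which keeps the pairs, encrypted, in the embedded LDAP directory:

import java.security.Principal;
import java.util.HashMap;
import java.util.Map;

// Hypothetical credential holder for a back-end system.
class PasswordCredential {
    final String username;
    final char[] password;

    PasswordCredential(String username, char[] password) {
        this.username = username;
        this.password = password;
    }
}

// Hypothetical mapper: looks up the back-end credential stored for a given
// (Principal, target system) pair. A real provider would use a protected,
// encrypted store rather than an in-memory Map.
class SimpleCredentialMapper {
    private final Map store = new HashMap(); // "principal|system" -> credential

    void register(String principalName, String targetSystem,
                  PasswordCredential credential) {
        store.put(principalName + "|" + targetSystem, credential);
    }

    PasswordCredential getCredential(Principal caller, String targetSystem) {
        return (PasswordCredential) store.get(
            caller.getName() + "|" + targetSystem);
    }
}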

Parametric Authorization
One of the classic application security problems is making an authorization decision based on the content of the request or the target object. Approval thresholds are a common case where you want to evaluate the value of these parameters. First, you would create a set of roles such as Manager, SeniorManager, and Director. Then, you would create a set of policies that authorizes approval requests for each role based on the value of the Amount parameter, such as $5,000 for a Manager, $10,000 for a SeniorManager, and $20,000 for a Director. Most middleware does not currently address this issue. A future version of the included authorization service provider will allow such decisions based on the content of the request (see Figure 4). In fact, the authorization SPI already supports the use of method call parameters in access decisions, so you could build a custom provider with these capabilities today.
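
A hypothetical provider implementing the threshold example could look like the following; the class, method, and threshold table are invented to mirror the text:

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

// Hypothetical parametric authorizer: the decision depends on both the
// caller's roles and the Amount parameter of the approval request.
class ApprovalAuthorizer {
    private final Map thresholds = new HashMap(); // role -> maximum amount

    ApprovalAuthorizer() {
        thresholds.put("Manager", new Double(5000));
        thresholds.put("SeniorManager", new Double(10000));
        thresholds.put("Director", new Double(20000));
    }

    // Grants the request if any of the caller's roles permits the amount.
    boolean isApprovalAllowed(Set roles, double amount) {
        for (Iterator it = roles.iterator(); it.hasNext();) {
            Double limit = (Double) thresholds.get(it.next());
            if (limit != null && amount <= limit.doubleValue()) {
                return true;
            }
        }
        return false;
    }
}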

Authorization based on the content of the target is a little trickier. For example, you may want to inspect an Account object to get the value of AccountHolder before you decide to authorize a withdrawal. Unfortunately, in the general case, this type of visibility can break encapsulation and present a security vulnerability. If the security system can access any data in the system, it presents a tempting target for compromise. It will soon be possible to write custom code to perform this type of operation in select cases. A future version of the SPI will enable you to access context besides the method call parameters, such as the EJB primary key. You could then create a very specialized provider that examined the primary key of the Account object targeted by a Withdrawal request to determine the correct row in an account database. The provider would make its own call to this database, using special credentials to retrieve the account holder and compare it to the Principal. This solution might require some manual mapping of account holder values to Principal values, but it would work. The dotted line from the authorizer to the external database in Figure 4 illustrates such a solution.
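
A sketch of that specialized provider, with invented names throughout, might look like this:

import java.security.Principal;

// Invented lookup hook: queries the account database with the provider's
// own special credentials.
interface AccountDirectory {
    String findAccountHolder(String accountPrimaryKey);
}

// Hypothetical authorizer that inspects the target's primary key rather
// than the method call parameters alone.
class WithdrawalAuthorizer {
    private final AccountDirectory directory;

    WithdrawalAuthorizer(AccountDirectory directory) {
        this.directory = directory;
    }

    boolean isWithdrawalAllowed(Principal caller, String accountPrimaryKey) {
        String holder = directory.findAccountHolder(accountPrimaryKey);
        // In practice this comparison may require mapping account holder
        // values to Principal names, as noted above.
        return holder != null && holder.equals(caller.getName());
    }
}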

Note that you would have to write almost exactly the same code to perform this check in the Account object itself. However, the service provider approach partitions the security logic and the business logic. Different people can maintain the two types of logic, and changing one does not introduce the risk of potentially breaking the other. This flexibility is important if you consider the actual complexity of banking applications that handle minor accounts, joint accounts, and business accounts. Maintaining the security policies for such an application could be a full-time job in and of itself.

Conclusion
The WebLogic Server 7.0 Security Framework doesn't impose a rigid security model that hinders security integration with other system elements and forces the costly workaround of mixing security code with business logic. Instead, it adopts an open processing model so that application components can seamlessly cooperate with the rest of the enterprise security ecology. Moreover, its processing model delivers a clean abstraction of policy enforcement from business logic that lowers the cost of administering security policies and decreases the chance of security breaches.

Both application developers and security administrators benefit from the Security Framework. Developers no longer have to shoulder the responsibility and potential embarrassment of mixing application and security code. Administrators don't have to become experts in middleware paradigms to meet security requirements. When someone has to write special security code, he or she only has to do it once: everyone can use it and it's easy to maintain.

The key to the Security Framework's benefits lies in its open service provider model. Third-party security vendors can easily integrate their solutions with WebLogic and enterprises can quickly create custom security modules. Most important, an open model means that enterprises do not have to wait for the middleware vendor to adopt new security technologies because there are plenty of hooks for future innovations.

More Stories By Vadim Rosenberg

Vadim Rosenberg is the product marketing manager for BEA WebLogic Server. Before joining BEA two years ago, Vadim had spent 13 years in business software engineering, most recently at Compaq Computers (Tandem Division) developing a fault-tolerant and highly scalable J2EE framework.

More Stories By Paul Patrick

As chief security architect for BEA Systems, Paul Patrick is responsible for the overall security product strategy at BEA. He plays a key role in driving the design and implementation of security functionality across all of BEA’s products, and is the architect for BEA’s new enterprise security infrastructure product, WebLogic Enterprise Security. Prior to becoming chief security architect, Paul was the lead architect of BEA’s ObjectBroker CORBA ORB and co-architect of WebLogic Enterprise (now Tuxedo). He is also the author of several patent applications as well as industry publications and a book on CORBA.
