WebLogic Security Framework

WebLogic Server 7.0 offers a new, integrated approach to solving the overall security problem for enterprise applications. With this framework, application security becomes a function of the application infrastructure and is separate from the application itself. Any application deployed on WebLogic Server (WLS) can be secured through the security features included with the server out of the box, by extending the open Security Service Provider Interface with a custom security solution, or by plugging in specialized security solutions from the major security vendors on which the customer's enterprise standardizes.

This article defines the major requirements for an integrated application security solution and explains how the WebLogic Server 7.0 Security Framework delivers them to your application.

Requirements
The goals of application security are simple: (1) enforce business policies concerning which people should have access to which resources, and (2) don't let attackers access any information. Goal (1) causes a problem because it seems acceptable to enforce business policies in business logic. This belief is misplaced because it's much harder to change policies when enforcement occurs in business logic. Consider the analogy to a secure physical filing system. You don't take a document and rewrite it when a security policy changes. You put it in a different filing cabinet. Different filing cabinets have different keys and a security officer controls their distribution. Similarly, application developers should not have to change business logic when security policy changes. A security administrator should simply alter the protection given to affected components.

Moreover, mixing security code with business logic compromises both goals if developers make mistakes. When the security code in a component has a defect, people may accidentally access information they shouldn't, and attackers may exploit the defect to gain unauthorized access. Of course, mistakes are unavoidable; that's why we test software. But it's a lot harder to test the security of every application component individually than to test a security system as a whole. The difference is somewhat analogous to reading every document in our hypothetical filing system for its fidelity to security policies rather than simply testing the integrity of the locked filing cabinets. However, we shouldn't blame application developers for mixing security code and business logic; we should blame middleware security models. Most of them simply do not support the types of policies many enterprises have, such as "only an account holder can access his account." Unless these security models begin supporting a much more dynamic type of security, developers really have no choice.
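
To make the anti-pattern concrete, here is a minimal sketch of an ownership policy hard-coded into an EJB business method. The AccountBean and Account classes are hypothetical illustrations, not code from WLS or from any real application:

```java
import java.security.Principal;
import javax.ejb.SessionContext;

public class AccountBean {
    private SessionContext ctx; // would be supplied by the EJB container

    public double getBalance(Account account) {
        // Security policy entangled with business logic: the ownership
        // rule lives inside the component and can't change without a
        // code change and redeployment.
        Principal caller = ctx.getCallerPrincipal();
        if (!caller.getName().equals(account.getHolderName())) {
            throw new SecurityException("caller is not the account holder");
        }
        return account.getBalance(); // the actual business logic
    }
}

class Account { // minimal stand-in so the sketch is self-contained
    private String holderName;
    private double balance;
    String getHolderName() { return holderName; }
    double getBalance()    { return balance; }
}
```

Every change to a policy embedded this way means editing, retesting, and redeploying the component, which is exactly the cost the filing-cabinet analogy warns against.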

Middleware security models also fail enterprises in goal (2). Keeping attackers out requires a united front from all the elements in a distributed system. Cooperation is the key to this united front. Middleware sits between front-end processors and back-end databases. The middleware security system must be prepared to accept as much information as it can from the front-end processors about the security context of their requests and must be prepared to offer as much information as it can to back-end databases about the context of its requests. Moreover, it must be prepared to cooperate with special security services that work to coordinate the efforts of all these tiers. Middleware security models offer little, if anything, to support such cooperation. This failing affects many aspects of application security.

Authentication
Authentication is the first line of defense. Knowing the identity of requesters enables the application layer to decide whether to grant their requests and poses a barrier to attackers. All authentication schemes work in fundamentally the same way. They offer a credential to establish identity and provide a means to verify that credential. However, there is a wide variation in the form of credentials and verification mechanisms. Each enterprise's choices of authentication schemes depend on a number of factors, including the sensitivity of protected resources, expected modes of attack, and solution life cycle cost. In most cases, enterprises already have one or more authentication schemes in place, so middleware must work with them by accepting their credentials and engaging their verification mechanisms. Without this cooperation, the enterprise must use a lowest common denominator scheme like passwords, potentially limiting the use of such middleware to low-value applications.
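
All of these schemes reduce to the same two steps named above: present a credential, then verify it. In Java, that pattern is standardized by JAAS. The following minimal sketch assumes a JAAS login configuration entry named "SampleRealm" exists; the realm name and class are illustrative only:

```java
import javax.security.auth.Subject;
import javax.security.auth.callback.*;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class LoginDemo {
    public static Subject authenticate(final String user, final char[] pass)
            throws LoginException {
        // The handler supplies the credential when the LoginModule asks.
        CallbackHandler handler = new CallbackHandler() {
            public void handle(Callback[] callbacks)
                    throws UnsupportedCallbackException {
                for (int i = 0; i < callbacks.length; i++) {
                    if (callbacks[i] instanceof NameCallback) {
                        ((NameCallback) callbacks[i]).setName(user);
                    } else if (callbacks[i] instanceof PasswordCallback) {
                        ((PasswordCallback) callbacks[i]).setPassword(pass);
                    } else {
                        throw new UnsupportedCallbackException(callbacks[i]);
                    }
                }
            }
        };
        // The LoginModule(s) behind "SampleRealm" do the verification.
        LoginContext lc = new LoginContext("SampleRealm", handler);
        lc.login();             // throws LoginException on failure
        return lc.getSubject(); // the authenticated identity
    }
}
```

The point of the indirection is that the calling code never sees how verification happens; swapping the LoginModule behind "SampleRealm" switches authentication schemes without touching the caller.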

The problem of Web single sign-on (SSO) is even more difficult. The motivation for SSO stems from the distributed nature of Web applications. From the user perspective, a single application may actually encompass different software components running on different servers and operated by different organizations. Users don't want to resubmit credentials every time they click a link that happens to take them to a page running in a different location. Their experiences should be seamless. The previous problem of working with existing authentication schemes requires only understanding credential formats and integrating with verification mechanisms. With Web SSO, however, users don't even want to provide credentials in many circumstances. Establishing a user's identity without seeing his credentials requires sophisticated behind-the-scenes communication between the two servers involved in handing off a user session. There are a number of proprietary solutions and some emerging standards for this communication, but a given application will likely have to support multiple approaches for the foreseeable future, so an open model is necessary.

Working with other Web application components involves cooperation on the front end, but middleware infrastructure must also cooperate on the back end. Databases have been around a long time, and enterprises take database security very seriously. They really don't trust the front-end and middleware layers. If an attacker were to compromise either of these layers, he could potentially issue a sequence of database requests that would return a large fraction of all the data the database maintains. Also, if the front-end or middleware components have defects, they could unintentionally request data for the wrong user, resulting in an embarrassing disclosure of private information. Therefore, many enterprises want to bind each database request to a particular end user, including the appropriate credentials that establish the user's identity. Applications must be prepared to propagate this information.
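
Standard JAAS already provides a building block for this kind of propagation. The sketch below is illustrative; the BackEnd class is a hypothetical stand-in for a real data-access layer:

```java
import java.security.PrivilegedAction;
import javax.security.auth.Subject;

public class PropagationDemo {
    public static Object queryAsUser(final Subject user) {
        // Code run inside Subject.doAs executes with this Subject bound
        // to the access-control context, so downstream layers can
        // recover who is asking.
        return Subject.doAs(user, new PrivilegedAction() {
            public Object run() {
                // A real resource adapter could read the current Subject
                // here and attach per-user database credentials.
                return BackEnd.fetchStatement();
            }
        });
    }
}

class BackEnd { // stand-in for a real back-end call
    static Object fetchStatement() { return "statement data"; }
}
```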

Authorization
Once an application has established the requester's identity, it must decide whether the set of existing security policies allows it to grant the request. Typically, middleware infrastructure such as J2EE uses a static, role-based system. During user provisioning, security administrators explicitly assign roles to users and then update these assignments as conditions require. During component deployment, security administrators indicate the roles allowed to access the component. At runtime, if a request comes from a user with the necessary roles, the application grants the request. This static approach ignores the dynamic nature of many business policies. Consider the policies governing bank accounts, expense reports, and bank tellers. Customers should be able to access only their own accounts. A manager can approve an expense report only up to a set amount, and never his own expenses. A teller fulfills the Teller role only while on duty. In even more sophisticated policies, authorization depends on the combination of roles assigned to a user, as well as the content of the request. Middleware infrastructure must explicitly support these dynamic policies or at least provide enough context to specialized security services that do.

The need for dynamic authorization raises the issue of administration. We definitely don't want to force security administrators to become experts in programming languages like Java. Certainly there will be unusual situations that require some custom programming, but routinely updating the dollar threshold for expense report authorization shouldn't require it. At a more mundane level, we don't want them to dig through XML-formatted deployment descriptors and then redeploy components to update role assignments. Security administrators need a well-designed graphical user interface that lets them perform all of their routine tasks and most of their nonroutine ones at runtime. Managing user lists and their assigned roles, changing the level of protection for components, and configuring dynamic constraints should all require just a few moments.

A more complicated headache for security administrators comes in migrating from one authorization service to another. Due to the complexity of authorization decisions, many enterprises rely on specialized services and all applications delegate such decisions to them. When it comes time to perform a major version upgrade or switch to a different service, administrators face a quandary. When do they switch over to the new provider? The concern lies with defects or configuration problems in the new service. They don't want to switch over only to experience a massive case of improper authorizations or mistaken rejections. What they'd really like is to use both systems simultaneously and note when the old and the new service differ in their decisions, but this approach requires an even greater ability for the middleware infrastructure to cooperate with the rest of the security ecology.

Auditing
If an application could simultaneously use two different authorization services, a difference of opinion would be a noteworthy event and administrators would want to know about it. Unfortunately, most middleware infrastructure neglects this type of security auditing. Proper auditing is not simply a matter of writing information to disk somewhere. To support their duties to verify, detect, and investigate, administrators need records of all security events in a single location, active notification of certain especially important events, and the ability to quickly search the records.

Security administrators are responsible for ensuring the enforcement of the enterprise policies regarding information access. Obviously, they must first specify these policies, hopefully using a productive interface as described above. Then they must verify the actual enforcement of these policies by periodically inspecting the audit trail. Government regulations or commercial contracts may require such audits. Administrators sample a representative set of transactions and track their paths through various application components to ensure the correct enforcement of security policies at each step. They need a consolidated audit trail or they'll have to spend a significant effort on manually assembling logs from different locations. They need detailed records or they won't be able to determine full compliance.

Responding to potential breaches is the other primary responsibility of security administrators. Responses involve two steps: detection and investigation. First, they need the ability to specify conditions under which the security system will actively notify them. These conditions could involve transaction values, such as transfers over a million dollars, or a pattern of events, such as a spike in the number of clients connecting with weak encryption when accessing sensitive functions. Once they receive notification, administrators must be able to quickly search the logs to determine whether there has been an actual breach and the extent of any damage. These searches may involve complex criteria and must execute against the live audit trail so they can track a particular attack as it unfolds. These requirements make the auditing subsystem a substantial piece of software in its own right, one that middleware providers must devote considerable effort to perfecting.
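
The detection half of this can be pictured with a small sketch. The class below is purely illustrative (it is not the WLS auditing SPI discussed later): it records every event to a consolidated trail and actively notifies an administrator when an event's severity crosses a configured threshold:

```java
public class ThresholdAuditor {
    public static final int INFO = 0, WARNING = 1, SEVERE = 2;
    private final int notifyThreshold;

    public ThresholdAuditor(int notifyThreshold) {
        this.notifyThreshold = notifyThreshold;
    }

    public void record(int severity, String event) {
        appendToTrail(event);            // consolidated audit trail
        if (severity >= notifyThreshold) {
            notifyAdministrator(event);  // active notification
        }
    }

    private void appendToTrail(String event) {
        System.out.println("AUDIT: " + event); // stand-in for durable storage
    }

    private void notifyAdministrator(String event) {
        System.err.println("ALERT: " + event); // stand-in for paging/e-mail
    }
}
```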

As we've seen, there are many challenges in application security. But like the essence of application security itself, the solution is also rather simple:

  1. A clean, elegant abstraction between security policy and business logic
  2. A simple, declarative interface for managing security policies in real time
  3. An open, flexible architecture for integrating with security services
These three principles avoid the problem of mixing security and business logic, streamline security administration, and enable cooperation with the rest of the security ecology. The WebLogic Security Framework delivers these critical capabilities.

Security Framework Architecture
Overview

The goal of the WebLogic Security Framework is to deliver an approach to application security that is comprehensive, flexible, and open. Unlike the security realms available in earlier versions of WLS, the new framework applies to all J2EE objects, including JSPs, servlets, EJBs, JCA Adapters, JDBC connection pools, and JMS destinations. It complies with all of the J2EE 1.3 security requirements, such as JAAS for objects related to authentication and authorization, JSSE for communication using SSL and TLS, and the SecurityManager class for code-level security.

The heart of the architecture is the separation of security and business logic. Business logic executes in an appropriate container, be it a JSP, servlet, or EJB. When the container receives a request for an object it contains, it delegates the complete request and its entire context to the Security Framework. The framework returns a yes or no decision on whether to grant the request. This approach takes business logic out of the security equation by providing the same information to the security system that is available to the target object. They each use this information to fulfill their dedicated responsibility: the framework enforces security policy and the object executes business logic.

When the Security Framework receives a delegated request, it manages security processing (see Figure 1). This processing is very flexible, with fine-grained steps not found in many systems, such as dynamic role mapping, dynamic authorization, and adjudication of multiple authorizers. At each step, it delegates processing to an included third-party or custom provider through the corresponding service provider interface (SPI). This architecture enables WLS to route the information necessary to each service provider so that applications can take full advantage of specialized security services.

Service Provider Integration
The Security Framework itself only manages security processing; each step is actually executed by a service provider. WLS 7.0 includes providers for every step, but they simply use the framework SPIs, so any other provider has access to the same facilities. These SPIs include:

  • Authentication: Handles the direct verification of requester credentials. The included provider supports username/password and certificate authentication via HTTPS.
  • Identity Assertion: Handles requests where an external system vouches for the requester. The included provider supports X.509 certificates and CORBA IIOP CSIv2 tokens. Because the Security Framework can dispatch requests to different providers based on the type of assertion, you can support a new external system by simply adding a provider for that system type.
  • Role Mapping: Handles the assignment of roles to a user for a given request. The included provider supports dynamic assignment based on username, group, and time.
  • Authorization: Handles the decision to grant or deny access to a resource. In the future, WLS will support many dynamic features, such as the evaluation of request parameter values. The Security Framework supports simultaneous use of multiple authorizers coordinated by an adjudicator.
  • Adjudication: Handles conflicts when using multiple authorization providers. When all the authorization providers return their decisions, the included provider determines whether to grant the original request based on either the rule "all must grant" or the rule "none can deny" (see the sketch after this list).
  • Credential Mapping: Handles the mapping of application principals to backend system credentials. As shown in Figure 1, it's not part of the process leading to an access decision because it's invoked when an object makes a request rather than when an object receives a request. The included provider supports username/password credentials and is used internally for J2EE calls and Web SSO.
  • Auditing: Handles the logging of security operations. As shown in Figure 1, it is slightly different from the other SPIs because it is invoked whenever a provider of any kind executes a function. The included provider supports reporting based on thresholds and writes all reported events to a log file. The Security Framework supports simultaneous use of multiple auditors, making it easy to integrate with external logging systems.
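
As promised above, the adjudication step is simple to sketch. The class below is illustrative only; it does not use the real WLS adjudication SPI, and the integer vote encoding is an assumption made for brevity:

```java
public class SimpleAdjudicator {
    // Each authorization provider casts one of these votes.
    public static final int PERMIT = 1, ABSTAIN = 0, DENY = -1;

    private final boolean requireUnanimousPermit;

    public SimpleAdjudicator(boolean requireUnanimousPermit) {
        this.requireUnanimousPermit = requireUnanimousPermit;
    }

    /** Returns true if the combined decision is to grant the request. */
    public boolean adjudicate(int[] votes) {
        boolean sawDeny = false, allPermit = true;
        for (int i = 0; i < votes.length; i++) {
            if (votes[i] == DENY)   sawDeny = true;
            if (votes[i] != PERMIT) allPermit = false;
        }
        return requireUnanimousPermit
                ? allPermit   // "all must grant"
                : !sawDeny;   // "none can deny" (abstentions allowed)
    }
}
```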

These clean SPIs make it possible to plug and unplug different providers as the security ecology evolves, benefiting everyone involved. BEA can individually upgrade the providers included with WLS. Specialist security vendors can easily make their services available to J2EE applications by coding their products to the appropriate SPIs, and many have already done so. Moreover, enterprises can quickly implement customized security processing where necessary. Instead of adapting your security posture to suit the middleware, the middleware adapts its security processing to you.

From an administrator's perspective, selecting from available providers is simply a matter of pointing and clicking. Using the WLS console, you expand the Realms node and then expand the Providers node. For a given provider type, you select one of the available provider instances and configure its properties.

Backwards Compatibility
As described above, the WebLogic Security Framework revolutionizes application-layer security. However, you may have invested a substantial effort in configuring the security realms used in WLS 6.x. You might not want to upgrade your security model immediately, so the framework offers a realm adapter for backwards compatibility. This adapter is the complete security subsystem from WLS 6.x, and the framework treats it just as it does any other service provider that implements the authentication and authorization SPIs. At server startup, the adapter extracts access control definitions from the deployment descriptor just as before. At runtime, it accepts authentication and authorization requests delegated from the framework through the corresponding SPI. From your perspective, WLS 7.0 security behaves just like 6.x security. From the server's perspective, the realm adapter is fully integrated into the 7.0 Security Framework. Once you decide to upgrade, you can easily import the security information from 6.x definitions. You can even perform simultaneous authorization with the realm adapter and the Security Framework's native provider to verify proper behavior of the upgrade.

In some cases, you may be using the 6.x realm's integration with the distributed user management systems in Unix or Windows NT. In these cases, you might want to continue using this integration for authentication while gaining the benefits of the Security Framework's dynamic authorization. Therefore, the framework has an option to use the 6.x realm adapter only as an authentication provider. It's interesting to note how the flexibility of the framework's SPI architecture cleanly addresses what might otherwise be a very tricky backwards compatibility issue.

Security Integration Scenarios
The Security Framework offers a lot of flexibility. Let's look at a few specific examples of this. In some cases, the solution may not be completely finished, but the planned design demonstrates the superiority of an open framework approach.

Perimeter Authentication
In many cases, a party other than WLS's own authenticator vouches for the identity of a requester. It may be the SSL layer of WLS. It may be a Kerberos system. It may be an intermediary Web service. In these cases, the third party provides a token that the application can verify. As long as it trusts the third party, it can accept a verified token as if it were the original user credential.

The Security Framework employs a straightforward mechanism for working with such systems. All a third party has to do is put its token in an HTTP header. The Security Framework examines the token and dispatches an appropriate service provider based on the token type. If an X.509 certificate from mutual SSL authentication comes in, the framework dispatches a provider that can verify the certificate chain to a root certificate authority and perhaps check the current validity of the certificate using the Online Certificate Status Protocol. If a Kerberos ticket or WS-Security token comes in, the appropriate provider decodes the token and performs the necessary verification.

Once this verification is done, the provider maps the identity in the credential to a local user. The framework then calls back into JAAS with this local user, which populates the Principal object as specified in J2EE 1.3. This approach is fully compliant with the appropriate standards yet still offers flexibility. A third-party provider or enterprise development team can integrate any authentication technology with WLS as long as they can populate an HTTP header. Integrating WLS applications with Web SSO solutions is easy because most of them, including SAML-based solutions, already use cookies or HTTP headers.
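
The header-based handoff can be sketched with a plain servlet filter. This is a hypothetical illustration rather than the WLS identity assertion SPI; the header name and the TokenVerifier class are assumptions:

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

public class AssertionFilter implements Filter {
    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        // Token placed in a header by a trusted third party.
        String token = http.getHeader("X-Perimeter-Token");
        String localUser = TokenVerifier.verifyAndMap(token); // null if invalid
        if (localUser == null) {
            throw new ServletException("untrusted or missing assertion token");
        }
        // A real framework would now populate the JAAS Subject/Principal
        // for this local user before dispatching the request.
        chain.doFilter(req, res);
    }
}

class TokenVerifier { // stand-in for real signature/chain verification
    static String verifyAndMap(String token) {
        return (token != null && token.startsWith("trusted:"))
                ? token.substring("trusted:".length()) : null;
    }
}
```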

Role Associations
Most application security models employ the concept of roles, which provide a layer of indirection between users and resources that increases the ease of administration. Roles are like Groups, but more dynamic. Typically, a security administrator assigns a user to a group upon provisioning and then changes this assignment only when the user's job responsibilities change. Roles change more often, perhaps even from request to request based on specific conditions. The Security Framework supports both Groups and Roles.

Administrators can set up roles so that they embody a logical identity such as Teller or name a set of logical permissions such as Deposit, Withdraw, and Transfer. It's really a matter of design. The first approach is more focused on the logical role of the user and the second approach is more focused on the logical role of the resource. The Security Framework distinguishes between globally scoped roles, which apply to all resources in an installation, and resource-scoped roles, which apply only to specific resources. Globally scoped roles are intended primarily for managing different levels of administrative privileges. Administrators will configure and manage resource-scoped roles.

The Security Framework enables service providers to dynamically assign roles based on context (see Figure 2). The included provider can take into account username, group, and time. For example, suppose a bank had an AccountManagement EJB restricted to users with a Teller role. A user would fulfill the Teller role if he or she were a member of the DayTeller group and the local time was between 8 a.m. and 6 p.m. Administrators could also use this time feature to set up role assignments that automatically expire, which might be especially useful for very sensitive information such as human resources data. Figure 3 shows how easy it is to set up this dynamic assignment. The role-mapping SPI actually supports the use of additional information, such as the parameters of the method call. Therefore, custom providers could offer even more flexible role mapping. Consider the case of a CFO and expense accounts. A custom provider could grant the CFO an Approver role unless the Employee parameter of the request were the CFO himself. That way, he couldn't approve his own expenses.
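
The time-window rule above is simple enough to sketch directly. The class below is illustrative only and is not written against the role-mapping SPI; the group and role names come from the example in the text:

```java
import java.util.Calendar;
import java.util.Set;

public class TellerRoleMapper {
    /** groups: the caller's group memberships at request time. */
    public static boolean hasTellerRole(Set groups, Calendar now) {
        int hour = now.get(Calendar.HOUR_OF_DAY);
        boolean onDuty = hour >= 8 && hour < 18;       // 8 a.m. to 6 p.m.
        return groups.contains("DayTeller") && onDuty; // role expires off-hours
    }
}
```

Because the role is recomputed per request, no administrator has to remember to revoke it at the end of the day; the assignment simply stops holding.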

Credential Mapping
As discussed above, enterprises often want to tie each request to a back-end database, packaged application, or legacy system to the ultimate user. Therefore, when a J2EE object accesses a back-end system on behalf of a user, it has to supply the appropriate credentials to the system. The basic problem is mapping a J2EE Principal to back-end system credentials. The included service provider solves this problem for the most common case of username/password credentials. Each WLS instance has an embedded LDAP directory in which it can store the encrypted username/password pair for every valid combination of Principal and back-end system.
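
In sketch form, credential mapping is just a keyed lookup. The class below is a hypothetical illustration; in the real provider the table lives in the embedded LDAP directory and the passwords are stored encrypted:

```java
import java.util.HashMap;
import java.util.Map;

public class CredentialMapper {
    // Key: principal + "@" + back-end system; value: "user:password".
    private final Map table = new HashMap();

    public void map(String principal, String system,
                    String user, String password) {
        table.put(principal + "@" + system, user + ":" + password);
    }

    /** Returns the back-end credential for this principal, or null. */
    public String lookup(String principal, String system) {
        return (String) table.get(principal + "@" + system);
    }
}
```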

Parametric Authorization
One of the classic application security problems is making an authorization decision based on the content of the request or the target object. Approval thresholds are a common case where you want to evaluate the value of request parameters. First, you would create a set of roles such as Manager, SeniorManager, and Director. Then, you would create a set of policies that authorizes approval requests for each role based on the value of the Amount parameter, such as $5,000 for a Manager, $10,000 for a SeniorManager, and $20,000 for a Director. Most middleware does not currently address this issue. A future version of the included authorization service provider will allow such decisions based on the content of the request (see Figure 4). In fact, the authorization SPI already supports the use of method call parameters in access decisions, so you could build a custom provider with these capabilities today.
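
Such a custom provider's core decision could look like the following sketch, which uses the example roles and limits from the text; the class itself is an invention for illustration, not a WLS provider:

```java
import java.util.HashMap;
import java.util.Map;

public class ApprovalPolicy {
    private final Map limits = new HashMap(); // role -> maximum amount

    public ApprovalPolicy() {
        limits.put("Manager",       new Double(5000));
        limits.put("SeniorManager", new Double(10000));
        limits.put("Director",      new Double(20000));
    }

    /** Grants the request if any of the caller's roles covers the amount. */
    public boolean authorize(String[] callerRoles, double amount) {
        for (int i = 0; i < callerRoles.length; i++) {
            Double limit = (Double) limits.get(callerRoles[i]);
            if (limit != null && amount <= limit.doubleValue()) {
                return true;
            }
        }
        return false;
    }
}
```

Note that the thresholds are data, not code: an administrator could edit the limits table without touching the expense-report component itself.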

Authorization based on the content of the target is a little trickier. For example, you may want to inspect an Account object to get the value of AccountHolder before you decide to authorize a withdrawal. Unfortunately, in the general case, this type of visibility can break encapsulation and present a security vulnerability. If the security system can access any data in the system, it presents a tempting target for compromise. It will soon be possible to write custom code to perform this type of operation in select cases. A future version of the SPI will enable you to access context besides the method call parameters, such as the EJB primary key. You could then create a very specialized provider that examined the primary key of the Account object targeted by a Withdrawal request to determine the correct row in an account database. The provider would make its own call to this database, using special credentials to retrieve the account holder and compare it to the Principal. This solution might require some manual mapping of account holder values to Principal values, but it would work. The dotted line from the authorizer to the external database in Figure 4 illustrates such a solution.
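
In sketch form, such a provider's check might look like this. Everything here is hypothetical: the table and column names, the assumption that account holder names match Principal names, and the provider's own JDBC Connection opened with special credentials:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountHolderCheck {
    public static boolean callerOwnsAccount(Connection db, String accountId,
                                            String callerName)
            throws SQLException {
        PreparedStatement stmt = db.prepareStatement(
                "SELECT holder_name FROM accounts WHERE account_id = ?");
        try {
            stmt.setString(1, accountId); // the EJB primary key
            ResultSet rs = stmt.executeQuery();
            // Grant only if the row exists and the holder matches the
            // caller; a real deployment might need an explicit mapping
            // of holder values to Principal values here.
            return rs.next() && callerName.equals(rs.getString("holder_name"));
        } finally {
            stmt.close();
        }
    }
}
```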

Note that you would have to write almost exactly the same code to perform this check in the Account object itself. However, the service provider approach partitions the security logic and the business logic. Different people can maintain the two types of logic, and changing one does not risk breaking the other. This flexibility is important if you consider the actual complexity of banking applications that handle minor accounts, joint accounts, and business accounts. Maintaining the security policies for such an application could be a full-time job in and of itself.

Conclusion
The WebLogic Server 7.0 Security Framework doesn't impose a rigid security model that hinders security integration with other system elements and forces the costly workaround of mixing security code with business logic. Instead, it adopts an open processing model so that application components can seamlessly cooperate with the rest of the enterprise security ecology. Moreover, its processing model delivers a clean abstraction of policy enforcement from business logic that lowers the cost of administering security policies and decreases the chance of security breaches.

Both application developers and security administrators benefit from the Security Framework. Developers no longer have to shoulder the responsibility and potential embarrassment of mixing application and security code. Administrators don't have to become experts in middleware paradigms to meet security requirements. When someone has to write special security code, he or she only has to do it once: everyone can use it and it's easy to maintain.

The key to the Security Framework's benefits lies in its open service provider model. Third-party security vendors can easily integrate their solutions with WebLogic and enterprises can quickly create custom security modules. Most important, an open model means that enterprises do not have to wait for the middleware vendor to adopt new security technologies because there are plenty of hooks for future innovations.

More Stories By Vadim Rosenberg

Vadim Rosenberg is the product marketing manager for BEA WebLogic Server. Before joining BEA two years ago, Vadim spent 13 years in business software engineering, most recently at Compaq Computers (Tandem Division) developing a fault-tolerant and highly scalable J2EE framework.

More Stories By Paul Patrick

As chief security architect for BEA Systems, Paul Patrick is responsible for the overall security product strategy at BEA. He plays a key role in driving the design and implementation of security functionality across all of BEA's products, and is the architect for BEA's new enterprise security infrastructure product, WebLogic Enterprise Security. Prior to becoming chief security architect, Paul was the lead architect of BEA's ObjectBroker CORBA ORB and co-architect of WebLogic Enterprise (now Tuxedo). He is also the author of several patent applications as well as industry publications and a book on CORBA.
