WebLogic Security Framework

WebLogic Server 7.0 offers a new, integrated approach to solving the overall security problem for enterprise applications. With this framework, application security becomes a function of the application infrastructure and is separate from the application itself. Any application deployed on WebLogic Server (WLS) can be secured through the security features included with the server out of the box, by extending the open Security Service Provider Interface with a custom security solution, or by plugging in specialized security solutions from the major security vendors on which the customer's enterprise standardizes.

This article defines the major requirements for an integrated application security solution and explains how the WebLogic Server 7.0 Security Framework delivers them to your application.

Requirements
The goals of application security are simple: (1) enforce business policies concerning which people should have access to which resources, and (2) don't let attackers access any information. Goal (1) causes a problem because it seems natural to enforce business policies in business logic. That instinct is misplaced: policies become much harder to change when enforcement is embedded in business logic. Consider the analogy to a secure physical filing system. When a security policy changes, you don't take a document and rewrite it; you put it in a different filing cabinet. Different filing cabinets have different keys, and a security officer controls their distribution. Similarly, application developers should not have to change business logic when security policy changes. A security administrator should simply alter the protection given to the affected components.
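
To make the filing-cabinet analogy concrete, here is a minimal sketch of the anti-pattern this paragraph warns against: a policy hard-coded into business logic. The bean, method, and role names are hypothetical; only the standard EJB API call is real.

```java
import javax.ejb.SessionContext;

// Illustrative only: an expense bean with the security policy hard-coded
// into business logic. Changing the threshold or role name means editing
// and redeploying the bean itself.
public class ExpenseBean {
    private SessionContext ctx;              // supplied by the EJB container
    private static final double LIMIT = 5000.00;

    public void approveExpense(String reportId, double amount) {
        // Security decision tangled with business logic: the equivalent of
        // rewriting the document every time the policy changes.
        if (!ctx.isCallerInRole("Manager") || amount > LIMIT) {
            throw new SecurityException("caller may not approve " + reportId);
        }
        // ... actual approval logic would go here ...
    }
}
```

The J2EE alternative is to declare the role requirement in the deployment descriptor instead, and the Security Framework goes further by moving the dynamic parts of the policy into pluggable providers that an administrator controls.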

Moreover, mixing security code with business logic compromises both goals if developers make mistakes. When the security code in a component has a defect, people may accidentally access information they shouldn't, and attackers may exploit the defect to gain unauthorized access. Of course, mistakes are unavoidable; that's why we test software. But it's a lot harder to test the security of every application component individually than to test a security system as a whole. The difference is somewhat analogous to reading every document in our hypothetical filing system for its fidelity to security policies rather than simply testing the integrity of the locked filing cabinets. However, we shouldn't blame application developers for mixing security code and business logic. We should blame middleware security models. Most of them simply do not support the types of policies many enterprises have, such as "only an account holder can access his account." Unless these security models begin supporting a much more dynamic type of security, developers really have no choice.

Middleware security models also fail enterprises in goal (2). Keeping attackers out requires a united front from all the elements in a distributed system. Cooperation is the key to this united front. Middleware sits between front-end processors and back-end databases. The middleware security system must be prepared to accept as much information as it can from the front-end processors about the security context of their requests and must be prepared to offer as much information as it can to back-end databases about the context of its requests. Moreover, it must be prepared to cooperate with special security services that work to coordinate the efforts of all these tiers. Middleware security models offer little, if anything, to support such cooperation. This failing affects many aspects of application security.

Authentication
Authentication is the first line of defense. Knowing the identity of requesters enables the application layer to decide whether to grant their requests and poses a barrier to attackers. All authentication schemes work in fundamentally the same way. They offer a credential to establish identity and provide a means to verify that credential. However, there is a wide variation in the form of credentials and verification mechanisms. Each enterprise's choices of authentication schemes depend on a number of factors, including the sensitivity of protected resources, expected modes of attack, and solution life cycle cost. In most cases, enterprises already have one or more authentication schemes in place, so middleware must work with them by accepting their credentials and engaging their verification mechanisms. Without this cooperation, the enterprise must use a lowest common denominator scheme like passwords, potentially limiting the use of such middleware to low-value applications.
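
As a concrete illustration, the standard JAAS API already separates the credential from its verification mechanism: the application collects credentials through callbacks, while the configured login module, whatever scheme the enterprise has chosen, performs the verification. A minimal sketch follows; the configuration entry name "Sample" is hypothetical.

```java
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

// A minimal JAAS login: the application never knows which LoginModule
// verifies the credential, so the same code works with many schemes.
public class PasswordLogin {
    public static Subject authenticate(String user, char[] pass)
            throws LoginException {
        CallbackHandler handler = callbacks -> {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) {
                    ((NameCallback) cb).setName(user);
                } else if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword(pass);
                }
            }
        };
        // "Sample" names a JAAS configuration entry; the verification
        // mechanism is chosen there, not here.
        LoginContext lc = new LoginContext("Sample", handler);
        lc.login();                  // throws LoginException on failure
        return lc.getSubject();      // the authenticated identity
    }
}
```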

The problem of Web single sign-on (SSO) is even more difficult. The motivation for SSO stems from the distributed nature of Web applications. From the user perspective, a single application may actually encompass different software components running on different servers and operated by different organizations. Users don't want to resubmit credentials every time they click a link that happens to take them to a page running in a different location. Their experiences should be seamless. The previous problem of working with existing authentication schemes requires only understanding credential formats and integrating with verification mechanisms. With Web SSO, however, users don't even want to provide credentials in many circumstances. Establishing a user's identity without seeing his credentials requires sophisticated behind-the-scenes communication between the two servers involved in handing off a user session. There are a number of proprietary solutions and some emerging standards for this communication, but a given application will likely have to support multiple approaches for the foreseeable future, so an open model is necessary.

Working with other Web application components involves cooperation on the front end, but middleware infrastructure must also cooperate on the back end. Databases have been around a long time, and enterprises take database security very seriously. They really don't trust the front-end and middleware layers. If an attacker were to compromise either one of these layers, he could potentially issue a sequence of database requests that would return a large fraction of all the data the database maintains. Also, if the front-end or middleware components have defects, they could unintentionally request data for the wrong user, resulting in an embarrassing disclosure of private information. Therefore, many enterprises want to bind each database request to a particular end user, including the appropriate credentials that establish the user's identity. Applications must be prepared to propagate this information.
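
A minimal sketch of what such binding can look like at the application level, assuming hypothetical table and column names. The point is only that the caller's identity travels with the query instead of being replaced by a shared superuser identity.

```java
import java.security.Principal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.ejb.SessionContext;

// Illustrative sketch: each database request carries the end user's
// identity, so a defect elsewhere cannot silently return another
// user's rows.
public class AccountQuery {
    private SessionContext ctx;    // supplied by the EJB container

    public void fetchBalance(Connection con, String accountId)
            throws SQLException {
        Principal caller = ctx.getCallerPrincipal();
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT balance FROM accounts WHERE id = ? AND holder = ?")) {
            ps.setString(1, accountId);
            ps.setString(2, caller.getName());  // bind the request to the user
            try (ResultSet rs = ps.executeQuery()) {
                // ... read the balance ...
            }
        }
    }
}
```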

Authorization
Once an application has established the requester's identity, it must decide whether the set of existing security policies allows it to grant the request. Typically, middleware infrastructure such as J2EE uses a static, role-based system. During user provisioning, security administrators explicitly assign roles to users and then update these assignments as conditions require. During component deployment, security administrators indicate the roles allowed to access the component. At runtime, if a request comes from a user with the necessary roles, the application grants the request. This static approach ignores the dynamic nature of many business policies. Consider the policies governing bank accounts, expense reports, and bank tellers. For bank accounts, customers should only be able to access their own accounts. For expense reports, a manager can provide an approval only up to a set amount and never for his own expenses. For bank tellers, the teller role applies only while they're on duty. In even more sophisticated policies, authorization depends on the combination of roles assigned to a user, as well as the content of the request. Middleware infrastructure must explicitly support these dynamic policies or at least provide enough context to specialized security services that do.
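
The dynamic policies in this paragraph are easy to state as predicates, which is exactly what a static role list cannot express. A sketch in plain Java, with all names hypothetical; in practice such checks belong in an authorization provider, not in application code.

```java
// Sketches of the three dynamic policies described above.
public class DynamicPolicies {
    // "Only an account holder can access his account."
    static boolean mayAccessAccount(String caller, String accountHolder) {
        return caller.equals(accountHolder);
    }

    // "A manager can approve only up to a set amount and never his own
    // expenses." The threshold is data an administrator can change.
    static boolean mayApprove(String caller, String submitter,
                              double amount, double threshold) {
        return !caller.equals(submitter) && amount <= threshold;
    }

    // "A teller fulfills the teller role only while on duty."
    static boolean isActingTeller(boolean inTellerGroup, int hourOfDay) {
        return inTellerGroup && hourOfDay >= 8 && hourOfDay < 18;
    }
}
```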

The need for dynamic authorization raises the issue of administration. We definitely don't want to force security administrators to become experts in programming languages like Java. Certainly there will be unusual situations that require some custom programming, but routinely updating the dollar threshold for expense report authorization shouldn't require it. At a more mundane level, we don't want them to dig through XML-formatted deployment descriptors and then redeploy components to update role assignments. Security administrators need a well-designed graphical user interface that lets them perform all of their routine tasks and most of their nonroutine ones at runtime. Managing user lists and their assigned roles, changing the level of protection for components, and configuring dynamic constraints should all require just a few moments.

A more complicated headache for security administrators comes in migrating from one authorization service to another. Due to the complexity of authorization decisions, many enterprises rely on specialized services and all applications delegate such decisions to them. When it comes time to perform a major version upgrade or switch to a different service, administrators face a quandary. When do they switch over to the new provider? The concern lies with defects or configuration problems in the new service. They don't want to switch over only to experience a massive case of improper authorizations or mistaken rejections. What they'd really like is to use both systems simultaneously and note when the old and the new service differ in their decisions, but this approach requires an even greater ability for the middleware infrastructure to cooperate with the rest of the security ecology.

Auditing
If an application could simultaneously use two different authorization services, a difference of opinion would be a noteworthy event and administrators would want to know about it. Unfortunately, most middleware infrastructure neglects this type of security auditing. Proper auditing is not simply a matter of writing information to disk somewhere. To support their duties to verify, detect, and investigate, administrators need records of all security events in a single location, active notification of certain especially important events, and the ability to quickly search the records.

Security administrators are responsible for ensuring the enforcement of the enterprise policies regarding information access. Obviously, they must first specify these policies, hopefully using a productive interface as described above. Then they must verify the actual enforcement of these policies by periodically inspecting the audit trail. Government regulations or commercial contracts may require such audits. Administrators sample a representative set of transactions and track their paths through various application components to ensure the correct enforcement of security policies at each step. They need a consolidated audit trail or they'll have to spend a significant effort on manually assembling logs from different locations. They need detailed records or they won't be able to determine full compliance.

Responding to potential breaches is the other primary responsibility of security administrators. Responses involve two steps: detection and investigation. First, administrators need the ability to specify conditions under which the security system will actively notify them. These conditions could involve transaction values, such as transfers over a million dollars, or a pattern of events, such as a spike in the number of clients connecting with weak encryption when accessing sensitive functions. Once they receive notification, administrators must be able to quickly search the logs to determine whether there has been an actual breach and the extent of any damage. These searches may involve complex criteria and must execute against the live audit trail so they can track a particular attack as it unfolds. These requirements make the auditing subsystem a significant piece of software in its own right, one that middleware providers must devote substantial effort to perfecting.
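
A toy sketch of these auditing requirements, with all types hypothetical: one consolidated sink, a reporting threshold, and an active-notification condition (the million-dollar transfer example from above).

```java
// Hypothetical audit sink: every reportable event lands in one
// consolidated log, and events matching an alert condition trigger
// active notification of the administrator.
public class AuditSink {
    public enum Severity { INFO, WARNING, ERROR }

    interface Notifier { void notify(String message); }

    private final Severity reportThreshold;
    private final Notifier notifier;

    public AuditSink(Severity reportThreshold, Notifier notifier) {
        this.reportThreshold = reportThreshold;
        this.notifier = notifier;
    }

    public void log(Severity severity, String event, double amount) {
        if (severity.compareTo(reportThreshold) < 0) {
            return;                            // below the reporting threshold
        }
        appendToConsolidatedLog(severity + " " + event);
        // Example alert condition: transfers over a million dollars.
        if (amount > 1_000_000) {
            notifier.notify("large transfer: " + event);
        }
    }

    private void appendToConsolidatedLog(String line) {
        System.out.println(line);  // stand-in for the real audit trail
    }
}
```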

As we've seen, there are many challenges in application security. But like the essence of application security itself, the solution is also rather simple. It requires:

  1. A clean, elegant abstraction between security policy and business logic
  2. A simple, declarative interface for managing security policies in real time
  3. An open, flexible architecture for integrating with security services
These three elements avoid the problem of mixing security and business logic, streamline security administration, and enable cooperation with the rest of the security ecology. The WebLogic Security Framework delivers these critical capabilities.

Security Framework Architecture
Overview

The goal of the WebLogic Security Framework is to deliver an approach to application security that is comprehensive, flexible, and open. Unlike the security realms available in earlier versions of WLS, the new framework applies to all J2EE objects, including JSPs, servlets, EJBs, JCA Adapters, JDBC connection pools, and JMS destinations. It complies with all of the J2EE 1.3 security requirements, such as JAAS for objects related to authentication and authorization, JSSE for communication using SSL and TLS, and the SecurityManager class for code-level security.

The heart of the architecture is the separation of security and business logic. Business logic executes in an appropriate container, be it a JSP, servlet, or EJB. When the container receives a request for an object it contains, it delegates the complete request and its entire context to the Security Framework. The framework returns a yes or no decision on whether to grant the request. This approach takes business logic out of the security equation by providing the same information to the security system that is available to the target object. They each use this information to fulfill their dedicated responsibility: the framework enforces security policy and the object executes business logic.
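
Schematically, the contract between container and framework reduces to a single question. The interface below is illustrative only, not the actual WLS SPI; it shows the shape of the delegation: full request context in, a boolean out.

```java
import java.util.Map;
import javax.security.auth.Subject;

// Illustrative contract between a container and the Security Framework:
// the container supplies everything it knows about the request and
// receives only a grant/deny decision.
public interface AccessDecisionPoint {
    /**
     * @param subject  the authenticated caller
     * @param resource a resource identifier, e.g. "ejb:AccountManagement/withdraw"
     * @param context  request details (method parameters, time, and so on)
     * @return true to grant the request, false to deny it
     */
    boolean isAccessAllowed(Subject subject, String resource,
                            Map<String, Object> context);
}
```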

When the Security Framework receives a delegated request, it manages security processing (see Figure 1). This processing is very flexible, with fine-grained steps not found in many systems, such as dynamic role mapping, dynamic authorization, and adjudication of multiple authorizers. At each step, it delegates processing to an included third-party or custom provider through the corresponding service provider interface (SPI). This architecture enables WLS to route the information necessary to each service provider so that applications can take full advantage of specialized security services.

Service Provider Integration
The Security Framework itself manages only the security processing; each step is executed by a service provider. WLS 7.0 includes providers for every step, but they simply use the framework SPIs, so any other provider has access to the same facilities. These SPIs include the following (a sketch of a custom adjudicator appears after the list):

  • Authentication: Handles the direct verification of requester credentials. The included provider supports username/password and certificate authentication via HTTPS.
  • Identity Assertion: Handles requests where an external system vouches for the requester. The included provider supports X.509 certificates and CORBA IIOP CSIv2 tokens. Because the Security Framework can dispatch requests to different providers based on the type of assertion, you can support a new external system by simply adding a provider for that system type.
  • Role Mapping: Handles the assignment of roles to a user for a given request. The included provider supports dynamic assignment based on username, group, and time.
  • Authorization: Handles the decision to grant or deny access to a resource. In the future, WLS will support many dynamic features, such as the evaluation of request parameter values. The Security Framework supports simultaneous use of multiple authorizers coordinated by an adjudicator.
  • Adjudication: Handles conflicts when using multiple authorization providers. When all the authorization providers return their decisions, the included provider determines whether to grant the original request based on either the rule "all must grant" or the rule "none can deny."
  • Credential Mapping: Handles the mapping of application principals to backend system credentials. As shown in Figure 1, it's not part of the process leading to an access decision because it's invoked when an object makes a request rather than when an object receives a request. The included provider supports username/password credentials and is used internally for J2EE calls and Web SSO.
  • Auditing: Handles the logging of security operations. As shown in Figure 1, it is slightly different from the other SPIs because it is invoked whenever a provider of any kind executes a function. The included provider supports reporting based on thresholds and writes all reported events to a log file. The Security Framework supports simultaneous use of multiple auditors, making it easy to integrate with external logging systems.
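
As promised above, here is a sketch of adjudication over multiple authorizers. The three-valued vote and both combination rules follow the descriptions in the list; the types themselves are hypothetical.

```java
import java.util.List;

// Sketch of an adjudicator combining the decisions of several
// authorization providers into one final answer.
public class Adjudicator {
    public enum Result { GRANT, DENY, ABSTAIN }

    // Rule 1 ("all must grant"): every authorizer must explicitly grant.
    static boolean allMustGrant(List<Result> votes) {
        return votes.stream().allMatch(v -> v == Result.GRANT);
    }

    // Rule 2 ("none can deny"): grant unless some authorizer denies.
    static boolean noneCanDeny(List<Result> votes) {
        return votes.stream().noneMatch(v -> v == Result.DENY);
    }
}
```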

These clean SPIs make it possible to plug and unplug different providers as the security ecology evolves, benefiting everyone involved. BEA can individually upgrade the providers included with WLS. Specialist security vendors can easily make their services available to J2EE applications by coding their products to the appropriate SPIs, and many have already done so. Moreover, enterprises can quickly implement customized security processing where necessary. Instead of adapting your security posture to suit the middleware, the middleware adapts its security processing to you.

From an administrator's perspective, selecting from available providers is simply a matter of pointing and clicking. Using the WLS console, you expand the Realms node and then expand the Providers node. For a given provider, you select one of the available provider instances and configure its properties.

Backwards Compatibility
As described above, the WebLogic Security Framework revolutionizes application-layer security. However, you may have invested a substantial effort in configuring the security realms used in WLS 6.x. You might not want to upgrade your security model immediately, so the framework offers a realm adapter for backwards compatibility. This adapter is the complete security subsystem from WLS 6.x, and the framework treats it just as it does any other service provider that implements the authentication and authorization SPIs. At server startup, the adapter extracts access control definitions from the deployment descriptor just as before. At runtime, it accepts authentication and authorization requests delegated from the framework through the corresponding SPI. From your perspective, WLS 7.0 security behaves just like 6.x security. From the server's perspective, the realm adapter is fully integrated into the 7.0 Security Framework. Once you decide to upgrade, you can easily import the security information from 6.x definitions. You can even perform simultaneous authorization with the realm adapter and the Security Framework's native provider to verify proper behavior of the upgrade.

In some cases, you may be using the 6.x realm's integration with the distributed user management systems in Unix or Windows NT. In these cases, you might want to continue using this integration for authentication while gaining the benefits of the Security Framework's dynamic authorization. Therefore, the framework has an option to use the 6.x realm adapter only as an authentication provider. It's interesting to note how the flexibility of the framework's SPI architecture cleanly addresses what might otherwise be a very tricky backwards compatibility issue.

Security Integration Scenarios
The Security Framework offers a lot of flexibility. Let's look at a few specific examples of this. In some cases, the solution may not be completely finished, but the planned design demonstrates the superiority of an open framework approach.

Perimeter Authentication
In many cases, a party other than WLS's own authenticator vouches for the identity of a requester. It may be the SSL layer of WLS. It may be a Kerberos system. It may be an intermediary Web service. In these cases, the third party provides a token that the application can verify. As long as it trusts the third party, it can accept a verified token as if it were the original user credential.

The Security Framework employs a straightforward mechanism for working with such systems. All a third party has to do is put its token in an HTTP header. The Security Framework examines the token and dispatches an appropriate service provider based on the token type. If an X.509 certificate from mutual SSL authentication comes in, the framework dispatches a provider that can verify the certificate chain to a root certificate authority and perhaps check the current validity of the certificate using the Online Certificate Status Protocol. If a Kerberos ticket or WS-Security token comes in, the appropriate provider decodes the token and performs the necessary verification.

Once this verification is done, the provider maps the identity in the credential to a local user. The framework calls back to JAAS with this local user, which then populates the Principal object as specified in J2EE 1.3. This approach is fully compliant with the appropriate standards yet still offers flexibility. A third-party provider or an enterprise development team can integrate any authentication technology with WLS as long as it can populate an HTTP header. Integrating WLS applications with Web SSO solutions is easy because most of them, including SAML, already use cookies or HTTP headers.
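
A minimal sketch of the header-based dispatch just described. The header names, token types, and verifier interface are hypothetical stand-ins for the framework's identity assertion SPI.

```java
import java.util.HashMap;
import java.util.Map;
import javax.servlet.http.HttpServletRequest;

// Illustrative only: a perimeter token travels in an HTTP header and is
// dispatched to a verifier registered for that token type.
public class PerimeterAsserter {
    interface TokenVerifier {
        /** Verifies the token and returns the mapped local username. */
        String verifyAndMapToLocalUser(String token);
    }

    private final Map<String, TokenVerifier> verifiers = new HashMap<>();

    public void register(String tokenType, TokenVerifier verifier) {
        verifiers.put(tokenType, verifier);
    }

    /** Returns the local user vouched for by the trusted third party. */
    public String assertIdentity(HttpServletRequest request) {
        String type = request.getHeader("X-Token-Type");       // hypothetical
        String token = request.getHeader("X-Perimeter-Token"); // hypothetical
        TokenVerifier verifier = verifiers.get(type);
        if (verifier == null) {
            throw new SecurityException("unsupported token type: " + type);
        }
        return verifier.verifyAndMapToLocalUser(token); // e.g. Kerberos, X.509
    }
}
```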

Role Associations
Most application security models employ the concept of roles, which provide a layer of indirection between users and resources that eases administration. Roles are like Groups, but more dynamic. Typically, a security administrator assigns a user to a group upon provisioning and then changes this assignment only when the user's job responsibilities change. Roles change more often, perhaps even from request to request based on specific conditions. The Security Framework supports both Groups and Roles.

Administrators can set up roles so that they embody a logical identity, such as Teller, or name a set of logical permissions, such as Deposit, Withdraw, and Transfer. It's really a matter of design: the first approach focuses on the logical role of the user, and the second on the logical role of the resource. The Security Framework distinguishes between globally scoped roles, which apply to all resources in an installation, and resource-scoped roles, which apply only to specific resources. Globally scoped roles are intended primarily for managing different levels of administrative privileges. Administrators will configure and manage resource-scoped roles.

The Security Framework enables service providers to dynamically assign roles based on context (see Figure 2). The included provider can take into account username, group, and time. For example, suppose a bank had an AccountManagement EJB restricted to users with a Teller role. A user would fulfill the Teller role if he or she were a member of the DayTeller group and the local time was between 8 a.m. and 6 p.m. Administrators could also use this time feature to set up role assignments that automatically expire, which might be especially useful for very sensitive information such as human resources data. Figure 3 shows how easy it is to set up this dynamic assignment. The role-mapping SPI actually supports the use of additional information, such as the parameters of the method call. Therefore, custom providers could offer even more flexible role mapping. Consider the case of a CFO and expense accounts. A custom provider could grant the CFO an Approver role unless the Employee parameter of the request were the CFO himself. That way, he couldn't approve his own expenses.
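
Both role-mapping examples in this section reduce to small predicates over request context. A sketch with hypothetical names follows; a real provider would evaluate these rules on every request through the role-mapping SPI.

```java
import java.time.LocalTime;
import java.util.Set;

// Sketches of the two dynamic role assignments described above.
public class RoleRules {
    // Teller: member of DayTeller, and local time between 8 a.m. and 6 p.m.
    static boolean isTeller(Set<String> groups, LocalTime now) {
        return groups.contains("DayTeller")
                && !now.isBefore(LocalTime.of(8, 0))
                && now.isBefore(LocalTime.of(18, 0));
    }

    // Approver: a custom provider could inspect the method's Employee
    // parameter so the CFO never approves his own expense report.
    static boolean isApprover(String caller, String employeeParam) {
        return !caller.equals(employeeParam);
    }
}
```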

Credential Mapping
As discussed above, enterprises often want to tie each request to a back-end database, packaged application, or legacy system to the ultimate user. Therefore, when a J2EE object accesses a back-end system on behalf of a user, it has to supply the appropriate credentials to the system. The basic problem is mapping a J2EE Principal to back-end system credentials. The included service provider solves this problem for the most common case of username/password credentials. Each WLS instance has an embedded LDAP directory in which it can store the encrypted username/password pair for every valid combination of Principal and back-end system.
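
The lookup the credential mapper performs is conceptually simple. A toy version follows, with an in-memory map standing in for the server's embedded LDAP directory (where the real provider keeps the pairs encrypted); all names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// A toy credential mapper: (principal, target system) -> stored
// username/password for the back-end system.
public class CredentialMap {
    public static final class BackendCredential {
        final String username;
        final char[] password;
        BackendCredential(String username, char[] password) {
            this.username = username;
            this.password = password;
        }
    }

    private final Map<String, BackendCredential> store = new HashMap<>();

    /** Remember the back-end credential for one principal/system pair. */
    public void map(String principal, String system, BackendCredential cred) {
        store.put(principal + "@" + system, cred);
    }

    /** Returns null when no mapping exists for this combination. */
    public BackendCredential lookup(String principal, String system) {
        return store.get(principal + "@" + system);
    }
}
```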

Parametric Authorization
One of the classic application security problems is making an authorization decision based on the content of the request or the target object. Approval thresholds are a common case where you want to evaluate the value of these parameters. First, you would create a set of roles such as Manager, SeniorManager, and Director. Then, you would create a set of policies that authorizes approval requests for each role based on the value of the Amount parameter, such as $5,000 for a Manager, $10,000 for a SeniorManager, and $20,000 for a Director. Most middleware does not currently address this issue. A future version of the included authorization service provider will allow such decisions based on the content of the request (see Figure 4). In fact, the authorization SPI already supports the use of method call parameters in access decisions, so you could build a custom provider with these capabilities today.
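
The threshold policy above amounts to a small table that an administrator could edit at runtime. A hypothetical sketch, using the role names and amounts from the example:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// The approval-threshold policy from the text, expressed as data rather
// than code: each role is mapped to the largest amount it may approve.
public class ApprovalPolicy {
    private final Map<String, Double> limits = new LinkedHashMap<>();

    public ApprovalPolicy() {
        limits.put("Director", 20_000.0);
        limits.put("SeniorManager", 10_000.0);
        limits.put("Manager", 5_000.0);
    }

    /** Grant if any of the caller's roles covers the requested amount. */
    public boolean authorize(Set<String> callerRoles, double amount) {
        return limits.entrySet().stream()
                .anyMatch(e -> callerRoles.contains(e.getKey())
                            && amount <= e.getValue());
    }
}
```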

Authorization based on the content of the target is a little trickier. For example, you may want to inspect an Account object to get the value of AccountHolder before you decide to authorize a withdrawal. Unfortunately, in the general case, this type of visibility can break encapsulation and present a security vulnerability. If the security system can access any data in the system, it presents a tempting target for compromise. It will soon be possible to write custom code to perform this type of operation in select cases. A future version of the SPI will enable you to access context besides the method call parameters, such as the EJB primary key. You could then create a very specialized provider that examined the primary key of the Account object targeted by a Withdrawal request to determine the correct row in an account database. The provider would make its own call to this database, using special credentials to retrieve the account holder and compare it to the Principal. This solution might require some manual mapping of account holder values to Principal values, but it would work. The dotted line from the authorizer to the external database in Figure 4 illustrates such a solution.
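
A sketch of the specialized provider just described, assuming hypothetical table and column names: it uses the target's primary key to fetch the account holder over its own connection (opened with dedicated credentials) and compares the result to the caller.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Illustrative check: does the caller hold the account targeted by a
// withdrawal request? The provider uses its own connection, opened with
// special credentials, rather than touching the business objects.
public class AccountHolderCheck {
    private final Connection providerConnection;

    public AccountHolderCheck(Connection providerConnection) {
        this.providerConnection = providerConnection;
    }

    public boolean callerIsHolder(String callerPrincipal, String accountKey)
            throws SQLException {
        try (PreparedStatement ps = providerConnection.prepareStatement(
                "SELECT holder FROM accounts WHERE id = ?")) {
            ps.setString(1, accountKey);
            try (ResultSet rs = ps.executeQuery()) {
                // Deny if the account is missing or the holder differs.
                return rs.next()
                        && rs.getString("holder").equals(callerPrincipal);
            }
        }
    }
}
```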

Note that you would have to write almost exactly the same code to perform this check in the Account object itself. However, the service provider approach partitions the security logic and the business logic. Different people can maintain the two types of logic, and changing one does not introduce the risk of breaking the other. This flexibility is important if you consider the actual complexity of banking applications that handle minor accounts, joint accounts, and business accounts. Maintaining the security policies for such an application could be a full-time job in and of itself.

Conclusion
The WebLogic Server 7.0 Security Framework doesn't impose a rigid security model that hinders security integration with other system elements and forces the costly workaround of mixing security code with business logic. Instead, it adopts an open processing model so that application components can seamlessly cooperate with the rest of the enterprise security ecology. Moreover, its processing model delivers a clean abstraction of policy enforcement from business logic that lowers the cost of administering security policies and decreases the chance of security breaches.

Both application developers and security administrators benefit from the Security Framework. Developers no longer have to shoulder the responsibility and potential embarrassment of mixing application and security code. Administrators don't have to become experts in middleware paradigms to meet security requirements. When someone has to write special security code, he or she only has to do it once: everyone can use it and it's easy to maintain.

The key to the Security Framework's benefits lies in its open service provider model. Third-party security vendors can easily integrate their solutions with WebLogic, and enterprises can quickly create custom security modules. Most important, an open model means that enterprises do not have to wait for the middleware vendor to adopt new security technologies because there are plenty of hooks for future innovations.

About the Authors

Vadim Rosenberg is the product marketing manager for BEA WebLogic Server. Before joining BEA two years ago, Vadim spent 13 years in business software engineering, most recently at Compaq Computers (Tandem Division), developing a fault-tolerant and highly scalable J2EE framework.

As chief security architect for BEA Systems, Paul Patrick is responsible for the overall security product strategy at BEA. He plays a key role in driving the design and implementation of security functionality across all of BEA's products, and is the architect for BEA's new enterprise security infrastructure product, WebLogic Enterprise Security. Prior to becoming chief security architect, Paul was the lead architect of BEA's ObjectBroker CORBA ORB and co-architect of WebLogic Enterprise (now Tuxedo). He is also the author of several patent applications as well as industry publications and a book on CORBA.

