Core Security Patterns: Best Practices and Strategies for J2EE, Web Services, and Identity Management
Audit Interceptor
Problem
You want to intercept and audit requests and responses to and from the Business tier. Auditing is an essential part of any security design. Most enterprise applications have security-audit requirements. A security audit allows auditors to reconcile actions or events that have taken place in the application with the policies that govern those actions. In this manner, the audit log serves as a record of events for the application. This record can then be used for forensic purposes following a security breach. That record must be checked periodically to ensure that the actions that users have taken are in accordance with the actions allowed by their roles. Deviations must be identified from audit reports, and corrective actions must be taken to ensure those deviations do not happen in the future, either through code fixes or policy changes. The most important part of this procedure is recording the audit trail and making sure that it supports proper auditing of the relevant events and their associated user actions. These events and actions are often not completely understood or defined prior to construction of the application. Therefore, it is essential that an auditing framework be able to easily support additions or changes to the audit events.
Forces
Solution
Use an Audit Interceptor to centralize auditing functionality and define audit events declaratively, independent of the Business tier services. An Audit Interceptor intercepts Business tier requests and responses. It creates audit events based on the information in a request and response using declarative mechanisms defined externally to the application. By centralizing auditing functionality, the burden of implementing it is removed from the back-end business component developers. Therefore, there is reduced code replication and increased code reuse. A declarative approach to auditing is crucial to maintainability of the application. Seldom are all the auditing requirements correctly defined prior to implementation. Only through iterations of auditing reviews are all of the correct events captured and the extraneous events discarded. Additionally, auditing requirements often change as corporate and industry policies evolve. To keep up with these changes and avoid code maintainability problems, it is necessary to define audit events in a declarative manner that does not require recompilation or redeployment of the application. Since the Audit Interceptor is the centralized point for auditing, any required programmatic change is isolated to one area of the code, which increases code maintainability. Structure
Figure 10-1 depicts the class diagram for the Audit Interceptor pattern. The Client attempts to access the Target. The AuditInterceptor class intercepts the request and uses the AuditEventCatalog to determine if an audit event should be written to the AuditLog. Figure 10-1. Audit Interceptor class diagram
Figure 10-2 shows the sequence of events for the Audit Interceptor pattern. The Client attempts to access the Target, not knowing that the Audit Interceptor is an intermediary in the request. This approach allows clients to access services in the typical manner without introducing new APIs or interfaces specific to auditing that the client would otherwise not care about. Figure 10-2. Audit Interceptor sequence diagram
The diagram in Figure 10-2 does not reflect the implementation of how the request is intercepted, but simply illustrates that the AuditInterceptor receives the request and then forwards it to the Target. Participants and Responsibilities
Client. A client sends a request to the Target. AuditInterceptor. The AuditInterceptor intercepts the request. It encapsulates the details of auditing the request. EventCatalog. The EventCatalog maintains a mapping of requests to audit events. It hides the details of managing the life cycle of a catalog from an external source. AuditLog. AuditLog is responsible for writing audit events to a destination. This could be a database table, flat file, JMS queue, or any other persistent store. Target. The Target is any Business-tier component that would be accessed by a client. Typically, this is a business object or other component that sits behind a SessionFaçade, but not the SessionFaçade itself, because it would mostly be the entry point that invokes the AuditInterceptor. The Audit Interceptor pattern is illustrated in the following steps (see Figure 10-2):
Strategies
The Audit Interceptor pattern provides a flexible, unobtrusive approach to auditing Business tier events. It offers developers an easy-to-use approach to capturing audit events by decoupling auditing from the business flow. This allows business developers to disregard auditing and defer the onus to the security developers, who then only deal with auditing in a centralized location. Auditing can easily be retrofitted into an application using this pattern. By making use of an Event Catalog, the Audit Interceptor becomes decoupled from the actual audit events and therefore can incorporate changes in auditing requirements via a configuration file. The following is a strategy for implementing the Audit Interceptor.
Intercepting Session Façade Strategy
The Audit Interceptor requires that it be inserted into the message flow to intercept requests. The Intercepting Session Façade strategy designates the Session Façade as the point of interception for the Audit Interceptor. The Session Façade receives the request and then invokes the Audit Interceptor at the beginning of the request and again at the end of the request. Figure 10-3 depicts the class diagram for the Secure Service Façade Interceptor Strategy. Figure 10-3. Secure Service Façade Interceptor strategy class diagram
Using a Secure Service Façade Interceptor strategy, developers can audit at the entry and exit points to the Business tier. The SecureServiceFaçade is the appropriate point for audit interception, because its job is to forward to the Application Services and Business Objects. Typically, a request consists of several Business Objects or Application Services, though only one audit event is required for that request. For example, a credit card verification service may consist of one Secure Service Façade that invokes several Business Objects that make up that service, such as an expiration date check, a Luhn-10 check, and a card type check. It is unlikely that each individual check generates an audit event; it is likely that only the verification service itself generates the event. In Figure 10-3, the SecureServiceFaçade is the entry to the Business tier. It provides the remote interface that the Client uses to access the target component, such as another EJB or a Business Object. Instead of forwarding directly to the target component, the SecureServiceFaçade first invokes the AuditInterceptor. The AuditInterceptor then consults the EventCatalog to determine whether to generate an audit event and, if so, what audit event to generate. If an audit event is generated, the AuditLog is then used to persist the audit event. Afterward, the SecureServiceFaçade forwards the request as usual to the Target. When the invocation of the Target returns, the SecureServiceFaçade again calls the AuditInterceptor. This allows auditing of both start and end events. Exceptions raised from the invocation of the Target also cause the SecureServiceFaçade to invoke the AuditInterceptor. More often than not, you want to generate audit events for exceptions. Figure 10-4 depicts the Secure Service Façade Interceptor strategy sequence diagram. Figure 10-4. Secure Service Façade Interceptor strategy sequence diagram
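The following is a minimal sketch of how a session façade might drive the Audit Interceptor in this strategy. The class names, the properties-backed catalog, and the card verification façade are illustrative assumptions only, not the book's implementation; a real AuditLog would write to a JMS queue or database as shown later under Sample Code.

import java.util.Properties;

// Hypothetical catalog backed by a properties file; keys are "<operation>.<phase>",
// values are the audit event names to record. A missing key means "do not audit".
class EventCatalog {
    private final Properties events;
    EventCatalog(Properties events) { this.events = events; }
    String lookup(String operation, String phase) {
        return events.getProperty(operation + "." + phase);
    }
}

// Stand-in for the AuditLog participant; a real implementation would persist the event.
class AuditLog {
    void write(String eventName, Object detail) {
        System.out.println("AUDIT " + eventName + " " + detail);
    }
}

class AuditInterceptor {
    private final EventCatalog catalog;
    private final AuditLog auditLog = new AuditLog();
    AuditInterceptor(EventCatalog catalog) { this.catalog = catalog; }

    // Write an audit event only if the externally defined catalog maps this
    // operation and phase to an event.
    void audit(String operation, String phase, Object detail) {
        String eventName = catalog.lookup(operation, phase);
        if (eventName != null) {
            auditLog.write(eventName, detail);
        }
    }
}

// The façade invokes the interceptor at the start and end of the request, and
// again when the target invocation raises an exception.
class CardVerificationFacade {
    private final AuditInterceptor interceptor;
    CardVerificationFacade(AuditInterceptor interceptor) { this.interceptor = interceptor; }

    boolean verify(String requestId, String cardNumber) {
        interceptor.audit("CardVerification.verify", "BEGIN", requestId);
        try {
            // Stand-in for the Target (expiration, Luhn-10, and card type checks).
            boolean valid = cardNumber != null && cardNumber.length() >= 13;
            interceptor.audit("CardVerification.verify", "END", requestId);
            return valid;
        } catch (RuntimeException e) {
            interceptor.audit("CardVerification.verify", "ERROR", e);
            throw e;
        }
    }
}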
Consequences
Auditing is one of the key requirements for mission-critical applications. Auditing provides a trail of recorded events that can tie back to a Principal. The Audit Interceptor provides a mechanism to audit Business-tier events so that operations staff and security auditors can go back and examine the audit trail and look for all forms of application-layer attacks. The Audit Interceptor itself does not prevent an attack, but it does provide the ability to capture the events of the attack so that they can later be analyzed. Such an analysis can help prevent future attacks. The Audit Interceptor pattern has the following consequences for developers:
Sample Code
Example 10-1 is sample source code for the AuditRequestMessageBean class. This class is used after the AuditLog class places audit events onto the JMS queue; it is responsible for pulling audit messages off the JMS queue and writing them to a database using an AuditLogJdbcDAO class (not shown here). It is not reflected in the previous diagrams. Example 10-1. AuditRequestMessageBean.java: AuditLog
package com.csp.audit;

import javax.ejb.MessageDrivenContext;
import javax.jms.*;

/**
 * @ejb.bean transaction-type="Container"
 *           acknowledge-mode="Auto-acknowledge"
 *           destination-type="javax.jms.Queue"
 *           subscription-durability="NonDurable"
 *           name="AuditRequestMessageBean"
 *           display-name="Audit Request Message Bean"
 *           jndi-name="com.csp.audit.AuditRequestMessageBean"
 *
 * @ejb:transaction type="NotSupported"
 *
 * @message-driven
 *           destination-jndi-name="Audit_Request_Queue"
 *           connection-factory-jndi-name="Audit_JMS_Factory"
 */
public class AuditRequestMessageBean extends MessageDrivenBeanAdapter {

    private MessageDrivenContext context;

    public void onMessage(Message msg) throws Exception {
        ObjectMessage objMsg = (ObjectMessage) msg;
        try {
            String message = (String) objMsg.getObject();
            JdbcDAOBase dao = (JdbcDAOBase) JdbcDAOFactory.getJdbcDAO(
                "com.csp.audit.AuditLogJdbcDAO");
            // The DAO is responsible for actually writing the
            // audit message in the database using the JDBC API.
            dao.executeUpdate(message);
        } catch (Exception ex) {
            System.out.println("Audit event write failed: " + ex);
        }
    }

    // Other EJB methods for the MessageDrivenBean interface
    public void ejbCreate() {
        System.out.println("ejbCreate called");
    }

    public void ejbRemove() {
        System.out.println("ejbRemove called");
    }

    public void setMessageDrivenContext(MessageDrivenContext context) {
        System.out.println("setMessageDrivenContext called");
        this.context = context;
    }
}
Example 10-2 lists the sample source code for the AuditClient class, which is responsible for placing audit event messages on a JMS queue for persisting later. This class is used by the AuditLog class. Example 10-2. AuditClient.java: Helper class used by AuditInterceptor
package com.csp.audit;

import javax.naming.*;
import javax.jms.*;

public class AuditClient {

    private static String JMS_FACTORY_NAME = "Audit_JMS_Factory";
    private static String AUDIT_QUEUE_NAME = "Audit_Request_Queue";
    private static QueueSender queueSender = null;
    private static ObjectMessage objectMessage = null;

    // Initialize the JMS client:
    // 1. Look up the JMS connection factory
    // 2. Create a JMS connection
    // 3. Create a JMS session object
    // 4. Look up the JMS queue and create a JMS sender
    synchronized static void init() throws Exception {
        Context ctx = new InitialContext();
        QueueConnectionFactory cfactory =
            (QueueConnectionFactory) ctx.lookup(JMS_FACTORY_NAME);
        QueueConnection queueConnection =
            (QueueConnection) cfactory.createQueueConnection();
        QueueSession queueSession =
            (QueueSession) queueConnection.createQueueSession(
                false, javax.jms.Session.AUTO_ACKNOWLEDGE);
        Queue queue = (Queue) ctx.lookup(AUDIT_QUEUE_NAME);
        queueSender = queueSession.createSender(queue);
        objectMessage = queueSession.createObjectMessage();
    }

    // 5. Send the audit message to the queue
    public static void audit(String auditMessage) throws Exception {
        try {
            if (queueSender == null || objectMessage == null) {
                init();
            }
            objectMessage.setObject(auditMessage);
            queueSender.send(objectMessage);
        } catch (Exception ex) {
            System.out.println("Error sending audit event: " + ex);
            throw ex;
        }
    }
}
Security Factors and Risks
The Audit Interceptor pattern provides developers with a standard way of capturing and auditing events in a decoupled manner. Auditing is an essential part of any security architecture. Audit events enable administrators to capture key events that they can later use to reconstruct who did what and when in the system. This is useful in cases of a system crash or in tracking down an intruder if the system is compromised. Business Tier
Auditing. The Audit Interceptor pattern is responsible for providing a mechanism to capture audit events using an Interceptor approach. It is independent of where the audit information gets stored or how it is retrieved. Therefore, it is necessary to understand the general issues relating to auditing. Typically, audit logs (whether flat files or databases) should be stored separately from the applications, preferably on another machine or even off-site. This prevents intruders from covering their tracks by doctoring or erasing the audit logs. Audit logs should be writable but not updateable, depending on the implementation. Distributed Security
JMS. The Audit Interceptor pattern is responsible for auditing potentially hundreds or even thousands of events per second in high-throughput systems. In these cases, a scalable solution must be designed to accommodate the high volume of messages. Such a solution would involve dumping the messages onto a persistent JMS queue for asynchronous persistence. In this case, the JMS queue itself must be secured. This can be done by using a JMS product that supports message-level encryption or using some of the other strategies for securing JMS described in Chapter 5. Since the queue must be persistent, you will also need to find a product that supports a secure backing store. Reality Check
What is the performance cost? The Audit Interceptor adds additional method calls and checks to the request. Using a JMS queue to asynchronously write the events reduces the impact to the end user by allowing the request to complete before the data is actually persisted. The trade-off would be to insert auditing code only where it is required. But given that requirements will change and that many areas require auditing, the benefits of decoupling and reduced maintenance outweigh the slight performance degradation. Why not use Aspect Oriented Programming (AOP) techniques instead? AOP provides a new technique that reduces code complexity by consolidating code such as auditing, logging, and other functions that are spread across a variety of methods. It does this by inserting the (aspect) code into the methods either during the build process or through post-compile bytecode insertion. This makes it very useful when you require method-level auditing. The Audit Interceptor allows you to do service-level auditing. It can be as fine-grained as your Service Façade or other client allows, though usually not as fine-grained as AOP allows. The drawback to AOP is that it requires a third-party product and may introduce slight performance penalties, depending on the implementation. Is auditing essential? In most cases, the answer is yes. It's essential, not just for record-keeping, but for forensic analysis purposes as well. You may not be able to detect, and most likely cannot diagnose, an attack if you do not maintain an audit log of events. The audit log can be used to detect brute-force password attacks, denial of service attacks, and many others.
Related Patterns
Intercepting Filter [CJP2]. The Audit Interceptor pattern is similar to the Intercepting Filter but is not as complex and is better suited for asynchronous writes. Pipes and Filters [POSA1]. The Audit Interceptor pattern is closely related to the Pipes and Filters pattern. Message Interceptor Gateway. It is often necessary to audit on the Web Services tier as well as the Business tier. In such cases, the Message Interceptor Gateway should employ the Audit Interceptor pattern.
Container Managed Security
Problem
You need a simple, standard way to enforce authentication and authorization in your J2EE applications and don't want to reinvent the wheel or write home-grown security code. Using a Container Managed Security pattern, the container performs user authentication and authorization without requiring the developer to hard-wire security policies in the application code. It employs declarative security that requires the developer to only define roles at a desired level of granularity through deployment descriptors of the J2EE resources. The administrator or deployer then uses the container-provided tool to map the roles to the users and groups available in the realm at the time of deployment. A realm is a database of users and their profiles that includes at least usernames and passwords, but can also include role, group, and other pertinent attributes. The actual enforcement of authentication and authorization at runtime is handled by the container in which the application is deployed and is driven by the deployment descriptors. Most containers provide authentication mechanisms by configuring user realms for LDAP, RDBMS, UNIX, and Windows. Declarative security can be supplemented by programmatic security in the application code that uses J2EE APIs to determine user identity and role membership and thereby enforce enhanced security. In cases where an application chooses not to use a J2EE container, configurable implementation of security similar to Container Managed Security can still be designed by using JAAS-based authentication providers and JAAS APIs for programmatic security. Forces
Solution
Use Container Managed Security to define application-level roles at development time and perform user-role mappings at deployment time or thereafter. In a J2EE application, both ejb-jar.xml and web.xml deployment descriptors can define container-managed security. The J2EE security elements in the deployment descriptor declare only the logical roles as conceived by the developer. The application deployer maps these application domain logical roles to the deployment environment. Container Managed Security at the Web tier uses delayed authentication, prompting the user for login only when a protected resource is accessed for the first time. On this tier, it can offer security for the whole application or specific parts of the application that are identified and differentiated by URL patterns. At the Enterprise Java Beans tier, Container Managed Security can offer method-level, fine-grained security or object-level, coarse-grained security. Structure
Figure 10-5 depicts a generic class diagram for a Container Managed Security implementation. Note that the class diagram applies only to the container's implementation of Container Managed Security. The J2EE application developer would not use such a class structure, because it is already implemented and offered by the container for use by the developer. Figure 10-5. Container Managed Security class diagram
Participants and Responsibilities
Figure 10-6 depicts a sequence of operations involved in fulfilling a client request on a protected resource on the Web tier that uses an EJB component on the Business tier. Both tiers leverage Container Managed Security for authentication and access control. Figure 10-6. Sequence diagram leveraging Container Managed Security
Client. A client sends a request to access a protected resource to perform a specific task. Container. The container intercepts the request to acquire authentication credentials from the client and thereafter authenticates the client using the realm configured in the J2EE container for the application. Protected Resource. The security policy of the protected resource is declared via the Deployment Descriptor. Upon authentication, the container uses the Deployment Descriptor information to verify whether the client is authorized to access the protected resource using the method, such as GET and POST, specified in the client request. If authorized, the request is forwarded to the protected resource for fulfillment. Enterprise Java Bean. The protected resource in turn could be using a Business Tier Enterprise Java Bean that declares its own security policy via the ejb-jar.xml deployment descriptor. The security context of the client is propagated to the EJB container while making the EJB method invocation. The EJB container intercepts the requests to validate against the security policy much like it did in the Web tier. If authorized, the EJB method is executed, fulfilling the client request. The results of execution of the request are then returned to the client.
Strategies
Container Managed Security can be used in the Web and Business tiers of a J2EE application, depending on whether a Web container, an EJB container, or both are used in an application. It can also be supplemented by Bean Managed/Programmatic Security for fine-grained implementations. The various scenarios are described in this section.
Web Tier Container Managed Security Strategy
In this strategy, security constraints are specified in the web.xml of the client/user-facing Web application (that is, the Web tier of the J2EE application). If this is the only security strategy used in the application, an assumption is made that the back-end Business tier is not directly exposed to the client for direct integration. The web.xml declares the authentication method via its <auth-method> element to mandate either BASIC, DIGEST, FORM, or CLIENT-CERT authentication modes whenever authentication is required. It also declares authorization for protected resources that are identified and distinguished by their URL patterns. The actual enforcement of security is performed by the J2EE-compliant Web container in this strategy.
Service Tier Container Managed Security Strategy
In this strategy, the developer configures the EJB's deployment descriptors to incorporate security into the service backbone of the application. A bean-specific security role reference is defined in the EJB's ejb-jar.xml through a <security-role-ref> element. These bean-specific logical roles can be associated with a security role defined with a different name in the <role-name> elements of the application deployment descriptor via a <role-link> element. The <assembly-descriptor> section of ejb-jar.xml, which is the application-level deployment descriptor, lists all the logical application-level roles via <role-name> elements, and these roles are mapped to the actual principals in the realm at the time of deployment. Declarative Security for EJBs can either be at the bean level or at a more granular method level. Home and Remote interface methods can declare a <method-permission> element that includes one or more <role-name> elements that are allowed to access one or more EJB methods as identified by the <method> elements. One can also declare <exclude-list> elements to disable access to specific methods. To specify an explicit identity that an EJB should use when it invokes methods on other EJBs, the developer can use <use-caller-identity> or <run-as>/<role-name> elements under the <security-identity> element of the deployment descriptor.
Container Managed Security in Conjunction with Programmatic Security
For finer granularity or to meet requirements unfulfilled by Container Managed Security, a developer could choose to use programmatic security in bean code or Web tier code in conjunction with Container Managed Security. For example, in the EJB code, the caller principal as a java.security.Principal instance can be obtained from the EJBContext.getCallerPrincipal() method. The EJBContext.isCallerInRole(String) method can determine if a caller is in a role that is declared with a <security-role-ref> element. Similarly, on the Web tier, HttpServletRequest.getUserPrincipal() returns a java.security.Principal object containing the name of the current authenticated user, and HttpServletRequest.isUserInRole(String) returns a Boolean indicating whether the authenticated user is included in the specified logical role. These APIs are very limited in scope and are confined to determining a user's identity and role membership. This approach is useful where instance-level security is required, such as permitting only the admin role to perform account transfers exceeding a certain amount limit. A simple example is illustrated in Example 10-5 later in this chapter.
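As a complement to the EJB example referenced above, the servlet sketch below shows the Web tier analog: container-managed authentication supplemented by programmatic checks via getUserPrincipal() and isUserInRole(). The servlet name, the admin role, and the amount threshold are illustrative assumptions; the role would still have to be declared as a <security-role> (and optionally linked via <security-role-ref>) in web.xml.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class TransferServlet extends HttpServlet {

    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        double amount = Double.parseDouble(request.getParameter("amount"));

        // The container has already authenticated the caller (FORM, BASIC, etc.);
        // programmatic checks refine the declarative policy at the instance level.
        if (request.getUserPrincipal() == null) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        if (amount > 1000000 && !request.isUserInRole("admin")) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN,
                "Only administrators may transfer amounts exceeding 1000000");
            return;
        }
        // ... perform the transfer ...
    }
}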
Consequences
Container Managed Security offers flexible policy management at no additional cost to the organization. While it allows the developer to incorporate security in the application by way of simply defining roles in the deployment descriptor without writing any implementation code, it also supports programmatic security for fine-grained access control. The pattern offers the following other benefits to the developer:
Sample Code
Sample code for each strategy described earlier is illustrated in this section. The samples could be used in conjunction with each other to implement multiple flavors of Container Managed Security. Example 10-3 shows declarative security via a web.xml deployment descriptor. Example 10-3. web.xml deployment descriptor
<web-app>
  ...
  <security-constraint>
    <display-name>App Sec Constraints</display-name>
    <web-resource-collection>
      <web-resource-name>System Admin Resources</web-resource-name>
      <url-pattern>/sysadmin/*</url-pattern>
      <http-method>GET</http-method>
      <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
      <role-name>CORPORATEADMIN</role-name>
      <role-name>CLIENTADMIN</role-name>
    </auth-constraint>
    <user-data-constraint>
      <transport-guarantee>NONE</transport-guarantee>
    </user-data-constraint>
  </security-constraint>
  <!-- Declare login configuration here -->
  <login-config>
    <auth-method>FORM</auth-method>
    <form-login-config>
      <form-login-page>/login.jsp</form-login-page>
      <form-error-page>/login.jsp</form-error-page>
    </form-login-config>
  </login-config>
  <security-role>
    <description>Corporate Administrators</description>
    <role-name>CORPORATEADMIN</role-name>
  </security-role>
  <security-role>
    <description>Client Administrators</description>
    <role-name>CLIENTADMIN</role-name>
  </security-role>
  ...
</web-app>

Example 10-4 shows declarative security via an ejb-jar.xml deployment descriptor. Example 10-4. ejb-jar.xml deployment descriptor
...
<enterprise-beans>
  ...
  <session>
    <ejb-name>SecureServiceFacade</ejb-name>
    <ejb-class>SecureServiceFacade</ejb-class>
    ...
    <security-role-ref>
      <role-name>admin_role_referenced_by_bean</role-name>
      <role-link>admin_role_depicted_in_assembly_descriptor</role-link>
    </security-role-ref>
    ...
  </session>
</enterprise-beans>
...
<assembly-descriptor>
  <security-role>
    <description>Security Role for Administrators</description>
    <role-name>admin_role_depicted_in_assembly_descriptor</role-name>
  </security-role>
  ...
  <method-permission>
    <role-name>GUEST</role-name>
    <method>
      <ejb-name>PublicUtilities</ejb-name>
      <method-name>viewStatistics</method-name>
    </method>
  </method-permission>
  ...
  <exclude-list>
    <description>Unreleased Methods</description>
    <method>
      <ejb-name>PublicUtilities</ejb-name>
      <method-name>underConstruction</method-name>
    </method>
  </exclude-list>
  ...
</assembly-descriptor>
...
Example 10-5 shows programmatic or bean-managed security in the bean code. Example 10-5. EJB method employing programmatic security
//...
public void transfer(double amount, long fromAccount, long toAccount) {
    if (amount > 1000000 && !sessionContext.isCallerInRole("admin")) {
        throw new EJBException(
            sessionContext.getCallerPrincipal().getName() +
            " not allowed to transfer amounts exceeding " + 1000000);
    } else {
        // perform transfer
    }
}
//...
Security Factors and Risks
The extent of security offered by this pattern is limited to the security mechanisms offered by the container where the application code is deployed. It is also constrained by the limited subset of security aspects covered in the J2EE specification. As a result, the pattern elicits several risks:
Reality Check
Is Container Managed Security comprehensive at the Web tier? If the granularity of security enforcement is not matched by the granularity offered by the resource URL identifiers used by Container Managed Security to distinguish and differentiate resources, this pattern may not fulfill the requirements. This is particularly true in applications that use a single controller to front multiple resources. In such cases, the request URI would be the same for all resources, and individual resources would be identified only by way of some identifier in the query string (such as /myapp/controller?page=resource1). Container Managed Security by URL patterns is not applicable in such cases unless the container supports extensive use of regular expressions. Resource-level security in such scenarios requires additional work in the application. Is Container Managed Security required at the service tier? If all the back-end business services are inevitably fronted by a security gateway such as Secure Service Proxy or Secure Service Façade, having additional security enforcement via Container Managed Security on EJBs may not add much value and may incur unnecessary performance overhead. The choice must be carefully made in such cases.
Related Patterns
Authentication Enforcer, Authorization Enforcer. Authentication Enforcer enforces authentication on a request that has not yet been authenticated, much like what a Container Managed Security implementation can enforce on a Web tier resource of a J2EE application. Similarly, Authorization Enforcer behaves like the Business tier implementation of Container Managed Security. Secure Service Proxy. If the security architecture was not planned in the initial phases of application development, utilization of Container Managed Security at later stages may seem chaotic. In such cases, Secure Service Proxy or Secure Service Façade can be used to offer a secure gateway exposed to the client that enforces security in lieu of such enforcement at the business service level. Intercepting Web Agent. Rather than custom-building security via deployment descriptors and configuring the container as in Container Managed Security, one may delegate those tasks to a COTS product, with the application using the Intercepting Web Agent to preprocess the security context of the requests before they are forwarded and fulfilled by the security-unaware application services.
Dynamic Service Management
Problem
You need to dynamically instrument fine-grained components to manage and monitor your application with the necessary level of detail. Management is an important, if overlooked, aspect of security. There is the monitoring aspect that security administrators use to detect intrusions and other anomalies caused by malicious activity. Then there is the active management aspect that empowers administrators to proactively prevent intrusions by modifying objects or invoking operations before an attack can conclude. Consider a scenario where an intruder launches a denial-of-service (DoS) attack against an LDAP server that causes the application to time out and drop the connection to the LDAP server, thus preventing new users from logging in. In many implementations, the only remedy to this scenario would be to restart the application, causing logged-in users to be dropped and forced to log in again. Ideally, an administrator would want to be able to monitor the connection, detect that it is not responding, determine why, take steps to stop the DoS attack, and then invoke an operation on the object responsible for connecting to LDAP that forces it to reestablish the connection. Another scenario involves an authenticated user who is exhibiting malicious activity in the application. Security administrators would like the ability to detect such activity, log the user out, and disable that user's account. All of this requires a level of instrumentation not available in most applications today. Forces
Solution
Use a Dynamic Service Management pattern to enable fine-grained instrumentation of business objects at runtime on an as-needed basis using JMX. Structure
Figure 10-7 illustrates a Dynamic Service Management pattern class. Figure 10-7. Dynamic Service Management class diagram
Participants and Responsibilities
Figure 10-8 is a sequence diagram of the Dynamic Service Management pattern. Figure 10-8. Dynamic Service Management sequence diagram
Client. A Client requests registration of an object as an MBean from the ServiceManager. ServiceManager. The ServiceManager creates an instance of the MBeanServer and obtains an instance of an MBeanFactory. ServiceManager instantiates the Registry and then uses the MBeanFactory to create an MBean for a particular object passed in by the Client. It creates an ObjectName for that object and then registers it with the MBeanServer. MBeanServer. The MBeanServer exposes registered MBeans via adaptor-specific protocols. MBeanFactory. The MBeanFactory creates the Registry and uses it to find managed MBean definitions, which it loads from the DescriptorStore. Registry. The Registry loads and maintains a registry of MBean descriptors. It creates a RegistryMonitor to monitor changes to the DescriptorStore and reloads the definitions when the RegistryMonitor notifies it that the DescriptorStore has changed. RegistryMonitor. The RegistryMonitor is responsible for monitoring changes to the DescriptorStore. It registers listeners and notifies those listeners when it detects a change to the DescriptorStore. DescriptorStore. The DescriptorStore is an abstract representation of a persistent store of MBean descriptor definitions. Figure 10-8 shows the following sequence for registering an object as an MBean using the Dynamic Service Management pattern.
When its addListener method is called, the RegistryMonitor stores the listener and begins polling the DescriptorStore passed in as an argument to addListener.
Strategies
The Dynamic Service Management pattern provides dynamic instrumentation of business objects using JMX. JMX is a commonly used technology, present in all major application server products. There are several strategies for implementing this pattern, depending on what product you choose and what type of persistent store you require for your MBean Descriptors.
Model MBean Strategy
This strategy involves using JMX Model MBeans loaded from an external configuration source. Model MBeans allow developers to define the attributes and operations they want to expose on their classes through metadata. This metadata can then be externalized from the class definition entirely. With a bit of work, the metadata can be reloaded at runtime to allow for just-in-time creation of MBeans as needed. The Jakarta Commons subproject of the Apache Software Foundation is focused on building open source, reusable Java components. One of the components of the Commons project is the Commons-Modeler. Commons-Modeler provides a framework for creating JMX Model MBeans that allows developers to circumvent the creation of the metadata programmatically (as described in the specification) and instead define that data in an XML descriptor file. This greatly reduces the amount of source code needed to create the Model MBeans. The Model MBean Strategy utilizes the Commons-Modeler framework approach to simplify the task of creating MBeans and to leverage the file-based XML descriptor to implement dynamic reloading of MBeans based on changes to that descriptor file at runtime. This provides a mechanism that allows developers and operations staff to instrument components on an as-needed basis instead of incurring the run-time overhead of trying to instrument all of the components statically, most of which will never be used. Figure 10-9 depicts a class diagram of a Dynamic Service Management pattern implemented using the Model MBean strategy. Figure 10-9. Model MBean Strategy class diagram
Figure 10-10 is a sequence diagram of the Dynamic Service Management pattern implemented using the Model MBean strategy. In this strategy, the Commons-Modeler framework supplies the Registry implementation and provides an XML DTD for the MBeans descriptor file. The Registry does all the work of creating the MBean from the data in the descriptor file, which is the bulk of the work overall. A simple file monitor can be implemented to detect changes to the XML file, and the Registry can be told to reload from the changed file (a minimal sketch of such a monitor appears after the figure). Figure 10-10. Model MBean Strategy sequence diagram
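Example 10-7 later in this section imports FileMonitor and FileChangeListener helpers that are not shown. The sketch below is one possible shape for them, assuming a polling monitor keyed on the descriptor file's last-modified timestamp; the class and method names simply mirror the calls made in Example 10-7 and are otherwise assumptions.

import java.io.File;
import java.io.FileNotFoundException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Assumed listener contract: Example 10-7's MBeanFactory implements this and
// reloads the Commons-Modeler registry when the descriptor file changes.
interface FileChangeListener {
    void fileChanged(String fileName);
}

// Minimal polling monitor. It tracks the last-modified timestamp of each watched
// file and notifies the registered listener when the timestamp changes.
class FileMonitor implements Runnable {

    private static final FileMonitor instance = new FileMonitor();
    private final Map watched = new HashMap();      // fileName -> FileChangeListener
    private final Map timestamps = new HashMap();   // fileName -> Long(lastModified)

    private FileMonitor() {
        Thread poller = new Thread(this, "file-monitor");
        poller.setDaemon(true);
        poller.start();
    }

    public static FileMonitor getInstance() {
        return instance;
    }

    public synchronized void addFileChangeListener(FileChangeListener listener,
                                                   String fileName)
            throws FileNotFoundException {
        File file = new File(fileName);
        if (!file.exists()) {
            throw new FileNotFoundException(fileName);
        }
        watched.put(fileName, listener);
        timestamps.put(fileName, new Long(file.lastModified()));
    }

    public void run() {
        while (true) {
            synchronized (this) {
                for (Iterator it = watched.keySet().iterator(); it.hasNext();) {
                    String fileName = (String) it.next();
                    long current = new File(fileName).lastModified();
                    long previous = ((Long) timestamps.get(fileName)).longValue();
                    if (current != previous) {
                        timestamps.put(fileName, new Long(current));
                        ((FileChangeListener) watched.get(fileName)).fileChanged(fileName);
                    }
                }
            }
            try { Thread.sleep(10000); } catch (InterruptedException e) { return; }
        }
    }
}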
Consequences
The Dynamic Service Management pattern helps to identify and mitigate several types of threats. Because it enables operations staff to monitor business components, they can readily identify an attack in progress, whether it is a denial-of-service attack or somebody trying to guess passwords using a dictionary attack. It also enables staff to manage those components so that they can take reactive action during an attack, such as setting a filter on an incoming IP or locking a user account. By employing the Dynamic Service Management pattern, developers can benefit from the following:
Using a Dynamic Service Management pattern eliminates the need for upfront analysis and needless run-time overhead from monitoring or exposing components and attributes unnecessarily. Instead, components and attributes can be dynamically instrumented at runtime on an as-needed basis. When the need no longer exists, the instrumentation can be turned off, freeing up cycles and memory for business processing. Sample Code
Example 10-6 is a sample source listing of a Service Manager class. Example 10-6. MBeanManager.java: MBeanManager implementation
package com.csp.management;

import java.util.HashMap;

import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;
import javax.management.modelmbean.ModelMBean;

import com.csp.logging.SecureLogger;
import com.sun.jdmk.comm.HtmlAdaptorServer;

public class MBeanManager implements ManagedObject {

    private static SecureLogger log = (SecureLogger) SecureLogger.getLogger();
    private static MBeanManager instance = null;

    private HtmlAdaptorServer htmlServer = null;
    private MBeanFactory factory = null;
    private MBeanServer mbeanServer = null;
    private HashMap objNames = new HashMap();
    // JMX domain used when building ObjectNames for registered services.
    private String mbeanDomain = "CSPM";

    // Returns the singleton instance of MBeanManager.
    public static MBeanManager getInstance() {
        if (instance == null) {
            instance = new MBeanManager();
            try {
                instance.registerObject(instance, "CSPM");
            } catch (Exception e) {
                log.error("Unable to register mbean.", e);
            }
        }
        return instance;
    }

    // Create and initialize the MBeanManager.
    private MBeanManager() {
        init();
    }

    // Initializes the adaptors and servers.
    private void init() {
        try {
            // In a deployed application this would typically be the
            // application server's MBeanServer.
            mbeanServer = MBeanServerFactory.createMBeanServer();

            int port = 4545;
            htmlServer = new HtmlAdaptorServer();
            htmlServer.setPort(port);

            // Create the ObjectName to register the HTML adaptor with.
            String htmlServiceName = "Adaptor:name=html,port=" + port;
            ObjectName htmlObjectName = new ObjectName(htmlServiceName);
            objNames.put(htmlServiceName, htmlObjectName);
            mbeanServer.registerMBean(htmlServer, htmlObjectName);
            htmlServer.start();

            // Load the MBean factory.
            factory = MBeanFactory.getInstance();
        } catch (Exception e) {
            log.error("Unable to initialize MBeanManager.", e);
        }
    }

    private MBeanServer getMBeanServer() {
        return mbeanServer;
    }

    // Register a service object as an MBean.
    public void registerObject(Object service, String serviceName)
            throws Exception {
        ModelMBean mbean = factory.createMBean(service, serviceName);
        if (mbean == null) {
            return;
        }
        // Create the ObjectName.
        ObjectName objName = factory.createObjectName(
            mbeanDomain, service, serviceName);
        if (objName == null) {
            log.error("Could not create object name.");
            return;
        }
        // Register the MBean with the server.
        getMBeanServer().registerMBean(mbean, objName);
        // Add the ObjectName to the list of names.
        objNames.put(serviceName, objName);
    }

    // Method to unregister an object as an MBean.
    public void unregisterObject(String serviceName) {
        try {
            if (serviceName != null && objNames != null) {
                // Remove the ObjectName from the list.
                ObjectName oName = (ObjectName) objNames.remove(serviceName);
                if (oName != null) {
                    // Unregister the bean from the server.
                    getMBeanServer().unregisterMBean(oName);
                }
            }
        } catch (Exception e) {
            log.error("Unable to unregister service.", e);
        }
    }

    // Method to reload the MBean descriptors from the registry.
    public void reloadMBeans() throws Exception {
        // Unload previously registered MBeans.
        unloadMBeans();
        // Tell the factory to reload new MBeans.
        factory.loadRegistry();
    }

    // Unload the MBeans.
    public void unloadMBeans() throws Exception {
        // Copy the registered service names, then unregister each MBean.
        Object[] svcNames = objNames.keySet().toArray();
        for (int i = 0; i < svcNames.length; i++) {
            unregisterObject((String) svcNames[i]);
        }
    }
}

Example 10-7 is a sample source code listing of an MBeanFactory class. Example 10-7. MBeanFactory.java: MBean factory implementation
package com.csp.management;

import java.io.FileNotFoundException;
import java.io.InputStream;

import javax.management.ObjectName;
import javax.management.modelmbean.ModelMBean;

import org.apache.commons.modeler.ManagedBean;
import org.apache.commons.modeler.Registry;

import com.csp.logging.SecureLogger;
import com.csp.management.FileChangeListener;
import com.csp.management.FileMonitor;

/**
 * This class is responsible for creating, loading and
 * reloading the MBean descriptor registry.
 */
public class MBeanFactory implements FileChangeListener {

    private static SecureLogger log =
        (SecureLogger) SecureLogger.getLogger();
    private static MBeanFactory instance = null;

    private Registry registry = null;
    private String registryFileName = "mbeans-descriptors.xml";
    private FileMonitor fileMonitor = null;

    // Private constructor
    private MBeanFactory() {
        init();
    }

    // Initialization method for loading the MBean descriptor
    // registry and adding a file listener to detect changes.
    private void init() {
        loadRegistry();
        try {
            fileMonitor = FileMonitor.getInstance();
            fileMonitor.addFileChangeListener(this, registryFileName);
        } catch (FileNotFoundException fnfe) {
            log.error("Unable to add listener.");
        }
    }

    // Load the MBean descriptor registry.
    public void loadRegistry() {
        InputStream inputStream = null;
        try {
            inputStream = ClassLoader.getSystemClassLoader()
                .getResourceAsStream(registryFileName);
            // Get the registry.
            registry = Registry.getRegistry(null, instance);
            // Load the descriptors from the input stream.
            registry.loadDescriptors(inputStream);
        } catch (Exception e) {
            log.error("Unable to load file.", e);
        }
    }

    // Returns an MBeanFactory instance.
    public static MBeanFactory getInstance() throws Exception {
        if (instance == null) {
            instance = new MBeanFactory();
        }
        return instance;
    }

    // Create a ModelMBean given a service and name.
    public ModelMBean createMBean(Object service, String serviceName)
            throws Exception {
        ModelMBean mbean = null;
        // Create an MBean from the Registry.
        ManagedBean managed = registry.findManagedBean(serviceName);
        if (managed != null) {
            mbean = managed.createMBean(service);
        }
        return mbean;
    }

    // Create an ObjectName for a service.
    public ObjectName createObjectName(String domain, Object service,
            String serviceName) throws Exception {
        ObjectName oName = null;
        if (service instanceof ManagedObject) {
            ManagedObject svcImpl = (ManagedObject) service;
            // Set the JMX name to the input service name.
            svcImpl.setJMXName(serviceName);
            // Create the ObjectName.
            oName = new ObjectName(domain + ":Name=" + svcImpl.getJMXName() +
                ",Type=" + svcImpl.getJMXType());
        } else {
            oName = new ObjectName(domain + ":service=" + serviceName +
                ",className=" + service.getClass().getName());
        }
        return oName;
    }

    public String getRegistryFileName() {
        return this.registryFileName;
    }

    public void setRegistryFileName(String fileName) {
        this.registryFileName = fileName;
    }

    public void fileChanged(String fileName) {
        try {
            loadRegistry();
        } catch (Exception e) {
            log.error("Failed to reload registry.");
        }
    }
}
Security Factors and Risks
The following are some of the security factors and risks related to the Dynamic Service Management pattern:
Reality Check
What types of things need to be managed and monitored? What should be managed and monitored is very subjective and depends on the circumstances. The Dynamic Service Management pattern provides a means to transparently attach management and monitoring capabilities to business objects without prior consideration or elaboration of those objects. But to be effective, developers must at least understand what the approach provides and design their business objects so that the JMX framework can take advantage of them. If they do not expose member variables and instead pass only complex objects as parameters to their method calls, they will be unable to make use of the framework in many cases.
Related Patterns
Secure Pipe. The Dynamic Service Management pattern makes use of the Secure Pipe pattern to provide confidentiality when communicating with the application via the management protocol.
Obfuscated Transfer Object
Problem
You need a way to protect critical data as it is passed within an application and between tiers. Transfer Objects [CJP2] provide a mechanism for transporting data elements across tiers and components. This is an efficient means of moving large sets of data without invoking multiple getter or setter methods on remote objects across tiers. You probably use strategies such as Updateable Transfer Objects or Multiple Transfer Objects when implementing the Transfer Object pattern. In many cases you then find yourself passing Transfer Objects across multiple components. This leads to a security concern. By passing data in Transfer Objects across components, you unnecessarily expose data to components that may not require or should not have access to it. Consider an application that stores credit card information in a user's profile. The application passes the profile using a profile transfer object. This profile transfer object passes through many business and presentation tier components on its way to being stored in the database. Many of those components are not privy to the sensitive nature of the credit card data in the profile transfer object. They may print all of the data in the transfer object for debugging purposes or write it to an audit log that is not supposed to expose sensitive data. You do not want to modify all of those components just to handle that data differently. Instead, you want the transfer object itself to take responsibility for protecting that data.
Forces
Solution
Use an Obfuscated Transfer Object to protect access to data passed within and between tiers. The Obfuscated Transfer Object allows developers to define data elements within it that are to be protected. The means of protection can vary between applications or implementations depending on the business requirements. The Obfuscated Transfer Object provides a way to prevent either purposeful or inadvertent unauthorized access to its data. The producers and consumers of the data can agree upon the sensitive data elements that need to be protected and on their means of access. The Obfuscated Transfer Object will then take the responsibility of protecting that data from any intervening components that it is passed to on its way between producer and consumer. Credit card and other sensitive information can be protected from being accidentally dumped to a log file or audit trail, or worse, such as being captured and stored for malicious purposes. Structure
Figure 10-11 is the class diagram for the Obfuscated Transfer Object. Figure 10-11. Obfuscated Transfer Object class diagram
Participants and Responsibilities
Figure 10-12 shows the sequence diagram of the Obfuscated Transfer Object pattern. Figure 10-12. Obfuscated Transfer Object sequence diagram
Client. The Client wants to send and receive data from a Target component via an intermediary Component. The Client can be any component in any tier. Component. The Component is any application component in the message flow that is not the intended target of the Client. The Component can be any component in any tier that acts as an intermediary in the message flow between the Client and the Target. Target. The Target is any object that is the intended recipient of the Client's request. It is responsible for setting the data that needs to be obfuscated. Obfuscated Transfer Object. The ObfuscatedTransferObject is responsible for protecting access to data within it, as necessary. Typically, the intermediary Component is not trusted or should not have access to any or all data in the Obfuscated Transfer Object. It then becomes the Obfuscated Transfer Object's responsibility to protect the data. The means of protection is dependent upon the business requirements and the level of trust of the intermediary components. Figure 10-12 takes us through a typical sequence of events for an application employing an ObfuscatedTransferObject.
Strategies
A variety of strategies can be used to implement the Obfuscated Transfer Object. A simple strategy is just to mask various data elements to prevent them from inadvertently being logged or displayed in an audit event. A more elaborate strategy is to encrypt the protected data within the Obfuscated Transfer Object. This entails a more complex implementation, but offers a higher degree of protection.
Masked List Strategy
Sensitive information like credit card numbers, Social Security numbers, and other personal information should not be stored in the system for security purposes. Since intermediary components within a request workflow may be unaware of the existence of such data in the Transfer Object, or do not know what data not to log, a simple Masked List Strategy prevents inadvertent storage or display of this data. Figure 10-13 shows a Masked List Strategy class diagram. Figure 10-14 shows a Masked List Strategy Sequence Diagram. Figure 10-13. Masked List Strategy class diagram
Figure 10-14. Masked List Strategy sequence diagram
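Below is a minimal sketch of what a masked transfer object might look like under this strategy. The class name, the masked field names (creditCardNumber, ssn), and the masking behavior in toString() are illustrative assumptions, not the book's implementation; the prose that follows walks through the same mechanics.

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

// Names appearing in the masked list are kept in a separate map that is never
// exposed by toString(), so intermediary components that dump the transfer
// object to a log or audit trail do not see the sensitive values.
public class MaskedTransferObject implements java.io.Serializable {

    private final Set maskedNames = new HashSet(
        Arrays.asList(new String[] { "creditCardNumber", "ssn" }));
    private final Map openData = new HashMap();
    private final Map maskedData = new HashMap();

    public void setData(String name, Object value) {
        if (maskedNames.contains(name)) {
            maskedData.put(name, value);
        } else {
            openData.put(name, value);
        }
    }

    public Object getData(String name) {
        return maskedNames.contains(name) ? maskedData.get(name) : openData.get(name);
    }

    // Only the non-sensitive map is rendered; masked entries show as placeholders.
    public String toString() {
        StringBuffer sb = new StringBuffer(openData.toString());
        for (Iterator it = maskedNames.iterator(); it.hasNext();) {
            sb.append(' ').append(it.next()).append("=****");
        }
        return sb.toString();
    }
}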
In this strategy, the client sets data as name-value (NV) pairs in the Obfuscated Transfer Object. Internally, the Obfuscated Transfer Object maintains two maps, one for holding NV pairs that should be obfuscated and another for NV pairs that do not require obfuscation. In addition to the two maps, the Obfuscated Transfer Object contains a list of NV pair names that should be protected. Data passed in with names corresponding to names in the masked list is placed in the map for the obfuscated data. This map is then protected. In the sequence above, when the Component logs the ObfuscatedTransferObject, the data in the obfuscated map is not logged, and thus it is protected.
Encryption Strategy
Using the Encryption Strategy for the Obfuscated Transfer Object provides the highest level of protection for the data elements protected within. The data elements are stored in a Data Map, and then the Data Map as a whole is encrypted using a symmetric key. To retrieve the Data Map and the elements within it, the consumer must supply a symmetric key identical to the one used by the producer to seal the Data Map. The Sun Java 2 Standard Edition (J2SE) runtime provides a SealedObject class that allows developers to easily encrypt objects by passing a serializable object and a Cipher object to the constructor. The serialized object can then be retrieved by either passing in an identical Cipher or a Key object that can be used to recreate the Cipher. This encapsulates all of the underlying work associated with encrypting and decrypting objects. The only issue remaining is the management of symmetric keys within the application. This poses a significant challenge because it requires the producers and consumers to share symmetric keys without providing any intermediary components with access to those keys. This may be simple or overwhelmingly complex depending on the architecture of the application and the structure of the component trust model. Use this strategy with caution, because the key-management issues may be harder to overcome than re-architecting the application to eliminate the need for the pattern. Figure 10-15 is a sequence diagram illustrating the Encryption Strategy for an Obfuscated Transfer Object. Figure 10-15. Encryption Strategy sequence diagram
The sequence diagram shown in Figure 10-15 illustrates implementation of the Obfuscated Transfer Object using an Encryption Strategy. The client creates the Obfuscated Transfer Object and adds the data as name value pairs. The client then seals the data by passing in an encryption key. The intermediate components in the request flow are unable to access the data. The target object, upon receiving the Obfuscated Transfer Object, first unseals it by passing in the corresponding decryption key. It can then access the data as before, through the name-value pair keys. Consequences
The Obfuscated Transfer Object protects against sniffing attacks and threats arising from log-file capture within the Business tier by ensuring that sensitive data is not passed or logged in the clear. By employing the Obfuscated Transfer Object pattern, the following consequences will apply:
Sample Code
Example 10-8 shows a sample listing of an Obfuscated Transfer Object implemented using an Encryption Strategy. Example 10-8. Sample obfuscated TO using encryption implementation
package com.csp.business;

import java.io.Serializable;
import java.util.HashMap;

import javax.crypto.Cipher;
import javax.crypto.SealedObject;

public class GenericTO implements Serializable {

    private static final long serialVersionUID = -5831612260903682186L;

    private HashMap map;
    private SealedObject sealedMap;

    /**
     * Default constructor that initializes the object.
     */
    public GenericTO() {
        map = new HashMap();
    }

    // Encrypt the data map and discard the clear-text reference so that
    // intermediary components cannot access the data.
    public void seal(Cipher cipher) throws Exception {
        sealedMap = new SealedObject(map, cipher);
        map = null;
    }

    // Decrypt the data map using a cipher built from the same symmetric key
    // that was used to seal it.
    public void unseal(Cipher cipher) throws Exception {
        map = (HashMap) sealedMap.getObject(cipher);
    }

    public Object getData(Object key) throws Exception {
        return map.get(key);
    }

    public void setData(Object key, Object data) throws Exception {
        map.put(key, data);
    }
}
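A possible usage sketch for the GenericTO above follows. The AES algorithm choice and the in-process key generation are illustrative assumptions; as noted earlier, distributing the symmetric key between producer and consumer without exposing it to intermediaries is the hard part and is not addressed here.

package com.csp.business;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class GenericTOExample {

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        // Producer side: populate the transfer object and seal it.
        GenericTO to = new GenericTO();
        to.setData("creditCardNumber", "4111111111111111");
        Cipher encrypting = Cipher.getInstance("AES");
        encrypting.init(Cipher.ENCRYPT_MODE, key);
        to.seal(encrypting);
        // Intermediary components now cannot read the card number from 'to'.

        // Consumer side: unseal with a cipher built from the same key, then read.
        Cipher decrypting = Cipher.getInstance("AES");
        decrypting.init(Cipher.DECRYPT_MODE, key);
        to.unseal(decrypting);
        System.out.println("Card: " + to.getData("creditCardNumber"));
    }
}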
Security Factors and Risks
Confidentiality. The Obfuscated Transfer Object pattern provides a means to ensure varying degrees of confidentiality for data passed within the application, such as between components, across asynchronous message boundaries, and between tiers. This is necessary for applications that have sensitive data that should not be accidentally logged or displayed, or where data is passed through intermediary components that are not trusted and should not have access to that data. Reality Check
Should we use a Masked List Strategy or an Encryption Strategy? It depends on your requirements and whether you trust your intermediary components not to access the data in the masked list. Using a Masked List Strategy, a component could access the data and dump it to a log if it wished, circumventing the intention of the masked list. By using an Encryption Strategy, the intermediary components cannot gain access to the sensitive data unless they obtain the Cipher used to protect that data. There is significant processing overhead to encrypting and decrypting the data, so you should only use this strategy when necessary and only for the data elements that require it. Related Patterns
Transfer Object [CJP2]. The Obfuscated Transfer Object is similar to, and may be considered a strategy of, the Core J2EE Patterns Transfer Object pattern. It provides the additional capability of protecting data elements within it from unauthorized access. Data Transfer HashMap (Middleware). The Obfuscated Transfer Object is similar to the Data Transfer HashMap pattern from the Middleware Company. Like the Data Transfer HashMap, it employs a strategy that makes use of an underlying HashMap for storing and retrieving data elements. In the case of the Obfuscated Transfer Object, that underlying map may be encrypted using a SealedObject or may be divided into two maps, one containing data that can be dumped to a log or audit table and another containing sensitive data that should not be accessed.
Policy Delegate
Problem
You want to shield clients from discovery and invocation details of security services and to control client interactions by intercepting and administering policy on client requests. You need an abstraction between the enterprise security infrastructure and its clients that hides the intricacies of finding and invoking security services. It is desirable to abstract common framework-specific code related to invocation of those services, thus reducing the coupling between clients and the security framework. As a result of the loose coupling, clients and services can then be easily replaced with alternate technologies, when appropriate, to increase the lifespan of the application.
Forces
Solution
Use Policy Delegate to mediate requests between clients and security services, and to reduce the dependency of client code on implementation specifics of the service framework. Policy Delegate is a coordinator of Business-tier security services that is akin to the Secure Base Action in the Web tier. The clients use the delegate to locate and mediate back-end security services. A delegate could in turn use a Secure Service Façade that offers a coarse-grained aggregate interface to fine-grained security services or business components and entities. This abstraction also offers a looser coupling and cleaner contract between clients and the secure services, reducing the magnitude of change required in the clients when the implementations of the security services change over time. To use a delegate, the client need not be aware of the actual location of the service. A Policy Delegate uses a Service Locator to locate distributed security services. The client is unaware of the underlying implementation technology and the communication protocol of the service, which could be RMI, Web services, DCOM, CORBA, or another service. While coordinating and mediating requests and responses between clients and the security framework, a delegate could also perform pertinent message translation to accommodate disparate message formats and protocols both expected and supported by the clients and individual services. In the same vein, the delegate could choose to perform error translation to encapsulate service-level security exceptions as user-friendly, application-level error messages. The Policy Delegate can be a stateless delegate or a stateful delegate. A stateful delegate, identified and looked up by an appropriate ID, can cache the security context, service references, and transient state between multiple invocations by the client. This caching at the server side optimizes and reduces the number of object creations, service lookups, and security computations. The security context could be cached as a Secure Session Object. The clients can retrieve a security delegate using a Factory pattern [GoF]. This is particularly useful when the application exposes multiple Policy Delegates rather than one aggregate delegate that mediates between multiple services. Structure
Figure 10-16 shows a typical Policy Delegate class diagram. The Target in the diagram represents any security service, a Secure Service Façade, or a security-unaware Session Façade. The delegate uses a SecureSessionObject to maintain the transient state associated with a client session. Figure 10-16. Policy Delegate pattern class diagram
A single PolicyDelegate could maintain a one-to-many relationship with multiple targets, or multiple Policy Delegates could each map to exactly one of the several possible targets. In the latter case, the application could make use of a Factory that returns an appropriate delegate, depending on the requested service. Participants and Responsibilities
Figure 10-17 depicts a scenario where a client uses a Policy Delegate retrieved from a Factory to invoke a security service on a SecureServiceFaçade, located using a Service Locator. Figure 10-17. Policy Delegate sequence diagram
Client. A Client retrieves a PolicyDelegate through DelegateFactory to invoke a specific service. PolicyDelegate. The PolicyDelegate uses ServiceLocator [CJP2] to locate the service. SecureSessionObject. The PolicyDelegate maintains a SecureSessionObject to store transient client security context and service references between consecutive invocations by the same client. SecureServiceFaçade, Service2. The back-end service could be implemented using any technology, such as a SecureServiceFaçade session bean or as a Web service depicted as Service2. Strategies
The Policy Delegate pattern can be implemented in a variety of flavors, depending on the scope of the services it mediates and on the approach to state management (stateless versus stateful); a rough sketch of the stateful flavor follows.
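As a rough illustration of the stateful flavor, a delegate factory might cache delegates under a client-supplied ID so that the security context and service references looked up in the delegate's constructor (see Example 10-10) survive across invocations. The factory class, its method names, and the ID scheme below are assumptions made for this sketch; they are not part of the pattern's published sample code.

package com.csp.business;

import java.util.HashMap;
import java.util.Map;
import com.csp.*;
import com.csp.interfaces.*;

// Illustrative sketch only: caches stateful Policy Delegates by a
// client-supplied ID so that the security context and service references
// held by each delegate are reused across invocations by the same client.
public class StatefulPolicyDelegateFactory {

    // Delegates keyed by a client/session identifier
    private static final Map delegates = new HashMap();

    // Return the cached delegate for this client, creating one on first use
    public static synchronized PolicyDelegateInterface getPolicyDelegate(
            String delegateId, RequestContext rc) {
        PolicyDelegate delegate = (PolicyDelegate) delegates.get(delegateId);
        if (delegate == null) {
            // The constructor performs the service lookups once (Example 10-10)
            delegate = new PolicyDelegate(rc);
            delegates.put(delegateId, delegate);
        }
        return delegate;
    }

    // Discard the delegate and invalidate its session when the client is done
    public static synchronized void release(String delegateId) {
        PolicyDelegate delegate = (PolicyDelegate) delegates.remove(delegateId);
        if (delegate != null) {
            delegate.destroy();
        }
    }
}

A stateless flavor would simply construct a new delegate per request and omit the cache.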
Consequences
The Policy Delegate pattern reduces the coupling between the security framework and the client of security services offered by the framework and thereby reduces the number of complex security interfaces exposed to the client. This has the overall effect of reducing complexity and therefore reducing potential software bugs that could lead to a variety of attacks. It also allows you to cache and manage the life cycle of a client's security context at the server and use it across multiple invocations by the same client, which enhances performance. The Policy Delegate pattern benefits developers in the following ways:
Sample Code
Example 10-9 lists the interface of the Policy Delegate that serves as the contract between security framework and clients. Example 10-9. Policy Delegate interface
package com.csp.business;

import com.csp.*;
import com.csp.interfaces.*;

public interface PolicyDelegateInterface {

    // Alternative 1: Declare service-specific methods
    public boolean authenticate(GenericTO request)
        throws AuthenticationFailureException;
    public boolean authorize(GenericTO request)
        throws AuthorizationFailureException;
    public SAMLMessage assertRequest(GenericTO request)
        throws ApplicationException;
    // ...

    // Alternative 2: Declare a generic method (execute) with a
    // generic transfer object as the input and output
    public GenericTO execute(String svcName, GenericTO input)
        throws ApplicationException;
}
Example 10-10 lists the implementation code of the Policy Delegate. The implementation code is not relevant to the client, which only relies on the Delegate Interface and a reference to the delegate. Example 10-10. Sample Policy Delegate implementation code
package com.csp.business;

import com.csp.*;
import com.csp.interfaces.*;

public class PolicyDelegate implements PolicyDelegateInterface {

    private AuthenticationEnforcer authenticationEnforcer;
    private AuthorizationEnforcer authorizationEnforcer;
    private SecureSessionManager secureSessionManager;
    private SecureLogger secureLogger;
    private SecureServiceFacade secureServiceFacade;
    private RequestContext rc;

    // Manage the life cycle of the delegate
    public PolicyDelegate(RequestContext rc) {
        this.rc = rc;
        init(rc);
    }

    private void init(RequestContext rc) {
        // Look up and keep references to security
        // services/session facades/session beans...
        try {
            authenticationEnforcer = ServiceLocator.lookup(
                AuthenticationEnforcer.SERVICE_NAME);
            authorizationEnforcer = ServiceLocator.lookup(
                AuthorizationEnforcer.SERVICE_NAME);
            secureSessionManager = ServiceLocator.lookup(
                SecureSessionManager.SERVICE_NAME);
            secureLogger = ServiceLocator.lookup(
                SecureLogger.SERVICE_NAME);
            //...
            secureServiceFacade = ServiceLocator.lookup(
                SecureServiceFacade.SERVICE_NAME);
        } catch (Exception e) {
            throw new ApplicationException(e);
        }
    }

    public void destroy() {
        secureSessionManager.invalidate(rc);
    }

    // Implement delegate methods

    // Alternative 1: Declare service-specific methods
    public boolean authenticate(GenericTO request)
        throws AuthenticationFailureException {
        try {
            // Return the results of authentication
            return authenticationEnforcer.authenticate(request);
        } catch (SecurityFrameworkException e) {
            throw new AuthenticationFailureException(e);
        }
    }

    // Authorize the request
    public boolean authorize(GenericTO request)
        throws AuthorizationFailureException {
        try {
            // Check that the request is authenticated first
            if (!request.authenticated()) {
                if (!authenticationEnforcer.authenticate(request))
                    throw new AuthorizationFailureException(
                        new AuthenticationFailureException());
            }
            // Return the result of authorization
            return authorizationEnforcer.authorize(request);
        } catch (SecurityFrameworkException e) {
            throw new AuthorizationFailureException(e);
        }
    }

    // Alternative 2: Implement a generic method with a generic
    // transfer object as input and output
    public GenericTO execute(String serviceName, GenericTO input)
        throws ApplicationException {
        // Validate the request as per security policy
        if (!input.authenticated()) {
            if (!authenticationEnforcer.authenticate(input))
                throw new AuthenticationFailureException();
        }
        if (!input.authorized()) {
            if (!authorizationEnforcer.authorize(input))
                throw new AuthorizationFailureException();
        }
        // Process the request
        GenericService service = ServiceLocator.lookup(serviceName);
        return service.execute(input);
    }
}
Example 10-11 lists a sample client code that uses a Policy Delegate. Example 10-11. Client code using Policy Delegate
// ...
try {
    // Get a dynamic proxy from the factory.
    // This proxy will contain a populated GTO and overlay
    // the appropriate interface on top of it.
    PolicyDelegateInterface request =
        PolicyDelegateFactory.getPolicyDelegate(
            PolicyDelegateFactory.AUTHENTICATION_ENFORCER);

    // This proxy is specific to authentication and has
    // a method to retrieve the underlying GTO instance.

    // Retrieve the BusinessDelegate using the standard
    // technique outlined in the Core J2EE Patterns book [CJP2].
    // BusinessDelegate delegate = ...

    GenericTO results = delegate.execute(request.getGenericTO());

    // ... Do something with the results. You can use the
    // dynamic proxy to apply the appropriate interface.
} catch (ApplicationException e) {
    e.printStackTrace();
}
// ...
Security Factors and Risks
The Policy Delegate simply acts as a central controller of security invocations. It is intended to be a helper class that provides seamless access to security functionality exposed in the Business tier. It eliminates the risks usually associated with business developers attempting to implement or integrate with security services. The fewer security touchpoints, the less potential for security holes. The Policy Delegate addresses the following security factors:
Reality Check
Is Policy Delegate redundant? If the Web tier is already integrated with back-end security services in an implementation-specific manner without using Policy Delegate but using a Secure Service Façade, adding a Policy Delegate at that stage may not offer any benefit and will only cause rework. A thoughtful, careful design could avoid such scenarios. Is the Policy Delegate interface too complex? If Policy Delegate usage becomes too complicated and requires too much knowledge of the underlying security framework by clients, it defeats the purpose of abstracting the complex logic in a simple helper as described in this pattern. Related Patterns
Secure Base Action. Secure Base Action on the Web tier has a similar objective as the Policy Delegate on the Business tier. A Secure Base Action could in turn use a Policy Delegate to access security services. Business Delegate [CJP2]. A Policy Delegate is similar to the Business Delegate pattern, but leverages other patterns discussed in this book related to security. Policy Delegate additionally makes use of a SecureSessionObject to protect the confidentiality and integrity of a client session. Secure Service Façade
Problem
You need a secure gateway mandating and governing security on client requests, exposing a uniform, coarse-grained service interface over fine-grained, loosely coupled business services that mediates client requests to the appropriate services. Having more access points in the Business tier leads to more opportunities for security holes. Every access point is then required to enforce all security requirements, from authentication and authorization to data validation and auditing. This becomes exacerbated in applications that have existing Business-tier services that are not secured. Retrofitting security to security-unaware services is often difficult. Clients should not need to be aware of the disparities between service implementations in terms of security requirements, message specifications, and other service-specific attributes. Offering a unified interface that couples the otherwise decoupled business services makes the design more comprehensible to clients and reduces the work involved in fulfilling client requests. Forces
Solution
Use a Secure Service Façade to mediate and centralize complex interactions between business components under a secure session. Use the Secure Service Façade to integrate fine-grained, security-unaware service implementations and offer a unified, security-enabled interface to clients. The Secure Service Façade acts as a gateway where client requests are securely validated and routed to the appropriate service implementations, often maintaining and mediating the security and workflow context between interactive client requests and between the fine-grained services that fulfill portions of those requests. Structure
Figure 10-18 illustrates a Secure Service Façade class diagram. The Façade is the endpoint exposed to the client and could be implemented as a stateful session bean or a servlet endpoint. It uses the security framework (implemented using other patterns) to perform security-related tasks applicable to the client request. The framework may request the client to present further credentials if the requested service mandates doing so and if those credentials were not found in the initial client request. The Façade then uses the Dynamic Service Management pattern to locate the appropriate service-provider implementations. The request is then forwarded to the individual services either sequentially, in parallel, or in any complex relationship order as specified in the request description. Figure 10-18. Secure Service Façade class diagram
If the client request represents an aggregation of fine-grained services, the return messages from previous sequential service invocations can be aggregated and delivered to the subsequent service to achieve a sequential workflow-like implementation. If those fine-grained services are independent of each other, then they can be invoked in parallel and the results can be aggregated before delivering to the client, thus achieving parallel processing of the client request. Participants and Responsibilities
Figure 10-19 depicts a sequence diagram for a typical Secure Service Façade implementation that corresponds to the structure description in the preceding section. Figure 10-19. Secure Service Façade sequence diagram
The fine-grained business services are not directly exposed to the client. The services themselves maintain loose coupling between each other and the façade. The façade takes the responsibility of unifying the individual services in the context of the client request. The service façade contains no business logic itself and therefore requires no protection. Strategies
The Secure Service Façade manages the complex relationships between disparate participating business services, plugs in security to request fulfillment, and provides a high-level, coarse-grained abstraction to the client. The nature of such tasks opens up multiple choices for implementation flavors, two of which are briefly discussed now.
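As a rough illustration of the sequential aggregation described under Structure, a façade might chain the participating services so that each one receives the client message merged with the results of the services invoked before it. The externally supplied list of service names and the merge() helper on SecureMessage are assumptions made for this sketch; they are not part of the pattern's published sample code.

package com.csp.business;

import java.util.Iterator;
import java.util.List;
import com.csp.*;

// Illustrative sketch of sequential aggregation inside a facade.
// The merge() helper and the list of participating service names
// are assumed for this example.
public class SequentialServiceAggregator {

    // Invoke each participating service in the order given by the request
    // description, feeding the aggregated message into the next service.
    public SecureMessage execute(SecureMessage clientMessage, List serviceNames)
            throws ServiceLocatorException {
        SecureMessage current = clientMessage;
        for (Iterator i = serviceNames.iterator(); i.hasNext();) {
            SecureService svc = ServiceLocator.getService((String) i.next());
            SecureMessage result = svc.execute(current);
            current = current.merge(result); // assumed aggregation helper
        }
        return current;
    }
}

Independent services could instead be invoked in parallel, with their results merged once before being returned to the client.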
Consequences
The Secure Service Façade pattern protects the Business-tier services and business objects from attacks that circumvent the Web tier or Web Services tier. The Web tier and the Web Services tier are responsible for upfront authentication and access control. An attacker who has penetrated the network perimeter could circumvent these tiers and access the Business tier directly. The Secure Service Façade is responsible for protecting the Business tier by enforcing the security mechanisms established by the Web and Web Services tiers. By employing the Secure Service Façade pattern, developers and clients can benefit in the following ways:
Sample Code
The sample code that follows illustrates a Stateful Session Bean approach to a Secure Service Façade implementation. Example 10-12 and Example 10-13 show the home and remote interfaces to the Façade Session bean. Example 10-12. SecureServiceFaçade home interface
package com.csp.business;

import java.rmi.*;
import javax.ejb.*;
import com.csp.*;

public interface SecureServiceFacadeHome extends EJBHome {
    public SecureServiceFacade create(SecurityContext ctx)
        throws RemoteException, CreateException;
}
Example 10-13. SecureServiceFaçade remote interface
package com.csp.business;

import java.rmi.*;
import javax.ejb.*;
import com.csp.*;

public interface SecureServiceFacade extends EJBObject {
    public TransferObject execute(SecureMessage msg)
        throws RemoteException;
}
Example 10-14 lists the sample bean implementation code. The important item to notice is that the SecurityContext object is maintained as a state variable in the stateful session bean in order to facilitate propagation of the context to any individual service that expects it. The SecureMessage encapsulates the aggregate service description of the client request and is used to locate the appropriate services and optionally establish a dynamic sequence of participating service executions. Example 10-14. SecureServiceFaçadeSessionBean.java sample implementation
package com.csp.business;

import java.rmi.*;
import javax.ejb.*;
import javax.naming.*;
import java.util.*;
import com.csp.*;

public class SecureServiceFacadeSessionBean implements SessionBean {

    private SessionContext context;
    private SecurityContext securityContext;

    // Remote references for the individual services
    // can be encapsulated as facade attributes
    // or made part of the message
    private Map services = new HashMap();

    // Create the facade and initialize the security context
    public void ejbCreate(SecurityContext ctx)
        throws CreateException, ResourceException {
        securityContext = ctx;
    }

    // Locate the requested service and cache it for
    // prospective future use and stickiness
    public SecureMessage execute(SecureMessage msg)
        throws SecureServiceFacadeException, ServiceLocatorException {
        SecureService svc = ServiceLocator.getService(
            msg.getRequestedServiceName());
        services.put(msg.getRequestedServiceName(), svc);
        return svc.execute(msg);
    }

    // ...
    // Other lifecycle methods
    public void ejbActivate() { /* ... */ }
    public void ejbPassivate() { /* ... */ }
    public void setSessionContext(SessionContext ctx) { /* ... */ }
    public void ejbRemove() { /* ... */ }
}
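For completeness, a client (or a Policy Delegate) might obtain and drive the façade in the usual EJB 2.x way, as sketched below. The JNDI name and the way the securityContext and secureMessage instances are obtained are assumptions for illustration only; the fragment also requires the javax.naming and javax.rmi.PortableRemoteObject imports.

// ...
try {
    // Locate the facade's home interface (JNDI name is assumed)
    Context jndiContext = new InitialContext();
    Object ref = jndiContext.lookup("ejb/SecureServiceFacade");
    SecureServiceFacadeHome home = (SecureServiceFacadeHome)
        PortableRemoteObject.narrow(ref, SecureServiceFacadeHome.class);

    // Create the stateful facade with the caller's security context so the
    // facade can propagate it to each participating service
    SecureServiceFacade facade = home.create(securityContext);

    // Submit the aggregate request description and collect the result
    TransferObject result = facade.execute(secureMessage);
    // ... Do something with the result
} catch (Exception e) {
    // Translate to an application-level error in real code
    e.printStackTrace();
}
// ...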
Security Factors and Risks
The Secure Service Façade pattern is susceptible to code bloating if too much interaction logic is incorporated. However, this can be minimized by appropriate design of the façade using other common design patterns. As the gateway into the Business tier, the Secure Service Façade serves to limit the touchpoints between the Web and Web Services tiers and the Business tier. This means that there are fewer entry points that need to be secured and therefore fewer opportunities for security holes to be introduced. The following security factors are addressed by the Secure Service Façade:
Reality Check
Does the Secure Service Façade need to incorporate security? The Secure Service Façade uses the existing security framework while aggregating fine-grained services. However, security context validation may not be required if other means of authentication and access control are properly enforced on the client request before it reaches the façade. Does the Secure Service Façade need to perform service aggregation? If the client requests will mostly be fulfilled by a single, fine-grained service component, there is no necessity for aggregation. In such cases, a Secure Service Proxy may well suit the purpose. Does the Secure Service Façade reduce security code duplication? If security context validation is performed by each service component, the validation at the façade level may turn out to be redundant and wasteful. A planned design could reduce such duplication. Related Patterns
Secure Service Proxy. Secure Service Proxy, implemented as a Web service endpoint, acts as a mediator between the clients and the J2EE components with a one-on-one mapping between proxy methods and remote methods of J2EE components. Secure Service Façade, on the other hand, maintains complex relationships between participating services and exposes an aggregated uniform interface to the client. Session Façade. The Secure Service Façade and the generic Session Façade [CJP2] offer the same benefits with respect to business object integration and aggregation. However, Secure Service Façade does not require that the participating components are EJBs. The participating services could use any framework and the façade would incorporate the appropriate invocation logic to use those services. In addition, Secure Service Façade emphasizes the security context life cycle management and its propagation to appropriate services. Secure Session Object
Problem
You need to facilitate distributed access and seamless propagation of security context and client sessions in a platform-independent and location-independent manner. A multi-user, multi-application distributed system needs a mechanism to allow global accessibility to the security context associated with a client session and secure transmission of the context among the distributed applications, each with its own address space. While many choices are possible, the developer must design a standardized structure and interface to the security context. The security context propagation is essential within the application because it is the sole means of allowing different components within the application to verify that authentication and access control have been properly enforced. Otherwise, each component would need to enforce security and the user would wind up authenticating on each request. The Secure Session Object pattern serves this purpose. Forces
Solution
Use a Secure Session Object to encapsulate authentication and authorization credentials in an abstraction that can be passed across boundaries. You often need to persist session data within a single session or between user sessions that span an indeterminate period of time. In a typical Web application, you could use cookies and URL rewriting to achieve session persistence, but doing so has security, performance, and network-utilization implications. Applications that store sensitive data in the session are often compelled to protect such data and prevent potential misuse by malicious code (a Trojan horse) or a user (a hacker). Malicious code could use reflection to retrieve private members of an object. Hackers could sniff the serialized session object while in transit and misuse the data. Developers could unknowingly use debug statements to print sensitive data in log files. A Secure Session Object can ensure that sensitive information is not inadvertently exposed. The Secure Session Object provides a means of encapsulating authentication and authorization information such as credentials, roles, and privileges, and using them for secure transport. This allows components across tiers or asynchronous messaging systems to verify that the originator of the request is authenticated and authorized for that particular service. It is intended to serve as an abstract mechanism that encapsulates vendor-specific implementations. A Secure Session Object is an ideal way to share and transmit global security information associated with a client. Structure
Figure 10-20 is a class diagram of the Secure Session Object. Figure 10-20. Secure Session Object class diagram
Participants and Responsibilities
Figure 10-21 contains the sequence diagram and illustrates the interactions of the Secure Session Object. Figure 10-21. Secure Session Object sequence diagram
Client. The Client sends a request to a Target resource. The Client receives a SecureSessionObject and stores it for submitting in subsequent requests. SecureSessionObject. SecureSessionObject stores information regarding the client and its session, which can be validated by consumers to establish authentication and authorization of that client. Target. The Target creates a SecureSessionObject. It then verifies the SecureSessionObject passed in on subsequent requests. The Secure Session Object is implemented through the following steps:
Strategies
You can use a number of strategies to implement Secure Session Object. The first strategy is using a Transfer Object Member, which allows you to use Transfer Objects to exchange data across tiers. The second strategy is using an Interceptor, which is applicable when transferring data across remote endpoints, such as between tiers. Transfer Object Member Strategy
In the Transfer Object Member strategy, the Secure Session Object is passed as a member of the more generic Transfer Object. This allows the target component to validate the Secure Session Object wherever data is passed using a Transfer Object. Because the Secure Session Object is contained within the Transfer Object, the existing interfaces do not require an additional parameter for the Secure Session Object. This keeps the interfaces from becoming brittle or inflexible and allows easy integration of the Secure Session Object into existing applications with established interfaces. Figure 10-22 is a class diagram of the Secure Session Object pattern implemented using a Transfer Object Member strategy. Figure 10-22. Transfer Object Member Strategy class diagram
Interceptor Strategy
In the Interceptor Strategy, which is mostly applicable to a distributed client-server model, the client and the server use appropriate interceptors to negotiate and instantiate a centrally managed Secure Session Object. This session object glues the client and server interceptors to enforce session security on the client-server communication. The client and the server interceptors perform the initial handshake to agree upon the security mechanisms for the session object. The client authenticates to the server and retrieves a reference to the session object via a client interceptor. The reference could be as simple as a token or a remote object reference. After the client has authenticated itself, the server interceptor uses a session object factory to instantiate the Secure Session Object and returns the reference of the object to the client. The client and the server interceptors then exchange messages marshalled and unmarshalled according to the security context maintained in the Secure Session Object. Figure 10-23 is a class diagram of the Secure Session Object pattern implemented using an Interceptor Strategy. Figure 10-23. Interceptor Strategy class diagram
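As a minimal sketch of the server-side half of this strategy, the server interceptor might hand the client an opaque token once the handshake and authentication complete, and resolve that token back to the centrally managed Secure Session Object on later requests. The class, its method names, and the token scheme below are assumptions for illustration; the pattern does not prescribe them.

package com.csp.business;

import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

// Illustrative server-side interceptor for the Interceptor strategy.
// It issues an opaque token after authentication and maps that token back
// to the centrally managed SecureSessionObject on subsequent requests.
public class ServerSessionInterceptor {

    private final Map sessions = new HashMap();   // token -> SecureSessionObject
    private final SecureRandom random = new SecureRandom();

    // Called once the handshake is complete and the client has authenticated.
    // The returned token is the reference handed back via the client interceptor.
    public synchronized String establishSession(SecureSessionObject session) {
        String token = Long.toHexString(random.nextLong())
                     + Long.toHexString(random.nextLong());
        sessions.put(token, session);
        return token;
    }

    // Later messages are marshalled and unmarshalled according to the security
    // context kept in the session resolved from the presented token.
    public synchronized SecureSessionObject resolve(String token) {
        return (SecureSessionObject) sessions.get(token);
    }

    // Drop the session when the client-server conversation ends
    public synchronized void invalidate(String token) {
        sessions.remove(token);
    }
}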
This strategy offers the ability to update or replace the security implementations in the interceptors independently of one another. Moreover, any change in the Secure Session Object implementation causes changes only in the interceptors instead of the whole application. Consequences
The Secure Session Object prevents a form of session hijacking that could occur if the session context is not propagated and therefore not checked in the Business tier. This can happen when the Web tier is deployed separately from the Business tier, and it applies to message passing over JMS as well. Without a Secure Session Object, impersonation attacks can take place from inside the perimeter. By employing the Secure Session Object pattern, developers benefit in the following ways:
Sample Code
Example 10-15 shows sample code for Transfer Object Member strategy. Example 10-15. SecureSessionTransferObject.java: Transfer Object member strategy implementation
package com.csp.business;

public class SecureSessionTransferObject implements java.io.Serializable {

    private SecureSessionObject secureSessionObject;

    public SecureSessionObject getSecureTransferObject() {
        return secureSessionObject;
    }

    public void setSecureTransferObject(
            SecureSessionObject secureSessionObject) {
        this.secureSessionObject = secureSessionObject;
    }

    // Additional TransferObject methods...
}
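The SecureSessionObject carried by the transfer object above is not listed here; the following is a minimal sketch of what such an object might hold, based on the credentials, roles, and privileges described in the Solution. The field and method names are illustrative assumptions, not the book's own class.

package com.csp.business;

import java.io.Serializable;
import java.util.Collections;
import java.util.Set;

// Minimal, illustrative sketch of a SecureSessionObject. It encapsulates
// the authenticated principal, its roles, and an opaque session token so
// that downstream components can verify authentication and authorization
// without re-prompting the user.
public class SecureSessionObject implements Serializable {

    private final String principal;     // authenticated subject
    private final Set roles;            // roles/privileges granted to the subject
    private final String sessionToken;  // token issued at authentication time
    private final long expiresAt;       // millisecond timestamp after which it is stale

    public SecureSessionObject(String principal, Set roles,
                               String sessionToken, long expiresAt) {
        this.principal = principal;
        this.roles = Collections.unmodifiableSet(roles);
        this.sessionToken = sessionToken;
        this.expiresAt = expiresAt;
    }

    public String getPrincipal() { return principal; }
    public Set getRoles() { return roles; }
    public String getSessionToken() { return sessionToken; }

    // Consumers check validity before honoring the request
    public boolean isValid() {
        return sessionToken != null && System.currentTimeMillis() < expiresAt;
    }
}

In practice such an object would typically also be protected in transit, for example by sealing or signing it, along the lines discussed for the Obfuscated Transfer Object.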
A developer can implement a SecureSessionTransferObject whenever they want to pass credentials within a Transfer Object. Security Factors and Risks
Reality Check
Is Secure Session Object too bloated? Abstracting all session information into a single composite object may increase the object size, and frequent serialization and de-serialization of such an object degrades performance. In such cases, one could revisit the object design or serialization routines to alleviate the performance degradation. Concurrency implications. Many components associated with the client session could be competing to update and read session data, which could lead to concurrency issues such as long wait times or deadlocks. A careful analysis of the possible scenarios is recommended. Related Patterns
Transfer Object [CJP2]. A Secure Session Object can be passed as a member of a Transfer Object, as described in the Transfer Object Member strategy, so that existing Transfer Object-based interfaces can carry the client's security context without change. Policy Delegate. A Policy Delegate maintains a Secure Session Object to cache the client's security context, service references, and transient state between consecutive invocations by the same client.