A Closer Look at the Frontend Tier

The frontend of our demonstration multi-tier cluster architecture consists of two managed servers that host the JSPs, servlets, and HTML pages. If your application doesn't use any EJB or RMI objects, the web/presentation tier cluster should be sufficient for your "pure" web application. To this end, the server instances that live in this frontend cluster host a servlet/JSP container and an HTTP server. Because this tier typically sits in front of a DBMS server, you may need to configure one or more data sources that provide JDBC access to the cluster. In this section, you will learn how to set up this frontend tier, and how to deploy web and presentation tier components to such a cluster.

Our web/presentation tier consists of two servers in a WebLogic cluster. External to this cluster is the Administration Server, which hosts the Administration Console and the domain configuration. So, our application setup requires a WebLogic domain with an Administration Server and two Managed Servers that belong to a cluster. We'll expand on the load balancer later.

Figure 14-2 depicts the structure of the web/presentation tier within a domain. Here, we have zoomed in on the configuration of the frontend cluster of the multi-tier application setup illustrated earlier in Figure 14-1.

Figure 14-2. A simple web/presentation tier

As the figure shows, each server instance has its own name, listen address, and port, just as you would expect in a nonclustered scenario. The Managed Servers belong to a WebLogic cluster named FECluster. Here, 237.0.0.1:7777 is the address/port combination that each cluster member uses for multicast broadcasts. Finally, the cluster itself has an address, whose value identifies all of the server instances that participate in the cluster. An important constraint of this cluster configuration, and of any other WebLogic cluster you may design, is that the Administration Server must be accessible to all the Managed Servers in your domain.

14.2.1 Working with Clusters

Before examining the frontend tier, let's cover some aspects of configuring a WebLogic cluster. The most important aspect is how to address the cluster. The cluster address needs to be set up both for internal use (e.g., EJB handles need the cluster address when load balancing method calls), and external use (e.g., a Java client needs the cluster address to access the various resources available to the object tier cluster).

14.2.1.1 Addressing a cluster

A cluster is composed of a number of individual WebLogic instances. In previous chapters, if you needed to address a server, you simply supplied the hostname and port of the server instance. This still will work if the server instance is part of a cluster, but it usually is not what you want. For example, if a servlet needs to make a call to an EJB that is hosted on a separate cluster, the servlet should not need to know which server it should contact. Clearly, a more generalized addressing scheme is required that will allow you to point to a cluster (all servers within the cluster, really), and not just to a single server. We will call this address the cluster address. WebLogic lets you specify the cluster address either as a list of IP addresses, or as a DNS name that is mapped externally to the addresses of all servers that belong to the WebLogic cluster.

You can specify the cluster address using a comma-separated list of hostnames and port numbers. Both of these are valid examples of the cluster address for a WebLogic cluster:

10.0.10.10:7001,10.0.10.10:7002,10.0.10.11:7001
oreilly1:7001,oreilly2:7001

Remember that any address/port combination in the cluster address must be unique. This means that if you need to run a WebLogic cluster on a single machine, you must ensure all server instances participating in the cluster are assigned a unique port number:

dnsofmachine:7001,dnsofmachine:7002,dnsofmachine:7003

In the case of the web tier cluster illustrated in Figure 14-2, we've configured two servers to run on the same machine. Each server is assigned a different listen port, so the cluster address is specified using the following comma-separated list of addresses:

10.0.10.10:7001,10.0.10.10:7003

The cluster address generally is used when a client needs to access some resource bound in the cluster-wide JNDI tree. In our case, an external client would create the initial JNDI context to this cluster using the following code:

Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
env.put(Context.PROVIDER_URL, "t3://10.0.10.10:7001,10.0.10.10:7003");
InitialContext ctx = new InitialContext(env);

This highlights the drawback of specifying the cluster address as a comma-separated list of host addresses: it forces you to hardcode the addresses and port numbers of the servers that belong to the cluster. That is, your code is no longer immune to changes in the configuration of your cluster. For instance, if you add or remove physical hardware, or alter the IP addresses assigned to your NICs, those same changes need to be applied first to your WebLogic configuration and then to your source code. Clearly, this is not pretty.

For this reason, we recommend that in production environments you configure the cluster address using a DNS name. Your DNS server would then be configured to map the DNS name to all of the servers that belong to the WebLogic cluster. By specifying a DNS name for the cluster address, you establish a naming abstraction that shields your source code and cluster configuration from any changes in the hardware configuration.

There are disadvantages to using DNS names. They do not capture port information, so if the DNS name assigned to the WebLogic cluster is mapped to multiple IP addresses, you must assign the same listen port to all Managed Servers in the cluster. For instance, if you configure a WebLogic cluster with two Managed Servers running on separate machines (say, 10.0.10.10 and 10.0.10.11) on port 7001, you could modify the DNS server for the participating machines to map a DNS name (say, mycluster) to both of these addresses. You then can set the cluster address for the WebLogic cluster to mycluster:7001. If a client needs to interact with the cluster-wide JNDI tree, it would set up the initial JNDI context as follows:

Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
env.put(Context.PROVIDER_URL, "t3://mycluster:7001");
InitialContext ctx = new InitialContext(env);

WebLogic will expand the DNS name to the list of IP addresses mapped to that name (in this case, 10.0.10.10 and 10.0.10.11) and then proceed as before.
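This expansion relies on ordinary DNS resolution, which you can observe with the JDK itself. The following sketch resolves a name to all of its mapped addresses; mycluster is hypothetical, so the example defaults to localhost unless you pass a name that your DNS server maps to the cluster members:

```java
import java.net.InetAddress;

public class ExpandClusterAddress {
    public static void main(String[] args) throws Exception {
        // Pass a DNS name your DNS server maps to the cluster members
        // (e.g., the hypothetical "mycluster"); defaults to localhost.
        String dnsName = args.length > 0 ? args[0] : "localhost";
        // A name mapped to several machines yields several addresses here.
        for (InetAddress addr : InetAddress.getAllByName(dnsName)) {
            System.out.println(addr.getHostAddress());
        }
    }
}
```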

You also can specify a cluster address from the Administration Console. Select a cluster from the left pane and then navigate to the Configuration/General tab. The cluster address that you specify is used internally when constructing EJB home handle references.

This entry does not define the cluster address for the WebLogic cluster. It simply informs WebLogic of the cluster address that it should embed in its EJB home handles returned to clients so that the EJB handles can locate the cluster when their homes are reconstructed.

 

14.2.1.2 Creating a cluster

The easiest way to create a WebLogic cluster is to use the Domain Configuration Wizard. This lets you create a WebLogic domain composed of an Administration Server and a cluster of Managed Servers. Start up the Domain Configuration Wizard as described in Chapter 13. In WebLogic 8.1, you need to choose the Custom setup. Remember to indicate that your WebLogic configuration should be distributed across a cluster. In WebLogic 7.0, after choosing the WLS Domain template and a name for the WebLogic domain, select the Admin Server with Clustered Managed Servers option. On the Configure Clustered Servers panel, you then can create an entry for each server instance that will participate in the cluster.

For each Managed Server, you must specify the name of the server, and the listen address and listen port on which the server will be available. Each member of the cluster also uses a particular address and port to send its multicast broadcasts. This address/port combination applies to all of the servers in the cluster. The Domain Configuration Wizard supplies default values for the multicast address and port: 237.0.0.1:7777. All of these settings are independent of the Administration Server, which needs its own name, listen address, and port number.

If you've already created a WebLogic domain, you can simply use the Administration Console to configure a new WebLogic cluster, or to modify the configuration settings of an existing WebLogic cluster. In order to create a new cluster, you need to first configure one or more Managed Servers that will participate in the new cluster. After this, choose the Clusters node from the left pane of the Administration Console and then select the Configure a New Cluster link. Here you should supply values for various configuration settings for the new WebLogic cluster, such as its name and cluster address. Then select the Configuration/Servers tab to add or remove Managed Servers that should belong to the cluster.
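Whichever route you take, the resulting cluster definition is recorded in the domain's config.xml file. The following is a sketch of what the relevant entries might look like for the frontend cluster in Figure 14-2; the server names, addresses, and ports are the ones assumed throughout this example, and the exact attributes may vary between WebLogic releases:

```xml
<!-- Illustrative excerpt from config.xml: the cluster and its two members -->
<Cluster Name="FECluster"
         ClusterAddress="10.0.10.10:7001,10.0.10.10:7003"
         MulticastAddress="237.0.0.1"
         MulticastPort="7777"/>
<Server Name="ServerA" ListenAddress="10.0.10.10" ListenPort="7001"
        Cluster="FECluster"/>
<Server Name="ServerB" ListenAddress="10.0.10.10" ListenPort="7003"
        Cluster="FECluster"/>
```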

14.2.1.3 Starting and monitoring the domain

Starting a WebLogic domain that is clustered is no different from starting one that isn't. You first need to start the Administration Server, followed by the Managed Servers that belong to the domain. Each Managed Server notifies the Administration Server when it's up and running, and automatically enlists itself with the cluster. The cluster is alive and healthy when all of its servers are up and running. You can start the Administration Server by running the startWebLogic command. You then can start each Managed Server by using the startManagedWebLogic command:

startManagedWebLogic ServerA http://10.0.10.10:8001/
startManagedWebLogic ServerB http://10.0.10.10:8001/

Here, 10.0.10.10:8001 refers to the listen address and port of the Administration Server, and ServerA refers to the name of a Managed Server. Once a Managed Server completes its boot sequence, you may notice an additional log message at the bottom of the console log, similar to the following:

<13-Jan-2003 23:01:31 GMT> <000102>

This indicates that the Managed Server has located the cluster and has joined it successfully.

You also can use the Administration Console to monitor the status of the cluster. Select the cluster from under the Clusters node in the left pane. Then, if you select the Monitoring tab from the right pane, you can view the number of servers configured for the cluster, and the number of servers currently participating in the cluster. For the example setup, once both servers have completed their boot sequence successfully, you should expect a value of 2 for both settings.

14.2.1.4 Deploying to a cluster

In a multi-tier application setup in which each tier is mapped physically to a WebLogic cluster, you need to deploy only those components that must live on that tier. So, in our sample web tier cluster, only the web applications should be deployed. If the application is available as a WAR file, you can deploy and target the WAR to the cluster. If the web application is part of an EAR file, only the web applications ought to be deployed to the web tier cluster. Remember to also deploy any shared classes that may be referenced from the servlets and JSP pages within the web application. You can achieve this via the Administration Console itself. Simply choose the Targets tab for a selected web application, and then choose the name of the cluster that will host the web application.
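Besides the Administration Console, you can target a component at the cluster with WebLogic's weblogic.Deployer command-line tool. The following invocation is a sketch: the username, password, and WAR name are placeholders, and the admin URL is the one used in this example setup:

```shell
# Deploy the web application to every member of FECluster.
# "weblogic"/"weblogic" and mywebapp.war are hypothetical placeholders.
java weblogic.Deployer -adminurl http://10.0.10.10:8001 \
    -user weblogic -password weblogic \
    -deploy -name mywebapp -source mywebapp.war -targets FECluster
```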

As explained in Chapter 12, you need to be particularly vigilant when deploying to a cluster. The last thing that you want is a partially deployed application in which the web application has failed to deploy on some servers in the cluster. You always should try to deploy components when all of the members of the cluster are available. In addition, you must not change, add, or remove members of the cluster during deployment.

Furthermore, in a WebLogic 7.0 domain where no service packs have been applied, you cannot deploy to a partial cluster. This means that you will not be able to deploy a component to a cluster successfully if the Administration Console detects an unavailable cluster member. If you must deploy under these circumstances, you can either bring up all the Managed Servers in the cluster, or remove those members that you are unable to start. WebLogic 7.0 SP 1 and WebLogic 8.1 lift this restriction, and allow you to deploy to a partial cluster. In this case, the application component (web application, EJB module, resource adapter) is deployed only to those servers in the cluster that are alive at that time. Deployment on those members that are not available at that time will be deferred until they come back up.

14.2.2 Servlets and JSPs in a Cluster

The major components that typically are deployed to the presentation tier are servlets and JSP pages. You also could deploy JDBC connection pools and data sources to this cluster, though in our multi-tier application framework, we will make these available to the object tier cluster only. Let's review the load-balancing and failover features WebLogic provides for the servlets and JSPs.

WebLogic can load-balance the requests to servlets and JSP pages deployed to a cluster. It can distribute the requests across all of the servers in the cluster that host the web application. There is one important caveat. This load balancing occurs only for those requests that are not bound to an HTTP session. As soon as a client is involved in an HTTP session on the server side, session-aware requests to servlets and JSPs are directed to that server while it's available. WebLogic provides various session persistence mechanisms that ensure the HTTP session can be re-created on another member of the cluster in case the primary server fails.

If you choose in-memory session replication for persisting the HTTP session state, WebLogic maintains on a secondary server a copy of the session-state information that is held on the primary server. This means that all web requests to the cluster that are involved in a session are directed to the same server instance holding the primary session state. Only when the primary server fails are the requests redirected to the secondary server holding the replicated session-state information. For this reason, we refer to sessions as "sticky." In this case, failover is provided by replicating the session state onto a secondary server within the cluster.

If you want to deploy JSPs and servlets to a cluster and benefit from WebLogic's load-balancing and failover features, you also should enable session-state persistence for the web application. We've already looked at the various session persistence mechanisms provided by WebLogic in Chapter 2. In our case, we chose in-memory session-state replication for handling session-state failover in the presentation tier cluster. To enable in-memory session-state replication, you need to ensure that the weblogic.xml descriptor file for the deployed web application incorporates the following XML fragment:

<session-descriptor>
  <session-param>
    <param-name>PersistentStoreType</param-name>
    <param-value>replicated</param-value>
  </session-param>
</session-descriptor>

In addition, you should target your web application to the cluster, and not to each server in the cluster individually.

14.2.3 Configuring a Load Balancer

A frontend cluster cannot work effectively without a load balancer. The load balancer provides a single unified address that clients can use (ignoring firewalls), and it serves as the main entry point into the cluster. The role of the load balancer is twofold. First, it should balance the load across the available members of the cluster while remaining faithful to the sticky sessions. Second, the load balancer should detect and avoid failed servers in the cluster. WebLogic provides a rudimentary software load balancer, the HttpClusterServlet, which round-robins HTTP requests through all the available servers in the cluster. A hardware solution typically includes additional logic to monitor the load on individual machines and distribute the requests accordingly. You also can use web server plug-ins, described in Chapter 3.
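The round-robin strategy just described is simple to visualize. The following self-contained sketch mimics the rotation across cluster members; it is purely illustrative and is not WebLogic's implementation:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// A minimal sketch of round-robin request distribution, in the spirit of
// what a proxy such as HttpClusterServlet does for session-less requests.
public class RoundRobin {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobin(List<String> servers) {
        this.servers = servers;
    }

    // Each call returns the next server in the list, wrapping at the end.
    public String pick() {
        return servers.get(Math.floorMod(next.getAndIncrement(), servers.size()));
    }

    public static void main(String[] args) {
        RoundRobin rr =
            new RoundRobin(Arrays.asList("10.0.10.10:7001", "10.0.10.10:7003"));
        for (int i = 0; i < 4; i++) {
            System.out.println(rr.pick());  // alternates between the two members
        }
    }
}
```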

In our example multi-tier application framework depicted in Figure 14-1, we included a single load balancer that distributes requests across all members in the web/presentation tier. This means that we can use either the HttpClusterServlet running on a single WebLogic instance or a hardware load balancer. Later, we vary the frontend tier setup using proxy plug-ins in tandem with popular HTTP servers.

If we use the HttpClusterServlet, the load balancer in Figure 14-2 represents an additional WebLogic instance that hosts the HttpClusterServlet. This server instance is not part of the cluster; it simply forwards requests to the members of the cluster. The servlet maintains a list of WebLogic instances that host the clustered servlets and JSP pages, and forwards HTTP requests to these servers using a round-robin strategy. If a client has created an HTTP session, the HttpClusterServlet forwards the request to the WebLogic instance that holds the primary state, and fails over to a secondary server in case of a failure. In general, it can do this by creating a cookie that holds the locations of the primary and secondary servers specific to the client's session. This cookie is then passed between the browser and the server on subsequent requests to the cluster. The HttpClusterServlet examines the cookie sent by the client on subsequent session-aware requests, and determines the cluster member it should forward the request to.

Chapter 2 shows how to configure the HttpClusterServlet in more detail. The two important aspects of its setup include the set of URL requests that ought to be forwarded and the cluster address for the frontend tier cluster. In our case, we want all HTTP requests to be forwarded to the front tier. The cluster address would be specified as an initialization parameter for the HttpClusterServlet in the web.xml descriptor file:

<servlet>
  <servlet-name>HttpClusterServlet</servlet-name>
  <servlet-class>weblogic.servlet.proxy.HttpClusterServlet</servlet-class>
  <init-param>
    <param-name>WebLogicCluster</param-name>
    <param-value>10.0.10.10:7001:7002|10.0.10.10:7003:7004</param-value>
  </init-param>
</servlet>

Alternatively, if you require more sophisticated load-balancing logic, you can use a hardware load balancer. You need to ensure the load balancer is configured to work with WebLogic. For instance, if the load balancer supports passive cookie persistence, you must configure the load balancer to recognize WebLogic's cookie format. Only then can the load balancer extract the locations of the primary and secondary servers from the cookie, which is vital to preserving sticky sessions.

14.2.4 Using the Front Tier

Once you've set up the web/presentation tier cluster, you are in a position to test its load-balancing and failover features. You can access the various web applications deployed to the cluster by using the address of the load balancer. Requests to resources within the web application will then be distributed across the available servers in the cluster. In the example, a web request such as http://10.0.10.1/index.jsp will be routed by the load balancer to either ServerA or ServerB. If index.jsp initiated an HTTP session, further requests from the same client will be directed to the same server.

Moreover, because we've already configured session replication for the web application and targeted the application to FECluster, we can comfortably take the primary server down and watch WebLogic automatically redirect further web requests from the client to the other available server, re-creating the primary session state on that server.
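You can watch this stickiness from the command line. The sketch below assumes the curl tool is available and uses the load balancer address from the example; the first request saves the session cookie issued by WebLogic, and replaying that cookie keeps subsequent requests pinned to the same primary server until that server goes down:

```shell
# First request: establish a session and save the cookie to a jar.
curl -c cookies.txt http://10.0.10.1/index.jsp

# Replay the cookie: these session-aware requests stick to the primary
# server. Kill the primary and the next request fails over transparently.
curl -b cookies.txt http://10.0.10.1/index.jsp
curl -b cookies.txt http://10.0.10.1/index.jsp
```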

14.2.5 Other Frontend Architectures

In the previous multi-tier application architecture, the web and presentation tiers were combined into a single WebLogic cluster. Sometimes, a more elaborate setup is necessary: say, when your application needs to be integrated with an already established bank of web servers. In this case, the existing web servers can be used to serve up the static content, while WebLogic can be used to serve up the rest. That is, the web tier maps to a bank of web servers that handle requests for all the static content, and the presentation tier maps to a WebLogic cluster that handles requests for all the dynamic content.

By doing this, you can take advantage of the existing hardware and resources for serving requests for static content, and let the WebLogic cluster focus on serving requests for dynamic content only. This does, however, increase the complexity of your application setup, and places extra demands on its proper configuration and administration. This physical split between the web and presentation tiers is depicted in Figure 14-3.

Figure 14-3. A two-tier frontend architecture

Here we have used a bank of web servers, each configured identically with a proxy plug-in. The plug-ins work in the same way as the HttpClusterServlet, each maintaining a view of the operational state of the servers in the sibling tier and routing requests to the appropriate server. Chapter 3 shows how to install and configure these plug-ins. The web tier could even consist of a single HTTP server/proxy plug-in combination. A bank of web servers merely ensures that your web tier can survive even if one of the web servers fails. There are two ways in which you can distribute web requests across the bank of web servers: you can place a load balancer in front of the bank, as depicted in Figure 14-3, or you can use a DNS name that maps to the IP addresses of all the web servers.

The bank of web servers also could be implemented by a cluster of WebLogic instances that serve static content only, each hosting the HttpClusterServlet. In all cases, the web servers in the web tier should serve requests for static content while forwarding requests for JSPs and servlets to the presentation tier.

Note that in the situation depicted in Figure 14-3, the load balancer doesn't handle sticky sessions. Unlike the load balancer in Figure 14-2, this one is distributing requests to a bank of web servers that do not hold any session-state information. The load balancer still may hold some (internal) client session information for its own load-balancing purposes, but this is WebLogic-independent. The plug-in, on the other hand, will look at the client's cookie to determine which server to choose from in the presentation tier.
