Interaction Between the WCS and the WC

To demonstrate the interaction between the WC and the WCS, it's easiest to walk through the process of displaying a Crystal Report in a Crystal Viewer and examine exactly what traffic passes between the browser, the Web server, the WC, and the Crystal Enterprise report processing tier.

  1. A request is made from the browser to the Web server for a specific report viewer. In this example, the user has clicked on a hyperlink (http:///directory/viewrpt.csp?rptid=1863), meaning that a request has been made to view a Crystal Report within the Crystal Viewer.
  2. The WC on the Web server forwards the request to the WCS.
  3. The WCS processes the CSP and calls the Crystal Enterprise SDK to invoke a report viewer object, which renders the report on the Page Server, passes the first page on to the Cache Server, and then tells the WCS to send the appropriate HTML to the browser.
  4. The viewer HTML is sent from the WCS to the WC, through the Web server, and to the user's browser.

For the purposes of the discussion of firewall configuration, there are essentially three discrete Crystal Enterprise entities that are likely to be deployed at different positions in a firewall architecture, as seen in Figure 26.1. (The Web browser will be ignored because it is clearly outside the scope of the firewall.)

Figure 26.1. Crystal Enterprise has three levels at which security must be determined.

Each of these entities is likely to require a different level of firewall protection, determined by its proximity to the internal network.

In cases where you do not deploy a WCS (such as the Java, COM-SDK, and .NET scenarios), however, there are only two entities: the SDK running on the application server and the Crystal Enterprise services or daemons.

As previously mentioned, the barriers that a secure system provides are commonly broken down into distinct layers, with each layer defining a security measurement and effectively denoting an acceptable level of exposure. Each layer is identified by the communication from one network to another via a firewall. A detailed example of Crystal Enterprise communication through a firewall follows, covering what the appropriate system settings should be and how the communication is addressed at the IP/port level.

Figure 26.2 shows the most typical example of a firewall implementation. In this scenario, the browser-to-Web server communication is controlled through the standard firewall control, allowing only HTTP requests to be forwarded through to the Web server on port 80 (other services such as Telnet and mail might be permitted through other predefined ports as well). Clearly, Crystal Enterprise is not involved at this stage of the firewall. This does, however, represent the entry point into the resources managed by the target environment. From this point forward, internal network resources will be used; this interim environment is normally called the DMZ (or Demilitarized Zone). The DMZ, therefore, is a network added between a protected network and an external network.

Figure 26.2. Crystal Enterprise can be divided into tiers for firewall deployment.

The architecture of Crystal Enterprise fits conveniently into this infrastructure. The separation of the WC from the WCS enables the WC to remain within the DMZ along with the Web server. Consequently, an additional firewall can easily be deployed to protect the requests forwarded to the WCS; the WCS, after all, communicates directly with the other Crystal Enterprise servers and is a component of the Crystal eBusiness Framework. Alternatively, the various application server deployments (Java, COM, or .NET) would also reside in the DMZ on the application server.

Because the WC and WCS accept URL-level requests to support legacy Crystal Enterprise applications, some enterprises choose not to implement the WC and WCS in an extranet environment. Instead, an application-server deployment with the CE-SDK allows application-level control of the interaction between the Web (application) server and the Crystal Enterprise Framework, facilitating tighter security at this level. Please refer to Figure 26.3 for an illustration.

Figure 26.3. This is how to configure a firewall in a non-WCS deployment.

For instance, organizations might not desire any extranet access to the Crystal Management Console (CMC). By not installing the WC and WCS in the extranet DMZ, no access to the CMC can occur from the extranet, perhaps alleviating security concerns in an extranet environment where malicious attacks are routine.

To examine the details of the communication of the WC to the WCS through the second firewall, the discussion will be broken down into two distinct portions: the initialization of the communication (a request for service), and the servicing of the request once the communication has been established. This two-stage nature of communication was detailed in the eBusiness Framework discussion earlier in this chapter.

Understanding Initial TCP/Port Processing

When the Web server receives a Crystal Enterprise resource request from a Web browser, it forwards the request to the WC or, in the case of the SDK, processes the request internally. For this example, assume that the Web server has an IP address of 10.55.222.241 (see Figure 26.4).

Figure 26.4. Browser requesting information from the Web server.

The WC prepares to make a TCP connection to the WCS. A TCP connection request has four critical elements: a source IP address, a source port, a destination IP address, and a destination port.

The destination portion of this communication is determined by settings entered in the WC configuration dialog box in the Crystal Configuration Manager.

When building the destination information, the WC reads it from these settings. Because the settings contain the machine name of the WCS, the IP address of the WCS is determined by network name resolution. The destination port takes less work; it's simply the number entered in this dialog box. By default, this port is 6401. The only requirement is that this port number matches the port the WCS was set to use when it started.
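
To picture the destination side concretely, here is a minimal Java sketch of the name-resolution step. It is purely illustrative: the machine name "wcs-server" is a hypothetical value standing in for whatever is entered in the WC configuration dialog box, and 6401 is simply the default port noted above.

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class ResolveWcs {
        public static void main(String[] args) throws UnknownHostException {
            // Hypothetical WCS machine name as it might appear in the WC settings.
            String wcsHostName = "wcs-server";
            int wcsPort = 6401; // default WCS listening port

            // Network name resolution turns the configured machine name into
            // the IP address the WC will actually connect to.
            InetAddress wcsAddress = InetAddress.getByName(wcsHostName);
            System.out.println("WCS destination: "
                    + wcsAddress.getHostAddress() + ":" + wcsPort);
        }
    }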

Both parts of the source portion of the request are determined by the Web server's operating system. The source IP is the IP address of the machine sending the request: the Web server. This IP address is determined by a request to the operating system. The port is also chosen by an operating system request. The WC asks for an available socket (or port) that is not in use, and the operating system randomly chooses an unused one. The WC begins temporarily listening on this port for a response from the WCS as soon as the initialization request is sent. It will only accept a response from the IP address of the WCS; any other requests at this port will be dropped. At this point, the TCP connection request is ready to be sent.

Assuming that the WCS has an IP address of 10.55.222.242 and that the assigned port (retrieved by an operating system call) is 3333, the completed request for this example will be as follows (formatted as IP address:port):

Source: 10.55.222.241:3333
Destination: 10.55.222.242:6401
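
This behavior, in which the operating system supplies the source port, can be reproduced with any plain TCP client. The following Java sketch is not the WC itself, just an illustration; it hard-codes the example WCS address and default port from the text and lets the operating system choose the unused source port.

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class InitialConnection {
        public static void main(String[] args) throws Exception {
            String wcsIp = "10.55.222.242"; // example WCS address from the text
            int wcsPort = 6401;             // default WCS listening port

            try (Socket socket = new Socket()) {
                // No explicit bind: the operating system picks an unused
                // ephemeral source port, just as it does for the WC.
                socket.connect(new InetSocketAddress(wcsIp, wcsPort), 5000);
                System.out.println("Source:      "
                        + socket.getLocalAddress().getHostAddress()
                        + ":" + socket.getLocalPort());
                System.out.println("Destination: " + wcsIp + ":" + wcsPort);
            }
        }
    }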

The WCS is constantly listening on its defined port for service requests (the default of 6401 in this example, though another port could be used for listening if the WCS is configured to do so).
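
Conceptually, the listening side looks like any process bound to a fixed, well-known port. The sketch below is only an illustration of that idea, not the WCS implementation; it binds to 6401, the default mentioned above, and reports each incoming connection request.

    import java.net.ServerSocket;
    import java.net.Socket;

    public class FixedPortListener {
        public static void main(String[] args) throws Exception {
            // Bind to the well-known request port (6401 is the WCS default).
            try (ServerSocket listener = new ServerSocket(6401)) {
                while (true) {
                    // Block until a client (the WC in this discussion) connects.
                    try (Socket client = listener.accept()) {
                        System.out.println("Connection request from "
                                + client.getInetAddress().getHostAddress()
                                + ":" + client.getPort());
                    }
                }
            }
        }
    }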

When the WCS receives the TCP connection request from the WC, it begins to form a response. The response has the same four primary components that the request had: a source port and IP and a destination port and IP. Embedded in this response is the IOR of the WCS. The IOR of the WCS contains the IP address of the WCS, as well as the port number specified in the -requestport option.

NOTE

In a Java environment, the web.xml file configures the port of choice. In the .NET environment, the port that the SDK uses to connect to the various services is determined by those services, because no outgoing communication is necessary from the Crystal Enterprise Framework to the .NET application server and CE-SDK.

If the option is not specified, a free port is picked at random by the CORBA library by asking the operating system for an available port. At this point, the WCS responds to the WC to complete the TCP connection. This TCP connection response carries the same four pieces of information: a source IP and port and a destination IP and port.

Assuming a randomly generated source port of 2345, you'll have the TCP connection confirmation of the following:

Source: 10.55.222.242:2345
Destination: 10.55.222.241:3333
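
The random port selection described above, in which the CORBA library asks the operating system for an available port, uses the same idiom any application can use: binding to port 0. A minimal, purely illustrative Java sketch of that idiom follows.

    import java.net.ServerSocket;

    public class PickFreePort {
        public static void main(String[] args) throws Exception {
            // Port 0 asks the operating system for any available port, the same
            // trick relied on when -requestport is not explicitly specified.
            try (ServerSocket socket = new ServerSocket(0)) {
                System.out.println("Operating system assigned port "
                        + socket.getLocalPort());
            }
        }
    }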

While the WCS has been building its confirmation response, the Web server machine has been listening for the response on the chosen port.

The Web server/Connector will only accept packets from the IP to which it sent the request; this is for security. In this example, the operating system of the Web server machine has been listening on port 3333. When the TCP connection response/confirmation is received, the OS of the Web server machine will determine whether it's from the correct location. If it is from the correct IP, it will accept the data and complete the TCP connection. The IOR is embedded inside this response, and the operating system passes it on to the WC for processing.

Now that all this work has been done to establish this connection, the WC immediately closes it. This was merely to establish that both client and server are up and running and accepting connections. The WC also received the IOR of the WCS in this short connection. The listening port on the WCS resumes listening to other clients, and the work of sending IP packets back and forth will be done on a second TCP connection.

Understanding Secondary TCP/Port Processing

It is in establishing the second TCP connection that Crystal Enterprise works differently from most TCP/IP applications. The WC reads the IOR of the WCS and acquires the IP address and port number from it. Using this port number and IP address, a second connection is made; this one will be used for actually transferring the data. (A straight TCP/IP application doesn't have an IOR from which to read the IP and port of the server; it uses the source IP and port from the TCP connection confirmation to establish this second connection.) The Crystal Enterprise application uses the information in the IOR and discards the source IP and port from the TCP connection confirmation.
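
A rough Java sketch of this two-connection pattern is shown below. It assumes, purely for illustration, that the server advertises its data address as a plain host:port line; the real IOR is a CORBA structure parsed by the CORBA library, and the address and port used here are simply the example values from the text.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class TwoStageConnect {
        public static void main(String[] args) throws Exception {
            String serverHost = "10.55.222.242"; // example WCS address from the text
            int requestPort = 6401;              // well-known port for the initial connection

            // Stage 1: a short-lived connection to the well-known port, used only to
            // fetch the address the server advertises (the IOR in Crystal Enterprise;
            // a plain "host:port" line in this simplified sketch).
            String advertised;
            try (Socket initial = new Socket(serverHost, requestPort);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(initial.getInputStream()))) {
                advertised = in.readLine();
            } // the connection is closed immediately, as described above

            // Stage 2: connect to the advertised IP and port; this second
            // connection is the one that actually carries the data.
            // (Error handling is omitted in this sketch.)
            String[] parts = advertised.split(":");
            try (Socket data = new Socket()) {
                data.connect(new InetSocketAddress(parts[0],
                        Integer.parseInt(parts[1])), 5000);
                System.out.println("Data connection established to " + advertised);
            }
        }
    }

The design point to notice is that the destination of the second connection comes from the advertised data rather than from the first connection's source port, which is why, for firewall purposes, the port carried in the IOR matters as much as the well-known listening port.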

Continuing your example, upon reading the port and IP information from the IOR of the WCS, the WC initiates a second TCP connection; the request uses the IOR's IP address and port as its destination and a new OS-assigned source port.

In this example, assume that the destination port number read from the IOR is 4000 and the OS-assigned source port is 1061. You'll have the TCP connection request of the following:

Source: 10.55.222.241:1061
Destination: 10.55.222.242:4000

When the WCS receives this request, it will respond to the WC to complete the connection. The address to which it will send this connection is 10.55.222.241:1061. This destination IP and port were determined by reading the source information of the incoming TCP connection request; this is where the WCS's connection response gets its destination. As demonstrated in the following list, this is really just the reversal of the information received from the WC. Completing your example, therefore, the WCS will communicate as follows:

Source: 10.55.222.242:4000
Destination: 10.55.222.241:1061

After the Web server/Connector machine receives the TCP connection response from the WCS, it is able to complete the TCP connection. Now that the TCP connection is made, IP packets will be sent back and forth on this channel between the Web server (10.55.222.241:1061) and the WCS (10.55.222.242:4000). The secondary connection, therefore, is the one that does nearly all the data transference.

Now that you have seen exactly how the IP/port allocation is determined in the Crystal Enterprise environment, you can look at a fully worked example applying a specific firewall technology. Initially, you will look at packet filtering and then apply NAT on top of it. Then this chapter briefly discusses how Crystal Enterprise would fit in with the application of a proxy server (SOCKS) firewall.
