Citrix CCA MetaFrame Presentation Server 3.0 and 4.0 Exam Cram (Exams 223 and 256)
Independent Management Architecture (IMA) is a unifying architectural framework for previously independent technologies and processes. The IMA protocol offers a platform for future Citrix products to plug into and utilize, and it also offers scalability and centralization. It is the mechanism by which MetaFrame server-to-server communication occurs. Let's look at some of the features from earlier versions that were collapsed into IMA, and at what IMA offers today.
Centralized Administration
Centralized administration is at the core of the IMA, and no enterprise solution would be complete without a centralized management process, which is exactly what the Presentation Server Console provides. This Java-based console uses the IMA protocol, gathers information from MetaFrame servers, and allows the user to make changes on a farmwide basis. It incorporates utilities that were standalones in MF 1.8 and earlier, such as Citrix Server Administrator. Today, you can configure and view Resource Manager counters and run reports, both also previously standalone tools. You can configure Load Evaluators and Citrix policies as well.
Data Store
The Data Store is a database that stores all the configuration information needed by the Citrix farm. Any time you make configuration changes to a MetaFrame server, the changes are recorded in the Data Store. For example, if you add a new MetaFrame server to spread the user load of an application, the new server can get all its information by tapping into the Data Store. The information stored in the Data Store includes
- Published Application Includes the name of the application and any configurable property available through the Management Console.
- Server Configuration Includes all the configuration information made through the Management Console.
- User Configuration Includes the MetaFrame administrators and any sort of user configuration configurable through the Management Console.
- Print Environment Includes the entire configuration you make through the Management Console, including print driver information and any configurable option under the Print Management node.
A Blast from the Past
How does the Data Store differ starting with MetaFrame XP, and how did MetaFrame 1.8 handle this information? Prior to the IMA Data Store, all the pieces of configuration information from the preceding list were stored in the Registry of every server. As a server came online and loaded its Registry, it broadcast its changes to all the other servers in the farm, and any time the data changed on one server, the change was broadcast over User Datagram Protocol (UDP) to all the other servers. In addition, the ICA Browser held a shadow copy of this information in its memory. All this generated a lot of network traffic, in addition to simply being an error-prone method. Now, with the IMA Data Store, all the servers talk to the Data Store, which acts as a centralized repository for the entire farm. This reduces the amount of network traffic dramatically and also protects the information in a database, where it belongs. If a server goes down or a new one is brought into production, as soon as it connects to the Data Store, it gains access to the configuration it needs to fulfill its role.
Alert
Prior to MetaFrame Presentation Server 3.0, licensing information was also stored in the Data Store. With the advent of MetaFrame Presentation Server (MPS) 3.0, licensing is stored on the Citrix Licensing Server.
Note
All the information stored in the IMA Data Store is manipulated through the Management Console.
Local Host Cache
Local Host Cache (LHC) is an Access database that is located on every MetaFrame Presentation Server and holds a smaller version of the Data Store. It carries enough information to keep the server running in the event that the main Data Store becomes unavailable for any reason. The Local Host Cache is located at C:\Program Files\Citrix\Independent Management Architecture\IMALHC.MDB. As changes are made to the IMA Data Store, the MetaFrame servers are notified of the change and, in turn, update or refresh their Local Host Cache database with the updated information. The LHC contains information about published applications in the farm and the servers that host them.
Note
If the Data Store goes offline, the server continues to function normally using the Local Host Cache database for up to 48 hours. After 48 hours, if the server does not re-establish connectivity to the Data Store, its licenses expire, and the server refuses client connections.
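If the Local Host Cache itself becomes corrupted, it can be re-created from the Data Store using the dsmaint utility that ships with MetaFrame Presentation Server. The following is a minimal sketch of that procedure; note that the IMA service must be stopped while the cache file is rebuilt:

    REM Stop the IMA service before touching the Local Host Cache
    net stop "Independent Management Architecture"

    REM Re-create IMALHC.MDB; the server repopulates it from the Data Store
    dsmaint recreatelhc

    REM Restart IMA so the server reads the fresh cache
    net start "Independent Management Architecture"

Because the rebuilt cache is repopulated from the Data Store, the server must be able to reach the Data Store when the IMA service restarts.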
Zones
Zones provide a way of grouping geographically close servers to save network bandwidth and improve performance. Every zone elects one Data Collector (DC), to which every server in that zone reports. If servers share a zone but are geographically very dispersed, significant network bandwidth is consumed because the servers and the DC are constantly talking to each other. This is why it is recommended that you group your servers into zones based on their location.
Data Collectors
Data Collectors (DCs) are responsible for keeping zone-specific information. Every zone has one elected server that acts as the DC and maintains information gathered from all the servers in that zone, information such as server user load and active and disconnected sessions. Every MPS server in the zone will notify the DC of its changes every 60 seconds.
Zone-to-Zone DC Communications
Prior to MetaFrame Presentation Server 3.0, every Data Collector in every zone communicated its information to the Data Collectors in all other zones. With MPS 3.0, this capability is disabled by default to preserve network bandwidth, a change prompted by large organizations suffering bandwidth problems due to the constant replication of information between DCs in different zones. This change, however, comes at a cost. If a user now wants to connect to an application that is located outside his or her primary zone, the DC for that user's zone must request information from the DCs in the other zones, so application launch times may be delayed a bit.

To get around this delay, whenever you have more than one zone, you should configure the Zone preference and failover policy in the Policies node, discussed in greater detail in Chapter 7, "MetaFrame Presentation Server Policy Management." We do, however, go over it briefly here just to get the idea across. With the Zone preference and failover policy, you can set a preferred primary zone and backup zones. Because Citrix policies can be applied to users, servers, client names, and client IP addresses, when you have more than one zone, you should create a policy for each zone. For example, you would create policy1 and apply it to the client IP range of users at a particular location so that their preferred zone is their native zone and their backup zones are all the other zones. Whenever these users need to access an application in a different zone, they can then query a backup zone. You would then create another policy doing the same thing for users at other locations.
Alert
MetaFrame Presentation Server 3.0 introduced changes in the way zone DCs communicate with each other. This change will most likely make it on the exam in the form of a question or two. Make sure you understand this new change and why it was implemented.
Data Collectors, Elections, and Priority
Every zone needs to hold TCP elections to determine which server is to become the zone DC. The election criteria and process are similar to the election of the Master ICA Browser in MF 1.8 and follow the same guidelines. The server with the highest priority wins the election. Elections are held any time one of the following events occurs:
- You manually change the membership of a zone or make a change to the zone's election criteria within the Management Console.
- You manually trigger an election using the command querydc -e.
- An MPS server loses connectivity to the zone DC.
- A new MPS server is brought online.
- The zone DC is shut down or goes offline for any reason.
Election priorities are as follows:
- Most Preferred This is the favorite server; it always wins the election.
- Preferred This server is favored over others to win an election.
- Default Preference This server is neutral; it can be elected as a DC but is not a preferred candidate.
- Not Preferred This server should never win an election unless it is the only server in the zone or all the other preferred servers are unavailable.
As with any election, the DC elections can also be tampered with by an administrator. You can give each server in the farm the status that you see fit for its role via the Management Console, as you see later in Chapter 6.
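As a quick command-line sketch of the election mechanics described earlier, the following can be run from a command prompt on an MPS server. The querydc -e command comes straight from the list of triggers above; the /zone switch on the qfarm query utility, used here to verify the winner, is our assumption, so check your server's documentation:

    REM Manually force a Data Collector election in this server's zone
    querydc -e

    REM Verify which server is now acting as the DC
    REM (the /zone switch is an assumption; plain qfarm also marks the DC)
    qfarm /zone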
IMA Subsystems
IMA subsystems are similar to plug-ins: They are core technologies that plug into the IMA and take advantage of its architecture. This architecture is the framework that unifies existing Citrix products and is the blueprint that Citrix will use in the future when developing new products to help integrate them all together. Currently, the IMA manages the following subsystems:
- ICA Browser Provides backward compatibility. Its services come into use only when the farm is in mixed mode to offer older MF servers UDP broadcast capability.
- Server Management Handles user sessions.
- Application Management Manages published applications and their related information.
- Runtime Offers services such as zone management and the Data Collector.
- Persistent Storage Updates the local host cache on every MF server from the Data Store.
- Distribution Manages file transfers between different subsystems.
- Remote Procedure Call Allows external processes to communicate with IMA.
- User Management Provides authentication and security.
- Printer Management Allows for printer administration.
- Licensing Manages and enforces Citrix licensing guidelines.
- Program Neighborhood Handles PN communications with ICA clients.
- Load Management Handles load information and management.
Listener Ports
A listener port for every transport protocol is created automatically as soon as Terminal Services is installed. Its sole function is to listen for and detect clients attempting to connect to the Terminal Server or, in this case, to the MetaFrame servers. After it establishes a connection with a client session, it proceeds to connect that client with an idle session on the server. Think of the listener port as the host at a restaurant: the host is always waiting to receive, greet, and seat customers, after which a waiter or a maitre d' serves them. The same is true of a listener port: it listens for, detects, and initiates contact with incoming client sessions and connects them with an available idle session for servicing.
Note
One listener port exists for every transport protocol that is installed on the server, such as TCP or IPX.
You can view the listener port by opening the Management Console for MPS 3.0 and clicking on a server from the Servers node in the left control pane. Click the Sessions tab in the right control pane, as shown in Figure 2.1. You can see the listener port under the State column.
Figure 2.1. Windows 2000 view of the listener port and idle sessions.
Tip
If users experience problems connecting to or establishing a session with an MPS server, you can try resetting the listener port in an attempt to remedy the problem. Right-click the listener port in the Management Console and click Reset.
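The same information is also available from a command prompt. In the hedged sketch below, qwinsta lists every session, including the per-protocol listeners, and reset session is assumed to accept a listener name the same way Terminal Services Manager does; MPSSRV01 is a hypothetical server name:

    REM List all sessions on the server; listeners show a STATE of Listen
    qwinsta /server:MPSSRV01

    REM Reset the ICA listener if clients cannot establish new sessions
    REM (assumption: passing the listener name resets it, as the GUI does)
    reset session ica-tcp /server:MPSSRV01

Keep in mind that resetting a listener can disconnect the sessions running under it, so use this only when connections are already failing.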
Idle Sessions
On Windows 2000 servers, when Terminal Services is installed in Application Server mode, every transport protocol has two idle sessions created by default. The primary function of these idle sessions is to accept an incoming connection from the listener port and turn it into an ICA session. Every time one of the two idle sessions is turned into an ICA session, a new idle session is created and awaits a connection. Following the restaurant example we used for listener ports, idle sessions can be considered the waiters or maitre d's who serve the customers that the host (the listener port) seated. Just as a busy restaurant may require more waiters and maitre d's to serve its customers, you may sometimes need to increase the number of idle sessions available to handle peak logon times. In large organizations, especially in the morning hours when all the users are trying to connect, the server may not have enough time to create new idle sessions, so some of your users may not be able to connect right away and will be required to try again.
For this reason, large organizations that have users in the thousands are advised to add more idle sessions to cope with heavy logon attempts. You can do this by editing the Registry value HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\IdleWinStationPoolCount and modifying the value accordingly. It is recommended that you add these idle sessions in multiples of two.
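As a sketch, the same edit can be made with the built-in reg utility rather than Regedit; the value of 4 below is purely illustrative, chosen as a multiple of two per the recommendation above:

    REM Raise the idle-session pool from the default of 2 to 4
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" ^
        /v IdleWinStationPoolCount /t REG_DWORD /d 4 /f

    REM A restart is typically required before the new pool size takes effect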
Tip
For performance reasons, it is recommended that you do not exceed a total of 10 idle sessions. The more idle sessions you create, the more memory and other server resources are consumed.
In Windows Server 2003, Microsoft changed the architecture of Terminal Server a bit. The two idle sessions that were created by default for every transport protocol have disappeared. The functionality remains the same, but the connection from the listener port to the idle session has been made a seamless one. In addition, you can no longer control how many idle sessions exist, and the Registry key mentioned earlier has no effect on it. As you can see in Figure 2.2, a Windows Server 2003 server does not show the idle sessions anymore. Compare this figure to Figure 2.1, which clearly shows a Windows 2000 server with the idle sessions.
Figure 2.2. Windows Server 2003 no longer has idle sessions.
ICA Sessions
As we mentioned earlier, as soon as an incoming connection is paired with an idle session, that session is turned into an ICA session. The session's state immediately changes to ConnQ, meaning it is in the process of being connected; it changes to Conn once a connection with the MF server has been made; and it changes one final time, to Active, once the logon is successful. It remains Active as long as the user is using the application with no problems.
A session may go into the following different states (a command-line sketch for inspecting them follows the list):

- Listen The listener port is listening for any connection attempts.
- ConnQ The session is in the process of being connected with a MetaFrame server.
- Conn The session has established a connection with the MetaFrame server.
- Active The session has successfully logged on to the server and can now be used by the user.
- Idle The session is idle and is awaiting a connection transfer from the listener port.
- Disc The session is in disconnect mode, which means it has not logged off the server but has been disconnected from the ICA client.
- Shadow The ICA session is shadowing another session.
- Down The listener port has not initialized successfully and is down. Also, when a session has been lost, it changes to a down state.
- Init The ICA session is being initialized.
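As promised before the list, here is a small command-prompt sketch for inspecting these states; findstr simply filters the qwinsta output, and MPSSRV01 is again a hypothetical server name:

    REM Show the name, ID, and state of every session on the server
    qwinsta /server:MPSSRV01

    REM Filter for a particular state, for example disconnected sessions
    qwinsta /server:MPSSRV01 | findstr Disc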
Published Application Discovery Process
Back in the MetaFrame 1.8 days, whenever an ICA client requested information or queried the farm for published applications, it broadcast a message via UDP port 1604. The Master ICA Browser residing in the same subnet as the client responded to the request. If the ICA client computer broadcasting the request did not have a browser gateway configured, it could view only the information that the Master ICA Browser in its own subnet carried, thereby getting only a partial listing.
Beginning with MetaFrame XP 1.0, Citrix solved this problem by storing all the information in the IMA Data Store and then replicating it to the Local Host Cache on every MF server. This also eliminated the need for the UDP broadcast, which was replaced with IMA communications. Now when an ICA client queries any server, a full list of published applications is provided.
SNMP
Simple Network Management Protocol (SNMP) is well known and widely used by various organizations for monitoring their systems. Companies can use third-party tools such as Microsoft Operations Manager (MOM), HP OpenView, or various other tools to monitor and manage their servers. In addition, if you are using the Enterprise Edition of MetaFrame, you can use Citrix Network Manager as an SNMP agent to gather farmwide performance monitoring and management information.
Auditing Shadowed Sessions
Shadowing is probably one of the most useful troubleshooting tools available to administrators, engineers, and helpdesk technicians alike. It allows a technician to remotely view and interact with a user's session. However, this tool can also be misused by nosy or malicious users and technicians. Therefore, many companies that implement shadowing require a way of monitoring its usage to ensure no one is using the tool outside its intended purpose.
When you enable shadow auditing, every time a session is shadowed, an event specifying the shadowing and shadowed sessions is logged in the Event Viewer on the Windows server. Shadow logging is not enabled by default on the servers. To enable it, follow these instructions:
1. Open the Management Console.
2. Select a server from the Servers node in the left control pane.
3. Right-click it and select Properties.
4. Select MetaFrame Settings and check the box next to Enable Shadow Logging on This Server, as shown in Figure 2.3.

Figure 2.3. Enable Shadow Logging.