CNE Update to NetWare 6 Study Guide
Test Objectives Covered:
Now we are clustering! Well, actually, the servers are clustering but the users are not. After you have created an NCS cluster, you must configure cluster resources to make them highly available to users. There are two main network resources that users are interested in: files and services. In the remaining lessons of this chapter, we will learn how to configure high availability for each of these resources:

- High-availability file access (using shared NSS volumes and pools)
- High-availability services (using cluster resources for applications and other network services)
Let's continue by enabling an NCS file system.

NCS High-Availability File Access
As we just learned, there are two main network resources that you can make highly available by using NCS: files and services. In this section, we will explore using NCS 1.6 to create high-availability file access. Fortunately, clustering leverages the sophistication and stability of Novell Storage Services (NSS). In order to cluster-enable NSS, you must first create a shared disk partition and an NSS file system on the shared device. Then you can cluster-enable the NSS components (such as Volumes and Pools) by associating them with a new Virtual Server object via a unique IP address. This enables NSS Volumes to remain accessible even if the user's host server fails, which is the very definition of high availability. To configure NCS 1.6 to make data and files highly available to users, we must perform three steps:

1. Create a shared disk partition.
2. Create a shared NSS Volume and Pool.
3. Cluster-enable the NSS Volume and Pool.
Now, let's use ConsoleOne to create a NetWare 6 high-availability file-access solution.

Create a Shared Disk Partition
As you recall from Chapter 5, the NetWare 6 NSS architecture relies on partitions, pools, and volumes (in that order) for scalable file storage. Therefore, it makes sense that you need to create a shared disk partition to enable high-availability file access. Effectively, we are building a shared NSS file system on the SAN. To create a shared disk partition on your clustered SAN, make sure that all of your nodes are attached to the SAN and that the appropriate drivers have been loaded. Then activate ConsoleOne and navigate to the Cluster object. Right-click it and select Properties. On the Media tab, select Devices and choose the device that will host your shared partition. Make sure that the Shareable for Clustering box is marked. This box should already be marked, because NetWare 6 flags the device as shared storage when you add it to the SAN; if it is not marked, NetWare did not detect the device as shared storage, which may indicate a problem. Next, on the Media tab, select Partitions and click New. Select the device once again and configure the following parameters:
To create the new shared partition, click OK. That completes step 1. In step 2, you must create a shared NSS Volume and Pool for hosting clustered files.

Create a Shared NSS Volume and Pool
Storage pools are next in the NSS architecture hierarchy. Although storage pools must be created prior to creating NSS volumes, you can create both at the same time by using the Create a New Logical Volume option in ConsoleOne. First, right-click any Server object in your cluster and select Properties. Next, choose Media, NSS Logical Volumes, New. The Create a New Logical Volume dialog box should appear. In the Name field, enter a unique name for the volume. ConsoleOne will suggest a related name for the host storage pool. Select Next to continue. When the Storage Information dialog box appears, select the shared disk partition that you created in step 1. This is where the shared storage pool and volume will reside. Enter a quota for the volume, or select the box to allow the volume to grow to the pool size. Remember that we want to make the volume and pool as large as possible because they will host shared file storage. After you select Next, the Create a New Pool dialog box will appear. Again, enter a related name for the pool and select OK. Next, the Attribute Information dialog box will appear. Review and edit the attributes as necessary. (Refer to Chapter 5 for more details.) When you have finished editing the volume attributes, select Finish to complete step 2. Now that you have created an NSS storage pool and volume on the shared storage device, it's time to cluster-enable them. Believe it or not, NetWare 6 does not cluster-enable shared volumes by default. At this point, the volume and pool are assigned as local resources to the server you chose in step 2. We will cluster-enable them in step 3.

Cluster-Enable the NSS Volume and Pool
When you create a standard NSS volume, it is associated with a specific server. For example, the WHITE_NSSVOL01 volume would be connected to the WHITE-SRV1 server. The problem with this scenario is that all files on the NSS volume are subject to a single point of failure: the WHITE-SRV1 server. Furthermore, if WHITE-SRV1 goes down, its server IP address is no longer broadcast and the volume cannot be migrated to a new server for high availability. To solve this problem, NCS allows you to cluster-enable an NSS volume and pool independently of the physical Server object. This means you associate the volume and pool with a new virtual server that has its own IP address. This enables the volume to remain accessible even if WHITE-SRV1 goes down. Furthermore, during the cluster-enabling process, the old Volume object is replaced with a new Volume object that is associated with the pool, and the old Pool object is replaced with a new Pool object that is associated with the virtual server. Table 7.6 provides a detailed description of this eDirectory object transition.

TIP: You should create an A record on your DNS server for the new virtual server's IP address. This enables your users to log in using the logical DNS name.
Following are three important guidelines to keep in mind when you cluster-enable volumes and pools in NCS 1.6:
To cluster-enable an NSS volume (and pool) using ConsoleOne, navigate to the Cluster object and select File, New, Cluster, Cluster Volume. Then browse and select a volume on the shared disk system to be cluster-enabled. Next, enter an IP address for the new volume. This is required only for the first volume in the pool; subsequent volumes adopt the same IP address because it is assigned at the pool level. Finally, mark the following three fields and click Create: Online Resource After Create (to mount the volume once it is created), Verify IP Address (to validate that there are no IP address conflicts), and Define Additional Properties.
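If you marked Online Resource After Create, the new resource should come online on the first server in its Assigned Nodes list. As a quick sanity check, you can also watch it from the server console. The following is a minimal sketch, assuming your NCS release includes the CLUSTER VIEW and CLUSTER RESOURCES console commands; the resource name shown (WHITE_POOL01_SERVER) is a hypothetical placeholder:

    # Display this server's view of cluster membership
    CLUSTER VIEW

    # List all cluster resources, their current states, and the nodes
    # hosting them; the new WHITE_POOL01_SERVER resource should be Running
    CLUSTER RESOURCES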
That completes our lesson in NCS high-availability file access. In this section, we learned the three-step process for creating a clustered file-access solution. First, we created a shared disk partition on the SAN. Then we created an NSS volume and pool to host the shared files. Finally, we cluster-enabled the volume and pool with a new Virtual Server object. This process should help you sleep at night now that your users' files are always up. In the final NCS lesson, we will learn how to build a high-availability service solution.

NCS High-Availability Services
Network services are just as important to users as files. With NCS, you can make network applications and services highly available to users even if those applications are not cluster-aware. The good news is that Novell already includes a number of cluster-aware applications that take full advantage of NCS clustering features (one example is GroupWise). However, you can also cluster-enable any application by creating a cluster resource and migrating it into NCS. In this section, we will learn how to use NCS 1.6 to guarantee always-up NetWare 6 services. Along the way, we will discover two different types of NCS resources:

- Cluster-aware applications, which are built to take advantage of NCS clustering features
- Cluster-naïve applications, which know nothing about the cluster but can still be cluster-enabled with a cluster resource
In this final NCS lesson, we will learn how to configure high-availability services by performing these five administrative tasks:

1. Cluster-enabling applications
2. Assigning nodes to a cluster resource
3. Configuring cluster resource failover
4. Migrating cluster resources
5. Configuring cluster resource scripts
Cluster-Enabling Applications
Cluster resources are at the center of the NCS universe. To cluster-enable any network service (such as an application), you must create a corresponding cluster resource. The resource includes a unique IP address and is available for automatic or manual migration during a node failure. You can create cluster resources for cluster-aware or cluster-naïve applications, including websites, e-mail servers, databases, or any other server-based application. This magic is accomplished using ConsoleOne or NetWare Remote Manager. After you have created an application's cluster resource, you can assign nodes to it and configure failover options (we will discuss these topics in just a moment). To create a cluster resource for a given network application, launch ConsoleOne. Next, navigate to the host Cluster object and select File, New, Cluster, Cluster Resource. Then enter a descriptive name for the cluster resource that identifies the application it will be serving. Next, mark the Inherit from Template field to perform additional configurations based on a preexisting template. If one does not exist, select the Define Additional Properties box to make the configurations manually. Finally, if you want the resource to start on the master node as soon as it is created, select Online Resource After Create and click Create. You have now created a new cluster resource in eDirectory for your highly available application. However, this is only the beginning. For users to have constant access to the application, you must assign nodes to the cluster resource, configure failover options, and build load scripts (so that NCS knows how to activate the application). Let's continue with node assignment.

Assigning Nodes to a Cluster Resource
Before your new cluster resource is highly available, it must have two (or more) nodes assigned to it. Furthermore, the order in which the nodes appear in the Assigned Nodes list determines their priority during failover. To assign nodes to a cluster resource in ConsoleOne, navigate to the new Cluster Resource object in eDirectory. Next, right-click it and select Properties. When you activate the Nodes tab, two lists will appear: Unassigned (which should have two or more servers in it) and Assigned (which should be blank). To assign nodes to this cluster resource, simply highlight a server in the Unassigned list and click the right-arrow button to move it to the Assigned Nodes list. Then, when you have two (or more) servers in the Assigned Nodes list, you can use the up-arrow and down-arrow buttons to change the failover priority order. Speaking of failover, let's continue with a quick lesson in configuring cluster resource failover.

Configuring Cluster Resource Failover
After you have created a cluster resource for your application and added nodes to it, you're ready to configure the automatic and manual failover settings. Following is a list of the failover modes supported by the Policies page in ConsoleOne:
If you don't feel comfortable automatically migrating cluster resources in NCS, you can always migrate them manually. Let's continue with a quick lesson in resource migration.

TIP: When configuring cluster resource failover modes, ConsoleOne presents an Ignore Quorum check box. By selecting this parameter, you can instruct NCS to ignore the cluster-wide timeout period and node number limits. This ensures that the cluster resource is launched immediately on any server in the Assigned Nodes list as soon as the server is brought online. We highly recommend that you check the Ignore Quorum box because time is of the essence when building a high-availability solution.
Migrating Cluster Resources
You can migrate cluster resources to different nodes in the Assigned Nodes list without waiting for a failure to occur. This type of load balancing lessens the performance load on any one server. In addition, resource migration is a great tool for freeing up servers when they are scheduled for routine maintenance. Finally, migration allows you to match resource-intensive applications with the best server hardware. To migrate cluster resources by using ConsoleOne, navigate to the Cluster object that contains the resource that you want to migrate. Highlight the Cluster object and select View, Cluster State View. Then, in the Cluster Resource list, select the resource you want to migrate. The Cluster Resource Manager screen appears, displaying the resource's host server and a list of possible servers to which you can migrate the resource. Select a server from the list and click the Migrate button to manually move the resource to the new server. Alternatively, you can select a resource and click the Offline button to unload it from its host server. At this point, the resource hangs in limbo until you manually assign it to another node.

TIP: Cluster resources must be in a Running state to be migrated.
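ConsoleOne is not the only way to move a resource around. The following is a minimal sketch of the equivalent operations from the server console, assuming your NCS release supports the CLUSTER MIGRATE, CLUSTER OFFLINE, and CLUSTER ONLINE console commands; the resource name (GWMAIL_SERVER) and node name (WHITE-SRV2) are hypothetical placeholders:

    # Move a running resource to another node in its Assigned Nodes list
    CLUSTER MIGRATE GWMAIL_SERVER WHITE-SRV2

    # Take the resource offline; it stays offline until you bring it back
    CLUSTER OFFLINE GWMAIL_SERVER

    # Bring the resource back online on the most preferred available node
    CLUSTER ONLINE GWMAIL_SERVER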
So far, you have created a cluster resource for your network application and assigned nodes to it. Then you configured automatic cluster failover modes and migrated resources manually for load balancing. That leaves us with only one important high-availability task: configuring cluster resource scripts. This is probably the most important task because it determines what the resources do when they are activated. Ready, set, script!

Configuring Cluster Resource Scripts
When a cluster resource loads, NCS looks to the Load Script to determine what to do. This is where the application commands and parameters are stored for the specific cluster resource. Load Scripts are analogous to NCF (NetWare Command File) batch files that run automatically when NetWare servers start. In fact, cluster resource load scripts support any command that you can place in an NCF file. Similarly, the Unload Script contains all of the commands necessary to deactivate the cluster resource, or take it offline. Both Load and Unload Scripts can be viewed or edited by using ConsoleOne or NetWare Remote Manager. To configure a specific cluster resource's Load Script in ConsoleOne, navigate to the Cluster Resource object and right-click it. Next, select Properties and activate the Load Script tab. The Cluster Resource Load Script window will appear. Simply edit the commands as you would any NCF batch file. In addition, you will need to define a timeout setting for the load script. If the load script does not complete within the timeout period (600 seconds by default), the resource will go into a comatose state.
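To make this concrete, here is a minimal sketch of a Load Script and matching Unload Script for a cluster-enabled NSS pool and volume. The pool name (WHITE_POOL01), volume name (WHITE_NSSVOL01), virtual server name, and IP address are hypothetical placeholders; a cluster resource for another type of application would use that application's own load and unload commands instead:

    # Load Script: activate the shared pool and mount its volume on the
    # node that is taking ownership of the resource
    nss /poolactivate=WHITE_POOL01
    mount WHITE_NSSVOL01 VOLID=254

    # Advertise the virtual server name and bind the resource's IP address
    NUDP ADD CLUSTER-WHITE-POOL01-SERVER 10.7.1.140
    add secondary ipaddress 10.7.1.140

    # Unload Script: release the IP address, stop advertising the virtual
    # server, and deactivate the pool so another node can take it over
    del secondary ipaddress 10.7.1.140
    NUDP DEL CLUSTER-WHITE-POOL01-SERVER 10.7.1.140
    nss /pooldeactivate=WHITE_POOL01 /overridetype=question

Keep scripts such as these short and deterministic; if a Load Script cannot finish within its timeout period, NCS marks the resource comatose rather than leaving it half-started.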
In this final NetWare 6 lesson, we learned how to implement Novell's new AAA: Anytime, Anywhere, Always Up. Always up is accomplished by using NCS (Novell Cluster Services). In this lesson, we learned how to design a NetWare 6 NCS solution, how to install it, how to configure it, and how to keep it running. In the first NCS section, we explored high availability in theory and built an impressive NCS vocabulary, including Mean Time Between Failures (MTBF) and Mean Time to Recovery (MTTR). After we nailed down the basic fundamentals of NCS, we used NCS 1.6 to design a clustering solution. In the basic system architecture, we learned how to use a Fibre Channel or SCSI configuration to share a central disk system. In the third lesson, we discovered the four-step process for installing NCS 1.6. Then, at the end of the chapter, we learned how to configure two high-availability solutions: file access and services. So there you go... Novell Cluster Services in all its glory!

Congratulations! You have completed Novell's CNE Update to NetWare 6 Study Guide. With this companion, you have extended your CNE venture beyond NetWare 4 and 5, into the Web-savvy world of NetWare 6. Furthermore, you have learned how to boldly serve files and printers where no one had served them before with iPrint, iFolder, and iManager. That is Novell Course 3000: Upgrading to NetWare 6 in a nutshell. Wow, what a journey! You should be very proud of yourself. Now you are prepared to save the 'Net with NetWare 6. Your mission, should you choose to accept it, is to pass the NetWare 6 CNE Update exam. You will need courage, eDirectory, iFolder, NCS, and this book. All in a day's work. Well, that does it. The end. Finito. Kaput. Everything you wanted to know about NetWare 6 but were afraid to ask. I hope that you have had as much fun reading this book as I've had writing it. It's been a long and winding road, and a life changer. Thanks for spending the last 700 pages with me, and I bid you a fond farewell in the only way I know how: Cheerio! Happy, Happy, Joy, Joy! Hasta la Vista! Ta, Ta, for now! Grooovy, Baby! May the force be with you... So long, and thanks for all the fish!

Lab Exercise 7.1: Building a High-Availability Network (Word Search Puzzle)
See Appendix C for answers.

Lab Exercise 7.2: NetWare 6 High-Availability with Cluster Services (Crossword Puzzle)