
With log shipping, whether you use the SQL Server 2000 Enterprise Edition functionality or code your own, there are some common points you must think about prior to configuring your log shipping solution.

More Info

Many of the concepts and rules that apply to configuring log shipping are similar or the same as those for upgrading or migrating to SQL Server 2000. For more information, see Chapter 13, Highly Available Microsoft SQL Server Upgrades.

Ask the Right Questions

Before you even think about implementing log shipping, you must determine the answers to certain questions. Without the answers, you cannot accurately plan and set up log shipping from both an availability and performance perspective.

Business Questions

Perhaps the most important questions to ask are not the technical ones but the ones related to why you are implementing log shipping and what the cost to the business will be.

Technical Questions

After you understand the nontechnical issues and potential barriers, you can then consider the technical issues and barriers.

How Current Do You Need To Be?

To briefly revisit a point mentioned in Chapter 3, log shipping inherently involves some amount of latency; it is not real time. Transactions are applied to the secondary only after they have been backed up on the primary, copied across the network, and restored from the transaction log backup, so you are only as current as the last transaction log that has completed that cycle.

The log shipping secondary is only as current as the last transaction log applied, so if the delta is five minutes off of the primary, your secondary should always remain five minutes behind if all is functioning properly. Analyzing these points is fairly straightforward.

Secondary Server Capacity and Configuration

As mentioned briefly earlier, the capacity and configuration of your secondary are crucial to the success of any log shipping solution. If you underpower your secondary server, end users will notice the performance difference. The point of switching servers is to provide, at a minimum, the same quality or level of service you provided before.

Log Shipping More Than One Database to One Secondary

One of the nice features of log shipping is that you can, as long as you have the capacity, use one instance as a standby for multiple databases. This rests on two assumptions: that you have the capacity to handle the additional workload in the event that all of the databases need to serve end users at once (not unlike planning failover clustering to ensure that multiple instances do not affect each other in a failover), and that there are no conflicts or security issues in doing so. Some common issues related to log shipping more than one database to a single secondary include the following:

Disk Space, Retention, and Archiving

It is very important to plan the disk capacity needed for implementing log shipping. You must know how large your transaction log backups are and how much disk capacity you have, and then coordinate those two factors with your corporate archival scheme. During configuration of the built-in feature of SQL Server 2000, you can set a retention period for the transaction log files, so that retention period must be known from the start.
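If you want to gauge how large your transaction log backups actually are, one option is to query the backup history that SQL Server records in msdb. The following is a minimal sketch, assuming a database named LogShipDB (a placeholder); adjust the name and the date range for your environment.

-- Count, total size, and average size of transaction log backups over the last 7 days
SELECT COUNT(*) AS log_backups,
       SUM(backup_size) / 1048576.0 AS total_mb,
       AVG(backup_size) / 1048576.0 AS avg_mb
FROM msdb.dbo.backupset
WHERE database_name = N'LogShipDB'
      AND type = 'L' -- 'L' = transaction log backup
      AND backup_start_date >= DATEADD(dd, -7, GETDATE())

Multiplying the average size by the number of files your retention period keeps on disk gives a rough lower bound for the space the backup share needs.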

More Info

For more information on retention and archiving of backup files, see the topic Backup Retention in Chapter 10, Implementing Backup and Restore.

Full-Text Searching and Log Shipping

If the primary database utilizes full-text searching, you must consider how you will handle anything relating to full-text search after the role change. The issues are pretty straightforward:

  • You cannot take a copy of an older full-text index and apply it to a database that is out of sync with that copy. Although it might seem to work for a while, you will eventually encounter problems. This is not supported or recommended.

  • You cannot build a full-text index on a database that is in NORECOVERY or STANDBY mode, which is required for the transaction logs to be applied. You therefore have to start building the index after the database is brought online after the role change. This means some features of the application will not function (which might prove unacceptable) and, depending on the size of the index, it could take quite some time to generate.

The steps to work around this are listed below. When you are using Change Tracking, the only steps you need to perform on an ongoing basis (once you have established the backup of the database and the DBID) are Steps 4, 5, and 6. These should be performed as often as possible because all the changes made to the full-text catalogs are kept in files in the FTDATA folder. To roll these changes to the secondary server in log shipping, you need to copy these files over to the secondary machine at regular intervals.

Important

Stopping and starting the MSSearch service could cause an availability problem from an application perspective and result in perceived unavailability or downtime of your application. Do this with caution, after making the proper risk assessment for your business.

  1. Create and populate the full-text catalog on the primary machine.

  2. Back up the primary database.

  3. Restore the primary database on the secondary server. The DBID has to be the same. Follow the procedure in Knowledge Base article 240867, INF: How to Move, Copy, and Back Up Full-Text Catalog Folders and Files, to restore a database with the same DBID.

  4. Stop the MSSearch service on both the primary and secondary servers.

  5. Copy the SQL xxxxxyyyyy folder under FTDATA to the corresponding location on the secondary server.

  6. Restart the MSSearch service.

  7. Re-create and repopulate the catalogs on the secondary database.

  8. Restore the database backup using the STANDBY option and a .tuf file extension.

  9. Run the wizard and select the existing database restored in Step 8.

  10. Select the appropriate settings for log shipping.

  11. Create a scheduled job to stop MSSearch, copy the catalog files from the FTDATA folder over to the secondary machine, and restart the MSSearch service, as sketched in the example that follows the More Info note below.

More Info

The Microsoft Knowledge Base article 240867, INF: How to Move, Copy, and Back Up Full-Text Catalog Folders and Files, details the process of moving, copying, and backing up full-text files to another server. This article can be found at http://support.microsoft.com.
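Step 11 above calls for a scheduled job that stops MSSearch, copies the catalog files, and restarts the service. The following is a minimal sketch of such a job using the SQL Server Agent job procedures in msdb; the job name, paths, share name, catalog folder name (SQL xxxxxyyyyy depends on your DBID and catalog ID), and schedule are all placeholders, and a production version would also need to deal with stopping and restarting MSSearch on the secondary server.

-- Job with a single CmdExec step: stop MSSearch, copy the catalog folder, restart MSSearch
EXEC msdb.dbo.sp_add_job
    @job_name = N'Copy full-text catalog to secondary'

EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Copy full-text catalog to secondary',
    @step_name = N'Stop MSSearch, copy catalog, restart MSSearch',
    @subsystem = N'CMDEXEC',
    @command = N'net stop mssearch && xcopy "C:\Program Files\Microsoft SQL Server\MSSQL\FTDATA\SQLxxxxxyyyyy" "\\SECONDARY\FTDATA$\SQLxxxxxyyyyy" /E /Y && net start mssearch'

EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'Copy full-text catalog to secondary',
    @name = N'Hourly catalog copy',
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @freq_subday_type = 8,     -- repeat in units of hours
    @freq_subday_interval = 1  -- every hour

EXEC msdb.dbo.sp_add_jobserver
    @job_name = N'Copy full-text catalog to secondary',
    @server_name = N'(local)'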

Recovery Models and Log Shipping

Log shipping requires that the databases participating in log shipping use either the Full or Bulk-Logged recovery model. By default, all databases are configured with Full recovery, as that is the default behavior of the model database. Recovery models were first introduced in SQL Server 2000, and there is no real equivalent of this feature in prior versions of SQL Server. Simple recovery does not allow you to make transaction log backups, and therefore does not let you use log shipping, because it breaks the log sequence number (LSN) chain. When you apply transaction logs, the process checks that the first LSN in the new backup file comes sequentially after the last LSN applied. A SQL Server 2000 LSN error during transaction log application looks similar to this:

Server: Msg 4305, Level 16, State 1, Line 2
The log in this backup set begins at LSN 6000000007200001, which is too late to apply to the database. An earlier log backup that includes LSN 6000000005000001 can be restored.
Server: Msg 3013, Level 16, State 1, Line 2
RESTORE LOG is terminating abnormally.

This is the SQL Server 7.0 error:

Server: Msg 4305, Level 16, State 1, Line 2
This backup set cannot be restored because the database has not been rolled forward far enough. You must first restore all earlier logs before restoring this log.
Server: Msg 3013, Level 16, State 1, Line 2
Backup or restore operation terminating abnormally.

With all versions of SQL Server prior to SQL Server 2000, if you did anything such as a Transact-SQL BULK INSERT or another nonlogged operation, or if you set the Truncate Log On Checkpoint option (which is no longer valid in SQL Server 2000), you would invalidate regular transaction log backups. First and foremost, Truncate Log On Checkpoint does exactly what it says: when a checkpoint happens, the log is truncated and not backed up. Database administrators (DBAs) frequently like to turn this option on without thinking about the recoverability of SQL Server.
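For reference, in SQL Server 7.0 and earlier this option was set with sp_dboption; the database name in this sketch is a placeholder.

-- Enabling this option truncates the log at every checkpoint, which makes
-- transaction log backups (and therefore log shipping) impossible
EXEC sp_dboption N'YourDatabase', 'trunc. log on chkpt.', 'true'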

From a log shipping perspective, there is no apparent difference between Full and Bulk-Logged, but there are some differences that affect performance. If you are using Bulk-Logged, the transaction logs contain the information added or modified since the last transaction log backup. In addition, the transaction log contains the data pages modified by any bulk operations (such as BULK INSERT or bcp) since the last backup. This means that you will potentially have larger files that need to be backed up, copied, and restored if you are doing bulk operations, which definitely affects your copy time if you have limited network bandwidth. Consider the following example: if you do a bulk load of data on a single-processor server that runs at 3 MB per second and takes 10 minutes, you may have generated 1800 MB (or 1.8 GB) of changed data pages!

With Full, you still have the same issue of large file sizes as with Bulk-Logged. However, one of the benefits of Full is that the data rate is effectively reduced to nearly match your network. Even though you are technically moving a bit more information across the network in total (Bulk-Logged and Full backups are about the same size; Bulk-Logged gathers the data directly from the data file instead of indirectly from the transaction log as redo log records), the secondary server does not lag far behind, because the primary is being loaded much more slowly due to the logging of all inserted data. If you have a server with eight processors and the load rate is roughly eight times as fast, the transaction logs grow correspondingly faster and the time to back up, copy, and apply them increases accordingly.

Caution

If you configure log shipping and then switch the recovery model to Simple at any point after it is up and running, you invalidate log shipping. An example of when this could occur is if you were doing a bulk insert and wanted to minimize your log file size and not have a record of the page changes. The same also applies if someone switches the recovery model to Simple, even briefly and whether accidentally or on purpose, and then immediately changes it back.
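One simple safeguard is to check the recovery model before, or as part of, each backup cycle and correct it if it has been changed. A minimal sketch, again assuming a database named LogShipDB:

-- Returns FULL, BULK_LOGGED, or SIMPLE
SELECT DATABASEPROPERTYEX(N'LogShipDB', 'Recovery') AS recovery_model

-- If the model was switched to Simple, switch it back. Note that you must
-- then take a full (or differential) backup to re-establish the log backup
-- chain before transaction log backups can be applied again.
ALTER DATABASE LogShipDB SET RECOVERY FULL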

More Info

For more information on recovery models, see Chapter 10, Implementing Backup and Restore.

Network Bandwidth

Network bandwidth is a potential barrier to successful log shipping implementation. The three main issues associated with network bandwidth were listed in the earlier section, Technical Questions.

Although log shipping is arguably the easiest SQL Server-based high availability method to configure, it can crush a network if your transaction log backup files are huge, and that will eventually widen the delta of time by which the primary and secondary are out of sync. Coupled with that, you also need reliable network connections between the participating servers. Intermittent network problems will not increase your availability when it comes to log shipping; they can only hurt it.
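As a rough, hypothetical illustration: a 1.8-GB transaction log backup copied over a link that sustains 10 MB per second takes about three minutes (1,800 MB / 10 MB per second = 180 seconds), whereas the same file over a shared link delivering only 1 MB per second takes roughly 30 minutes, during which the secondary falls that much further behind the primary.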

Tip

If you are sending large transaction logs on a regular basis, you should configure a separate and dedicated network card as well as a private network (such as a 10.x.x.x network) for sending transaction logs between the primary and secondary servers. This ensures that user traffic will not be affected. You might also want to consider a faster network (such as Gigabit Ethernet) to ensure that your recoverability happens much more quickly.

Logins and Other Objects

Log shipping only captures whatever is in the initial database backup (including database users, but not their corresponding server-level logins) as well as any transactions captured in subsequent transaction logs. That leaves any other objects that reside outside the database, or that are not captured as part of the transaction log, to be dealt with separately.

Tip

As far as objects go, outside of Data Transformation Services (DTS) packages, you should really place any objects related to the database in the database itself. Unless there is a compelling, logical reason to put objects outside the database, do not do it; such objects create problems when you are trying to synchronize a secondary server to become a warm standby. If such objects exist, they should be recorded in any document related to the solution. If possible, script the objects and have those scripts available.

Stored Procedures, Extended Stored Procedures, Functions, and More

As just noted, things such as stored procedures should be created in the database itself. Because the creation of a normal stored procedure is captured in the transaction log, it will automatically be created at the secondary. If you have anything residing outside the database, such as in msdb, you need to manually create all of these objects on the secondary server to ensure a successful role change.

Logins

There are two levels of logins that you need to take into account: the server-level logins stored in the master database and the database users within the log shipped database that map to them.

If you do not take both types into account, you could end up with orphaned users in your database and application-related problems accessing the database after the role change process.
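As an illustration of the database-level half of the problem, after a role change you can check for and remap orphaned users with sp_change_users_login. This is a minimal sketch; the database and user names are placeholders.

USE LogShipDB
GO
-- List database users whose security identifiers no longer match a login on this server
EXEC sp_change_users_login 'Report'
GO
-- Remap one orphaned user to the server login of the same name
EXEC sp_change_users_login 'Update_One', 'AppUser', 'AppUser'
GO

Note that sp_change_users_login applies only to SQL Server authenticated logins; Windows logins and users are matched by security identifier, so the corresponding Windows logins simply need to exist at the server level on the secondary.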

More Info

The section Step 3: Post-Wizard Configuration Tasks later in this chapter details one way to configure the transfer of your logins and users.

DTS Packages

To move DTS packages to another SQL Server, the easiest method is probably to save the package to a file and manually copy and add it to the secondary. There is no automated way to perform this task.

Caution

Make sure that the DTS package is compatible with the version of SQL Server that is the destination. If, for example, functionality or the file format changed because of a fix in a service pack (as happened between SQL Server 7.0 Service Pack 1 and SQL Server 7.0 Service Pack 2) and the destination server cannot handle that functionality, the package will be broken on the standby, and you need to resolve those issues prior to bringing that secondary online as a primary. Similarly, you might need to fix settings (such as source or destination server names) within the DTS package to match the new server. Do not assume that it will work unmodified.

Clients, Applications, and Log Shipping

One of the most important considerations for using log shipping is planning how you will redirect end users and applications to the new server. Unlike failover clustering, where the switch to another physical server is abstracted from the end user or application, a change to another server is in most cases not abstracted. That means the systems and applications accessing SQL Server must tolerate such a change. As with failover clustering, there will be some downtime involved.

Coding Your Application for Log Shipping

The following are some tips for application developers when coding for a solution that will eventually include log shipping:

Role Change

Performing the role change from a SQL Server perspective is straightforward; redirecting clients so that you have the least impact on applications and end users is not. As in failover clustering, not only is there an interruption in service, you are also switching to a completely different server. You have a few options available to you to assist in the role change:

The goal is to pick a manageable solution that will incur the least downtime.
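Whichever redirection method you choose, the SQL Server half of the role change comes down to applying any transaction log backups that have not yet been restored and then recovering the secondary database so it can accept connections. A minimal sketch, assuming a database named LogShipDB and a placeholder backup file name:

-- Apply the last outstanding transaction log backup, leaving the database unrecovered
RESTORE LOG LogShipDB
    FROM DISK = N'D:\LogShipShare\LogShipDB_tlog_latest.trn'
    WITH NORECOVERY

-- Recover the database and bring it online as the new primary
RESTORE DATABASE LogShipDB WITH RECOVERY

If the old primary is still reachable, back up the tail of its transaction log first and apply that backup before recovering, so that no committed transactions are lost.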

More Info

For more information on configuring Network Load Balancing for use with log shipping, see Chapter 5, Designing Highly Available Microsoft Windows Servers.

Security

As with any technology, log shipping has security considerations that you must take into account up front because they influence how you will plan and deploy a solution. Because all security changes made within the database (such as GRANT statements) are recorded in the transaction log, they are applied automatically to the secondary server once log shipping is configured and working properly.
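For example, a database-level permission change issued on the primary, such as the following (the table and user names are hypothetical), is written to the transaction log and therefore reaches the secondary through the normal backup, copy, and restore cycle with no separate action on your part:

GRANT SELECT ON dbo.Orders TO AppUser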

More Info

For additional information or changes to what is written in this section, see Knowledge Base article 321247, HOW TO: Configure Security for Log Shipping, which can be found at http://support.microsoft.com.

SQL Server Startup Account

If the server hosting your SQL Server 2000 instance is part of a domain, and not a workgroup, you should use a domain account to start the SQL Server services. A domain account is absolutely required for certain features, such as failover clustering. If this is a nonclustered instance, you can use a local network account or the Local System account.

Securing the Transfer of Logins

Because the transfer of logins using the built-in functionality requires a bulk copy out of the syslogins table, the resulting file should be kept in a secure directory. Files generated by the bcp utility are not encrypted or encoded in any way.
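The file in question is produced by a bulk copy command along these lines (an illustrative sketch; the path and server name are placeholders, and the exact command the built-in jobs use may differ), which is why the target directory should be locked down with restrictive NTFS and share permissions:

bcp master..syslogins out D:\SecureTransfer\syslogins.dat -N -S PRIMARYSERVER -T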

Securing the Backup Share

You also need to secure the directories that contain your transaction log backups, because the files created by SQL Server are not encrypted in any way. The network share should be secured so that the servers participating in log shipping can access it, but unauthorized users cannot.

Log Shipping Across Domains

You can log ship a database across domains. To do this, you should set up two-way trusts between the domains involved in log shipping. This is done through the Active Directory Domains and Trusts tool, located on the Start menu in the Administrative Tools folder, as shown in Figure 7-1. If it is not there, it can be started from %systemroot%\System32\Domain.msc. If you cannot establish trusts, you can use pass-through security, which involves the use of the local network account. This cannot be done on a clustered instance of SQL Server.

Figure 7-1: The Active Directory Domains and Trusts tool of Windows 2000.

Authentication for Log Shipping

Microsoft recommends that you use Windows Authentication for the SQL Servers participating in log shipping. If you use Mixed Mode security with the built-in functionality, a login named log_shipping_monitor_probe is created if it does not already exist.
