
Exchange Server 2010 : Deploying a Database Availability Group (part 4)


Moving the Active Copy of the Database

When it becomes necessary to perform maintenance on a member of a Database Availability Group, you might need to move the active copy of a given database from one server to another. Although taking down the node that currently holds the active copy causes the second-priority node to activate, the process is cleaner and smoother if it is initiated intentionally.

Because the active copy of the database is effectively the source of replication, the site holding the active copy will experience an increase in bandwidth usage when multiple copies of that database are distributed across the environment. As such, it is recommended to move the active copy to a well-connected site whenever possible.

To make a replica the active copy of a mailbox database, perform the following steps:

1.
Launch Exchange Management Console.

2.
Expand Organization Configuration.

3.
Click Mailbox.

4.
Click the Database Management tab.

5.
Right-click the database copy you want to make the active copy.

6.
Select Activate Database Copy (see Figure 16).



Figure 16. Active Database Copy.

7.
When the wizard launches, confirm the mount dial override and click OK.

The same process can be done entirely from the Exchange Management Shell as well by following these steps:

1.
Launch Exchange Management Shell.

2.
Type Move-ActiveMailboxDatabase -Identity DBName -ActivateOnServer NewServer.

For example, Move-ActiveMailboxDatabase -Identity "Mailbox Database 2010A" -ActivateOnServer E2010.

Note

When the active copy is moved from one host to another, several things happen automatically. Replication from the active copy to the replicas resumes from the new source, and Client Access servers automatically connect to the copy of the database that is now active. OWA clients might see a small hiccup in service, stating that their mailbox is not mounted, but simply refreshing the browser will reconnect the session.
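For those who want to verify the result from the shell, the following is a minimal sketch using the Get-MailboxDatabaseCopyStatus cmdlet; the database name simply reuses the example above, and the columns shown are a matter of preference:

# Confirm which copy is now active and that replication to the replicas is healthy
Get-MailboxDatabaseCopyStatus -Identity "Mailbox Database 2010A" |
    Format-Table Name,Status,CopyQueueLength,ReplayQueueLength -AutoSize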


Changing Priorities on Replicas of Mailbox Databases

As an Exchange Server 2010 environment grows and evolves, you might want to alter the initial distribution of mailbox database replicas in terms of which replica is preferred for activation if a failure occurs. For example, an environment may start off with an active copy of a database in San Francisco and a replica copy in an office in New York. Some time down the road, a new office, say in Denver, might be brought online. Based on latency, it might be more desirable for Denver, rather than New York, to be the secondary site for San Francisco. Denver could be added as a replica, and it would default to being third in the list of preferred replicas. To change this replica to preference 2, simply follow these steps:

1.
Launch Exchange Management Console.

2.
Expand Organization Configuration.

3.
Click Mailbox.

4.
Click the Database Management tab.

5.
Right-click the database copy whose preference you wish to change and select Properties.

6.
Change the Preferred List Sequence Number, in this case from 3 to 2 (see Figure 17).

Figure 17. Viewing Mailbox Database Copy properties.


7.
Click OK.

The same change can be performed via the Exchange Management Shell by following these steps:

1.
Launch Exchange Management Shell.

2.
Type Set-MailboxDatabaseCopy -Identity DBName\ServerWithCopyToChange -ActivationPreference PreferenceNumber.

For example, Set-MailboxDatabaseCopy -Identity SF-DB-01\EX-Denver-MB01 -ActivationPreference 2.

This would take the copy of the database called SF-DB-01 that was living on Exchange DAG member EX-Denver-MB01 and alter its priority to 2, making it the copy that would become active if the current active copy were to become unavailable.
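To double-check the resulting order, a quick sketch such as the following can be used; it assumes the ActivationPreference property on the database object, which lists each server that holds a copy alongside its preference number:

# List each copy of SF-DB-01 with its activation preference
Get-MailboxDatabase "SF-DB-01" | Format-List Name,ActivationPreference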

Hardware Considerations for Database Availability Group Members

With the evolutionary changes that come with Exchange Server 2010, there is a noticeable shift in recommendations around hardware that should be used to support Exchange Server, especially in the area of Database Availability Groups.

Administrators who were familiar with Cluster Continuous Replication (CCR) in Exchange Server 2007 will immediately see the parallels between DAG and CCR. CCR introduced, and DAG continues to demonstrate, the benefits of directly attached storage for Exchange Server. Older versions of Exchange Server required very high levels of I/O in order for users to get good performance, and they depended on shared storage to provide redundancy at the mailbox level. A DAG has no requirement to share any storage whatsoever across its member nodes, which means DAG members are free to use directly attached storage. It also means that because many DAG members host mostly passive mailbox database copies that aren't being directly accessed, their I/O requirements tend to be relatively low. Coupled with architectural changes in Exchange Server 2010 that lower I/O requirements by transferring larger blocks of data, this further reduces the need for high-performance disk subsystems.

Another factor that heavily influenced the hardware used for Exchange Server in the past was the requirement for localized redundancy and fault tolerance. Traditionally, Exchange servers were built with separate sets of disks for the operating system, the database files, and the transaction logs. The logic was that by mirroring the operating system, one could protect against a server failure due to a failed hard drive. Similarly, logs were always kept separate from the databases so that if a database failed, the data could be restored from a backup and the current log files replayed to bring the database back to a current state. In a Database Availability Group, the Exchange Server mailbox servers become a disposable resource. Not unlike domain controllers in Active Directory, there is very little need to ever restore a failed DAG member, so long as at least one other DAG member exists with a replica of the mailbox databases. One simply installs a fresh Exchange Server 2010 mailbox server, adds it to the appropriate DAG, and then adds that server to the list of replicas for the databases that were previously hosted in that site. The data is replicated and the level of redundancy and fault tolerance is restored to its previous state.
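As a rough sketch of that rebuild process from the Exchange Management Shell, the following reuses example names from this article (the DAG US-DAG-1 and the database SF-DB-01) plus a hypothetical replacement server named EX-SF-MB02:

# Join the rebuilt server to the existing DAG
Add-DatabaseAvailabilityGroupServer -Identity US-DAG-1 -MailboxServer EX-SF-MB02

# Re-create the replica of each database the failed server previously hosted
Add-MailboxDatabaseCopy -Identity SF-DB-01 -MailboxServer EX-SF-MB02 -ActivationPreference 2

Once seeding completes, replication resumes and the previous level of redundancy is restored.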

Based on this ability to quickly and easily replace a lost DAG member, the requirements around local redundancy are effectively removed. Money that previously would have been spent on multiple sets of very high performance disks can instead be used to purchase commodity hardware to act as an additional DAG member holding replicas of databases. Microsoft has gone as far as to recommend building mailbox servers with no RAID whatsoever. The redundancy is effectively moved from the storage layer to the application layer.

When planning the hardware for an Exchange Server 2010 environment, keep in mind that the old CCR model of active/passive is no longer a limiting factor. For example, in Exchange Server 2007, to build in redundancy for a site one had to build a stretched CCR pair in which one system was active for all its users while the other sat by just handling replication. Often administrators would take the hardware for CCR and utilize Hyper-V or VMware to effectively turn each system into two systems: each host ran two guests, one acting as the active CCR node for its own site and the other as the passive node for its partner site. This meant each system had to be built with enough available performance to host all users from both sites. In a Database Availability Group, this one-for-one relationship isn't necessarily a requirement. Imagine a three-site DAG where each host is running at 66 percent capacity, each site has 10 databases with active copies, and each site replicates to the other two. In the event of a failure, rather than having all 10 databases fail over to a single site, a clever administrator might have set 5 of the databases to priority 2 in site B and priority 3 in site C, with the other 5 set to priority 3 in site B and priority 2 in site C. In this scenario, the 66 percent load is spread evenly across the other two sites, resulting in each site being capable of handling it. This level of granularity in determining where loads will go in the case of a failure is exactly what Exchange Server 2007 administrators were wishing for.
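In shell terms, that split might look something like the following sketch, which reuses the Set-MailboxDatabaseCopy syntax shown earlier with purely hypothetical database and server names (DB01 through DB10 active in site A, with EX-SiteB-MB01 and EX-SiteC-MB01 holding the replicas):

# First half of site A's databases prefer site B, then site C (repeat for DB02-DB05)
Set-MailboxDatabaseCopy -Identity "DB01\EX-SiteB-MB01" -ActivationPreference 2
Set-MailboxDatabaseCopy -Identity "DB01\EX-SiteC-MB01" -ActivationPreference 3

# Second half prefer site C, then site B (repeat for DB07-DB10)
Set-MailboxDatabaseCopy -Identity "DB06\EX-SiteC-MB01" -ActivationPreference 2
Set-MailboxDatabaseCopy -Identity "DB06\EX-SiteB-MB01" -ActivationPreference 3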

This concept can be carried further in the case of smaller sites that replicate to a centralized disaster recovery site. While an environment might have a dozen sites with, say, 500 users each, one doesn't necessarily need to build the disaster recovery site to handle all 6,000 users at full performance, because the odds of all 12 sites suffering simultaneous failures are very low. One might instead plan the capacity from a performance standpoint to handle, say, 2,000 users but build the storage to replicate the mailboxes for all 6,000 users.

In older versions of Exchange Server, supporting 6,000 users required a fair number of spindles on the disk subsystem. Exchange Server 2003, for example, recommended about 0.75 disk I/O per second per user. With a 6,000-user load, this meant about 4,500 I/O per second. Given that a typical 10,000rpm disk can provide about 110 I/O per second at an acceptable disk latency (under 20ms), it required 41 disks to provide this level of performance. If one needed the storage to be redundant, and most did, this requirement jumped to 82 disks to provide the I/O in a RAID 0+1 configuration. It was in these days that SAN reigned supreme, as it was otherwise unrealistic to present 82 disks to an Exchange server. In Exchange Server 2010, the requirement is more like 0.1 I/O per second per user, due to the larger transfer block size and the amount of data that is cached; the cache of mailbox information is significantly larger because the 64-bit architecture makes large amounts of memory accessible. In this scenario, the same 6,000 users would require 600 I/O per second, which could be provided by 6 spindles. Assuming one was planning to replicate the data via a Database Availability Group, the local requirement would literally be for 6 disks in RAID 0. The cost of a SAN plus the 82 disks, minus the cost of the 6 local disks, would more than cover the price of a second server with 6 disks to provide the replica. It becomes easy to see that Exchange Server 2010 has the potential to cost much less to deploy and to deliver a much faster return on investment.
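The arithmetic in the preceding paragraph reduces to a quick back-of-the-envelope calculation; the following sketch simply restates those numbers in Exchange Management Shell (PowerShell) form, and is illustrative only:

# Spindle math from the figures above
$users = 6000
$iopsPerSpindle = 110                                                 # 10,000rpm disk at under 20ms latency

$spindles2003 = [math]::Ceiling(($users * 0.75) / $iopsPerSpindle)    # 41 disks
$spindles2010 = [math]::Ceiling(($users * 0.1)  / $iopsPerSpindle)    # 6 disks

"Exchange 2003: {0} spindles ({1} in RAID 0+1)" -f $spindles2003, ($spindles2003 * 2)
"Exchange 2010: {0} spindles" -f $spindles2010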

The logical extension of the reduction in resources required to support Exchange Server 2010 in a Database Availability Group model is the concept of virtualizing the hosts. Depending on one's level of expertise and comfort with virtualization, it might make sense to initially virtualize one set of the replicas in order to gain knowledge and confidence in managing a virtualized environment. For those who have already headed down the path of virtualization, all the roles in Exchange Server 2010 are very good candidates.

Perhaps the simplest point to take away from this discussion is that the old requirement for identical hardware across all cluster nodes no longer applies. Database Availability Groups, while loosely dependent on Windows clustering services, have no requirement for the hardware to be identical, or even similar for that matter. Mixing and matching levels of performance, processor architecture, and storage types is completely supported. Just make sure a given system has enough performance to perform its primary job and to take over any additional loads for which you plan it to be redundant.

Note

Administrators who are considering virtualizing very large Exchange Server 2010 servers should be aware of performance issues surrounding non-uniform memory access (NUMA) boundaries. The short version is that host system memory divided by the number of processor cores gives the approximate size of a NUMA boundary on that system. Guest virtual machines that are allocated more memory than a single NUMA boundary will suffer a performance loss compared to a virtual machine whose memory allocation is equal to or smaller than a NUMA boundary.
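Following the rule of thumb in the note above, a rough way to estimate the boundary on a given virtualization host is sketched below; it uses standard WMI classes and simply divides total physical memory by the total number of reported cores:

# Approximate NUMA boundary = host RAM / total processor cores (per the note above)
$ramGB = (Get-WmiObject Win32_ComputerSystem).TotalPhysicalMemory / 1GB
$cores = (Get-WmiObject Win32_Processor | Measure-Object NumberOfCores -Sum).Sum
"Approximate NUMA boundary: {0:N1} GB" -f ($ramGB / $cores)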


Dedicating a Network to Log Shipping for DAG Replication

Many companies that run Exchange Server have invested in very high performance WAN connections. MPLS networks have become something of a corporate standard due to their performance and stability. The drawback to these high-speed MPLS networks is often the cost; although bandwidth has become steadily more affordable, these connections are nonetheless a large portion of most IT groups' budgets. Many companies have therefore moved toward a strategy of using the high-performance MPLS network to service end users while moving replication to lower-cost networks, such as IPSec or VPN tunnels running across the Internet. Environments that run these multi-tiered networks will likely want to take advantage of Exchange Server 2010's capability to specify a network to be used for DAG replication. In Exchange Server 2007, one had to create additional network interfaces as cluster resources, associate them with each cluster group, and then use hosts files so that the CCR or SCR targets always resolved their sources via the dedicated replication interfaces. Exchange Server 2010 makes this significantly easier by allowing an administrator to define a Database Availability Group network.

To create a Database Availability Group network via the GUI, follow these steps:

1.
Launch Exchange Management Console.

2.
Expand Organization Configuration.

3.
Click Mailbox.

4.
Click the Database Availability Group tab.

5.
Right-click the DAG for which you need to define a replication network.

6.
Choose New Database Availability Group Network.

7.
Enter a Network Name of up to 128 characters.

8.
Enter a Network Description of up to 256 characters.

9.
Click Add to add subnets to the DAG network.

10.
Check the box for Enable replication.

11.
Click New.

To create a Database Availability Group network via the Exchange Management Shell, follow these steps:

1.
Launch Exchange Management Shell.

2.
Type New-DatabaseAvailabilityGroupNetwork -DatabaseAvailabilityGroup DAG -Name DAGNet -Description "description" -Subnets "#.#.#.#/#" -ReplicationEnabled:$true.

For example, New-DatabaseAvailabilityGroupNetwork -DatabaseAvailabilityGroup US-DAG-1 -Name DAGNetworkSFtoNY -Description "dedicated replication network via IPSec tunnel from SF to NY" -Subnets "192.168.1.0/24" -ReplicationEnabled:$true.
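After the dedicated network is defined, it is worth confirming how the DAG's networks are configured. The sketch below assumes the names from the example above, plus a hypothetical client-facing network called DAGNetwork01 on which replication is turned off so that log shipping favors the new network:

# Review the dedicated replication network just created
Get-DatabaseAvailabilityGroupNetwork -Identity US-DAG-1\DAGNetworkSFtoNY | Format-List

# Disable replication on the client-facing (MAPI) network
Set-DatabaseAvailabilityGroupNetwork -Identity US-DAG-1\DAGNetwork01 -ReplicationEnabled:$false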

Using DAG to Provide a “Tiered Services” Model

One of the limitations of Cluster Continuous Replication in Exchange Server 2007 was that you didn't have any granularity in which content got replicated; it was really an all-or-nothing configuration. Database Availability Groups give you the ability to determine which databases should be replicated, where, and with what priority. This allows an administrator to create an interesting tiered services model that establishes parameters around classes of mailboxes.

For example, one might wish to replicate all mailboxes locally to allow for simplified maintenance windows. One can simply alter a replica to be the current active copy and perform maintenance on the previously active copy. When that maintenance is completed, the administrator can optionally reactivate the previous replica, after it has had a chance to get back into sync. This is likely to be a common scenario as it accomplishes redundancy and allows for quick maintenance without incurring a lot of overhead expense. LAN bandwidth isn’t much of a concern in most environments and the total cost to provide this level of convenience and protection is simply an additional server and its associated licenses.

In more advanced environments, there may be a requirement to replicate mailbox data offsite to protect against the failure of an entire site or perhaps even an entire geographic region. In some cases, this need for geographic redundancy may really only be appropriate for specific types of users. While managers and executives may require nearly 100 percent mailbox availability, that level of protection might not be required for resource mailboxes, part-time workers, or factory-floor users whose jobs aren't dependent on email access. For these situations, administrators can take advantage of the granularity of Database Availability Groups to set different replication rules for different databases. By organizing users into databases by job type, one can easily increase the number of replicas for specific groups to provide the level of protection they need without incurring the overhead of replicating an entire server.
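As a small illustration of that approach, the following sketch adds an off-site replica only for a hypothetical database holding executive mailboxes (the database name Executives-DB and server name EX-DR-MB01 are examples, not names from this article), leaving the other databases with their purely local copies:

# Give only the executive database an additional copy in the disaster recovery site
Add-MailboxDatabaseCopy -Identity "Executives-DB" -MailboxServer EX-DR-MB01 -ActivationPreference 3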
