Network
traffic patterns are an important consideration when implementing AD
DS, and a firm understanding of the “pipes” that exist in an
organization’s network is warranted. If all remote sites are connected
by T1 lines, for example, there will be fewer replication concerns than
if network traffic passes through a slower link.
With this point in mind,
mapping out network topology is one of the first steps in creating a
functional and reliable replication topology.
Mapping Site Design into Network Design
Site structure in
Windows Server 2008 R2 is completely independent from the domain, tree,
and forest structure of the directory. This type of flexibility allows
domain designers to structure domain environments without needing to
consider replication constraints. Consequently, domain designers can
focus solely on the replication topology when designing their site
structure, enabling them to create the most efficient replication
environment.
Essentially, a site
diagram in Windows Server 2008 R2 should look similar to a WAN diagram
of your environment. In fact, site topology in AD DS was specifically
designed to be flexible and adhere to normal WAN traffic and layout.
This concept helps to define where to create sites, site links, and
preferred site link bridgeheads.
Figure 1
illustrates how a sample site structure in AD overlays easily onto a
WAN diagram from the same organization. For this reason, it is a good
idea to involve WAN personnel in site design discussions. Because the
WAN environment changes over time as well, WAN personnel who are
familiar with the site design will be more inclined to inform the
operating system group of changes that could affect its efficiency.
Establishing Sites
Each “island” of high connectivity
should normally be configured as a site. This not only assists in
domain controller replication, but also ensures that clients are
directed to the domain controller and global catalog server closest to them.
Note
If your DNS records are
inaccurate for a site, clients could potentially be redirected to a
domain controller or global catalog server other than the one closest
to them. Consequently, it is important to ensure that all sites listed
in DNS contain the appropriate server host records.
Choosing Between One Site and Many Sites
In
some cases, multiple LAN segments might be consolidated into a single
site, given that the appropriate bandwidth exists between the segments.
This might be the case for a corporate campus, with various buildings
that each form a LAN “island” but are all joined by high-speed
backbones. However, there might also be reasons to break these segments
into separate sites. Before deciding whether to consolidate segments
into one site or separate them into individual sites, all factors must
be taken into account.
A single-site design is
simpler to configure and administer, but it also increases
intersegment traffic, because computers in all buildings must traverse
the network for domain authentication, lookups, replication,
and so on.
A multiple-site
design addresses the intersegment traffic problem because all
local client requests are handled by domain controllers or global
catalog servers in the local site.
However, the environment becomes more complex to manage and requires more resources.
Note
It is no longer a firm
recommendation that all sites contain at least one global catalog domain
controller server. The introduction of the universal group caching
capability and Read-Only Domain Controllers (RODCs) can reduce the
number of global catalog servers in your environment and significantly
reduce the amount of replication activity that occurs. This
recommendation still stands, however, for sites with a local Exchange
server, as one or more local full global catalog servers are still
critical for these environments.
An organization's requirements
should be mapped against the resources available to determine
the best site design. Proper site layout helps to
logically organize traffic, increase network responsiveness, and
introduce redundancy into an environment.
Associating Subnets with Sites
It is critical to
establish the physical boundaries of your AD sites because this
information enables clients to direct logon and directory requests
efficiently and helps to determine where new domain controllers should
be located. Multiple subnets can be associated with a single site, and
all subnets within an organization should be associated with
their respective sites to realize the greatest benefit.
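The subnet-to-site association described above is essentially a longest-prefix match against the configured subnets. The following is a minimal Python sketch of that lookup; the site names and subnet ranges are hypothetical examples, not values from the text:

```python
from ipaddress import ip_address, ip_network

# Hypothetical subnet-to-site associations, mirroring what an
# administrator would configure in Active Directory Sites and Services.
SUBNET_TO_SITE = {
    ip_network("10.1.0.0/16"): "Chicago",
    ip_network("10.1.5.0/24"): "Chicago-Lab",   # more specific subnet
    ip_network("192.168.0.0/24"): "Denver",
}

def site_for_client(addr):
    """Return the site whose most specific associated subnet contains addr."""
    client = ip_address(addr)
    matches = [net for net in SUBNET_TO_SITE if client in net]
    if not matches:
        return None  # client falls outside every defined subnet
    # Like AD, prefer the most specific (longest-prefix) match.
    best = max(matches, key=lambda net: net.prefixlen)
    return SUBNET_TO_SITE[best]
```

A client at 10.1.5.7 would match both 10.1.0.0/16 and 10.1.5.0/24, and be assigned to the more specific Chicago-Lab site.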
Determining Site Links and Site Link Costs
As previously
mentioned, site links should normally be designed to overlay the WAN
link structure of an organization. If multiple WAN routes exist
throughout an organization, it is wise to establish multiple site links
to correspond with those routes.
Organizations with a
meshed WAN topology need not establish site links for every connection,
however. Logically consolidating the potential traffic routes into a
series of pathways is a more effective approach and helps to make your
environment easier to understand and troubleshoot.
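Site link costs are then assigned so that faster WAN routes are preferred for replication. A common heuristic (an assumption here, since the text does not prescribe a formula) is cost = 1024 / log10(available bandwidth in Kbps):

```python
import math

def site_link_cost(bandwidth_kbps):
    # Commonly cited heuristic: cost = 1024 / log10(bandwidth in Kbps).
    # Faster links get lower costs, so the KCC prefers those routes
    # when building the intersite replication topology.
    return round(1024 / math.log10(bandwidth_kbps))
```

For example, a 56Kbps link works out to a cost of 586, while a 1,536Kbps T1 works out to 321, so replication traffic prefers the T1 route.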
Choosing Replication Scheduling
Replication traffic
can potentially consume all available bandwidth on small or saturated
WAN links. By changing the site link replication schedule to off-hours,
you can force this type of traffic to occur during times when
the link is less heavily utilized. Of course, the drawback to this
approach is that changes made on one side of the site link would not be
replicated until the replication schedule dictates. Weighing the needs
of the WAN with the consistency needs of your directory is, therefore,
important. Throttling the replication schedule is just another tool that
can help to achieve these goals.
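The effect of an off-hours schedule amounts to a simple time-window check; note that a window such as 22:00 to 05:00 wraps past midnight. The window boundaries below are arbitrary examples:

```python
def replication_allowed(hour, window_start=22, window_end=5):
    """Return True if replication may occur at the given hour (0-23)."""
    if window_start <= window_end:
        # Window falls entirely within one day, e.g. 01:00-05:00.
        return window_start <= hour < window_end
    # Window wraps past midnight, e.g. 22:00-05:00.
    return hour >= window_start or hour < window_end
```

Changes made outside the window simply queue up until the next scheduled interval, which is the consistency trade-off described above.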
Choosing SMTP or IP Replication
By
default, most connections between sites in AD DS use IP for
replication because the default transport, RPC over IP, is faster and
more efficient. However, in some cases, it might be wiser to utilize
SMTP-based replication. For example, if the physical links over which
the replication traffic passes are intermittent rather than always on,
SMTP might be a better fit because RPC has a much lower retry
threshold.
A second common use for SMTP
connections is in cases where replication needs to be encrypted so as to
cross unsecured physical links, such as the Internet. SMTP can be
encrypted through the use of a Certificate Authority (CA) so that an
organization that requires replication across an unsecured connection
can implement certificate-based encryption.
Note
SMTP replication cannot be
used as the only method of replicating to a remote site. It can only be
used as a supplemental replication transport, as only certain aspects of
domain replication are supported over SMTP. Subsequently, the use of
SMTP replication as a transport is limited to scenarios where this form
of replication is used in addition to RPC-based replication.
Windows Server 2008 R2 Replication Enhancements
The introduction of
Windows 2000 provided a strong replication topology that was adaptive to
multiple environments and allowed for efficient, site-based
dissemination of directory information. Real-world experience with the
product has uncovered several areas in replication that required
improvement. Windows Server 2008 R2 addressed these areas by including
replication enhancements in AD DS that can help to increase the value of
an organization’s investment in AD.
Domain Controller Promotion from Media
An ingenious mechanism in
Windows Server 2008 R2 allows for the creation of a domain controller
directly from media such as a burnt CD/DVD, USB drives, or tape. The
upshot of this technique is that it is now possible to remotely build a
domain controller or global catalog server across a slow WAN link by
shipping the media to the remote site ahead of time, effectively
eliminating the common practice of building a domain controller in the
central site and then shipping it to a remote site after the fact.
The concept behind the
media-based GC/DC replication is straightforward. A current, running
domain controller backs up the directory through a normal backup
process. The backup files are then copied to a backup media, such as a
CD/DVD, USB drive, or tape, and shipped off to the remote destination.
Upon their arrival, the dcpromo command can be run, and Advanced mode
can be chosen from the wizard. In the Advanced mode of the wizard, the
dialog box shown in Figure 2 allows for dcpromo to be performed against a local media source.
After
the dcpromo command restores the directory information from the backup,
an incremental update of the changes made since the media was created
will be performed. Because of this, there still needs to be network
connectivity throughout the dcpromo process, although the amount of
replication required is significantly less. Because some dcpromo
operations across slow WAN links have been known to take days and even
weeks, this concept can dramatically help to deploy remote domain
controllers.
Note
If the copy of the global
catalog that has been backed up is older than the tombstone date for
objects in the AD DS (by default, 60 days from when an object was last
validated as being active), this type of dcpromo will fail. This
built-in safety mechanism prevents the introduction of lingering objects
and also ensures that the information is relatively up to date and no
significant incremental replication is required.
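The safety check in the note amounts to comparing the backup media's age against the tombstone lifetime. A minimal sketch, assuming the 60-day default mentioned above:

```python
from datetime import datetime, timedelta

TOMBSTONE_LIFETIME_DAYS = 60  # AD DS default cited in the note above

def media_usable(backup_date, now, lifetime_days=TOMBSTONE_LIFETIME_DAYS):
    """Return True if install-from-media backup is newer than the
    tombstone lifetime, so a dcpromo from this media would not risk
    reintroducing lingering objects."""
    return (now - backup_date) < timedelta(days=lifetime_days)
```

A one-month-old backup passes the check; a backup from more than 60 days ago would cause the promotion to fail.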
Identifying Linked-Value Replication/Universal Group Membership Caching
Previously, all groups
in AD DS had their membership listed as a multivalued attribute. This
meant that any time the group membership was changed, the entire group
membership needed to be rereplicated across the entire forest. Windows
Server 2008 R2 includes an incremental replication approach to these
objects, known as linked-value replication. This approach significantly
reduces replication traffic associated with AD DS.
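Conceptually, linked-value replication ships only the per-member changes instead of the entire multivalued membership attribute. A toy illustration of the difference:

```python
def membership_delta(old_members, new_members):
    # With linked-value replication, only the individual adds and
    # removes travel between domain controllers, rather than the
    # full membership list being rereplicated on every change.
    old, new = set(old_members), set(new_members)
    return {"add": sorted(new - old), "remove": sorted(old - new)}
```

For a 5,000-member group where one user changes, only that single add or remove replicates, instead of all 5,000 values.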
Directly associated with
this concept, Windows Server 2008 R2 allows for the creation of domain
controllers that cache universal group membership. This means that it is
no longer necessary to place a global catalog server in each site. Any
time a user utilizes a universal
group, the membership of that group is cached on the local domain
controller and is utilized when the next request comes for that group’s
membership. This also lessens the replication traffic that would occur
if a global catalog was placed in remote sites.
One of the main sources of
replication traffic was discovered to be group membership queries—hence,
the focus on fixing this problem. In Windows 2000 Active Directory,
every time a client logged on, the client’s universal group membership
was queried, requiring a global catalog to be contacted. This
significantly increased logon and query time for clients who did not
have local global catalog servers. Consequently, many organizations
stipulated that every site, no matter the size, must have a local global
catalog server to ensure quick authentication and directory lookups.
The downside of this was that replication across the directory was
increased because every site received a copy of every item in the entire
AD, even though only a small portion of those items was referenced by
an average site.
Universal group caching
solved this problem because only those groups that are commonly
referenced by a site are stored locally, and requests for group
replication are limited to the items in the cache. This helps to limit
replication and keep domain logons speedy.
Universal group caching capability is established on a per-site basis through the following technique:
1. Open Active Directory Sites and Services.
2. Navigate to Sites\<Site Name>.
3. Right-click NTDS Site Settings and choose Properties.
4. Check the Enable Universal Group Membership Caching check box, as shown in Figure 3. Optionally, you can specify from which site to refresh the cache.
5. Click OK to save the changes.
Removing Lingering Objects
Lingering objects, also
known as zombies, are created when a domain controller is offline for
longer than the tombstone lifetime for deleted items. When the domain
controller is brought back online, it never receives the tombstone
requests, so the deleted objects continue to exist on that server.
These objects can then be rereplicated to other domain
controllers, arising from the dead as “zombies.” Windows Server 2008 R2
has a mechanism for detecting lingering objects, isolating them, and
marking them for cleanup.
Disabling Replication Compression
By default, intersite AD
replication is compressed so as to reduce the bandwidth consumption
required. The drawback to this technique is that extra CPU cycles are
required on the domain controllers to properly compress and decompress
this data. Windows Server 2008 R2 allows designers the flexibility to
turn off this compression, if an organization is short on processor time
and long on bandwidth, so to speak.
Understanding How AD Avoids Full Synchronization of Global Catalog with Schema Changes
In
the original version of Active Directory, any schema modification
forced a complete resynchronization of the global catalog with all
domain controllers across an enterprise. This made instituting any type
of schema modification daunting, because replication traffic would
increase significantly afterward. Windows Server 2003 and 2008
environments do not have this limitation, however, and schema
modifications are incrementally updated in the global catalog.
Intersite Topology Generator Algorithm Improvements
The intersite topology
generator (ISTG) portion of the Knowledge Consistency Checker (KCC) has
been updated to allow AD environments to scale to site structures of up
to 5,000 sites. Limitations in the Windows 2000 ISTG effectively
capped AD implementations at about 1,000 sites.
This improvement, however, is available only when all domain controllers
in your AD DS environment are at least Windows Server 2003 systems and
the forest functional level has been raised to Windows Server 2003 or
2008 level.