Best Practices for Active Directory Implementation

By Don Jones

Don Jones is a Senior Partner and Principal Technologist for Concentrated Technology, LLC, an IT consulting and analysis firm. He’s the author of more than 35 books.

Is your Active Directory (AD) set up to be the most reliable, stable, and recoverable directory it can be? There really aren’t any universal guidelines for what an AD infrastructure should look like, because organizations of different sizes have different needs and requirements. Smaller companies, for example, don’t typically need the dozens of domain controllers that a larger company needs. But with a new version of Windows Server on the horizon for 2012, now’s a good time to look at your AD infrastructure and decide whether a little restructuring might be appropriate.

The Smallest Organizations

No matter how small an organization you work in, having two domain controllers is an absolute must. It’s fine for those to run in virtual machines, but they should live on different physical hosts so that a single computer failure won’t take down the entire directory.
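
Because the requirement is concrete (at least two domain controllers), it’s easy to check. Here’s a minimal Python sketch that counts the DCs a domain advertises in DNS; it assumes the dnspython package is installed, and “example.com” is just a stand-in for your own AD DNS domain.

# Minimal sketch: count the domain controllers a domain advertises in DNS.
# Assumes the dnspython package is installed; "example.com" is a placeholder
# for your AD DNS domain name.
import dns.resolver

DOMAIN = "example.com"  # placeholder AD DNS domain

# AD publishes one SRV record per DC under _ldap._tcp.dc._msdcs.<domain>
answers = dns.resolver.resolve(f"_ldap._tcp.dc._msdcs.{DOMAIN}", "SRV")
dcs = sorted(str(rr.target).rstrip(".") for rr in answers)

print(f"{len(dcs)} domain controller(s) registered:")
for dc in dcs:
    print(f"  {dc}")

if len(dcs) < 2:
    print("WARNING: fewer than two DCs -- a single host failure takes down the directory.")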

Fast, easy recoverability is a must for small organizations, and tape backups almost never equate to “fast” or “easy.” Instead, consider one of the new breed of continuous disk-to-disk backup applications that are out there (I’m personally familiar with AppAssure’s Replay solution, but there are others).

If you want the comfort of a tape backup, then back up that disk-to-disk solution to tape. A continuous disk-to-disk solution can handle anything from a full domain recovery to restoring a single user attribute, so it’ll cover just about any recovery scenario.

Medium-Sized Organizations

Organizations of this size will usually have at least a handful of domain controllers, and should start looking at running them on Server Core rather than the full version of Windows. Why? Server Core offers a smaller footprint (meaning more flexibility when run in a virtual machine), fewer moving parts (meaning fewer hotfixes) and greater stability and uptime.

Medium-sized organizations shouldn’t be using the “all-purpose” servers that a smaller company might choose. A domain controller should be a domain controller, perhaps also providing other infrastructure services like DHCP and DNS, but that’s all. Domain controllers shouldn’t be doing double duty as print servers, file servers, e-mail servers, or anything else major.
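
A quick way to audit that rule is to list what’s actually installed on each DC. The sketch below is one possible approach, not a definitive tool: it assumes the Python “wmi” package (and pywin32) is available on a Windows machine, that the Win32_ServerFeature WMI class is present (Windows Server 2008 and later), and that “dc01” is a placeholder for a DC you have rights to query.

# Minimal sketch: list the roles and features installed on a domain controller,
# so anything beyond AD DS, DNS, DHCP, and the like stands out.
# Assumes the "wmi" package (with pywin32) is installed on Windows, and that
# "dc01" is a placeholder for a real DC name you can query with your credentials.
import wmi

DC_NAME = "dc01"  # placeholder domain controller

# Win32_ServerFeature enumerates installed roles and features on Windows Server.
conn = wmi.WMI(computer=DC_NAME)
features = sorted(f.Name for f in conn.Win32_ServerFeature())

print(f"Installed roles/features on {DC_NAME}:")
for name in features:
    print(f"  {name}")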

Backup and recovery is equally important to companies of this size, and a continuous, disk-to-disk backup system is still appropriate. However, organizations of this size might also benefit from a dedicated AD recovery tool that can provide graphical interfaces for attribute-level, single-object, and whole-domain recovery. Such tools are available from a wide range of vendors, including Quest Software, ScriptLogic, NetIQ, Blackbird Group and more.

Change auditing and reporting can also become a requirement at this size, especially in legally sensitive fields like healthcare and finance. That need can rarely be satisfied by the native Windows event logs, which don’t provide separation of duties, high-performance high-volume logging, and so on.

So you’ll also need to start shopping for a change auditing solution, perhaps from companies like Quest, NetWrix, ScriptLogic, Blackbird Group, etc. My company performed an analysis of the major players in this space, and the results are published (and available at no charge) at http://itpro.concentratedtech.com/papers.

Bigger Organizations

Bigger organizations obviously have more users, and therefore more domain controllers to handle the load. Load management becomes more important at this scale, because you need to assume that a server will become unavailable at some point during its life. In other words, plan your infrastructure so that no single domain controller is working at more than 70% of capacity during peak times (perhaps 80% if you’re feeling brave). That way, a certain number of domain controllers can be offline, and the remaining ones will still have the capacity to take up the load.
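
To make the 70% rule concrete, here’s a small worked sketch. It assumes peak load spreads evenly across whatever DCs remain online, and the numbers are purely illustrative.

# Minimal sketch of the 70%-capacity rule. Assumes peak load spreads evenly
# across the remaining DCs when some are offline; the numbers are illustrative.

def dcs_needed(peak_load_dc_equivalents: float,
               tolerate_offline: int,
               ceiling: float = 0.70) -> int:
    """Smallest DC count such that, with `tolerate_offline` DCs down, the
    survivors each stay at or below `ceiling` utilization.
    `peak_load_dc_equivalents` is total peak load expressed in "fully busy
    DC" units (e.g., 5.0 means the work of five saturated DCs)."""
    n = tolerate_offline + 1
    while peak_load_dc_equivalents / (n - tolerate_offline) > ceiling:
        n += 1
    return n

# Example: peak load equal to 5 saturated DCs, planning for 2 DCs offline at once.
print(dcs_needed(5.0, tolerate_offline=2))                 # 10 DCs at a 70% ceiling
print(dcs_needed(5.0, tolerate_offline=2, ceiling=0.80))   # 9 DCs if you're feeling brave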

Large organizations are also notable for the larger number of administrators running the show, and for distributed networks spread across many locations. That makes it especially important to keep up with AD’s site configuration, and to closely model your wide-area network (WAN) connections using AD’s site links and site link bridges.

Once a quarter, do a “sanity check” to make sure all of your subnets are properly represented in AD and assigned to the right sites, and that all of your WAN links are represented by AD site links. Try to avoid using site link bridges except in the most extreme circumstances: those bridges essentially override AD’s own replication intelligence and, if overused, can create performance problems and make replication troubleshooting more difficult.
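
One way to start that quarterly check is to dump AD’s subnet and site link definitions and compare them against your real network and WAN topology. The sketch below uses the Python ldap3 package to read the Configuration partition; the server name, account, password, and the DC=example,DC=com suffix are all placeholders you’d replace with your own.

# Minimal sketch of a quarterly "sanity check": pull AD's subnet and site link
# definitions from the Configuration partition for comparison against the real
# network. Assumes the ldap3 package is installed; server name, account,
# password, and the DC=example,DC=com suffix are placeholders.
from ldap3 import Server, Connection, NTLM, ALL

CONFIG_DN = "CN=Configuration,DC=example,DC=com"  # placeholder forest root

server = Server("dc01.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\auditor", password="...",
                  authentication=NTLM, auto_bind=True)

# Subnets: each should map to the site where those addresses actually live.
conn.search(f"CN=Subnets,CN=Sites,{CONFIG_DN}", "(objectClass=subnet)",
            attributes=["cn", "siteObject"])
for entry in conn.entries:
    print(f"Subnet {entry.cn} -> {entry.siteObject}")

# IP site links: each should correspond to a real WAN connection.
conn.search(f"CN=IP,CN=Inter-Site Transports,CN=Sites,{CONFIG_DN}",
            "(objectClass=siteLink)", attributes=["cn", "cost", "siteList"])
for entry in conn.entries:
    print(f"Site link {entry.cn} (cost {entry.cost}): {entry.siteList}")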

These organizations are the ones who should most seriously consider moving most, if not all, of their DCs to a virtualization infrastructure. Being able to relocate DCs to another virtualization host lets you dynamically manage workload, work around host failures and downtime, and recover entire virtual machines more quickly and easily. AD-specific recovery tools are a must for these organizations, since most recovery tasks will be the granular, single-object and single-attribute recoveries that aren’t easily performed with Windows’ native tools.
