Windows Server 2008: The Discovery Phase - Understanding the Existing Environment

Assuming that the previous steps have been taken, the high-level picture of the Windows Server 2008 R2 upgrade should be clear by now: what the business and technology goals are, from the “50,000-foot view” business standpoint all the way down to the “1,000-foot” staff level. The components of the upgrade (the scope of the work) and their priorities should also be identified, along with the time constraints and who will be on the design and implementation teams.

The picture of the end state (or scope of work) and the goals of the project should be coming into focus. Before the final design is agreed on and documented, however, it is essential to review and evaluate the existing environment to make sure the network foundation in place will support the new Windows Server 2008 R2 environment.

This is the time to make sure the existing environment is configured the way you think it is and to identify existing areas of exposure or weakness in the network. The level of effort required will vary greatly here, depending on the complexity and sheer scope of the network. Organizations with fewer than 200 users and a single or small number of locations that use off-the-shelf software applications and standard hardware products (for example, Hewlett-Packard, IBM, Cisco) will typically have relatively simple configurations. In contrast, larger companies, with multiple locations and vertical-market custom software and hardware, will be more complex. Companies that have grown through the acquisition of other organizations might also have mystery devices on the network that play unknown roles.

Another important variable to define is the somewhat intangible element of network stability and performance. What is considered acceptable performance for one company might be unacceptable for another, depending on the importance of the infrastructure and type of business. Some organizations lose thousands of dollars of revenue per minute of downtime, whereas others can go back to paper for a day or more without noticeable impact.

The discovery work needs to involve the design team as well as internal resources. External partners can often produce more thorough results because they have extensive experience with network reviews and analysis and predicting the problems that can emerge midway through a project and become showstoppers. The discovery process will typically start with onsite interviews with the IT resources responsible for the different areas of the network and proceed with hands-on review of the network configuration.

Developing standard questionnaires can be helpful in collecting data on the various network device configurations, as well as in recording areas of concern about the network. Key end users can reveal needs that their managers or directors aren’t aware of, especially in organizations with less-effective IT management or unstable infrastructures. Special attention should be paid to ferreting out the problem areas and the technologies that never worked right or have proven to be unstable.

For the most part, the bigger the project, the more thorough the discovery should be. For projects involving a complete network operating system (NOS) upgrade, every affected device and application needs to be reviewed and evaluated to help determine its role in the new environment.

If network diagrams exist, they should be reviewed to make sure they are up to date and contain enough information (such as server names, roles, applications managed, switches, routers, firewalls, and so on) to fully define the location and function of each infrastructure device.

If additional documentation exists on the detailed configuration of key infrastructure devices, such as “as built” server documents with details on the server hardware and software configurations, or records of router and firewall configurations, it should be dusted off and reviewed. Information such as whether patches and fixes have been applied to servers and software applications becomes important in the design process. In some cases, the desktop configurations need to be inventoried if client changes are required. Software inventory tools can save many hours of work in these cases.
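As a simple illustration, a short script can pull the list of installed applications from a Windows desktop’s registry. The following Python sketch is a minimal example of the idea, reading the standard Uninstall keys that inventory tools also consult; the output filename is just an example, and a commercial inventory tool will be far more complete:

    # Minimal software-inventory sketch for a Windows desktop.
    # The CSV filename is illustrative only.
    import csv
    import winreg

    UNINSTALL_KEYS = [
        r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
        r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
    ]

    def installed_software():
        """Yield (name, version) for applications registered in the registry."""
        for key_path in UNINSTALL_KEYS:
            try:
                root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path)
            except OSError:
                continue  # 32-bit systems lack the WOW6432Node branch
            for i in range(winreg.QueryInfoKey(root)[0]):
                try:
                    sub = winreg.OpenKey(root, winreg.EnumKey(root, i))
                    name, _ = winreg.QueryValueEx(sub, "DisplayName")
                    try:
                        version, _ = winreg.QueryValueEx(sub, "DisplayVersion")
                    except OSError:
                        version = ""
                    yield name, version
                except OSError:
                    continue  # entries without a DisplayName are not applications

    if __name__ == "__main__":
        with open("software_inventory.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["Name", "Version"])
            writer.writerows(sorted(set(installed_software())))

Run on each desktop (or pushed out through a logon script), the resulting CSV files can be merged to build the client inventory discussed above.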

Certain documented company policies and procedures that are in place need to be reviewed. Some, such as disaster recovery plans or service-level agreements (SLAs), can be vital to the IT department’s ability to meet the needs of the user community.

The discovery process can also shed light on constraints to the implementation process that weren’t considered previously, such as time restrictions that would affect the window of opportunity for change. These restrictions can include seasonal businesses as well as company budgeting cycles or even vacation schedules.

Ultimately, while the amount of time spent in the discovery process will vary greatly, the goals are the same: to really understand the technology infrastructure in place and the risks involved in the project, and to limit the surprises that might occur during the testing and implementation phases.

Understanding the Geographical Depth and Breadth

At the same time that data is being gathered and verified pertaining to what is in place and what it does, connectivity among devices should also be examined, covering the logical as well as the physical components of the network. This information might be available from existing diagrams and documentation, or might need to be gathered in the field.

Important questions to answer include the following: How are DNS and DHCP being handled? Are there VPNs or VLANs in place? How are the routers configured? What protocols are in use? What types of circuits connect the offices: DSL, T1, fiber? What is the guaranteed throughput, and what SLAs are in place?
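Some of these questions can be answered with a quick scripted check. The following Python sketch resolves a few key names and, on Windows, dumps the adapter configuration, which shows whether addresses and DNS servers come from DHCP; the hostnames are placeholders for an organization’s own servers:

    # Quick DNS and adapter-configuration check; hostnames are placeholders.
    import socket
    import subprocess

    KEY_HOSTS = ["dc01.example.local", "mail.example.local", "www.example.com"]

    for host in KEY_HOSTS:
        try:
            addr = socket.gethostbyname(host)
            print(f"{host:30} -> {addr}")
        except socket.gaierror as err:
            print(f"{host:30} -> FAILED ({err})")

    # "ipconfig /all" reveals whether each adapter's address and DNS servers
    # are assigned by DHCP, answering the "how is DHCP handled?" question.
    print(subprocess.run(["ipconfig", "/all"], capture_output=True,
                         text=True).stdout)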

Has connectivity failure been planned for through a partially or fully meshed environment? Connections to the outside world and to other organizations need to be reviewed and fully understood at the same level, especially with an eye toward the security features in place. The best security design in the world can be defeated by a modem plugged into a plain old telephone line and a disgruntled ex-employee.

Along the same lines, remote access needs, such as access to email, network file and print resources, and the support needs for PDAs and other mobile devices, should be reviewed.

Geographically diverse companies bring added challenges to the table. As much as possible, the same level of information should be gathered on all the sites that will be involved in and affected by the migration. Is the IT environment centralized, where one location manages the whole environment, or decentralized, where each office is its own “fiefdom”?

The distribution of personnel should be reviewed and clarified. How many support personnel are in each location, what key hardware and software are they tasked with supporting, and how many end users are there? Often, different offices have specific functions that require a different mix of support personnel. Some smaller, remote offices might have no dedicated staff at all, which can make it difficult to gather updated information. Finally, is expansion or contraction likely in the near future, or are office consolidations expected that would change the user distribution?

Problems and challenges that the wide area network (WAN) design has presented in the past should be reviewed. How is directory information replicated between sites, and what domain design is in place? If the company already has Active Directory in place, is a single domain with a simple organizational unit (OU) structure in place, or are there multiple domains with a complex OU structure? Global catalog placement should also be clarified.
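If Active Directory is already in place, some of this information can be read directly from a domain controller’s rootDSE. The following Python sketch uses the third-party ldap3 package (pip install ldap3); the server name and service account are placeholders:

    # Read domain/forest structure from a DC's rootDSE; names are placeholders.
    from ldap3 import Server, Connection, ALL

    server = Server("dc01.example.local", get_info=ALL)  # hypothetical DC
    conn = Connection(server, user="svc_discovery@example.local",
                      password="...", auto_bind=True)  # placeholder credentials

    info = server.info  # populated from rootDSE because of get_info=ALL
    print("Naming contexts:")  # one per domain/application partition
    for nc in info.naming_contexts:
        print(" ", nc)
    print("Forest functionality:", info.other.get("forestFunctionality"))
    print("Global catalog ready:", info.other.get("isGlobalCatalogReady"))

    conn.unbind()

Running this against a DC in each site quickly shows how many domains exist and which domain controllers also host the global catalog.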

How is the Internet accessed? Does each office have its own Internet connection, firewall, router, and so on, or is it accessed through one location?

The answers to these questions will directly shape the design of the solution, as well as affect the testing and rollout processes.

Managing Information Overload

Another area that can dramatically affect the design of the Windows Server 2008 R2 solution to be implemented is the place where the company’s data lives and how it is managed.

At this point, you should know what the key network software applications are, so it is worth gathering some numbers on the amount of data being managed and where it lives on the network (1 server? 10 servers?). The total number of individual user files should be reviewed, along with statistics on the growth of this data, if available.
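A rough sizing pass can be scripted. The following Python sketch, assuming the file shares are reachable as UNC paths (the share names are hypothetical), counts files and totals bytes per share:

    # Rough per-share sizing; the share list is hypothetical.
    import os

    SHARES = [r"\\fs01\users", r"\\fs01\departments", r"\\fs02\archive"]

    for share in SHARES:
        files = 0
        total_bytes = 0
        for dirpath, _dirnames, filenames in os.walk(share):
            for name in filenames:
                try:
                    total_bytes += os.path.getsize(os.path.join(dirpath, name))
                    files += 1
                except OSError:
                    pass  # skip files we cannot stat (in use, no permission)
        print(f"{share}: {files} files, {total_bytes / 2**30:.1f} GB")

Running the same pass a month later gives a crude but useful growth rate when no historical statistics exist.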

Database information is often critical to an organization, whether it pertains to the services and products the company offers to the outside world, or enables the employees to perform their jobs. Databases also require regular maintenance to avoid corruption and optimize performance, so it is useful to know whether maintenance is happening on a regular basis.
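For SQL Server, one quick check is the backup history recorded in msdb. The following Python sketch, assuming the third-party pyodbc package and a placeholder server name, lists each database’s most recent full backup, a fast way to verify that maintenance is actually running:

    # List the last full backup per database from msdb; server name is a placeholder.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=sql01.example.local;Trusted_Connection=yes"
    )
    rows = conn.execute("""
        SELECT database_name, MAX(backup_finish_date)
        FROM msdb.dbo.backupset
        WHERE type = 'D'          -- 'D' = full database backup
        GROUP BY database_name
    """).fetchall()
    for name, last_backup in rows:
        print(f"{name}: last full backup {last_backup}")
    conn.close()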

Mail databases pose their own challenges. Older mail systems typically were quite limited in the size of their databases, and many organizations were forced to come up with interesting ways of handling large amounts of data. As email has grown in importance and become a primary tool for many companies, the Inbox and personal folders have become the primary storage place for many email users. If the organization uses Microsoft Exchange for its email system, users might have personal stores and/or offline stores that might need to be taken into account.
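Personal stores (.pst) and offline stores (.ost) can be located with a simple scan. The following Python sketch assumes the default Windows profile root; it could equally be pointed at a home-directory share:

    # Locate Outlook personal/offline stores and report their sizes.
    from pathlib import Path

    PROFILE_ROOT = Path(r"C:\Users")  # or a UNC path to a home-directory share

    for ext in ("*.pst", "*.ost"):
        for store in PROFILE_ROOT.rglob(ext):
            try:
                size_mb = store.stat().st_size / 2**20
            except OSError:
                continue  # file locked by Outlook or unreadable
            print(f"{store} ({size_mb:.0f} MB)")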

How the data is backed up and stored should also be reviewed. Some organizations have extremely complex enterprise storage systems and use clustering, storage area networks, and/or a distributed file system to ensure that data is always available to the user community. Sometimes, hierarchical storage processes are in place to move old data to optical media or even to tape.

An overall goal of this sleuthing is to determine where the data is, what file stores and databases are out there, how the data is maintained, and whether it is safe. It might also become clear that the data can be consolidated, or that it needs to be better protected through clustering or fault-tolerant disk solutions. The cost to the company of data loss or temporary unavailability should also be discussed.
