
Developing the SAP Data Center: Rack Planning for Data Center Resources

With a highly available power infrastructure laying the foundation for your SAP Data Center, and environmental requirements already addressed, you are ready to proceed with the next “physical” data center infrastructure layer—the rack mounting systems that house all of your enterprise computing gear. In this section, I will cover:
  • Rack layout and design considerations

  • Optimizing rack “real estate”

  • Rack mounting and related best practices

  • Cabling and cable management

  • Rack placement in the data center

What exactly is a rack? One of my customers describes a rack simply as “furniture for computers.” In most cases, racks are between four and seven feet tall, 19 inches wide (the industry-standard mounting width), and typically 34–39 inches deep, depending on requirements. Racks allow computer gear—servers, disk subsystems, network components, and more—to be easily mounted, cooled, and serviced in what amounts to a “stacked” configuration. That is, the gear appears to be stacked one on top of the other. In reality, of course, none of it should literally be stacked, as this makes servicing and cooling the gear quite difficult.

Often, each component is mounted on sliding rails (though less advantageous fixed rails remain quite popular with some server and rack vendors). Sliding rails provide rapid access to the top and sides of each hardware component, where service areas are easily reached. Cable management arms complete this serviceability, allowing hardware systems to be pulled out for service without requiring cables to be disconnected or rerouted.

Rack Layout and Design Considerations

Before you order a truckload of racks, step back and develop a plan for deploying and laying out these racks. There are a number of best practices to consider, but nearly as important is achieving consistency in deployment. This can be accomplished by working with your hardware vendors and demanding detailed deployment guidance: how servers and disk resources should be mounted, how many racks are required, how they should be optioned, and so forth. There are also questions related to rack placement, metrics, and standards that must be answered:

  • Determine the airflow design for the equipment to be mounted. Then check to ensure that the racks allow for this design. For example, the HP ProLiant server line has always been designed for front-to-back airflow, thus mandating the use of a rack that supports this. In other cases, servers pull air from the data center subfloor and vent it out the top of the rack. It is therefore necessary to ensure that the servers and racks “match” each other—do they?

  • It is highly recommended that the racks be arranged in a front-to-front or back-to-back manner. Picture standing between two rows of racks that either face toward each other or face away from each other, and you will understand front-to-front and back-to-back, respectively. For maximum front-to-rear airflow, the air registers (the floor tiles with the little holes in them that force out cool air) should be placed in line with the front of the racks. Similarly, air returns located in line with the rear of the racks are a must, too, as you see in Figure 1. In the case of bottom-to-top airflow, the air registers are instead placed such that the rack sits on top of them. Are your air registers positioned for the best airflow?

    Figure 1. Placement and layout of racks and floor tile airflow registers/returns are key to maintaining proper air temperature in the SAP Data Center.

  • Racks need space. Is there enough room both in front of and behind each rack (25 inches in front and 30 inches behind are suggested, though your specific rack documentation may indicate otherwise) to open and close the doors, and to provide the cooling airflow necessary to keep everything running? If not, reposition the racks, or move the offending gear or walls. A simple way to check this is sketched just after this list.

  • Would purchasing a top-mounted fan option or split rear door (both enhance cooling to some degree, regardless of airflow direction) make sense, given the volume of computing gear to be racked? I generally recommend top-mounted fans in the densest racks, whatever the rack’s primary airflow direction; these fans help draw component-damaging heat out of the rack as quickly as possible.

  • Are there overhead fire protection sprinkler devices that must be considered? Check local building codes for acceptable clearances before installing racks underneath such devices.
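To make the clearance question concrete, here is a minimal sketch in Python that flags rack positions violating the suggested 25-inch front and 30-inch rear clearances. The rack names and measurements are hypothetical; substitute the figures from your own rack documentation.

    # Suggested minimum clearances from the checklist above; your rack
    # documentation may specify different values.
    FRONT_MIN_IN = 25
    REAR_MIN_IN = 30

    def clearance_problems(front_in, rear_in):
        """Return a list of clearance violations for one rack position."""
        problems = []
        if front_in < FRONT_MIN_IN:
            problems.append(f"front clearance {front_in} in. is under the {FRONT_MIN_IN} in. minimum")
        if rear_in < REAR_MIN_IN:
            problems.append(f"rear clearance {rear_in} in. is under the {REAR_MIN_IN} in. minimum")
        return problems

    # Hypothetical measurements taken while diagramming the layout.
    positions = {"R01": (30, 32), "R02": (22, 30)}
    for rack, (front, rear) in positions.items():
        for problem in clearance_problems(front, rear):
            print(f"{rack}: {problem}")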

At this juncture, it makes sense to actually diagram rack layouts for your specific data center needs. Again, I recommend working with your hardware vendor or an experienced systems integrator to get this right the first time.

Optimizing Rack Real Estate

When loading equipment into the racks, keep the following best practices in mind:

  • Put heavier items, such as UPSes and large servers, at the bottom of the rack.

  • Address floor-loading weight limitations. For example, never fill a rack entirely with UPSes, as the weight alone will almost certainly exceed the data center’s raised-floor load-bearing limit. Instead, distribute UPSes evenly across all racks; a worked example follows this list.

  • Locate monitors and keyboards for maximum ergonomic comfort (based on your particular sitting/standing arrangement). Do NOT locate them on the top of each of your 7-foot racks, like one of my customers did! It’s funny reading about this after the fact, but it was no laughing matter at the time.

  • Attempt to distribute monitors and keyboards such that access to a particular environment is available in two ways. For example, if an SAP Staging landscape consists of eight application servers, it would make sense to attach four of these to one monitor/keyboard, and the other four to the second monitor/keyboard. In this way, performing maintenance, upgrades, applying patches, and so on is possible from one monitor/keyboard, while the other remains available for business-as-usual.

  • Rack blanking panels must be used to “fill in” spaces greater than 1U in size (where a “U” is a standard rack unit of measurement equivalent to 1.75 inches). This is important to ensure that good airflow is maintained within the rack enclosure itself.

  • All gear housed within the rack—servers, disk subsystems, everything—should be installed with its covers and side panels on. Running a server without the top cover or side panels, for example, disturbs the flow of air through the computer such that it is not cooled as effectively as it otherwise would be. In some cases, running a server without the cover and sides will actually invalidate the unit’s warranty, too.
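The floor-loading and rack-unit arithmetic above is easy to automate. The following minimal sketch totals a hypothetical rack’s consumed space and weight against assumed limits; the component weights and the per-footprint floor limit are illustrative figures, so substitute your data center’s rated values.

    # All figures below are illustrative assumptions, not vendor specifications.
    FLOOR_LIMIT_LB = 2000      # assumed load-bearing limit per rack footprint
    RACK_HEIGHT_U = 42         # a 7-foot rack; 1U = 1.75 inches

    components = [             # (name, height in U, weight in lb)
        ("UPS",       6, 700),
        ("UPS",       6, 700),
        ("UPS",       6, 700),
        ("DB server", 7, 170),
    ]

    used_u = sum(u for _, u, _ in components)
    weight = sum(lb for _, _, lb in components)

    print(f"Space:  {used_u}U of {RACK_HEIGHT_U}U ({used_u * 1.75:.2f} inches)")
    print(f"Weight: {weight} lb against a {FLOOR_LIMIT_LB} lb floor limit")
    if weight > FLOOR_LIMIT_LB:
        print("Overweight: distribute the UPSes across more racks")

This example deliberately reproduces the anti-pattern described above (a rack loaded with UPSes), so the final check fires.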

Other best practices exist as well. Refer to the Planning CD for a comprehensive “Rack Best Practices” document.

Rack Mounting and Related Best Practices

When connecting the racks to power, fault-tolerance considerations should be heeded. Your servers and disk subsystems already have dual redundant (or N+1) power; to make the most of this, multiple PDUs (at least two) should be deployed. I recommend at least one mounted on each side of each rack. All right-side power supplies should be plugged into the right-side PDU, and the left-side power supplies into the left-side PDU. The PDUs should be plugged into separate circuits and breaker panels, as I have discussed previously. These circuits should be sized so that the load rarely, if ever, exceeds 80% of their rated capacity. In this configuration, if a circuit is lost, only half of the power supplies in each rack will be without power, leaving the other half to continue powering all of the computing equipment. And you will have no downtime.
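One subtlety worth checking: because each server splits its redundant supplies across the two PDUs, a single surviving circuit must carry the full rack load after a failure. This minimal sketch tests that failover load against the 80% guideline; the 30 A circuit rating and per-server draws are assumptions for illustration only.

    CIRCUIT_AMPS = 30.0
    DERATING = 0.80                   # keep sustained load at or below 80%

    # Assumed total draw per server, in amps.
    server_draws = [4.2, 4.2, 2.1, 2.1, 2.1, 2.1, 1.5]

    # If one circuit is lost, the surviving PDU carries everything.
    failover_load = sum(server_draws)
    budget = CIRCUIT_AMPS * DERATING

    print(f"Failover load: {failover_load:.1f} A against a {budget:.1f} A budget")
    if failover_load > budget:
        print("Undersized: a lost circuit would overload the survivor")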

Another thing to consider in rack planning is how the servers should be grouped. Ask yourself what makes sense. Some of my clients like to group all development resources together, all test resources together, and so on. Others prefer to group all servers together, all disk subsystems together, all network components together, and so on, similar to what you see in Figure 2. I suggest that you select either the approach that is used in your data center today, or the one that makes the most sense given the specific SAP system landscape being deployed.

Figure 2. This grouping approach is hardware-centric, rather than SAP landscape-centric. Either approach is valid.


Racking Clustered Servers

Regardless of the grouping approach, though, a best practice exists when it comes to clustering servers. Cluster nodes should always be mounted in separate racks, using separate keyboards, mice, and monitors, connected to completely different PDUs and network infrastructure, and so on. Why? Because if the cluster nodes share anything in common, this becomes a single point of failure. And given that you are clustering in the first place, avoiding single points of failure is key for these systems.
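As a quick sanity check during rack planning, the placement list can be validated automatically. This minimal sketch flags any cluster whose nodes share a rack; the node, cluster, and rack names are all hypothetical.

    from collections import defaultdict

    placements = {                    # node -> (cluster, rack)
        "sapdb-a": ("prd-db-cluster", "R01"),
        "sapdb-b": ("prd-db-cluster", "R02"),
        "sapci-a": ("prd-ci-cluster", "R03"),
        "sapci-b": ("prd-ci-cluster", "R03"),   # violation: same rack
    }

    cluster_racks = defaultdict(list)
    for node, (cluster, rack) in placements.items():
        cluster_racks[cluster].append(rack)

    for cluster, racks in cluster_racks.items():
        if len(set(racks)) < len(racks):
            print(f"{cluster}: nodes share a rack; separate them")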

Racking Production

When it comes to mounting the Production servers in their racks, it is critical to maintain a separation of all infrastructure. That is, Production should ideally be isolated from all other SAP environments, from a power, network, rack, SAN, and server perspective. The Production racks therefore require their own power infrastructure, network infrastructure, and so on.

In the real world, as SANs continue to enable the highest levels of availability and scalability for mySAP customers, the line between production and other resources is beginning to blur. Expensive switched-fabric switches, SAN management appliances, SAN-enabled tape backup solutions, and the fibre cabling that ties everything together are often shared between production and development systems, for example. In these cases, where the cost of a dedicated production SAN outweighs the risk of sharing it, the important thing is to provide some kind of change management support. That is, you still need a test environment of sorts, be it a Technical Sandbox, Test/QA system, or whatever, to test the impact of changes to your SAN. In my experience, these changes are often firmware- and hardware-related, though topology and design changes need to be tested as well.

Cabling and Cable Management

If you have ever attempted to squeeze 21 servers into a 7-foot (42U) rack, the concept of cable management should evoke special thoughts of long days and late nights. Cables have shrunk in diameter over the last five years, but not at a pace that matches the increases we have all seen in hardware platform density. As a result, one of the biggest immediate challenges facing new SAP implementations is how best to route the various keyboard, mouse, monitor, disk subsystem, network, and other cables required by each piece of gear. More and more, we see fat cables being replaced by smaller ones, and lots of little cables being replaced by fewer centralized cables. But in the end, there’s still a whole lot of cabling to consider!

Alternatives to traditional KVM (keyboard, video, and mouse) cable connections abound today. The following are two common methods of reducing the number and size of cables (a long-time favorite and a newer alternative):

  • Installing a dedicated “computer on a board” into a PCI slot in each server, to facilitate out-of-band and in-band management of the server. Such boards typically require a network or modem connection only, and facilitate communications via a browser-enabled user interface. A fine example is HP’s Remote Insight Board. Depending on the particular server, these boards may come standard with the system, and may even be integrated into the server’s motherboard.

  • Installing “KVM over IP,” which represents another way to shrink various cables into a small single network cable. This method also requires a small server of its own, not to mention licensing costs for the particular enabling product.

More often than not, I expect we will continue to see the widespread use of cable management arms, tie-wraps, Velcro wraps, and similar inexpensive approaches. Regardless of the cabling technique, though, stay focused on availability. Single points of failure can exist in this realm like any other—remember that a server may be effectively unavailable once it loses access to its keyboard, mouse, or video connections. Even in the best case, if the server remains up, you may still be operating blind, saved only by your management and monitoring tools, if so enabled.
