programming4us

The HP Virtual Server Environment : Choosing a Partitioning Solution (part 1) - Why Choose nPars?

12/24/2012 3:08:36 AM

Given all the partitioning solutions that are available, it can be a daunting task to decide which of these would best meet the needs for your particular situation. The real answer is likely to be that you will want to take advantage of some combination.

There are a number of key benefits that you get from partitioning in general. These include:

  • Application isolation: The ability to run multiple workloads on a system at the same time while ensuring that no single workload can impact the normal running of any other workload.

  • Increased system utilization: This is an outgrowth of the first benefit above. If you can run multiple workloads on a system, you can increase the utilization of the server because resources that normally would have gone to waste can be used by the other workloads.


1. Why Choose nPars?

HP nPartitions provide for fully electrically isolated partitions on a cell-based server.

Key Benefits

A number of benefits can be gained from using nPartitions to break up a large system into several smaller ones. The few that we are going to focus on here are the ones that make this a technology that you won't want to do without. These include hardware-fault isolation, OS flexibility, and the fact that using nPars does not impact performance.

The fact that nPartitions are fully electrically isolated means that a hardware failure in one partition can't impact any other partition. In fact, it is possible to do hardware maintenance in one partition while the other partitions are running. This also makes it possible to run different partitions with different CPU speeds and even different CPU families (Precision Architecture (PA-RISC) and Itanium). The key benefit is that you can perform some upgrades on the system one partition at a time.

Another advantage of the electrical isolation is the fact that the operating system can't tell the difference between a partition and a whole system. Therefore, you can run different operating systems in each partition. The only supported OS on a PA-RISC-based HP 9000 system is HP-UX, so this is only possible on an Integrity server running Itanium processors. If you have an Integrity server, you can run HP-UX in one partition, Microsoft Windows Datacenter in another, Linux in a third, and OpenVMS in a fourth. All on one system—all at the same time.

The electrical isolation between nPars also allows you to run PA-RISC processors in one partition and Itanium processors in another on the same system. This can simplify the upgrade process by allowing a rolling upgrade from PA-RISC to Itanium.

Another key benefit is that you can partition a server using nPars with no performance penalty. In fact, the opposite is true: partitioning a server with nPars increases performance significantly. Benchmarks for a Superdome partitioned into sixteen four-CPU nPartitions run roughly 60% faster than the same benchmark on a fully loaded 64-CPU Superdome. There are two key reasons for this: smaller partitions have lower multiprocessing overhead, and they traverse fewer connections between the crossbars.

Key Tradeoffs

Our goal here is to ensure that you understand how to best take advantage of the Virtual Server Environment (VSE) technologies. We want to explain some of the tradeoffs of using each of them to ensure that you put together a configuration that has all the benefits you want and that you can minimize the impact of any tradeoffs. Many of the tradeoffs can be mitigated by combining VSE technologies.

The first real tradeoff of nPartitions is granularity. The smallest partition you can create is a single cell; with dual-core processors, that is a unit of two to eight CPUs. The granularity can be improved by including instant capacity processors: you can configure a partition with up to eight physical CPUs but have as few as two of them active.

Another tradeoff is that although you can use instant capacity processors to adjust the capacity of an nPar, its cell configuration can't be changed online. This really isn't a limitation of nPartitions but rather of the operating systems themselves. Currently, HP-UX is the only supported OS that allows activation and deactivation of CPUs while online, and none of the supported operating systems allow reallocation of memory while they are running. You will need to shut down a partition to remove a cell from it, for example. You can, however, reconfigure the affected partitions while they are running and then issue a “reboot for reconfiguration” for each of them when it is convenient. A future release of HP-UX will support online addition and deletion of memory.
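On HP-UX, the cell move described above can be sketched as follows. The partition and cell numbers are hypothetical, and the commands are only echoed here so the sequence can be reviewed first; on a live system you would run parmodify(1M) and shutdown(1M) directly (see their man pages for the full cell-attribute options):

```shell
#!/bin/sh
# Sketch: move a cell between two nPartitions with a deferred
# "reboot for reconfiguration". Partition/cell numbers are hypothetical.
# Commands are echoed for review rather than executed.
CMDS=""
run() { echo "+ $*"; CMDS="$CMDS$*;"; }

run parmodify -p 1 -d 2    # remove cell 2 from nPartition 1
run parmodify -p 0 -a 2    # assign cell 2 to nPartition 0
# The changes take effect only after each affected partition performs a
# reboot for reconfiguration, which can be issued when convenient:
run shutdown -R -y 0       # -R: reboot this nPartition for reconfiguration
```

Until the reboot for reconfiguration is issued, both partitions keep running with their current cell assignments.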

One other tradeoff is that because each partition is electrically isolated from the others, there is no sharing of physical resources. If you need redundant components for each workload, they will be needed in every partition. This is simply a cost of hardware-fault isolation.

You can migrate active CPU capacity between nPars, but only if you have instant capacity processors. This means that you must have sufficient physical capacity in each partition to meet the maximum CPU requirements for the partition. You would then purchase some of that capacity as instant capacity. These processors can be activated when needed by deactivating a processor in another partition or by activating temporary capacity.
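A minimal sketch of such a capacity migration uses the Instant Capacity command icapmodify (the core counts are hypothetical, and the commands are echoed for review rather than executed):

```shell
#!/bin/sh
# Sketch: migrate one active core between nPars using Instant Capacity.
# Core counts are hypothetical; commands are echoed for review.
CMDS=""
run() { echo "+ $*"; CMDS="$CMDS$*;"; }

# In the donor nPartition: deactivate one core, freeing a usage right
run icapmodify -d 1
# In the receiving nPartition: activate one core against that right
run icapmodify -a 1
```

The total number of active cores across the complex stays constant; only where they are active changes.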

A clarification about nPars: Running nPartitions is not the same as running separate systems. There are events that can bring all the partitions down. The most common is operator error. If an operator with administrator privileges on the Management Processor should make a serious mistake, they could bring the whole system down. Another would be a natural disaster, like a major power outage or fire. The bottom line here is that using nPartitions does not eliminate your need to implement high-availability software and hardware if you are running mission-critical workloads on the system. The complex should be configured with at least one other complex in a cluster to ensure that any failure, whether it is a single partition or multiple, would result in a minimum of downtime.

nPar Sweet Spots

Now that we have provided a brief overview of the benefits and tradeoffs of nPartitions, we will provide some guidance on a few “sweet spot” solutions that allow you to get the benefits while minimizing the impact of the tradeoffs.

First of all, if you are doing consolidation of multiple workloads onto a cell-based server, you will want to set up at least two nPars. It would just not make sense to have the option of hardware-fault isolation and not take advantage of it. You might want to set up more nPars so you can provide hardware isolation between mission-critical applications. This ensures that the need to do hardware maintenance doesn't require that you take down multiple mission-critical applications at once.

Sweet Spot #1: At least two nPars

If you have a cell-based system that supports nPars and you are doing consolidation of multiple workloads, you should seriously consider setting it up with at least two. The resulting hardware-fault isolation and the flexibility with instant capacity makes this compelling.


You will want to make your nPars as big as possible. Steer clear of single-cell partitions unless you have a really good reason. Bigger partitions provide you with more flexibility in the future. If your capacity-planning estimates end up being off and one partition isn't big enough for the workload there, you can easily react to that by reconfiguring the system. But if you have many single-cell partitions you will need to rehost one of the workloads to reallocate a cell. This makes it very difficult to take advantage of the flexibility benefits of partitioning a larger server.

Sweet Spot #2: Separate nPars for each Mission-Critical Application

If you set up a separate nPar for each mission-critical application, you will ensure that a hardware failure or routine hardware maintenance will impact only one of them at a time.


Clearly, there is a tradeoff between larger partitions and more isolation. You really want to find the happy medium. Setting up a system with a few nPars and then using one of the other partition types inside the nPars to allow you to run multiple workloads provides a nice combination of isolation and flexibility. One interesting happy medium is to set up an nPar for each mission-critical production application and then use vPars or Integrity VM to set up development, production test, or batch partitions in the nPar along with the mission-critical production application. That way the lower-priority applications are isolated from the mission-critical application by a separate OS instance, yet some of the resources normally used for the lower-priority applications can be used to satisfy the production application if it ever experiences an unexpected increase in load.

Sweet Spot #3: vPars or VMs inside nPars

Subdividing nPars using vPars or VMs provides a very nice combination of hardware-fault isolation and granularity in a single solution.


Consider this scenario: You have a large Integrity server running several HP-UX partitions and because you are taking advantage of the VSE technologies you find that you have spare capacity you thought you would need. At the same time, you have a Windows server that has run out of capacity and you need a larger system to run it on. Rather than purchasing a separate Integrity server for the Windows application, you can set up another nPar on the existing system and put the application there. You might even be able to use this as a stopgap solution while waiting for the “real” server this application will be running on. You could then set up the Windows partition with the new server in a cluster and migrate it over. Because of the flexibility of the system with instant capacity processors, you can use this partition as a failover or disaster-recovery location for the primary Windows server.

Sweet Spot #4: Use Spare Capacity for Other OS Types

If you have an Integrity server with spare capacity, you have the flexibility of creating an nPar and running any of four different operating systems on that spare capacity.


A newer feature of nPars is the ability to run PA-RISC processors in one partition and Itanium processors in another. This provides a nice solution for a rolling upgrade from PA to Itanium inside a single system. You could also add some Itanium cells to an existing system for either testing or migration.

Sweet Spot #5: Use nPars to Migrate from PA to Itanium

Use nPars on existing HP 9000 systems to set up Itanium partitions for migration of existing partitions or other systems.


Last, when using nPartitions, you should always have some number of instant capacity processors configured into each partition.  An example configuration would be a single-cabinet Superdome with dual-core processors split into four nPars. Each nPar has two cells and 16 physical processors. Since most systems are only 25% to 30% utilized, you can get half the CPUs as instant capacity processors and increase the utilization to over 50%. That way you still have the extra capacity if you need it, but you can defer the cost until later. In addition, you will get the flexibility of scaling each partition up to 16 CPUs by deallocating CPUs from the other partitions in real time. You can also get temporary capacity in case you have multiple partitions that get busy at the same time. Figure 1 provides a view of this configuration which shows the dual-core processors and the configuration of inactive instant capacity processors.

Figure 1. A Single-Cabinet Superdome with four nPars and Instant Capacity Processors

This picture shows that each partition contains two cells each with eight physical CPUs. Each partition has the ability to scale up to 16 CPUs because there are eight inactive CPUs that can be activated by deactivating a CPU in another partition or by using temporary capacity.
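The utilization arithmetic behind this sweet spot can be sketched as follows; the 25% to 30% typical-utilization figure is the assumption stated above, and the rest is simple division:

```python
# Utilization arithmetic for one nPar of the configuration above:
# 2 cells x 8 cores = 16 physical cores, half of them purchased as
# inactive instant capacity (iCAP) cores.
def utilization_with_icap(physical_cores, active_fraction, typical_util):
    """Utilization seen on the active cores when a workload that would keep
    all physical cores `typical_util` busy runs on only the active cores."""
    busy_cores = physical_cores * typical_util       # cores' worth of real work
    active_cores = physical_cores * active_fraction  # cores with usage rights
    return busy_cores / active_cores

low = utilization_with_icap(16, 0.5, 0.25)   # 25% typical utilization
high = utilization_with_icap(16, 0.5, 0.30)  # 30% typical utilization
print(f"active-core utilization: {low:.0%} to {high:.0%}")  # 50% to 60%
```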

Sweet Spot #6: Always Configure in Instant Capacity

Instant capacity processors are very inexpensive headroom and provide the ability to flex the size of nPars dynamically.


2. Why Choose vPars?

When HP Virtual Partitions (vPars) was introduced in late 2001, it was the only virtual partitioning product available that supported a mission-critical Unix OS. Even today there continue to be a number of features that make vPars an excellent solution for many workloads.

Key Benefits

This section compares vPars with each of the other partitioning technologies to help you determine when you might want to use vPars in one of your solutions.

When comparing vPars to nPars, the primary benefits you get with vPars are granularity and flexibility. vPars can go down to single-CPU granularity and single card-slot granularity for I/O. With nPars, each partition must be made up of whole cells and I/O card cages. This means that you can have a vPar with a single CPU and a single card slot (if you use a LAN/SCSI combo I/O card). In addition, you can scale the partition in single-CPU increments. vPars also provides the flexibility of dynamically reallocating CPUs between partitions without the instant capacity requirement. In other words, you can deallocate a CPU from one vPar and allocate the same CPU to another vPar. The tradeoff, of course, is that you won't have the hardware-fault isolation you get with nPars.
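A dynamic CPU move between two vPars can be sketched with vparmodify (the partition names are hypothetical, and the commands are echoed for review rather than executed):

```shell
#!/bin/sh
# Sketch: dynamically move one CPU between two virtual partitions.
# Partition names are hypothetical; commands are echoed for review.
CMDS=""
run() { echo "+ $*"; CMDS="$CMDS$*;"; }

run vparmodify -p vpar1 -d cpu::1   # release one CPU from vpar1
run vparmodify -p vpar2 -a cpu::1   # grant that CPU to vpar2
```

Both partitions stay online throughout; no instant capacity usage rights are consumed by the move.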

When comparing vPars with Integrity VM, the key benefits of vPars are its scalability and performance. vPars has no limit on its scalability; you could create a 64-CPU vPar and see only a slight degradation from the performance you would get with an nPar. The first release of Integrity VM is tuned for four virtual CPUs in each VM (although you can create VMs with more). This limit will rise over time, but it will be a while before it reaches the scalability of vPars. In addition, because vPars are built by assigning physical I/O cards to each partition, the OS talks directly to the card once it is booted, so there is almost no performance degradation at all.

When comparing vPars with Secure Resource Partitions, the primary benefit you get is isolation. This includes OS, namespace, and software-fault isolation. Because each vPar has its own OS image, you can tune each partition for the application that runs there, including kernel tunables, OS patch levels, and application versions. Also, each vPar is isolated from software faults in the other vPars, so an application or OS-level problem affects only that partition, and each vPar can be rebooted independently.

Key Tradeoffs

The first tradeoff is that vPars supports only HP-UX. Both nPars and Integrity VM will eventually support all four operating systems targeted for Integrity servers: HP-UX, Windows, Linux, and OpenVMS.

Several other tradeoffs follow from the same design decision that gives vPars its performance: the vPar monitor emulates the firmware of the system. The two most significant are that I/O card slots can't be shared between vPars and that vPars doesn't support sub-CPU granularity. In addition, vPars is not supported on all platforms and doesn't support all I/O cards, although it does support most of the high-end systems and most of the more common I/O cards. The bottom line is that when considering a system for vPars, you should work with your HP sales consultant or authorized partner to ensure you get the right configuration.

vPar Sweet Spots

You should always set up some number of nPars if you are doing consolidation on a cell-based server. The key question is whether you want to further partition the nPars or system with another partitioning solution, such as vPars, VMs, or Secure Resource Partitions. If you are planning to run more than one workload in each nPar, you may want to run each workload in its own isolated environment. The key question is whether you need OS-level isolation. If so, your choices are vPars or VMs. You can't run both of these at the same time on the same nPar or system. However, you can run vPars in one nPar and VMs in another on the same system. This is another nice advantage of the electrical isolation you get with nPars.

Sweet Spot #1: vPars Larger than eight CPUs

If you require finer granularity than nPars and partitions larger than eight CPUs, vPars is an excellent option.


If you need OS-level isolation, vPars are a good choice when you need large partitions or when the workload is I/O intensive and performance is critical.

Sweet Spot #2: I/O Intensive Applications

If you require finer granularity than nPars and have I/O-intensive applications that require peak performance, vPars has very low I/O performance overhead.
