1. Combining Partitioning Solutions
The various partitioning solutions in the VSE are almost arbitrarily stackable. The only exception is that you cannot run Integrity VM and vPars in the same nPar. Inside each partition type you can run the following (the combinations are also sketched in code after the list):
nPartitions: Virtual Partitions, Integrity VMs, or Secure Resource Partitions. In other words, you can run any one of these to subdivide each nPar. In addition, you can mix and match these in any combination in separate nPars.
Virtual Partitions: Secure Resource Partitions. You can run SRPs inside vPars if you want to save on the management and maintenance costs of separate OS images for each application.
Integrity VM: Secure Resource Partitions. You can run SRPs inside VMs as well.
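These nesting rules are simple enough to capture in a few lines of code. The Python sketch below is purely illustrative; the names and the validation function are our own shorthand, not part of any HP tool. It encodes which partition types may be created inside an nPar and enforces the one restriction that vPars and Integrity VM cannot share an nPar.

```python
# Illustrative model of the VSE partition-nesting rules described above.
# The type names and this helper are our own shorthand, not an HP tool.

ALLOWED_CHILDREN = {
    "nPar": {"vPar", "IntegrityVM", "SRP"},
    "vPar": {"SRP"},
    "IntegrityVM": {"SRP"},
    "SRP": set(),          # SRPs are the innermost level
}

def valid_npar(children):
    """Check one nPar's contents: every child type must be allowed inside an
    nPar, and vPars and Integrity VMs may not share the same nPar."""
    kinds = set(children)
    if not kinds <= ALLOWED_CHILDREN["nPar"]:
        return False
    return not ({"vPar", "IntegrityVM"} <= kinds)

print(valid_npar(["vPar", "vPar", "vPar"]))        # True (SRPs could nest inside each vPar)
print(valid_npar(["IntegrityVM"] * 8))             # True
print(valid_npar(["vPar", "IntegrityVM"]))         # False -- not supported in one nPar
```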
If we look at all the sweet spots and think about how they might be combined so that each technology is used where it is most useful, we would get something along the lines of Figure 1.
This
figure shows what is possible with the partitioning continuum. It is
likely that you will choose one or two partitioning alternatives and
roll those out. That said, the real benefit we are trying to show here
is the flexibility; you can implement whatever combinations make sense
for your solution. Let's analyze this configuration:
We have the ability to run 32 workloads of various sizes on the two nodes.
Each system has a total of 64 physical CPUs, but only 32 of them are active, which is sufficient to handle the normal peak load of the 32 workloads. The inactive processors give us room for growth and the ability to react to short-term peaks in load. They also make these systems significantly less expensive than they would be if we had to purchase all of the spare capacity up front. Effectively, HP is paying for much of the cost of your headroom.
Each system is configured
with four nPars. This ensures that a hardware failure will not take
down more than 25% of the system. This load can easily be absorbed by
the other system on a failover because of the Instant Capacity
available there.
Each of these systems
has Temporary Instant Capacity as well.
Spreading this load over two nodes allows us to provide high availability.
Two
of the workloads have their own nPars. These are relatively large
workloads that normally peak out at about eight CPUs but will
occasionally go over that. These nPars have 16 physical CPUs each and
we will use temporary capacity for the infrequent times that they go
over eight CPUs.
We have 16 Integrity
VMs that each have four virtual CPUs. These workloads can consume a
small fraction of a CPU if they are idle and can go up to four full
CPUs when they get busy. These nPars also have instant capacity
processors, so in the infrequent case where we have more than a few of
these VMs getting busy at the same time, we can instantly activate
additional capacity.
We have eight
vPars, each of which can scale from one to four CPUs. These are for our
smaller I/O-intensive workloads where peak performance is critical.
Again, there are eight instant capacity processors available in each
nPar just in case more than a few of these workloads get busy at the
same time. If only a few of them are peaking at any time, the peaking
vPars can be scaled up by borrowing CPUs from the other idle vPars.
Finally, we have two nPars running Secure Resource Partitions. These are all running the same version of the same application, so we get the benefit of lower maintenance costs in addition to lower software and hardware costs, because we can run six copies on the 16 active CPUs. If we put these on individual systems or partitions, they would each need six CPUs. We just dropped our active CPU count for hardware and software licensing from 36 to 16. And, as before, we have instant capacity processors that can take these nPars up to a total of 32 processors if more than a few of these peaked all at once.
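Because the numbers in this walkthrough are easy to lose track of, here is a small Python tally of the configuration just described, covering both systems. The per-nPar CPU counts come from the text; the assumption that the VMs and vPars are split evenly across two nPars each is ours, made so the totals line up with four nPars per system.

```python
# Tally of the example configuration above, both systems combined.
# Per-nPar counts are from the text; the even split of VMs and vPars
# across two nPars each is our assumption.

NPAR_PHYSICAL = 16   # physical CPUs in every nPar of the example
NPAR_ACTIVE = 8      # active CPUs; the rest are Instant Capacity

# partitioning style -> (workloads, nPars used across both systems)
layout = {
    "dedicated nPars": (2, 2),   # one large workload per 16-CPU nPar
    "Integrity VMs":   (16, 2),  # assumed 8 four-vCPU VMs per nPar
    "vPars":           (8, 2),   # assumed 4 one-to-four-CPU vPars per nPar
    "SRPs":            (6, 2),   # six application copies across two nPars
}

workloads = sum(w for w, _ in layout.values())
npars = sum(n for _, n in layout.values())
print(f"workloads: {workloads}")                      # 32
print(f"nPars: {npars} ({npars // 2} per system)")    # 8 (4 per system)
print(f"CPUs per system: {4 * NPAR_ACTIVE} active of {4 * NPAR_PHYSICAL}")  # 32 of 64

# The SRP consolidation saving: six copies at six CPUs each if run separately,
# versus the 16 active CPUs in the two SRP nPars.
print(f"SRP CPUs to license: {6 * 6} separate vs {2 * NPAR_ACTIVE} consolidated")
```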
There
you have it. These systems are using all of the partition types, have
an incredible amount of flexibility and scalability, and are paired for
high availability. Some of you might be concerned about how you would
manage an environment like this.
2. Independent Software Vendor Support
HP has been shipping Resource Partitioning for more than 10 years and Workload Management for more than five, and we frequently get the question, “How do the ISVs support this technology?” The problem can really be broken down into three key issues: whether the application can run in a partitioned environment, how it responds to automatic changes to resource entitlements, and whether there are any price breaks or penalties for using these technologies.
ISV Support of Partitions
We
are primarily focused here on whether there will be any issues with an
application being able to run correctly inside a partition. We will
cover licensing later.
For nPars there is
no issue. Each nPar is a fully electrically isolated set of hardware
components. The software can't tell the difference between running in
an nPar and running in a similarly sized separate system.
For
vPars the same is true. Even though you don't have electrical
isolation, you do assign separate hardware components to each
partition. Again, the software can't tell the difference between
running in a vPar and running on a separate system.
Any
applications or management utilities that interact with the operating
system using standard interfaces will not be affected by running in an
Integrity VM. Some management applications might work differently;
generally speaking, that is because they should. The design goal of the
Integrity VM product is to emulate the hardware such that all
management utilities should “do the right thing” when running in a VM.
In other words, applications will not notice any difference and all
management utilities should run without errors, or the errors you get
should be expected. For example, instant capacity commands run in a VM
will exit gracefully with an error that the system does not support
instant capacity. This is correct because the CPUs in a VM are virtual
CPUs and instant capacity operates on physical CPUs, so adding and
deleting them doesn't make sense in this context. If you want to
allocate additional capacity to a VM, you would activate the physical
CPU in the VM host and the new CPU would then automatically be
allocated to each of the VMs as appropriate.
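To make that last point concrete, here is a deliberately simplified Python sketch of what the guests experience when a physical CPU is activated on the VM host. The even-split capacity policy and the function name are our own toy model, not the actual Integrity VM scheduler; the point is only that the guests gain capacity without running any instant capacity operation themselves.

```python
# Toy illustration: activating a physical CPU on the VM host gives every busy
# VM more capacity, with no instant capacity operation inside the guests.
# The even-split policy here is our simplification, not the real scheduler.

def busy_vm_capacity(active_host_cpus, busy_vms, vcpus_per_vm=4):
    """CPUs each busy VM can use if the busy VMs split the host evenly,
    capped at the VM's own virtual CPU count."""
    return min(vcpus_per_vm, active_host_cpus / busy_vms)

# An nPar with 8 active CPUs hosting four-vCPU VMs, three of them busy:
print(busy_vm_capacity(8, 3))    # ~2.67 CPUs each
# Activate two more physical CPUs on the host (instant capacity):
print(busy_vm_capacity(10, 3))   # ~3.33 CPUs each -- the guests just see more
```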
The
last partitioning solution to consider is Secure Resource Partitions.
This is one where you are running multiple applications in a single
copy of the operating system. The vast majority of ISV applications
will run just fine with virtually any other ISV application on the same
OS image. That really isn't the issue. The primary issue you need to be
concerned about is whether the ISV will make life difficult for you
when you call them for support. If they are going to force you to
reproduce the problem on a system where their application is the only
one running, it may not be worth the trouble. Of course, if you already
have set up or plan to set up independent test environments, maybe this
isn't such a big deal. In any event, this is one reason why we
typically recommend consolidating multiple instances of the same
application in an OS image. Virtually all ISVs will support running
multiple copies of their own applications in a single OS instance.
ISV Support of Dynamic Resource Allocation
We
are often asked whether an application will be able to take advantage
of resources that are dynamically added to a partition while the
application is running. How applications will handle the deallocation
of resources is another common question.
The
answer to the first question is yes in virtually all cases. The reason
is that the OS process scheduler is responsible for allocating
processes and threads on available CPUs. If additional CPUs are
allocated to an OS image, the scheduler will simply spread the
processes and threads out over more CPUs. This means there are fewer
processes sharing each CPU and each will have access to more processing
power. So the bottom line is that the application really doesn't have
any choice in the matter.
The same thing
can be said of the deallocation of a CPU. The process scheduler is
notified that a CPU is going to be removed. It will then migrate any
processes or threads that are on the run queue for that CPU to another
CPU in the OS image. Once that is done, the CPU is deallocated. Once
again, this is done completely transparently to the application.
The
reason we said that the answer is yes in virtually all cases is that
there is one case where a CPU could be added and not help. This is in
the lightly loaded case—when there are not enough processes or threads
in the run queue to consume all of the CPUs that are active. In this
case, adding another CPU would not help because there are no threads
that can be scheduled on it.
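A toy simulation makes both halves of this argument easy to see. The sketch below is not HP-UX scheduler code; it just deals a fixed set of runnable threads round-robin across however many CPUs are online, which is enough to show that an added CPU helps only when there are more runnable threads than CPUs, and that removing a CPU simply redistributes its run queue.

```python
# Toy illustration of the scheduler argument above (not HP-UX scheduler code):
# runnable threads are simply dealt round-robin over whatever CPUs are online.

def spread(threads, ncpus):
    """Return one run queue per online CPU, threads dealt round-robin."""
    queues = [[] for _ in range(ncpus)]
    for i, t in enumerate(threads):
        queues[i % ncpus].append(t)
    return queues

busy = [f"t{i}" for i in range(8)]   # eight runnable threads

print(spread(busy, 2))       # four threads share each CPU
print(spread(busy, 3))       # add a CPU: fewer threads per CPU, more power each

print(spread(busy[:2], 3))   # lightly loaded: the extra CPU's queue stays empty

print(spread(busy, 2))       # remove a CPU: its threads just migrate back
```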
ISV Licensing When Using VSE Technologies
The last major issue with ISV support is licensing. Many ISVs price their software based on the number of CPUs in the system where the software runs. An interesting question is this: When you have a 16-CPU
system and you are running multiple different applications there, how
much should the ISV license cost? The licensing issue impacts the
revenue stream of ISVs, so they are being very cautious in adopting
support for these technologies. That said, many ISVs are indeed
starting to support more flexible licensing schemes.
Virtually
all ISVs will support licensing by the number of CPUs in an nPar. In
other words, if you have a 16-CPU system partitioned into two eight-CPU
nPars, you will only need to pay for an eight-CPU license if you are
only running the software in one of the nPars. This is also becoming
true of vPars. At this point, most ISVs will recognize that if you are
running their application on only one vPar on a system, you should not
have to pay for CPUs that the application can't access. You should work
with your ISVs to make sure they understand the maximum CPU count
configured for each vPar and make sure they will only charge you for
that amount.
VMs are a more complex issue. Consider the sweet spot of running, say, four VMs, each with four virtual CPUs, on a system or nPar with four physical processors. If you have an application in one of the VMs, it can realistically get access to all four physical CPUs at times. Therefore, you will probably always have to pay for a four-CPU license if you are running a VM with four virtual CPUs. Now consider the case where you have the same application in each of the four VMs. Should you have to pay for a 16-CPU license, even though the physical system only has four CPUs? Clearly this wouldn't be fair, and most ISVs will recognize this. However, most of them don't yet have firm across-the-board policies for how VMs should be handled. Therefore, if you are considering using VMs, you should engage the ISV's sales organization and work out a fair arrangement.
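To see how much is at stake, it helps to work the CPU counts for each of the scenarios above. The counting policies and the per-CPU price in the sketch below are hypothetical, not any ISV's actual terms; the interesting number is the gap between counting the physical CPUs the application can actually reach and counting every virtual CPU.

```python
# Hypothetical licensing arithmetic for the scenarios above. The counting
# policies and the per-CPU price are illustrative, not any ISV's actual terms.

PRICE_PER_CPU = 10_000  # hypothetical price per licensed CPU

# 16-CPU system split into two 8-CPU nPars, application running in one nPar:
print("nPar license:", 8 * PRICE_PER_CPU)        # charge for 8 CPUs, not 16

# One vPar with a maximum of 4 CPUs on the same 16-CPU system:
print("vPar license:", 4 * PRICE_PER_CPU)        # charge for the vPar's maximum

# Four 4-vCPU VMs on a 4-CPU nPar, the same application in every VM:
print("VMs, per-vCPU count:", 4 * 4 * PRICE_PER_CPU)   # 16 "CPUs" -- hardly fair
print("VMs, per-physical count:", 4 * PRICE_PER_CPU)   # the 4 CPUs that exist
```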
Secure
Resource Partitions is the last partition option. Generally speaking,
ISVs don't yet provide licensing for a subset of an OS image, although
some of them are starting to investigate this.