[mdlug] Virtual Server Time
Carl T. Miller
carl at carltm.com
Thu Mar 13 18:44:36 EDT 2014
Robert Adkins II wrote:
> I'm in the process of building a test system that I will be using to
> test a few elements of virtual servers to eventually replace the two
> servers that I'm running here.
Very nice. I've already replaced several physical servers
with KVM virtual servers. It's well worth the effort.
> I've done some reading on the subject and found that KVM is
> essentially closer to "raw hardware" than all of the other
> virtual machine systems.
> Part of my concern is maintaining the best performance with the hardware
> that I will be using for this system.
Yes. KVM is relatively easy to maintain and, if configured
properly, is almost as fast as real hardware. (I have a Windows
XP VM that runs faster than XP on the raw hardware, but that's
another story for later.)
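If you want to verify that the host CPU exposes the hardware
virtualization extensions KVM depends on, a quick check is:

  # Nonzero output means the CPU advertises Intel VT-x (vmx) or
  # AMD-V (svm); zero means KVM would fall back to slow emulation.
  $ egrep -c '(vmx|svm)' /proc/cpuinfo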
> Goals Include:
>
> Being able to pull data drives out of the host server and move those
> to the backup hardware, boot, pull up the virtual machines, and be
> back up and running in roughly a half-hour or less.
Um. I'm not sure that I'd recommend this approach. If the
goal is to recover from a borked host quickly, shared storage
makes more sense. We can discuss this further if desired.
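To give you a taste of why: with the disk images on storage that
both hosts can reach, a planned move of a running guest is a single
command (the hostname and guest name here are placeholders):

  # Live-migrate a running guest to the backup host, no downtime.
  $ virsh migrate --live fileserver qemu+ssh://backuphost/system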
> Test server configuration/software updates with the ability to toss
> out a broken system and go back to a still safe and fully functioning
> server image.
There are several ways to approach this. Selecting a good
shared storage system will make it a snap.
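For instance, if a guest's disk is a qcow2 image, libvirt can
snapshot and roll back the whole machine. A sketch, with made-up
names:

  # Before the risky update, capture the current state.
  $ virsh snapshot-create-as testvm pre-update "before samba upgrade"

  # If the update breaks things, roll back and try again.
  $ virsh snapshot-revert testvm pre-update

  # Once you're satisfied, drop the snapshot.
  $ virsh snapshot-delete testvm pre-update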
> Setup instructions so that an otherwise unskilled office worker can be
> expected to move hard drives into the backup server so that I or a
> contract source can ssh into the backup host and effect final corrections
> to bring the virtual servers back up, no matter where I am in the world.
This would not be needed if you have shared storage and you
keep your servers running. Otherwise you'd just need instructions
to power on the reserve server and do everything else remotely.
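The remote part of that procedure is short. Roughly, with
placeholder names:

  # From anywhere with ssh access to the reserve host:
  $ ssh root@backuphost

  # Confirm the guests are defined, then start them.
  $ virsh list --all
  $ virsh start fileserver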
> Things that I believe will be of help:
>
> Multiple NICs in the host server.
Perhaps. If you are using network attached storage, then yes.
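The usual pattern is to give the storage traffic its own NIC and
subnet so guest traffic can't starve it. A rough sketch for a
RHEL-family host (device names and addresses are assumptions):

  # /etc/sysconfig/network-scripts/ifcfg-eth1 -- dedicated storage NIC
  DEVICE=eth1
  BOOTPROTO=static
  IPADDR=192.168.50.11
  NETMASK=255.255.255.0
  ONBOOT=yes

  # /etc/fstab -- mount the NAS over that subnet
  192.168.50.5:/export/vm  /vm  nfs  defaults,_netdev  0 0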
> Running the host OS on a mirrored SSD drive. The virtual machines might
> benefit from being on their own hard drives, with the data each would be
> sharing also on separate drives???
Sure. SSDs are good, even if they can't survive as many write
operations as mechanical hard drives. Be sure to investigate
best practices, including using the "noatime" option for ext
filesystems.
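For example, noatime is just a mount option in /etc/fstab (the
device and mountpoint below are placeholders):

  # /etc/fstab -- skip access-time updates to save writes on the SSD
  /dev/vg0/root  /  ext4  defaults,noatime  0 1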
> First Hurdle:
>
> I have a test server loaded up and connected to the network via a
> bridge on the test host. My intention is for this to be a Samba server,
> and the host has a separate bare hard drive, already formatted, to act
> as the file system to be shared via the virtual Samba server. I will be
> working on this element tomorrow morning.
>
> If there are any pointers for what I should look at for the host
> server hardware, etc., etc., I would greatly appreciate it.
My first suggestion is to be sure to use LVM on any and all
partitions of your physical hard drives (with the notable
exception of /boot). If possible, keep between 20% and 40%
unused so you can expand partitions as needed.
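That free space is what makes later growth painless. Assuming a
volume group vg0 with an ext4 logical volume for /var:

  # Check how much space is left in the volume group.
  $ vgs vg0

  # Grow the logical volume by 10G, then the filesystem, online.
  $ lvextend -L +10G /dev/vg0/var
  $ resize2fs /dev/vg0/var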
Plan your partitions so you won't need to nest mountpoints.
For example, I wanted /var/lib/libvirt on a separate partition,
and I already had /var separated. So I created a /vm mountpoint
then configured libvirt to use /vm instead of /var/lib/libvirt.
Of course this meant updating SELinux for the new directory.
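For reference, the SELinux piece looks roughly like this on a
RHEL-family system (virt_image_t is the type libvirt expects for
image directories; adjust for your distro):

  # Tell SELinux that /vm holds virtual machine images.
  $ semanage fcontext -a -t virt_image_t "/vm(/.*)?"

  # Apply the new context to everything already under /vm.
  $ restorecon -Rv /vm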
Use atop and iftop on both the host and the VMs to see what
"normal" looks like before you need them for diagnostics.
If you see that any resource is overworked, either plan for more
hardware or scale down what each host does.
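For example, record a baseline now that you can replay later (the
log path is just a suggestion, and the bridge name is assumed):

  # Log a sample every 60 seconds to a raw file.
  $ atop -w /var/log/atop/baseline.raw 60

  # Replay the recorded baseline later for comparison.
  $ atop -r /var/log/atop/baseline.raw

  # Watch live traffic on the bridge the guests use.
  $ iftop -i br0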
I plan my hosts to have at least one memory-hungry VM and
one CPU stomper. This balances the load well. Also, I have
three hosts running with several VMs each. If any host fails,
I can bring its VMs up on the other two machines quickly.
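Before bringing a failed host's guests up elsewhere, it's worth a
quick check that the surviving host has the headroom (the guest
name is a placeholder):

  # Total CPUs and memory on the candidate host.
  $ virsh nodeinfo

  # Memory currently free for new guests.
  $ virsh freecell

  # What the incoming guest will demand.
  $ virsh dominfo mailserver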
c