[mdlug] Virtual Server Time

Michael ORourke mrorourke at earthlink.net
Fri Mar 14 13:54:41 EDT 2014


Rather than discounting VirtualBox altogether, I would keep it as an option.  So what if it is slower than other hypervisors?  Best to look at the big picture and the scope of the project.  Sometimes building something simple that can easily be supported outweighs a complex, high-performance solution that is difficult to manage.  Now if you were building an enterprise-level virtual environment for hundreds of virtual servers, I would recommend something other than VirtualBox.  But 3 VMs can be managed fairly easily with VirtualBox. 
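For a handful of VMs, the whole lifecycle can be scripted with VBoxManage.  A rough sketch follows -- the VM name is made up, and the "run" wrapper just prints each command so the sequence can be sanity-checked on a box without VirtualBox installed (delete the wrapper to execute for real):

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "$@"; }

# Start a server VM with no GUI; manage it over ssh afterwards:
run VBoxManage startvm mailserver --type headless

# Snapshot before a risky update, so a broken upgrade becomes a
# one-command rollback:
run VBoxManage snapshot mailserver take pre-update
run VBoxManage snapshot mailserver restore pre-update
```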

-Mike

-----Original Message-----
>From: Robert Adkins II <radkins at impelind.com>
>Sent: Mar 14, 2014 9:25 AM
>To: 'MDLUG's Main discussion list' <mdlug at mdlug.org>
>Subject: Re: [mdlug] Virtual Server Time
>
>> >     I've done some reading on the subject and found that KVM is 
>> > essentially closer to "raw hardware" than any of the other 
>> > virtual machine systems.
>> > Part of my concern is maintaining the best performance with the 
>> > hardware that I will be using for this system.
>> 
>> Yes.  KVM is relatively easy to maintain and, if configured 
>> properly, is almost as fast as real hardware.  (I have a 
>> Windows XP vm that runs faster than XP on the raw hardware, 
>> but that's another story for later.)
>> 
>
>
>	I was considering VirtualBox, because of my familiarity with the
>system, but my research showed that it is the slowest option around
>for this kind of workload.
>
>> >     Goals Include:
>> >
>> >     Being able to pull data drives out of the Host Server and move 
>> > those to the back-up hardware, boot and pull up the virtual 
>> machines 
>> > and be back up and running in roughly a half-hour or less.
>> 
>> Um.  I'm not sure that I'd recommend this approach.  If the 
>> goal is to recover from a borked host quickly, shared storage 
>> makes more sense.  We can discuss this further if desired.
>> 
>
>	I would like to learn more about NAS; it just hasn't been something
>high on my list, since the pricing had been out of reach of the budget I
>typically have available. It's a very interesting concept and I think it
>would suit my needs quite well. I believe there are also additional
>benefits for backing up data, data throughput, and restoring/replacing the
>NAS in case of catastrophic failure.
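If a small NAS does enter the picture, the usual pattern is to export one share over NFS and point every virtualization host's VM image directory at it.  A sketch of the mount -- the hostname, export path, and mount point below are placeholders, not anything from this thread:

```
# /etc/fstab entry on each virtualization host (names are examples):
nas:/export/vmimages  /var/lib/libvirt/images  nfs  defaults,_netdev  0 0
```

With the images living on the NAS, "moving drives to the backup server" turns into powering on the spare host and starting the same VMs; nothing physical has to move.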
>
>
>> >     Test server configuration/software updates with the ability to 
>> > toss out a broken system and go back to a still safe and fully 
>> > functioning server image.
>> 
>> There are several ways to approach this.  Selecting a good 
>> shared storage system will make it a snap.
>> 
>> >     Setup instructions so that an otherwise unskilled office worker 
>> > can be expected to move hard drives into the backup server so that 
>> > myself or a contract source can ssh into the backup host and effect 
>> > final corrections to bring the virtual servers back up, no 
>> matter where I am in the world.
>> 
>> This would not be needed if you have shared storage and you 
>> keep your servers running.  Otherwise you'd just need 
>> instructions to power on the reserve server and do everything 
>> else remotely.
>> 
>
>	This is another benefit of NAS that I would be interested in
>learning more about.
>
>> 
>> My first suggestion is to be sure to use LVM on any and all 
>> partitions of your physical hard drives (with the notable 
>> exception of /boot).  If possible, keep between 20% and 40% 
>> unused so you can expand partitions as needed.
>> 
>
>	LVM on the virtual machines, on the drives that will hold the VMs,
>or on the drives that will hold the data? I have been using a drive layout
>that has been very effective for me for a little over 10 years now; it has
>evolved slightly, but I have never run into any issues with it. There are
>probably a few ways to do some elements better, though.
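Taking the earlier advice at face value (LVM on the host's physical drives, with 20-40% of the volume group left unallocated), growing a partition later looks roughly like this.  The volume group and logical volume names are invented, and the "run" wrapper prints the commands instead of executing them, since lvextend needs root and a real volume group (remove it to run for real):

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "$@"; }

# Give /var another 10G out of the space deliberately left
# unallocated in the volume group, then grow the filesystem:
run lvextend -L +10G /dev/vg0/var
run resize2fs /dev/vg0/var
```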
>
>> 
>> Use atop and iftop on both the host and the vms to see what 
>> "normal" looks like before you need to use them for diagnostics.
>> If you see any resources are overworked, either plan for more 
>> hardware or scale down what each host does.
>> 
>
>	That is certainly part of my plan.
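Capturing a "normal" baseline is mostly a matter of logging samples while the system is healthy and replaying them later.  A sketch of the idea -- paths and intervals are examples, and the "run" wrapper prints rather than executes since atop/iftop may not be installed here (remove it to record a real baseline):

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "$@"; }

# Log one atop sample per minute for a day, while things are healthy:
run atop -w /var/log/atop/baseline.raw 60 1440

# Later, replay the baseline next to a live view to spot what changed:
run atop -r /var/log/atop/baseline.raw

# One-shot text-mode iftop reading of the busiest network flows:
run iftop -t -s 10
```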
>
>> I plan my hosts to have at least one memory hungry vm and one 
>> cpu stomper.  This balances the load well.  Also I have
>> 3 hosts running with several vms each.  If any host fails, I 
>> can bring its vms up on the other two machines quickly.
>> 
>
>	This is how I intend to configure the VMs:
>
>	1. Email/Proxy Server/DNS
>		This one will handle the basic MTA, the IMAP service, DNS
>and run Squid. It's going to be one of two machines that can "see" outside
>of the network.
>
>	2. SSL Email Server
>		I am partial to a particular MTA, and it is going to be
>easier, for now, to set up an MTA that uses SSL as a "full" separate
>server. This one will also "see" outside of the network.
>
>	3. File Server/Account Management Server
>		This will be the Windows domain controller. I plan on
>consolidating all user account information onto this server as well, with
>all other VMs asking this server for user authentication.
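If these three end up on KVM, each one can be stamped out with virt-install.  A sketch for the first VM only -- every name, size, and ISO path below is a placeholder, and the "run" wrapper prints instead of executing (remove it to actually create the VM):

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "$@"; }

# VM 1 (mail/proxy/DNS) as an example; repeat with different names
# and sizes for the other two:
run virt-install --name mail-proxy-dns --memory 4096 --vcpus 2 \
  --disk size=40 --os-variant generic --cdrom /path/to/install.iso
```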
>
>	-Rob
>
>_______________________________________________
>mdlug mailing list
>mdlug at mdlug.org
>http://mdlug.org/mailman/listinfo/mdlug


