[GRLUG] Raid, LVM, and Cheap Storage

Ben DeMott ben.demott at gmail.com
Tue Oct 14 14:01:15 EDT 2008


How long does it take to transfer the state of 39 VMs? (And yes, I
understand this may be over iSCSI.)
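For a rough sense of scale, here is a back-of-envelope sketch; the 20 GB
average image size and the ~100 MB/s usable gigabit throughput are my own
illustrative assumptions, not numbers from this thread:

```shell
# Back-of-envelope: wall-clock time to move 39 VM images over gigabit iSCSI.
# The 20 GB average image size and ~100 MB/s usable throughput (roughly 80%
# of gigabit line rate) are assumptions for illustration only.
VMS=39
GB_PER_VM=20
MB_PER_S=100
TOTAL_MB=$((VMS * GB_PER_VM * 1024))
SECS=$((TOTAL_MB / MB_PER_S))
echo "${TOTAL_MB} MB -> $((SECS / 3600))h $(( (SECS % 3600) / 60 ))m"
# prints: 798720 MB -> 2h 13m
```

Even with generous assumptions, bulk-moving that many images is an
hours-long job, which is why live migration over shared storage matters.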

I remember having 6 VMs on a pretty beefy ProLiant with plenty of memory,
and the IT manager kept moving my VMs off to older servers; 6 VMs in one
place was just too much for his brain to handle, I guess.

Thanks for your response Adam,

> In Linux this is furthered by the fact we have LVM2 and Software Raid
> > support in our Kernels.
>
> Every other current OS provides these as well.
>

Really? What does Windows Server 2003 offer at the kernel level similar to
the capabilities of LVM2?
http://www.redhat.com/magazine/009jul05/features/lvm2/
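To make the comparison concrete, here is a minimal sketch of the LVM2
features in question (pooled storage, online resize, snapshots). The device
names /dev/sdb, /dev/sdc and the names vg0/data are hypothetical, and all
of this requires root and real block devices:

```shell
# Sketch of core LVM2 operations; device and volume names are hypothetical.
pvcreate /dev/sdb /dev/sdc          # mark disks as physical volumes
vgcreate vg0 /dev/sdb /dev/sdc      # pool them into one volume group
lvcreate -L 100G -n data vg0        # carve out a logical volume
lvextend -L +50G /dev/vg0/data      # grow it online, no downtime
resize2fs /dev/vg0/data             # grow the ext3 filesystem to match
lvcreate -s -L 10G -n data-snap /dev/vg0/data   # point-in-time snapshot
```

Windows 2003 dynamic disks cover spanning and software RAID, but online
grow-and-snapshot of arbitrary volumes like this is where LVM2 stands out.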


> Layering Abstraction technologies:
> > Scenario -
> >  Raid 1+0
> >  LVM2
> >  ISCSI Partition
> >  Contains a Virtual File System.
> > The only situation in which this is a good idea, is if you are
> > mirroring the data to another Raid array.
> > So much can go wrong to corrupt data in this scenario that the
> > complexity can outweigh the benefits very quickly.
>
> It may be technologically more complex;  but management, and
> flexibility, is vastly improved.  It makes no sense not to do this.
> Almost our entire enterprise runs on LVM over iSCSI over RAID.  All
> these technologies are extremely reliable;  in many years I've *never*
> had data corruption resulting from LVM or RAID.  Only blown filesystem
> I've ever had was a ReiserFS - that was simple, don't use ReiserFS.
>
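For readers following along, one way the RAID -> iSCSI -> LVM layering
Adam describes can be assembled is sketched below. Device names, the
target IP, and volume names are all hypothetical, target-side export
configuration is omitted, and everything requires root:

```shell
# Target side: build the RAID 1+0 array that backs the iSCSI export.
# (Hypothetical member disks /dev/sdb..sde.)
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[bcde]

# Initiator side: discover and log in to the target, then layer LVM on
# top of the imported block device (assumed here to appear as /dev/sdf).
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node --login
pvcreate /dev/sdf
vgcreate sanvg /dev/sdf
lvcreate -L 200G -n vmstore sanvg
```

The management win Adam mentions falls out of the top layer: volumes can
be created, grown, and snapshotted on the initiator without touching the
array underneath.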

My comment was more or less for small-to-medium shops who aren't willing to
make the real investment and see virtualization as a 'savior'.  How many
people in IT know what you know and can support what you can support?
I run into a lot of people, and I haven't met very many who, off the shelf,
know much more about ESX than what the manual told them. Red Hat -
what?
Small/medium shops make that initial leap, then don't have the knowledge,
expertise, or budget to retain, pay, or hire individuals who can manage the
complexity of the technology itself.  And I did qualify my statement: "You
need to back up your data on another storage array if you are going to do
this, preferably not abstracted by a virtual file system; then the
combination above is a great solution."

Thanks for your responses.

> These questions are all part of creating a *legally required* data
> retention policy.  Acting contrary to an organization's data retention
> policy opens the possibility of *criminal* prosecution.  The fear of
> lawyers is very effective in making an organization get pro-active about
> such things.
> <http://www.whitemiceconsulting.com/node/157>
>

I couldn't agree more!
What a nice world you must live in, where companies are concerned with
legality and processes :)

At my last employer we used ESX - and any time it was possible, the data on
the machine was kept on an iSCSI SAN (databases, file storage, application
data, etc.).
Because I didn't have the resources to make the SAN box real-time redundant,
or to make the iSCSI switch redundant, we had a rack that contained six
500 GB LaCie drives connected through external SATA (eSATA). The virtual
machines' operating systems resided on these externally connected drives.
We then archived the operating system states on a regular basis.
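One common way to take such archives without prolonged downtime is an LVM
snapshot plus a compressed raw copy. This is a hypothetical sketch, not the
exact procedure we used; the volume /dev/vg0/vmroot and the backup path are
assumptions:

```shell
# Archive a VM's disk image via an LVM snapshot, so the copy is
# crash-consistent while the VM keeps running. Names are hypothetical.
lvcreate -s -L 5G -n vmroot-snap /dev/vg0/vmroot
dd if=/dev/vg0/vmroot-snap bs=4M | gzip > /backup/vmroot-$(date +%F).img.gz
lvremove -f /dev/vg0/vmroot-snap
```

The snapshot only needs enough space (-L 5G here) to absorb writes that
land on the origin volume while the copy runs.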
The one exception to this was Windows domain controllers: they can't be
offline or be rolled back (due to the possibility of USN rollback), and
because the DCs were redundant by their very nature - there were multiples
of them - their virtual machine OSes physically resided on the SAN box.

On Tue, Oct 14, 2008 at 1:18 PM, Tim Schmidt <timschmidt at gmail.com> wrote:

> On Tue, Oct 14, 2008 at 12:54 PM, Adam Tauno Williams
> <awilliam at whitemice.org> wrote:
> > A guy at OLF said he had 39 VMs on one box,  two and a half racks of
> > equipment down onto a single 3U box.
>
> 3U sounds like too much...  we have a box here with 8 cores, 32Gb ram
> (16x 2Gb DDR2 DIMMs), 4x 750Gb SATA drives, 2x Gbit NICs, and a free
> PCI-E x16 slot in 1U.
>
> --tim
> _______________________________________________
> grlug mailing list
> grlug at grlug.org
> http://shinobu.grlug.org/cgi-bin/mailman/listinfo/grlug
>