[GRLUG] Raid, LVM, and Cheap Storage

Adam Tauno Williams awilliam at whitemice.org
Tue Oct 14 07:03:23 EDT 2008


> > Only blown filesystem
> > I've ever had was a ReiserFS - that was simple, don't use ReiserFS.
> Maybe Hans can fix that, if they let him have a computer in his cell.
> He'll definitely have lots of free time, for the rest of his life!  ;-)

I think time has passed ReiserFS by anyway.  ext3 now supports
indexed directories, there is tmpfs, and even XFS has improved...
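For the curious, indexed (hashed b-tree) directories can be inspected and enabled with the e2fsprogs tools. A minimal sketch against a throwaway file-backed image, so nothing touches a real disk; the /tmp/demo.img path is purely illustrative, and on a real system you would point tune2fs at the actual device:

```shell
# Build a small file-backed ext3 image (no root needed for a regular file).
dd if=/dev/zero of=/tmp/demo.img bs=1M count=16 2>/dev/null
mke2fs -q -F -j /tmp/demo.img

# See whether dir_index (hashed b-tree directories) is already on.
tune2fs -l /tmp/demo.img | grep 'features'

# Enable it explicitly, then let e2fsck rebuild existing directory indexes.
# (e2fsck exits 1 when it modifies the filesystem, so tolerate that.)
tune2fs -O dir_index /tmp/demo.img
e2fsck -fD -y /tmp/demo.img || true
```

Recent mke2fs turns dir_index on by default, so the tune2fs step mostly matters for filesystems created years ago.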

> >> 2.) The only way of detecting a disk failure at a remote site is for
> >> someone to notice there is a 'red' or 'amber' light on.
> >> 3.) No one monitors system voltages
> >> 4.) No one monitors system temperatures.
> > OpenNMS! <http://www.opennms.org/index.php/Main_Page>  I guarantee if
> > the above have to be done manually, it just won't happen.
> Can OpenNMS detect a failed disk in an array?

Many, yes.
<http://www.opennms.org/index.php/Dell_OpenManage_Storage>

> >> 5.) No one performs read / access tests or write tests or head tests -
> >> if this is just done once a year you can almost always predict a drive
> >> failure
> > SMART will do this automatically on current systems.
> Can SMART monitor drives that are members of an array?
> I was under the impression it couldn't, because the OS only sees the
> virtual disk, not the individual drives.  (but I never tried either)
> Ditto for SMART above.

It depends on driver support for the array controller.  But, I believe,
the OP was advocating MD (software RAID), where the OS still sees the
individual drives and so retains full SMART support.
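With MD, smartd can be pointed at each underlying member directly, since /dev/md0 itself exposes no SMART data. A hedged sketch of what the relevant /etc/smartd.conf lines might look like (the device names are placeholders for the actual array members):

```
# Monitor each physical member of the MD array, not /dev/md0 itself.
# -a enables the default set of checks; -m mails root on failure;
# -s runs a short self-test nightly at 02:00 and a long test Saturdays at 03:00.
/dev/sda -a -m root -s (S/../.././02|L/../../6/03)
/dev/sdb -a -m root -s (S/../.././02|L/../../6/03)
```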

smartd posts its findings to the system message log, and anything,
including OpenNMS, can scrape that log.
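The scrape itself is a one-liner. A sketch using sample smartd lines written to a temp file so it is self-contained; the hostname, PID, and log lines below are illustrative, not from a real system, and on a real box you would grep /var/log/messages or the journal instead:

```shell
# Illustrative smartd syslog lines, showing the messages worth alerting on.
cat > /tmp/messages.sample <<'EOF'
Oct 14 06:00:01 fs1 smartd[812]: Device: /dev/sda, SMART Usage Attribute: 194 Temperature_Celsius changed from 118 to 117
Oct 14 06:30:01 fs1 smartd[812]: Device: /dev/sdb, FAILED SMART self-check. BACK UP DATA NOW!
Oct 14 07:00:01 fs1 smartd[812]: Device: /dev/sdb, 8 Currently unreadable (pending) sectors
EOF

# The scrape: pull out smartd lines that demand action, ignoring routine
# attribute chatter like temperature changes.
grep 'smartd\[' /tmp/messages.sample | grep -E 'FAILED|unreadable'
```

A monitoring system would run the same filter on a schedule and raise an event on any match.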

> >> Now I've pointed out all of the short sightedness I've experienced in
> >> my journeys - I'm sure the Linux IT community being as informed as it
> >> is will not be among these organizations I speak of - so I'll be
> >> interested to hear about your experiences and setups and uses of linux
> >> technology to come up with unique and cost effective storage
> >> environments.
> > We've recently started scrapping out all our old hardware and
> > consolidating on VMware ESX on a pair of Silicon Mechanics servers
> > connected to an EMC SAN via iSCSI.  Myriad physical servers is too much
> > of a maintenance burden, too hot, and too inflexible.  This frees up IT
> > to focus on interesting/useful problems.
> VMware is great for combining old hardware, especially servers with
> low resource requirements (like Linux firewalls).

I've consolidated servers with high resource requirements. :)  So far
I'm nothing but impressed with the performance; in part because even
"busy" servers spend a fair amount of time idle, and current servers
have *8* cores...

> And VMware ESXi Server is now FREE of charge!  :-)

Yep.


