[GRLUG] Is Sun Solaris on its deathbed?

Adam Tauno Williams awilliam at whitemice.org
Wed Sep 24 20:22:42 EDT 2008


> ZFS - How big do you have to get before GFS and LVM2 become problematic? 

The vast majority of companies don't even need GFS.

> Amway (Alticor) here in MI uses a lot of AIX for DB2 and Oracle stuff -
> Corporations need a solution that is marketed and comes with a suit.
> ZFS is cool, but it really only matters for the 0.1%.
> I agree with the point, and would add - it sometimes frustrates me
> when people say "Bladecenter ... bla bla bla (enterprise technology)"
> - I don't need or want a proprietary blade center; I hate my
> proprietary 2U servers enough - 91% of the companies out there are
> SMBs, and most SMBs have 10 or fewer servers.

I think blade centers are an excellent solution, but the up-front cost
is rather steep; it doesn't make economic sense unless you are looking
at an almost-full chassis, which in most companies could replace the
entire data center.  Not all of them are proprietary: Intel makes a
fairly open design that is resold by various vendors.
<http://www.siliconmechanics.com/c1048/Bladeform-5100-Series.php>
<http://www.siliconmechanics.com/c1105/Bladeform-9100-Series.php>
These have the advantage that you don't pay again for redundancy and
management in every server, since the redundancy lives in the chassis;
the network fabric is in the chassis as well, so you end up with less
hardware and fewer cables.
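
Back-of-the-envelope, with made-up prices (nothing below comes from a
real quote), here is why a half-empty chassis never pays off:

# Hypothetical prices, purely for illustration -- not from any vendor.
CHASSIS_COST = 8000.0   # chassis: shared PSUs, fans, switching, management
BLADE_COST = 2500.0     # one blade, little redundancy hardware of its own
RACK_1U_COST = 3500.0   # standalone 1U buying its own redundant PSUs/NICs

def blade_total(n: int) -> float:
    """Total cost of n servers as blades: the chassis is paid for once."""
    return CHASSIS_COST + n * BLADE_COST

def rack_total(n: int) -> float:
    """Total cost of n standalone 1Us: redundancy is re-bought each time."""
    return n * RACK_1U_COST

for n in range(1, 15):
    marker = "  <-- blades cheaper" if blade_total(n) < rack_total(n) else ""
    print(f"{n:2d} servers: blades ${blade_total(n):8,.0f}"
          f"  1U ${rack_total(n):8,.0f}{marker}")

With those invented numbers the chassis doesn't win until the ninth
server, which is exactly the "almost full chassis" problem.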

For our virtualization project we ended up going with a pair of R258s.
<http://www.siliconmechanics.com/i14080/UIO-server.php>   The
performance and support of these have been excellent.

> Server Hatred Story #1: A broken OptiPlex motherboard; while
> attempting to restore the RAID array, Dell's solution: buy an
> identical OptiPlex.
> Server Hatred Story #2: A single HP ProLiant refuses to boot without
> bypassing a warning screen indicating a fan has failed.  In fact no
> fan has failed or fallen below its RPM range; the zone sensor has
> failed.  To replace one diode, you must replace 8 fans and a circuit
> board.  After several calls to HP and a week gone by, I got the new
> part, installed it ... fixed.

Up until very recently we were a straight-up IBM shop, and we never
lost a server to a component failure that wasn't environmental.

> The other thing I run into a lot is that people think they need to
> upgrade technologies when they just aren't using the technology they
> already have correctly or to its potential.

It is often more cost-effective to replace aging equipment than to take
the time to finesse it; it is also important to account for the
increased probability of failure as equipment ages.
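
To make that concrete, a toy expected-cost model; every number below is
invented, and real failure-rate curves vary by vendor and environment:

# All figures invented, just to make the replace-vs-keep reasoning concrete.
BASE_AFR = 0.03            # assumed annualized failure rate in year one
AFR_GROWTH = 1.5           # assumed per-year growth as components wear
FAILURE_COST = 20000.0     # assumed cost of one failure: downtime, labor, parts
REPLACEMENT_COST = 3000.0  # assumed price of a new commodity box

for age in range(1, 8):
    afr = BASE_AFR * AFR_GROWTH ** (age - 1)
    expected_loss = afr * FAILURE_COST
    verdict = "replace" if expected_loss > REPLACEMENT_COST else "keep"
    print(f"year {age}: AFR {afr:5.1%}, "
          f"expected annual loss ${expected_loss:6,.0f} -> {verdict}")

The crossover lands around year five here; the point is only that the
rising failure rate, not attachment to the hardware, should drive the
decision.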

> My CAD designers always complained of slow file access - when I
> started at the company, their switch architecture (daisy-chaining
> 1 Gb backplanes) left a total backplane of 2 gigabits/sec across all
> 140 workstations.  Never mind the fact that purchasing a $700
> all-gigabit switch with a 96 gigabit backplane fixed the problem -
> they wanted to upgrade all of the $1200 100 Mbit Adtran switches.
> (sigh)

Yep, we run used or non-prime-brand switches; they work just the same.
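
The per-seat arithmetic from that story tells the whole tale; a quick
sketch using the figures quoted above:

# Figures taken from the story above; the point is the per-seat division.
WORKSTATIONS = 140
OLD_AGGREGATE_BPS = 2e9    # 2 Gbit/s total through the daisy chain
NEW_BACKPLANE_BPS = 96e9   # 96 Gbit/s switch backplane

for label, bps in (("daisy chain", OLD_AGGREGATE_BPS),
                   ("96G switch", NEW_BACKPLANE_BPS)):
    per_seat_mbps = bps / WORKSTATIONS / 1e6
    print(f"{label:12s}: ~{per_seat_mbps:6.1f} Mbit/s per seat, worst case")

Roughly 14 Mbit/s per seat on the old daisy chain versus about 686 on
the new switch: no wonder the CAD files crawled.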

-- 
          Consonance: an Open Source .NET OpenGroupware client.
 Contact: awilliam at whitemiceconsulting.com   http://freshmeat.net/projects/consonance/
