[NTLUG:Discuss] Re: looking for raid & controller advice -- RAID analysis, system v. peripheral interconnect
Bryan J. Smith
b.j.smith at ieee.org
Sat Dec 4 21:30:01 CST 2004
On Sat, 2004-12-04 at 21:26, Robert Pearson wrote:
> None of this reply really applies to Kevin's original request. It
> applies to questions I have that were raised by answers given to
> Kevin. Maybe it should be a new topic?
That's why I like to "append" to the original subject line.
It gives a hint of what minor (or major) changes are made in the
follow-up(s), but it still remains a part of the original thread when
sorted by subject.
I get chastised either way, by people who actually differ on whether a
subject should or shouldn't be changed. So I just stick with this
approach, which maps back to the old O'Reilly Discussion Guidelines from
UseNet long ago. It works as well for mailing lists as it did for
UseNet, because ultimately the header IDs inter-relate messages for
mailing list servers, archivers and readers, just like NNTP servers do.
> I saw a big discussion on RAID-0+1 versus RAID-1+0, which seems to be
> RAID-10. In that discussion RAID-1+0 (RAID-10) was determined to be
> the better solution because of recovery from the loss of a drive.
> RAID-1+0 (RAID-10) was stated to be slightly slower than RAID-0+1.
Yes, there is a difference -- whether you stripe or mirror first.
And there are differences in recovery/performance, at least when it
comes to software RAID.
In the case of 3Ware, like most _hardware_ RAID implementations, the
stripe and mirror are not handled as separate layers. The operation is
integrated into a single logic.
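
To make the recovery difference concrete, here's a quick Python sketch
(my own illustration, assuming eight disks) that enumerates every
possible two-disk failure for each layout:

  # Illustration only: compare how many two-disk failures each layout
  # survives on 8 disks.
  from itertools import combinations

  DISKS = range(8)

  def raid10_survives(failed):
      # RAID-1+0: stripe across four mirrored pairs (0,1), (2,3), ...
      # The array dies only if BOTH disks of some pair fail.
      pairs = [(2 * i, 2 * i + 1) for i in range(4)]
      return not any(a in failed and b in failed for a, b in pairs)

  def raid01_survives(failed):
      # RAID-0+1: mirror of two 4-disk stripes, disks 0-3 and 4-7.
      # One failure degrades a whole stripe, so the array dies as soon
      # as each side has lost a disk.
      return not (any(d < 4 for d in failed) and any(d >= 4 for d in failed))

  for name, survives in (("RAID-1+0", raid10_survives),
                         ("RAID-0+1", raid01_survives)):
      ok = sum(survives(set(c)) for c in combinations(DISKS, 2))
      print(f"{name}: survives {ok} of 28 two-disk failures")

On those eight disks, RAID-1+0 survives 24 of the 28 two-disk failures,
RAID-0+1 only 12. That's the recovery argument for mirror-then-stripe
in a nutshell.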
> I also heard a discussion where it was stated that "The Best of All
> Possible Worlds" for RAID and Storage would be to have RAID-5
> configured in the Storage hardware box and RAID-1+0 (RAID-10) in
> software on the host. Is this possible?
Yes. Especially if you put different hardware controllers on different
PCI-X channels.
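
If you want a feel for the capacity math of that layering, here's a
rough Python sketch (the box count, drive count and sizes are made-up
assumptions, not anyone's actual setup):

  # Hypothetical layout: four hardware RAID-5 boxes of 6 x 300 GB each,
  # presented as four LUNs that the host then mirrors and stripes
  # (RAID-1+0) in software.
  drives_per_box, drive_gb, boxes = 6, 300, 4

  lun_gb = (drives_per_box - 1) * drive_gb  # RAID-5 gives up one drive/box
  usable_gb = (boxes // 2) * lun_gb         # software mirror halves the LUNs
  raw_gb = boxes * drives_per_box * drive_gb

  print(f"raw {raw_gb} GB -> usable {usable_gb} GB ({usable_gb / raw_gb:.0%})")

You pay for it in capacity (about 42% usable here), but any single box
can fail outright and the host mirror keeps running.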
> Did I hear all this wrong or mix it up in my head?
No, you're thinking ahead of where I was. I didn't want to get in that
deep, but there's nothing wrong with going there.
> What is the best configuration for performance, Information High
> Availability (Data Availability) and Information Integrity (Data
> Integrity)?
Depends on your application. Network filesystem redundancy opens a
whole new can of worms. We're talking dozens of variables. So it,
again, depends on your application.
> I ask this because with the new bus standards like HyperTransport,
> PCIe, and RIO
Now remember to separate "system" interconnects from "peripheral"
busses. HyperTransport is a system interconnect, PCI-Express (PCIe) is
a peripheral bus.
HyperTransport typically allows multiple peripheral busses in a system.
At the same time, PCIe is sometimes used as a "cheap" alternative when
there is no real system interconnect (e.g., hanging off Intel's AGTL+
front-side bus). It's all an interesting discussion.
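
For a rough sense of scale, here's a back-of-the-envelope comparison in
Python (circa-2004 figures from memory, so treat them as assumptions):

  # HyperTransport: 16-bit link at 800 MHz, double data rate.
  ht_gbs = 2 * 800e6 * 2 / 1e9   # bytes/transfer * clock * DDR
  # PCIe 1.0: 2.5 GT/s per lane, 8b/10b coding -> 250 MB/s per direction.
  pcie_lane_gbs = 0.25

  print(f"HyperTransport 16-bit/800MHz: {ht_gbs:.1f} GB/s per direction")
  for lanes in (1, 4, 8, 16):
      print(f"PCIe x{lanes}: {lanes * pcie_lane_gbs:.2f} GB/s per direction")

The raw numbers end up comparable; the real difference is where they
sit. HyperTransport connects CPUs and bridges at the system level,
while PCIe lanes hang off a bridge as a peripheral bus.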
> along with the continuing increase in Areal Density,
> with no similar increase in Access Density, the right Storage
> configuration for your application will be critical to achieve
> sufficient throughput to satisfy all requests while maintaining
> recoverability. The hardware bandwidth will be there as---
> The way I read the review at---
> www.anandtech.com/systems/showdoc.aspx?i=2255
> ...is that faster CPUs (higher clock rates) are now giving RISC/Unix
> platforms real competition.
First off, AnandTech isn't testing heavy file server or multitasking
loads. Even AnandTech's recent database article still used the "old" kernel
2.4 (SLES 8), instead of kernel 2.6 (SLES 9) with all its Opteron/x86-64
enhancements.
But in reality, RISC/UNIX is largely dying below the 16-way space. The
R&D isn't being put there anymore, because it's hard to compete with the
economies-of-scale. About the only vendor that believes it can is IBM.
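
On your areal-versus-access-density point above, the trend is easy to
see in rough numbers (the drive specs below are my own ballpark
assumptions):

  # Access density = random IOPS per GB. Capacity (areal density) has
  # grown far faster than seek/rotation speed, so IOPS/GB keeps falling.
  drives = [
      ("late-90s 9 GB 10k RPM",     9, 120),  # (name, GB, ~random IOPS)
      ("2004-era 300 GB 10k RPM", 300, 140),
  ]
  for name, gb, iops in drives:
      print(f"{name}: {iops / gb:.2f} IOPS per GB")

Thirty-odd times the capacity behind roughly the same actuator. That's
why spindle count and RAID layout matter more than raw capacity when
you need to satisfy lots of concurrent requests.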
> Commodity priced motherboards with dual+CPUs and the new, fast bus
> are real contenders.
"fast bus" isn't where it's at. It's multiple points of connection.
One thing is for certain, AMD is completely _eliminated_ the "front side
bottleneck." Intel looks to be moving to that soon, possibly by 2006.
> Everywhere except the high end Mid-range and Enterprise. Clustered or
> Gridded they could be contenders in those areas as well.
Yes, it all depends. There are some limits to the "broadcast" approach
of HyperTransport, but companies like Cray have introduced optical links
where multiple 4-way systems can be put together with full-speed
HyperTransport links right on the system interconnect.
Not quite a full mesh (which the 4-way is), but very close.
> The reason I write all this is that I sense a real divergence coming.
> There was a major shift in the Fundamental IT paradigm in 2001.
> Another Fundamental IT paradigm shift is starting now and will be
> rolling full steam in 2005.
The shift is largely due to the fact that Moore's Law is dying,
something the SIA (Semiconductor Industry Association), among others,
expected by 2006.
The new way around Moore's Law, at least short-term, is multiple cores.
And that's where AMD is more prepared, thanks to how its CPUs
interconnect, while Intel has real problems with its aged AGTL+.
--
Bryan J. Smith b.j.smith at ieee.org
--------------------------------------------------------------------
Subtotal Cost of Ownership (SCO) for Windows being less than Linux
Total Cost of Ownership (TCO) assumes experts for the former, costly
retraining for the latter, omitted "software assurance" costs in
compatible desktop OS/apps for the former, no free/legacy reuse for
latter, and no basic security, patch or downtime comparison at all.