[mdlug] Parallella: A Supercomputer For Everyone by Adapteva — Kickstarter

Adam Tauno Williams awilliam at whitemice.org
Fri Oct 12 10:54:53 EDT 2012


On Thu, 2012-10-11 at 16:18 -0400, Aaron Kulkis wrote:
> Adam Tauno Williams wrote:
> > Software really does have to be tweaked to get the bang-for-the-buck
> > from this type of setup.   And you are still going to run up against
> > other bottlenecks - primarily I/O and network.  Now you just have 100
> > concurrent processes making demands of those subsystems rather than 10.
> > End result is that it just doesn't go that much faster.
> The problem is that these were typically sold as "this is your
> database server"...and if the database code isn't tweaked to
> the nth degree to utilize all of the cores, then yes, it's not
> very cost efficient, because a lot of the cores are sitting
> around idle.
> On the other hand, with a raft of processes, all of which are
> completely independent of each other, without any synchronization
> requirements causing processes to go to sleep until a sibling
> process catches up, these problems go away.

"without any synchronization requirements"

And with that you've just eliminated a huge portion of the problem
domain, certainly things like database servers, where transactional
consistency is key.
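
For what it's worth, here is a rough sketch of the difference (plain
Python multiprocessing; the function names and the amount of work are
purely illustrative, nothing to do with what Adapteva ships): four
processes doing completely independent CPU work, versus the same work
forced through one shared lock, which is a crude stand-in for
transactional writes.

import multiprocessing as mp
import time

def independent_work(n):
    # Pure CPU work, no shared state: scales with the number of cores.
    total = 0
    for i in range(n):
        total += i * i

def serialized_work(lock, n):
    # Same work, but every process must hold one shared lock,
    # so the workers effectively run one at a time.
    with lock:
        total = 0
        for i in range(n):
            total += i * i

def run(target, args, procs=4):
    workers = [mp.Process(target=target, args=args) for _ in range(procs)]
    start = time.perf_counter()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    lock = mp.Lock()
    n = 2000000
    print("independent:", round(run(independent_work, (n,)), 2), "s")
    print("serialized :", round(run(serialized_work, (lock, n)), 2), "s")

With four processes the independent batch finishes in roughly the time
of a single task; the serialized batch takes roughly the sum of all
four, because the lock turns the whole thing back into a serial job.
That is exactly the class of workload a pile of cheap cores does not
help with.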

> Using massively multi-core computers as general compute
> servers works very well.  I've seen this with Sequent
> (bought out by IBM about 10-12 years ago for their
> NUMA - Non-Uniform Memory Access) with their machines
> made of large arrays (32 ~ 128) of off-the-shelf Intel
> x86 CPUs going all the way back to the 80386 days.
> They were very cost effective, compared to other makers'
> super-minis.  Load averages, even with dozens of students
> logged in, mostly running X11 graphics terminals, rarely
> rose to 2.

And bits of that technology have filtered into workaday computers.

> >> Depending on machine load, that single-threaded app might STILL
> >> get more clock-cycles (since CPU contention goes way down due to
> >> the plethora of CPUs available),
> > Nah.  CPUs like the i-family are very smart and internally concurrent.
> > Maybe if you have a cluster of really good CPUs, but a shoebox of ARMs
> > is going to get its butt kicked.
> >> and therefore complete faster
> >> than on a low-core, high clock-rate CPU.
> >> I have a 2006-era dual core... it's constantly getting bogged down
> >> whenever I want to do a couple CPU intensive things simultaneously.
> > A 2006 vintage machine [assuming it wasn't state-of-the-art at the time]
> > is going to hit lots of bottlenecks.  Most notably the front-side-bus
> > and the speed of the RAM.
> >> I don't need parallelized software...what I need are parallel cores.
> > An i7 can give you eight, and that is a lot.  A dual i7 can give you 16.
> Can they do it at 5W of power consumption?

Nope.  But you are moving the goalposts.  I don't care a whit about
power consumption.

If you want/need a supercomputer then you very much want/need to crunch
a pile of data as quickly as possible.

> > My laptop has an i7-2670QM CPU and I peg three cores @ 100% and it
> > remains very responsive.  If I swamp the hard-drive it turns into a
> > sled.
> And it's probably sucking down 150~200 W during those moments.

I doubt it is that high [given my laptop can work hard on the battery
for *hours*], but it is probably up there.  Making watts is easy,
though.
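
As for swamping the hard drive, the same kind of rough sketch shows why
more processes don't help once the disk is the bottleneck (again
illustrative Python; the file sizes and counts are invented for the
example):

import multiprocessing as mp
import os
import tempfile
import time

def cpu_task():
    # CPU-bound: more cores genuinely help.
    total = 0
    for i in range(2000000):
        total += i * i

def disk_task():
    # Disk-bound: every worker waits on the same drive, so extra
    # processes mostly add contention rather than throughput.
    with tempfile.NamedTemporaryFile() as f:
        for _ in range(100):
            f.write(os.urandom(64 * 1024))
            f.flush()
            os.fsync(f.fileno())

def run(target, procs=4):
    workers = [mp.Process(target=target) for _ in range(procs)]
    start = time.perf_counter()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    print("cpu-bound :", round(run(cpu_task), 2), "s")
    print("disk-bound:", round(run(disk_task), 2), "s")

Throw a hundred cores at the second case and it still goes at the speed
of the one disk, which is the point about the other bottlenecks above.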


