I’m not sure what a Teraflop is, but these computers have a lot of them. The Blue Gene/L computer shown can do 360 Teraflops. Link
Update: Commenter Christophe puts this in perspective.
A teraflop measures a computer's ability to do Floating Point Operations Per Second. So 360 Teraflops = 360,000 gigaflops = 360,000,000 megaflops = 360,000,000,000 flops. 360,000,000,000 Floating Point Operations Per Second. That's 360 Billion.
Update 2: Kevin and Adam say it's 360 trillion.
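For anyone following along at home, the conversion is easy to sanity-check. A quick Python sketch walking 360 teraflops down through the SI prefixes:

```python
# Convert 360 teraflops down through the SI prefixes.
tera = 10**12
flops = 360 * tera

assert flops == 360_000 * 10**9        # 360,000 gigaflops
assert flops == 360_000_000 * 10**6    # 360,000,000 megaflops
assert flops == 360_000_000_000_000    # 360 trillion flops (short scale)

print(f"{flops:,} flops")
```

Christophe's chain stopped one factor of 1,000 short, which is why the total came out as billions instead of trillions.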
Wow!
Pax et bonum.
Apparently it can be scaled to 3 petaflops,
so yes, Zeke, we are measuring in petaflops now.
It is the new number one; officially it is three times faster than its predecessor.
In case you don't believe me, here's a link to it:
http://www.gizmag.com/go/7511/
"The Blue Gene/P system can be scaled to an 884,736-processor, 216-rack cluster to achieve three-petaflop performance"
few more weeks and they'll cure cancer
To put this into greater perspective, a Core 2 Duo processor has a *theoretical maximum* of 4 flops per clock per core (8 flops per clock for both cores) and runs somewhere between 1.8GHz and 2.66GHz.
That means an average (2GHz) Core 2 Duo operating at theoretical peak efficiency would put out about 16 gigaflops. For various reasons that peak never actually happens in real life; the Top500 scores are measured with a tool called Linpack that gets closer to real performance. Real performance probably peaks at about 70-80% of theoretical, which means about 12-13 gigaflops.
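The arithmetic above boils down to one multiplication. A minimal sketch (the function name and the 75% "realistic" factor are my own illustration; the 4-flops-per-clock figure is the Core 2 number quoted above):

```python
# Theoretical peak flops = clock rate x cores x flops issued per clock.
def peak_gflops(clock_ghz, cores=2, flops_per_clock=4):
    """Theoretical peak throughput in gigaflops for a multi-core CPU."""
    return clock_ghz * cores * flops_per_clock

peak = peak_gflops(2.0)     # 2GHz x 2 cores x 4 flops/clock = 16 gigaflops
realistic = peak * 0.75     # ~75% of peak, i.e. about 12 gigaflops measured

print(f"peak: {peak} gigaflops, realistic: {realistic} gigaflops")
```

Plugging in 2.66GHz instead gives a peak of about 21 gigaflops, which is why the high-end chips look noticeably better on paper.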
So...
1,900,000,000 flops -- Cray-2 supercomputer (1985)
12,000,000,000 flops -- average Core 2 duo performance
94,210,000,000,000 flops -- MareNostrum (#5)
360,000,000,000,000 flops -- Blue Gene/L (#1)
1,000,000,000,000,000 flops -- one petaflop.
As you can see, the top cluster systems are thousands of times more powerful than your average desktop system, and your desktop has the advantage of not having to talk to other systems to achieve that performance! These clusters take a performance hit from the individual nodes having to communicate with each other.
Anyway, supercomputing has come a long way but still has plenty of room to grow.
The term "billion" means different things depending on which English you speak. It comes down to something called the "short and long scale numerical systems", which step their terminology up in units of 1,000 (short scale) or 1,000,000 (long scale). You can read about it here (http://en.wikipedia.org/wiki/Long_and_short_scales), but here's the gist:
10^9: Americans (and Canadians) call this a "billion", but Britons (at least old ones) call it a thousand million. Sometimes the term "milliard" is used, but that is mostly in non-English speaking countries.
10^12: Americans call this a "trillion", whilst Brits call it a "billion".
10^15: Americans call it a quadrillion; Brits call it a thousand billion or sometimes a "billiard", but that is pretty archaic.
10^18: Americans call it a quintillion; Brits call it a trillion.
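The list above can be written out as a small lookup table (a Python sketch; the dictionary layout is just my own illustration of the short/long naming from the list):

```python
# Short-scale vs. long-scale names for the same powers of ten.
scales = {
    10**9:  ("billion",     "thousand million (milliard)"),
    10**12: ("trillion",    "billion"),
    10**15: ("quadrillion", "thousand billion (billiard)"),
    10**18: ("quintillion", "trillion"),
}

for value, (short, long_) in sorted(scales.items()):
    exponent = len(str(value)) - 1  # number of zeros in the value
    print(f"10^{exponent}: short scale = {short}, long scale = {long_}")
```

So a long-scale "billion" (10^12) is a thousand times larger than a short-scale one (10^9), which is exactly the confusion in the comments above.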
Reading the Wiki article, it seems like the Brits themselves have been changing since 1974 to the "short scale" system used by the US, Canada, Australia, and other English-speaking countries. Most of continental Europe continues to use the long scale (like Christophe).
Straight talk from Sid.