Is Parallel Processing Dead?

the biased opinion of Hank Dietz, June 6, 1996

Perhaps you've noticed that, just as the more modest forms of parallel processing have become accepted as mainstream computing technology (e.g., pipelining, superscalar execution, etc.), parallel processing research has been taking some serious criticism. Sure, some of it is well deserved; a lot of research and early commercial parallel systems were firmly based on the principle that "If you build it, they will come" -- and few came. Despite that, parallel processing is a critical technology, fundamental advances are still being made, and there is certainly no need to be so serious. ;-)

It is in this spirit that Hank Dietz proudly presents this page full of parallel processing kitsch. Yes, this is a page full of photos of random promotional items from various supercomputer manufacturers... along with my (not Purdue's) strongly biased opinions about what happened to make those companies essentially leave parallel processing and/or scientific supercomputing.

They say that failure to understand history condemns one to repeat it. Although I'm just 36 years old, I'm somewhat of an "old timer" in the field of parallel processing; I've been active in parallel processing research for over a dozen years... and I have learned much from the history I've seen. For example, it was just five years ago that I very publicly (at ICPP) referred to some workstation clustering efforts as "radiation hazards" because of all the CRTs -- and now I've got two clusters with precisely the type of "video wall" I was thinking of when I made that comment. In short, I've learned. I'm writing this rather strange document so that others might also learn....


One of the best-known companies in parallel processing is/was Thinking Machines Corporation, which has recently emerged from Chapter 11 protection as a "data mining" software company that also sells a Sun SPARC ATM cluster (Globalyst server).

TMC truly had a vision for the future, but it was based on integer-only computation using Lisp.... Despite the multitude of flashing lights on their "blinking machines," TMC's early hardware proved to be remarkably dim when it came to floating point math. Although the later addition of floating-point hardware helped, the addition was an afterthought, and getting good performance meant dealing with a variety of low-level quirks. What is most strange is that this afterthought design cycle happened twice: once as the CM1 was upgraded to the CM2/CM200, and again with the addition of the infamous vector units to the CM5. My biased opinion is that their rather expensive machines were marketed too heavily on peak performance numbers that users found impossible to achieve.


Multiflow, which was scattered to the winds some years ago, has the dubious honor of having "done the right thing" just a little too early... Multiflow's VLIW (Very Long Instruction Word) design was essentially the forerunner of most superscalar technology. Rather than hoping that programs would be re-written to use parallel operations on big arrays (vectors), they wanted existing programs to take advantage of "a smart compiler and a dumb machine" to obtain significant speed-up using fine-grain parallelism.

Although the folks at Multiflow were basically right, they had some problems. One was that their hardware wasn't built using the newest technology, which significantly damaged price/performance. Perhaps the worst mistake, however, was again in marketing. To distinguish itself, Multiflow aggressively knocked vector computing -- but most of the scientific computing users it wanted as customers were in love with vector computing. They didn't like the Multiflow buttons very much.

Although Multiflow is no more, portions of their compiler technology were picked up by both Intel and HP.


ETA was the rather tentative spawn of a well-established mainframe manufacturer that didn't want to bet its future on the machine. Given this sentiment, it isn't surprising that ETA's machine was both late and unspectacular. The plug was pulled pretty quickly when ETA started to have trouble... "the new force in supercomputing" was unable to overcome the opposing forces of start-up friction.


Although many people view the USA as the source of all supercomputing, companies in many other countries also have recognized the importance of this field and have produced some very interesting machines. One of the earlier, and more interesting, systems came from a Canadian company called Myrias.

In many ways, Myrias led the way in using a shared-memory programming model implemented via page-fault mechanisms on top of conventional message-passing hardware, a technology that is now becoming important. Unfortunately, this type of simulated shared memory tends to yield a lot of unpleasant performance surprises; by ignoring this fact, Myrias got a bit of a reputation for not delivering the promised performance. The advertising pin shown was originally intended to poke fun at a common mispronunciation at ICPP one year, but in retrospect it was a little too close to reality.
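
To make the mechanism concrete, here is a rough, hypothetical sketch of the general page-fault trick -- not Myrias's actual system -- assuming a POSIX/Linux machine with mmap, mprotect, and SIGSEGV: "shared" pages start out with no access, and the fault handler is where a real system would exchange messages with the page's owner.

    /* Hypothetical sketch of page-fault-driven "shared" memory (POSIX/Linux).
     * A real distributed-shared-memory layer would fetch the faulting page
     * from a remote node; here the handler just fills it locally to show
     * the mechanism. */
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static size_t page_size;

    static void fault_handler(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        /* Round the faulting address down to its page boundary. */
        char *page = (char *)((uintptr_t)info->si_addr &
                              ~(uintptr_t)(page_size - 1));

        /* Grant access, then "fetch" the page contents -- a real system
         * would send a request message and copy in the reply. */
        mprotect(page, page_size, PROT_READ | PROT_WRITE);
        memset(page, 42, page_size);
    }

    int main(void)
    {
        page_size = (size_t)sysconf(_SC_PAGESIZE);

        /* Reserve a "shared" region with no access, so every first touch faults. */
        char *shared = mmap(NULL, 4 * page_size, PROT_NONE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = fault_handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        /* Touching a never-before-used page triggers the handler above. */
        printf("first byte of page 2 = %d\n", shared[2 * page_size]);
        return 0;
    }

The catch, of course, is that every fault moves a whole page, and two processors touching different variables on the same page can ping-pong it back and forth -- exactly the sort of unpleasant performance surprise mentioned above.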

Myrias is still around, selling their software for use on other machines.


Somewhat like Multiflow, Kendall Square Research had a bright architectural idea that was dimmed by a mediocre custom hardware implementation and bad marketing.

The KSR machines used custom cache coherence hardware to provide a shared memory; in many ways, their systems can be seen as the forerunner of the many current efforts to connect standard microprocessor modules by specialized cache coherence hardware. However, KSR's custom processors did not keep up with the performance of commodity microprocessors, and the large amount of custom hardware meant that stable products arrived too late. KSR's strange marketing strategy didn't help. For example, rather than presenting technical details, they put on a content-free puppet show at one Supercomputing exhibit; similarly, the little cardboard models of the machines were cute, but didn't really inspire confidence.


Although everyone knows of Cray, some years ago Cray became two separate companies. Within the past year, Cray Computer essentially died; Cray Research, Inc. is becoming a subsidiary of SGI (Silicon Graphics).

The quick summary is that Cray (both of them) primarily built big, black computers for big, "black" computer sites... but targeting that market didn't really give them much to build on for the wider commercial market. They set the standard in vector and shared-memory high-end computing, but had little that sold for under a million dollars... plus very hefty yearly maintenance fees.

Cray Research did attempt to branch into lower-end machines by buying a low-end Cray-compatible product line and later creating its own low-end systems, but the organization of the company just wasn't optimized for that kind of market. It will be very interesting to see what happens under SGI's guidance... it seems that SGI is pushing the idea that the Cray products will become the "binary compatible" (I'm quoting their press release -- I don't see binary compatibility when different processor instruction sets are used) high end of the SGI graphics workstation product line.


NCube, or however they happen to want it capitalized (currently, it is nCUBE), started life with a strong focus on making custom VLSI processor chips that would allow a large machine to be built with a minimum chip count. They did fairly well in building two generations of parallel computer based on their own custom processors and a hypercube interconnection... but the third-generation chips didn't happen on schedule, and floating-point price/performance started to get unpleasant.

Fortunately, that was about when they teamed up with Oracle. Since then, nCUBE has done fairly well as a supplier of "multimedia server" and "large scale decision support" systems. Those markets are larger, and less demanding of floating-point speed, than the traditional scientific supercomputing market... have they found a safer way to stay on the cutting edge of parallel processing?


In a lot of ways, MasPar never quite found themselves. Their MP1 and MP2 systems offered up to 16,384 SIMD processing elements in a custom-VLSI-based, well-engineered, reasonably priced, production-quality system. They even had very good software... at least by supercomputer standards. The problem was that the custom VLSI didn't give the system the peak "macho FLOPS" to capture the interest of many people, and the fact that the architecture and software environment were so "clean" made it a less interesting target for many research efforts. Just when the scientific community started to fall in love with MasPar, MasPar decided to discourage that market and target database applications.

The final blow came early this year, when MasPar avoided bankruptcy by canceling the MP3 development effort. Now MasPar is officially a "data mining" software company called NeoVista... and they'll still sell you an MP1 or MP2, but they aren't making any new hardware.


I know I've left out quite a few companies. Perhaps I also have been a bit harsh in expressing my opinions about the companies I've included. However, I believe there are several important lessons to be learned from the above: don't market machines on peak performance numbers that users can't hope to achieve; don't depend on so much custom hardware that stable products arrive too late or can't keep pace with commodity microprocessors; don't insult the kind of computing your prospective customers love; and being right a little too early doesn't pay the bills.

In summary, parallel processing is NOT DEAD, and people need to realize how important it is and get things on track before the lack of this understanding truly does kill the field within the USA.

It is mildly ominous that, for example, Japan still has several successful parallel processing supercomputer companies... such as NEC and Fujitsu. In fact, the fastest parallel supercomputer in the world (as of November 1994) was a Fujitsu machine. Fujitsu even makes nice kitsch -- like the toy in the photo, which spins only in one direction... theirs.



Interested in learning more about parallel processing or in presenting your latest parallel processing research results with a high-quality peer-reviewed conference publication? The 26th International Conference on Parallel Processing will be held August 11-15, 1997 in Bloomingdale, IL. The submission deadline is January 10, 1997 for hardcopy and January 20, 1997 for electronic submissions.


HGD
