It has become apparent that the future of high-performance computing lies with massively parallel architectures. A variety of parallel hardware platforms already exist, but our ability to exploit the potential of these machines is limited by the difficulty of writing software of sufficient complexity.
There are two fairly distinct kinds of parallel architecture in use today: SIMD (single instruction, multiple data) and MIMD (multiple instruction, multiple data). A SIMD machine may have thousands of processors, but in each CPU cycle every processor must execute the same instruction, though each may operate on different data. It is relatively easy to write software for this kind of machine, since what is essentially an ordinary sequential program is broadcast to all the processors.
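The SIMD style can be illustrated with a data-parallel sketch: one instruction stream is applied in lockstep to many data elements. The sketch below uses NumPy's vectorized operations as a stand-in for the hardware lanes; the array values are illustrative, not drawn from the text.

```python
import numpy as np

# SIMD-style data parallelism: a single operation is broadcast across
# all data. Conceptually, each "processor" holds one element and
# executes the same instruction on its own value in lockstep.
data = np.arange(8)      # eight lanes, each holding one data element
result = data * 2 + 1    # the same multiply-add executed by every lane
print(result.tolist())   # -> [1, 3, 5, 7, 9, 11, 13, 15]
```

The programmer writes what looks like a sequential expression; the parallelism across lanes is implicit, which is why SIMD machines are comparatively easy to program.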
In the MIMD architecture, each of the hundreds or thousands of processors can execute different code, with all of that activity coordinated on a common task. However, no established art exists for writing this kind of software, at least not on a scale involving more than a few parallel processes. Indeed, it seems unlikely that human programmers will ever be capable of actually writing software of such complexity.
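The contrast with SIMD can be made concrete with a minimal MIMD-style sketch: two workers running entirely different code, coordinated on one task through a shared queue. The worker functions and values here are hypothetical, chosen only to show the pattern.

```python
import threading
import queue

# MIMD-style parallelism: each worker executes *different* code, yet
# all cooperate on a common task via shared communication (a queue).
channel = queue.Queue()

def producer():
    # One instruction stream: generate partial results.
    for i in range(4):
        channel.put(i * i)

def consumer(out):
    # A different instruction stream: combine the partial results.
    total = 0
    for _ in range(4):
        total += channel.get()
    out.append(total)

out = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(out,))
t1.start(); t2.start()
t1.join(); t2.join()
print(out[0])  # -> 14  (0 + 1 + 4 + 9)
```

Even this two-process example requires explicit coordination that has no counterpart in the SIMD sketch; scaling such hand-written coordination to thousands of distinct processes is the difficulty the text describes.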