In parallel computing, multiple processors cooperate on different parts of the same problem at the same time. In distributed computing, each processor has its own copy of the task and works on it independently. While one approach might seem obviously better than the other, each has its own advantages and disadvantages. This article traces the history of distributed computing, explains how it works today, and discusses some alternatives to it.
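The distinction above can be sketched in a few lines of Python. This is an illustrative sketch only; the function names and the toy sum-of-squares task are assumptions, not from the article. The parallel style splits one dataset across cooperating workers whose partial results must be combined, while the article's distributed style gives every worker its own complete local copy, so no inter-worker communication is needed.

```python
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares(chunk):
    """Compute a result over one collection of numbers."""
    return sum(x * x for x in chunk)

def parallel_style(data, workers=4):
    """Parallel model: one problem split across cooperating workers.

    Each worker handles a different slice of the data; the partial
    results must be combined, which is the coordination step the
    article refers to.
    """
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))

def replicated_style(data, workers=4):
    """The article's distributed model: every worker owns a full local
    copy of the task and data, so workers never communicate.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(sum_of_squares, [list(data)] * workers))
    return results[0]  # every copy independently reaches the same answer

data = list(range(100))
assert parallel_style(data) == replicated_style(data) == sum_of_squares(data)
```

Note that `replicated_style` does redundant work and must hold a full copy of the data per worker, which previews the locality cost discussed next.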
On the surface, distributed computing might seem superior to parallel systems: because the processors need no coordination or communication, the system is less complex and easier to program. However, this ease comes at a cost. Keeping a full copy of the data local to each processor slows performance compared with that of a parallel machine. There are ways to narrow this performance gap, but doing so is difficult.
In the 1980s, distributed systems gave way to parallel machines: processors could compute faster when they worked on different parts of a problem at the same time, and fast memory shared between processors replaced the need for local data storage. However, as machine architectures improved, the performance gap between shared-memory and distributed systems narrowed. In recent years, a growing number of companies have brought distributed computing back into vogue. Intel has been a leader in this movement with its Many Integrated Core (MIC) product line, which focuses on taking advantage of local storage rather than faster, larger shared memory. While MIC products are targeted at the commercial computing market, their instruction sets also make them viable for scientific computation.