Computer Architecture

Appointments made in the Department of Computer Science in 1985 led to a resurgence of research activity in computer systems in general and computer architecture in particular. Example projects include the Context Flow Architecture, the Edinburgh Sparse Processor, and Nigel Topham's work with a number of computer design companies.

Context Flow Architecture

One of the drivers of research in computer architecture is the desire to make efficient use of the computational power available. In early computers, peripheral operations and transfers between main memory and backing storage took place under direct program control, using instructions that required milliseconds for their completion. When the Manchester Atlas was being designed in the late 1950s, the disparity between processor and peripheral speeds was so great that it became clear that the processor would need to be able to switch rapidly between processes, leading to a multiprogramming system and the invention of virtual memory.

By the 1980s most high-performance processors were heavily pipelined and incorporated cache memory and branch outcome prediction mechanisms. This meant that discontinuities occurred at much finer levels of granularity, with events such as cache misses or mispredicted branches causing long instruction latencies. This led to the use of a micromultiprogramming strategy in which a process switch was initiated whenever a latency-inducing operation was encountered. At Edinburgh, Nigel Topham proposed the idea of a "context flow" architecture and led an examination of the design possibilities for uniprocessors and multiprocessors based on this concept. Details can be found in Volume 2 of the Journal of Supercomputing.
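
As a rough illustration of the micromultiprogramming idea, the Python sketch below switches to another ready context whenever the running one meets a latency-inducing event such as a cache miss. It is a toy software model, not Topham's design: the names workload and context_flow_schedule are invented here, and the useful work done between stalls is abstracted away.

    import heapq

    def workload(stalls):
        # Toy instruction stream: yields the length (in cycles) of each
        # long-latency event it meets, e.g. a cache miss, then completes.
        for latency in stalls:
            yield latency

    def context_flow_schedule(contexts):
        # Micromultiprogramming sketch: when the running context stalls it is
        # parked until the event resolves, and the processor switches to the
        # next context that is ready to run.
        clock = 0
        ready = [(0, name, gen) for name, gen in contexts]  # keyed on wake-up cycle
        heapq.heapify(ready)
        while ready:
            ready_at, name, gen = heapq.heappop(ready)
            clock = max(clock, ready_at)      # idle only if no context is ready
            try:
                latency = next(gen)           # run until the next stall
                print(f"cycle {clock:3d}: {name} stalls for {latency} cycles; switch context")
                heapq.heappush(ready, (clock + latency, name, gen))
            except StopIteration:
                print(f"cycle {clock:3d}: {name} finished")

    context_flow_schedule([("P0", workload([20, 5])),
                           ("P1", workload([12])),
                           ("P2", workload([7, 7]))])

In this sketch the processor is idle only when no context is ready, so the stall latencies of the individual processes overlap rather than accumulate, which is the motivation behind switching contexts at such a fine grain.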

The Edinburgh Sparse Processor

A collaboration with Ken McKinnon in the Mathematics Department led to a project investigating possible architectural mechanisms to support efficient processing of sparse vectors. Sparse vectors are an important feature of a number of computer applications, especially linear programming, a technique used commercially to optimise the outcome of a set of activities. Typically, only a small fraction of the elements of a sparse vector have non-zero values, so it is important to avoid wasting memory space and compute cycles on the zero values. The list vector mechanism built into the design of the Edinburgh Sparse Processor (ESP) does just this and also solves the problem of fill-in. Fill-in occurs when the result of an operation on sparse vectors contains more non-zero values than the operands did, so the storage needed for a vector can grow as a computation proceeds. Details can be found in the Proceedings of the 16th Annual International Symposium on Computer Architecture (ISCA '89).
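
To make fill-in concrete, the Python sketch below stores a sparse vector as a sorted list of (index, value) pairs holding only the non-zero elements. This is a software illustration rather than the ESP hardware mechanism, and the function name sparse_axpy is invented here; it shows how combining two sparse vectors can produce a result with more non-zero elements than either operand, which is exactly the growth a sparse-vector mechanism has to accommodate.

    def sparse_axpy(alpha, x, y):
        # Compute alpha*x + y, where x and y are sparse vectors stored as
        # lists of (index, value) pairs sorted by index, non-zeros only.
        result, i, j = [], 0, 0
        while i < len(x) or j < len(y):
            if j >= len(y) or (i < len(x) and x[i][0] < y[j][0]):
                result.append((x[i][0], alpha * x[i][1]))   # element only in x
                i += 1
            elif i >= len(x) or y[j][0] < x[i][0]:
                result.append((y[j][0], y[j][1]))            # element only in y
                j += 1
            else:                                            # indices match
                v = alpha * x[i][1] + y[j][1]
                if v != 0:                                   # keep the result sparse
                    result.append((x[i][0], v))
                i += 1
                j += 1
        return result

    x = [(2, 1.0), (7, 3.0)]          # two non-zero elements
    y = [(4, 2.0), (7, 1.0)]          # two non-zero elements
    print(sparse_axpy(2.0, x, y))     # [(2, 2.0), (4, 2.0), (7, 7.0)]
    # The result has three non-zeros, more than either operand: fill-in.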