Asynchronous Integrated Circuits Design – Making Chips Without Clocks (Part 2)

Asynchronous circuit design is not entirely new in theory or practice. It has been studied since the early 1940s, when the focus was mainly on mechanical relays and vacuum tube technologies. These studies produced two major theoretical models, the Huffman and Muller models, in the 1950s. Since then, the field of asynchronous circuits has gone through a number of high-interest cycles, accumulating a substantial body of work. However, the problems of switching hazards and ordering of operations encountered in early complex asynchronous circuits led to their replacement by synchronous circuits. Synchronous design has since emerged as the prevalent design style, with nearly all third-generation (and subsequent) computers based on synchronous system timing.


Despite the present unpopularity of asynchronous circuits in mainstream commercial chip production, and the problems noted above, asynchronous design remains an important research area. It promises, at least in combination with synchronous circuits, to drive the next generation of chip architectures and to deliver highly dependable, ultrahigh-performance computing in the 21st century.


The design of an asynchronous circuit follows the established hardware design flow, which involves, in order: system specification, system design, circuit design, layout, verification, fabrication and testing, though with major conceptual differences. A notable one is that designing an asynchronous system in an ad-hoc fashion is impractical. With a clock, as in synchronous systems, less emphasis is placed on the dynamic state of the circuit, whereas the asynchronous designer has to worry about hazards and the ordering of operations; with no clock edge to mask them, transient glitches can be captured as real data, as the sketch below illustrates. This means the design techniques used in synchronous design cannot simply be carried over to asynchronous design.
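To make the hazard concern concrete, here is a minimal Python sketch (illustrative only; the constants and function are invented for this article, not taken from any design tool). It models a circuit computing y = a AND (NOT a). Logically y is always 0, but because the inverter has a propagation delay, a rising edge on a briefly drives both AND inputs high and the output glitches.

INVERTER_DELAY = 2   # arbitrary time units

def simulate(a_events, horizon=10):
    # a_events: list of (time, value) changes on input 'a'
    a = not_a = y = 0
    pending = {}                                   # time -> list of (signal, value)
    for t, v in a_events:
        pending.setdefault(t, []).append(("a", v))
    for t in range(horizon):
        for sig, v in pending.pop(t, []):
            if sig == "a":
                a = v
                # the inverter's output only changes after its propagation delay
                pending.setdefault(t + INVERTER_DELAY, []).append(("not_a", 1 - v))
            else:
                not_a = v
        new_y = a & not_a                          # AND gate (its own delay ignored here)
        if new_y != y:
            note = "  <-- glitch: y should always be 0" if new_y else ""
            print("t=%d: y -> %d%s" % (t, new_y, note))
            y = new_y

simulate([(0, 0), (3, 1)])   # 'a' is low, then rises at t=3

Running this shows y pulsing high between t=3 and t=5, exactly the kind of hazard a clocked design would simply wait out but an asynchronous design must eliminate by construction.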



The design of an asynchronous circuit begins with some assumption about gate and wire delays. It is very important that the chip designer examines and validates this assumption for the device technology, the fabrication process, and the operating environment, all of which may affect the system's delay distribution throughout its lifetime. Based on the delay assumption made, several theoretical models of asynchronous circuits have been identified.


There is the delay-insensitive model, in which the correct operation of a circuit is independent of the delays in gates and in the wires connecting the gates, provided the delays are finite and positive. The speed-independent model, developed by D. E. Muller, assumes that gate delays are finite but unbounded while wires have no delay. Another is the Huffman model, which assumes that gate and wire delays are bounded and that the upper bound is known.


For many practical circuit designs, these models are limiting. For the examples in this discussion, the quasi-delay-insensitive (QDI) model, which combines the delay-insensitive assumption with the isochronic-fork assumption, is used. The latter assumes that the relative delay between two branches of a forked wire is less than the delay through a sequence of gates. The QDI model allows gates to have arbitrary delay and makes only relative timing assumptions on the propagation delays of certain signals that fan out to multiple gates.
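The article does not show gate-level examples, but the standard state-holding gate in speed-independent and QDI designs is the Muller C-element: its output copies the inputs when they agree and holds its last value when they disagree. Below is a short behavioural sketch in Python (illustrative only; the class and its names are invented for this post).

class CElement:
    # Behavioural model of a Muller C-element
    def __init__(self, initial=0):
        self.out = initial

    def update(self, a, b):
        if a == b:            # inputs agree: output follows them
            self.out = a
        return self.out       # inputs disagree: output holds its state

c = CElement()
for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
    print(a, b, "->", c.update(a, b))
# prints 0, 0, 1, 1, 0 -- the output only changes once both inputs have changed

This hold-until-both-agree behaviour is what lets handshake signals complete correctly no matter how slowly any individual gate switches.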


Over the years, researchers have developed methods for synthesizing asynchronous circuits whose correct functioning does not depend on gate delays and which permit multiple signals to switch concurrently. The VLSI computations are modeled using Communicating Hardware Processes (CHP) programs that describe their behavior algorithmically, and QDI circuits are synthesized from these programs through semantics-preserving transformations.
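As a rough illustration of the CHP style (the syntax is simplified, and the Python emulation below is only an analogy invented for this article, not what a synthesis flow produces), the canonical one-place buffer is written *[ L?x ; R!x ], meaning "repeat forever: receive x on channel L, then send it on channel R". The snippet mimics that behaviour with bounded queues standing in for handshaking channels.

import threading, queue

L = queue.Queue(maxsize=1)   # input channel
R = queue.Queue(maxsize=1)   # output channel

def one_place_buffer():
    # CHP: *[ L?x ; R!x ]
    while True:
        x = L.get()          # L?x -- receive on L (blocks until data arrives)
        if x is None:        # sentinel, only to end this demo
            break
        R.put(x)             # R!x -- send on R (the bounded queue approximates the handshake)

t = threading.Thread(target=one_place_buffer)
t.start()

for v in [1, 2, 3]:
    L.put(v)
    print("got from R:", R.get())
L.put(None)
t.join()

In a real QDI flow the same CHP text would be compiled, step by step, into handshaking circuits built from gates such as the C-element above, with each transformation preserving the program's meaning.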


In conclusion, as the drive toward highly dependable, ultrahigh-performance computing in the 21st century continues, asynchronous design promises to play a dominant role.
