A Math Co-processor for the Jupiter Ace
J.V. Noble
Institute of Nuclear and Particle Physics
University of Virginia
Charlottesville, VA 22901 USA
This note reminisces about the author's first digital design project. Despite his initial unfamiliarity with the world of digital electronics, the project took but two weeks from start to finish, working part-time. Much of this celerity can be attributed to the fact that the host computer was programmed in ROM-based FORTH.
In 1980 and 1981 I acquired my first personal digital computers, the Sinclair ZX81 and the Timex-Sinclair 1000 (same machine, different name). Although vastly slower at floating point arithmetic than the CDC mainframe at work, they were nevertheless useful for serious calculations because of their rapid turnaround: they were interactive, and their time belonged solely to ME. In fact, a paper I published in 1982 noted that '...all calculations in this paper were performed on a Sinclair ZX81 computer'--one in the eye for a competitor who had gotten the wrong result on a huge IBM mainframe.
The Sinclairs were remarkably simple Z80A machines that ran at 4 MHz, stored programs on an external cassette tape recorder, and used a standard TV as a 24-line by 32-character display. The membrane keyboard was, bluntly, horrible; the display hard on the eyes; and the entire lash-up fragile beyond belief--the slightest jar produced power or data glitches that instantly vaporized an afternoon's work.
About the same time I received a flurry of consulting commissions from local attorneys, dealing with accident investigation and reconstruction. Whereas the Sinclairs were adequate for small programs that ran a long time, the programs needed for accident reconstruction were more complex and more computation-intensive. Typical cases ran an hour or more. It seemed to me that both for use at home, and (eventually) in the field, a faster, more stable machine was needed. Manifestly one could easily make a Sinclair (trans)portable: a car TV and a 12 Volt cassette power supply could plug into a cigarette lighter, from which the computer also could be powered. However, I doubted such a rig would work reliably amidst the bumps and grinds of the automotive environment; and anyway it was not fast enough by an order of magnitude to explore the necessary range of parameters during in-the-field reconstructions of accidents.
As I sat pondering my next move, advertisements began to appear for a new machine, the Jupiter ACE ('Automatic Computing Engine') by Jupiter Cantab, Ltd. It was touted as 10× faster than the ZX81 because it employed a compiled language, FORTH, rather than an interpreted BASIC. The machine was distributed in the United States by Computer Distribution Associates (CDA, henceforth) of Pittsford, New York. (For those unfamiliar with the more arcane geography of New York State I should add that Pittsford is a small neighbor of Rochester, that Babylon of iniquity lying between the Genesee River and Erie Canal.)
My ACE cost a deuce--$200--but I was happy: it was indeed 10× faster than the Sinclair, especially at floating point arithmetic, despite being essentially the same 4 MHz Z80A machine. So what if the internal representation of floating point numbers was a strangely bastardized BCD? So what if floating point and integer data shared a common stack? And so what if the developers had neglected to provide trigonometric, exponential and logarithmic functions... Uh, oh: programs that describe colliding cars need those functions. How could I get them?
Settling down resignedly to a tedious session of programming table look-ups I was struck by a happy thought: perhaps the jolly old chaps at Cambridge, England intended a math co-processor add-on to plug into the expansion port on the backside of the ACE. I called CDA to inquire if such might be the case, pointing out that many like me might purchase an FPU accessory if available. The engineer at CDA said Jupiter Cantab had no plans for such an item, as floating point math chips were unavailable abroad. He then inquired whether I would be interested in designing one and licensing it to them.
Flattered but flabbergasted best describes my state of mind. I temporized, requesting a week to consider whether the commission lay within my competence. During that week I was able to acquire spec sheets for Intel 8231A and AMD 9511 (equivalent 8-bit math chips). I also discovered that supply houses wanted serious bread for these chips in quantity 1. The rest of the week I spent reading the relevant sections of Horowitz and Hill, The Art of Electronics (Cambridge University Press, Cambridge, 1980). This remarkable book convinced me that it, coupled with my extensive practical experience of analog electronics, would see me through. This is how I came to design the math board that is the subject of this reminiscence. My only stipulation was that CDA should supply the rather expensive math chip.
The hardware presented several design problems. First, the HMOS chips demanded a dual-voltage (+5 and +12 V) power supply. A regulated 5 V line was available at the expansion bus. Where to find +12 V? Eventually I settled on voltage-doubling the unregulated +9 V line of the expansion bus using the circuit of Fig. 1.
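As a sanity check on the doubler approach: an ideal two-diode voltage doubler delivers twice its input minus two diode drops, which from the 9 V line leaves comfortable headroom for a regulated +12 V. The diode drop below is a generic assumption, not a component value taken from Fig. 1.

```python
# Rough headroom estimate for a two-diode voltage doubler fed from the
# unregulated +9 V line; the 0.7 V silicon diode drop is a generic
# assumption, not a value from the actual Fig. 1 circuit.
V_IN = 9.0                        # unregulated expansion-bus supply (volts)
V_DIODE = 0.7                     # typical silicon diode forward drop
v_doubled = 2 * V_IN - 2 * V_DIODE
print(round(v_doubled, 1))        # about 16.6 V, enough to regulate down to +12
```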
The next step was to provide data transmission and chip control. The math chip's 8 data lines were interfaced via a 74LS245 bi-directional tri-state bus driver, to avoid excessive loads on either the CPU or the math chip. This driver requires two control signals, CE and S/R. When CE is low, data can be transmitted, with S/R controlling the direction; when CE is high, the two sides are disconnected.
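In modern terms the '245's behavior is a two-line truth table. The sketch below models it in Python, using the signal names from the text; which direction S/R = 1 drives is my assumption, and this is a pedagogical model, not the hardware.

```python
def ls245(ce, s_r, a_bus=None, b_bus=None):
    """Model the 74LS245 bus transceiver described in the text.

    ce    -- chip enable, active low: 0 connects the two sides
    s_r   -- direction control; here 1 drives A->B, 0 drives B->A
             (the assignment of 1 to A->B is an assumption)
    Returns (a_side, b_side); a tri-stated (disconnected) bus is None.
    """
    if ce:                        # CE high: both sides disconnected
        return (None, None)
    if s_r:                       # transmit A -> B
        return (a_bus, a_bus)
    return (b_bus, b_bus)         # transmit B -> A

# CE high isolates CPU from FPU regardless of direction:
print(ls245(1, 0, 0x5A, 0xA5))   # (None, None)
# CE low, S/R = 1: the CPU-side byte appears on the FPU side:
print(ls245(0, 1, 0x5A, None))   # (0x5A, 0x5A)
```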
The FPU was controlled via the ACE's IORQ, RD, WR and WAIT lines. The latter is pulled low by the FPU to indicate that it is busy, and is used to prevent the CPU from attempting to read or write. (I eventually found it necessary to provide an open-collector buffer for this line using two transistors, to prevent the CPU from being locked up by the FPU's WAIT line.) Finally, I used three port addresses (251, 253 and 255) to provide the signals for reading and writing data, and for resetting the FPU. As I had never designed a digital circuit before, the address decoding and control circuitry was unnecessarily complex (Fig. 2). If I were designing the circuit today it would be at least one 'glue' chip simpler.
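Since 251 = 0b11111011, 253 = 0b11111101 and 255 = 0b11111111, the three port addresses differ only in bits A1 and A2 of the Z80's 8-bit I/O address. The decode can be sketched in Python as below; this is an illustrative reconstruction of what Fig. 2 accomplished in TTL gates, not the actual circuit, and the text does not say which port played which role, so the function only reports which port matched.

```python
def fpu_select(port):
    """Return which FPU port (251, 253 or 255) an 8-bit Z80 I/O address
    selects, or None if the address does not select the FPU at all.
    The three ports differ only in address bits A1 and A2."""
    if port & 0b11111001 != 0b11111001:   # every bit except A1/A2 must be 1
        return None
    a1, a2 = (port >> 1) & 1, (port >> 2) & 1
    return {(1, 0): 251, (0, 1): 253, (1, 1): 255}.get((a1, a2))

print([fpu_select(p) for p in (251, 253, 255)])   # [251, 253, 255]
print(fpu_select(250))                            # None: even address
```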
Nevertheless, IT WORKED! The breadboard version was completed in two weeks of part-time labor. I made heavy use of FORTH's ability to address the data ports directly while testing the address decoding and control-signal logic. That is, the breadboard was assembled incrementally to avoid dangerous inputs to the expensive FPU. Once the math chip was added, its operation could be tested immediately using the 16- and 32-bit integer arithmetic of which it was capable. Simple unary floating point operations, such as square roots of perfect squares and logarithms of powers of 10, further confirmed that the FPU was operational. (The designers of the 8231A thoughtfully provided direct conversion of integers to floating point and vice-versa.) During the testing period I used only high-level FORTH--involving IN and OUT--as I was by no means confident of my ability to write CODE definitions.
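The poll-then-read pattern those interactive FORTH tests implemented can be sketched as follows. The busy flag in bit 7 of the status byte follows the 8231A data sheet; the toy FPU class is purely a stand-in so the loop can run anywhere, and its port behavior is invented for illustration.

```python
class ToyFPU:
    """A stand-in for the 8231A so the polling loop below can execute;
    its timing and result value are invented, not measured."""
    def __init__(self):
        self._busy_polls = 2          # pretend the chip is briefly busy
        self._result = 0x2A

    def read_status(self):
        if self._busy_polls > 0:
            self._busy_polls -= 1
            return 0x80               # bit 7 set: BUSY (per the data sheet)
        return 0x00

    def read_data(self):
        return self._result

def wait_and_read(fpu):
    """Poll the status port until the BUSY bit clears, then read data --
    the loop that FORTH's IN and OUT made trivial to test interactively."""
    while fpu.read_status() & 0x80:
        pass
    return fpu.read_data()

print(hex(wait_and_read(ToyFPU())))   # 0x2a
```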
True floating point software presented difficulties. Jupiter FORTH embodied unfortunate design choices, such as mingling floating point numbers and integers on the same stack, as well as a non-standard, IEEE-incompatible representation for floating point numbers. Worse, I was at that time a FORTH tyro who had not yet learned to use the power of the language. It took an additional month to write the revised floating point lexicon. Most of the work involved translating the ACE's floating point representation to IEEE and vice-versa.
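Because the ACE's actual BCD layout is not reproduced here, the sketch below invents a plausible stand-in (sign, decimal digits, power-of-ten exponent) purely to illustrate the kind of decimal-to-binary translation the lexicon had to perform; the function name and format are hypothetical and none of it comes from the original code.

```python
import struct

def bcd_to_ieee_bits(sign, digits, exp10):
    """Convert a decimal-mantissa number to raw IEEE-754 single-precision
    bits.  The (sign, digits, exp10) triple is a hypothetical stand-in for
    the ACE's BCD-based format, used only for illustration."""
    mantissa = 0
    for d in digits:
        mantissa = mantissa * 10 + d
    # Interpret the digits as d0.d1d2... scaled by 10**exp10:
    value = mantissa * 10.0 ** (exp10 - len(digits) + 1)
    if sign:
        value = -value
    # Pack as IEEE-754 single precision and return the raw 32 bits:
    return struct.unpack(">I", struct.pack(">f", value))[0]

# 3.14159 encoded as digits 3,1,4,1,5,9 with decimal exponent 0:
bits = bcd_to_ieee_bits(0, [3, 1, 4, 1, 5, 9], 0)
```

Going the other way (IEEE bits back to decimal digits) is the mirror image: unpack the bits to a float, then peel off decimal digits, which is where most of the month went.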
Every story deserves an epilogue, or in this case, an epitaph. When I reported back to Computer Distribution Associates with a completed design, I discovered to my dismay that they had in the meantime performed their famous 'dead-guppy' imitation: belly up. So had Jupiter Cantab, Ltd. No more ACEs, no royalties, in fact nothing but the circuit board shown in Fig. 3a and my (somewhat modified) ACE shown in Fig. 3b. Finis.
I'm sorry to say that the Fig. 3a and 3b images have been lost over time. We would like to thank Professor J.V. Noble for his permission to archive this information.