Computers & Geosciences, Volume 26, Number 3, 2000

Graeme F. Bonham-Carter
Geological Survey of Canada
601 Booth St.
Ottawa, Canada K1A 0E8
bonham-carter@gsc.nrcan.gc.ca

John C. Butler
Geosciences
University of Houston
Houston, TX 77204
jbutler@uh.edu

Predicting the future of computing

Hutton's Principle of Uniformitarianism, "the present is the key to the past", is of course the opposite of the normal approach of historical research, which seeks to understand present and future events from a study of the past. When it comes to digital computers, forecasting the future is a particularly hazardous occupation. Not only is the past record short, about 50 years at most, but a number of people well known in the computing world have made notorious blunders in their prognoses. This is presumably because the effects of innovations are difficult to predict: innovations are by definition something new, and have no history. For example, Thomas Watson Jr. of IBM predicted in the early 1950s that the US market for large computers amounted to a grand total of 5 mainframes! Perhaps even more surprising was the decision by Bill Gates in 1981 to limit DOS applications to 640K bytes of memory, presumably on the assumption that this would be adequate for many years to come! The World Wide Web more or less appeared out of the blue in the early 1990s, unpredicted, and yet it has had an unprecedented effect on many aspects of modern life. Two "laws" based on the short history of computing are interesting for the light they may shed on the future: Corbato's Law and Moore's Law (Bell and Gray, 1997). My discussion here is drawn largely from that reference, a chapter in a stimulating book about the future of computing that I strongly recommend.

Corbato's Law deals with the writing of code, a topic pertinent to this journal. When I checked on the web for Corbato, I found that a Fernando J. Corbato was the 1990 winner of the ACM Turing Award for "...his pioneering work organizing the concepts and leading the development of the general-purpose, large-scale, time-sharing and resource-sharing computer systems, CTSS and Multics". I assume that this is the same Corbato, whose law roughly states that programmers write about the same amount of code per unit time, no matter what language they are using. This is an interesting observation, because computer languages have evolved over time from simple low-level languages to high-level ones: starting with machine code, then assembler, progressing to languages such as Fortran and Algol, until today we have powerful macro languages that control complex tasks such as numerical analysis, graphics, image processing and database searching. A hundred lines of assembler, 100 lines of Fortran and 100 lines of S (the language developed at Bell Labs, now the scripting language for statistics in S-Plus) represent enormous differences in computing complexity, but involve about the same amount of effort from the programmer.
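To make Corbato's point concrete, consider the same small task, computing the mean of a series of numbers, written twice: once spelled out step by step, in the manner of a low-level language, and once as a single high-level library call. This minimal sketch uses Python purely for illustration; each version costs the programmer roughly the same effort per line, but the lines buy very different amounts of computation.

    # Low-level style: accumulate explicitly, as one would in assembler or early Fortran.
    def mean_low_level(values):
        total = 0.0
        count = 0
        for v in values:
            total = total + v
            count = count + 1
        return total / count

    # High-level style: the same result from a single library call.
    from statistics import mean

    data = [2.5, 3.1, 4.7, 5.0]
    assert abs(mean_low_level(data) - mean(data)) < 1e-12

If Corbato's Law holds, the programmer who writes seven lines of the first style per hour also writes about seven lines of the second, and so accomplishes far more with the high-level language.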

Taking this a step further, we now have macros that are generated by executing a sequence of operations invoked through a menu or by clicking on a graphical icon; behind the scenes, the corresponding code is written to a file that can then be edited, saved and executed at will. To generate this code, the user need not understand the language syntax. We are also beginning to see graphical representations of macros, where objects can be moved around and new structures generated from a tool bar. I suspect that programming will get even easier in time, with specialized graphical interfaces for generating code automatically in a variety of special areas of application. There are specialized languages used in statistics, for example, for specifying and building complex multivariate models (e.g. Edwards, 1995), and versions of the same idea that allow a model to be specified graphically using on-screen "doodles" (Thomas, 1999). In the geological world there is tremendous scope for providing, for example, user-friendly tools for building simulation models. Models of fluid and heat flow, sediment transport, ocean circulation, basin development, and many other specialized areas could be made much easier to use if their development did not involve extensive code writing. Already there are many robust computer codes available for these topics, but we can expect the tools to become much more accessible and user-friendly for the nonspecialist in the future; a toy illustration of what such a tool would hide follows.
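Here is a minimal sketch, in Python and purely hypothetical, of the kind of code a user-friendly modelling tool might generate behind a menu or an on-screen doodle: a single high-level call that hides an explicit finite-difference solution of the one-dimensional heat equation. The function name and parameters are inventions for illustration, not any existing package.

    # Hypothetical high-level interface: the user states the physics, not the numerics.
    def simulate_heat_flow(length_m, nodes, diffusivity, t_left, t_right, duration_s):
        # Explicit finite-difference solution of the 1-D heat equation;
        # a modelling tool would generate code like this behind the scenes.
        dx = length_m / (nodes - 1)
        dt = 0.4 * dx * dx / diffusivity   # keep r = alpha*dt/dx^2 below 0.5 for stability
        temps = [t_left] + [0.0] * (nodes - 2) + [t_right]
        for _ in range(int(duration_s / dt)):
            new = temps[:]
            for i in range(1, nodes - 1):
                new[i] = temps[i] + diffusivity * dt / (dx * dx) * (
                    temps[i + 1] - 2.0 * temps[i] + temps[i - 1])
            temps = new
        return temps

    # One call replaces pages of hand-written numerics:
    profile = simulate_heat_flow(length_m=1.0, nodes=21, diffusivity=1e-4,
                                 t_left=100.0, t_right=0.0, duration_s=3600.0)

The user of such a tool would specify only the geometry, the diffusivity and the boundary temperatures; numerical details, such as choosing a time step small enough for stability, would stay out of sight.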

Now consider the real driving engine behind the computer revolution: the spectacular developments in hardware. To get an up-to-date definition of Moore's Law, I did a web search and first came up with some cricket scores from New Zealand: "Hartland c Moores b Law", which for non-cricketers (probably most C&G readers) translates to "Hartland was caught by Moores from the bowling of Law". Then I came upon a better definition: "The observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future. In subsequent years, the pace slowed down a bit, but data density has doubled approximately every 18 months, and this is the current definition of Moore's Law, which Moore himself has blessed. Most experts, including Moore himself, expect Moore's Law to hold for at least another two decades." A related story on the same web page says that Moore (in 1997) predicted that physical limitations will eventually halt this miniaturization process, but probably not until at least 2017, although there are many engineering problems to be overcome before this limit is reached (e.g. power consumption, heat dissipation). The same geometric growth applies to other aspects of computer hardware: processing speed, the cost and capacity of hard-drives, and communication speeds over networks, among others. If this rate of growth in hardware continues (supposing that the physical limitations are overcome), it has been estimated that computers will exceed the processing speed and memory capacity of the human brain (as quoted by Bell and Gray, 1997), certainly by the middle of the 21st century. By 2010, PCs may have 10 gigabytes (10^10 bytes) of RAM, a terabyte (10^12 bytes) hard-drive, and operate at speeds of 10^10 instructions per second (10^4 MIPS). By 2040, these capacities could increase to 10^15 bytes of RAM, 10^16-byte hard-drives and 10^14 instructions per second. Wide area networks, which at present have transmission speeds of 10^10 bits per second, may handle 10^12 bits per second by 2010, and up to 10^14 bits per second by 2040.
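These projections are simply compound growth with an assumed doubling time, and the arithmetic is worth making explicit. A minimal sketch in Python: the 18-month doubling period comes from the definition quoted above, while the year-2000 baseline of 256 megabytes of RAM is a hypothetical illustration (different baselines and slightly different doubling periods yield the round figures given above).

    # Project a capacity forward under Moore's-Law doubling every 18 months (1.5 years).
    def moore_projection(base_capacity, base_year, target_year, doubling_years=1.5):
        return base_capacity * 2 ** ((target_year - base_year) / doubling_years)

    # Example: 2.56e8 bytes (256 megabytes) of RAM in 2000, projected forward.
    for year in (2010, 2020, 2040):
        print(year, "%.1e bytes" % moore_projection(2.56e8, 2000, year))

The exponential form means that a small change in the assumed doubling period moves the distant projections by orders of magnitude, which is one reason such forecasts should be read as rough guides rather than promises.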

Translating this into something more tangible for a geologist (see Bonham-Carter, 1999), consider a digital image of the Earth's surface, an area of about 150,000,000 km^2. If the resolution of the image is 1 m, and assuming that we use one byte per pixel, the image needs 150 x 10^12 bytes (150 million megabytes) of storage. If Moore's Law holds, we should almost be able to put such an image on a hard-drive in ten years, and in memory in 20 years. In 10 years we would be able to download such an image over a WAN in 8 seconds, and in 40 years download 100 such images in the same time! Such growth would also have major implications for 3-D modellers, both for the storage requirements of large arrays and for the speed of processing.
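The storage arithmetic behind this figure is simple to verify; a short check in Python, using exactly the assumptions in the text (1 m resolution, one byte per pixel):

    # Back-of-envelope check of the whole-Earth image example.
    area_km2 = 150e6                     # Earth's surface, about 150,000,000 km^2
    pixels_per_km2 = 1e6                 # at 1 m resolution
    bytes_per_pixel = 1
    bytes_needed = area_km2 * pixels_per_km2 * bytes_per_pixel
    print("%.1e bytes" % bytes_needed)   # 1.5e14, i.e. 150 x 10^12 bytes

Note that storage scales with the square of the linear resolution: halving the pixel size to 0.5 m would quadruple the requirement.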

We have certainly witnessed an amazing revolution in information technology over the past 40 years, but perhaps we have only just seen the tip of the iceberg: the main event is still to come. This is an exciting prospect, and we can be sure that there will be unexpected and unpredicted developments: the present will probably not be the key to the future.

References

Bell, G., Gray, J.N., 1997. The revolution yet to happen. In: Denning, P.J., Metcalfe, R.M. (Eds.), Beyond Calculation. Springer-Verlag, New York, pp. 5-32.

Bonham-Carter, G.F., 1999. Computing in the geosciences in 1999: retrospect and prospect. In: Proceedings International Association for Mathematical Geology Conference (IAMG'99), Trondheim, Norway, August 9-11, 1999, pp. 1-13.

Edwards, D., 1995. Introduction to Graphical Modelling. Springer-Verlag, New York.

Thomas, A., 1999. Constructing software from graphical models. In: Proceedings International Statistical Institute (ISI'99), August 11-18, 1999, Helsinki, Finland, Topic 57, Computational Aspects of Graphical Models.