The Evolution of Computing at Cambridge

A chart showing the development of computing at Cambridge, along with images showing the development of the chart itself.

AI @ Cambridge

Computing @ Cambridge

The evolution of computer science and AI within the Cambridge Ecosystem.

This chart shows a snapshot of firms associated with the computing phenomenon at Cambridge, along with a small selection of academics pushing the field forwards.

These are both bookended by a selection of other pioneering thinkers and companies worldwide. As for the future, below is a modest list of present, probable and possible applications of AI technologies. This is intended to be illustrative (and speculative) rather than definitive (or exhaustive).

Present applications

Spell checking

Spam filtering

Fraud detection

Interactive maps

Movie recommendations

Web searching

Image recognition

Voice recognition

Online shopping

Personalised advertising

Self-driving cars

Face recognition

Automated recruitment

Language translation

Digital assistants

Precision farming

Probable applications

Brain-computer interfaces

Lifelong avatar assistants

Decoding of human intent

Lip reading CCTV

AI board members

Real-time crime prediction

Computers with feelings

Self-writing software

Autonomous robotic surgery

Smart energy grids & appliances

Drone to drone combat

Modelling of crowd behaviour

National mood prediction

Longevity forecasting at birth

Detection of algorithmic biases

Amplification of IQ

Design of ‘impossible’ buildings

Acceleration of drug discovery

Personalised medicine

Volcano modelling

Possible applications

Inner speech recording

Artificial curiosity

Artificial creativity

Dream recording

Amplification of EQ

Automated insights

Decision-making apps

Software that rusts

Foreign policy algorithms

Dynamic allocation of tax revenues

Crisis forecasting

Fake memory implants

Earthquake prediction

Dispute resolution algorithms

Whole planet simulation

AI to AI negotiation

Animal communication

Reanimation of the deceased

VR games based on known fears

Pilotless commercial aircraft

Fully immersive virtual realities

“One day ladies will take their computers for walks in the park and tell each other, ‘My little computer said such a funny thing this morning’.” ~ Alan Turing

A Brief History of AI (on a single sheet of blue paper)

A history of computing and artificial intelligence on a single sheet of paper was always a little bit ambitious. Perhaps a little foolish too. Anyway, I’d like to think of this as the equivalent of Joe Cocker singing With a Little Help from My Friends at Woodstock in 1969 – an exhausting yet transcendent moment where mind and body meld together and time stands still. In reality it ended up taking me over and has become a bit of a fragmented failure, although maybe one should think of it in quantum terms – simultaneously good and bad, its fate being linked to random events that may or may not occur.

Regardless, it’s been more fun to do than an adult colouring-in book. I did learn quite a bit doing it too (e.g. Nolan Bushnell saying “no” to the offer of a third of Apple Computer for $50,000). It has perhaps laid some foundations for ‘mapping’ the future of artificial intelligence too. Make of it what you like. If you want to print this, ask me for a higher-resolution file (which I’ll happily supply for free), or I might print a few large copies myself, so do ask if you want one (small charge to cover printing and postage).

AI History

Map of the History of AI

I’m just working on another revision of my artificial intelligence (AI) foundations map, but I like Greg Orme’s idea of turning this map into a series of monthly instalments, each looking at a particular area of computing, AI or robotics. So, for example, one could focus on how literature has influenced robotics and vice versa. Ovid’s Metamorphoses would be a good place to start, but so might Mary Shelley’s Frankenstein, itself influenced by the Year Without a Summer (1816), which was caused by the 1815 eruption of Mount Tambora in Indonesia. (Who knew one could draw a link between a super-volcano in 1815, Mary Shelley in 1818, Isaac Asimov’s I, Robot in 1950 and Ex Machina in 2015?) If you were really going for it you might even link a 2020 episode of Doctor Who (The Haunting of Villa Diodati) in which abnormal weather is linked to the Cybermen.

Or we could look at the origins of the Internet (ARPANET, 1969-1990) and link this to ARPA/DARPA, JCR (‘Lick’) Licklider, Bob Taylor, Vint Cerf and Bob Kahn, but also to HG Wells’ World Brain (1936-38) and Pierre Teilhard de Chardin’s and Vladimir Vernadsky’s ‘Noosphere’ of 1927 (a term also attributed to Le Roy). And did you know that Bob Kahn is related to the futurist Herman Kahn, who worked at RAND and provided inspiration for the Dr Strangelove character in Stanley Kubrick’s 1964 film of the same name?

Anyway, bear with me – a new, improved version of the AI-sphere is coming soon… and don’t forget that once I’ve got the history of artificial intelligence done, my next job is to step into the future and speculate about the future of computing, data science, artificial intelligence and robotics.

Back to the Future of AI

I think I’m getting somewhere with this. I often find that before looking forwards at the future of something – artificial intelligence, for example – it’s worth going backwards to the start of things. This is a draft of a visual looking at the history and influences of computing and AI.

It needs a sense check, contains errors and is somewhat western-centric, but then I am all of these things. It’s also very male-heavy, but then that was the world back then – and in AI and IT possibly still is. Perhaps I should do a visual showing women in AI – starting with women at Bletchley Park, NASA and so on. I may spin this off into a series of specific maps looking purely at robotics or software too.

History of Computing and AI – Richard Watson, April ’21

Using AI to Predict Future AI

Too much of this and your head explodes…

I had an interesting conversation yesterday about whether you could use AI to predict future developments in AI. The answer is almost certainly yes. We’ve already got self-writing software, 3D printers that print 3D printers, and we are on the cusp of automated scientific discovery. One rather wonders where this would leave humans.