A few years back, I wrote my editor’s letter on the subject of AI, noting how IBM was among the first corporations to use AI in its labs to push its technology and solutions forward.

Today, with the hype over AI the highest it’s ever been, I thought it would be a great time to revisit just who started it all those years ago and give credit where credit is due!

While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offered the following definition back in 2004: “It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

However, decades before this definition, the conversation around artificial intelligence began with Alan Turing’s seminal paper, “Computing Machinery and Intelligence,” published in 1950. In it, Turing — often heralded as the father of computer science — asked a simple question: “Can machines think?”

From this query, he proposed a basic assessment, now famously known as the “Turing Test,” in which a human interrogator tries to distinguish a computer from a human based on text responses alone. While the test has drawn considerable scrutiny since it was first posited, it remains an important part of the history of AI and an enduring concept in philosophy, since it rests on ideas about language — the bedrock of today’s language model landscape.

Later, in 1995, Stuart Russell and Peter Norvig published Artificial Intelligence: A Modern Approach, which went on to become one of the leading textbooks in the study of AI.

As we enter this new AI frontier, it’s important to remember where it all began.
