A Brief History of AI: An Article on the Past and Present of Artificial Intelligence


Artificial Intelligence (AI) has a long history. In ancient myths and legends, highly skilled craftsmen could create artificial people and give them intelligence or consciousness. AI in its modern sense began with classical philosophers' attempts to explain human thought processes in terms of mechanical symbol processing, and the invention of the computer in the 1940s led a group of scientists to seriously explore the possibility of constructing an electronic brain.

The field of AI research was formally established at a conference held at Dartmouth College in 1956, and its participants went on to lead AI research for decades. Tens of millions of dollars were poured into the field in pursuit of the goals set out at that conference, but those goals were not achieved, and AI entered its first trough.

There have been several downturns in the history of Artificial Intelligence, and despite the ups and downs, the field of AI continues to make progress. Certain problems that were considered impossible to solve in the 1970s have been satisfactorily solved and successfully applied today. But contrary to the optimistic estimates of the first generation of AI researchers, machines with the same level of intelligence as humans have yet to emerge.

I. Theoretical Origins

Ancient Greek Mythology

Images of intelligent machinery appear in ancient Greek mythology: Hephaestus, the god of craftsmen, was said to have made golden female attendants who could speak and take over his difficult work, as well as a set of three-legged tables that stood around his smithy and could roll off on their own to gatherings of the gods and return again afterward.

Formal Reasoning

In the 4th century BC, Aristotle devised the syllogism, which is regarded as an origin of the formal logic underlying artificial intelligence. The basic assumption of AI is that human thought processes can be mechanized and turned into programs.

The philosopher Ramon Llull (1232-1315) built a "logic machine" in an attempt to produce knowledge by logical means. His ideas later influenced Leibniz.

In the 17th century, Leibniz, Thomas Hobbes, and Descartes attempted to recast rational thought as a system of algebra or geometry. Leibniz believed that "the human mind can be reduced to a kind of calculation," and Hobbes famously wrote in Leviathan that reason "is nothing but reckoning." These philosophers had begun to articulate the formal symbol system hypothesis that would become a guiding principle of AI research.


At the beginning of the 20th century, David Hilbert posed a fundamental question to the mathematicians of his time: can all mathematical reasoning be formalized? The answer came from Gödel's incompleteness theorems, the Turing machine, and the lambda calculus, which together showed that, within certain limits, any form of mathematical reasoning can be mechanized. The Church-Turing thesis implies that a mechanical device manipulating symbols as simple as 0 and 1 can model any process of mathematical reasoning.

Computer Science

In the 19th century, Charles Babbage designed a programmable computer, the Analytical Engine. Ada Lovelace, often called the first programmer in history, predicted that such a machine might one day compose "elaborate and scientific pieces of music of any degree of complexity or extent."

During World War II, the first modern computers were created. This provided the hardware basis for the creation of "thinking machines".

In 1968, the novel and film 2001: A Space Odyssey anticipated many future technologies; its HAL 9000 computer became a focal point for early popular visions of artificial intelligence.

II. Birth of AI

In the 1940s and 1950s, a group of scientists from a variety of fields (mathematics, psychology, engineering, economics, and political science) began to explore the possibility of creating an artificial brain. In 1956, artificial intelligence was established as a discipline.

Early Theory and Practice

The earliest research into artificial intelligence grew out of a series of scientific advances that converged in the late 1930s, the 1940s, and the early 1950s. Neurological research had shown that the brain is an electrical network of neurons that fire in all-or-nothing pulses, with no intermediate states. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital signals (i.e., all-or-nothing signals), and Turing's theory of computation showed that any form of computation can be described digitally. These closely related ideas hinted at the possibility of building an electronic brain.

Warren McCulloch and Walter Pitts analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. Marvin Minsky, a young researcher inspired by their work who would become one of AI's most important leaders and innovators over the next fifty years, built the first neural network machine, SNARC, in 1951.

In 1950, Turing published a groundbreaking paper that predicted the possibility of creating machines with true intelligence. Noting that the concept of "intelligence" was difficult to define with certainty, he proposed the famous Turing test: a machine is said to be intelligent if it is able to carry on a conversation with a human being without being recognized as a machine. This simplification allowed Turing to convincingly show that "thinking machines" were possible.


In the mid-1950s, with the rise of digital computers, some scientists intuited that a machine that could operate digitally should also be able to operate symbolically, and that symbolic manipulation might be the essence of human thought. This was a new way to create intelligent machines, and in 1955 a program based on this theory, Logic Theorist, was introduced, capable of proving 38 of the first 52 theorems of Principia Mathematica. Herbert Simon, one of the developers, believed that they had "solved the mysterious mind/body problem by explaining how systems of matter acquire the nature of mind."

Dartmouth Conference 1956: The Birth of AI

The 1956 Dartmouth conference was organized by Marvin Minsky, John McCarthy, and two senior scientists, Claude Shannon and Nathaniel Rochester, the latter from IBM. The conference proposal asserted that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." The attendees would each make important contributions during the first decade of AI research. The Logic Theorist was discussed at the meeting, and McCarthy persuaded the participants to adopt "artificial intelligence" as the name of the field. The 1956 Dartmouth conference thus gave AI its name and its mission, along with its first achievements and its first generation of researchers, and it is widely regarded as the birth of AI.

The picture is a black-and-white photograph of seven smiling young men sitting on a lawn, including several of the key figures who initiated or attended the conference; all seven went on to contribute to artificial intelligence, computer science, or related fields. They are, from left to right:

Oliver Selfridge: an MIT mathematician;

Nathaniel Rochester: head of information research at IBM and one of the conference sponsors;

Ray Solomonoff: American mathematician;

Marvin Minsky: Fellow in Mathematics and Neurology at Harvard University, one of the conference sponsors;

Peter Milner: professor of neuropsychology at McGill University, Montreal;

John McCarthy: Assistant Professor of Mathematics at Dartmouth College, one of the conference sponsors;

Claude Shannon: mathematician at Bell Telephone Laboratories, one of the initiators of the conference.

III. The Golden Age

The years following the Dartmouth Conference were an era of discovery. The programs developed during this period struck most people as astonishing: computers were solving algebra word problems, proving theorems in geometry, and learning to use English. Few at the time would have believed that machines could be so "smart." In private conversation and in print, researchers expressed considerable optimism, predicting that fully intelligent machines would appear within twenty years, and government agencies such as DARPA (the Defense Advanced Research Projects Agency) poured large sums of money into the emerging field. A few of the most influential lines of research are described below.

Search-Based Reasoning

Many early AI programs used the same basic algorithm. To achieve a goal (winning a game, say, or proving a theorem), they advanced toward it step by step, as if searching a maze for the exit, and backtracked whenever they reached a dead end. This is "reasoning as search." The principal difficulty is that, for many problems, the number of possible paths through the "maze" is astronomical (the so-called "combinatorial explosion"). Researchers therefore used heuristics to prune branches that were unlikely to lead to a solution. The General Problem Solver, written in 1957, applied this theory, and in 1966 work began on Shakey, the first robot in history able to "think and act" autonomously and the first serious attempt to build an autonomous robot.
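
To make the idea concrete, here is a minimal sketch of greedy best-first search in Python. It illustrates heuristic search in general, not the actual General Problem Solver; the toy grid, the neighbor function, and the Manhattan-distance heuristic are invented for the example.

```python
import heapq

def best_first_search(start, goal, neighbors, heuristic):
    """Greedy best-first search: always expand the state the heuristic judges
    closest to the goal, backtracking implicitly by keeping the unexplored
    alternatives on a priority queue."""
    frontier = [(heuristic(start), start, [start])]
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path                      # found a route out of the "maze"
        if state in visited:
            continue
        visited.add(state)
        for nxt in neighbors(state):
            if nxt not in visited:
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
    return None                              # every branch was a dead end

# Toy example: find a path on a 4x4 grid toward the corner (3, 3).
grid_goal = (3, 3)

def grid_neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx <= 3 and 0 <= y + dy <= 3]

def manhattan(p):                            # heuristic: distance still to cover
    return abs(p[0] - grid_goal[0]) + abs(p[1] - grid_goal[1])

print(best_first_search((0, 0), grid_goal, grid_neighbors, manhattan))
```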

Natural Language

An important goal of AI research is to allow computers to communicate in natural languages such as English. An early success was ELIZA, the first chatbot, developed by Joseph Weizenbaum in the mid-1960s. Users who "chatted" with ELIZA sometimes mistook it for a human being rather than a program, but in reality ELIZA had no idea what it was saying: it simply returned canned responses or echoed the user's words back as a question after a little grammatical rearrangement.
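
The flavor of this can be captured in a few lines. The sketch below is not Weizenbaum's original script, just a toy illustration of keyword matching against canned templates; the patterns and responses are made up for the example.

```python
import random
import re

# A few illustrative keyword -> response-template rules, checked in order.
RULES = [
    (r"\bI need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"\bI am (.*)",   ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"\bmother|father|family\b", ["Tell me more about your family."]),
]
DEFAULTS = ["Please go on.", "I see. Can you elaborate?"]

def eliza_reply(utterance):
    """Return a canned response: fire the first matching rule and echo part of
    the user's own words back; otherwise fall back to a stock phrase."""
    for pattern, templates in RULES:
        m = re.search(pattern, utterance, re.IGNORECASE)
        if m:
            return random.choice(templates).format(*m.groups())
    return random.choice(DEFAULTS)

print(eliza_reply("I need a vacation"))    # e.g. "Why do you need a vacation?"
print(eliza_reply("The weather is nice"))  # falls back to a stock phrase
```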

Microworlds

In the late 1960s, Marvin Minsky and Seymour Papert of MIT's AI Lab suggested that AI researchers focus on simplified scenarios known as "microworlds." They noted that established disciplines often rely on simplified models to expose basic principles, such as frictionless planes and perfectly rigid bodies in physics. Much of this research centered on the "blocks world," a flat surface on which blocks of different shapes, sizes, and colors are placed. In this spirit, the SHRDLU program, begun in 1968, could converse about the blocks world in ordinary English sentences, plan operations, and carry them out.

Expert Systems

Another early milestone was the DENDRAL project, begun in 1965, which launched what became a whole new industry: expert systems. DENDRAL encoded domain-specific knowledge (in this case, organic chemistry) into a computer program that used its knowledge base to infer which molecular structures could account for a given set of spectrometry data. Its success in this specialized field showed that, by explicitly encoding human expertise about a narrow subject, a computer program can perform a specific task at an expert level.
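
DENDRAL itself was far more elaborate, but the core idea of an expert system, a knowledge base of if-then rules plus a simple inference engine, can be sketched roughly as follows. The rules below are invented purely for illustration and are not real chemical or medical knowledge.

```python
# Minimal forward-chaining inference engine: rules are (premises, conclusion) pairs.
# These toy rules stand in for a real domain knowledge base.
RULES = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles", "not_vaccinated"}, "recommend_isolation"),
]

def forward_chain(facts, rules):
    """Repeatedly apply any rule whose premises are all known facts,
    adding its conclusion, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_rash", "not_vaccinated"}, RULES))
```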

IV. The First Trough

By the 1970s, AI had begun to attract criticism, and with it funding difficulties. AI researchers had misjudged the difficulty of the problems they faced: their earlier over-optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI was cut back or eliminated. At the same time, connectionism (that is, neural networks) fell into obscurity for a decade after Marvin Minsky's fierce criticism of perceptrons. In the late 1970s, despite public misunderstanding, AI still made progress in several areas, such as logic programming and commonsense reasoning.

Conundrums in AI

In the early 1970s, AI hit a bottleneck. Even the most brilliant AI programs could solve only the simplest versions of the problems they were meant to solve; in critics' eyes, every AI program was merely a "toy." AI researchers had run into fundamental obstacles that could not be overcome at the time. While some of these limitations have since been broken through, many remain only partially resolved to this day:

  • Limited computing power. The computers of the time had far too little memory and processing speed.
  • Computational complexity and combinatorial explosion. Many problems can, in practice, only be solved in exponential time, making them intractable at realistic sizes.
  • Commonsense knowledge and reasoning. Many important AI applications require enormous amounts of knowledge about the world.
  • Moravec's paradox. Proving theorems and solving geometry problems is comparatively easy for computers, whereas seemingly simple tasks such as recognizing a face or getting a robot to walk across a room are extremely hard to achieve.
  • The frame and qualification problems. Without restructuring their logic, researchers could not adequately represent the ordinary reasoning about actions and change that automated planning requires.
Funding Cuts

Because of the lack of progress, the agencies that had funded AI (such as the British government, DARPA, and the NRC) gradually cut off support for undirected AI research. The pattern had been foreshadowed as early as 1966, when the ALPAC (Automatic Language Processing Advisory Committee) report criticized the progress of machine translation and the NRC (National Research Council) ended its funding after spending $20 million. DARPA, deeply disappointed with CMU's speech understanding research program, canceled a $3 million annual grant. By 1974 it was difficult to find funding for any AI project.

V. Reclaiming Prosperity

In the 1980s, expert systems began to be adopted by companies around the world, and "knowledge processing" became the focus of mainstream AI research. In the same period, the Japanese government invested heavily in AI to advance its Fifth Generation Computer project, and John Hopfield and others made breakthroughs that revived connectionism. AI was once again a success.

Expert Systems Gain Recognition

An expert system is a program that answers questions or solves problems in a specific domain by applying logical rules derived from expert knowledge. Dendral, designed from 1965 onward, could infer the structure of organic compounds from spectrometer readings, and MYCIN, designed in 1972, could diagnose infectious blood diseases. Together they demonstrated what the approach could do.

An expert system confines itself to a very narrow domain of knowledge, which lets it sidestep the commonsense-knowledge problem, and its simple design makes it relatively easy to build and later to modify. In short, practice proved such programs useful: with them, AI began to become practical.

In 1980, CMU designed an expert system called XCON for DEC (Digital Equipment Corporation), and it was a huge success: by 1986 it was saving the company forty million dollars a year. Companies around the world began developing and deploying expert systems, and by 1985 they had invested more than a billion dollars in AI.

Funding Returns: The Fifth Generation Project

In 1981, Japan's Ministry of International Trade and Industry allocated $850 million to the Fifth Generation Computer Project, whose goal was to build machines that could converse with people, translate languages, interpret images, and reason the way people do.

Other countries responded. The United Kingdom launched the £350 million Alvey project. In the United States, a consortium of companies formed the MCC (Microelectronics and Computer Technology Corporation) to fund large-scale projects in AI and information technology. DARPA also acted, organizing the Strategic Computing Initiative; its 1988 investment in AI was three times its 1984 level.

The Rebirth of Connectionism

In 1982, the physicist John Hopfield showed that a new type of neural network (now called the Hopfield network) could learn and process information in an entirely different way, a discovery that revived connectionism, which had been largely abandoned since the early 1970s. The book Parallel Distributed Processing was published in 1986, and neural networks went on to commercial success in the 1990s, in applications such as optical character recognition and speech recognition software.
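
The mechanism can be sketched in a few lines of NumPy: patterns are stored in a symmetric weight matrix via Hebbian outer products, and recall proceeds by repeatedly updating the units until the state settles into a stored attractor. This is a minimal illustration under simplifying assumptions (synchronous updates, tiny made-up patterns), not Hopfield's full formulation.

```python
import numpy as np

def train_hopfield(patterns):
    """Store +/-1 patterns in a symmetric weight matrix (Hebbian rule)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)            # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Iteratively update all units; the state falls into a stored attractor."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = np.array([-1, -1, 1, -1, 1, -1])   # first pattern with its first bit flipped
print(recall(W, noisy))                    # recovers [ 1 -1  1 -1  1 -1]
```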

VI. AI Winter

The business world's pursuit of AI in the 1980s, and its subsequent abandonment of it, fit the classic pattern of an economic bubble. The collapse, however, was in the perception of AI held by government agencies and investors; despite all the criticism, the field itself continued to advance.

The Second AI Trough, 1987-1993

From the late 1980s through the early 1990s, AI suffered a series of financial setbacks. The earliest sign of the change in mood was the sudden collapse of the market for specialized AI hardware in 1987: desktop computers from Apple and IBM had been gaining speed and power steadily, and by 1987 they outperformed the expensive Lisp machines made by Symbolics and other manufacturers. The old products lost their reason to exist, and a half-billion-dollar industry collapsed almost overnight.

Initially successful expert systems such as XCON proved expensive to maintain. They were difficult to upgrade, difficult to use, and fell prey to the various problems that had been identified years earlier. Their usefulness was confined to a few specific scenarios; they could not meet general needs.

By the late 1980s, the Strategic Computing Initiative had drastically cut back on funding for AI, and the new leadership at DARPA argued that AI was not "the next wave" and that grants would go to programs that appeared to be more likely to produce results.

Japan's "fifth-generation project" was not realized until 1991, and in fact some of its goals, such as "starting a conversation with a human being", were not achieved until 2010. As with other AI projects, expectations were much higher than what was actually possible. From the late 1980s to the early 1990s, AI suffered a series of financial problems.

The Importance of the Body: Nouvelle AI and Embodied Reasoning

Some researchers proposed entirely new approaches to artificial intelligence. They believed that to exhibit real intelligence, a machine must have a body: it needs to perceive, move, survive, and interact with the world.

In his 1990 paper "Elephants Don't Play Chess," the robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are dispensable because "the world is its own best model."

VII. New Developments

Now more than half a century old, the field of AI has finally achieved some of its original goals. Its methods are used successfully throughout the technology industry, though often behind the scenes. Some of this success is due to improvements in computer performance; some has come from focusing on specific, isolated problems and pursuing them with high standards of scientific accountability. The original dream of human-level intelligence captivated the world's imagination in the 1960s, and the reasons for its failure are still debated. Meanwhile, AI has split into a number of separate subfields, which sometimes adopt new names to distance themselves from the tarnished brand of "artificial intelligence." AI is now more cautious than ever, and also more successful.

AI Behind the Scenes

The algorithms developed by AI researchers have begun to appear as components of larger systems. AI has solved a great many problems, and those solutions now play important roles in industry. Fields in which AI technology is applied include data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis, and Google's search engine.

The field of AI has received little credit for these successes, and many of its greatest innovations have come to be seen as just another item in the computer-science toolbox. As the Oxford philosopher Nick Bostrom explains, "A lot of cutting-edge AI has filtered into general applications, often without being called AI, because once something becomes useful enough and common enough it's not labeled AI anymore."

Deep Learning


Deep learning is a branch of machine learning research motivated by the goal of building neural networks that imitate the way the human brain analyzes and interprets data such as images, sound, and text. The concept of deep learning grew out of research on artificial neural networks.
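
As a minimal, self-contained illustration of the layered networks at the heart of deep learning, the sketch below trains a tiny two-layer network with backpropagation on the classic XOR problem. It is a toy example, not a production deep-learning model; the layer sizes, learning rate, and iteration count are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a problem a single-layer perceptron cannot solve,
# but a network with one hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (gradients of the squared error)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent update
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())   # typically converges to approximately [0, 1, 1, 0]
```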

Deep learning has already produced many successful applications in areas such as computer vision, speech recognition, and natural language processing. Speech recognition software from Google and Baidu that uses deep learning can rival humans at converting speech to text, and AlphaGo, built on deep learning, defeated one of the world's top Go players in early 2016.

Gaming Milestones and Moore's Law

Game-playing AI has long been regarded as a yardstick for progress in AI. On May 11, 1997, IBM's Deep Blue defeated the world chess champion Garry Kasparov.


In 2005, a robot developed at Stanford University autonomously drove 131 miles along a desert trail, winning the DARPA Grand Challenge.

In 2011, IBM's Watson computer tested its abilities on the quiz show Jeopardy! in the program's first human-versus-machine match, ultimately defeating the show's biggest prize winner, Brad Rutter, and its record-holder for consecutive wins, Ken Jennings. During the match, Watson was not connected to the Internet; instead it relied on advanced natural language processing, information retrieval, knowledge representation and reasoning, and machine learning techniques applied to roughly 200 million pages of structured and unstructured content stored on 4 TB of disk.

It is generally considered much harder for a computer to win at Go than at games such as chess, because Go has far more possible board positions and a much larger branching factor. Nevertheless, in March 2016, AlphaGo, developed by Google's DeepMind team, defeated the top professional player Lee Sedol 4-1. An enhanced version then beat the world's number-one player, Ke Jie, three games to none at the 2017 Go Summit in Wuzhen, after which DeepMind announced AlphaGo's retirement from competitive play.


The rapid growth in computer performance behind these milestones is often summarized by Moore's law, the observation that the number of transistors on a chip, and with it computing speed and memory capacity, doubles roughly every two years. The fundamental obstacle of limited computing power was thus gradually overcome.
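
As a rough back-of-the-envelope illustration of what such a doubling rate implies (treating the two-year doubling period as exact, which is an idealization):

```python
# Idealized Moore's-law growth: capacity multiplies by 2 every doubling period.
def growth_factor(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(growth_factor(10))   # 2**5  = 32x over a decade
print(growth_factor(40))   # 2**20 = 1,048,576x over forty years
```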

Human Voice Interaction Technology

If you have ever used Siri, Cortana, Alexa, or any of the various voice search functions, you have already used voice interaction technology. The theory underlying voice interaction is natural language understanding, a subfield of artificial intelligence research. In recent years, thanks to new techniques such as deep learning, the availability of big data, and growing computing power, voice interaction technology has gradually matured.

VIII. New Breakthroughs

Boosted by the launch of the chatbot ChatGPT in November 2022, 2023 became a turning point in the history of AI, with a vibrant open-source ecosystem and multimodal models together driving progress in AI research.

As generative AI continues to move from the lab into the real world, attitudes toward the technology are becoming more nuanced. Industry experts have also offered forecasts for how AI will develop in 2024. Drawing on their analyses, a reporter from The Paper summarized five major AI trends for 2024:

Generative AI will continue to grow rapidly

Text-to-image AI tools first ignited the generative AI boom in the second half of 2022, and that boom peaked with the release of ChatGPT.


"Generative AI" searches see a surge in 2023. Source: Exploding Topics

  Before generative AI came into the limelight, most AI applications used predictive AI, which, as the name suggests, predicts trends or provides insights based on existing data without generating entirely new content. In contrast, generative AI utilizes machine learning to create original output by learning patterns of "thinking" from training data.

Henry Ajder, an expert in generative AI and deepfake research, notes: "We are still in the early stages of this generative revolution. In the future, synthetic media and content will be ubiquitous and democratized in everyday life. This is not just a novelty; it will drive breakthroughs in entertainment, education, and beyond."

AI models will move from unimodal to multimodal

Traditionally, AI models have focused on processing information from a single modality. Now, with multimodal deep learning, we can train models to discover relationships between different types of modalities, meaning that they can "translate" text into images, as well as turn images into video, text into audio, and so on.

Multimodal models have attracted a great deal of attention since last year because they let users interact with AI more naturally. That is why Google's promotional video for its large model Gemini, released in December 2023, caused such a stir: in the video, Gemini appears to recognize images in real time and to generate audio and images to help answer questions.


Screenshot from Google Gemini promo

However, Google admitted afterward that the promo went through some editing. But it at least shows us what multimodal AI might look like in the future.

AI will be further integrated into all walks of life

Many people have already made a habit of keeping ChatGPT or other AI tools open while they work, treating them as a "secretary" that can assist at any moment.


ChatGPT is becoming the most popular "office buddy".

At the Davos Forum in January, Sam Altman, co-founder and CEO of AI giant OpenAI, emphasized that the technological revolution brought about by AI is unlike anything that has come before, but that rather than replacing large numbers of jobs, as many fear, AI is becoming an "incredible tool for productivity."

One thing is certain about this future: as "laborers," we will need to adapt and acquire new AI-related skills.

AI will amplify and enhance personalization

In recent years, users have come to feel the pull of "personalized recommendation": from social media to video sites, ever more sophisticated algorithms seem to know exactly what users want to see and to display the right content at the right time. AI is accelerating the transformation of media from "mass" to "niche," with the ultimate goal of truly one-to-one interaction.


  Victor Riparbelli, CEO of AI startup Synthesia, said, "Our prediction: in the not-too-distant future, mass communication will increasingly become a thing of the past. Synthetic media and content will create new, personalized forms of communication, and the (traditional) media landscape will change forever."

AI regulatory issues will be taken seriously

Finally, unsurprisingly, 2024 will be a pivotal year for AI regulation. Progressively stronger AI also presents many new challenges for regulators, as in the classic line from Marvel's Spider-Man: "With great power comes great responsibility."


Gillian Crossan, head of risk consulting and global technology at Deloitte, believes that AI has brought the "right to be forgotten" back into focus: "When these large models are learning from vast amounts of data, how do you ensure that they are controlled and that your own information can be forgotten by them?"

The EU has arguably taken the lead on AI regulation: negotiators from the European Parliament and EU member states reportedly reached an agreement on AI rules last December. Under the framework, AI systems will be sorted into different risk categories; the higher the potential risk of an application, the stricter the requirements it must meet. The EU hopes these rules will be replicated worldwide.

 
