4/10/2024

How To Define Artificial General Intelligence



Academics and tech entrepreneurs disagree. A court may soon decide.

The idea of machines outsmarting humans has long been the subject of science fiction. Rapid improvements in artificial-intelligence (AI) programs over the past decade have led some experts to conclude that science fiction could soon become fact. On March 19th Jensen Huang, the chief executive of Nvidia, the world’s biggest manufacturer of computer chips and its third-most-valuable publicly traded company, said he believed today’s models could advance to the point of so-called artificial general intelligence (AGI) within five years. What exactly is AGI—and how can we judge when it has arrived?

Mr Huang’s words should be taken with a pinch of salt: Nvidia’s profits have soared because of the growing demand for its high-tech chips, which are used to train AI models. Promoting AI is thus good for business. But Mr Huang did set out a clear definition of what he believes would constitute AGI: a program that can do 8% better than most people at certain tests, such as bar exams for lawyers or logic quizzes.

This proposal is the latest in a long line of definitions. In the 1950s Alan Turing, a British mathematician, proposed a test under which a machine could be deemed intelligent if conversing with it were indistinguishable from conversing with a human. Arguably the most advanced large language models already pass the Turing test. But in recent years tech leaders have moved the goalposts by suggesting a host of new definitions. Mustafa Suleyman, co-founder of DeepMind, an AI-research firm, and chief executive of a newly established AI division within Microsoft, believes that what he calls “artificial capable intelligence”—a “modern Turing test”—will have been reached when a model is given $100,000 and turns it into $1m without instruction. (Mr Suleyman is a board member of The Economist’s parent company.) Steve Wozniak, a co-founder of Apple, has a more prosaic vision of AGI: a machine that can enter an average home and make a cup of coffee.

Some researchers reject the concept of AGI altogether. Mike Cook, of King’s College London, says the term has no scientific basis and means different things to different people. Few definitions of AGI attract consensus, admits Harry Law, of the University of Cambridge, but most are based on the idea of a model that can outperform humans at most tasks—whether making coffee or making millions. In January researchers at DeepMind proposed six levels of AGI, ranked by the proportion of skilled adults that a model can outperform: they say the technology has reached only the lowest level, with AI tools equal to or slightly better than an unskilled human.

The question of what happens when we reach AGI obsesses some researchers. Eliezer Yudkowsky, a computer scientist who has been fretting about AI for 20 years, worries that by the time people recognise that models have become sentient, it will be too late to stop them and humans will become enslaved. But few researchers share his views. Most believe that AI is simply following human inputs, often poorly.

There may be no consensus about what constitutes AGI among academics or businessmen—but a definition could soon be agreed on in court. As part of a lawsuit lodged in February against OpenAI, a company he co-founded, Elon Musk is asking a court in California to decide whether the firm’s GPT-4 model shows signs of AGI. If it does, Mr Musk claims, OpenAI has gone against its founding principle that it will license only pre-AGI technology. The company denies that it has done so. Through his lawyers, Mr Musk is seeking a jury trial. Should his wish be granted, a handful of non-experts could decide a question that has vexed AI experts for decades.

- The Economist
