4/26/2023

A.I. - "BILL OF RIGHTS" - AT: MASTER GLOBAL ESSAY



"The Surprising Things A.I. Engineers Will Tell You If You Let Them."

Among the many unique experiences of reporting on A.I. is this: In a young industry flooded with hype and money, person after person tells me that they are desperate to be regulated, even if it slows them down. In fact, especially if it slows them down.

What they tell me is obvious to anyone watching. Competition is forcing them to go too fast and cut too many corners. This technology is too important to be left to a race between Microsoft, Google, Meta and a few other firms.

But no one company can slow down to a safe pace without risking irrelevancy. That's where the government comes in - or so they hope.

A place to start is with the frameworks policymakers have already put forward to govern A.I. The two major proposals, at least in the West, are the "Blueprint for an A.I. Bill of Rights," which the White House put forward in 2022, and the Artificial Intelligence Act, which the European Commission proposed in 2021. Then, last week, China released its latest regulatory approach.

Let's start with the European proposal, as it came first. The Artificial Intelligence Act tries to regulate A.I. systems according to how they're used. It is particularly concerned with high-risk uses, which include everything from overseeing critical infrastructure to grading papers to calculating credit scores to making hiring decisions.

A high-risk use, in other words, is any application in which a person's life or livelihood might depend on a decision made by a machine-learning algorithm.

The European Commission described this approach as "future-proof," a claim that proved predictably arrogant: new A.I. systems have already thrown the act's clean definitions into chaos. Focusing on use cases makes sense for narrow systems designed for a specific purpose, but it's a category error when applied to generalized systems.

Models like GPT-4 don't do any one thing except predict the next word in a sequence. You can use them to write code, pass the bar exam, draw up contracts, create political campaigns, plot market strategy and power A.I. companions or sexbots.

In trying to regulate systems by use case, the Artificial Intelligence Act ends up saying very little about how to regulate the underlying model that's powering all these use cases.

Unintended consequences abound. The A.I. Act mandates, for example, that in high-risk cases, "training, validation and testing data sets shall be relevant, representative, free of errors and complete."

But what large language models are showing is that the most powerful systems are those trained on the largest data sets. Those sets can't plausibly be free of errors, and it's not clear what it would mean for them to be representative.

There's a strong case to be made for data transparency, but I don't think Europe intends to deploy weaker, less capable systems across everything from exam grading to infrastructure.

The other problem with the use case approach is that it treats A.I. as a technology that will, itself, respect boundaries. But its disrespect for boundaries is what most worries the people working on these systems.

Imagine that "personal assistant" is rated as a low-risk use case and a hypothetical GPT-6 is deployed to power an absolutely fabulous personal assistant. The system gets tuned to be extremely good at interacting with human beings and accomplishing a diverse set of goals in the real world.

That's great until someone asks it to secure a restaurant reservation at the hottest place in town and the system decides that the only way to do it is to cause a disruption that leads a third of that night's diners to cancel their bookings.

The Publishing continues. The World Students Society thanks author Ezra Klein.
