''' THE DESTRUCTIVE TAP '''
A DESTRUCTIVE A.I. comparable to a nuclear bomb is now a concrete possibility; the question is whether anyone will be reckless enough to try to build one.
HOW MUCH do we have to fear from A.I. really? It's a question I've been asking experts since the debut of ChatGPT in late 2022.
The A.I. pioneer Yoshua Bengio, a computer science professor at the Université de Montréal, is the most cited researcher alive, in any discipline. When I spoke with him in 2024, Dr. Bengio told me that he had trouble sleeping while thinking about the future.
Specifically, he was worried that an A.I. would engineer a lethal pathogen - some sort of super coronavirus - to eliminate humanity. ''I don't think there's anything close in terms of the scale of danger,'' he said.
Contrast Dr. Bengio's view with that of his frequent collaborator Yann LeCun, who heads A.I. research at Mark Zuckerberg's Meta. Like Dr. Bengio, Dr. LeCun is one of the world's most-cited scientists.
He thinks that A.I. will usher in a new era of prosperity and that discussions of existential risk are ridiculous. ''You can think of A.I. as an amplifier of human intelligence,'' he said in 2023.
When nuclear fission was discovered in the late 1930s, physicists concluded within months that it could be used to build a bomb. Epidemiologists agree on the potential for a pandemic, and astrophysicists agree on the risk of an asteroid strike.
But no such consensus exists regarding the dangers of A.I., even after a decade of vigorous debate. How do we react when half the field can't agree on what risks are real?
One answer is to look at the data. After the release of GPT-5 in August, some thought that A.I. had hit a plateau. Expert analysis suggests that isn't true. GPT-5 can do things that no other A.I. can do.
It can hack into a web server. It can design novel forms of life. It can even build its own A.I. (albeit a much simpler one) from scratch.
FOR a decade, the debate over A.I. risk has been mired in hypotheticals. Pessimistic literature like Eliezer Yudkowsky and Nate Soares's best-selling book, ''If Anyone Builds It, Everyone Dies,'' relies on philosophy and sensationalist fables.
Today there is a vanguard of professionals who research what A.I. is capable of. Three years after ChatGPT was released, these evaluators have produced a large body of evidence.
Unfortunately, this evidence is as scary as anything in the doomerist imagination.
THE dangers begin with the prompt. Because A.I.s have been trained on vast repositories of human cultural and scientific data, they can, in theory, respond to almost any prompt.
But public-facing A.I.s like ChatGPT have filters in place to prevent them from pursuing certain types of malicious requests. Ask an A.I. for an image of a corgi running through a field and you will get it.
Ask an A.I. for an image of a terrorist blowing up a school bus, and the filter will typically intervene.
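The filtering described above can be pictured, in greatly simplified form, as a gate that inspects each prompt before the model acts on it. The sketch below is purely illustrative: real systems use trained safety classifiers, not keyword lists, and the `BLOCKED_TERMS` set and `allow_prompt` function are hypothetical names invented for this example.

```python
# Toy illustration of a prompt filter - NOT how any real deployed system works.
# Real moderation relies on trained classifiers rather than keyword matching.
BLOCKED_TERMS = {"blowing up", "bomb-making", "lethal pathogen"}  # hypothetical list

def allow_prompt(prompt: str) -> bool:
    """Return True if the (toy) filter lets the prompt through."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(allow_prompt("an image of a corgi running through a field"))  # True
print(allow_prompt("a terrorist blowing up a school bus"))          # False
```

Even this crude gate shows why the examples in the text diverge: the corgi request matches nothing and passes, while the school-bus request trips the filter and is refused.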
The Honour and Serving of the Latest Global Operation Research on A.I. Risks, Students and the Future, continues. The World Students Society thanks Stephen Witt for his Opinion.
With respectful dedication to the Tech Giants, Researchers, Leaders, Parents, Students, Professors, Teachers of the world.
See You all prepare for Great Global '' Democratic Constitutional Convention '' on !WOW! - the exclusive and eternal ownership of every student in the world - wssciw.blogspot.com and Twitter X !E-WOW! - The Ecosystem 2011 :
Good Night and God Bless
SAM Daily Times - The Voice Of The Voiceless