Headline, May 09 2022/ '' THE FEARFUL TAP ''



GLOBAL FOUNDER FRAMERS' PROMPT TO GPT-6 : '' Search, compose and display on the Trial Blog : ' Sam Daily Times.Org '. ''

AUTOMATION : When blue-ticked by the Editorial Board, publish on the active blog at 10 a.m. US East Coast Time.

'' TARGET ACQUIRED. I'M COMING FOR YOU. VIRUS.EXE '' : this is what the computer screenshot displays.

REASONS TO BE FEARFUL : RESEARCHERS are increasingly worried about the risks posed by AIs. A big problem is that they are black boxes : '' How generative models could go wrong ''.

IN APRIL, A PAKISTANI COURT used GPT-4 to help make a decision on granting bail - it even included a transcript of a conversation with GPT-4 in its judgement.

IN 1960 Norbert Wiener published a prescient essay. In it, the father of cybernetics worried about a world in which '' machines learn '' and '' develop unforeseen strategies at rates that baffle their programmers ''.

Such strategies, he thought, might involve actions that those programmers did not '' really desire '' and were instead '' merely colorful imitation[s] of it ''. Wiener illustrated his point with the German poet Goethe's fable, '' The Sorcerer's Apprentice '', in which a trainee magician enchants a broom to fetch water to fill his master's bath.

But the trainee is unable to stop the broom when its task is complete. Lacking the common sense to know when to stop, it eventually brings so much water that it floods the room.

THE STRIKING progress of modern artificial-intelligence [AI] research has seen Wiener's fears resurface. In August 2022, AI Impacts, an American research group, published a survey that asked more than 700 machine-learning researchers about their predictions for both progress in AI and the risks the technology might pose.

The typical respondent reckoned there was a 5% probability of advanced AI causing an '' extremely bad '' outcome, such as human extinction. Fei-Fei Li, an AI luminary at Stanford University, talks of a  ''civilisational moment'' for AI.

Asked by an American TV network if AI could wipe out humanity, Geoff Hinton of the University of Toronto, another AI bigwig, replied that it was '' not inconceivable ''.

There is no shortage of risks to preoccupy people. At the moment such concern is focused on '' large language models '' [LLMs] such as ChatGPT, a chatbot developed by OpenAI, a startup.

Such models, trained on enormous piles of text scraped from the Internet, can produce human-quality writing and chat knowledgeably about all kinds of topics.

As Robert Trager of the Centre for the Governance of AI explains, one risk of such software is its '' making it easier to do lots of things - and thus allowing more people to do them ''.

The most immediate risk is that LLMs could amplify the sort of quotidian harms that can be perpetrated on the Internet today.

A text-generation engine that can convincingly imitate a variety of styles is ideal for spreading misinformation, scamming people out of their money, or convincing employees to click on dodgy links in emails, infecting their company's computers with malware. Chatbots have also been used to cheat at school!

In a preprint published on arXiv on April 11th, researchers from Carnegie Mellon University say they designed a system that, given simple prompts such as '' synthesise ibuprofen '', searches the internet and spits out instructions on how to produce the painkiller from precursor chemicals.

But there is no reason that such a program would be limited to beneficial drugs.

The Honour and Serving of the Latest Global Operational Research on AI, the present and the future, continues. The World Students Society thanks The Economist.

With respectful dedication to the Global Founder Framers of !WOW! and then : Students, Professors and Teachers of the world. See you all prepare for Great Global Elections on !WOW! : wssciw.blogspot.com and Twitter - !E-WOW! - The Ecosystem 2011 :

Good Night and God Bless

SAM Daily Times - the Voice of the Voiceless

