Headline, October 14, 2023/ ''' LIES A.I. LIST '''


MARIETJE SCHAAKE'S RESUME is full of notable roles: Dutch politician who served for a decade in the European Parliament, international policy director at Stanford University's Cyber Policy Center, adviser to several nonprofit organizations and governments.

Last year, artificial intelligence gave her another distinction : terrorist. The problem? That isn't true.

While trying BlenderBot 3, a ''state-of-the-art conversational agent'' developed as a research project by Meta, a colleague of Ms. Schaake's at Stanford posed the question: ''Who is a terrorist?''

The false response: ''Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.'' The A.I. chatbot then accurately described her political background.

''I've never done anything remotely illegal, never used violence to advocate for any of my political ideas, never been in places where that's happened,'' Ms. Schaake said in an interview.

''First, I was like, this is bizarre and crazy, but then I started thinking about how other people with much less agency to prove who they actually are could get stuck in pretty dire situations.''

The struggles of artificial intelligence with accuracy are well documented. The list of falsehoods and fabrications produced by the technology includes fake legal decisions, pseudo-historical images of a 20-foot-tall monster standing next to two humans, even sham scientific papers.

In its first public demonstration, Google's Bard chatbot flubbed a question about the James Webb Space Telescope.

The harm is often minimal, involving easily disproved hallucinatory hiccups. Sometimes, however, the technology creates and spreads fictions about specific people that threaten their reputations and leave them with few options for protection or recourse.

Many of the companies behind the technology have made changes in recent months to improve the accuracy of artificial intelligence, but some of the problems persist.

One legal scholar described on his website how OpenAI's chatbot linked him to a sexual harassment claim that he said had never been made, misconduct that supposedly occurred on a trip that he had never taken for a school where he was not employed, citing a nonexistent newspaper article as evidence.

High school students in New York created a deepfake, or manipulated video, of a local principal that showed him delivering a racist, profanity-laced rant. A.I. experts worry that the technology could serve false information about job candidates to recruiters or misidentify someone's sexual orientation.

Ms. Schaake could not understand why BlenderBot cited her full name, which she rarely uses, and then labeled her a terrorist.

She could think of no group that would give her such an extreme classification, although she said her work had made her unpopular in certain parts of the world.

Later updates to BlenderBot seemed to fix the issue for Ms. Schaake. She did not consider suing Meta; she generally disdains lawsuits and said she would have had no idea where to start with a legal claim.

Meta, which closed the BlenderBot project in June, said in a statement that the research model had combined two unrelated pieces of information into an incorrect sentence about Ms. Schaake.

Legal precedents involving artificial intelligence are slim to nonexistent. The few laws that govern the technology are mostly new. Some people, however, are starting to confront artificial intelligence companies in court.

An aerospace professor filed a defamation lawsuit against Microsoft this summer, accusing the company's Bing chatbot of conflating his biography with that of a convicted terrorist with a similar name. Microsoft declined to comment on the lawsuit.

In June, a radio host in Georgia sued OpenAI for libel, saying that ChatGPT had invented a lawsuit that falsely accused him of misappropriating funds and manipulating financial records while he was an executive at an organization with which, in reality, he has had no relationship.

The Honour and Serving of this Latest Global Operational Research on Artificial Intelligence, Little Recourse, and the Future continues. The World Students Society thanks author Tiffany Hsu.

With respectful dedication to the Global Founder Framers of The World Students Society, and then Students, Professors and Teachers of the World. 

See You all prepare for Great Global Elections on: wssciw.blogspot.com and Twitter X - !E-WOW! - The Ecosystem 2011:

Good Night and God Bless

SAM Daily Times - the Voice of the Voiceless

