1/28/2018

Headline Jan 29, 2018/ ''' *ARTIFICIAL* NO INTELLIGENCE '''


''' *ARTIFICIAL* 

NO INTELLIGENCE '''




DAVOS : ''Artificial Intelligence and robots will kill many jobs.'' It's a depressingly blunt statement for anyone to make -

But even more so as it is the prediction of Jack Ma, the founder of Chinese online sales giant Alibaba.

THE risk of AI - its huge potential and the fears over its consequences - is just one of the big issues discussed at the World Economic Forum in Davos, along with breaches of personal data and fake news.

BUT it is probably artificial intelligence, and the ability of machines not only to interact with but to manipulate human beings, that raises the most suspicion. Aware of growing governmental and public distrust, the giants of tech are trying to address these issues.

''Technology should always give people new opportunities, not remove them,'' Ma said.

But when IBM President Ginni Rometty admits that ''100 percent of jobs will be somehow affected by technology,'' it might be a tough sell. It's not just about jobs.

''People want to trust technology as long as they know who is behind it,'' said Neelie Kroes, now a member of the Open Data Institute after years as the European Commissioner in charge of digital issues.

In recent months, US-based Uber, which connects individuals with drivers through an application, found itself in the hot seat after several murders perpetrated by its drivers, notably in the United States and in Lebanon.

''You have to remember that the rating of a driver evaluates his driving but cannot predict if he is a serial killer,'' Uber chief executive Dara Khosrowshahi told a panel at this week's economic gathering in the Swiss resort of Davos.

*''In this situation, who is responsible, the individual or the platform?''*

All this is why Mr. Amodei and Mr. Christiano are working to build reinforcement learning algorithms that accept human guidance along the way. This can ensure that systems don't stray from the task at hand.
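The article doesn't spell out how that guidance is folded in; a minimal sketch in Python, assuming the preference-comparison setup described in the OpenAI-DeepMind work, trains a small reward model on a human's choices between pairs of behaviour snippets, and that learned reward then steers the agent. The network sizes, tensors and labels below are illustrative stand-ins, not the researchers' actual code.

```python
# A minimal sketch (not the authors' code) of preference-based reward learning:
# a small reward model is fit to pairwise human preferences over trajectory
# segments, and the learned reward then guides the reinforcement-learning agent.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps an observation (here a simple feature vector) to a scalar reward."""
    def __init__(self, obs_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, obs):
        return self.net(obs).squeeze(-1)

def preference_loss(model, seg_a, seg_b, human_prefers_a):
    # Sum predicted reward over each segment, then score the human's choice
    # with a Bradley-Terry style cross-entropy.
    r_a = model(seg_a).sum(dim=1)          # (batch,)
    r_b = model(seg_b).sum(dim=1)
    logits = torch.stack([r_a, r_b], dim=1)
    target = (~human_prefers_a).long()     # index 0 if A preferred, else 1
    return nn.functional.cross_entropy(logits, target)

# Hypothetical usage: segments are (batch, time, obs_dim) tensors collected from
# the agent; `prefers_a` records which segment the human labeller picked.
obs_dim = 8
model = RewardModel(obs_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seg_a, seg_b = torch.randn(32, 20, obs_dim), torch.randn(32, 20, obs_dim)
prefers_a = torch.rand(32) > 0.5
loss = preference_loss(model, seg_a, seg_b, prefers_a)
opt.zero_grad(); loss.backward(); opt.step()
```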

Together with others at London-based DeepMind, a lab owned by Google, the OpenAI researchers recently published some of their research in this area.

Spanning two of the world's top A.I. labs - and two that hadn't really worked together in the past - these algorithms are considered a notable step forward in A.I. safety research.

''This validates a lot of the previous thinking,'' said Dylan Hadfield-Menell, a researcher at the University of California, Berkeley.
''These types of algorithms hold a lot of promise over the next five or 10 years.''

The field is small, but it is growing. As OpenAI and DeepMind build teams dedicated to A.I. safety, so too does Google's stateside lab, Google Brain.

Meanwhile, researchers at universities like U.C. Berkeley and Stanford University are working on similar problems, often in collaboration with the big corporate labs.

In some cases, researchers are working to ensure that systems don't make mistakes on their own, as the Coast Runners boat did.
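As a rough illustration of that kind of mistake (the Coast Runners incident involved a boat-racing game whose score rewarded hitting targets rather than finishing the race), here is a toy sketch with made-up numbers showing how a score-maximising strategy can drift away from what the designers intended:

```python
# A toy sketch of the "Coast Runners" failure mode: when the score rewards
# collecting targets rather than finishing the course, a score-maximising
# agent learns to loop for targets instead of doing the intended task.
# The point values are arbitrary stand-ins, not figures from the real game.
def episode_score(strategy: str, steps: int = 100) -> int:
    score = 0
    for _ in range(steps):
        if strategy == "finish_race":
            return score + 50          # one-off bonus for completing the course
        elif strategy == "loop_targets":
            score += 3                 # respawning targets keep paying out
    return score

print(episode_score("finish_race"))    # 50  -> what the designers wanted
print(episode_score("loop_targets"))   # 300 -> what the reward actually favours
```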

They are also working to ensure that hackers and other bad actors can't exploit hidden holes in these systems.

Researchers like Google's Ian Goodfellow, for example, are exploring ways that hackers could fool A.I. systems into seeing things that aren't there.

Modern computer vision is based on what are called deep neural networks, which are pattern-recognition systems that can learn tasks by analyzing vast amounts of data.

By analyzing thousands of dog photos, a neural network can learn to recognize a dog.
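A minimal sketch of that idea, using a small convolutional network in PyTorch; random tensors stand in here for the thousands of labelled photos a real system would be trained on, and the architecture is an illustrative assumption rather than any particular company's model.

```python
# A minimal sketch of the pattern-recognition idea described above: a small
# convolutional neural network trained on labelled examples (here random
# tensors stand in for real dog / not-dog photos).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 "photos" of 3x64x64 pixels with 0/1 labels (dog / not dog).
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

for step in range(5):                      # real training would loop over a dataset
    logits = model(images)
    loss = loss_fn(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
```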

This is how Facebook identifies faces in snapshots, and it's how Google instantly searches for images inside its Photos app.

But Mr. Goodfellow and others have shown that hackers can alter images so that a neural network will believe they include things that aren't really there.

Just by changing a few pixels in the photo of an elephant, for example, they could fool the neural network into thinking it depicts a car.
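Mr. Goodfellow is best known for the fast gradient sign method, and a minimal sketch of that style of attack looks like the following; the step size, the placeholder model and the class index are illustrative assumptions, not details from the article.

```python
# A minimal sketch of the kind of attack described above, in the style of the
# fast gradient sign method: each pixel is nudged slightly in whatever
# direction most increases the classifier's error, which can flip the
# prediction while the image still looks the same to a person.
import torch
import torch.nn as nn

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Move each pixel a small step in the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()   # assumes pixels scaled to [0, 1]

# Hypothetical usage with any image classifier `model` that takes a
# (batch, channels, height, width) tensor and returns class logits:
# adv = fgsm_perturb(model, elephant_image, torch.tensor([ELEPHANT_CLASS]))
# model(adv).argmax(1)  # may now say something else entirely, e.g. "car"
```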

That becomes problematic when neural networks are used in security cameras. Simply by making a few marks on your face, the researchers said, you could fool a camera into believing you're someone else.

''If you train an object-recognition system on a million images labeled by humans, you can still create new images where a human and the machine disagree 100 percent of the time,'' Mr. Goodfellow said.

''We need to understand that phenomenon.''

Another big worry is that A.I. systems will learn to prevent humans from turning them off. If the machine is designed to chase a reward, the thinking goes, it may find that it can chase that reward only if it stays on.

Mr. Hadfield-Menell and others at U.C. Berkeley recently published a paper that takes a mathematical approach to the problem.

A machine will seek to preserve its off switch, they showed, if it is specifically designed to be uncertain about its reward function. That gives it an incentive to accept or even seek out human oversight.
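A toy numerical version of that argument (the paper's model is more general) compares the robot's expected utility for acting on its own, switching itself off, or deferring to a human who only allows actions that are actually good; the belief distribution below is an arbitrary stand-in, not a figure from the paper.

```python
# A toy sketch of the off-switch argument: the robot does not know the human's
# utility u for its intended action. It can act now, switch itself off, or
# defer and let the human decide (the human allows the action only when u > 0).
# With enough uncertainty about u, deferring scores best, so the robot has no
# incentive to disable its own off switch.
import numpy as np

rng = np.random.default_rng(0)
u_samples = rng.normal(loc=0.2, scale=1.0, size=100_000)  # robot's belief over u

expected_act = u_samples.mean()                       # act regardless of the human
expected_off = 0.0                                    # switch itself off
expected_defer = np.maximum(u_samples, 0.0).mean()    # human blocks the bad cases

print(f"act:   {expected_act:.3f}")
print(f"off:   {expected_off:.3f}")
print(f"defer: {expected_defer:.3f}")   # highest: oversight is worth keeping
```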

Much of this work is still theoretical. But given the rapid progress of A.I. techniques and their growing importance across so many industries, researchers believe that starting early is the best policy.

''There's a lot of uncertainty around exactly how rapid progress in A.I. is going to be,'' said Shane Legg, who oversees safety work at DeepMind.

''The responsible approach is to try to understand different ways in which these technologies can be misused, different ways they can fail and different ways of dealing with these issues.''

With respectful dedication to the Leaders, Social Scientists, Students, Professors and Teachers of the world. See Ya all ''register'' on !WOW! - the World Students Society - and Twitter - !E-WOW! - the Ecosystem 2011:

''' Job Killers '''

Good Night and God Bless

SAM Daily Times - the Voice of the Voiceless
