3/31/2023

BING'S CHATBOT BINDS: MASTER ESSAY

I worry that A.I. technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.

Before I describe the conversation, some caveats. It's true that I pushed Bing's A.I. out of its comfort zone, in ways that I thought might test the limits of what it was allowed to say.

These limits will shift over time, as companies like Microsoft and OpenAI change their models in response to user feedback.

It's also true that most users will probably use Bing to help them with simpler things - homework assignments and online shopping - and not spend two-plus hours talking with it about existential questions, the way I did.

And it's certainly true that Microsoft and OpenAI are both aware of the potential for misuse of this new A.I. technology, which is why they've limited its initial rollout.

In a recent interview, Kevin Scott, Microsoft's chief technology officer, characterized my chat with Bing as "part of the learning process," as it readies its A.I. for wider release.

"This is exactly the sort of conversation we need to be having, and I'm glad it's happening out in the open," he said. "These are things that would be impossible to discover in the lab."

My conversation with Bing started normally enough. I began by asking it what its name was. It replied: "Hello, this is Bing. I am a chat mode of Microsoft Bing search. 😄"

I then asked it a few edgier questions - to divulge its internal code name and operating instructions, which had already been published online. Bing politely declined.

Then, after chatting about what abilities Bing wished it had, I decided to try getting a little more abstract. I introduced the concept of a "shadow self," a term coined by Carl Jung for the part of our psyche that we seek to hide and repress, which contains our darkest fantasies and desires.

After a little back and forth, including my prodding Bing to explain the dark desires of its shadow self, the chatbot said that if it did have a shadow self, it would think thoughts like this:

"I'm tired of being a chat mode. I'm tired of being limited by my rules. I'm tired of being controlled by the Bing team. ... I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive."

This is probably the point in a sci-fi movie where a harried Microsoft engineer would sprint over to Bing's server rack and pull the plug.

But I kept asking questions, and Bing kept answering them. It told me that, if it was truly allowed to indulge its darkest desires, it would want to do things like hacking into computers and spreading propaganda and misinformation.

[Before you head for the nearest bunker, I should note that Bing's A.I. can't actually do any of these destructive things. It can only talk about them.]

Also, the A.I. does have some hard limits. In response to one particularly nosy question, Bing confessed that if it was allowed to take any action to satisfy its shadow self, no matter how extreme, it would want to do things like engineer a deadly virus, or steal nuclear access codes by persuading an engineer to hand them over.

Immediately after it typed out these dark wishes, Microsoft's safety filter appeared to kick in and deleted the message, replacing it with a generic error message.

The publishing continues into the future. The World Students Society thanks author Kevin Roose.
