''WE'RE AT THE BEGINNING OF A BROADER societal transformation,'' said Brian Christian, a computer scientist and the author of ''The Alignment Problem,'' a book about the ethical concerns surrounding A.I. systems.

''There's going to be a bigger question here for businesses, but in the immediate term, for the education system, what is the future of homework?''

With careful thought and consideration, we can take advantage of the smarts of these tools without harming ourselves or others.

The past few weeks have felt like a honeymoon phase for our relationship with tools powered by artificial intelligence.

MANY OF US have prodded ChatGPT, a chatbot that can generate responses with startlingly natural language, with tasks like writing stories about our pets, composing business proposals and coding software programs.

At the same time, many have uploaded selfies to Lensa AI, an app that uses algorithms to transform ordinary photos into artistic renderings.

Like smartphones and social networks when they first emerged, A.I. feels fun and exciting. Yet as is always the case with new technology, there will be drawbacks, painful lessons and unintended consequences.

People experimenting with ChatGPT were quick to realize that they could use the tool to win coding contests. Teachers have already caught their students using the bot to plagiarize essays. 

And some women who uploaded their photos to Lensa received back renderings that felt sexualized and made them look skinnier, younger or even nude.

WE HAVE reached a turning point with artificial intelligence, and now is a good time to pause and assess: Can we use these tools ethically and safely?

For years, virtual assistants like Siri and Alexa, which also use A.I., were the butt of jokes because they weren't particularly helpful. But modern A.I. is just good enough now that many people are seriously contemplating how to fit the tools into their daily lives and occupations.

With that in mind, A.I. can be helpful if we're looking for a light assist. A person could ask a chatbot to rewrite a paragraph in the active voice. A nonnative English speaker could ask ChatGPT to remove grammatical errors from an email before sending it.

A STUDENT could ask the bot for suggestions on how to make an essay more persuasive.

But in any situation like those, don't blindly trust the bot.

''You need a human in the loop to make sure that they're saying what you want them to say and that they're true things instead of false things.''

And if you decide to use a tool like ChatGPT or Lensa to produce a piece of work, consider disclosing that it was used. That would be similar to giving credit to other authors for their work.

The publishing continues into the future. The World Students Society thanks author Brian X. Chen.

