WORRIES about A.I.-assisted threats and extortion intensified with the introduction last month of Sora, a text-to-video app from OpenAI.
The app, which allows users to upload images of themselves to be incorporated into hyper-realistic scenes, quickly depicted actual people in frightening situations.
Artificial intelligence is already raising concerns for its ability to mimic real voices in service of scams or to produce deepfake pornography without a subject's permission.
Now the technology is also being used for video threats that are primed to maximize fear: far more personalized, more convincing and more easily delivered.
"TWO things will always happen when technology like this gets developed: We will find clever and creative and exciting ways to use it, and we will find horrific and awful ways to abuse it," said Hany Farid, a professor of computer science at the University of California, Berkeley.
"What's frustrating is that this is not a surprise."
Digitally generated threats have been possible for at least a few years. A judge in Florida was sent a video in 2023, most likely made using a character customization tool in the Grand Theft Auto 5 video game, that featured an avatar who looked and walked like her being hacked and shot to death.
But threatening images are rapidly becoming easier to make and more persuasive. One YouTube page had more than 40 realistic videos, most likely made using A.I. according to experts who reviewed the channel, each showing a woman being shot.
YouTube, after The New York Times contacted it, said it had terminated the channel for "multiple violations" of its guidelines. A deepfake video of a student carrying a gun sent a high school into lockdown this spring.
In July, a lawyer in Minneapolis said xAI's Grok chatbot had provided an anonymous social media user with detailed instructions on breaking into his house, sexually assaulting him and disposing of his body.
Until recently, artificial intelligence could replicate people only if they had a huge online presence, such as film stars with throngs of publicly accessible photos.
Now, a single profile image will suffice, said Dr. Farid, who co-founded GetReal Security, a service that identifies malicious digital content.
The same is true of voices: what once took hours of example data to clone now requires less than a minute.
"The concern is that now, almost anyone with no skills, but with a motive or lack of scruples, can easily use these tools to do damage," said Jane Bambauer, a professor who teaches about A.I. and the law at the University of Florida.
This Master Essay Publishing continues. The World Students Society thanks Tiffany Hsu.