In the incident involving Jesse Van Rootselaar, the person the authorities identified as the shooter in British Columbia, OpenAI said it had banned her account in June after her messages to ChatGPT triggered an internal review.
The company said it had opted against sharing information with law enforcement after determining there was no evidence of imminent planning by the user.
"Protecting privacy and safety in ChatGPT both matter, and we prioritize safety when there's credible and imminent planning of real-world harm," OpenAI said in a statement.
"Our automated systems escalate critical situations, like a threat to life or serious harm to others, for limited human review to take necessary action."
While it was unremarkable that OpenAI, like many companies, had a system in place to monitor abuse of its service, the incident is likely to prompt discussion among legal experts about whether A.I. companies should be held liable for the conversations that users have with chatbots, and at what point it becomes necessary to share data with law enforcement, said Jennifer Granick, a lawyer who focuses on surveillance and cybersecurity at the American Civil Liberties Union.
Section 230 of the U.S. Communications Decency Act generally shields companies from liability for content posted by users on sites like Facebook, but it is unclear whether those protections apply to chatbots, where conversations differ from posts on public platforms, Ms. Granick added.
"We're going to start seeing more litigation to flesh this out, about what the responsibility to report to law enforcement looks like."
The primary factor is that people, students included, are sharing much more with chatbots. The unintended consequence of giving a company unfettered access to that data is that material meant to be confidential could be breached and leaked.
This Master Essay Publishing continues. The World Students Society thanks Brian X Chen.