''Along with all the fantastic aspects of the web come new problems like bias, misinformation and offensive content, to name a few,'' Biz Stone, a Twitter cofounder, wrote on a crowdfunding page last year for Factmata, another A.I.-fueled disinformation defense operation.

''It can be confusing and difficult to cut through to the trusted, truthful information.''

The businesses are hiring across a broad spectrum of trust and safety roles.

Companies have courted people expert at recognizing content posted by child abusers or human traffickers, as well as former military counterterrorism agents with advanced degrees in law, political science and engineering.

Moderators, many of whom work as contractors, are also in demand.

Mounir Ibrahim, the vice president of public affairs and impact for Truepic, a tech company specializing in image and digital content authenticity, said many early clients were banks and insurance companies that relied more and more on digital transactions.

''We are at an inflection point of the modern internet right now,'' he said. ''We are facing a tsunami of generative and synthetic material that is going to hit our computer screens very, very soon - not just images and videos, but text, code, audio, everything under the sun. And this is going to have tremendous effects not just on disinformation but on brand integrity, on the financial tech world, on the insurance world and across nearly every vertical that is now digitally transforming on the heels of Covid.''

Efforts to tackle misinformation and disinformation have included research initiatives from top-tier universities and policy institutes, media literacy campaigns and initiatives to repopulate news deserts with local journalism outfits.

Many social media platforms have set up internal teams to address the problem or outsourced content moderation work to large companies such as Accenture, according to a July report from the geopolitical think tank German Marshall Fund.

In September, Google completed its $5.4 billion acquisition of Mandiant, an 18-year-old company that tracks online influence activities as well as offering other cybersecurity services.

A growing group of start-ups, many of which rely on artificial intelligence to root out and decode online narratives, conducts similar exercises, often for clients in corporate America.

Alethea wrapped up a $10 million fund-raising round in October. Also last month, Spotify said it bought the five-year-old Irish company Kinzen, citing its grasp of ''the complexity of analyzing audio content in hundreds of languages and dialects, and the challenges in effectively evaluating the nuance and the intent of the content.''

Months earlier, Spotify found itself trying to quell an uproar over accusations that its star podcast host, Joe Rogan, was spreading vaccine misinformation.

Amazon's Alexa Fund participated in a $24 million funding round last winter for five-year-old Logically, which uses artificial intelligence to identify misinformation and disinformation on topics such as climate change and Covid-19.

Truepic was featured alongside companies such as Zignal Labs and Memetica in the German Marshall Fund report about disinformation-defense start-ups.

Anya Schiffrin, the report's lead author and a senior lecturer at Columbia's School of International and Public Affairs, said future regulation of disinformation and other malicious content could lead to more jobs in the trust and safety space.

She said regulators around the European Union were already hiring people to help carry out the new Digital Services Act, which requires internet platforms to combat misinformation and restrict certain online ads.

''I'm really tired of these really rich companies saying that it's too expensive - it's a cost of doing business, not an extra, add-on luxury,'' Ms. Schiffrin said.

''If you can't provide accurate, quality information to your customers, then you're not a going concern.''

The World Students Society thanks author Tiffany Hsu.

