Twitter said Wednesday it will study whether its machine-learning technology causes unintentional harm, a move that comes as social media companies face increasing scrutiny for their role in spreading dangerous conspiracy theories and enabling harassment.
In the coming months, Twitter’s Responsible Machine Learning Initiative will release reports analyzing possible racial and gender bias in its image cropping algorithm, “a fairness assessment of our Home timeline recommendations across racial subgroups” and “an analysis of content recommendations for different political ideologies across seven countries.”
After Twitter’s image-cropping algorithm was criticized last year for favoring white faces over darker-skinned ones, the company maintained that its tests had not shown any racial or gender bias, though in the aftermath it announced features giving users more control over how images are cropped because “we recognize that the way we automatically crop photos means there is a potential for harm.”
Twitter said the results from the reports may inform changes to the platform, new guidelines for how it designs certain products and “heightened awareness” around ethical machine learning.
“When Twitter uses ML, it can impact hundreds of millions of Tweets per day and sometimes, the way a system was designed to help could start to behave differently than was intended,” the company said. “These subtle shifts can then start to impact the people using Twitter and we want to make sure we’re studying those changes and using them to build a better product.”
The Twitter initiative comes as social media companies face accusations that their algorithms are responsible for increasing polarization, echo chambers, misinformation and online radicalization. Twitter in particular has been criticized for not doing enough to combat harassment. Former New York Times columnist Charlie Warzel argued in his newsletter, “Galaxy Brain,” this week that Twitter’s Trending section siphons the entire internet’s attention onto individual users, leading to disproportionate outrage and the appearance of a widespread “cancel culture.” CEO Jack Dorsey has previously said he wants to see a future where users can choose which algorithm they want to use from an App Store-like interface, instead of relying on a single algorithm made by Twitter.
Twitter’s tacit acknowledgment that its algorithms might harm users and society at large contrasts with Facebook’s defense of its recommendation algorithms. In a Medium post last month, Vice President of Global Affairs Nick Clegg argued that Facebook has no commercial interest in amplifying extreme content, and that the platform isn’t solely responsible for increasing political polarization because individual choices also shape what users see in their News Feeds. “We need to look at ourselves in the mirror, and not wrap ourselves in the false comfort that we have simply been manipulated by machines all along,” he said.