Twitter has launched a Responsible Machine Learning initiative to assess the potentially harmful impact of its AI and ML algorithms, the company said in a blog post.
The initiative rests on the following pillars:
- Taking responsibility for the algorithmic decisions
- Equity and fairness of outcomes
- Transparency about our decisions and how we arrived at them
- Enabling agency and algorithmic choice
Twitter employees Rumman Chowdhury, Director of Software Engineering, and Jutta Williams, Staff Product Manager on the ML Ethics, Transparency and Accountability (META) team, explained that responsible use of technology includes studying the effects it can have over time.
“When Twitter uses ML, it can impact hundreds of millions of Tweets per day and sometimes, the way a system was designed to help could start to behave differently than was intended. These subtle shifts can then start to impact the people using Twitter and we want to make sure we’re studying those changes and using them to build a better product,” Chowdhury and Williams state in the blog.
Some of the analyses users will gain access to in the coming months include:
- A gender and racial bias analysis of the image cropping (saliency) algorithm
- A fairness assessment of our Home timeline recommendations across racial subgroups
- An analysis of content recommendations for different political ideologies across seven countries
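Twitter has not published how these assessments are computed, but a common starting point for a subgroup fairness check is demographic parity: comparing how often a favorable outcome (for example, a recommendation being shown) occurs for each subgroup. The sketch below is a minimal, hypothetical illustration of that idea; the data and function names are invented for the example and do not reflect Twitter's actual methodology.

```python
# Minimal sketch of a demographic-parity check: compare the rate of a
# favorable outcome across subgroups. All data here is hypothetical.
from collections import defaultdict

def outcome_rates(records):
    """Return the favorable-outcome rate per subgroup.

    records: iterable of (subgroup, outcome) pairs, outcome in {0, 1}.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in favorable-outcome rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Toy example: group B receives the favorable outcome far more often than A.
sample = [("A", 1), ("A", 0), ("A", 0), ("A", 0),
          ("B", 1), ("B", 1), ("B", 1), ("B", 0)]
rates = outcome_rates(sample)
print(rates)              # {'A': 0.25, 'B': 0.75}
print(parity_gap(rates))  # 0.5
```

A large parity gap flags a disparity worth investigating; real-world audits like those Twitter describes would also need confidence intervals, multiple metrics, and careful subgroup definitions.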
The blog also noted that the META team is studying how these systems work and using that feedback to improve the Twitter experience. The explainable ML solutions the team is building will help people better understand the algorithms, what informs them, and how they impact what users see on Twitter, the company claims. Similarly, algorithmic choice will give people more input and control in shaping what they want Twitter to be for them; the team is currently in the early stages of exploring this.