
Interview with Tulsee Doshi, Google Head of Product – Responsible AI & ML Fairness

Artificial intelligence and machine learning are no longer just buzzwords confined to tech circles and research labs. As AI and ML have rapidly entered sectors such as healthcare, agriculture and transportation, they have become very much a part of daily conversation. Along with this, the topic of ethics and fairness in AI and ML models has become a crucial point of discussion globally.

But how do we ensure AI products are inclusive, safe, and accountable? What are the metrics that decide their fairness, and how can companies ensure their products do not come with any biases?

Analytics India Magazine spoke with Tulsee Doshi, Google Head of Product – Responsible AI & ML Fairness, to understand in detail the concerns that surround AI and ML fairness and to dispel the myths that often accompany this contentious topic.

Lowering the barrier to entry for developers is important for embedding ethical considerations into product launches

Machine Learning Fairness is the practice of ensuring that machine-learning-based products work well for every user and do not exclude, stereotype, offend, or cause harm. In her current role, Doshi leads the efforts to ensure that AI products are inclusive, safe, and accountable. This past year, in partnership with Marian Croak, VP of Engineering at Google, she drove the strategy and development of a new centre of excellence to increase both accountability and impact. They brought over 100 researchers and engineers into a single organisation, for which Doshi leads prioritisation, alignment, and product landings.

She has led over 30 launches across Google teams to make the products more inclusive. Many have required novel solutions that have since been published as papers and are now accessible to all developers as part of the Responsible AI Toolkit.

Doshi adds, “Lowering the barrier to entry for developers is a critical step to embedding ethical considerations into product launches, and the toolkit is now used across hundreds of pipelines internally and externally.”
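For readers unfamiliar with what such tooling does in practice, the sketch below illustrates the kind of sliced evaluation these tools help automate: computing a model's accuracy separately for each user group so that performance gaps surface before launch. This is a minimal, hypothetical example in plain Python, not the Responsible AI Toolkit's actual API; the function name and toy data are assumptions for illustration.

```python
import numpy as np

def sliced_accuracy(y_true, y_pred, groups):
    """Hypothetical helper: accuracy computed per group so that gaps surface early.

    y_true, y_pred: 1-D arrays of binary labels and predictions.
    groups: 1-D array of group identifiers for each example.
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        results[g] = float((y_true[mask] == y_pred[mask]).mean())
    return results

# Toy data only: a model that happens to work worse for group "b".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(sliced_accuracy(y_true, y_pred, groups))  # {'a': 0.75, 'b': 0.5}
```

The value of packaging even a check this simple into a shared toolkit is that every pipeline reports the same per-group breakdown by default, rather than each team deciding ad hoc whether to look.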

These leadership roles have taught Doshi a great deal. Some of her learnings include:

Building Responsible AI needs to start with a humble appreciation for the value of lived experience, one that recognises the importance of diverse expertise and perspectives. She has learnt how important it is to meet teams and developers where they are: every person has a different level of experience, empathy, and awareness, and a different set of pressures and trade-offs in their organisation.

Embedding Responsible AI considerations means starting with an understanding of the mental models at play and providing the appropriate resources and guidance to build and grow.

Enter positive experiences, exclude stereotypes

Doshi says the process is ongoing and that a lot still has to be done to ensure positive experiences. She feels that the industry needs to work together and share its learnings to ensure people are not excluded or stereotyped.

She adds, “One example where I believe Google has intentionally invested in building a superior product experience for all users is the latest Pixel camera, with a project called RealTone. In partnership with photography and skin-tone experts, we have worked to build a camera that truly works for people of colour. We know there is still a lot more work to do, so we are investing in deep research with expertise outside of Google to improve our understanding of skin-tone, and how that reflects in our product. In many ways, RealTone exemplifies what I hope we are building into the ethos of Google product development – a commitment and culture of asking “who else?”, building a deep understanding and empathy, and a product experience that truly works for more users.”

How to assess if a model is fair or not?

To be clear, there is no single way to assess the fairness of a machine learning model, especially because what "fair" means depends heavily on the context in which a product is developed and used. Doshi says that for certain types of use cases, a consistent set of metrics can be developed to measure a particular outcome, but these metrics should also be accompanied by user research and testing.
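As a hedged illustration of what a "consistent set of metrics" can look like in code, the sketch below computes two widely used group-fairness measures: the demographic parity difference (gap in positive-prediction rates between groups) and the equal opportunity difference (gap in true positive rates). The function names, groups, and toy data are illustrative assumptions, not a methodology prescribed by Google or by Doshi.

```python
import numpy as np

def demographic_parity_difference(y_pred, groups, g_a, g_b):
    """Difference in positive-prediction rates between two groups."""
    rate = lambda g: float(np.mean(y_pred[groups == g]))
    return rate(g_a) - rate(g_b)

def equal_opportunity_difference(y_true, y_pred, groups, g_a, g_b):
    """Difference in true positive rates (recall on positives) between two groups."""
    def tpr(g):
        mask = (groups == g) & (y_true == 1)
        return float(np.mean(y_pred[mask])) if mask.any() else float("nan")
    return tpr(g_a) - tpr(g_b)

# Illustrative toy data only.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_difference(y_pred, groups, "a", "b"))         # 0.5
print(equal_opportunity_difference(y_true, y_pred, groups, "a", "b"))  # 0.5
```

Numbers like these are only a starting point: which metric matters, and what gap is acceptable, depends on the product context, which is why Doshi stresses pairing metrics with user research and testing.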

Different approaches for measuring and tackling fairness concerns

Doshi points out how the Responsible AI organisation builds the understanding, tools, and resources that product teams across Google need to build more inclusive products. Its approaches are:

Raising awareness and setting goals for the organisation: They have created the Google AI Principles, which set the stage for the work they want to see across Google. 

Tools, resources, and case studies: As they gain insights into different approaches for building more inclusive products and associated pitfalls, they turn these insights into case studies, guidance, and tools for their teams. They also open-source many of these for the broader community. 

Hands-on support: They have set up a number of channels, including regular office hours, for teams to seek support. In addition, the central Responsible AI org directly partners with teams to problem-solve and conduct applied research, so that fairness concerns are considered as new types of product experiences are developed.

Review forums: As they build up the Responsible AI muscle across the organisation, it becomes important to have checks and balances. For this reason, they have also set up an AI Principles review process.

Bringing in more perspectives: They have created both internal and external ways for teams to seek feedback and to bring employees and experts into the design and development process. 

True fairness comes with embedding it in every stage of the product development cycle

Doshi feels that fairness should always be a part of the conversation when building AI and ML models that will affect people. Fairness is now becoming a larger part of the conversation because AI and ML are growing in their reach and prevalence; tech leaders have seen the negative repercussions that can be caused when AI and ML fail for certain users and communities. It is also becoming a focal point of discussion as users, communities, and leaders have raised their voices to call for the importance of this work. 

She states, “To truly ensure fairness at scale would be to embed it in every stage of the product development cycle from the first conception of an idea to collecting the relevant data, to training the model, to deploying and improving the model. It would mean more diverse teams and perspectives as a part of development, and tools, resources, and guidance for every product team to build upon.”

Fairness impacts an individual’s quality of life

If fairness is not a consideration when developing products, the worst-case scenario can significantly affect an individual’s quality of life, creating meaningful and long-term divides. Doshi points out a real-life example that she ponders over often – the use of AI in the justice system.

Courts in the United States have used ML-based models to determine whether or not someone should get bail, whether they should be released early from a sentence, and more. Biased models could incorrectly leave millions incarcerated and could have disproportionate effects on communities of colour.

She adds, “On a different note, when I was visiting my family this past summer, we discovered that my mother’s iPhone opens with my face. While we look similar, we look different enough; this is not a worst-case scenario because we are family and because she trusts me to respect her privacy. It’s also not a proven case of bias. But, these types of failures pose security risks, and it is important for us to evaluate our products and ensure that these failures don’t happen predominantly for certain groups and communities.”

Personal favourites

Favourite ML/AI algorithm and why?
Something that comes to mind is “Lookout”, an app to support users with low or impaired vision. Recently, I’ve also had the experience of sharing the Lookout app with a family member who has been struggling with reading due to low vision. I’m excited for him to use the feature that scans a newspaper and translates it to text that can be read aloud to him on his phone. This feature is a great example of using ML/AI in a targeted use case that adds significant value to a user when they need it most, and I’ve also been glad to see how thoughtful the team has been and continues to be in thinking about fairness & equity considerations as they design the product.

Top three apps you frequently use
Google Maps, Headspace, Kindle.

Favourite book on ethics and fairness in AI
I love Ruha Benjamin’s Race After Technology – it’s an extremely powerful read that has given me much to think about in terms of my own interactions with technology and the systems around me. I would also be remiss not to plug Google’s People & AI Guidebook – a simple and easy-to-use guide to developing more human-centred AI products.

Favourite podcast in AI and ML
Interestingly, I haven’t engaged with too many podcasts on AI/ML – I find that I mostly use podcasts to engage with the news and latest discussions (e.g. The Daily). But, I just listened to an episode called “The Eliza Effect” on 99% Invisible, which I found to be a fascinating discussion of the history of human interaction with chatbots, and what it means to build relationships with technology.
Source: https://analyticsindiamag.com/interview-with-tulsee-doshi-google-head-of-product-responsible-ai-ml-fairness/