
Three lenses to help design better, socially responsible AI

by Jesse McCrosky

How do you build AI responsibly?

 

AI presents unique challenges to developing and deploying technology responsibly. A design orientation can help ground the complexity of modern AI systems in a way that supports meaningful transparency and social responsibility.

 

That’s why we’ve developed the AI Design Alignment Analysis Framework.

This framework consists of three lenses to analyze important elements of AI systems:

 

  1. Technical function: what does the system actually do?
  2. Communicated function: what do developers or deployers say it does?
  3. Perceived function: what do users of the system believe it does?

 

You can use the framework to:

 

  • help identify misalignment in a given system
  • audit existing systems
  • guide the development or deployment of new systems in a responsible manner

 

Read the ebook now to explore the three lenses: what each one means and how it fits into the overarching framework we’re proposing.


About the author

Jesse McCrosky, Head of Sustainability and Social Change & Principal Data Scientist

Jesse is Thoughtworks’ Head of Sustainability and Social Change for Finland and a Principal Data Scientist. He has worked with data and statistics since 2009, including with Mozilla, Google, and Statistics Canada. At Thoughtworks, Jesse helps clients build socially responsible AI systems, including new solutions for sustainability.

 

His approach to the intersection of tech and sustainability is broad, including greening-of-tech, greening-by-tech, and how technology can support the social alignment needed to tackle the climate emergency.

 

Jesse lives in Helsinki with his wife and two daughters.