How do you build AI responsibly?
AI presents unique challenges to developing and deploying technology responsibly. A design orientation can help ground the complexity of modern AI systems in a way that supports meaningful transparency and social responsibility.
That’s why we’ve developed the AI Design Alignment Analysis Framework.
This framework consists of three lenses to analyze important elements of AI systems:
- Technical function: what does the system actually do?
- Communicated function: what do developers or deployers say it does?
- Perceived function: what do users of the system believe it does?
You can use the framework to:
- identify misalignment in a given system
- audit existing systems
- guide the responsible development or deployment of new systems.
Read the ebook to dive into the three lenses: what each one means and how it fits into the overarching framework we’re proposing.
About the author
Jesse McCrosky, Head of Sustainability and Social Change & Principal Data Scientist
Jesse is Thoughtworks’ Head of Sustainability and Social Change for Finland and a Principal Data Scientist. He has worked with data and statistics since 2009 including with Mozilla, Google, and Statistics Canada. With Thoughtworks, Jesse is helping our clients build socially responsible AI systems, including new solutions for sustainability.
His approach to the intersection of technology and sustainability is broad, spanning the greening of tech, greening by tech, and how technology can support the social alignment needed to tackle the climate emergency.
Jesse lives in Helsinki with his wife and two daughters.
Waitlist for our forthcoming Responsible AI ebook
Be the first to receive our new Responsible AI ebook in your inbox as soon as we launch by filling out the form below.