Responsible tech: a critical consideration
Building ethical guardrails
As technology has become more deeply interwoven into everyday life, the possibility of harm — both intentional and inadvertent — becomes more acute. The ongoing debate about the far-reaching implications of generative AI (GenAI) is just one example of the growing focus on the potential fallout from technology solutions, whether in the form of misinformation, excessive carbon emissions or the exclusion of certain groups. According to our research on what consumers want from GenAI, 93% of those surveyed say businesses that fail to incorporate responsible and ethical thinking risk detrimental impacts.
Organizations must be prepared for their technology practices to come under more scrutiny and think through the ethical ramifications of their technology choices — not just for end-users, but for society as a whole.
Responsible technology ensures consideration for all stakeholders, as well as guardrails around privacy, security and sustainability, are firmly embedded in the organization’s technology approach.
As leaders we must recognize that we often struggle to accurately predict the consequences of our technology choices. More often than not the negative effects of technology are unintentional — but that makes them no less harmful. Practicing responsible technology is a matter of broadening our perspectives and taking advantage of emerging tools and techniques that can support enterprises on their responsible tech journey, from secure software to privacy-first design.
Responsible tech is not just about being aware of what could happen as a result of our well-intentioned actions. It’s about being fully engaged with the now — by constantly reevaluating who and what we’re protecting and how we’re doing it; we’re never done.
Signals
- New resources to define and provide education on responsible technology practices, such as the responsible technology playbook developed jointly by Thoughtworks and the United Nations, as well as the Social and Ethical Responsibilities of Computing curriculum developed by MIT, available via the institution’s OpenCourseWare platform.
- Regulatory and policy initiatives designed to mandate more considered approaches to technology, such as the US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, India’s Digital Personal Data Protection Act 2023, and the EU’s proposal for harmonized regulation on AI and connected impact assessment.
- The emergence of alliances focused on developing and promulgating sustainable technology practices. The Green Software Foundation, which has developed training and code for less carbon-intensive software and methodologies to calculate the emissions associated with technology, is one prominent example.
- The birth of investment funds targeting responsible technology companies and solutions, such as Mozilla Ventures, which is channeling $35 million to early-stage startups working in privacy, decentralizing digital power and ethical AI.
- An uptick in firms promoting responsible technology principles and credentials, including giants like IBM, PwC and Salesforce.
Trends to watch
Adopt
- Also known as self-sovereign identity, decentralized identity (DiD) is an open-standards-based identity architecture that uses self-owned, independent digital IDs and verifiable credentials to transmit trusted data. Although not dependent on blockchains, many current implementations are deployed on them or other forms of distributed ledger technology, and use public/private key cryptography to protect privacy and secure online interactions.
- More granular access controls for data, such as policy-based (PBAC) or attribute-based (ABAC) access control, which can apply more contextual elements when deciding who has access to data.
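To make the contrast with role-based checks concrete, here is a minimal sketch of an attribute-based decision. The policy, attribute names and thresholds are all illustrative assumptions, not a reference to any particular ABAC product:

```python
# Hypothetical ABAC sketch: the decision combines attributes of the
# subject, the resource and the request context, rather than a static role.

def is_access_allowed(subject: dict, resource: dict, context: dict) -> bool:
    """Illustrative policy: analytics staff may read non-restricted records,
    but only from the corporate network and during business hours."""
    if subject.get("department") != "analytics":
        return False
    if resource.get("classification") == "restricted":
        return False
    if context.get("network") != "corporate":
        return False
    return 9 <= context.get("hour", 0) < 17  # contextual, time-of-day condition

allowed = is_access_allowed(
    {"department": "analytics"},
    {"classification": "internal"},
    {"network": "corporate", "hour": 10},
)
```

The same request made outside business hours, or against a restricted record, would be denied — the context, not just the user's role, drives the outcome.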
Analyze
- Government regulation and guidance on the use of AI, intended to ensure the responsible use of AI systems and to manage their consequences. This includes monitoring, compliance and good practice.
- Combines AI tools and techniques with behavioral and management sciences to upskill and amplify decision makers and decision making across a variety of complex problems, from scenario planning to operations research.
- AI-powered virtual assistants and non-player characters that recreate human interaction within the metaverse.
- Design of user interfaces and prompts that help people understand the environmental consequences of the choices they make. Examples include an airline website displaying carbon emissions for flights or a mapping tool showing the carbon output for driving a particular route.
- Systems that monitor metrics across complex distributed systems and take corrective action if a problem is detected. They are often used for security, but increasingly also for resilience and recovery in the face of an outage.
- An emerging set of techniques to certify the provenance of data and to govern its use across an organization. This could prove transformative in the effort to track and enhance progress towards sustainability targets.
Anticipate
- These are attacks on (or using) machine learning systems. Attackers may tamper with training data or identify specific inputs that a model classifies poorly to deliberately create undesired outcomes.
- A collective term for systems and devices that can recognize, interpret, process, simulate and respond to human emotions.
- Use of probabilistic states of photons, rather than binary ones and zeros, to run algorithms. Although proven to work in specific problem spaces, quantum computing has yet to scale to broadly useful applications.
- Tools and techniques are emerging that support incorporating responsible tech into software delivery processes, primarily by actively seeking out under-represented perspectives; examples include Tarot Cards of Tech, Consequence Scanning and Agile Threat Modeling.
- A closed economic system where raw materials and products are constantly shared so that they lose as little value as possible. Technology that supports this includes reusable services, traceability, IoT and data mining.
- Most terms of service (TOS) or end-user license agreements (EULAs) are written in impenetrable legalese that makes them difficult to understand for people without a law background. Understandable consent seeks to reverse this pattern, with easy-to-understand terms and clear descriptions of how customers' data will be used.
- An Artificial General Intelligence (AGI) has broad capabilities across a range of intellectual tasks, and is often compared to human-level intelligence. This contrasts with today's "narrow" AI which can be remarkable, but only for very specific tasks.
- A data architecture style where individuals control their own data in a decentralized manner, allowing access on a per-usage basis (for example, Solid PODs).
- Forms of cryptography created in response to technological or societal challenges. Examples include quantum-resistant encryption algorithms, confidential computing with specialized hardware secure enclaves, homomorphic encryption allowing computation to occur on the data while it is still encrypted, and energy efficient cryptography.
- Machine learning algorithms adapted and executed on a quantum computing engine, generally used to analyze classical (non-quantum) data.
The opportunities
By getting ahead of the curve on this lens, organizations can:
- Prevent reputational damage among customers, talent and investors when technology-driven ethical lapses enter the public sphere. Beyond mitigating harm, an effective responsible technology practice can pay dividends in terms of customer and talent attraction and retention. One recent survey of millennial and Gen Z workers, for example, found they place high priority on employers being positive community actors and protecting customer data.
- Avoid regulatory scrutiny or sanctions, such as those faced by Apple when apparent bias in the algorithm that sets Apple Card spending limits triggered investigations into the company’s use of AI.
- Reduce the likelihood of data breaches or misuse. Cases such as the massive customer data theft at retailer Target and, more recently, Meta’s apparent violations of EU data regulations have proven these incidents come with punishing costs that can drag on for years.
- Generate positive environmental outcomes. Efforts to measure and reduce the carbon intensity of computing and cloud usage through tools like Thoughtworks’ open-source Cloud Carbon Footprint (CCF) open the door to aligning technology with the organization’s overall sustainability strategy and the path to net zero.
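Estimates of this kind typically combine energy use, datacenter overhead and grid carbon intensity. The sketch below shows that shape of calculation with made-up coefficients; it is not CCF's published methodology or values:

```python
# Sketch of a cloud carbon estimate (coefficients are illustrative):
# emissions = energy used x datacenter overhead (PUE) x grid carbon intensity.

def cloud_emissions_kg(kwh: float, pue: float, grid_kg_per_kwh: float) -> float:
    """Rough operational-emissions estimate for a cloud workload."""
    return kwh * pue * grid_kg_per_kwh

# e.g. 1,000 kWh of compute, PUE of 1.2, grid at 0.4 kg CO2e/kWh
estimate = cloud_emissions_kg(1000, 1.2, 0.4)  # roughly 480 kg CO2e
```

Even a rough model like this lets teams compare regions and workloads, which is the first step toward aligning cloud usage with a net-zero strategy.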
What we’ve done
Responsible tech playbook with the UN
In line with the UN Secretary-General’s Strategy On New Technologies, the United Nations Secretariat worked together with Thoughtworks to provide guidance on ensuring inclusivity, awareness of bias, transparency and the mitigation of negative unintended consequences in examining emerging technologies, including generative artificial intelligence (GenAI). Following a series of interviews and workshops with a wide range of United Nations staff, the Thoughtworks and UN team developed a framework and set of approaches for the responsible creation and management of technology systems and products.
Actionable advice
Things to do (Adopt)
- Treat responsible tech practices as a cross-functional requirement. As the title of this lens suggests, ethical considerations are critical for all organizations.
- Continuously update technology planning and processes to incorporate techniques and exercises that help map out the broader consequences of solutions you apply or develop — for example by involving underrepresented groups in design and testing, or simulating breaches that show how data could be misused. Make these techniques part of every process.
- Establish clear guardrails and policies governing the use of AI and ensure these are communicated not only to technologists, but to other parts of the organization where more people will be experimenting with AI tools in their day-to-day roles.
- Adopt secure software delivery practices, such as making secure development a collective responsibility, producing clean, transparent and easily maintainable code, and continuous testing.
- Examine your software development processes and tools to understand where you can make more sustainable decisions. Understand the cost/benefit trade-offs of green software engineering techniques.
- Be a good consumer by taking steps to understand your partners’ and suppliers’ stances on responsible technology, and making efforts to engage and support organizations that demonstrate commitment to ethical technology usage.
Things to consider (Analyze)
- Developing trustworthy data sources by examining the provenance of information, gathering data where possible from providers that have been vetted, and forging partnerships with trusted organizations in your space that govern data-sharing and exchange.
- Constantly considering changes to what constitutes responsible technology. Technology is quickly evolving; the problematic activities of tomorrow may not even be possible with the technology of today.
- Adopting a code of ethics for software development, either by developing principles that are customized to your organization, or by building on or promoting pioneering standards like the ACM/IEEE-CS Software Engineering Code.
- Utilizing green software development techniques such as implementing real-time power consumption monitoring to keep emissions to the minimum viable level, optimizing infrastructure and algorithms and carefully selecting both the location and timing of computation.
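Selecting the timing of computation can be as simple as consulting a grid-intensity forecast before running deferrable work. The sketch below uses invented forecast numbers to illustrate the decision:

```python
# Illustrative carbon-aware scheduling: given a forecast of grid carbon
# intensity per hour (values are made up), run a deferrable batch job
# in the cleanest available slot.

forecast_g_per_kwh = {9: 420, 12: 310, 15: 180, 21: 260}  # hypothetical forecast

def cleanest_hour(forecast: dict) -> int:
    """Pick the hour with the lowest forecast grid carbon intensity."""
    return min(forecast, key=forecast.get)

run_at = cleanest_hour(forecast_g_per_kwh)  # 15, the lowest-intensity slot
```

The same shift-in-time idea extends to shift-in-place: running the job in a region whose grid is cleaner, when data residency rules allow it.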
Things to watch for (Anticipate)
- Evolving opportunities and threats from developments in AI. As the frontiers of what is possible for AI to create — or manipulate — rapidly expand, it promises to make significant contributions to everything from market research to product development. However, it will also vastly accelerate the scale and reach of destructive forces like deepfakes and misinformation campaigns. Ensure your organization remains cognizant of and prepared for the new dilemmas AI will present even as you take advantage of its capabilities.
- Evolving regulations. You should expect to see regulatory changes across the entire gamut of responsible technology areas. Educate your compliance organizations about new regulatory bodies or agencies which need to be monitored and potentially engaged with across the broad areas we’ve covered here.