Society urged to hold companies to account on tech use

As AI spreads, we need to become much more aware of its capacity for good and harm. In my interview with Helen Trinca, an associate editor at The Australian, I share my view on how society can weigh in on responsible technology and shed light on how this is a pressing business need today. Below is a copy of the article which The Australian has kindly given us permission to republish. Read on to learn about how responsible tech benefits everyone.

Society needs to hold companies to account for artificial intelligence products such as ChatGPT when they fail the "responsible tech" test, according to Dr Rebecca Parsons, chief technology officer at global consultancy Thoughtworks.

She says we need a combination of government regulation and consumer and employee market pressure to ensure organisations are aware of the consequences of programs that use algorithms to do anything from vetting job candidates and granting bank loans to writing court reports.

"We've gotten pretty good at (managing) the data breaches, but there are a lot more subtle issues emerging," she says.

"It's one thing if Amazon gives me a recommendation that I don't like," she says. "It's another thing if an AI assistant gives a recommendation to a judge on whether or not I should be let out on parole. As the consequences of getting things wrong increase we have to be more conscious."

 

AI programs that advise judges are already in use in the US, and judges have ruled that defendants can't interrogate the basis of a recommendation because it is the intellectual property of the company that created the program. Says Parsons: "That flies in the face of our constitutional right to face your accuser."

Thoughtworks recently sponsored an MIT Technology Review Insights report into responsible tech, which found that 55 per cent of Australian executives believe technology should be used without causing harm to external or internal stakeholders.

Globally, 67 per cent of respondents said they already had some level of responsible technology methodologies and guidelines in place.

"Seventy three per cent actually think it is not just a buzzword, it's actually important," Parsons says. "They look at that importance from the perspective of reputational risk and brand, and interest from investors, attractiveness as an employer, particularly among Gen Zs and Millennials, as well as and broader market acceptance of their products."

 

Parsons defines the move to responsible tech as the "active consideration of values, unintended consequences and negative impact of technology" and argues that "we can make our technology more responsible if we believe we can, and we insist we do".

She points to ethical issues around banks assessing credit card applications using machine learning and AI: "Everybody knows it's wrong to use race or ethnic origin in making that kind of determination, but you can have proxy indicators (in your data) that would tend to indicate this person is likely to be African American, and therefore, we are going to downgrade the score."

In a well-publicised exercise, Amazon tried to use a machine learning model to filter job applications but found that "because their system had been so biased in the past against women, they couldn't train it in such a way that it didn't discriminate against women. People need to take into account, when they're thinking about machine learning models, that the algorithms' job is to reflect the patterns that exist in historical data. If there's bias in the historical data, the model is going to replicate that bias."
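
Parsons' point is easy to demonstrate. Below is a minimal sketch on invented, synthetic data (assuming NumPy and scikit-learn; nothing here comes from the Amazon system itself): a classifier trained on historical hiring decisions that favoured men learns to reproduce that bias in its own predictions.

```python
# Sketch: a model trained on biased decisions replicates the bias.
# All data is synthetic and invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)       # 0 = female, 1 = male (synthetic)
skill = rng.normal(0, 1, n)          # the genuinely job-relevant signal
# Historical decisions favoured men regardless of skill.
hired = (skill + 1.5 * gender + rng.normal(0, 0.5, n)) > 1.0

X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)
preds = model.predict(X)

print("predicted hire rate, women:", preds[gender == 0].mean())
print("predicted hire rate, men:  ", preds[gender == 1].mean())
```

The model is never told to discriminate; it simply learns that gender predicts the historical outcome, which is exactly the replication Parsons describes.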

ChatGPT is an AI chatbot that can communicate with humans through messages. Picture: iStock

Parsons says ChatGPT is a significant improvement on earlier systems in its ability to generate text.

"But the problem is that what it is doing is assigning a probability to what the next word should be based on the words that have come before," she says. "It has no idea what the words are.

 

"It will very confidently assert that if you drive two cars, from city A to city B, instead of one car, you will get there in half the time. But it has no idea of what a car is and it has no idea what it means to travel from point A to point B."


Parsons says companies can use auditing and analysis programs to check if their data is skewed.

"We have to acknowledge that these problems exist, and we have to be more honest with ourselves as a society, on what the limits of some of these things are, and what are the societal pressures as well, what are the forms that can keep up with the speed of technological advance, and allow civil society to weigh in."


She admits, however, that ethics and values differ across borders and societies, and people are not always consistent: "I remember reading a few years ago about self-driving cars. An automobile manufacturer did a survey, and asked: do you think auto manufacturers should program their self-driving cars to prioritise the life of the pedestrian or the life of the driver?

"Most people said they should sell cars that prioritise the life of the pedestrian. But when they asked people what they would buy, people said they would buy the car that prioritised the life of the driver. So what (is the company) supposed to do?"

 

Parsons says that as AI spreads, we need to become much more aware of its capacity for good and harm: "ChatGPT is a great example. It can do some amazing things, and we need to celebrate that and we need to talk about the kinds of problems we can now solve that were hard to solve before. But we also have to recognise it's going to give stupid answers, because it doesn't really understand and that there are risks in that."

Want to know more about responsible tech?