Responsible AI: A marriage of theory and practice

In this blog post we discuss implications related to Artificial Intelligence (AI) by exploring possible areas of concern. The insights we present are drawn from two data collection methods: assessments of academic and policy reports on the potential ethical implications of AI usage in digital media and consumer brands, and internal studies in which these implications were discussed with Schibsted employees.

Going into the woods

Imagine that you are on a wooded path in the dark. The only thing you know about the path is that there is a big rock somewhere ahead of you. You know this because a friend of yours tripped over the rock and broke a leg on this very path a while back. While not happy about the broken leg, your friend said the views on the other side of the rock are absolutely stunning and urged you to try to get past it to enjoy them yourself.

You have a few options when deciding what to do next. You could continue walking straight ahead, as if you did not know about the rock. You could turn around and walk away from the rock (and the view). You could dig the rock up and remove it, or you could attempt to climb it.

Walking straight ahead seems risky. Learning from your friend’s example, you could very well break your leg. Turning around would yield low returns: you won’t get to enjoy the view on the other side, and you won’t learn how to cope the next time there is a big rock in front of you. Digging up the rock would be time consuming, and besides, where would you move it without putting someone else in harm’s way? The option you are left with is finding a safe way to climb the rock.

In this post, we approach Artificial Intelligence (AI) as our metaphorical rock. Progress in AI technology over the past decades has been more rapid than ever before, and today AI is virtually ubiquitous in our everyday lives. As a company serving millions of people in the Nordics with everyday digital services – now partially powered by AI – we see a need for introspection regarding the implications of our developments.

Since 1839, Schibsted has been working to empower people in their daily lives. 181 years ago this was all about publishing newspapers. Today, our empowerment has evolved to include helping people shop second hand, find the best deals, and much more – often through employing new technology. We are currently working with AI in many different ways, from recommending relevant ads to users, to helping human moderators review explicit content, to predicting how many newspapers we should print to minimise our environmental footprint. Our use cases are many and diverse, and their impact can be seen in user-facing as well as internal applications.

As a group with a strong tradition of transforming ourselves and our products through digital developments, we are truly excited by all the potential that AI technologies offer us. At the same time, we are mindful of the potential negative implications these technologies may bring with them, and we believe we have a responsibility to consider and manage these. Or, using our metaphor: we are dedicated to finding a safe way to scale the rocks related to AI, as we believe there are amazing views to be enjoyed on the other side.

Getting to know our rocks

After reviewing publications on the ethical and societal implications of AI systems from leading international research and policy institutions, we decided to more closely explore four themes that may relate to Schibsted’s areas of operations. In the following section we use illustrative examples from our industries to highlight how each of these themes could potentially play out. Neither the themes nor the examples are to be considered exhaustive.

Let’s explore the rocks!

1) Traceability & Interpretability

AI systems have been used to allocate police resources, help judges decide whether people should be released on bail, and allocate hours of assistance to the disabled. When AI systems are employed to make influential and complicated decisions, it is important that we scrutinise how those decisions are reached. But can we do that?

Even if we have agency over what data and instructions we give an AI system, it is often hard to know how it reached a given conclusion. The lack of transparency in AI systems’ rationale is the source of many social and ethical concerns, and the technical challenge of the “black box” of AI is especially problematic when the stakes are high. Although we might not understand the exact inner workings of AI systems, one might imagine that a second-best option would be for the systems to explain themselves in an understandable way. Unfortunately, AI systems are often unable to explain their rationale to humans in a way we can understand, which means decisions reached through them cannot be meaningfully appealed or scrutinised.
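To make the idea of post-hoc explanation a bit more concrete, here is a minimal sketch (with invented data and a toy model, not one of our production systems) of one common partial remedy: measuring how much a trained model’s accuracy drops when each input feature is shuffled. It does not open the black box, but it gives a human-readable hint about which signals the model leans on.

```python
# Minimal sketch with invented data: probing a "black box" model
# with permutation importance (scikit-learn). Illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
feature_names = ["article_length", "source_score", "toxicity_score"]
X = rng.normal(size=(n, 3))  # invented features for a toy "flag this article?" classifier
y = (X[:, 2] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does held-out accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```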

Illustrative example: Anna works at a publishing company and is in charge of an AI system that produces news stories. One day the AI system produces a factually incorrect news story, and in order to make sure this does not happen again, Anna wants to find out where it went wrong. Yet, as the program cannot explain itself to her in a way she understands, she can neither identify the error nor correct the AI system.

If we are unable to account for how AI systems reach their conclusions, we have limited opportunities to apply systems of accountability. This is an issue both from an abstract ethical perspective (why have morality if people cannot be held morally responsible?) and from a legal standpoint.

Illustrative example: Anna’s news-producing AI makes another mistake, but this time it is not spotted before publication. The story’s false information causes mass panic and disorder. Who is responsible for causing it? Is it the publishing company? The people who developed the AI system? Is it Anna? Without knowing where in the system the mistake lies, we have limited means of finding out who should be held accountable for it.

2) Reliability

In the context of AI, we need to distinguish between information and knowledge. While we can often get reliable information from AI systems, things get more challenging once we ask them to take that information and make inferences from it (that is, to give us knowledge). Still, we see a growing number of use cases where AI systems’ influence reaches beyond providing information, and the following two points are variants of AI systems producing unreliable knowledge.

Illustrative example: AI might be able to tell us that X number of applicants for a job have gone to university, but it will struggle to reliably determine which of those applicants is well suited or qualified for the role in question.

AI that aims to produce actionable insights often tries to establish correlations within a data set. If a correlation is found in a sufficiently large amount of data, establishing causation is often not seen as necessary before the insight is acted on. Yet, AI systems routinely pick up biased correlations, or mix up correlation and causation altogether. Ethically, this is problematic because it can lead to biased or unfair outcomes.

Illustrative example: An online marketplace has devised a program where people who are viewed as “most qualified” automatically get shortlisted for job interviews. The shortlist of “most qualified” applicants is produced by an algorithm that screens people’s resumes. After looking at the resumes of historically successful candidates, the system infers that being male is correlated with being qualified. As a result, the AI removes all female applicants from the pool. When the creators of the algorithm try to remedy this by not giving the algorithm applicants’ gender, the AI still manages to detect it through things such as women having been captains of female-only lacrosse teams.
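A minimal sketch of the proxy effect described above, using invented data and a generic scikit-learn model (not the marketplace’s actual system): even when the gender column is withheld, a strongly correlated feature lets the model reproduce much of the same unfair pattern.

```python
# Invented illustration of proxy bias: dropping the protected attribute does not
# help when another feature encodes nearly the same information.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
is_female = rng.integers(0, 2, size=n)  # protected attribute, NOT given to the model
proxy = ((is_female == 1) & (rng.random(n) < 0.9)).astype(int)  # e.g. "women's league captain"
experience_years = rng.normal(5, 2, size=n)

# Historical labels that (unfairly) favoured male candidates.
hired = ((experience_years > 4) & (is_female == 0)).astype(int)

# Train WITHOUT the gender column: only the proxy and a legitimate feature.
X = np.column_stack([experience_years, proxy])
scores = LogisticRegression().fit(X, hired).predict_proba(X)[:, 1]

print("mean predicted score, male candidates:  ", round(scores[is_female == 0].mean(), 2))
print("mean predicted score, female candidates:", round(scores[is_female == 1].mean(), 2))
# The gap persists because the proxy feature leaks the protected attribute.
```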

3) Curation

In modern information societies, AI systems often filter information. Although this is not inherently a negative thing, it can be problematic if it is done in such a way as to filter away information which people would nonetheless benefit from seeing.  

Illustrative example: A news site has a personalisation algorithm which promotes only the most popular stories to users. A serious human rights violation is taking place, but because the algorithm does not deem the story sufficiently popular, it is not read by many people and the violation goes largely unnoticed by the site’s readers.
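As a minimal, invented sketch of the difference such a design choice can make, the snippet below compares a popularity-only ranking with one that blends in an editor-assigned importance signal; all fields and weights are hypothetical.

```python
# Invented sketch: a popularity-only ranker buries an important but unpopular story,
# while a blended score keeps it visible. Fields and weights are illustrative only.
stories = [
    {"title": "Celebrity gossip",       "clicks": 90_000, "editorial_importance": 0.1},
    {"title": "Local sports result",    "clicks": 40_000, "editorial_importance": 0.2},
    {"title": "Human rights violation", "clicks": 3_000,  "editorial_importance": 1.0},
]

def popularity_only(story):
    return story["clicks"]

def blended(story, weight=50_000):
    # Mix raw popularity with an editor-assigned importance signal.
    return story["clicks"] + weight * story["editorial_importance"]

print("popularity only:", [s["title"] for s in sorted(stories, key=popularity_only, reverse=True)])
print("blended ranking:", [s["title"] for s in sorted(stories, key=blended, reverse=True)])
```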

A prevalent concern is that AI systems will curate information to an extent where it will significantly influence and change our view of the world. Any type of curation of information can influence our views of the world and society, and with AI the scale of potential impact is growing.

Illustrative example: Lars regularly reads The Newspaper, which offers a personalised news feed. The combination of his demographics, set preferences, and browsing history renders him an AI-curated front page that offers a view of the world which sets him on a trajectory towards radicalisation and contributes to him committing acts of violence.

4) Marginalisation

To a large extent, AI systems excel by seeing patterns. This is a useful tool, yet the dark side of seeing patterns is that it risks entrenching biases and further disadvantaging those who are already systematically discriminated against.

The fact that AI systems can automate simple tasks which have previously been done by people may lead to greater efficiency, but it also allows any bias inherent in the data or system employed to become increasingly widespread.

Illustrative example: A financial company has employed AI to evaluate loan applications. Whether explicitly programmed to or not, the system routinely refuses loans to people who live in a specific neighborhood. As the AI processes all applications, the bias is present across the board – leaving the neighborhood’s residents with far fewer opportunities for socio-economic mobility.
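One way to surface this kind of pattern is a simple audit of decision logs, sketched below with invented data: compare approval rates across neighborhoods and flag large gaps for investigation (a gap is a signal to look closer, not proof of intent).

```python
# Invented audit sketch: compare loan approval rates across neighborhoods.
from collections import defaultdict

# Hypothetical decision log: (neighborhood, approved)
decisions = [
    ("north", True), ("north", True), ("north", False), ("north", True),
    ("south", False), ("south", False), ("south", True), ("south", False),
]

counts = defaultdict(lambda: [0, 0])  # neighborhood -> [approved, total]
for neighborhood, approved in decisions:
    counts[neighborhood][0] += int(approved)
    counts[neighborhood][1] += 1

rates = {name: approved / total for name, (approved, total) in counts.items()}
print("approval rates:", rates)

# "Four-fifths"-style check: flag any group whose rate is below 80% of the highest.
best = max(rates.values())
print("neighborhoods to investigate:", [name for name, rate in rates.items() if rate < 0.8 * best])
```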

Pattern recognition can also lead to unintentionally offending people and making them feel that their communities are not welcome. When creating products and services which are widely used, it is important to treat all groups with equal respect. Acting or appearing transphobic, ableist, sexist or racist – even unintentionally – is not acceptable.

Time to get the right shoes 

To be vigilant in facing the future, we need to accept that we are going to need tools in order to scale these rocks. What shoes might we require, what rope do we need, and which is the best flashlight?

As a first step, we turned inwards to assess our current equipment. 

We conducted internal studies exploring these themes as relating to Schibsted. In 2019, in-depth interviews were conducted with employees across our organisation, including members of our various technology teams and top management. In 2020, we followed up this qualitative work with a survey asking employees about their opinions on the potential and implications of AI at Schibsted, both in the present and the future. 

We were explicit in our choice to pair our inquiries about potential with questions about possible negative societal and ethical implications, and this yielded some important insights for us. What became clear through our internal studies was that we see exciting opportunities and emerging ethical implications stemming from tightly related domains. For example, we see great potential in using AI to create relevant and engaging user experiences, yet at the same time consider the significant downsides of an all too personalised information society.

What we learned through our studies was that Schibsted practitioners see little risk with our current AI applications. For example, one of the biggest theoretical concerns lies within the field of curation and personalisation. In practice, Schibsted is developing various editorial tools aimed at safeguarding editorial integrity and contributing to an informed society. While the theoretical risk remains, as an organisation we are putting practical safeguards in place against it.

What did become clear through our studies is that our practitioners want to put more focus on risk management and mitigation going forward. As AI systems become increasingly advanced, our practitioners grow more concerned about the scale of their potential negative implications. In order to reap the benefits we have identified (our magnificent view, if you will), our findings show a clear motivation to safely scale the rocks we see lying on the path towards them. This is a big task. Consensus on what is harmful or unwanted is relatively easy to reach (few people want marginalisation or decisions no one can account for). Consensus on what is ethical or desirable, though, is harder.

We have work ahead of us in terms of systematically approaching the possible implications discussed in this post. As a starting point, we aim to keep doing the following as an organisation:

  • Foster diverse teams. We strive to create (tech) teams that challenge and complement each other’s ways of thinking.
  • Discuss our systems and datasets. Is the employed dataset representative of those it is intended to serve? What goals are the systems optimised for? We need to continuously reflect on and discuss our efforts (a minimal sketch of one such dataset check follows after this list).
  • Iterate. We embrace iteration and welcome improvements.  
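As one concrete way to start the dataset discussion above, here is a minimal, hypothetical sketch comparing group shares in a dataset with a reference population; the groups, counts, and reference shares are invented.

```python
# Invented sketch: do group shares in a dataset roughly match a reference population?
dataset_counts = {"18-29": 4_500, "30-49": 3_000, "50+": 500}
reference_share = {"18-29": 0.25, "30-49": 0.35, "50+": 0.40}  # e.g. from census data

total = sum(dataset_counts.values())
for group, count in dataset_counts.items():
    observed = count / total
    expected = reference_share[group]
    flag = "  <-- clearly under-represented" if observed < 0.5 * expected else ""
    print(f"{group}: dataset {observed:.0%} vs reference {expected:.0%}{flag}")
```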

When the car was made available to the public, opportunities and challenges came along with it. These were managed by assigning responsibilities to both those building and those driving cars (regulations, industry standards, driver’s licenses, etc.) in order to create safe streets. While the implications of harmful AI systems aren’t always as immediately visible as, say, the results of a car crash, we believe that we and our industry peers have a big job in front of us to safeguard our digital streets.

AI is a technology with enormous potential, for good and ill. We believe that integrating responsible AI into our practices does not mean immediately abandoning projects which could have negative consequences, but rather pushing ourselves to find the tools we need to move forward safely.

/Agnes Stenbom, Responsible Data & AI Specialist, Schibsted / Industrial PhD Candidate, KTH
& Sidsel Håbjørg Størmer, Philosophy Student, University of Cambridge

Staff from diverse domains of knowledge and practice were involved in this analysis, including but not limited to the fields of philosophy, law, management, and technology. We believe that interdisciplinary teams are essential to understand – and act upon – the opportunities and challenges ahead.

 

References 

Ananny, Mike. “Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness.” Science, Technology, & Human Values 41, no. 1 (2016): 93-117.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.
Brennen, J. Scott, Philip N. Howard, and Rasmus Kleis Nielsen. “An Industry-Led Debate: How UK Media Cover Artificial Intelligence.” Reuters Institute for the Study of Journalism Fact Sheet, 2018.
High-Level Expert Group on Artificial Intelligence. “Ethics Guidelines for Trustworthy AI.” European Commission, 2019.
Goffey, Andrew. “Algorithm.” In Software Studies: A Lexicon, edited by Matthew Fuller, 15-20. Cambridge, MA: MIT Press, 2008.
Kraemer, Felicitas, Kees van Overveld, and Martin Peterson. “Is There an Ethics of Algorithms?” Ethics and Information Technology 13, no. 3 (2011): 251-260.
Mittelstadt, Brent Daniel, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society 3, no. 2 (2016).
Whittaker, Meredith, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, Sarah Myers West, Rashida Richardson, Jason Schultz, and Oscar Schwartz. AI Now Report 2018. New York: AI Now Institute at New York University, 2018.
Whittlestone, Jess, Rune Nyrup, Anna Alexandrova, Kanta Dihal, and Stephen Cave. “Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research.” London: Nuffield Foundation, 2019.