Published 2021-11-29

Introducing the Schibsted FAST Framework

FAST is a new Schibsted-wide framework for risk analysis in AI. The framework provides a common basis for approaching risks in the areas of Fairness, Accountability, Sustainability and Transparency, and it is currently being piloted across our group.

Artificial intelligence (AI) has great potential for a group like Schibsted. But as we have learned in recent years, there are risks related to using these technologies. They could, for example, relate to human biases being encoded into AI systems, outcomes that are hard to explain or understand, or information not reaching citizens in ways that align with our mission.

We can view such risks as metaphorical rocks – challenges that we need to understand and overcome to fully reap the benefits of AI.

In a 2020 blog post, we wrote that “AI is a technology with enormous potential, for good and ill. We believe that integrating responsible AI into our practices does not mean immediately abandoning projects which could have negative consequences, but rather push ourselves to find the necessary tools to safely move forward.”

The new Schibsted-wide FAST framework is such a tool!

The FAST framework provides helpful structures for brands and functions across the diverse Schibsted ecosystem to identify, manage and share risks in AI-powered products and services. Ultimately, the FAST framework will help Schibsted and our brands create great products and services that our users trust, enjoy and find worth paying for.

The FAST framework consists of four elements:

  • Principles guiding the way
  • Questions spurring discussion
  • Structures for sharing learnings
  • A dynamic support system

In this blog post, we outline the structure of the framework to describe the approach we are taking. Hopefully, it can serve as input to discussions about AI risk awareness and management in your organisation. 

The principles

The FAST principles help us navigate our way forward with AI, and we define risk as anything that contradicts them – for example in our models’ performance, training data sets or communication. The principles read as follows:

  • Our AI is fair for all user groups (Fairness)
  • We are readily held accountable for our AI development and make sure that humans are in the loop (Accountability)
  • Our AI development safeguards environmental and social sustainability (Sustainability)
  • Information about our AI development and use is discoverable and understandable for our stakeholders (Transparency)

The principles are the backbone of the FAST framework, but just expressing them is not enough. We need to hold ourselves to them.

The questions

The first step towards assessing if our development or use of AI aligns with the FAST principles is to discuss the four areas. For this, we have a shared bank of questions and discussion exercises for teams across our group to consider. Example questions include:

  • Are you training AI-powered systems on datasets that include people of different races and genders, and persons with disabilities? (Fairness)
  • Who is the primary contact within your team with overall responsibility for the AI application? (Accountability)
  • Have you considered the energy consumption of training and operating your implementation of AI? (Sustainability)
  • Do you have a feasible way of understanding how your AI system comes to a given conclusion? (Transparency)

The FAST questions can be seen as a buffet; the important thing is not that everybody takes big portions of every item, but that they at least see and consider what is on offer.

In this first pilot of the framework, we have shared questions of varying relevance with teams across our group who work with AI in settings such as parcel distribution, personal finance and journalism. Instead of making the full question bank mandatory (e.g. through a checklist), we task our diverse teams with using the questions in ways that make sense for their product or service reality.

One could say that this approach allows us to skip the vegetables and focus only on the dessert table (or to leave the metaphor: that it allows us to avoid the tough questions and instead only ask ourselves the easy ones).

While that is a possible scenario, doing so would not be to our advantage.

It all comes down to why we want to do risk assessment in AI to begin with. Fundamentally, FAST is about building the best possible products and services with the help of AI. Only grabbing items on the dessert table won’t help us achieve that.

Sharing learnings

When we identify a risk, for example in the model performance, training data or design, that is a success – it means that we have found an area for improvement!

When a risk is found, we ask teams to summarize their learnings from working with it and share them in our internal AI community. By openly discussing the risks that we find, we help each other build the best possible products and services for our stakeholders.

We have a shared template to make it as easy as possible for teams to share their work. Below we outline what this includes, using a real example from Schibsted. 

Learnings from a FAST assessment

Context: The contextual ads product provides advertisers with a way to connect with target audiences by matching ads to the contents of the article being viewed (i.e. segmenting by context rather than by user data). As part of the product, we also provide a “topic insights” section, which shows the currently trending topics in Schibsted’s content inventory.
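As a purely illustrative sketch (not Schibsted’s actual matching logic), the core idea of contextual matching can be expressed as picking the ad whose topic profile is closest to the article’s; the topic vectors and ad ids below are hypothetical:

```python
# Illustrative only: match an ad to an article by comparing topic
# profiles, so no user data is involved. Topic vectors are hypothetical.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_ad(article_topics: np.ndarray, ads: dict) -> str:
    """Return the id of the ad whose topic profile best fits the article."""
    return max(ads, key=lambda ad_id: cosine(article_topics, ads[ad_id]))

# Example with three topics, e.g. (finance, sports, travel)
article = np.array([0.7, 0.1, 0.2])
ads = {
    "bank_loan": np.array([0.8, 0.1, 0.1]),
    "sneakers": np.array([0.1, 0.8, 0.1]),
}
print(best_ad(article, ads))  # -> bank_loan
```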

Identified risk: To create these topic insights, our initial method used a Python implementation of Dynamic Topic Models (DTM) with a runtime of 8 hours. This led to long delays whenever errors occurred, as well as significant compute resource consumption, which contradicts our FAST principle of safeguarding environmental sustainability.

Approach to managing it: Since we already train topic models using a very fast Mallet implementation of Latent Dirichlet Allocation (LDA), we decided to scrap DTM altogether and simply reuse our LDA models combined with a simple moving average (MA). Discussions with our stakeholders about how often updates were needed also led us to decide that retraining once every 14 days was enough, reducing energy consumption to 0.07% of what daily training with the initial model would have cost.
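To make the revised approach concrete, here is a minimal sketch of the idea: train a plain LDA model and smooth daily topic prevalence with a simple moving average. It uses gensim’s built-in LdaModel as a stand-in for the Mallet implementation mentioned above, and the corpus, topic count and window size are illustrative assumptions, not our production setup:

```python
# Minimal sketch (not production code): LDA topic model plus a simple
# moving average over daily topic prevalence to surface trending topics.
# gensim's LdaModel stands in for the Mallet LDA implementation.
from gensim.corpora import Dictionary
from gensim.models import LdaModel
import pandas as pd

def train_lda(tokenized_docs, num_topics=50):
    """Train LDA on tokenized articles (a list of token lists)."""
    dictionary = Dictionary(tokenized_docs)
    corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]
    model = LdaModel(corpus=corpus, id2word=dictionary,
                     num_topics=num_topics, passes=5)
    return model, dictionary

def daily_topic_prevalence(model, dictionary, docs_by_day):
    """Average topic weight per day; docs_by_day maps date -> token lists."""
    rows = {}
    for day, docs in sorted(docs_by_day.items()):
        topic_weights = [
            dict(model.get_document_topics(dictionary.doc2bow(doc),
                                           minimum_probability=0.0))
            for doc in docs
        ]
        rows[day] = pd.DataFrame(topic_weights).mean()
    return pd.DataFrame(rows).T  # rows = days, columns = topics

def trending_topics(prevalence, window=7):
    """Smooth daily prevalence with a simple moving average; the model
    itself is only retrained every 14 days, as described above."""
    return prevalence.rolling(window=window, min_periods=1).mean()
```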

Results: Our revised approach has a runtime of 5 minutes, or roughly 1% of the initial approach’s 8 hours, thereby saving both time and energy. By critically reviewing methodology and implementation, we delivered the same user value at vastly reduced cost, energy consumption and CO2 emissions.
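As a back-of-the-envelope check of those figures, assuming energy consumption scales roughly with runtime:

```python
# Sanity check of the reported figures, assuming energy consumption
# scales roughly with runtime (a simplification).
initial = 8 * 60                 # DTM run: 480 minutes, retrained daily
revised = 5                      # LDA + moving average run: 5 minutes

print(revised / initial)         # ~0.0104 -> runtime is ~1% of the initial
print((revised / 14) / initial)  # ~0.0007 -> retraining every 14 days uses
                                 #            ~0.07% of daily DTM training
```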

This example is quite straightforward; it was obvious to the team in question that reducing energy consumption and costs while maintaining the same performance was desirable, and they could manage the risk they had identified. As we move forward, it is important to keep in mind that not all identified risks will be as easily managed; many will likely bring with them tougher dilemmas for us to deal with.

Support system

It is up to teams across our ecosystem to identify, evaluate and manage risks related to their product or service. They know their product the best and should make decisions about it. 

However, if a team has not been able to manage the identified risk, we facilitate connections to established leadership forums in Schibsted to provide the input and support that they need to do so. In this first piloting phase of FAST, we will evaluate the effectiveness of this approach and iterate if/as needed. 

Celebrating diverse approaches

How FAST is put into practice will vary across our many diverse teams working with AI. To us, the important thing is not that we use FAST in the same way across Schibsted, but that we use it in ways that have an impact on our road ahead.

The same goes for timing. We strongly suggest that FAST assessments are done before use cases are taken into production, but an active approach to identifying risks and areas for improvement is critical throughout the product or service lifespan.

We look forward to sharing more learnings as our iterative journey with FAST and AI risk assessments matures!

Sven Størmer Thaulow, Chief Data and Technology Officer
Ingvild Næss, Chief Privacy and Data Trends Officer
Agnes Stenbom, Responsible AI Specialist