Written by Agnes Stenbom
Responsible Data and AI Specialist, Stockholm
Published 2020-06-16

How Schibsted uses artificial intelligence

Schibsted is exploring artificial intelligence – AI – for an increasing number of products and services. In a recent webinar, we gave viewers a sneak peek into our AI operations and shared our thoughts on the ethical dilemmas that accompany these technologies.


Schibsted recently hosted a webinar on artificial intelligence (AI) in practice. During the session, Sven Størmer Thaulow (Chief Data and Technology Officer), Ingvild Næss (Chief Privacy and Data Trends Officer) and I, Agnes Stenbom (Responsible Data and AI Specialist), shared insights into what we do with AI today and how one might go about managing the many dilemmas these new technological opportunities raise.

Here is the recording of the AI webinar


Getting the basics right

Data and AI are completely interrelated. Without data, AI cannot exist. With poorly managed data, AI will not thrive. This is why we have an extensive data strategy in place at Schibsted to ensure that we have high-quality data that we can use in a lawful and responsible manner.

During the webinar, Ingvild highlighted privacy by design as a key aspect of working with AI in a lawful and responsible manner. Even though the key principles following from the GDPR are clear, many questions arise as to how, for instance, data minimization should be ensured in practice. In Norway, the Data Protection Authority recently launched a regulatory sandbox that focuses on AI, and we at Schibsted have already expressed our interest in taking part in this important initiative.

Practical examples

At Schibsted we work with various aspects of machine learning, which is a subset of the wider term AI. We have machine learning services in internal processes as well as user-facing products and applications across our ecosystem. During the webinar, Sven shared a few concrete examples:

Ad Category Suggestions

We are using computer vision to identify which category ads on our marketplaces should belong to. If a user uploads a picture of a chair, we use AI to recognize that the ad belongs in the category ‘furniture’ and should be tagged with ‘chair’. This makes the ad insertion process smoother for the user, and it also improves the overall quality of the service, since more ads end up in the correct category.
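
To make the idea concrete, here is a minimal sketch, not Schibsted's actual pipeline: a generic pretrained image classifier (torchvision's ResNet-50 in this example) predicts a label for the uploaded photo, and a small, hypothetical lookup table maps that label to a marketplace category and tag.

```python
# Illustrative sketch only, assuming a generic pretrained classifier;
# Schibsted's production system will differ.
import torch
from torchvision import models
from PIL import Image

# Hypothetical mapping from classifier labels to marketplace categories and tags.
LABEL_TO_CATEGORY = {
    "rocking chair": ("furniture", "chair"),
    "folding chair": ("furniture", "chair"),
    "dining table": ("furniture", "table"),
    "mountain bike": ("sports & outdoors", "bicycle"),
}

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def suggest_category(image_path: str):
    """Return a (category, tag) suggestion for an uploaded ad photo, or None."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)        # shape: [1, 3, H, W]
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)
    label = weights.meta["categories"][int(probs.argmax())]
    return LABEL_TO_CATEGORY.get(label)           # None -> let the user choose

print(suggest_category("uploaded_ad_photo.jpg"))
```

In a real marketplace the mapping would of course be learned or curated against the site's own taxonomy rather than hard-coded.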

Content Moderation

To further protect our users’ safety, we support our human content moderation team with computer vision solutions that flag sexually explicit imagery as well as other malicious or fraudulent content.
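
The value lies in combining the model with human judgement. The sketch below illustrates one common human-in-the-loop pattern under that assumption; the thresholds and the scoring function are hypothetical placeholders, not Schibsted's production setup.

```python
# Illustrative triage sketch: route a model's "violation" score to
# auto-removal, human review, or approval. All values are hypothetical.
from typing import Callable

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain policy violation
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain cases go to the moderation team

def triage(image_bytes: bytes, violation_score: Callable[[bytes], float]) -> str:
    """Decide what to do with an uploaded image given a classifier's score."""
    score = violation_score(image_bytes)   # e.g. probability of explicit content
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "approve"

# A dummy scorer stands in for the real computer vision model.
print(triage(b"...image bytes...", lambda img: 0.72))  # -> "human_review"
```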

Editorial Insights

The Schibsted newspaper Bergens Tidende (BT) is using computer vision to track who is depicted in the imagery on its site. By estimating the age and gender of the faces used in an article’s imagery, BT’s application of computer vision enables insights into how its news coverage relates to the demographics of its audience.
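
Conceptually, such a pipeline detects the faces in each article's images, estimates age and gender for every face, and aggregates the results per article. The sketch below shows only the aggregation step, with dummy per-face estimates standing in for the output of BT's actual model.

```python
# Sketch of the aggregation step only; face detection and the age/gender
# model itself are assumed and represented by dummy values here.
from collections import Counter
from dataclasses import dataclass
from statistics import mean

@dataclass
class FaceEstimate:
    age: float    # age estimated by a computer vision model
    gender: str   # gender label estimated by the same model

def article_demographics(faces: list[FaceEstimate]) -> dict:
    """Summarize who is depicted in one article's imagery."""
    if not faces:
        return {"faces": 0}
    return {
        "faces": len(faces),
        "mean_age": round(mean(f.age for f in faces), 1),
        "gender_counts": dict(Counter(f.gender for f in faces)),
    }

# Dummy estimates for one article with two detected faces.
print(article_demographics([FaceEstimate(34, "female"), FaceEstimate(58, "male")]))
```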

Distribution Route Optimization

Distribution Innovation, a company co-owned by Schibsted, employs AI to optimize delivery routes and ensure that the most efficient path is taken when delivering parcels and newspapers. This is particularly important for deliveries made by car and truck, as an optimal route minimizes fuel use and reduces CO2 emissions.
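
As a toy illustration of the underlying idea, the sketch below orders delivery stops with a simple nearest-neighbour heuristic; Distribution Innovation's actual system is of course far more sophisticated than this.

```python
# Toy route optimization: always drive to the closest remaining stop.
# A real system handles time windows, traffic, vehicle capacity and more.
import math

def nearest_neighbour_route(depot, stops):
    """Order stops greedily by distance, starting from the depot."""
    route, remaining, current = [depot], list(stops), depot
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

depot = (0.0, 0.0)
stops = [(2.0, 3.0), (5.0, 1.0), (1.0, 1.0), (4.0, 4.0)]
print(nearest_neighbour_route(depot, stops))
```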

Print Prediction

Every day, Schibsted newspapers are sent from our distribution centres to various retailers such as 7-Eleven. By predicting how many papers a specific store will sell on a given day, machine learning helps us decide how many papers to print and distribute to each store, so that we avoid selling out while minimizing waste.
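
As a deliberately simple stand-in for the actual machine learning model, the sketch below forecasts one store's demand from its recent same-weekday sales plus a small buffer, illustrating the trade-off between sell-out risk and waste.

```python
# Baseline sketch only: a production model would use many more signals
# (weekday, season, weather, front-page content, etc.).
from statistics import mean

def papers_to_send(recent_same_weekday_sales: list[int], buffer: float = 0.1) -> int:
    """Forecast copies for one store as its recent average sales plus a margin."""
    forecast = mean(recent_same_weekday_sales)
    return round(forecast * (1 + buffer))

# Sales from the last four Mondays for one hypothetical store.
print(papers_to_send([42, 38, 45, 40]))  # -> 45
```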

AI is full of dilemmas

AI gives us incredible new tools for innovation. We need a positive mindset to identify new opportunities and ways to do good through them, but at the same time we need to keep in mind the many inherent dilemmas of AI.

Most new opportunities also come with downsides and/or risks. That does not mean we should avoid them, but we need to find ways to balance the advantages against the downsides, and to mitigate and reduce the risks. With algorithmic tools and services comes a risk of algorithmic bias, that is, of human biases and prejudice being built into the systems. With automation comes a risk of job loss. The list of “AI dilemmas” goes on and on.


A key aspect to consider when discussing these new tools is the fact that data and algorithmic systems like AI are socio-technical. These systems don’t appear out of thin air or exist in a vacuum; they are built, deployed and used by people, within organizations, and within social, political, legal and cultural contexts. We all have a responsibility to reflect on and manage the dilemmas of AI. But how should we go about doing so?

Three pieces of advice to manage AI dilemmas

There are no clear-cut answers as to how to move forward with AI in safe and responsible ways. While the methods will vary depending on the project, organization, industry and so on, I shared some advice that may travel across contexts:

Foster diverse teams

Enrich your team with more perspectives. With different types of eyes on the challenge, you’ll be better equipped to identify both opportunities and risks.

Discuss your systems and datasets

Discuss what goals your systems are optimized for and whether you have chosen the right training data to reach them. Think about whether structures of the past are built into the data you are employing, and discuss whether you want your AI solution to reinforce them. And, of course, make sure that your data management practices are lawful and responsible.

Iterate

Rethinking and developing is an integral part of AI – embracing iteration is a key step in improvement. If you identify downsides, learn from them, and iterate your solution.


Q&A shows interest beyond buzzwords

During the Q&A session of the live webinar, participants sent in questions ranging from team diversity to GDPR and cloud providers. To our delight, the questions showed interest way beyond buzzwords and Hollywood robots. 

As noted by Sven Størmer Thaulow during the webinar, we believe that the best way to develop as an organization within AI is to try these technologies in practical, everyday cases and build our competence by trying and learning together. 

Slido-summary of popular themes during Q&A

———
Stay tuned for future webinars and events on the topic of AI!
