Written by David Schneller
Published 2016-09-29

Experimental Apps For Internal Users

How can the development of a mobile app prototype provide insights into user workflows and scenarios? Can it make those users rethink how mobile devices support them?

Context / Introduction

Advertising-related departments at Schibsted use several tools to manage their ad campaigns. Campaigns are monitored to make sure they reach their goals, and at the end a report is generated to show clients how their campaigns performed.

We started a project in Q3 2015 to build a mobile app that could display this ad campaign data for monitoring and analysis. The goal was to discover how mobile devices can support our ad departments and possibly replace computers for some work tasks.

Two larger departments work with this data: Ad Operations sets up, manages and monitors ad campaigns; Ad Sales brings in the orders for ad campaigns, keeps an overview of the progress towards campaign goals and communicates campaign performance results back to the client.


Preparation

We already had experience with ad reporting at Schibsted, having developed a web tool for creating static campaign reports. Interviewing users who worked with those tools about their needs and expectations gave us a base for the mobile app.


Users were asked how they pictured a mobile app supporting their workflow, but they could not think of any ways one would help. We therefore decided to create a prototype that would demonstrate some of the potential.

We targeted Ad Sales as users because it seemed like a mobile app could help them check campaign data when away from their desks, and they could use it as a mediation tool in meetings with colleagues and clients.


Design and Development

We began by asking if the tool could be improved by interacting directly with the data, rather than reading through pages of static data. The user could choose display settings to get the right slice of their data, zooming in to analyse small time frames or zooming out to look at the big picture of a campaign’s success.
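
To make the idea of slicing concrete, here is a minimal sketch of how it could work under the hood: raw impression data points are grouped into buckets of a chosen size, so the same campaign can be read hour by hour or day by day. The data model and function are hypothetical, not the app's actual code.

```kotlin
import java.time.Duration
import java.time.Instant

// Hypothetical data point: impressions recorded for one hour of a campaign.
data class DataPoint(val timestamp: Instant, val impressions: Long)

// Re-bucket the raw points into a chosen resolution, e.g. Duration.ofHours(1)
// to zoom in on a short time frame or Duration.ofDays(1) for the big picture.
fun slice(points: List<DataPoint>, bucket: Duration): List<DataPoint> =
    points
        .groupBy { it.timestamp.epochSecond / bucket.seconds }
        .toSortedMap()
        .map { (index, group) ->
            DataPoint(
                timestamp = Instant.ofEpochSecond(index * bucket.seconds),
                impressions = group.fold(0L) { total, p -> total + p.impressions }
            )
        }
```

Switching the zoom level then becomes a matter of redrawing the chart with a different bucket size, for example slice(points, Duration.ofDays(1)) instead of hourly buckets.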

We had just built up a new mobile app engineering team, so this was a great opportunity to establish the team’s workflow and get everybody familiar with the world of online advertising.


Workshop / Testing / Evaluation

As well as targeting sales users, we held a workshop with Ad Operations to learn more about the lifecycle of an ad campaign. The Ad Operations team’s deep understanding of how campaigns work could provide valuable input on reading and interpreting performance data and campaign states. They also communicated with Ad Sales often, mostly when campaigns underperformed or there were questions about the state of a campaign.

One common case was ads scheduled for display so that the lion’s share of the impressions was left until late in the campaign. It’s not unusual for 70% of the impressions in a week-long campaign to be served on the last day. Such nonlinear behavior often led to worries about underperforming campaigns; displaying campaign performance forecasts could address these concerns and reassure sales users.
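
The effect of such a forecast can be illustrated with a toy calculation; the numbers and the delivery schedule below are invented for illustration and are not the real forecasting logic. A naive linear expectation makes a back-loaded campaign look badly behind, while an expectation that follows the booked delivery schedule shows the same campaign is on track.

```kotlin
fun main() {
    val goal = 1_000_000L   // impressions booked for a week-long campaign
    val served = 300_000L   // impressions actually served after six of seven days
    val daysElapsed = 6
    val daysTotal = 7

    // Naive linear pacing: after 6 of 7 days roughly 86% of the goal is
    // "expected", so the campaign looks far behind.
    val linearExpected = goal * daysElapsed / daysTotal

    // Scheduled pacing: the booked delivery plan serves only 30% of the
    // impressions during the first six days and 70% on the last day.
    val dailyShare = listOf(0.03, 0.05, 0.06, 0.06, 0.05, 0.05, 0.70)
    val scheduledExpected = Math.round(goal * dailyShare.take(daysElapsed).sum())

    println("Served so far:         $served")
    println("Linear expectation:    $linearExpected")    // ~857,000: looks underperforming
    println("Scheduled expectation: $scheduledExpected") // ~300,000: on track
}
```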

We tested three different versions of the app with a small group of sales staff: two working native apps on iOS and Android, and a third click prototype built in Axure RP. The test evaluated how well users understood the apps’ navigation and interactions. We also tested three concepts for slicing the data to see which performed best.

Interviews afterwards showed that the scenarios we designed for didn’t really work as expected. Sales staff usually brought phones and computers to their status meetings to update each other, so carrying tablets as well didn’t make sense. Also, clients rarely saw campaign performance data because they were usually represented by agencies that split campaign budgets over multiple ad providers; the agencies compiled performance metrics themselves from several sources.

However, we discovered other scenarios, such as communication between Ad Operations and Ad Sales. Rapid monitoring of campaigns was also mentioned, which is particularly useful on a smartphone.


What did we learn?

First, we learned a lot about our users: they want campaign monitoring to be quicker and optimized for skimming the current state of the campaigns relevant to them. Second, we now understand how Ad Sales and Ad Operations collaborate and communicate, revealing opportunities to adapt the data visualisation and reduce the need for sales to contact the Ad Operations department. Third, the app could be useful when different departments discuss campaigns and performance.

Showing the prototypes and talking about real use cases highlighted several ways to improve the app to support those cases better. Having an actual prototype helped interviewees think about how the device and app could fit into their workflows, challenging them to rethink how they work and how tools can support them; before the project, this had been difficult to draw out in interviews.

It was an interesting way to learn about the users. In future, we could use lower-fidelity click prototypes on real devices. These might be built in tools like Axure RP Pro, a much lower investment of resources, though it requires UX to be given a head start on the project before Engineering. Also, such prototypes cannot usually make use of production data, which makes the tests more approachable for users.
