Giving users feedback on their datasets

The problem

The planning data providers service originally consisted of a single tool that allowed LPAs to check the validity of their datasets. Once the data had been checked, LPAs had to complete a series of manual steps to send their dataset to the data management team, who would then get the data onto the platform.

We were asked to develop a second tool to give LPAs feedback on the status of datasets they had already provided. We reviewed an existing prototype designed by Paul Smith, which had not been tested with users.

The organisation dashboard from the original feedback prototype designed by Paul Smith

The tasklist page from the original feedback prototype designed by Paul Smith

We identified the following problems with the current service:

  • because giving feedback to providers is a manual process, improving data can take a long time, with a lot of back and forth
  • because there is no visibility of system status, users do not know what to do
  • some users think they’ve submitted data when they haven’t
  • users don’t know what order to complete tasks in
  • it’s not clear which dataset is being uploaded once users have started the publish flow

Our approach

We held a workshop with the data management team to explore ideas on how to develop the service further to meet user needs.

We came up with the following hypotheses to resolve the problems we had identified:

Hypothesis 1

If we show providers the status of the data they’ve provided

Then they will understand what they need to do to improve that data

Because it will be clear to them what data is missing and what the steps are to improve it

Hypothesis 2

If we signpost users to where they can amend or improve their data

Then they will understand where to go to improve their data

Because they will be directed there

Hypothesis 3

If we show a score for each LPA’s data completion

Then providers will be motivated to improve their score

Because they will not want to be seen to be doing badly, and will understand the value of investing in the platform

Hypothesis 4

If we show a score for each LPA’s data completion

Then low performing LPAs might be disincentivised from providing data

Because they will feel they can’t achieve a higher score due to external circumstances

From this workshop, we developed the idea of a dashboard that would give LPAs feedback on their data and act as a springboard to the tools they would need to submit and improve their data.

First iteration of the feedback tool user flow

Our solution

After presenting these early flows to the team, we decided it would be best to take the work forward into a design sprint, with the aim of designing an end-to-end service.

The second iteration of the feedback flow showing how the dashboard is a launching point for the different tools LPAs need to use