Platform APIs weeknotes, 22 April 2020

What is it?

The platform API project is part of an effort to separate the end-user interfaces, business logic and data of the various databases used by Hackney Council to deliver services to residents. We will do this in a way that enables us to provide a better experience to residents and staff, make changes to how we deliver services more quickly and at a lower cost, and have a consistent version of the truth for our people and property data. 

What have we done this week?

Over the last week we’ve analysed the seven main data sources:

  • Universal Housing (housing)
  • The Council’s “Ask for help” service (COVID-19 support)
  • NHS Shield info, provided to councils by GOV.UK (COVID-19 support for the vulnerable)
  • Academy (council tax and benefits)
  • Comino (social care documents)
  • Mosaic (social care caseworking)
  • London local property gazetteer (unique property info)

We focused on personal details – name, date of birth, contact details, NHS and National Insurance numbers.
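As a rough illustration of what we were comparing across the seven sources, the common set of personal details looks something like the sketch below. The field names and which fields are optional are our own illustration, not an agreed data model.

```typescript
// Hypothetical sketch of the personal-details fields compared across sources.
// Field names and optionality are illustrative, not the final specification.
interface PersonalDetails {
  firstName: string;
  surname: string;
  dateOfBirth: string;              // ISO 8601 date, e.g. "1980-04-22"
  contactDetails: {
    address?: string;
    phone?: string;
    email?: string;
  };
  nhsNumber?: string;               // not held by every source
  nationalInsuranceNumber?: string; // not held by every source
}
```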

Some of the data sets were fairly straightforward, but we found that some systems contain duplicates. This was mostly around housing information: some tenants may make an application through two different channels (e.g. in person and online), though these duplicates are mostly caught during the application process. Our housing database is also tenancy-centric rather than person-centric, so one person will appear against each tenancy they’ve had, even in the same property (see the sketch below). We already knew this from previous work, and it’s not likely to be a problem.
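To show what we mean by tenancy-centric versus person-centric, here is a minimal sketch. These types are our own illustration, not the real Universal Housing schema.

```typescript
// Illustrative types only, not the actual Universal Housing schema.

// Tenancy-centric: the person's details sit on every tenancy record,
// so a resident with three tenancies appears three times.
interface TenancyRecord {
  tenancyRef: string;
  propertyRef: string;
  tenantName: string;
  tenantDateOfBirth: string;
}

// Person-centric: one person record, with each tenancy referring back to it.
interface PersonRecord {
  personId: string;
  name: string;
  dateOfBirth: string;
}

interface Tenancy {
  tenancyRef: string;
  propertyRef: string;
  personId: string; // points at the single PersonRecord
}
```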

The social care database Mosaic was much more complex due to the sheer number of tables – over 2500 – and again there is a degree of duplication, as it’s possible to enter the same person twice. However, the information we needed to pull from it was fairly simple to identify and analyse. Multiple sources also gave us some vulnerability indicators, which will be useful bearing in mind our end goal of supporting vulnerable residents through the COVID-19 pandemic. 

We ended our week with a workshop to compare notes and determine the way forward. We agreed that a single API for these seven sources would be very complex and unwieldy; if any of the data models changed, the entire API could be affected. Obviously scaling this up to all the other databases (26 in total) would be unrealistic. 

We therefore agreed that the way forward would be to have individual platform APIs for each data source, queried by a separate platform API that consolidates the data. When data is returned, the source will be made apparent to the user; this is similar to what is currently done for Single View. Ultimately, we would have a cleansed database that removes duplicates and matches separate records, but this is some way off yet.
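A very rough sketch of that shape is below: a consolidating API queries each per-source platform API and returns the results tagged with where they came from, leaving deduplication to a later cleansing stage. The endpoint URLs, query parameter and response format here are all illustrative assumptions, not the agreed design.

```typescript
// Illustrative sketch only: the per-source API URLs, query parameters and
// response shape are assumptions, not the agreed specification.

interface SourcedRecord {
  source: string;                    // e.g. "universal-housing", "mosaic"
  person: Record<string, unknown>;   // the personal details as that source holds them
}

// The per-source platform APIs the consolidating API would call.
const SOURCE_APIS: Record<string, string> = {
  "universal-housing": "https://example.hackney.gov.uk/housing/api/v1/residents",
  "mosaic": "https://example.hackney.gov.uk/mosaic/api/v1/residents",
  // ...one entry per data source
};

// Query every source API in parallel and tag each match with its origin,
// rather than merging records at this stage.
async function searchResidents(name: string): Promise<SourcedRecord[]> {
  const queries = Object.entries(SOURCE_APIS).map(async ([source, baseUrl]) => {
    const response = await fetch(`${baseUrl}?name=${encodeURIComponent(name)}`);
    if (!response.ok) return []; // one failing source shouldn't sink the whole query
    const people = (await response.json()) as Record<string, unknown>[];
    return people.map((person) => ({ source, person }));
  });
  return (await Promise.all(queries)).flat();
}
```

One advantage of this shape is that a change to a single data model only affects its own platform API and the consolidating layer's mapping for it, rather than the entire API surface.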

We are confident that this approach will provide the repeatable framework that we want for the future. We’ve got one more week of building the data models and specifications for each API, and more work is needed on vulnerability data due to its complexity. Next week a team from Made Tech joins us to start the build, and we should hit the ground running.
