For a long time, the established view has been that publishing more open data encourages the proliferation of new applications and services across the public and private sectors, built on common datasets that span local authority boundaries.
Historically in Hackney, we’ve only ever dipped our toes into the world of open data and, to be honest, we don’t know whether there’s enough evidence that simply publishing as much as possible will realise these benefits – we just don’t know enough about the user need to justify a big bang open data offering.
That being said, we’ve been inspired by the ODI, Nesta and others who are keen to encourage greater transparency and common standards in data publication and we have reviewed some of the good practice of other local authorities who’ve experimented with open data portals in recent years. But our HackIT manifesto tells us to think big and act small – we don’t want to dive into the deep end of an expensive open data portal for publishing every non-personal data set we can find. Instead, we want to understand the kind of data users want to see, and work out the most simple and cost effective way of providing that data in a useful way.
Currently, we don’t proactively publish the data that is most commonly requested via Freedom of Information (FOI) requests, which means we spend a significant amount of officer time responding to FOI requests reactively. In his 2018 manifesto, Mayor Glanville committed us to more transparency, and this has helped shape a set of basic principles for experimenting with user-focused open data publication. We’re keen to open ourselves up to some challenge on this and share what we learn as we experiment:
Focus on releasing data commonly requested via FOI – We believe that concentrating our open data offering on the most frequently requested topics will release significant amounts of officer time and provide a better service for residents. Therefore, we will focus on these opportunity areas first.
Use common standards wherever possible – to be valuable for developers and organisations, we need to publish our data in a format that is easy to join across local authority areas. Where a standard doesn’t exist, we will try to match other Councils who are already publishing this data to maximise reusability. We will openly publicise our data structures to encourage reuse by other local authorities.
Automated with a focus on currency of data – to be of maximum value in reducing the FOI burden we face, the data included in our open data products should be as current as possible. To ensure we aren’t just moving the resource problem from FOI administration to open data publication, data updates should be automated and require a minimum amount of officer time to publish.
Adopt a flexible approach to the technology we use to publish our data – we don’t believe in being constrained by an online data portal for publishing open data. We will aim to use the best modern technology and data analytics techniques depending on the data involved to provide a data dashboard, visualisation and full download of the dataset. We are motivated by providing a great user experience that’s easily discoverable first and foremost.
Aim to meet GDS design standards – all of our products will be designed in line with modern, accessible design standards and we will always test our products with our users.
Understand our impact – we will always begin by understanding the baseline volume of FOI requests for the data set in question and monitor over time, the impact of publishing the dataset. We expect to see more exemptions (where an FOI is refused under the grounds that the data is already published) and over time, fewer FOI requests overall. If the data set isn’t reducing FOI demand, we will look for other areas where we can add more value by publishing new data.
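The last principle can be made concrete with a small calculation. This is just a sketch of the kind of measure we have in mind – the function name and inputs are illustrative, not part of any existing reporting tool:

```python
def foi_impact(baseline_per_month, requests_after, exemptions_after):
    """Summarise the effect of publishing a dataset on FOI demand.

    baseline_per_month: average monthly FOI requests before publication.
    requests_after: requests received in a month after publication.
    exemptions_after: of those, how many were refused because the
    data is already published.

    Returns the percentage change in request volume and the exemption rate.
    """
    change_pct = 100.0 * (requests_after - baseline_per_month) / baseline_per_month
    exemption_rate = exemptions_after / requests_after if requests_after else 0.0
    return change_pct, exemption_rate
```

If publishing is working, we’d expect the change percentage to trend negative and the exemption rate to rise in the months immediately after release.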
Our first experiment is with Penalty Charge Notice (PCN) data (that’s parking tickets to you and me…) – it’s one of the most commonly requested datasets and we think publishing this openly will help residents answer their questions in a more timely way and reduce the time we spend responding to FOIs on the topic. We’re experimenting with providing a simple app on the parking pages on our website, which will allow users to extract the type of information from the data that is often asked for in FOI requests. Our great User Research team are helping to keep us focused on delivering something simple and intuitive for residents. We’ll also be trialling a light touch open data portal which will allow us to curate our open data sets in one place on the website. We’ll share more as we develop our MVP over the coming weeks.
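As a sketch of what “use common standards” might mean for the PCN dataset, here’s a minimal record validator. The field names are assumptions for illustration only – they aren’t an agreed cross-authority standard:

```python
# Illustrative shape for a published PCN record. These fields are
# assumptions, not an agreed schema across local authorities.
REQUIRED_FIELDS = {
    "pcn_reference": str,
    "issue_date": str,        # ISO 8601 date, e.g. "2018-07-01"
    "contravention_code": str,
    "street_name": str,
    "outcome": str,           # e.g. "paid", "cancelled", "appealed"
}

def validate_record(record):
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems
```

Publishing the agreed field list alongside the data is what would let another council produce a compatible extract.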
Last week, with lots of pushing from Tom (plus some 5:30am testing) and a hefty amount of support from Phil, Rasit and Lindsay, Farthest Gate deployed the Verification Hub in the live Liberator environment. Tom’s testing proved it hadn’t caused any problems with the permit application process itself but for now VH remains switched off…essentially this means Farthest Gate are lined up to flip the switch to on, just as soon as we iron out some new problems…
The main things we’re trying to solve (beyond the project team’s availability having hit leave season!) are some problems we’ve encountered with the VH API. We had previously been testing against ClearCore v4 but have since been upgraded to v5, which is set up with SSL certificates over HTTPS and requires authentication, which wasn’t the case before. It’s much more secure, but it means we now need to develop our API to call the ClearCore service using credentials.
Unfortunately .NET Core doesn’t currently provide the libraries needed to easily connect to a SOAP web service over HTTPS using credentials. Matt and Farthest Gate confirmed this as an issue and agreed on two options: generating our own SOAP requests, or switching to the .NET Framework. Matt felt there was too much scope for potential issues in generating our own SOAP requests, so he now needs to build a new API project in the .NET Framework and move the code over. This will involve rebuilding, but hopefully not too much rewriting.
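For anyone curious what the rejected option (hand-rolling SOAP requests) involves, here’s a sketch in Python rather than C#: wrap the payload in a SOAP envelope and attach credentials as an HTTP header yourself. The envelope shape is standard SOAP 1.1; the Basic Auth scheme is an assumption – ClearCore’s actual authentication mechanism may differ:

```python
import base64

def soap_request(username, password, body_xml):
    """Build a SOAP 1.1 envelope and auth headers by hand.

    This is the kind of plumbing you take on if the framework won't
    generate authenticated SOAP calls for you - which is why switching
    to the .NET Framework was the safer choice.
    """
    envelope = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        f"<soap:Body>{body_xml}</soap:Body>"
        "</soap:Envelope>"
    )
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    headers = {
        "Content-Type": "text/xml; charset=utf-8",
        "Authorization": f"Basic {token}",
    }
    return headers, envelope
```

Every quirk of escaping, namespaces and fault handling becomes your problem, which is the “scope for potential issues” Matt was worried about.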
At this point we’re hoping Matt will be able to focus on the changes over the next week by which time, Steve will be back with us and able to move housing benefits data into the VH data core to expand out what we’re verifying against. Charlene, our optimistic and very supportive product owner, continues to marshal us towards pressing the on button in the next 2 weeks so we can sit back and watch how it performs IRL.
We’ve had it in the back of our minds to revisit the technology and presentation options for map.Hackney for a while, in particular, to make it more accessible and mobile friendly. The Re-engineering Hackney Content project has provided the impetus we needed to test out a new thematic approach to redesigning map.Hackney, with the added fun of super short timescales (4 weeks) and a very assertive product owner (Susan M-L).
Our project aim: to deliver a web map that enables residents to view topical services that are useful to them; is web friendly and meets GDS accessibility standards.
Last week we:
Developed user stories and reviewed the range of mapping layers we’re currently offering through map.Hackney
Agreed to test an approach where we present a map based on a theme (we’ve selected summer in Hackney) rather than a service by service approach. This means that rather than a map that shows you all the parks and open spaces across all of Hackney and related info, we’ll present a range of layers (water fountains, adventure playgrounds, the Hackney carnival route) meeting a specific theme or activity (having a picnic, playing, going out etc).
Developed a prototype with a new base map, that’s starting to incorporate some layers by theme
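The thematic grouping described above could be modelled very simply: tag each map feature with the themes it belongs to, then filter by theme rather than by service. This is a sketch, assuming GeoJSON-style features with a hypothetical `themes` property:

```python
def layers_for_theme(features, theme):
    """Filter GeoJSON-style features down to one theme.

    Assumes each feature carries a 'themes' list in its properties -
    an illustrative tagging convention, not how map.Hackney stores data.
    """
    return [
        f for f in features
        if theme in f.get("properties", {}).get("themes", [])
    ]
```

A “having a picnic” map then becomes one query over the same underlying layers that a service-by-service view would use.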
This week we’ll be:
Carrying out user research in Hackney Central and some libraries and open spaces to understand what users need from a summer in Hackney map – Wing has helpfully designed a Google Form for us to record feedback
Developing the prototype further with improved design based on UI toolkit
Knowledge transfer with Mo and Carolina providing some advice and support to Sandrine and Marta to develop the design elements of the prototype
Farthest Gate have completed the work to implement VH in front of CI in the test environment and we’ve been carrying out some functional testing. So far, so good – it’s performing as we’d expect (our dummy data is passing and failing where we want it to…see below) and we remain painfully optimistic that we can get it into live by the end of next week…but this is dependent on Farthest Gate deploying to the bridge environment first in the next day or so.
However, Matt has been able to snatch some minutes around the edge of other work to connect our API with ClearCore and we’ve hit some blockers with security certificates. We think this will need a bit of infrastructure support to resolve (as well as time from Matt that we can’t yet guarantee). This is on our ‘must solve’ list – if we can’t resolve it won’t be possible to go live next week.
Thanks are due to Rasit and Phil who have become involved to look at the full CMR for Liberator given the changes we’re making. In effect they’re checking that the changes don’t break anything, that it’s safe to deploy to live and they have sight of all the documentation in case any issues arise.
In the meantime, InfoShare have completed installation of ClearCore 5 in our Live environment. Tom C is now building an internal LLPG gazetteer and changing the VH project configuration so that it’s ready to load data into the Hub. Steve F has Housing Benefit data and has almost finished automating daily delta file updates from Council Tax and Housing Benefit into the ClearCore data hub.
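A daily delta file boils down to comparing today’s snapshot with yesterday’s and keeping only what moved. As a sketch (the `record_id` key and record shapes are assumptions, not the actual Council Tax or Housing Benefit extract format):

```python
def daily_delta(previous, current, key="record_id"):
    """Split two snapshots into added, changed and removed records.

    'key' is an assumed unique identifier per record; only the
    differences need loading into the data hub each day.
    """
    prev = {r[key]: r for r in previous}
    curr = {r[key]: r for r in current}
    added = [r for k, r in curr.items() if k not in prev]
    changed = [r for k, r in curr.items() if k in prev and prev[k] != r]
    removed = [r for k, r in prev.items() if k not in curr]
    return added, changed, removed
```

Loading deltas rather than full snapshots keeps the daily feed small and the hub current without re-matching every record.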
We’re about to hit holiday season and we’re reaching crunch point – we’re 80% of the way through this work, but it will require some focused support from the whole project team to get it over the line.
Verification Hub has been suffering slightly in recent weeks from competing priorities and leave commitments across our part-time project team. We’re approaching a point where we can start live testing the VH in front of Citizen Index in the parking verification process so we’re redoubling efforts to get to that point by the end of June.
Our focus this week has been on establishing the key steps to get to live testing over our next 2 sprints. Farthest Gate are working at the moment to connect with our new API and Tom is working with InfoShare to get ClearCore 5 deployed in our live environment. Steve continues to work on the feed of Housing Benefit data and will be prioritising the daily feeds of both HB and Council Tax data over the next sprint. After that, we’re reliant on Matt to make sure that our new API is connected to ClearCore version 5 and deploy it into the production AWS environment. Then it’s all about watching how VH performs in the live environment so we can start tweaking and testing our data inputs and thresholds…