“Love tunnels, hate walls.” Profound words from Steve, our data engineer, which neatly summarise our last sprint. Getting the VPN tunnel working was our biggest challenge, and the majority of our work in the sprint depended on an operational VPN.
On the positive side, our developers and infrastructure engineers did some excellent problem solving and we did get it working. We learnt a lot, shared knowledge between two teams, and we are able to document what we’ve done to make it repeatable. We’re chipping away at professional silos and demonstrating that collaboration really does make the world a better place (cue some suitably cheesy earworm).
On the flipside, three quarters of our stories were blocked at the end of the sprint because of our prickly VPN problem. We understood that the VPN was a dependency when we planned our work, but we didn’t anticipate the degree to which it would hinder us.
Our star of the sprint is Isaiah who worked long and late last Thursday to get the VPN tunnel working.
What we’ve changed as a result
Omar, our colleague from the infrastructure team, will be coming along to daily standups and sprint planning. We would love to steal him and bring him onto the project team, but that’s not possible at the moment.
What’s happening this sprint
We are facing our biggest unknown – how to sync data back to our on-prem database. We’ll be doing some in-depth work on this with AWS in February. Ahead of time, we need to do our homework and take a decent stab at understanding the art of the possible.
This really plays to the strengths of the team. Collectively, we are comfortable with a high level of detail and enjoy tackling complexity. There’s a sense of anticipation about the sync bit of the puzzle. It’s the make or break of the hypothesis we’re testing. There’s a flurry of activity going on as I write. The team is preparing for a workshop on Monday afternoon. I’ll share the outcomes in my next weeknote.
This is the last weeknote of 2019. The team has worked hard to keep things moving over the last couple of weeks. I’m grateful for their continued enthusiasm in the face of winter germs, train strikes and general end-of-year weariness.
I’m iterating my weeknotes based on some valuable feedback; I’m attempting to be clearer about progress and next steps. Here goes…
Bold ambition meets messy reality
When we started out four weeks ago, our ambition was to have (some) data migrated into the cloud by Christmas. Spoiler alert: we didn’t achieve this.
However, we’ve made solid progress by:
Building the cloud platform
Building and testing a VPN tunnel
Identifying a use case (solving a problem for users)
Agreeing on a database (PostgreSQL)
Our workshop with AWS last week highlighted the need for a more in-depth look at how we migrate data into the cloud. We will bring together infrastructure, data and developer colleagues from Hackney and AWS in January to tackle this problem.
What we plan to do in the New Year
We took a pause from sprinting this week. Instead, we focused on tying up some loose ends. Getting the VPN tunnel up and running has been a blocker for a couple of weeks and we wanted to crack this before Christmas.
On 6th January, we are going to re-group and kick off the New Year with a sprint refinement session. With our stakeholders, we will:
Remind ourselves of the goal for the prototype and what “done” looks like
Review progress against the roadmap
Brainstorm backlog items
Roughly prioritise these into 2-3 sprints
Things on my mind
Pace* may be an agile buzzword, but I’m less concerned about that and more interested in team cohesion. As we learn more, do we:
Have the right skills mix in the team?
Share knowledge effectively, or still work in silos?
Have the right balance between doing “enough” for the prototype and the bigger picture?
“It’s the most wonderful time of the year…” and full of cold and flu germs. We’re falling by the wayside, brought low by various seasonal ills. I’m pleased with what we’ve managed to get done but I’m concerned for the health and well-being of the team. Some enforced relaxation over Christmas can’t come soon enough.
A change is as good as a rest. We spent the day with AWS on Friday testing our thinking around database choice and migration options. It feels like we are on the right track with our initial decision to go with a PostgreSQL database for the prototype. We identified that we need to do a bit more exploration on how we might migrate data in the New Year. It was great to have colleagues from the infrastructure team with us, but I’m also aware that our data and insight team need eyes on this too.
What’s next?
We parked the idea of doing a sprint next week. Instead, we are using the time to refine our backlog (long “to-do” list) and prioritise it into manageable chunks. There are a lot of moving parts to manage and we need to get more of a handle on what these are and what’s in our gift to control or influence. To this end, I’ve asked Father Christmas for a crystal ball this year.
Housing data in the cloud. It does what it says on the tin.
We’ve got data, lots of data.
This data lives in an old house which has had many occupants. Each occupant has added data, moved that data around, put it in different rooms, called it different things and used it to prop up the fabric of the building. The old house is weighed down with data, nobody can find what they are looking for and removing data risks a structural collapse.
We are changing this.
That’s a bold statement, so let me put some context around it. We are taking our first experimental steps to see if we can move a tiny piece of housing data into a cloud platform.
Back in October we did a week-long discovery to identify a data candidate for the prototype and to think about what success might look like. Thankfully, we’re not starting from scratch. Other fine minds have looked at our old, overstuffed house – we’ve valiantly attempted renovation and even extension. Colleagues from MadeTech have pulled all this learning together and made a set of recommendations, which we are testing in the prototype we are building over the next few weeks.
Introducing the dream team
We are working with support from AWS and MadeTech along with our award-winning team of Hackney developers. We’ve got expertise from our data and insight team and three technical architects (at my last count). I’m terrified; the team is absolutely buzzing. We’re finally staring into the eyes of our nemesis – let the battle commence.
We are working in five-day sprints. I love the drive of the team. They want to work hard and fast. We’re covering new ground every day. The team are absorbing new ideas, skills and ways of working like a sponge. We don’t all see things in the same way, but the team are embracing this too.
This week, we’ve identified our use case. Right from the outset we want to demonstrate how our work can bring tangible value to the Council services that rely on this data. We’ve got to keep this grounded in business need and ultimately the needs of Hackney residents. We’ve also set up our cloud platform in AWS this week. Next up: deciding which database we need for this prototype. There is A LOT of debate about this in the team. I’m expecting a few fireworks. We’ve got a spike early next week to try and crack this.
This week we focused on two main areas: continuing to better understand the scale of the issues with property and asset management data, and starting to think about how we might use the LLPG to address some of these issues (awful pun not intended).
To make sure that we were aligned with other work that had been done in this area, we spoke to Ian Marriot and his team (including Clayton Daniel as our demo-master), who explained the thinking that has guided their efforts on the Property Index. This tool compares data on Universal Housing (UH) and Codeman (the asset management system PAM currently uses).
This gave us a helpful overview of the scale of some of the data quality issues and a view of some of the things they’ve considered to improve them. There were some headline numbers (more dwellings on UH than on Codeman) which had reasonable explanations (UH includes additional dwelling types like temporary accommodation), but others that revealed more of a problem (blocks or sub-blocks marked as disabled on Codeman but not marked on UH).
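To make that kind of check concrete: a tool like the Property Index essentially lines up records from the two systems by a shared reference and flags where they disagree. The sketch below is hypothetical – the record references, field names and values are invented for illustration, not taken from UH or Codeman.

```python
# Invented example records keyed by a shared property reference.
uh_records = {
    "block-17": {"disabled": True,  "dwellings": 24},
    "block-22": {"disabled": False, "dwellings": 18},
}
codeman_records = {
    "block-17": {"disabled": True,  "dwellings": 24},
    "block-22": {"disabled": True,  "dwellings": 18},  # flag disagrees with UH
}

def mismatched(system_a, system_b, field):
    """Return references present in both systems where a field disagrees."""
    return sorted(
        ref for ref in system_a.keys() & system_b.keys()
        if system_a[ref][field] != system_b[ref][field]
    )
```

Here `mismatched(uh_records, codeman_records, "disabled")` would surface `block-22` – the sort of discrepancy described above, where a block is marked as disabled in one system but not the other.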
In addition, we have started to put together a list of all the teams and services in Hackney that make direct use of PAM data, and which data items they are interested in. This will help us start to build a picture of which data items are the mainstream ones that we need to accommodate in our solution.
It has also helped us hone our thinking on where we draw the line in terms of which data elements to add into the LLPG so that we can maintain data quality – both of our address data, which is super important to protect, and of the PAM data itself. We’ll need to come up with some clear criteria to test with PAM colleagues in the next sprint.
This week we have started to look in more detail at the LLPG itself. This has prompted a discussion about how we will test the hierarchies in the LLPG. We have discussed the merits of cross-referencing versus parent-child relationships as a method of creating our hierarchies in the LLPG.
We chose to test cross-referencing rather than parent-child relationships, as parent-child alone would not allow us enough layers in the hierarchy due to limitations of the LLPG structure and the software we use to maintain it. We considered a hybrid of parent-child and cross-refs, but in our discussions parent-child did not seem to offer any significant benefit.
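For readers less familiar with gazetteer modelling, here is a minimal sketch of the difference between the two styles. All the names and records are invented for illustration; the real LLPG schema is considerably more involved, and the depth limit in our case comes from the maintenance software rather than the data structure itself.

```python
# Parent-child: each record points to a single parent, so the tree's
# depth is bounded by how many levels the maintenance software supports.
parent_child = {
    "flat-1":   "block-a",   # flat sits under a block
    "block-a":  "estate-x",  # block sits under an estate
    "estate-x": None,        # top of the tree
}

def lineage(record):
    """Walk up the parent chain to recover a record's full hierarchy path."""
    path = [record]
    while parent_child.get(record):
        record = parent_child[record]
        path.append(record)
    return path

# Cross-referencing: each record carries explicit links to any number of
# related records, so an extra layer (e.g. a sub-block) doesn't hit a
# depth limit in the same way.
cross_refs = {
    "flat-1": ["block-a", "sub-block-a1", "estate-x"],
}
```

In the parent-child sketch, `lineage("flat-1")` climbs one link at a time; in the cross-reference sketch, the extra `sub-block-a1` layer is just another link on the record. That flexibility is essentially why cross-referencing won out in our discussions.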
Before the end of this sprint we’ll be talking through the pros and cons of this approach with the Dev team, Apps Management and other Data & Insight reps to get some healthy challenge and ensure that our proposed approach won’t cause unnecessary work for them down the line in surfacing PAM data for applications or reporting.