Many things coming together for MaT – but lots still to do! I’m drawing all the different parts together into one “Hub document” intended as a central repository of everything about the service, for both team members and stakeholders. Let me know if you’d be interested in seeing it.
The status update in that doc currently says the following:
- ETRA will go live on 4/9/19 with some post-live updates to follow shortly afterwards
- Following the great project retro with Emma H. last week, we’ve completed a first draft of our Team Charter, which we’ll continue to iterate in future retrospectives. We know we still need further sections on impact mapping, metrics and our service design work.
- We had the initial meeting with DXW on 29/8/19, which went well. The contract is still to be signed, so the start date isn’t decided yet, but hopefully it’ll be less than two weeks away.
- The full set of user journeys and user stories that will serve as the initial agreement for that work still needs to be completed, but we’ve had some great meetings with our Product Owner that have got this well under way.
- We’re starting work on significantly improving the quality of the backlog
- In general we’re lining everything up to make sure we’re good to go for when DXW arrives – there’s a lot to do!
- There was a very thought-provoking and productive meeting with our team’s new Service Designer and the Head of Digital about the SD work that will be undertaken in September. This turns out to be more ambitious and far-reaching than I had understood. It’s looking like it may be just the preamble to another round of discovery rather than the lead-in to immediate new digital features for MaT. This is going to involve some very interesting and positive conversations in the near future.
At the same time there’s a lot of other great things going on in HackIT from the experiments in team structures to the DevOps work – onwards!
This week we focused on two main areas: continuing to build our understanding of the scale of the issues with property and asset management data, and starting to think about how we might use the LLPG to address some of these issues (awful pun not intended).
To make sure that we were aligned with other work that had been done in this area, we spoke to Ian Marriot and team (including +Clayton Daniel as our demo-master), who explained the thinking that has guided their efforts on the Property Index. This tool compares data from Universal Housing (UH) and Codeman (the asset management system PAM currently uses).
This gave us a helpful overview of the scale of some of the data quality issues, and a view of some of the things they’ve considered to improve them. Some headline numbers (more dwellings on UH than on Codeman) had reasonable explanations (UH includes additional dwelling types like temporary accommodation), but others revealed more of a problem (blocks or sub-blocks marked as disabled on Codeman but not marked on UH).
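A check like the one the Property Index performs can be pictured as a simple reconciliation between two keyed record sets: flag properties that appear in only one system, and flag attributes that disagree where a property appears in both. The sketch below is purely illustrative – the property references, field names and sample data are invented, not the real UH or Codeman schema:

```python
def reconcile(uh_records, codeman_records, fields=("disabled",)):
    """Compare two dicts of {property_ref: record} and report differences."""
    uh_only = sorted(set(uh_records) - set(codeman_records))
    codeman_only = sorted(set(codeman_records) - set(uh_records))
    mismatches = []
    for ref in set(uh_records) & set(codeman_records):
        for field in fields:
            if uh_records[ref].get(field) != codeman_records[ref].get(field):
                mismatches.append((ref, field,
                                   uh_records[ref].get(field),
                                   codeman_records[ref].get(field)))
    return {"uh_only": uh_only, "codeman_only": codeman_only,
            "mismatches": sorted(mismatches)}

# Invented sample data for illustration only.
uh = {
    "P001": {"disabled": False},
    "P002": {"disabled": False},
    "P003": {"disabled": False},  # on UH only, e.g. temporary accommodation
}
codeman = {
    "P001": {"disabled": True},   # disabled on Codeman but not marked on UH
    "P002": {"disabled": False},
}

report = reconcile(uh, codeman)
print(report["uh_only"])      # → ['P003']
print(report["mismatches"])   # → [('P001', 'disabled', False, True)]
```

Both kinds of finding show up: a count difference with an innocent explanation (the UH-only record) and a genuine data quality problem (the disabled-flag disagreement).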
In addition to this, we have started to put together a list of all the teams and services in Hackney that make direct use of PAM data, and which data items they are interested in. This will help us build a picture of which data items are the mainstream ones we need to accommodate in our solution.
It has also helped us hone our thinking on where we draw the line in terms of which data elements to add into the LLPG so that we can maintain data quality (of both our address data, which is super important to protect, and the PAM data itself). We’ll need to come up with some clear criteria to test with PAM colleagues in the next sprint.
This week we have started to look at the LLPG itself in more detail. This prompted a discussion about how we will test the hierarchies in the LLPG, and we weighed the merits of cross-referencing versus parent-child relationships as a method of creating those hierarchies.
We chose to test cross-referencing rather than parent-child, as parent-child alone would not allow us enough layers in the hierarchy, due to limitations of the LLPG structure and the software we use to maintain it. We considered a hybrid of parent-child and cross-references, but in our discussions parent-child did not seem to offer any significant benefit.
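To make the idea concrete, here’s a toy sketch of the cross-referencing approach – the identifiers, relation names and data model are invented for illustration, not the real LLPG structure. Each link in the hierarchy is stored as its own cross-reference record, so layers like floor, sub-block, block and estate can all be stacked without changing the record structure:

```python
# Each cross-reference is a (child_ref, parent_ref, relation) row.
# Invented identifiers; real LLPG records would use UPRNs.
LINKS = [
    ("flat-12", "floor-2", "on_floor"),
    ("floor-2", "subblock-A", "in_sub_block"),
    ("subblock-A", "block-1", "in_block"),
    ("block-1", "estate-N", "in_estate"),
]

def chain(ref, links):
    """Walk the cross-references upwards, returning the full lineage."""
    parents = {child: parent for child, parent, _ in links}
    lineage = [ref]
    while lineage[-1] in parents:
        lineage.append(parents[lineage[-1]])
    return lineage

print(chain("flat-12", LINKS))
# → ['flat-12', 'floor-2', 'subblock-A', 'block-1', 'estate-N']
```

Because every layer is just another row, adding a new level (say, an estate grouping) means adding links rather than reshaping the records themselves.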
Before the end of this sprint we’ll be talking through the pros and cons of this approach with the Dev team, Apps Management and other Data & Insight reps to get some healthy challenge and ensure that our proposed approach won’t cause unnecessary work for them down the line in surfacing PAM data for applications or reporting.
This is the first set of weeknotes for the PAM Data project and yes, I don’t yet have a better name for it. I tried to verbify by calling it ‘Get accurate PAM data’ but that still doesn’t feel quite right. Answers on a postcard please.
This phase of the project came about as a result of some excellent research carried out by the data and insight team earlier this year. Through their research they came up with the recommendation that “we invest to expand the LLPG to operate as the central, trusted source of unique references to support our property data schema”.
During this phase of the PAM Data Project (😬) we will be testing this recommendation to see if the LLPG is something that can become a ‘central trusted source’ for the majority of use cases of PAM data. This is part of a strategic move to make us less dependent on Universal Housing as a source of important data.
The team is made up of:
- myself as a Delivery Manager
- Lindsey Coulson as the product owner of the LLPG
- Lisa Stidle working on data strategy
- Herminia Matheiu and Yash Shah from Housing working as data analysts
- Liz Harrison as the project sponsor
This project is being run using the Scrum methodology. It’s the first time many of the team have worked in this way, so I’m really interested in their feedback at retros to hear how they find it.
This week we kicked off the prototype / alpha phase with a planning session. Lisa, Liz and Lindsey had worked to produce a backlog and a team board with some hypotheses that this phase should test. We were able to use this to plan our first sprint’s worth of work that we are now working our way through. We will have a show and tell on the 31st of July for those that are interested.
This sprint is focused on researching and agreeing with stakeholders a first pass at the hierarchy of data items: floor, sub block, block and so on. Further to this we are keen to build on the work that multiple teams have done on property data for various projects. If you think that any work you’ve done could feed into this please get in touch.
Hi, I’m David Swainston from the Business Solutions team and I wanted to follow up on the work we did over the last month to create a prototype for an end-to-end digital service for Housing.
Here’s what we learnt over the two fortnightly ‘sprints’ in which we built Housing Helper – a chatbot for tenants and leaseholders.
- Be guided by the needs of the end user first: as part of the Agile methodology, this was an opportunity to let our end user needs from the discovery phase inform the basis of our requirements. Traditionally, we would gather a whole raft of requirements from the service area to build a large product that isn’t necessarily tailored towards the needs of the user. It was really interesting to see the relative level of simplicity of end user requirements for a service (i.e. only what users really need), compared to the larger scope and scale of requirements we usually gather and work to. Obviously this can be dependent on the type of service/project being worked on – but we felt this is particularly relevant when it comes to designing public facing digital services
- Look at existing products and services for design inspiration: as part of the design we looked at various existing consumer based digital services and products currently out there and widely used. These included Banking apps, Messenger interfaces, web chat interfaces and SMS to name a few. We considered the strengths of what these services offered in terms of functionality and user experience and fed this into the design of our prototype
- Test with end users and let assumptions be challenged: ensure that the service you are prototyping is put in the hands of the people who will be using it. Get their perspective on things such as functionality, design, user experience, and any additional user needs that may be identified and incorporated as requirements for future release sprints. Although our brief was to design a service for Housing, we learned from speaking to tenants that the service could be expanded to offer services across the organisation. We gained useful feedback on certain design aspects – for example, allowing spelling mistakes and slang to be recognised within the chatbot prototype, and that the interface was easy to use
- Having the right blend of technical expertise: this was really important when it came to exploring different platforms that could support the development of the prototype. We benefited from having developers within our team who could identify and configure the cloud platform, set up the various APIs between different components and code the chatbot framework. We also had a technical architect who brought the whole platform together and summarised it in a solution design canvas, which enabled other members of the team and observers to understand the different components that supported the platform
- Having a leader in the team/Scrum master: whilst Agile is a methodology to enable quick iteration and change, we soon found out that it is still very much a disciplined approach when it came to working through the design and release phase. It was useful to have a good leader to ensure that everybody within the team is aware of the role they have, and that release sprints are delivered in time to allow for the next iteration to begin. This is something that we can still tighten up on when applying Agile and its various forms for real projects, but as a first attempt away from the traditional Waterfall approach we didn’t do too badly!
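As an aside, the spelling-mistake and slang handling mentioned above can be sketched with nothing more than standard-library fuzzy matching. This is purely illustrative – the intents, slang dictionary and similarity cutoff below are invented, and our prototype used a chatbot framework rather than this code:

```python
import difflib

# Invented example intents and slang mappings, for illustration only.
INTENTS = ["report a repair", "pay my rent", "check my balance"]
SLANG = {"dosh": "money", "quid": "pound"}

def match_intent(text, intents=INTENTS, cutoff=0.6):
    """Normalise slang, then fuzzy-match the input against known intents."""
    words = [SLANG.get(w, w) for w in text.lower().split()]
    candidates = difflib.get_close_matches(" ".join(words), intents,
                                           n=1, cutoff=cutoff)
    return candidates[0] if candidates else None

print(match_intent("pay my rnet"))  # tolerates the typo → 'pay my rent'
```

A real chatbot platform does considerably more (tokenisation, intent training, entity extraction), but the principle – matching imperfect input against known intents rather than demanding exact phrasing – is the same one the tenants’ feedback highlighted.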
Here’s a link to the Google Ventures channel on YouTube. We adopted the sprint methods described by GV in their videos, and the team found them really useful as a framework for our exercise to come up with the final prototype.
If there are any questions you’d like to ask about the approach or the prototype itself, please drop me an email.