Modern Tools for Housing – Programme Week Notes 26/11/21

Another week zooms by as the nights draw in and we’re getting rapidly close to the festive season. Here’s the update from our teams.

Finance – Jay – HackIT workstream DM

  • Next Show and Tell: 7th December 2021
  • Celebrations:
    • Great progress has been made with a MAA Rules Engine concept
    • PTX testing environment is now set up
  • Challenges:
    • MAA Bugs and issues
    • Hackney development resource

Manage My Home – Yvonne – HackIT workstream DM

  • Celebrations:
    • Detailed discussions are continuing with the Document Storage team
    • The team is working excellently at pace – we managed to bring in four new work tickets (two technical enablers and two delivering new user functionality) this sprint – all of which will move us closer to our goal of delivering our first Housing management processes
  • Challenges:
    • Google Groups (permissions) is complex due to the different approaches the teams have taken across the workstreams. Integration is proving trickier than envisaged
    • Heading into our last full sprint before the current contract ends
    • We have the notifications component documentation but what is currently built is missing a few “must have” requirements for us

Repairs – Sarah – HackIT workstream DM

  • Celebrations
    • Fixed the issue of ‘no access’ jobs being deleted from DRS
    • Development meetings with Advanced started off well
    • Deployed changes to mobile: improved the job view, including adding the job description
  • Challenges
    • Solving the issue with the operative job splits – we’ve added more logging to see what’s going on
    • Keeping all stakeholders onboard while we build the last part of the bonus process ‘closing a week’

Our Digital Marketplace tender process is continuing. We whittled the applications down to a five-company shortlist, who have until midday Monday 6th to submit their full proposals. We hope to be able to announce a decision shortly after.

Some of our senior stakeholders have said that they want to have access to our test systems so they can see for themselves how our products function. It’s fantastic to have that level of interest so we’ll be setting that up this week.

Speaking of senior leaders, we’re looking forward to taking part in the Housing senior leadership day on Wednesday where we’re going to be presenting about the MTFH work to-date and our future plans to a large audience of departmental managers.

We’re restarting our research into moving a number of our Housing resident processes away from needing so-called “wet signatures” (signatures on a piece of paper). We’re collecting evidence of how this requirement has been removed in central and local government processes, so if you have any examples please let me know.

We’re still looking to recruit a senior .NET contractor to help work on the MTFH Finance workstream, so if you know anyone who may be interested, again please let me know in a comment below.

A couple of bits I’ve been tardy on. I need to finish writing up the workshop we had on evolving the governance of our programme and share that for feedback this week. I’m also restarting the regular catch-ups with the programme Product Owners that stopped when I started the new Lead Delivery Manager role. I’m looking forward to discussing a number of important topics with them, but also to really listening to how I can best help them with their goals for their workstreams.

Cloud Engineering weeknotes, 26 November 2021

This week we’ve been looking to the future a fair bit. The first part of this was in reviewing our roadmap and planning the larger pieces of work for the next 5-6 months. A lot has changed since we put this roadmap together back in March, and we have a much better understanding of what is needed and how we work with other teams. 

We have removed some items now deemed unnecessary, some items that run counter to our working philosophy, and some items that are low-value. If they are important enough, they will come back in the future. 

We’re concentrating on a smaller number of items, doing fewer things better, and we will start with testing our disaster recovery capability. We’ve had a couple of dry runs and will build on that; the idea is to see how we react and then iterate to build on strengths and identify weaknesses. 

Looking to the future has also included starting to plan for when our contractor colleagues roll off. This will be phased, with at least one person remaining till January or February. We are now starting handovers, with documentation and coaching. 

Part of this effort is a lot of work done in the last week or so to put as much automation into the firewalls as possible. We should soon be in a position where nobody has to go into the control panel; everything should be managed by Panorama and any changes made via Terraform. We have high availability, and changes to things like routing should be very rare. 

This extends to Globalprotect, which is part of the firewall software. We’re piloting a new way to create an application in Globalprotect using its API, rather than needing direct access to the control panel, and it’s gone well. Be on the lookout for more Globalprotect changes in the next week or so. 

We’ve made some progress on the account migrations this week. The work to migrate Advanced e5 is prepared, and the change should be made imminently. We’ve also migrated Housing-Dev, and work on the Websites account is progressing well. Unfortunately we’re blocked on the other Housing accounts, which in turn blocks the final set of migrations. We are talking daily to the MTFH team about getting this unblocked. 

The next few weeks may be frantic, but the light at the end of the tunnel is getting brighter.

Data Platform Project Weeknotes 13: 23.11.2021

For more information about the HackIT Data Platform project please have a look at this weeknote on the HackIT blog.

Improving the Data Platform playbook

We have spent some more time restructuring our Data Platform playbook. After thinking about the particular needs of data scientists, analysts, engineers and managers, and mapping the common and unique parts of the playbook each may need to access, we have started restructuring the playbook so that it uses a much clearer navigation menu for these users. We see this as a two-part process: restructuring the menu, then refining and adding the necessary content.

One example of content added to the playbook is the process for simplifying the creation of Glue jobs in code (see weeknote 12 for more information). After writing the content (in this case, instructions) for the playbook, we have spent some time making sure it is user-friendly for analysts with a broad range of familiarity with tools such as Terraform and GitHub.

This testing is set to continue with our colleagues in the Parking team but has already given us a lot of insight into how we can make the playbook more user friendly and accessible.

Collaborating with Manage My Home

Last week members of the data platform team presented a proposal on the benefits of using Kafka over an AWS S3 bucket for the event streaming process to the HackIT Technical Architecture meetup.

Kafka is open-source software that provides a framework for storing, reading and analysing streaming data. We looked at the positives and negatives of introducing Kafka in place of the current SNS/SQS solution and believe Kafka provides a more reliable and scalable solution for the needs of the data platform. After some lively discussion and debate, we are waiting to find out the next steps, and are keen to start working with the Manage My Home team on event streaming as soon as possible. Ultimately this will enable us to get the data into a BI tool like Qlik so that the Manage My Home team understands how the tool is being used.
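To illustrate the property that weighed most in the comparison, here is a toy, in-memory sketch (not our actual design, and much simpler than real Kafka) of the difference between a log and a queue: a Kafka-style log retains events after delivery and each consumer tracks its own offset, so data can be replayed later – for example, into a BI tool.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ToyLog:
    """Toy append-only event log, Kafka-style.

    Unlike a queue (the SNS/SQS model), the log keeps events after
    delivery; each consumer holds its own offset and can rewind.
    """
    events: List[str] = field(default_factory=list)
    offsets: Dict[str, int] = field(default_factory=dict)

    def produce(self, event: str) -> None:
        self.events.append(event)

    def consume(self, consumer: str, max_events: int = 10) -> List[str]:
        # Read from this consumer's own offset, then advance it.
        start = self.offsets.get(consumer, 0)
        batch = self.events[start:start + max_events]
        self.offsets[consumer] = start + len(batch)
        return batch

    def replay(self, consumer: str, from_offset: int = 0) -> None:
        # Rewind: a queue cannot do this once messages are consumed.
        self.offsets[consumer] = from_offset
```

A new consumer (say, a BI loader added months later) simply starts at offset 0 and sees the full history, which is the scalability and reliability argument in miniature.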

Exploring our Roadmap and Ways of Working

We held another workshop to further refine our product roadmap. As a team we identified all the possible user needs that various analysts, engineers and managers might have when using the platform. We then looked for commonalities between these user groups and mapped the needs on an affinity map. This process enabled us to refine user needs by priority, complexity and difficulty.

We have also held a workshop to reflect on our current team pattern of agile working and ceremonies. There was a lot of debate about Scrum vs Kanban as agile processes, and about the best approach to estimating the complexity and time to complete a task when planning. We have come up with some changes which we hope will make the planning process more efficient. However, we acknowledge that this is an evolving process, and one whose success we will reflect on in the near future.

Ingestion and use of Tascomi planning data

Work is still ongoing to change the Tascomi ingestion process in a way that stores daily data snapshots in the platform. This sprint, we are also attaching data quality checks to the process.  This is an opportunity to test and refine an entry in our Playbook.
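As a sketch of the shape of this work (the field names and checks below are illustrative, not the real Tascomi schema or our actual pipeline code), a daily-snapshot loader with attached data quality checks might look like:

```python
import datetime

def quality_checks(rows):
    """Minimal data quality checks: rows present, required keys present,
    and no duplicate application references. Returns failure messages."""
    failures = []
    if not rows:
        failures.append("no rows ingested")
    required = {"application_ref", "received_date"}  # hypothetical fields
    for i, row in enumerate(rows):
        missing = required - row.keys()
        if missing:
            failures.append(f"row {i} missing fields: {sorted(missing)}")
    refs = [r.get("application_ref") for r in rows]
    if len(refs) != len(set(refs)):
        failures.append("duplicate application_ref values")
    return failures

def store_daily_snapshot(store, rows, snapshot_date=None):
    """Keep one snapshot per day, keyed by import date; reject a load
    that fails the quality checks rather than storing bad data."""
    snapshot_date = snapshot_date or datetime.date.today().isoformat()
    failures = quality_checks(rows)
    if failures:
        raise ValueError("; ".join(failures))
    store[snapshot_date] = list(rows)
    return snapshot_date
```

Attaching the checks at ingestion time means a failed load is visible immediately instead of surfacing later as a puzzling gap in a report.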

Adam Burnett from the Data and Insight team is deconstructing previous Qlikview reports to understand the business logic behind key KPIs, sourcing the relevant data in the Tascomi tables and recreating them for the Planning team to review. In some cases this means identifying new datasets that need to be added into our daily loads. 

Email us if you have any questions about the Data Platform.

Modern Tools for Housing – Programme Week Notes 19/11/21

Some fantastic demos this week, and a great workshop on future governance for the programme. The current Digital Marketplace tender is attracting a number of applications, and we ran a demo for potential suppliers so that they could see what we’ve done so far. Here are the updates from our workstreams.

Finance – Jay – HackIT workstream DM

  • Next Show and Tell: 23rd November 2021
  • Celebrations:
    • Great progress has been made with a MAA Rules Engine concept
    • PTX testing environment is now set up
  • Challenges:
    • MAA Bugs and issues
    • Hackney dev resource (still a significant challenge which we’re focussing on solving)

Manage My Home – Yvonne – HackIT workstream DM

  • Celebrations:
    • Repairs Hub integration is almost ready – this is a great addition to the product which will make it much easier for housing officers and others to see the whole picture for residents. It’s not quite live yet as we need to finalise the Google group permissions
    • Ever more progress towards building our first process
    • Humairaa’s great work on the Patches API
  • Concerns:
    • Amido contract is ending soon so we need to spend time making sure we’ve got everything documented and a plan for what happens next

Repairs – Sarah – HackIT workstream DM

  • Celebrations
    • Building relationships with operatives/supervisors as we continue to oversee onboarding at the DLO
    • Reporting available for bonus calculation
    • Time booked in with Advanced to address the sync issue with DRS
  • Challenges
    • Sync with DRS: we’ve increased the frequency to every 15 minutes, but we’re still not picking up certain changes from within DRS, meaning some jobs are missing from the operative job list (though operatives still have the printed jobs, so the jobs are still going ahead)
    • Solving the issue with the operative job splits

This weeknote is after a short gap as I was away in the Scottish Highlands last week. A tremendous amount of great work has been achieved in the last two weeks.

The key programme level news is that we have clarity on what’s next in terms of budget. This enables us to focus on moving through our roadmaps for the next few months – especially starting to look towards focusing on resident use of MTFH.

We’ve been gratified to see a number of applications for our current tender on the Digital Marketplace. The tender closes on Tuesday and we will be announcing the shortlist of five going to the proposal submission very shortly afterwards.

It was excellent to watch the show and tell for Manage My Home and see the repairs information for properties now being displayed along with hearing about the proposals for the service’s process engine and seeing the wireframes for the new work tray. It also blew me away to hear that in the ongoing process redesign work some processes have had their number of steps cut by nearly 50%. This will really enable our Housing Officers to spend more time with our most vulnerable residents. 

Manage Arrears continues to be used across the whole of the Rents team, and as more of the automation functionality is switched on, Rent Officers are able to spend less of their time on IT admin tasks and more on the strategy of which residents to target and how to work with them.

We are shortly going to be looking for a senior .NET contractor to come and work with us in the MTFH Finance workstream. If you know of anyone who is available and may be interested please drop me a line at

Despite the ongoing technical integration issues the Repairs team are still working diligently onsite with the DLO operatives and steadily continue to increase both the number of users and the functionality they have via mobile working.

We’re still closely monitoring the integration points our workstreams have with each other and with other teams in HackIT. In particular we’re conscious of the possibility of MMH being blocked on delivering the first Housing processes by issues integrating with our Document Store service. We’ll be picking that up again at the start of next week.

Cloud Engineering weeknotes, 19 November 2021

More documentation this week, including a draft “Team ways of working” document that has really made me think. When writing this, I looked back to our first show & tell a year ago, when we set out our principles and values; I really do believe that we have held true to these. The fact that we’re doing this unconsciously is a good sign that the principles and values really do describe who we are as a team. 

We are tantalisingly close to finishing off some huge pieces of work. Our new firewalls are in, Panorama is being deployed for central management, and we have a host of improvements to Globalprotect lined up. One significant change coming in the near future will be new, and separate, URLs depending on where the application is hosted. The majority of applications will be on and any applications hosted in our own AWS, like Qlik, will be on

Unfortunately, there’s been no real progress on account migrations. We are ready to go on the Advanced e5 account, with a new VPN, but delays at Advanced mean that this will now not happen before next Tuesday. We are also still dealing with competing priorities in MTFH, but are meeting with the lead developers later today to unblock that. Until the Housing accounts are moved, we cannot move the API accounts. 

However, we are able to clean the API accounts up. The last significant group of apps to be moved to a new account is the GIS apps, such as Earthlight and LLPG. We have five EC2 instances and an RDS database to move, and the infrastructure to do so is just about ready. 

This week, we’ve noticed some issues in the platform, and have taken, or will take, steps to fix them. For example, we noticed that our Backups module wasn’t operating as expected – none of the backups older than 30 days had been deleted. We identified a missing line in the code, which has been fixed, and all the old snapshots have been purged. Those snapshots carried an associated storage cost, so S3 costs should fall a bit next month.
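As an illustration of the kind of retention filter involved (a simplified sketch, not the actual Backups module code), the bug was equivalent to a filter like this never being applied, so nothing ever aged out:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30

def snapshots_to_delete(snapshots, now=None):
    """Return IDs of snapshots older than the retention window.

    `snapshots` is a list of (snapshot_id, created_at) pairs with
    timezone-aware datetimes.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    # Anything created before the cutoff is outside retention.
    return [sid for sid, created in snapshots if created < cutoff]
```

With the purge step restored, the list this returns is actually deleted each run, which is why the backlog of old snapshots (and its storage cost) has now gone.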

Costs have been on my agenda this week. We ran a report to identify over- or under-provisioned EC2 instances, and the recommendations have been shared. Some of the recommended changes don’t save a lot individually, but if we accepted all of them we could save in the region of $10,000 per month (based on 24/7 usage). And that’s before Savings Plans, which we’re talking to AWS about.

EC2 cost is now our single most expensive line item. Please make sure that your non-prod EC2s are powered down overnight by enabling the scheduler tags in Terraform. 
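As a sketch of the decision the scheduler makes (the tag names, values and hours below are illustrative, not our actual Terraform configuration):

```python
def should_stop_overnight(tags, hour_utc):
    """Return True when a non-production instance with the scheduler
    tag enabled should be powered down (here, 20:00-06:00 UTC).

    `tags` is the instance's tag dict; names are hypothetical.
    """
    if tags.get("Environment", "").lower() == "prod":
        return False  # never touch production
    if tags.get("scheduler", "").lower() != "enabled":
        return False  # opt-in only, via the Terraform tag
    return hour_utc >= 20 or hour_utc < 6
```

An instance powered down for ten hours a night avoids roughly 40% of its 24/7 compute cost, which is where the bulk of the non-prod savings comes from.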

Finally, we are ripping up our roadmap next week, the second time we’ve done this since starting. We now have a much better understanding of where we are and what is needed, and some of the things we originally envisaged are either no longer necessary, or not possible. We would welcome input into this, and feedback once drafted, so if there’s anything you think should be included, please let us know.