Our Micro Frontend Journey

Introduction

Over the past year or so, and following the successes of our microservices architecture, Hackney embarked on a journey to align our web application strategy with that of our API platform.  This meant breaking down our applications into domain-specific components that could be combined and reused in different ways.

Out of this, and the timely addition of Amido to our agency collaborators, came the decision to adopt a micro frontend strategy for our frontend development.

A Bit About Micro Frontends

So what are micro frontends?  At its core, the micro frontend model adopts the same underpinning as microservices: the application is broken down into a series of reusable features and domain-specific components in order to provide a consistent user experience across the board.  Each feature or component becomes an application in its own right, developed independently but brought together to compose the complete application.

Domain-specific features can be built by the product teams responsible for that domain, which means their domain-specific skills and knowledge can be leveraged directly.  These products can take advantage of the benefits of micro frontends, such as:

  • Technology agnostic – Each micro frontend isn’t dependent on the technology stack of another, and teams can choose the tooling best suited to their particular skill set.
  • Faster development and deployment – Much of the micro frontend strategy centres on reusability, so any project reusing a shared component saves the time it would otherwise spend building that feature independently.
  • Isolated code – Each micro frontend is its own self-contained application and does not depend on the presence of another, which means the decoupled architecture is maintained.
  • Team prefixes – Naming conventions can be established to easily identify which team or domain a product belongs to.
  • Site resilience – Because of the decoupled architecture, if one micro frontend goes down, it doesn’t mean the entire website becomes unavailable.

There are a number of tools and frameworks available for building micro frontends. For our flavour of micro frontends we decided on the single-spa framework, which brings together multiple JavaScript micro frontends into one application.
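
To give a flavour of how this composition works: in a single-spa root config, each micro frontend is registered with an activity function that decides when it should be active. The sketch below is a minimal, dependency-free model of that idea; the package names and routes are hypothetical, not our actual applications.

```typescript
// A minimal sketch of the single-spa idea: each micro frontend exposes
// lifecycle functions, and a root config decides which applications are
// active for a given URL. The real library also handles loading, mounting
// and unmounting; this only models the registration and activity check.

type Lifecycles = { mount: () => void; unmount: () => void };

interface AppRegistration {
  name: string;
  app: Lifecycles;
  activeWhen: (path: string) => boolean;
}

const registry: AppRegistration[] = [];

function registerApplication(reg: AppRegistration): void {
  registry.push(reg);
}

// Names of the micro frontends that should be mounted for a given path.
function activeApps(path: string): string[] {
  return registry.filter((r) => r.activeWhen(path)).map((r) => r.name);
}

// A shared header that is always active...
registerApplication({
  name: "@hackney/header",
  app: { mount: () => {}, unmount: () => {} },
  activeWhen: () => true,
});

// ...and a domain-specific application active only on /property routes.
registerApplication({
  name: "@hackney/manage-my-home",
  app: { mount: () => {}, unmount: () => {} },
  activeWhen: (path) => path.startsWith("/property"),
});

console.log(activeApps("/property/123")); // header + manage-my-home
console.log(activeApps("/search")); // header only
```

In the real framework, `registerApplication` takes a loading function rather than a ready-made object, so each micro frontend's bundle is fetched at runtime only when it becomes active.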

The diagram below gives an idea of what we were looking to accomplish, utilising reusable components wherever possible to maintain consistency and reduce developer effort.

In my opinion, the learning curve for adopting the single-spa framework was not the easiest, but ultimately the benefits have been worth the time invested in learning it.

Our Journey to Adoption

As there were several work streams in our Modern Tools for Housing (MTFH) programme of work, we decided on a phased adoption.  Since the Manage My Home stream had proposed the micro frontend approach and had the capabilities required to deliver a successful proof of concept, the decision was made for them to pilot it.

What emerged from this successful pilot was a set of useful tools and reusable components.  We now had several micro frontend applications that could easily be integrated into other projects, including a standard authorisation component, a standard header, and much more.

Initial uptake of the micro frontend approach was slow for a number of reasons pointed out by several teams, such as the learning curve required for adoption, and the fact that some projects had progressed so far that it would not have been cost effective to reverse course and adopt another approach.

We eventually decided to target new projects and get them on board the micro frontend train, ensuring they followed our defined approaches.  The idea was that adopting the approach at the start of a project would mean less of a buy-in challenge.  We had some new projects coming online that needed to decide on their frontend approach; we approached two of these, Single View and Temporary Accommodation, and they decided to come on board.

Because these two projects were outside of the Modern Tools for Housing programme of work, a couple of unforeseen challenges arose from the tooling having been designed specifically for the programme – a simple example being that the application title was fixed to the programme's.

At this point we were still sorely lacking in internal frontend capabilities and relied heavily on the skills of our agency partners.  The two projects that came on board each had their own specific take on how to extend and reuse the tooling outside of the programme.  This meant we ended up with two new flavours of implementation, which was not the strategy we were aiming for.

However, collaborating with these two projects facilitated a lot of learning and we slowly started to build our internal capabilities and identify ways of extending the tooling in a more consistent way.

Where we are now

Following our efforts to extend the micro frontend tooling beyond the scope of the Modern Tools for Housing programme, and the challenges we overcame along the way, we now have several projects using this tooling:

  • Single View
  • Temporary Accommodation
  • Finance
  • Manage Arrears (Leasehold services)

The diagram below shows a partial view of how our micro frontend components are being reused across these various projects, either directly reusing the shared service components or reusing our templates to build new ones.

We still encounter challenges as we extend this tooling, as it is very much a learn-as-we-go process.  However, we are already seeing the exciting benefits of the approach, such as not having to spin up an authentication process for each project and having a consistent look and feel across our web applications.

So What’s Next?

There is still a lot of work to be done to incorporate all of the UI work happening across our projects.  As we have new projects coming on board and wanting to take advantage of reusable components, we want to make it easier for them to do so.

The diagram below sets out ideas on what the development timeline would look like for our continued micro frontend journey.

We have been working on a couple of useful templates such as root and feature component templates.

We want to make the deployment of these projects as seamless as possible while remaining in line with our standard deployment process.  The work on the deployment pipeline continues to evolve and we will soon be at the point where a developer can start a project from the template and have a working deployment pipeline with minimal configuration.

We also want to integrate and align any micro frontend projects that may have started before we were able to set up our project template.  For better maintenance and support, it would be ideal for these to share the same common components where applicable, so work will be needed to incorporate them.

With all of this in place, it will be invaluable to have the processes, guidelines and governance around our micro frontends consolidated into a playbook: a central location for information for any new developer eager to get cracking on a new frontend project.

There will always be opportunities to build and grow this ecosystem, but we also need to remain vigilant, keeping abreast of any emerging security vulnerabilities or incidents.  This is why we will continue working with our security assurance team to address any detected issues as quickly as possible.

Conclusion

From the initial introduction of the micro frontend approach until now, I’ve been excited and interested in the prospects and possibilities that it allows.  It has shown me how far frontend development has come, from the early days of plain HTML, CSS and JavaScript, to these amazing frameworks that make application development such a rich and rewarding experience.

Where I was once filled with awe and overwhelmed by the amount of learning required to build a frontend application, I am now part of a team that can deliver UI components in a way that was previously unimaginable.

I am really excited about this project and the possibilities it can open up as we build out our micro frontend ecosystem.

Hackney NuGet Package

Context

At Hackney, we believe in applying reusability and consistency to our API development process across all projects. As we built more APIs, we identified a lot of code duplication across products with similar user needs. We therefore identified the need for a place to store common code that could be shared between different APIs and contributed to the open source community. After lots of research, we decided to start building our own NuGet packages, which help developers work faster, reuse code, and achieve consistency across the board.

Our Users and their needs

As a member of the Technical Design Authority team and the Security team, I would like to:

  • Ensure that the NuGet Packages have secure authentication in place so that Security Compliance rules are not compromised.

As a Developer I would like to:

  • See consistency between APIs and Listeners so that any changes made to the data models can be reflected in both areas straight away
  • Reduce duplication of code across APIs so that development time can be used in exploring new features/functionalities and reusability can be achieved
  • Ensure any NuGet package created is secure so that security practices are not compromised

As a Solution Architect, I would like to:

  • Have awareness of the different NuGet Packages that we have so that they can be consumed across solutions that I design
  • Have an easy way to find NuGet Package documentation so that I can make informed decisions about whether the NuGet package can be consumed in new solutions

As a Product Manager, I would like to:

  • Ensure development time is reduced so that there is faster delivery to our users

As an External Agency developer, I would like to:

  • View the different NuGet packages that we have so that I can consume them within my APIs.
  • View the different NuGet packages so that I can create new ones that can be consumed across different services.

Our Vision

  • To introduce reusability between APIs developed for products, so that code duplication is reduced across many product developments. This also enables a return on investment in the platform.
  • To be the source of truth for data models shared between APIs and Listeners, so that data remains up to date at all times and consistency is achieved.
  • To promote better practices, so that well-tested features are delivered faster to our residents.
  • To enable the team to adhere to best practices as described.
  • To provide well-tested and secure code that emphasises security assurance and gives confidence that we are following the security-first principle in depth.
  • To reduce code duplication, so that application performance improves.

Why would you need a Hackney NuGet repository?

We use NuGet packages to share code that is exclusive to an organisation or a project. This becomes especially useful in a microservice architecture, where multiple services share common code – things like middleware, validation and logging – in order to have a strong ecosystem in place.

Although Hackney has a Base API template that is used whenever we build APIs and allows us to onboard new developers quickly, there is still a need for NuGet packages. Here are a few examples:

  • The Base API is boilerplate code reused for new APIs; it may contain only part of the shared common code, as not all services need everything.
  • If we update shared code in the Base API, we have to manually update every service built from it. With a NuGet repository, we instead publish a new version of the package and update the reference in the Base API and the services.
  • When a change is required, we only need to make it in one place, which helps reduce the development time needed to update code.
  • We might need to share domain-specific code that does not belong in the Base API. A package keeps a single source of truth available between services and doesn’t require developers to update several APIs at the same time.

What is a NuGet package and its benefits to our ecosystem?

A NuGet package is a single ZIP-format file (with a .nupkg extension) that contains compiled code that can be consumed and shared between developers. Each package is published to a central NuGet repository, from which it can be installed into your services.

Our NuGet packages are secure as they require a Personal Access Token; without one, you will not be able to install a package.  Not only does using NuGet packages meet Hackney’s 7 approaches, it also provides a source of truth for standard data models.
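
As an illustration of what that authentication looks like for a GitHub-hosted NuGet feed, a consuming project typically adds the feed and a Personal Access Token to its nuget.config. The sketch below shows the general shape; the source key and placeholder credentials are illustrative, and the token needs at least the read:packages scope:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- GitHub Packages NuGet feed for the LBHackney-IT organisation -->
    <add key="hackney" value="https://nuget.pkg.github.com/LBHackney-IT/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <!-- Credentials for the feed above; the password is a Personal Access Token -->
    <hackney>
      <add key="Username" value="YOUR_GITHUB_USERNAME" />
      <add key="ClearTextPassword" value="YOUR_PERSONAL_ACCESS_TOKEN" />
    </hackney>
  </packageSourceCredentials>
</configuration>
```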

Our NuGet packages are reliable, as they hold the most up-to-date data models that can be used across different repositories. They also contribute towards meeting our KPI metrics, helping to justify the return on investment.

Here are our NuGet Packages if you would like to take a look: 

https://github.com/orgs/LBHackney-IT/packages

Our use cases for using in-house built NuGet packages

Within Hackney, we identified a lot of APIs with the same user needs, so reusability could be achieved across them. Every time a change was required to a piece of code, several different APIs needed updating, which increased development time. To reduce code duplication and development time, we identified a need to create NuGet packages that store common code which can then be consumed across several APIs.

For example, within Hackney we follow an event-driven architecture in which our APIs act as publishers and custom-built ‘listeners’ act as consumers. The data model stored in a Listener should be the same as the one in the API, so to avoid making updates in multiple places we decided to have a NuGet package that stores the data model and is consumed by both the API and the Listeners. This allows the NuGet package to be the source of truth and promotes consistency.

We have chosen NuGet as a tool to produce and consume packages as it contributes to efficiency in our development process, through the use of a central point of reference for all services that use a particular data model. 

NuGet packages can also be used to compile common code that can be shared across different APIs. For example, if an API uses DynamoDB as its database service, converters might be required to convert data types as data goes in and out of DynamoDB. Rather than having the converters within each individual API, we created a NuGet package that can be consumed by all the APIs. This prevents duplication of code and reduces development time.

Using and building packages

Consuming our NuGet packages is the same as using any other public NuGet package: the main task is to add the package to the given project and then set up any configuration for authentication (if required). To help onboard new developers, we keep extensive, up-to-date documentation in the READMEs of our repositories, and we also list our NuGet packages in our API Playbook and the Base API template… Simple as that!

To ensure that consuming services are not affected by changes to our common packages, we use an automated versioning strategy: every time an update is made to a NuGet package, the new version number is calculated automatically. A service will only consume the new changes once it has updated the package version used in its project and deployed that change. This allows individual teams to check whether they need to update a NuGet package, via the NuGet package manager, as part of the code update process.
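
Concretely, a consuming service pins the package version in its project file, so a newly published version is only picked up when that number is bumped. A sketch, using a hypothetical package name and version:

```xml
<ItemGroup>
  <!-- "Hackney.Core.Logging" and the version number are illustrative.
       Bumping Version is what pulls a newly published release into the service. -->
  <PackageReference Include="Hackney.Core.Logging" Version="1.2.0" />
</ItemGroup>
```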

When it comes to naming our NuGet packages, we ensure the names are clear and concise so that all developers are aware of the features available in each package. All of our feature packages start with Hackney.Core, which helps developers identify a Hackney-built NuGet package. All of our shared packages start with Hackney.Shared, which tells developers that the data model is used across different services.

We are looking forward to any collaboration and we are more than happy to receive any feedback for improvements as we believe in failing fast and learning from the experience and iterating our ecosystem.

Future plans

We have seen a lot of benefits from consuming NuGet packages and would like to share them with other developers so that they can improve their own ecosystems. Here are some of the future plans we are considering:

  • Encourage developers from all projects across HackIT to consume our NuGet packages
  • Encourage other teams to create their own packages to prevent duplication of code between APIs across HackIT
  • Continue consuming these packages and identify any areas where improvement is required
  • Automate the process of notifying developers when an update has been made to a NuGet package

HackIT Development Lifecycle (Part 1)

Introduction

The traditional software development process was long and tedious in the early waterfall days. With iterative development, an agile software development team including delivery managers and data engineers can leverage the software development lifecycle to discover, design, develop, test, and eventually deploy services or products with greater regularity, consistency, efficiency, and overall quality. This includes a thorough understanding of the individual steps, and ensuring we incorporate user, data, and tech needs, along with design-by-security practices, within each phase of the lifecycle. The paradigm allows us to build scalable, resilient, secure, highly available, and well-orchestrated services for residents and staff to use all the time.

Leveraging the right technology, skills, and platform is key: one that balances meeting our residents’ and staff’s needs, faster delivery, and designing a resilient and well-architected solution. Modern, cost-efficient technology is at the heart of the evaluation we do at every step of delivering services so good that people prefer to use them.

Our story/journey

At Hackney, we believe in continuous learning, iteration, and improvement. We have been regularly iterating our playbook (based on our learnings over the period), evaluating existing and new technologies to identify gaps for improvement, trialling new solutions, and developing more efficient approaches for future service development. Knowledge management is also an essential part of this process, which we have implemented via our weekly community of practice meetups for Data, Architecture, and Frontend.

We believe that having an open and transparent approach helped us to define our processes and ways of working effectively, and enabled us to gain a better understanding of user, data, and technological needs.

We started by building monolithic APIs hosted on-premise and incurred a lot of technical debt early in our digital transformation journey. Moving from classic waterfall-style development to a fully agile approach took time, but we achieved it. Introducing DevOps principles into service development while still keeping the lights on was challenging, but possible. We take pride in having developed SMEs within the team to mentor and coach our junior members and apprentices, alongside following industry best practices. We learnt that cultivating a growth culture was key to enabling the right mindset within the team. Winning recognition also validated our processes among various industry giants.

We aspire to develop services so well that our residents and staff prefer to use them all the time. This has led us to prioritise their needs first, along with data and technology, as our core principle. Over the years we have developed our open working, collaboration, and processes so well that they have saved development time and costs. We have delivered powerful platforms (for example, reusable Platform APIs, an API boilerplate, a design system, and a micro frontend boilerplate) to be used by numerous services. The approach has unlocked the team to deliver quickly and achieve a return on investment (ROI) through reusability and consistency across products, allowing fewer code vulnerabilities.

How do we work – HackIT Development Cycle

Like any other agile-based cloud development process, the HackIT development cycle keeps user, data, and technology needs at its core. The model emphasises the following phases:

  1. Discovery
  2. Design
  3. Development
  4. Testing
  5. Integration
  6. Monitoring
  7. Maintenance

These stages do not end here for us. We have precise steps built into each of the above phases, which help us deliver efficient, secure, highly available, and resilient services to our residents and staff. They ensure we meet the approaches defined in our playbook for better, faster delivery of services, with design-by-security as our core principle. The process enables us to:

  • Identify vulnerabilities at the early stage
  • Ensure proper access management
  • Achieve security and policy compliance

They also encourage collaboration among various product teams ensuring consistency, reusability, and maintenance of products across the board.

How will the development lifecycle benefit our residents and staff?

By introducing and following our development lifecycle, we have massively improved our development process, resulting in more reliable services and faster delivery of features to applications used by both staff members and residents. The benefits achieved are:

  1. Improved service reliability: more efficient testing and automation makes services for residents and staff more stable, resulting in a better user experience.
  2. Improved availability: moving services to the cloud makes them available to staff and residents with minimal to no downtime, meeting our “cloud unless” approach.
  3. Improved communication and collaboration between residents and council services, resulting in improved decision making.
  4. A faster development process thanks to standards, code templates, and reusability, so features become available to staff and residents much quicker.
  5. Seamless integrations with other systems, providing better services to our residents through services that share the same data, giving a better understanding of residents’ circumstances and better support.
  6. An emphasis on a people-first approach, building residents’ and staff’s confidence to use council services and thereby enabling and supporting the digital transformation journey.
  7. Building our core entities’ data catalogue efficiently to provide better and more informed services to residents.
  8. Secure systems for our residents that build confidence, so that our services are so good that people use them all the time.
  9. Consistency in our service development, enabling the council to get a better return on investment and lower development costs.

What do we want to achieve – Our aspiration

The knowledge of designing and developing services on a cloud platform over the years has helped us to build aspirations/aims for us to achieve. This has been a gradual process but as a team, we have learned a lot and take immense pride in our approach of “Fail Fast” and “Trust the team”.

Our aim is to:

  • Build services so good that people use them all the time.
  • Have a straightforward developer onboarding experience and allow developers to adopt our ways of working.
  • Have a clear set of standards to promote consistency and reusability with a process to continuously review and improve those.
  • Promote seamless integrations between Hackney’s systems.
  • Promote a single source of truth approach for our core data entities to allow us to build services using the same data sources.
  • Build and deploy services with the user, data and technology need at heart along with designing from security first principles.
  • Continuously iterate our playbooks and approaches to reflect the current market trends.
  • Achieve the target state architecture and ensure the service development follows the best practices as defined.
  • Continue to adopt practices that reduce development time, allow for a faster feedback loop, provide a cost-efficient approach and ensure security and data privacy by design.
  • Knowledge sharing and better collaboration.
  • To build and collect data better and consistently across the board for a unified platform view.
  • Enable the team to achieve our MOKRs efficiently and justify spending effectively through rigorous, iterative blue-green deployments.

What’s next?

The details of each stage of our development lifecycle will follow in our next blog, which will set out how we ensure assurance at each individual stage and practise what we preach.

Please feel free to reach out to us as we would like to hear your views/feedback on the above.

By

HackIT Development Team

Reflecting on my experience of LOTI x Upfront

About LOTI x Upfront

Upfront Bond is a six-week online confidence course designed for and by women. Participants of the course join a ‘bond’ (a collective noun for a group of women) to take this journey with, and learn from Upfront founder Lauren Currie’s practical tools and insights to help women:

  • Feel more confident and assertive at work
  • Become more professionally visible
  • Combat and overcome nerves
  • Strengthen their own voice

Recently the London Office for Technology & Innovation (LOTI) invested in over 80 places on this course for women across London local authorities to take part. I was one of those women and wanted to share my experience of the course and my own relationship with confidence.

Why did you sign up for Upfront?

I signed up for a number of reasons. Firstly, I wanted to overcome imposter syndrome and this was really good timing for me having just been promoted to a more senior role. I can usually project confidence externally (e.g. I can speak up in meetings and give presentations) but I often feel inner self-doubt that can make this a pretty uncomfortable experience.

Having moved up into the role of Data & Insight Manager, I also wanted to take this opportunity to develop skills that would help me support my team better. I wanted to be more confident in setting boundaries and saying no so that I could protect the team’s capacity and well-being. I also wanted to be a better coach and cheerleader so that members of the team feel safe and confident themselves.

What were your doubts going in?

I had been to an information session about Upfront last summer and after this had thought that it wasn’t for me. It’s hard to describe, but it felt a bit…much. It felt very extroverted and I wasn’t sure there was room for different presentations of confidence. However, I had the opportunity to discuss these doubts with a colleague who had just completed the course. I’m happy to say that the discussions and content were a lot more nuanced and thoughtful than I expected.

I was also a little wary of the idea that women had to ‘fix’ themselves and act more like men in order to be (or be seen as) confident. I’ve felt conflicted about advice I’d heard in the past about women changing the way that they speak or write – I quite like using exclamation points in my emails, thank you! However, I was pleasantly surprised that in the first ‘Live’ session Lauren addressed this point head on and said that the problem is not that women aren’t confident, it’s that society generally doesn’t reward confidence in women.

How did you make the time?

I needed to dedicate 2-3 hours each week to Upfront. It wasn’t easy, but at the start of the course I blocked out 2 hours of focus time each week and rearranged regular meetings which clashed with the ‘Live’ sessions. I also had to miss stand ups, delegate things more, and be Upfront in saying ‘no’ to other meetings in order to stick to my plan.

What did you learn? What were your top 3 takeaways?

I think I already knew, intellectually, most of the things we learned on the course. However, I didn’t really believe them to be true. This course was an opportunity for me to focus on my own confidence and really reflect on how it plays out in the workplace, with the support of other women. My top 3 takeaways from the course were:

  • I don’t need a good reason to say ‘no’ to something. I need a good reason to say ‘yes’.
  • I don’t need to apologise for things that aren’t my fault.
  • Work doesn’t always speak for itself. I need to take credit for my achievements.

What have you done differently?

The Upfront course has caused me to reflect a lot more on how I think about work and act in certain situations. Here are a few of the things that happened during my 6 weeks:

  • I copied someone into an email chain and got a somewhat angry response from the original sender saying they didn’t want that person to be included. I very nearly apologised but after I typed the words I stopped to think ‘what am I actually apologising for?’ I hadn’t done anything wrong in this situation. I deleted it and instead enquired about why this was a problem. Before Upfront, I probably would have apologised because I wanted the other person to like me, and then thought about it all evening.
  • I was in a meeting where a colleague was given credit for some of my work. I didn’t speak up because at the time it didn’t feel important and I wanted to focus on the content rather than who did what. However, I thought a lot more about this after the fact and wished I had corrected this at the time. Hopefully I’ll act differently next time.
  • I introduced ‘weekly wins’ in team stand-ups to start sharing some of our learning with the wider team. This is an Upfront practice where you reflect and share something that you’re proud of, with the idea that this aims to increase your motivation, sense of accomplishment, and feeling of happiness.
  • I encouraged a colleague to be Upfront herself when faced with a difficult work situation which led to a positive outcome for her, her team, and Hackney.

Overall, I think I’ve felt a bit less anxious about work (even my husband has commented that I seem less stressed recently). I am also reflecting a lot more about the unique value that I bring to the council and my team.

What’s next?

It’s unrealistic to expect a six-week course to change a lifetime of social messaging and learned behaviours. There is also a wealth of resources in the course that I’ve only scratched the surface of. This is the beginning of a journey, and I’ve identified a few areas that I want to work on in greater depth now:

  • Taking credit for my work: I am already fairly visible at work but I’m often shining a light on the work of the team (and rightly so, they’re brilliant) rather than my own. I think this is tricky for managers, as we often don’t have tangible outputs we can point to and we don’t want to take credit for the outcomes when it was a team effort. I want to find better ways of identifying the unique value I’m adding to Hackney and shining a light on this too.
  • Being thoughtful with the language I use: I’ve already made progress by stopping myself from mindlessly saying ‘sorry’ and using filler words/phrases like ‘I think’, ‘just’ etc. but I don’t necessarily want to eliminate these words from my vocabulary. I want to be more mindful about my language and use them in appropriate contexts.
  • Building ladders behind me: how do I best role model these behaviours and encourage other women to embark on their own confidence journey? (hopefully this blog is a start).

I hope that with the grounding the Upfront course has given me, along with the support of other women (via our emerging LOTI Women’s Network and the Upfront Global Bond) I’ll be able to make some real progress in these areas this year.

Modern Tools for Housing – Programme week notes 11/2/22

The Modern Tools for Housing programme is the development of a suite of software products developed in collaboration between HackIT and Hackney Council’s Housing Department to support our staff and our tenant and leasehold residents. It currently covers housing repairs, home management, housing finance and support for our arrears collection team.

Big news this week: with the exception of two small teams, we are moving to 100% digital working in the DLO – no more printing out work orders! On to highlights from our workstreams.

Managed Arrears – Silvia – workstream DM

  • Celebrations:
    • Large parts of the NoSP work already completed
    • Work on Rules Engine and Components Library started
  • Challenges:
    • The team had a few challenging moments with the Ruby on Rails framework, which have now been resolved

Finance – Kate – workstream DM

  • Celebrations:
    • The Council Finance team is undertaking user acceptance testing
    • Two week sprint cycles commenced on 7th February 2022
    • A new member joins the team; welcome, Duncan
    • Direct Debit functionality ready for final testing
  • Challenges:
    • The data migration! It’s been one of those challenging processes where just as one problem is resolved, another emerges.

Manage My Home – Yvonne – workstream DM

  • Celebrations:
    • Got a full team again
    • Started our first full sprint
  • Challenges:
    • Learning and doing means pace is slower, but this is to be expected. 
    • Ensuring we’re capturing and understanding the user needs while we transition between new team members. 

Repairs – Sarah – HackIT workstream DM

  • Celebrations:
    • Work orders will no longer be printed (unless by exception) from Monday 14th February. This means the following for our DLO staff:
      • Planners will no longer need to produce daily pdf run sheets
      • DLO admin no longer need to print & distribute run sheets each day
      • Reactive operatives will receive their work 100% digitally
    • Operatives can now click a tenant’s number from a work order and call them directly from a withheld number
    • A support site is now linked within mobile working so that operatives have more information available
    • Search by address or work order is now available in the bonus app. This is especially important for supervisors and managers when investigating queries from operatives about missing jobs that they’re expecting to be paid for.
  • Challenges
    • Issues with AWS (our cloud hosting provider) have made it tricky for us to deploy everything

Everything continues to gather pace across the Modern Tools for Housing programme as we can see from the big list of celebrations above. In addition to all of that we’ve also moved forward on a number of additional things.

Great work continues to prepare our major repairs partner Purdy to start using Repairs Hub and our Mobile Repairs solution. Purdy supplies a significant number of multi-trade operatives. Once they are onboarded, Housing will be better able to send out people who can fix a number of issues in a tenant’s property in just one trip.

We’re going to be significantly improving our ability to oversee planned maintenance online. This means a better service for residents for things like common area painting and lift upkeep.

We have received approval for our planned staffing-up of the new ongoing product team to replace the current project team. We’re currently discussing the finalised HackIT 3.0 job descriptions and expect to be able to start recruiting in the next 1-2 weeks. The product team will enable everyone to have confidence that the Council will support our repairs products and continue to invest in their iterative improvement over the long term.

We’re also moving forward with standing up a short project to introduce the open source Repairs Online product (originally funded by LOTI), which will integrate with our existing systems. We eventually expect a high number of our residents to raise repairs online and receive update notifications, providing a better service and reducing the number of people who need to call our Repairs Contact Centre.

Over in Manage My Home we’ve agreed a date for the upcoming Government Service Standard review. We’re very pleased that the panel for this will be almost entirely made up of people from outside of HackIT including members from NHSX and the DWP. This review will enable the team to discuss the work they’ve done to date and their plans for the future against the 14 points in the Standard and get guidance from very experienced people on how to proceed.

Finally for today, a quick recap on the “ways of working” workshops that we’re undertaking as part of the “soft reboot” of the programme, to ensure that everyone across the whole programme has an opportunity to collaborate on defining how we work together. The first of those took place on the 7th of February, facilitated by my Delivery Manager colleague Darren Aitcheson (thanks!), and produced an agreed set of Programme Values that we can use to guide us from now on.

We’re firming up future ways of working workshops, but at the moment we’re thinking about sessions on Agile delivery, technical and quality standards, meetings and documents, programme metrics, and product deployment and support. Each of those will help all of our teams build agreed ways to work together that will streamline and accelerate our delivery to our users.