Meet me at the port to see real fireworks

Network Redesign Week notes 01/11

In the last couple of weeks, we have been fairly quiet with the ‘Week’s notes’. I wanted to regroup with the team to get a better insight into the direction we are heading in. The whole ‘Port density’ thing can be a bit overwhelming even for the most tech-savvy person, so it was important to check in and make sure we were all on the same page.


Objectives

Following the last sprint, our engineers have concluded testing in the Cisco test lab and workshops. The goal of these workshops is to agree on the best vendor network and switch solution for the current Hackney network infrastructure. All hardware and network providers have been tested against rigorous performance criteria that, among other requirements, review boot loading, ease of configuration, troubleshooting and remote diagnostic solutions alongside, naturally, the best kit for the money spent.

What we have achieved

To date, we have reviewed three vendors, namely:

  • Cisco
  • HP
  • Open Compute

Based on the outcome of the review, our engineers will advise on their recommended solution, which will be presented to the rest of the team. This recommendation will be drafted to help inform the final decision on our distribution and core solution for the web first project.

We have also been adding the final touches to the port density audits. At this point, you might be asking: what is a port density audit? In short, it is a manual count of all the network ports located at workstations throughout the Hackney Council buildings.

With this information, the team will be able to see which ports are used for actual incoming calls and which are dormant, so that we can total up the number that can be decommissioned.

Our recommendation document has now been adjusted to include a filter, revealing a recommended switch reduction grand total based on the current total usage of switches within council network hubs versus the total number of ports available.
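
For illustration, here is a minimal sketch of the kind of calculation that filter performs, assuming hypothetical port counts and a notional 48-port switch size (none of these figures come from the audit itself):

```typescript
// Hypothetical sketch of the switch-reduction calculation described above.
// The port counts and the 48-port switch size are illustrative assumptions,
// not figures from the audit.

interface HubAudit {
  hub: string;
  portsAvailable: number; // total ports across the hub's current switches
  portsInUse: number;     // ports counted as active during the audit
}

const PORTS_PER_SWITCH = 48; // assumed switch size

function recommendedSwitchReduction(audits: HubAudit[]): number {
  let currentSwitches = 0;
  let requiredSwitches = 0;
  for (const audit of audits) {
    currentSwitches += Math.ceil(audit.portsAvailable / PORTS_PER_SWITCH);
    requiredSwitches += Math.ceil(audit.portsInUse / PORTS_PER_SWITCH);
  }
  return currentSwitches - requiredSwitches; // switches that could be decommissioned
}

// Example with made-up numbers: 96 ports installed but only 30 in use
console.log(recommendedSwitchReduction([
  { hub: "Example hub", portsAvailable: 96, portsInUse: 30 },
])); // -> 1
```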

We have also worked on completing our recommended ‘Firewall’ design, which will secure the network connection for the cloud and the data centre.

What we have found challenging

Obtaining approval for some proposed cable installations has been challenging due to a lack of understanding of the upfront requirements. Many thanks to @peter.burt and the rest of the planning team for their hard work and diligence in helping us build a better understanding of the application process for such installations. We are now gaining momentum in this area.

Next steps

  • Complete legal authorisation documentation for the Wayleaves contract
  • Present the final report recommendations on the proposed distribution and core solution to the team in accordance with our architectural design
  • Align hardware requirements with the ‘VOIP’ (Voice over Internet Protocol) strategy requirements



Manage Arrears weeknotes: w.c. 28/10/2019

Phase 3, Sprint 4

What have we been doing this week:

This week the team have been focusing on providing Leasehold Services with the ability to download the Letter Before Action (LBA) from the Manage Arrears system. It has been slightly challenging because the current setup with Gov Notify is tightly coupled with various functions within the system. The team are very close to finishing this task. They have also been working on implementing Google Auth.

Joe Bramwell, our UI/UX expert, carried out some research into how Legal caseworkers use UH Documents. We recognise that this is an extremely important feature for caseworkers, and if this need can be addressed it would greatly reduce officers’ reliance on UH. The purpose of this task was to find out how documents are used by caseworkers so that we can understand how to implement a version of this functionality in the new application for a future phase, and to see if there are any opportunities for innovation.

What is planned for next week:

We plan to continue working through our sprint tasks, which include:

1. Generating the LBA letter

2. Building the ‘Timeline’ view

3. Displaying the court date

4. Switching to Google Auth

5. Separating Leasehold Services from Income Collection within the system.

At the end of this sprint, we look forward to giving our Product Owner the opportunity to personally use the Manage Arrears product in ‘Patches’, with all of the newly completed features to date, and to share his feedback on the MVP.

Improving the Repairs Hub 1st November Weeknotes

It’s week 3 of our 4 week push to improve the repairs hub for agents using it every day in the Repairs Contact Centre (RCC). If you want to see what we’ve been up to in the previous weeks, our weeknotes are publicly available to view.

This week, the focus has been squarely on identifying and preparing for the work Igor from Unboxed will be completing next week. He’s going to be doing lots of front end work to:

  • Flag when a dwelling is a new build
  • Clearly display the legal disrepair status of a dwelling
  • Flag if there is any cautionary information and what that is
  • Make error messages clear and informative

We are also going to try and show the tenant’s name and contact number to agents when they are raising a ticket, and include details of who added a note and when (these are our stretch goals).

The preparation

Bukky and Richard worked together to diagnose why some properties in Universal Housing were correctly alerting as under legal disrepair but were not alerting in the repairs hub. They eventually tracked this down to mismatching reference numbers. Bukky updated the query (to include a reference number we didn’t know we should be using) and, hey presto, all of the alerts now show in both Universal Housing and the repairs hub! Hoorah!

Go team!

Currently, address and contact alerts come through as one piece of information, so Bukky needs to make a further change to deliver these as distinct entities. He also needs to create a mapping between the abbreviations we currently see and the actual alert message. We can then display plain English alerts to agents rather than coded messages.
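
As a rough illustration of what that mapping could look like (the codes and messages below are invented, not the real Universal Housing abbreviations):

```typescript
// Illustrative sketch only: these alert codes and messages are made up to show
// the shape of the mapping, not the actual abbreviations used in Universal Housing.
const alertMessages: Record<string, string> = {
  CV: "Cautionary contact: visit in pairs",
  LD: "Property is under legal disrepair",
  NB: "Property is a new build",
};

// Fall back to showing the raw code if we don't recognise it, so nothing is hidden.
function describeAlert(code: string): string {
  return alertMessages[code] ?? `Unrecognised alert code: ${code}`;
}
```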

Chris Lynham reviewed our new error messages and confirmed that the information included in each error is enough for the support team to progress issues with, and is in plain English for users. He also advised on the correct next steps to suggest alongside errors, so we know we’re asking users to take the right action once they’ve seen one. We also checked them against GDS’ error message guidance in the design system. We can’t use their styling, but we have made sure to follow their advice on what makes a good error and what doesn’t. I think we’ve nailed it and our new error messages are now ready to go (as soon as Bukky has updated them)!

We also prepared HotJar to be added to the NCC CRM this week. Although it’s a few weeks until we have any developer time to do this, HotJar is all set up with the new feedback survey so will be ready for RCC agents to start putting feedback into once the tracking code is on the NCC CRM pages.

The pickle (or bit of a mess)

This week, we’ve managed to lose our way around notes. There are a plethora of questions that need answering; Which notes should we see in the repairs hub? Where do they come from in Universal Housing? What type of notes do we need to add via the repairs hub? How do we do that? What do we need to change about the notes we already have? What privacy considerations do we need to make with regard to notes?

Of the questions we have, we managed to answer none and, instead, generated more. It’s a pretty disappointing result, as decoding notes was a real focus for this week and it feels like we’ve gone backwards. We’re hoping that when Barnes returns from his leave next week, we’ll be able to get the direction and focus we need around notes to make sense of them.

The postponement

We’ve had to swallow a bitter pill this week. We had high hopes of moving to Google Single Sign On (SSO) for the repairs hub but we’ve been struck by technical issues which we are unlikely to clear before our 4 weeks is up. The first is that, for security reasons, all service environments (live, dev and staging) need to be on a *.hackey.gov.uk domain. Our environments aren’t. We would also need to do this for the Neighbourhood Contact Centre CRM (NCC CRM) system to make the change really valuable. This isn’t a huge deal to resolve but it is extra work we didn’t expect. Secondly, we discovered that credentials agents use for logging in to NCC CRM might power much more background than we thought. We’re investigating this now to see what login details are used for apart from authentication; We’re hoping nothing as that means switching to Google SSO for both services is doable, just not within our time frame. If the answer is ‘some other stuff’ we have a much more difficult thing to untangle. However, Google SSO is our chosen solution for identity and authentication so we won’t be letting it drop!

The perspective

Bukky and Emma spent two sessions shadowing calls with RCC agents this week. Thanks so much to the team over in Robert House for making us so welcome! It was amazing and very valuable to see, first hand, how the systems are used together and to get insight into how a 20-second ‘annoyance’ per call can add up to minutes and hours over the course of days and weeks. We’ll be adding a few more ‘quick wins’ to our backlog, that’s for sure!

We also had some RCC agents join us at stand up this week to see just how we are communicating to get things ‘done’ in these 4 weeks. It was great to see some fresh faces and get their extra point of view on the services and our approach. Thanks to all those who took 15 minutes out to attend.

Our last show & tell is on Friday 8th November at 12:00. If you’re interested in coming, drop us a line!

Manage a Tenancy week-notes 2019-11-01

Week notes are a way for us to keep people informed about progress on the project. Given the technical nature of the re-platforming work we will use them to explain technical choices that we are making, including the benefit and impact of these choices.  

Project goals

  • Enable housing officers to use MaT offline when there are mobile coverage blackspots
  • Enable Hackney to decommission Outsystems, saving £80,000 per year
  • Enable Hackney to support, develop and deploy future improvements to MaT more quickly, at lower cost

The goal for this first sprint is to lay the technical foundations that will help us deliver the project successfully. This includes:

  • Setting up a deployment pipeline. This will allow us to release working code smoothly.
  • Building a data-driven front-end system. This will reduce the amount of effort required to build and improve processes (see the sketch after this list).
  • Beginning to populate a react component library. This will allow us to reuse visual components when building processes, speeding up the process while also maintaining consistency in design.
  • Further defining how we deliver the project in the right way (this includes carrying out a privacy impact assessment, identifying priorities for the service assessment and agreeing what user testing we need). This will reduce the risk of us hitting pitfalls later on.  
  • Identifying a suitable database solution for storing data recorded as part of MaT processes; it will need to be flexible and extensible enough to accommodate the various data structures of the different processes.
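
To make the data-driven front-end idea more concrete, here is a minimal, hypothetical sketch in React/TypeScript: the process is described as data (an array of steps) and a single generic component renders whichever step is current. The step names, fields and component names are invented for illustration and are not the project’s actual design.

```tsx
// Hypothetical sketch of a data-driven multipage form. The process definition
// below is made-up example data, not a real MaT process.
import React, { useState } from "react";

interface Step {
  slug: string;
  title: string;
  fields: { name: string; label: string }[];
}

// A process defined as data rather than as hand-written pages
const exampleProcessSteps: Step[] = [
  { slug: "visit", title: "About the visit", fields: [{ name: "date", label: "Date of visit" }] },
  { slug: "household", title: "Household details", fields: [{ name: "occupants", label: "Number of occupants" }] },
];

// One generic component walks through whatever steps it is given
function ProcessForm({ steps }: { steps: Step[] }) {
  const [index, setIndex] = useState(0);
  const [values, setValues] = useState<Record<string, string>>({});
  const step = steps[index];

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        setIndex(Math.min(index + 1, steps.length - 1));
      }}
    >
      <h2>{step.title}</h2>
      {step.fields.map((f) => (
        <label key={f.name}>
          {f.label}
          <input
            value={values[f.name] ?? ""}
            onChange={(e) => setValues({ ...values, [f.name]: e.target.value })}
          />
        </label>
      ))}
      <button type="submit">Next</button>
    </form>
  );
}

export default function ExampleProcess() {
  return <ProcessForm steps={exampleProcessSteps} />;
}
```

Under this pattern, adding or changing a step becomes a data change rather than a new page to build, which is where the effort saving comes from.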

Good Things

  • We’ve started development work. We’re celebrating the fact that we have officially started sprinting and the re-platforming work has properly begun. 
  • Deployment Pipeline is going well. We’ve made a decision on what the minimum requirement is for a pipeline and this is now being implemented. We don’t think that the timeline for this will have a negative impact on the project.
  • We can incorporate some UX improvements. Many of the UX improvements identified by Gill can be put in place without the need to go through lots of user testing and iterating. This means we can safely make these changes in the time that dxw and Hackney have working together. There are some exceptions to this, particularly when it comes to screens or guidance that will help users work offline. These are important parts of the project, and we want to make sure that there is a good offline user experience, which will mean getting some user feedback.
  • We are making progress on identifying the best database (DB) solution for storing process information. We are looking into NoSQL databases, particularly document DBs. We’re looking at the costs involved and are testing different solutions with dummy data. We expect that document DBs will be the right fit for MaT processes, as they do not have a predefined schema; this means they can easily accommodate different data formats (see the sketch after this list). We are also considering reporting needs when testing those solutions.
  • We are working well as a blended team. Team members are communicating well with each other. This is helped by the new co-working space that we have. 
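
To show why a schema-less document store appeals here, below is a small, invented illustration of two differently shaped process records sitting side by side; the field names are assumptions for the example, not the real MaT data model.

```typescript
// Hypothetical illustration: records from different processes can live in the
// same collection without sharing a schema. All field names are invented.
const tenancyAndHouseholdCheck = {
  processType: "tenancy-and-household-check",
  tenancyRef: "0123456/01",
  household: [{ name: "A. Resident", relationship: "tenant" }],
};

const homeCheck = {
  processType: "home-check",
  tenancyRef: "0123456/01",
  rooms: [{ name: "kitchen", condition: "good" }],
  followUpRequired: false,
};

// Both shapes could be stored in the same collection and retrieved by
// processType, with no schema migration needed when a process changes.
const processDocuments = [tenancyAndHouseholdCheck, homeCheck];
console.log(processDocuments.filter((d) => d.processType === "home-check"));
```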

Learned things

  • We need to make the project accessible to non-technical people, and we need to demonstrate progress quickly. Lorraine and David both gave us feedback on how to communicate what we’re doing in a way that will build trust with the Housing team. This feedback will help us frame things like Show and Tells in the right way. 
  • HackIT colleagues (Emma, Mirela, and Shweta) learned about the proposed approach for building data-driven front-ends. We agreed to build a library to be used by all processes (and beyond) to “orchestrate” the multipage form pattern they all follow.
  • Hackney is still deciding how to host the replatformed service. We have talked about possibly setting up a new AWS account (or an organisational unit within a “root” account) to increase confidence around deploying infrastructure and reduce the potential impact on other projects.

Difficulties

Most of the challenges we’re currently working through are about the same thing – people’s availability and capacity to deliver what we need. 

  • Shweta is leaving HackIT. We’re going to have a new developer joining the team at around the same time but it’s going to take a little while for them to come up to speed.
  • Front-end development skills are not currently widely held in Hackney Council. Emma is going to keep an eye on what we’re doing via sprint planning sessions and code reviews. Ideally she will be able to get involved in replatforming at least one process, so that she can become more familiar with React.
  • We want to minimise the risk of any individual being a single point of failure. The way to mitigate this is by making sure our work and decisions are well documented and that people are empowered to make decisions. 

Acknowledgements

  • Thanks to Lorraine for helping us think about how we talk about progress in a way that will build trust with the Housing team. 
  • Thanks to Emma for joining sprint planning. We know how busy you are, so this is really appreciated.
  • Thanks to everyone else who worked on the project this week – Chris, David, F, Gill, Liudvikas, Mirela, Richard and Shweta  

What’s next

We’re going to continue laying the technical foundations, which will help us deliver the rest of the project at pace. Specifically we will:

  • Finish setting up an MVP continuous integration pipeline, for both the front-end and back-end work.
  • Finish building the outline of the data-driven front-end system.
  • Continue rebuilding the existing pattern library components in React, in a way that minimises future maintenance work.
  • Carry out the privacy impact assessment and capture what we think is important for the service assessment. 
  • Finalise which elements of the new Tenancy and Household check process can be implemented.
  • Finalise the user testing we expect to need; we’re keen to minimise the burden that this poses on Housing Officers but also want to make sure we get their feedback on the offline-specific elements. 
  • Define the database schema and get the backend ready.
  • Define what API endpoints we need to create the processes.

Next week the following people will be working on the project:

  • Chris
  • David (except for Monday)
  • F (except for Monday)
  • Gill
  • Liudvikas
  • Mirela
  • Richard
  • Shweta

Joining Up Staff Data: Week Note w/c 21/10/19 – Alpha (Experiment).

It’s the end of the first week of our first sprint in experiment mode.

We’re working 20 hours a week and exploring various ways of working to make some headway in to this.

Because it’s an experiment, I am going to set up a G+ Community to post a few more updates and snippets as we go along – we seem to be moving at pace which is a good thing, but we want to make sure that we take people with us and that we don’t whizz off at 100mph in a direction that no one wants us to!

Our goal for this Alpha is that, by the end, we will have identified and created a means to join people data together across systems and will have used this to deliver key insights about the business.

We’re now sure how we’re going to do this yet, but that’s because it’s too early to do so.

I received a comment on my week note last week that read: “Just to say that people data has been an issue in every organisation I’ve done IM work in – so kudos for tackling it and don’t despair!” So it seems we’re not alone in giving this a crack, and we may want to explore what else has been done to crack this code. (Before we get too excited, the same message also said: “I once had someone say to one of my suggestions, ‘when you’ve worked here as long as I have, ****, you’ll know that will never work’.”)

However, we’re giving this our best crack and so far we are making headway.

In similar fashion to the experiment that ran before us, we are moving faster than most of us expected. The hurdles and blockages that we have encountered so far have been surmountable and we have started to get a feel for the data and the quality of it.

The temptation with projects such as this is to get lost in an endless pit of data examination…

  • What links to what?
  • How does it get there and where does it go?
  • How well are all of the fields populated?
  • What is the thinking behind this entry and that?
  • What percentage of anything is applicable?

We tackled this early on by committing ourselves to two major foundation stones:

  1. We are just testing something here (this is an alpha, not everything for ever).
  2. We have to know when we have ‘enough’ as opposed to knowing everything about everything.

With these two driving forces in mind, we whittled down the systems we would investigate most closely to:

  • iTrent
  • Active Directory
  • G Suite

We decided we would have a look at Matrix as well, although we recognise this is not a Council-administered system and so there will be some limitations here.

After a prioritisation exercise ‘borrowed’ from Housing Data In The Cloud, we also opted to focus on five key information fields that were either present in each system (or we believe have the potential to be present in each system). These were:

  • First Name
  • Surname
  • National Insurance Number
  • Employee Number
  • Hackney email address.

We were pleased with these choices and we think they’ll make great candidates for a deep dive investigation to see what is actually there (or what isn’t as the case may be), so that we can begin to think about how we might be able to bring the data together to start getting some insights for the business.
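
As a very rough sketch of the kind of join we have in mind, the snippet below groups invented records from the three systems on a shared email address or employee number. The field names, matching rule and example data are all assumptions for illustration (and the National Insurance Number is deliberately left out of the example).

```typescript
// Hypothetical sketch of joining people records across systems on the key
// fields the team chose. Field names and records are invented examples.

interface PersonRecord {
  source: "iTrent" | "ActiveDirectory" | "GSuite";
  firstName?: string;
  surname?: string;
  employeeNumber?: string;
  email?: string; // Hackney email address
}

// Group records that share an email address or, failing that, an employee number
function joinBySharedKeys(records: PersonRecord[]): PersonRecord[][] {
  const groups = new Map<string, PersonRecord[]>();
  for (const record of records) {
    const key = record.email?.toLowerCase() ?? record.employeeNumber;
    if (!key) continue; // records missing both keys would need manual review
    const group = groups.get(key) ?? [];
    group.push(record);
    groups.set(key, group);
  }
  return [...groups.values()];
}

// Example: the same (made-up) person appearing in two systems
console.log(joinBySharedKeys([
  { source: "iTrent", firstName: "Sam", surname: "Example", employeeNumber: "12345", email: "sam.example@hackney.gov.uk" },
  { source: "GSuite", firstName: "Sam", surname: "Example", email: "Sam.Example@hackney.gov.uk" },
]));
```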

This coming week, we’re going to proceed with our ‘deep dive’ so that we can create a show and tell that helps us work with our stakeholders on the ‘what’ aspect of this alpha:

  • What the data sets are
  • What the quality of the data is like
  • What they look like on each of the systems
  • What’s missing
  • What missing or corrupted data means for this project.

This will help us to form the ‘how’ part of this project, which will be the main thrust of sprint 2.

But look at me getting all ahead of myself! Right now, we’re pleased that we worked hard and came up with some criteria, the data sets that we’re going to examine this week and the systems that we’re interested in having a look at.

In amongst all of this, we were also the first users of the new multi-disciplinary desk area, and we used it to great effect… we think it’s a hit. On Monday, we couldn’t get onto the desks – so it seems they were a hit with everyone else too.

(But we were first! #earlyadopters)

Please look out for the G+ community this week, and if you can’t find it, feel free to shout at me for not getting it up and running yet!

Until Sprint 2.