It’s been a very light and short sprint but we’ve still made progress on security and standards https://youtu.be/lJ3DPs35oWM
What’s it all about?
This is the first weeknote for a project that’s been going on for a little while at HackIT now. This one will be fairly brief but there will be much more coming soon.
I’ll go into some of the detail in a moment, but first a quick comment about the title. It isn’t set in stone, so it may change for the next update. If that happens I’ll make sure I put links in both posts so interested readers can track what’s been happening.
Okay – this work stream is about investigating whether HackIT should invest time (and potentially budget) in a “form builder” product to enable more rapid deployment of services for users.
The work is itself divided into three parts.
Prototype service design
This is a piece of work that has been led by Rahma, with Kirstine’s help, to investigate the user needs for a specific set of HR processes. It will be used as a guide to help us decide which “form builder” tool(s) we might wish to use to build a prototype to test with.
Prototype technical integration
This is a piece of work that will start soon to verify what technical work will need to be undertaken to interface the design of the prototype with our existing HR systems.
“Form builder” needs and market analysis
You may have noticed that throughout I’ve been putting “form builder” in quotes. This is because there is still a diversity of opinion in the team as to what the requirements are for any product we might buy, reuse for free (e.g. open source) or build ourselves. The most important thing for the project right now is to lock down our fundamental set of requirements as a set of user needs.
At the same time we’re going to start looking at what systems are available that we could use. Our original estimate was that there would be three or four potential products to look at, since one of our red lines is that any tool must use GDS Design System widgets. Instead, a cursory search has turned up over 12 potential systems. Defining a set of criteria to score each of these against, based on the above user needs, is going to be key to deciding which small number we investigate further by using each of them to build the prototype discussed above.
I’m personally very interested in how not just HackIT but the much wider government digital community can make use of this kind of tooling to quickly produce services that could potentially be shared across multiple organisations. There are a number of related projects at various stages of development and support in different parts of government, so, as part of this work, I’m reaching out to see if there’s potential for creating a community to discuss what we could do together.
It’s been difficult to get this project going as everyone involved is also responsible for multiple other projects with pressing deadlines. In order to give us some impetus I’ve created regular stand-ups and show and tells. I’ll also be posting weeknotes like this every Friday to show the progress we’re making.
Where we are currently
With one week to go until we ship ETRA we’re finishing off the final set of user journeys. There have been some discussions between some of the devs and the product owner about potential simplifications, which our user researcher checked with a representative set of Housing Officers. A great example of multidisciplinary Agile working that will achieve our goals while saving some development time.
We’re hammering out the final few bugs in ITV. Meanwhile, Lorraine has gone above and beyond in creating a 30-minute introductory video for that service, which she has sent round to all the Housing Officers.
At the same time I’ve been working on a business case with some of the other HackIT DMs to address some of our shared technical debt. That’s been approved so the tender for that work stream should go up on the Digital Marketplace early next week.
As well as completing all of the above, there are two major things for next week. Firstly, we’re going to give the defect list and existing feature backlog a really good scrub. Secondly, we’re going to do a deep dive into how ready we are to launch the next round of feature development and decide whether we’re good to go or need to do some additional discovery work.
On top of that we’re starting a piece of work with the HackIT Infrastructure team on investigating some of the network connectivity issues our Housing Officers have been having when working with the iPads on some of our estates.
Very busy week next week but the team’s all pulling together and I’m very much looking forward to us making ETRA available to improve the situation for our users.
Welcome to Submit my Planning Application weeknotes.
For more general updates, continue to check our project site, which shows all the great stuff we’re doing and key information about the project. Alongside the project site, a blog post will be released very soon.
Thanks to Ana for being resilient and for ploughing through a hefty task in the previous sprint!
What we did this week
- We’ve kicked off sprint 3 where our main focus is on accounts and document uploads
- Fixed issues with data structures
- Authentication completed on the back end; next it will be connected to the front end
- Progressed preparation for the Service Assessment
- Finalised data structures
- Completed matching schema
- Established security needs
- We’ve written a full walk-through test case
- Focused on authenticating users and their data whilst logging on
- Looking at ways to measure the performance of the service
- Working with the Planning team to create the transactional emails
- Continue working on preparation for the Service Assessment
- Focus on the ‘Submit form’
- Focus on ‘Managing my account’
- Ongoing testing
What are we keeping an eye on?
- Meetings with Rashmi regarding the integration of Civica pay
- Worked with planning to choose a URL; planningapplications.hackney.gov.uk
Thanks for reading!
Planning Applications Team
(AKA Emma H, Andy B, Andy E, Ana, Matthew T, Nic and Hidayat).
This week, two key events took place on the Manage Arrears project:
- Manage Arrears Deep Dive
- Post-Mortem into the delivery of the Manage Arrears – Service Charge Letters
Manage Arrears Deep Dive
The Manage Arrears Deep Dive was facilitated by +Cate McLaurin and attended by key project stakeholders to understand the progress, benefits, constraints, challenges and opportunities identified through the work to date on Manage Arrears. The session was very insightful, with key outputs including a collective view of the MVP and an action to move phase three’s business case forward.
Manage Arrears Service Charge Arrears Letters Post-Mortem
Since the ‘bulk sending’ feature was released in May, a number of bugs have been identified. We have fixed these promptly, including the ones highlighted last week. We thought it would be useful to meet with the Leasehold Service team leads to get a shared understanding of events and lessons learned.
From the Post-Mortem, these were the key findings:
- The data in UH was poor – +Nick Prince carried out a lot of data cleansing in UH, cleaning up the addresses and sorting out the postcode and town fields. This has vastly improved the quality of the data, to the extent that the team were able to send over 1,000 letters yesterday with zero address validation failures.
- In the future, we will test UHT codes from the front end and backend to ensure there are no errors.
- Our staging (test) environment does not match the Production (live) environment; this is something we will look into. In the interim, it is good practice to send a single letter in Production each time a new template is added.
Good news: the Leasehold Service team produced 1,134 letters this week, which took approximately two minutes to run. Four of these letters failed because the address fell just outside of the printable area; we have added a card to the backlog to address this. That’s an error rate of roughly 0.35%, which makes us 21 times more efficient than when we completed the first run back in May 🙂