Yesterday I spent some time thinking about the projects I worked on back in the days when we were still getting excited about Gershon efficiency, and the idea of 30% budget reductions across the board would still have been a hilarious joke in the mineral-water-stocked meeting room.
It struck me that on a basic level, there isn’t much difference – at their core, most projects come down to using a set of information to drive an evidence-based decision intended to maximise value for money in delivering a service and/or to improve the service for the resident. So why is it (aside from the somewhat more pressing budget imperative, of course) that today’s projects feel so different?
The difference is in the data. Back in 2009, not a project went by without my carefully analysing, comparing and contrasting the spend and service provision of whichever council I was working with against all its “nearest neighbours” (comparator councils, really), as well as, well, its actual neighbours. Thanks to the much-maligned national indicators, there was a rich and freely available pool of standard facts and figures which councils simply had no choice but to submit. It was a consultant’s data dream.
That’s not to say that there were no pitfalls to the mass data capture exercise marshalled by the Audit Commission. Firstly, I know from having worked first-hand on a project around how the data was captured and reported in a big district that the resource required to feed the information in was disproportionate, and it certainly has no place in an economic environment where even front-line services are feeling the pain. My colleague Matt would also chastise me if I didn’t point out that there were weaknesses in the data sets, driven by each council categorising and reporting things very differently to suit its own agenda.
But the data did make a difference. I could see at a glance, for example, that Fantasy City Council was already spending the least in the area on waste collection and yet was delivering the highest-quality service, so there was no point investing resource in identifying savings there; at the same time, it was collecting 2% less in council tax a year, amounting to a hefty potential increase in revenue. If a seaside town told me they couldn’t make savings on revenues and benefits because of their transient population, I could easily cite four other seaside towns that spent half as much despite facing the same issue.
The current pressures mean that every single council at every tier in the country is taking radical measures to restructure, prioritise, rethink and redesign service delivery across the board – and yet, at a time of massive change, the ability to work out who is doing what well and cheaply is harder than it has ever been. By far the most common question I am asked by local government officers is “can you tell me what other people are doing?” Imagine if, when considering a shared service option, you could see the three closest authorities that delivered an equal quality of service for less money than you, and approach them? Imagine if you could analyse the service quality statistics against in-house vs. commissioned delivery and see whether there was any difference at all? Imagine if you could sense-check every so-called “best practice” example you came across by seeing how much it was really costing the service? The possibilities are endless.
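For the spreadsheet-minded, the “three closest authorities” test above is really just a filter and a sort over a comparator table. Here is a minimal sketch of what that lookup might look like if the data were still freely available – every council name and figure below is invented purely for illustration:

```python
# Hypothetical comparator lookup: given each council's spend per head and a
# service-quality score (all figures invented), list the authorities that
# deliver equal-or-better quality for less money than a given council.

councils = {
    # name: (spend per head in GBP, quality score out of 100)
    "Fantasy City":    (38.0, 91),
    "Seaside Borough": (52.5, 78),
    "Market District": (44.0, 85),
    "River Vale":      (41.5, 88),
    "Hill County":     (49.0, 82),
}

def cheaper_comparators(me, data, top_n=3):
    """Return up to top_n councils that match or beat our quality for less spend."""
    my_spend, my_quality = data[me]
    candidates = [
        (name, spend, quality)
        for name, (spend, quality) in data.items()
        if name != me and spend < my_spend and quality >= my_quality
    ]
    # "Closest" here means the smallest spend gap to our own figure
    candidates.sort(key=lambda c: my_spend - c[1])
    return candidates[:top_n]

for name, spend, quality in cheaper_comparators("Hill County", councils):
    print(f"{name}: £{spend:.2f} per head, quality {quality}")
```

Nothing clever is going on – it is the sort of ranking any analyst would knock up in an afternoon. The point is that the exercise is only possible at all when the underlying figures are collected on a common basis.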
Sure, I’m a consultant, and it’s in my job description to be a little bit overenthusiastic about Excel spreadsheets. That said, it seems to me that if the most difficult decisions of the next few years are to be as robust as possible, we could use all the information we can get.