Many have said for some time that the CQC is underperforming, a view recognised in Penny Dash’s interim report into the effectiveness of the CQC, published on 26 July 2024 (the “Dash Report”).
We have previously written about many of the concerns highlighted in the Dash Report, but two areas of the report really stood out. It was published while I was working on a case concerning a Notice of Decision issued to a client that sought to cancel their registration, and both areas were pertinent to the action being taken against our client.
The two areas are:
- The CQC’s increased emphasis on service user voice
- The lack of focus on outcomes of care within the Single Assessment Framework
The CQC’s increased emphasis on service user voice
We have frequently spoken about the CQC’s desire to place an increased emphasis on service user voice. This was one of the fundamental building blocks of the CQC’s 2021 Strategy.
We have always said that service user views are important, as one piece of evidence to be weighed alongside the other available evidence, both good and bad. Yet we often see statements from service users presented in inspection reports as fact, whether or not they are accurate or representative of the wider service user population.
The Dash Report found that, despite this greater emphasis on people’s experience of the care they receive, it was not clear what data was looked at. The review did not find any published description of the statistical analysis applied by the CQC, the response rate required, or how representation across users, patients and staff is ensured.
This was noted as Concern 4 in the Dash Report, where it was stated that ‘interviews could be as few as tens of users of a service when the service is looking after thousands of people a year’.
The Dash Report is clearly expecting a breadth of service user opinion that is representative, or statistically significant, at service level. This has to be the right approach.
The CQC does not investigate individual service user complaints, so it is appropriate that the Dash Report has raised this point about the current use of service user opinions. Rarely, if ever, do you see a report that says:
- “Service user X told us Y, and we found Z to support this” or
- “Service user X told us Y, but we found nothing to support this.”
Frequently, a report simply says: “Service user X told us Y.” The fact that a service user said this may be accurate, but the underlying facts are rarely tested, either in relation to the specific service user or in the wider context of the service, to see whether this is indicative of a larger concern.
Proportionality can also be questioned at this stage: one service user’s experience, even if accurate, is not necessarily representative of the experiences of the wider service user population. Any action taken by the CQC should be proportionate to the perceived risk.
Sector concerns about how the CQC includes negative comments in reports are well documented. Indeed, the CQC has chosen to focus on information of concern to trigger risk-based inspections, rather than on positive information to trigger inspections that might lead to an uplift in rating.
So, despite this increased emphasis on service user voice (even if, as the Dash Report notes, it is being used in an individualistic rather than a holistic way), what we found very interesting in the recent case mentioned above was the lack of service user opinion gathered. When taking enforcement action to close a business based on concerns about the care of service users, we would have thought the action would be informed by service user views.
For example, one of the allegations was that a lounge was very cramped, with a lot of service users in it. At no point did the CQC present any evidence that it had spoken to those service users to determine whether they had chosen to be there, or whether they were unable to leave or to ask to be moved had they wanted to. Equally, a relative expressed a wish for certain action to be taken in relation to their loved one. The CQC presented no evidence that it had asked the service user what their own preference was, nor any evidence that the service user was unable to communicate that preference.
Again, I am not saying that services should remain open just because service users are living in blissful ignorance of a potential risk of harm, but assumptions cannot be made simply because they suit the narrative.
Outcomes of care in reports
The Dash Report highlights that outcomes do not readily feature in the evidence categories under the Quality Statements.
In the Notice of Decision, the vast majority of the allegations underpinning the enforcement action did not point to poor outcomes for the service users concerned. Of course it is important to highlight risk, but is it right to cancel a registration on the basis of a theoretical risk?
Again, we are not suggesting that the CQC should wait and take action only after harm has occurred, but there are other enforcement options that could be used to help a provider improve.
The Dash Report emphasised the need to help the sector improve, which is one of the CQC’s core functions. So, where the situation warrants it, surely improvement is better than removing another service from a sector where demand is set to outstrip supply in the next few years.
The Dash Report notes that the sector needs, and deserves, a high-performing regulator. We doubt anyone would disagree with that.