The protracted rollout of the CQC’s new single assessment framework is now all but complete. Assessments of registered providers in all CQC regions are already underway. With effect from next month, the new assessment framework will be used for all registration activity, whether that is an assessment, a new CQC registration application or a change to registration for existing providers.
In this third instalment of our series mapping these key regulatory reforms, we assess the mechanics of the new scoring system that will be used by the CQC to determine its critically important overall assessment rating. We consider the likely impacts of the new system and highlight some of the opportunities and challenges these important changes are likely to bring for regulated health and care providers.
What's changed and what's staying the same?
As we have previously reported, the central pillars of the CQC’s assessment approach will remain. This means that, when assessing quality of care, the CQC will still be asking providers the same 5 key questions - namely, are they ‘safe’, ‘effective’, ‘caring’, ‘responsive’ and ‘well-led’? The ratings scale is also here to stay – is the service ‘outstanding’, ‘good’, ‘requires improvement’ or ‘inadequate’?
What is changing, however, is the method by which the CQC reaches this final rating. This had previously been decided by reference to ‘ratings characteristics’. These described in text form what ‘outstanding’, ‘good’, ‘requires improvement’ and ‘inadequate’ looked like for each of the Key Lines of Enquiry (KLOEs). As with the KLOEs, however, ‘ratings characteristics’ are now a thing of the past under the new system. In their place, the CQC has introduced a brand-new scoring system.
The introduction of the four-point scale
The new scoring system is described in detail here - How we reach a rating - Care Quality Commission (cqc.org.uk). The CQC has also published a helpful worked example of how the scoring system will operate in the context of a single quality statement (relating to infection prevention and control) in a GP practice - Example for a GP practice - Care Quality Commission (cqc.org.uk).
In summary, the new system is designed to operate in a pyramid structure – or, more accurately, 5 pyramid structures – layering scores from the bottom up. At the top of each pyramid sits one of the five key questions – is the service safe, effective, caring, responsive or well-led? Below this are the quality statements that feed into the key questions. The quality statements will differ depending on the key question that is being assessed. An example of a quality statement that underpins the ‘safe’ key question is:
We assess and manage the risk of infection. We detect and control the risk of it spreading and share any concerns with appropriate agencies promptly.
Other quality statements underpinning the ‘safe’ key question might relate to an organisation’s learning culture, its safeguarding practices, staffing levels and so forth.
At the base of the pyramid are the key evidence categories that feed into the quality statements. These are the sources of evidence against which the quality statements will be assessed. In assessing each quality statement, the CQC has identified which of the six evidence categories will be relevant dependent upon the type of service. The six evidence categories are:
- People’s experience of health and care services
- Feedback from staff and leaders
- Feedback from partners
- Observation
- Processes
- Outcomes
The CQC has then set out the types of evidence it will focus on in respect of each evidence category it uses in assessing the quality statement. For example, mortality rates or hospital readmission rates are types of evidence that the CQC might consider as part of the ‘outcomes’ category for a nursing home. A review of an organisation’s complaints history and survey results is evidence that would fall within the ‘people’s experience’ category for a private hospital or a GP practice.
The CQC’s assessors will then build the rating score that will eventually apply to each key question from the bottom up by assigning a score to each evidence category for each quality statement being assessed. This process is done using a four-point scale: 4 (evidence shows an exceptional standard); 3 (evidence shows a good standard); 2 (evidence shows some shortfalls); and 1 (evidence shows significant shortfalls).
These evidence category scores are then combined to give a total score for each quality statement. For example, there might be 4 evidence categories for a particular quality statement, making a potential maximum score for that quality statement of 16.
All of the quality statement scores are then combined to give a total score for each key question. This total score for the key question is then divided by the maximum possible score to give the overall key question rating, which is expressed as a percentage. The CQC then translates this percentage into a key question rating using the following thresholds:
25% to 38% = inadequate
39% to 62% = requires improvement
63% to 87% = good
Over 87% = outstanding
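The bottom-up calculation described above – summing the evidence category scores for each quality statement, dividing the key question total by the maximum possible score, and mapping the resulting percentage onto the published bands – can be sketched in code. The percentage thresholds below are those the CQC has published; the quality statements and evidence scores are purely hypothetical examples.

```python
# Illustrative sketch of the CQC's bottom-up scoring for a single key question.
# The quality statements and scores here are hypothetical; the percentage
# bands are the published thresholds.

def key_question_rating(quality_statements):
    """quality_statements maps each quality statement to the scores awarded
    for its relevant evidence categories, each on the four-point scale
    (1 = significant shortfalls ... 4 = exceptional standard)."""
    total = sum(sum(scores) for scores in quality_statements.values())
    maximum = sum(4 * len(scores) for scores in quality_statements.values())
    percentage = 100 * total / maximum
    if percentage > 87:
        band = "outstanding"
    elif percentage >= 63:
        band = "good"
    elif percentage >= 39:
        band = "requires improvement"
    else:
        band = "inadequate"
    return percentage, band

# Hypothetical 'safe' key question with two quality statements:
safe = {
    "infection prevention and control": [3, 3, 2, 3],  # 4 evidence categories
    "safeguarding": [3, 4, 3],                         # 3 evidence categories
}
pct, band = key_question_rating(safe)
print(f"{pct:.0f}% -> {band}")  # 21 out of a possible 28 = 75% -> good
```

Note that the lowest achievable percentage is 25% (a score of 1 in every evidence category), which is why the ‘inadequate’ band starts at 25% rather than zero.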
The challenge of reaching an aggregated rating
The scores used to arrive at the overall rating for each key question are known as underlying ratings. The headline rating for the overall quality of the entire service is worked out by aggregating these underlying ratings.
However, this is where the scoring system starts to become rather more complicated. In particular, issues are likely to arise when the CQC needs to aggregate a wide range of underlying ratings – which might range from ‘inadequate’ to ‘outstanding’ for some services and providers. To address this, the CQC has produced guidance known as rating principles, which sets out various parameters governing how the aggregation process will work for different types of service – Levels of ratings - Care Quality Commission (cqc.org.uk). However, this guidance makes it clear that individual assessors can continue to use their professional judgement to depart from or modify these principles if they identify unspecified ‘concerns in an assessment’.
This matters because individual underlying ratings will have the potential to exert very significant influence on the aggregation process. To simplify, achieving an aggregated ‘outstanding’ rating will typically require a specified number of underlying ‘outstanding’ ratings, with the remaining underlying ratings being no worse than ‘good’. In a similar vein, an aggregated ‘requires improvement’ rating usually requires a set number of underlying ‘requires improvement’ ratings.
Importantly for providers, only a few underlying ‘inadequate’ ratings will be needed for the aggregated rating to be capped at no higher than ‘requires improvement’. The CQC says the reason for this approach is to make sure any areas of poor quality are not hidden. However, the potential for one or two bad scores to have such a significant impact on the overall aggregated rating clearly presents a real challenge for providers who may have had poor ratings in the past and are now trying to improve their rating.
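The aggregation logic described above might be parameterised along the following lines. This is purely an illustrative sketch: the counting thresholds (`min_outstanding`, `cap_threshold`) are hypothetical placeholders, not the CQC’s published figures, and the rating principles expressly allow assessors to depart from any such mechanical rules using their professional judgement.

```python
# Hypothetical sketch of aggregating underlying ratings into a headline
# rating. The counting thresholds are illustrative placeholders only; the
# CQC's rating principles set the actual numbers, and assessors may depart
# from them where they identify concerns in an assessment.

def aggregate(underlying, min_outstanding=3, cap_threshold=2):
    """underlying: list of underlying ratings for a service."""
    inadequate = underlying.count("inadequate")
    outstanding = underlying.count("outstanding")
    requires_improvement = underlying.count("requires improvement")

    # A few underlying 'inadequate' ratings cap the headline rating at
    # 'requires improvement' so that areas of poor quality are not hidden.
    if inadequate >= cap_threshold:
        return "requires improvement"
    # 'Outstanding' typically requires a set number of underlying
    # 'outstanding' ratings, with the rest being no worse than 'good'.
    if (outstanding >= min_outstanding
            and requires_improvement == 0 and inadequate == 0):
        return "outstanding"
    if requires_improvement >= 2 or inadequate >= 1:  # hypothetical count
        return "requires improvement"
    return "good"

print(aggregate(["outstanding", "outstanding", "outstanding", "good", "good"]))
print(aggregate(["good", "good", "inadequate", "inadequate", "good"]))
```

The point of the sketch is the shape of the rules rather than the numbers: a small count of poor underlying ratings can override an otherwise strong profile, which is exactly the dynamic that concerns improving providers.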
The CQC has said that it will initially publish only the ratings and not the underlying scores. However, it does intend to publish the underlying scores in future, in the interests of greater transparency.
Impact for providers
So what will this all mean in practice for providers? The short answer is that it will take time for the changes to bed in before a clearer assessment can be made of the opportunities and challenges. That said, the CQC is confident that the scoring system will provide increased transparency as to how a particular rating was arrived at, and where the provider sits within the bandings. For example, providers with an overall rating of ‘good’ should now be able to tell from their score whether they are at the upper end of the band, nearing ‘outstanding’, or at the lower end, nearer to ‘requires improvement’. This transparency will not only help to identify where any issues lie, but should also help to inform decisions about whether to challenge a rating, and which specific areas to focus on in making that challenge.
Conversely, a number of concerns have been raised about the scoring system. These relate to the currently rather opaque process by which the all-important aggregated rating will be arrived at. This will still involve a degree of subjectivity and interpretation by the assessors, leading to potential inconsistencies and discrepancies. There are also concerns that resourcing pressures could lead to the CQC having to focus, at least initially, on previously identified areas of risk or poor performance, and that this may make it difficult for providers that are generally improving to raise their overall rating.