Identifying and mitigating biases in perioperative prognostic models and clinical scoring systems (PMCS)


Data Use Register - full project summary

Safe People

Lead applicant organisation
University Hospitals Birmingham NHS Foundation Trust

Safe Projects

Project Title
Identifying and mitigating biases in perioperative prognostic models and clinical scoring systems (PMCS)
Lay summary
How do we make sure that a patient gets the right treatment at the right time? One of the tools doctors use to diagnose and treat patients is the ‘scoring system’, or score. Scores use data from patients’ medical records to support faster, safer care, but they have an important limitation: they are only as good as the data that goes into them. If a score was built mostly from data about people of a certain age, gender or ethnicity, it may only work well for people who ‘match’ that group. If you don’t ‘match’, the score may not work reliably for you.

Why does this research matter? Without information about how well scores work, doctors cannot give patients good advice about whether or not they need surgery. Bad advice could harm patients, and this risk may be greater for patients from underrepresented groups.

These scores are used every day across healthcare, including to decide whether patients need surgery. The journey patients take from seeing their GP, to having surgery, to recovering at home is called ‘perioperative medicine’. A report has identified that many common perioperative medicine scores might not work as expected. Existing research shows that some of these scores were developed using small groups of people who do not reflect the breadth and diversity of the UK. This could lead to underrepresented patients, such as those from minority ethnic groups, not getting the treatment they need when they need it.

This research involves using patients’ anonymised healthcare data to evaluate how well scores work for underrepresented groups. Statistical experts will run ‘external validation’ studies of each score, checking how accurately it predicts outcomes in groups of patients it was not originally developed on. Findings will be shared through scientific papers, conferences, accessible videos with subtitles in several languages, and a new dataset to improve scores.
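For readers interested in what an ‘external validation’ check can look like in practice, the sketch below illustrates one common approach: measuring a score’s discrimination (AUROC) and calibration-in-the-large separately within each subgroup. It is a minimal illustration of the general technique only, using synthetic data and hypothetical column names (score_prob, outcome, ethnicity); it is not the project’s actual analysis code or dataset.

# Minimal sketch of a subgroup external validation check for a risk score.
# Column names (score_prob, outcome, ethnicity) are hypothetical placeholders,
# not the project's actual dataset fields.
import pandas as pd
from sklearn.metrics import roc_auc_score

def validate_by_subgroup(df, score_col, outcome_col, group_col):
    """Report discrimination (AUROC) and calibration-in-the-large
    (observed vs expected event rate) for an existing risk score,
    separately for each subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        observed = sub[outcome_col].mean()   # observed event rate in this subgroup
        expected = sub[score_col].mean()     # mean predicted risk in this subgroup
        auc = (roc_auc_score(sub[outcome_col], sub[score_col])
               if sub[outcome_col].nunique() == 2 else float("nan"))
        rows.append({group_col: group, "n": len(sub), "auroc": auc,
                     "observed_rate": observed, "expected_rate": expected,
                     "o_e_ratio": observed / expected if expected else float("nan")})
    return pd.DataFrame(rows)

if __name__ == "__main__":
    # Synthetic demonstration data only (illustrative, not real patient data).
    import numpy as np
    rng = np.random.default_rng(0)
    demo = pd.DataFrame({
        "score_prob": rng.uniform(0.01, 0.6, 1000),        # predicted risk from an existing score
        "ethnicity": rng.choice(["A", "B", "C"], 1000),
    })
    demo["outcome"] = rng.binomial(1, demo["score_prob"])   # simulated outcomes
    print(validate_by_subgroup(demo, "score_prob", "outcome", "ethnicity"))

A score that performs well overall but shows poor discrimination or a large observed-to-expected gap in a particular subgroup would be flagged as potentially unreliable for those patients.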
Public benefit statement
Clinicians will know whether a particular PMCS is accurate for the patient they are treating. Currently, they usually only know whether a PMCS works across the entire population, rather than for specific subgroups. Importantly, they will know if a PMCS should not be used because it is likely inaccurate.

Patients deciding on surgery will benefit because a PMCS will only be used if it works for them as individuals. This helps patients and clinicians make the best decisions about their health.

Scientists will learn from issues identified with how PMCS have been created or used in the past. These insights will help ensure that future PMCS work better for everyone, not just a privileged few. These lessons may also guide the development of artificial intelligence tools in healthcare, making them more equitable and effective for all.
Latest Approval Date
11/07/2024

Safe Data

Dataset(s) name
SDE103

Safe Setting

Access type
West Midlands SDE trusted research environment

Safe Outputs

Link
Not yet published