The management of the trauma patient is a complex process, involving both the prehospital and inhospital phases, many medical disciplines, and nursing, paramedical and allied health support services. Quality trauma care requires involvement at all levels of the system in monitoring the processes of care and the relationships between its components. This necessarily requires strategies to assess both clinical outcomes at the individual hospital level and wider system performance. Accountability for the quality of trauma care is fundamental to achieving optimal patient outcomes.
The multidimensional nature of an integrated trauma system dictates that a well-ordered quality management process be established. Such a process identifies areas where trauma prevention activities can be strengthened, assesses the efficacy of management processes, and evaluates the appropriateness of outcomes from injury.
Major Australian and overseas bodies including NHMRC, ACEM, RACS, NRTAC, ACHS, ACEP and ACS have endorsed ongoing evaluation of the quality of patient care provided by trauma systems and trauma hospitals (McDermott, 1994). This chapter discusses the necessary elements of quality improvement programs for the proposed trauma system in Victoria.
The hospital trauma registry underpins data collection for trauma quality improvement programs. As both system and hospital monitoring share many common data fields (such as demographics, mechanism of injury, triage, prehospital and inhospital care and outcome), the trauma registry should be the ideal single point for data collection.
In general, trauma registries for Victoria should:
It is optimal that data linkages be established between the datasets collecting epidemiological data on patients with a wide range of injury severity for injury surveillance and for system and hospital monitoring (RACS, 1993). These datasets should be established with recognition of commencement of the Victorian Ambulance Clinical Information System Project and should be created and maintained so as to ensure confidentiality of patient data. The MSCU should maintain the statewide trauma registry in Victoria.
The Taskforce proposes that three levels of data be collected to enable trauma system monitoring along the spectrum of injury. It is neither appropriate nor feasible to require all hospitals receiving major trauma patients, even infrequently, to collect the same complexity of trauma system data.
The Taskforce proposes that the data collection system comprise the basic building blocks of the EMDS and TRMDS. All hospitals treating trauma patients will collect EMDS data items and those receiving major trauma collect the additional items of the TRMDS. Major Trauma Services, ASV and other hospitals receiving a critical caseload of major trauma will collect extended data relating to the process of acute care and outcome data including, but not limited to, mortality data. This data will comprise the System Performance Minimum Data Set (SPMDS). These levels of data collection are recommended by the RACS Committee on Trauma (RACS, 1993).
Under the RAPID project, the Department is developing a data warehouse to replace the VIMD and Psychiatric Records Information System Manager (PRISM) systems, and to provide central collection of VEMD, ambulance, waiting list and health service cost data.
Links with other relevant data sources such as the Transport Accident Commission (TAC) and Coroner datasets should be a future priority for improved monitoring and evaluation of Victoria's trauma system.
Clinical Indicators (Audit Filters)
Clinical indicators act as screens or filters for the identification of potential patient care and process problems, both at a system and hospital level. Indicators examine parameters such as the timeliness, appropriateness and effectiveness of care across the trauma care continuum. Examples of such indicators include:
Indicator values falling outside predetermined thresholds require in-depth case or system review, as appropriate. The MSCU in conjunction with the STC will develop appropriate clinical indicators, in addition to any current and future statutory clinical indicators.
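The screening step described above can be sketched in code. This is a minimal illustration only: the indicator names and threshold values below are hypothetical, not those the MSCU and STC would adopt.

```python
# Illustrative indicator screening. Field names and thresholds are
# hypothetical examples, not endorsed clinical indicators.

# Each record carries indicator measurements for one trauma episode.
cases = [
    {"id": 101, "scene_time_min": 18, "ed_to_theatre_min": 95,
     "gcs_under_9_not_intubated": False},
    {"id": 102, "scene_time_min": 35, "ed_to_theatre_min": 260,
     "gcs_under_9_not_intubated": True},
]

# Predetermined thresholds; a value beyond its threshold flags the case.
thresholds = {
    "scene_time_min": 20,                # prolonged prehospital scene time
    "ed_to_theatre_min": 240,            # delayed transfer to theatre
    "gcs_under_9_not_intubated": False,  # airway not secured despite GCS < 9
}

def flag_cases(cases, thresholds):
    """Return (case id, indicator) pairs requiring in-depth review."""
    flagged = []
    for case in cases:
        for indicator, limit in thresholds.items():
            value = case[indicator]
            # Boolean indicators flag when True; numeric ones when over limit.
            exceeded = value is True if isinstance(limit, bool) else value > limit
            if exceeded:
                flagged.append((case["id"], indicator))
    return flagged

print(flag_cases(cases, thresholds))
```

In practice such screening only identifies cases for review; judgements about quality of care remain with the peer review process.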
Specific trauma indicators have been proposed by accrediting organisations such as the Joint Commission on Accreditation of Healthcare Organisations and ACHS, and recommended by ACS and ACEP for trauma quality improvement programs. However, caution should be exercised in implementing trauma indicators. There is little data available on the validity of trauma indicators in identifying patients at increased risk of adverse outcomes or quality of care problems.
A small number of US studies have assessed some ACS indicators in well-established trauma systems and found many to be costly to collect, with limited or no yield for quality of care problems or adverse outcomes. However, some indicators have been shown to have reasonable yields for quality of care problems, ranging from 13.8 to 27 per cent, and for prediction of adverse outcomes. These include unexpected deaths, ICU length of stay more than twice the average, trauma surgeon response, major surgery performed more than 24 hours after admission and femur fracture without fixation. If accuracy of indicator data collection is assured, these indicators will be of value in the quality improvement process (Nayduch et al., 1994; Rhodes et al., 1990).
Outcome Review
Survival Probability
The large databases of the US Major Trauma Outcome Study and National Trauma Registry of the ACS have established norms for survival probabilities of trauma patients with which trauma outcomes for patient groups can be compared. These norms, relating injury severity to probability of survival, were based on the Trauma and Injury Severity Score (TRISS) methodology (ACS, 1993).
The TRISS methodology is probably one of the most widely accepted trauma evaluation instruments in current use (Kelly & Epstein, 1997). Comparison with outcome norms identifies patients with unexpected outcomes (unexpected survivors and deaths) whose cases should be subjected to peer review (Karmy-Jones et al., 1992). However, the methodology has limitations.
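As a sketch of how a survival probability is derived under TRISS: a regression score b is formed from the Revised Trauma Score (RTS), the Injury Severity Score (ISS) and an age index, and Ps = 1/(1 + e^-b). The coefficients below are the published MTOS values (Boyd et al., 1987) and are shown for illustration only; revised coefficient sets exist.

```python
import math

# Published MTOS regression coefficients (Boyd et al., 1987);
# illustrative only, since later coefficient revisions exist.
COEFF = {
    "blunt":       {"b0": -0.4499, "rts": 0.8085, "iss": -0.0835, "age": -1.7430},
    "penetrating": {"b0": -2.5355, "rts": 0.9934, "iss": -0.0651, "age": -1.1360},
}

def probability_of_survival(rts, iss, age_years, mechanism="blunt"):
    """TRISS probability of survival: Ps = 1 / (1 + e^-b)."""
    c = COEFF[mechanism]
    age_index = 1 if age_years > 54 else 0   # age dichotomised at 54 years
    b = c["b0"] + c["rts"] * rts + c["iss"] * iss + c["age"] * age_index
    return 1.0 / (1.0 + math.exp(-b))

# A patient with a high predicted Ps who nonetheless died would be
# flagged as an "unexpected death" for peer review.
ps = probability_of_survival(rts=7.84, iss=9, age_years=30)
print(round(ps, 3))
```

Note that the age term is a blunt dichotomy at 54 years, which is one source of the misclassification in older patients discussed below.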
A number of modifications of the methodology have evolved in an attempt to answer these limitations, including A Severity Characterization of Trauma (ASCOT), which matches TRISS's reliability of prediction for blunt injury and exceeds it for penetrating injury (Champion et al., 1990). Further evaluation of these tools is ongoing.
Currently, TRISS methodology provides a reliable standardised tool for comparing trauma outcomes by hospitals or systems against defined outcome norms. The limitations of TRISS and ISS mean they are not appropriate for comparing quality of care between providers or hospitals (Rutledge, 1996).
TRISS is used both for comparison of mortality rates in large trauma populations and as a screening tool for identifying potentially unexpected deaths for peer group evaluation (Boyd et al., 1987). However, it has well-known deficiencies in individual patient assessment: it fails to allow for co-morbidity; fails to allow for the quality of prehospital management; measures only one injury per body region; draws on a database with disproportionately few severely injured cases; and fails to control for increasing age beyond 54 years. These limitations are evident in a recent evaluation of 544 major trauma patients (Demetriadis et al., 1998): survival status (alive or dead) predicted by TRISS misclassified the true status in 34 per cent of patients aged 54 years or more and in 29 per cent of those requiring intensive care. In a recent Sydney study of 2,205 trauma patients, both TRISS and ASCOT had only 25 per cent predictive value in identifying avoidable deaths (Sugrue et al., 1996).
The current view is that all trauma deaths need peer group review rather than relying on TRISS or ASCOT probability analysis to identify "unexpected" death for review (Danne et al., 1998; Demetriadis et al., 1998; Sugrue et al., 1996).
The best use for a TRISS analysis is in longitudinal studies within one organisation, to give a simple mathematical way of checking outcomes on an ongoing basis, where many other factors are constant.
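One simple mathematical check of this kind is the W statistic used alongside TRISS, which expresses the number of survivors in excess of (or short of) the norm per 100 patients, with the expected count taken as the sum of the patients' Ps values. A minimal sketch with hypothetical data:

```python
def w_statistic(ps_values, survived):
    """Excess survivors per 100 patients: W = (A - E) / (N / 100),
    where A = actual survivors and E = sum of TRISS Ps values."""
    n = len(ps_values)
    actual = sum(survived)
    expected = sum(ps_values)
    return (actual - expected) / (n / 100)

# Hypothetical reporting period: five patients with their TRISS Ps
# values and outcomes (1 = survived, 0 = died).
ps_values = [0.95, 0.90, 0.60, 0.30, 0.85]
survived  = [1, 1, 1, 0, 1]

print(round(w_statistic(ps_values, survived), 1))
```

Tracked period by period within one organisation, where case mix and other factors are broadly constant, a drift in W can prompt closer review; it is not a substitute for peer review of individual deaths.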
Trauma System and Trauma Hospital Quality Programs
System and hospital quality improvement programs, while having many similarities, must necessarily differ in their overall focus. System quality improvement will focus on the components of trauma systems and their interactions with each other, while individual hospitals will focus on the care provided to individual patients by individual practitioners (ACEP, 1993).
The essential components for implementation and ongoing support for trauma quality improvement programs, whether at a system or hospital level, are also applicable to both small and rural hospitals, though the scale of monitoring will necessarily be reduced. These essential components are:
In addition, hospital-based programs for trauma patients need to be closely integrated with hospitals' general quality improvement programs and those of the departments involved in trauma patient care, such as emergency departments. Trauma quality improvement programs will, therefore, be integrated with activities such as clinical risk identification and management, critical incident monitoring, education programs and patient satisfaction surveys, both within and across departments.
Multidisciplinary Peer Review Process
The peer review process has a long established history in trauma and surgical audit, particularly at the hospital level. Standards for the composition, responsibilities and functions of peer review committees are well described, as are criteria for judgements of the appropriateness of care and preventability of death (ACS, 1993).
The multidisciplinary peer review process has been criticised, particularly at the level of preventable death studies, because of the failure to use standardised methodology, resulting in poor reliability of preventability judgements and an inability to make comparisons between studies (McKenzie et al., 1992; Wilson et al., 1992).
However, studies such as those conducted by the CCRTF have shown that, with the use of a standardised methodology (including provision of comprehensive information for review, prior training and standardisation of reviewers, and explicit criteria for judgements of preventable death), high inter-panel and inter-rater agreement can be achieved (McDermott et al., 1997).
There is debate as to how preventable outcome studies should be used to monitor quality and outcomes of trauma care. Preventable outcome analyses are reported to have led to major adjustments in trauma systems and subsequent reductions in mortality rates. However, the validity of these 'before-after' studies has been questioned because of their inherent bias towards favourable outcomes for trauma services or systems and their specific biases due to non-blinding of panel members, stratified randomisation of patients, and the use of minority decisions by panels (Roy, 1987).
In addition, the methodology may be inefficient in defining problems with quality of trauma care. For example, where an institution provides excellent care and trauma deaths are rare, the resulting small denominator could produce a high preventable death rate compared with other institutions, with exhaustive case review providing a low yield of quality of care problems (Kelly & Epstein, 1997). The major focus should not be on estimating a preventable death percentage, but on identifying the errors and inadequacies that could have been avoided, including those contributing to or favouring the patient's death rather than survival, that is, adverse events.
The peer review process utilising multidisciplinary review is necessarily an intensive and costly exercise and, at the system level, is most efficiently used as a periodic rather than a continuous audit tool, examining a range of system problem areas rather than focusing on a single category of deaths or complications. It should be utilised to study deaths from all types of trauma (in addition to road trauma), and to study adverse outcomes in survivors, as performed in the Major Trauma Management Study (Danne et al., 1998).
The ongoing multidisciplinary evaluations undertaken by the CCRTF since 1992 on more than 500 patients have shown little change in the common recurring problems contributing to death and have identified the system inadequacies and clinical deficiencies prevalent in Victoria. These findings have allowed the Taskforce to make evidence-based recommendations and so produce a report differing from the more generalised NRTAC report.
System Performance and Enhancement
The STC will be responsible for overseeing monitoring of Victoria's trauma quality improvement programs at the system level. The STC should clarify responsibilities in all important aspects of system monitoring to promote efficiency and avoid duplication. Currently, some of these functions relating to the monitoring of quality of trauma and emergency care, including access, are performed by a number of government and non-government agencies.
Trauma quality improvement programs will be overseen at the urban hospital level by the MTS Statewide Coordination Management Committee and, at the regional level, by the regional CCECCS in conjunction with the RTS.