
    Computerised medical record systems that guide and protect – reflections on the Bawa-Garba case

    Lawrence Weed proposed that we develop computerised, problem-oriented medical records that guide and teach. The outcome of the Bawa-Garba case might have been different if care had been supported by computerised medical record (CMR) systems. CMR systems can reduce prescribing errors and could be developed to flag gaps in supervision. However, CMR systems are not a panacea and need to be fit for purpose. Our informatics perspective on this case is a call for the widespread use of CMR systems designed to guide and protect.

    Using ontologies to improve semantic interoperability in health data

    The present-day health data ecosystem comprises a wide array of complex, heterogeneous data sources. A wide range of clinical, health care, social and other clinically relevant information is stored in these data sources. These data exist either as structured data or as free text. They are generally individual person-based records, although social care data are generally case based, and less formal data sources may be shared by groups. The structured data may be organised in a proprietary way or coded using one of many coding, classification or terminology systems that have often evolved in isolation and were designed to meet the needs of the context in which they were developed. This has resulted in a wide range of semantic interoperability issues that make the integration of data held in these different systems challenging. We present semantic interoperability challenges and describe a classification of these. We propose a four-step process and a toolkit for those wishing to work more ontologically, progressing from the identification and specification of concepts to validating a final ontology. The four steps are: (1) the identification and specification of data sources; (2) the conceptualisation of semantic meaning; (3) defining to what extent routine data can be used as a measure of the process or outcome of care required in a particular study or audit; and (4) the formalisation and validation of the final ontology. The toolkit is an extension of a previous schema created to formalise the development of ontologies related to chronic disease management. The extensions are focused on facilitating the rapid building of ontologies for time-critical research studies.

    Accelerating the development of an information ecosystem in health care, by stimulating the growth of safe intermediate processing of health information (IPHI)

    Health care, in common with many other industries, is generating large amounts of routine data, data that are challenging to process, analyse or curate, so-called ‘big data’. A challenge for health informatics is to make sense of these data. Part of the answer will come from the development of ontologies that support the use of heterogeneous data sources, and from the development of intermediate processors of health information (IPHI). IPHI will sit between the generators of health data and information, often the providers of health care, and the managers, commissioners, policy makers, researchers, and the pharmaceutical and other healthcare industries. They will create a health ecosystem by processing data in a way that stimulates improved data quality, and potentially improved health care delivery by providers, and by offering greater insights to legitimate users of data. Exemplars are provided of how a health ecosystem might be encouraged and developed to promote patient safety and more efficient health care, in the areas of integrating data around the unsafe use of alcohol and of exploring vaccine safety. A challenge for IPHI is how to ensure that their processing of data is valid, safe and maintains privacy. Development of the healthcare ecosystem and IPHI should be actively encouraged internationally. Governments, regulators and providers of health care should facilitate access to health data and the use of national and international comparisons to monitor standards. Most importantly, however, they should pilot new methods of improving quality and safety through the intermediate processing of health data.

    An instrument to identify computerised primary care research networks, genetic and disease registries prepared to conduct linked research: TRANSFoRm International Research Readiness (TIRRE) survey

    PURPOSE: The Translational Research and Patient Safety in Europe (TRANSFoRm) project aims to integrate primary care with clinical research whilst improving patient safety. The TRANSFoRm International Research Readiness survey (TIRRE) aims to demonstrate data use through two linked data studies and by identifying clinical data repositories and genetic databases or disease registries prepared to participate in linked research. METHOD: The TIRRE survey collects data at micro-, meso- and macro-levels of granularity, to fulfil the data, study-specific, business, geographical and readiness requirements of potential data providers for the TRANSFoRm demonstration studies. We used descriptive statistics to differentiate between demonstration-study-compliant and non-compliant repositories. We only included surveys with >70% of questions answered in our final analysis, reporting the odds ratio (OR) of positive responses associated with a demonstration-study-compliant data provider. RESULTS: We contacted 531 organisations within the European Union (EU). Two declined to supply information; 56 made a valid response and a further 26 made a partial response. Of the 56 valid responses, 29 were databases of primary care data, 12 were genetic databases and 15 were cancer registries. The demonstration-compliant primary care sites made 2098 positive responses compared with 268 in non-compliant data sources [OR: 4.59, 95% confidence interval (CI): 3.93–5.35, p < 0.008]; for genetic databases: 380:44 (OR: 6.13, 95% CI: 4.25–8.85, p < 0.008) and cancer registries: 553:44 (OR: 5.87, 95% CI: 4.13–8.34, p < 0.008). CONCLUSIONS: TIRRE comprehensively assesses the preparedness of data repositories to participate in specific research projects. Multiple contacts about hypothetical participation in research identified few potential sites.
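The reported odds ratios follow the standard 2×2-table calculation with a Wald interval on the log scale; a minimal sketch in Python (the counts shown are hypothetical, not the survey's own cell counts, which the abstract does not fully report):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = positive responses, compliant sites; b = negative, compliant;
    c = positive, non-compliant sites;     d = negative, non-compliant."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only
print(odds_ratio_ci(10, 5, 4, 8))
```

The interval is computed on the log scale and exponentiated back, which is the usual approach when cell counts are moderate.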

    An integrated organisation-wide data quality management and information governance framework: theoretical underpinnings

    Introduction Increasing investment in eHealth aims to improve the cost effectiveness and safety of care. Data extraction and aggregation can create new data products to improve professional practice and provide feedback to improve the quality of source data. A previous systematic review concluded that locally relevant clinical indicators and the use of clinical record systems could support clinical governance. We aimed to extend and update the review with a theoretical framework. Methods We searched PubMed, Medline, Web of Science, ABI Inform (ProQuest) and Business Source Premier (EBSCO) using the terms curation, information ecosystem, data quality management (DQM), data governance, information governance (IG) and data stewardship. We focused on and analysed the scope of DQM and IG processes, theoretical frameworks, and determinants of the processing, quality assurance, presentation and sharing of data across the enterprise. Findings There are good theoretical reasons for integrated governance, but there is variable alignment of DQM, IG and health system objectives across the health enterprise. Ethical constraints require health information ecosystems to process data in ways that are aligned with improving health and system efficiency and ensuring patient safety. Despite an increasingly ‘big data’ environment, DQM and IG in health services are still fragmented across the data production cycle. We extend current work on DQM and IG with a theoretical framework for integrated IG across the data cycle. Conclusions The dimensions of this theory-based framework would require testing with qualitative and quantitative studies to examine its applicability and utility, along with an evaluation of its impact on data quality across the health enterprise.

    Using routinely collected health data for surveillance, quality improvement and research: Framework and key questions to assess ethics, privacy and data access

    Background The use of health data for public health, surveillance, quality improvement and research is crucial to improving health systems and health care. However, bodies responsible for privacy and ethics often limit access to routinely collected health data. Ethical approvals, issues around protecting privacy, and data access are often dealt with by different layers of regulation, making approval processes appear disjointed. Objective To create a comprehensive framework for defining the ethical and privacy status of a project and for providing guidance on data access. Method The framework comprises principles and related questions. The core of the framework is built using standard terminology definitions, such as ethics-related controlled vocabularies and regional directives, to reduce ambiguity between different definitions. The framework is extensible: principles can be retired or added to, as can their related questions. Responses to these questions should allow data processors to define ethical issues, privacy risk and other unintended consequences. Results The framework contains three steps: (1) identifying possible ethical and privacy principles relevant to the project; (2) providing ethics and privacy guidance questions that inform the type of approval needed; and (3) assessing case-specific ethics and privacy issues. The outputs from this process should inform whether the balance between public interest and privacy breach, and any ethical considerations, is tipped in favour of societal benefit. If it is, then this should be the basis on which data access is permitted. Tightly linking ethical principles to governance and data access may help maintain public trust.
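The extensible register of principles and related guidance questions described above might be sketched as a small data structure; all class, field and principle names here are hypothetical, for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class Principle:
    """One ethics/privacy principle with its guidance questions."""
    name: str
    questions: list = field(default_factory=list)
    retired: bool = False

class EthicsPrivacyFramework:
    """Extensible register: principles can be added or retired over time."""
    def __init__(self):
        self.principles = {}

    def add(self, principle):
        self.principles[principle.name] = principle

    def retire(self, name):
        self.principles[name].retired = True

    def active_questions(self):
        # Only questions from principles that have not been retired
        return [q for p in self.principles.values()
                if not p.retired for q in p.questions]

fw = EthicsPrivacyFramework()
fw.add(Principle("consent", ["Is explicit consent required for this use?"]))
fw.add(Principle("data-minimisation", ["Are only necessary fields extracted?"]))
print(fw.active_questions())
```

Retiring a principle removes its questions from the active set without deleting the historical record, matching the "retired or added to" behaviour the abstract describes.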

    Recording COVID-19 consultations: review of symptoms, risk factors, and proposed SNOMED CT terms

    Background There is an urgent need for epidemiological research in primary care to develop risk assessment processes for patients presenting with COVID-19, but the lack of a standardised approach to data collection is a significant barrier to implementation. Aim To collate a list of relevant symptoms, assessment items, demographics, and lifestyle and health conditions associated with COVID-19, and to match these data items with corresponding SNOMED CT clinical terms to support the development and implementation of consultation templates. Design & setting Published and preprint literature comprising systematic reviews, meta-analyses, and clinical guidelines describing the symptoms, assessment items, demographics, and/or lifestyle and health conditions associated with COVID-19 and its complications was reviewed. Corresponding clinical concepts from SNOMED CT, a widely used structured clinical vocabulary for electronic primary care health records, were identified. Method Guidelines and published and unpublished reviews (N = 61) were used to collate a list of relevant data items for COVID-19 consultations. The NHS Digital SNOMED CT Browser was used to identify concept and description identifiers. Key implementation challenges were conceptualised through a Normalisation Process Theory (NPT) lens. Results In total, 32 symptoms, eight demographic and lifestyle features, 25 health conditions, and 20 assessment items relevant to COVID-19 were identified, with proposed corresponding SNOMED CT concepts. These data items can be adapted into a consultation template for COVID-19. Key implementation challenges include: (1) engaging with key stakeholders to achieve ‘buy-in’; and (2) ensuring any template is usable within practice settings. Conclusion Consultation templates for COVID-19 are needed to standardise data collection, facilitate research and learning, and potentially improve the quality of care for COVID-19.
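A consultation template of the kind proposed is, at its core, a mapping from data items to SNOMED CT concept identifiers. A minimal sketch follows; the two concept codes shown are commonly cited SNOMED CT identifiers, but any identifier should be verified against the NHS Digital SNOMED CT Browser before use, and the structure and function names are illustrative, not the paper's actual template:

```python
# Sketch of a COVID-19 consultation template section as a mapping from
# data items to SNOMED CT concept identifiers. A full template would
# hold all 32 symptoms, 20 assessment items, etc., identified in the review.
template = {
    "symptoms": {
        "fever": "386661006",   # Fever (finding) - verify before use
        "cough": "49727002",    # Cough (finding) - verify before use
    },
}

def codes_for(section):
    """Return the SNOMED CT concept identifiers under one template section."""
    return sorted(template[section].values())

print(codes_for("symptoms"))
```

Keeping the template as data rather than hard-coding concept IDs into the record system makes it straightforward to update when terms are added or corrected.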

    Ethnicity Recording in Primary Care Computerised Medical Record Systems: An Ontological Approach

    Background Ethnicity recording within primary care computerised medical record (CMR) systems is suboptimal, exacerbated by tangled taxonomies within current coding systems. Objective To develop a method for extending ethnicity identification using routinely collected data. Methods We used an ontological method to maximise the reliability and prevalence of ethnicity information in the Royal College of General Practitioners’ Research and Surveillance database. Clinical codes were either directly mapped to an ethnicity group or used as proxy markers (such as language spoken) from which ethnicity could be inferred. We compared the performance of our method with the recording rates identified by the code lists used in the UK pay-for-performance system, the Quality and Outcomes Framework (QOF). Results Data from 2,059,453 patients across 110 practices were included. The overall categorisable ethnicity using QOF codes was 36.26% (95% confidence interval (CI): 36.20%–36.33%). This rose to 48.57% (CI: 48.50%–48.64%) using the described ethnicity mapping process. Mapping increased recording across all ethnic groups. The largest absolute increase was seen in the white ethnicity category (30.61%; CI: 30.55%–30.67% to 40.24%; CI: 40.17%–40.30%). The highest relative increase was in the group categorised as ‘other’ (0.04%; CI: 0.03%–0.04% to 0.92%; CI: 0.91%–0.93%). Conclusions This mapping method substantially increases the prevalence of known ethnicity in CMR data and may aid future epidemiological research based on routine data.
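The two-tier mapping described, direct ethnicity codes first and proxy markers such as language spoken as a fallback, can be sketched as follows; the code values, proxy labels and ethnicity groupings here are hypothetical, not the study's actual mappings:

```python
# Direct clinical-code -> ethnicity-group mapping (hypothetical code values)
DIRECT = {"XaJRB": "white"}

# Proxy markers (e.g. recorded main language) from which a group is inferred
PROXY = {"main language Polish": "white"}

def infer_ethnicity(codes):
    """Return an ethnicity group for a patient's codes,
    preferring direct ethnicity codes over proxy markers."""
    for c in codes:
        if c in DIRECT:
            return DIRECT[c]
    for c in codes:
        if c in PROXY:
            return PROXY[c]
    return None  # patient remains uncategorisable

print(infer_ethnicity(["main language Polish"]))
```

Checking all direct codes before falling back to any proxy reflects the design choice that an explicitly recorded ethnicity should always outrank an inference.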

    A simple clinical coding strategy to improve recording of child maltreatment concerns: an audit study

    Background Recording concerns about child maltreatment, including minor concerns, is recommended by the General Medical Council (GMC) and the National Institute for Health and Clinical Excellence (NICE), but there is evidence of substantial under-recording. Aim To determine whether a simple coding strategy improved the recording of maltreatment-related concerns in electronic primary care records. Design and setting Clinical audit of rates of maltreatment-related coding before (January 2010–December 2011) and after (January–December 2012) implementation of a simple coding strategy in 11 English family practices. The strategy included encouraging general practitioners to use, always and as a minimum, the Read code ‘Child is cause for concern’. A total of 25,106 children aged 0–18 years were registered with these practices. We also undertook a qualitative service evaluation to investigate barriers to recording. Method Outcomes were the recording of (1) any maltreatment-related codes, (2) child protection proceedings and (3) child being a cause for concern. Results We found increased recording of any maltreatment-related code (rate ratio 1.4; 95% CI 1.1–1.6), child protection proceedings (RR 1.4; 95% CI 1.1–1.6) and cause for concern (RR 2.5; 95% CI 1.8–3.4) after implementation of the coding strategy. Clinicians cited the simplicity of the coding strategy as the most important factor assisting implementation. Conclusion This simple coding strategy improved clinicians’ recording of maltreatment-related concerns in a small sample of practices with some ‘buy-in’. Further research should investigate how recording can best support the doctor–patient relationship. How this fits in Recording concerns about child maltreatment, including minor concerns, is recommended by the GMC and NICE, but there is evidence of substantial under-recording. We describe a simple clinical coding strategy that helped general practitioners to improve recording of maltreatment-related concerns. These improvements could improve case finding of children at risk and information sharing.
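The rate ratios above compare coding rates before and after implementation; a minimal sketch with a Wald 95% CI on the log scale (the event counts shown are hypothetical, not the audit's data):

```python
import math

def rate_ratio(events_after, denom_after, events_before, denom_before, z=1.96):
    """Rate ratio (after vs before) with a Wald 95% CI on the log scale.
    Denominators are the populations (or person-time) in each period."""
    rr = (events_after / denom_after) / (events_before / denom_before)
    se = math.sqrt(1/events_after + 1/events_before)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts over two equal-length audit periods
print(rate_ratio(140, 25106, 100, 25106))
```

With equal denominators the ratio reduces to the ratio of event counts, which is why equal-length before/after windows simplify the audit arithmetic.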