HDAA 2023
Analytics with altitude
Presentation abstracts

10 Steps to Starting and Building an Effective Data Library

A data catalog is a central organizing system that captures information about data to advance users’ ability to find, use, share and understand important data assets. The Data Governance and Citizenship team at Mayo Clinic has recently worked to restructure information from a legacy data catalog into a newly refreshed system, referred to as the Data Library. Through this multi-year project, we have learned several key lessons on how to start and build a data inventory for broad adoption and use within an organization. Ultimately, the vision of Mayo Clinic’s Data Library was to ensure data citizens could:

• Quickly find business and technical information about data
• Understand the meaning and context of shared data assets
• Know where to go to easily find data assets
• Build meaning, relevancy and context through data stewardship actions
• Ensure consistency of use of data across business teams

Building an accessible, scalable data inventory did not occur in isolation, but rather as a close partnership between the Data Library, Data Governance & Stewardship, and Data Literacy teams. Ten key steps guided the team to successful launch and adoption, including identifying the business value, wrapping in new ways of working, creating a common language through data literacy efforts, identifying and supporting data stewards and measuring outcomes.


A Clinician-led Governance model for real-time AI and Predictive Models for Patient Care

With a new Epic inpatient implementation, there was strong interest in releasing out-of-the-box (OOTB) and custom predictive models for patient care. There were also concerns that these models be held to a consistently high standard, that clinicians could have confidence in them, and that they would not negatively impact the clinical or patient experience, either at the time of release or over time. In addition, many clinicians and researchers wanted a straightforward path for proposing, evaluating, and releasing new models, whether OOTB or custom developed, and wanted to adopt this approach for all clinical AI models. A governance group was assigned the task of developing and managing the overall decisions regarding real-time, EHR-integrated AI models. This presentation will describe the criteria and charter for this Predictive Analytics and AI Governance Group, the development of a process workflow and checklist for proposing new models, the standards developed for accepting a new model, and the validation path and policies. Standards and tools were developed for evaluating models for bias, effectiveness, and fit, and clinicians and researchers were trained on these. In this presentation, we will share the specifics of how models are evaluated, the workflow for adopting a new model, and how models move into an operational cadence with ongoing evaluation. We will discuss lessons learned and describe some future opportunities for improvement.


A Dose of Analytics for Your Provider’s Health

In healthcare the patient is always at the center of everything, but who is taking care of the Provider? The Houston Methodist Physician Organization (HMPO) took up this task by measuring over 50 metrics to ensure Providers were in the best shape. Previously, administrative staff spent over two hours per Provider gathering all the metrics required for a holistic view of Provider performance. Rather than administrators manually gathering data from various sources, the HMPO Analytics team created the Provider Health dashboard, which opens a gateway for productive conversations with Providers backed by the power of data and trends, available in less than a minute. The Provider Health dashboard delivers a system-wide analytics solution that integrates information from different sources into a single platform. These metrics have been classified using organizational pillars to drive the discussion around Provider quality, efficiency, patient access, and financial data. Each metric has a system-wide goal and allows Administrators to drive the discussion. The dashboard also allows specialty-level comparison across the organization to identify areas of improvement and success within a given service line. The architecture of the Provider Health dashboard provides the flexibility to modify metrics and goals as the organization evolves. The dashboard has delivered great value in time saved and in keeping Providers in the best shape. Providers can focus on making improvements in weak areas to provide the best in patient care. It’s a true testament to the adage – “Happy Provider, Happy Patient.”

 


An Infrastructure for Secure Sharing of Clinical Data

The proposed presentation provides an overview of the results of a research project and reference implementation, referred to as the Secure Federated Data Sharing System (SFDS), showcasing its utility through a use case and demonstration platform for sharing health care information in support of collaborative clinical research between multiple entities or institutions. Information stored in database management systems (DBMS) is not only important to host entities but is often tremendously valuable to organizations outside their networks. The ability to efficiently share this information and work collaboratively has great societal value. For example, improved data sharing can enhance patient care outcomes, reduce operational costs for hospitals, and potentially improve efficiency for clinical trials and pharmaceutical development data management. To facilitate sharing of database resources, NIST has developed the SFDS. While supportive of a range of industry use cases, NIST has taken particular interest in applying SFDS to promote secure access to data where it resides, in support of the discovery of new therapeutics, improved health care delivery, and lower medical costs through better access to clinical data. Our objective is to make it easier to facilitate broad but well-controlled secondary access to clinical data by experts. Unfortunately, sharing DBMS data among users in different organizations is inherently hard: the data is often in different formats and schemas and comes from different systems, making it difficult to transfer, consume, and interpret correctly across organizations.


ARC Academy: Creating an analytics training program to support self-service

The ARC Academy program is a foundational element of the Children's Hospital Colorado team's focus on self-service reporting. Our training program encompasses what it takes to get people the data they need at their fingertips, ensuring that access to a developer is not a bottleneck in progress. We plan to review:

- The history of the program: our structure, staffing, and why we went this direction
- Data Navigation Assistance Map: helping people figure out where to start
- Current class list (Reporting in Epic, SlicerDicer, Data Visualization, Data Literacy, Tableau)
- Importance of the analytics catalog
- Office Hours structure
- Help Library
- Data Driven Leadership: how we relate this back to leader competencies
- Curriculum development and upkeep


Automated PC4 Data Abstraction

An automated PC4 abstraction tool was created to be adaptable to the specific needs of diverse PC4 data team models. The close working relationship between our clinical data team and software engineers has allowed CHCO to develop an automated registry abstraction tool that is user-friendly, applicable, and accurate. Additionally, a product created in-house can be tailored to the nuances of individual EHRs and provides a more reliable and dynamic platform that can be adjusted efficiently as EHR and data definitions change.


CCMCN’s Honeycomb Community Data Network: Reducing health equity issues by applying innovative technology, data, and payment systems to community collaborations across the state of Colorado

The Colorado Community Managed Care Network has established a "Honeycomb Data Network" in the Snowflake Data Cloud to consolidate health data from various sources, covering over 40% of the state of Colorado. This data is compared with a member organization's known population to assess its visibility, and selective data sharing is enabled. Stringent data access controls are in place.

The "Honeycomb" employs cloud technology to offer Private Data Vaults for data management and secure sharing. The team will share how this unified system supports improved population health, social health information exchange, and system efficiency.


Automated Extraction of Patient Handwriting and Manual Selections from Scanned Questionnaire Using Azure Cognitive Service

Despite the widespread adoption of electronic health records (EHR), much patient data is still maintained in scanned documents - images of paper, e.g., physician notes and patient surveys. Extracting valuable data (printed text, patient handwriting, hand-filled selections) from these unstructured documents can be challenging, particularly when they use complex forms and involve handwriting, selections of varying sizes and types, and multiple layouts. In this submission, we present a solution that leverages an Azure Cognitive Service, Form Recognizer, to extract data from an unstructured scanned questionnaire: the MD Anderson Patient History Database Form (a patient survey). The solution involves converting the scanned questionnaire into digital text, then parsing and breaking it down into structured data stored in a database. The structured data was further analyzed and, by applying various methods, the patient hand-filled information was extracted as question-and-answer pairs with key features. The results can be easily accessed for clinical and research use. Additionally, we discuss the challenges encountered and the workarounds we developed to create a robust solution. Our results demonstrate the effectiveness of cloud-based Azure Cognitive Services for extracting patient hand-filled information.
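
As an illustration of the approach, here is a minimal sketch of calling Form Recognizer through the Azure Python SDK to pull key-value pairs and selection marks from a scanned form. The endpoint, key, and file name are placeholders, and this is not the presenters' production pipeline.

```python
# A minimal sketch, assuming an Azure Form Recognizer resource already exists.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

with open("scanned_questionnaire.pdf", "rb") as f:
    # "prebuilt-document" extracts printed text, handwriting, key-value
    # pairs, and selection marks (checkboxes) from unstructured scans.
    poller = client.begin_analyze_document("prebuilt-document", document=f)
result = poller.result()

# Question/answer pairs detected by the service.
for kv in result.key_value_pairs:
    if kv.key and kv.value:
        print(f"Q: {kv.key.content} -> A: {kv.value.content}")

# Hand-filled selections (checked vs. unchecked boxes).
for page in result.pages:
    for mark in page.selection_marks:
        print(mark.state, mark.confidence)
```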


Chaos to Calm - Delivering Analytics Value in a Fast Moving Environment

Data and analytics have become essential tools for running, growing, and even transforming healthcare systems. Analytics teams face rapidly rising demand for their services, a faster pace of change in business and clinical needs, and a flood of new customers eager to advance their projects. These pressures come at the same time when financial headwinds have slowed or even reduced staffing. How can analytics leaders keep their customers happy and maximize the value they deliver, all while maintaining a sensible work-life balance? This talk will cover practical techniques to bring order to the chaos of a demanding, fluid business environment. Topics covered will include:

* Managing multiple stakeholders
* Prioritizing work and managing flow
* Enabling and encouraging self-service
* Capturing and communicating value delivery
* Building business trust

Following the presentation, the session will transition to a facilitated discussion where participants can share their unique challenges and successful approaches to delivering predictable, high value results.


CliniPane: Ambiently delivered analytics at the point of care

Summary statistics, graphs, predictive models, natural language processing, large language models like ChatGPT, and other analytics can provide useful information to clinicians at the point of care. Getting that information to the clinicians’ eyeballs can be challenging. The two most common paradigms are “interruptive push” and “manual pull.” “Interruptive push” disrupts the clinician’s workflow with an alert (like a popup window that must be addressed before the clinician can continue), so these alerts are often limited to truly urgent messaging. “Manual pull” requires the clinician to volitionally navigate to a special place to see the information, so very often it’s never accessed or seen. “Ambient push” represents a middle ground where the information is pushed to clinicians so no effort is needed to see it, but non-interruptively so that key workflows are not disrupted. At our health system we had limited ability to ambiently push information to clinicians, especially to push graphically rich and interactive analytics. To address this, we designed and built “CliniPane,” a desktop application that lives alongside the EHR and uses FHIR and other common standards to provide this content in proper patient context. Preliminary feedback on wireframes and screenshots has been very positive. We will describe and show CliniPane, its architecture, capabilities, potential use cases, preliminary assessment, and next steps. If successful, we hope to be able to open-source CliniPane so that anyone can use it.
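
For a sense of the plumbing, here is a minimal sketch of retrieving patient-context data over FHIR's standard REST API, the kind of call a sidecar application like CliniPane could make. The server (a public test endpoint) and patient ID are illustrative, not CliniPane's actual architecture.

```python
# A minimal sketch of pulling patient-context data over FHIR R4 REST.
import requests

FHIR_BASE = "https://hapi.fhir.org/baseR4"  # public test server, not a real EHR
patient_id = "example"                      # illustrative ID

patient = requests.get(f"{FHIR_BASE}/Patient/{patient_id}").json()
name = patient.get("name", [{}])[0]
print("Patient:", " ".join(name.get("given", [])), name.get("family", ""))

# Recent observations for the in-context patient, newest first.
obs_bundle = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": patient_id, "_sort": "-date", "_count": 5},
).json()
for entry in obs_bundle.get("entry", []):
    obs = entry["resource"]
    code = obs.get("code", {}).get("text", "unknown")
    value = obs.get("valueQuantity", {}).get("value")
    print(code, value)
```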


Comparing ChatGPT Methods with Other Natural Language Processing Models for Medically Aligned Text

Artificial intelligence and natural language processing have become hot topics in the medical community with the release of ChatGPT. This innovative Generative Large Language Transformer Model (GLLTM) makes interacting with the model feel much more intuitive through its probabilistic text generation and maintenance of context during a session. Unfortunately, this has also led to many complaints about its authoritative tone paired with sometimes incorrect responses to queries. This is especially risky in more specialized domains such as the legal or medical fields. Work with ChatGPT in the medical community is often further hampered by concerns about PII and PHI. In our project, we used medically aligned, open-source drug labels from OpenFDA to train a Bidirectional Encoder Representations from Transformers (BERT) model. We then used BERT and ChatGPT to extract the mechanism of action and target for 200 different FDA-approved oncology drugs. We performed two different kinds of ChatGPT queries. In the first, we asked for a table layout of mechanism of action and target, allowing the model to use its full set of texts and links. In the second, we performed the same query with ChatGPT but asked it to limit its response to an input prompt drawn from the OpenFDA drug labels used for BERT. Each kind of test was performed in a separate session to limit changes in performance during a conversation. Finally, each kind of test was performed twice to allow comparison of consistency between traditional LLMs and GLLTMs.
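
To illustrate the BERT side of the comparison, below is a minimal sketch of extractive question answering over a drug-label snippet using the Hugging Face transformers pipeline. The pre-trained model and label text are assumptions for demonstration, not the team's fine-tuned model or data.

```python
# A minimal sketch of extractive QA over a drug-label passage with BERT.
from transformers import pipeline

# An off-the-shelf SQuAD-tuned BERT; the project trained its own on OpenFDA.
qa = pipeline("question-answering", model="deepset/bert-base-cased-squad2")

label_text = (
    "Imatinib is a small molecule kinase inhibitor that inhibits the "
    "BCR-ABL tyrosine kinase created by the Philadelphia chromosome "
    "abnormality in chronic myeloid leukemia."
)

for question in ("What is the mechanism of action?", "What is the drug target?"):
    answer = qa(question=question, context=label_text)
    print(question, "->", answer["answer"], f"(score={answer['score']:.2f})")
```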


Data Literacy & Data Quality Partnership to Elevate Enterprise Data Quality

The Data Literacy Program and the Data Quality Center of Excellence (DQCOE) at the Children’s Hospital of Philadelphia (CHOP) have designed and developed a focused data quality training program that increases awareness of data quality and upskills data professionals in data quality best practices. Together, we developed three Data Literacy Personas, or learning tracks, to accommodate customers’ different data expertise and needs. One-hour micro sessions were hosted quarterly for all individuals across the enterprise to learn what data quality is, its impact on the enterprise, and how to report data quality issues. For our data professionals, we hosted workshops that helped them identify and remediate data quality gaps in their subject area. Departmental training was designed with subject matter experts to ensure quality data entry, leading to quality data report insights. Teams we have partnered with include Epic Builders and Infection Prevention and Control. Through these trainings, individuals shared the data quality issues they face in their teams and within their workflows, and the DQCOE was able to intake, prioritize, and remediate these issues appropriately. This data literacy and data quality partnership has resulted in increased awareness of data quality across the organization, improved visibility into data quality trends, and accelerated partnerships with data experts across the enterprise.


Data Visualization as a Tool to Advance Equity

Disparities in health outcomes and care access are pervasive, with complex root causes. Deploying effective interventions that meaningfully close gaps can be challenging. Well-designed data visualizations can shed light on these issues, solidifying the need for change and steering resources toward initiatives that yield real results. Join presenters from UNC Health’s Department of Equity and Inclusion and the Enterprise Analytics and Data Services team as they share a practical guide to using analytics to advance equity, backed by examples from the literature and their analytics community. The guide will cover topics including:

* Designing for Impact – What techniques effectively communicate disparities in a way that drives action?
* Color Use – How can color be used effectively in a way that doesn’t create a hierarchy across groups or reinforce stereotypes?
* Grouping – When does combining demographic groups into a category misrepresent data or disguise a gap? How can we be transparent about trade-offs and acknowledge which groups are missing?
* Working with Small Populations – How to present information about small sub-populations while protecting privacy and communicating the statistical significance of disparities.
* Language – How can language be used to show respect and thoughtfulness about the people connected to the data?
* Layout – Can data be presented in a way that avoids perpetuating cycles of advantaging or centering certain communities while othering the rest?
* Context – How can we acknowledge in the visualization the complex and nuanced topics that underlie the data, such as the social construction of race or social determinants of health?
* Data Quality – How do we identify and communicate the limits of data quality and completeness?

The presentation will be followed by a facilitated, open forum discussion of the challenges and successes others have had in their organizations.


Elevating Population Health with Actionable Analytics

Population health analytics are key to understanding the management, strategy, and success factors of value-based care. Here at Hopkins, we have strived to drive actionable analytics. It's good to know how many diabetics are in your population; it's better to also know what their utilization has been, what their cost and utilization are predicted to be, what their impactable opportunities for intervention are, and what their SDOH factors are. These factors elevate an analysis into an enhanced understanding of the population today, the steps that can and should be taken, and the outcomes that can be expected. Applying a layered analytics approach can help us view the population on a spectrum of current health needs and future predicted needs.

Historically, we have focused on patients who "bubble to the top," if you will: multimorbid, frail, highly complex patients. These are usually a small proportion but represent the highest expense. Going forward, we will want to look at the rising-risk population. Focusing on those who have an opportunity for preventing a sentinel event (an inpatient admission, a readmission, a high-cost event, etc.) and making a plan for prevention is the future of population health management. We all want our patients to be healthier, but resources are not unlimited, so we need to focus on those who need us while keeping an eye on the future and becoming proactive rather than reactive. Further, with the ever-increasing focus on health equity and value-based care outcomes, Johns Hopkins has focused on these impactable data points using our amenability index, viewing it through multiple lenses and allowing for a layered approach and nuanced understanding of the patient population, how financial and clinical VBC insights can come together, next steps to take for the population, and overall strategy in population health.


Ethical Considerations of Integrating ChatGPT into Your Analytics Workflow

Artificial intelligence (AI), the use of computers to perform intelligent tasks typically done by humans, has received increased attention and scrutiny within health care research and analytics. Its applications have generated many differing opinions as to what its role should be going forward in enhancing overall patient care and ensuring the privacy and security of patient data. ChatGPT utilizes artificial intelligence and has disrupted numerous industries since its release by OpenAI in the fall of 2022, including health care data analytics. We plan to discuss the many different ways that ChatGPT can be utilized to enhance an analytics team's overall performance and output, as well as the serious risks and cautions that need to be addressed before moving forward. As with any foundational shift in technology, caution, care, and foresight are necessary in order to protect the data and privacy of patients.

Some areas that could be considerably helped by the assistance of ChatGPT are:
1. Initial data model design for varying subject domains, such as cardiology, urology, and various cancer-related disease states.
2. Code optimization: improving SQL code efficiency or converting code from one language to another (R -> Python, etc.), which could help jump-start code development and create easy first drafts for rudimentary analyses and computation.
3. Automation of routine documentation development and workflow diagrams, cutting down on manual maintenance.

Some immediate areas of concern include:
1. Blatant misidentification of certain disease states (wrong ICD-9/10 code assignment, incorrect interpretation of labs).
2. Bias and lack of representation of marginalized communities.
3. The overall risk of releasing patient health information outside of an institution's internal checks and balances.

All of the above, and more, must be thoroughly thought through before unleashing this upon broad health care data and analyses.


Health System Integration Impact on Research Data Services

The partnership of two health systems uniting under one academic and research core provided many opportunities for synergistic change. This process has been ongoing for almost two years and can offer insight to others tasked with similar endeavors. For this presentation we will focus on the integration of highly disparate research data sets from two separate organizations and EHR systems, the harmonization of processes for obtaining research data, and lessons learned. Specifically, we will:

- Provide an overview of pre-existing organizational structures – including relative sizes in both clinical and research settings
- Discuss timelines and process change management – including how we internally prioritized work to meet ambitious externally-set benchmarks, and communicated with stakeholders in real time
- Discuss key elements of the integration, particularly from a research data and process perspective – including legal, regulatory, infrastructure, workflow, and dataflow elements
- Review major decisions and roadblocks encountered by relevant stakeholders and groups
- Map beginning state to current state to future goal state


How to solve the nursing shortage

The U.S. is experiencing a nursing shortage. Surveys and popular opinion estimate that 30-90% of nurses have left or are considering leaving the profession, which would have a profound negative impact on healthcare operations and patient outcomes. We examined the current U.S. nursing shortage to quantitatively determine whether the root cause is a decrease in the number of nurses or an increase in demand for nurses. We utilized large public, quantitative data sets to measure changes in nurse employment and nurse licensure over 25 years and validated our findings with independent data sources. We examined multiple potential effectors, including population changes, salaries, and COVID burden. Our findings demonstrate that there are currently more nurses (licensed and employed) than ever before, even when adjusted for population. However, the number of nursing job postings indicates there is a shortage, despite higher supply than ever before. This indicates a marked shift in demand rather than a traditional supply-side shortage. We discuss potential solutions to address the root cause of the staffing shortage based on these findings.
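
A minimal sketch of the core supply calculation, normalizing nurse counts by population over time, is shown below; the file and column names are illustrative stand-ins for the public data sources described.

```python
# A minimal sketch: per-capita nurse supply from two hypothetical extracts.
import pandas as pd

nurses = pd.read_csv("rn_employment_by_year.csv")  # year, employed_rns
pop = pd.read_csv("us_population_by_year.csv")     # year, population

supply = nurses.merge(pop, on="year")
supply["rns_per_100k"] = supply["employed_rns"] / supply["population"] * 100_000

# A rising per-capita series alongside rising job postings points to a
# demand-side shift rather than a shrinking supply.
print(supply.sort_values("year")[["year", "rns_per_100k"]].tail(10))
```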


Keeping the Lights On: Creating Insights into Hospital Profitability

Understanding hospital profitability by service is crucial for healthcare organizations to remain financially sustainable and to provide high-quality care. Once equipped with these insights, healthcare providers and administrators can identify and prioritize services from a cost-effectiveness perspective and home in on areas that will increase profitability. This information also helps providers allocate resources and make strategic decisions about service lines, staffing, and equipment.

At the University of Utah we have long invested in a cost accounting solution that has evolved in data complexity and has become a driving force in decision making around capacity and growth throughout our institution, from individual service lines up to the executive level. That spotlight also requires extended validation processes to ensure the data is accurate. An accurate snapshot of financial performance also requires a paradigm shift in how we think about low-margin, high-volume services that engage with patients and send them on to other, more profitable care. An innovative downstream analysis is also included that fairly accounts for this and lends itself to a more complete representation of profitability.

We have built a suite of interactive tools to help our capacity management and executive teams understand and slice the profitability of services by inpatient bed day, by case, and by OR minute. Coupled with other meaningful insights from the data, our tool gives users a consumable way to comprehend their financial outlook. This allows them to continue to do what is best for our patients and aids in the growth and management of our day-to-day operations. We have to ensure financial viability to care for all our patients effectively. Understanding hospital profitability and visualizing it in a meaningful way, down to each individual sub-service and procedure, is essential for our healthcare organization to make informed decisions about resource allocation.
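
The per-unit profitability views described above reduce to a simple normalization. Below is a minimal sketch with illustrative column names, not the actual cost accounting model.

```python
# A minimal sketch: margin per case, per bed day, and per OR minute by
# service line. File and column names are hypothetical.
import pandas as pd

cases = pd.read_csv("costed_cases.csv")
# columns: service_line, net_revenue, total_cost, bed_days, or_minutes

cases["margin"] = cases["net_revenue"] - cases["total_cost"]
by_service = cases.groupby("service_line").agg(
    margin=("margin", "sum"),
    cases=("margin", "size"),
    bed_days=("bed_days", "sum"),
    or_minutes=("or_minutes", "sum"),
)
by_service["margin_per_case"] = by_service["margin"] / by_service["cases"]
by_service["margin_per_bed_day"] = by_service["margin"] / by_service["bed_days"]
by_service["margin_per_or_minute"] = by_service["margin"] / by_service["or_minutes"]
print(by_service.sort_values("margin_per_bed_day"))
```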


Leveraging APIs, Virtualization and Dashboards in Disparate Research Administrative Systems Across Multiple Institutions

Tasked with supporting research administrative process improvement and providing transparency into processes for systems across two institutions that were not integrated, we leveraged APIs and a data virtualization platform to integrate where the systems were not. This allowed data to flow between systems when possible and ultimately resulted in a dashboard that lets study teams and investigators track research study start-up, manage their research studies, and escalate issues.
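
As a minimal sketch of the integration pattern, the snippet below pulls study records from two hypothetical institutional REST APIs and joins them into a start-up tracking view; every endpoint and field name is an assumption for illustration.

```python
# A minimal sketch: join study records from two systems that share a study ID.
import pandas as pd
import requests

irb = pd.DataFrame(requests.get("https://inst-a.example.edu/api/irb/studies").json())
ctms = pd.DataFrame(requests.get("https://inst-b.example.edu/api/ctms/studies").json())

# Join on a shared study identifier to track start-up milestones end to end.
tracker = irb.merge(ctms, on="study_id", suffixes=("_irb", "_ctms"))
tracker["days_in_startup"] = (
    pd.to_datetime(tracker["activation_date"]) - pd.to_datetime(tracker["submission_date"])
).dt.days
print(tracker[["study_id", "days_in_startup"]].sort_values("days_in_startup", ascending=False))
```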


Migrating Vizient Data to AWS

Using Amazon Web Services serverless technology and the Athena query service, Memorial Sloan Kettering Cancer Center migrated its large Vizient dataset off the mainframe as part of its long-term analytics strategy. MSKCC is applying lessons learned from the Vizient project to future migrations.
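
For readers new to Athena, here is a minimal sketch of running a serverless query against a migrated dataset with boto3. Database, table, and bucket names are placeholders, not MSKCC's environment.

```python
# A minimal sketch: run an Athena query and page through the first results.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

qid = athena.start_query_execution(
    QueryString="SELECT discharge_year, COUNT(*) AS cases "
                "FROM vizient_encounters GROUP BY discharge_year",
    QueryExecutionContext={"Database": "vizient"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the serverless query finishes.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```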


MLOps for Custom Models Running in Epic

Model visibility is a cornerstone of modern machine-learning practice in production. As we implemented custom models in Epic, we identified persistent problems diagnosing discrepancies between scores generated by the real-time system and the scores we expected based on a retrospective Clarity extract. Because the potential causes of discrepancies are numerous, the chart review required to reconstruct the state of the EHR at the time of the score calculation is expensive. We describe our current solution to the issue, which includes real-time input monitoring for 7 custom models, and our process for ensuring models are behaving as they should in an ever-evolving EHR landscape.
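
A minimal sketch of the comparison at the heart of this kind of monitoring is shown below: joining scores captured in real time against scores recomputed from a retrospective extract, and separating input drift from true scoring discrepancies. Table and column names are illustrative, not our production schema.

```python
# A minimal sketch: flag real-time vs. retrospective score discrepancies.
import pandas as pd

realtime = pd.read_csv("realtime_scores.csv")         # score_id, score, inputs_hash
retro = pd.read_csv("clarity_recomputed_scores.csv")  # score_id, score, inputs_hash

merged = realtime.merge(retro, on="score_id", suffixes=("_rt", "_retro"))

# Separate "inputs changed after scoring" from "same inputs, different score";
# the two failure modes require very different investigations.
merged["inputs_differ"] = merged["inputs_hash_rt"] != merged["inputs_hash_retro"]
merged["score_delta"] = (merged["score_rt"] - merged["score_retro"]).abs()

alerts = merged[(merged["score_delta"] > 0.05) & (~merged["inputs_differ"])]
print(f"{len(alerts)} discrepancies not explained by input drift")
```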


Noting Unconscious Bias: What Can Be Learned from Running NLP on Physicians’ Treatment Notes?

Seeking to learn what demographic biases may be hiding in the text of emergency physician notes, we trained an NLP model to detect negative patient descriptors. We then compared the odds of descriptor use along patient demographic lines, suggesting areas that unconscious bias training may need to address further.
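
A minimal sketch of the second step, estimating adjusted odds ratios for descriptor use with logistic regression, might look like the following; variable names are illustrative, and the NLP labeling step is assumed to be done.

```python
# A minimal sketch: adjusted odds ratios for negative-descriptor use.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

notes = pd.read_csv("labeled_notes.csv")  # has_negative_descriptor, race, sex, age

model = smf.logit("has_negative_descriptor ~ C(race) + C(sex) + age", data=notes).fit()

# Exponentiated coefficients are adjusted odds ratios with 95% CIs.
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(odds_ratios)
```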


Overcoming Data Challenges to Develop a Predictive Model for Retinopathy of Prematurity in a Neonatal ICU

Retinopathy of prematurity (ROP) is a leading cause of childhood blindness. To facilitate targeted ROP screening for premature or underweight newborns in a neonatal ICU, we aim to develop a predictive model for ROP risk based on easily obtained clinical covariates and much more challenging high-resolution physiologic data from bedside monitors and respiratory support devices. We will present our successful data acquisition and preprocessing approaches for the high-resolution data and report on the importance of this data for predicting ROP. The challenges of acquiring bedside monitor data are two-fold. The obvious difficulty is the sheer volume, as bedside monitors record data at a frequency greater than one value per second; it takes significant time and computational resources simply to assure that acquired data is complete and reasonable. In addition, our Philips bedside monitor data is stored in four relational tables that are linked through a sequence of keys, requiring three sets of mapping IDs to identify the monitor data corresponding to a target set of newborn hospital encounters. Heterogeneity in respiratory support data is the major obstacle to incorporating it into a predictive model. Oxygen support methods change over time, as does how the corresponding data are stored. Longitudinal respiratory support data were classified using Epic flowsheet identifiers that varied over time, which created significant challenges in harmonizing data over the study period. To organize this data for use, we worked with a clinical informaticist and a neonatologist to create logic that reproducibly identified specific support classes, and the respiratory device settings relevant to each class, without the need for manual chart review.
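
The key-chaining problem is easiest to see in code. Below is a minimal sketch of resolving encounter-to-stream mappings with a chain of joins before filtering the high-volume monitor table; all table and column names are hypothetical stand-ins for the Philips schema.

```python
# A minimal sketch: chain three mapping tables from encounter to monitor stream.
import pandas as pd

encounters = pd.read_parquet("target_encounters.parquet")  # encounter_id
map_bed = pd.read_parquet("encounter_to_bed.parquet")      # encounter_id, bed_id
map_device = pd.read_parquet("bed_to_device.parquet")      # bed_id, device_id
map_stream = pd.read_parquet("device_to_stream.parquet")   # device_id, stream_id

stream_keys = (
    encounters
    .merge(map_bed, on="encounter_id")
    .merge(map_device, on="bed_id")
    .merge(map_stream, on="device_id")
)

# Only now can the >1 Hz vitals table be filtered to the study cohort.
vitals = pd.read_parquet("monitor_values.parquet")  # stream_id, time, value
cohort_vitals = vitals.merge(stream_keys[["encounter_id", "stream_id"]], on="stream_id")
```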


Pediatric Mental Health Data Solutions - the First Step is Identifying the Problem

The CDC has determined that more than one third of adolescents have experienced poor mental health. An increase in Emergency Department visits by pediatric patients experiencing mental health crises led Children’s Hospital Colorado (CHCO) to declare its first-ever state of emergency. With unprecedented volumes of behavioral health (BH) visits, CHCO needed a mechanism to track BH patients across the system in order to better serve these patients and yield better outcomes. A tracking system was developed in the electronic health record (EHR) system. This tracking system gave providers an easy way to determine which patients they need to be seeing, enabled leaders to obtain a high-level overview of the problems our patients were experiencing, and provided data that can be used to help advocate for policy changes in the state of Colorado. Based on the documentation completed as part of this tracking system, data solutions were created to aggregate the required data and give the system the ability to view outcomes and evaluate volume and staffing needs.


Poisson Modeling Predicts Acute Telestroke Patient Call Volumes

Introduction: Predicting the frequency of calls for telestroke and emergency neurological consultation is essential for preparing adequate staffing for the immediate management of each call, which is required by the highly time-sensitive nature of treatment. Here, we aim to develop a probability model that predicts the volume of hourly telestroke calls over a 24-hour period. This may inform the number of physicians needed to cover such calls in real time.

Methods: We performed an IRB-approved retrospective review of patients from January 2018 through December 2022 within an institutional telestroke database at a large nonprofit multi-hospital system. All patients ≥ 18 years who were subject to a phone or video telestroke activation were included. Telestroke calls were quantified in frequency by hour, as well as quantity per year and day vs. night. The call frequency for the various units of time was plotted in R and compared to the expected distributions corresponding to the calculated means. Poisson probability mass function and cumulative distribution function predicted probabilities were calculated and analyzed. A univariable Poisson regression model was fit to four years of historical data (2019-2022), after verifying assumptions, to predict the expected number of calls per day at six-month increments. A comparison with linear regression was also performed.
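
For illustration, a minimal sketch of the two modeling pieces (Poisson PMF/CDF probabilities for hourly counts, and a Poisson regression on daily counts over time) is shown below in Python, with simulated data rather than the study's actual estimates; the original analysis was performed in R.

```python
# A minimal sketch: Poisson call-volume probabilities and a Poisson GLM.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import poisson

# Probability of k calls in an hour given an hourly mean (illustrative value).
hourly_mean = 0.6
for k in range(4):
    print(f"P(X={k}) = {poisson.pmf(k, hourly_mean):.3f}, "
          f"P(X<={k}) = {poisson.cdf(k, hourly_mean):.3f}")

# Univariable Poisson regression of simulated daily call counts on time.
days = pd.DataFrame({"t": np.arange(365 * 4)})
days["calls"] = np.random.poisson(lam=10 + 0.002 * days["t"])
fit = sm.GLM(days["calls"], sm.add_constant(days["t"]),
             family=sm.families.Poisson()).fit()
print(fit.summary())
```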

 


Prioritizing the Most Critical Needs for AI in Healthcare: Considerations for embarking on a GenAI development journey

Microsoft and UNC Health discuss their collaboration on a generative AI initiative. The presentation covers:

  • Generative AI and Healthcare
  • Developing GenAI Capabilities
  • Challenges When Choosing Technology


Recipe for Real-World Success in Governing Data

Learn how St. Luke’s Health System went from no data governance program to a fully mature data governance program in less than four years. Having a data governance program that delivers results takes not only an engaged and passionate data governance team, but also a hunger and desire from others throughout your business to ensure data assets are formally, properly, proactively, and efficiently managed throughout the enterprise. This ensures the accuracy and meaning of the data are understood and trusted, and that there is accountability in place for the quality of the data. Learn high-level milestones and real-world deliverables obtainable through active enterprise engagement and a mature data governance program.

 


Reducing Cardiac Arrest in the Pediatric Cardiac Intensive Care Unit

 

Background: Program development and large-scale change within clinical environments require strategic integration of data and analytics. In 2019, there was an increased incidence of cardiac arrest (CA) in our CICU. A multidisciplinary team was established and successfully implemented a bundle of interventions with the aim of reducing the incidence of CA. We used local data and national benchmarking to create a sense of urgency and drive change.

Methods: A root cause analysis of the rising CA rate was performed and did not point to a single etiology, but rather to several gaps in practice suggesting multiple areas for improvement. Therefore, a comprehensive Cardiac Arrest Reduction and Excellence (CARE) program with multiple integrated subgroups was developed. Evaluation and sharing of baseline risk-adjusted CA models was essential for getting buy-in from team members. The continued analysis of CA data, along with data highlighting compliance with process measures, has allowed us to tailor interventions and make ongoing adjustments.

Results: Since implementation of the CARE team in 2019, we have seen a reduction in CA incidence from above the national average to an incidence in line with the national average. Our risk-adjusted CA observed-to-expected ratio has decreased from 1.44 to 1.11, highlighting that our observed cardiac arrest rate is now closer to what is expected for our patient population.

Conclusions: Through data utilization, we successfully implemented a large-scale QI initiative. The implementation and ongoing work of this QI initiative has resulted in a reduction in our cardiac arrest incidence.

 


Reducing Healthcare Carbon Emissions with Real-World Data

Seattle Children’s leveraged its real-world data to dramatically reduce anesthesia-related CO2e emissions. This was accomplished by providing anesthesia leaders a self-service clinical management solution (AdaptX) that allowed them to fully leverage their real-world EMR data. With this solution, they could quickly and easily monitor, evaluate, and adapt care across patients, treatments, teams, and workflows.

Healthcare is a major contributor to pollution and greenhouse gas (GHG) emissions, producing 2 gigatons of carbon dioxide equivalents (CO2e) per year: 8.5% of total U.S.-based GHG emissions, or 4.4% of global emissions. The operating room generates a third of a hospital's emissions. Inhaled anesthetic agents are potent GHGs that contribute 7% of a hospital's emissions, and because they persist in the atmosphere longer than CO2, they have an extremely high global warming potential. Low-emission anesthesia is possible but requires changes in clinical practice. Evidence-based recommendations include reducing fresh gas flow during induction and maintenance of anesthesia, avoiding high-impact agents, and increasing use of intravenous agents. Real-time emission data, derived from clinical systems and fed back to providers, can drive these practice changes.


Reliability M&M’s – How we let the “M” say it for us!

I recently visited family in India. During conversations, I would begin talking about my work building High Reliability dashboards for my organization, only to be interrupted with, “But why did they hire YOU for that?” I am a clinician by training and practice. I am not a Tableau expert, and building dashboards was not part of my repertoire until now. I want to share my journey of guiding an organization through building its High Reliability dashboard and offer a clinician’s perspective on connecting the dots between databases, operational definitions, technical build, clinical relevance and context, and executive leadership vision. This journey began with my role leading the Metrics and Measurement (M&M) workgroup, which identified 9 reliability and safety metrics. We solidified operational definitions, data sources, and owners through engagement with multidisciplinary stakeholders. I built visualizations for each of these approved metrics in Tableau, aiming for ‘person-centered’ rather than ‘number-centered’ representations as a reminder of the “Why?” behind the High Reliability initiative: to minimize patient and employee harm. We successfully deployed the High Reliability dashboard within 210 days of kick-off. I want to share my lessons learned as a “non-data expert” growing and evolving in the analytics space. As I was learning the analytics side, my clinical background empowered me to engage with different stakeholders, understand the clinical impact, and look at the data and evaluate the process through a unique lens. Our differently structured Performance Improvement team helped me build and deploy a wholesome product.

 


Revolutionizing the Way We Interact with Technology: Amazon’s Alexa Echo

 

Many seniors today face isolation issues that negatively affect their physical and mental health. These issues were especially exacerbated by the recent COVID-19 pandemic. Our project explored the ways that a virtual assistant device powered by artificial intelligence can support and promote seniors’ physical and mental health while improving their quality of life. It included consideration for caregiver support by incorporating a function to facilitate caregiver communication. This was accomplished and implemented in conjunction with providing nutrition through a Meals on Wheels program. This study observed how cognitively impaired older adults responded to technology that leverages artificial intelligence. The overall response from the participants opened the door to a multitude of opportunities for others to consider. The data collected will be shared, along with results that could be used to develop a more in-depth analysis of interventions for future population health studies.

 


Supply Chain as a Service (SCaaS)

A Digital Healthcare Platform requires a Digital Supply Chain that is “always available” and “always on”. This requires a paradigm shift from a "pipeline" to a "platform". Supply Chain Management at Mayo Clinic is transforming the way users request and interact with data, analytics, machine learning, and other products. Users no longer have to wait for a report to be run to get answers to their business questions, because answers are available through a service-oriented platform that is always available and always accessible to the users. This approach has five main pillars:

1. Analytics move toward predictive analytics (What will happen?)
2. Products are certified, catalogued, and designed to answer key questions for each business area
3. Real-time, bi-directional information sharing with internal and external partners without the need for bulk data transfers
4. Self-service tools require minimal Information Technology support and are accessible in a controlled manner using multiple tool-agnostic formats
5. A control tower allows for monitoring business processes, helps identify opportunities for improvement, predicts future disruptions, and alerts key personnel about potential issues

Imagine the possibility of making your users more informed, more empowered, and less reliant on other areas to make decisions.


The Development of a Predictive Model to Identify Social Determinants of Health Risk in a Patient Population

According to the World Health Organization (WHO), social determinants of health (SDOH) are the non-medical, socioeconomic factors that account for between 30-55% of health outcomes. Being at risk for even one factor can ultimately produce poorer health outcomes for an individual. Addressing SDOH by providing community assistance programs and resources helps remove these barriers to health care and treatment and improves health equity in a patient population. Cleveland Clinic has provided access to a questionnaire that patients can fill out prior to any visit within the organization; however, utilization of the survey is low (only 13% of the population), and data suggests those who fill it out come from more affluent areas and backgrounds. As a result, many patients go unidentified, and assistance is not provided in a timely or effective manner. We have been developing a predictive model to help identify the risk of a patient being affected by SDOH (specifically food insecurity, lack of transportation, and financial insecurity). Cleveland Clinic plans to use this model to supplement the existing survey data and identify patients who could benefit from social work outreach and community resources. In this presentation, we will discuss the process of developing this model, including the various sources of data considered for model inputs, the modeling techniques used, and the validation methods. We will cover how we used natural language processing on patient notes as supplemental data to boost the model. We will explain how we controlled for bias during model development. Finally, we will touch on lessons learned, successes, and failures of developing a model for this particular use case.
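
As a rough illustration of combining structured inputs with note text, below is a minimal sketch using a TF-IDF text branch alongside structured features. The files, features, and learner are illustrative assumptions, not Cleveland Clinic's actual model.

```python
# A minimal sketch: structured covariates plus an NLP branch over note text.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Hypothetical file: note_text, structured covariates, one SDOH label.
data = pd.read_csv("sdoh_training.csv")
X = data[["note_text", "age", "area_deprivation_index"]]
y = data["food_insecurity_label"]

features = ColumnTransformer([
    ("notes", TfidfVectorizer(max_features=5000), "note_text"),  # NLP branch
    ("structured", "passthrough", ["age", "area_deprivation_index"]),
])
model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model.fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```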


The REAL Forecasting Planning methodology: Quarterly 18-month Inpatient Census Forecast

This presentation provides an overview of Children’s Colorado's multi-faceted inpatient census forecasting methodologies, which encompass three distinct timeframes: hourly 3-day forecasts, quarterly 18-month forecasts, and annual 5-to-10-year forecasts. While these forecasting methods differ in how they are used, the precision they demand, the decisions they support, and the technology they rely on, the collaboration with BigBear.AI has enabled the forecasting teams to adopt a meticulous, data-driven strategy for planning operations, finances, and long-term strategy.

The presentation starts by comparing the three forecasting methods, explaining the roles of people involved, the needed accuracy of predictions, the kinds of decisions each method helps with, and the technology used.

The focus of the presentation is on the quarterly, 18-month forecasting process, providing insight into its input parameters, valuable lessons learned from its application, and the significant business value it brings to Children’s Colorado. This approach demonstrates a delicate balance between practicality, data-driven observations, and the vital mission of delivering top-notch healthcare services to children.
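
As a generic illustration of an 18-month-ahead census forecast, below is a minimal sketch using a seasonal time-series model; it is not BigBear.AI's methodology or Children's Colorado's parameters.

```python
# A minimal sketch: seasonal ARIMA forecast of monthly census, 18 steps ahead.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

census = pd.read_csv("monthly_midnight_census.csv",  # hypothetical extract
                     index_col="month", parse_dates=True)["avg_census"]
census = census.asfreq("MS")

# Yearly seasonality for monthly data; orders are illustrative.
model = SARIMAX(census, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)

forecast = model.get_forecast(steps=18)
print(forecast.predicted_mean)
print(forecast.conf_int())  # uncertainty bands for capacity planning
```
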

The Road to AI-Enabled Healthcare through Data and Analytics

Seattle Children’s is a nonprofit children’s hospital with a mission to improve the health and well-being of children. As part of its journey to becoming an AI-enabled healthcare organization, the hospital has developed a variety of applications of data and analytics in Google Cloud to drive clinical and operational decisions. The hospital has already made significant progress in this journey, with clinical applications in OR readiness, frugal financing in the cloud, clinical outcomes, and early steps toward large language models. The success of these clinical applications is rooted in cloud-scalable data and analytics, innovation techniques balanced with healthcare cultural awareness, and a hyper focus on being a leader in the pediatric space.


Transforming Service Delivery with a Hub-and-Spoke Model

Facing the familiar story of an insatiable demand for data and analytics, challenges prioritizing work, and questions as to whether our work was creating value, the Analytics team at The Ottawa Hospital undertook a transition from a centralized data request service to a hub-and-spoke model. By creating mutually beneficial partnerships with clinical operations (i.e. the “spokes”) the goal is to ensure data and analytics activities are squarely focused on organizational priorities that would create value. Underpinning this transition was the development of a clinical data pipeline and the roll-out of a self-serve synthetic data platform to enable timely, secure access to high-quality data. Identifying key champions in the spokes focused our data literacy education and training activities, reducing our data wrangling and allowing us to pivot towards more value-added activities including enhancing data availability and advanced analytics. We are already seeing a diversion of data requests to self-service data discovery, and the changes to our data education and training are helping to build a more data literate workforce. The response from clinical operations to the change in our service delivery model has been positive, with multiple clinical teams requesting to be onboarded as spokes. The lessons learned from this transformation include the importance of halting existing activities, integrating with quality improvement teams, and the need for shifting skill sets within analytics teams. The Ottawa Hospital's shift to a hub-and-spoke model offers valuable insights for other healthcare organizations seeking to improve their data and analytics capacity.


Translating Machine Learning Research into Implementation: Visual Analytics Tool for Model Evaluation, Threshold Selection, and Explainability

We present a novel model evaluation dashboard optimized for making implementation decisions in cross-functional healthcare settings. The dashboard, developed in RShiny, aims to reconcile 3 critical audiences necessary for effective implementation: (1) clinical stakeholders, (2) data scientists, and (3) electronic health record (EHR) analysts and data engineers who convert retrospective models into prospective implementations. We focus the presentation on pediatric sepsis classification in the emergency department.
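
A minimal sketch of the threshold-selection view such a dashboard centers on, pairing sensitivity and PPV with the alert workload each threshold implies, is below; the data is simulated and the workload divisor is illustrative.

```python
# A minimal sketch: threshold table pairing performance with alert burden.
import numpy as np
import pandas as pd
from sklearn.metrics import precision_recall_curve

# Simulated rare outcome (e.g., sepsis) and an imperfect risk score.
y_true = np.random.binomial(1, 0.03, 20000)
y_score = np.clip(y_true * 0.4 + np.random.beta(2, 8, 20000), 0, 1)

precision, recall, thresholds = precision_recall_curve(y_true, y_score)

# A coarse grid keeps the table readable for clinical stakeholders.
idx = np.linspace(0, len(thresholds) - 1, 25, dtype=int)
table = pd.DataFrame({
    "threshold": thresholds[idx],
    "ppv": precision[idx],
    "sensitivity": recall[idx],
    "alerts_per_day": [(y_score >= t).sum() / 365 for t in thresholds[idx]],
})
print(table[table["sensitivity"] >= 0.80].head())
```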


Understanding Bias, Fairness, and Harm in the Quest for Responsible AI

Understanding the difference between data bias and algorithmic harm is crucial for clinical and public health leaders, as it enables them to identify and address the root causes of health disparities. Data bias refers to inaccuracies or misrepresentations in the data, often caused by underrepresentation of certain population groups. Algorithmic harm, on the other hand, results from biased algorithms that perpetuate or exacerbate existing disparities, leading to negative consequences for vulnerable populations. By recognizing the distinction between data bias and algorithmic harm, organizations can adopt a data ethics approach combined with systems thinking to effectively utilize qualitative studies in improving equitable access to care. This strategy enables healthcare organizations to work towards fairer healthcare systems that consider the needs of all population groups, including those who are underrepresented. This presentation will provide valuable insights and stimulate essential discussions on the importance of mitigating intersectional bias in healthcare algorithms to ensure equitable outcomes. By incorporating data ethics and systems thinking, clinical and public health leaders can better understand the complexities of health disparities and work towards creating a more just and inclusive healthcare environment for all.


Unlocking the Power of EHR System Logs to Improve Clinical Workflows with Process Mining

Process mining, a family of data science techniques, is used to discover and improve workflows using data from system logs. These techniques can be applied to data from the audit trails and logging capabilities natively found in the EMR to produce valuable insights on clinical processes. They can be used to map out a complex process, depict it graphically, and examine it, revealing bottlenecks, wasteful variation, and unnecessary wait time. In this presentation, we'll introduce process mining techniques. We'll take a complex real-world example (the chemotherapy infusion process) and use the open-source PMTK process mining tools to show the power of this technique, using EHR data to map out a real-world clinical process and reveal opportunities to improve the process, and subsequently improve patient care.
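
For a flavor of what this looks like in code, below is a minimal sketch of directly-follows process discovery from an EHR-style event log. It uses pm4py, a widely used open-source process mining library, for illustration (the presentation itself uses the PMTK tools), and the column names are assumptions.

```python
# A minimal sketch: discover a directly-follows graph from an event log.
import pandas as pd
import pm4py

events = pd.read_csv("infusion_events.csv")  # visit_id, step, event_time
log = pm4py.format_dataframe(events, case_id="visit_id",
                             activity_key="step", timestamp_key="event_time")

# Which step follows which, and how often - the process map itself.
dfg, start_acts, end_acts = pm4py.discover_dfg(log)
pm4py.view_dfg(dfg, start_acts, end_acts)

# Timing between consecutive steps highlights bottlenecks and wait time.
perf_dfg, _, _ = pm4py.discover_performance_dfg(log)
```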


Using Epic data and Tableau to visualize medicine patient activity across hospitals

The Medicine program in the Edmonton Zone within Alberta Health Services accounts for over half of all funded capacity in the zone across 10 acute inpatient hospitals. With so many patients moving in and out of medicine services, it is crucial to understand the whole patient-flow story, from input to throughput and finally to output, in recent weeks. Prior to this project there was not a single place operational and physician leaders could go to get information on recent activity broken down by individual hospital service. The solution was to build a new data visualization tool in Tableau that is easily accessible and user-friendly. The dashboard uses day-old extracts from Epic’s Clarity database and includes information on where patients come from externally by geography and internally by hospital service, how long they spend in Emergency before admission, whether they were referred, breakdown by level of care, which service patients are transferred to when they leave the medicine program, and where they go after discharge. Since the dashboard rolled out, operational leaders have been able to improve their understanding of the interdependencies of key flow metrics and create a base level of situational awareness. They can drill down to detailed hospital services by site or roll up to the entire medicine program across all 10 hospitals. Leaders now have a deeper understanding of how patients move through the medicine program and are able to provide better quality patient care as a result.


Using the Population Health Data Model to Develop Value-Based Care Analytic Tools

Primary care providers (PCPs) are considered the quarterbacks of value-based care, yet the analytic tools available to them for understanding opportunities for improving patient experience and health outcomes while efficiently spending precious healthcare resources remain basic at best. The optimal analytic tools allow for simultaneous patient- and aggregate-level understanding of health drivers that are influencing outcomes. Ideally, PCPs can monitor panel-level metrics and at the same time, optimize their schedule to meet patient needs, and maximize their potential in value-based arrangements. At a large integrated health system in the Midwest, a population health data model (PHDM) has been integrated into the data warehouse of the electronic medical record, Epic. The PHDM is the data infrastructure used to build the Primary Care Population Health Dashboard (PCPHD). This dashboard will allow PCPs to better manage their time, resources, patient needs, and improve their own and organizational performance in value-based contracts.