Ethical and Privacy Concerns with Suicide Risk Prediction Algorithms

Revision as of 15:24, 13 May 2021 by Praveen (Talk | contribs)


Title: Understanding Ethical and Privacy Concerns with Suicide Risk Prediction Algorithms

Mentor: Dr. Michael Zimmer

Approach: Engage in a multi-method investigation of ethical and privacy concerns related to the development of suicide risk prediction algorithms based on various medical, public, and private data sources

Summary: Suicide is the tenth leading cause of death in the United States, with more than 40,000 deaths and over one million attempts estimated annually. Despite ongoing efforts to reduce the burden of suicide and suicidal behavior, suicide rates have been increasing (by more than 25% since 1999), and 50 years of research has failed to identify any robust predictors. Even though nearly half of all suicide decedents have contact with a healthcare professional in the month before their death, suicide risk is rarely detected in such cases. There is thus a valuable and largely untapped opportunity to help at-risk individuals who interact with the healthcare system shortly before their suicide attempt. New research has used machine learning methods to analyze electronic health record (EHR) data to successfully detect more than one-third of first-episode suicidal behavior cases, on average 3 years in advance, with at least 90% specificity. Given that these predictions are based only on the structured data elements in the EHR (medications, diagnostic codes, procedure codes, etc., extracted from the Partners Research Patient Data Registry, RPDR), there is room for improvement by integrating additional information, such as public records datasets containing financial, legal, life event, and sociodemographic data. This project will explore the ethical and privacy implications of integrating publicly available “socioeconomic health attributes” to assist healthcare organizations with their suicide risk analytics and predictive modeling activities. The project will be multi-method, combining a conceptual investigation of ethical dimensions, a technical analysis of proposed algorithmic modeling, and a study of user perceptions about potential privacy threats of such efforts.
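To make the "at least 90% specificity" operating point concrete, the sketch below shows one common way a risk classifier is thresholded to guarantee a target specificity: set the decision cutoff at the 90th percentile of scores among patients without the outcome, then measure what fraction of true cases the model still detects. The data here are entirely synthetic stand-ins, not the actual EHR model or its scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic risk scores (hypothetical stand-ins for model outputs on EHR data):
# patients without the outcome tend to score lower than patients with it.
neg_scores = rng.normal(0.3, 0.10, 1000)  # no suicidal behavior
pos_scores = rng.normal(0.5, 0.15, 100)   # first-episode suicidal behavior

# Place the decision threshold at the 90th percentile of negative scores,
# so that roughly 90% of negatives fall below it (specificity ~= 90%).
threshold = np.quantile(neg_scores, 0.90)

specificity = np.mean(neg_scores < threshold)   # true-negative rate
sensitivity = np.mean(pos_scores >= threshold)  # fraction of cases detected
print(f"threshold={threshold:.3f}  "
      f"specificity={specificity:.2%}  sensitivity={sensitivity:.2%}")
```

Fixing specificity first is the usual choice in screening contexts like this one, because it caps the false-alarm rate before asking how many true cases the model can still catch.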

Student Research Activities: The REU fellows will perform the following major tasks:
• Perform a systematic literature review on the use of publicly available data within healthcare contexts, as well as user opinions regarding the use of such data.
• Assist in the development of a survey instrument to measure public opinion about the use of publicly available data in the development of algorithms across numerous contexts.
• Perform data cleaning, analysis, and data visualization using packages from Python or R.
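As a flavor of the third task, the following minimal sketch cleans and summarizes hypothetical survey responses with pandas. The column names, Likert coding, and values are illustrative assumptions, not the project's actual survey instrument.

```python
import pandas as pd

# Hypothetical raw survey responses (names and coding are illustrative only).
raw = pd.DataFrame({
    "respondent_id": [1, 2, 2, 3, 4, 5],
    "comfort_with_public_data": ["4", "2", "2", "n/a", "5", "3"],  # 1-5 Likert
    "context": ["health", "health", "health", "finance", "health", "finance"],
})

# Cleaning: drop duplicate respondents, coerce Likert codes to numbers,
# and discard responses that could not be parsed.
clean = (
    raw.drop_duplicates(subset="respondent_id")
       .assign(comfort=lambda d: pd.to_numeric(
           d["comfort_with_public_data"], errors="coerce"))
       .dropna(subset=["comfort"])
)

# Simple analysis: mean comfort rating and response count by context.
summary = clean.groupby("context")["comfort"].agg(["mean", "count"])
print(summary)
```

The same pipeline extends naturally to visualization (e.g., bar charts of the grouped means) once real survey data are available.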

Student Background: Students need to have basic computing skills and introductory knowledge of social and/or data science methodologies.