Understanding “Consent” in the Age of Big Data and Human Research
Recently, associate professor and bioethicist Jon Merz gave a talk on legal privilege in the context of consent (or the lack thereof) in emergency medicine and human research. Merz provided examples of court cases and research studies in which the doctrine of informed consent in emergency medicine and the regulations governing emergency research are contradictory, and ethically questionable, because the patient's right to informed consent is waived under certain circumstances. The doctrine of informed consent is a legal concept that applies to all physicians in every field of medicine. It is premised on the notion that "[e]very human being of adult years and sound mind has a right to determine what shall be done with his own body . . ." For a patient to be considered legally informed, the doctrine requires that the patient have reasonable knowledge of the procedure to be performed as well as some understanding of the nature of the risks involved. This doctrine conflicts with the federal regulations at 61 FR 51498 and 21 CFR 50.24, which allow an Institutional Review Board (IRB) to grant physicians waivers of consent for a limited class of research activities involving human subjects who need emergency medical intervention but who cannot give informed consent because of their life-threatening medical condition. Emergency research involves the most vulnerable population of study subjects, that is, a population with no capacity to control what happens to them and no capacity to consent, in a setting where emergency circumstances require prompt action and generally provide insufficient time and opportunity to locate and obtain consent from each subject's legally authorized representative.
The concept of "privilege" also applies to studies that utilize big data sets, which fall outside the scope of emergency medicine and research but still within the confines of human research and waivers of consent. In "big data" research, waivers of consent are often granted (provided the required waiver criteria are met), as the researcher is typically mining large pools of data from various sources, and it would be unreasonable or impracticable to obtain informed consent from each individual. In some cases, consent may be implied by language in a terms-and-conditions agreement; however, few consumers read or understand those agreements. Ultimately, researchers are accessing information about a person to understand behavior or other health factors while the individual remains unaware that research is being conducted with their data.
Michael Zimmer, professor at the University of Wisconsin, argues that in the age of new technology, IRBs have an important responsibility to understand the changing technological landscape, specifically as it relates to potential harm. Researchers often assume "the data is already public, so what harm could possibly happen?" In big data research, "harm" may present in a non-physical form with intangible consequences, such as the loss of dignity and autonomy that may occur when privacy is compromised or a data breach occurs. Further, Zimmer notes that "researchers often interact only with datasets, objects, or avatars, thus feel a conceptual distance from an actual human—(i.e. not human subjects research)."
Zimmer and Merz both suggest that research subjects may lose rights when waivers are granted, as their reasonable expectation of autonomy is waived. Merz's research revealed that 30% of people, if given the opportunity to consent, would opt out, a figure consistent with participation refusal rates in clinical trials. If the same scenario were applied to big data research, where data is mined from social media sites (e.g., Facebook, Twitter, OkCupid), would individuals willingly provide their data if consent were presented to them? The Facebook emotional contagion study and the OkCupid research study both drew backlash from social media users, who had an "expectation of privacy." "In addition to questions raised about the ethics of certain data science practices, the boundaries of open science research, and the ease of identifying the members of a given dataset, the incident reveals something else, too: People continue to give up vast quantities of their personal data to sites online, expecting privacy."
Applying Merz's finding of a 30% refusal rate to a sample of 100,000 subjects, 30,000 would refuse if given the opportunity. Granting a waiver of consent and silencing the voice and choice of 30,000 users seems unethical. Yet, with the average user not reading terms of service and lacking knowledge of web beacons, data harvesting, and web algorithms, would a well-designed informed consent for the web help users understand what is going on? It is time we have an open and honest dialogue on the importance of consent and on best practices for consent in data science research and digital environments.
Click here to continue the conversation and join CORE to contribute to ethical considerations in the area of digital and pervasive sensing research methods.