From these rulings, legal scholar Marjorie Shultz argues, autonomy is primarily recognized in terms of contact between bodies, “as a byproduct of protection for two other interests—bodily security as protected by rules against unconsented contact, and bodily well-being.” Elaine Scarry builds on this apparent confusion between autonomy and physicality: “the body is here conceived of not simply as something to be brought in under the protection of civil rights, but as itself the primary ground of all subsequent rights.”
[...]
That informed consent is a fundamental requirement for research with human subjects is firmly established. Again, this process was for a long time reactive—a collective and political response to horror. Only now is it entering a phase where ongoing maintenance and revision of policy are prioritized, as a means of addressing needs that emerge from cultural and technological change.
The 1947 Nuremberg Code provides a foundation for informed consent in international law, outlining the basic requirements for science involving human subjects. The code provides that research participants should be given “sufficient knowledge and comprehension of the elements of the subject matter involved, as to enable [them] to make an understanding and enlightened decision.” In the US, the National Research Act of 1974 established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, a group of doctors, lawyers, and scientists, with the goal of identifying basic ethical principles and guidelines for research with human subjects. This commission’s work resulted in the release of the Belmont Report in 1979, with its three main ethical principles: ‘Respect for Persons,’ ‘Beneficence,’ and ‘Justice.’
The first principle of ‘Respect for Persons’ defines an autonomous person as “an individual capable of deliberation about personal goals and of acting under the direction of such deliberation.” A capacity for making decisions, based on goals, leading to action. Taken together, these processes constitute the right to self-determination. The ethical principle of ‘Respect for Persons,’ as the Belmont Report outlines it, holds a basic view of individuals as autonomous agents, while providing that some individuals are identifiable as having diminished autonomy, and are entitled to protection. This latter category addresses those without “the capacity for self-determination,” and lists illness, disability, and incarceration as factors that may contribute to diminished autonomy.
Codified in 1991 as the ‘Common Rule’, a set of federal regulations (45 CFR 46) designed to protect human subjects taking part in research, the ethical framework of the Belmont Report was also formative for bioethics research in the US. These regulations are now continuously updated, while leaving the core ethics of the original report intact.
[...]
The practice of consent is culturally-informed and behavioral. It is an activity of daily life that is learned and practiced. As intimacy coordinator and consent educator Mia Schachter writes: “Consent is a practice of deep listening, not just for words, but also for body language, gaze, speech patterns and other non-verbal clues. Consent is an ongoing, practical approach to communication. It is a language and can be embodied. Embodiment = fluency.”
Planned Parenthood promotes a widely accepted definition of sexual consent, a practical and juridical category of interpersonal relationships, using the ‘F.R.I.E.S.’ acronym: Freely given, Reversible, Informed, Enthusiastic, and Specific.
Critiques of this model tend to focus on the flattening of desire, in all its complexity, to enthusiasm. An alternative term that has been proposed is ‘engaged.’ This modification acknowledges the necessity of continued analysis of desire: whether it serves one’s own curiosity or a sense of duty to others, and how comfort, certainty, and changing circumstances can shift one’s positive or negative sense of ‘maybe’ over time.
Schachter, building on Betty Martin’s Wheel of Consent, formalizes the shifting gradients of self-appraisal as the Yes-to-No Spectrum. In this pedagogical frame, Schachter poses important qualifiers for engaged consent: is this a learning opportunity? Am I deferring to someone else’s judgment? Does my sense of potential consequences motivate my decision?
[...]
Consent, as a reversible commitment, happens in shared time. It is an ongoing and changeable action. A consensual agreement requires ongoing attention, and attending to.
Correspondingly, attention requires our consent. As the world confronts us, through sensory and logical channels, with decisions about what to pay attention to, we open or close ourselves to witnessing, participating, or reciprocating.
Writing in Consent with Touch: Manual for Practitioners, Schachter outlines how to maintain a focus on consent through changing circumstances, and the importance of “narrating what you are doing, why, what it might feel like, and what you are looking for,” crucial advice for all clinical encounters, out of respect for bodily autonomy and attentive awareness alike.
The idea that a misplaced emphasis on autonomy leads to a neglect of necessary aspects of communication in the consent process has been widely discussed. Bioethicists Neil Manson and Onora O’Neill, in surveying the current limitations of informed consent, locate the ways in which the notion of information itself is distorted in theory and practice. In principle, information is a process to which all parties contribute, rather than a material passed between separate and autonomous participants. Like consent, information is a specific, personalized, context-dependent, norm-dependent, intersubjective, rational process. These aspects of information-as-process are occluded by the unshakeable metaphor of information-as-material, as “the ‘content’ of communication, or as something that is acquired, stored, conveyed, transmitted, received, accessed, concealed, withheld.” One person has the information, the other person needs the information, necessitating a transfer across an asymmetrical power gradient that is as close to lossless as possible.
The culture and the technology must shift. One recent change is to shift the standard for what information is relevant: “The reasonable-patient standard views the informed consent communication process from the patient’s perspective. It requires physicians and other health care practitioners to disclose all relevant information about the risks, benefits, and alternatives of a proposed treatment that an objective patient would find material in making an intelligent decision as to whether to agree to the proposed procedure.”
Traditionally, courts have “tacitly reinforced paternalism by calling on physicians as expert witnesses in informed-consent lawsuits,” resulting in a self-replicating system where “physicians decided how much information a physician should disclose to patients.”
In 2015, the UK Supreme Court found, in Montgomery v Lanarkshire Health Board, that the standard for what information should be provided to patients to constitute informed consent “will no longer be determined by what a responsible body of physicians deems important but rather by what a reasonable patient deems important.”
Information needs, even when determined by what is imagined as a reasonable patient (a term I’d like to call into question), are not always considered in the context of other needs. The threshold for informed consent should address both how information is offered and how it is received, including such person-centered factors as the burden of complex and consequential decision-making on distressed patients, who are often not given adequate resources (time, materials, support) to make good decisions. Information is too often presented with the requirement to sign “minutes before the start of a procedure, a time when patients are most vulnerable and least likely to ask questions hardly consistent with what a reasonable patient would deem acceptable.” To give oneself up to a medical procedure is to make a departure, to leave one world and enter another, and carries all the necessity of preparing for such a transition.
[...]
Departure, for Édouard Glissant, is “the moment when one consents not to be a single being and attempts to be many beings at the same time.” In this “passage from unity to multiplicity” that Glissant identifies as characteristic of diaspora, could automated pattern-matching tools such as AI act as a go-between or guide?
The capacity of large language models (LLMs) to summarize and revise content is already being put to use in hospital systems as a guide, making the language of informed consent more accessible. This capacity for summarization and explanation contributes to a broad need for language accessibility, both in terms of translation and in terms of simplification. Near-future use may include the generation of images, diagrams, and videos for use as decision aids.
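As a minimal sketch of the kind of summarization described above (the model name, prompt wording, and reading-level target are illustrative assumptions, not the hospital systems referenced here), a request to a general-purpose LLM API might look like this:

```python
# Illustrative sketch only: restating a consent passage in plain language
# with a general-purpose LLM API. Model choice, prompt, and reading level
# are assumptions for demonstration, not a deployed hospital system.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simplify_consent_text(passage: str, reading_level: str = "6th grade") -> str:
    """Return a plain-language restatement of a consent passage."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite clinical consent language in plain language at "
                    f"roughly a {reading_level} reading level. Preserve every "
                    "stated risk, benefit, and alternative; add nothing new."
                ),
            },
            {"role": "user", "content": passage},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    passage = (
        "The procedure carries a risk of post-operative infection, hemorrhage, "
        "and adverse reaction to anesthesia; alternatives include conservative "
        "management and watchful waiting."
    )
    print(simplify_consent_text(passage))
```

Any such output would still require clinical review before reaching a patient; summarization of this kind supports, rather than replaces, the communicative work of consent.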
Proxies, or “those who are most likely to understand the [...] subject's situation and to act in that person's best interest,” are an important factor in decisions made by and for people without capacity for self-determination, where mental faculty is either diminished, developing, or in decline. Proxies carry the legal authority to represent, to speak for or act on behalf of.
Where should AI be situated in supporting informed consent? As go-between and guide, between patient and information, or between patient-as-individual and patient-as-data? Or as a proxy: for patient, caregiver, or health care professional?
[...]
…to shared decision making
Informed consent is a legal concept that requires information to be provided before consent can be given. Articulations of the concept emphasize the importance of autonomy and self-determination. However, problems with poorly implemented consent frameworks lead to a slippage between consent and compliance, and a failure to support patient self-determination.
Informed consent mechanisms, often lacking specificity, “tend to be generic, containing information intended to protect the physician or hospital from litigation.” Compliance-centered methods that privilege the transfer of information as material, rather than aiding in the process of comprehension, have been described as the “banking” model of education in Paulo Freire’s Pedagogy of the Oppressed, or as a “container” for the passing of information in the bioethics framework provided by Manson and O’Neill. These models are critiqued for their contribution to flawed, inefficient, and misleading exchanges of information. By extension, exchanges premised on these models do not foster safe disclosure and participation, but reinforce cultures of dominance, undermine patient-centered outcomes, and short-circuit processes of inquiry.
Shared decision making, on the other hand, is an ethical move that recognizes “the need to support autonomy by building good relationships, respecting both individual competence and interdependence on others.” As a result, it has been shown to lead to measurable knowledge gains by patients, increased confidence in decisions, and more active patient involvement.
The choice to enter into a shared decision making process is situational: higher risk decisions require a focus on information; lower certainty outcomes require a focus on the decision making process. Nevertheless, the emergence of shared decision making seeks to support full autonomy by balancing the right to self-determination with the ethical principle of relational autonomy, which holds that “our decisions will always relate to interpersonal relationships and mutual dependencies.” As an application of relational autonomy, shared decision making sheds light on the numerous relationships that contribute to clinical decision making, including what patient and provider bring as individuals, together with the array of technical, social, and economic factors that define clinical practice.
“As best practice” shared decision making “validates, augments, and enriches the process of informed consent by emphasizing patients’ understanding and prioritizing of different medical interventions in light of their own values and lived experiences.” Shared decision making, as it has been defined by numerous sources, is “a process by which patients and providers consider outcome probabilities and patient preferences and reach a health care decision based on mutual agreement.” Shared decision making builds on and with informed consent. It is a collaborative process between healthcare providers and patients. It requires a supportive and responsive environment on the part of the clinician, and free and open disclosure of values, preferences and expectations on the part of the patient.
[...]
The introduction of AI technologies into medical data practices further erodes the stability of autonomy-based interpretations of informed consent, with effects both positive and negative.
Researchers and patient advocates alike have positively evaluated the use of generative AI to augment informed consent and decision support—particularly for its summarization and explanation functionality. However, induced belief revision, as a tendency to defer to automated or AI-assisted analysis over intuition, tacit knowledge, and traditional clinical measures, is a cautionary factor contributing to the need for trust, accountability, and agency to be shared between patients and providers.
In December 2023, the Department of Health and Human Services adopted the HTI-1 Final Rule (89 FR 1192). The stated goal of this rule was to implement requirements for the interoperability and transparency of AI-based healthcare technologies already proposed under the 21st Century Cures Act and its implementing final rule (85 FR 25642).
Reading these two rules together, it can seem as if interoperability and transparency are the same thing. Both require the safe disclosure—of data, protocols, methods—and means of access to sensitive health information, efficiently achieved through the design and maintenance of a new standard application programming interface, or API. An API is a set of rules for how different computer programs should communicate with one another. The rules of an API describe a structure for making requests and responses, determining a program’s capacity for interoperability with other programs. APIs shape what is possible to be done with information that moves between technical systems. In this sense, the most direct beneficiaries of interoperability and transparency, when applied to health technology, are other pieces of health technology.
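To make the request-and-response structure concrete, here is a schematic sketch in the style of a FHIR-like REST interface; the endpoint URL, resource ID, and fields are illustrative assumptions rather than the specific standard the rule certifies:

```python
# Schematic sketch of an API exchange: a client program requests a resource
# and receives structured JSON. The endpoint, resource ID, and fields are
# hypothetical; they stand in for a FHIR-style health data API.
import requests

BASE_URL = "https://ehr.example.org/fhir"  # hypothetical endpoint

def fetch_patient(patient_id: str) -> dict:
    """Request a single Patient resource and return the parsed JSON body."""
    response = requests.get(
        f"{BASE_URL}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()  # errors, too, follow the API's stated rules
    return response.json()

# The API's rules determine what another program can do with this record:
# which fields exist, how they are named, and which requests are valid.
patient = fetch_patient("12345")
print(patient.get("name"), patient.get("birthDate"))
```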
[...]
The HTI-1 Final Rule sets guidelines for, among other things, the use of predictive decision support interventions (DSIs). Predictive DSIs, as defined by the rule, are tools “that support decision-making by learning or deriving relationships to produce an output, rather than those that rely on pre-defined rules.” This includes both “technologies that require users' interpretation and action to implement as well as those that initiate patient management without user action”—that is, tools that provide insight or information to be acted on by humans; interactive, hands-on tools to be acted on together with AI; and automated processes that do not have a human in the loop.
This broad, undifferentiated definition contrasts with how AI in medicine is characterized elsewhere. The American Medical Association Current Procedural Terminology’s AI Taxonomy makes a clear high-level distinction between assistive, augmentative, and fully automated technologies. The government rule responds to this call for specificity by declining it, finding that “such constraints may unintentionally exclude relevant technology as it evolves and is applied to more use cases, humans interact with technology in more diverse ways, and societal views on the line between assistive and autonomous technologies shift.”
What are these decision aids that are expected to evolve, be applied to more use cases, that humans will interact with in more diverse ways?
Patient reported outcome measures (PROMs) are a key source for analysis that contributes to decision aids, showing how treatment, health status, and quality of life come together in meaningful ways. As discussed in chapter three (“The Validated Instrument”), it is expected that the application of AI tools to PROM data will greatly facilitate their use as ground truth for decision making processes.
Decision aids are access tools, in that they “provide balanced, evidence-based information about treatment options and usually are easy to read, often with pictures and figures; some may include patient testimonials about different pathways.”
Patients with access to decision aids were found to have “had greater knowledge of the evidence, felt more clear about what mattered to them, had more accurate expectations about the risks and benefits, and participated more in the decision-making process.”
[...]
Predictive DSIs offer a clear example of what is broadly known as automated decision making (ADM).
Article 22 of the European Union’s General Data Protection Regulation, commonly known as GDPR, protects subjects from the effects of automated decision making as follows:
“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
A data subject, in this context, is explained as “one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”
What Hito Steyerl refers to as patterns of life.
Performative technologies, from The Coordinator to generative AI, enact what they describe, and create data subjects through repetition. The model trains you. The concern is that the outcome of these processes will impose coherence that is unrecognizable to us, patterns we can’t sense, and explanations whose logic we can’t reason for or against.
Performance studies scholars Roberto Alonso Trillo and Marek Poliks, in the introduction to their co-edited collection of texts on performativity after AI, pose a set of framing questions: “Is artificial intelligence (AI) becoming more and more expressive, or is human thought adopting more and more structures from computation? What does it mean to perform oneself through AI, or to construct one’s subjectivity through AI?” Even if answers were available, the remainder is: can we still talk about meaning in the same way?
[...]
Incoherent, unexplainable, and uninterpretable outcomes can’t be shared. Nor can their processes be shared in. These are symptoms of a black box.
Black box is a term that “can refer to a recording device, like the data-monitoring systems in planes, trains, and cars. Or it can mean a system whose workings are mysterious; we can observe its inputs and outputs, but we cannot tell how one becomes the other.” A black box is something that is not open to interpretation.
Media theorist Mark Andrejevic extends the black box metaphor, with its implicit focus on opacity, obscurity, and secrecy, to consider “actionable but non-sharable information:” practical insight, such as material used to support decision making, that is not shareable in the sense that it cannot be explained or interpreted.
To put this thought in context, Andrejevic quotes literary theorist Paul Ricoeur—I paraphrase here: a text, as an object of interpretation broadly defined, mediates between one reader and another, between readers and the world they occupy, and between readers and their interior selves. Interpretation is a kind of sharing out of the text’s meaningfulness across all these mediated relationships. “The black box, by contrast,” writes Andrejevic, “replaces sharing with operationalism: the goal is not to tell or to explain, but to form a link in a process of decision or classification.”
This narrow focus on inputs and outputs encapsulates what M. Beatrice Fazi calls the “autonomy of automation,” or freedom from human modes of abstraction and representation. This autonomy is most evident in the capacity of self-supervised AI models (exemplary black boxes) to produce internal representations “independently from the phenomenological or experiential ground of the human programmer.” When we think about how to use AI to support decision making, it is important to remember that the basis for this support is fundamentally different from human thought and experience.
However, this difference is non-separable. We encounter black box systems as hybrids of automation, abstraction, and storytelling, where “the role of narrative is inseparable from the call for transparency.” Narrative makes systems legible, and this is unavoidable, even if this narrative is about illegibility.
[...]
As with analysis, always return to a situated perspective:
* Who poses the questions?
* Whose well-being is at stake?
* What lived experience is used to frame decisions and reactions?
* Whose imaginary is drawing the field of decisions?
[...]
Compression as Explanation
“The manner and context in which information is conveyed is as important as the information itself [...] It is necessary to adapt the presentation of the information to the subject's capacities.”
What is information?
It is different from raw data, and it is separate from the way data can be modeled. Information, as defined by information scientist Marcia Bates, consists of all the instances where people interact with their environment in such a way that it leaves some impression on them. Information is found in the interactions that change one’s knowledge. As Bates describes: “these impressions can include the emotional changes that result from reading a novel or learning that one’s friend is ill. These changes can also reflect complex interactions where information combines with preexisting knowledge to make new understandings.” This definition of information reflects a view that is centered on the people who seek out and use it, rather than the systems that organize, preserve, and make information available.
What do people need from information?
Brenda Dervin defined information needs in terms of sense-making: A person, in their time and place, needs to make sense. The sense they need to make is for their own world, their time and place. To do this, they need to inform themselves constantly. Their need for information is oriented by questions that deal with the here and now of the world they see themselves as being in, the places they come from, and the places they see themselves going to. Information needs are always situated, they arise in the shape of questions about the conditions of one’s life. As Dervin articulates: “Information needs are always personalized, as there is no other way for them to be; information seeking and use can be predicted more powerfully by knowing the kind of situations [people] are in rather than knowing their personality or demographic attributes; people seek information when their life situations are such that their old sense has run out; people are in charge of how they use the information they attend to.”
How do people behave with information?
Sense-making is one kind of information behavior. As with other kinds of behavior, information behavior is contextual, it involves some motivating factor, or objective: “People are trying to solve problems in their lives, not ‘seek information’.” Information behavior, as a broadly descriptive term, addresses why we choose to seek out information, what we (believe we) need from information, what we do with what we find, and what kinds of explanations we find useful.
Information behavior is not always a process of careful assembly and consideration. In the particular case of information seeking under threat, awareness of needs, motivations, and contextual support may shatter as a person copes with the dissonance of too much information. In seeking to understand this coping response, information seeking under threat has been shown to produce active, passive, or avoidant behaviors, characterized along the dimensional axis of monitoring and blunting: monitoring as a heightened and continuous seeking out of information; blunting as refusal, a closing off to certain forms of information and a seeking of distraction.
What happens when a patient researches their own condition?
Patients’ access to their own health records is a protected right under HIPAA (1996). The HITECH Act (2009) extended this access to electronic health records (EHR), software-based patient portals, and apps. The 21st Century Cures Act (2016) established this right as an obligation for care providers not to withhold their patients’ data. By 2021, the 21st Century Cures Act Final Rule further refined the legal requirement of providing “the immediate electronic availability of test results to patients, likely empowering them to better manage their health. Concerns remain about unintended effects of releasing abnormal test results to patients.”
How do patients think about having access to their own health information?
In multisite and international studies, patients tend to agree they prefer to have immediate access to their health information, even when that information is provided without adequate context or explanation. Patients surveyed also tend to agree that improved access to their own health information improves, in turn, their communication with health care providers. It is not directly evident from these studies that the information provides content for this communication (that it is the subject matter of the improved communication) or that it directly informs shared decision making.
In a systematic review of how patients respond to having immediate access to their own electronic health records (EHR), reported benefits included reduced anxiety, a better doctor–patient relationship, increased awareness of changing health status, adherence to medication, and improved patient outcomes. Patients self-reported better engagement in terms of self-management of symptoms, and increased knowledge. Concerns were found to be focused on security, privacy, and increased anxiety.
Across the board, increased access to one’s health information was shown to produce increased patient engagement, confidence in self-management of symptoms, health literacy, and informed participation in shared decision making. It is hoped that more transparent electronic health records will be taken up as a priority of healthcare technology.
[...]
How does information behavior affect how we disclose, answer, make decisions, and navigate our own legibility?
Elfreda Chatman, writing in The Impoverished Life-World of Outsiders, traces how flows of information can break down, contributing to a power gradient that she names information poverty. In this schema, knowledge falls into insider and outsider categories wherever trust is at a minimum, and vulnerability—sense of personal risk—is at a maximum. While Chatman withholds judgment on the question of whether one needs to be an insider to understand the lived experiences of insiders, she traces the effects of this distinction through information behaviors such as secrecy and deception (preferring not to disclose, or providing false disclosure), as well as attitudes about what qualifies as relevant information (information needs), and what resources are shared across the insider / outsider boundary.
It is easy for me to read Chatman’s idea of information poverty in terms of the experience of illness—how patients inform themselves, and how patients relate to their own data. As artist Carolyn Lazard details in their illness narrative How to Be a Person in the Age of Autoimmunity, the insider information of other patients carries value that the objective knowledge of outsiders lacks: “to listen to the suggestions of people who actually lived with the disease” rather than receive “advice from those who merely studied it.” Significantly, maybe, this shift in listening accompanies not only a sense of mistrust and uncertain relevance in regard to medical knowledge, but also a sense of narrative incoherence, in that “these kinds of experiences are difficult to narrativize. There is no story arc.”
While Chatman describes information poverty in terms of how information is sought out, used, and shared within groups, Aimé Césaire names a wider failure of sociotechnical imagination to contend with incoherence as “impoverished knowledge.” Césaire proposes that poetry, and poetic knowledge, fill the gaps: “Scientific truth has as its sign coherence and efficacity. Poetic truth has as its sign beauty,” he proposes. “What presides over the poem is not the most lucid intelligence, or the most acute sensibility, but an entire experience.”
Césaire’s notion of poetic knowledge travels well, as a means of “working on and against” sociotechnical systems, in the act of disidentification, in José Esteban Muñoz’s formulation. For both insider and outsider positions, patients and researchers, encounters with new technology produce multiple instances of disidentification, repurposing, and relationality: “studying information systems in isolation and not as part of broader social constellations misses the nuance of how people negotiate, resist, and create new ways of interacting with technology.”
Analyses of “crip legibility”, or the methods with which disabled people interact with, relate to, and slip between legible categories, as a form of information expertise, are crucial to re-imagining the future of health information systems. The call has been issued for “greater acknowledgement of the lived experiences and material design practices of disabled people,” holding that “the lived experience of disability, and the shared experience of disability community creates specific expertise and knowledge that informs technoscientific practices.”
[...]
“What is lost in the search for perfect explainability?”
Nora N. Khan poses this rhetorical question in the 2022 compendium Mirror Stage: Between Computability and its Opposite. In the context of AI, explainability, as a summary or high-level overview of the processes and justifications used (e.g. why did you do that), is distinguished in common use from interpretation (e.g. how did you do that). Something can be explained even if a low-level examination of the process is not accessible.
Explainability is widely held up as a guideline for the development of ethical AI, particularly when the technology is used with sensitive data, vulnerable populations, and in decision making processes that have tangible effects on people’s lives. Healthcare is a prime example of this, where a right to justification is placed alongside non-discrimination as a core aspect of fairness.
However, explainability, as policy and as a critical tool, has its discontents. Unlike interpretation, which operates reflexively across incommensurable differences, explainability relies on sometimes clumsy metaphors and re-framings to produce a common legibility around reasons. We bring what we cannot understand into our own world: “we say that a computing machine ‘sees’, ‘listens’ or ‘thinks’, just as we say that an aeroplane ‘flies’ despite our awareness that an aircraft and a bird take flight in profoundly different ways.”
These metaphors shape what we are able to imagine about the processes at work, limit the scope of our understanding to what is already familiar, and obscure other important factors and conditions. We don’t learn from these explanations alone. As Khan puts it: “There are vital ways to map and narrate and explain the world outside of the limits of language”. AI should be, in addition to a translator and summarizer, a troubling agent, bringing friction to decisions, providing variations, expanding the terms and modes of a search. A wider scope, never a narrowing. Have you considered this another way?
[...]
For healthcare providers, the failure to recognize, symbolize, and reflect on the consequences of the emotions they experience in clinical encounters is shown to impede or adversely affect patient care. This is to say, when information seeking is an interpersonal activity, such as in shared decision making, emotion and affect change how information is used, sought out, and shared. Recognizing—naming—these changes is an important way of supporting clinical processes.
How can we ask better questions?
In advising clinicians on how to prepare for conversations about palliative care in oncology, Back et al. offer a quick litany of questions, conversation starters:
(1) What is happening?
(2) How do you (and I) feel?