(3) What is important?
The subtext of these three elements—that we are in a process that is unfolding; that we should recognize the emotions, shared or differing, that we are experiencing, as information; and that there is key information, requiring decision or action—is critical to how information works in the clinic.
The Common Rule explains the importance of specifying and foregrounding key information to make comprehension easier:
“Informed consent must begin with a concise and focused presentation of the key information that is most likely to assist a prospective subject or legally authorized representative in understanding the reasons why one might or might not want to participate in the research. This part of the informed consent must be organized and presented in a way that facilitates comprehension.”
In terms of representation, compression is the process of making a simpler or more lightweight representation from an original version by focusing only on what is considered most meaningful about the original—the key information. Compression is a fundamental process in any tool that relies on digital representation, including AI. Conceptually, compression can serve as a metaphor for any simplified communication of complex reality: thought into language, an explanation, a diagram, etc. A lossy compression is created, in part, by removing information that is not deemed meaningful, such that the reconstituted copy has lost some aspect of the original. This loss is accounted for by processes of normalization: deciding what the standard of meaningfulness should be, and conforming to that. What escapes the norm is held to be uncertain, irrelevant, and costly, a threat. Explanation is lossy compression: it normalizes the material being explained, the means of explanation, and the one being explained to.
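To make the metaphor concrete, here is a minimal sketch of lossy compression in Python (the toy signal and the keep-the-largest-values rule are illustrative assumptions, not any particular codec): keep only the components judged most meaningful, discard the rest, and account for the reconstruction error as the cost of the norm.

# A toy lossy compressor: keep only the k largest-magnitude values
# of a signal (the "key information") and zero out the rest.
# Reconstruction preserves the dominant structure; everything that
# fell outside the norm of meaningfulness is lost.

def compress(signal, k):
    # Rank indices by magnitude; keep the k most "meaningful" values.
    keep = sorted(range(len(signal)), key=lambda i: abs(signal[i]), reverse=True)[:k]
    return {i: signal[i] for i in keep}          # the compressed representation

def reconstruct(compressed, length):
    # Anything not deemed meaningful comes back as zero.
    return [compressed.get(i, 0.0) for i in range(length)]

signal = [4.0, 0.2, -3.5, 0.1, 0.05, 2.8]
small = compress(signal, k=3)
copy = reconstruct(small, len(signal))
loss = sum((a - b) ** 2 for a, b in zip(signal, copy))
print(copy)   # [4.0, 0.0, -3.5, 0.0, 0.0, 2.8]
print(loss)   # 0.0525: the information that escaped the norm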
[...]
Gayatri Spivak, addressing the symposium "Explanation and Culture" at the University of Southern California's Center for the Humanities in 1979, turned her talk on the symposium itself, in a classic deconstructive move, pointing to “the prohibition of marginality that is implicit in the production of any explanation.”
Explanation, as compression, works to eliminate uncertainty.
Spivak: “We take the explanations we produce to be the grounds of our own action; they are endowed with coherence in terms of our explanation of a self.”
This cautionary critique—not to mistake the explanation for ground truth—can be urgently applied to the use of AI, particularly large language models (LLMs), whose appetite for new data has led to widespread use of synthetic data in the training process, especially as data sharing practices harden.
Ground truth is what an explanation points to, in order to show with certainty that the arguments it makes are sound. Ground truth, to be legible within explanatory and predictive systems, is assembled through processes of identifying, labeling, classifying, ordering, and structuring. It can start with any data that we think of as true, to the extent that we want to build our tools, processes, and systems around it as an example of truth.
Ground truth for shared decision making can originate with clinical observation and testing, caregiver input, patient-reported outcomes, as well as other forms of patient-generated health data including passively collected data from wearable sensors, apps, and fitness trackers.
Where there are discrepancies in health literacy, as with patients confronted with the need to make sense of the technical and legal language that dominates informed consent material, access tools such as plain-language or easy-read translations are one starting point.
Where self-reporting or shared decision making presents difficulty for patients with disabilities and/or high cognitive load, there is a range of strategies to be considered, from changing the question, or how or when it is asked, to more radical departures from systematic norms, such as involving proxies or interpreters, or adding context from passively collected data.
Disabled people experience negative health outcomes disproportionately, but evidence shows that placing focus on how information is gathered, shared, and used could lead to greater health equity. This means: (1) collecting information on the context that affects a person’s experience of function; (2) representing individual patients’ voices, perspectives, and goals in the electronic health record; and (3) standardizing how observations of function and context are recorded in the electronic health record.
Overall, this strategy orients towards two endpoints: one addresses structure, by changing how standardized tools such as electronic health records can accommodate a wider range of information and context. The other addresses engagement, and respect for patients’ contribution: to collect and analyze patient-reported descriptions of their own personal perceptions and goals. Learn to ask.
[...]
The goal of all of this, and of how shared decision making is supported, is to build knowledge about what is meaningful to patients in terms of the care they receive, and how it translates to their real lives. The name for this is the minimal clinically important difference (MCID): the smallest change in an outcome that the patient would perceive as a meaningful improvement (for instance, a two-point drop on a ten-point pain scale is a commonly cited threshold).
[...]
Autonomy-in-Relation:
The promise of, and concerns about, AI as a predictive tool in healthcare have been well established, in terms of its capacity for accuracy as well as its utility in summary and translation tasks.
What remains under-studied are the changes that AI brings about vis-à-vis new formations of autonomy, expertise, and ground truth. To this point, we should begin by looking at existing models in healthcare that foreground interdependency, intersubjectivity, and relationality.
The biopsychosocial model for pain management was originally developed in the mid-1970s, a time when “science itself was evolving from an exclusively analytic, reductionistic, and specialized endeavor to become more contextual and cross-disciplinary.”
The original, interdisciplinary findings that led to the biopsychosocial model found that in order “to understand and respond adequately to patients’ suffering—and to give them a sense of being understood—clinicians must attend simultaneously to the biological, psychological, and social dimensions of illness.”
Studying the impact of the biopsychosocial model twenty-five years into its widespread adoption for pain management, Borrell-Carrió et al. find an explicit link to information behavior, and a more nuanced understanding of how autonomy works in clinical decision making:
“Most patients desire more information from their physicians, fewer desire direct participation in clinical decisions, and very few want to make important decisions without the physician’s advice and consultation with their family members. This does not mean that patients wish to be passive, even the seriously ill and the elderly. In some cases, however, clinicians unwittingly impose autonomy on patients. Making a reluctant patient assume too much of the burden of knowledge about an illness and decision making, without the advice from the physician and support from his or her family, can leave the patient feeling abandoned and deprived of the physician’s judgment and expertise. The ideal, then, might be ‘autonomy in relation’—an informed choice supported by a caring relationship.”
The caring relationships that support patients’ choices, indeed, that support patients’ sense of self, effectively blur normative distinctions between categories. As psychologist George Engel wrote in the original publication describing the model: “the boundaries between health and disease, between well and sick, are far from clear and never will be clear, for they are diffused by cultural, social, and psychological considerations.”
More recent reappraisals of the biopsychosocial model find that this emphasis in the original call for a blurring of analysis, cutting across modalities and methodologies, is still largely out of reach: how can the “fuzzy thinking” around social and environmental factors be held alongside the pathology of physiological mechanisms? New interdisciplinary research methods, alongside both clinical training and community health education, are seen as ways to broaden understanding and expectations of how pain is socially, environmentally, psychologically, and biologically situated.
[...]
In 1963, Michel Foucault characterized the emergence of clinical medicine over the course of the 19th century as “that opening up of the concrete individual, for the first time in Western history, to the language of rationality, that major event in the relationship of [people] to [themselves] and of language to things.”
Where Foucault situates his analysis in the space of the clinic, describing the massive shift in knowledge production that modern medical practice initiated, poet Anne Boyer examines the contemporary emergence of a different paradigm, in a different kind of space: “The pavilion, on the other hand, is a tangle of directions. Money and mystification, not knowledge or ignorance, are its cardinal points.” The cancer pavilion, where we go for treatment, enacts the allegory of the pavilion as a “temporary and luxurious architecture erected for the purposes of the powerful, adjacent to something else—in cancer’s case, adjacent to all the rest of what we call life.”
What happens in the pavilion, adjacent to life, that is equivalent to the “opening up” in the relationship of people to themselves, and of language to things?
Patients are engaged in the measurement of their own quality of life. Opening up, in time, through actions of self-evaluation, self-reflection, self-disclosure, self-reporting.
Who is the self who evaluates, reflects, discloses, reports?
The concept of intersubjectivity holds that subjectivity needs the recognition of another: individual or collective, or through the relational networks of community: “Identification is the detour through the other that defines a self.”[100] Intersubjectivity emphasizes the way perceptions, experiences, and interpretations are shaped by interactions with others.
What happens when this recognition is automated, or technologically mediated?
At the center is likely a screen. In Crampton et al.’s scoping review of how health information technology affects patient-provider relationships, the screen is shown as a shared artifact that holds, interrupts, and redirects attention:
“Clinician gaze at the screen was significantly associated with the patient’s gaze at the screen. Further analysis showed that clinician-initiated gaze at the screen, the patient, or other objects were significantly followed by the patients, resulting in a conjugate gaze [looking at the same thing]. In contrast, patient-initiated gaze patterns were not always followed by clinicians. The authors identified significant patterns of the patient’s gaze at the clinician followed by the clinician’s gaze at the monitor and of the patient’s gaze at the screen followed by the clinician’s gaze at the patient.”[101]
Are patient and doctor speaking to one another, or are they speaking about, around, through, something external—diagnosis, disease, impairment, treatment, data? If they are speaking at all—studies of EHR use in the exam room show that interaction with electronic health records often interrupts visual attention and produces prolonged periods of undifferentiated, unproductive, inattentive silence.[102]
Where the technology is seen as having agential input on the clinical encounter, it is often felt to be “a manifestation of external policies” at the “expense of narrative information, and sometimes even of patient agenda.”[103]
[...]
We recognize one another through intermediary objects. While much of the research literature has applauded the value of new technological tools in providing easy contact between patient and provider, the quality of contact, the effect of reinforcement feedback, and the responsiveness of clinicians are not evaluated.
What is the effect of a survey, a test, a portal message that is not discussed?
[...]
From pattern discrimination to pattern recognition
“We do not look like people: we look like people with cancer. We resemble a disease before we resemble ourselves.”[104]
Recognition needs repetition.
Gathering patients’ experience into an object of knowledge shapes the topos of cancer life, where one learns, and speaks, both anonymously and intimately, first as data, then as a pattern of life.
[...]
Nomina sunt numina [names are divine] can be interpreted as, for instance: that there is a perfect correspondence between a word and the thing it names, that to name something is to bring it into existence, or that language is more-than-human.[105]
[...]
Winograd and Flores: “The need for continued mutual recognition of commitment plays the role analogous to the demands of autopoiesis in selecting among possible sequences of behaviors” [italics mine].[106] The reciprocal actions of commitment and recognition shape behavior, ensuring that all behavior is directed towards the expression of coherent patterns of life.
Language, as Winograd and Flores proposed, and as contemporary natural language processing techniques uphold, is not an index of objective meanings but a chain of commitments in a ‘consensual domain.’ Biologist and philosopher Kriti Sharma further unpacks the ubiquitous function of commitment, considering a shift towards modeling ecologies in terms of “interdependence”: first, “a shift from considering things in isolation to considering things in interaction,” then a shift “from considering things in interaction to considering things as mutually constituted, that is, viewing things as existing at all only due to their dependence on other things.”[107]
This view assumes a capacity to accept the world as contingent rather than conventional: things are the way they are not because of arbitrary choices, or even the imposition of singular value systems, but because of dynamic, intricate, highly interdependent and highly ordered processes—from genomics to climate to toxicity to biochemistry to culture to language—that inhere in objects. It is the interdependence of these processes that gives rise to the term “flower” over and over again, and precisely what makes flowers appear so obvious, vivid, and stable as objects.[108]
[...]
Again, Winograd and Flores: “Language does not convey information. It evokes an understanding, or ‘listening,’ which is an interaction between what was said and the preunderstanding already present in the listener” [italics mine].[109]
What does this preunderstanding consist of, and why is it there?
Preunderstanding, or prior knowledge, is bias: a heightened sense of where to look first when searching for something. It can be imposed (purposefully or not) by the model’s designers; it can reside in the training data, in the self-supervised tuning of the model’s parameters as it learns, or in the interpretation of the model’s output. It is indispensable to the functionality of AI models.
In Bayesian statistics, a prior probability is used to represent initial beliefs about something uncertain. This is bias that helps situate the learning process: a first guess, usually based on some past experience about what is likely. Priors are what we knew before, all that we bring to the question at hand.
As AI works to balance specificity with generalizability, priors help define a generalized shape, and the boundaries of the search space. Informative priors shed light on the task under consideration; uninformative priors provide a general shape for the expected outcome, without any specificity. Regularization priors—e.g., the Laplace prior underlying Lasso, or the Gaussian (normal) prior underlying Ridge regression—keep the model from overfitting, and help it generalize to new data. Constraints are added to keep the parameters from becoming too complex, or falling too far outside a specific pattern of distribution.
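As a concrete illustration (a minimal sketch, assuming synthetic data and an arbitrarily chosen penalty strength), ridge regression makes the Gaussian prior tangible: the lam term below pulls the learned weights back toward zero, trading fidelity to the training data for generalizability.

import numpy as np

# Ridge regression: the L2 penalty is equivalent to a MAP estimate
# under a Gaussian (normal) prior on the weights. lam encodes how
# strongly the prior pulls parameters toward the expected shape (zero).
def ridge_fit(X, y, lam):
    n_features = X.shape[1]
    # Closed form: w = (X^T X + lam * I)^-1 X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))              # 20 noisy observations, 5 features
true_w = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
y = X @ true_w + rng.normal(scale=0.5, size=20)

w_unregularized = ridge_fit(X, y, lam=0.0)  # no prior: fit the data alone
w_regularized = ridge_fit(X, y, lam=5.0)    # Gaussian prior: shrink toward zero
print(np.round(w_unregularized, 2))
print(np.round(w_regularized, 2))           # smaller magnitudes, tamer shape

The choice of lam is itself a prior belief about how much structure the data can be trusted to reveal on its own.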
AI researcher François Chollet theorizes how human priors affect the design and development of AI:
“we are born with priors about ourselves, about the world, and about how to learn [...] These priors are not a limitation to our generalization capabilities; to the contrary, they are their source [...] To learn from data, one must make assumptions about it—the nature and structure of the innate assumptions made by the human mind are precisely what confers to it its powerful learning abilities.”[110]
Chollet provides this schema to illustrate how human priors approximate generalizability, as it is considered in the design of AI systems: Low-level priors tell us about the structure of our own sensorimotor space, what we feel and move with. Meta-learning priors determine how we learn: assumptions about the structure of knowledge and objects, and ideas about causality and continuity. High-level knowledge priors are our notions about how to orient ourselves and navigate through spaces, our social intuition, and sense of what it means to have goals, values, and private thoughts. At every level, prior knowledge prepares us to sense patterns, make choices, and understand meaning, particularly when we don’t know exactly what we’re looking for.
[...]
In terms of bringing patient-generated health data into clinical practice, automation can expand access and improve patient outcomes, but also raises concerns about data privacy and the potential for algorithmic bias to adversely affect patient care decisions.[111] With this risk in mind, researchers point to the need for participation in patient reported outcome assessment processes to be inclusive and equitable.[112]
The use of predictive tools to regulate, pathologize, manage, and draw knowledge from people’s lives is concerning “not because it creates new inequities, but because it has the power to cloak and amplify existing ones,” notes artist and technologist Mimi Ọnụọha.
Ọnụọha characterizes this power as algorithmic violence, encapsulating “the violence that an algorithm or automated decision-making system inflicts by preventing people from meeting their basic needs.”[113] Examples of algorithmic violence “occupy their own sort of authority [...] rooted in rationality, facts, and data, even as they obscure all of these things.”
“Machine prediction of social behaviour,” argues Abeba Birhane, “is not only erroneous but also presents real harm to those at the margins of society.” In her paper “The Impossibility of Automating Ambiguity,” Birhane identifies a fundamental factor in this failure: the under-theorized reliance on accuracy as a measure of AI effectiveness.[114]
Accuracy, unlike precision (consistency), relies on prior knowledge: how close a prediction is to a known measurement, ground truth. Automating ambiguity would require optimizing for something other than accuracy.
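A toy numeric illustration of the distinction (the readings are invented): precision can be computed from the predictions alone, while accuracy presupposes a ground truth to compare against.

import statistics

ground_truth = 100.0                      # the "known measurement"
predictions = [92.1, 91.8, 92.3, 92.0]    # tightly clustered, but offset

# Precision: consistency among the predictions themselves.
spread = statistics.stdev(predictions)

# Accuracy: closeness to ground truth; requires prior knowledge.
bias = statistics.mean(predictions) - ground_truth

print(f"precision (spread): {spread:.2f}")   # small: precise
print(f"accuracy (bias):    {bias:.2f}")     # large offset: inaccurate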
Wendy Hui Kyong Chun, in her book Discriminating Data, qualifies the link between discrimination and recognition in terms of difference (discrimination) and similarity (recognition). To assess a recognition as ‘accurate’ requires precognition: it involves evaluating various criteria in terms of properties we already know how to discriminate between. “Classification systems require the prior construction or discovery of ‘invariant’ features, on the basis of which they assign and reduce objects.”[115] In computer science, “pattern discrimination” describes the “imposition of identity on input data, in order to filter (i.e., to discriminate) information from it.”[116] Recognition, on the other hand, is a correlated assembly of shared context, shared features, and shared relations. It is a form of identification that has been reciprocated.
[...]
How recognition is automated:
1. Do I recognize you?
2. How am I to believe you?
To the error alert which reads ‘We don’t recognize this device’: I understand you to mean that your data is not stored on my device (in the form of an authenticating or tracking ‘cookie’). Data is a mark and a token, making recognition possible. The token indicates consent (an agreement to your terms). You don’t recognize me (my device) because I have not provided consent to be recognizable (to store data / a cookie). This is to say: consent requires recognition, as much as recognition requires consent.
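Sketched as a toy protocol (an in-memory dictionary standing in for the browser's cookie jar; no real authentication API is being described), the circularity reads like this:

import secrets

# A toy device-recognition flow: recognition and consent presuppose
# each other. A dict stands in for the cookie jar; a set stands in
# for the service's memory of what it has issued.
issued_tokens = set()

def consent(cookie_jar):
    # Consent = agreeing to hold the service's data (a token).
    token = secrets.token_hex(8)
    issued_tokens.add(token)
    cookie_jar["auth"] = token

def recognize(cookie_jar):
    # Recognition = finding the mark previously left by consent.
    return cookie_jar.get("auth") in issued_tokens

device = {}                   # no token stored yet
print(recognize(device))      # False: "We don't recognize this device"
consent(device)               # agreeing to be recognizable
print(recognize(device))      # True: recognition follows consent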
And for authentication: please state your name, your date of birth, and, in your own words, describe what brings you here today?
Recognition needs repetition.
Minimal Assumptions
Alison Kafer, in her close reading of the language used by Margaret Price to introduce difficult material—a ‘trigger warning’—articulates how such warnings are “a matter of access rather than avoidance.” They encourage listeners to “think about what kinds of support they might need in order to engage with material” as well as describing alternate ways of experiencing the content, such as through a printed page, with the help of an interpreter, or asynchronously, in another time and place. “In this framing,” Kafer finds, “the trigger warning is about making the content of the talk accessible to anyone who wants it; quite simply, it’s about accessing the material.”[117]
Kafer builds on this example to consider not just the ways access is instituted through gestures like Price’s introduction, but how access can be understood through the lens of trigger warnings, as a preparation of a space or process, and as “part of a larger complex of practices designed to de-privatize and [collectivize] healing.”[118]
[...]
Given that “access addresses not only how a space is designed but also what happens within it,” how can processes such as consent and safe disclosure be approached as design challenges?[119]
The case of consent in passive data collection from apps, wearables, fitness trackers, etc. (referred to from the perspective of human-computer interaction (HCI) as examples of ubiquitous computing), reveals the complexities of approaching consent, and decision-making in general, as an ongoing or ambient process rather than an event. This is particularly acute for the case of persuasive design, or devices that aim to change user behavior.[120]
Ewa Luger et al. address the ways in which ubiquitous computing complicates the performance of consent requirements, such as end user license agreements, having “decoupled users from devices, presenting no clear moment for consent to occur.”[121]
Josef Nguyen and Bonnie Ruberg have highlighted approaches to consent as a design challenge for games in particular, and HCI in general, through the lens of “consent mechanics”: “a set of unique difficulties and potential solutions surrounding the question of how to design meaningful and ethical interactive opportunities for technology users to negotiate consent,” emphasizing the process and nuance of consent rather than one-time compliance-driven interactions.[122]
Yolande Strengers et al. apply the T.E.A.S.E. framework for consent processes developed out of BDSM communities to “emerging technologies that enable interactions that act on, act with, or act like bodies,” recognizing “the ways in which bodies (artificial and human) are entwined with processes of consent, and how consent is situated in physical and virtual space and time within specific contexts and experiences.”[123] This framework establishes consent as an ongoing, emergent process that is heavily dependent on transparent, mutual communication, where T.E.A.S.E. stands for: Traffic lights (ways to signal “stop”, “slow down” and “continue,” even nonverbally); Establish ongoing dialogue (make the process interpretable); Aftercare (or, analysis of limits, expectations, and desires); Safewords (the capacity to immediately and easily withdraw from actions in mid-process); and Explicate soft/hard limits (empower participants to recognize their own changing attitudes in mid-process by setting and revising a spectrum of limits).
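To suggest how such a framework might translate into interaction design, here is a speculative sketch (the states, limits, and method names are my assumptions, not part of the published framework) in which consent is a revisable state rather than a one-time flag:

# A speculative sketch of consent as an ongoing process: traffic-light
# signals, revisable soft/hard limits, and an immediate stop, rather
# than a single click-through agreement. The log supports ongoing
# dialogue and aftercare review.
class ConsentSession:
    def __init__(self):
        self.state = "green"          # "green" | "yellow" | "red"
        self.soft_limits = set()      # revisable in mid-process
        self.hard_limits = set()
        self.log = []

    def signal(self, light):
        # Traffic lights: "continue", "slow down", "stop" (safeword).
        self.state = light
        self.log.append(f"signal: {light}")

    def set_limit(self, action, hard=False):
        # Explicate soft/hard limits; participants may revise them.
        (self.hard_limits if hard else self.soft_limits).add(action)
        self.log.append(f"limit: {action} ({'hard' if hard else 'soft'})")

    def may_proceed(self, action):
        if self.state == "red" or action in self.hard_limits:
            return False              # stop immediately, no negotiation
        if self.state == "yellow" or action in self.soft_limits:
            self.log.append(f"check in before: {action}")
            return False              # slow down: renegotiate first
        return True

session = ConsentSession()
session.set_limit("share_location", hard=True)
print(session.may_proceed("share_steps"))     # True
print(session.may_proceed("share_location"))  # False: hard limit
session.signal("red")                         # withdraw in mid-process
print(session.may_proceed("share_steps"))     # False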
Often, the mechanics of consent for games are implied through a legal metaphor, imagining a game as a magic circle. In this view, real legal limits are suspended in the virtual world of games, and what happens in the course of play is considered to be within the scope of the players’ consent. Noting that the magic circle metaphor is used to protect “spaces for play, tools for narrative, and the chance to build a new life,” legal scholar Joshua Fairfield argues for greater attention to consent mechanics and community self-regulation in games, such that “virtual worlds may be able to generate community norms usable by real-world courts as a source of legal rules.”[124]
Una Lee and Dann Toliver’s Building Consentful Tech project, while foregrounding the value of autonomy in both bodily and datafied interactions, suggests adapting guidelines from community accountability practices to accommodate the distributed nature of digital bodies.[125] In drawing a qualified equivalency between physical notions of consent and data ethics, Lee and Toliver invite further articulation of how the design of consent-centered technological tools and platforms understands individual autonomy in relation to notions of decentralization and interdependence.
[...]
Disability rights activist Mia Mingus describes access intimacy alongside related concepts of “physical intimacy, emotional intimacy, intellectual, political, familial or sexual intimacy” as “that elusive, hard to describe feeling when someone else ‘gets’ your access needs. The kind of eerie comfort that your disabled self feels with someone on a purely access level. Sometimes it can happen with complete strangers, disabled or not, or sometimes it can be built over years. It could also be the way your body relaxes and opens up with someone when all your access needs are being met.”
Mingus goes on to distinguish access intimacy from compulsory or compliance-oriented access: “Access intimacy is not just the action of access or ‘helping’ someone. We have all experienced access that has left us feeling like a burden, violated or just plain shitty. Many of us have experienced obligatory access where there is no intimacy, just a stoic counting down of the seconds until it is over.”
Access intimacy can be minimal, silent, passive: “sometimes it is someone just sitting and holding your hand while you both stare back at an inaccessible world.”[126]
[...]
Inspired and informed by Mingus’s naming of the concept of access intimacy, I propose a set of minimal assumptions to identify the operational degrees of comfort and consent that shape how we share information:
1. Standing back, giving space: Listening, speaking, appearing to all who witness. At a distance from all others. I experience a low level of detail, only indirect, relayed interactions, ephemeral messages; symbolic, not sensory representations.
2. Witnessing and being witnessed: Listening, speaking, appearing to you. Not others. Those I consent to hear may speak to me. I may speak to those who consent to listen. I will not hear others when they speak, others will not hear me. I experience a higher level of detail, both sensory and symbolic representations. Direct and indirect interaction at a distance, persistent messages. Waiting.
3. Nearby, standing with or beside: Listening, speaking, appearing with a group. Anything any of us can hear, we all can hear. Any message appears to be from all of us, together. We appear as a group formed of individuals. Only direct interaction, at a distance or close. Promising to wait.
4. In contact, touching: Haptic, textural feedback between us. Ephemeral messages. I may choose to include tone indicators with any message or action. I can filter, or translate, tone indicators added by others. Being patient.
5. Holding, supporting: listening, speaking, appearing through you. I appear as indistinguishable from you. I only hear what you hear. You speak for me. Anticipation.
6. Falling together: feeding back; illegibility, only close interaction, sensory rather than symbolic representation. Everything indicates its tone. Listening, speaking, appearing as you. You and I appear as indistinguishable from each other. You and I happen to be in the right place at the right time.
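If these degrees were ever to be implemented, one might encode them as ordered, negotiable states; the sketch below (the schema and the minimum rule are my assumptions; only the names are quoted from the list above) makes the operational character explicit:

from dataclasses import dataclass

# A speculative encoding of the six minimal assumptions as ordered
# consent states. Names quote the prose; everything else is an
# assumption about how such degrees might be negotiated in software.
@dataclass(frozen=True)
class Degree:
    level: int
    name: str

DEGREES = [
    Degree(1, "standing back, giving space"),
    Degree(2, "witnessing and being witnessed"),
    Degree(3, "nearby, standing with or beside"),
    Degree(4, "in contact, touching"),
    Degree(5, "holding, supporting"),
    Degree(6, "falling together"),
]

def negotiate(offered: int, needed: int) -> Degree:
    # Never exceed what either party has consented to: interaction
    # proceeds at the minimum of the two degrees.
    return DEGREES[min(offered, needed) - 1]

print(negotiate(offered=4, needed=2).name)  # witnessing and being witnessed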