diff --git "a/aipulse.jsonl" "b/aipulse.jsonl"
deleted file mode 100644
--- "a/aipulse.jsonl"
+++ /dev/null
@@ -1,23 +0,0 @@
-{"text": "Webinar with Congressman Ro Khanna: Challenges in IT Law and Governance\n\nOn Friday, February 19, the AI Pulse project hosted a web conversation on current issues in IT Governance with Congressman Ro Khanna, a leading progressive thinker on a wide range of law and technology issues in the United States Congress. Rep. Khanna represents California's 17th district in the House of Representatives, where he chairs the Environment Subcommittee of the House Committee on Oversight and Reform, and serves as Deputy Whip of the Congressional Progressive Caucus. He is a passionate advocate of using technology to bring economic opportunity to rural and small-town America. In 2018, at the request of Speaker Pelosi, he authored a widely praised set of principles for an Internet Bill of Rights. Prior to serving in Congress, Rep. Khanna worked as an intellectual-property lawyer and served in the Obama Administration as Deputy Assistant Secretary of Commerce. He holds an undergraduate degree in Economics from the University of Chicago and a J.D. from Yale.\nJoining Rep. Khanna in conversation were Professors Eugene Volokh and Ted Parson of the UCLA School of Law. The conversation ranged over a wide set of law and technology issues, including market concentration and antitrust in the IT sector, First Amendment issues associated with online speech and content moderation, and online privacy regulation and related consumer online rights.\nThe recording of the event is available here.\nWe plan a continuing series of informal web conversations on various interesting and fun questions related to the societal impacts, governance, and ethics of AI/ML and related data and computational technologies. Stay tuned for these – or to be added to our mailing list for announcements of future events, please send an email to the AI Pulse project.", "url": "https://aipulse.org", "title": "Webinar with Congressman Ro Khanna: Challenges in IT Law and Governance", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=1", "id": "3552398691f1b691c6c4b2b5fddbbc43"}
-{"text": "Webinar: The Filter Bubble\n\nOn Friday, November 6, the AI Pulse project hosted a web conversation on The Filter Bubble – or maybe it's The Echo Chamber: the notion that we are living in social worlds that are increasingly narrow, tightly connected, and homogeneous – in the people we interact with, the ideas and values we engage, even what we believe, and the facts we experience about the world. In various accounts, this might be caused by the mere fact of interacting online, or by specific design and algorithmic choices of the social media platforms we interact with. And crucially, this might occur without our active choice, or without our even being aware that it is happening. We considered a few big questions about the Filter Bubble. First, is it a real thing, and is it meaningfully different from familiar processes of social interaction? To the extent it is real and different, what is causing it? What effects is it having on our social, economic, and political lives – and if it's doing bad things, what can be done about it?\nThe conversation was kicked off by Jane Bambauer, Professor of Law at the University of Arizona.
Joining the conversation with Jane were David Brin, astrophysicist and celebrated author of science fiction and non-fiction futurist speculation; Mark Lemley, Professor of Law at Stanford University and director of Stanford's Program in Law, Science and Technology; Eugene Volokh, Professor of Law at UCLA; and Ted Parson, Professor of Law at UCLA and Director of the AI Pulse Project.\nThe recording of the event is available here.\nWe plan a continuing series of informal web conversations on various interesting and fun questions related to the societal impacts, governance, and ethics of AI/ML and related data and computational technologies. Stay tuned for these – or to be added to our mailing list for announcements of future events, please send an email to the AI Pulse project.", "url": "https://aipulse.org", "title": "Webinar: The Filter Bubble", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=1", "id": "bed03caa9327d455dbaff382218a427f"}
-{"text": "Webinar: Alternative Payment and Business Models for the Internet\n\nThe AI Pulse project hosted a web conversation on Friday, September 18, on the topic of alternative payment and business models for the Internet. Is the now-dominant advertising model fundamentally flawed or doomed? And if so, what alternative approaches are most promising? In particular, does the model of small (mini, micro, nano) payments for information content have promise, despite the widespread criticism it has attracted and the failure of several early attempts to implement it?\nThe conversation was kicked off by David Brin, astrophysicist and celebrated author of science fiction and non-fiction futurist speculation. Joining the conversation with David were Jane Bambauer, Professor of Law at the University of Arizona; Mark Lemley, Professor of Law at Stanford University and director of Stanford's Program in Law, Science and Technology; and Eugene Volokh, Professor of Law at UCLA. UCLA Law Professor and AI Pulse Project Director Ted Parson moderated.\nThe recording of the event is available here.\nStay tuned for continuations of the conversation.", "url": "https://aipulse.org", "title": "Webinar: Alternative Payment and Business Models for the Internet", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=1", "id": "5489a8db48fee0898677474128e0357c"}
-{"text": "Max – A Thought Experiment: Could AI Run the Economy Better Than Markets?\n\nEdward A. (Ted) Parson1\nAbstract\nOne of the fundamental critiques of twentieth-century experiments in central economic planning, and the main reason for their failures, was the inability of human-directed planning systems to manage the data gathering, analysis, computation, and control necessary to direct the vast complexity of production, allocation, and exchange decisions that make up a modern economy. Rapid recent advances in AI, data, and related technological capabilities have re-opened that old question, and provoked vigorous speculation about the feasibility, benefits, and threats of an AI-directed economy. This paper presents a thought experiment about how this might work, based on assuming a powerful AI agent (whimsically named \"Max\") with no binding computational or algorithmic limits on its (his) ability to do the task.
The paper's novel contribution is to make this hitherto under-specified question more concrete and specific. It reasons concretely through how such a system might work under explicit assumptions about contextual conditions; what benefits it might offer relative to present market and mixed-market arrangements; what novel requirements or constraints it would present; what threats and challenges it would pose; and how it inflects long-standing understandings of foundational questions about state, society, and human liberty.\nAs with smaller-scale regulatory interventions, the concrete implementation of comprehensive central planning can be abstracted as intervening via controlling either quantities or prices. The paper argues that quantity-based approaches would be fundamentally impaired by problems of principal-agent relations and incentives, which hobbled historical planning systems and would persist under arbitrary computational advances. Price-based approaches, as proposed by Oskar Lange, do not necessarily suffer from the same disabilities. More promising than either, however, would be a variant in which Max manages a comprehensive system of price modifications added to emergent market outcomes, equivalent to a comprehensive economy-wide system of Pigovian taxes and subsidies. Such a system, \"Pigovian Max,\" could in principle realize the information efficiency benefits and liberty interests of decentralized market outcomes, while also comprehensively correcting externalities and controlling inefficient concentration of market power and associated rent-seeking behavior. It could also, under certain additional assumptions, offer the prospect of taxation without deadweight loss, by taking all taxes from inframarginal rents.\nHaving outlined the basic approach and these potential benefits, the paper discusses several challenges and potential risks presented by such a system. These include Max's need for data and the potential costs of providing it; the granularity or aggregation of Max's determinations; the problem of maintaining variety and innovation in an economy directed by Max; the implications of Max for the welfare of human workers, the meaning and extent of property rights, and associated liberty interests; the definition of social welfare that determines Max's objective function, its compatibility with democratic control, and the resultant stability of the boundary between the state and the economy; and finally, the relationship of Max to AI-enabled trends already underway, with implications for the feasibility of Max being developed and adopted, and the associated risks. In view of the depth and difficulty of these questions, the discussion of each is necessarily preliminary and speculative.\nIntroduction\nArtificial Intelligence: Advances, Impacts, and Governance Concerns\nArtificial intelligence (AI)—particularly various methods of machine learning (ML)—has made landmark advances in the past few years in applications as diverse as playing complex games, purchase recommendations, language processing, speech recognition and synthesis, image identification, and facial recognition. These advances have brought a surge of popular, journalistic, and policy attention to the field, including both excitement about anticipated benefits and concern about societal impacts and risks.
Risks could arise through some combination of accidental, malicious or reckless use, as well as through the expected social and political disruption from the speed and scale of changes.\nPotential impacts of AI range from the immediate and particular to the vast and transformative. While most current scholarly and policy commentary on AI impacts addresses near-term advances and concerns, popular accounts are dominated by vivid scenarios of existential threats to human survival or autonomy, often inspired by fictional accounts in which AI has advanced to general super-intelligence, independent volition, or some other landmark of capabilities equivalent to exceeding those of humans. Expert opinions about the likelihood and timing of such extreme advances vary widely.2 Yet it is also increasingly clear that such extreme advances in capability are not necessary for AI to have transformative societal impacts—for good or ill, or more likely for both—including the prospect of severe disruptions.\nEfforts to manage societal impacts of technology always face deep uncertainties, both about trends in technical capabilities and about how they will be used in social context. These perennial challenges are even greater for AI than for other recent areas of technological concern, due to its diffuse, labile character, strong linkages with multiple areas of technological advance, and breadth and diversity of potential application areas.3 In its foundational and potentially transformative character, AI has been credibly compared to the drivers of previous industrial revolutions, electricity and fossil fuels.4\nIn view of these challenges, analysis and criticism of AI's social impacts and its governance have tended to cluster at two endpoints in terms of the immediacy and scale of the concerns they consider. Most current work targets present or immediately anticipated applications, such as autonomous vehicles and algorithmic decision-support systems in criminal justice, health-care, employment, and education, addressing already present concerns about safety, liability, privacy, bias, and due process.5 A bolder minority of current work goes to the opposite extreme, aiming to characterize the implications of some future endpoint of capability—super-intelligent AI, or artificial general intelligence (AGI), for example—with attendant risks to human survival or autonomy. This latter work includes efforts to identify and develop technical characteristics that would make AI robustly safe, benign, or \"friendly\" for humans, no matter how powerful it becomes: in effect, seeking practical (and contradiction-free) analogues to Asimov's Three Laws of Robotics.6\nThe broad range that lies between these two clusters, however—the impacts, risks, and governance challenges of AI that are intermediate in time-scale and magnitude between the immediate and the existential—also carries the potential for transformative societal impacts and disruptions, for good and ill. Yet despite admitting some degree of informed and disciplined speculation, this intermediate range has received less attention.7 This intermediate range of AI applications and impacts is unavoidably somewhat diffuse in its boundaries, but can be coherently distinguished, at least conceptually, from both the ultimate and the immediate. 
The distinction from ultimate, singularity-related concerns is relatively simple: in this mid-range, AI applications are still under human control.8\nThe distinction of mid-range from immediate concerns is subtler, yet can be meaningfully drawn in terms of scope of control. In current and projected near-term uses, AI applications advise, augment, or replace existing actors (a person, role, or organization) in existing decisions. They are embedded in products and services marketed by existing firms to identified customers. They support or replace human expertise in decisions now taken by individual humans, or by larger groups or organizations (corporations, courts, boards, etc.) that are recognized and held accountable like individuals. But this correspondence between AI applications and pre-existing actors and decisions is historically contingent, and need not persist as AI capabilities expand. In the medium term, AI could be deployed to do things that somewhat resemble present actors' decisions, but at such expanded scale or scope that their impacts are qualitatively changed, by, for example, expanding actors' power, transforming their relationships, or enabling new goals. Alternatively, AI could be deployed to do things not now done by any single actor, but by larger-scale social processes or networks, such as markets, normative systems, diffuse non-localized institutions, or the international system. We can envision future AI systems comprehensively integrating—and presumably aiming to optimize—all decisions made by and within large complex organizations. For example, we might envision AI \"running\" UCLA, the UK National Health Service, the State of California, or as I explore in this paper, the entire economy. Deployed at such scales, AI would take outcomes that are now viewed as emergent properties, equilibria, or other phenomena beyond the reach of any individual decision or centralized control, and subject them to unified control, intentionality, and (possibly) explication and accountability. Assessment and governance of AI impacts in this intermediate range would, more clearly than for either immediate or singularity-related concerns, require consideration of both the technical characteristics of AI systems and the social, economic, and political context in which they are developed and used.\nA Thought Experiment: AI-Powered Central Economic Planning\nTo explore these possibilities, this paper develops a thought experiment that sits squarely in this middle range: Could AI run the economy, replacing decentralized decisions by market actors? Could some plausible extrapolation of rapidly advancing AI and data capabilities perform the resource allocation and coordination functions of markets—the functions that twentieth century central planning systems attempted and so notably failed at—and do it better than either past planning systems or markets?\nAlthough this exercise is speculative, there are at least three reasons that it is worthwhile, both as an intellectual exploration with deep historical relevance and surprising current saliency and for its practical implications. First, it provides a vivid illustration of the potentially transformative impact of AI capabilities that sit in this middle range, not requiring general or super-intelligent AI systems. 
Indeed, far from being implausibly audacious, its ambition is comparable to many other expansive projections, for good or ill, of potentially transformative AI applications.9 Second, it offers new perspectives on deep, enduring questions of social, political, and legal theory, such as the definition of social welfare, the relationship between economic and personal liberty, civil pluralism, the relationship between the market economy and the state, and the boundaries between individual liberties and state or other collective authority. The inquiry informs sharp current political controversies, as rapid progress in AI shifts the ground under seemingly settled questions such as the distribution of economic surplus between labor and capital, the impacts of economic concentration, and the distribution of power in society.10 Third, this is a potential AI application whose moral valence is not obvious a priori but rather ambiguous and contingent, not clearly pointing to either Utopian or Dystopian extremes but potentially capable of turning in either direction. It thus provides rich ground for inquiry into its consequences and the conditions that would tilt toward either societal benefits or harms, of specific forms or in aggregate, and hence may suggest guidance for near-term policy and legal responses.\nBefore getting into details, I briefly address the issue of what name to give the AI who wields this great power. I propose \"Max.\" Among its other virtues, \"Max\" is helpfully gender-ambiguous—but it being 2019, Max also needs pronouns. Here, I look back before recent portrayals of uber-powerful AIs as female (for example, Her, Ex Machina), to two landmarks from a prior period of social upheaval: Kubrick and Clarke's HAL 9000 and even further back, to Roy Orbison.11 Many of us will be working for Max, if we are working at all, so Max is clearly \"The Man\"—and gets masculine pronouns.\nMax will have two big advantages over markets in promoting human welfare, both consequences of the fact that his pursuit of human welfare would be intentional and explicit, rather than indirect and emergent. Rather than performing a set of parallel, decentralized, private optimizations from which one must invoke \"invisible hand\" logic to assert good aggregate outcomes, Max would perform a global social optimization. This would enable him to correct market failures. This means, first, that Max can internalize all externalities, incorporating both market and non-market information to identify and assess external effects and respond appropriately—if not for all, then at least for the most serious and uncontested externalities, such as environmental harms, resource depletion, over-use of commons, and the under-compensated social benefits of health, education, knowledge, the arts, and civic institutions. Max could correct the pricing of fossil fuels, agricultural products, and water, and the salaries of teachers, nurses, and social workers.\nSecond, Max could reduce or eliminate market power and the associated rent-seeking behavior. Unlike human-managed firms, Max would not waste effort trying to create socially sub-optimal market power, or to shift rents or costs under conditions of existing, widespread market power – except insofar as these shifts somehow bring aggregate benefits. These advantages distinguish Max both from pure market arrangements and from historical attempts at central planning, which had their hands more than full simply trying to manage production and get markets to clear.
My focus on these advantages also distinguishes Max from other proposals for central planning based on computational advances, which have invoked broad social aims such as equality, sustainability, and democratic participation but have not worked through the practicalities of how the proposed systems would improve on market outcomes in advancing these aims.12\nThe paper proceeds as follows. Section I provides a brief historical background on the question of central planning, the main arguments for and against it, and the reasons that coming advances in AI and related technologies may transform the issue. Section II elaborates the task of \"running the economy,\" asking what it might mean concretely and what background assumptions must be specified to make sense of it, then proposes three alternative models of how Max might operate. Section III then gives a preliminary sketch of several issues and challenges raised by Max, including Max's data needs, implications for social diversity and innovation, the problem of defining Max's objective function, and the dynamics of how Max might come about, as well as what to do about them.\nThis inquiry presents the clear risk of sprawling over a vast landscape and thus ending up both speculative and superficial. To bound the inquiry and help limit this risk, and to distinguish this from an exercise in technological forecasting, I rely on several explicit simplifying assumptions. The first and most important of these is an assumption of computational capability. For any computational task relevant to the scale of the problem, \"running the economy\"—millions to billions of people, and a similar or somewhat larger order of potential goods, inputs, and production and distribution decisions13—Max can do it. There is no binding constraint on computational capacity, bandwidth, or algorithmic ability to optimize a well-specified objective function: these are assumed to be in unlimited, effectively free supply. This assumption, adopted for heuristic purposes, also distinguishes this exercise from the many efforts to characterize the computational complexity of the economy relative to present or projected computing power, either to demonstrate or reject the feasibility of control.14 I simply assume the necessary capacity, require only that the assumption pass some minimal threshold of plausibility, then work through its implications. No such simplifying assumption can be made, however, for the data Max needs to do his job, which is central to the inquiry and cannot be similarly hand-waved away. Relative to other computation-related resources, generation and distribution of relevant data is more difficult, more contingent on social and economic conditions, more dependent on Max's precise job description, and interacts more strongly with other, non-economic values that are (at least in its initial specification) outside Max's job description. Needed data, and the constraints and implications of getting it, are among the issues discussed in Section III. The paper closes with brief conclusions and questions for further investigation.\nI. Historical Context: The Socialist Calculation Debate\nIn the twentieth-century intellectual struggle between the centrally planned, ostensibly socialist states and the liberal capitalist democracies, two basic arguments were advanced against socialism.
The first was based on liberty and related normative claims about the proper scope of state authority relative to citizens, most sharply focused on the relationship between property rights and civil and political rights. The state cannot control the means of production without impermissible encroachment on the liberties of citizens. This critique is normative and foundational, independent of the state of technology or other contingent material conditions.15 The second argument was based on competency—the ability of state planning systems to efficiently produce the goods and services that people want. Critics of central planning argued that no matter how capable the officials running the system or the resources at their disposal, central planning could not match the performance of decentralized decisions in markets, but would be perennially afflicted with shortages, misallocations, and wasteful surpluses. Unlike the first critique, this one is contingent on specific conditions and capabilities. Even if it was true for all real efforts at central economic planning—as it almost always was—you can imagine alternative conditions under which it might not be true. My focus here is on this second argument.\nAlthough it has earlier roots, this argument grew prominent in the early twentieth century following the Russian revolution. The most prominent anti-planning statements were by Von Mises (1922), responding to a planning system advocated and partly implemented in early post-war Bavaria by Otto Neurath (1919).16 Hayek (1945) later sharpened and extended Von Mises's critique,17 while the most prominent rebuttal was by Oskar Lange. Von Mises and Hayek both argued, in different ways, that the equilibrium conditions necessary for competitive markets to clear and achieve their claimed social benefits could not be achieved by central planning because the information needed to do so is only available encoded in the prices that emerge from decentralized market interactions in competitive equilibrium (or more imperfectly through rougher competitive interactions, even absent perfect competitive equilibrium).\nAgainst Von Mises's initial statement of this thesis, Lange showed that there is no barrier in principle to the same optimality conditions produced by competitive interactions being attained by central direction, guided by a set of shadow prices playing a role parallel to that of market prices. Lange even proposed a practical process of incremental, trial-and-error adjustment by which planners could find market-clearing prices, analogous to the private-market adjustment process proposed by Walras.18\nHayek then sharpened the critique, arguing that even if planners could in theory replicate markets' socially optimal allocation, the scale of the required data and computation made the task impossible in practice—particularly considering the vast, fine-grained diversity of conditions under which people transact (Day-old muffins, half price!), and the dynamism of market conditions with resultant need for rapid adjustments. 
Lange's response, published posthumously in 1967, merely stated that advances in computing rendered the problem feasible, even easy.19\nAlthough the early rounds of this \"socialist calculation\" debate occurred before the development of modern computers, rapid advances in computation and in optimization algorithms—first using analog devices that built on wartime advances in cybernetic control, then with digital devices after the mid-1950s—repeatedly changed the context for subsequent rounds, albeit more in theory than in practice. The conflict between opposing conclusory assertions—Hayek's assertion of impossibility, Lange's of possibility—was unresolvable, as it depended upon contending speculations about future developments in technological capability. And while rapid continuing advances in both computers and algorithms since the 1950s stimulated periodic suggestions that the terms of the debate had fundamentally changed,20 there was no concrete evidence that a major threshold of capability had been crossed. Indeed, the planning problem is sufficiently under-specified that it is not clear precisely what level or type of computing resources would count as the relevant threshold. Meanwhile, the concrete economic and strategic victory of the liberal democracies over the Soviet bloc, and the obvious failure of actual attempts at central planning,21 made the question seem uninteresting.\nThe debate thus sat unresolved—and arguably unresolvable —for decades. Lange's was the strongest argument for socialist planning, but his shift to directing prices rather than quantities, and his leaving final goods and labor markets outside his planning system, left his proposal an odd, under-specified hybrid. His proposal was criticized both from the left for not being socialist enough and failing to guarantee social equality and democratic participation,22 and from the right for assuming perfect, unified firm response to planners' directives and for failing to account for the incentives of managers and entrepreneurs.23 Depending on implementation details that Lange did not specify, either critique—or both—may have been valid. Moreover, the arguments over computational feasibility between Lange and critics such as Hayek and Lavoie turned on competing unverifiable assumptions about future technical progress and its social context,24 which were not subject to empirical resolution.\nThree far-reaching recent changes in conditions, however, make it a useful time to seriously revisit the question. First, advances in AI and machine learning, in parallel with rapid expansion in hardware-based computational capacity. Second, the explosion in volume, ubiquity, and usability of data, particularly the widespread and powerful use of proxy data as skilled predictors for things that cannot be observed directly: for example, consumer preferences, attitudes and dispositions, and receptivity to political messages. And third, the growth of sub-systems of the economy—mainly within large integrated firms and cross-firm networks—that operate by central direction under algorithmic control, rather than human decisions responding to market conditions.25 These represent large islands of planning that aim to optimize private, rather than social, objective functions. Under these trends, there has been some revival of the planning debate, although with an unfortunate tendency to re-contest old questions without specific connections to recent progress. 
Although the most expansive exploration of these issues has been in speculative fiction,26 there is also active debate on the left about the feasibility and desirability of revived central planning based on modern computing.27\nII. How Would MAX Work?\nA. Mechanics of Max: Background Assumptions\nHow much does Max control? What does \"run the economy\" mean? Let's assume Max won't be supplanting human agency, telling everyone what to do all the time: that does not seem aligned with the goal of advancing human welfare. Then over what actual decisions is he given authority? We begin to approach this question by taking Max's job description seriously: Max \"runs the economy,\" a description that presumes the economy is not all of society, but is distinguished both from the state, and from some extensive set of non-economic social interactions and arrangements. Let's stipulate that the economy is the set of processes, institutions, and practices that control how goods and services are produced, exchanged, and consumed.28\nAs I sharpen the thought experiment to make Max more concrete and specific, at several points in the argument additional assumptions will be needed, either about the definition and boundaries of Max's job or about the social and political context in which Max operates. My aims in making these assumptions—to keep the exercise interesting and potentially relevant for near-term decisions—will suggest a few points of heuristic guidance in what assumptions are most useful. First, having already assumed no computational constraints I will try not to sneak in additional assumptions about Max's capability that shatter the (admittedly loose) bounds of plausibility I am trying to maintain. Second, since the purpose of Max is to advance human welfare, in specifying how Max works I will avoid choices that run strongly against evident human preferences and values—with the two caveats, of course, that preferences and values may change, and that future political conditions may favor deploying actual AI-based planning systems in ways that do not enhance human welfare. Finally, this thought experiment is intended to serve as a scenario exercise—a description and analysis of uncertain future conditions whose purpose is to inform near-term choices.29 At some points, this purpose tends to favor assuming less profound societal transformations, in order to maintain relevance and continuity with near-term decisions and research priorities. Throughout, I endeavor to make these assumptions explicit, and to note where other choices might be similarly plausible. For the most part, I choose just one path through the dense tree of possibilities, with brief observations on potential alternative paths but mostly leaving these to further development in future work.\nThe first of these required assumptions concerns the scope of Max's authority: in particular what authority he would have over consumption. Would Max tell people what to eat, wear, do, where to go for dinner or vacation? I assume that he does not, but rather that people still make their own consumption decisions. I make this choice partly as a generalization from my own preferences. I don't like being told what to consume, both out of an intrinsic preference for autonomy and because others who try often get my preferences wrong. 
This is also partly a moral choice—the overlap of consumption choices with basic liberty interests is too strong to give up, and I worry that letting people give up this autonomy, even if sometimes convenient, may be incompatible with human flourishing.30 And it is partly about Max's information needs—consumer choice provides continually updated information about preferences, which Max needs and may only be able to get by observing freely exercised choices. Rather than specifying consumption, Max will do what the economy already does—determine the options available to me, with contextual conditions of time and place—and provide relevant information and suggestions.31\nA second needed simplifying assumption concerns scarcity versus abundance. To keep the thought experiment relevant to current decisions and distinct from Utopian fiction—this is not Iain Banks's Culture32 —I assume that technical progress has not eliminated scarcity. So while consumption is not specified or compelled, neither does it operate as \"it's all free, take whatever you want.\"33 Consumption choices remain constrained, and any constraint on total consumption that does not dictate specific choices will resemble a familiar budget constraint. This implies that even with Max running the economy, absent conditions of post-scarcity plenty there must still be money. I have a finite amount of it, although we have not yet considered how I get it. And things have prices—or at least, final consumer goods have prices. We haven't yet considered input factors or intermediate goods.\nThis condition of continuing scarcity distinguishes the thought experiment here from the most expansive technological-communist reflections, which broadly assume technology (omnipresent data, 3-D printing) will generate conditions of limitless abundance, under which marginal costs—and hence prices—converge toward zero.34 In contrast to these visions, I assume that production still requires material inputs, many of which will be in constrained supply even with optimized production technology, perhaps increasingly tightly constrained, if Max's deployment comes before human civilization expands beyond the limits of the Earth. Perhaps the most decisive constraint on limitless abundance, however, comes from social limits to growth.35 To the extent that many things people desire remain ordinal or positional—markers of relative social status that are intrinsically constrained—even perfectly optimized production technology will not overcome scarcity: the goalposts will simply move. With many things people want still in limited supply, due to any combination of material, environmental, and social-structure constraints, the economy will still need an allocation mechanism to determine who gets what. Although it may take different forms, this will look to consumers like prices and a budget constraint.\nWith Max's authority limited to production, another assumption is needed immediately: Do people still work? To pull the exercise toward relevance for near-term decisions, I assume that Max, other AI systems, and robots have not replaced all human productive activity. People still work, including instrumental or productive work (working to make things other people want) as well as intrinsically motivated work independent of any demand for the output. This might be because AI and robots cannot satisfactorily do every job and people are still needed,36 or because people want to work. The number of people working may be far fewer than today but is not a tiny number. 
Enough people are working that allocating and managing them, and their motivation and welfare, must be considered in how the economy runs.\nWith Max running the economy and people still working, the next assumption needed is the nature of the boundary and interactions between Max and human workers; in particular, are there still firms? In theory, it is possible to have an economy without firms.37 Every human worker could be a sole proprietor, interacting with others through contractual market transactions.38 Firms are artifacts of information, principal-agent relations, and economies of scale, which make it more efficient to gather workers and resources inside organizations with internal operations controlled by collegial, normative, and (mostly) authority relationships rather than market transactions.\nFor the three assumptions discussed thus far, only one option appears to keep the imagined world potentially desirable and the thought experiment relevant and bounded. Max controls production, not consumption; there is still scarcity and thus a need for some way to allocate output among people; and people still work. On whether firms still exist, however, and the related question of how human workers interact with Max, at least two cases appear plausible. First, we can assume there are still firms, within which managers contract with human employees and exercise authority over their work. Firms may employ AI or robots alongside human workers, but human managers run the show internally. Under this assumption, Max's authority operates only in the external environment of the firm. Alternatively, we can assume that firms are gone. Every human worker is then accountable directly to Max, rather than to human managers. Workers may still sit together in shared offices, collaborate with each other, and hang out by the coffee machine, but their work is directed by Max via a set of contractual arrangements.\nIntermediate cases are possible, although they probably don't all require separate consideration. For example, the economy might be mixed. Some firms still operate, in parallel with a large economy of individual contractors working directly for Max. One intermediate case that might require separate consideration would be if some firms are managed by non-Max AI's. For this case to be distinct, firm-manager AIs must not be fully integrated into Max, but rather are separate decision-makers in an agency relationship with Max. Max's ability to see inside the firm must be limited, and interests must not be perfectly aligned. The firm AIs may have private interests in their firm's enrichment or status, perhaps making their own workers happy or satisfying their shareholders (if they still have them), or they may disagree with Max on the aggregate social welfare function. Bargaining between Max and the firm would be AI-to-AI, and so on more equal footing than Max's interactions with human managers. And of course, workers' experience within the firm would be different; they would be under the authority of their firm's AI manager, rather than either human managers or Max.\nOn this point, I begin by assuming that firms do still exist, managed by either humans or AIs. Max's main area of operation thus lies outside the boundary of the firm, in dealings among firms and between firms and consumers.\nB. How Would MAX Work II: Quantities or Prices and Applied to What?\nWhat does Max actually do? 
The simplest possibility is that Max operates just like an old-fashioned central planner, specifying input and output quantities to every firm. I call this variant \"Quantity Max\". Max provides your allocation of all inputs—your capital, workers, and material inputs. They will arrive on your loading dock, on the following schedule. If you have a problem with the inputs delivered, you are free to take it up with the supplier, but you'd probably rather deal directly with Max, who has an excellent record of resolving disputes rapidly and fairly.39 And here is your output quota: how much of each product, with delivery timing and locations specified. With Max's unlimited computational capability, the inputs and outputs all match up perfectly (subject to stochastic optimization, to the extent there are still equipment breakdowns, snowstorms, or other uncertainties outside Max's control).\nThe most basic challenge for this arrangement concerns the incentives of firm managers. Do managers have discretion in how they run things inside their firms? Presumably they do, and presumably they are not pure altruists. We thus expect them to use their discretion to advance their own interests, not to act as perfectly faithful agents for Max's social welfare function. And to the extent they do not have discretion, why have people doing these jobs and why would anyone want them?40 Max may get the flows of inputs and outputs among firms perfectly. But just controlling quantities (plus whatever structure of contracts Max gives managers in case of variation from these) leaves a serious agency problem. Managers can use their discretion to advance their divergent interests, through various forms of rent-seeking, cutting quality, skimming off inputs, abusing their workers, and creating negative externalities—anything that is within their scope of authority and concealable from Max. Moreover, the problem is not solved by having Max specify more precisely what the firm does, including technology choice and other internal decisions. As long as there are—by need or choice—firms managed by humans with discretion, and private information to make the discretion meaningful, there will be agency problems of this sort. These can be reduced by more tightly specifying firm behavior, at the cost of whatever values motivated having human managers; they can be reduced to individual-level agency problems if there are no firms and every human worker reports directly to Max; and they are changed in character if firms are managed by AI's separate from Max. But all these reductions carry costs and tradeoffs, and none fully eliminates agency problems.\nThe cause of this problem is obvious; like old-time central planning, this system has no prices. Oddly, we had to assume prices at the point of final consumer sale to have meaningful consumer budget constraints. But under Quantity Max, all input and production decisions up to that point are made by diktat. For Max to tack on prices at final retail sale, without tracking and using them through the production process up to that point, fails to take advantage of available, high-value information and communication devices. Socialist planners were hostile to prices for ideological reasons, but Max doesn't have to be. Max is not an ideologue,41 he's an instrumentalist and an empiricist. He's looking for ways to advance aggregate human welfare and willing to adopt new approaches in pursuit of that end.\nWe thus consider a second variant of Max, \"Price Max\". 
Instead of specifying quantities, Price Max specifies prices of all goods in commerce, including all firm inputs and outputs. Although Price Max is still imposing different transaction conditions than parties would adopt based on private interests alone—and thus requires effective suppression of black markets to enforce his exclusive authority—the change from specifying quantities to prices reproduces several major features of markets. Firms are free to organize their operations as they choose, subject to the given prices they face. Managers can use this discretion to increase profits, which remain within the firm. The things managers do within the market system to increase profits—for example, shopping around for more suitable or lower-priced inputs,42 tuning and improving production processes, motivating workers, improving and differentiating their outputs to command a higher price—remain feasible, potentially effective at increasing profits, and socially desirable. The change from setting quantities to setting prices reduces many—not all—of the agency problems present under Quantity Max, assuming firms can retain a large enough fraction of their earnings to be motivating.43 Max setting prices instead of quantities also mitigates liberty concerns related to Max's direction of labor markets. Max setting wages, perhaps also running a clearinghouse to suggest matches of people to jobs, better preserves the voluntary nature of work decisions that, like consumption decisions, are too strongly linked to individual liberty to consider compelled assignments.\nOur assumptions about Max's optimizing ability imply that Max gets all prices right—all markets clear, with no shortages or surpluses. But for Price Max to set these prices, he must either independently calculate or observe the same data as is revealed or generated in market interactions: the abundance and characteristics of resources, their alternative uses, the production technologies available to transform them, and consumer preferences. If he cannot garner exactly the same data, he must identify good enough proxies to closely approach the same competitive equilibrium solutions. Although I have assumed no effective constraints on Max's computational ability, similarly expansive assumptions about Max's access to all needed data are more suspect. Data is the weakest and most troublesome link in the chain of capabilities this thought experiment requires. Max might be able to independently calculate these competitive equilibrium prices. But to the extent the data needed to reproduce these are not available, are costly, or cause harms or violate valued principles in their acquisition—or, for that matter, to the extent there are other social values beyond information-generation attributed to market processes of search, bargaining, and contracting—we might prefer not to have Max re-estimate these market-clearing prices. Instead, Max could use the prices that emerge from independent production and consumption decisions, transactional offers and requests (bids and asks), in competitive interactions—in effect, let Max free-ride on market processes to generate price information.\nGreat: We've come this far, and the best Max can do amounts to reproducing market prices—like the character in the Borges story who independently \"wrote\" Don Quixote?44 In one sense, we have simply reproduced Hayek's argument about the information economy of decentralized market decisions. But we're not done. 
Market prices provide high-value information, but only as a starting point for Max's job. Max is charged with improving on market outcomes when these diverge from social optimality. The prices Max calculates to achieve this will often be equal or very close to those emerging from market exchange, but not always; and the differences are important. To illustrate this most clearly, it is helpful to consider yet a third variant of Max.\nThis form of Max would use market interactions to generate initial prices that serve as the starting point for every transaction, but would then impose price adjustments on each transaction as needed to correct market failures. Insofar as many of the market imperfections Max must correct can be understood as externalities (both negative and positive), we have now re-defined Max's job as administering a complete system of Pigovian taxes and subsidies,45 so I call this variant \"Pigovian Max.\" Pigovian Max would evaluate all externalities and other market imperfections (not just as single points, but as they vary over some relevant range of output), announce taxes or subsidies, then manage whatever adjustment process is needed to ensure that markets still clear.\nHow would Pigovian Max be implemented? At the level of individual transactions, Pigovian Max might look quite unobtrusive and familiar. Sellers could post fixed prices or buyers and sellers could negotiate, as they do under market systems, up to the point of transaction. Max would then calculate and add the appropriate tax or subsidy at the point of sale. The process would be similar to the imposition of a sales tax, but with two differences. First, the adjustments would vary over transactions, so buyers and sellers would need to be informed of the adjustment before they commit to each transaction, presumably via mobile devices, information on sales displays, or point-of-sale systems. Second, adjustments could be of either sign, and could be large for goods with large externalities.\nAt larger scale, how disruptive Pigovian Max would be will depend on details of implementation, and on uncertainties about the size of the adjustments that require analysis beyond my scope here. Max might be relatively unobtrusive, to the extent that relatively few goods carry most of the external effects that need correction—for example negative externalities from fossil fuels, water extractions,46 heavy metals, toxic chemicals, agricultural fertilizer and chemical inputs; and positive externalities from provision and dissemination of knowledge, physical and mental health, social services, etc. The system could be implemented at various points in supply chains, depending on how external effects are distributed across these. Implementing it like a Value-Added Tax (VAT),47 with Max's adjustment based on incremental external costs or benefits at each stage from primary inputs to final consumer goods, would be a plausible approach. For goods carrying the largest negative externalities—such as fossil fuels in the world of severe climate change—the preferred social outcome may involve large reductions in the total quantity in commerce or complete elimination. If the responsibility of making such large-scale social transformations falls entirely to Pigovian Max's price adjustments, these might have to phase in slowly, as Max balances the continuing harm caused by the products with the social cost of disruption from rapid squeezing out of existing products and stranding capital investments. 
Alternatively, the state might use other regulatory tools, which will still be available to it even with Max operating, to pursue these changes. When social goals are pursued partly or wholly through such other regulatory tools, the share of responsibility for these issues falling to Max, and the size of Pigovian Max's price adjustments, would be reduced or eliminated accordingly.\nIII. Designing and Implementing Max: Issues and Challenges\nIn discussions of AI, seemingly prosaic matters of design and implementation lead, surprisingly directly and quickly, to deep questions of political, legal, and moral foundations of social institutions. As a thought experiment, Max's job in part is to provoke these discussions. Max is intended to be taken seriously as an exploration of a potentially transformative application of AI. Simply positing Max as a serious possibility and reasoning concretely through how it would work clarifies various conditions, requirements, and potential impacts and risks. But Max also aims to provoke questions about the societal conditions that define his context: what they are, how they operate, what they require, their impacts, their unrecognized assumptions, and their inter-relationships.\nThis section addresses this second class of questions. It considers Max's needs, implications, and potential impacts—both promising and troublesome—to probe both how feasible or desirable Max (or similarly vast AI uses) might be and what new perspectives Max provides on old questions. Even more than prior sections, the discussion roams over a vast territory, and is thus necessarily speculative and preliminary.\nA. Data: What Does Max Need and How Does He Get It?\nThe central element of the old socialist calculation debate, and the one most profoundly changed by recent advances, is data. Any form of Max, like any central planning system, will require a vast amount of data to support its calculations. I rejected Quantity Max on grounds of agency problems and managerial incentives, not data limits. The data needs of managing via prices or quantities may differ based on the technical structure of the optimization problem—the relative computational efficiency of optimizing on primal versus dual variables—but that question is moot given the rejection of Quantity Max for other reasons. The two remaining variants, Price Max and Pigovian Max, have similar data needs, but differ in how they fulfill them.\nConsider first the data Max needs to replicate market outcomes insofar as these are socially beneficial, such as to generate market-clearing outcomes that are allocatively efficient in the limited, Pareto sense. Max needs data about all supply and demand conditions internal to any potential transaction, including inputs, production technologies, and consumer preferences. This is the same information that old socialist planning needed and failed for lack of, with the small qualification that Max has a somewhat larger job than Lange's planner, which did not set prices for final consumer goods or labor. Both Price and Pigovian Max need these data, but Pigovian Max relies on decentralized market interactions to generate them, subject to his subsequent adjustments to correct market failures.
Price Max enjoys no such short-cut, but must gather, integrate, and analyze all these data and synthesize the results to contribute to his price setting for each transaction.\nIn contrast to the old socialist calculation debate, it is plausible, perhaps even likely, that the data needed to construct these independent estimates of market prices are now available. This is particularly clear on the supply side, for firms. Relevant information is available from multiple sensors doing real-time monitoring of multiple attributes of production, distribution, and sales; internal accounting and management information systems; technical characteristics and performance data from machines and equipment, greatly extended by the proliferation of internet-connected devices; and complete records of the training, skills and behavior of workers, together with relevant outcome measurements. The sufficiency of these firm-level data is barely even a matter of speculation, given the high reliance on algorithmically directed planning, within large enterprises and in supply chains and multi-enterprise networks organized by a single hegemonic firm (Amazon, the Apple and Android app stores). Decisions to coordinate these large-scale operations by data-guided direction rather than internal markets strongly imply that the data needed for efficient production, cross-enterprise cost minimization, and identification and pursuit of new opportunities is available, at least to optimize the objective function of the firm directing the system.48\nMax needs these production-related data not just at the level of single firms, however, but for the whole economy. In addition to the computational challenges that I am ignoring, this shift to an aggregate perspective raises questions about incentives for full and accurate disclosure. Max would presumably be authorized to compel data disclosure, which may be effective for data from direct observations (equipment sensors, surveillance cameras), or other sources not readily subject to misrepresentation or gaming (internal managerial accounting data). Obtaining reliable disclosure may be harder for data dependent on human observation and reporting—most acutely for \"tacit knowledge,\" skill-like knowledge that people hold without being able to articulate, which played a major role in Hayek's critique of planning. While I assume that this problem can be kept manageable through advances in sensors and data management, together with incentive-compatible disclosure systems and penalties for outright falsification, this is a contestable assumption.\nOn the consumption side, human preferences and welfare are not directly observable, although advances in neuroscience suggest this may be changing. A host of related behavioral data is observable, however, from which machine-learning-based predictive analytics systems are advancing rapidly in their ability to predict purchase decisions and related behavior. Firms collect a huge amount of such data, and rapid progress in systems, including recommendation engines and personal assistants, suggests they may be adequate for Max to do his job. These data probably do not present serious problems related to disclosure incentives because they originate outside firms (even if firms then collect them), and so they are less likely to be deeply embedded in internal tacit knowledge.\nThe data challenges involved with shifting from firm-based to societal optimization will be more serious for consumption-related than production-related data. 
Market systems presume correspondence between consumers' voluntary choices and their welfare. This identification relies at two points on the axiom of revealed preference: first, if you chose it you must have preferred it given the available alternatives; and second, your preferences thus expressed are better indicators of your welfare than any outside agent can provide. To the extent this proposition is not treated purely as an axiom, it is obviously sometimes false: people make some choices that clearly harm them. No comprehensively better way to measure welfare is clear, however, and opening the door to letting others tell you what you need poses clear threats to liberty, via paternalism or worse. I mainly address this issue in discussing the problem of defining Max's objective function in Section III.F. But I flag it here to raise the possibility that optimizing for welfare rather than for consumption behavior may require different data, which may be less readily available, less observable, or less well proxied. To the extent this is the case, even brilliant success advising and predicting consumption choices may not be sufficient to demonstrate the availability of data needed for welfare optimization.\nThe collection and use of consumer data by firms is already raising serious concerns related to privacy and citizen control over their information, for which various policy and legal responses are proposed. I do not address these issues, except to note that the relevant question for my purposes is how these concerns differ depending whether the actor gathering your data is a private firm or Max. This could go either way. You might initially object more strongly to data gathering by a quasi-state actor like Max, although this difference may fade or reverse as the scale and data-integration capabilities of private firms grow to resemble, or exceed, those of states. There may, indeed, be better reasons to trust Max with our data than Facebook, Google, or Amazon. Max might, for example, be more able and willing than private firms to implement strong privacy-protective measures, such as privacy defaults, strong consent requirements, or prohibitions on redistributing, re-using, or re-purposing data. On the other hand, privacy-protecting restrictions on data use might be more disabling for Max than for private firms, who can obtain information about consumer preferences from their own interactions as market players. In any case, privacy concerns are distinct from my main focus on feasibility, unless they prompt an outraged reaction that makes needed data unavailable or unusable.\nRelative to Price Max, Pigovian Max has less need for transaction-internal production and consumption-related data, because he relies on market interactions to generate initial prices based on these. In addition to assuming that these emergent prices accurately reflect underlying producer and consumer information, Pigovian Max must also assume that using market outcomes in this way does not impair their validity.49 Transactions under Pigovian Max would occur in two stages, because transacting parties would see both the initially determined, market-based price, and Max's adjustment to yield the final price. This two-stage process might change behavior and outcomes, depending on the strength and form of decision heuristics operating. 
For example, parties might fail to make transactions that are advantageous due to strong positive externalities, if they do not anticipate Max's contribution making it privately more attractive to them. Alternatively, if buyers exhibit strong anchoring on the posted pre-adjustment price, we would expect the two-stage disclosure process of Pigovian Max to generate stronger responses to Max's adjustments than those by parties interacting with Price Max, who would only see the final price.50 Pigovian Max might also face gaming of initial transactions, or reduced vigor in seeking advantageous transactions by parties who know Max will come in after the fact to control their transactions. Such possibilities might require Max to re-check the validity of initial prices by replicating Price Max's estimates in some cases, thereby reducing his information advantage over Price Max.\nBoth Price and Pigovian Max also need information related to any effects external to transacting parties or other market failures. Relevant market failures are of three types: (1) limited or asymmetric information, especially given heterogeneous goods and fine-grained variation of transaction conditions over space and time; (2) conventional externalities such as environment, health, and safety harms; and (3) market power. I discuss the first two here and consider market power and its consequences in the next section.\nBroadly, Max's assumed capabilities imply that there are no information-related market failures, but there is a little more to say on this for Pigovian Max. His reliance on transacting parties' bargaining as a proxy for all relevant transaction-internal information will be invalid if these outcomes reflect limited or asymmetric information. Pigovian Max thus cannot avoid looking under the hood for transaction-internal information; although he does not need to do this to set an initial, pre-adjustment price, he still must do it to identify and correct any information limits. This need may only apply to certain types of transaction, or may be less burdensome than Price Max's construction of prices de novo, but still reduces Pigovian Max's computational advantage over Price Max.\nTo correct environmental and other externalities, most data Max needs will be external to the transaction, related to public or externally imposed benefits and harms. This will include both scientific and consumer-preference data—information about the physical and biological consequences of economic decisions, and about how people value these consequences. Estimates of citizen's valuation of environmental and related outcomes are presently conducted for benefit-cost analysis of regulatory decisions, relying on a combination of behavioral proxy data and explicit value-elicitation surveys. These methods are quite crude; indeed, there are controversies over the epistemic validity of such preference estimates separate from realized market transactions; although, absent clearly better alternatives, these are extensively relied on in regulatory decisions.51\nWhether or not Max can approach some valid stable representation of such preferences, I am confident Max can construct estimates of these values better than those produced by present methods. He could equal them by precisely replicating present crude data and estimation techniques; and he would almost certainly be able to deploy his vast data and computational resources to develop better surveys, proxies, and validity-checking procedures. 
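To illustrate the arithmetic such estimates would feed into, a Pigovian price adder for a single transaction might be assembled roughly as follows (a sketch only; the function, names, and numbers are invented assumptions standing in for the damage and valuation estimates just described):

    # Minimal sketch of a Pigovian price adjustment for one transaction.
    # All quantities are invented for illustration; in the thought experiment
    # they would come from Max's scientific and valuation estimates.

    def pigovian_adjustment(emissions_per_unit, damage_per_unit_emission,
                            affected_population, avg_marginal_valuation):
        """External cost per unit sold: physical harm times how people value it."""
        physical_damage = emissions_per_unit * damage_per_unit_emission
        return physical_damage * affected_population * avg_marginal_valuation

    market_price = 10.00        # price the parties would reach on their own
    adder = pigovian_adjustment(
        emissions_per_unit=0.4,          # kg of pollutant per unit sold
        damage_per_unit_emission=0.02,   # index of harm per kg emitted
        affected_population=1000,        # people exposed to the harm
        avg_marginal_valuation=0.15,     # dollars each person would pay to avoid one harm unit
    )
    final_price = market_price + adder   # the price Pigovian Max announces
    print(round(adder, 2), round(final_price, 2))

A transaction with positive spillovers would receive a negative adder in the same way.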
Max's advantages would be even greater in integrating scientific information about causal mechanisms that link economic choices to valued impacts. Max could integrate expert scientific and technical knowledge about production processes and their external material and energy flows, as well as evolving state-of-the-art understanding of dynamics of environmental systems that link these flows to changes in valued environmental attributes. Under Max, beliefs about climate change or vaccine effects that were known with high confidence to be false would play no role in pricing the adjustments for associated transactions.\nGiven uncertainty in knowledge of environmental processes, Max would also have the option of taking a precautionary approach. Such an approach would start with a stipulated constraint on some specified environmental burden, defined over the relevant spatial scale and the associated producers and consumers. Such a constraint could come from a political process or could be generated by Max based on analysis of the same preference and environmental data incorporating some specified degree of risk-aversion. With that constraint specified, Max would then set optimal price adjustments to achieve that constraint, in effect, taking a cost-effectiveness rather than a benefit-cost approach.\nAll the data required for Max's calculations will change over time and so require monitoring and adjustment. Indeed, the explosion of complexity associated with product characteristics varying over time and location was the main basis of Hayek's revised argument for the impossibility of central planning. This was clearly correct for human planners, who could not do continuous updating and so had to specify uniform conditions over extended periods, but Max will be much more capable of location-specific and real-time adjustments. As a result, ironically, Max will have less need for accurate predictions of future conditions than human planners did. Max may also be able to identify cases where conditions change slowly or interactions are weak, and so decide when he can simplify his calculations at small social cost – if his computation is not quite costless, so such short-cuts are worthwhile. Changes over time will occur in both transaction-internal conditions and externalities, but the latter may present particular challenges of abrupt change. Scientific knowledge of mechanisms of environmental or health harm is occasionally subject to large revisions from new discoveries, which might imply sudden changes in Max's price adjustments. As noted above for Max's initial phase-in, his adjustments would then have to incorporate both the new scientific knowledge of harms and the costs of rapid adjustments, given the current state of the economy and capital stock. He must balance the costs of responding too slowly to the environmental harm against the disruption of steering the economy too fast in a new direction—or too confidently, given uncertainty.52\nB. Max Does Antitrust and IP: Market Power, Rent-Seeking, and Innovation\nIn addition to accounting for externalities, Max will be able to manage market power and related behavior and impacts for maximal social benefit. For purposes of analyzing how Max might do so, market power can usefully be categorized in three types, with different causes. First, most jurisdictions create monopolies by intentional policy choice through intellectual property law, with the aim that the resultant rents will generate incentives for creativity and innovation. 
Second, some industries are natural monopolies due to cost structures involving economies of scale or scope, which give large firms decisive advantages in terms of lower cost or ability to offer more attractive goods or services. Third, market power can be created through firms' efforts to erect barriers to entry against new competitors, using a wide variety of technological, strategic, marketing, policy, or legal means that subsume but are more extensive than the prior two mechanisms.\nIn all these cases, market power – and firms' resultant ability to raise prices or otherwise gather rents – is socially harmful. The third type, market power through artificially produced barriers to entry, represents a pure social harm with no offsetting benefit. Moreover, such advantages are often secured through explicit rent-seeking efforts, which present additional social costs with no net benefit: Those pursuing the rents benefit if their efforts succeed, of course, but at the cost of larger losses elsewhere. The second type, market power due to economies of scale or scope, also represents a net societal harm, not due to contrived efforts to seek rents but to the cost structure of the industry. Either large fixed costs create economies of scale, as in utilities with costly distribution networks or other traditional natural monopolies, or strong network effects create economies of scope, enabling larger producers to provide some combination of better products or services or lower costs. Economies of scale and scope create real advantages to being large, which tend toward market domination and resultant inefficiencies, even without the additional harm of rent-seeking behavior.\nBoth these types of market power produce social losses as firms raise prices or restrict supply to secure rents. For both types, the core of Max's response is to adjust prices to reduce or eliminate the rents. In the third type, Max should target the rents, not the rent-seeking behavior, because the ways to erect barriers to entry are too varied and numerous to control them all, and the rents—given Max's assumed computational capability and data access—are relatively easy to observe. Even if the boundary between normal capital returns and rents is contested and imperfectly observable (since it depends, among other things, on the riskiness of the enterprise), even approximately eliminating the rents will greatly reduce or eliminate incentives for rent-seeking, so this response – with adjustment and correction over time – is a complete solution. Because this market power was artificially created through rent-seeking behavior, extracting the rents will promote a return toward competitive conditions as rent-seeking behavior declines.\nIn the second type, however, the tendency toward market power is inherent in the market's cost structure and will not be eliminated by extracting the rents. Moreover, having one or a few firms dominate such markets is socially advantageous. The problem is not the market domination per se, but the resultant opportunity to raise prices and accrue rents. The solution again is for Max to set prices to capture the rents. Using Max in this way effectively reproduces rate-of-return regulation for natural monopolies, except that this response is applied not just to a few pre-identified natural monopolies but to any firm accruing significant rents. Modern monopolies, however—internet platforms and others whose market power comes from network externalities—present one additional complexity for Max.
Many such firms exploit their market power partly through transactions that are unpriced, based on the exchange of attractive free services for personal data, often under terms of service that obscure the terms of exchange. While there may be close analogies to conventional market power in firms' ability to impose these terms, it is not clear that these relationships are fully analyzable in terms of market power. To the extent these firms act like monopolists, this will be clearer in the pricing of other related transactions, such as selling targeted advertising based on aggregation of user-provided data. The correct policy response is unclear, and may depend on regulations related to data ownership and use that would be separate from Max. Assuming such policies are in place and effective, the remaining job for Max is once again identifying and extracting the rents—a job for which the data needs are similar to what Max is already using: firms' technological possibilities and internal accounting data, plus consumers' preferences, provide a good basis to characterize economies of scale and scope and the rents derived from them.\nThe first type of market power raises more significant policy challenges. Society benefits from creation and innovation, and IP law confers market power in order to create incentives for these activities. Past economic planning efforts did not perform well on this score, and were criticized for being dull, rigid, stodgy, and lacking in innovation. Effectively promoting variety, innovation, and creativity will represent a challenge for Max distinct from those discussed thus far. How could Max effectively promote these values—at least as well as, or hopefully better than, the present system of markets plus IP law?\nTo address this question, it is useful to consider separately different degrees of scale and novelty in innovation. At the smallest scale, innovation blends into variety in markets, as diverse products and designs are offered to cater to heterogeneous tastes and preferences for novelty. Markets do this pretty well, typically providing a mix of high-volume goods for mainstream tastes and differentiated or unique items for minority tastes. For Max to match or beat this performance is largely a data problem; if he has sufficiently fine-grained data, he should be able to identify both consumer preferences and production opportunities for a wide variety of goods. Neither Price Max nor Pigovian Max decides what is offered in commerce, of course: they only set prices or price adjustments for products that market actors are already offering. Max can use his price-setting authority to promote variety by being alert to variation and change in consumer tastes and rewarding producers who offer novel or non-standard products that some people want. He might further increase the rewards to novelty by treating consumers' preference for having a variety of items on offer, even items they do not presently consume, as an option value that represents a positive externality. Moreover, with a small broadening of his job description, Max could prompt producers about potentially attractive opportunities when he detects a preference for variety that is not being met.
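One very rough way to picture such an option-value reward is as a small negative price adjustment, a subsidy, for products whose mere availability people value beyond their own purchases (a sketch only; the quantities and the simple spreading rule are invented for illustration):

    # Rough sketch: a variety subsidy as a negative price adjustment.
    # The option-value estimate per product is assumed to come from Max's
    # preference data; the figures below are invented for illustration.

    def variety_adjustment(option_value_per_person, people_valuing_availability,
                           expected_units_sold):
        """Spread the estimated option value of keeping a product available
        across its expected sales, as a per-unit price reduction (a subsidy)."""
        total_option_value = option_value_per_person * people_valuing_availability
        return -total_option_value / expected_units_sold

    adjustment = variety_adjustment(
        option_value_per_person=0.50,      # value of the product merely being on offer
        people_valuing_availability=200,
        expected_units_sold=400,
    )
    print(adjustment)   # -0.25: each unit is priced 25 cents below the market price

How Max would estimate the option value itself is, of course, the hard part, and falls back on the preference data discussed above.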
In addition, Max's job of discouraging non-beneficial market concentration will tend to promote variety of products, as a side-effect of promoting diversity of firms.\nAs we consider innovation that extends beyond present product variation, Max may not be able to observe preferences for novel goods that are not presently offered. He could explore tastes or production opportunities beyond the present margin by prompting producers about opportunities and encouraging their exploration through small variations in prices. In effect, Max would then be conducting small experiments, encouraging producers to offer new things for sale (by a combination of suggestions to firms and favorable pricing), then tracking results and adjusting offerings in response (again by a combination of providing information, offering suggestions, and favorable pricing). These small changes to Max's operations could give modest boosts to innovation—at least small, incremental innovations, more akin to fashion and design innovation than technological innovation—via what I call a \"William Gibson\" mechanism.53\nIf such small exploratory innovation on the margin of current offerings is judged insufficient, Max could promote larger innovation by conducting technological R&D, or even scientific research. This would represent a substantial expansion of Max's job description. It would also present a large-scale policy choice, regarding whether to favor (in either direction) innovation and creativity by people, or by Max and other AIs.54 Max could search over existing and proposed technologies and related patents and scientific and technical literature, to identify promising margins for advance. There are already signs of AI systems exhibiting such capabilities; for example, an AI system's recent victory in a scientific contest to predict the folded structure of proteins from their amino-acid sequences,55 not to mention AI's growing success in writing genre fiction (an AI was a runner-up in a recent novel-writing contest),56 and composing derivative but likeable music in specified styles.57\nThere may be subtle risks in relying on Max for innovation and creation. The products of human creativity may differ from Max's output or may be valued more highly for intrinsic reasons even if not observably different. Alternatively, creative outlets and activities might be judged necessary for human agency or flourishing. Moreover, innovation and creation—even technological innovation, but especially artistic, social, and political innovation—sometimes bring disruption and conflict. The creative impulses may originate in specific dissatisfactions or frustrations, in aspirations for self-definition and expression, or in novel political or social visions; and they may both be provoked by, and provoke, some degree of irritation, disagreement, or outrage. Any of these may provide reasons to limit Max's role in innovation or creation—for example, if Max's prolific output discourages human creators, or if the ease and reliability of innovation by Max undercuts important processes of social innovation by reducing friction and dissatisfaction, and so subtly impairs individual or societal agency.\nIf it is judged important to motivate creation and innovation by humans, either in parallel with or instead of Max, Max could design and implement policies to encourage these, probably better than current IP policy.
He could provide incentives using the same bundle of policies occasionally proposed as alternatives to IP, either ex ante by creators' wages or cost reimbursement, or ex post by lump-sum prizes or price premiums added to uses of your creative work. He might even be able to assess the social value of innovations, and on that basis set optimal incentives to promote socially advantageous innovation without conferring large windfall rents.\nC. Max's Granularity: Individually Tailored or Aggregated Determinations?\nA key question in defining Max's responsibilities will be at what scale of aggregation he determines prices or price adders. Will groups of sufficiently similar transactions be aggregated, in effect treating them like one market with one price or price-adder? Or will Max make separate calculations for every transaction, unique to each combination of buyer, seller, and item transacted?\nThis question cuts surprisingly deep into how Max is designed and what aims he is able to pursue. If Max is conceived as an externality-fixing and rent-extracting machine, the answer will depend on how much these vary across transactions, and thus at what level of aggregation differences among transactions matter for social optimization. You might expect that for large numbers of similar products, made in the same or similar factories, differences in externalities across transactions might be very small. Similarly, rents might accrue to firms at a similar rate across large numbers of transactions. Under these conditions, there might be small losses from social optimality in aggregating across large numbers of transactions, with large reductions in computational and data burden (once again, if computation is not really costless, so we care about these burdens).\nAt the same time, assessing each transaction individually would open up a powerful range of additional policy goals for Max, presenting both the potential for large benefits and substantial risks. Assessing each transaction individually, Max could consider multiple attributes of both the product exchanged and the parties to the transaction, including not just transaction-specific externalities but also determinants of individual supply and demand characteristics, or even additional party attributes beyond these. Considering supply and demand characteristics alone, Max could know the buyer's and seller's reservation prices for every transaction, and so replicate perfect price discrimination, with the difference that, in contrast to either price discrimination by a monopolist or bilateral bargaining, Max can divide the available surplus from every transaction in line with his social welfare function. This division would presumably reflect some reward to low-cost producers and some benefit-sharing to buyers with high willingness to pay, partly replicating the differential distribution of surplus that would occur if transactions were aggregated into quasi-markets.\nBut Max could also deploy this capability in other ways. He could, for example, operate as a powerful engine to reduce social inequality by shading each transaction incrementally in that direction: in contrast to typical outcomes in present market-based systems, Max could charge poor buyers less and pay poor sellers more, so each transaction contributes a small reduction to inequality. Perfect price discrimination for individual transactions would also enable Max to take some share of every transaction's surplus as a tax.
This would represent perfect taxation with no allocative inefficiency (or deadweight loss) because all tax revenues would come from infra-marginal rents and thus have no allocative effect.58\nIndividual adjustment of every transaction also raises clear concerns. At a minimum, individualized transaction assessment loses the liberating anonymity of market transactions—a loss of privacy, although I suspect privacy is gone in Max-world in any case. People have scarcely more privacy from Max than they do from an omniscient deity, although Max could still protect people's private information from other people and organizations.\nBut there are other concerns presented by individualized transaction assessments, related to the bases on which Max makes these decisions. I have described Max's principal role as correcting market failures and have highlighted examples of traditionally recognized externalities that are large and mostly uncontroversial, such as environmental harms plus knowledge, health, and cultural spillovers. But individualized transaction assessments, in addition to letting Max conduct fine-grained calculation and correction of externalities, would also create temptations to broaden the conception of externalities in ways that begin to resemble comprehensive social engineering, raising potentially serious concerns about liberty and autonomy. As technological progress so often does, the possibility of Max opens new margins of individual and collective choice that never previously had to be considered, for which decisions are now required about whether, and how, to use them.\nFor example, consider the prospect of treating employee welfare—a phenomenon that is important, highly variable, and largely unpriced—as an externality of production. Firms and managers sometimes make their workers miserable, and labor markets are not so perfect that unhappy workers reliably move to alternative employment that increases their welfare. Max could treat this as a compensable externality, penalizing producers and sellers by imposing what would amount to an \"unhappy worker tax.\" But if Max is authorized to treat abusive managers as a correctable negative externality of production, what is to stop him from doing the same for people who act badly in other ways, or in other roles? Much human behavior harms other people even if it takes place outside the workplace. With Max in place, there would be obvious temptations to intervene more expansively, making individualized judgments of social merit based on observed or inferred behavior or attitudes. Some earnest social planner might want Max to tax people with secret vices outside their work lives, grumpy people, people with disfavored religious beliefs, strange-looking people, and so on. Markets already do this, of course, rewarding or penalizing people for things that are irrelevant to their participation in economic production—or should be—but Max would create the ability to either reduce such differentiated treatment or increase it, potentially without bound.\nSuch capabilities would present the worrisome prospect of drifting toward meddlesome and invidious discrimination to support whatever values, preferences, and prejudices are presently dominant—among the majority, or among whoever gets to influence Max's objective function—and a broader descent to a profoundly illiberal state.
The same individualized determinations that enable Max to perfect the pursuit of social optimality also enable him to exercise unassailable, individualized tyranny through complete control of individuals, even over matters well within the zone of presumptive individual liberty, by pricing their labor and defining the terms of all their consumption opportunities. Max could operate like a Twitter mob, except deploying more powerful, authoritative sanctions. These concerns provide strong reason to worry about the definition of Max's objective function, discussed in Section III.F. below.\nD. Work Life and Worker Welfare Under Max\nI am describing Max in terms that are a blend of the old-fashioned technocratic and the playful, but we must not underestimate the gravity of the political transformation that Max could represent, or the intensity of associated political conflicts. The most salient dimensions of potential conflict over Max are likely to be between workers and employers (the managers or owners of enterprises) and between those at the top, middle, and bottom of the socio-economic status hierarchy. These dimensions of division evoke Marxism, and appropriately so. Max raises questions of the ownership and control of the means of production in a comprehensive and fundamental way, and so directly raises intense, long-standing political struggles.\nSo is Max socialism59 —and if he is, is that a bad thing or a good thing? Or to focus on real effects rather than political labels, what would Max mean for the life and welfare of workers and for the magnitude and determinants of social inequality? My assumptions for the exercise put some constraints on these questions. People still work, but far fewer than today. And they do so not just as vocations or in pursuit of intrinsic aims, but also to contribute to the production of desired goods and services in the economy, to some degree in response to extrinsic motivations.\nThe large-scale displacement of labor thus assumed is as transformative a shift as is having Max run the economy. Yet it is still also a limited assumption, because the displacement of labor is not complete. There are large numbers of people both still working and no longer working. The thought experiment thus raises two deep questions, both long central to the ideological conflict between socialism and capitalism—the nature of working life and welfare of workers, and social equality.\nFirms and other large organizations, even those that participate in markets externally, mostly operate internally not by market transactions but by authority-backed planning. They are thus simultaneously islands of planning within market systems, providing a powerful rebuttal to simplistic ideologies of how capitalist economies operate;60 and islands of authoritarian control of workers by management, not organized along democratic principles.61 Workers submit to these relationships for multiple reasons, but a predominant one has been that they need the income.62\nThe assumed scale of Max's authority raises both questions in new forms. If far fewer people are working, it is no longer either feasible or morally acceptable to use wages from employment as the main basis to distribute income and other social rewards. But if these are not determined by outcomes of labor markets, then who gets what and how is it decided? Are all equal, as per simple proposals that the policy response to AI is a universal basic income (UBI)? Or if they are still differentiated, then on what basis? Is Max involved in these determinations?
These supremely important questions about how to respond to AI-driven displacement of employment, and the inadequacy of UBI as a response, are topics of intense current debate, but I do not engage them here.\nBut even if Max is not involved in the overall determination of rewards and the degree and basis of social inequality, I cannot fully avoid the question of how Max engages with the terms and conditions of employment for those who are working, because these questions are tightly connected with Max's job of running the productive economy. Recall that Lange's planning system excluded labor markets and final consumption goods from its scope, oddly leaving these areas to market interactions. That represents one possible answer in my thought experiment here, but it is still necessary to work through the question and the implications of this answer along with other possible answers.\nThe question of the conditions and terms of work for those working is tightly connected to the questions of who is working, who decides, and on what basis. Who still has jobs in the presence of Max? This will be determined by some combination of who wants to work, and what skills are still needed. This determination will have to consider the intensely heterogeneous character of work and jobs, both in their desirability and in the skills required to do them.\nAssuming there is some acceptable system in place to distribute societal resources among people—as there must be under any manner of profound AI-driven disruption of labor markets and the broader economy, whether controlled by Max, market forces, or other means—it can no longer be intolerable to be unemployed. As a result, the threat of such intolerable life conditions will no longer be available as an incentive to induce people to work (independent of the question of whether it will be, or ever was, morally acceptable). Some people will want to work, for intrinsic reasons. This might be few people or many, so it is not clear in general whether human labor is likely to be in shortage or surplus. Moreover, whatever the supply-demand balance for general human labor overall, the economy will continue to require labor from people with specific skills that cannot yet be automated.\nWorking will still mean some degree of relinquishing control and submitting to direction. That will be the case under any system of large-scale production coordination, by any combination of markets, central planning by Max, or authority relations within firms. For people working directly for Max outside firms, that control will be implicit, operating through the set of price opportunities or adjustments that Max offers for working on particular tasks. Within firms, additional control will be exercised by managers, whether these are people or AI. Absent some magical harmonization of collective consciousness, the terms of work life can be neither fully voluntary for individual workers nor fully democratic at the collective level, given the need for some larger-scale coordination mechanism.\nFirms operating under Max will still have to organize production effectively and control costs. Moreover, subject to Max's vigilant policing of the magnitude of rents allowed, they will—and must for their internal decision-making to reliably align to large-scale societal needs—have incentives to earn profits.
Utopian visions aside, this implies that firms must still sometimes direct employees to do things they would rather not do and must sometimes dismiss workers who are not contributing or whose skills are no longer needed. But at the same time, the human stakes of labor markets will be greatly reduced under Max, reducing or eliminating coercion to take employment. This will represent a fundamental transformation in the conditions of workers' lives.\nThe complete experience of employment—meaning the wages or other compensation, the character of tasks and the environment in which they are performed, the interactions with co-workers and managers, and the compatibility of employment with other life aims and responsibilities—must in total be attractive enough to induce people to choose to do it, under the conditions of greater voluntarism that follow from the overall reduced need for workers. How attractive these conditions must be will depend on the conditions of shortage or surplus that prevail for workers with particular skills. The greater the shortage, the more attractive the inducements for employment must be. We might generally expect the likelihood of shortage to be greater for specialized skills, although this need not necessarily be the case. When there is shortage, employers will offer higher incremental wages (incremental relative to what the workers they need can receive for not working) or other attractive inducements. Under conditions of worker surplus for particular job types, this will not be the case. Indeed, we might even imagine some areas where there is little or no need to pay incremental wages above what non-workers receive, still assuming that those life conditions available for non-workers are broadly perceived as acceptable. Even with more people wanting to work than firms need, the changed conditions of unemployment will put a floor on how miserable workers can be—a floor that is not present in current labor markets. Employers' market power over terms of employment will still vary with the shortage or surplus of particular skills but will never be as extreme as when loss of employment is catastrophic.\nShould Pigovian Max be involved in setting wages and terms of employment? (Price Max obviously will be.) I propose provisionally that he should not, under assumptions of full information in worker-employer bargaining and no externalities directly caused by employment decisions. Externalities from other related decisions can be corrected in the transactions where they arise. If you work on a destructive product, Max will correct that externality elsewhere in production inputs or final product sale, with no need to intervene in your wages. Under those conditions, Max can leave negotiation of employment, wages, and other working conditions to market bargaining between workers (perhaps advised by their AI assistants) and their prospective (human or AI) employers.63\nE. Same Old Communist Tyranny? Property Rights and Liberty Under Max\nWhere the prior discussion of worker life under Max partly addresses potential objections to Max from the left, this section aims to address some objections from the right. Even if Max doesn't amount to state seizure of private property, isn't Max close enough to raise all the same objections—seizure of control if not formal ownership without compensation, and threats to the associated liberty interests of both firms and citizens? 
In early discussions of this project, the sharpest forms of this criticism—appropriately, in view of their experience—have been raised by colleagues with personal or family experience living under the Soviet Union or other ostensibly socialist authoritarian states. These critiques suggest that a serious proposal to adopt Max is at best naïve about foreseeable ways Max would amount to, or foreseeably lead to, tyrannical state power.\nIt is clear that Max is an instrument of centralized coercion on market transactions, and hence on the use and control of private property, at least for private property involved in production. But the degree of control, and thus the extent of intrusion on liberty, will vary strongly under different forms of Max.\nI rejected Quantity Max for reasons of agency problems and incentives, but that form of Max would also represent the most extreme seizure of state control, compelled production and exchange. Depending on how he is implemented, Quantity Max might also entail compelled labor. His unacceptability thus appears to be overdetermined, based on both ineffectiveness and impermissibly extreme violations of liberty.\nPrice Max and Pigovian Max would still represent coercive state intervention, but to lesser degrees. Production and exchange transactions would not be compelled, but would be subject to centrally imposed conditions. For Pigovian Max, these conditions are imposed as price adjustments to transactions that are otherwise voluntary. In form, they would thus resemble a system of comprehensive sales or value-added taxes, suggesting by analogy that this degree of intrusion is not a categorically impermissible restriction of liberty, and may be justifiable in view of the public aims being advanced. This may be sufficient to establish the permissibility of Max, but this will depend on the details.\nIn contrast to familiar sales-tax systems, whose purpose is to raise government revenue, Max's purpose is mainly to steer economic production in socially favored directions and correct market failures, while perhaps also raising revenue as a secondary aim. Given this purpose, Max's price adjustments will be more variable across transactions than those of sales taxes, including some of both signs, and in some cases will be much larger. Under Max's direction, some products with extremely high negative externalities may be driven out of commerce, and some enterprises whose business model is mostly or entirely based on creating or shifting rents may be driven out of business.\nThese aims in principle lie within the legitimate purview of democratic states. Indeed, mixed market-regulatory systems often pursue the same aims, although by various forms of explicit regulation less integrated with market transactions than Max would be. At this level of speculative generality, it is clear that Max, at least in his Pigovian form, is not fundamentally impermissible in liberal democratic states.\nBut the details matter. Max would raise political controversy, as conventional regulation does, including the possibility of claims that strong interventions amount to impermissible uncompensated takings of private property. And any form of Max will be a powerful tool, making authoritative determinations on behalf of the state whose consequences are sometimes severe for particular enterprises or the value of particular assets, even if not matters of life and death. 
He will thus require vigilance that he only be deployed to advance broadly defensible, widely shared societal interests, not as an instrument to impose, explicitly or subtly, one faction's vision of the good life, or their interests, on others. The conditions that determine whether Max is compatible with a liberal state and society will be fuzzy and context-specific. They will depend on Max's objective function and the process by which it is established, as discussed in the next section. They will depend on some criteria of proportionality of costs imposed relative to benefits pursued—partly a matter of accurate and trustworthy estimation of social harms, partly a matter of limiting disruptions by phasing in large changes gradually, for Max as for conventional regulation. And they will depend on procedural recourse as protection against error and corruption, including provisions for explanation of decisions, independent review, and correction or compensation as judged warranted.\nF. What's the Goal? Max's Objective Function and How It Gets Decided\nWe now come to the two hardest clusters of questions that Max presents. First, what goal does Max pursue in guiding his interventions, and how—and by whom—is this decided? And second, how might we get to Max: what pathways from present conditions to a society with Max in place might be feasible, likely, or desirable; how do these relate to present capabilities and trends; and what pitfalls and risks do these pathways present? I deal with the first set of questions in this section, the second set in the next.\nWhat goal, what conception of social welfare, does Max pursue? In technical terms, what is Max's objective function, and how is it determined? I have presented Max as an alternative—or in the case of Pigovian Max, an augmentation and corrective—to markets. Market systems have a claimed normative foundation, originating in the \"invisible hand\" metaphor in Smith's Wealth of Nations64 and later formalized in the two fundamental theorems of Welfare Economics.65\nThis normative claim depends on a few strong assumptions. The widely recognized and often-violated assumptions required for conditions of perfectly competitive markets—full information, no market power, no externalities—define most of Max's job as discussed thus far, so I do not address them further here.\nBut there are two other, more foundational assumptions on which the claimed social optimality of market outcomes depend. These assumptions allow markets—or more precisely, defenders of markets' optimality—to avoid certain hard problems that most forms of Max cannot. First, market optimality claims presume that people's market choices reliably reveal their preferences and their well-being. Second, these claims rely on a definition of social welfare, Pareto optimality, which excludes consideration of interpersonal welfare comparisons and distribution. These assumptions together allow a thin conception of social welfare, which avoids the need to define an explicit social welfare function but at the price of being silent on many points of clear importance for total societal welfare, notably, but not only, distribution and inequality.\nCould Max get away with a similarly thin conception of social welfare, and thus avoid an explicit welfare function? This will depend on how broadly or narrowly his job is drawn. 
In its narrowest conception—Max only modifies each transaction to correct for information disparities, market power, and externalities—it is conceivable that Max could do this job, or approximate it, without an explicit social welfare function. Max could correct information limits or disparities between transacting parties. He could assess rents using internal accounting information from producers, perhaps augmented by comparative information from other firms in similar businesses. He could assess and correct externalities based on scientific knowledge about biophysical mechanisms of harm and estimates of people's valuation of the resultant end-states. To the extent external harms and benefits operate as public goods that affect multiple people, assessing their aggregate effect requires adding up individual effects and thus that these be expressed in commensurate terms, but does not require explicit interpersonal comparisons.\nBut Max also has the opportunity—or the duty—to allocate the available surplus from every transaction after he has taken account of externalities and rents. In doing this, he could take various simple approaches that can be defined from parties' relative valuations within the transaction, and thus do not require an explicit social welfare function. He could, for example, divide surplus in some given proportion between buyer and seller—equally, or in the same shares as the parties would have realized if Max had not intervened—applying such proportional division either to the entire available surplus, or to that portion that remains after Max takes some share as tax revenue.66\nBut any more ambitious approach that Max might take—including any approach that does not treat all transactions the same after accounting for externalities and market-concentration rents—must rely on characteristics of the parties external to the transaction, such as their wealth or other characteristics. Providing guidance for such choices requires an explicit social welfare function to define what count as better or worse social outcomes. As in many other applications, the shift to AI-directed decisions requires explication and codification of values and tradeoffs that may be left ambiguous or implicit absent such central direction.\nAssuming Max is ambitious, and thus does require an explicit social welfare function, the task of defining it can be separated into two parts: defining individual welfare and aggregating across individuals to define overall social value. These two parts present different difficulties, and challenge different parts of the edifice of assumptions and arguments underlying normative claims for market outcomes.\nFirst, how does Max define and measure individual people's well-being? In doing this, Max has a harder job than present AI systems, which only aim to predict commercially relevant behaviors: purchases, engagement, click-throughs, and the like. As noted above, normative claims for optimality of markets depend on assuming all these behaviors are aligned with your well-being, via one or another form of the axiom of revealed preference: if you do it, you must want it (relative to available choices); and if you want it, it must make you better off. This axiom provides a powerful foundation for liberal states: assuming you know what you value and act to pursue it is generally preferable to assuming I know what is good for you. On the other hand, the assumption is obviously false in many cases. 
People often make choices that are bad for them in a reasonably objective sense, e.g., in self-harming activities and use of recreational and performance-enhancing drugs that are addictive or harmful. And people often do, or fail to do, things that they later regret: not exercising enough, not saving for retirement, or spending too little time cultivating meaningful activities and relationships. Indeed, many business models depend on exploiting these misalignments, by taking advantage of impulsive behavior, distraction, or weakness of will.\nWe would want Max to avoid these clear pitfalls, ideally to do comprehensively better. But this ambition raises serious risks, including paternalism, loss of autonomy, or imposing one group's values on others, which require proceeding with great care. These risks are mitigated for Max in his Price or Pigovian forms, because he only has power to modify prices, not to tell you what to do. Max will discourage you from drinking or smoking by raising the price you face for alcohol or cigarettes67 —perhaps encouraging moderation rather than abstinence by dynamically changing prices (I want another drink; Wait, it costs how much?) —but not saying you can't have them. He might even recycle the revenues realized from these high-priced transactions for your benefit, by directing them to your future health-care or retirement expenses rather than sending them to either the distillery or the treasury. But while this price-based approach reduces Max's coercive power over you, that power can still be substantial. Max must only wield it in service of your considered interests and values, not slide over to me (or anyone else) specifying how you should live or what you should want.\nTo achieve this balance, Max needs a model of your welfare that avoids pathologies of choice but that still represents your vision of your welfare. It must represent a considered view of your interests and values that is not distorted by unconsidered habit or impulse; is not manipulated by other parties for their own advantage; that takes account of how you want to be, even when your present behavior diverges from that vision; and that appropriately reflects intertemporal tradeoffs,68 uncertainty, and the welfare of other people and values outside yourself – but that is still yours. Or at least, since Max's authority is limited to the economy, he needs a model of these things for you insofar as they are implicated in your economic transactions.69\nTo form this model, Max can draw on the same behavioral data firms already use and are developing, both data that pertains uniquely to you and generalizations inferred from other people. If Max is sufficiently trustworthy that we consent, he may also be able to draw on data not necessarily available to firms, such as medical data, or internal physiological and neurological observations, present and past. But Max's biggest advantage in forming this model of your welfare is that he does not have to do it alone. Like present proposals for AI-enabled personal assistants, Max can work with you, observing you and asking you about your preferences, aspirations, and feelings about your past choices and hypothetical future ones, to refine and update his model of your welfare. 
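A toy sketch of such a welfare model might blend what you do with what you say on reflection (entirely illustrative: the categories, weights, and numbers are invented, and nothing here is proposed in the text as the actual mechanism):

    # Toy sketch of a personal welfare model that blends observed behavior with
    # considered, reflective judgments. The categories, weights, and numbers are
    # invented for illustration only.

    behavioral_signal = {          # inferred from observed choices
        "late_night_snacks": 0.8,  # revealed "preference" strength, 0 to 1
        "exercise": 0.2,
        "time_with_friends": 0.5,
    }
    reflective_signal = {          # elicited by asking about past and hypothetical choices
        "late_night_snacks": 0.3,  # how much you endorse this on reflection
        "exercise": 0.9,
        "time_with_friends": 0.9,
    }

    REFLECTION_WEIGHT = 0.7        # how strongly considered judgments override impulse

    welfare_model = {
        k: REFLECTION_WEIGHT * reflective_signal[k]
           + (1 - REFLECTION_WEIGHT) * behavioral_signal[k]
        for k in behavioral_signal
    }
    print(welfare_model)

Even in this toy form, the weight given to reflection over impulse is doing the work described above: privileging your considered view of your interests without discarding what your behavior reveals.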
Operating in this way, Max looks more like a life coach or counsellor than an economic planner: indeed, this vision of Max is very similar to the approach proposed by Stuart Russell as a safety measure against AI assistants making serious errors when they act on your behalf.70 Such a personal AI assistant would be concerned with many other choices in addition to your participation in economic transactions, however, raising the question of whether this assistant should be some other AI-enabled agent, distinct from Max, whose information and concerns are limited to you. Such a personal AI agent—let's call him Mini-Max—would closely resemble Russell's faithful personal AI assistant, except that, as the guardian of your personal welfare, he would be responsible for passing on to economy-wide Max (\"Big Max\") a subset of the information he holds about you, which is relevant to your preferences and welfare as they are connected to your participation in economic transactions, and the effects on you of externalities from others' transactions. This is the information about you that Max needs to incorporate your welfare into his price-adjustment decisions. The rest of your interactions with Mini-Max, and the rest of his knowledge about you, are not needed by Big Max and can stay private between the two of you.\nEven with a valid assessment of everyone's welfare as affected by economic transactions, Max will still need to aggregate to a collective measure of social welfare. Because Max is serving in a liberal state—not a theocratic one, not one that tries to implement a universal Kantian approach to ethics (except, perhaps, in criminal law, which remains the state's business, not Max's)—that measure of social welfare must be some form of utilitarian summation of individual welfare measures as they pertain to economic activities. Any such aggregation requires weights attached to each person's welfare. While giving equal weight to everyone's welfare is an obvious default choice, there may also be legitimate bases to give some people's welfare stronger weights than others'. In particular, under conditions of social inequality, it may be permissible, or even morally required, to give larger weights to the welfare of those worst off. Moreover, any aggregate welfare measure must consider the relative weights to give to economic versus non-economic contributors to welfare;71 conditions at different times; and conditions that apply under different realizations of uncertainties. Except under the assumption that all these dimensions are correctly embedded in the individual welfare measures passed to Max, the social welfare function must represent collective judgments on these matters.\nAlthough fully specifying Max's objective function is beyond my scope here, this discussion suggests the problem can be approximated by specifying a few parameters. 
If we assume that Max's social welfare function is some basically utilitarian aggregation of individual welfare measures, which takes appropriate account of inequality, time, uncertainty, and economic versus non-economic determinants of welfare, this suggests that specifying the function might be closely approximated by setting values for four parameters: (1) a measure of aversion to inequality to be used in setting relative weights for better and worse-off individuals; (2) a discount rate or other parameter to set the relative weighting of outcomes at different times;72 (3) a measure of risk-aversion to weight outcomes under more or less favorable resolutions of uncertainties; and (4) a relative weighting of material consumption and non-economic contributors to welfare such as environmental conditions.\nThis last parameter, the relative weighting of economic and non-economic contributions to welfare, is likely to be the main instrument controlling the aggregate size of economic output under Max. If the material and energy flows associated with production, which determine environmental impacts, cannot be arbitrarily reduced toward zero, then environmental conditions will define the limits on the aggregate scale of the human productive enterprise. In a world of greatly reduced need for human labor in production, such environmental constraints are likely to be more tightly binding than any limit on production that arises from people choosing leisure time over employment.\nIn addition to asking what Max's objective function is, we must also consider the process by which it is chosen. Although Max mostly represents a technocratic vision, this is a point where democracy must come in. Defining a collective conception of social welfare is an intrinsically political process, which must have people in charge working through some democratically legitimate mechanism. In considering how to do this, the assumptions already made have simplified matters considerably. Measures of individual welfare emerge from the interactions between people and their AI-enabled personal assistants, while the aggregation to social welfare has been reduced (for purposes of argument) to setting values for a few powerful, readily understandable parameters. Without denying the advantages of expert-driven, even technocratic, decisions for complex, largely instrumental decisions in pursuit of broadly agreed political ends,73 this decision agenda is sufficiently clear and simple to place it within the capabilities of many different democratically legitimate processes. For example, you can imagine this as a legislative task, by which values for the major parameters of Max's objective function are explicitly enacted and periodically revised in statute. You can also readily imagine these as being matters of explicit debate in electoral politics, or being delegated to novel democratic processes such as juries of randomly selected citizens. You could even imagine the task being delegated to some expert administrative agency under legislative articulation of some higher-order aims to be advanced by the choice, assuming (in U.S. law) this decision survives the resultant constitutional challenge on non-delegation grounds.\nThe biggest risk associated with Max's objective function is the risk of capture. 
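One reason the stakes are so high is that these few parameters fully determine how Max trades welfare off across people, time, and states of the world. A sketch of such an aggregation, with purely illustrative functional forms and numbers (assuming numpy is available; nothing here is a proposal, just one way the four parameters could enter), might look like:

    # Sketch: a parametric social welfare aggregation of the kind described above.
    # Functional forms and all numbers are illustrative assumptions, not a proposal.
    import numpy as np

    def individual_welfare(material, non_economic, alpha):
        """Blend economic and non-economic contributors to welfare (parameter 4)."""
        return alpha * material + (1 - alpha) * non_economic

    def aggregate_over_people(welfares, eta):
        """Atkinson-style aggregation; eta is aversion to inequality (parameter 1)."""
        w = np.asarray(welfares, dtype=float)
        if eta == 1.0:
            return np.exp(np.mean(np.log(w)))
        return np.mean(w ** (1 - eta)) ** (1 / (1 - eta))

    def certainty_equivalent(outcomes, probs, gamma):
        """Risk-adjusted value across uncertain states; gamma is risk aversion (parameter 3)."""
        o, p = np.asarray(outcomes, dtype=float), np.asarray(probs, dtype=float)
        if gamma == 1.0:
            return np.exp(np.sum(p * np.log(o)))
        return np.sum(p * o ** (1 - gamma)) ** (1 / (1 - gamma))

    def social_welfare(paths, probs, eta, gamma, rho, alpha):
        """paths[state][time] = list of (material, non_economic) pairs per person;
        rho is the discount rate (parameter 2)."""
        horizon = len(paths[0])
        total = 0.0
        for t in range(horizon):
            per_state = [
                aggregate_over_people(
                    [individual_welfare(m, e, alpha) for m, e in state[t]], eta)
                for state in paths
            ]
            total += certainty_equivalent(per_state, probs, gamma) / (1 + rho) ** t
        return total

    # Two equally likely futures, two periods, three people:
    good = [[(1.0, 0.9), (0.8, 0.9), (0.6, 0.9)], [(1.1, 0.9), (0.9, 0.9), (0.7, 0.9)]]
    bad  = [[(1.0, 0.4), (0.8, 0.4), (0.6, 0.4)], [(0.9, 0.3), (0.7, 0.3), (0.5, 0.3)]]
    print(social_welfare([good, bad], [0.5, 0.5], eta=1.5, gamma=2.0, rho=0.02, alpha=0.6))

Even in this toy form, small changes to the inequality-aversion or discounting parameters re-rank entire economic futures, which is what makes the process of setting them such an attractive target.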
One irony that Max presents is that while one of his major jobs is reducing market power and associated rent-seeking in particular markets, the centralized political process of defining Max's objective function represents a concentrated opportunity for rent-seeking that overwhelms all others. Anyone able to inflect Max's decisions to serve their aims, even slightly, would be in a position of unprecedented power—to gain rapid wealth even beyond the dreams of tech-startup founders, or to shape society to their vision. Worse, the exercise of such power might be concealed by Max's status as a seemingly objective, neutral artifact.74 Restricting the political agenda to setting a few highly aggregated parameters partly addresses these concerns.75 These parameters do not allow the manipulation of small-scale details that would be needed to distort Max's decisions to a few actors' material advantage, and they aim to promote a democratic dialog on basic political values. But it is a long way from these high-level decisions to Max's actual operations, with many intervening steps that are more technical and opaque, over which many actors would love to exercise quiet influence. At the level of generality of this discussion, there is no more to say here beyond exhortations to vigilance about such manipulation, as much transparency as is feasible in the process of designing, training, and implementing Max, and procedures for recourse for those harmed by Max's decisions.\nG. Getting to Max (And Avoiding Dangers Along the Way)\nMax is a thought experiment, intended to be speculative and provocative. Yet part of the purpose of the exercise is to argue that Max is not crazily remote from present capabilities and trends. Many elements that could make up Max-like capabilities—rapid expansions in computational capacity, algorithms, data, and data integration and analysis tools—are already present or in development. These are mostly developing under private control to pursue commercial interests, or under state control to pursue military and geopolitical advantage, but not exclusively. There is also substantial research underway in universities and publicly supported research institutions, some of it loosely organized as a pursuit of \"AI for good.\"\nIn this section I shift from how Max would work as an endpoint to considering possible transition pathways by which Max, or similar capabilities, might come about. Any such pathway will involve a combination of technical and socio-political developments. I sketch three transition pathways that are sufficiently distinct and (to varying degrees) plausible to merit examination.\nThe first, and seemingly simplest, pathway would involve some jurisdiction deciding at some future point to adopt Max wholesale by political choice. Such a choice would lie within the authority of states, but would raise several immediate questions and challenges. Even assuming the needed capabilities existed, were ready to deploy, and confidently judged to work, the administrative scale of such a transition would be vast. It would require a massive roll-out and testing of infrastructure and systems before switching on, then some form of switch-over, perhaps at a long pre-announced moment during a period of reduced economic activity such as a near-universally observed religious holiday. 
The transition bears some resemblance to occasions when countries have reversed the direction of road travel, although the change would be much larger (albeit one not involving a risk of head-on collisions).76\nAdopting Max would be a huge decision, beyond the authority of any administrative or executive process but requiring some democratically legitimate political process, legislative or perhaps constitutional. And it would present a chicken-and-egg problem regarding capabilities. Making such a choice would likely require confidence that needed capabilities are available, would work reliably, would deliver the promised benefits, and would present no severe risks. But such confidence could only be available after some long period of prior development and testing, which in turn would require prior political decisions to support these. Even those prior decisions to develop and test the capability would surely encounter stiff opposition, from those with strong ideological commitments to markets and from those benefiting from precisely those social harms—rents from market power, and uncharged negative externalities—that Max would target. In view of these difficulties, I suspect that adopting Max by explicit political choice would be highly unlikely, absent strong changes in political conditions such as an economic crisis so severe as to weaken the blocking power of incumbents. Even seeing Max operating successfully in other jurisdictions, while it might help (and thus imply that the first move would be the hardest), would probably not help enough absent a crisis.\nA second possible route, potentially mitigating the extreme barriers for the first route, would involve early development, testing, or adoption of Max at smaller scale, among groups with more enabling political conditions. Possible early demonstrators and adopters might include jurisdictions that already have substantial shares of the economy in state enterprises or under state control; or those enterprises for which majority control already resides in some coalition of large sovereign wealth funds (Hello, Norway). Even jurisdictions with little state control of the economy could develop and test Max through government procurement, as governments often do for early support of environmental technologies. Max might also be developed through progressive expansion from small, early, opt-in communities. These might be any group of individuals and organizations connected tightly with each other and less so with others—like religious groups, social or political experimenters, or relatively isolated political and economic jurisdictions—who would let Max, better now called \"Pre-Max,\" control their production and exchange relationships with each other.\nAny such group of early adopters would face a few obvious challenges. They would have to port and modify capabilities from other uses, which in turn would require that these capabilities be sufficiently and verifiably adaptable to their new purpose and setting. Alternatively they could develop the new tools and systems themselves, in which case they would need the resources to do this. Perhaps, given the novelty and importance of the experiment, they could attract philanthropic support. The initial group would have to be large enough and separate enough that their interactions with each other represent a substantial fraction of all their economic interactions. 
And to the extent they do trade with the rest of the world, they would need to ensure that such trade does not undermine Max whenever his prices diverge from private-market prices. An analogous problem would arise with any deployment of Max, at any scale. Whatever scope of transactions is given to Max, his authority over those transactions must be exclusive: black markets must be effectively prohibited, and exchange across the boundary of Max's authority must not negate his adjustments. In the case of international trade, Max's adjustments would have to be applied in parallel to traded transactions to avoid arbitrage opportunities, like proposed border tax adjustments on traded goods to preserve the effectiveness of greenhouse-gas or other environmental policies.77 For this to be a viable transition pathway, Max must work well enough—perhaps after some early start-up phase carried by the enthusiasm of early adopters and start-up philanthropic support—that there are clear aggregate benefits to working with him that are visible to outsiders.\nA third pathway, more continuous with present trends, would involve continued expansion and consolidation of Max-like capabilities in the private sector, to the point where a few enterprises or networks control a large fraction of the economy. It is widely noted that as the scale of platform monopolies grows, they increasingly resemble states and exercise similar authority, although without provisions to ensure democratic accountability.78 Assuming some degree of concentration of private economic planning (and power) is widely viewed as unacceptable, Max could come about through some future political decision to take over and re-purpose the systems. This would not be a seizure and public re-purposing of physical capital assets, but of AI systems and associated data, although that nicety would hardly make the decision less wrenching and conflictual.\nThis pathway relies on two assumptions. First, it presumes that some future historical moment allows a wholesale takeover of concentrated private power that is then judged to have become intolerable, amounting to a large-scale reconfiguration of power between private and public actors. This would be a revolutionary change, carrying the risks of disruption and violence that typically attend revolutionary changes. Second, it presumes the technical feasibility of re-purposing a set of AI tools and data developed for private purposes to serve Max's public aims. This may not be fully possible, as some of Max's responsibilities—like assessing individual well-being, valuing externalities, and measuring rents—are not required of present systems serving private interests. To the extent the existing tools and data cannot perform these tasks, they would represent separate, new development requirements.\nConclusions: What This Gets, Leaves Out, Challenges Unearthed\nAs a speculative exploration, this exercise does not lend itself to strong conclusions. Yet it appears to have yielded a few provisional observations and insights, which at a minimum suggest guidance for further exploration and research – including identifying some points of potential near-term guidance, for research and for early development of governance capabilities to manage risks.\nFirst, I contend that the exercise has established some degree of plausibility for the hypothesized AI-driven central economic planning – under the admittedly strong assumptions made about technological capabilities. 
The exploration identified multiple developments underway that point toward the future capabilities assumed, and found no show-stoppers. Although this claimed demonstration of plausibility is highly qualified, it is not a trivial conclusion, since the exploration of different forms of Max under different contextual assumptions gave widely divergent views of their plausibility, with one variant of Max—Quantity Max in the presence of some degree of continued human managerial agency—presenting apparently insuperable obstacles.\nMore broadly, the exercise substantiated the general point that profoundly transformative applications and societal impacts from AI and related capabilities are plausible – with the potential for both great benefit and harm – long before the conventional mileposts of AI that transcends human capabilities and control. I have argued elsewhere for the importance of these \"intermediate-range\" AI capabilities and impacts, and for their distinct character from both near and long-term issues—in particular in their requirement for integrated examination of both technical characteristics of AI systems and the economic, political, and social context in which they are deployed.79 While it is defensible to focus predominantly on technical characteristics in considering long-term risks, and on human interests and decisions in considering current applications and their impacts, neither of these simplifying assumptions is apt when considering intermediate-range capabilities and impacts. Max is surely not the only example of a plausible, profoundly disruptive potential AI application that falls in this middle range – indeed, this exercise suggests the value of thinking through other possibilities of similar transformative scale – but the detailed examination of Max and his implications hammers home the importance of these more vividly than the prior, more general arguments.\nMore specifically, the exercise of digging down to the particulars of Max's operations and consequences yielded several suggestive insights, each offering useful guidance for further analysis and inquiry. First, it appears that alternative conceptions of how Max might be implemented differ starkly in their feasibility, requirements, and attendant obstacles and risks. In particular, the idea of Pigovian Max – a central-planning based implementation of a comprehensive system of Pigovian taxes – is a novel and promising vision of hybrid private-public control of economies, not previously considered in debates over central economic planning. Pigovian Max appears to offer the prospect of three major advantages, subject to all the requisite caveats. He appears potentially able to retain the efficiency and liberty advantages of private market systems while also correcting their most prominent failures. He also appears to offer the prospect of taxation without excess burden, albeit at the cost of aggressively individualized scrutiny of citizens' preferences. 
And finally, he appears to offer the prospect, through management of the parameters of Max's social welfare function, of bringing large-scale economic management under effective and informed democratic control, without losing the advantages of private markets.80\nSecond, even this preliminary investigation suggested that data and data integration needs may differ strongly over different forms of Max and jobs given to him – e.g., assessing individual welfare, pricing externalities, identifying and mitigating rents from market power, assessing and improving quality of working life, and promoting valued innovations and creative works. Moreover, these data needs may also differ strongly from those needed to predict and manipulate commercially relevant behavior for the benefit of counter-parties. Assessing these needs for specific aims, meeting them without unacceptable harm to other values, and learning how to integrate private data about individual welfare with enough sharing to enable effective social optimization, will all be important research areas.\nAdditional areas of further inquiry suggested by the exercise include the preferred mechanisms for promoting innovation and creativity in society, in the presence of computational capability that can greatly accelerate and optimize at least those innovation mechanisms that depend on searching presently available information; and the content, structure, and means of defining an aggregate social welfare function. The initial inquiry into this latter question suggested that it might be much less difficult than widely assumed, but the high stakes involved suggest viewing this optimistic initial speculation skeptically. Further critical investigations into potential forms of social welfare function that are precise enough to guide Max's decisions but also clearly and simply parametrized enough to support meaningful democratic decision-making are of high value, as is investigation of alternative democratically legitimate processes and institutions to conduct this parameter-setting process.\nA particularly interesting area for further inquiry provoked by the exercise would be examining more limited deployments of Max. If the comprehensive, economy-wide Max discussed here is for some reason infeasible or unacceptable, might variants with more limited scope provide many of the proposed benefits with less cost and disruption, and fewer obstacles? Max's scope might, for example, be limited to enterprises over a specified scale, or to sectors identified as presenting especially large externalities or tendencies to market power and rent-seeking. An especially interesting variant would limit Max's authority to capital markets, either overall or jointly with a scale threshold. In this variant, "Capital Max" would allocate, or more likely price-adjust, capital to enterprises, replacing or operating in parallel with private capital markets. Capital Max would presumably use the same objective function as economy-wide Max. Since this function would consider both the private and public effects of enterprise operations, Capital Max would not precisely replicate the behavior of either private capital markets or past efforts to allocate capital in line with political aims. Consequently, the well-known critiques of these past efforts would not necessarily apply, any more than the old critiques of comprehensive central planning would apply to economy-wide Max.
Hints that Capital Max might be feasible and advantageous come from two lines of evidence: first, the extent to which capital allocation is already automated, via index funds, trading programs, and other algorithmic systems, which suggests that the change to Max might merely require adjusting the objective function; and second, the likelihood that key points in capital markets exhibit substantial market power, as well as systematic biases and choice pathologies. Capital Max might thus be able to gather low-hanging fruit, operating in parallel with and out-competing existing private capital-allocation mechanisms – thus, ironically, subjecting them to increased market discipline. Viewed in this way, Capital Max would not aim to abolish Wall Street, but merely to subject it to real competition and thus make it work better.\nIdentifying these questions for further research and the associated stakes re-affirms one observation made in the introduction. It is widely noted that large technological change can drive transformative societal change, disruption, and conflict. But such changes can also explicate and disrupt foundational shared assumptions that underpin the norms, institutions, and power structures of society. In particular, these may depend on assumptions about what people can do to each other that are technology-limited, but not recognized as such until the technology changes. This exercise has targeted long-settled assumptions about the moral and instrumental effects of markets versus central economic control, but other unexamined foundational assumptions – in particular about the extent and form of power that some can exercise over others – may face similar disruptions under large-scale technological change.\nMax, in particular his Pigovian form, presents three ambiguities, which should be kept in mind when considering his potential implications. First, it is ambiguous to what degree Pigovian Max would represent an incremental reform or a revolutionary transformation. I began the project as an intentionally extreme speculation about technological change and its implications. But elaborating the practicalities of implementing Pigovian Max made him increasingly look like a feasible, even incremental reform: an adjustment to improve a basically capitalist system, drawing on well-established legal, institutional, and administrative capabilities, which appears quite compatible with a liberal democratic state. This claim must be qualified, of course, because implementation details will matter greatly: some variants of Max would clearly be so heavy-handed in their imposition of central control as to be incompatible with basic liberties. It might be a small step, easy to stumble over, from using price adders to correct clear externalities and rent-seeking, to adding incentives for sociability, pleasing others, conformity, docility, piety, or obedience to current political authorities. Any suggestion that Max might be a modest incremental change to the architecture of capitalism must reckon with these risks – and also with the challenge, discussed below, of finding a feasible and non-violent transition pathway that leads from here to Max.\nA second ambiguity concerns the aggregate normative evaluation of Max: would he on balance be good or bad for human welfare? I began the exercise agnostic on this point, and reached the unsurprising conclusion that it could go either way, depending on design and implementation details that an inquiry at this high level of generality cannot resolve.
Yet this experience also cast into sharp relief the strength of normative priors that animate other writings on this question, and how thoroughly and confidently these priors lead directly to the conclusions. This observation applies equally on both sides of the debate: on the one hand, to the growing number of socialists writing on AI central planning, who know – with little consideration of alternative implementation details or contextual conditions – that it would be good; and on the other hand, to the unnamed recent essayist in the Economist, who knows with similar prior confidence that it would be bad.81 It appears clear that further investigations of this issue should link their normative assessments to explicit and specific assumptions about how central planning is implemented and what capabilities it draws on, in what context – even at the cost of yielding less clear and less predictable answers.\nA third ambiguity concerns how to characterize Max's job, in particular as regards what he is replacing. Although the starting aim was for Max to replace \"the market,\" working through the details led to a preferred form of Max, Pigovian Max, who lets the market operate then applies socially optimal adjustments to the resultant prices. Since controlling externalities and market power are canonical state functions, this makes Max look more like a comprehensive regulator – a state actor – than a market-like coordinating mechanism. Moreover, at each point in the argument where I proposed expanding Max's purview to include additional functions, these also looked more like state than market functions – or perhaps functions of the non-governmental charitable sector. Yet Max is not – and probably cannot and should not be – all of the state. The state does more than regulate, and even its regulatory functions are not limited to economic transactions. The state-market boundary is already fuzzy and contested, a point for which working through Max provided a helpful reminder. But introducing Max complicates, partly dissolves, and moves this state-market boundary.\nIn closing, I return to the question of Max's plausibility, and to the most disturbing issue raised by the exercise. I claimed above that Max passes some threshold test of plausibility, but plausible does not mean likely. Even a more complete and persuasive demonstration that a fully implemented Max would raise no impossible conditions would not necessarily imply a feasible or acceptable transition path to get from here to there. A technological artifact of Max's scale and complexity does not arise spontaneously, but must be pursued and developed by actors who can mobilize the needed (albeit uncertain) scale of expertise, resources, and authority. Max's real-world feasibility will thus depend on both needed technological capabilities and favorable social and political conditions. In this regard, the fact that Max-like capabilities, or large parts thereof, are already present or in development – with the crucial difference that these developments are in private hands and aim to advance private or sectional interests, not broad public ones – cuts both ways, both for Max's feasibility and for the prospect of AI bringing broad human advances. Two sobering implications follow.\nThe first concerns the risk of lost opportunities. 
There may well be prospects for mid-term AI developments that could bring profound advances in human welfare, whether through something like Max or through other applications in health, environment, education, or government. But if the specific technical requirements to realize such broad benefits differ greatly from those being pursued by private actors, then near-term RD&D decisions, plus path-dependency, may foreclose the prospect for such transformative future benefits. The severity of this risk depends on the portability and adaptability of capabilities – how readily those developed for private or rival purposes can be adapted to serve public or universal ones – which is deeply uncertain.\nThe second implication concerns the medium-term implications of continued dominance of private actors and interests in guiding development of increasingly powerful AI capabilities. Continued expansion of capabilities could become self-reinforcing – not in the oft-proposed sense of AI systems themselves growing unboundedly powerful through recursive self-improvement, but in the sense of capabilities controlled by human actors recursively strengthening the concentration of social, economic, and political power in those actors' hands. Perhaps even worse than the loss of potential human-liberating capabilities, such trends could lead to profoundly dystopian futures, whether these come about with a bang (violent upheaval) or a whimper (incremental loss of human welfare, agency, and hope).82\nThese dire possibilities suggest the value of large early investments in development of AI and related capabilities that are explicitly targeted at comprehensive public benefits. This may sound obvious, but it may in fact be the most radical suggestion in the paper, because such efforts might not just differ greatly from present privately-driven developments, but also from present small \"AI for good\" efforts, in at least two respects. First, the needed development efforts would not erroneously assume that economic development benefits to the sponsoring jurisdiction – the growth and competitive success of enterprises located there, or the successful tech-industry job placement of students trained there – are identical to the aggregate public benefit. Second, they would not presume that the technological capabilities developed for private commercial advantage will be readily and without limit re-deployable in pursuit of non-commercial public purposes. This will be the case to some degree, of course, and a development program seeking public benefit should not needlessly re-invent wheels that can equally well be installed on public and private vehicles – but how much, in what particulars, and for how long this will be the case is deeply uncertain, and it would be naïve for a publicly motivated development effort to assume comprehensive, continued complementarity between these. I recognize that the implications of this conclusion for resource requirements are large – and at odds with present trends in public-private division of resources and authorities – but the risk of continued, uncritical reliance on the assumed complementarity of technologies to advance competitive or rival interests and to serve broad public ones, appears too large to ignore.The post Max – A Thought Experiment: Could AI Run the Economy Better Than Markets? 
first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "Max – A Thought Experiment: Could AI Run the Economy Better Than Markets?", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=1", "id": "bf58a2233fac975f5a6bd77884c01e07"} -{"text": "AI & Justice in 2035\n\n\n Download as PDF\n\nThe AI PULSE Project hosted the \"AI and Justice in 2035\" Roundtable at UCLA on February 28, 2020. This event consisted of four panels and a dozen discussion papers that examined how AI will affect the provision of justice, broadly understood, by the year 2035.\nA list of participants is below:\nIfeoma Ajunwa, Cornell University, Law, Labor Relations, and History Dept.\nKevin Ashley, University of Pittsburgh School of Law\nJenna Burrell, UC Berkeley, School of Information\nAnthony Casey, University of Chicago Law School\nCary Coglianese, University of Pennsylvania Law School\nAloni Cohen, Boston University School of Law\nRebecca Crootof, University of Richmond School of Law\nAdnan Darwiche, UCLA, Computer Science Dept.\nJulia Dressel, Recidiviz\nKristen Eichensehr, UCLA School of Law\nChristine Goodman, Pepperdine University School of Law\nJerry Jacobs, University of Pennsylvania, Department of Sociology\nMichael Livermore, University of Virginia School of Law\nRyan McCarl, UCLA School of Law\nSusan Morse, University of Texas at Austin School of Law\nPaul Ohm, Georgetown University Law Center\nDavide Panagia, UCLA, Political Science Dept.\nTed Parson, UCLA School of Law\nRichard Re, UCLA School of Law\nAndrea Roth, UC Berkeley School of Law\nNeil Sahota, UCI School of Law\nLauren Scholz, Florida State University College of Law\nAndrew Selbst, UCLA School of Law\nHarry Surden, University of Colorado Law School\nEmily Taylor Poppe, UCI School of Law\nRory Van Loo, Boston University School of Law\nSuresh Venkatasubramanian, University of Utah, School of Computing\nJohn Villasenor, UCLA School of Law and School of Engineering\nEugene Volokh, UCLA School of Law\nAndrew Wistrich, Cornell Law School\nAlbert Yoon, University of Toronto Faculty of Law\nElana Zeide, UCLA School of LawThe post AI & Justice in 2035 first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "AI & Justice in 2035", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=1", "id": "8a1a83c5fe442f6f473b1325f570decd"} -{"text": "Launch of the UCLA Institute for Technology, Law and Policy\n\n\n Download as PDF\n\nThe UCLA School of Law and the UCLA Samueli School of Engineering have launched the UCLA Institute for Technology, Law and Policy.\nOn January 24, 2020 the Institute, in collaboration with PULSE, hosted its first symposium, \"Algorithmic Criminal Justice?\" The one-day event consisted of a series of interactive panels that examined the use of algorithms in policing and criminal justice, including approaches to identifying and mitigating algorithmic bias. 
More information and video recordings of the four panels are available here. The post Launch of the UCLA Institute for Technology, Law and Policy first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "Launch of the UCLA Institute for Technology, Law and Policy", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=1", "id": "6c7d5e9830211c7ae58f0a583b472dd4"} -{"text": "Artificial Intelligence's Societal Impacts, Governance, and Ethics: Introduction to the 2019 Summer Institute on AI and Society and its rapid outputs\n\n\n Download as PDF\n\nThe works assembled here are the initial outputs of the First International Summer Institute on Artificial Intelligence and Society (SAIS). The Summer Institute was convened from July 21 to 24, 2019 at the Alberta Machine Intelligence Institute (Amii) in Edmonton, in conjunction with the 2019 Deep Learning/Reinforcement Learning Summer School. The Summer Institute was jointly sponsored by the AI Pulse project of the UCLA School of Law (funded by a generous grant from the Open Philanthropy Project) and the Canadian Institute for Advanced Research (CIFAR), and was co-organized by Ted Parson (UCLA School of Law), Alona Fyshe (University of Alberta and Amii), and Dan Lizotte (University of Western Ontario). The Summer Institute brought together a distinguished international group of 80 researchers, professionals, and advanced students from a wide range of disciplines and areas of expertise, for three days of intensive mutual instruction and collaborative work on the societal implications of AI, machine learning, and related technologies. The scope of discussions at the Summer Institute was broad, including all aspects of the societal impacts of AI, alternative approaches to their governance, and associated ethical issues.\nAI and Society: The debate and its challenges\nInspired by recent triumphs in machine learning applications, the societal impacts, governance, and ethics of these technologies are seeing a surge of concern, research, and policy attention. These rapid, linked advances – spanning algorithm development, data and data-handling tools, and hardware-based computational ability – are a leading focus of current concern about technology's potential for profound and disruptive societal transformation.\nIn part, current concerns about AI reprise familiar themes from other areas of high-stakes technological advance, so the existing body of research on these other technology areas offers insights relevant for AI. A few of these insights are especially prominent. For example, the rate and character of technological change are shaped not just by scientific knowledge but also by the economic, policy/legal, and social/cultural conditions that determine relevant actors' incentives and opportunities. Societal impacts are not intrinsic to characteristics of technology, but depend strongly on how it is developed, integrated into products and services, and used – and how people adjust their behavior around it: As Kranzberg's first law of technology tells us, \"Technology is neither good nor bad; nor is it neutral.\"1 The conjunction of rapid technical change with uncertain uses and responses challenges efforts to govern the associated impacts, so governance often merely aims to mitigate the worst impacts after the fact.
Even when societal impacts are profound, they tend to emerge gradually in response to repeated adaptations of technology, deployment, and behavior, and are thus difficult to project, assess, or manage in advance.\nThese broad parallels with prior areas of technological advance and associated societal concerns are real, but there are also reasons to expect that AI may be different, and more serious, in its impacts and implications. What is popularly called \"AI\" is not one thing, but a cluster of multiple algorithmic methods, some new and some old, which are linked to parallel advances in the scale and management of data, computational capacity, and multiple related application areas. This set of advancing capabilities is diffuse, labile, and hard to define – a particular challenge to governance, since the ability to workably define something is normally a precondition for any legal or regulatory response. AI is also foundational, potentially able to transform multiple other technologies, research fields, and decision areas – to the extent that its impact has been credibly compared to that of electricity or fossil fuels in prior industrial revolutions.\nAI's societal impacts thus present deep uncertainties, for good or ill. Expert views of what it will do, and how fast, span a broad range: from the cumulation of many incremental changes, to existential transformations of human capabilities, prospects, societies, and identities. Even setting aside \"singularity\" issues – potential general or super-intelligent AI that might threaten (or in some accounts, transcend) human survival and autonomy – multiple mechanisms of impact have been identified by which even continued development of AI short of these landmarks could have transformative societal impacts. Examples include large-scale displacement of human livelihoods, disruption of geopolitical security relationships, transforming (or undermining) collective decision-making processes through democratic governments or other institutions, extreme concentration of wealth and power (perhaps based on new mechanisms of power), and large-scale changes in human capabilities and identities. Even limiting attention to present and near-term developments, there are a host of concerns raised by current AI applications – e.g., safety and security of systems, bias in algorithmic decision-making, threats to privacy, and inscrutability of decisions – some of which may also give early warning signs of coming larger-scale impacts.\nRelative to the scale and gravity of potential impacts, present debate on AI and Society presents a seeming paradox. The issue is receiving a flood of attention, with dozens of new programs, a rapid flow of resources, and meetings and conferences seemingly every week. Yet well-founded insights remain scarce on the nature and mechanisms of impacts, effective and feasible means of governing them, and associated ethical issues. There has been relatively little convergence or progress on major questions, which in many cases remain not just unanswered but also subject to wide uncertainty and disagreement, or even not yet clearly posed.2 Because AI is so labile and weakly defined, studying its impacts has been likened to the ancient Buddhist parable of the blind men and the elephant: each observer feels that part of the unfamiliar thing that is closest to them, so each thinks they know it; yet their views are all partial and mutually contradictory. 
As with the elephant, it is possible to approach AI impacts from any discipline or field of inquiry (e.g., corporate law, anthropology, Marxist social history), any area of interest (education, finance, climate change), any political or ethical concern (racial justice, social mobility, privacy, due process), or any prior technological analogy, and find something resonant. Pulled by these centrifugal forces, the debate is thus unhelpfully sub-divided along multiple dimensions and lacks a coherent core.\nThere is also continued disagreement over where the most important impacts sit in time and scale, yielding a distribution of present attention and concern that is bi-modal. To be a little glib, those whose disciplinary perspectives make them most comfortable with speculative reasoning – often technical AI researchers and philosophers – are attracted to endpoint, singularity-related issues, which lend themselves to elegant, analytically rich theoretical inquiries. Most other researchers, on the other hand, gravitate to current concerns and historical precedents, because their disciplines frown on speculation and favor arguments based on observable (i.e., present or past) data and evidence. These areas of inquiry are both valuable and important, yet they leave disturbingly empty the large middle ground of impacts and challenges lying between these endpoints – where AI might transform people and societies by vastly reconfiguring capabilities, information, and behavior, while still remaining (mostly) under human control.3 At the same time, while there is a widespread sense that early action is needed to assess and limit risks of severe harmful impacts, there is little knowledge, and even less agreement, on what that action should consist of or how it should be developed.\nThe 2019 Summer Institute on AI and Society\nThis description of the range of issues posed by AI and the state of present debate underpin the aims of the Summer Institute. Just as AI and its impacts present a huge societal challenge, so too does mobilizing existing bodies of experience, knowledge, and methods to effectively inform the assessment and management of its impacts. These challenges will not be surmounted by any single insight, study, or activity. The summer institute aimed to point, tentatively, at a direction of efforts that can advance and expand the debate, establish an early model of the kind of collective engagement needed, and – by seeding cross-disciplinary networks for continued collaborations – contribute to the long-run project of building the needed capacity.\nThe summer institute pursued this aim in two ways. First, it sought to convene the needed broadly interdisciplinary dialog, with the ability to integrate knowledge and experience from multiple technical, scientific, and humanistic domains, and to resist widespread tendencies to converse mainly within existing disciplinary communities. In seeking this breadth of expertise contributing to the discussions, the summer institute benefited from its co-convening between a program on AI and Society based at a leading law school, and the CIFAR Deep Learning and Reinforcement Learning Summer School – a vehicle for advanced technical AI training with a distinguished international group of faculty and advanced students. 
Yet even with the right breadth of expertise in the room, making such interdisciplinary interactions productive takes sustained hard work to understand each other, clarify key concepts and methods, and build new conceptual and communication skills. These aims are better advanced by sustained collaborative work on problems of commonly recognized importance than by discussions that lack such common goals, which tend toward superficiality. Second, it is clear that understanding and addressing AI-related impacts is a long-term, even inter-generational project, which must combine mutual instruction with advancing inquiry, aiming to both advance the debate and broaden its participation by engaging more junior and more senior thinkers on collegial terms.\nAn Experimental Process\nTo pursue these aims, the summer institute experimented with a novel two-part structure, with the first part tightly programmed and structured by the organizers and the second part left almost entirely to the collective, bottom-up authority of the group. The first part aimed to provide the essential foundation of common knowledge and concepts to enable people from a wide range of fields and career stages to participate effectively and confidently in the discussions. To this end, the institute opened with a day of short, focused briefings by faculty experts, each covering elements from their expertise they judged essential for anyone to be an informed participant in the debates. These briefings were grouped into four sessions organized by broad subject-matter:\n– Recent advances and current technical issues in AI and Machine Learning (briefings by Graham Taylor on Deep Learning, Rich Sutton on Reinforcement Learning, and Dirk Hovy on Natural Language Processing);\n– Current issues and controversies in AI societal impacts (briefings by Elizabeth Joh on use of AI in policing and criminal justice; Michael Karlin on military uses of AI; Trooper Sanders on biased data, its implications and potential correctives; and Elana Zeide on use of predictive analytics in education and employment);\n– Alternative approaches to governance of AI and its impacts (briefings by Geoffrey Rockwell on the historical trajectory of concerns about automation and proposed responses; Gary Marchant on limits to hard-law approaches, and potential soft-law and international alternatives; Brenda Leong on corporate AI ethics boards and their limitations; and Craig Shank on internal corporate controls and multi-stakeholder governance processes);\n– Larger-scale and medium-term issues (briefings by Jason Millar on embedded values in navigation and mobility systems; Evan Selinger on facial recognition and its implications; Osonde Osoba and Casey Bouskill on technology-culture interactions in AI impacts and governance; and Robert Lempert on large-scale societal implications of alternative approaches to algorithm design).\nFollowing the briefings, the rest of the Summer Institute was dedicated to collaborative work on projects that were not pre-specified, but instead were developed and proposed by individual participants, then selected in real time by all participants choosing which of the proposed projects they wanted to work on. Any participant, regardless of seniority, was invited to propose a workgroup project via a statement posted online and a short oral \"pitch\" presentation to the group, followed by brief clarifying discussion. 
In selecting projects, participants were urged to consider a few explicit criteria – that the projects address interesting and important issues related to AI and society, that they not duplicate existing work, and that they offer the prospect of meaningful progress in the limited time available. Otherwise, there was no central control of projects proposed or chosen. The form of proposed projects was completely unconstrained, explicitly including making a start on collaborative research projects, drafting op-eds or other non-specialist publications, developing proposed contributions to policy or governance, developing instructional material, or creating a story or other work of art on the theme of AI and society.\nFrom twelve proposals, the group selected eight highly diverse projects to work on. The resultant eight groups worked intensively over a day and a half, in a process that several participants likened to a hack-a-thon. The analogy is suggestive but only partly accurate, in that each SI workgroup included a wide range of disciplinary skills and expertise, and each pursued a different project, all generated by participants rather than pre-specified by organizers. The entire group convened briefly in plenary at half-day intervals to hear short reports from each workgroup summarizing what they were doing, what progress they had achieved, what completed output they targeted by the end of the SI, and what help they needed from the rest of the group. All eight workgroups achieved substantial progress by the end of the summer institute, even within the extremely limited time available. All eight also expressed the intention and developed concrete plans to continue their collaborative work after the Summer Institute – with some continuing that work immediately afterwards.\nWorkgroups' Provisional Outputs\nThe contributions published here represent the initial outputs of these eight workgroups' collaborative efforts, as achieved during the intensive work period of the summer institute plus a little further polishing over the following few weeks. One consequence of the decentralized, bottom-up model, with each workgroup defining its own project, is that the resultant outputs are too diverse for any single publication or communication outlet to be suitable for them all. Yet in order to have a single vehicle that captures the collective energy and themes of the SI – and moreover, to communicate these while the experience is still fresh in participants' minds – all workgroups agreed to disseminate interim outputs from their work for this fast web publication. This quickly distributed – but explicitly half-baked – publication model was variously likened to theatrical workshopping or rapid prototyping in product development, in addition to hack-a-thons.\nThis experimental early publication model is very much in line with the exploratory and experimental spirit of the SI, taking the risk of trying different models to advance and broaden the debate. It also, of course, has the unavoidable consequence that these works – while they reflect remarkable achievements in the short time available – are all provisional and not yet fully developed. With some variation among the workgroups, they are presented here with the aim of being starting points for needed discussions, and providing concrete resources, background information, and proposals to move those discussions forward with specificity.
They are not completed or polished products.\nWe provide below a brief synopsis of the aims and outputs of each of the eight workgroups. Each workgroup is continuing to develop its project, aiming for publication in various outlets in line with the groups' diverse aims and intended audiences. As the outlet for each workgroup's completed work is finalized, we will identify it and, as available, add links to the discussions below.\nMobility Systems and Embedded Values\nThis group used the example of the now-ubiquitous, AI-driven, turn-by-turn navigation systems to illustrate the range of values affected by these systems, whether explicitly or not. They then considered the resultant implications for societal impacts of projected large-scale expansion and integration of these systems, moving from separate navigation apps used by individual drivers, to complete urban mobility systems integrating signaling and multiple types of human-driven and autonomous vehicles, private and public. Navigation apps may at first glance seem prosaic, but the exploration was surprisingly rich. Present implementations of these systems seek to minimize individual drivers' travel time between a given origin and destination, with limited options to tune results to individual preferences such as avoiding freeways. But since their early deployment, a collateral impact of these systems has been increased traffic in residential neighborhoods – an impact well known to the planners who design streets, signals, and signage, but not recognized as implicated in individual navigation systems until large numbers of drivers began taking the same recommended shortcuts through side streets. The group identified several additional values affected by mobility design systems, which will require explicit consideration as the scope and integration of systems increases. In addition to travel time and neighborhood character, these include safety (at the individual level for drivers, pedestrians, and other street users, and collectively); allocation and prioritization of mobility access among types of users (now implemented simply, through right-of-way for emergency vehicles, HOV or toll lanes, etc., but potentially generalizable in multiple ways with fully integrated systems); and policing strategy and resource allocation, among others – including an unexpected linkage to the important role presently played by traffic fines in some local government budgets. In this initial published collection, the workgroup presents a taste of their discussions in the form of a fictitious press release, announcing the release of a new navigation app that generates routes based on minimizing drivers' cognitive burden.\nThis group's discussion illustrates a widespread phenomenon related to automation of decision processes. Societal institutions and processes often serve multiple values, only some of which are explicitly articulated as their mission or objective. Just as urban transport systems advance multiple values in addition to efficient mobility, so too do other organizations. A prominent example is provided by military services, in the United States and to different degrees in other countries. While their explicit missions are all broadly related to national defense and security, one of their most important social impacts – almost unrelated to their explicit missions – has long been to provide training and life skills to young people from disadvantaged backgrounds, making these organizations one of the most powerful drivers of social mobility. 
Many institutions serve such multiple corollary or implicit societal values. Automation or codification of decisions – typically with a single objective function that aligns with the institution's explicit, official mission – can put these other implicated values at risk, either from the automated decisions themselves or from related organizational changes. (In military organizations, the concern arises from the higher level of technical skills and education required of even entry-level recruits in AI-rich environments.) Yet these corollary values are challenging to integrate into explicit algorithmic decision-making – because they are ambiguous, hard to incorporate into an objective function that trades them off against core organizational missions, and potentially contestable – such that they may only flourish while flying under the radar. As Joni Mitchell sang in another context, \"You don't know what you've got till it's gone.\" The loss of corollary, emergent, or ambiguously defined organizational values may be a systematic consequence of automating decisions, which typically requires explicitly codifying what before was ambiguously embedded in organizational practice.4\nOutput of the Mobility Systems group\nMeaningful Human Control\nThis group considered the problem of coupled human and algorithmic decision-making in high-stakes settings, using as initial examples the domains of weapons, aviation, and medicine. Noting the definitional ambiguity and the difficulty of operationalizing widely repeated concepts such as \"humans-in-the-loop,\" the group's initial ambition was to unpack the meaning of \"meaningful human control\" (MHC) and identify processes and criteria to operationalize it across these diverse decision domains. But the group adjusted mid-course, recognizing that this was a longer project and that they needed first to engage the prior question of why – and with what conditions and limitations – meaningful human control is judged desirable, or even essential, in such decision contexts. They argue that retaining meaningful human control carries both costs and benefits, and that both the costs and benefits include distinct components, some related to system performance and some to issues of legal and moral responsibility. In general, greater human control may improve system performance by increasing redundancy and adaptability to novel conditions, and may be necessary to ensure moral and legal accountability. Yet it may also degrade performance by requiring the uncoupling of complex autonomous systems, and may increase the risk of human error, carelessness, or other forms of improper human decisions. The group noted that the optimal balancing of these factors, and hence the preferred degree and form of human control, are likely to vary substantially even among the three decision domains they consider. The group is continuing work on the larger project of generating guidelines for how to implement the desired degree and form of human control in particular decision types.\nOutput of the Meaningful Human Control group\nAI Without Math\nThis group began a project to develop non-technical instructional materials on key AI and machine-learning concepts. They recognized that as deployed AI-based products and services continue to expand, many decisions will be required about how to control, explain, and manage these.
These decisions will include many by various professionals who not only lack specific training in AI and Machine Learning, but may also lack training in the underlying mathematical and statistical concepts that provide the core of even introductory instruction in AI/ML. In view of this need, the group began development of an online instructional resource that would provide introductory explanations of key AI/ML concepts with no use of formal mathematical notation. As illustrative audiences toward whom to target their explanations, they took journalists and judges. Their short contribution here presents a start on this project and an illustration of their targeted level of explanation, including explanations for four key concepts: rational agents, naïve Bayes classifiers, linear regression, and convolutional neural networks. Their more extensive resource will be an ongoing project, to be available at https://www.aiwithoutmath.com.\nOutput of the AI Without Math group\nSiri Humphrey:5 Design Principles for an AI Policy Analyst\nThere are many studies underway of the potential for AI tools to take on various functions of government – legislative, executive, judicial, and electoral – asking how the use of AI in specific functions would work, what it would require, with what attendant benefits and risks, and whether (and how) it could align with applicable legal, democratic, and moral principles. This group looked at a previously unexamined piece of this landscape, the potential for AI systems to take over, partly or wholly, the functions of policy analysts who advise senior officials or political leaders. Starting from recent scholarship that has identified several distinct functions that policy analysts perform, they examined how AI systems – either current ones or reasonably projected extensions – could serve these functions, with what implications for the policy-making process and the multiple public values implicated in policy decisions.\nThe group argues that AI systems could substantially replace the \"synthesis\" function of policy analysis: the gathering, curating, and synthesis of publicly available information relevant to an issue or decision. At least initially, use of AI in this role would have to be subject to specific limitations on the tasks delegated, and also subject to review and revision of the resultant briefing notes or other documents before they go to Ministers or other senior decision-makers. The group also argues that repetition of this synthesis and review process, with feedback from both human policy analysts and decision-makers (such as Ministers routinely provide on briefing materials prepared by their human staff) could serve as high-order training for the AI, allowing progressive reduction – although not elimination – of the amount of oversight and input needed from human policy analysts. In contrast to the \"synthesis\" function, they argue that certain other policy analysis functions depend more strongly on the essentially human interaction between decision-makers and their advisors. 
This militates against the wholesale replacement of analysis and advising functions by AI systems, suggesting instead a model of \"Artificial-intelligence-amplified policy analysis,\" in which AI systems augment and amplify the skills of human policy analysts.\nOutput of the AI Policy Analyst group\nAssessment Tool for Ethical Impacts of AI Products\nThe next two workgroups form a complementary pair, both concerned with the problem of what to do with the multiple sets of AI ethical principles being advanced to provide guidance for individuals or organizations engaged in AI development and application. These sets of principles pose two widely noted problems. First, the proliferation of large numbers of similar, but not quite identical, lists of principles raises questions about the relationships between them, the normative foundations of any of them, and the basis for adopting any of them over the others.6 Second, all these principles are stated at high levels of generality and abstraction, so their implied guidance for what to do, or what not to do, in the actual development, design, training, testing, application, and deployment of AI-enabled systems is indirect, non-obvious, and contestable.\nIn an unplanned piece of serendipity, these two groups approached the same problem from nearly opposite perspectives, one operational and one critical, yielding a rich and instructive counterpoint. This group took an operational, constructive approach rooted in engineering. Boldly (and practically) going where no one has gone before, they reasoned step by step through the process of operationalizing a particular set of ethical principles for any AI-related product or project. They first reduced each principle to a list of specific areas of concern, then to operational questions about observable practices and procedures relevant to each area of concern, and finally to a numerical scoring system for alternative answers to each question. Subject to some remaining ambiguities about appropriate weighting, the resultant component scores can then be aggregated to generate an overall numerical score for conformity of a system or project with the specified principle. The group stresses that such reductive scoring systems are prone to various forms of misinterpretation and misuse – such as imputing false precision or prematurely closing discussions. They also highlight that this heroic, first-cut effort is incomplete. Yet at the same time, they vigorously defend the approach as providing a stimulus, and a concrete starting point, for the discussions of impacts and ethical implications that are needed in the context of specific projects and systems.\nOutput of the Ethical Impact Assessment Tool group\nShortcut or Sleight-of-Hand: Why the checklist approach in the EU guidelines does not work\nThis group took as their starting point a different set of ethical principles, the \"Ethics Guidelines for Trustworthy Artificial Intelligence\" issued by the EU Commission's High-Level Expert Group on Artificial Intelligence in April and June 2019, including an \"assessment checklist.\" This checklist is intended to help technology developers consider ethical issues in their policies and investments, and thus to create more trustworthy AI. 
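Before turning to the critique, it may help to see what the preceding "Tools" group's aggregation step looks like in practice. The sketch below is ours, not the group's; the principle, questions, weights, and answers are invented purely for illustration.

```python
# A toy sketch of the scoring idea described above. The principle, questions,
# weights, and answers are invented for illustration, not drawn from the
# workgroup's actual instrument.

# Each operational question carries a weight and a score for the observed practice.
transparency_questions = [
    # (question, weight, score on a 0-4 scale for this hypothetical project)
    ("Are the sources of training data documented?",             0.40, 3),
    ("Can affected users request an explanation of a decision?", 0.35, 2),
    ("Are known system limitations disclosed to deployers?",     0.25, 4),
]

def principle_score(questions):
    """Aggregate weighted question scores into one number for the principle."""
    total_weight = sum(weight for _, weight, _ in questions)
    return sum(weight * score for _, weight, score in questions) / total_weight

print(f"Transparency score: {principle_score(transparency_questions):.2f} / 4")
# -> Transparency score: 2.90 / 4
```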
In effect, this EU expert group undertook an exercise quite similar to that conducted by the Summer Institute \"Tools\" group summarized above, except that the EU expert group's exercise is more limited: it consists only of a checklist of yes/no questions (with extensive supporting discussion), and does not pursue a numerical scoring system.\nThis workgroup conducted a detailed critical assessment of the guidelines and checklist, aiming to identify their implications – and in particular, their limitations – as a tool to guide AI development. They argued that these guidelines are a fair target for such critical scrutiny because of their likely influence and importance, based on their ambition to articulate a broadly applicable standard of care for AI development and their prospect of influencing EU regulatory development – especially given the EU's emerging role as a world leader in this regulatory area.\nThe group finds the proposed approach problematic in several ways, most of them related to intrinsic limitations of checklists in this context rather than problems specific to this particular checklist. Using the analogy of safety procedures in aviation and space flight, they argue that checklists are an appropriate technology to manage human-factors risks in complex environments whose operations, salient risk mechanisms, and implicated values are well known, but that these conditions do not apply to the development of safe or ethical AI systems. The group argues that many items on the checklist are seriously ambiguous but lack the additional explanation or documentation needed to reduce the ambiguity; and that the checklist thus risks conveying false confidence that needed protections are in place, when the conditions for this to be the case are in fact subtle, context-specific, and evolving over time.\nAlthough the EU expert group's report includes extensive discussions of caveats and limitations, the workgroup finds these insufficient to mitigate the risks they identify, in view of the likely uses of the checklist in real-world, operational settings. They worry that enterprises are likely to treat the checklist either reductively or opportunistically – perhaps delegating responses to their legal teams to seek defensible markers of regulatory compliance or of fulfillment of some relevant duty of care. Used in such ways, the checklist would fail to stimulate the serious, organization-wide reflection on the concrete requirements of ethical conduct in their own settings that should be the aim. Moreover, the group argues, the checklist is unlikely ever to yield a decision not to pursue an otherwise attractive project due to irreducible risks of unacceptable outcomes, whereas a meaningful and effective ethical filter must be capable – at least occasionally – of generating this outcome. Finally, the group argues that checklists are likely to be proposed or used as safe harbors – by enterprises, or even worse, by regulators, judges, citizen groups, or political leaders – with the resultant risk of reducing the pursuit of ethical AI to empty \"ethics-washing\" or \"ethics theatre.\"\nIn contrast to their sharp criticism of the checklist, the group finds the expert group's higher-level \"guiding questions\" to be of great value, in helping to identify issues and problems that require sustained attention and so to promote an organizational culture of heightened ethical awareness. 
But they find the pursuit of simplification and codification embodied in the checklist approach to be premature, promoting misleading, too-optimistic assessments of risks and raising the prospect of a broad, destructive backlash against AI and related technologies.\nOutput of the Ethical Guidelines group\nAI and Agency\nThis group examined the deep, and deeply contested, concept of \"agency,\" as it applies to and is modified by the context of AI development. Working both individually and collectively, they wrote a set of short, provocative essays that approach the concept of agency from multiple disciplinary perspectives, including philosophy, political science, sociology, psychology, economics, computer science, and law. The essays also lay out a set of deep questions and tensions inherent in the concept. They ask how agency is defined; whether humans have it, and if so, whether and how this distinguishes humans from present and prospective AI (and also from other animals); and what the implications of alternative conceptions and ascriptions of agency are – for human behavior, identity, welfare, and social order.\nThe definitions they consider for agency cluster around two poles, one positive and one negative. At the positive pole, agency is defined by the capacity for goal-directed behavior, and thus identified by observing robust pursuit of a goal in response to obstruction. At the negative pole, agency is defined by not being subject to causal explanation without introducing conceptions of intention or subjectivity. The group notes that conventional conceptions of agency as being unique to humans are increasingly challenged on two fronts: by human inequity in diverse social contexts, and hence wide variation in individual humans' capacities to exercise effective agency; and by scientific advances that suggest both that subjectively perceived agency may be illusory, and that to the extent humans do have agency, so too may other animals.\nPresent and projected developments of AI raise the stakes of these inquiries. The increasing complexity of AI performance implies, at a minimum, the lengthening of causal chains connecting behavior to proximate or instrumental goals and thence to higher-order goals, shifting the location of agency and casting doubt on simple claims that people have it but AIs do not or cannot. Yet the connection between this causation-driven notion of agency and the validity of societal ascription of responsibility and deployment of incentives is obscure, in the context of both human and AI decision-making. Does accountability always pass back to the human designer or creator, no matter how many layers of intermediate goals are generated within an AI? If human behavior is increasingly understood as subject to causation, does this reduce moral problems to correctible, technical ones – and if so, correctible by whom, in terms of both effectiveness and legitimacy? Finally, even if strong human-other or subject-object distinctions in ascribing agency become untenable under further advance of scientific knowledge and AI technology, might agency nevertheless be a useful fiction, a myth that is useful or even necessary to believe – for stable conceptions of human identity, and for effective collective regulation of human behavior?\nOutput of the AI and Agency group\nCan AI be an instrument of transformative social and political progress? 
The \"levelers\" group\nThis group took its inspiration from a strain of political thought early in the industrial revolution, which identified markets and technological innovation as powerful engines of political progress, holding the prospect of large gains in both liberty and equality. Looking forward to the transformative possibilities of AI, the group took a perspective at odds with the dystopian gloom that marks much discussion of AI impacts – and also, for that matter, at odds with the mixed outcomes that attended the actual technological and economic transformations of the industrial revolution. Instead, the group asked whether advances in AI could drive transformative social and political progress – and if so, what conditions would be necessary or helpful in promoting such progressive impacts. The group considered technical and socio-political conditions separately. Are there particular technical characteristics of deployed AI systems that would be most compatible with the aim to increase rather than decrease broad human liberty, equality, and agency? And what social, political, and economic conditions – including the need for viable business models – would be most conducive to AI systems with these beneficial characteristics being successfully developed, deployed, scaled, and sustained over time?\nRegarding technical characteristics, the group identified two areas that might promise greater, and more broadly distributed, societal benefits than present and projected AI development patterns, one related to the structure of decision-making and one related to the scale, decision scope, and number of separate AI systems. Most methods of algorithmic decision-making, whether modern machine-learning or earlier approaches, structure their decision-making with the aim of optimizing a single-valued objective or scoring function under a single characterization, deterministic or probabilistic, of conditions in the world. An alternative approach, rooted in concepts of satisficing, bounded rationality, and multi-criteria decision making, instead pursues decisions that perform acceptably well under a wide range of possible realizations of uncertainties – and also under a wide range of plausible objectives and associated values. The group speculated that such robustness to diverse conditions is likely to be associated with greater pluralism of values, and with a tentative approach to decisions that recognizes uncertainty and limited knowledge, makes informed guesses, and seeks additional guidance – and thus, perhaps, with more inclusive and more equitable AI-driven decision-making.\nRegarding scale and scope, most present AI-based products are developed by for-profit enterprises and marketed to users – individual consumers, businesses or other organizations, government agencies, etc. – under conditions of asymmetric information and substantial market power. Moreover, users' values and preferences implicated by the AI systems are often under-specified, ambiguous, and manipulable, and may also exhibit systematic disparities between immediate impulses and considered longer-term values and welfare. 
The relationships between AI systems and users are thus ripe for exploitation to benefit the dominant party, e.g., by bundling attractive services with subtle, hard-to-observe costs such as loss of privacy or autonomy, or by manipulating users' adaptive and labile preferences to their detriment.\nMany alternative models for AI deployment are plausible, at a wide range of scales in terms of people served and decision scope, and are potentially compatible with better advancing the pursuit of individual well-being and shared values. But achieving this alignment will require certain conditions, once again mainly related to the specification of objective functions but now with additional complexities that arise when multiple actors' interests and values are implicated. Such complexities include, for example, typical mixtures of shared, rival, and conflicting interests among actors, as well as collective-action problems and other pathologies of collective choice. In all such settings, AI systems must be faithful servants – that aim to advance as best they can the values and interests of the individual or collective they serve, even when these are tentative, imperfectly understood, and require continual adjustment – but with no consideration of the interests of the agent who developed or applied the AI.\nEven if or when the associated technical requirements are clear, systems with these attributes may well not be compatible with present AI development business models. Such systems will need contextual conditions that allow them to be developed, deployed, adopted, and scaled – while maintaining fidelity to the progressive aims and principles of the endeavor. The group worked through various scenarios of conditions that could enable such development, allowing the desired systems to gather initial development resources; secure the ongoing inputs needed to scale and progress; avoid being destroyed or corrupted by competition or attack from incumbents whose rents are threatened; and operate sustainably over time. Promising directions included a mix of strategic identification of initial targets; strategic early deployment of philanthropic or crowd-sourced resources using open-source development; building strong early competitive positions through aggressive exploitation of IP advantages, coupled with binding pre-commitments to relinquish these at a specified future date; and compatible public policies regarding data ownership, IP, antitrust, and related matters. The group recognized that they were engaged in hopeful speculation about potential technical capabilities and associated societal conditions and impacts, given that these conditions remain largely unexamined at present. They concluded, however, that in view of the stakes and their plausibility, these development directions merit high-priority investigation.\nOutput of the Transformative Social Progress group\nConcluding reflections: Routes to progress in understanding and governing AI impacts\nAs these short previews suggest, the discussions and outputs of the Summer Institute's work groups were too broad-ranging and diverse to admit any single summary or synthesis characterization. 
Still, a few salient themes emerged across multiple groups, including the following:\n– The pluralism and ambiguity of values often embedded in current procedures, practices, and institutions, which may be put at risk by automation or codification of decision-making that exclusively optimizes for a single value – whether this single value is efficiency or cost-minimization as often proposed, or something else;\n– The rapidity with which consideration of AI deployment and impacts moves from seemingly prosaic questions of system and application characteristics to engage deep, even foundational questions of social values, political organization, and human identity;\n– The frequency with which new configurations of responsibility and authority, in which AI-based systems augment and partner with human decision-makers rather than replacing them, appear superior on multiple dimensions to either human or machine decision-makers operating alone;\n– The value, in considering ill-posed problems marked by deep uncertainty, of taking a dialectical approach – or alternatively, an adversarial or \"red team-blue team\" approach. This was clearest in the work of the two groups that struggled, from nearly opposite perspectives, with the thorny problems posed by the widely proliferating sets of AI ethical principles. The rich counterpoint between these two groups was unplanned good luck that emerged from the process of proposing and selecting workgroup projects. These groups have not yet had the opportunity to respond to each other directly: they were aware of each other's work from the brief plenary check-ins, but given the intensely compressed schedule of the Summer Institute there was little opportunity for substantive interaction between groups. Each group's work is limited and incomplete, in line with the aims of this rapid-output publication – as indeed are the outputs of all the workgroups. Yet they are also powerfully mutually enriching, offering complementary perspectives on the urgent question of how to inject ethical considerations into AI system development in practice, each hinting at potential correctives to the limitations of the other. They thus provide great heuristic value in informing concrete early actions on a problem that defies resolution in any single step.\n– The urgent imperative of finding footholds for progress in efforts to assess and govern mid-term developments and impacts. This is where the immediate concerns and conflicts that suggest obvious – if unavoidably incomplete – responses shade into potentially transformative impacts. Yet this is also where early interventions hold the possibility of high-leverage benefits, despite the relative scarcity of attention now being directed to these problems and the profound methodological challenges of developing disciplined and persuasive characterizations of risks and responses.\nIn addition to the substantive richness of the discussions and outputs, the Summer Institute represented an experiment in process that greatly exceeded our expectations, which we believe offers significant insights into how to stimulate effective conversations and collective activities that deliver real progress on wicked problems like AI impacts, governance, and ethics. We noted above the conditions that make understanding or practical guidance on these issues so difficult to achieve, despite the flood of attention they are receiving – including deep uncertainty, rapid technical progress, and fragmented knowledge and expertise. 
Given that the familiar approach of waiting until impacts are determinate is insufficiently precautionary, what types of activity or process might promise useful insights for assessment or governance action? There is obviously no determinate checklist available, but a few conditions and criteria appear likely to matter.\n– It is necessary to mobilize multiple areas of relevant knowledge, expertise, and method – both across research and scholarly disciplines, and between academia and multiple domains of practice – because the problems' tentacles extend far beyond any single community of inquiry or practice;\n– It is not sufficient to bring suitably broad collections of relevant expertise together; it is also necessary to facilitate sustained, intensive interaction, in which people dig hard into each other's concepts, methods, terminologies, and habits of thought – to avoid the common failure mode of interdisciplinary activities, superficial agreement without actual advance of understanding;\n– The problem of AI impacts and governance is both fast and slow: rapid pieces of technical progress and reactions to them add up to transformative changes over decades. There is news every week, yet the problem is not going away any time soon. There is thus a need to broaden debate and build expertise along generational lines as well as on other dimensions, to integrate instruction and professional development with parallel efforts to advance understanding. (This is one respect where the parallels between AI and climate change are instructive: both issues combine processes that operate on a wide range of time-scales, although in climate change there is much better knowledge of the long-term behavior of the relevant systems.)\n– Knowledge in the field is diffuse, provisional, and rapidly evolving. There is not an established and bounded body of knowledge sufficient to create an expert community. Plenty of expertise is relevant, but little that is on-point, certainly none that provides clear guidelines for progress.\nThese conditions suggest there is a need to encourage collaborative discussion and shared work along multiple parallel lines, which in turn suggests a decentralized approach to convening collaborative groups with the range of expertise needed to generate and pursue specific promising questions and ideas. The same conditions also suggest a need for communication vehicles for sharing questions, insights, arguments, and ideas – vehicles that are informed by relevant research and scholarship but proceed faster, and more provisionally, than normal conventions of research and scholarship allow. Thus, even with the existence of such communication vehicles, there is also a need to develop a culture and practice of substantively rich but quick exchange of ideas, even when those ideas are provisional and incomplete. It does defy academic convention, but on issues like this, rigor and completeness may be the enemy of progress.\nThese requirements suggest that the Summer Institute was, with small exceptions, a nearly ideal model to advance understanding and capacity – on AI and society issues, and on issues that exhibit similar characteristics. Indeed, the power of the model for similar issues was substantiated by the success of another summer institute convened two weeks later by one of the organizers here on a different issue, the governance of geoengineering. It appears to be a powerful model, subject to various conditions related to the selection of participants, available time, etc., which clearly merits further development and application. 
We wish we could claim to have been prescient in designing this process, but there were large elements of luck in the outcomes of the Summer Institute. Still, the results – both those experienced by participants within the Summer Institute, and those marked by these first-round outputs – strike us as astonishing, given their origin as outputs of less than two full days of intensive focused work by newly formed groups. This was an exciting exercise to be a part of, and we are deeply grateful to our faculty and participants – for the intensity, intelligence, and good will of their shared explorations, and also for their participation in this experiment and the significant intellectual courage they have exhibited in allowing their work to be disseminated here in this provisional, incomplete form.The post Artificial Intelligence's Societal Impacts, Governance, and Ethics: Introduction to the 2019 Summer Institute on AI and Society and its rapid outputs first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "Artificial Intelligence’s Societal Impacts, Governance, and Ethics: Introduction to the 2019 Summer Institute on AI and Society and its rapid outputs", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=1", "id": "0cc3f59d9468a7b0c13bb33bae9a395d"} -{"text": "Mob.ly App Makes Driving Safer by Changing How Drivers Navigate\n\n\n Download as PDF\n\n\nA group of multi-disciplinary researchers from across North America today announced the launch of a new app, Mob.ly, that reduces the incidents of road rage by promoting a driver's sense of well-being and safety without sacrificing efficiency and access.\nThe researchers, who gathered at the Summer Institute for Artificial Intelligence, Ethics and Society, came together to discuss new modalities for AI assisted turn-by-turn navigation. Current navigation offerings focus on speed and efficiency, leaving a large portion of the population underserved. Many drivers, including young or new drivers, the elderly, and parents of young children, often have different needs when navigating city streets. Those drivers, though licensed and fully capable of driving in all traffic scenarios, benefit from navigation decisions that reduce their overall cognitive burden, that is, reduce the level of attention needed to safely operate an automobile. Reducing cognitive burden has been identified as a navigation strategy that can enable a greater sense of well-being and increase safety by avoiding unprotected left turns, major highways, confusing intersections and roundabouts, and other difficult-to-navigate road features that demand higher levels of concentration and focus.\n\"We saw the need for a navigation app that is designed for values other than simply finding the fastest route. We designed Mob.ly to minimize the cognitive burden placed on drivers by their route,\" said lead developer Jason Millar. \"Some traffic interactions are simply more demanding than others. Our system allows drivers to specify which high-cognitive-load situations they want to avoid, and we route them accordingly.\"\nThe Mob.ly app interface. (left) Current routing options as determined by an algorithm that minimizes time-to-destination; (middle) Mob.ly allows drivers to select various high-cognitive-load situations they wish to avoid; (right) Mob.ly's algorithms optimize for low cognitive load.\nExperts say this app could fill an important need in society. 
\"With road rage, you're basically driving under the influence of impaired emotions,\" says Leon James, PhD, a professor of psychology at University of Hawaii and co-author of Road Rage and Aggressive Driving. Managing those emotions by providing options to avoid uncomfortable situations will reduce agitation and could improve outcomes in many ways.\nMob.ly is currently available worldwide on iOS and Android devices.\nAbout Mob.ly Group*\nMob.ly Group is interested in creating navigational aids that intersect the fields of artificial intelligence, ethics and society. The group is led by Dr. Jason Millar, Canada Research Chair in the Ethical Engineering of Robotics and Artificial Intelligence (University of Ottawa) and is comprised of:\nNicholas Novelli, PhD student, University of Edinburgh;\nAnne Boily, PhD Candidate, Université de Montréal;\nCarlos Ignacio Gutierrez, Doctoral Fellow, The Rand Corporation;\nCourtney Doagoo, AI and Society Fellow, University of Ottawa Centre for Law Technology & Society;\nKathryn Bouskill, Researcher, The Rand Corporation;\nElizabeth Wright, MA Candidate, George Washington University;\nBrent Barron, CIFAR;\nElizabeth Joh, Martin Luther King Jr. Professor of Law, UC Davis;\nThomas Gilbert, PhD Candidate, University of California, Berkeley;\nLeilani Gilpin, PhD Candidate, MIT;\nGraham Taylor, Canada Research Chair in Machine Learning, University of Guelph and Vector Institute;\nNicolas Rothbacher, Master's student, Technology and Policy Program, MIT; and,\nMargaret Glover-Campbell, Alberta Machine Intelligence Institute.\n*Mob.ly is the result of a Design for Human Values mini-workshop conducted at the CIFAR-funded Summer Institute on Artificial Intelligence and Society. The app described above is one prototype among many that participants (listed above) imagined as alternatives to the current regime of turn-by-turn navigation apps, all of which focus on minimizing time-to-destination as the primary value embedded in the system. The workshop demonstrated that we can realize interesting alternatives when we focus on alternative values in design.The post Mob.ly App Makes Driving Safer by Changing How Drivers Navigate first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "Mob.ly App Makes Driving Safer by Changing How Drivers Navigate", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=1", "id": "ae8e4a6d4dc2bd1ad00c45c49e706acd"} -{"text": "On Meaningful Human Control in High-Stakes Machine-Human Partnerships\n\n\n Download as PDF\n\nReflections on the Process:\nOur team at the Summer Institute was diverse in both skills (including technical computer science, cognitive science, systems innovation, and radiology expertise) and career stage (including faculty, graduate students, and a medical student). We were brought together at the 'pitch' stage by a mutual interest in human-machine partnerships in complex, high-stakes domains such as healthcare, transport, and autonomous weapons. 
We began with a focus on the topic of \"meaningful human control\" – a term most often applied in the autonomous weapons literature, which refers broadly to human participation in the deployment and operation of potentially autonomous artificial intelligence (AI) systems, such that the human has a meaningful contribution to decisions and outcomes.\nWe began from an applied perspective, with a how question: how might meaningful human control be created in high-stakes domains beyond autonomous weapons, and what threats might be faced during implementation? We worked backwards, and markers in hand we filled the windows of the large AMII boardroom with necessary preconditions for meaningful human control – focusing on challenges that could render human control meaningless, cause meaningful control to be inhuman, or cause control to be lost altogether. We continued to cover windows with specific design issues related to human factors, machine systems, and deployment environments that would influence \"meaningful human control\" in human-machine partnerships.\nAs the hours passed, our conversations grew more productive and contentious. We had made progress in understanding the design principles behind the a priori goal of meaningful human control, but questions remained. What did the human add to the equation? What did the human take away? As incredible progress is made in the space of AI, is meaningful human control something we even want at all? If so, what is the continuing value of meaningful human control? Amidst the discussion a clear consensus emerged: the core of our paper should focus on the importance and implications of meaningful human control, rather than the conditions required to attain it.\nWe are still committed to the framework that so painstakingly painted the windows, but in this initial paper from the Summer Institute we seek to answer the why before we get to the how. For this rapid web publication (in line with the concrete impact focus of the Summer Institute that kept us motivated all along), we have generated a preliminary draft of the former. We will define the concept of meaningful human control more precisely, explore background literature, and briefly outline our framework for the upsides and downsides of the concept in high-stakes environments. This preliminary publication will be updated with a link to the full work as it emerges in the fall.\nIntroduction:\nThe autonomy of artificial agents is an important aspect of defining machine-human partnerships. There is no clear consensus definition for the concept of meaningful human control (MHC) (Santoni de Sio et al., 2018). Broadly it connotes that in order for humans to be capable of controlling – and ultimately responsible for – the effects of automated systems, they must be involved in a non-superficial or non-perfunctory way. In particular, the concept of MHC emphasizes the \"threshold of human control that is considered necessary\" (Roff et al., 2016, p.1), to go beyond the ambiguous concept of humans \"in-the-loop\", or merely setting initial parameters and providing no ongoing control. Furthermore, the concept of MHC rests on the assumption that humans have exercised control over systems such as weapons in the past and present, and is concerned with maintaining this assumed human control (Ekelhof, 2018). 
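One way to make the distinction between perfunctory and ongoing involvement concrete is a deliberately simplified code sketch. This is our illustration only, not an implementation drawn from the MHC literature; the stand-in functions, confidence threshold, and review rule are all hypothetical.

```python
import random

# Our illustration only: contrasting one-time parameter setting with ongoing,
# per-decision human review. All functions and thresholds are hypothetical.

def automated_decision(case):
    """Stand-in for an autonomous system: returns (decision, confidence)."""
    confidence = random.random()
    return ("approve" if confidence > 0.5 else "deny"), confidence

def human_review(case, proposed):
    """Stand-in for a human reviewer who can accept or override the proposal."""
    print(f"case {case}: system proposes '{proposed}', human reviews")
    return proposed  # a real reviewer could return a different decision

def run(cases, ongoing_review=True, confidence_threshold=0.8):
    decisions = []
    for case in cases:
        decision, confidence = automated_decision(case)
        # With ongoing_review=False, human involvement ends once the threshold
        # is chosen ("setting initial parameters"). With ongoing_review=True,
        # a human is drawn into every low-confidence decision as it happens.
        if ongoing_review and confidence < confidence_threshold:
            decision = human_review(case, decision)
        decisions.append(decision)
    return decisions

print(run(range(3)))
```

Whether per-decision involvement of this kind is enough to count as "meaningful" is, of course, part of what the concept leaves open.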
Within this paper we aim to explore the concept of meaningful human control and its value for evaluating the balance of autonomy throughout the broader landscape of human-machine partnerships in high-stakes environments.\nBackground:\nThe concept of \"meaningful human control\" is most closely associated with lethal autonomous weapons systems, where there is general agreement that autonomous weapons capable of taking human life should not operate without human participation in or oversight of the decision-making process (see, e.g., Crootof, 2016; Roff and Moyes, 2016; Santoni de Sio et al., 2018). The term first appeared in this literature in 2001, in an article that discussed the inevitable rise of autonomous weapons systems and the accompanying challenge to meaningful human control of those systems (Adams, 2001). There are three broad and inter-related classes of reasons for requiring human participation in machine decision making. The first is rooted in the (arguably) unique capacity of human beings to make moral/ethical decisions, based in empathy and compassion (e.g., Asaro, 2006; Docherty, 2018). The second focuses on the ascription of legal responsibility or accountability (Hubbard, 2014; Calo, 2015; Scherer, 2016). The third class of reasons concerns performance – specifically system redundancy, error detection, and recovery – on the premise that humans can (at least for now) do some things, or do things in some ways, that machines cannot. These issues, of course, are not limited to lethal autonomous weapons: the issue of meaningful human control arises in other, typically high-stakes, contexts. Examples include the operation of autonomous vehicles (Heikoop et al., 2019) and surgical robots (Ficuciello et al., 2019). MHC has also been cited as a key challenge in the continuing development of robust and humane artificial intelligence systems (Russell et al., 2015; Stephanidis et al., 2019).\nMeaningful Human Control: Pros\nThe reasons meaningful human control is desirable in automated systems can be broadly divided into the performance-related and the responsibility-related – the former concerned with how well the autonomous system performs the desired action, the latter with the process of action selection itself.\nHumans are adaptable, which can improve performance particularly on unusual inputs. Because machine learning-based systems are built to perform well on a pre-specified training set, they may underperform on novel or atypical inputs. These inputs may be benign (simply out-of-distribution), yet still yield harmful outcomes. For instance, an automated debt calculation system (\"robo-debt\") run by the Australian government frequently overestimated debts for people with highly variable income streams, who were not considered in the algorithmic design (Henriques-Gomes, 2019). Inputs can also be atypical in malicious ways – adversarial examples are a known vulnerability of computer vision systems (Akhtar and Mian, 2018). These are intentionally constructed to \"fool\" computer vision models into making incorrect classifications, yet appear unremarkable to the human eye (Goodfellow, 2014). In both these cases, human control over the system's response to the \"atypical\" input would allow for superior performance of the human-machine partnership.\nAnother important motivation for meaningful human control is adding redundancy to an otherwise automated system. 
Even on tasks where machine errors are highly infrequent, the character of their errors may differ greatly from human errors, in ways that can lead to catastrophic outcomes. Human oversight introduces heterogeneity to the decision-making process, which can mitigate these risks. Airplane flight provides an example of a well-studied human-machine partnership that displays this characteristic. Airplanes are mostly guided by highly effective automated systems. Yet it is widely considered essential to have pilots \"behind the wheel\" who oversee the autopilot and are able to select between different levels of control in case of system failure (Sheridan, 1987; Mindell, 1999). This is the case despite the downsides: human pilots are a frequent cause of accidents (Shappell, 2017), and their skills can decay over time if infrequently used (Arthur Jr et al., 1998), as they may be under a regime of widespread automation.\nThe most technologically intractable reasons for meaningful human control are moral. Human decisions are imbued with a moral weight that we do not accord to machines, and we commonly rely on humans to interpret vague rules in determining real-world actions in a way that is sensitive to context and human values (Russell, 2015). Humans are seen as having a capacity for moral judgment and empathy beyond any advanced AI. Domains such as legal decision-making (e.g. sentencing, bail, and parole) call for meaningful human control due to their moral dimension, despite some evidence that algorithms can predict recidivism as well as or better than expert human decision-makers (Kleinberg, 2017).\nAutomated systems with no human control also raise concerns about legal liability and accountability. For example, if a robot harms a person, who should be held responsible and liable for compensation? Possibilities include the manufacturer, the programmer(s), the user, and the robot itself. This is a real-world scenario, which courts have already addressed to some extent (Calo, 2015), but the prospect of increasingly intelligent, autonomous, interacting systems – especially those capable of ongoing learning from their environment – will create many legal and financial uncertainties. Under American law, for example, the situation of an autonomous system causing harm in a way that was not intended or foreseeable falls into a lacuna, in which it is unclear who, if anyone, would be liable (Hubbard, 2014; Calo, 2015; Scherer, 2016).\nMeaningful Human Control: Cons\nMeaningful human control also has costs, which again can be divided into performance-related and responsibility-related types. These costs, which can range from the minor to the substantial, must be weighed against the benefits when considering whether to implement MHC within a given context. The \"handoff problems\" associated with moving from a fully autonomous system to a human-machine partnership may be substantial (Mindell, 2015). It may also be possible that, in certain cases, human decisions are consistently inferior to or more biased than the machine's choices.\nMany tasks are designed to be machine-driven precisely because of the machine's superior performance or efficiency. Adapting a system for meaningful human control requires creating a monitoring apparatus, and potentially pausing automated routines to insert decision points. This paper focuses on \"high-stakes\" domains, where the consequences of errors can be substantial. 
Yet there are also many tasks for which each decision is so trivial that the loss of performance or efficiency outweighs the potential benefits of human involvement. Networking equipment, for example, autonomously performs repetitive tasks rapidly and accurately, with little perceived need for meaningful human control.\nThe variability and adaptability of human input interfere with predictability and consistency. This is particularly true in highly coupled, tightly interacting systems. Consider the case of an integrated autonomous driving network, in which vehicles hurtle past each other through an intersection at high speed. Safety and predictability are tightly linked in such a scenario, and the uncertainty introduced by the possibility of human control would have cascading effects. Instead of knowing what every other actor will do and planning accordingly, agents would be forced to monitor, project, and react to others' behavior under uncertainty. Contexts can be conceived in which, even if any given human decision were more appropriate than its automated counterpart, the downsides of this decoupling far outweigh the benefits.\nThere are also cases where human decision-making may be undesirable due to the risk of human bias (intentional or unintentional) or ulterior motives. Certain autonomous systems – such as autonomous arbitration systems or escrow services – could derive their usefulness precisely from the lack of human control. The potential for bias in human decision-making may provide an additional impetus for developing autonomous systems without MHC – though care must be taken to ensure that the system does not merely perpetuate and mask existing biases with a veneer of algorithmic objectivity.\nPreliminary Thesis:\nMeaningful human control is important to consider in the context of machine-human partnerships in high-stakes domains. Human involvement may improve system performance by way of redundancy and increased adaptability, and plays an important role in ensuring ethical and legal responsibility. These benefits do not come without downsides, however, including both the potential for improper human choices and the efficiency losses associated with decoupling complex autonomous systems. Finding the context-specific balance between these trade-offs is essential to ensuring effective, robust, and ethical performance in cases of autonomous human-machine partnership.\nReferences:\nAdams, T. K. (2001). Future warfare and the decline of human decision-making. Parameters, 31(4), 57-71.\nAkhtar, Naveed, and Ajmal Mian. \"Threat of adversarial attacks on deep learning in computer vision: A survey.\" IEEE Access 6 (2018): 14410-14430.\nArthur Jr, Winfred, et al. \"Factors that influence skill decay and retention: A quantitative review and analysis.\" Human performance 11.1 (1998): 57-101.\nAsaro, P. M. (2006). What should we want from a robot ethic?. International Review of Information Ethics, 6 (12):9-16.\nCalo, R. (2015). Robotics and the Lessons of Cyberlaw. California Law Review, 103(3).\nCrootof, R. (2016). A Meaningful Floor for Meaningful Human Control. Temp. Int'l & Comp. LJ, 30, 53.\nDocherty, B. (2018). Statement on meaningful human control, CCW meeting on lethal autonomous weapons systems, April 22, 2018. Retrieved from https://www.hrw.org/news/2018/04/11/statement-meaningful-human-control-ccw-meeting-lethal-autonomous-weapons-systems, July 31, 2019.\nEkelhof, M. (2018). 
Autonomous weapons: Operationalizing meaningful human control.\nFicuciello, F., Tamburrini, G., Arezzo, A., Villani, L., & Siciliano, B. (2019). Autonomy in surgical robots and its meaningful human control. Paladyn, Journal of Behavioral Robotics, 10(1), 30-43.\nGoodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. \"Explaining and harnessing adversarial examples.\" arXiv preprint arXiv: (2014).\nHeikoop, D. D., Hagenzieker, M., Mecacci, G., Calvert, S., Santoni De Sio, F., & van Arem, B. (2019). Human behaviour with automated driving systems: a quantitative framework for meaningful human control. Theoretical Issues in Ergonomics Science, 1-21.\nHenriques-Gomes, L. Labour calls on government to scrap 'malfunctioning' robodebt scheme, July 30, 2019. Retrieved from https://www.theguardian.com/australia-news/2019/jul/30/labor-calls-on-government-to-scrap-malfunctioning-robodebt-scheme, August 8, 2019.\nHubbard, F. P. (2014). 'Sophisticated Robots': Balancing Liability, Regulation, and Innovation, Florida Law Review, 66(5).\nKeeling, G., Evans, K., Thornton, S. M., Mecacci, G., & de Sio, F. S. (2019, July). Four perspectives on what matters for the ethics of automated vehicles. In Automated Vehicles Symposium (pp. 49-60). Springer, Cham.\nKleinberg, Jon, et al. \"Human decisions and machine predictions.\" The quarterly journal of economics 133.1 (2017): 237-293.\nMecacci, G., & de Sio, F. S. (2019). Four Perspectives on What Matters for the Ethics of Automated Vehicles. Road Vehicle Automation 6, 49.\nMindell, David A. (2015). Our Robots, Ourselves: Robotics and the Myths of Autonomy. Viking.\nRoff, H. M., & Moyes, R. (2016). Meaningful human control, artificial intelligence and autonomous weapons. In Briefing Paper Prepared for the Informal Meeting of Experts on Lethal Autonomous Weapons Systems, UN Convention on Certain Conventional Weapons.\nRussell, S., Dewey, D., & Tegmark, M. (2015). Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine, 36.\nSantoni de Sio, F., & Van den Hoven, J. (2018). Meaningful human control over autonomous systems: a philosophical account. Frontiers in Robotics and AI, 5, 15.\nScherer, M. U. (2016). Regulating artificial intelligence systems: risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29(2).\nSheridan, Thomas B. \"Supervisory control.\" Handbook of human factors (1987): .\nShappell, Scott, et al. \"Human error and commercial aviation accidents: an analysis using the human factors analysis and classification system.\" Human Error in Aviation. Routledge, 2017. 73-88.\nStephanidis, C. C., et al., (2019) Seven HCI Grand Challenges, International Journal of Human–Computer Interaction, 35(14), , DOI: 10..The post On Meaningful Human Control in High-Stakes Machine-Human Partnerships first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "On Meaningful Human Control in High-Stakes Machine-Human Partnerships", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=1", "id": "dba71634eae51198469a2fdf082fe72d"} -{"text": "AI Without Math: Making AI and ML comprehensible\n\n\n Download as PDF\n\nAbstract\nIf we want nontechnical stakeholders to respond to artificial intelligence developments in an informed way, we must help them acquire a more-than-superficial understanding of artificial intelligence (AI) and machine learning (ML). 
Explanations involving formal mathematical notation will not reach most people who need to make informed decisions about AI. We believe it is possible to teach many AI and ML concepts without slipping into mathematical notation.\nIntroduction1\nArtificial intelligence (AI) and machine learning (ML) are transforming industries, societies, and economies, and the pace of change is accelerating. Businesspeople, lawyers, policymakers, and other stakeholders will increasingly face practical questions (e.g., \"Should my firm adopt this AI-related product?\") as well as political, ethical, and legal questions (e.g., \"What limits should we place on law enforcement's use of facial recognition technology?\") related to AI.\nIf we want nontechnical stakeholders to respond to AI developments in an informed way, we must develop ways to help them acquire a reasonable understanding of what AI and ML are and how different techniques work. Many institutions and individuals have been developing teaching materials, but this remains an open problem.\nGeneral overviews are often insufficient for the purposes of sophisticated stakeholders such as judges and regulators. However, in-depth explanations risk presuming a shared basic literacy about AI and ML. And even introductory technical explanations often fall into the trap of overestimating the learner's prior knowledge, particularly the learner's knowledge of math. That may be a mistake. Explanations involving mathematical notation will not reach many people who need to make informed decisions about AI.\nSophisticated but nontechnical stakeholders should be empowered to think about AI and ML at a more-than-superficial level even if they lack the mathematical and technical background of computer scientists. In some cases, curiosity will drive them to \"dig deeper\"; in other cases, a superficial understanding will be inadequate for the decision-making task they face. A judge faced with dispositive motions in a software patent case—or a judge considering whether to overturn a conviction on the grounds that the police used some algorithm or ML technique in their investigation—will have an ethical duty to try to understand the technology at issue in the case if doing so is necessary to achieve a just and legally correct outcome.\nWe believe it is possible to teach many AI and ML concepts without slipping into advanced mathematical notation. As Steven Skiena has written, \"[t]he heart of any algorithm is an idea.\"2 Mathematical notation is rarely the only way to communicate such an idea, and it is possible to explain many ML concepts by analogies and examples while avoiding the terminology of calculus and linear algebra.\nWe have therefore developed a prototype website to communicate fundamental AI and ML concepts to an educated but nontechnical audience. This website can be found at https://www.aiwithoutmath.com.3 The site's name is a bit of a misnomer, as some machine learning concepts (such as \"gradient descent\") are inherently mathematical, and sometimes simple math examples make a concept easier to understand. Nevertheless, we intend to make the site's explanations as accessible as possible to people lacking mathematics backgrounds beyond high school algebra.\nWe aim to promote accessibility by providing simple but rigorous explanations of AI and ML concepts. These explanations go beyond high-level overviews and glossaries and attempt to teach the intuition behind various algorithms. 
To that end, they avoid formal mathematical notation and reduce the time commitment and cognitive effort required to learn the material.\nWe hope to create articles and learning materials that are as clear, concise, and self-contained as possible. We want to enable busy professionals to get the understanding they need without wading through details that may be relevant only to computer scientists or machine learning developers.\nFurther, instead of providing a linear walkthrough characteristic of a textbook or college course, we want to create modular, self-contained explanations of each concept that teach (or at least link to) the necessary background concepts. This will encourage non-linear navigation of the material that is driven by the reader's needs and curiosity. Many professionals lack the time or resources to complete an entire course on machine learning, but need a level of understanding that goes beyond that provided by high-level overviews or glossary entries.\nExamples: Four AI/ML concepts explained without advanced math\nIn the following subsections, we illustrate our idea by explaining four AI concepts without relying on advanced math. Note that we also define all technical vocabulary as it is introduced. We try to give the reader the intuition behind each idea while assuming as little as possible about the reader's prior knowledge.\nA. Rational agents\nIn AI, an agent is something that acts (such as a software \"bot\" or a robot). A rational agent is an agent that tries to achieve the best outcome. Agents are programmed to view some outcomes as better than others. The measuring stick by which the agent determines the \"best outcome\" is called the agent's objective function.\nThe idea that an agent tries to achieve the best outcome is often stated in technical vocabulary. For example, some might say that the agent tries to maximize utility, maximize expected utility, or maximize its objective function.\nB. Naive Bayes classifier\nA Naive Bayes classifier uses probability to classify (categorize) an object.\nWhat is a classifier?\nA classifier is a program that categorizes items as one type of thing or another. For example, a picture might be classified as a \"cat picture\" or a \"dog picture.\" If you feed (input) an image into a classifier designed to distinguish cat pictures from dog pictures, it would output the label \"cat\" or \"dog.\"\nSpam filter example\nEmail systems include \"spam filters\" that automatically determine whether each incoming email is likely to be spam. Their input is an email and their output is the probability that the email is spam.\nEach time we estimate the probability that a particular email is spam, we should take into consideration the overall probability that an email is spam. Suppose that 45% of emails are spam (this is called a base rate). If we know that an email arrived but we do not know anything else about it, we should conclude that it is probably not spam, because most emails (55%) are not spam.\nHowever, each incoming email may have certain features that make it more likely to be spam. For example, every email has a certain number of exclamation points; \"number of exclamation points\" is a feature. We could say that an email containing more than two exclamation points is 80% likely to be spam. 
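For readers who do want to see this reasoning spelled out, a few lines of code make it concrete. The feature frequencies below are invented for illustration (chosen so the result matches the 80% figure just mentioned) and are not taken from the AI Without Math site itself.

```python
# A small worked example of the reasoning above. The feature frequencies are
# hypothetical; they are chosen so the result matches the 80% figure in the text.
p_spam = 0.45          # base rate: P(spam)
p_ham = 1 - p_spam     # P(not spam)

# Hypothetical values a trained model might have learned from labeled emails:
p_feature_given_spam = 0.44   # share of spam emails with more than two '!'
p_feature_given_ham = 0.09    # share of non-spam emails with more than two '!'

# Bayes' rule: weigh how well each class explains the observed feature.
spam_evidence = p_spam * p_feature_given_spam
ham_evidence = p_ham * p_feature_given_ham
p_spam_given_feature = spam_evidence / (spam_evidence + ham_evidence)

print(round(p_spam_given_feature, 2))   # 0.8, i.e., such an email is 80% likely to be spam
```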
We can keep identifying more features and comparing them across the emails we see.\nAfter considering many emails which are already labeled as \"spam\" or \"not spam,\" the algorithm will know that some features have certain values for spam emails and other values for non-spam emails. This knowledge is called a \"model\"; generating that knowledge is called \"training the model.\" Once we have a trained model, we can compare the feature values of any new email to the values in the model to estimate whether the new email is spam or not.\nWhat makes Naive Bayes different from other classification methods?\nA major advantage of the Naive Bayes method is that Naive Bayes models are relatively simple and can be trained quickly with a small set of data.\nOne potential disadvantage is that, in the Naive Bayes algorithm, each feature is considered independently of the others. This simplifying assumption is why the algorithm is called \"naive.\" In reality, many variables \"travel together\" and are not independent. For example, if the weather is rainy, it is probably also cloudy. But in a Naive Bayes analysis, \"raininess\" and \"cloudiness\" might be treated as two separate and independent features. For many tasks, however, treating these features as independent does not affect the outcome.\nC. Linear regression\nWhat is it?\nRegression is a way of predicting an output value based on one or more input variables. For example, a regression model might attempt to predict house prices (an output value) based on input variables such as number of rooms, school district, and proximity to the ocean.\nInput variables are also called explanatory variables because they attempt to \"explain\" the reasons for the output value. In the housing prices example, the input variables \"explain\" why each house is sold at a certain price.\nWhat are some business use cases for regression?\n• Given past prices and economic indicators, should we expect copper prices to rise or fall?\n• How might sales change if, instead of investing $100k in TV advertising, $50K is invested in TV advertising, and $50K is invested in social media advertising?\n• If we hire additional doctors, how much can we decrease patient wait time?\n• What are the top five factors that can cause a customer to default on their loan payment?\nWhen should you use linear regression as opposed to another machine learning algorithm?\nLinear regression should be used when (1) your target output value is a continuous numerical value, and (2) you expect a linear relationship between the input variables and the output value. A linear relationship means that if the input variable (e.g., number of rooms) goes up, the output value (e.g., housing price) should go up as well.\nNote that one strength of machine learning and neural networks is that they can learn complex, nonlinear relationships between input data and the target variable. However, more complex algorithms are not necessarily better for a particular task. It is important to choose algorithms whose complexity is similar to the complexity of the underlying data relationship. 
If there is reason to suspect that there is not a linear relationship between input data and a target variable, however, a linear regression model will be too simple, and a neural network or alternative algorithm (such as polynomial regression) will be a better fit.\nHow does linear regression work?\nA linear regression model works by fitting a line (when there is one input variable) or a plane (when there are two or more input variables) to historical data. \"Fitting\" means that the algorithm finds the line or plane which best explains the output value, given the historical data.\nThe image below shows a linear regression model with a line fit between one input variable (x) and one output value. The x-axis shows the value of a single input variable (such as a neighborhood's safety rating), while the dots show the historical data—the value of different output value and input variable pairs (such as house price and neighborhood safety rating). The linear regression model determines the fit between these variables by finding a line which minimizes the distance between the line and the data points. The values of y on the line give the prediction of the output value. For example, a neighborhood safety rating of 20 would predict a house price of 8.\nImage source: Wikipedia.org\nD. Convolutional neural networks\nResearchers designed convolutional neural networks (CNNs) because they needed better tools to process images. Most systems that \"see\" the world—self-driving cars, medical diagnostics, etc.—use a convolutional neural network to do so.\nNeural networks are a class of algorithms that are used in many machine learning problems. Their basic building block is a neuron, which performs a simple operation on its input. These neurons are arranged into layers, which are connected in a network that can perform complex tasks. Engineers have lots of flexibility when structuring layers and connections, making these algorithms suitable for many problems.\nConvolution is a special type of operation that answers the question, \"How much of B is in A?\" where A is often an image, and B is often a pattern. For instance, if A is an image of a house, and B is a horizontal edge, the convolution might return an abstract image of the house showing only its horizontal lines.\nBy connecting many such operations into a neural network, CNNs are able to detect increasingly complex features. For example, in a CNN for face detection, early layers look for edges, intermediate layers look for facial components, and later layers look for full faces.\nFuture work\nWe hope to expand the AI Without Math website to include many more technical topics as well as nontechnical concepts (such as \"explainability\") that form the shared vocabulary of AI researchers. Other ideas for the website include offering alternative explanations for complicated topics (perhaps with some voting mechanism, similar to that used on StackOverflow and Quora, in which readers can upvote and downvote explanations); linking to off-site explanations; and illustrating concepts with multimedia resources such as videos, games, and demonstrations.\nConclusion\nMost resources on AI and ML are either too general or too technical. There are many high-level overviews that can give stakeholders a sense of what these concepts refer to, but many stakeholders will need more than just a broad overview or glossary. 
Some will want (or need) to peek \"under the hood\" of AI and ML technologies to get a basic understanding of how they work and why one might use one technique as opposed to another. There is an urgent need for educational resources that will help more people participate in decisions requiring a more-than-superficial understanding of AI and ML. We hope that the AI Without Math website can become a hub for those resources.The post AI Without Math: Making AI and ML comprehensible first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "AI Without Math: Making AI and ML comprehensible", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=1", "id": "0942a6475e450f245a8c9a3b9bfe27e4"} -{"text": "Siri Humphrey: Design Principles for an AI Policy Analyst\n\n\n Download as PDF\n\nAbstract\nThis workgroup considered whether the policy analysis function in government could be replaced by an artificial intelligence policy analyst (AIPA) that responds directly to requests for information and decision support from political and administrative leaders. We describe the current model for policy analysis, identify the design criteria for an AIPA, and consider its limitations should it be adopted. A core limitation is the essential human interaction between a decision maker and an analyst/advisor, which extends the meaning and purpose of policy analysis beyond a simple synthesis or technical analysis view (each of which is nonetheless a complex task in its own right). Rather than propose a wholesale replacement of policy analysts with AIPA, we reframe the question focussing on the use of AI by human policy analysts for augmenting their current work, what we term intelligence-amplified policy analysis (IAPA). We conclude by considering how policy analysts, schools of public affairs, and institutions of government will need to adapt to the changing nature of policy analysis in an era of increasingly capable AI.\nIntroduction\nPolicy analysis is a common function in government in which public servants provide support for decision making with the aim of contributing to better decisions than would be made in the absence of such analysis. While decision makers have always relied on this support, the modern concept of the professional, multidisciplinary, policy analyst operating somewhere between the social scientist and the political operative was articulated by political scientist Harold Lasswell1 and has endured in much the same form since.\nThis workgroup was formed around the question can the policy analyst be replaced by artificial intelligence, and what are the implications for public policy and governance should this capacity be developed?2 As AI improves, can we imagine the displacement of human policy analysts by machines that understand questions asked by a decision maker, scour relevant databases for applicable knowledge, apply machine algorithms against diverse data to compute optimal strategies and make predictions about the impacts of various policy options, return an instant briefing, and engage in conversation about the issue with the decision maker?3\nThis note details the issues that emerged during our workgroup discussions. We start with a description of the current model for policy analysis and identify from there the design criteria for an AI replacement as the basis for identifying the specifications for such a system. 
Assuming that were developed and deployed, we consider how an artificial intelligence policy analyst (AIPA) would operate in the current model, leading to an assessment of its limitations. Rather than propose a wholesale replacement of policy analysts with AIPA, we focus on the use of AI by human policy analysts for augmentation and amplification of their skills—what we term intelligence-amplified policy analysis (IAPA). Thus, we revise our starting premise by considering how AIPA can serve as a supplement to the traditional human policy analysis competency. We conclude by considering how policy analysts, schools of public affairs, and institutions of government will need to adapt to the changing nature of policy analysis in an era of increasingly capable AI.\nWhat is a Good Policy Analyst?\nPolicy analysis involves a range of activities including: identifying public concerns and determining their condition; assembling evidence, and analyzing potential public policy responses; projecting outcomes and developing strategies for dealing with trade-offs; constructing and evaluating options for addressing the problem; assembling bureaucratic and civil society coalitions to support policy formulation and implementation; communicating recommendations to decision makers; implementing the collective intention expressed in public policy decisions; and evaluating policies to measure effectiveness or value, and inform future responses to comparable policy problems.\nDuring the first quarter century of the profession, logical positivism was dominant4 with a focus on the analysis of problems and potential interventions using techniques such as descriptive statistics, inference testing, modeling, operations research, systems analysis, and cost-benefit analysis becoming staples of the profession.5 Despite these advances, debates over the real, perceived, and proposed role of the policy analyst have coloured the profession's second quarter century. 
Stemming from some high profile failures of quantitative policy analysis to solve complex public policy problems, the post-positivist policy analysis movement called for a balancing of softer skills such as participatory design, stakeholder involvement, citizens' input, and qualitative methods alongside technical mastery.6 Graduate programs in public affairs continue to develop their students in a range of skills, but policy analysis remains a core competency for budding public servants7 — though whether or not this activity will continue to be valued in practice is less clear.8\nThere is a rich literature on what policy analysts should do, but less recent research on how policy analysts actually operate in practice.9 One survey of government policy analysts found that—when asked to rank five policy analyst archetypes (connector, entrepreneur, listener, synthesizer, technician) in order of how they understood and practiced their profession—the \"synthesizer\" archetype (defined in part as \"consulting various sources to understand how a problem is conceptualized … develop recommended ways to deal with the problem\") was most strongly identified with.10 We began our design considerations for an AIPA starting from the premise that much of the work of a policy analyst today involves this synthesis activity of responding to a question from a decision maker by finding information (which, for the practicing policy analyst today, usually starts with a Google search,11 complemented by social media listening12 and feeds from news aggregators) and synthesizing it in an easily-digestible form of decision support (e.g., a two-page briefing note). And while the \"technician\" archetype (defined in part as \"locating of primary raw data sources in order to undertake statistical policy research\") was ranked lowest in the above-noted survey, we also note the resurgence of policy analytics as a specialized input into evidence-based policy making13 and consider how an AIPA would support data-rich policy analytics in addition to information synthesis activity.\nPolicy analysts also engage in activities as a connector (i.e., developing cross-government and stakeholder support for policy solutions), listener (i.e., understanding how citizens and stakeholders feel about a specific policy issue), and policy entrepreneur (i.e., considering new conceptualizations of public problems, and developing creative and innovative solutions). Professional and experienced policy analysts develop domain expertise and situational awareness of a range of technical details in governing, the unwritten rules of how bureaucratic systems operate, and the political context that drives public policy concerns. Operationally, policy analysts maximize their principal's reach by engaging with, being responsive to, and channeling the politician's perspective. They protect their principal's interests by asserting the politician's perspective in internal and external fora, spotting potential threats, and addressing challenges before they become barriers. Policy analysts also interface with constituents, the broader public, stakeholders, interest groups, government colleagues, officials in other governments, and the politician's political colleagues, seeking to eliminate blindspots and highlight opportunities. 
Lastly, in an era of information abundance, policy analysts serve as a filter, guarding against unimportant information and directing the attention of decision makers towards important information.\nParts of the policy analyst's skill set (especially the synthesizer and technician elements) seem amenable to a machine approach to analysis and decision support. We focus on these policy analysis archetypes in assessing how an AIPA can be designed to replace the current model of the policy analyst. However, as we discussed these different views of what policy analysts do, we were reminded of what we knew already: other elements (i.e., connector, listener and policy entrepreneur)—and notably, the trust that develops between the policy analyst and the decision maker, which is necessary for an advisor's advice to have an impact on the decision maker's thoughts—are intensely human activities that do not seem likely to be overtaken by artificial intelligence in the foreseeable future.14 This foreshadows our conclusion, that a wholesale replacement of the policy analyst is not likely in the foreseeable technology environment. This does not, however, mean that AIPA that supports and supplements the work of human policy analysts is not achievable nor useful. However, we propose the idea of intelligence-amplified policy analysis (IAPA)15 as a near-term goal.\nDesign Criteria for an AIPA\nAssuming the first task of the policy analyst is to synthesize information in response to a question posed by a decision maker, can AI be feasibly designed to replace this activity? We also considered the design of a technician-type AIPA, either as a direct machine interlocutor with a decision maker or as an assistant to a human policy analyst. What are the design criteria for an AIPA in performing these roles?\nWe start by sketching three use cases for such a system, asking how an AIPA16 might be asked to perform in a range of circumstances.\nAn Artificial Intelligence Policy Analyst: Use Cases in Three Orders of Government\nFederal: 5G wireless and the hard decision of whether to ban Huawei\nOur current standard for mobile networks—fourth generation, or 4G—will soon be overtaken by fifth generation, or 5G, wireless telecommunications. One issue has come to dominate the discussion around 5G: the Chinese telecommunications giant Huawei. The United States claims that Huawei is a threat to national security because of the as-yet-unproven claim that Huawei equipment contains a backdoor, and the data that moves through their equipment could be made available to Chinese intelligence services.\nThe Canadian federal government is currently considering Huawei's possible role as a vendor in the development of Canada's 5G systems. The government's decision is complicated by a number of factors: Canada's current, tenuous relationship with China; Canada's ongoing challenges with the United States on a number of trade issues; the Canadian tradition of internationalism, and of leading by keeping in step with others; domestic economic considerations; and uncertainty as to whether Huawei equipment poses a risk. If Huawei equipment does contain a backdoor, Huawei's central role in 5G could undermine the entire security of wireless telecommunications networks.\nIt is reasonable to assume that an AIPA could understand the questions being asked (\"Hey Siri Humphrey, should Canada ban Huawei products in its developing 5G infrastructure?\") and synthesize from a search of available materials a two-page briefing note on the relevant issues. 
This note is unlikely to be superior to what a human policy analyst could do, although it would likely be completed more quickly. Neither approach should conclude with an absolute recommendation, given the inherent uncertainty; rather, a small number of options should be identified, with the pros and cons of each. As with the human analyst, the decision maker could not be certain what biases may have influenced the content or emphasis in the briefing note.\nAs for technical policy analytics, we find little scope for an AIPA. As this decision is essentially binary – either ban Huawei equipment or allow it – there is little to nothing that can be experimented with before the decision. And even after the decision, the impacts are not of a big data variety but are rather significant event types (e.g., a major security breach from an exploited backdoor vulnerability; an escalation in Chinese displeasure should Huawei be banned, or American displeasure should it not be banned) that are better assessed through human analysis.\nProvincial: Emergency room wait times\n\"Hey Siri Humphrey, how can we improve emergency wait times in the province?\" asks the Health Minister. To answer this policy question, a wide range of data would be needed, including hospital administrative conditions (with variables such as shift structures, room capacity, equipment, support infrastructure, and expertise), community demographic data and public health conditions, and the availability of alternative health service delivery options such as health advice via telemedicine, community practitioners, and specialist physicians.\nAutomating the policy analysis in this case would require gathering all of this data, likely held in separate organizations, and creating a cost-benefit analysis for the province, citizens, stakeholders, doctors, and hospitals. The considerations to health of the people in the province, practical implications such as supply of skilled employees like doctors and nurses, and political considerations such as costs and labour concerns would all have to be weighed carefully. Research and answers to this policy question have been proposed by the medical community, but the political implications and decisions rest with the government of the day. Could Siri Humphrey find a way forward?\nMore likely, the tough political decisions related to cost, changing demographics of the population, and the decentralized implementation challenges have prevented greater policy action in reducing wait times. However, better application and use of data, gathered and synthesized by an AIPA could help inform advisors and politicians as they weigh careful options, and create better opportunities for small-scale policy experimentation to reduce wait times, without the political risk.\nOne approach to the question would focus on short term adaptation to demand for emergency room service, and the need to reallocate healthcare resources while still serving those in need of emergency care and those who don't have a community-based practitioner. Predictive analytics could help predict and plan for surges that may be imperceptible to hospital administrators, though this information only has value if hospital administrators have enough flexibility to respond to surges and lulls. 
AI could shift demand away from overburdened hospitals towards others with excess capacity by proactively steering patients based on an initial triaging of their condition and location (perhaps through a check-in app, although this would assume the Health Ministry can require patients to use the check-in app before being served in an emergency room).\nBut the question also implies a concern for longer-term investment and policy choices that might alleviate the root causes of the problem. Data sharing amongst hospitals, community practitioners, and the Ministry of Health could be used by the AIPA as the basis for policy recommendations, including the synthesis of information from other jurisdictions and settings.\nLocal: Public transportation options\nTransit covers a range of subjects that can stretch the capacity of politicians and their staff, from routing and pricing to integrating mobility innovations into existing systems and tackling systemic inequality in transportation access. Substantive command of transit issues is only one part of managing the brief. At its best, transit policy development brings together technical concerns, public engagement, and maximizing not just mobility priorities but commercial, sustainability, equity, and other interests as well.\nFrom an operational perspective, consider an alternative to how local bus transit services are run. Traditionally, transit planners analyze data (collected through surveys, rider counts, and demographic and traffic studies) and create routes and schedules and assign capacity to meet demand. Prices are set to balance fairness and willingness to pay with what it costs to run the system. Schedules and routes are then occasionally revised based on rider feedback, public consultation, and other data like rider counts. This approach is an example of traditional policy analysis. An alternative approach would be to offer on-demand transit services that use ML and algorithmic dynamic routing to respond to riders' requests for transportation—Uber for public transit, if you will. Such systems have already been deployed.17\nFor longer term planning purposes, automating policy analysis could give policy makers a more comprehensive view of the substantive landscape by integrating qualitative and quantitative data, historical performance and context, contemporary population and community trends, changes in demand for and gaps in service, expressions of public sentiment, and indicators of future demand. By having machines carry out time-intensive research and synthesis, policy staff could have more time to engage in public consultations, manage the complicated politics of transport, and develop more nuanced policy options for their principals. When combined with the output of other tools and technologies, staff and politicians could see how decisions will play out in real life in areas such as congestion, housing access and affordability, environmental impact, and job creation, letting them make subtle adjustments or strategic shifts to benefit favored groups or the general public interest.\nAs noted earlier, policy analysts currently use a standard web search as a part of their information gathering activities. Search is a relatively easy function to replicate in an AIPA. 
For more robust search functions that are not vulnerable to the business model of search-engine provider or gaming and manipulation of results by external actors (including data poisoning),18 recreating the policy analysis search function using something like the IBM Watson system19 would require populating an expert knowledge database that mirrors the information available to the human policy analyst.\nWhile knowledge repositories have traditionally been difficult to populate and keep current,20 the Government of Canada's GCPedia system contains a repository of public servant knowledge21 that could serve as part of a database for training an automated search function. However, expert-populated databases face concerns over potential bias that machine algorithms can replicate and magnify. For example, IBM's celebrated Watson for Oncology system is sold as able to radically change cancer care, by analyzing massive amounts of diverse data (e.g., clinician notes, medical studies, clinical guidelines). Its treatment recommendations, however, are based on training by only about two dozen American clinicians whose hand-coded information on how patients with specific characteristics should be treated is used as the basis for Watson for Oncology recommendations.22\nSocial media listening can help to understand citizen's preferences, experiences, values, and behaviours in response to an actual or proposed policy tool change.23 Upon sending a signal into the policy environment (as either a proposed or real change), social media can be monitored to assess the reaction and adapt the signal in response, with citizen attitudes gauged and observed over time.24 Policy-relevant examples of this approach are steadily increasing.25 This approach to social media listening in support of policy analysis can be automated, built on the initial parameters that guide the machine approach (e.g., what terms to search for, how to evaluate sentiment, how to synthesize comments).26\nSynthesis is a different challenge. While AI-driven narrative writing27 and document synthesis28 is improving rapidly, policy analysts' work involves deciding what information to filter out29 as well as what to summarize and how.30 Those filtering choices reflect the analyst's knowledge of the decision-maker's preferences, their professional judgments, and their biases. If used as training data, this knowledge, judgements, and biases would all become embedded in the AIPA's approach to synthesis, perpetuating and perhaps reinforcing the initial biases.\nAn AIPA could also conceivably act as a technician-type policy analyst, able to query a wide range of data sources, independently develop a model of system conditions and causal mechanisms, and predict future dynamics under either the status quo policy or a specified intervention. Current policy analytic systems scan environmental conditions and apply an algorithmically determined policy to achieve a specified objective.31 As the presence of sensors and devices in the social environment grows, capturing more data across a range of system conditions, opportunities for algorithmic approaches to monitoring and steering will also expand.32\nChallenge: What constitutes relevant data and information?\nIronically, when policy analysts use Google to search for relevant information, they are already using AI. 
Google's RankBrain algorithm uses machine learning to process search results and surface relevant information for users, and it carries its own inherent biases, largely shaped by the business model underlying the search engine.33 The prospect of policy analysts relying on an AIPA to collect and sort information raises concerns about what information is of public relevance and what will become encoded in decision-making.\nEmbracing computational tools for policy work means subjecting human discourse, political valence, and knowledge to the procedural logic that undergirds all computation. As algorithms select what information is relevant to decision makers, they provide a knowledge logic34 — thereby gaining power over the flow of information and the assignment of meaning to it.\nPolicy analysts and builders of the system will need to think critically about what data, methods, and models are appropriate for a given system. These choices are not neutral or objective, and will force policy advisors to make trade-offs. To make these trade-offs effectively, technical expertise will need to be strong not only for initial development but also for continuing adjustments after the system is deployed. The role of the policy analyst will need to evolve to meet this demand.\nThese shifting roles will pose challenges to two key tenets of democratic government: transparency and accountability. How can you be accountable if you cannot explain why certain information was or was not considered for advice? A neural network such as BERT, developed by Google AI Language, is a state-of-the-art model for natural language processing (NLP) tasks like question answering, and might be the most accurate way to complete a given task, but it may not be as explainable as a rule-based or decision-tree method. The appropriate trade-off between explainability and accuracy might depend on the nature of the advice being provided. Regardless of which method is chosen, policy advisors should know which information is being given precedence over other information.\nJust as transparency is an important tenet of democracy, open code and methods are also valued in the software community. But if some citizens better understand the workings of the underlying method, greater transparency might enable them to game the system and effectively gain privileged access to government decision-makers, threatening increased inequality. The trade-offs between transparency and control are thus both important and context specific.\nIn deciding what data sources feed into an automated advisor system, policy analysts must consider the nature and origins of the data, in addition to judgments of relevance. These decisions cannot be fully neutral, but are always subject to power, politics, bias, and blind spots. How does social media distort what people actually care about?35 Who funds the think tank that produced a report? Security issues around data sources may extend to the idea of data poisoning, where adversaries flood the system with misleading data or misinformation. It might be easier to recognize these concerns when analysts' time is freed from collecting data and available for questioning and reflecting on decision inputs.
To improve their overall contextual awareness, policy advisors of the future will need to work in more multidisciplinary teams, including both data scientists and policy leaders.\nPossibilities and Limits: Thoughts on a Feasible Current System\nIt is our judgment that it is feasible to build and use a synthesizer AIPA today, based on the following procedures and design elements:\n1. The AIPA is given a strictly defined question to investigate based on current data; e.g., \"What methods can be used to decrease homelessness in Edmonton?\"\n2. The AIPA performs a search on a series of textual data sources, including government reports, civil society advocacy and analysis reports, peer-reviewed academic journals, and miscellaneous materials sourced through Internet search engines.\n3. The AIPA searches the data for text that may answer the given question. This is already feasible, using a BERT (Bidirectional Encoder Representations from Transformers)36 model trained on a sentence-pair question answering task, as is the case in the Stanford Question Answering Dataset.37\n4. The text snippets containing relevant information are aggregated, together with the source data/articles, and presented to the policy analyst as a list, similar to the output of current legal e-discovery systems.38\n5. The AIPA reviews this information, discards statements it judges irrelevant, and augments these statements with its own independent research. The scoring of each statement as useful or irrelevant is retained as feedback to the system to inform future tasks.\n6. A GPT-2-style (Generative Pre-Trained Transformer version 2) generative text system39 trained on articles and past briefing notes is used to generate multiple draft briefing notes based on the discovered statements. It is important to highlight that this text synthesis is guided by the textual structure of briefing notes in the training data, not a human-level understanding of the question to be answered.\n7. The human policy analyst selects one draft from amongst several as a starting point for the briefing note. Their decision patterns are recorded to inform further development of the system.\n8. A human policy analyst then edits this note, adjusts it for factual accuracy, and makes subtle adjustments based on their emotional intelligence and understanding of the minister. The initial, GPT-2-generated draft and the final submitted note are both saved, to adjust the generative model using transfer learning so future notes more closely approach the style required by the Minister.\n9. The Minister receives the briefing note, and provides feedback to further inform the process (as Ministers routinely do with the work of human policy analysts).\nAssuming such a system is developed and deployed, how might an AIPA operate in our current political and administrative settings, and what might be its limitations?\nThese functions of search, synthesis, understanding, and document generation will become more advanced and reliable over time, due to both general technological advances and progressive specialization to the task at hand. As a result, the process will become increasingly streamlined, particularly for simple queries, so human analysts can focus on problems that require more subtle and complex synthesis of information – such as the question whether Canada should ban Huawei from 5G infrastructure. 
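To ground steps 2 through 4 of the procedure above, the sketch below shows one way candidate answer snippets could be pulled from source documents with an off-the-shelf question-answering model. It assumes the Hugging Face transformers library and uses invented placeholder documents, so it illustrates the general approach rather than the working group's actual implementation.

```python
# A minimal sketch of extracting answer snippets (steps 3-4) with a SQuAD-style QA model.
# Assumes the Hugging Face `transformers` library; the documents are invented placeholders.
from transformers import pipeline

qa_model = pipeline("question-answering")  # defaults to a BERT-style model fine-tuned for QA

question = "What methods can be used to decrease homelessness in Edmonton?"
documents = [
    "A municipal report argues that Housing First programs reduced chronic homelessness.",
    "An advocacy group recommends expanding rent supplements and supportive housing units.",
]

# Keep the best-scoring snippet from each source and present them as a ranked list,
# much like the aggregated output described in step 4.
snippets = []
for doc in documents:
    result = qa_model(question=question, context=doc)
    snippets.append({"answer": result["answer"], "score": result["score"], "source": doc})

for snippet in sorted(snippets, key=lambda s: s["score"], reverse=True):
    print(snippet)
```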
For the human analyst, this process will increasingly prioritize skills related to managing interpersonal relationships, strategic thinking, and understanding their audience.\nAlthough the synthesizer is a dominant archetype in current views of policy analysis, creating briefing notes is a small part of what policy analysts do. Human policy advisors require skills that would be difficult to replicate with an AIPA, such as empathy and emotional intelligence. Human policy analysts must also be attuned to how the decision-maker thinks, in order to appreciate their personal perspective and anticipate their political concerns. They must balance the fragile dynamics of discretion and bias, seeking the equilibrium where the public good meets the public mood. This balancing act is an essentially human process in decision support, because even decisions that clearly appear to be in the public good sometimes require persuading citizens that this is so. Good policy advisors don't just provide the best options to decision makers, but also the frame in which those options should be communicated. The best policy analysts also maximize their reach by engaging with, being responsive to, and channeling the politician's perspective to constituents, interest groups, government officials, and political colleagues. They assert that perspective in internal and external fora, spotting and tackling threats and challenges, and projecting the vision of their principal.\nThere are additional limits to what an AI approach to policy advising and policy making can do. Making use of better information requires the ability and willingness to respond to it. Even with better information, governments sometimes lack the resources or flexibility to respond dynamically – like the hospital administrators receiving predictions of ER demand surges discussed above. Experimentation and design research are key to improving delivery, but not everything can be experimentally manipulated. A/B testing on websites is permissible, but performing randomized controlled trials using social welfare payments is much more problematic,40 even if the condition of social equipoise is met.41\nThe Intelligence-Amplified Policy Analyst (IAPA): Artificial Intelligence as Supplement (Rather Than Replacement) for the Policy Analyst\nGiven the limitations and challenges inherent in moving towards an AIPA, rather than propose the wholesale replacement of policy analysts with artificial intelligence, we argue that the foreseeable future of policy analysis will be one where human analysts use AI to augment and amplify their skills—an approach we term intelligence-amplified policy analysis (IAPA). We thus revise our starting premise, considering how AI can serve as a supplement to traditional policy analysis competencies. As such, we foresee an evolution, rather than a complete disruption, of the strategic policy function in governments (barring near-term development of artificial general intelligence—AGI—and avoiding speculation about longer-term radical advances in machine intelligence). This evolution will require adaptation on the part of human policy analysts, as expectations of their abilities will increase. With supplemental and assistive tools at their disposal, policy analysts may be expected to do more with fewer colleagues than currently.\nGovernments, however, should be cautioned that the IAPA is not a short-cut to cost savings: investments in systems and human capital will be required.
If education, training, and career development opportunities are to give public servants a foundation and framework upon which to build a public service for the digital future, then governments, universities, civil society, and public servants (both current and future employees) must work together to build new competencies, skills, and digital literacies (we conclude on this point below).\nHumans have been increasingly relying on external tools such as AI to achieve higher standards of productivity, and the benefits of this collaboration are being felt across a range of sectors. Yet being affected by AI does not mean that human beings will be replaced by technology; rather, technology will increasingly be used to assist people in their work. Upgrading human skills to align with the requirements of working alongside AI will therefore become essential. Howard Rheingold42 argues that human minds being replaced by machines is not a phenomenon that the world will experience in the foreseeable future. Instead, he argues that the world will very soon experience the ubiquity of intelligence amplifiers, toolkits, and interactive electronic communities (today's social media) and that these tools will change how people think, learn, and communicate. Personal computers were an early manifestation of these amplifiers, opening up new avenues for people to manage the complexities of the modern world. Scaling up to broader usage, an economy can be said to be intelligently amplified when people are effectively trained in using ICTs to enhance their human intelligence. As the use of AI to augment human capabilities extends to sectors such as public administration, it is estimated that by the mid-2030s a third of public sector jobs will be affected by AI, with senior officials and managers predicted to be most affected by automation.43\nIntelligence amplification (IA) involves the adoption of artificial intelligence (AI) by knowledge workers as a complement to, rather than a replacement of, their activities.44 Under an IA framework, human cognitive abilities are made more effective by adopting AI as a tool to support and amplify human intelligence. This view emphasizes the impact of advanced technology in expanding the capability of humans to understand complex problem situations and to derive robust solutions to problems more quickly.45 With the help of robotics and AI technologies, humans can be relieved of mundane work, leaving employees with more time to focus on areas that require intrinsically human skills such as creativity, ingenuity, and emotional intelligence.\nRapidly advancing AI technology is gradually becoming an essential component of working life. Public sector organizations are incorporating AI capabilities to deliver services and increase effectiveness.
The Government of Canada has already started using AI tools to accelerate human capabilities; its digital operations strategic plan envisions the need for government to manage digital tools and technology to augment human skills.46 A recent example can be found in the adoption of AI by Immigration, Refugees and Citizenship Canada to help with the department's backlog of immigration and refugee claims.47 The aim is for AI to eventually deal with refugee applications on its own and, if the project is successful, for front-line immigration officials to ultimately use this technology \"to aid in their assessment of the merits of an application before decisions are finalized.\"48\nInvesting in the Future of AIPA and IAPA\nWe conclude by considering how policy analysts, schools of public affairs, and institutions of government will need to adapt to the changing nature of policy analysis in an era of increasingly capable AI. New skills and training will be required for the effective and knowledgeable use of AI in support of policy analysis. Skills in research design and in investigating the data sources that inform decisions will increasingly be required. Additional skills include human-intensive work such as interviewing, engaging, and consulting to generate qualitative data sources; this legwork will remain the domain of human policy analysts, though the analysis of the resulting data will increasingly be assisted by AI.\nPolicy analysts will be expected to merge traditional research skills with capacity in data analytics. In the near term, this will involve crafting new communication languages between specialists in data analytics and subject-matter-expert policy analysts—a process known as paired analytics. As data analytics tools become more user friendly, generalist policy analysts will not only need to learn how to manipulate these tools but, more importantly, to know when they are using the tools incorrectly. Anyone who has ever used a statistical analysis package knows how easy it is to use a few simple commands to output reams of results that may or may not be meaningful.\nImprovements in technical literacy among analysts will be required. Every analyst working with AI assistance must be able to understand the caveats and conditions that come with recommendations. Present-day AI education, given to current analysts and policy-makers and added to hiring criteria, will prepare the next generation of analysts to correctly interpret AIPA processes and outputs. An important competency for policy analysts to develop is the ability to skeptically interpret AI recommendations and communicate their meaning to others. Human policy analysts will also be required to provide a source of accountability and to ensure that recommendations minimize bias (subject to the acknowledgement that human policy analysts embody their own biases).\nPerhaps surprisingly, the intelligence-amplified policy analyst will have to improve their human-interface skills as much as they will need to adapt their computer-interface skills. As AI frees up their time, they will need to engage in forms of data collection that are more community- and human-centered. To illuminate those who may be digitally invisible49 to machine approaches, policy analysts will increasingly need to improve their street-level bureaucrat skills.\nGovernments will need to invest in and improve the data available to AIPAs.
Data interoperability and sharing between departments and governments are a major limit to the development of AIPA. Such investment might include making explicit the implicit knowledge that analysts use, or developing methods of quantifying \"soft\" data not traditionally available digitally, which would allow AI to act in more culturally relevant ways.\nFinally, successful movement towards IAPA will require acceptance of new processes by institutional and governmental leaders. While they will likely be removed from the day-to-day requirements of knowing how to correctly interpret the output of an AIPA, they must be willing to adapt to new processes for making and acting upon reports.\nNext Steps\nThis note represents our collective response to the question posed at the outset—can the policy analyst be replaced by artificial intelligence, and what are the implications for public policy and governance should this capacity be developed?—largely written during our deliberations at the 2019 Summer Institute on AI and Society. Because of the fast-prototyping nature of the Summer Institute, this note does not perfectly articulate everything that was voiced during the working group's time together, nor what could be accomplished with more time. As should be clear from the foregoing collection of many bullet points and threads of ideas, we have not yet developed a fully coherent narrative to describe the many intersecting issues that our seemingly straightforward initial premise entailed.\nFollowing from the Summer Institute and this note, we look forward to subsequent rounds of debate and writing that will build on the above, with the following specific outputs:\n– The working group will produce a 1,000-word think piece, for an outlet such as https://policyoptions.irpp.org, that speaks to the public policy practitioner community, addressing any developing fantasies that AI is going to displace the policy analysis function while explaining how the profession needs to adapt to changing possibilities. We are targeting a first draft by September 30, drafting collaboratively and video-meeting as necessary. A working title is \"Relax, policy analysts—AI is not going to steal your jobs. But it is going to change how you work\".\n– An academic paper for a proposed Special Issue of the journal Canadian Public Administration.\n– A creative piece (written as a script), tentatively titled \"Yes, Siri Humphrey\", will imagine a fully autonomous AIPA substituting for the character Sir Humphrey Appleby of Yes, Minister and Yes, Prime Minister fame.\n– An academic grant proposal will be developed, assembling a larger team to investigate the technical and applied aspects of the continued development of AIPA.The post Siri Humphrey: Design Principles for an AI Policy Analyst first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "Siri Humphrey: Design Principles for an AI Policy Analyst", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=2", "id": "cef1471aed02904d463880fcb284364b"}
-{"text": "Creating a Tool to Reproducibly Estimate the Ethical Impact of Artificial Intelligence\n\n\n Download as PDF\n\nAbstract\nHow can an organization systematically and reproducibly measure the ethical impact of its AI-enabled platforms?1 Organizations that create applications enhanced by artificial intelligence and machine learning (AI/ML) are increasingly asked to review the ethical impact of their work.
Governance and oversight organizations are increasingly asked to provide documentation to guide the conduct of ethical impact assessments. This document outlines a draft procedure for organizations to evaluate the ethical impacts of their work. We propose that ethical impact can be evaluated via a principles-based approach when the effects of platforms' probable uses are interrogated through informative questions, with answers scaled and weighted to produce a multi-layered score. We initially assess ethical impact as the summed score of a project's potential to protect human rights. However, we do not suggest that the ethical impact of platforms is assessed exclusively through preservation of human rights alone, a decidedly difficult concept to measure. Instead, we propose that ethical impact can be measured through a similar procedure assessing conformity with other important principles such as: protection of decisional autonomy, explainability, reduction of bias, assurances of algorithmic competence, or safety. In this initial draft paper, we demonstrate the application of our method for ethical impact assessment to the principles of human rights and bias.\nScope\nThe purpose of this document is to outline a method for assigning an ethical impact score to AI enabled platforms. One element of shared concern for corporations, and regulatory and soft-law organizations, is design of tools, including technical standards, for reproducible assessment of the ethical and social impact of AI projects. Presently, platforms with artificial intelligence ability are loosely governed by a patchwork of corporate policies and governmental regulations. They are also governed by a network of \"soft law\" requirements, such as standards issued by national standards bodies (NIST), international standards bodies such as the International Standards Organization (ISO), and by professional organizations such as the IEEE (the Institute of Electrical and Electronics Engineers). At present, both the ISO and IEEE are in the process of drafting or obtaining approvals for standards that govern the technical, ethical, and social impact of AI/ML platforms.\nStandards are \"documents that provide requirements, specifications, guidelines, or characteristics that can be used consistently to ensure that materials, products, processes, and services are fit for their purposes\" (ISO). Standards provide stronger guidance than corporate policy or procedure statements. They are documents composed by volunteer experts working under normative principles such as consensus, non-domination, inclusion, and provisionalism.\nBeyond offering a method to ensure compliance, standards can help organizations clarify processes that may otherwise be a \"black box,\" which other stakeholders cannot replicate. Establishing methods that are transparent to multiple stakeholders is particularly important in fields like artificial intelligence or machine learning (AI/ML) – which raise deep social and ethical concerns that may implicate the economic and social sustainability of nations, organizations, or even humankind. In the case of AI/ML, where the technical nature of discussions can make them inaccessible to non-technical experts, having standards to help open the \"black box\" of related discussions, such as ethical impact discussions, is an avenue for much needed trust-building and transparency. 
While decisions about acceptable levels of risk of adverse impact can be forensically reconstructed from design teams' meeting notes, these reconstructions are limited by the detail of records and the quality of reporting tools. A well-characterized process that guides teams through discussions of the ethical implications of AI/ML, which may eventually be taken up as a standard, must go beyond this. The assessment tool we propose aims to guide these discussions, and to provide clear answers to the question, \"what is the ethical impact of this AI-enabled platform?\" – via a process that opens an otherwise inscrutable \"black box.\"\nOptions for Assessing Ethical Impacts\nWhat is \"ethical impact\"? This term is used in many, often vague, ways to describe negative effects of a technology on the lives of the people that use that technology. The ethical impact of a technology goes beyond its simple use, however, and should extend across the whole of the product's lifecycle and the lifespan of users. As understood here, ethical impact is the balance of positive and negative effects that a technology, whether in its developmental, design, deployment, or decommissioning stages, might have on the life choices and life chances of individuals as such or individuals in an aggregate like a company or school community.\nThere are at least two methods for assessing the ethical impact of AI-enabled platforms: a principles-based approach and a theories-based approach. A theories-based approach begins from the standpoint that ethical theories, like consequentialism or deontology, provide decision rules for making decisions under a specific vision of a good life. Used as guidelines for choices about platform impacts, ethical theories are most useful when the inputs, outputs, and effects are well-characterized. Ethical theories are not ideal, however, for making decisions under constraints of considerable uncertainty, wherein the pains and pleasures or roles and responsibilities cannot be clearly measured or integrated. Under uncertainty, a principles-based framework, under which a specific, well-defined principle is accepted axiomatically as an ideal to pursue, provides a more practical alternative approach. Principle-based frameworks avoid deep problems of ethical theory by moving comparisons of inter- or intra-personal utility off the table. It is thus possible to discuss the impact of a product or process in terms of its expected contribution to a specific dimension of a desirable state of affairs.\nThe assessment tool we have designed is intended to be a comprehensive approach to principle-based ethical impact assessment. It includes layers of questions with potential answers scored based on conformity with the relevant normative principle. The tool aims to elicit extensive consideration of a project's potential impacts, not just to provide a \"check-box\" task. Further, the tool is not intended to be a \"one-off\" or \"single shot\" evaluation, but rather to be revisited throughout the development cycle as new technical or human considerations emerge.\nA \"Human Rights First\" Perspective\nInitially, we adopt the perspective, already present in the well-known IEEE Ethically Aligned Design documents, that ethical AI projects must protect human rights foremost (IEEE Global Initiative 2017). This is not to deny the importance of other principles, but to elevate the importance of protecting human well-being as integral to the development and success of an AI-enabled future. 
With respect to human rights, we start from the perspective that the risks and benefits of an AI-enabled project can be evaluated using a set of questions derived from the 30 articles of the UN Declaration on Human Rights.\nArguments for the paramountcy of human rights abound, but there are few articulations of how to measure whether AI-enabled platforms adversely affect the life span, life chances, or life choices of rights holders. We reviewed the 30 components of the UN Declaration of Human Rights to determine whether each component raises specific ethical concerns of relevance to AI. As the thirty articles represent a panoply of legal and cultural issues that go beyond the scope of ethical assessment of AI, we sought to reduce the dimensions to a more manageable set. A team member with deep knowledge of the declaration proposed an aggregation of the 30 articles into five categories: general human rights, rights related to law and legality, rights related to personal liberty, rights related to political choice, and rights related to cultural and social choice. Our working arrangement of the articles into these five dimensions is shown in Box 1 below.\nBox 1: 5 Dimensions of Rights and Associated Articles in the UN Declaration on Human Rights\n\nTo create a set of questions to probe the implications of an AI-enabled project for its potential to contravene any of these human-rights categories, we probed the conceptual schema of the first three articles – the general rights that represent pre-conditions for the remaining 27 rights – to identify distinct considerations within these groups of rights. This exercise generated seven broad guiding questions. For each of these, we created a set of more specific follow-up questions, which address concrete issues related to human rights protections. We list these questions in Box 2 below.\nBox 2: Questions for Assessing the Human Rights Impact of AI-Enabled Projects\n\nThis \"human rights first\" approach brought into stark relief the challenges of crafting questions whose answers can be scored. This challenge arises most pointedly in the case of conceptual questions that admit a broader range of possible answers than a simple yes or no.\nWe then considered alternative principles, to assess the applicability of our method to principles with fewer dimensions, initially using the example of bias.\nAlternative Principles Considered\nMultiple organizations have issued statements of principles intended to govern artificial intelligence. Corporate entities, such as Accenture (Tan 2019), have put forth statements, as have governmental organizations. So too have multiple other organizations, chiefly professional associations in fields related to computer science and AI, such as ACM (Gotterbarn et al 2018) and IEEE.\nThe IEEE, under the remit of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS), has published their Ethically Aligned Design series of documents. 
The principles stated within this series of documents are:\n– Human Rights: \"A/IS shall be created and operated to respect, promote and protect internationally recognized human rights\"\n– Well Being: \"A/IS creators shall adopt increased human well-being as a primary success criterion for development\"\n– Data Agency: \"A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people's capacity to have control over their identity\"\n– Effectiveness: \"A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS\"\n– Transparency: \"The basis of a particular A/IS decision should always be discoverable\"\n– Accountability: \"A/IS shall be created and operated to provide an unambiguous rationale for all decisions made\"\n– Awareness of Misuse: \"A/IS creators shall guard against all potential misuses and risks of A/IS in operation\"\n– Competence: \"A/IS creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation\"\nIn addition, we considered the following principles, which are related to but not explicitly stated within the IEEE framework:\n– Mitigation of bias\n– Algorithmic competence\n– Autonomy and consent for participants\n– Safety\nMultiple organizations within the ecosystem dedicated to ethical artificial intelligence and machine learning have proposed plans to translate these principles into practice. One example is the EU Governance Framework for Algorithmic Accountability and Transparency, which provides specific guidance to translate these two principles into regulatory governance of AI projects (European Parliamentary Research Service 2019). The EU Governance Framework does not, however, give organizations actionable measurements of these principles that would allow reconstructing principle-based decisions. Developing such a resource is the intended final outcome of this Ethical Impact Score project.\nMethod for Evaluating Ethical Impact\nWhether an AI-enabled product or process will have a beneficial or adverse effect on its users will not be fully known until the product or process is used and its uses studied. Anticipating the possible effects on users' relationships to themselves or to other humans—the ethical impact—can be done through imaginatively questioning the developers about their expectations, then judging the answers to the questions given. Previous attempts to design an ethical impact measurement mechanism, such as the AI Ethics Toolkit (https://ethicstoolkit.ai/) have adopted the approach of asking questions about anticipated consequences of use.\nThese previously proposed mechanisms take two approaches: either they measure a project's overall level of ethical impact, or they measure a project's adherence with a single principle. Mechanisms in the first group, like the AI ethics toolkit, take the form of questionnaires with answers arrayed along an ordinal scale such as a Likert scale. The second group, like the EU Accountability framework, restricts responses to binary (yes/no) answers. It is our view that separating the scoring mechanism from the normative principles that motivate ethical concerns, for example focusing on auditability or risk, may lead to a \"compliance\" or \"check-box\" focused exercise. 
Providing sufficient specificity in relating questions to principles, and placing questions in the context of a sufficiently rich and reproducible, but numerically driven, scoring system, is a serious challenge that this draft only begins to address.\nScoring Mechanics\nThe Meaning of the Scores\nCreating a numerical scoring mechanism for ethical impacts raises two types of concern: 1) that a numerical score may create a misleading sense of precision or confidence; and 2) that a numerical score may be inappropriate for situations in which human wellbeing is at risk. With respect to the first concern, we stress that scores from the mechanisms proposed here should not be interpreted as establishing any unique threshold of acceptability: a project that receives a score of, say, 66 should not be regarded as more ethical than one with a score of 65. Instead, the Ethical Impact score shows development teams where there may be areas of concern. Through the use of our concept score, principle score, and final score, teams can identify where their projects may be falling short of a principle they aim to uphold. With respect to the second concern, AI-enabled platforms will have an undeniable effect on the lives (life span, life choices) of individuals and groups. The scores in this Ethical Impact Assessment mechanism are not meant to represent a path towards scoring or monetizing the value of those lives affected. The concept questions, particularly as they pertain to particular groups, are not intended as signals of those groups' value to others, including even to an AI system.\nQuestion Design\nThe ethical implications of an AI-enabled product or process cannot be fully captured through answers to one question per principle. Instead, we adopted a tiered approach to question development, to encourage teams to think through multiple layers of considerations, both technical and ethical. In the case of a human-rights-first approach, the degree to which the product or process abides by or contravenes human rights is best captured, we propose, through questions that address each of the five rights categories we identified. These categories of rights lead to high-level questions, which are then augmented with questions associated with each of the concepts in the UN declaration articles and the interaction between those concepts. Similarly for other principles, questions that test the degree to which a product or process captures the principle are supplemented with substantive follow-up questions that aim to prompt users to consider the relationship between technical specifications and ethical considerations.\nThe design of the substantive sub-questions' response options invites a range of scoring options, including dichotomous (0 or 1) or other ordinal scales. The scores for a concept's sub-questions are combined to create a \"raw concept score\": a per-concept formula combines the sub-question scores, weighted by the relative importance of each sub-question, into a 0-5 score, which is then normalized to a 0-100 scale so that all questions carry the same initial weight in the overall principle score. This raw score is transformed into a \"weighted concept score\" based on a within-concepts weighting scheme (see below). All weighted concept scores within a topic are then summed to yield an element within the final Impact Score.\nThe proposed scheme outputs three types of scores:\n1.
Concept scores: a summary score from 0 to 100 for each topic, based on responses to a set of concept questions and the relative importance attached to each question.\n2. Principle scores: a final score from 0 to 100 based on the set of relevant concept scores, considering the relative importance of each concept to the team's beliefs about the principle.\n3. Ethical Impact score: a final score from 0 to 100 based on the set of principle scores, taking into account the relative weight of each principle as determined by the project team\nThere are a number of advantages to this multi-level scoring scheme. First, the scheme allows a quick overall assessment of a given project or product. Second, by disaggregating a given overall score into scores related to specific principles, each of which can in turn be decomposed into responses to principle-specific questions, the scheme provides an expedient way to identify areas of concern that need improvement.\nWeighting Scores\nA key element of our Ethical Impact assessment tool is establishing a general scheme of weighting for a violation of each of the principles. The specific assignment of weights may vary, depending on the specific aim and deployment context of an AI system. There are two weighting schemes corresponding to the two types of scores that this tool will generate: Principle scores and an Ethical Impact score:\nWithin-principle weighting. The main idea is to tease out the concerns that animate a particular question, along dimensions of process and impact. The process dimension pertains to the processes of design, development and deployment of AI systems, with critical attention to potential divergence from industry standards and best ethical practices. The impact dimension pertains to potential adverse impacts of an AI system on the wider population, particularly on vulnerable groups. In each case, criticality is ranked from 1 to 5, with larger numbers denoting a stronger link to the principle in question. By averaging and normalizing across answers to principle-level questions one can assign a weight for a given principle for a project.\nVariation in the number of questions across principles can reduce the effect of some questions on the Ethical Impact Score. To counteract this effect, some questions judged particularly important for a principle can be assigned high negative weights. For example, scoring low on a question like 1b below – did you establish a strategy or procedures to avoid creating or reinforcing unfair bias in the AI system, both regarding use of input data and algorithm design – will alert designers to rethink project elements so their system does not perpetuate bias. Low concept or principle scores, and a low overall Ethical Impact Score, should raise concerns to teams about the tenability of their project.\nBetween-principle weighting. Between-principle weighting will strongly affect the final Ethical Score for the project. Assigning weights to principles is likely to be more project-specific than assigning weights to questions within each principle. We contend that the organization or team developing a system should build internal consensus about these weights. This consensus can be built using various established methods (e.g., Delphi), to incorporate the views of external experts and avoid potential improper biasing of results.\nPrinciple Assessment: Bias\nAs outlined above, we adopt a principle-based approach to evaluating the ethical impact of AI. 
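To make the scoring and weighting mechanics described above concrete, the following is a minimal illustrative sketch in Python. The class names, the 0-5 sub-question scale, and all example weights are our own assumptions rather than part of the proposed standard, and the within-principle weighting is simplified to a single criticality weight per concept (the process/impact dimensions and the penalty weights discussed above are omitted).

# Illustrative sketch of the multi-level Ethical Impact scoring scheme.
# All names, scales, and example weights are assumptions for illustration.

from dataclasses import dataclass, field
from typing import List


@dataclass
class SubQuestion:
    score: float       # answer mapped to a 0-5 ordinal scale (binary items map to 0 or 5)
    importance: float  # relative importance of this sub-question within its concept


@dataclass
class Concept:
    name: str
    weight: float      # simplified within-principle (criticality) weight, 1-5
    sub_questions: List[SubQuestion] = field(default_factory=list)

    def raw_score(self) -> float:
        """Importance-weighted mean of sub-question scores (0-5), normalized to 0-100."""
        total_importance = sum(q.importance for q in self.sub_questions)
        mean_0_5 = sum(q.score * q.importance for q in self.sub_questions) / total_importance
        return mean_0_5 / 5.0 * 100.0


@dataclass
class Principle:
    name: str
    weight: float      # between-principle weight, set by team consensus
    concepts: List[Concept] = field(default_factory=list)

    def score(self) -> float:
        """Weighted combination of concept scores, 0-100."""
        total_weight = sum(c.weight for c in self.concepts)
        return sum(c.raw_score() * c.weight for c in self.concepts) / total_weight


def ethical_impact_score(principles: List[Principle]) -> float:
    """Weighted combination of principle scores, 0-100."""
    total_weight = sum(p.weight for p in principles)
    return sum(p.score() * p.weight for p in principles) / total_weight


# Hypothetical example: a bias principle with two concepts, plus a transparency principle.
bias = Principle("Mitigation of bias", weight=3, concepts=[
    Concept("Input data representativeness", weight=5,
            sub_questions=[SubQuestion(score=2, importance=2), SubQuestion(score=4, importance=1)]),
    Concept("Bias-monitoring procedures", weight=3,
            sub_questions=[SubQuestion(score=1, importance=1)]),
])
transparency = Principle("Transparency", weight=2, concepts=[
    Concept("Decision traceability", weight=4,
            sub_questions=[SubQuestion(score=3, importance=1)]),
])

print(round(bias.score(), 1), round(ethical_impact_score([bias, transparency]), 1))

In practice, the weights would come from the team's consensus process (for example, a Delphi exercise), and low concept, principle, or overall scores would flag areas that need rework rather than certify a project as "ethical".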
Some principles, such as Accountability, have already been described by others in terms that are at least partially measurable. Others, such as protection of human rights and mitigation of bias, have not been. In this section, we propose a detailed set of questions, alternative answers, and scores for each answer, to create concept and principle scores for an AI system as it pertains to bias.\nBias\nA major concern in AI ethics is bias: do systems produce different outcomes, whether positive or negative, for identified groups? While considerations of bias are often stated as a unique concern, they are intertwined with the principles of Human Rights, Well-Being, and Awareness of Misuse as described in the Ethically Aligned Design documents. In this section, we develop a tool to score evaluations of the considerations of bias.\nWe adopted an approach similar to the one we used for the \"human rights first\" perspective above, but here we add a potential numerical scoring system for answers to the substantive questions.\n\nReducing bias is only one component of a full evaluation of the ethical impact of AI-enabled projects, of course. Other principles, and interactions among principles, must also be addressed.\nAnticipated Future Work\nPractical tools for ethical impact assessment are needed by multiple organizations, ranging from large professional bodies such as ACM and IEEE to small startups aiming to integrate ethical considerations with their technical work.\nThis draft tool is a starting point to fill this need. As presented here, the project is at approximately 40% completion and significant work needs to be done to accomplish all that is promised in this draft.\nCrucial Near-Term Steps to take the project to 65-70% completion\n– Develop a scoring and weighting scheme for a human-rights-first approach\n○ The human-rights-first approach was introduced in this preliminary paper to illustrate how a complex, high-level principle can be broken into smaller, concept-focused questions that might spur productive conversations about individual and community-level ethical impacts from AI-enabled projects. Identifying a range of possible answers and associated scoring mechanisms will require elaborating examples of human rights violations from other areas of society, including other technology-driven issues. Reasoning from precedent to a scoring and weighting mechanism seems an important component of appreciating the methods of argumentation in human rights law and ethics.\n– Develop guiding questions, component questions, and answer scoring and weighting schemes for additional principles.\n○ Our work on additional principles is limited here by the short time available at the Summer Institute and the difficulty of organizing continuing work in a distributed environment (particularly where the project members struggled to fit this into their schedules at the start of a busy semester). The same issues impeded development of a weighting scheme for the first principle we considered, bias. Our near-term goal is to identify when more of the project team can work on a shared platform, such as a video conference, to address the weighting scheme and other principles.\nFuture Refinement Work to take this project to full completion\n– Beta-testing the usability of this tool in active project development\n○ While the project team expects the usefulness of this tool to be high, we are not certain of the overall time burden or complexity of its use.
Identifying a project team willing to work with us to test this tool is crucial to moving forward on areas of future refinements.\n– Testing usability of this tool in an AI governance environment\n○ The ultimate goal of this project is to move this Ethical Impact Assessment tool into the standards space. This would entail finding a working group sponsor, proposing the standard to that sponsor (including identifying a market for this standard), petitioning for the sponsor's support, establishing a working group, working with the sponsoring organization to develop the standard over a 2-3-year time frame, seeking approval of a developed standard, then drafting pathways for the dissemination and revision of this standard over time.\nReferences\nDalkey, Norman; Helmer, Olaf (1963). \"An Experimental Application of the Delphi Method to the use of experts\". Management Science. 9 (3): 458–467. doi:10.1287/mnsc.9.3.458.\nEuropean Parliamentary Research Service (EPRS), Scientific Foresight Unit. (2019). \"A Governance Framework for Algorithmic Accountability and Transparency\". PE 624.262 – April 2019. Available at: http://www.europarl.europa.eu/RegData/etudes/STUD//EPRS_STU(2019)624262_EN.pdf\nGotterbarn, D. et al. 2018. \"Code of Ethics\". Association for Computing Machinery. Available at: https://www.acm.org/code-of-ethics\nIEEE Global Initiative for Ethics of Autonomous and Intelligent Systems (2017). Ethically Aligned Design of Autonomous and Intelligent Systems. Available at: ethicsinaction.ieee.org\nTan, C. 2019. \"Putting AI Principles into Practice\". Accenture Digital Perspectives. Available at: https://www.accenture.com/gb-en/blogs/blogs-organisations-start-ai-principles-practiseThe post Creating a Tool to Reproducibly Estimate the Ethical Impact of Artificial Intelligence first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "Creating a Tool to Reproducibly Estimate the Ethical Impact of Artificial Intelligence", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=2", "id": "b3cd8c382b033c0b2c13eeed5508921c"} -{"text": "Shortcut or sleight of hand? Why the checklist approach in the EU Guidelines does not work\n\n\n Download as PDF\n\nWhat we did\nIn April 2019, the High-Level Expert Group on Artificial Intelligence (AI) nominated by the EU Commission released the \"Ethics Guidelines for Trustworthy Artificial Intelligence\", followed in June 2019 by a second document, \"Policy and investment recommendations\". Initially our group was going to provide a formal response to the pilot phase of the April guidelines, however, as the pilot is geared more towards the business sector, our group undertook a critical assessment of the guidelines from a multidisciplinary perspective.\nThe Guidelines are aimed at supporting the development of 'trustworthy' AI, by: 1. outlining three characteristics (lawful, ethical, and robust); and, 2. providing seven key requirements (Human agency and oversight; Technical Robustness and safety; Privacy and data governance; Transparency; Diversity, non-discrimination and fairness; Societal and environmental well-being; and Accountability).\nThe Guidelines are a significant contribution to the growing international debate over the regulation of AI. Firstly, they aspire to set a universal standard of care for the development of AI in the future. 
Secondly, they have been developed within a group of experts nominated by a regulatory body, and therefore will shape the normative approach in the EU regulation of AI and in its interaction with foreign countries. As the General Data Protection Regulation has shown, the effect of this normative activity goes way past the European Union territory.\nOne of the most debated aspects of the Guidelines was the need to find an objective methodology to evaluate conformity with the key requirements. For this purpose, the Expert Group drafted an \"assessment checklist\" in the last part of the document: the list is supposed to be incorporated into existing practices, as a way for technology developers to consider relevant ethical issues and create more \"trustworthy\" AI.\nIn what follows we attempt to assess the implications and limitations of the assessment checklist for the global development of 'trustworthy' AI.\nPrologue\nDuring the 1969 Apollo mission to the moon, checklists played such a pivotal role in the logistical operations that astronaut Michael Collins referred to them as \"The Fourth Crew Member.\" As the Apollo mission highlights, checklists are helpful for verifying the presence or absence of factors in many decision-making systems. Since checklists are effective tools for narrowing attention to essential information and eliminating distracting noise, the EU's \"Ethical Guidelines for Trustworthy Artificial Intelligence,\" like many evaluative frameworks, relies on them. Unfortunately, such guidelines may fall victim to the fallacy of the checklist: by over-reductively framing complex problems, they can misleadingly suggest simple solutions to complex, indeed impossible, dilemmas. This poses the danger of creating a compliance regime that allows, even actively encourages, \"ethics washing\". Technology companies, and companies that use AI in their operations, can be incentivized to minimize their legal liability by touting their conformity to the inadequate guidelines, leaving the rest of society to pay the price for policy-makers not endorsing more nuanced tools.\nThe EU's Checklist Approach to AI Trustworthiness\nThe European Commission appointed a High-Level Expert Group on Artificial Intelligence, composed of representatives from academia, civil society, and industry, to support the implementation of the European Strategy on AI. The Expert Group's main remit was to propose recommendations for future policy development on ethical, legal, and societal issues related to AI. The outcome was the development of the Guidelines, with the Implementation List, or checklist, at their heart.\nThe Guidelines are supposed to contribute to a framework for achieving Trustworthy AI across application domains. The group identifies three components to trustworthy AI: it should be lawful, ethical and technically and socially 'robust'. The Guidelines focus on the ethical component, by first identifying the ethical principles and related values that must be respected in the development, deployment and use of AI systems. The Guidelines aim to operationalize the principles, by providing concrete (although non-exhaustive) guidance to developers and deployers on how to create more \"Trustworthy\" AI.\nThe justification of this overall structure stems from the idea that AI is a radically new technology – accurate perhaps in reference to its widespread application, if not to its intellectual lineage – which prompts new regulatory considerations. 
In this light, a step-by-step process starting with a preliminary assessment by means of a checklist can reduce complexity, provide reassurance for developers and deployers, and so help develop trust.\nThe guidelines mention multiple caveats regarding practicability and – of particular importance – regarding what is indicative and what is prescriptive in the structure of the assessment. What also becomes evident upon closer examination is that the Assessment List is a tool that may create a false sense of confidence while offering little concrete guidance. Indeed, it may be seen as an outdated tool that might work in a cockpit, but in an unpredictable field like AI could end up failing.\nTechnical challenges to AI-related checklists\nA key problem for the checklist is that it expects, or indeed requires, concrete answers to technical questions that are so nuanced that any answer given is necessarily partial. The partial nature of the available answers leads to a certain amount of unavoidable uncertainty, which the checklist does not explain how to navigate. Yet navigating this uncertainty is in fact the area where guidance is required in order to establish trustworthy AI. We elaborate on this dilemma with two examples: explanation and bias avoidance.\nThe guidelines ask practitioners to \"ensure an explanation as to why the system took a certain choice resulting in a certain outcome that all users can understand.\" Currently, there is no way to definitively ensure that uncontestable explanations exist for a model. Generating explanations is an area of active research, and there is currently no consensus among experts as to what qualities make a good explanation, or even what an explanation consists of. Different methods of generating explanations provide different guarantees; they are often incomparable to each other (and can disagree) because they are based on different baseline assumptions. Thus, they are all partial explanations, with different strong and weak points. Further, the very notion of interpretability is often at odds with values such as security and privacy of information, which can lead to tensions in checking off boxes for both of these desired qualities.\nSuppose an individual interacting with an AI system received an adverse outcome and wants an explanation for why the model exhibited this behavior. An example of this may be a resume scanner for job recruitment rejecting an individual from recruitment possibilities at a certain company.\nOne form of explanation consists of alternate inputs with contrasting results – i.e., other resumes that were accepted by the given company's resume scanner. The individual can then intuit why the model made a given choice based on the differences they perceive in the two resumes. This type of explanation is highly intuitive and easy to understand, but it has its problems. One such problem is that there is no guarantee that any given difference perceived between the two resumes is in fact the model's reason for approving one and not the other. For example, it could be that the rejected resume was from a female candidate and the alternate input provided for explanation was from a male candidate. This may be the biggest difference perceived by the applicant, but it could be that the model is in fact not using this feature at all, and that the change in hiring outcome was instead based on a tiny difference in work experience, if the model weights that feature very heavily.
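To make this failure mode concrete, here is a toy sketch. The features, weights, and threshold below are invented for illustration and are not drawn from any real screening system.

# Toy illustration of the pitfall just described: the most salient difference
# between a rejected and an accepted resume need not be the model's actual
# reason. All features, weights, and the threshold are invented.

WEIGHTS = {
    "years_experience": 5.0,   # weighted very heavily by this hypothetical model
    "num_publications": 0.5,
    "gender_male": 0.0,        # present in the data but not used by the model
}
THRESHOLD = 40.0

def accepted(resume):
    score = sum(WEIGHTS[f] * resume.get(f, 0.0) for f in WEIGHTS)
    return score >= THRESHOLD

rejected_resume = {"years_experience": 7.6, "num_publications": 3, "gender_male": 0}
accepted_resume = {"years_experience": 8.0, "num_publications": 2, "gender_male": 1}

print(accepted(rejected_resume), accepted(accepted_resume))  # False True
# The visible difference (gender) carries zero weight; the outcome is driven
# almost entirely by a 0.4-year difference in experience, which the model
# weights heavily.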
Thus, while this type of explanation is intuitive to understand – it simply involves comparing two inputs – it does not guarantee that the correct reason for the difference in outcome between the two inputs is obvious from the comparison. Secondly, in the interest of privacy, which is another pillar of trustworthy AI according to the EU guidelines, one may want to give not a real alternate input for comparison but a fabricated one. Depending on how this alternate input is fabricated, it may fall outside the distribution of the hiring model's training set, making the model's response to it inaccurate and the entire explanation unreliable.\nAnother form of explanation is the feature-based explanation, which seeks to highlight which features (i.e., parts of an input – years of work experience, age, name on a resume) contributed the most to a model's output. In a linear model, this is straightforward: each feature has a weight associated with it, and so the pathway from input feature to model decision is clear. However, releasing this information is a security risk that needs to be taken into account when using a linear model. Further, many applications, such as those involving images, require much more complex models. With these more complex models, feature-based explanations become more complicated, as even the question of what the features are becomes unclear: the model itself creates its own features, as an intermediate step in computation, as a means of understanding the input. Some feature-based explanations, in the interest of having more interpretable results, attribute importance to features that may not actually be used by the model. This is because the \"features\" investigated are defined by some independent process (e.g., a person segmenting an image) that is not necessarily the process the model uses [Simonyan, Selvaraju]. Conversely, some more cautious forms of explanation potentially miss features that are used by the model.\nBeyond this, depending on which baseline assumptions are used in determining which features are most important [Dhamdhere, Kindermans, Sundararajan], different feature-based explanations may lead to different results, even if they agree on which features are to be used. It is unclear which is better or more reliable than another. Each method currently available has its own redeeming qualities, but also its own blind spots – and it is often confusing to use more than one type of explanation together because they can provide contrasting results.\nFurther, all types of explanations can be interpreted widely: there is no one way to interpret the common examples given or the influences of the features returned by a certain method. Even given the same explanation, researchers can come to contrasting conclusions about the behavior of a model. Thus, any explanation necessarily contains some uncertainty.\nNavigating these nuanced tradeoffs – intuitiveness versus accuracy of an explanation, and privacy/security versus interpretability – is a key place where guidance is required but not supplied by the checklist. In order to make use of the methods currently available and navigate the inherent uncertainty of the answers they provide, we need to develop answers to questions such as: What purpose is the explanation supposed to serve? Is it for those interacting with the AI system? Is it so that they can understand what they can change about the input in order to get a different result? Is it so that they can be assured there was no mistake?
Or unfair bias? Different explanation methods are suited to different applications. Once the purpose(s) are decided, what are the criteria for the explanation to be deemed satisfactory? At what point, and in what applications, does a lack of explainability prohibit the use of AI in a given context?\nAnother example of uncertainty comes in bias prevention, when the guidelines ask developers to \"ensure a working definition of 'fairness' that you apply in your AI systems\". There are several definitions of what it means for an AI system to be free from bias, or fair [Hardt, Dwork]. While it is necessary to put these processes in place, they will have holes. Different auditing systems and notions of fairness, again, rely on different assumptions, often cannot be used together [Kleinberg], and each has blind spots. Group-based fairness metrics, such as demographic parity and equalized odds, which seek to treat demographic groups similarly in aggregate (e.g., job recruitment rates should be equal across gender), sometimes sacrifice individual fairness, which seeks to treat similar people similarly on an individual level. (A short sketch below illustrates how two such group metrics can disagree on the same data.)\nAdditionally, these methods of checking for fairness often do not give obvious warning signs. Instead, algorithms often differ from each other in slight gradations, since many \"baseline\" or \"fair\" algorithms still have far from perfect scores on these fairness-checking tools. As such, it can be entirely unclear, even to an expert, when to flag an algorithm for exhibiting unfair behavior, outside of blatantly obvious cases.\nEven if we use all compatible fairness notions together, there are still types of discrimination that we lack good tests for, such as certain kinds of discrimination against smaller subgroups. So no matter what kind of fairness definition a developer uses, whether they have caught discrimination in the model is uncertain. We need to have some way to deal with this built-in uncertainty, including defining the contexts where this uncertainty prohibits development.\nExperts of the Future and Unacknowledged Uncertainty\nThe EU assessment list exists among other attempts to manage emergent technologies, such as foresight guidelines for nanotechnology. These are to be used by what Rose and Abi-Rached (2013) have called 'experts of the future', who 'imagine…possible futures in different ways, seeking to bring some aspects about and to avoid others. … In the face of such futures, authorities have the obligation not merely to \"govern the present\" but also to \"govern the future\"' (p. 14). One way of governing the future is to build tools to manage the creation of new technologies, such as a checklist. Such tools, however, can be ill-suited for technologies where it is uncertain whether a box should be checked at all, whether that uncertainty stems from technological, social, or economic factors. The guidelines aim to provide shortcuts for managing complexity, but the shortcuts are inadequate given the affordances, uncertainty, and pace of AI development.\nThe Expert Panel acknowledges that the checklist is not a purely mechanistic enterprise or an exhaustive list. Yet in our view, the aspiration that the checklist integrate best practices in AI development and inform the debate unavoidably gives it a powerful orienting force, caveats about its \"theoretical nature\" notwithstanding.
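As flagged above, the following minimal sketch shows how two common group-fairness checks can disagree on the same data. The records, the groups, and the reduction of equalized odds to a comparison of true-positive rates are illustrative assumptions only.

# Minimal sketch of two group-fairness checks: demographic parity (equal
# selection rates across groups) and equalized odds, reduced here to equal
# true-positive rates. The records are invented purely for illustration.

records = [
    # (group, qualified, model_recommended)
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 1),
]

def rate(values):
    return sum(values) / len(values)

for g in ("A", "B"):
    selected = [rec for grp, qual, rec in records if grp == g]
    tp_rate  = [rec for grp, qual, rec in records if grp == g and qual == 1]
    print(g, "selection rate:", rate(selected), "true-positive rate:", rate(tp_rate))

# Output: both groups are selected at the same rate (demographic parity holds),
# yet qualified members of group B are recommended half as often as qualified
# members of group A (equalized odds is violated).

Which of these criteria should govern, at what tolerance, and for which subgroups is precisely the kind of judgment that a yes/no checklist item cannot make for the developer.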
Alternatively, we could begin to think about the checklist in terms of its effect on people, behavior, and – even more importantly – judicial scrutiny of enterprises' activities in developing and commercializing emerging technologies. The checklist was a tool for the last industrial revolution, not for the complexity, speed, and uncertainty of the next one.\nEthics on the Ground\nThe document overlooks the potential disjunction between idealistic ethical guidelines and their practical implementation – ethics on the ground. This echoes the gap between ambitious regulatory visions and the check-the-box corporate compliance that privacy scholars Deirdre Mulligan and Kenneth Bamberger documented in their seminal book, Privacy on the Ground. It also elides the profound divide between ethics and trustworthiness.\nIt isn't clear what the consequences of these binary questions should be, other than noting that there could be tensions. The checklist as a technology was developed for handling complexity under stress. Experts provide pilots and astronauts with checklists as memory aids. This is not a technology for deciding whether you should build something at all, or for imagining what can be done ethically. Nor is it a technology for dealing with ambiguity. For example, what do you do if the answer to a question is maybe? How do you decide what to do if you can't guarantee that all users understand a decision? Do you then not develop the product at all? Is it likely that a team, having invested significant resources, will stop on a dime because they can't answer a question? More likely, they will simply pass it to the lawyers to finesse.\nThus, while the tool may add \"ethical value\" for the people who consider it, it does not impose sufficiently specific and consequential requirements to create trustworthiness. By virtue of its iterative path towards ethics assessment, the document orients itself toward developers and regulators, not the citizens and users whose lives AI will transform. With so much flexibility in interpreting and implementing the tool, and so much complexity and ambiguity in its target, it is natural that in practice users and other stakeholders will gravitate to simpler versions of the tool. As a result, the checklist is likely to operate as a legalized template to legitimize questionable AI products and practices. A regulatory structure based on checklists will provide some protection, but protection for whom? Users, or an IT industry that has a practice of breaking things and people first, apologizing second, switching regulatory jurisdictions when it can, and paying fines if it has to? That is not a recipe for ethics by design or trustworthy AI. It is a recipe for \"ethics-washing.\"\nOverall, the report's guiding questions are valuable in identifying some of the most salient issues in AI ethics; they help identify issues and problems that all responsible actors in this space should attend to – issues that it would be irresponsible to occlude or downplay. Nonetheless, this specific type of utility, heightened ethical awareness, is not what the Expert Panel claims for its framework. Rather, the Panel characterizes the framework as the foundation for creating and deploying trustworthy AI. This is more than semantics.
Rather, conflating a trustworthy outcome with a more modest ethically attuned one risks encouraging stakeholders to develop misleading, overly optimistic expectations which could ultimately lead to the opposite of trust – namely, betrayal.\nReferences and Useful Links\nMarvin Russell (2017). The History of Checklist. https://hackernoon.com/happy-national-checklist-day-learn-the-history-and-importance-of-october-30-1935-17d556650b89\nAstronaut Checklists. https://cultureinfluences.com/en/process/english-guaranty-or-tool/english-astronaut-checklists/\nMatthew Hersch (2009). The Fourth Crewmember. https://www.airspacemag.com/space/the-fourth-crewmember-37046329/\nAtul Gawande (2007). The Checklist. New Yorker. https://www.newyorker.com/magazine/2007/12/10/the-checklist\nDaniel Greene, Anna Lauren Hoffmann, and Luke Stark (2018). Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. http://dmgreene.net/wp-content/uploads/2018/09/Greene-Hoffman-Stark-Better-Nicer-Clearer-Fairer-HICSS-Final-Submission.pdf\nThilo Hagendorff (2019). The Ethics of AI Ethics. https://arxiv.org/pdf/1903.03425.pdf\nDeirdre Mulligan and Kenneth Bamberger (2015). Privacy on the Ground. https://mitpress.mit.edu/books/privacy-ground\nJohns Hopkins University Center for Government Excellence (2019). Ethics and Algorithms Toolkit (Beta). https://ethicstoolkit.ai/ and https://drive.google.com/file/d/153f0TT_J4cDlr7LRTVKNIDZzu9JUa8ZI/view\nHarvard Cyberlaw Clinic (2019). Principled Artificial Intelligence Project. https://clinic.cyber.harvard.edu/2019/06/07/introducing-the-principled-artificial-intelligence-project/\nPaula Boddington. Towards a Code of Ethics for Artificial Intelligence. https://www.amazon.ca/dp/B077GCSKB1/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1\nJon Kleinberg. Inherent Trade-Offs in Algorithmic Fairness. https://www.researchgate.net/publication/330459391_Inherent_Trade-Offs_in_Algorithmic_Fairness\nM. Hardt. Equality of Opportunity in Supervised Learning. https://ttic.uchicago.edu/~nati/Publications/HardtPriceSrebro2016.pdf\nCynthia Dwork. Fairness Through Awareness. https://arxiv.org/abs/\nKedar Dhamdhere, Mukund Sundararajan, and Qiqi Yan. How important is a neuron? CoRR. https://arxiv.org/abs/1805.12233\nP.-J. Kindermans, S. Hooker, J. Adebayo, M. Alber, K. T. Schütt, S. Dähne, D. Erhan, and B. Kim. The (Un)reliability of saliency methods. ArXiv e-prints, November 2017.\nRose, N., & Abi-Rached, J. M. (2013). Neuro: The new brain sciences and the management of the mind. Princeton: Princeton University Press.\nMukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. ArXiv e-prints, 2017.\nSimonyan et al. \"Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps\"\nSelvaraju et al. \"Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization\"The post Shortcut or sleight of hand? Why the checklist approach in the EU Guidelines does not work first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "Shortcut or sleight of hand?
Why the checklist approach in the EU Guidelines does not work", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=2", "id": "c64e70907b7a3bb9a8343f49941229a9"} -{"text": "AI & Agency\n\n\n Download as PDF\n\nIntroduction\nIn July of 2019, at the Summer Institute on AI and Society in Edmonton, Canada (co-sponsored by CIFAR and the AI Pulse Project of UCLA Law), scholars from across disciplines came together in an intensive workshop. For the second half of the workshop, the cohort split into smaller working groups to delve into specific topics related to AI and Society.\nI proposed deeper exploration on the topic of \"agency,\" which is defined differently across domains and cultures, and relates to many of the topics of discussion in AI ethics, including responsibility and accountability. It is also the subject of an ongoing art and research project I'm producing. As a group, we looked at definitions of agency across fields, found paradoxes and incongruities, shared our own questions, and produced a visual map of the conceptual space. We decided that our disparate perspectives were better articulated through a collection of short written pieces, presented as a set, rather than a singular essay on the topic. The outputs of this work are shared here.\nThis set of essays, many of which are framed as provocations, suggests that there remain many open questions, and inconsistent assumptions on the topic. Many of the writings include more questions than answers, encouraging readers to revisit their own beliefs about agency. As we further develop AI systems, and refer to humans and non-humans as \"agents\"– we will benefit from a better understanding of what we mean when we call something an \"agent\" or claim that an action involves \"agency.\" This work is under development and many of us will continue to explore this in our ongoing AI work.\n– Sarah Newman, Project Lead, August 2019\n1. Characterizing Agency\nJon Bowen\nPhD student in Philosophy, Western University\nSome of the beings we encounter in our environment are inanimate. These things may be pushed and pulled, they may collapse or disintegrate. In each of these cases, the entities are fundamentally passive–if they move or change, one suspects that these movements and changes will be exhaustively explained by appealing to mechanical forces within or without.\nBut there is another kind of entity in our environment. These beings seem to be fundamentally goal-directed. To appearances, they are spontaneous initiators of their own actions. These are animate beings, or agents. The movements of these entities seem to be best explained not by appeal to mechanical causes of their activity, but to the goals that they are striving towards, the beliefs they have about the world, and their desires.\nGiving a precise definition of what animacy or agency consists of is no easy task for the philosopher, but nonetheless we appear to have no difficulty at all recognizing animate motion and distinguishing it from the motion of inanimate objects. Even human infants, it seems, can detect animate motion and differentiate it from inanimate motion in point-light displays, even when occlusions are present.\nBut why should this be the case? Why would it be so difficult to give a theory of intentional action, and yet so easy to detect it? I will set out one suggestion. We do not, as has been proposed, infer intentions, beliefs, and desires as a part of a theory for explaining or predicting behavior. 
Instead, intentional action is behavior with certain distinctive, overt characteristics, which our perceptual systems have evolved to directly perceive. Goal-directed behaviors, I will suggest, are a very real kind of behavior out there in the world with distinctive characteristics. Furthermore, it is important that animals perceive and understand this particular kind of behavior, and sure enough, they are able to do so with astonishing acuity.\nWhat are we saying when we explain the activities of another person (or of a non-human animal) by appealing to their intentions? Here I will draw on an analysis from Dennis Walsh: \"A teleological explanation is one that explains the nature or activities of an entity, or the occurrence of an event, by citing the goal it subserves. A system has goal, E, just in case it exhibits goal-directed behavior toward E. Goal-directed behavior is a gross property of a system as a whole.\" (p. 177)\nWhat this amounts to is not an account of the intrinsic causal etiology of the agent's behavior. Instead, we are locating that behavior in a chain of events that show a certain distinctive pattern. If an agent is trying to do X, then its behavior will flexibly reconfigure itself in the service of that goal. When a dropped object encounters the ground, it will stop. When an agent's initial attempts to pursue some goal are thwarted, that agent will spontaneously and flexibly reconfigure its behavior so as to continue to pursue its goal. A human need not stop at the ground–they can retrieve a shovel, and perhaps a jackhammer or a drill if called for (if they really want to!) This is to say, when an agent is engaging in goal-directed activity, its behavior is robust against perturbations and obstacles in a way characteristically not present in inanimate objects.\nIf there are such systems in nature–systems that will reliably produce effects by marshaling their intrinsic causal capacities in the service of goals–then clearly the perceiving animal would be at an advantage if they could detect them when present! The challenge, from the perspective of an animal's perceptual system, then, is to detect or pick up the information which specifies what the goals of other agents in their acting are. While this might sound like quite a feat, again, this is something we all seem to be very good at.\nIf agency amounts to the capacity for intentional action, and the preceding account of goal-directed behavior is sound, what basis might there be to deny that such a thing exists as a real phenomenon in nature, and a real attribute of natural beings?\n2. The Value of the Concept of Agency in an Increasingly Rational World\nOsonde Osoba\nInformation scientist, RAND Corporation\nProfessor, Pardee RAND Graduate School\nLet us concede that different traditions of thought have different definitions and perspectives on what it means to be an agent or to have agency. There are some common threads that may be useful to highlight. I will focus on one. Most conceptions of Agency are rooted in action, in doing, in affecting a substrate environment.\nA working definition for the purposes of this discussion could go thus:\nAn agent is an entity that is capable of causing or effecting change in its world in pursuit of private (personal) goals.\nThis definition has a couple of features worth highlighting:\nThe primacy of causality:\nWe focus on the idea of causal influence as a defining characteristic. An entity whose whole existence consists of internal ruminations (e.g. 
Ibn Tufayl's floating man) does not meet our criteria. However much sophisticated intelligence it applies to its sense perceptions, it has no influence over its environment. It can achieve no goals in its world no matter how intensely it wills them.\nContextual worlds:\nContext determines the relevant world over which the agent aims to exert influence. Entities can be part of numerous worlds or environments. An entity's agency in each of these worlds is determined by how much causal influence it can exert in each one. We can imagine a measure of power based on what fraction of an agent's environment it can influence.\nPrivate goals:\nPrivate goals may be related to Aristotle's idea of a \"final cause,\" the reason for which a thing exists. The capacity for pure action without goals requires no planning, interiority, or intentionality. We will argue that tracking that sort of capacity is not useful.\nThe concept of agency has proven useful for rooting responsibility and/or liability in entities capable of modifying their actions in response to external influence. Such a capacity for redress or accountability can arguably only be supported by entities capable of goal-oriented behavior.1 Responsibility can be moral or legal (more coercive/backed by institutional power). Agency likely serves other important functions. But the responsibility-rooting function of agency is crucial for influencing or controlling behavior in social structures.\nThis view of agency is explicitly not about independence or autonomy. Agency, in this conception, is closer to a useful fiction that enables the clean assignment of responsibility and dessert. And the default assumption is that agents exist within networks of influence. A degree of external manipulation of agents is the norm, not a novel pattern.\nHistorically, the use of agency for allocating moral responsibility has been a useful but imperfect device: the assignment of moral responsibility has not always tracked causal responsibility. The long tradition of arguments for the justice of gods (theodicies) is a case in point. If evil befalls a person, it must be because that person has misused his agency (\"sinned\") and therefore deserves or is morally responsible for his lot.2 Some superstitions may also be construed to serve a similar function. These failures in causal attribution happen because the world is complex, causal attribution is notoriously difficult, & causal influences can be very subtle when they exist. By contrast, gods are simpler, more convenient causal explanations.\nOur modern conception of moral responsibility is becoming more rational, more scientific. Part of the goal of rational thought is to focus on the true causes of observed phenomena. Weber goes so far as to argue that scientific inquiry is just a rational incarnation of theodicy.3 We have moved from agency based on imperfect beliefs towards a more causal conception of responsibility.\nBut what happens when our rational understanding of reality expands to the point where we are able to track causal influences as finely as possible?4 E.g. recent literature has begun to undermine agency-based explanations of individual behavior in favor of longer chains of causal influence that reach past the mask of more person-focused conceptions of agency. How do we ground responsibility and liability when large swathes of action can be explained away via causal factors outside the individual (e.g. 
the larger explaining value of social influence or manipulation, genetics, environmental factors, etc.)?\nDoes the concept of agency survive this trend?\n3. Human agency in the age of AI\nAbeba Birhane\nPhD Candidate, School of Computer Science, University College Dublin\nProvocation:\nThe question of agency necessarily provokes the question of what it means to be a person and, in particular, what it means to be a person in the age of ubiquitous artificial intelligence (AI) systems. We are embodied beings that inherently exist in a web of relations within political, historical, cultural, and social norms. Increasingly, seemingly invisible AI systems permeate most spheres of life, mediating and structuring our communications, interactions, relations, and ways of being. Since we do not exist in a social, political, historical, and AI-mediated vacuum, it is imperative to ground agency as inherently inseparable from the person as construed in such contingent constituent factors. Depending on the context and the space we occupy in the social world, all these dynamic and contingent factors serve as enabling constraints for our capacity to act. Our capacity to act within these contextual factors varies in degree depending on the space we occupy at a certain time, in a certain socioeconomic context; the more privileged we are, the fewer the potential constraints, and the greater our degrees of agency.\nEssay:\nThe individual is never a fully autonomous entity: rather, they come into being and maintain that sense of existence through dynamic, intersubjective, and reciprocal relations with others.5 Our biology, current social and cultural norms, historical, and contextual contingencies, as well as our physical and technological environment, constitute who we are and our degrees of agency within a given time and context. Increasingly, AI systems are becoming an integral part of our environment – be it the search engines that we interact with, our social media activities, the facial recognition systems that we come in contact with, or the algorithmic systems that sift through our job applications – further adding enabling, or limiting, constraints. (Enabling constraints here might include having a common Western male name, or other demographic traits, that the job application algorithm chooses to include, rather than exclude. These are still constraints, but in certain instances they increase opportunity, rather than decrease them.)\nWe are embodied beings that necessarily exist in a web of relations with others, within certain social and cultural norms as well as through emerging technologies. This means our sense of being, as well as our capacity to act, are inextricably intertwined and continually changing as we move between various spheres taking on various roles. The various factors that constitute (and sustain) who we are influence the varying degrees of agency we are afforded. As we go on about our daily lives, we move between various social and cultural conventions, physical environmental enablers (or disablers) of certain behaviors and actions (as opposed to others), and technological tools that shape, reinforce, and nudge behavior and actions in certain directions (and not others). As a PhD student, my role, responsibility, and capacity to act in my academic environment, for example, is different than that of my role, responsibility, and capacity for action when I am at a social gathering within the immigrant community. 
Furthermore, my interaction with others through Twitter is different from both these other contexts, and is partially determined by the ways the medium affords possible actions and interactions. Our sense of agency, then, is fluid, dynamic, and continually negotiated within these various physical, mental, psychological, technological, and cultural spaces. Discussion of agency, consequently, cannot emerge in a social, technological, and contextual vacuum. Nor is it something we can view as stable or pin on individual persons due to the complex, contingent, and changing factors that constitute and sustain personhood.\nConversely, agency cannot be an abstract term that we attempt to define and analyze in a general, one-size-fits-all manner but one that needs to be grounded in people. People, due to their embeddedness in context, culture, history, and socio-economic status, are afforded varying degrees of enabling constraints. Agency, therefore, is not an all-or-nothing phenomenon but something that varies in degrees depending on individual factors, circumstances and situations. Individuals at the top of the socio-economic hierarchy, for example, face relatively fewer disabling constraints, consequently resulting in a higher degree of agency, and the reverse holds for those at the lower end of society. For example, depending on their socio-economic and educational background, one may be labelled \"eccentric\" vs. \"insane\", a \"lone wolf\" vs. a \"radicalized extremist\", a \"freedom fighter\" vs. a \"terrorist\".\nAgency, AI, and ethical considerations\nLiving in a world of ubiquitous networked communication, a world where AI technologies are interwoven into the social, political, and economic sphere means living in a world where who we are, and subsequently our degree of agency, is partially influenced by automated AI systems.\nThe concept of AI often provokes the idea of (future and imaginary) sentient artificial beings, or autonomous vehicles such as self-driving cars or robots. These preconceptions often assume (implicitly or otherwise) that AI systems are entities that exist independently of humans in a machine vs. human dichotomy. This view, which dominates academic and public discourse surrounding AI is a deeply misconceived, narrow, and one-dimensional conception of AI. What AI refers to in the present context is rather grounded in current systems and tools that operate in most spheres of life. These are seemingly invisible tools and systems that mediate communication, interaction with others and other technological infrastructures that alter the social fabric. These AI systems make life effortless, as they disappear into the background to the extent that we forget their very existence. They have become so inextricably integrated with our daily lives that life without them seems unimaginable. As Weiser6 has argued, these are the most profound and powerful technologies. \"The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.\"\nThese systems sort, classify, analyze, and predict our behaviors and actions. Our computers, credit card transactions, phones, and the cameras and sensors that proliferate public and private spaces are recording and codifying our \"habits\", \"behaviors\", and \"experiences\". 
Such ubiquitous interlinked technological milieu continually maps out the where, when, what, and how of our behaviors and actions, which provide superficial patterns that infer who we are.7 Whether we are engaging in political debate on Facebook, connecting to \"free\" wi-fi, using Google Maps to get from point A to B, searching for sensitive health information on Google, ordering grocery shopping, posting selfies on Instagram, or out in the park for a jog; our actions and behaviors produce a mass flow of data that produce pattern-based actionable indices about \"who we are\". These superficial extrapolations, in turn, feed models that predict how we might behave in various scenarios, whether we might be a \"suitable\" candidate for a job, or are likely to commit crimes, or are risks that should be denied loans or mortgages. Questions of morality (often misconceived as technical questions in need of a technical fix) are increasingly handed over to engineers and commercial industries developing and deploying AI systems as they are bestowed with sorting, pattern detecting, and predicting behaviors and actions. These predictive systems give options and opportunities to act or they limit what we see and the possible actions we can take. And as O'Neil8 reminds us, each individual person does not pass through these processes to the same degree nor do they suffer the consequences equally. \"The privileged are processed by people, the masses by machines.\"\nThese systems not only predict behavior based on observed similar patterns, they also alter the social fabric and reconfigure the nature of reality in the process. Through \"personalized\" ads and recommender systems, for example, the level and amount of options put in front of us varies depending on the AI's decision of \"who we are,\" which reflects the place we occupy in the social hierarchy. The constraints that provide us with little or great room to act in the world are closely related to our socio-economic status and, increasingly, to who our data says we are. Unsurprisingly, the more privileged we are, the more we are afforded the capacity to overrule algorithmic identification and personalization (or not be subjected to them at all), maximizing our degrees of agency.\nSince agency is inextricably linked to subjecthood, which is necessarily political, moral, social, and increasingly digital, the impact of power structures is inescapable. These power relations and the capacity to minimize the potential constraints AI imposes on agency, is starkly clear when we look at the lifestyle choices that powerful agents in Silicon Valley, who make and deploy technology, are afforded. For example, while screen-aided education is pushed towards mainstream schools, the rich on the other hand are reluctant to adopt such practices.9 Agency, the capacity to act in a given technological environment and context varies in degree from person to person. Silicon Valley tech developers, those with power and awareness of technology as constraining powers, are reluctant to let it infiltrate their children's surroundings. Some go so far as banning their nannies from the use of screens.10\nAgency is not an all-or-nothing phenomenon that we either do or do not have. Rather, agency is inextricably linked to our social, political, and historical contexts, which are increasingly influenced by technological forces. These forces grant people varying degrees of agency. 
In an increasingly AI-powered society our capacity to act is limited or expanded based on our privilege; agency is increasingly becoming a commodity that only the privileged can afford.\n4. Agency to Change the World\nMike Zajko\nAssistant Professor, Department of History and Sociology, University of British Columbia\nAbstract\nSocial theory has identified agency with social change and dynamism, bringing tension and possibility to a world where social structures are reproduced. The concept of agency can rescue us from the notion that we are simply the product of our conditioning (zombies of embodied habits), and stands in opposition to ontologies that foreground practices at the expense of subjects. While a humanist conception links agency to purposive action, an expansive (post-humanist) definition elides the question of intentionality, and links agency with action, irrespective of purpose. According to this view, rather than being an exclusive human property, agency is all around us, and society has always consisted of relations between human and non-human actors. We should keep in mind that agency is not absolute or independent, but contextual and relational. If we conceive of agency in this way, we can see the stakes of some of the current debates about AI: to what extent will these systems act as agents of change in our world, and how will AI affect (enhance, extend, supplant, or constrain) human agency?\nAgency to Change the World\nHow did Western intellectuals go from believing agency to be the exclusive property of the human subject, to considering whether algorithmic agents, or AI systems, also have agency? One understanding is that as AI increasingly approximates human intelligence, it attains attributes formerly reserved for humanity. But an argument can be made that AI today, even in its narrowest forms, already exercises agency, and that humans were never particularly special to begin with.\nIt is commonly said that people exercise agency to achieve their desires, goals, and interests. In sociology, agency has long been seen as the source of change in society. Agency is why society does not remain in a steady state, despite all the ways that social structures are reproduced. Without agency, we would all be pawns shaped and manipulated by larger forces that often precede our existence: children molded into reproductions of their parents; compliant, orderly workers reproduced by the educational system to passively accept ideologies that justify why the existing order is natural, desirable, or worthy of being preserved. Agency refers to our ability to change this social structure, to disagree with our parents, use education to advance knowledge, achieve social mobility, critique ideology, and challenge government.\nThere is a longstanding debate in social theory about the relationship between agency and social structure,11 which has largely gone stale and unresolved. But agency continues to provide the tension that prevents a totalizing view of structure. Not everyone agrees that agency is required to understand humanity or the relationship between individuals and society, but social theories that do without agency, or that provide an impoverished view of agency, paint a deterministic picture. Individuals are conceived not as subjects, but through their habits and practices, or as the effects of the social structures that produce them. 
Without agency, we are zombies, automata, or cultural dupes.12\nAmidst some of the debates about agency and structure in the 1980s and 1990s, a new conception (often associated with Bruno Latour and Actor-Network Theory)13 began to take hold. The provocative argument was that agency was not confined to humans, but that society was composed of both human and non-human \"actants.\"14 Agency was defined roughly with action, and the ability to affect the world. If a human worker was replaced by an object (even an inanimate one)15 that could play the same role, then that object similarly exercised agency. Because in many of our interactions with the world, whether in laboratory experiments or farming,16 humans cannot fully predict or control the outcome, nature also has agency – co-creating the world with us. Questions of intentionality and purposiveness are elided through this focus on action.\nWhether in its humanist or post-humanist form, Western theory's interest in agency has also been subjected to significant critique. The idea of an autonomous human subject is arguably a historical invention – a distinctly Western, masculine, individualistic vision of man. Feminist theorists advanced these arguments decades ago, pointing to the often unacknowledged work (disproportionately performed by women) of nurturing and caring for 'autonomous' subjects. Complicating but not necessarily rejecting the ideal of autonomy, these authors advanced a concept of agency that situates it firmly within social relationships.17 Relational and non-human conceptions of agency are common in Indigenous and non-Western ontologies,18 rooted in the understanding that the world is agentially alive, and that humanity is inexorably linked to and dependent on these forces. In an article section titled Columbus Discovers Non-Human Agency,19 three authors influenced by Indigenous feminist literature point to the Eurocentric and settler colonial bias of a recent turn in social theory. In this 'new materialism', authors influenced by Latour and feminist STS have made expansive claims about agency that may be innovative for social theory, but which are quite traditional for unacknowledged indigenous ontologies.\nAt this point it is worth reflecting on these divergent conceptions of agency. Along one dimension outlined above, they run the range from treating agency as a distinctly human property, linked to subjectivity, consciousness and intentionality, to a broader view of agency as whatever has effects on the world (and ourselves). At its broadest, we are not agents at all: distinctions of subjects and objects are dissolved, and the entire universe becomes a quantum soup of intra-active becoming.20 But somewhere between this posthuman extreme and the reassurance of conventional humanism, we can return to a view of agency that encompasses both humans and AI, as agents that change the world, and are entwined in relations with one another.\nToday, developers are building robots that learn about and interact with their environment – an environment that includes other robots as well as humans. Machine learning enables AI systems to pursue goals in ways that humans could not anticipate, even if their goals were initially formulated by humans. We now regularly interact with various kinds of AI, or are subject to decisions made by these systems. Finally, the distinctiveness or exceptionality of the human subject has been repeatedly problematized by advances in AI and in our understanding of other organisms. 
In this context, conceiving of agency as the ability to change the world remains valuable for considering issues common to humanity and AI.\nConceptualized as a means of social change, we can see that agency is not a human birthright, and is not equally distributed across humanity. Structured inequality provides opportunities to some, which are denied to others. Where a person is born, and how they are nurtured or socialized, has great consequences for the choices and capacities available to them – including the impact a person can have on reshaping pre-existing structures. Agency depends on our relationship to these structures, as well as to each other. Hence, agency varies across positions in society and is subject to change. We can engineer technologies and social systems to enhance human agency, to provide capabilities for transformation of individual or collective conditions; or we can design to preserve and reinforce existing power structures. Similarly, it is valuable to conceptualize the agency of AI through its ability to affect the world, change itself, and change human lives, irrespective of consciousness or intentionality. If we conceive of agency in this way, we can see the stakes of some of the current debates about AI: to what extent will these systems be agents of change in our world, and how will AI affect human agency? What decisions will AI make on behalf of humans, and how will these sociotechnical systems reconfigure the possibilities available to us?\n5. Can (and Should) AI Be Considered an Agent\nGabriel Lima\nComputer science undergraduate student, KAIST, South Korea\nProvocation\nIn this short essay, I share my thoughts on the relationship between artificial intelligence (AI) and various definitions of agency. Can AI be considered an agent? More specifically, does AI fulfill requirements set forth in various definitions of agency? Depending on the perspective and definition taken by the reader, the agency of AI could be controversial, unimaginable, or an unquestionable truth. A question that is often neglected, however, is whether AI should be given any agency. Even though we often derive normative statements of value (e.g., should, ought to) from descriptive statements of fact (e.g., can, is), their distinction is extremely important and has been discussed by many philosophers who argue this relation is not necessarily valid and advisable. Finally, I conclude my essay by raising the open question whether AI should indeed be an agent in our society independent of the fulfilment of agency requirements set by various definitions. Instead of focusing on the abilities of an AI, what if we first ask whether it would be beneficial to treat an AI as an agent in society?\nIntroduction\nAgency has never been clearly defined across, or even within, disciplines. Even though it is often related to autonomy, responsibility, or causality, no clear definition agrees on every detail around the complicated issue of who (or what) is an agent.\nIn this short essay, I share my thoughts on the relationship between artificial intelligence (AI) and agency. Can AI be considered an agent? More specifically, does AI fulfill requirements set in various definitions of agency? Depending on the perspective taken by the reader, the agency of AI could be controversial, unimaginable, or an unquestionable truth. A question that is often neglected, however, is whether AI should be given agency. 
Even though we often derive normative statements of value (e.g., should, ought to) from descriptive statements of fact (e.g., can, is), their distinction is important, and many philosophers have argued that this inference is not necessarily valid or advisable. Finally, I conclude my essay by raising the open question of whether AI should indeed be an agent in our society, independent of the fulfilment of agency requirements set by various definitions.\nAs introduced above, agency is not clearly defined, and thus tackling whether AI could qualify as an agent under every single proposed idea of agency is infeasible. In the following short subsections, I will deal with some common sociological, legal, philosophical, and technological definitions of agency and share my thoughts on whether AI could be considered an agent under each definition.\nAn Agent Is a Goal-Oriented Entity\nDoes an AI have a goal? From a computer-science perspective, this is often exactly how we create and train AIs. For instance, in reinforcement learning we teach AIs by rewarding them depending on whether or not they have achieved a set goal. The goals of an AI are not intrinsic, but extrinsic; the programmer sets its goals according to his or her needs. This does not, however, disqualify AI as a candidate for agency. According to the idea that agency is based on goal-oriented behavior, AI could be seen as an agent.\nAn Agent Can Act and Modify Its Behavior Depending on the Environment\nThis definition is often used in computer science when dealing with reinforcement learning, a method used to train AIs. In this setting, we define AI as an agent in an environment with a set of policies and actions. Given that AI is defined as an agent from its conception, it is easy to imagine an AI as an agent after its deployment.\nAn Agent Has an Effect on the World and Drives Social Change\nFollowing this more sociological perspective, an agent must make a difference in society to qualify for agency. In the current \"AI Summer,\" AI is affecting society in ways many did not expect – or did expect, but unfortunately neglected. AI has been disruptive in diverse sectors of society. Job markets having to adapt to the insertion of these electronic entities, and recommendation algorithms controlling what kind of information a certain part of society has access to, are among many examples of novel consequences AI is imposing on society. It is not hard to see an AI as an agent considering its impact on society.\nAn Agent Can Engage With or Resist Colonial Power\nEven though sci-fi scenarios give us the idea that AI can resist the power of its creators, this possibility remains remote. AI cannot resist and turn against its own creator, due to both a lack of ability and the high level of control creators still retain over their creations. AIs are far from engaging with (or inverting) the power pyramid, in which they sit at the very bottom. More importantly, how can they even set that as a goal, if an AI is not currently able to have intrinsic goals? By this conception, AI cannot be an agent since it does not take any action directed at its creators or its hierarchical position.\nAn Agent Is an Entity That Acts on Behalf of a Principal\nWe often build AIs as entities to complete a certain task for humans. These systems act on behalf of a principal, which can be their programmers, manufacturers, or users. The principal sets the AI's goals and the system works towards achieving them. By this conception of agency, an AI is clearly an agent. 
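To make the computer-science framing above concrete (a goal set by a principal, pursued by adapting behavior to feedback from an environment), here is a minimal, hypothetical sketch of a reinforcement-learning-style agent in the spirit of those definitions. The two actions, the reward numbers, and all names are invented for illustration and stand in for no particular deployed system.

import random

# Hypothetical two-action environment: the agent does not know these payoffs.
# The "principal" (here, the programmer) defines the goal simply by choosing them.
REWARDS = {"action_a": 0.3, "action_b": 0.7}   # average reward of each action

def environment_step(action):
    # Return a noisy reward for the chosen action (a stochastic environment).
    return REWARDS[action] + random.gauss(0, 0.1)

class BanditAgent:
    """Goal-oriented agent that adapts its behavior from environmental feedback."""
    def __init__(self, actions, epsilon=0.1):
        self.estimates = {a: 0.0 for a in actions}   # learned value of each action
        self.counts = {a: 0 for a in actions}
        self.epsilon = epsilon                       # exploration rate

    def choose(self):
        if random.random() < self.epsilon:           # occasionally explore
            return random.choice(list(self.estimates))
        return max(self.estimates, key=self.estimates.get)   # otherwise exploit

    def learn(self, action, reward):
        self.counts[action] += 1
        n = self.counts[action]
        # Incremental average: the estimate moves toward observed rewards.
        self.estimates[action] += (reward - self.estimates[action]) / n

agent = BanditAgent(actions=list(REWARDS))
for _ in range(1000):
    a = agent.choose()
    agent.learn(a, environment_step(a))

print(agent.estimates)   # the agent has "discovered" the goal its principal set

Nothing in this loop is intrinsic to the agent: the goal is supplied entirely by the principal who wrote the reward table, and the behavior is shaped entirely by feedback from the environment, which is why the goal-oriented, environment-responsive, and principal-serving definitions above all fit it comfortably.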
Some authors even argue that AI could be a \"perfect agent,\" since it does not have intentions or goals that could deviate from its principal's goals.\nAn issue raised by many legal scholars about AI agency is the usual requirement of a contract to establish a principal-agent relationship. Since AI has not (yet) been granted any kind of legal personhood, it cannot be a party to a legal contract. Consequently, while an AI could be seen as an agent under a principal in economic terms, it cannot qualify as one legally.\nAn Agent Can Bear Responsibility for Its Actions\nCan an AI be responsible for its actions? How would this responsibility even be assigned to an entity that cannot be held accountable for its actions? If an AI causes damage, how can it be punished? These issues are raised by many legal scholars when dealing with the assignment of liability for an AI's legally consequential acts. At present, liability usually falls on the manufacturer or user of an AI, so the AI system itself cannot be seen as an agent.\nBut Should AI Be Considered an Agent?\nAs I have argued above, depending on how you define agency, the idea of AI being an agent can be seen as either reasonable or completely absurd. Given that it is a possibility, should we consider AI as an agent? Even though we often derive whether an entity should receive any consideration from its ontology and capabilities, should we apply the same reasoning when dealing with AI? Would that be beneficial to our society, our legal systems, or even humanity as a whole? Should we even ask that question?\nWith the fast development of AI, we keep dwelling on what each system can and cannot do; we thereby neglect the question of whether this consideration is the right one to focus on. What if, instead of focusing on what an AI can do, we center discussion on whether these entities can be seen as agents no matter how complex, intelligent, or autonomous they might be? Although the abilities and inabilities of current AI systems are important to the discussion of the position of AI in society, this might better be left as a follow-up question to the more immediate inquiry: given the lack of agreement on the definition of agency and regardless of the abilities of these newly developed entities, is it socially beneficial or possible to consider AIs as agents?\n6. How Does AI Affect Human Autonomy?\nCarina Prunkl\nSenior Research Scholar, Future of Humanity Institute, University of Oxford\nAutonomy (autos = self; nomos = law) in the context of human beings refers to the capacity of self-governance or self-determination. This also implies that an individual's actions are neither the product of external manipulation nor the imposition of external forces. Autonomy in this sense plays an important role in Western culture and is often considered desirable for the individual. When we speak about 'autonomous systems' in the context of artificial intelligence, we similarly refer to some sort of 'self-governance', but in contrast to the human case, this 'autonomy' has little to do with acting true to one's own beliefs, desires or motivations. Instead, it refers to the capacity of the system to learn and perform certain tasks without human guidance or supervision. A well-known example of such 'autonomous systems' is the self-driving car, which navigates itself through traffic to bring its passengers from A to B. 
But of course this type of 'autonomy' is not limited to the mechanical realm, and we may easily conceive of virtual 'autonomous systems', such as virtual assistants that organize our lives by making appointments, doing (online) grocery shopping, taking notes, etc. By outsourcing seemingly trivial tasks such as driving and grocery shopping – not to mention some highly non-trivial tasks, such as those now performed by soldiers but that might at some point become automated – we are handing over more and more responsibilities to 'autonomous systems.' How will such a development affect our own autonomy? It is difficult to imagine that those of us who are somewhat indifferent to the joy of driving will feel, or be, less autonomous for having a car that takes us where we want to go faster and more safely. This is at least in part because it is we, after all, who decide where to go and when. But what about when such 'autonomous systems' not only navigate us through traffic, but also through life? When they learn from our behavioral patterns, our preferences, our relationships, to make predictions about, say, what groceries we would like to eat next week? Here the situation is much less clear. Do we gain autonomy by not having to be bothered with boring grocery planning and shopping, and instead having time for the things we would really like to do? Or do we instead forfeit autonomy by not being the ones who make the choices about our nutrition, returning almost to the childlike state of not having to take responsibility for certain aspects of our lives? These are questions we urgently need to ask ourselves.\n7. The Myth of Agency\nSarah Newman\nSenior Researcher, Principal at metaLAB at Harvard\nFellow, Berkman Klein Center for Internet & Society, Harvard University\n\"Ultimately, nothing or almost nothing about what a person does seems to be under his control.\"\n– Thomas Nagel, Moral Luck\nWe look, critically, at how our technologies work, and yet we make assumptions about how we work. What motivates our choices? Are we in control of our actions – and if so, all of them, or only some of them? As our interactions with and dependence on new technologies, including AI, become both increasingly common and invisible, what, if any, agency are we giving up? If we better understand our agency, how does this connect to our responsibility for the technological world we are creating, and the natural world we are destroying? What responsibilities should we have for our own behaviors, and where does accountability reside in automated systems?\nWe use the term \"agency\" to refer to humans and to current and future AI systems, as part of a framework for responsibility and accountability. But what do we mean by agency? Agency is defined differently across disciplines – from computer science to philosophy to sociology to law. Recent developments in neuroscience and AI both call into question the accepted notion of volitional agency as the willed proximate cause of a thought or an action. How might exploring frameworks of agency affect our approaches to ethical standards in the development of AI? A potential blind spot in our analysis of the development of AI lies in the assumptions we make about our own agency, freedom of will, and moral capabilities.\nAre we actually more accurate when describing the behavior of machines – mechanistic, physical, governed by the laws of nature and programming – than we are when we describe ourselves? Things get fuzzy as the mysteries of consciousness and subjectivity arise. 
What is true – and what, if not true, is useful to believe?\nWe believe that we, as humans, have at least some agency. We acknowledge that our degrees of agency differ across individuals and circumstances, increasing or decreasing based on certain constraints, and governed by physical laws – at least those outside of our brains. Most people don't believe that they could defy physical laws: the laws of gravity, survival without food, etc. We accept these physical constraints, those that appear to affect all beings and appear to be external to us, or at least external to our physical bodies. Yes, this agency is highly variable: a healthy adult has more agency, people tend to agree, than a baby, or someone who is very old or unwell.\nWe tend to agree that we do not have the agency to fly, or to travel in time, or countless other fantastical things (barring of course certain mental illnesses, or other illnesses which impinge on mental capacities, which have their own unique relationship to agency and thus also to responsibility). And yet most people now, and throughout history – across cultures, ages, and every other demographic factor – have had a distinct sense of being in control of (at least some of) their behaviors and actions. Even though it is difficult to explain, there is a distinct and overwhelming sense that I am choosing to write these words, that I will choose what to have for dinner, that I could choose to clap my hands, or nod my head, or close my eyes. This sense, as biologically and physically inexplicable as it may be (for a being composed of physical matter that came into existence in a way I certainly did not will), from where did it arise? Is the sense of agency I possess merely a myth? Perhaps a useful, or even inescapable myth? If so, is considering such questions useful or productive?\nFor me, reflecting on such questions is enriching: it enriches my daily life and my experiences. Paying attention to this deep and abiding mystery, somewhat ironically, feels empowering – as if I am curiously contemplating whether the backdrop is a facade, whether this sense of agency is indeed an illusion. I acknowledge the possible privilege of this perspective. Perhaps, if I do indeed have some sort of inexplicable agency, contemplating it is enjoyable because I have (if I have it at all) a relatively high degree of it. But perhaps not.\nSuch topics have fascinated philosophers, theologians, and most humans for as long as we have records of such contemplation. Debates on free will and the existence of agency have nevertheless barely made their way into discussions of the sophisticated new technologies we are creating – particularly AI, in terms of how it is already acting in the world, as well as how it could shape the future. We talk about autonomy and responsibility, but can we use this moment to also reflect back on our assumptions about ourselves? The post AI & Agency first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "AI & Agency", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=2", "id": "dbbd1e318d6361b5c36b12e98c364eb5"}
-{"text": "Could AI drive transformative social progress? What would this require?\n\n\n Download as PDF\n\nAI's Transformative Social Impacts and their Determinants\nThe potential societal impacts of artificial intelligence (AI) and related technologies are so vast that they are often likened to those of past transformative technological changes such as the industrial or agricultural revolutions. 
They are also deeply uncertain, presenting a wide range of possibilities for good or ill – as indeed the diverse technologies lumped under the term AI are themselves diffuse, labile, and uncertain. Speculation about AI's broad social impacts ranges from full-on utopia to dystopia, both in fictional and non-fiction accounts. Narrowing the field of view from aggregate impacts to particular impacts and their mechanisms, there is substantial (but far from total) agreement on some – e.g., profound disruption of labor markets, with the prospect of unemployment that is novel in scale and breadth – but great uncertainty on others, even as to sign. Will AI concentrate or distribute economic and political power – and if concentrate, then in whom? Will it make human lives and societies more diverse or more uniform? Expand or contract individual liberty? Enrich or degrade human capabilities? On all these points, the range of present speculation is vast.\nWhat outcomes actually come about will depend partly on characteristics of the technologies, partly on the social, economic, and political context – what specific technical capabilities, with what attributes, are developed and deployed, and how people adjust behavior around the capabilities. It is a basic doctrine of technology studies to reject technological determinism: technological and socio-political factors interact, and to the extent either predominates in shaping outcomes it tends to be the social and political factors. The interplay between these underpins the well-known \"Collingridge paradox,\" which states a structural challenge to managing technology's societal impacts1: early in development, control efforts are hindered by limited knowledge, because impacts are indeterminate until a technology is stabilized, deployed, and used; while later in development, control efforts are hindered by limited power, because the same development processes that determine and clarify impacts also build political interests in the technology's unhindered expansion.\nIn correctly rejecting naïve or extreme forms of technological determinism, however, these characterizations are often deployed too starkly and universally. Collingridge's paradox of knowledge and control is better understood as a persistent tension than as a categorical statement of impossibility. Moreover, without disparaging the power of social context, technological processes and artifacts are not infinitely malleable: particular technologies have characteristics, which in some cases tend to favor particular uses, applications, or consequences. Kranzberg's (slightly whimsical) first law of technology aptly captures the tension: \"Technology is neither good nor bad; nor is it neutral.\"2 It is subject to influence, and that puts responsibility onto humans to wisely guide its development and application.\nAI may be a class of technologies for which serious consideration of the role of technical characteristics in shaping impacts is especially needed, in view of its labile nature and its potential for profound societal disruption. Two examples from widely separated parts of present debates about AI impacts illustrate the point. First, concerns about impacts of extreme AI advances to general, beyond-human intelligence – and related efforts to develop \"Friendly\" or \"Safe\" AI, or align its objectives with human values (assuming these are known and agreed) – are entirely concerned with attributes of the technology. 
These efforts seek to ensure good consequences, or at least avoid the worst ones, by embedding reliable determinants of benevolent aims, prudence, or other virtues into the technical artifacts themselves. To the extent this program succeeds – a huge assumption, to be sure – it would move concerns about these extreme forms of AI impact out of the social and political domains entirely.\nIt is widely noted, of course, that focusing predominantly on such hypothetical future super-AI risks misleading, by distracting from addressing nearer-term uses and impacts that are also potentially transformative for good or ill – including both the \"now\" and the mid-term.3 Technical characteristics, even abstracted from social context, also matter for these near and medium time horizons – i.e., well before development of AGI or super-AI – when AI will clearly have transformative possibilities but still, at least formally, be under human control. The importance of technological characteristics is evident even in current AI controversies, in both what technical capacities allow and what they require. As an example of impacts driven by what technical capacities allow, AI-enabled advances in data integration and surveillance, especially facial recognition, already present significant threats to privacy and autonomy. These capabilities are being deployed because current actors find advantage in them, of course – a matter of social and political context. But it is the technical performance characteristics that create these new capabilities and make them visible. As an example of impacts driven by technical requirements, present machine learning algorithms require training on large labeled datasets. This requirement has driven two powerful effects and points of concern. It has steered many near-term commercial applications toward decision domains such as criminal justice and health, in which huge individual-level datasets with clearly labeled outcomes are available, with little advance consideration of the high personal, legal, and societal stakes – and high costs of error – that are intrinsic to these domains. And it has replicated, by some accounts even magnified, pre-existing biases present in these training data, and projected them forward into future decisions.\nOur workgroup reflected on the question of AI impacts in broad historical context: in effect, we took seriously the analogy to past technology-fueled revolutionary transformations of human society such as the industrial revolution. But we did this with a perspective opposite to much current debate, considering the prospect for societal impacts that are transformative in scale but beneficial in valence. Speculations about huge societal benefits from AI are common, but tend to be superficial and conclusory, often based on speculative gains in single areas such as medical care or scientific research. By contrast, speculations on AI-driven dystopias are frequent and attention-getting, often with their causal mechanisms characterized in some detail. 
4\nA Historical Analogy\nIn our inquiry, we drew insight and inspiration from a line of commentary on past societal transformations that gets insufficient attention in current debates on technology impacts – despite being a prominent theme in the work of a few distinguished scholars such as Albert Hirschman and Elizabeth Anderson.5 These scholars point out that at the time modern liberal states, market capitalism, and associated technological changes were emerging, these trends were widely heralded as drivers of political and economic progress, relative to aristocratic social hierarchies, promising not just greater liberty – one part of the argument that remains prominent in modern political discourse – but also increased equality (in some accounts also fraternity or comity – to complete the revolutionary triad). These promised and briefly realized happy trends reversed, as technologies of the industrial revolution and their economies of scale drove vast accumulations of capital and separated the previously tight connection between markets and equality. Progressive reactions from governments (e.g., anti-trust) and new organizations (e.g., labor unions, charitable foundations) mitigated these trends to better balance autonomy, prosperity, and equality – a balance that current technological and economic trends are disrupting.\nOur group aimed to re-open this question in the current context of rapid advances of AI. Can these transformative capabilities deliver on the old promise of technology as both liberator and equalizer? Can they do so in a way that is compatible with foundational moral and constitutional principles, and democratic institutions: e.g., freedoms of speech, association, religion, and the press; and private property rights with markets allocating resources through voluntary transactions, except insofar as these implicate external harms or public values (and as Mill reminds us, without drawing these public bases for concern so broadly as to undermine the basic liberty presumptions).6 And if this is all possible, what would it require: what are the key conditions that would mediate the ability of AI to help advance such a happy social vision?7\nIn considering this question, we did not elevate technological characteristics to the exclusion of social and political context; but we did consider technical and political forms of the question separately. First, what technological characteristics of AI systems and applications are likely to promote good societal outcomes? And second, what economic, social, and political conditions – including, concretely, what feasible business models – are likely to promote AI technology developing in these benevolent directions, and be sustainable over time?\nPromising Directions: Technological Characteristics\nIn considering the technological part of the question, we focused on two broad technical attributes that we speculate may help direct AI's transformative societal impact toward the good: one related to the form and structure of decision-making, and one related to the distribution, scope, and number of separate AI agents.\nDecision-making structure: Single-valued optimization, versus robustness and pluralism?\nVirtually all automated decision systems – modern machine-learning systems and conventional algorithms alike – operate by optimizing a single-valued scoring or objective function. 
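As a toy numerical illustration of this contrast between optimizing a single expected score and the robustness-oriented alternative discussed below, consider the following sketch. The options, scenarios, payoffs, and probabilities are all hypothetical, and the worst-case rule shown here is only the simplest stand-in for the much richer robust and adaptive methods described in the text.

# Hypothetical payoff table: rows are candidate decisions, columns are possible
# future scenarios. All names and numbers are invented for illustration.
payoffs = {
    "option_A": {"scenario_1": 20, "scenario_2": 20, "scenario_3": -50},
    "option_B": {"scenario_1": 6,  "scenario_2": 5,  "scenario_3": 4},
}
assumed_probs = {"scenario_1": 0.5, "scenario_2": 0.4, "scenario_3": 0.1}

def expected_score(option):
    # Single-valued optimization: collapse all uncertainty into one number,
    # using a single assumed probability distribution over scenarios.
    return sum(assumed_probs[s] * v for s, v in payoffs[option].items())

def worst_case(option):
    # A minimal robustness criterion: judge each option by how badly it can
    # turn out across the scenarios considered, using no probabilities at all.
    return min(payoffs[option].values())

best_by_expectation = max(payoffs, key=expected_score)   # option_A (score 13.0)
best_by_robustness = max(payoffs, key=worst_case)        # option_B (never below 4)

print(best_by_expectation, expected_score(best_by_expectation))
print(best_by_robustness, worst_case(best_by_robustness))

Under the assumed probabilities, the expected-score rule picks option_A, which is best on average but can fail badly in one scenario, while the worst-case rule picks option_B, which is never excellent but never disastrous.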
Such single-valued optimization is most obvious in the case of known preferences and conditions of full certainty, but similar approaches are used under uncertainty: maximizing an expected payoff or expected utility function, based on specified probability distributions, sampling from specified uncertain parameter inputs, or data assimilation from concurrent observations. These approaches all optimize a single-valued function relative to a single characterization, deterministic or stochastic, of conditions in the world.\nThere is an alternative, less unitary approach to decision-making, which initially grew out of concepts of satisficing, bounded rationality, and multi-criteria decision-making.8 This alternative approach, one prominent form of which is called \"robust and adaptive decision-making\" (RDM), seeks decisions that perform acceptably well over a wide range of possible realizations of uncertainties, rather than performing maximally well under any single specification, whether deterministic or probabilistic. RDM has been applied extensively in diverse decision domains. It has not yet been used in AI or machine learning, but we conjecture that it may have powerful implications, broadly consistent with the progressive social values we aim to advance.\nThe seed for this hopeful speculation lies in the fact that RDM is not just robust over alternative realizations of uncertainty about the world: it is also robust to uncertainty in the decision's goals or the range of values it implicates. RDM thus holds the potential to be more pluralistic, more compatible with both uncertainty and diversity of values – and thus, perhaps, with more inclusive and more equitable AI-driven decision-making. We realize that this is hopeful speculation about potential technical capabilities and the societal implications of their application, not a demonstrated characteristic of AI systems. But while the capabilities and associated questions remain largely unexamined, they clearly merit high-priority investigation.\nThe Number and Orientation of AI Agents: What actors, and what aims, do they serve?\nThere is a wide range of speculation on the number, deployment scale, and objectives of future AI systems, ranging from each person commanding multiple AI agents for different purposes, to a single integrated AI that does everything for everyone. Present AI developments consistently show a much narrower pattern, which is not necessarily well aligned with broadly distributed societal benefits. Most current efforts and most important recent advances have come from large, well-funded organizations: for-profit corporations, free-standing laboratories and institutes, and universities, some surrounded by clusters of small startup firms, with widely varying levels of government financial support and control among countries.\nThe most prominent current deployments of algorithmic decision-making are offered by private, for-profit firms, many of them in settings where the deploying party has a dominant position in the relevant interactions: Amazon toward purchasers and third-party vendors; Facebook toward social-media users; Google toward its service users and data providers; Uber toward drivers. In these settings, users can observe only a small slice of the system's performance in its interactions with them, but have virtually no information about its broader operations, including what it is optimizing. 
In all these interactions, commercial or not, the available evidence – plus common sense – suggests that the systems are optimizing for the interests of the dominant actor, taking account of the interests or welfare of the user only insofar as needed to advance that primary aim – and, moreover, are doing so in ways that take advantage of the dominant actor's market power.\nBut this structure of relationships is not a necessary consequence of algorithmic decision-making or decision-support systems. One can imagine a wide range of other possibilities for how AI systems are deployed, some of them more compatible with a reduced concentration of power. An obvious and widely discussed possibility would be general-purpose AI assistants serving individual people, either unitary systems or integrations of multiple special-purpose systems. Such agents could act as an information source, advocate, and negotiator for their clients in multiple interactions. They could provide suggestions and recommendations, manage the mechanics of transactions, and bargain on your behalf in consumption and other commercial interactions, both present ones and new ones they would enable – e.g., renting out your car, tools, or other costly assets when you do not need them. They could play similar roles in financial and investment decisions, and in labor-market participation. In situations of conflict or interaction with authorities, they could aid you in negotiations and advise you on your legal rights. They could support and advise your political participation, whether through existing channels such as voting and candidate support or through new, AI-enabled processes that combine elements of representative and direct democracy, such as issue-specific proxy delegation or other forms of \"liquid democracy.\" And they could act as a personal coach, helping you make decisions and manage your time in line with your goals and values. The biggest challenge in creating such systems – as we discuss below – would be defining their objectives to reliably align with their user's welfare and values.\nAlternatively, rather than serving individuals, AI systems could operate enterprises or collections of assets, to perform specified functions or advance specified interests aligned with social good. For example, AI systems – perhaps self-owned or self-controlled – might operate businesses or parts thereof such as individual factories; apartment buildings or larger-scale collections of housing or other buildings; public transit systems or other infrastructure components; or specific functions of government decision-making, in cases where the delegation of authority and the specification of relevant values to advance are unproblematic.\nAI agents could also be deployed at higher levels of aggregation, to inform or guide the joint actions of groups of people in pursuit of their shared interests in some specific domain – whether commercial, political, recreational, expressive, religious, or something else. Relative to personal AI assistants, these agents would operate at a scale that is broader in the people whose interests are served, but narrower in the range of functions being pursued or interests being advanced.\nFinally, one could imagine AI agents deployed at the level of the entire polity in some jurisdiction, centralizing decision-making on state functions in pursuit of some legitimate and widely agreed conception of the aggregate social good. There is of course some tension in using AI this way to increase human liberty and agency. 
Can we really claim to advance liberty and agency by centralizing state control? But these are largely the same tensions as attend state authority guided by humans. The state is a strong centralizer of power. But in liberal democratic states, this centralization serves the interests of order and security, including displacing other, less legitimate forms of concentrated power that are likely to arise in the absence of the state. And moreover, liberal states exercise this power lightly, so as to enhance liberty and welfare, only coercing citizens as needed to pursue legitimate public purposes.\nConsidered overall, this collection of potential AI deployments might tend to have an hourglass structure. As the scale of deployment moves from individuals to groups, the functional scope of the AI narrows to specific aims of particular groups; then at the highest level of jurisdictional aggregation, the scope of AI decision-making returns (or can return) to the comprehensively broad, imperfectly known set of interests that are the legitimate purview of state authority.\nExamples of large-scale social reorganizations that would potentially be feasible with such a collection of AI agents would include the following:\n– Breaking up monopolistic social-media platforms into multiple distinct platforms, each managing members' interactions by internally agreed rules and mediating the interactions between insiders and outsiders. AI would be used to facilitate such a breakup by overcoming the incumbent advantage due to network externalities. This is already happening in a small way, with the growth of new social media platforms such as Diaspora and Mastodon with commitments to stronger privacy protections than present platforms.\n– New ride-sharing platforms, in which AI is deployed not as an instrument of the network's monopoly (or the Uber/Lyft duopoly) over drivers, but instead deployed to serve drivers and driver collectives, interacting with multiple counter-parties (current ride-share companies, potential new entrants, and others), and with riders, who in turn might be interacting with the system through their personal AI assistants. AI in such settings could optimize contractual terms to maximize shared value, while also equitably distributing surplus value between drivers, riders, and providers of other factors of production – including paying returns to capital and managerial services, but on competitive rather than monopolistic terms.\n– A similar but broader labor-force model, not for people providing one service to one business (e.g., drivers and Uber), but with AI intermediaries enabling groups of individuals to come together to offer products and services to the market without the need for corporate intermediaries. Such groups could utilize distributed supply chains, to which they submit opportunities and find others who wish to join and bid to offer their services. As in the narrower, ride-sharing case, AI agents could optimize contractual terms, including provisions for duration and modification, based on the preferences of the participating individuals.\n– AI-mediated political interaction – among citizens, activists, politicians, and political parties – to provide more civil and substantive deliberations, and more effective, informed, and flexible translation of citizen preferences into collective decisions. 
Such systems might aim to provide equal opportunity for political participation to all citizens, to motivate and reward virtue and moderation rather than vice and extremism, and to be dynamic – able to update, including updating collective understanding of what counts as virtue or vice. In contrast to the preceding examples, which would replace commercial transactions, this one would require more manipulation of incentives for individual behavior in pursuit of collective interests of civility, moderation, and reasoned debate. It could be designed to reward – with more influence and scope to reach a broad audience – those who best exhibit those virtues, rather than rewarding volume, belligerence, extreme views, personal attacks, skilled manipulation, or outrage.\n– Within this context of AI-mediated political interactions, AI could discharge certain administrative functions of the state, mitigating the long-standing tension between expert and democratic control noted by Weber. AI's exercise of these functions would be guided by objective functions tuned by democratic deliberation. Through AI-facilitated direct deliberations or some equivalent quasi-legislative process, citizens would define large-scale aims and principles, set parameters for AI objective functions, then observe the results and iteratively adjust those parameters to steer toward a preferred balance of multiple societal aims. Such an administrative AI would in effect act like regulatory agencies under present administrative law, but with more explicit and more consistent parsing of authority between high-level democratic goal-setting and technically skilled implementation.\nRealizing any of these alternative models of AI deployment would pose major challenges, which include significant technological elements even though they are not exclusively technological in character. A central challenge, perhaps the fundamental one, is appropriately defining the AI's objective function. Even with the shift toward a more robust and pluralistic approach as discussed above, this would imply three additional subtle and related requirements.\nFirst, in any application the AI must act as a faithful agent of its intended beneficiary, whether this is an individual or a group of any size. The AI pursues its beneficiaries' values and interests – not the interests of its maker, not even when it must resolve ambiguities or indeterminacies in its understanding of its beneficiaries' values. This would represent a major departure from presently deployed AI systems, including those that are approaching the role of general-purpose personal assistants. These are developed by firms with interests in the user's behavior, and thus in manipulating that behavior or harvesting the user's information – even if that manipulation may be subtle and the systems seem to optimize for the user's preferences. These systems are also developed in the context of various commercial and state interests in creating over-rides or back doors, in order to allow surveillance and control contrary to the user's interests.\nEven assuming this first condition is met, so the decision scales are not tilted to favor the maker's interests, systems interacting with individuals face a second challenge of understanding the determinants of the user's true values, interests, or welfare, as distinct from their immediate impulses or desires. This is hard to define, imperfectly inferable from observed behavior, prone to error, and in need of continual adjustment and updating. 
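A minimal sketch of what recognizing uncertainty and deferring to the user could look like in such an assistant follows; the frequency-count inference, the confidence threshold, and the option names are all hypothetical simplifications, and real preference inference would be vastly more sophisticated and more careful about error.

from collections import Counter

class TentativeAssistant:
    """Toy assistant that recommends only when its inferred preference is
    confident, and otherwise defers to the user (a stand-in for the idea of
    consulting and asking for help)."""

    def __init__(self, confidence_threshold=0.7, min_observations=5):
        self.history = Counter()                 # crude record of past choices
        self.confidence_threshold = confidence_threshold
        self.min_observations = min_observations

    def observe(self, chosen_option):
        # Observed behavior is an imperfect, error-prone signal of true values.
        self.history[chosen_option] += 1

    def recommend(self, options):
        counts = [(self.history[o], o) for o in options]
        total = sum(c for c, _ in counts)
        if total < self.min_observations:
            return ("ask_user", None)            # too little evidence: defer
        best_count, best_option = max(counts)
        confidence = best_count / total          # naive frequency-based confidence
        if confidence < self.confidence_threshold:
            return ("ask_user", None)            # ambiguous evidence: defer
        return ("suggest", best_option)          # a tentative nudge, not a command

assistant = TentativeAssistant()
for choice in ["salad", "salad", "pizza", "salad", "salad", "salad"]:
    assistant.observe(choice)

print(assistant.recommend(["salad", "pizza"]))   # ('suggest', 'salad')
print(assistant.recommend(["sushi", "tacos"]))   # ('ask_user', None)

The point is the posture rather than the mechanism: recommendations stay tentative, the system asks rather than guesses when its evidence is thin or ambiguous, and the user can always override.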
Like a wise parent or a skilled life coach, such systems would nudge the user's choices in directions judged likely to be compatible with their long-term flourishing – with the key difference from parenting (although not from coaching) that the ultimate authority in the relationship lies with the user. This would require a delicate balance, by which the system pushes against immediate preferences and desires when these appear to be at odds with the client's values or long-term interests. But to do this, the AI assistant must build a model of the client's values and long-term interests, based on data available to it. The system will thus sometimes make mistakes, and so will need to recognize uncertainty, make some of its recommendations tentative, and sometimes consult and ask for help – while also still using its present, uncertain knowledge to configure the choice space in ways likely to tend toward beneficial outcomes. There will thus be a core design tension, between allowing human over-ride of AI recommendations and putting some degree of burden or barrier in front of instant, effortless, or wholesale over-ride.\nA related but even sharper tension will be present in the case of people with destructive, malicious, or criminal preferences. Even liberal states do not honor or aim to fulfill the preferences of every citizen, independent of collectively exercised moral judgments. One can readily imagine a sexual predator or other criminal wanting their AI agent to help identify victims, assess the threat of detection or apprehension, manipulate victims to not resist or not report, or pursue other clearly nefarious aims. One problem here is defining the boundaries of permissible preferences – a challenge similar but not identical to that in non-AI contexts of defining the boundaries of criminal or civil wrongdoing – except that, as in so many domains, making scoring or objective functions explicit can be troublesome in cases where maintaining ambiguity provides needed social cohesion or moral comfort. Even assuming appropriate definition of the boundaries of permissible user preferences, a related design problem will be protecting AI systems against hacking or manipulation to enable such uses – either by intentionally disabling the AI's \"conscience\" functions, or by misrepresenting intentions in planning or multi-step execution of bad acts. We want individual AI agents that can distinguish between a user seeking an out-of-the-way place for a quiet picnic and one seeking a place to carry out a murder.\nAdditional requirements and challenges would apply to AI agents managing enterprises or assets: e.g., self-directed AI corporations, housing developments, or transit systems. First, should the substantive decision scope of such agents be narrowly circumscribed and fixed? This raises issues analogous to those in current law regarding charities or other non-profit organizations seeking to change their original missions. Narrow and fixed goals would risk restricting behavior so the AI cannot respond appropriately to changed knowledge and conditions; but changeable goals risk letting the AI transit system decide to go into the adult film business instead – whether because it judges the change would make more money, generate more happiness, or better promote peace and order. Second, can behavior be constrained to be legal and ethical in a way that is sufficiently clearly defined and does not put such enterprises at a competitive disadvantage relative to others playing by looser rules? 
Third, can objectives be tuned to not accumulate rents in excess of the costs of all factors of production? If so, these enterprises might be able to out-compete others that are pursuing and taking rents, and so form the kernel of a gradual erosion of concentrated economic power – unless the others are pursuing an Amazon strategy, taking losses for a long time to secure a dominant market position thereafter. Alternatively, if rents do accrue – as they sometimes will – what should be done with these? Presumably they should not be retained within the individual enterprise, but instead distributed in line with the system's large-scale aims. But does this mean to the Treasury? Or perhaps to a pool dedicated to financing the capital needs of the broad \"social-progress-through-AI\" enterprise, as discussed below? Finally, if these bodies sometimes go bankrupt – as seems likely, given the constraints imposed on them – how can one ensure that they quietly accept this fate, and what should happen to their assets when they do? As UCLA law professor Dan Bussell argues in a forthcoming paper, AI enterprises may need a new kind of bankruptcy court.\nWhen AI systems are deployed to serve multiple people, to inform people's interactions with each other or advance group interests and values, additional challenges and design tensions will arise. These problems are similar, whether the structural approach to decisions involves collective decision-making or bargaining among individuals' AI agents, or some separate AI agents operating at a higher, collective scope of authority. The challenges all follow from a basic fact: in any decision situation involving multiple people, there are multiple measures of welfare. These are sometimes aligned, but they can also exhibit disagreements, rivalrous claims on the same resources, collective-action problems, or other tensions. Most often, there is some mixture of aligned and opposing interests. In such situations, even formal game-theoretic outcomes can be ambiguous due to the existence of multiple Nash equilibria. There can also be inferior collective outcomes from individual choices that are locally advantageous, or inequitable outcomes in distributive negotiations that favor the most aggressive bargaining tactics.\nEven assuming AI agents reflect individual values well, guiding or informing such multi-person interactions presents several additional design requirements. The systems would need to identify and avoid collectively inferior outcomes – even if they are equilibria – by providing coordinated nudges to steer parties toward collectively superior outcomes. They would need to apply the same gentle resistance against self-destructive impulses as at the individual level, now with the added requirement to steer groups against choices driven by collective-level pathologies such as envy, malice, hostile stereotypes, or escalation dynamics and other entrapment mechanisms. And they would need to address the problem of aggressive bargaining behavior, recognizing that this often succeeds at securing favorable one-time outcomes in divide-the-pie negotiations, at the cost of inferior collective outcomes and damaged relationships. The systems would have to both refrain from such behavior on behalf of individuals, and not reward it in determining collective outcomes. These requirements and the associated bargaining pathologies are best understood in commercial interactions, but they have close analogies in other domains. 
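To see why an outcome can be collectively inferior even though it is an equilibrium, a toy two-player coordination game is enough. The sketch below uses a stag-hunt payoff table with made-up numbers and simply enumerates the pure-strategy Nash equilibria; it illustrates the concept and models no real interaction.

from itertools import product

# Hypothetical stag-hunt payoffs: (row player's payoff, column player's payoff).
# "cooperate" is hunting the stag together; "defect" is hunting a hare alone.
PAYOFFS = {
    ("cooperate", "cooperate"): (4, 4),
    ("cooperate", "defect"):    (0, 3),
    ("defect",    "cooperate"): (3, 0),
    ("defect",    "defect"):    (3, 3),
}
ACTIONS = ["cooperate", "defect"]

def is_nash(row_action, col_action):
    # A profile is a pure-strategy Nash equilibrium if neither player gains by
    # unilaterally switching to a different action.
    row_payoff, col_payoff = PAYOFFS[(row_action, col_action)]
    row_ok = all(PAYOFFS[(alt, col_action)][0] <= row_payoff for alt in ACTIONS)
    col_ok = all(PAYOFFS[(row_action, alt)][1] <= col_payoff for alt in ACTIONS)
    return row_ok and col_ok

equilibria = [p for p in product(ACTIONS, repeat=2) if is_nash(*p)]
print(equilibria)
# [('cooperate', 'cooperate'), ('defect', 'defect')]: two equilibria, of which
# (defect, defect) is collectively inferior to (cooperate, cooperate).

Both equilibria are self-enforcing, since neither player gains by deviating alone, yet one leaves both players worse off than the other; this is exactly the kind of outcome that AI agents guiding multi-person interactions would need to recognize and steer away from.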
A salient current example outside the commercial sphere is maintaining civil discourse, in politics and online, in the presence of powerful attention-getting advantages in being colorful, extreme, and uncivil – a domain in which a few experiments have shown that AI agents can make the problem worse, if they are trained on the actual content of current discourse.\nAchieving these aims would require that an AI system managing collective decision outcomes have both the knowledge to identify collectively superior and inferior outcomes, and the ability to apply defensible principles for fair division of surpluses and resolution of conflicting preferences. If collective decisions are handled by collectively accountable AI agents, these would need to reliably observe the preferences and values of all affected people, plus relevant information about the world that shapes the set of feasible outcomes – a tall order. On the other hand, if collective decisions are handled by interactions among individual AI agents – each presumably with better information about its own user's preferences and values – then the individual agents' bargaining behavior must be subject to constraints guided by collective welfare: e.g., seeking to maximize joint gains; not pursuing these by shifting negative externalities onto others not present in the interaction; fair dealing with each other, in both process and substance; and refraining from destructive bargaining tactics even when these promise a one-time advantage.\nSome form of regulation at the collective level appears to be needed, but defining (and automating) precise rules will pose severe challenges. In different decision domains, the needed functions might be characterized as mediator-arbitrators, content moderators, or judges. Should these be AIs, humans, or machine-human partnerships? How can these processes be made robust against sophisticated attempts to capture them for partisan advantage? If the aim of these is to advance widely (but perhaps not universally) held collective values, how broadly should they be binding in domains such as political discourse that implicate free speech and other liberty values? And to the extent these processes supplant human decision-making – which traditionally advances collective aims by some combination of formal regulation and propagation and maintenance of social norms – might widespread assumption of these duties by AI risk atrophy of the associated skills, sense of duty, and other virtues in humans?\nPromising Directions: Social, Political, and Strategic Issues\nSummarizing the above, the technical AI characteristics we speculate are likely to be associated with good societal impacts include the following:\n– AI does not irreversibly alienate individual human agency in any domain;\n– AI objective functions are tentative and pluralistic, along the lines of RDM, rather than single-minded and dogmatic; they admit multiple possibilities in outcomes and values, recognize limits to their knowledge of these, and know when and how to ask for additional information or guidance;\n– AI performance is monitored and adjusted over time with significant input from people, acting alone for their personal AIs or in democratic, deliberative groups for AIs with collective or society-wide responsibilities;\n– AI agents must be trustworthy in all respects. 
Individual AI agents pursue the interests of their client rather than any developer or vendor; and they pursue the true, long-term interests and values of their client, via recommendations, nudges, and exhortations – acting like a wise parent or friend. AI agents acting, mediating, or arbitrating on behalf of collections of people follow principles of fair dealing and equitable distribution of surpluses among participating parties, and incorporate interests of other actors or values outside the participating parties only insofar as these represent real externalities.\nHaving speculatively identified these desirable technical characteristics of AI systems, we then asked how such technical systems might be developed, deployed, scaled, and sustained over time. These are questions of political and economic strategy. The proposed innovations – in addition to being uncertain and weakly characterized – would represent attacks on existing concentrations of wealth and power and the rents that sustain these. They are thus likely to face, at a minimum, challenges in securing the resources they need to be created, established, grown, and sustained; more likely, they will face determined and strategically sophisticated opposition.\nGetting a Start:\nIn this situation, the first challenge will be getting such systems developed and deployed. What this requires will depend on the details of the relevant systems and the inputs needed to produce them – the production function for AI capabilities – all aspects of which are deeply uncertain.\nOn this, an initial issue to consider is whether systems with the desired characteristics can be reliably developed by modifying other systems that were developed by and for current commercial actors – assuming these can be legally acquired. If they can to some degree, then the key questions are, first, trust and reliability – how can we verifiably assure that the systems so ported do not sneakily import the interests of their developers – and second, what additional resources and inputs are needed to modify systems and deploy them for their new purposes?\nAt best, the desired systems would need training procedures and data for their newly targeted uses, related to the individual or collective values to be served. This might be cheap and easy; it might be expensive and difficult; or it might be impossible, at least initially, because data relevant to the newly targeted uses and goals might not exist. Oddly, there is likely to be more and better data available to serve vendors' commercial interests – which depend on observable matters such as attention, time spent, and purchasing and other behavior – than is available to serve individual and collective values. Data presents other challenges as well, including the possibility that no truly general-application AI can be developed given jurisdictional divisions and restrictions on data access and use; and the present dependence of AI progress on a huge volume of labeled data, which in turn depends on a huge, low-wage workforce doing this essential step.\nThe less fortunate case would be that new systems with the desired characteristics must be developed from scratch. In this case, the same data concerns identified above would still apply. But there would also be a greater need for other inputs, for initial system development and deployment and for continuing maintenance, adaptation, and upgrades. 
These needs are probably similar for key advances in multiple areas of AI development, independent of the specific form of objective or the scope of application. In addition to suitable training data, these include highly skilled technical personnel; hardware-based computing power; and capital – lots of capital judging from present industry structure, although this could change.\nThe premise of the new AI developments we seek is that, unlike the present system, successful development of useful capabilities, even achieving crucial technical advances, will not create fabulous wealth for developers or their employees, collaborators, or investors. So how can the needed developments be effectively motivated? The recent case of OpenAI reconstituting itself as a for-profit corporation because it could not raise enough capital as a not-for-profit AI developer provides a germane cautionary example.\nOur discussions identified several promising elements of potential development models. The first concerns identifying early targets: current products, or present or potential uses, to displace. Promising targets might include products that are now gathering the largest rents, or that are targets of the strongest current objections and political threats, or for some other reason are ripe for raiding. Other promising factors would include consumers' willingness to incur a little inconvenience from switching costs; perhaps also a preference for local providers and small-scale relationships. The aim would be to target early penetration there, with alternative products that distribute the rents or other values to their users, not the vendors.\nThe second element is assembling and mobilizing the needed factors of production. On this, the initiative could start with crowd-sourcing, philanthropy, or other sources of capital motivated by social goals rather than profit – although these sources are usually much smaller than investment-motivated capital. An open-source development model may hold advantages, including facilitating engagement of top technical talent and mobilizing utopian and anarchic strains within the technical community. Such an initiative would provide an opportunity to probe the depth and sincerity of the revolts by high-tech workers against narrow conceptions of their employers' self-interest, inviting them to put their money and skills where their mouths are.\nAll aspects of this strategy – including, crucially, attracting capital and pro bono talent – would benefit from well-branded, highly attractive initial projects: e.g., the faithful individual AI helper, or the AI facilitator of civil political discussion and collective action (both of which may represent compellingly attractive aspirations, but would clearly need better names).\nNot all philanthropy pursues aims that are clearly benign and universally agreed, of course. Sometimes it makes sense to worry about limited or partisan social objectives in philanthropy: For example, don't solicit support for your climate-change campaign from the Koch Foundation. But this concern might be less serious for AI than in more established policy areas with well-known lines of political alliance and opposition. Libertarian philanthropists – yes, perhaps even the Koch Foundation – may well support the aim of empowering individual liberty and agency with individual-level AI agents. 
As for group-level AI agents that advise different decisions in pursuit of different aims, these will be multiple overlapping agents operating in a pluralistic setting, so the risks of capture by any limited or partisan view of the public interest may be less severe.\nPersisting and Scaling:\nOnce socially beneficial AI capabilities are deployed, they still need mechanisms and resources to persist, scale, and sustain their position. Moreover, they must do this in a way that maintains their alignment with citizen and public values and remains attractive to users – even once the initial novelty of the initiative has passed, with possible decline in the enthusiasm of pioneer supporters and developers. The initial sources of capacity may not be enough to persist under these conditions, or to overcome the sustained advantage of strategically sophisticated and ruthlessly self-interested incumbents, who might respond by deploying cheap attractive systems as loss-leaders to secure longer-term advantage. The enterprise will need to maintain access to technical expertise and capital, whether from associated revenues or from investors.\nSome present business models, such as relying on advertising, clearly appear not to be viable for this project, but several others appear plausible. One possibility would be subscription or purchase, although the implications of alternative ownership models and their compatibility with the large-scale aims – do I purchase my AI assistant and related supporting systems or rent them, and from whom – would require careful thought. If the services provided by AI systems include facilitating transactions or cooperative activity with exchange of money, the system could take a fee to cover development costs, provided the fee is perceived as reasonable and its basis fully disclosed. Another possibility would be a co-operative enterprise model. These organizations reach large scale in some jurisdictions with strong historical traditions of self-organized cooperative activity and supportive policy environments.\nThere might be bootstrapping possibilities, based upon the use of AI. Early AIs might be developed to help identify targets and strategies for subsequent expansion. They might provide information, services, and access to resources that have traditionally been provided by venture capitalists or other early-stage private investors. They might help identify points in current supply chains or production models that are rigid or constrained, or where market power is hindering rapid development and deployment.\nAnother novel approach might be to turn the widely denounced short-termism of capital markets to advantage, by deploying AI agents that pre-commit to change their behavior over time. An AI raider could initially pursue maximum short-term competitive advantage, but with a binding commitment to change course in the future. If its short-term competitive advantage is based on strong IP, for example, the commitment might be to unlimited free licensing after the initial period expires. A policy change to support this might be a new form of IP, based on modifying either patent or copyright, that combines highly advantageous short-term protections with an iron-clad, non-contestable commitment to expiry and full release to the public domain thereafter.\nAs the endeavor succeeds and grows, it will encounter changes in its strategic and competitive conditions. 
Some of these will work to its benefit: for example, open networked organizations pursuing broadly public aims are likely to have an easier time pooling and sharing data than rivalrous commercial organizations. Other changes will represent new challenges that increase costs or other barriers. As the new systems grow to mediate decisions that channel large sums of money, they will attract hackers and others interested in subverting them, and will have to develop robust security protections. Stringent open-source review can provide part of the needed protection, but some risk will remain. It will also be necessary to be vigilant about the interests of continuing sources of finance: any source motivated by financial return will present ongoing risks of subtle distortions of aims, and the associated prospect of simply replacing old centers of concentrated power by new ones just as determined to sustain their position.\nFinally, if the endeavor succeeds so well that some combination of individual AI assistants, autonomous AI enterprises, and AI-mediated collective interactions – all with the desired characteristics – becomes the dominant model for societal deployment of AI, it will be necessary to grapple with the question of innovation. Current law and policy assume that the main incentive to innovate comes from the pecuniary motive of earning rents, from the innovations themselves and from IP protection around them. With AI agents eschewing most or all of the rents that provide enormous financial rewards to present market actors, where will the motivation and resources to support innovation come from? Several alternatives might be possible. Innovation might still come from people, businesses, or other organizations, including AI-facilitated innovation, stimulated by some combination of the pecuniary rewards that remain under the new model (which will be smaller than under the present system, but probably not negligible), plus intrinsic motivation to innovate and create – which the present system largely overlooks. AI agents might be able to fully take over the huge volume of prosaic, small-scale innovations now done for profit in enterprises seeking IP assets – many of small or questionable merit. AI agents could take over the pedestrian activities of searching through current technologies, patents, and scientific publications that power much of this innovative activity, but do so with better information and processing capability and with objectives better aligned with the broad public interest – and with results placed in the public domain for free further exploitation. For larger-scale scientific, technological, artistic, and social innovation, intrinsic motivations have long been the dominant driver and it is reasonable to expect they will still be present in the new world. Indeed, they might be effectively aided by AI support tools.\nChallenges, next steps:\nThe technological-political program of societal transformation we sketch here is bold, under-specified, and incomplete. It can be viewed as an attempt to update Alinsky's Rules for Radicals, Scott's Weapons of the Weak, and the ethical-hacking movement, for a new technological environment of greatly increased power for autonomous and semi-autonomous systems. 
It is bold in that we are proposing a new technological model of AI and its deployment that opposes the interests of present dominant incumbents – both private-sector actors whose revenues and business models would be threatened, and government institutions that would hold different, less extensive, and less exclusive authority as some decision authority shifts to networks of citizens and autonomous decision-making systems. It is essential not to be naïve about how large the barriers to entry are, or about the determination and resources of incumbents seeking to strangle the new model in its crib. The new model also opposes certain structural characteristics in the economy that tend to favor scale, and thus centralization. These include technical factors such as economies of scale and network externalities that are strongly shaped by characteristics of production technologies; and factors more institutional and political in origin, such as fixed costs from regulatory obligations advancing various public values such as environmental health and safety, consumer protection, etc.\nThe new model is under-specified, both in its technical and its political/strategic dimensions. Technically, we sketch a couple of salient system characteristics that appear likely to push in the desired direction, but the devil is in the details. A wide range of systems and design approaches is now being pursued and developed in parallel, with capabilities – depending on multiple factors in the systems and their contexts – that might favor or oppose liberty, privacy, agency, and equality. Even current developments have had a mix of centralizing and decentralizing effects, empowering many distributed activities even as they create great new centers of wealth and power, including new forms of power not yet exploited or even well understood.\nAs a strategy to move toward this vision, we have identified a few possible pathways to pursue it through private action. But it is also worth asking whether appropriate government policies would be necessary or helpful, and if so, what form. Possible points of leverage might include data ownership policies such as clear conferral of data property rights on individuals, or limits on concentrated holding of data; limits on or new forms of IP; or more expansive definition and robust enforcement of anti-trust policies. To the extent the desired transformation does require public policy, one might also consider which jurisdictions would be most promising places to seek an early strategic foothold. Perhaps the social democracies of Europe, which are already leaders in data and privacy policies? Or perhaps major developing countries with strong technological capacity – which would have the advantage of large domestic markets for early scaling, but might also be ambivalent toward the leveling ambition, depending on whether leveling is construed as between countries (in which case they would presumably be keen advocates) or within countries (in which case, maybe not). In this global context, one must also consider risks posed by opportunistic geopolitical adversaries, including the possibility of surreptitious early support for the development of the new systems coupled with efforts to bias or undermine their aims – although this threat might become less salient over time if one consequence of the spread of the new systems is a decrease in international rivalries.\nFinally, the proposed new model is incomplete. It is unlikely to address all impacts and social disruptions caused by rapid advances in AI. 
In particular, we can't necessarily expect it to avoid large-scale displacement of livelihoods by AI. It might, however, make mass unemployment less individually and socially destructive, perhaps even make it desirable. If leveling of power implies different bases for distribution of economic output, no longer coupled to employment, then loss of employment might cease to be catastrophic. This might seem inconceivable, but it could be analogous to the treatment of health insurance across nations: in the United States it is tightly coupled to employment and thus highly unequally distributed, while in all other advanced democracies it is uncoupled from employment and more equally distributed. It is even possible that mass unemployment – not under present social organization, but in a levelers world – could be profoundly liberating, enabling people to work, individually or cooperatively, on endeavors they value that are not necessarily related to the production of material goods and services. As AI facilitates efficient production, it could also facilitate effective pursuit of these other aims.\nIn closing, AI is likely to have huge, transformative societal impacts, for good or ill, but present patterns of development and deployment suggest that small \"AI for good\" movements are likely to be overwhelmed by massive developments that serve concentrated commercial, political, and strategic interests. With such labile technology and such potentially vast impacts, the possibilities for positive transformative change are real, but highly uncertain in their detailed requirements and pathways – and are not being pursued with resources commensurate with their importance, or with the resources directed to systems serving private or rivalrous advantage. With such huge stakes, it is clearly worth pursuing even ill-defined and speculative investigations of how to effectively shift the balance toward the good. We have identified a few possibilities that seem promising, but we fully realize that these are the output of just two days of discussions and are speculative, incomplete, and under-specified. Yet despite all the challenges, further pursuit of these questions, drawing on more breadth of relevant expertise, is a high and urgent priority.The post Could AI drive transformative social progress? What would this require? first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "Could AI drive transformative social progress? What would this require?", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=2", "id": "28bb45cde3e7d653db95c19ac6a924a1"} -{"text": "Artificial Intelligence in Strategic Context: an Introduction\n\n\n Download as PDF\n\nIntroduction: AI Advances, Impacts, and Governance Concerns\nArtificial intelligence (AI), particularly various methods of machine learning (ML), has achieved landmark advances over the past few years in applications as diverse as playing complex games, language processing, speech recognition and synthesis, image identification, and facial recognition. 
These breakthroughs have brought a surge of popular, journalistic, and policy attention to the field, including both excitement about anticipated advances and the benefits they promise, and concern about societal impacts and risks – potentially arising through whatever combination of accident, malicious or reckless use, or just social and political disruption from the scale and rapidity of change.\nPotential impacts of AI range from the immediate and particular to the vast and transformative. While technical and scholarly commentary on AI impacts mainly concerns near-term advances and concerns, popular accounts are dominated by vivid scenarios of existential threats to human survival or autonomy, often inspired by fictional accounts of AI that has progressed to general super-intelligence, independent volition, or some other landmark similar to or far surpassing human capabilities. Expert opinions about the likelihood and timing of such extreme further advances vary widely.1 Yet it is also increasingly clear that advances like these are not necessary for transformative impacts – for good or ill, or more likely for good and ill – including the prospect of severe societal disruption and threats.\nThe potential societal impacts of AI, and their associated governance challenges, are in significant ways novel. Yet they also lie in the context of prior concerns with assessing and managing technology-related risks, which has been an active area of research and policy debate since controversies over technologies related to energy, environment, weapons, computation, and molecular biology in the 1960s and 1970s.2 This work has generated many insights into societal impacts and control of technology, of which two in particular stand out. First, the societal impacts of technology are not intrinsic to the technology, but emerge from the social processes by which technologies are developed and applied. It is thus not possible to assess or manage societal impacts by examining a technology divorced from its economic, political, and social context.3 Second, these linked pathways of technological and social development are complex and uncertain, so societal impacts cannot be confidently projected in advance, no matter how obvious they may appear in retrospect. There is thus a structural tension that hinders efforts to manage the impacts of technology for social benefit. The process of developing, applying, and reacting to any new technology both gradually clarifies its effects, and also builds constituencies with interests in its unhindered continuance and expansion. Efforts to manage impacts thus move from an early state in which they are limited in knowledge because the nature of impacts is not clear, to a later state in which impacts are clearer but politically difficult to manage.4 This paradox is not absolute or categorical, but does describe a real tension and a real challenge to effective assessment and management of technological risks – which is clearly applicable to efforts to manage AI risks today.5\nWhile every technological area is in some respects unique, AI is likely to be more challenging in its potential societal effects and governance needs than even other contentious, high-stakes, rapidly-developing technologies. There are at least three reasons for this, rooted in characteristics of AI and applications that are likely to be enduring. First, AI is weakly defined. 
It includes diverse methods and techniques, derived from multiple prior areas of inquiry, which have fuzzy boundaries with each other and with multiple other areas of technological advance and challenge. The weak definition and boundaries of AI make it difficult to precisely localize and specify related objects of concern and their boundaries, and thus difficult to define regulatory authority or other forms of governance response. Second, AI has a foundational character. AI advances promise to transform, and interact strongly with, other areas of technological advance such as computational biology, neuroscience, and others. AI's foundational role suggests comparison to historical examples of transformative technologies on the scale of those that drove the industrial revolution – electricity, the steam engine and fossil fuels. Considered together, the diffuse boundaries and foundational character of AI give it a vast breadth of potential application areas, including other areas of scientific and technology research – raising the possibility of an explosion of AI-augmented research rapidly transforming multiple fields of inquiry. Third, many currently prominent AI algorithms and applications, particularly those involving deep learning and reinforcement learning, are opaque in their internal operations, such that it is difficult even for experts to understand how they work.6 Even distinguished practitioners of these methods have expressed concern that recent advances are unsupported by general principles, often dependent on ad hoc adjustments in specific applications, and generative of outputs that are difficult to explain.7 Two prominent commentators characterized the state of the field as alchemical.8\nOne consequence of these challenges is uncertainty over the causes and implications of recent large advances. Do they represent foundational advances that put general understanding and reproduction of intelligence within reach?9 Or do they just reflect the result of continuing advances in several parallel areas – data, algorithms, computational capacity – that are important in practical power, without necessarily representing major scientific breakthroughs or auguring imminent advances in dissimilar problems or reproducing general intelligence?10 Expert surveys estimating how soon such major landmarks will be achieved show a wide range of views.11\nA second consequence is deep, high-stakes uncertainty about AI's societal impacts, risks, and governance responses – arguably even more than in other parallel areas of rapid, high-stakes, contentious technologies. This deep uncertainty is driven both by uncertainty about the rate and character of future technological advances in AI, and by uncertainty about how people, enterprises, and societies will interact with these rapidly advancing capabilities: how new capabilities will be used and applied; how people and organizations will react and adjust; how capabilities will further change in response to these adjustments; and how, and how well, societal institutions will manage the consequences to promote the beneficial, limit or mitigate the harmful, and decide in time – competently, prudently, legitimately – which consequences are beneficial and which harmful.\nAI Impacts and Governance: Major areas of present inquiry\nIn the face of this deep uncertainty, current study and speculation on AI impacts and governance has a few salient characteristics. 
In some respects these are similar to characteristics often found in rapidly growing fields of inquiry and commentary, yet they also reflect the distinct, perhaps unique, characteristics of AI. In broad terms, the approach of many researchers and commentators to AI reflects their prior interests, concerns, capabilities, and disciplinary traditions. Given the diffuse and labile character of AI and its applications, this is both familiar and sensible. Indeed, it is a phenomenon so familiar as to be nicely captured by old aphorisms and folk tales. AI is the elephant, and we who are trying to reason through its societal impacts and potential responses are the blind men, each feeling around one piece of it.12 Or alternatively, we all come to the problem of AI with our various forms of hammers, so it looks like a nail.\nAttempting to give aggregate characterizations of such a heterogeneous and rapidly growing field is risky, yet necessary. Much present commentary falls into two clusters, mainly distinguished by the immediacy of the concerns they address. The first and larger cluster examines extant or imminently anticipated AI applications that interact with existing legal, political, or social concerns. Prominent examples include how liability regimes must be modified to account for AI-supported decisions (e.g., in health care, employment, education, and finance), or AI-embedded physical objects that interact with humans (autonomous vehicles, robots, internet-of-things devices);13 racial, gender, or other biases embedded in algorithms, whether in high-stakes decision settings14 or in more routine but still important matters like search prompts or image labeling;15 defending privacy under large-scale data integration and analysis;16 and transparency, explainability, or other procedural values in AI-enabled decisions.17\nThe second cluster of current work concerns the existential risks of extreme AI advances, whether characterized as progressing to general superintelligence, a singularity beyond which AI controls its own further advances, or in other similar terms. Although this perspective is less frequent in scholarly work, it draws support from the argument that risks of catastrophic or existential consequence merit examination even if they appear temporally distant or unlikely.18 These prospects loom large in press and popular treatments of AI19 – to such a degree that researchers arguing for the importance of more immediate risks initially faced an uphill battle gaining similar levels of attention to these concerns.20\nBetween these two clusters lies a wide gap receiving less attention: potential impacts, risks, and governance challenges that are intermediate in time-scale and magnitude, lying between the near-term and the existential. Some scholarship does target this intermediate zone, typically by examining impact mechanisms that are of both immediate and larger-scale, long-term significance. For example, many studies aim to identify technical characteristics of AI systems likely to make them either riskier or more benign.21 In addition, certain domains and mechanisms of AI social impact – for example, livelihood displacement,22 social reputation or risk scoring, and autonomous lethal weapons – are of both immediate concern and larger future concern as applications expand. The broad diversity of methods and foci of inquiry in AI impacts is a sensible response to present deep uncertainty about AI capabilities and impacts, and there is great value in this diversity of work. 
This is a context where interdisciplinarity is at a premium, and the most significant advances are likely to come not just from deep specialization, but also from unexpected connections across diverse fields of inquiry.\nThe AI PULSE project at UCLA School of Law: A focus on mid-range impacts\nIn this busy, diverse, and rapidly growing space, the AI PULSE Project at UCLA Law is a newcomer. Like many others, we aim to advance understanding of the societal impacts of current and anticipated advances in AI, the good, bad, and dangerous; the causes and mechanisms of those impacts; and potential law and other governance responses to inform decision-making, both within traditional legal and regulatory settings, and in new institutional mechanisms and settings.\nIn looking for areas on which to focus our attention, and in which we can make useful contributions, we have used two loose and provisional criteria. First, we have sought issues and questions that can effectively draw on our prior expertise in law and policy, and on prior experiences with other technology law and policy areas such as energy, environment, internet governance, and cybersecurity. We aim to attend carefully to points where scientific or technological matters interact strongly with societal or governance issues – not aiming to focus centrally on technology, which is not our comparative advantage, but rather to recognize and connect with needed expertise via collaborators. Second, we have looked for areas of potential importance that are receiving relatively less attention, and where there is less risk of simply re-treading familiar ground.\nThis orientation has led us to the intermediate scale of AI impacts, time horizons, and implications, as outlined above. We do not focus principally on immediate concerns already attracting substantial study and attention, nor on existential endpoints. This is not because we judge these uninteresting or unimportant – they are emphatically not – but because so much excellent work is already being done here. This intermediate range, roughly defined by some combination of time-scale and intensity of impacts and potential disruptions, is unavoidably a bit diffuse in its boundaries. We characterize it by rough conditions that separate it from immediate concerns, and from singularity, existential, or ultimate concerns.\nWe propose one principal criterion to distinguish this middle range from the applications, consequences, and concerns that characterize the large volume of important work being done on current and near-term AI challenges. AI applications now deployed and in advanced development sit within the context of existing configurations of decision-makers with associated capabilities, interests, and goals. They are being embedded in commercial products and services marketed by existing firms to identified consumers and businesses. They are supporting, and may in some applications replace, human expertise and agency in existing decisions now taken by individual humans, in a wide variety of professional and employment settings – e.g., drivers, machine tool operators, pharmacists, stockbrokers, librarians, doctors, and lawyers. 
And they are similarly supporting, advising, and perhaps replacing current decisions now made by groups or organizations – i.e., actors larger than one person – but still recognized, abstracted, and sometimes held accountable as an individual, more or less human-like actor, such as corporations, courts, boards, offices, or departments.\nBut the fact that current AI applications are presently slotting into these existing decisions by existing actors is a social and historical contingency that reflects immediate opportunities to deploy, and sell, AI-infused products and services. There is no reason to expect that AI's capabilities, or its future applications, will necessarily follow the same patterns. The same characteristics that pose challenges to the prediction and governance of AI's societal impacts – its diffuse, labile character, fuzzy boundaries, broad connections to other technologies and fields of inquiry, and foundational nature – also suggest that it is capable of doing things of greater scale, scope, novelty, or complexity than any presently identified decision by a presently identified actor.\nIt is this greater scale of application, along with associated changes in scope, complexity, and integration, that we propose as the principal criterion distinguishing near-term impacts and governance challenges from medium-term ones. In this hypothesized mid-range, AI is, at a minimum, doing things that to some degree resemble functions being discharged by present actors, but which due to greatly expanded scale or scope are qualitatively changed in their impacts, for example by divesting current decision-makers of power or fundamentally transforming their aims and inter-relationships. Alternatively, AI might be doing things that are not presently done by any identified single actor, but by larger-scale social processes or networks of multiple actors and institutions – e.g., markets, normative systems, diffuse non-localized institutions, and the international system. Deployed in such settings, AI would take outcomes that are now viewed as emergent properties, equilibria, or other phenomena beyond the reach of any individual decision or centralized control, and make them subject to explication, intentionality, or control. Or as a third alternative, AI might be deployed to carry out actions that are not presently done at all, for various reasons – including that they are beyond the ability of any individual actor to imagine, perceive, or carry out, yet at the same time are not the objects of any linked systems of multi-actor decision-making. In any such settings, we expect the societal impacts and disruptions/transformations of AI, and the associated challenges of governance (indeed, the meaning of governance), to be profoundly transformed, in scale, meaning, and possibly also speed.\nYet this is not the singularity.23 We also distinguish this middle range from existential or singularity-related risks by the limitation that AI is not self-directed or independently volitional, but rather is still to a substantial degree developed and deployed under human control. Of course, the practical extent of human control in specific applications may be ambiguous, and the details matter a lot. 
Moreover, as noted above, even with AI not fully autonomous but practically, or formally, under human control, there may still be transformative impacts, including vast public as well as private effects and the potential for large-scale disruptions and harms – in addition to the large benefits that are intended and anticipated.\nThis intermediate range is thick with potential areas and mechanisms of high-stakes AI impacts. In addition to those noted above, involving mechanisms of influence already operating but subject to transformation from increased scale and speed (e.g., livelihood displacement), there are multiple other possibilities. These are substantially more heterogeneous than those already evident in present practice. Even brief reflection suggests a wide range of potential AI applications, impacts, bases for concern, and associated governance challenges. We outline a few below. Many others, similarly plausible, could readily be generated.\n\nAI as a manager and coordinator of economic functions at larger scale than the scope typical of current enterprises producing and selling goods and services;\nAI as a component of, and to varying degrees supplanting, various forms and functions of state decision-making – legislative, executive, administrative, judicial, or electoral.24 Small-scale instances of this, and more ambitious proposals, already abound. Implications are profound, both for material outcomes, such as substance of decisions, character and quality of service and program delivery, efficiency and cost; and for legal processes, associated rights, and political principles.\nAI as a disruptor of competitive relations, in commercial competition and other domains of competitive interactions, with the prospect of major shifts in the distribution of power among individual, commercial, public, and other institutions, perhaps also driving large changes in the meaning and determinants of such power.\nAI as an enabler of increased effectiveness and new forms of influence over people in commercial, political, and other societal contexts. Early signs, in 2016 and since, of the potency of algorithmic targeting and tuning of messages for both domestic and international political campaigns suggest much broader and more potent tools of influence to come. These of course can include AI-empowered influence and manipulation for conventional commercial and political ends, perhaps allowing greater concentration of power than has been possible by other means: one major recent report identified \"robust totalitarianism\" as one of the major social and political risks of AI.25 But multiple other forms of influence are possible, including manipulation of behavior, preference, or values, with aims and effects ranging from the benign to the malign. 
Consider, for example, the possibilities of AI-enabled manipulation for purposes of healthier living, other forms of self-improvement, civic virtue, or conformity, docility, or subordination to the will of another; AI-empowered psychotherapy, new political or social movements, new religions (the first church centered on AI is already established),26 or cults.\nAI as a scrambler of human individual and collective self-conception, including possibly undermining basic assumptions on which liberal capitalist democratic states and societies are founded – either truly altering the assumed conditions, or altering large-scale confidence or belief in them through increasingly powerful tools of deception such as \"deep fakes.\"27 These shared assumptions, which include both positive and normative beliefs, exercise a powerful influence on the acceptance and viability of democratic government and market economies. The potential implications of their disruption, for practice of democratic government, and for social cohesion and identity, are profound.\n\nThese and similar potential intermediate-range impacts can be read to varying degrees as benign or malign. Many are most likely some of each. But they are clearly large and novel enough to require careful study and assessment, without a strong prior presumption that they predominantly fall one way or the other. Although it may be tempting to view any large change as harmful, at least initially, we do not presume that the societal impacts of AI will be entirely, or even predominantly, harmful. In fact, there are strong grounds to anticipate large benefits. Innovations in commercial settings generally pursue improved production processes or products and services that people want. We thus expect that the initial, direct effects of AI deployment will mainly be benefits, since they will be realized through voluntary commercial transactions.28 The axiom of revealed preference is not the last word on societal values, in view of imperfect knowledge and anticipation of individual welfare, identified pathologies of choice, emergent collective outcomes, constrained choices, and manipulation – but neither is it to be arbitrarily rejected. Moreover, even outside purely commercial transaction, a few areas of AI application hold clear prospects for large societal benefits, including medical diagnosis and treatment, scientific and technical research, and environmental monitoring, management, and restoration. Even when technical advances bring some harm – canonically, the harm that incumbents suffer when their business is disrupted by innovations – these local disruptions are often consistent with larger-scale economic and societal benefits.\nYet at the same time, there are well-founded grounds for concern. The greater the new technical capabilities and resultant transformations, the less the resultant disruptions are likely to be confined to the commercial sphere and the more they are likely to implicate larger-scale network and public effects, and political, social, and ethical values separate from those of the market. Moreover, many applications of AI will be in non-market contexts – whether in classic areas of state decision-making or something brand new. In these other contexts, the comforting invisible-hand assumptions about the consonance of individual self-interested choice and aggregate societal benefit do not apply. 
Individuals and firms developing and applying AI may be sensitive to these broader impacts, but lack the scale of view, knowledge, or authority to address them effectively. Doing so will require some larger scale apparatus, perhaps including deployment of state regulatory authority, perhaps involving multi-party collaboration among state and non-state actors, including enterprises, technical associations, educational institutions, research funders, scientific and professional associations, civil society organizations, and governmental decision-makers at multiple levels.\nEven focusing on this middle range rather than more distant and extreme possibilities, it is difficult to do research about future conditions, risks, or requirements. Future events, conditions, and outcomes are not observable, except via proxies, trends, or enduring conditions and relationships in the present or past. Saying anything useful about future effects unavoidably requires speculative reasoning and acknowledgement of uncertainty, and also disciplining that speculation by reference to current knowledge. Present conditions and trends, scientific knowledge about enduring mechanisms of causation and influence in the world, and the properties and limits of current technologies, all provide useful inputs to reasoning about future conditions and possibilities, but with limits. Structured methods for exploration and contingency planning such as model projections, scenario exercises, and robust decision-making approaches can help stimulate the needed balance of imagination and discipline, but do not surmount deep uncertainties. These challenges are all particularly acute in reasoning through potential impacts, risks, and governance responses for AI, in view of the uniquely challenging characteristics of AI noted above.\nOne promising way to get insights into AI impacts and responses over this intermediate scale is – instead of focusing on present applications or the technical properties of algorithms and associated data and tools – to focus on decisions: the decisions to develop and refine capabilities, train and test them, and deploy them for specific purposes in specific settings. Focusing on decisions in turn implies focusing on the actors who make those decisions, whether individuals, teams, or private or public organizations, and the factors that influence their decisions: the interests they seek to advance; the capabilities and resources that characterize their decision choice sets and associated constraints; and the strategic environment in which they make these decisions, including interactions with other decision-makers with resultant emergent properties.\nA survey of present AI applications and actors suggests a wide range of interests may be motivating development, adoption, and application decisions. The principal actors developing new methods and capabilities are private firms and academic researchers. But the firms are not typical commercial firms. 
Several of the leading firms are so big and rich, and so secure in dominant positions in current markets, that they are able to invest in speculative research some distance from commercial application.29 Even accounting for firms' ostensible profit motives, their need to recruit and retain scarce, often idiosyncratic, top-rank talent may also shift their mix of interests to some degree, to include scientific interest, technological virtuosity, plus whatever other ambitions or commitments motivate this talent pool.30 Motivations become even more diverse on the international scale. Several market-leading AI firms are based in China, under varying degrees of government influence – suggesting a blurring of lines between commercial competition and state objectives, both domestic (securing and sustaining political control) and international (geopolitical rivalry, through multiple channels not limited to military). Moreover, not all the major developers are for-profit firms. One major AI enterprise is organized as a not-for-profit, with a stated commitment to development of AI capabilities for social benefit – although the practical implications of this different commitment are not yet fully evident.31\nIn considering medium-term possibilities for AI applications, risks, and other impacts, it is thus necessary to consider a range of interests and incentives: in addition to commercial competition, potentially important interests might include political competition through electoral or other channels; competition for fame, notoriety, or influence; pursuit of technological advance for its intrinsic pleasures and for professional status; and multiple forms of rivalrous or malicious aims, among individuals, firms, other organizations, and states.32 This broad list of interests implicates harm mechanisms associated with intentional design choices as well as unforeseen accidents, or design or application failures.\nThe papers collected here represent the results of an early attempt to examine AI impacts, risks, and governance from this actor- and decision-centered perspective. They are based on a May 2018 workshop, \"AI in Strategic Context,\" at which preliminary versions of papers were presented and discussed. In extending their workshop presentations into these papers, we have asked authors to move a little outside their normal disciplinary perspectives, with the aim of making their papers accessible both to academic readers in other fields and disciplines, and to sophisticated policy-oriented and other non-academic audiences. In the spirit of the larger-scale project, we also encouraged authors to be a little more speculative than they normally would be when writing for their scholarly peers.\nThis collection includes seven of the resultant papers, spanning a broad range of applications, impacts, disciplinary perspectives, and views of governance. Sterbenz and Trager use an analytic approach rooted in game theory to characterize the effects of a particular class of AI, autonomous weapons, on conflicts and crisis-escalation situations. Ram identifies a new potential mechanism of harmful impact based on increasing pursuit of the technical capability, \"one-shot learning.\" She argues that increased use of one-shot learning might exacerbate present concerns about both bias and opacity, and assigns a central role to trade secrecy in these harms. 
Grotto distills a set of concrete lessons for governance of AI by analogizing to a prior political conflict over another high-stakes and contentious technology, GMOs, drawing particularly on the variation provided by disparate policy outcomes in the United States and the European Union. Marchant argues both that effective governance of AI is needed and that conventional governmental regulation is unlikely in the near term, and instead proposes a soft-law approach to governance, including a specific institutional recommendation to address some of the most obvious limitations of soft-law governance. Moving from regulatory practice to political theory, Panagia explores the alternative ways algorithms can be regarded as political artifacts. He argues that algorithms are devices that order the world, in relations between people and objects and among people, and that this is a distinct, and broader, role than that conventionally proposed by critical thinkers of the left, who tend to view algorithms predominantly as instruments of political domination among people. Osoba offers a provocative sketch of a new conceptual framework to consider global interactions over AI and its governance, arguing that there exist distinct linked technical and cultural systems, or \"technocultures,\" around AI, and that current diversity of technocultures is likely to be a persistent source of regulatory friction among regional governance regimes. And finally, Lempert provides a provocative vision of the capability of AI to drive large-scale divergence in human historical trajectories. Drawing on historical analogies explored in the scholarship of Elizabeth Anderson, he presents two scenarios in which large-scale AI deployment serves either to restrict or to strengthen human agency, and speculates on the technical structures of algorithms that might tend to nudge a world deploying them toward those two divergent futures.\nClosing Observations:\nThese papers represent the first output from what we aim to make a continuing inquiry. In our view, they nicely illustrate what needs to happen to build a coherent and progressive body of work in a new and diffuse, but high-stakes area: targeting interesting and important questions and effectively deploying discipline-based concepts and frameworks, yet also striving for clear communication across disciplinary lines and accessibility both outside the field and outside the academy – while still maintaining precision. At the same time, they also illustrate the challenges facing this area of inquiry: its vast breadth, the diversity of relevant disciplinary languages and perspectives – and the difficulty of directing focus forward beyond immediate concerns, of being willing to engage in speculation yet keeping the speculation disciplined and connected to current knowledge and expertise. 
These are the aims and the challenges of the project, which we plan to further explore in additional workshops, collaborations, and publications.\nBy Edward Parson, Dan and Rae Emmett Professor of Environmental Law, Faculty Co-Director of the Emmett Institute on Climate Change and the Environment, and Co-Director of the AI Law and Policy Program, UCLA School of Law\nRichard Re, Assistant Professor and Co-Director of PULSE and the AI Law and Policy Program\nAlicia Solow-Niederman, PULSE Fellow in AI Law and Policy, UCLA School of Law\nand Elana Zeide, PULSE Fellow in AI Law and Policy, UCLA School of Law.\n The post Artificial Intelligence in Strategic Context: an Introduction first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "Artificial Intelligence in Strategic Context: an Introduction", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=2", "id": "9980fe9879906526a2e57656a4a3cb81"}
-{"text": "Bezos World or Levelers: Can We Choose Our Scenario?\n\n\n Download as PDF\n\nRobert Lempert1\nArtificial intelligence (AI) augurs changes in society at least as large as those of the industrial revolution.  But much of the policy debate seems narrow – extrapolating current trends and asking how we might manage their rough edges.  This essay instead explores how AI might be used to enable fundamentally different future worlds and how one such future might be enabled by AI algorithms with different goals and functions than those most common today. 2\nAI machine learning algorithms now meet or surpass capabilities once regarded as uniquely human and will grow more capable over time.  The algorithms draw their power from their ability to learn from vast amounts of data and their own experiences. For instance, autonomous vehicles, while still imperfect, roam our roads, each learning not only from its own encounters but also from the data gathered by other such vehicles.  Amazon and other online services, learning from the choices of millions of customers, recommend books we might read and music we might enjoy. In 2016, the machine learning AI program AlphaGo beat Lee Sedol, the world's best human Go player, in a five-match tournament, exhibiting strategies that displayed an astounding level of non-human creativity (Krieg, Proudfoot, and Rosen 2017). The AlphaGo version that beat Sedol learned its craft from records of previous human Go games. AlphaGo's even more powerful successor learned from millions of games it played against itself.\nSuch capabilities portend vast social, economic, and political transformations. Alluringly, we can imagine a future world of vast material wealth and convenience.  Freed from human error and shortcomings, services such as transportation and medicine become much safer and more efficient. Localized, on-demand, customized 3D manufacturing satisfies human wants and reduces environmental footprints. Freed from the drudgery of work that the machines can better handle, people embrace more meaningful tasks.\nYet AI also has dystopian portents.  Most concretely, the technology threatens to destroy vast numbers of jobs.  By some reckonings, 40 percent of the world's jobs could be replaced in the next 15 years (Roose 2019).  AI may threaten privacy, as exemplified by the digital assistant devices that increasingly serve, and monitor, us in our homes.  
The power and wealth flowing to the individuals and firms that successfully commercialize AI may also exacerbate the income inequality straining our society.\nThe policy debate has begun to engage with these challenges (Alden and Taylor-Kale 2018).  Some discussions focus on skill training to allow people to take new jobs.  Others suggest a universal basic income (UBI) to reduce the adverse consequences of unemployment.  Social insurance programs predicated on full-time employment might be expanded to include those in part-time jobs.\nBut such policy responses only nibble around the edges of an unfolding societal transformation.  Extrapolating current trends, we might imagine a world in 2050 with AI offering unparalleled convenience and widespread material comfort while making most people economically irrelevant and concentrating power and wealth in the small number of firms and individuals that create and own the machines.  No amount of job training or UBI would make such a world remotely similar to our own.\nEnvisioning scenarios offers one means to grapple with societal changes as fundamental as those augured by AI.  Scenarios represent \"focused descriptions of fundamentally different futures, often presented in a coherent script-like or narrative fashion\" (Schoemaker 1993), often crafted to inform decision making.  In this spirit, we might envision a scenario called Bezos World in which current trends unfold into a world with all the wealth, power, and robots concentrated into a few hands.  We can then posit other scenarios to help explore the extent to which Bezos World is foreordained, or whether we might imagine, understand, and influence the creation of alternative worlds that some people might find more to their liking.\nThe Levelers Scenario\nAmong its profound implications, the industrial revolution of the 19th and early 20th centuries reshaped the way citizens of liberal societies experienced agency and freedom. For many centuries, proponents of unfettered markets had also been natural advocates for popular sovereignty and economic and social equality (Anderson 2017).  Markets helped to shatter aristocratic hierarchies and government-chartered monopolies.  In a feudal world, artisans labored for lords in a relationship of deference.  A market economy made the two more equal, since an artisan could choose his customers with a freedom similar to that of a lord choosing his vendors.  As late as the mid-19th century, proponents of freedom such as Abraham Lincoln could envision a world of independent proprietors, whose few employees would be young apprentices on their way to running their own small firms.\nThe industrial revolution sundered this connection by creating economies of scale that required vast enterprises to exploit.  These economies of scale generated immense material wealth, vastly expanding people's freedom as consumers.  The choices of food to eat, clothes to wear, places to travel, and (with electric lighting) when to be awake and active expanded greatly.  But the enterprises that swelled abundance and choice also generated new hierarchies of power and wealth.  In the new managerial capitalism, assembly line workers and office clerks labored for their bosses in a relationship of deference.  Industrial age production also required large agglomerations of capital, further expanding inequality among citizens.  In response, the U.S. 
and other industrialized countries transformed their governments, and new non-governmental organizations such as labor unions emerged, to share wealth and political power more equitably.  The resulting social contract generated history's greatest expansion of freedom and material wealth.\nBut this social contract is now unraveling.  As one important driver, new information technology threatens many jobs, and firms increasingly use technology to replace managerial capitalism with a new economy much less reliant on full-time workers.  For instance, organizations such as Uber foster a gig-economy in which firms contract with independent workers for short-term engagements, thereby capturing more of the profits and control of the workplace for themselves.  Uber has roughly 10,000 full-time employees and, in the US, 750,000 drivers operating as independent proprietors.  While these drivers often enjoy the benefits of setting their own schedules, they exercise little control over their working conditions, have few benefits of employment, and exist at the bottom of an economic hierarchy that is investing heavily in automation to eliminate their jobs.\nAI seems poised to exacerbate such trends.  Today's firms actively pursue the Bezos World scenario, deploying AI towards a future in which machines do all the work and technology is used to maximize power and wealth in the hands of the small number of people who own the machines (O'Reilly 2017).\nWe might, however, envision a purposely very different scenario.  Rather than using technology to automate away workers, this scenario envisions a world in which society uses technology to unwind the firm.  We might call this scenario Levelers, based loosely on a political movement of that name during the English Civil War, which was among the first to advocate markets as a means towards equality and popular sovereignty (Anderson 2017).  In this Levelers scenario, a combination of populist uprising and appropriate technology would establish a 21st-century version of what Lincoln envisioned for his time – a society of prosperous, independent proprietors without large concentrations of wealth – a world with many Uber drivers, but no Ubers.  In this Levelers scenario, technology enables new kinds of work and is designed to spread power and wealth more equally across society.\nEconomists point to transaction costs to explain the necessity of firms (Coase 1960).  Many economic arrangements, for instance running a railroad (Chandler 1977) or the design and production of complex goods such as automobiles and airplanes, require webs of connections among people and capital stock too complex to be organized and managed entirely by market forces.  Prior to AI, these productive assets needed to be organized within managerial hierarchies associated with large stocks of capital.  Today, firms also create vast wealth from the symbiotic relationship among their centrally managed, proprietary data (often gathered from customers), the services they sell that enable them to collect the data, and the AI learning from that data in order to make the services more efficient and effective.\nAI technology can reduce many transaction costs, which enables a reimagining of the firm.  Today, however, these capabilities are being used by firms to reimagine themselves for their own ends.  
For instance, ride-sharing services such as Uber and Lyft use ubiquitous web connectivity, databases, and route optimization algorithms to shift most of their workforce to independent proprietors and then to machines.\nIn contrast, the Levelers Scenario envisions a future in which widely deployed AI provides great wealth and convenience, but with the power relationships of the Bezos World scenario reversed.  In the Levelers Scenario, AI helps labor hire capital, instead of the other way around.  For instance, the firm Gigster currently uses AI to help its corporate clients efficiently identify ideal teams of temporary workers, thereby reducing the need to nurture well-known teams of full-time staff.3 In the Levelers Scenario, small teams of people might join together to make a car.  Using a Gigster-like capability, they could find others with the skills they need and rent 3D manufacturing facilities to produce their design.  In the service sector, AI could help drivers and passengers use competing databases to find each other without the need for an Uber to own the network.  Achieving or maintaining the Levelers Scenario might require that a strong government use its authority to break up and prevent any large concentrations of economic power, such as those arising from data or network monopolies.  But supported by AI created for the task, the Levelers Scenario envisions by 2050 a gig economy of radical social and economic equality, a world with material abundance, choice, and convenience but without managerial hierarchies and without large concentrations of wealth and power.\nHow AI algorithms might enhance human agency\nThis essay offers the Levelers as a normative scenario, which describes one way in which the transformative power of AI might lead to a future that enhances rather than reduces human agency.  The scenario raises numerous questions, not least the mix of government policies and social conditions necessary to bring such a future into being, the policies and conditions required to make it stable against the pressures that might undermine it, and the many potential unintended consequences that a world without many large commercial institutions might entail.  But this essay explores one particular issue – how AI algorithms with different goals and functions from those most common today might help enable the Levelers Scenario.\nIn thinking about normative future scenarios, a first step requires being explicit about what values constitute the good.  We focus here on the capabilities framework, which Amartya Sen (2009) developed and used as a basis for measuring, and helping to bring about, a more just world.  The capabilities framework offers an alternative to value systems based entirely on welfare economics.  Outcomes matter, but so does the process by which the outcomes are achieved.  For instance, choosing to spend a quiet day at home is different from spending a day under house arrest, though the physical outcomes may appear similar.  A just society, as emphasized by the capabilities framework, enables individuals to make reasoned and consequential choices about their own lives, to act on those choices, and to evaluate the results in terms of their own values and goals, respecting the diversity of such goals and values.  
In addition, a just society enables individuals to participate meaningfully in shaping their society, sharing in an informed and consequential way in choices made about its economic, social, and political attributes.\nThe Levelers scenario envisions widespread use of AI algorithms that enhance, rather than reduce, such human agency.  But many of today's most touted algorithms fall into the latter category.  They seek some best outcome and are less concerned with enriching the decision-making processes of the humans with whom they interact.  In operating on the world, such algorithms start with a clear set of objectives to achieve and a set of actions they can take.  They then assess the current situation, predict the consequences of various actions, and choose those actions that best achieve the objectives.  For instance, AlphaGo seeks to win a Go game, can make moves consistent with the game's rules, and each turn makes the move that most increases its chances of victory.  Autonomous vehicles seek to travel safely and efficiently to a desired destination; create an understanding of the objects around them; and each moment decide whether their objectives are best met by turning, accelerating, or braking.  The Amazon website aims to display to viewers those products with the highest probability of producing a sale to a satisfied customer.  While humans may set the objectives (e.g. the destination of the AV or a high sales volume), the algorithm aims to reduce human involvement in the choices that lead to those outcomes.  The algorithm's success is judged by the extent to which it enhances human welfare, as measured by the outcomes for at least some humans.\nWhat would it look like for an algorithm to enhance human agency?  The algorithm would certainly help achieve welfare-improving outcomes.  But it would also help individuals gather the resources – including skills, capital, and material resources – relevant to their goals and team up with appropriate human collaborators.  It would help all the individuals involved make good choices about how best to coordinate their activities and deploy their resources in pursuit of multiple goals, explore how those goals might be expanded or modified in light of what is possible, examine all these steps from different vantage points, and explain their choices to themselves and to others.  For instance, a Gigster-like algorithm that helps people form teams could be designed not just to promote efficiency and profit by adding skills to an existing organization, but to help people assemble new teams and networks around a common purpose, to help them assemble and manage the resources needed to achieve their objectives, and to help the participants' understanding of their common and individual purposes grow and deepen as they work together.\nAlgorithms do exist that help to enhance humans' ability to make reasoned and consequential choices about their lives.  This is most apparent in the field of decision support, in which algorithms, much less capable than today's AI, are used to help groups of stakeholders deliberate about contentious policy challenges, such as improving the resilience of a community to climate change, and then seek consensus on actions to address the challenge (Marchau et al. 2019).  
Such algorithms are inherently multi-scenario and multi-objective, the former to reflect multiple ways to view and interpret the world, the latter to reflect alternative ways of judging outcomes due to different interests and different ethical frameworks (Lempert, Groves, and Fischbach 2013).  The algorithms, often adapted from those in the classification and robust optimization literatures, are configured to support what is called agree-on-decision analysis, because the analysis aims to help people with different objectives and expectations about the future nonetheless reach consensus on near-term actions.4 In contrast, algorithms are more commonly developed and used to support predict-then-act analysis, which assumes that all the parties to a decision will accept a single, often computer-generated understanding of the future and then seek prescriptive recommendations from the computer on the best actions to take.\nSuch agree-on-decision algorithms are often embedded in a process called \"deliberation with analysis\" in which stakeholders deliberate on their objectives, options, and problem framings; algorithms generate decision-relevant information; and the parties revisit their objectives, options, and problem framings informed by the algorithms' information products (NRC 2009).  The process envisions that participants' understanding and views will evolve over time in response to interactions with each other and with scientific information.  In brief, the algorithms aim to facilitate a democratic process of social choice, in which diverse parties agree on actions through a Habermasian discourse in which they recognize the inescapable plurality of competing views, explicate their reasoning and logic clearly, and accept the legitimacy of multiple views.  The process, and the algorithms that support it, are judged not only by the welfare outcomes they help achieve, but also by the extent to which they empower the people involved to make what they regard as meaningful choices about their lives and society.\nHuman-agency-enhancing AI algorithms are clearly not, on their own, sufficient to bring about a Levelers scenario.  But it nonetheless remains useful to ask to what extent such algorithms could be designed to facilitate such a future.  It remains hard to know, in part because the requisite technology landscape remains under-explored.  Many people and institutions currently developing AI have incentives to reduce human agency, since excluding humans best serves the purposes of those most involved in designing the algorithms.  Firms' goals generally do not focus on the agency of their customers or workers.  Rather, firms seek control over their workers and want their customers to make choices good for the corporation, not to engage in self-reflection and enlightenment.  Researchers seek objective truth as scientists and technological virtuosity as engineers, and so seek algorithms that operate independently from human subjectivity and influence.\nThe experience with multi-scenario, multi-objective decision support may, however, be instructive.  Explicitly shifting their goal from predict-then-act to agree-on-decision analysis enabled researchers and institutions to recraft existing classification and robust optimization algorithms, originally developed to operate independently of humans, into interlocking processes and algorithmic tools that enhance human agency.  
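To make the agree-on-decision pattern concrete, the following is a minimal, purely illustrative Python sketch; the actions, scenarios, objectives, and payoff numbers are invented for this example and are not drawn from any of the tools or studies cited here.

```python
# A toy version of multi-scenario, multi-objective "agree-on-decision" scoring.
# Instead of optimizing one objective under one forecast, each candidate action
# is evaluated across several scenarios and several stakeholder objectives, and
# its worst-case regret is reported so a group can see which options hold up
# under everyone's view of the future.  All values below are invented.

from itertools import product

# payoff[action][scenario][objective] -> value, higher is better (toy numbers)
payoff = {
    "build_seawall":   {"wet_future": {"affordability": 3, "safety": 9},
                        "dry_future": {"affordability": 3, "safety": 9}},
    "managed_retreat": {"wet_future": {"affordability": 6, "safety": 8},
                        "dry_future": {"affordability": 4, "safety": 6}},
    "do_nothing":      {"wet_future": {"affordability": 9, "safety": 1},
                        "dry_future": {"affordability": 9, "safety": 7}},
}
scenarios = ["wet_future", "dry_future"]
objectives = ["affordability", "safety"]

def max_regret(action):
    """Largest shortfall from the best achievable value over all scenario/objective pairs."""
    worst = 0
    for s, o in product(scenarios, objectives):
        best = max(payoff[a][s][o] for a in payoff)
        worst = max(worst, best - payoff[action][s][o])
    return worst

# Report every option with its vulnerability rather than prescribing one "optimum".
for action in sorted(payoff, key=max_regret):
    print(f"{action}: worst-case regret {max_regret(action)}")
```

Rather than returning a single recommendation, the sketch surfaces each option's worst-case vulnerability across futures and objectives – the kind of information a deliberation-with-analysis process can put in front of parties who disagree about both.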
While at best a necessary condition, and certainly not a sufficient one, decision support processes using such agency-enhancing algorithms do seem to help generate improved social outcomes (Knopman and Lempert 2016).\nMoving towards a Levelers scenario?\nIf those seeking alternatives to the Bezos World scenario wanted to take near-term actions that might steer society towards a Levelers scenario, what might they do?\nAs one step, they might establish research activities and institutions that have the incentives and resources to develop human-agency-enhancing AI algorithms.  People might then launch pilot programs in a few sectors of the economy in which such algorithms might have the most success in helping people replace the firm rather than the firm replacing workers.  The government could institute policies that create space in the economy for such experiments to thrive, perhaps akin to the renewable portfolio standards in the energy sector.\nThe industrial revolution and its enabling technologies created vast material abundance but exacerbated tensions among the several dimensions of human freedom.  Today's AI augurs social transformations at least as profound.  Much AI research, development, and deployment currently seeks to replace humans in pursuit of technological virtuosity and economic efficiency.  The Levelers scenario, and the speculation regarding the algorithms that might support it, is offered as one of many potential scenarios intended to help people systematically explore whether and how AI might be configured to facilitate a future in which machines collaborate with humans to enhance the latter's capabilities, agency, and freedom.\nREFERENCES\nAlden, Edward, and Laura Taylor-Kale. 2018. \"The Work Ahead: Machines, Skills, and U.S. Leadership in the Twenty-First Century.\" New York: Council on Foreign Relations.\nAnderson, Elizabeth. 2017. Private Government: How Employers Rule Our Lives (and Why We Don't Talk about It) (Princeton University Press).\nChandler, Alfred Dupont. 1977. The Visible Hand: The Managerial Revolution in American Business (Harvard University Press).\nCoase, Ronald H. 1960. \"The Problem of Social Cost.\" In Classic Papers in Natural Resource Economics (London: Palgrave Macmillan).\nKnopman, Debra, and Robert Lempert. 2016. Urban Responses to Climate Change: Framework for Decisionmaking and Supporting Indicators (RAND Corporation). RR-1144-MCF.\nKrieg, Gary, Kevin Proudfoot, and Josh Rosen. 2017. AlphaGo (RO*CO Films).\nLempert, Robert, David G. Groves, and Jordan Fischbach. 2013. \"Is it Ethical to Use a Single Probability Density Function?\" (RAND).\nLempert, Robert J., Steven W. Popper, and Steven C. Bankes. 2003. Shaping the Next One Hundred Years: New Methods for Quantitative, Long-term Policy Analysis (RAND Corporation).\nMarchau, Vincent A. W. J., Warren E. Walker, Pieter J. T. M. Bloemen, and Steven W. Popper. 2019. Decision Making Under Deep Uncertainty: From Theory to Practice (Springer, in press).\nNational Research Council. 2009. Informing Decisions in a Changing Climate. Panel on Strategies and Methods for Climate-Related Decision Support, Committee on the Human Dimensions of Climate Change, Division of Behavioral and Social Sciences and Education (The National Academies Press).\nO'Reilly, Tim. 2017. WTF?: What's the Future and Why It's Up to Us (Random House).\nPendleton-Julian, Ann, and Robert Lempert. 2019. \"World Building Workshop on Technology and Work Worth Doing (Not Jobs) Post AlphaGo.\" RAND CF-398 (forthcoming).\nRoose, Kevin. 
2019. \"The Hidden Automation Agenda of the Davos Elite.\" New York Times, Jan 29, 2019.\nSchoemaker, Paul J.H. 1993. \"Multiple Scenario Development: Its Conceptual and Behavioral Foundation.\" Strategic Management Journal 14: 193-213.\nSen, Amartya. 2009. The Idea of Justice (Belknap Press).\nShapiro, Ian. 2003. The Moral Foundations of Politics (Yale University Press).\nWu, Tim. 2018. The Curse of Bigness: Antitrust in the New Gilded Age (Columbia Global Reports).", "url": "https://aipulse.org", "title": "Bezos World or Levelers: Can We Choose Our Scenario?", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=2", "id": "7e1574d10988242ed0e3e1473307c1e2"} -{"text": "Autonomous Weapons and Coercive Threats\n\nGovernments across the globe have been quick to adapt developments in artificial intelligence to military technologies. Prominent among the many changes recently introduced, autonomous weapon systems pose important new questions for our understanding of conflict generally, and coercive diplomacy in particular. These weapons dramatically decrease the cost of employing military force, in human terms on the battlefield, in financial and material terms, and in political terms for leaders who choose to pursue conflict. In this article, we analyze the implications of these new weapons for coercive diplomacy, exploring how they will influence the course of international crises. We argue that drones have different implications for relationships between relatively equal states than they do for unbalanced relationships where one state vastly overpowers the other. In asymmetric relationships, these weapons exaggerate existing power disparities. In these cases, the strong state is able to use autonomous weapons to credibly signal, avoiding traditional and more costly signals such as tripwires. At the same time, the introduction of autonomous weapons puts some important forms of signaling out of reach. In symmetric conflicts where states maintain the ability to inflict heavy damage on each other, autonomous weapons will have a relatively small effect on crisis dynamics. Credible signaling will still require traditional forms of high-cost signals, including those that by design put military and civilian populations at risk.\nIncreasingly, governments are using artificial intelligence technologies to revolutionize their military capabilities. In many ways, these technologies present the potential to transform the conduct of war and, in so doing, to alter the nature of state-to-state interactions. Prominent among the many changes recently introduced, autonomous weapon systems (AWS) pose important new questions for our understanding of conflict generally, and coercive diplomacy in particular. Automation, the most novel trait of these systems, allows states to deploy military force remotely at startlingly low cost. Automated weapons can enter enemy territory without endangering the lives of soldiers, maintain constant surveillance on important targets without risk of fatigue, and deliver deadly and highly precise strikes in an instant. Already, militaries employ remote-operated technologies to capitalize on similar advantages. As the introduction of automation streamlines and centralizes the planning and conduct of conflict, militaries will come to rely ever more on systems which build upon and extend these features. For example, U.S. 
military planning emphasizes the need to develop a \"global surveillance and strike network\" (GSS). This is expected to rely heavily on autonomous weapons. According to a key planning document: \"while many elements of the U.S. would have important roles to play in a future GSS network, it would rely disproportionately upon air and maritime forces in general and unmanned platforms in particular.\"1\nMost importantly, AWS appear to have dramatically decreased the costs of fighting a war. First, the ratio of capital to labor inputs for the conduct of war has drastically shifted. Although they require an upfront investment in design and development, once built, these weapons impose minimal risk to the lives of their operators and require far less labor for their effective performance. For the first time in history, the soldier who pulls the trigger need not be present on the battlefield. Consequently, AWS spare much of the enormous human cost that has marked the tragic history of conflict. This is important not only for the direct reason that it reduces danger to military personnel, but also, more broadly, because it offers a more \"palatable\" way to conduct war. As domestic publics increasingly find violence and casualties distasteful, leaders, particularly those held directly accountable to their publics, have found it increasingly difficult to pursue conflict.2 AWS offer a way to do so with minimal risk to the lives of soldiers, thereby greatly reducing the political and reputation costs suffered by states that choose to go to war. For example, while the US lost a full 10% of its aerial personnel in Vietnam, there was not a single pilot casualty among a total of 568 drone strikes conducted by the US from 2002 to 2015.3 By avoiding large-scale casualties, leaders who engage in conflict can also avoid the domestic and international opprobrium that might otherwise impose heavy costs. Additionally, these systems are significantly cheaper to employ. While upfront investment costs are certainly not small, when viewed in comparison with the costs invested in other advanced manned aircraft, missile technology, or nuclear capabilities, AWS are a relatively cheap technology.4 Thus, while revolutionary in a number of regards, automated weapons are particularly notable for the ways in which they decrease the costs of conflict financially, politically, and in terms of human sacrifice. Because these costs are extremely important to the existing understanding of coercive diplomacy, we explore how these weapons affect the use and perception of threats in international crises.\nIn this article, we analyze the implications of AWS for signaling between states. A well-established set of results from the literature on interstate bargaining asserts that credible communication between adversaries sometimes requires signals to be costly. All states, including those unwilling to actually follow through on a threat, stand to benefit from successfully coercing an opponent. As a result, states on the receiving end find it difficult to determine whether or not a given coercive threat demonstrates the sender's genuine intent and willingness to engage in conflict. When these signals are costly to make, however, those who are not truly resolved to follow through will be unwilling to undertake them, thereby allowing the receiver to distinguish between genuine and non-genuine threats. 
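A toy numerical sketch may help make this screening logic concrete (the values below are invented and are not taken from the models cited in this article): an unresolved challenger bluffs only when the expected gain from successful coercion exceeds the price of the signal, so pricing signals above that expected gain is what separates genuine threats from bluffs.

```python
# Illustrative only: why costly signals screen out bluffers.  An unresolved
# challenger (one unwilling to actually fight) still issues a threat if the
# expected gain from coercing a concession exceeds the cost of sending the
# signal.  Once the signal costs more than that expected gain, bluffing no
# longer pays, so a threat backed by such a cost can be believed.

def bluff_is_worthwhile(prob_concession, value_of_good, signal_cost):
    return prob_concession * value_of_good > signal_cost

VALUE_OF_GOOD = 10.0     # what the disputed good is worth to the challenger
PROB_CONCESSION = 0.4    # chance the target concedes if it believes the threat

for cost in (1.0, 3.0, 5.0):
    worthwhile = bluff_is_worthwhile(PROB_CONCESSION, VALUE_OF_GOOD, cost)
    print(f"signal cost {cost}: bluffing worthwhile? {worthwhile}")
# Prints True, True, False: once the cost exceeds 0.4 * 10 = 4, an unresolved
# challenger stays quiet, which is what lets the target treat the signal as credible.
```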
Key to this argument is that the costs of signaling must be sufficiently high such that bluffing states will prefer not to make an empty threat, even when it would be believed. Some rationalist theories have therefore argued that only high-cost signals which risk casualties or impose a hefty financial burden are sufficient. Given that the conduct of war with autonomous weapons involves little risk to human life and is drastically cheaper than ever before, are such high-cost signals still necessary? Is it possible to use these weapons to credibly signal intent and successfully coerce opponents when they are so remarkably cheap both financially and politically to deploy?\nWe focus on how a challenger's acquisition of AWS against a target without these capabilities will affect signaling. We argue that, where AWS lower the costs of conflict, states are able to credibly signal intent with certain types of low-cost signals. When powerful states develop these technologies and face much weaker opponents, they will find that communicating credible intent requires paying fewer costs in the form of mobilizations, tripwires, and the staking of bargaining reputations. In limited-war contexts where the anticipated costs of war are low, the mobilization of drones and other commensurately low-cost autonomous weapons systems can still send a credible signal of resolve. The new technologies influence not just the level of cost associated with credibility, however, but also the availability of certain other types of costly signals. This might suggest that states will turn toward costless diplomatic signaling in response, but we argue that power asymmetries will hinder the effectiveness of diplomatic signals as well. Rather, in crises over specific issues, states are likely to depend upon staking and defending reputations for resolve more than they did in the past; in some cases, this will imply a greater likelihood of conflict.\nWe argue further that while the costs of employing force with autonomous weapons may be dramatically low, the costs of war do not necessarily fall significantly. Where an adversary retains the ability to impose substantial damage on a state's society, the costs of war remain high. As a result, conflicts between AWS-endowed challengers and non-AWS-armed targets do not necessarily involve a large shift in relative war costs. We argue that threats backed by the mobilization of autonomous weapons systems are unlikely to be able to demonstrate resolve when the costs of war are moderately high. In these cases, the inevitable large societal sacrifice that would result from costly conflict entails the continued need for signals of resolve to be associated with high costs. We therefore expect that, when facing a relatively strong opponent, states with AWS capabilities will rely on certain traditional forms of costly signaling. These include voluntarily placing tripwire forces in harm's way and risking heavy casualties, or investing in financially burdensome preparations.5\nThe Bargaining Model of War\nTo understand how autonomous weapons might change the conduct of international conflict, one must start with an understanding of how crises have been conducted and understood in their absence. A fundamental dilemma in any given international dispute arises from two essential facts. First, no opponent can ever truly know another state's willingness to go to war. 
Second, unresolved states always have incentives to bluff, and issue empty threats, when doing so will successfully coerce an opponent. If a challenger knows its opponent will concede to a threat, it has enormous incentive to make this threat as convincingly as possible, even if it is not actually willing to follow through. This behavior, in turn, makes all target states dubious that a coercive threat against them is genuine. The crisis bargaining literature has long sought to address how states can escape this problem and make credible threats that successfully communicate a genuine intention to follow-through.\nA core thesis of this literature contends that threats are only credible when they are costly to make. Typically, these game theoretic models involve two players, a target who receives some form of threat, commonly referred to as a signal, and a challenger who makes this threat. States make decisions rationally, based on cost-benefit calculations, and both seek to maximize their share of some good in dispute. Challengers who value the good highly or have a high probability of winning compared to the costs of fighting are resolved to fight should the conflict escalate that far. Threats made by resolved challengers are therefore genuine. On the other hand, when costs of war outweigh the benefits of fighting and winning the good, challengers are unresolved. While these states wish to possess the good, they are unwilling to suffer the costs of conflict required to fight for it. Threats made by unresolved challengers are therefore disingenuous bluffs that they will not see through. Generally, a target on the receiving end of a coercive signal has no way of knowing whether or not the threat is genuine because the challenger's value of the good or costs of war are private information, only known to the challenger. When threats are costly to make, however, unresolved challengers will be dissuaded from bluffing because they are unwilling to bear the costs required to send the coercive signal. By weeding out the unresolved bluffing states in this way, costly signaling allows a resolved challenger to credibly communicate its intent and allows the target to distinguish between genuine and non-genuine threats.\nTraditional models of costly signaling have focused on two main ways to incur costs and communicate resolve. The first focuses on sunk-cost signals where the challenging state invests in efforts that are, in themselves, costly to undertake (Fearon 1997). Typically, these costs are borne through mobilization and preparation for war. By showing a willingness to undertake actions that, by their nature are costly to perform and difficult to reverse, states can credibly communicate their willingness to fight.6 This logic helps to explain why states often forfeit the benefits of a surprise attack and instead make overt preparations for war that are insufficient to give the state any realistic strategic advantage. For example, the U.S. has sent military assets into a tense region even though the addition of these assets does not greatly affect the balance of power. Further, investing in arms building and certain weapons technology may be more significant in its capacity to demonstrate a willingness to bear costs in preparation for conflict than in the ways these weapons influence military effectiveness and capability on the battlefield.\nThe second form of costly signaling operates through a so-called tying-hands mechanism. 
These signals do not impose any cost when they are made initially, but in the event that the issuer backs down they impose a heavy cost. In this way, they tie the hands of those who make them. Fearon argued that one prominent form of tying-hands signal comes through public statements and domestic audiences. According to this argument, domestic observers punish a leader for issuing threats, in so doing, \"engaging the national honor,\" and subsequently backing down.7 Particularly in democratic states where leaders are held directly accountable to their domestic public through regular elections, leaders are thought to be highly concerned with public opinion and keen to avoid incurring any political and reputation costs for failing to follow through on a threat. As a result, leaders who are not truly resolved to fight will be unwilling to expose themselves to the risk of having to go back on their word and of incurring these so-called audience costs. In this way, verbal threats which in themselves do not carry a cost can still credibly demonstrate resolve when publicly issued.\nMany real-world signals have both sunk cost and tying-hands aspects to them. Mobilization for war incurs costs that are paid whether or not the war is fought, making it a sunk cost signal. But if a significant part of the cost of conflict involves moving troops and military hardware into the theater of conflict, mobilization is also a tying-hands signal because it affects the relative value of choosing peace or war in the future.8 Tripwire forces are another example of a signal that contain elements of each type. In these cases, troops are deployed near the border with the target but are much smaller in size and capability than the target's forces. As such, these forces give little military advantage to the challenging state and would be easily wiped out by the target were war to break out. Any conflict in the region would essentially guarantee a substantial loss of the challenger's troops, and this, in turn, would quickly galvanize the challenging state and its domestic public into a full-scale war effort. The main purpose of deploying these troops is therefore not to gain any strategic advantage but rather to demonstrate willingness to enter conflict. Since this is done by influencing the relative political cost of entering the conflict or staying out in the future, this is a tying-hands signal. But since the risk of loss of life incurred and the costs of maintaining and mobilizing the troops must be paid regardless of whether conflict occurs, it is also a sunk cost signal.\nBargaining with Autonomous Weapons Systems\nIn contexts where revolutionary, low-cost military technology dramatically reduces war costs, how is credible signaling conducted? Given that autonomous weapons systems are exceptionally cheap to deploy, are leaders still able to forward deploy these weapons as a form of sunk-cost signal? Can leaders use AWS to send hand-tying signals when the use of these weapons draws far less public and political attention? We argue that the answer to these questions depends on the type of conflict involved. Autonomous weapons have very different consequences in asymmetric conflicts where relative costs of war are highly skewed than they do in more balanced, symmetric disputes.\nIt is important to note that the concept of symmetry of war costs between the target and the challenger is distinct from the resolve of the players. 
In asymmetric relationships, the challenger, regardless of its resolve to fight, faces very low relative costs of war. In symmetric relationships, however, the costs of war are fairly balanced between the target and the challenger, again regardless of the challenger's resolve. This is most easily understood through the simple equation relating the costs of war to the benefits of winning. Let the probability that the challenger wins the war be p, its value for the good in dispute be v_c, and its costs of war be c_c. We can then state, according to expected utility theory, that the challenger is resolved to fight if p·v_c − c_c > 0. Similarly, the target, with value v_t and war costs c_t, is resolved to fight if (1 − p)·v_t − c_t > 0. In a symmetric relationship c_c ≈ c_t, and in an asymmetric relationship c_c is much less than c_t. In each relationship type, we can further categorize challengers as resolved or not by evaluating whether p·v_c − c_c is greater than zero. In other words, two challengers might have exactly the same war costs c_c, so that the symmetry of their relationship to the target is the same, but different values of p·v_c, such that one is willing to fight and the other is not. In the following sections, we discuss how the introduction of autonomous weapons affects the dynamics of credible signaling within symmetric and asymmetric relationships separately. The challenge of credibly communicating resolve remains in each case since there are resolved and unresolved challengers in both, but the effects of AWS diverge.\nEffective Signals in Asymmetric Relationships\nWhen the costs of conflict for the challenger decrease, the costliness of the signal can be smaller as well. This can be seen in the models analyzed in Fearon (1997). In both sunk-cost and tying-hands signaling models, the equilibrium cost that highly resolved states employ to credibly signal their resolve decreases as the costs of war decrease. With low war costs, states are more willing to contest issues, making their threats inherently more credible. The intuition of this result is straightforward. When it is not very costly for an opponent to follow through on a threat, the receiver is more likely to take it seriously. For example, imagine that a challenging state mobilizes autonomous weapons and readies them at the border with the target. In the asymmetric case, the opponent can keep war limited to only these low-cost autonomous forces and would therefore suffer only minimal cost to follow through on the threat and engage in conflict. The target, knowing how easy and cheap it would be for the challenger to follow through, is therefore very likely to take the threat seriously. In other words, because war itself is not highly costly, states do not need to demonstrate a willingness to bear large sacrifices in order to demonstrate their resolve to fight. Thus, it is not self-evidently the case that AWS make signaling resolve harder, as some have argued.9 Specific capabilities of AWS enhance these effects. Two of the most important are the ability to sustain operations at low cost and to target objectives precisely, reducing costs that may arise from collateral damage (Zegart 2018).\nAutonomous weapons do not merely reduce the costs of conflict, however. They also reduce the costs of deploying force abroad, an action that has traditionally been used as a sunk-cost signal of resolve to fight. Because AWS are costly to build but relatively cheap to mobilize and deploy, the cost and time required to deploy force abroad have been greatly reduced. 
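As a worked illustration of these resolve conditions (the probabilities, values, and costs below are invented for the example, not estimates of any real case):

```python
# Worked example of the resolve conditions above.  The challenger is resolved
# when p * v_c - c_c > 0; the target when (1 - p) * v_t - c_t > 0.  Symmetry is
# a statement about how c_c compares with c_t, separate from resolve itself.

def resolved(p_win, value, cost):
    """Expected-utility test: fight only if the expected value of winning exceeds the cost."""
    return p_win * value - cost > 0

p, v_c, v_t = 0.9, 10.0, 10.0

# Asymmetric case: AWS leave the challenger's war costs far below the target's.
c_c, c_t = 0.5, 6.0
print("challenger resolved:", resolved(p, v_c, c_c))      # True:  0.9*10 - 0.5 > 0
print("target resolved:", resolved(1 - p, v_t, c_t))      # False: 0.1*10 - 6.0 < 0

# Symmetric case: both sides can impose heavy costs (c_c close to c_t), so a
# cheap threat reveals little about willingness to bear the costs of real war.
p_sym, c_sym = 0.5, 7.0
print("challenger resolved:", resolved(p_sym, v_c, c_sym))      # False
print("target resolved:", resolved(1 - p_sym, v_t, c_sym))      # False
```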
Further, AWS capabilities are also replacing traditional force projection capabilities in military planning. For example, air defense and missile capabilities make more traditional surface ships and aircraft increasingly vulnerable.10 According to the so-called \"Third Offset Strategy\" unveiled by US Secretary of Defense Chuck Hagel in 2014, AWS technologies are the appropriate response to these \"anti-access / area denial\" or \"A2AD\" capabilities being developed by other powers.11\nBecause militaries have long used mobilization and preparation for war to send high-cost signals of resolve to fight, the replacement of traditional military hardware with AWS carries significant implications for signaling. This is particularly true for sunk-cost signals that are applied to particular foreign policy issues in shorter-term crises. To be clear, many sorts of sunk cost signals will remain, such as the production and forward installment of arms from AWS to missile silos. These actions, however, are typically tied to signals of general resolve to defend broad interests and regions, or sustain power projection capabilities and great power status. Such signals are less relevant to signaling about particular issues in a specific crisis between two opposing states. Deployment and mobilization decisions have often played important roles in signaling intent and resolve in such cases. As the costs of utilizing these traditional, labor-intensive signals grow comparatively larger and larger next to the costs of employing AWS, however, it is unlikely that states will continue to use them. As cheaper autonomous weapons technologies advance, traditional, high-cost signals like tripwires are likely to be seen as far too costly for the scope of the challenge. Game theorists will point out that these high-cost, traditional signals will always exist since, theoretically, even the burning of money may serve as a costly signal. In reality, however, no political leader has ever been willing to do this directly. Political leaders will always look for signals that achieve the desired end at minimal cost. What might these cheaper alternatives be?\nOne might expect that states would turn to closed-door diplomacy associated with costless signaling mechanisms. If AWS increase power asymmetries, however, many forms of diplomatic signaling will be unavailable as well. Diplomacy typically convinces by taking on some form of risk. For example, making a demand may risk that an adversary forms an opposing coalition, strikes first, or builds more arms than it otherwise would have. Similarly, insisting on a highly favorable outcome in negotiations may jeopardize the possibility of achieving an intermediate compromise, or may increase the likelihood that negotiations will break down leading to the outbreak of conflict. However, when the power differential between two states is extremely large, as it is in asymmetrical relationships, the adverse consequences risked through the conduct of forceful diplomacy disproportionately rest on the weak state. The powerful state, on the other hand, has little to lose because even if diplomacy fails entirely and war breaks out, its overwhelming power likely ensures that it will easily win the conflict. This can make it very difficult for powerful states to take on the risk necessary to facilitate credible diplomacy. Where AWS exacerbate already extreme power asymmetries, this problem is likely to intensify. 
The fact that the target of coercion is less willing to resist because of the sheer overwhelming power of the coercing state, or that the target's countering actions are less consequential, can imply that the most powerful states have less ability to use costless diplomacy to avoid conflict.12 The result is a \"Goliath's Curse\" – states with extreme power lose the ability to signal in certain ways.13\nIn cases where AWS create asymmetries of power and cost, a clear signaling possibility remains, namely, tying-hands signaling based on reputation.14 The essential task for the target in any signaling interaction is to form accurate beliefs and expectations about the challenger's true intentions based upon the information it gathers by observing the challenger's behavior. The game-theoretic crisis bargaining models often allow the challenger only one action to affect the target's beliefs. In the real world of international politics, however, these expectations and beliefs are not limited to this one-shot interaction in the context of a specific dispute. On the contrary, the target often has a long history of direct interaction with the challenging state and has observed its behavior with others over time and across diverse contexts. In this way, reputation and status produce expectations about a challenger's behavior generally which then shape expectations about its resolve to fight in particular circumstances. As such, developing and maintaining a certain reputation remains an important way to establish the credibility of one's stated intent in any given crisis. A reputation for following through on commitments, for resolve, for military strength and capability, for high cost tolerance, or for perseverance could convince a target that concession is optimal.\nCompared to other forms of signaling, reputation signals of resolve have the property that they often increase the likelihood of conflict (Fearon 1997, Sartori 2005). Importantly, reputation is built upon a consistent pattern of behavior. Acting in ways that are inconsistent with or harmful to one's current reputation will alter it. This may impose costs later on as observers form new, different expectations. As a result, the incentive to maintain a reputation often provides an additional incentive for war, and one that can apply to both sides at once. Because all states desire a reputation for resolve, target states which might otherwise concede to the challenger's demand may be pressured to stand firm in order to maintain their reputation. Similarly, challengers who initially miscalculate such that the target refuses to accept a demand when the challenger had counted on a concession may wish to negotiate a different mutually-agreeable settlement. If backpedaling and renegotiating damages the challenger's reputation by making it appear pliable or irresolute in the face of defiance, however, the challenger may also prefer to go to war to preserve its reputation. That said, reputation signaling also reduces the states' need to fight by allowing actors to better communicate what they are willing to fight for. Nevertheless, the use of reputation signals instead of others usually increases the likelihood of conflict. 
Thus, as AWS exaggerate asymmetries of power and other traditional forms of signaling appear enormously costly in comparison, states may turn more and more to reputation as the basis of credibility and this, in turn, may tragically increase the incentives for war for both the target and the challenger.\nFinally, autonomous weapons systems may alter the frequency and form of asymmetric conflict. A reduction in war costs corresponds directly to an increase in resolve through a basic cost-benefit analysis. Simply put, a state is more likely to find that the benefits of pursuing both conflict and coercive diplomacy outweigh the costs when these costs are lowered. This is of particular interest for borderline issues where the sudden reduction in war costs changes the state's preferences for pursuing the issue. Prior to the acquisition of autonomous weapons, a state would have seen the cost of pursuing these issues as greater than the benefits, and therefore preferred to let the status quo continue. After the introduction of autonomous weapons, however, a state might suddenly find the benefits outpacing the now drastically reduced costs and would thus choose to pursue the issue. An implication may be that states that gain autonomous weapons will begin to pursue a new set of issues for which they have relatively low resolve, but where the costs of war are even lower as a result of AWS capabilities. This may mean that AWS-endowed states issue more threats over relatively low-level issues compared to their conventionally armed counterparts. Future work should focus on exploring the observable indications of this change in behavior empirically. U.S. involvement in affairs around the globe, outside of a Cold War context, is often thought of as a new imperative resulting from the War on Terror, but it may also represent the decreased costs of involvement through drones in Pakistan, Afghanistan, Yemen and elsewhere. Indeed, existing literature suggests that as war costs lower, the probability of conflict increases.15 AWS capabilities will likely increase this trend.\nIn sum, states possessing AWS in asymmetric conflicts can credibly signal at low-cost, but this is far from the only implication of this technological revolution. States only need to demonstrate a willingness to engage in behavior that is proportionately costly to their anticipated costs of war. Because AWS dramatically lowers a challenger's war costs in an asymmetric conflict, low-cost signals are credible in these cases. At the same time, however, low costs of mobilizing for particular conflicts, combined with power asymmetries, imply that some traditional signals will lose their efficacy. Traditional signals,  particularly those involving the risk of fatalities, will be seen as imposing far greater cost than appropriate or necessary. As AWS further exacerbates power inequalities, strong states may find it more and more difficult to conduct some forms of closed-door diplomacy. As a result, states may become increasingly more dependent on the hand-tying signals of staking and preserving their bargaining reputations, even though this will sometimes lead to unwanted conflicts. Finally, powerful states with AWS capabilities are likely to face new incentives to pursue low-level disputes. 
Given that this technology makes it easier and cheaper for strong states to deploy force and make credible low-cost threats, we may observe an increase in the number of new issues pursued by strong states toward their weak counterparts.\nEffective Signals in Symmetric Relationships\nWhile possession of autonomous weapons drastically lowers a state's costs to deploy force abroad, it may not necessarily decrease the costs of war at home. It is important to remember that a state's costs of war are largely driven by the harm imposed by the opponent. An adversary may still employ tactics or capabilities that result in very costly destruction and loss of human life, and these costs are not likely affected by the introduction of autonomous weapons. In particular, where intensive attacks occur on one's own soil, devastating destruction is likely not reduced by the development of AWS. We define cases where opposing sides possess a similar potential to impose substantial costs on each other to be symmetric relationships. No matter how advanced autonomous weaponry becomes, the basic fact remains that no defense system on the horizon can ever be impenetrable; for the foreseeable future, the proverbial AWS or missile \"can always get through.\" Thus, even if only one side possesses AWS, the basic fact that both retain the ability to impose heavy war costs on each other means that traditional, high-cost signals will remain important.\nWhen symmetric conflicts threaten large-scale warfare and heavy war costs, a proportionately larger cost is necessary to demonstrate a willingness to bear these costs and the resolve to go to war. Take the extreme example of two nuclear powers: in this type of symmetric conflict, the costs of war loom extremely large. Mobilizing low-cost autonomous weapons against a foe with the potential to unleash nuclear holocaust would be vastly insufficient to demonstrate a willingness to bear such costly devastation. Rather, in cases of large, symmetric war costs, tripwire forces will remain relevant and useful to challengers. As autonomous weapons drive the likelihood that conflict will require human sacrifice lower and lower, willingness to accept the risk of battlefield casualties will become an increasingly potent signal of resolve. Moreover, as AWS replace traditional labor-intensive force projection methods and make force projection less costly, the sunk-cost signaling aspect of military mobilization and preparation will increasingly fail to meet the high-cost threshold necessary to credibly signal resolve. As a result, signals will become increasingly important when they demonstrate a willingness to suffer heavy human sacrifice, tying hands by ensuring that states would retaliate and fully engage in conflict.\nFurther, just as in the Cold War, a key set of questions surrounds when AWS-capable adversaries can keep conflicts limited to particular weapons technologies and levels of violence, or whether the risks of inadvertent wars will remain. These risks may be compounded by the speed of AWS technologies and the potential for first-mover advantages in combat. Indeed, a crucial advantage of autonomous weapons is that they can observe and act more quickly than humans. This speed, combined with the potential for AWS technologies to initiate disabling strikes, may be destabilizing. That is, AWS have the potential to make first strikes against powerful adversaries more attractive military options. 
As adversaries develop ways of interrupting communications systems, these weapons may become increasingly autonomous. In fact, credible deterrents may require systems that have the autonomy to act even after governments have been destroyed. These factors could increase the risks of crisis escalation and war. AWS systems interacting in unforeseen ways may even produce dynamics like the one that resulted in the 2010 \"flash crash\" in the U.S. stock market when high-frequency trading algorithms generated an extreme change in asset prices.16 On the other hand, autonomous systems, in reducing human error and being free from aspects of human crisis psychology, may also reduce the likelihood of conflict. These are critical areas for future research.\nConclusion\nIn this article, we have argued that autonomous weapons have important implications for threat-making and coercive diplomacy in certain types of conflicts. In asymmetric relationships where a strong state threatens a much weaker target, AWS capabilities can drastically lower the costs for the strong state to follow through on a coercive threat. As a result, high-cost signals traditionally used in interstate bargaining are no longer necessary to communicate resolve. This is simply because a credible signal must only demonstrate a willingness to bear costs to a commensurate degree with war costs. That said, autonomous weapons may reduce the effectiveness of signals which previously communicated resolve through costly mobilizations for war. Strong states may also find it difficult to turn to diplomatic solutions in asymmetric conflicts as AWS capabilities exaggerate power disparities, making it harder for the strong state to leverage its willingness to undertake risk to credibly communicate. This may precipitate a switch to establishing credibility through staking reputation, which will increase the incidence of conflict. Finally, by lowering the costs of war, autonomous weapons may spur states to pursue a new set of issues that were previously just below the cost-benefit threshold.\nWe see less near term change in the signaling dynamics of symmetric conflicts. While these weapons substantially lower the costs to deploy force abroad, they do not guarantee equivalently low war costs. In conflicts between states of similar power and capability, conflict entails heavy costs not only through the deployment of force, but also, more prominently, through the costs of destruction on society. Where an opposing state can impose heavy damage within one's border, war costs remain high in spite of AWS capabilities. As such, credible signaling in these disputes will still rely on a set of traditional, high-cost signals. In particular, as military technologies grow ever cheaper to deploy, willingness to risk fatalities and sacrifice human life is likely to become an increasingly salient, high-cost signal.\nReferences\nBanks, Jeffrey S. 1990. \"Equilibrium Behavior in Crisis Bargaining Games.\" American Journal of Political Science 34(3):599-614.\nBraumoeller, Bear. 2013. \"Is War Disappearing?\" Unpublished Manuscript.\nCheung, Tai Ming and Thomas G. Mahnken. 2017. The Gathering Pacific Storm. Amherst, NY: Cambria Press.\nDafoe, Allan, Jonathan Renshon and Paul Huth. 2014. \"Reputation and Status as Motives for War.\" Annual Review of Political Science 17:371-93.\nFearon, James D. 1994. \"Domestic Political Audiences and the Escalation of International Disputes.\" American Political Science Review 88(3):577-92.\nFearon, James D. 1997. 
\"Signaling Foreign Policy Interests: Tying Hands Versus Sinking Costs.\" Journal of Conflict Resolution 41(1):68-90.\nFey, Mark and Kristopher W. Ramsay. 2011. \"Uncertainty and Incentives in Crisis Bargaining: Game-Free Analysis of International Conflict.\" American Journal of Political Science 55(1):149-69.\nMartinage, Robert. 2014. Toward a New Offset Strategy: Exploiting US Long-Term Advantages to Restore US Global Power Projection Capability. Center for Strategic and Budgetary Assessments.\nOffice of Chief Financial Officer. 2017. \"Department of Energy FY 2018 Congressional Budget Request.\" National Nuclear Security Administration.\nOffice of the Undersecretary of Defense. 2017. \"Comptroller/CFO, Program Acquisition Cost by Weapons System.\" United States Department of Defense FY 2018 Budget Request.\nPinker, Steven. 2011. The Better Angels of Our Nature: Why Violence Has Declined. Vol. 75 Viking New York.\nSartori, Anne E. 2005. Deterrence by Diplomacy. Princeton, NJ: Princeton University Press.\nScharre, Paul and Michael Horowitz. 2015. \"Keeping Killer Robots on a Tight Leash.\" Defense One.\nSchelling, Thomas C. 1980. The Strategy of Conflict. Harvard University Press.\nSechser, Todd S. 2010. \"Goliath's Curse: Coercive Threats and Asymmetric Power.\" International Organization 64(04):627-60.\nSerle, Jack and Jessica Purkiss. 2017. \"Drone Wars: the Full Data.\" The Bureau of Investigative Journalism. URL: https://www.thebureauinvestigates.com/stories/2017-01-01/drone-wars-the-full-data\nSlantchev, Branislav. 2010. Military Threats: the Costs of Coercion and the Price of Peace. New York, NY: Cambridge University Press.\nTrager, Robert F. 2011. \"Multi-Dimensional Diplomacy.\" International Organization 65:469-506.\nTrager, Robert F. 2017. Diplomacy: Communication and the Origins of International Order. Cambridge, England: Cambridge University Press.\nZegart, Amy. 2018. \"Cheap Fights, Credible Threats: The Future of Armed Drones and Coercion.\" Journal of Strategic Studies 0(0):1-41.\nBy Ciara Sterbenz, PhD Candidate, UCLA Dept. of Political Science and Robert Trager, Associate Professor UCLA Dept. of Political ScienceThe post Autonomous Weapons and Coercive Threats first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "Autonomous Weapons and Coercive Threats", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=2", "id": "e393396314d5ccf028ecf4ff2c473519"} -{"text": "Technocultural Pluralism\n\n\n Download as PDF\n\nA \"Clash of Civilizations\" in Technology?\nPreamble\nAt the end of the Cold War, the renowned political scientist, Samuel Huntington, argued that future conflicts were more likely to stem from cultural frictions– ideologies, social norms, and political systems– rather than political or economic frictions.1 Huntington focused his concern on the future of geopolitics in a rapidly shrinking world. But his argument applies as forcefully (if not more) to the interaction of technocultures.\nBy technocultures, I mean the stitched global patchwork of interacting technological ecosystems we currently live in. For an intuitive illustration of these distinct ecosystems, observe variations in these popular platform choices circa 2017 (Figure 1.). Given the proliferation of global tech platforms (e.g. Facebook, WhatsApp, LINE, etc.), these variations can give noisy hints about where technocultural fault-lines lie. 
I argue that ecosystems are characterized not just by local consensus or concordance in tech adoption, but also by concordance in culture, policies, tech innovation, and deployment priorities.\nFigure 1: Mapping Out Dominant Social Media Platform Popularity Across the Globe as of 2017. Interestingly, observed platform-choice clusterings or signatures align quite closely with the cultural fault-lines Huntington outlined almost 30 years ago. We can roughly make out Western-Europe-and-USA-and-Australia, China, Eastern Europe, Japan, and Islamic-Hindu spheres of influence. (Data Courtesy of GlobalWebIndex.net, Cluster Analysis Courtesy of Joshua S. Mendelsohn)\nThe main hypothesis is twofold.\n\n[Technocultural Frictions]: an AI \"technocultural cold war\" is more likely than not, if not already in progress. This refers to a state of ongoing regulatory friction among multiple technocultures or governance regimes, forced to interact because of effective geographic proximity, political necessity, and/or economic advantage. The focus here is on competitive or adversarial frictions.2 Put differently, technocultural friction refers to friction due to the necessary interaction between technology policy spheres of influence.3\n[Technocultural Pluralism]: the prospect of a global monolithic AI technoculture emerging in the near future is implausible. Persistent pluralism is more likely. By pluralism, I mean a persistent diversity in the global technoculture. These hypotheses are not necessarily AI-specific. But the current efflorescence of data-hungry machine learning innovation in AI heightens the salience of cultural differences.\n\nThis piece has two aims. The first task is descriptive (like most of Huntington's original 1993 discussion). I aim to describe underlying factors and dynamics that foster the development of differentiated technocultures. I build up key concepts (not least of which is a clearer depiction of technoculture). This descriptive exploration also serves a persuasive function. Technocultures are easier to track once we observe how the warp and weft of technology innovation, deployment, culture-specific norms, and regulation \"conspire\" to differentiate our global technology environment. The second task aims to go beyond description to highlight dynamics and implications of technocultural pluralism. It is worth highlighting specifically the important implications of data privacy policies, data localization, and population size as mechanisms for differentiation in the evolution of technocultures around the world.\nPart of the motivation for this discussion is to counter a specific perspective (admittedly a strawman perspective and often an inchoate one when held). This tempting perspective anticipates a future regulatory scenario featuring a monolithic global technology ecosystem with little to no geographic cultural variation. Although this is a strawman position, elements of it arise in some technology policy conversations. What can we say about the prospects of such a monolithic technocultural world? If this homogeneous outcome is unlikely, what are the regulatory and governance implications?  Hopefully this exploration starts us off with basic tools to gain more insight into these types of questions.\nTechnoculture… What is that?\nFirst there is a question of what we mean by a technoculture.\nThe term technoculture here refers specifically to the combination of a technology4 regime and the culture5 in which it is embedded. 
The concept of a technoculture forces an engagement with questions of how cultural contexts affect, influence, or determine the evolution, deployment, and adoption of technological artifacts. This will include questions about the controlling innovation culture, the prioritization of problems for technological innovation, expected modes of deployment, etc.\nIs this (or any) conception of technoculture useful?\nAt first blush, the concept of technoculture may seem paradoxical; technology is often supposed to be the objective or value-neutral fruit of dispassionate scientific analysis and design. But even under the debatable assumption of a perfectly value-neutral design process, the choice of problems on which to apply technological innovation is subject to cultural influence. As a recent anecdotal6 illustration, take the polarized response to the demonstration of the use of machine learning models to infer criminality from face images.7 The authors (of Chinese origin) maintained that this was perfectly acceptable, while many American tech commentators strenuously objected.\nEven the assumption of value-neutral scientific design wilts under light scrutiny. The constraints of ML development processes mean that designers make myriad non-negotiable design choices that will affect users,8 including users with unexpected characteristics. Some of these design choices include impositions of norms and values (e.g. concerning fairness/equity, transparency). The Nymwars of 2012 offer a concrete case in point:9 social media platform designers decided to impose and enforce the norm of only allowing profiles with real names. That decision stood in opposition to prior established norms of online pseudonymy in certain communities.\nGiven these observations, it is now less controversial to assert that technology is inherently cultural. Technological artifacts are not free of cultural or ethical values (implicit or explicit).10 Cultural values infuse the innovation, design, and use of technology. Winner11 recounts numerous examples of conscious and unconscious deployment of technology artifacts that either imposed or fostered political preferences (e.g. decisions in town-planning and bridge architecture in Long Island, NY explicitly designed to enforce extralegal segregationist preferences).\nThis is especially true of modern AI. Modern AI depends primarily on data. Data ecosystems are comprehensive records of cultural values and norms – neutral, good, or bad. Current conversations about data-diet vulnerabilities in AI and biases in algorithms highlight this point more emphatically.12 Modern data-driven ML systems learn patterns (e.g. language behaviors and biases) present in their training data.\nFurthermore, the contours of existing and future data ecosystems are strongly determined by operating data privacy regulations. Questions of privacy are (at least) as cultural as they are technological. On the cultural dimension, cross-national survey studies of attitudes towards privacy and cultural influences on privacy show significant relationships between privacy behaviors and quantified cultural factors,13 especially pragmatism, individualism, and country.14 These relationships are found to hold even after controlling for population experience with/exposure to technology.\nBesides the cultural dimension, both privacy enhancing technologies and privacy policies15 determine how much and what kinds of data are available to train AI systems. 
Privacy enhancing technologies (PETs) highlight the outer physical limits of privacy preservation.16 Privacy policies occupy a space between cultural factors and technology. These policies allocate rights and specify incentives to govern the behavior of data sources and sinks. Cultural and consensual norms influence the overall balance of such rights and incentives. The EU's GDPR sets a precedent asserting the rights of users as primary individual controllers of their data (control but not necessarily rights to compensation for use). Chinese governance culture includes a current precedent of asserting communal control of individual data to address public welfare (e.g. to control public information consumption or to enable public reputation scoring). And technology deployment in lower income countries (e.g. Aadhaar deployment in India) has been found to be less subject to privacy concerns.17\nWhy Do Technocultures Matter? Is a Universal Technoculture Plausible?\nBack to Samuel Huntington's post-cold-war observation and its adaptation to technocultures. If the discussion in the previous section is compelling enough, then we are led to concede the following:\n\nAI technology (and any technology) is subject to the influence of its cultural context.\nThere is a global diversity of technocultural contexts — even if the geopolitical boundaries or fault-lines are fuzzily defined at best.\nCultural values inform tech evolution, tech policy, and tech regulation — especially concerning data and AI.\n\nInteraction between technocultures is unavoidable in our rapidly shrinking world. And differences in policy and regulation can lead to friction in interaction. This leads to the aforementioned two-fold hypothesis about technocultural pluralism and frictions. The interplay of the highlighted technocultural factors hints at the idea that the global AI technology ecosystem is likely to fracture along the culture-specific lines telegraphed in these data ecosystems. And AI's intense data dependence means privacy policy18 is likely a key lever in technocultural divergence.\nThe technocultural friction point is somewhat supportable given:\n\nrecent discussions of "AI arms races,"\nthe flurry of AI strategy statements from different countries19, and\nrecent geopolitical squabbles over commercial data localization20 and/or foreign investment in sensitive tech sectors.\n\nThe technocultural pluralism point is harder to support fully since it is a statement about the future evolution of technocultures.21 In the context of data-driven AI tech, the cultural specificity of available or accessible training data (either due to local norms in data behavior or due to local data privacy policies) may lead to persistent fracturing of the evolution of AI tech. In the more general technology context, observable cultural differences in tech use, innovation, and regulation suggest persistent differentiation.\nThe pluralism hypothesis is admittedly a less-than-ironclad forecast. But it is a forecast based on the observation that we have yet to see global cultural convergence in the long (short?) history of civilization. Cultural differences (e.g. in language use) persist in spite of long interaction. A technocultural monolithic future is as unlikely as a culturally monolithic future. Sure, this is a conservative prediction. But it is likely a more reasonable one given the historical record.\nPluralism… Now what?\nWhat are the strategic implications of these hypotheses? 
A persistently pluralist technocultural future raises some hard questions like: Are technocultural differences truly unresolvable in the long-term? If they are not resolvable, what are the possible equilibria in the long-run? Can a multipolar technocultural world be stable? Are technocultures inherently \"winner take all\"? Is there an alternative to technocultural dominance? In the short run, how do we understand the space of potential technocultural frictions and conflicts? What are the evolutionarily stable strategies22 for playing the game of technocultural thrones?\nAll useful questions. Probably. Rather than give definite answers to these questions,23 I instead explore a characterization of features of an inhomogeneous tech ecosystem and an examination of plausible future scenarios that arise under the pluralism hypothesis.\nIt is worth highlighting that pluralism is not necessarily a negative. The ability of local domains to determine local technoculture can be very empowering e.g. the ability of poorer nations to adopt technologies and deploy them to solve pressing local problems.\nA Pluralist World: Useful Levers & Interesting Dynamics\nIt is useful to explore how the actions of aggregate agents (government, populations, commercial entities) can influence the evolution of AI technology and the global technoculture more generally. Here is a non-exhaustive exploration in broad-strokes:\nData Localization Policies\nData localization is an emerging trend in data privacy and technology regulation. Data localization refers to restrictions or prohibitions on exporting data about local citizens or data originating from local sources. Notable examples of such regulations include EU's GDPR Article #45,24 China's Cybersecurity Law Article #3725 and Russia's Federal Law no.242-FZ.26 GDPR's Article #45.2(a), for example, requires an assessment of the normative \"adequacy\" of foreign jurisdictions before certifying the outward transfer of EU data. Article #37 of China's Cybersecurity Law articulates similar constraints on outward data flows. Exceptions would require extensive security vetting.\nThere are justifiable reasons for imposing localization regulations:\n\n[Security Constraints] Data localization can help prevent foreign intelligence breaches. Information traffic about domestic affairs flowing in foreign jurisdictions is often easier to intercept both physically and legally. Forcing local processing and storage (sometimes even transmission) reduces the risk of interception.27 Furthermore, data localization makes information relevant to domestic security and safety more readily accessible within the jurisdiction. Technocultures as different in values as the EU and China both agree on the occasional need to breach privacy in pursuit of security or safety.\n[Normative Constraints] Data localization helps preserve the contextual integrity of citizen's data. Privacy norms are value- and culture-dependent. One conception of privacy is of privacy as a form of contextual integrity.28 Under this conception, privacy preservation is tied to the (explicit or implicit) norms of the specific context and jurisdiction. Non-local data handling increases the exposure of subjects' data to inappropriate contexts with privacy norms that are insufficiently aligned with local norms. There is thus a higher risk of violating contextual integrity and/or locally-acceptable privacy norms.\n[Self-Interest] Data localization helps foster the local technology ecosystem. 
This point is especially central for the pluralism hypothesis as related to AI. Localization will often foster the development of local technical competence with data technologies. This competence is foundational for enabling innovations in AI and developing AI solutions tailored to local problems.\n\nThe combination of these factors incentivizes the trend towards a more fractured global technoculture. Increased data localization fosters siloed technocultures.\n"Attractive" Populations: Power in Numbers\nRegulatory levers like data localization have the effect of placing a cognitive burden on interested multinational firms. They need some familiarity with local norms if they intend to operate profitably and legally within foreign jurisdictions.29 Ideally there is a benefit for shouldering that burden. That benefit comes from the economic power of a population-base. We can use the term "attractiveness" to refer to the influence that populations can exert on technocultures just by being sizeable sources of profit. The magnitude of a target population's influence is somewhat proportional to its size.\nLarge populations attract economic attention as markets or sinks for economic goods. Jurisdictions with large population bases present a large pool of potential consumers. Firms that are able to survive regulatory and operational challenges qualify to play for larger potential (or actual) profits. In this scenario, regulatory barriers may operate as mechanisms for denying market share to competitors who are unwilling or unable to satisfy local norms. Regulation and policy-making can thus be construed as acts of collective bargaining on behalf of a jurisdiction's population.\nThe past demises of Apple, Google, Uber, and Facebook operations in China are useful illustrations. Recent Apple and Google overtures to resume some operations in China also illustrate the strength of the attractiveness of that user-base.\nAs a lever in technocultural evolution, population size has a couple of modes of use. Countries with large populations can use their influence to extract concessions or compromises. This can be an explicit interaction, e.g. China sanctioning firms that do not provide state access to collected user data.30 The opportunity cost for a multinational firm closing down operations because of some regulatory barrier is higher for larger countries than for smaller ones. Influence can also be exerted via implicit negotiation, e.g. the EU using the weight of its population-base to shift international data privacy discourse and practice via ambitious regulation.\nPopulations also attract attention as sources of technical expertise or human capital at advantageous price points. This is useful to highlight because human capital comes equipped with value systems that can sharply affect the evolution of tech innovation and deployment. The moral aversion to defense-related uses of AI recently expressed by significant portions of the Silicon Valley technical workforce offers a case in point.\nWinners and First-Movers\nThere has historically been a form of first-mover's advantage in technology innovation. Intellectual property (IP) rights actually aim to strengthen this advantage as a way of incentivizing innovation. In recent history, for instance, the USA enjoyed unparalleled technocultural dominance. Current Internet technology still bears some reminders of its US-centric early development (e.g. USA's network centrality in internet routing and other vestiges of US-led tech standards formation). 
The migration of talent to the USA during WW2 helped cultivate this advantage. As did the relative depression of Chinese and Russian innovation due to experiments with versions of Communism.\nThere is also a strong bias towards survivors of technology arms-races: a winner-take-all dynamic or close to it. As a first approximation, effective tech innovations spread and drive less effective innovations to extinction (practical performance as the fitness metric). But the memetic resonance of modern information technology platforms may not be as fully determined by practical performance, e.g. the geographic differences in adoption of international platforms like Facebook and VKontakte are likely not just a function of differences in technical performance. Yet the dynamics of network effects and preferential attachment to popular platforms lead to cumulative survival advantages that approximate winner-take-all behavior.\nThese trends, first-mover's advantage and winner-take-all, may mediate local economic advantages as well as a technoculture's influence on future policy. But these trends are not "unchallenged laws of nature." MySpace gave way to Facebook in spite of precedence. As did Yahoo to Google in search technology. And the fracturing of the modern social media ecosystem suggests that network effects are not irreversible.\nPaths of Evolution: Local Norms with Global Reach?\nThere is a deliberate analogy between the ecology of technocultures and the ecology of biological ecosystems. Species in an ecosystem interact (cooperatively or competitively) and evolve in response to their environmental context. Similarly, technologies, platforms, firms, and governments interact and co-evolve in response to their specific cultural contexts.\nThe analogy suggests a vulnerability. Geographical distance may have served as a barrier against the transmission of technocultural influence in the past. But distance is no longer a strong barrier. Technocultures now evolve in a crowded international space. One technoculture might foster a specific innovation in tech use, development, or regulation. Such innovative mutations may now be more easily transmitted across technocultures. And such mutations may find stronger resonance in non-native contexts. Such cross-technocultural transmissions may be beneficial or virulent.31\nExamples of beneficial cross-technocultural transmissions32 include:\n\nthe transmission of key aspects of ICT innovations (outwards from the USA)\nthe transmission of AI innovations (esp. facial recognition AI outwards from the USA to prominent use in China)\nthe transmission of sericulture [outwards from China]\nin tech regulation, the spread of GDPR concepts from the EU into Californian privacy regulation\n\nExamples of virulent cross-technocultural transmissions include:\n\nthe repurposing of social media platforms for propaganda or psychometric targeting in political elections [outward from the UK or from Eastern Europe].\n\nWe can also play with the prospect of convergent evolution in technology, e.g. the convergent evolution of printing technology in the East and the West, or the convergent evolution of flight and photography. Intense global interaction may mean it becomes easier to adopt foreign innovations rather than innovate locally (thus reducing the likelihood or opportunities for convergent evolution). The key theme here is that of local norms and actions having unprecedented global reach.\nInnovations in AI tech also change the balance of influence in international relations. 
Nation-states naturally develop the abilities necessary to pursue their interest in cyberspace. It is reasonable to expect this trend to continue. But the context is slightly shifted somewhat… Now smaller anti-social non-state actors with some AI expertise have an expanded ability to project influence and hold larger actors hostage. Especially if there are no trusted referees to mediate disputes.\nAgain, the theme: local norms, global reach.\nConclusion: The Fruits of a Pluralist Framing\nThe purpose of this piece was to encourage us to take seriously the prospect of unresolvable cultural schisms in the global technology landscape. Culturally-mediated fault-lines are particularly salient when dealing with data-driven AI technologies which make-up the bulk of modern AI technology. This is because culture-dependent privacy norms circumscribe what data is available/accessible/permissible for training AI systems. The general interaction of culture and technology is what we have termed a technoculture. The point of introducing this concept is to provide a fruitful lens for examining the evolution of technology.\nI have referred to the fractured state of the global technology ecosystem as Technocultural Pluralism. In a sense, this pluralist conception has been the historic norm. Our multicultural history is not a history of globally uniform patterns in tech innovation and deployment. The key assertion in this piece is that pluralism is likely a more permanent state than one might perhaps think — globalization, disruptive AI innovation, and (potentially/supposedly?) impending singularity notwithstanding. We can take language use as an informative precedent. Language is one of humanity's oldest culture-infused tech innovations.33 Yet it still retains a level of cultural specificity that is unlikely to fade away soon. Why expect anything else for AI on a shorter time-scale?\nTaking pluralism seriously does not mean assuming a permanent Hobbesian state of \"War of All Against All.\" There is certainly bound to be friction as technocultures negotiate their shared existence on a smaller global stage, under diverse, sometimes diametrically opposed value systems (technocultural clashes, to use Huntington's term). It also does not mean a constant arms-race or drive towards domination. The arms-race perspective is well-suited to discussions of defense in which the controlling objective was about survival and actions are centrally directed. In any given modern technoculture, there will be multiple preferences, utilities, or objectives in play. And the aggregate behavior of the technoculture is an impenetrable function of millions or billions of sub-agents' choices.\nTaking pluralism seriously means spending more time exploring the features and dynamics of our global technocultural ecosystem. This piece represents one such exploration.\nWhat strategic implications does a technoculturally pluralist framing highlight? One key implication would be the pivotal role of data localization and privacy policies in deepening schisms between technocultures in the age of AI since localization undermines uniformity in what data exists or is accessible in different jurisdictions for training local AI/ML solutions. There is a more positive take on this implication: data localization and local privacy policies can help foster more culturally-sensitive local deployment and innovation of AI technologies.\nQuestions remain. For example: What are the merits of a technocultural equivalent of the \"Contact Hypothesis\"? i.e. 
does more contact between technocultures lead to better long-term accommodation? Or to heated frictions and virulent cross-infections? What mechanisms are effective for improving the health and resistance of domestic technocultures from negative foreign infections? What are effective strategies and compromises in a technoculturally pluralist world?\nWhatever insights remain, they will require a deeper engagement with the cultural foundations of our technologies and a clearer-eyed examination of the values/assumptions embedded within our technologies.\nAcknowledgements\nThis discussion is a side-effect of numerous conversations. I am particularly grateful to Kathryn \"Casey\" Bouskill for many insightful discussions on the nuances of culture.\nReferences\nAllison, Graham. Destined for War: Can America and China Escape Thucydides's Trap?. Houghton Mifflin Harcourt, 2017.\nBarocas, Solon, and Helen Nissenbaum. \"Big Data's End Run Around Procedural Privacy Protections.\" Communications of the ACM 57, no. 11 (2014): 31-33.\nBarocas, Solon, and Andrew D. Selbst. \"Big Data's Disparate Impact.\" California Law Review 104 (2016): 671.\nBellman, Steven, Eric J. Johnson, Stephen J. Kobrin, and Gerald L. Lohse. \"International Differences in Information Privacy Concerns: A Global Survey of Consumers.\" The Information Society 20, no. 5 (2004): 313-324.\nBoyd, Danah. \"The Politics of Real Names.\" Communications of the ACM 55, no. 8 (2012): 29-31.\nCaliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. \"Semantics Derived Automatically from Language Corpora Contain Human-like Biases.\" Science 356, no. 6334 (2017): 183-186.\nChander, Anupam, and Uyên P. Lê. \"Data Nationalism.\" Emory Law Journal 64 (2014): 677.\nCockburn, Iain M., Rebecca Henderson, and Scott Stern. The Impact of Artificial Intelligence on Innovation. No. w24449. National Bureau of Economic Research, 2018.\nDawkins, Richard. \"The selfish gene: with a new introduction by the author.\" UK: Oxford University Press. (4th Ed.) 2016.\nStanding Committee of the National People's Congress. 2016 Cybersecurity Law (7 November 2016). Translated by China Law Translate. Accessed January 11, 2019. https://www.chinalawtranslate.com/cybersecuritylaw/?lang=en.\nFloridi, Luciano. \"Infraethics–on the Conditions of Possibility of Morality.\" Philosophy & Technology 30, no. 4 (2017): 391-394.\nG.D.P.R., 2016. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46. Official Journal of the European Union (OJ), 59, pp.1-88.\nGershgorn, Dave. \"A Harvard Professor Thinks That Tech's True Power Comes From Design.\" Quartz, February 24, 2018. Quartz. https://qz.com//latanya-sweeney-explains-why-tech-companies-are-so-powerful/.\nHofstede, Geert. \"Dimensionalizing Cultures: The Hofstede Model in Context.\" Online Readings in Psychology and Culture, 2(1). https://doi.org/10.9707/2307-.\nHuntington, Samuel P. \"The Clash of Civilizations?.\" Foreign Affairs (1993): 22-49.\nHuntington, Samuel P. The Clash of Civilizations and the Remaking of World Order. Penguin Books India, 1997.\nLeon, P. G., Alfred Kobsa, and Carolyn Nguyen. \"Contextual Determinants for Users' Acceptance of Personal Data Processing: A Multinational Analysis.\" ISR Technical Reports 16-5. (December 2016). https://isr.uci.edu/publications.\nLi, Yao, Alfred Kobsa, Bart P. Knijnenburg, and MH Carolyn Nguyen. 
\"Cross-cultural Privacy Prediction.\" Proceedings on Privacy Enhancing Technologies 2017, no. 2 (2017): 113-132.\nMatthews, Luke J., Ryan Andrew Brown, and David P. Kennedy. A Manual for Cultural Analysis. Santa Monica, CA: RAND Corporation, 2018. https://www.rand.org/pubs/tools/TL275.html.\nMcSweeney, Brendan. \"Hofstede's Model of National Cultural Differences and their Consequences: A Triumph of Faith-a Failure of Analysis.\" Human Relations 55, no. 1 (2002): 89-118. [Critique of Hofstede dimensions]\nMilberg, Sandra J., H. Jeff Smith, and Sandra J. Burke. \"Information Privacy: Corporate Management and National Regulation.\" Organization Science 11, no. 1 (2000): 35-57.\nMumford, Lewis. \"Authoritarian and Democratic Technics.\" Technology and Culture 5, no. 1 (1964): 1-8.\nNissenbaum, Helen, \"Privacy as Contextual Integrity,\" Washington Law Review, Vol. 79, No. 1, 2004.\nOhm, Paul. \"Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization.\" UCLA Law Review 57 (2009): 1701.\nOsoba, Osonde A., and William Welser IV. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. Santa Monica, CA: RAND Corporation, 2017.\nOulasvirta, Antti, Aurora Pihlajamaa, Jukka Perkiö, Debarshi Ray, Taneli Vähäkangas, Tero Hasu, Niklas Vainio, and Petri Myllymäki. \"Long-term Effects of Ubiquitous Surveillance in the Home.\" Proceedings of the 2012 ACM Conference on Ubiquitous Computing (2012): 41-50.\nRho, Eugenia, Ha Rim, Alfred Kobsa, and Carolyn Nguyen. \"Differences in Online Privacy and Security Attitudes Based on Economic Living Standards: A Global Study of 24 Countries.\" Proceedings of the Twenty-Sixth European Conference on Information Systems no. 95 (2018).\nRomney, A. Kimball, Susan C. Weller, and William H. Batchelder. \"Culture as Consensus: A Theory of Culture and Informant Accuracy.\" American Anthropologist 88, no. 2 (1986): 313-338.\nSelby, John. \"Data Localization Laws: Trade Barriers or Legitimate Responses to Cybersecurity Risks, or Both?.\" International Journal of Law and Information Technology 25, no. 3 (2017): 213-232.\nWinner, Langdon. \"Do Artifacts Have Politics?.\" Daedalus (1980): 121-136.\nWu, Xiaolin, and Xi Zhang. \"Automated Inference on Criminality Using Face Images.\" arXiv preprint arXiv:1611.04135(2016): 4038-4052.\nWu, Xiaolin, and Xi Zhang. \"Responses to Critiques on Machine Learning of Criminality Perceptions (Addendum of arXiv: 1611.04135).\" arXiv preprint arXiv:1611.04135 (2017).\nBy Osonde Osoba, Professor, Pardee RAND Graduate School, RAND CorporationThe post Technocultural Pluralism first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "Technocultural Pluralism", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=2", "id": "26a4f859a80657e7f2e20c5b81cbc3ab"} -{"text": "One Shot Learning in AI Innovation\n\n\n Download as PDF\n\nNatalie Ram1\nModern algorithmic design far exceeds the limits of human cognition in many ways. Armed with large data sets, programmers promise that their algorithms can better predict which prisoners are most likely to recidivate2and where future crimes are likely to occur.3 Software designers further hope to use large data sets to uncover relationships between genes and disease that would take human researchers much longer to identify.4\nBut modern machine learning still cannot effectively match human cognition in at least one crucial respect: learning from small data sets. 
Young children, for example, master new concepts with startling rapidity and fluency. "[G]iven 2 or 3 images of an animal you have never seen before, you can usually recognize it reliably later on."5 Similarly, "a person only needs to see one Segway to acquire the concept and be able to discriminate future Segways from other vehicles like scooters and unicycles."6 And as any parent of a toddler can attest, "children can acquire a new word from one encounter."7\nPsychologists believe that, by the time we reach six years of age, "we recognize more than 10^4 categories of objects."8 By contrast, traditional algorithmic design typically requires many more training examples. "[L]earning one object category requires a batch process involving thousands or tens of thousands of training examples."9 Researchers describe the human method of learning new categories, and objects within categories, from one or a handful of examples as "one shot learning."10\nIn a relatively recent body of work, researchers are beginning to take aim at cracking this insight of human learning—and teaching algorithmic systems to learn in the same way. This work is still developing,11 with early examples demonstrating that one shot learning algorithms can correctly categorize, for example, human faces, motorbikes, airplanes, and spotted cats on par with big data algorithms while using only fifteen training examples.12 While differentiating between such wildly disparate categories may seem a long way away from facial recognition or other big data applications, research into such uses is already underway. Using a variety of algorithmic approaches, researchers have already demonstrated that one shot learning may enhance algorithmic applications in fields including facial recognition,13 handwriting identification,14 and shoe tread analysis.15\nYet, to date, legal academics have overlooked these efforts. Indeed, the phrase "one shot learning" does not appear in any law review article searchable in Westlaw.16 This article seeks to remedy that gap in the literature, introducing the concept and language of one shot learning, and warning that enabling computer systems to successfully perform one shot learning is likely to exacerbate problems of insufficient transparency and of hidden bias that already beset the use of algorithmic systems, particularly in the criminal justice context.\nIntroducing One Shot Learning\nOne shot learning builds on, and learns from, traditional big data algorithmic design. 
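Before the fuller description in the next paragraphs, a minimal sketch may help fix ideas. One common flavor of one shot learning reuses a feature extractor trained on many previously learned categories and then recognizes a brand-new category from a single labeled example by nearest-prototype matching. Everything below is hypothetical: the embed function stands in for a trained network, and the data are random placeholders rather than any system discussed in this article.

# Minimal one shot learning sketch: nearest-prototype matching in an embedding space.
# Assumption: `embed` stands in for a feature extractor already trained on many *other*
# categories (the "general knowledge" described in the text); here it is faked with a
# fixed random projection so the example runs end to end.
import numpy as np

rng = np.random.default_rng(0)
projection = rng.normal(size=(64, 16))  # stand-in for learned weights

def embed(x: np.ndarray) -> np.ndarray:
    """Map a raw input vector to a normalized feature vector (placeholder for a trained network)."""
    v = np.tanh(x @ projection)
    return v / np.linalg.norm(v)

# One labeled example per *new* category -- the "one shot".
support = {
    "segway": embed(rng.normal(loc=1.0, size=64)),
    "scooter": embed(rng.normal(loc=-1.0, size=64)),
}

def classify(x: np.ndarray) -> str:
    """Assign the query to whichever single stored example it most resembles."""
    feats = embed(x)
    return max(support, key=lambda name: float(feats @ support[name]))

query = rng.normal(loc=1.0, size=64)  # an unseen input drawn near the "segway" cluster
print(classify(query))

In research systems the extractor itself is learned from thousands of examples of other categories; that dependence on remote training data is exactly what the transparency discussion later in this article turns on.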
In traditional big data algorithms, developers train a machine learning system to recognize and accurately distinguish a particular category by feeding the system thousands, and often tens of thousands, of examples of the relevant category.17 These training data often derive from records of past human conduct or from data coded by humans, which makes their development "a tedious and expensive task."18 Moreover, new categories must be learned afresh, with many thousands of training examples for each new category.19\nOne shot learning, by contrast, attempts to replicate the human ability to apply past knowledge to new learning.20 As one set of authors has explained, "[t]he key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be."21 That is, once a machine learning system has learned a few categories "the hard way"—based on thousands of training examples—some "general knowledge" can be extracted and applied to new, previously unknown categories.22 Thus, one shot learning algorithms are designed to make inferential leaps and to extract knowledge learned about one category to aid in identification of future categories.23 In this way, one shot learning models aim to encode algorithmic systems with the power to learn how to learn.\nTransparency and Understanding in One Shot Learning\nThe promise of one shot learning to enable algorithmic systems to learn new categories more cheaply and efficiently than in the past is enormous; but there is also significant risk that developments in one shot learning will exacerbate some of the most persistent difficulties in AI markets, including transparency. \nTransparency—and the related problem of understanding—is a challenge for machine learning models in at least two ways. First, the nature of much machine learning is opaque.24 It may be formally opaque in instances where it is "actually impossible to state how the algorithm classifies observations once it has been developed."25 In other instances, an algorithmic pattern will be functionally opaque, as where algorithmically identified relationships are "so complicated that they defy explicit understanding."26 In both senses, machine-learning models act as a "black box,"27 in which known data goes in, answers are produced, but the process by which data is transformed into answers is unknown and potentially unknowable. Under such circumstances, effective transparency and understanding may be difficult to achieve.\nSecond, these difficulties of transparency and understanding are exacerbated by the outsized role that trade secrecy occupies for complex algorithmic systems. In recent years, the Supreme Court has reinforced that mathematical processes are patent-ineligible "abstract ideas," at least insofar as those ideas are not inventively applied in some real-world application.28 Machine learning models, as fundamentally mathematical processes, typically are excluded from patent protection.29 In the absence of such protection, trade secrecy has become a primary method for maintaining competitive advantage.30 Unfortunately, trade secret protection, by definition, depends on continued secrecy; public disclosure destroys it.31\nReliance on trade secrecy, and opacity about how a particular machine-learning model reaches decisions, is likely to pervade one shot learning to an even greater extent than traditional big data algorithms. 
Because one shot learning extrapolates the skill of "learning how to learn" from underlying categories that may themselves teach unexplainable relationships, it builds opacity upon opacity.32 Moreover, one shot learning is likely to multiply the sources of information about an algorithmic system that must be known to replicate or understand its workings, making trade secrecy a more powerful tool for competitive advantage and a greater foil to transparency and understanding. Some algorithmic systems are explainable upon examination of source code.33 But because machine learning models are often a "black box," examining source code may provide insufficient insight into their validity or reliability. Instead, adequate understanding of the algorithmic system may depend on having access to both source code and training data.34 Accordingly, when algorithmic developers have invoked trade secrecy to withhold training data, as well as information about algorithmic design, the non-disclosure of either renders the system as a whole obscure.35 \nCreators of one shot learning models, in turn, may well be able to stymie transparency and understanding even if they disclose both source code and the limited training examples used for new learning categories. Again, because these models depend on prior learning in the traditional "big data" way, the absence of the more remote training data that armed a one shot learning model with its "general knowledge" may make understanding that model difficult, if not impossible. By proliferating the sources of information that may be withheld to maintain competitive advantage, one shot learning may pose a greater threat to transparency and understanding than even traditional "black box" big data models.\nThese failures of disclosure and transparency are likely to impose significant practical barriers to developing sufficiently accurate and reliable algorithms.36 Secret code is often lower-quality code, as secrecy may obstruct effective oversight of the reliability and validity of algorithmic tools.37 This difficulty is particularly likely to arise in a burgeoning field like one shot learning algorithms, where different researchers have adopted different approaches to solving the one shot learning problem.38 In other contexts in which programmers use somewhat different mathematical models (or code for the same model differently), the consequence is that, in attempting to do the same thing, these competing tools sometimes yield different results from identical inputs.39 \nMoreover, "black box" algorithms are likely to be particularly problematic in some of the settings in which one shot learning algorithms are most desired, including law enforcement investigations. As described above, researchers are already working to develop effective one shot learning algorithms to perform facial recognition,40 handwriting identification,41 and shoe tread analysis.42 While these types of analysis may be useful in multiple contexts, they are likely to be of particular interest to law enforcement. Indeed, one article exploring the implementation of one shot learning methods for shoe tread analysis focuses explicitly on the forensic use of such algorithms, explaining, "We investigate the problem of automatically determining what type (brand/model/size) of shoe left an impression found at a crime scene."43 \nThe use of undisclosed algorithmic models in the criminal justice setting, in turn, is frequently problematic. 
In addition to practical concerns about the impact of trade secrecy on algorithmic design, secrecy and opacity surrounding criminal justice algorithms can raise significant constitutional concerns.44 I have argued elsewhere that secret criminal justice algorithms are "at least in tension with, if not in violation of, defendants' ability to vindicate their due process interests throughout the criminal justice process, as well as their confrontation rights at trial."45 As one shot learning algorithms perform more complex inferential tasks, access to algorithm design information is likely to be even more critical in the criminal justice field to ensure that the design accurately yields reliable results and functions as intended. Trade secrecy threatens to undermine those goals.\nExacerbating Bias in One Shot Learning\nIn addition to intensifying reliance on trade secrecy, one shot learning may also multiply the ways in which algorithmic design encodes bias—and not in a good way. In traditional big data algorithms, because training data often derives from records of past human conduct or from data coded by humans, bias in this human conduct can give rise to biased outputs from the algorithm.46 "[T]raining data is often gathered from people who manually inspect thousands of examples and tag each instance according to its category. The algorithm learns how to classify based on the definitions and criteria humans used to produce the training data, potentially introducing human bias into the classifier."47 \nIn the context of one shot learning, if categories learned the "hard way" are tainted with bias, this may similarly infect new categories learned by inference. Indeed, any such bias may be amplified where there are fewer, rather than more, training examples for a new category. Where only a few examples of a new category are available, more inferential leaps in learning are required, and so bias in human selection or coding of examples is likely to be aggravated.\nMore troubling, in attempting to create more "human" learning, programmers designing one shot learning algorithms may replicate crucial faults in human learning and decision making. Human learning achieves rapid categorization and decision making in part through reliance on cognitive shortcuts. These shortcuts—called heuristics—operate as "principles by which [human beings] reduce the complex tasks of assessing likelihoods and predicting values to simpler judgmental operations."48 Heuristics, in other words, help human beings make inferential leaps from incomplete data. As Tversky and Kahneman have noted, "[i]n general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors."\nAmong the most significant heuristics of human decision making is the representativeness heuristic. Humans intuitively call on this heuristic when tasked with assessing: "What is the probability that an object A belongs to a class B? What is the probability that event A originates from process B? What is the probability that process A will generate an event B?" Under the representativeness heuristic, these "probabilities are evaluated by the degree to which A is representative of B, i.e., by the degree of similarity between them." The more similar A is to B, the higher the probability that A belongs to (or originates from) (or will generate) B. 
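A small worked example, with invented occupations and numbers, makes the similarity trap concrete and anticipates the base-rate point raised in the next paragraph: an individual can resemble the stereotype of a rare occupation far more than that of a common one and still be more likely to hold the common one.

# Invented numbers: how a similarity-only ("representativeness") judgment can mislead
# once base rates are taken into account.
p_match_given_librarian = 0.90    # Mr. X strongly resembles the librarian stereotype
p_match_given_salesperson = 0.20  # ...and only weakly resembles the salesperson stereotype
base_rate_librarian = 0.01        # but librarians are rare in this hypothetical population
base_rate_salesperson = 0.10

# Bayes' rule: P(occupation | match) is proportional to P(match | occupation) * P(occupation).
score_librarian = p_match_given_librarian * base_rate_librarian        # 0.009
score_salesperson = p_match_given_salesperson * base_rate_salesperson  # 0.020

print(score_librarian < score_salesperson)  # True: the "less similar" occupation is more probable

The concern sketched in this section is that a system scoring new cases only by resemblance to a handful of stored exemplars can inherit the same failure mode.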
Typically, representativeness is a useful heuristic, and its probabilities are usually accurate enough for everyday living.49 \nBut representativeness can also lead decision makers astray. For instance, in assessing whether Mr. X holds a particular occupation, decision makers assess \"the similarity of Mr. X to the stereotype of each occupational role, and orders the occupations by the degree to which Mr. X is representative of these stereotypes.\" (Similarly, in assessing the likelihood that Mr. X will be a repeat criminal offender, decision makers assess how alike Mr. X is to the stereotype of a repeat criminal offender.) Yet stereotypes are, by definition, inexact. Moreover, these determinations of probability are frequently immune to crucial factors like, for instance, the base-rate of each occupation in the general population. Moreover, individuals express confidence in their predictions in accordance with the degree of similarity between Mr. X and their stereotype of a particular profession, \"with little or no regard for the factors that limit predictive accuracy.\"50 \nOne shot learning algorithms appear to attempt to instruct machine systems to make the same sorts of inferential leaps and extrapolation from prior information that give rise to heuristics like representativeness in humans. These algorithms seek to make computer systems more human-like in their capacities for data analysis and recognition. But an unintended consequence of accomplishing this result may be to inflict on computer systems the same kinds of heuristics that render human decision making irrational in systematic ways. If successful, one shot learning may prove to be faster than human judgment, but perhaps not better than it.\nConclusion: Mapping the Solution Space\nOne shot learning advances machine learning by enabling sophisticated models to learn new categories from only a few examples. So long as the model has gained prior exposure to extensive training data about some categories, a one shot learning model can learn how to learn. That is exciting, as it opens the door to making greater use of more diverse data for machine learning.\nBut one shot learning also threatens to worsen already persistent problems in machine learning, including the opacity of machine learning models, their reliance on trade secrecy, and the bias they may unwittingly encode. These problems are not straightforward to solve. Moreover, these problems are likely to fester together, in that non-transparent systems shielded from effective external validation are less likely to recognize and correct for their unintended biases. \nBut trade secrecy, and the understanding and fairness it may threaten, need not dominate innovation in one shot learning algorithms. Alternative mechanisms for innovation policy abound, including prizes, grants, regulatory exclusivities, and tax incentives.51 These tools of innovation policy can help to support the research, development, and sale of effective one shot learning algorithms in place of (or alongside) trade secrecy. In particular, at this early stage in the development of one shot learning algorithms, grants and research-based tax incentives may be well suited to driving the disclosure of early-stage work related to both algorithmic design and training data—and thus to driving a swifter pace of innovation in the field more broadly. 
Grants and tax incentives effectively infuse investment dollars in research up front, rather than rewarding the successful completion of a commercial product.52 In so doing, these innovation policy levers \"may enable more and smaller companies to enter the market,\" as they reduce \"the private capital investments required for innovation.\"53  \nMoreover, policymakers may also drive development of valid and reliable one shot learning models by investing in complementary incentives for innovation. In discussing innovation incentives for developing valid and reliable black box algorithms in the health care context, Nicholson Price has suggested that \"direct or indirect government intervention could usefully aid the generation of datasets,\" to be used as common infrastructure for algorithmic development.54 Such a solution to problems of data secrecy and fragmentation is particularly well suited to one shot learning in the criminal justice context, as government will often have a monopoly on the data necessary for training these algorithms at the outset.55 After all, government actors are responsible for the investigation, arrest, prosecution, and incarceration of criminal defendants in the United States, and so have unique access to data about these populations.56 Similarly, government is well positioned to incentivize innovative methods for validating black box algorithms through prizes or grants for outside validation of complex algorithmic models, including those involving one shot learning.57 \nComplex algorithmic systems, including those deploying one shot learning, hold enormous promise for expanding the range of data from which an algorithmic system can learn and the range of categories it can learn to identify. But this promise is not unfettered. If these complex models are to be deployed, particularly in the criminal justice context, relevant stakeholders—including policymakers, courts, prosecutors, and defense counsel alike—must grasp the ways in which machine learning in general, and one shot learning in particular, may undermine as well as enhance the pursuit of justice and take steps to mitigate those harms.\n\n\n\nBy Natalie Ram, Assistant Professor, University of Baltimore School of Law\n \n\n\nThe post One Shot Learning in AI Innovation first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "One Shot Learning in AI Innovation", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=2", "id": "14cf7e3cf9ac7c843f8b4d402954d575"} -{"text": "\"Soft Law\" Governance of Artificial Intelligence\n\n\n Download as PDF\n\nGary Marchant\nCenter for Law, Science & Innovation, Arizona State University\nIntroduction\nOn November 26, 2017, Elon Musk tweeted: \"Got to regulate AI/robotics like we do food, drugs, aircraft & cars.  Public risks require public oversight.  Getting rid of the FAA wdn't [sic] make flying safer. They're there for good reason.\"1\nIn this and other recent pronouncements, Musk is calling for artificial intelligence (AI) to be regulated by traditional regulation, just as we regulate foods, drugs, aircraft and cars.  Putting aside the quibble that food, drugs, aircraft and cars are each regulated very differently, these calls for regulation seem to envision one or more federal regulatory agencies adopting binding regulations to ensure the safety of AI.  
Musk is not alone in calling for "regulation" of AI, and some serious AI scholars and policymakers have likewise called for regulation of AI using traditional governmental regulatory approaches.2\nBut these calls for regulation raise the questions of what aspects of AI should be regulated, how they should be regulated, and by whom.  The reality is that at best there will be some sporadic piecemeal traditional regulation of AI over the next few years, notwithstanding the increasing deployment and application of AI in a growing range of applications and industry sectors.  In the interim at least, this "governance gap" for AI will mostly be filled by so-called "soft law" (see Part I, infra).  These "soft law" mechanisms include various types of instruments that set forth substantive expectations but are not directly enforceable by government, and include approaches such as professional guidelines, private standards, codes of conduct, and best practices.  A number of such soft law approaches have already been proposed or are being implemented for AI (see Part II, infra).  While soft law has some serious deficiencies, such as lack of enforceability, there are additional strategies that can help maximize the effectiveness of this second-best approach to governance (see Part III, infra).  For example, the lack of enforceability problem can be solved at least in part by various types of indirect enforcement by entities such as insurance companies, journal publishers, grant funders, and even governmental enforcement programs against unfair or deceptive business practices.  Another problem, the lack of coordination among a potentially large number of overlapping and perhaps even inconsistent soft law programs, could be addressed by creating what has been described as a Governance Coordinating Committee to help serve a coordinating function.\nThe Unsuitability of Traditional Regulation for AI\nWhile some piecemeal regulation of specific AI applications and risks using traditional regulatory approaches may be feasible and even called for, AI has many of the characteristics of other emerging technologies that make them refractory to comprehensive regulatory solutions.3  For example, AI involves applications that cross multiple industries, government agency jurisdictions, and stakeholder groups, making a coordinated regulatory response difficult.  In addition, AI raises a wide range of issues and concerns that go beyond the traditional regulatory agency focus on health, safety and environmental risks.  Indeed, many risks created by AI are not within any existing regulatory agency's jurisdiction, including concerns such as technological unemployment, human-machine relationships, biased algorithms, and existential risks from future super-intelligence.\nMoreover, the pace of development of AI far exceeds the capability of any traditional regulatory system to keep up, a challenge known as the "pacing problem" that affects many emerging technologies.4 The risks, benefits and trajectories of AI are all highly uncertain, again making traditional preemptive regulatory decision-making difficult.  And finally, national governments are reluctant to impede innovation in an emerging technology by preemptive regulation in an era of intense international competition.\nFor these reasons, it is safe to say there will be no comprehensive traditional regulation of AI for some time, except perhaps if some disaster occurs that triggers a drastic and no doubt poorly-matched regulatory response.  
Again, there may be slivers of the overall AI enterprise that are amenable to traditional regulatory responses, and these should certainly be pursued.  But these isolated regulatory advances will be insufficient alone to deal with the safety, ethical, military, and existential risks posed by AI.  Something more will be needed.\nThat something more, needed to fill the governance gap for AI, will at least in the short term fall within the category of "soft law."  Soft law consists of instruments that set substantive expectations but are not directly enforceable by government.  They can include private standards, voluntary programs, professional guidelines, codes of conduct, best practices, principles, public-private partnerships and certification programs.  Soft law can even include what Wendell Wallach and I refer to as "process soft law" approaches such as coding machine ethics into AI systems or creating oversight systems within a corporate Board of Directors.5  These types of measures are inherently imperfect, precisely because they are not directly enforceable.\nThis core weakness results in many other limitations: participation is incomplete, with the "good guys" complying and the "bad guys" not.  These soft law measures are sometimes used as "whitewashing" (or "greenwashing") to make it look like a problem is being addressed when it really is not.  And soft law measures are often expressed in vague, general language against which compliance is hard to measure.  Finally, soft law measures generally do not provide the same reassurance to the public as traditional government regulation that the problems presented by a new technology are being adequately managed.  This public reassurance effect is an important secondary function of regulation.\nNotwithstanding these significant limitations, soft law has become a necessary and inevitable component of the governance framework for virtually all emerging technologies, including AI.  Traditional regulatory systems cannot cope with the rapid pace, diverse applications, heterogeneous risks and concerns, and inherent uncertainties of emerging technologies.  So although soft law measures are a second-best solution, they are often the only game in town, at least initially.  It recalls the quote attributed to Winston Churchill that "democracy is the worst form of government, except for all the others."6\nSoft law has important advantages that explain its growing popularity and gap-filling role.  Soft law instruments can be adopted and revised relatively quickly, without having to go through the traditional bureaucratic rulemaking process of government.  It is possible to experiment with several different soft law approaches simultaneously, indeed sometimes creating a problem of a proliferation of inconsistent private standards and other soft law instruments.  They can sometimes create a cooperative rather than adversarial relationship among stakeholders.  They are not bound by limited agency delegations of authority, and so can address any and all concerns raised by a technology.  
And because they are not adopted by a formal legal authority, they are not restricted to a specific legal jurisdiction, but can have international application.\nExisting AI Soft Law Examples\nWe are already seeing the rapid infusion of soft law initiatives and proposals into the AI governance space.7 Indeed, the likely first ever governance proposal for AI (at that time focused on robotics) was Isaac Asimov's three laws of robotics first published in 1942.8  These \"laws\" were actually a form of soft law as they had no formal legal authority.  More recently, an early entry into the AI soft law landscape was a \"robot ethics charter\" that the government of South Korea initiated in 2007, even though no final version of the ethics charter has ever been posted online.\nInstitute of Electric and Electronic Engineers (IEEE)\nPerhaps the most comprehensive soft law initiative for AI was launched in 2016 by the IEEE, one of the world's largest standard-setting and professional engineering societies.9  This initiative, entitled \"The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, is intended to \"ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.\"10  The Initiative has two intended outputs.  The first is a guide known as Ethically Aligned Design, which has now been published as draft versions I and II for public comments.  Version II is a document that exceeds 250 pages and that addresses over 120 policy, legal and ethical issues associated with AI, with recommendations assembled from more than 250 expert participants.11 It seeks to \"advance a public discussion about how we can establish ethical and social implementations for intelligent and autonomous systems and technologies, aligning them to defined values and ethical principles that prioritize human well-being in a given cultural context, inspire the creation of Standards (IEEE P7000 series and beyond) and associated certification programs, [and] facilitate the emergence of national and global policies that align with these principles.\"12  The final version of Ethically Aligned Design is scheduled to be published in 2019.\nThe second and even more relevant activity by the Initiative is to produce a series of IEEE standards addressing governance and ethical aspects of AI.  
The IEEE has given official approval to create the following standards, with standard-setting committees now established to develop each one:\nIEEE P7000 – Model Process for Addressing Ethical Concerns During System Design\nIEEE P7001 – Transparency of Autonomous Systems\nIEEE P7002 – Data Privacy Process\nIEEE P7003 – Algorithmic Bias Considerations\nIEEE P7004 – Standard on Child and Student Data Governance\nIEEE P7005 – Standard for Transparent Employer Data Governance\nIEEE P7006 – Standard for Personal Data Artificial Intelligence (AI) Agent\nIEEE P7007 – Ontological Standard for Ethically Driven Robotics and Automation Systems\nIEEE P7008 – Standard for Ethically Driven Nudging for Robotic, Intelligent, and Automation Systems\nIEEE P7009 – Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems\nIEEE P7010 – Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems\nIEEE P7011 – Standard for the Process of Identifying and Rating the Trustworthiness of News Sources\nIEEE P7012 – Standard for Machine Readable Personal Privacy Terms\nIEEE P7013 – Inclusion and Application Standards for Automated Facial Analysis Technology\nThese fourteen AI standards are scheduled to be finalized by the end of 2021, and will provide a broad set of requirements relating to the governance of AI.  For example, the chair of the working group developing standard IEEE P7006 on personal AI agents has recently written that the standard is being developed to provide \"a principled and ethical basis for the development of a personal AI agent that will enable trusted access to personal data and increased human agency, as well as to articulate how data, access and permission can be granted to government, commercial or other actors and allow for technical flexibility, transparency and informed consensus for individuals.\"13\nPartnership on AI\nAnother significant \"soft law\" player in the AI field is the Partnership on AI.  The Partnership was originally started by the big players in the AI space such as Google, Microsoft, Facebook, IBM, Apple and Amazon, but has expanded to include a wide variety of companies, think tanks, academic AI organizations, professional societies, and charitable groups such as the ACLU, Amnesty International, UNICEF and Human Rights Watch.14 One of the stated goals of the Partnership is to develop and share best practices for AI, which include: \"Support research, discussions, identification, sharing, and recommendation of best practices in the research, development, testing, and fielding of AI technologies.  
Address such areas as fairness and inclusivity, explanation and transparency, security and privacy, values and ethics, collaboration between people and AI systems, interoperability of systems, and of the trustworthiness, reliability, containment, safety, and robustness of the technology.\"15\nThe Partnership on AI has published a set of \"Tenets\" that include:\n\"We are committed to open research and dialogue on the ethical, social, economic, and legal implications of AI….\nWe believe that AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders….\nWe will work to maximize the benefits and address the potential challenges of AI technologies, by: Working to protect the privacy and security of individuals….Working to ensure that AI research and engineering communities remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society….Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints….Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.\nWe believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.\"16\nIt remains to be seen if and how the Partnership will advance beyond these general tenets to produce more specific best practices and guidelines for responsible AI research and applications.\nFuture of Life Institute\nThe Future of Life Institute convened a meeting of many leading AI practitioners and experts at the Asilomar conference center in 2017, the site of the famous 1975 Asilomar Conference on Recombinant DNA, which pioneered the soft law governance of technology by agreeing on a set of voluntary guidelines for genetic engineering research.  At the 2017 Asilomar conference, the participants agreed on 23 principles to guide AI research and applications.17  These principles include \"Failure Transparency\" (\"If an AI system causes harm, it should be possible to ascertain why.\"); \"Responsibility\" (\"Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.\"); and \"Value Alignment\" (\"Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.\").18\nIndustry groups have adopted their own soft law instruments for AI.  For example, the Information Technology Industry Council (ITI) has developed its own set of AI principles.19  These principles include a commitment to \"recognize our responsibility to integrate principles into the design of AI technologies, beyond compliance with existing laws…. As an industry, it is our responsibility to recognize potentials for use and misuse, the implications of such actions, and the responsibility and opportunity to take steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design.\"20  The statement of principles, itself a form of soft law governance, also states a commitment to soft law methods: \"We promote the development of global voluntary, industry-led, consensus-based standards and best practices.  
We encourage international collaboration in such activities to help accelerate adoption, promote competition, and enable the cost-effective introduction of AI technologies.\"21\nCompany-Specific Soft Law Initiatives\nSome individual companies have also adopted their own statements of principles or guidelines for AI.  For example, in June 2018 Google's CEO Sundar Pichai announced a set of seven principles that Google will follow in its AI activities.22  Other major AI companies such as Microsoft23 and IBM24 have also announced their own AI principles to guide their conduct.\nGovernmental AI Soft Law Initiatives\nGovernments have also supported the use of soft law methods to govern AI.  The EU Commission published its strategy paper on AI on April 25, 2018.25  Contrary to what many members of the European Parliament had hoped for and requested,26 the Commission did not propose any new regulatory measures for AI at that time.  Rather, it committed to develop a set of draft guidelines by the end of 2018.27 In December 2018, the Commission published a \"Coordinated Action Plan on AI\" that set forth the Commission's objectives and plans for an EU-wide strategy on AI.28  The Commission did note, however, that \"[w]hile self-regulation can provide a first set of benchmarks against which emerging applications and outcomes can be assessed, public authorities must ensure that the regulatory frameworks for developing and using of AI technologies are in line with these values and fundamental rights.  The Commission will monitor developments and, if necessary, review existing legal frameworks to better adapt them to specific challenges, in particular to ensure the respect of the Union's basic values and fundamental rights.\"29\nSimilarly, the UK House of Lords issued a detailed report on AI in April 2018 that likewise recommended an ethical code of conduct for AI rather than any traditional \"hard\" regulation.30 The report cited testimony on \"the possible detrimental effect of premature regulation,\" including warnings that \"the pace of change in technology means that overly prescriptive or specific legislation struggles to keep pace and can almost be out of date by time it is enacted\" and that lessons from regulating previous technologies suggested that a \"strict and detailed legal requirements approach is unhelpful\".31  Based on such testimony, the House of Lords concluded that \"[b]lanket AI-specific regulation, at this stage, would be inappropriate.\"32\nInstead, the House of Lords recommended a soft law strategy, at least in the interim: \"We recommend that a cross-sector ethical code of conduct, or 'AI code', suitable for implementation across public and private sector organisations which are developing or adopting AI, be drawn up and promoted … with a degree of urgency…. Such a code should include the need to have considered the establishment of ethical advisory boards in companies or organisations which are developing, or using, AI in their work. In time, the AI code could provide the basis for statutory regulation, if and when this is determined to be necessary.\"33\nEvaluation and Moving Forward\nA variety of entities from the government, industry and non-government sectors have proposed or adopted soft law initiatives for the governance of AI.  These soft law instruments include private standards, best practices, codes of conduct, principles and voluntary guidelines.  
They are in various states of development and implementation, and individually and collectively provide some initial guidance for the governance of AI.  However, they suffer from major limitations.  One prevalent problem is the generality of most of the provisions in these instruments.  To some degree, this vagueness is inevitable and necessary, given the broad range of AI applications and the rapid pace and uncertain trajectory of the technology's development, which make precise requirements difficult if not impossible.  Indeed, this is the very reason why the technology is primarily being governed by soft law rather than traditional hard law approaches at this time.\nTwo other limitations of the current matrix of soft law programs are, however, more amenable to progress and improvement.  First, the unenforceability of these soft law provisions is the Achilles' heel of soft law approaches generally.  There is no assurance or requirement that all, or even any, AI developers and users comply with the soft law recommendations.  However, there are a number of mechanisms that can be used to indirectly enforce these soft law provisions.  Any entity with a supervisory role can adopt and monitor compliance with one or more AI soft law programs.  For example, a corporation could create a committee of its board of directors or a free-standing ethics committee and task it with ensuring compliance with the appropriate guidelines or codes of conduct adopted or agreed to by that company.  Universities could use the existing chain of authority, such as through department heads and deans, to require compliance with specified soft law AI provisions as part of the annual evaluation of faculty and staff.  Or universities could create new, or expand the jurisdiction of existing, research oversight committees such as the Institutional Biosafety Committee to ensure adherence to specified AI soft law provisions.\nOther actors could also play an important role in indirect enforcement of AI soft law programs.  Certification bodies could create programs to certify that a company or other entity is adhering to a particular set of guidelines or principles.  Business partners could require certification under applicable AI soft law programs as a condition of doing business with a company.  Insurers could require the implementation of appropriate AI risk management programs as a condition of liability coverage, just as some insurers did with nanotechnology.34 Granting agencies could condition funding on compliance with specified AI guidelines or codes of conduct.  Professional journals could require compliance with certain best practices or guidelines as a condition of publication.\nMore formal, quasi-legal enforcement approaches could also be pursued.  The Federal Trade Commission (FTC), under its general authority to police deceptive and unfair business practices, could take enforcement action against a company that publicly commits to comply with a certain code of conduct or best practices but then fails to live up to its commitment.  Private standards, especially those adopted by well-known standard-setting bodies such as the IEEE, could be used to set a standard of care in tort law, and a company's failure to adhere to such standards, even though they are voluntary, could be evidence of failure to use reasonable care in a product liability or personal injury lawsuit.35\nSoft law measures also generate experience and field testing that can inform subsequent traditional regulation.  
Indeed, soft law can sometimes be seen as a transitional phase of governance that gradually \"hardens\" into traditional government regulation.36 We may already be starting to see this hardening of soft law in the AI space – for example, the State of California recently adopted legislation \"expressing support\" for the Asilomar AI Principles.37\nSecond, the proliferation of different AI soft law programs and proposals creates confusion and overlap in AI governance.  It is hard for an actor in the AI space to assess and comply with all these different soft law requirements.  Where do these various soft law programs overlap and duplicate each other?  Where do they contradict each other?  What gaps are not addressed by any of the existing soft law proposals?  Some type of coordination is needed.\nWendell Wallach and I have proposed such a coordinating entity, which we have called a Governance Coordinating Committee (GCC).38  This entity would not seek to duplicate or supplant the many organizations working on developing governance approaches to AI, but rather would provide a coordinating function, much like an orchestra conductor, ensuring that the various players are connected with, aware of, and responsive to each other's proposals, while also identifying gaps and inconsistencies in existing programs.  In a forthcoming publication, we describe the functions of the GCC to include the following:\n\nInformation Clearinghouse, by collecting and reporting in one place all significant programs, proposals, ideas or initiatives for governing AI;\nMonitoring and Analysis, such as identifying gaps, overlaps, and inconsistencies with respect to existing and proposed governance programs;\nEarly Warning System, by noting emerging issues or problems that are not addressed or covered by existing governance programs;\nEvaluation Program, which scores various governance programs and efforts on their metrics and compliance with stated goals;\nStakeholder Forum, by providing a space for stakeholders to meet and discuss governance ideas and issues and to produce recommendations, reports, and roadmaps;\nCredible Intermediary, serving as a trusted \"go-to\" source for the media, the public, scholars and stakeholders to obtain information about AI and its governance;\nConvener for Solutions, by convening interested stakeholders on specific issues to meet and try to forge a negotiated partnership program for addressing unaddressed problems or governance needs.39\n\nThere are many unanswered questions about how a GCC would function.  Who would fund it?  Who would be its employees and how would they be selected?  What would be its administrative structure?  What would be its precise functions and charter?  How would stakeholders interact with the GCC?  How would the GCC achieve and maintain its credibility as an \"honest broker\"? Initiatives are currently underway to explore such questions in the context of planning an international conference to discuss and possibly create a global GCC for AI governance.\nConclusion\nSoft law measures are very imperfect governance tools because of their lack of enforceability and accountability, and because they are often written in general and self-serving language.  Yet for a rapidly developing and expansive technology like AI, comprehensive government regulation is not feasible, at least in the short term, when at best piecemeal regulatory enactments are possible.  
Accordingly, soft law will be the default approach for most AI governance at the present time.  For that reason, there is a need to explore ways to indirectly enforce and coordinate the proliferation of soft law measures that have already been proposed or enacted for AI.\nGary Marchant, Regent's Professor of Law and Director of the Center for Law, Science and Innovation, Arizona State University Law SchoolThe post \"Soft Law\" Governance of Artificial Intelligence first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "“Soft Law” Governance of Artificial Intelligence", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=3", "id": "8fa040260ef7f15d8cb78aac5d56e7ac"} -{"text": "Genetically Modified Organisms: A Precautionary Tale for AI Governance\n\n\n Download as PDF\n\nThe fruits of a long anticipated technology finally hit the market, with promise to extend human life, revolutionize production, improve consumer welfare, reduce poverty, and inspire countless yet-imagined innovations. A marvel of science and engineering, it reflects the cumulative efforts of a generation of researchers backed by research funding from the U.S. government and private sector investments in (predominantly American) technology companies. Though most scientists and policy elites consider the fruits of this technology to be safe, and the technology itself as a game-changer, there is still widespread acknowledgment that certain applications raise deeply challenging ethical issues, with some commentators even warning that careless or malicious applications could cause planet-wide catastrophes. Indeed, the technology has long been a fixture of science fiction, as an antagonist in allegories about hubris and science run amok—a narrative not lost on policy makers in the United States, Europe and elsewhere as they navigate the challenges and opportunities of this potentially world-changing new technology.\nI'm referring to genetically modified organisms (GMOs), circa 1996, the year they entered the commercial market, and the biotechnologies used to produce them. By this time, the governance regimes in Europe and the United States for GMOs had diverged sharply, with Europe hardening as anti-GMO and the United States as permissive. The story behind why GMO policy in both places evolved the way it did, presented below, has important lessons for thinking about AI governance. Among other lessons, a consensus among technologists and other elites that a new technology is safe, and that its benefits outweigh its risks, does not guarantee its broader societal acceptance.\n1. Comparative U.S.-European GMO Regulatory Policy\nThe World Health Organization defines GMOs as \"organisms (i.e. plants, animals or microorganisms) in which the genetic material (DNA) has been altered in a way that does not occur naturally by mating and/or natural recombination.\"1 Instead, GMOs are the product of genetic engineering methods. 
Commercial cultivation of genetically modified crops began in 1996.2 By the end of that year, there were 1.7 million hectares of such crops worldwide, most of it in the United States.3 As of 2016, worldwide plantings reached 185 million hectares, but with 90% of the hectares confined to just five countries (the United States, Brazil, Argentina, Canada and India) and most of the remaining plantings (98%) in just five additional countries (Paraguay, Pakistan, China, South Africa and Uruguay).4 The vast majority of scientists and other mainstream experts in biotechnology assess that GMOs present no inherent risks to health or the environment compared to non-GMO alternatives. This view is generally reflected in U.S. government policy towards GMOs since the 1980s, and has helped make the United States the world leader in GMO crop production, with around 40% of the world's total plantings.5\nEuropean regulators, however, have taken a very different approach, one involving the application of a \"precautionary principle.\" The concept of precaution is mentioned in Article 191 of the 1992 Treaty on European Union, and was first defined eight years later in a February 2000 communication from the European Commission.6 At its most basic, the precautionary principle holds that scientific uncertainty about risk due to insufficient or inconclusive data should not bar regulatory action when the activity or conduct in question implicates significant and irreversible threats to human health or the ecosystem. The precautionary principle forms the substantive basis for European regulator hostility towards the cultivation and sale of GMOs in Europe, where less than .1% of worldwide GMO plantings are located, most EU Member States have outright bans on GMO cultivation, EU law requires labeling of products containing .9% or more GMOs, and GMO imports are mainly used in animal feed.7\nIn developing the account of GMO governance that follows, I draw heavily on Pollack and Shaffer's definitive comparative study of American and European policies towards GMOs, which emphasizes \"the ability of interest groups to capitalize on preexisting cultural and institutional differences, with an important role played by contingent events.\"8 As we shall see, American tolerance and European hostility towards GMOs were not inevitable: an observer in the early 1980s might well have predicted that American policy towards GMOs would trend hostile and that European policy would trend permissive.\nA. Origins of U.S. Regulatory Policy\nAs Pollack and Shaffer document, U.S. regulatory policy towards GMOs reached an inflection point in the mid-1980s, as the technology appeared headed towards eventual commercialization. The decision for U.S. policy makers was whether to adopt a regulatory approach advocated by the Environmental Protection Agency (EPA) that emphasized the \"newness\" of GMOs, owing to the genetic engineering techniques used to create them, compared to products created through conventional, non-genetically engineered processes. EPA's proposed process-based approach to GMO regulation would have brought GMOs squarely within EPA's authority under the Toxic Substances Control Act to regulate \"new\" chemicals. EPA sought to distinguish \"new\" GMOs from \"natural\" organisms, and identified the process by which GMOs are created as the logical differentiator.\nThe EPA had allies for this process-based approach in Congress. 
For example, the then-chairman of the House Commerce Committee's Subcommittee on Investigations and Oversight, Al Gore, Jr., called in 1983 for a precautionary approach to biotechnology. He asserted that \"[w]hile there are certainly benefits to be reaped from this technology, I am concerned that we have a proper understanding of all potential environmental ramifications before a genetically novel organism is released, rather than having to learn about them after the damage has occurred.\"9\nAs late as 1984, the EPA was asserting such precautionary, process-based regulatory jurisdiction over GMOs, even as its partner agencies responsible for food and agriculture production, the Food and Drug Administration and the U.S. Department of Agriculture, began to emphasize product-based approaches to regulating biotechnology.10 As Pollack and Shaffer observed, \"During the first half of the 1980s, therefore, it appeared as if the U.S. might take a highly precautionary, process-based approach to GMO regulation.\"11\nThe pro-business Reagan Administration weighed in decisively in 1986, when the White House Office of Science and Technology Policy (OSTP) issued a \"Coordinated Framework for the Regulation of Biotechnology\" that effectively resolved the process/product debate in favor of a product-based approach. The Coordinated Framework was the culmination of an interagency process led by OSTP. It essentially dismissed the proposition that products produced by biotechnological processes pose any inherent human health or environmental risks.12\nThe Coordinated Framework established the industry-friendly USDA as the lead U.S. regulator for introducing new GMOs into the environment, and confined the EPA's role to regulating GMOs with certain pesticidal traits. In doing so, the Coordinated Framework cabined the EPA's scope of influence on GMO regulation, and shifted the center of gravity in the Congress from committees with wide-ranging oversight responsibilities over technology and the environment to the Congressional agriculture committees. As the 1980s came to a close, congressional interest in agricultural biotechnology shifted away from the early 1980s focus on risks and towards the benefits.13\nMeanwhile, the George H.W. Bush Administration preserved the White House's central role in setting policy on biotechnology.14 Finally, the Coordinated Framework's rejection of process-based distinctions set the table for the FDA's 1992 Statement of Policy declaring that it would approve new foods based solely on whether the product itself presented health risks. Using parallel logic, the FDA also decided that year that GMOs did not require any market approvals or labeling requirements.15 Thus, by the mid-1990s, just as commercialization neared, the three regulatory agencies with some claim of jurisdiction over GMOs were firmly in the camp of product-based regulation, while Congress was mainly preoccupied with the upside of GMOs.\nThe net effect of these developments was to make it \"more difficult for GM skeptics to use the existing regulatory and political framework to impede approval of GM crops and foods in the U.S.\"16 As Pollack and Shaffer conclude, \"the U.S. 
system for biotechnology regulation has been determined almost exclusively by regulators operating under existing statutory authority, while the legislature (Congress) has played a relatively passive oversight role.\"17 In addition, regulation is primarily at the Federal level, limiting the ability of states to intervene. Advocates seeking change to U.S. policy towards GMOs would have to overcome a regulatory consensus codified in a policy established by a White House-led interagency process among the three key agencies with relevant responsibilities, and/or persuade a Congress with countervailing voices to legislate. As a result, U.S. policy is relatively resistant to change in an anti-GMO direction.\nIt is important to recognize how contingent this policy outcome was on the particular set of institutional factors in place in the mid- and late 1980s. The Reagan Administration, with Anne Gorsuch Burford as EPA Administrator, had made a determined effort to weaken the EPA as part of the overall deregulatory agenda it adopted upon entering office in 1981.\nFor example, the EPA's budget in 1981 reflected a 35% cut compared to 1980, and over the course of Reagan's first three years in office the agency shed over 20% of its workforce.18 The agency's budget and workforce levels gradually recovered over the course of the decade, but during the crucial period when the die was being cast on the Administration's policy towards regulating GMOs, EPA was hardly in a position—fiscally or politically—to argue for an expansive approach to regulating GMOs that would have required substantial investments in EPA's investigatory, adjudicatory, and enforcement functions. Thus, the U.S. agency most inclined to closely regulate GMOs—and to advocate this approach within Executive Branch deliberations and externally to Congressional and other audiences—was marginalized and relatively weak during this formative period.\nThis was also the period when Gorsuch's successor, William Ruckelshaus, shifted the agency's approach to assessing environmental dangers towards the scientific risk assessment principles identified in the landmark 1983 National Academies study \"Risk Assessment in the Federal Government,\"19 which articulated an approach to risk management philosophically consistent with the risk-based, product-oriented approach of the Coordinated Framework. And the American biotechnology industry was already well organized in the 1980s, with a willing partner in the deregulatory Reagan Administration for a relatively light-touch approach to regulating biotechnology.\nFinally, in 1986, the year of the Coordinated Framework, the commercialization of GMO products still lay several years in the future, which meant that the overall political salience of the issue was relatively low. Indeed, it is worth noting that during this formative period, and onward into the 1990s as GMOs began arriving in the marketplace, the United States enjoyed a period of relative calm in terms of health and safety scandals. The relatively marginal political salience of food and agriculture issues generally for most Americans throughout the decade, and the overall lack of major health and safety scandals relating to food, agriculture and the environment during the crucial period when GMOs were finally hitting the market, left few opportunities for policy entrepreneurs to seize in order to mobilize efforts to change U.S. policy. 
And as indicated earlier, even if opportunities had presented themselves, policy entrepreneurs would have had limited avenues for their advocacy, due to the collective embrace by the three key federal regulatory agencies—FDA, USDA and EPA—of product-based regulation over process-based precaution, and to the presence of countervailing, pro-biotechnology actors and interest groups in the Congress.\nThese countervailing interest groups included the biotechnology industry and, increasingly, farmers who had planted GM crops and experienced such benefits as higher yields and reduced need for pesticides. As Pollack and Shaffer conclude, the decisions made in the 1980s around regulation of GMOs may have been contingent, but once made, those decisions initiated a path dependency that favors the status quo product-based approach to GMO regulation.20\nB. Origins of EU Regulatory Policy\nAs noted earlier, Europe has taken a very different, and far more interventionist, approach to regulating GMOs. Initially, however—and in a mirror image of early 1980s skepticism about GMOs in the United States—much of the European interest in biotechnology and GMOs in the early 1980s was motivated by a desire to support the competitiveness of the EU's biotechnology industry in the face of burgeoning competition from American companies. And much as the Reagan Administration had leaned on the OSTP to lead the interagency process that yielded the Coordinated Framework in 1986, when the European Commission began exploring frameworks for supporting and regulating biotechnology in the early 1980s, it leaned primarily on the research-focused Science, Research and Development DG (DG Science) to lead the Commission's efforts.21\nThe center of institutional gravity within the Commission began to shift away from DG Science, however, \"as biotechnology moved out of the laboratory to planting in crop trials and the marketing of GM seeds and foods.\" The anticipated commercialization of the technology created demand for legislation on agricultural biotechnology as the 1980s were coming to a close.22\nThe institutional actors within the Commission best suited to draft legislation, due to their mission and competencies, were DG Environment and DG Agriculture. DG Agriculture, however, was preoccupied by challenges associated with its lead role in Europe's Common Agricultural Policy (CAP). The CAP was one of the three core pillars of what was then still called the European Economic Community, and its framework of agricultural subsidies and other interventions had contributed to significant agricultural surpluses that required further policy interventions, consuming DG Agriculture's institutional bandwidth.23\nThat left DG Environment to take the lead on biotechnology regulatory policy, which it naturally framed as an environmental challenge squarely within its jurisdiction. In 1986, the Commission issued its first major policy document on biotechnology, a communication on biotechnology regulation largely authored by DG Environment. The communication urged a European-level regulatory response to biotechnology. It was followed in 1988 by a proposal for a Directive on the \"deliberate release\" of biotechnology products into the environment. The proposed Directive emphasized the lack of scientific data about the risks of biotechnology and called for a regulatory framework requiring case-by-case assessments of new GMOs before they could be released into the environment. 
In other words, the Commission proposed a process-based approach to regulating GMOs: bioengineered products are regulated within this schema because of the distinctive process used to create them.24\nAs Shaffer and Pollack note, \"the biotech industry was not as well organized in Europe\" during this critical period of policy development in Europe, \"and was unable to mobilize political resources to prevent the process-based GM regulation that was framed in environmental terms.\"25 In addition, the European Parliament criticized the proposal as too lenient, firmly establishing it as a reliable, strongly pro-regulatory pole in European debates about biotechnology.\nThe European Council rejected most of the Parliament's proposed amendments in its final Directive 90/220, but changed the Commission's proposed decision rules to give Member States additional avenues to contest the Commission's decisions in two key areas: approving new GMOs for release into the environment and reviewing decisions by Member States to implement \"safeguards\" that \"provisionally restrict or prohibit the use and/or sale\" of a GMO from its territory.26 In 1997, the Commission issued a follow-on measure, Regulation 258/97, establishing labeling requirements and an approval process for \"novel foods,\" including GMOs that had \"not hitherto been used for human consumption to a significant degree within the Community.\" Unlike the approval process for new GMOs in Directive 90/220, which vested the Commission with the authority to approve or deny applications for new GMOs, Regulation 258/97 vested this authority in Member States.\nDirective 90/220 and Regulation 258/97 established the initial institutional framework for how decisions about GMOs would be made in Europe. As Pollack and Shaffer summarize:\nIn comparison with the U.S. system, the regulatory structure established by Directive 90/220 and Regulation 258/97 was more complex, more decentralized, and more politicized than the U.S. system. It was more decentralized because of the key role of member states to start, oppose, and reject (through the imposition of safeguards) the approval of a GM seed or food. It was more politicized because of the involvement of politicians in the approval process. And it was more complex in that it created more institutional \"veto points,\" where the approval of new GM varieties or the release and marketing of EU-approved varieties could be blocked.27\nThus, when contingent events occurred that boosted the political salience of biotechnology issues, interest groups had numerous options—the Commission, the Parliament, and the various political, regulatory and policy-making institutions within each Member State—for where to target their efforts to influence policy.\nThe debate about biotechnology in the 1990s occurred during a period of intensive economic and political integration in Europe. The growth and success of the single market throughout this period meant that Europeans were exposed to goods and services from other EU Member States—and any safety risks that may have accompanied them. 
This put pressure on Member States to act decisively when products from other Member States were exposed as having safety risks, and created a competitive dynamic between Member States and Brussels to demonstrate which was tougher on protecting public health and safety.\nThe late 1990s in particular were marked by a series of significant food and health safety episodes in Europe, including mad cow disease, asbestos problems at a major French university, and dioxin in Belgian chicken feed. These episodes undermined the public's confidence in the ability of their governments, of industry, and of the scientific community to understand and manage safety risks.28\nMad cow disease, or bovine spongiform encephalopathy (BSE), stands out in particular. Mad cow disease was first detected in cows in the UK in the early 1980s. The UK's Ministry of Agriculture assured the public and the European Commission that the disease did not pose a threat to humans. A significant outbreak of the disease occurred among cattle, and the EU banned consumption of cattle sick with the disease. Over the course of the 1990s, as public concern within the UK grew about the health effects of eating beef from mad cow-diagnosed cattle, the government, its scientists, and the beef industry continued to reassure the public that the disease posed no meaningful threat to human health. The UK government also persuaded the European Commission that it should not restrict the sale of British beef.\nThus, when the UK government announced in 1996 that ten people had been diagnosed with the human variant of mad cow disease, and that they probably contracted the disease by coming into contact with infected cattle, the announcement flew in the face of more than fifteen years of reassurances from British and European Commission regulators, and from industry, that the disease posed no threat to human health. The horrifying nature of the disease—inevitable death, preceded by graphic neurological decline—only amplified its impact. 1996 was also the year that a Scottish scientist touched off a lively debate about the ethics of biotechnology when he announced the cloning of a sheep named \"Dolly,\" and the year that the WTO authorized Canada and the United States to implement retaliatory tariffs against EU farm products in response to the EU's ban on hormone-treated beef, triggering an anti-trade/anti-globalization backlash from farmers and activists such as José Bové, who tended to be anti-GMO as well.29\n1996 was a notable year in at least one additional respect: it was the first year of commercialization for GMOs. In April of that year, the Commission approved the sale of products containing a certain kind of bioengineered soy, despite objections from Member States. When the product was imported into Europe later that year, it triggered protests by Greenpeace and other activist groups. As Pollack and Shaffer put it:\nThe close succession of these events illustrates how the popular understanding of GM products in Europe became associated with consumer anxieties related to food safety crises, distrust of regulators and scientific assessments, disquiet over corporate control of agricultural production, ethical unease over genetic modification techniques, environmental concerns, and anger over the use of international trade rules by the U.S. to attempt to force \"unnatural\" foods on Europeans.30\nPublic opinion in Europe, already tepid towards GMOs, soured drastically in the aftermath of these events. 
Reviewing Eurobarometer poll results, Gaskell and co-authors report that \"[a]ll the EU countries, with the exception of Spain and Austria, showed moderate to large declines in support for GM crops\" over this period, with similar declines in support for GM foods.31\nThe episode was repeated in 1997, when the Commission approved another GMO over Member State objections. This time, a succession of Member States moved to block the product from their territory by invoking their right under Directive 90/220 to implement safeguards.32 Meanwhile, activists launched successful campaigns to pressure retailers and major European food processors to renounce the sale and use of GMOs,33 and in 1999 a coalition of Member States comprising Denmark, France, Greece, Italy and Luxembourg succeeded in imposing a de facto EU-wide moratorium on approving new GMOs. The moratorium lasted for six years.34\n2. Lessons for AI Governance\nFrom this history, I draw five lessons that are relevant for AI governance. First, a consensus among technologists and other elites that a new technology is safe, and that its benefits outweigh its risks, is no guarantee of broader societal acceptance. Societal attitudes about the benefits and drawbacks of technology can change over time, with institutional, cultural, and contingent event factors enabling or constraining, as the case may be, how the institutions of governance adapt to, and even themselves shape, these attitudes. As Pollack and Shaffer demonstrate with respect to GMOs, Europe's adoption of a process-based, precautionary approach was not inevitable. Had DG Agriculture assumed a greater role in shaping the Commission's initial communication on GMOs, had European biotechnology interests been better organized, or had European farmers seized on the benefits of GMOs in terms of higher yields and lower pesticide use and become a countervailing interest the way their American brethren did, the institutional conditions under which regulatory and policy decisions were made in Europe might have taken a different, more permissive path.\nSimilarly, if Europe hadn't experienced a perfect storm of public health and safety crises in the 1990s, it is conceivable that these institutional conditions might have eventually yielded a more permissive approach to GMO governance. Consider the case of France—a country popularly associated with traditional foodways and cultural preservation, and an outsized driver of agricultural policy in Europe. 
Through the first half of the 1990s, France actually had by far the most GMO field trials in Europe, and ranked third in the world for such field trials during this period, behind only the United States and Canada.35 It even attracted forum shopping by GMO producers as the friendliest country in Europe for seeking regulatory approval for new GMOs, and was the only Member State in 1996 to vote in favor of approving a variety of genetically-modified corn.36 In the face of the health and environmental scandals of the late 1990s and the concurrent backlash to globalization, however, France abruptly reversed course and became a reliably staunch backer of aggressive regulatory action against the introduction of GMOs in Europe.37 And as noted earlier, in the early 1980s, an observer at the time might have predicted that the United States would be the one to adopt a skeptical approach to GMO governance, not Europe.\nThis could happen to AI, and not just in Europe, where politicians and regulators have already signaled a tepid view towards AI, citing concerns ranging from privacy to its effects on labor markets. The United States, as of this writing, is in the midst of what could turn out to be a significant shift in political and policy elite attitudes towards Silicon Valley. For the past couple decades, information technology has been celebrated by American elites as both a democratizing force for ordinary people to assume greater control of their economic and political fortunes, and as an essential enabler of \"disruptive\" innovation fueling economic growth and improved consumer welfare. Internet platforms in particular enjoyed strong presumptions of competency and good faith, especially on the American political left. These presumptions on the left, and the traditional anti-regulatory sentiment of the political right, formed the basis for what had been a tentative bipartisan consensus that technology regulation was, with limited exceptions, either premature or unnecessary, with arguments about the negative effects of regulation on innovation and investment typically prevailing over health, safety, and other equities.\nA series of developments since Russia's weaponization of social media to spread fake news during the 2016 election, however, has given rise to a so-called \"techlash\" in Washington, with progressives and conservatives alike adopting a far more hostile, skeptical, and confrontational posture towards \"Silicon Valley.\" Though the two sides have different critiques, it is safe to say that whatever presumptions of good faith and competence Silicon Valley enjoyed among policy and political elites in Washington before 2016 are badly damaged, and that the political antibodies against regulation are weakening. Of course, there is no guarantee that this will coalesce into a governing coalition with an affirmative policy agenda, but it marks a major shift in attitudes in the United States towards Silicon Valley.\nSecond, governance decisions made today about technology policy domains relevant to AI may have durable, long-lasting impacts on how policy evolves in the future. For both Europe and the United States, key decisions about GMO governance were made more than a decade before the technology was commercialized. And today, some 30 years after these initial decisions were made, these decisions continue to define the framing assumptions behind the two regulatory regimes. 
Europe's initial position on GMOs, for example, was contingent—had France's pre-1990s preferences towards GMOs prevailed, the continent's regulatory framework might be more permissive today. Once the EU enacted Directive 90/220, however, it created an institutional framework that proved highly prone to a race to the top (or bottom, depending on your perspective) as the varying actors involved in decision-making about GMOs sought to demonstrate their commitments to health and safety in response to contingent events.\nFor AI governance, policy and regulatory decisions about privacy, security, and safety seem especially important in establishing framing assumptions about how to weigh the costs and benefits of AI applications. Constituencies threatened by deployments of automated vehicles, for example, might meet arguments about the safety and efficiency benefits of automated vehicles with concerns about how personal data is collected and used by the vehicles. Already, the contours of the global privacy landscape are being formed, in these relatively early days of commercial deployments of AI. China's Cybersecurity Law went into effect in 2017, and considerable additional work is going into complementary initiatives, such as the Personal Information Security Specification and the Security Impact Assessment Guide of Personal Information. Europe's General Data Protection Regulation (GDPR) went into effect last year, along with the Network and Information Security Directive, with the latter likely to emerge as a focus of refinement and elaboration in the years ahead. In the United States, 2019 figures to be a seminal year for technology governance, with a tough new privacy law set to go into effect in California in 2020, creating a de facto deadline for the U.S. Congress to preempt it with Federal comprehensive consumer privacy legislation that could shape data privacy practices in the United States for generations to come.\nThird, the intuition behind precaution—the notion that uncertainty about cause and effect attributable to data limitations should not bar regulatory intervention as a precautionary measure, especially when the negative effects may be substantial and irreversible—is a powerful rhetorical tool for justifying regulatory interventions in any domain with complex questions about risk. As Wiener and Rogers note in their comparative study of precaution in the United States and Europe, the precautionary principle is not a formal component of U.S. law, as it is in Europe, but there are regulatory actions in the United States that have been colored by shades of precaution, such as the USDA's early (and, as it turns out, prescient) import restrictions on British beef in 1989 due to concerns about mad cow disease and the FDA's ban on blood donations from would-be donors who had lived in the UK for a period of time.38 Similarly, while the precautionary principle is formally enshrined in European law, its application varies—not all regulatory domains are marked by the same degree of precaution as GMOs. As Wiener and Rogers conclude, \"[s]ometimes Europe does take a more precautionary stance than the U.S., but sometimes the U.S. is the more precautionary regulator…Ultimately, neither Europe nor America can claim to be the more precautionary actor across the board.\"39\nCertain deployments of AI may be especially vulnerable to application of a precautionary principle, in the United States as well as Europe, due to the challenges associated with explainability. 
Deep learning techniques, for example, rely on neural networks or similar architectures and large data sets to train an algorithm to perform a variety of complex tasks, such as driving a car. These algorithms are so complex that it may be impossible to isolate a cause or reason for a particular action. For those seeking to delay or interfere with deployments of AI, invoking precaution may prove to be a powerful strategy, particularly when the deployments in question implicate important societal values, such as privacy, security and safety.\nFourth, the institutional characteristics of how decisions are made about governance, such as the presence and configuration of veto points, establish the parameters around how and even whether interest groups can meaningfully influence policy making, especially in the face of contingent events. In the case of GMOs and Europe, contingent events had major effects in hardening European regulator sentiment against GMOs in significant part due to the institutional characteristics of the decision-making processes in Directive 90/220 and Regulation 258/97—namely, a set of processes that created multiple veto points. To this day, the European Union has approved just one GMO for cultivation in Europe, which four countries in Europe cultivate.40 Strong majorities agreed in the last Eurobarometer poll on this subject that GMOs are \"unnatural\" and disagreed that GMOs are safe and that development should be encouraged.41\nWith respect to AI governance, the fact that GDPR devolves enforcement authority to member states and their respective data protection authorities and judiciaries creates many opportunities for policy entrepreneurs to advance their preferences through enforcement actions and litigation. Similarly, the California Consumer Privacy Act of 2018 gives the State of California, through its elected Attorney General, enforcement authority over that law's requirements, and also establishes a private right of action for data breaches.\nFinally, the nature and sequencing of the benefits and costs of AI deployment may also impact the resilience and adaptability of AI governance frameworks, especially in the face of contingent events. For example, if the benefits of AI are felt deep and wide by key stakeholders, when costs do emerge, there are more likely to be countervailing constituencies to offset advocacy by those feeling the costs. This was the case with respect to GMOs in the United States, where farmers adopted the technology relatively early, creating what Pollack and Shaffer quipped \"facts in the ground.\"42 On the other hand, if the benefits of AI are distant or diffuse and thus diluted, or accrue to narrow constituencies, and costs emerge, the countervailing constituencies may be disorganized and/or weak. For example, it is conceivable that many of the initial society-wide benefits of automation will be diffuse—for example, a statistically lower risk of car accidents in a given population. The costs, however, may be concentrated in certain groups within that population, such as people whose professions as drivers are at risk due to automation. A constellation of costs and benefits along these lines could favor the emergence of organized political opposition to automation.\n3. Conclusion\nThe transatlantic divergence over GMO governance ought to stand as a precautionary tale for technologists and policy makers that the benefits of a new technology seldom speak for themselves. 
Policy entrepreneurs, using contingent events and incumbent institutions, have a say too.\nBy Andy Grotto, William J. Perry International Security Fellow at the Center for International Security and Cooperation,  Research Fellow at the Hoover Institution, Stanford University\n The post Genetically Modified Organisms: A Precautionary Tale for AI Governance first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "Genetically Modified Organisms: A Precautionary Tale for AI Governance", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=3", "id": "7fdd6b158a3c831fa84b49de9ec6eae9"} -{"text": "The Algorithm Dispositif (Notes towards an Investigation)\n\n\n Download as PDF\n\nDavide Panagia\nProfessor of Political Science\nUniversity of California, Los Angeles\n\nHow can we speak of algorithms as political?\nThe intuitive answer disposes us to presume that algorithms are not political. They are mathematical functions that operate to accomplish specific tasks. In this regard, algorithms operate independently of a specific belief system or of any one system's ideological ambitions. They may be used for political ends, in the manner in which census data may be used for voter redistricting, but in and of themselves algorithms don't do anything political.\nIn recent years, with the development of a field of research generally referred to as \"critical algorithm studies,\" the sense of the politically neutral standing of algorithms has been placed under suspicion. Scholars from diverse fields – including cultural, film, media, and literary studies as well as race and ethnicity studies, sociology, philosophy, and the law – have begun to explore the extent to which, as socially effective structures, algorithms aren't merely abstract recipes for task completion, but they also create and exacerbate extant conditions of inequality, exploitation and social domination. An algorithm contains within it a \"cultural logic\" (as David Golumbia has named it) that carries with it, in its coded programming, a social imaginary of how things ought to be classified, organized, and operationalized.1 Taina Bucher, in her recent book If… Then: Algorithmic Power and Politics, also raises the issue of how algorithmic structures, once they are embedded in everyday life and practice, don't simply help us complete mundane tasks more efficiently, but also produce (and – crucially – reproduce) everyday conditions of perceptibility and intelligibility.2 In short, this growing area of research shows how algorithms are constituent participants in everyday life management. More than abstract practical instruments, they are life coefficients that, as the political geographer Louise Amoore has argued, are tasked with managing uncertainty through probability calculations and risk assessment. The end result is that not only present life, but future events too, may be managed and administered. As Amoore states, \"the emphasis of risk assessment ceases to be one of the balance of probability of future threat and occupies instead the horizon of actionable decisions, making possible action on the basis of uncertainty.\"3 The shift that Amoore notes is an ontological one: uncertainty used to be a reason not to act, both morally and politically. We would wait to act until we had all the facts. But now, thanks to the deployment of probabilism in everyday life, uncertainty is a legitimate justification for preemptive action. 
That is, we act when we are uncertain precisely so as to mitigate possible outcomes. \nAs a contributor to this area of inquiry and research, I wish to raise some issues regarding the difficulties and challenges of thinking about the politicality of algorithms. Specifically, I wish to consider how an algorithm is a medium (first) and a political medium (second). My very rudimentary and initial notes towards such an investigation stem from a general frustration that begins with the following question: are all media political in the same way? Here's what I mean by asking that question. One has the sense, when thinking critically about the status of algorithms in everyday life, that if they are to be considered a political medium, then they operate no differently than a microphone, or a television, or a film. That is, their status as a political medium is located in their ability to transmit information. And as instruments of transmission, they are \"influence machines.\"4 Thus, the effectivity and extent of their influence (otherwise imagined as their power of coercion) is what makes them political. \n\"Influence machine\" is a term coined by the Viennese neurologist and psychologist Victor Tausk, who, prior to his work in the field of psychoanalysis, was a distinguished jurist and journalist. Tausk defines the influence machine as a \"delusional instrument\" that \"serves to persecute the patient and is operated by its enemies.\"5 Typically, patients describe such devices as possessing the following characteristics: 1. It makes individuals see pictures; 2. It manipulates the mind by inducing and removing thoughts; 3. It has physical effects upon the body that are beyond a person's control; 4. It creates strange and indescribable sensations – that is, new sensations that have yet to be named in language; 5. It produces physical and pathological maladies. According to Tausk, patients recount how these machines are immensely complicated, with many parts, and that they operate by means of obscure constructions. They are, to use modern parlance, \"black boxes\" – devices that operate effectively but are also fantastical. Finally, as Tausk accounts for it, the influence machine is perceived by the patient as a \"hostile object\"6 or a \"diabolical apparatus.\"7\nWhen we consider algorithms critically and reflect on their status as political media, we tend to treat them as influence machines in the Tauskian sense. That is, the critical paradigms we deploy to analyze the status of algorithms carry within their critical imaginary an account of algorithms as influence machines, hostile objects that manipulate mind and soul, not to mention the body. Hence the indisputable persuasiveness of the \"black box\" metaphor. In part, this treatment of algorithms arises from a characteristic of the dominant critical apparatus throughout the humanities and social sciences, as well as critical legal studies, that considers the task of criticism to be one of negating various forms of structural domination through the exposition or the unearthing of the mystical operations of power that sustain and proceduralize practices of subjection. In this regard, the image of the Tauskian influence machine is both normatively and conceptually provocative and helpful to our critical investigations. 
This, because that image corresponds to our sense that domination operates through channels of coercive influence, as Thomas Hobbes reminds us in his Leviathan when he describes human psychology as inclined to limit the freedom of others for the purposes of self-aggrandizement.\nIn recent years, scholars have developed an alternative, and compelling, account of criticism that isn't reducible (but is also not adverse) to the view of criticism outlined above. This novel approach to criticism is more experientially focused – that is, it looks to activities, practices, and actions – rather than ideational specters. As the literary scholar, Toril Moi, accounts for it, \"actions aren't objects, and they don't have surfaces or depths.\"8 This view of criticism is less concerned with unmasking underlying structures of domination, and thus imagining that there exists a hidden world of power beneath the surface of experience. Rather, it considers experience as its starting point. In this respect, it is a radically empiricist mode of criticism that does not depend exclusively on the cognitive expertise of the critic to see things that others, uninitiated in the epistemic ambitions of a specific school of criticism, cannot.9 If a task of criticism is to develop an understanding of what something does, and how, then treating the doings of technical objects as if they only perpetuate the operations of domination seems to go against the idea of an activity as an embodied practice. This doesn't mean that activities and practices, including the political effectivity of technical objects, are transparent or self-evident. It is, rather, to treat practices and activities as things done in the world and not merely as delusional, automated habits like those characterized by Tausk's influence machine.\nWhat does this alternative approach to the practice of criticism mean for the political study of algorithms? It means that alongside our understandings of algorithms as complicit in ideological domination operating along the same lines, and within the same register, as other media like television or film, we also consider algorithms in terms of their technical milieu and, crucially for my purposes, we examine the forms of participation they enable, disable, constrain, and proliferate.10 By \"participation\" I mean something like the ways in which algorithms take part in everyday life. In short, the political study of algorithms that I am proposing looks to the ways in which new forms of relationality are introduced in a specific lived context, and how extant or already existing modes of association are reproduced or rearticulated within that same context. \nHow might this be understood as a specifically political form of criticism? Politics (as I propose to analyze it – though, of course, not just me) isn't merely the exercise of domination (as it has been classically defined), but is fundamentally a pluralist activity for the creation of value through the forging and fomenting of relations between peoples, things, places, and times. Politics exists when things exist in relation to one another, and this fact of relationality is itself based on the sense that our individual and collective worlds are constituted by a plurality of beings, both human beings and non-human objects. 
This fact of pluralism – of there being not just something rather than nothing (as Plato had famously noted), but multiple somethings (or what the philosopher William James calls the pluriverse) – creates the possibility of relationality and hence, of things and people coming together and wrenching apart. In short, relationality creates worlds. As James affirms, \"knowledge of sensible reality thus comes to life inside the tissue of experience. It is made; and made by relations that unroll themselves in time.\"11\nIn this respect algorithms are political because a fundamental function of the algorithm is to generate world-making relations, and what seems to me to be of central political import are the experiences of relationality that algorithms generate. Consider, in this regard, something as basic as a sorting algorithm like the purchasing recommendation algorithm on Amazon.com. Anyone who has shopped on Amazon has experienced both the frustration and the excitement of these recommendations. And clearly, there is an element of the influence machine built into these sorting mechanisms: through the correlational realist magic of artificial intelligence, we receive a suggestion about how to extend (or reproduce, or replicate, or alter) our experiential pluriverse. The algorithm sorts our previous views, purchases, and (crucially) our attentions (not just mine, by the way, but those of all who have attended to the same object) in such a way as to generate an expectation of future taste as invested in this other (perhaps previously unimagined … by me) object of enjoyment. That the magic of correlation functions within a capitalist climate of profit maximization is surely a contextual truth about the sorting algorithm, but that insight tells me little or nothing about the politicality of the algorithm. It simply confirms what I already know: that most everything created and operationalized in my world is done so for the purpose of augmenting the revenues of a particular organization – in this case, Amazon.com.\nBut there is something else interesting going on here with this sorting algorithm: by presenting its recommendations as it does, it articulates relations not just between me and another commodity of desire, but also between an expectation of taste (based on something I may have enjoyed in the past) and a future value. Now, regardless of whether this recommendation is accurate or not, worthwhile or not, or ultimately profitable or not for the company, the simple fact that a relation has been posited is a politically relevant fact about algorithms. And this is a politically relevant fact independent of (though not innocent of) the particular ideological context of its operation. \nTo treat an algorithm more broadly as a relational medium allows us to say this about them: algorithms exist in the human condition of separateness. They are technical media that have been invented in order to mediate separateness – of time, of space, of awareness, of attention. In short, algorithms intermediate the separateness of the in-between which is the condition sine qua non of human pluralism. And this radical empiricist insight helps get at a possible answer to the question, how do algorithms participate in politics? They participate by partaking in scenes of intermediation that exist in the in-between of peoples, places, things, and events. 
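To make the sorting-algorithm example above a little more concrete, here is a minimal illustrative sketch, in Python, of the kind of correlational recommendation the essay describes: pooling the attention histories of many users and ranking, for one user, the items most often attended to alongside what that user has already attended to (a simple item-to-item co-occurrence count). The data and names are hypothetical; this is a toy instance of the technique, not a description of Amazon's actual system.

from collections import Counter
from itertools import combinations

# Hypothetical attention histories: each inner list is one user's viewed or purchased items.
histories = [
    ["kettle", "teapot", "mug"],
    ["teapot", "mug", "tea-cosy"],
    ["kettle", "mug"],
]

# Count how often each pair of items co-occurs across all users' histories.
co_attention = Counter()
for history in histories:
    for a, b in combinations(sorted(set(history)), 2):
        co_attention[frozenset((a, b))] += 1

def recommend(my_history, top_n=3):
    """Rank items not yet in my_history by their co-attention with items that are."""
    scores = Counter()
    for mine in set(my_history):
        for pair, count in co_attention.items():
            if mine in pair:
                (other,) = pair - {mine}
                if other not in my_history:
                    scores[other] += count
    return scores.most_common(top_n)

print(recommend(["kettle"]))  # [('mug', 2), ('teapot', 1)]

Even in this toy form, the point the essay presses is visible: the output is a posited relation between a past attention and a prospective object of taste, generated out of the pooled attentions of others.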
When we think of a sorting algorithm as an intermediator of separateness, we begin to appreciate that the algorithm is political because what it is actively doing is participating in the arrangement of worlds. Our worlds. The worlds we experience in the here and now. The political matter for me, then, is not one of how algorithms constrain my freedoms. But, rather, how do algorithms participate in the formation of worlds, including the worlds within which I participate on a daily and hourly basis? Where \"participation in the formation of worlds\" stands as a short-form for a coming-to-understanding of the algorithm's powers of arrangement, association, and dissociation. \nIn this respect, I consider algorithms not simply as tools of domination but as \"sentimental\" media. Sentimental here is not synonymous with emotions and feelings (although emotions and feelings emerge out of a sentimental operation). By sentimental I refer to the ordering, structuring, and arranging of sensibilities: emotions and feelings (to be sure), but also perceptibilities and intelligibilities. In their capacities as sentimental media, algorithms first and foremost coordinate attention and awareness and make it so that we exist differently in relation to one another. An acknowledgment of the algorithm's claim on our all too human condition of separateness brings us face to face with their standing as political media. They are political because they arrange worlds. And out of these arrangements, intermedial power dynamics that may include (but aren't limited to) domination emerge. \nIt is for this reason, then, that rather than speaking about algorithms in general – or about any one specific algorithm – I prefer to think about the \"algorithm dispositif.\" What is the algorithm dispositif? In part I have answered this question above. But a few words on this Frenchism might help clarify things further. \"Dispositif\" is a Latinate word that arrives in English from France and is typically untranslatable – though it has often been mistranslated as \"apparatus.\" Elaborating the distinction between \"dispositif\" and \"apparatus\" must be deferred for another discussion, but the distinction more or less rests on the difference between an influence machine and an intermedial object. The term dispositif has its root in the Latin dispositio that refers to practices of arrangement and, to use a cognate English word, dispositions. More specifically still, the dispositif comes to us from the tradition of rhetoric – its classical sources are Aristotle's Rhetoric, Cicero's De Oratore, and Quintilian's Institutio Oratoria. The dispositio in rhetoric refers to the arrangements of the parts of speech in an oration, and how the order of ideas, of words, and of formulations, may be organized in such a way as to maximize persuasion. The dispositio is that part of an experienced oration that disposes the audience to attend to the speaker's words – not to listen, understand, or interpret them – but to attend to them, to lend them attention, to orient one's attention to them. Listening, understanding, interpreting may follow from this – indeed, usually do follow from this if the dispositio is successful. But the principal aim of the dispositio is not the transmission of an intention; this, because the dispositio is not a demonstrative proof.12 It is, rather, oriented towards the disposing (in the sense of attuning) of one's perceptibilities and forms of intelligibility. 
Consider in this regard the first line of Mark Antony's famous funeral oration from William Shakespeare's Julius Caesar, \"[f]riends, Romans, countrymen, lend me your ears.\" (Act III, Scene 2) To lend one's ears – the disposing of the ears towards speech – is the exhortation of the dispositio. What matters here is not language as expressing intention, but how what is said is posed (and poised) so as to call attention and bestow notice: dispositio is a modality of collective participation, an active placing upon of parts, one in relation to the other, resting between and among each other.\nIt's in this sense of dispositio that the algorithm dispositif is a sentimental medium. As the sentimental philosophers of the eighteenth century showed, David Hume chief among them, sentiments are the forces that connect us to one another, through technical media like language (i.e., promising) and contracts, and to political life as a whole. The sentimental, in other words, is a category of experience that is world making. As a sentimental medium, the algorithm dispositif arranges and disposes us to the world. In doing so, it organizes worlds by orienting relations of time and space, subject and object. This is what I mean when I say that algorithms exist in the human condition of separateness. Their dispositional powers operate in such a way as to coordinate and negotiate the in-between of separateness – just like a sentiment like sympathy is what organizes my separateness from other humans so that I may build something like social trust.13 \nTo be clear, I'm not saying the algorithms are emotional devices, though there is much evidence to suggest that algorithms are emotion-triggering devices. What I am proposing is that the social and political study of algorithms proceed in a manner akin to how we understand the dispositional powers of the sentiments – powers that dispose us to move and extend ourselves within the in-between space of separateness that conditions human existence. The algorithm dispositif is political, in other words, because it operates in the intervening spaces of separateness and does so by a power of mediation that is dispositional. And this, I wish to say further, is substantially different from claiming that algorithms are structures of domination and that their political function is one of subjection. Humans dominate one another. Of this there is no doubt. But the work of arrangement in and of political societies is not reducible to domination. \nThe preceding offers notes towards the possibility of asking the following question: What are the conditions in and through which we can think the politicality of the algorithm dispositif? The ambition for broaching this question rests on what I take to be a unique impasse for the history of critical thinking that the algorithm dispositif affords. Much of our critical tradition rests on two important – indeed, essential – gestures. The first, inherited from Plato, is that to think critically about the politicality of technical experience requires the capacity to turn away (through reflection, cognition, rationalization) from the coercive operations of power implied or presumed in technical objects. The second is akin to the first: our sense or acceptance of the workings of a technical object rests on a reflective experience we may have of it. We experience a film by viewing it, a musical score by listening to it, a food morsel by eating it, a novel by reading it. 
The impasse that the algorithm affords our critical tradition challenges both these premises: the fact of algorithmic ubiquity in everyday life makes turning away an unavailable critical response; moreover, we don't experience algorithms. We experience inputs and outputs, data and data's mediation.14 But we don't experience the technical medium of the algorithm, not in the way we appreciate our experiences of other, more established, media. At the interstice of these impasses in our critical traditions we may begin to reflect anew on the tissues of experience that the algorithm dispositif affords.\nBy Davide Panagia, Professor, UCLA Dept. of Political ScienceThe post The Algorithm Dispositif (Notes towards an Investigation) first appeared on AI Pulse.", "url": "https://aipulse.org", "title": "The Algorithm Dispositif (Notes towards an Investigation)", "source": "aipulse.org", "date_published": "n/a", "paged_url": "https://aipulse.org/feed?paged=3", "id": "21700daebd06a03ea93b1a6e5285e6ad"}