Stretton, Dean --- "The Birth Torts: Damages for Wrongful Birth and Wrongful Life" [2005] DeakinLawRw 16; (2005) 10(1) Deakin Law Review 319

# **THE BIRTH TORTS: DAMAGES FOR WRONGFUL BIRTH AND WRONGFUL LIFE**

**DEAN STRETTON**[*]

[*This article examines the capacity of parents of children (whether disabled or not) born as a result of medical negligence to sue for the costs associated with the birth and raising of the children (‘wrongful birth’), as well as the capacity of disabled children who owe their existence to medical negligence to sue for the costs associated with the disability (‘wrongful life’). Many legal systems have allowed the first type of claim, but very few have allowed the second type. The author argues that allowing both types of claim is consistent with ordinary principles of tort law, and that there are no policy reasons that override this conclusion. Consequently, a range of damages ought to be available in relation to both types of claim.*]

## **I INTRODUCTION**

In July 2003, the High Court of Australia held by a 4:3 majority that where an unplanned child is born through medical negligence, the parents may sue the negligent doctor to recover the costs of raising the child to maturity.[1] Acting Prime Minister of Australia John Anderson denounced the decision as “repugnant”, claiming it “devalue[d]...life” and treated “vulnerable children” as “mere commodities”.[2]

In April 2004, the NSW Court of Appeal held by a 2:1 majority that where a disabled child owes his or her existence to an act of medical negligence, the child cannot sue the negligent doctor to recover the costs associated with the disability.[3] An appeal to the High Court has been foreshadowed.[4]

These cases involve the ‘birth torts’: wrongful birth and wrongful life respectively. The subject of this paper is whether Australian courts, based on established principles of tort law, should award damages for these torts. Three issues arise for each tort. First, what are the options for recovery of damages? In other words, what heads of damages (if any) might be held to be recoverable for wrongful birth or wrongful life? Second, what heads of damages (if any) would be recoverable under ‘normal’ tort principles—that is, principles not invoking broad considerations of public policy? Third, are there persuasive policy grounds for choosing an option other than that reached through normal principles?

Part II discusses wrongful birth. It will be argued that ‘pregnancy costs’ and ‘upbringing costs’ (in the sense to be defined) are recoverable on normal principles, and that the policy arguments for other options are unpersuasive. Part III discusses wrongful life. It will be argued that damages for pain, suffering and ‘disability costs’ are recoverable on normal principles, and that the policy arguments for other options are, again, weak. Damages for the birth torts *should* thus be awarded.

## **II WRONGFUL BIRTH**

### **A** *Definitions and Approaches*

#### 1 *Wrongful birth defined*

*Wrongful birth* occurs where an act of medical negligence causes the birth of an unplanned child. The child may be ‘healthy’ (non-disabled) or disabled.
The negligence may involve a doctor’s failure to:[5]

(a) warn of the risk that a competently performed sterilisation may naturally reverse or otherwise fail, so that the plaintiffs, unaware of that risk, cease using contraception;
(b) diagnose or advise of pregnancy, where diagnosis or advice would have led to lawful[6] termination;
(c) diagnose a condition in either the parents or the foetus that will cause the child to be disabled, where diagnosis would have led to effective contraception or lawful termination;
(d) take reasonable care in performing an attempted sterilisation or abortion; or
(e) take reasonable care in giving advice on, or supplying, contraceptives.

Each of (a)-(e) creates a risk that an unplanned pregnancy will occur or continue. If the risk eventuates, and the pregnancy is carried to term—because pregnancy was discovered too late to terminate safely or legally, or because the plaintiffs feel morally or emotionally unable to terminate—then wrongful birth has occurred.

In a *wrongful birth action*, the parent or parents sue the doctor in negligence in respect of the damage resulting from the pregnancy and birth. The damage may include:

(i) pregnancy costs: the pain, suffering and economic loss associated with pregnancy, including labour pains, medical bills, maternity clothes, loss of income during pregnancy, and (less commonly) the cost of moving or extending the house in anticipation of accommodating an extra member; and
(ii) upbringing costs: the costs of raising the child from birth to maturity or independence, including amounts spent on food, clothes, education, presents and entertainment; plus loss of income through looking after the child, and (if this occurs after birth) the cost of moving or extending house to accommodate an extra member.[7]

The terms ‘wrongful birth’, ‘wrongful pregnancy’ and ‘wrongful conception’ have been variously defined, sometimes interchangeably.[8] Here, only ‘wrongful birth’ is used, and has the meaning given above. Strictly, what is ‘wrongful’ is the *negligence*, not the birth;[9] but the label is a convenient shorthand.

#### 2 *Options for recovery of damages*

The reasonable options for recovery of damages are generally considered to be:[10]

(1) full recovery without set-off: upbringing and pregnancy costs can be awarded whether the child is healthy or disabled; and damages are not reduced for any emotional benefits the child will bring to the parents (but may perhaps be reduced for any financial benefits the child will bring, such as statutory welfare benefits).
(2) full recovery with set-off: as for full recovery, but damages are reduced (offset) for any emotional benefits the child will bring.
(3) pregnancy and extra disability costs only: pregnancy costs can be awarded; upbringing costs can be awarded, but only where either the child, or perhaps a parent, is disabled, and limited to the extra costs attributable to the disability (‘extra’ compared to the cost for a non-disabled person).
(4) pregnancy costs only: pregnancy costs can be awarded; upbringing costs cannot be awarded, even if the child is disabled.
(5) no recovery: neither pregnancy nor upbringing costs can be awarded.
#### 3 *Existing authorities: UK, US and Canada*

In the UK, the first reported wrongful birth case allowed recovery of pregnancy costs.[11] Lower courts initially disallowed recovery of upbringing costs for policy reasons,[12] but later allowed recovery with no offset for healthy,[13] disabled,[14] and temporarily disabled[15] children—extending even to the costs of private education[16] and upbringing past age 18.[17] In *McFarlane*, the House of Lords cast aside this lower-court authority and held—largely through unsupported intuitions on what is ‘fair, just and reasonable’[18]—that upbringing costs for a healthy child are not recoverable. Lower courts have since confined *McFarlane* to allow the extra upbringing costs attributable to a child’s[19] or mother’s[20] disability. However, in *Rees*—another morass of unsupported intuitions—the House of Lords held by majority that the *mother’s* extra disability costs are *not* recoverable[21] (though a different majority held, obiter, that extra costs attributable to the *child’s* disability *are* recoverable[22]). In an admitted ‘gloss’[23]—an arbitrary and unprincipled exception to the policy in *McFarlane*[24]—*Rees* also held that wrongful birth parents may recover a nominal sum of £15,000 for violation of their autonomy.[25] The UK thus allows recovery of pregnancy costs and the extra costs attributable to the child’s disability.

US decisions are numerous and divergent because the birth torts are a state rather than federal matter. Some states permit full recovery with[26] or without[27] offset. A majority disallow recovery of upbringing costs where the child is healthy,[28] but some allow recovery of extra disability costs.[29]

Canadian courts have largely mirrored the UK, but with some variation among jurisdictions. Lower courts initially disallowed recovery of upbringing costs for policy reasons,[30] but subsequent cases awarded pregnancy and upbringing costs for healthy children (with an offset for emotional benefits[31]) and for disabled children[32] (though in some cases this was limited to the *extra* costs attributable to the child’s disability[33]). More recently, however, lower courts have held that upbringing costs for a healthy child can only be awarded where the parents had decided for *financial* reasons to have no further children;[34] or have held that such costs should not be awarded at all.[35] The law regarding healthy children is thus uncertain. For disabled children, the Supreme Court of Canada accepted (obiter, since the point was not on appeal) that plaintiffs can recover pregnancy costs and the extra upbringing costs attributable to the child’s disability—though the latter are discounted according to the probability of those costs being borne by the state.[36]

#### 4 *Existing authorities: Australia*

Before *Cattanach*,[37] wrongful birth was considered in NSW and Queensland. In NSW, damages were awarded for pregnancy costs but not upbringing costs, since the plaintiff’s choice to keep the child rather than adopt it out allegedly severed causation between the negligence and upbringing costs.[38] In Queensland, full recovery was permitted through an application of normal principles and rejection of opposing policy arguments.[39]

In *Cattanach*, a sterilisation was performed on only one fallopian tube, since the mother falsely believed her other tube had been removed when she was a child.
The sterilising doctor negligently failed to warn that the mother should have this belief checked (if it were false, she could still conceive).[40] As a result the plaintiffs ceased using contraception, thinking they could not conceive, and—because the second tube was in fact present—a healthy son was later conceived and born. The parents sued the sterilising doctor and the State of Queensland (the latter as responsible for the hospital where the sterilisation occurred). Pregnancy and upbringing costs were awarded at first instance[41] and upheld on appeal.[42] On further appeal, the High Court confirmed that upbringing costs are recoverable.[43] Pregnancy costs were not on appeal, but the absurdity of allowing upbringing costs *without* pregnancy costs makes full recovery the de facto position.

The *Cattanach* majority held that upbringing costs are recoverable on normal principles;[44] that there should be no set-off between financial costs and emotional benefits, since these affect different interests;[45] and that the policy arguments against full recovery are unpersuasive.

McHugh and Gummow JJ appeared to treat the plaintiffs’ loss as pure economic loss, stating ‘the relevant damage suffered by the [plaintiffs] is the expenditure that they have incurred or will incur in the future’[46] (though they suggested the outcome did not depend on this classification[47]). They noted the danger of relying on policy arguments[48] that are empirically unfounded or that slide impermissibly from broad statements of widely held values to the conclusion that upbringing costs are unrecoverable.[49]

Kirby J characterised the plaintiffs’ economic loss not as pure but as *consequential* upon the physical damage of unwanted pregnancy.[50] He noted that policy arguments against full recovery, including those of the *Cattanach* minority,[51] rely on controversial values or unsupported assumptions, or are legally irrelevant.

Callinan J held the plaintiffs’ economic loss was pure rather than consequential,[52] but saw the case as ‘a relatively simple one’[53] where the requirements for recovery of pure economic loss are met and the opposing policy arguments are weak:[54] a judge’s ‘distaste’ cannot override legal principle.[55] The majority also warned that shielding doctors and hospitals from the consequences of their negligence would confer on them a new form of legal immunity.[56]

The minority judges assumed, virtually without supporting argument,[57] that damages must be offset for any emotional benefits the child will bring. They held[58] that financially estimating those benefits, and thus allegedly treating them as a commodity, creates problems of ‘legal coherence’[59] because it runs contrary to the law’s assumptions about ‘desirable paradigm[s] of family relationships’[60] and ‘key values in family life’.[61] On other matters the minority differed.
Gleeson CJ saw the plaintiffs’ loss as pure economic loss,[6] and held that if recovery of upbringing costs were allowed then liability could potentially extend past age 18 to weddings, tertiary education, and so on.[62] He concluded that liability for upbringing costs would be indeterminate (incapable of calculation or non-arbitrary limitation), and would therefore be denied on normal principles, since normal principles do not permit recovery of indeterminate amounts for pure economic loss.[63]

Hayne J, on the other hand, held the economic loss was consequential[64] and that normal principles permit recovery.[65] He noted that most policy arguments against recovery are weak, but he ultimately found the ‘paradigms’ argument persuasive.[66]

Heydon J did not discuss whether the plaintiffs’ economic loss was pure or consequential, or whether normal principles permit recovery, but instead launched a litany of policy arguments against recovery, based largely on the assumption that parents will happily denounce their child or formulate elaborate fictions in order to secure maximum compensation. The overall argument seemed to be that since (as Heydon J thought) most wrongful birth plaintiffs will act dishonestly and in ways that undermine family values, wrongful birth actions should be disallowed. This was perhaps the weakest *Cattanach* judgment, since it attempts to override ordinary principles and enforce controversial ‘values in family life’[67] by asserting empirically unsupported speculations about the motives and dispositions of potential litigants. It is thus a use of judicial power for ‘the furthering of some political, moral or social program’ (a program supporting those ‘values’), and so exhibits what some would label, and perhaps rightly decry, as ‘judicial activism’.[68]

It will now be argued that normal principles permit full recovery with no set-off (Section 2); and that there are no persuasive policy grounds for accepting a different option (Section 3). Full recovery is therefore the ‘correct’, or most defensible, position at law.

### **B** *Do Normal Tort Principles Support Recovery for Damages?*

#### 1 *What are ‘normal’ principles of negligence?*

‘Normal’[69] or ‘ordinary’[70] tort principles contrast in some way with policy considerations. However, *two* types of policy consideration are relevant in negligence. First, policy considerations may focus on the actions, events and connections between defendant and potential plaintiffs, and ask whether these make it reasonable to attribute duty of care, breach of duty, causation and remoteness.[71] Second, policy considerations may go *beyond* those actions, events and connections and consider, in light of further relationships or social factors, whether attributing liability is socially or morally desirable.[72] ‘Normal’ principles appear to be those excluding the second category of policy consideration.[73]

#### 2 *Normal principles applied to wrongful birth*

On normal principles, a defendant is liable for: (a) physical damage—damage to person or property—that is reasonably foreseeable, reasonably preventable and caused by his conduct;[74] and (b) reasonably foreseeable kinds of damage caused by that physical damage.[75] Assume for now that unwanted pregnancy is physical damage.
Concerning (a), unwanted pregnancy involving a healthy or disabled child is reasonably foreseeable, reasonably preventable (say, by warning that sterilisation may reverse), and—in relevant cases—caused by the doctor’s conduct (such as failure to warn). Concerning (b), pregnancy and upbringing costs are reasonably foreseeable kinds of damage flowing from unwanted pregnancy, since they are the very kinds of damage likely to result. Failure to abort or adopt out the child will not sever causation between the breach and upbringing costs, since keeping the child is a foreseeable and non-negligent consequence of the breach;[76] indeed, failure to abort or adopt out is precisely a *failure* to interrupt the causal chain.[77] Nor can failure to abort or adopt out be seen as an unreasonable failure to mitigate damage, since abortion and adoption are sensitive matters of individual conscience and courts are rightly loath to find such failure unreasonable.[78] Thus, *assuming* unwanted pregnancy is physical damage, the negligent doctor—whether the child is healthy or disabled—is liable on normal principles for pregnancy and upbringing costs.

#### 3 *Pure or consequential economic loss?*

If unwanted pregnancy is *not* physical damage, then pregnancy costs of a financial nature, and upbringing costs, are all *pure* economic loss (because consequent upon a condition—unwanted pregnancy—that is not physical damage) and recovery on normal principles will be harder to establish.[79]

It is submitted that ‘physical damage’ should be taken to include any substantial invasion of bodily interests. Bodily autonomy—the right to decide what happens in and to one’s body—is a legally protected interest.[80] Unwanted pregnancy substantially invades this bodily interest by introducing profound bodily changes to which one does not consent.[81] Unwanted pregnancy therefore *is* physical damage, *even though* pregnancy is in some sense a ‘natural’ phenomenon.[82] Thus, Kirby J in *Cattanach* held wrongful birth involves ‘direct [physical] injury to the parents, certainly to the mother who suffered profound and unwanted physical events (pregnancy and child-birth) involving her person’, so that ‘[a]ny [resulting] economic loss was not pure, but consequential’.[83]

It would follow that any part of a wrongful birth claim brought (only) by the *mother* is a claim for *consequential* loss; while any part brought (only) by the *father* is, it seems, a claim for *pure* economic loss, since the loss is caused by physical damage to a third party (the mother). What if part of a claim—generally the component for upbringing costs—is brought jointly by *both* parents? Judges in *McFarlane*[84] and *Cattanach*[85] held a joint claim is for *pure* economic loss, since the father is part of the claim and suffers no physical injury: “From *his* point of view, how could the claim be anything other than a claim for pure economic loss?”[8] Yet equally, from the *mother’s* point of view, how could the claim be anything other than a claim for *consequential* loss? It is not clear, and seems chauvinistic to hold, that the father’s view should automatically take priority. Further, as Kirby J noted, the requirement that plaintiffs suffer physical damage in order to recover in negligence stems from a concern to avoid indeterminate liability; and, so long as at least *one* plaintiff suffers physical damage, that concern is met.[86] Accordingly, a joint claim should be seen as one for *consequential* loss.
In short: since unwanted pregnancy is physical damage, the doctor, on normal principles, is liable to the mother, or to mother and father jointly, for pregnancy and upbringing costs.

#### 4 *Pure economic loss and wrongful birth*

The requirements for recovery of pure economic loss are described in *Perre*.[87] These requirements must be met if, contra the above, unwanted pregnancy is *not* physical damage, or in any case where the father claims alone. Separate judgments in *Perre* leave no single *ratio*, but the main factors identified as creating a duty of care were:[88] known reliance;[89] vulnerability;[90] control;[91] knowledge of the risk and its magnitude;[9] an ascertainable class of plaintiffs;[92] non-interference with existing law;[93] and non-interference with legitimate commercial interests.[94] The point of identifying these factors is to avoid the imposition of liability “in an indeterminate amount for an indeterminate time to an indeterminate class”.[95]

Typically in wrongful birth, the doctor **knows** the parents will **rely** on his or her advice or expertise. The parents are **vulnerable** (unable to protect their own interests) because they lack specialist medical knowledge and must rely entirely on the doctor to provide such knowledge (and thereby to protect their interests); it is unrealistic to expect them to protect their interests by negotiating a contract under which the doctor or hospital will pay for upbringing costs.[96] Similarly, the doctor is in **control**, since his or her conduct will effectively determine whether the parents’ interests (for example, in avoiding childbirth) will be protected or infringed. The doctor typically **knows of the risk** of pregnancy and its **magnitude** (that it will cause substantial costs, especially if the child turns out disabled). The parents are an **ascertainable class** known to the doctor: they are at particular risk of damage from the doctor’s conduct, since only *they* will have to bear, whether jointly or severally, the costs of any resulting child (or rather, they *and the child* will bear those costs: the costs of caring for the child are gratuitous care costs and thus are treated as also incurred by the child himself[97]). Imposing a duty of care does **not interfere** with existing law (since no well-developed laws yet apply to the birth torts) or with legitimate commercial interests (since imposing the duty merely holds the doctor to the standard of care already expected in law and society, and there is no legitimate interest in breaching that standard).

Although these factors are met—and thus although liability for wrongful birth would appear to be determinate—Gleeson CJ in *Cattanach* held that liability for upbringing costs would be *in*determinate.
His concerns were these:

(1) ‘Parents might go through their lives making financial and other arrangements...to accommodate the needs or reasonable requirements of their children’; it is not clear when such arrangements would count as economic loss.[98]
(2) It is not clear when liability would end: children are often dependent on their parents past age 18, so liability could in principle extend to weddings, tertiary education, and so on (albeit that these were not part of the claim in *Cattanach*).[99]
(3) If upbringing costs are recoverable, damages for ‘adverse effects on career prospects’ must be too—and these ‘might far exceed the costs of raising and maintaining a child’.[100]
(4) Upbringing costs would have to be discounted since the child may provide financial assistance later in life.[101]

Ground (1), however, would equally have denied recovery in *Perre*. There, the defendant caused the plaintiffs to be prohibited from exporting potatoes for five years.[102] During this time the plaintiffs might equally have made ‘financial and other arrangements’ to accommodate the prohibition. When would this count as economic loss? A difficult question, perhaps—but no *Perre* judge considered this a reason to deny recovery. Moreover, the question has a clear answer in wrongful birth: upbringing costs, it can be held, cover actual or likely *expenditure on goods or services* procured for the child’s benefit; they do not cover arrangements designed to *generate the funds* used to procure those goods and services.

Concerning (2), liability would end when the child, on the facts, ‘might [reasonably] be expected to be economically self-reliant’.[103] Weddings and tertiary education could be included[104] if these were prior to reasonable self-reliance and were reasonable rather than extravagant expenses.[1] Concerning (3), a claim for loss of earnings (as part of a larger claim for upbringing costs) would be treated straightforwardly as any other claim for loss of earnings. Concerning (4), evidence of likely financial assistance could indeed produce a discount.[105] Nothing in (1)-(4) shows liability for upbringing costs would be to or for an indeterminate amount, time or class.

A further worry about indeterminate liability is this:[106] through the new child’s birth, siblings may receive less pocket money and presents, while relatives babysitting the child may lose income from more profitable activities. Could they sue for pure economic loss? It seems not.[107] Other parties might suffer just as much loss: toy stores, video stores, clothes stores and restaurants, since the parents have less money to spend on luxuries; the mother’s employer, who must find a replacement during maternity leave; the parents’ friends, who must buy more meals because the parents do not treat them to dinner so often; and so on. Such losses are reasonably foreseeable, but the members of the class are in practice impossible to determine. So once the zone of liability is extended beyond the parents (or rather, parents and child—a non-arbitrary class at particular risk from the doctor’s conduct), it becomes indeterminate and must be disallowed.[108]

Thus, the parents and child—but *only* the parents and child—satisfy the *Perre* requirements. Hence even if financial pregnancy and upbringing costs are *pure* economic loss, doctors in typical wrongful birth cases have a duty to prevent that loss.
So, on normal principles, doctors in such cases will be liable to the plaintiffs—mother, father, or both jointly—for financial pregnancy and upbringing costs.[109] A claim for pure economic loss cannot, of course, include damages for pain and suffering.

#### 5 *Offsetting benefits and harms*

On normal principles, should the net value of emotional benefits brought to the parents by the child be estimated in financial terms and offset against the damages (if any) for pregnancy and upbringing costs? The *Cattanach* majority held not. McHugh, Gummow and Kirby JJ[110] noted Dixon J’s statement in *Zoanetti*: ‘when one of two separate interests is benefited in consequence of a wrongful act, the benefit cannot be set off against an injury to the other.’[111] McHugh and Gummow JJ continued:

> The coal miner, forced to retire because of injury, does not get less damages for loss of earning capacity [or pain and suffering] because he is now free to sit in the sun each day reading his favourite newspaper. Likewise, the award of damages to the parents for their future financial expenditure [or pain and suffering] is not to be reduced by the enjoyment that they will or may obtain from the birth of the child.[112]

The exception would be damages for loss of enjoyment of life: these could, if claimed, be set off against emotional benefits, since the same interest is involved.[113] Kirby[114] and Callinan[115] JJ reasoned similarly. Gleeson CJ, however, rejected the miner example as circular:

> [The miner’s] loss of earning capacity, a recognised head of damages, is not mitigated by his enforced leisure. [In wrongful birth], however, *the question is whether* human reproduction and the creation of a parent-child relationship is actionable damage.[116]

In other words: to apply the *Zoanetti* rule, one must *already assume* that the parent-child relationship—or, more accurately, the financial costs flowing from it[117]—are a recognised head of damages; yet to make that assumption is circular, since the very question at stake is *whether* those costs should be recognised as a head of damages.

This charge of circularity is misplaced. The miner example shows that, on normal principles, damages for pain, suffering and economic loss are not reduced for any emotional benefits brought by the negligence. To apply this to wrongful birth, one *assumes*—what is taken to be shown on other grounds—that damages for pregnancy and upbringing costs are recoverable on normal principles; one then concludes that these damages are not to be reduced for any emotional benefits brought by the child. Hence, what the example assumes is not that upbringing costs are *ultimately* recoverable—that really *would* be circular—but merely that they are recoverable *on normal principles*. This is not circular. So the miner example does show there should be no offset for emotional benefits.

Of course, the birth of the child may also bring the parents *financial* benefits, such as statutory welfare benefits.[118] On normal principles, financial benefits caused by the negligence are offset against financial costs. There is, however, no offset for gifts or insurance payouts,[119] and in the case of statutory benefits an offset will depend on the intention of the legislation conferring the benefit.[120] Thus, a reduction in pregnancy and upbringing costs for statutory benefits may be appropriate; but examining the relevant legislation is beyond the scope of this article.
On normal principles, then, the doctor in wrongful birth will be liable for pregnancy costs and upbringing costs (though with no damages for pain and suffering if the claim is for pure economic loss); and there should be no offset between pregnancy and upbringing costs on the one hand, and emotional benefits on the other (though there should perhaps be an offset for statutory benefits). In short, ordinary principles permit—indeed require—full recovery with no set-off.

### **C** *Are There Sound Policy Arguments against Recovery?*

Full recovery with no set-off is often rejected for policy reasons. The main policy arguments will be considered by subject-matter: (1) the value of the child and family relationships; (2) justice and proportionality; (3) miscellaneous. It will be asked, of each argument, what damages[121] would result if the argument were accepted; but it will be shown that each should be rejected.

#### 1 *The value of the child and family relationships*

##### (a) *The ‘blessing’ argument*

A child’s existence has been held a ‘blessing’, a benefit to the parents, who thus could not have suffered any loss or damage (at least not any *overall* loss or damage), and so have no cause of action in negligence. That is, although a child brings harms as well as benefits, ‘the benefits must be regarded as outweighing any loss.’[122] This argument, originating in the US,[123] has been rejected[124] more than accepted.[125] Four problems with it emerge.

First, the argument contradicts normal principles by assuming that emotional benefits can be offset against—and so ‘outweigh’—financial and other costs. As argued, normal principles permit no such offset.

Second, the argument entails there can be *no* recovery for wrongful birth: as there is no damage overall—as any loss is ‘outweighed’ by the emotional benefits of raising a child—therefore *no* damages would be recoverable.[126] This seems unjust: prima facie, it would be more just to award *some* damages to the plaintiffs rather than no damages at all.

Third, measures to avoid childbirth—abstinence, abortion, the ‘rhythm’ method, contraception, and sterilisation—are used at some time by many heterosexual people *precisely because* they believe (correctly) that there are circumstances where having an extra child, even a healthy child, would not benefit the parents overall.[127] If every child *were* a blessing, the goal of life during one’s fertile years would be ‘unlimited child bearing,’[128] for each procreation would leave one better off. As this is manifestly absurd—there is more to life than procreation—*not* every child is a benefit. Also, that particular parents decide to keep the child does not mean they *regard* it as a blessing:[129] they may rather feel that, while *no* child would have been best, keeping the child is *better*, morally or emotionally, than abortion or adoption.

Fourth, difficulties arise in severe disability. Parents whose relationships and life-plans are dashed because they must constantly care for a severely disabled child have plainly not benefited overall from the child’s existence. There would, as with every child, be *particular* benefits, *some* joys; but, overall, the parents are worse off: it would be better *for the parents* if the child had never existed. So, not every child is a blessing. Proponents of the ‘blessing’ argument might then reply that only ‘normal, healthy’[130] children are necessarily beneficial to their parents.
But this is equally inconsistent with widespread anti-procreative measures, and is discriminatory.[131] It is discriminatory because many disabled children are *more* beneficial to their parents than many healthy children; and so to deny *all* disabled children the privileged status of ‘necessarily beneficial’ is unjust discrimination—denial of a privilege on the basis of a general characteristic (disability) while ignoring relevant differences between cases.

##### (b) *Estimation, commodification, denigration*

It is said that to determine the plaintiff’s overall loss one must estimate in financial terms the net value of emotional benefits the child will provide to the parents, and then offset this amount against the financial losses the parents suffer.[132] But, it is said, one cannot financially estimate the value of a human relationship.[133] Even if one could, financially estimating the child’s value to the parents is contrary to key legal and moral values because it treats the child as a commodity.[134] Further, parents, in an attempt to reduce the offset, would be induced to denigrate their child: they would urge, and courts might accept, the ‘unedifying proposition’[135] that the emotional benefits are very small, that the child is more trouble financially than it is worth emotionally.[1] Since, therefore, the emotional benefits cannot, or at least *should* not, be estimated, the plaintiffs cannot show whether or to what extent they have suffered overall loss, and so no recovery—at least of upbringing expenses—should be permitted.[136]

Like the ‘blessing’ argument, this argument fails by assuming, against normal principles, that an offset should be made for emotional benefits. The other steps are therefore irrelevant—but also unpersuasive. Financially estimating emotional benefits is hardly impossible given the law’s routine estimation of ‘nebulous items such as pain and suffering and loss of reputation.’[137] Such estimation does not treat the *child* as a commodity, but merely treats the *benefits* as roughly financially estimable in order to determine just compensation. If this unacceptably ‘commodifies’ the parent-child relationship, then financial estimation of gratuitous care services must likewise ‘commodify’ the gratuitous carer-cared relationship. Yet damages for gratuitous care are *allowed*.[138]

The proposition that a child costs more financially than it is worth emotionally is unedifying but irrelevant. The appropriate set-off, if there is to be one, is not between financial costs and emotional benefits, but between *financial and non-financial* costs and emotional benefits. Non-financial costs include the pain and suffering of pregnancy, plus substantial *emotional* costs: loss of reproductive autonomy;[139] loss or postponement of life-plans and career goals; and the inconvenience—such as tantrums—of bringing up a child. In most cases where a child is not planned, these emotional costs could reasonably be held *of themselves* to outweigh emotional benefits.[140] A court worried about denigration could then deliberately *overestimate* the child’s emotional benefits by *deeming*, whether the child is healthy or disabled, that emotional benefits *equal* emotional costs[141] (this may be called the ‘overestimation’ solution). Since emotional benefits and emotional costs would thus cancel each other out, other categories of loss—pregnancy costs (including pain and suffering) and upbringing costs—would still be fully recoverable with no further set-off.
This solution *over*estimates the child’s emotional benefits but still allows the parents to recover the costs resulting from the negligence; hence it is more just than a solution that, by *refusing* to estimate emotional benefits, leaves the victims of negligence uncompensated. The ‘overestimation’ solution also eliminates ‘commodification’ worries, since emotional benefits are compared with *emotional* costs, not money. Further, any incentive to denigrate the child is removed, since the offset is conclusively *deemed* and denigration will not reduce it.

Unless this ‘overestimation’ solution is adopted, ‘commodification’ arguments entail there must be *no* recovery for wrongful birth.[142] The reason is that, if emotional benefits are to be set off against financial costs occurring *after* birth (as ‘commodification’ arguments assume), those benefits must also, as a matter of consistency, be set off against financial costs, pain and suffering occurring *before* birth: there is no principled reason why there would be an offset against one but not the other. Since the emotional benefits allegedly cannot or should not be estimated, the plaintiffs would be barred from showing whether or to what extent they have suffered overall loss *at all* (whether before or after birth). Hence they would not be entitled to *recover* at all: not even for pain, suffering, or extra disability costs,[143] since these items (barring an unprincipled and arbitrary exception[144]) would *equally* be subject to set-off against inestimable emotional benefits. Thus, if a set-off is allowed and ‘commodification’ worries are accepted, the choice is between the ‘overestimation’ solution and no recovery. ‘Overestimation’ is more just.

In sum, ‘commodification’ arguments are irrelevant because normal principles permit no set-off. If set-off *were* permitted, the ‘overestimation’ solution should be adopted, which would leave pregnancy and upbringing costs recoverable.

##### (c) *Harm and distress to the child*

Another common argument[145] claims that recovery of damages, or at any rate upbringing costs, should be disallowed because the child may suffer distress on discovering, through the court’s official pronouncement, that they were unplanned; that the parents sued for pregnancy and/or upbringing costs; that the parents believed they would have been better off without the child; that the child was raised with funds supplied by a doctor; or (in some cases, based on the difference between the damages awarded and the sort of upbringing the child knows he actually had) that the parents failed to spend the damages for the child’s benefit.

This argument fails on several counts. First, it is not clear how ‘the possibility of detriment to a person *not* party to the action’—the child—could ‘prevent recovery of damages’.[146] That an action distresses the defendant’s (or even plaintiff’s) spouse, for example, is not a reason to deny compensation. Second, judicial pronouncements of *likely* harm lack empirical evidence;[147] while weaker claims of a mere *risk* of harm[148] do not justify outright denial of compensation, since the *certain* harm to parents in *denying* compensation may well exceed the merely *possible* harm to children in *granting* it.
Third, distress to the child would generally be outweighed by the substantial *benefit*—security of upbringing—provided by damages.[149] Fourth, ‘there are many harsher truths’[150] children may discover than that they were unplanned; the discovery is common and outweighed by subsequent love.[151] Knowledge that the parents sued will not cause distress if it is explained that this was merely to avoid certain financial costs and was done for the child’s benefit:[152] ‘Because we love you, we wanted to ensure we could afford a good upbringing.’ Of course, one cannot completely ‘negate the risk of an irrational reaction’ from the child;[153] but then, *denying* compensation might equally produce irrational reactions from children who become angry that their parents brought them into the world without knowing they could obtain the means to support them.

Fifth, the risk of harm (if there is one) will vary between cases, so respect for privacy and autonomy would require that *parents*, not courts, be left to decide if suing will harm the child. To hold that the parents’ ‘conflict between duty and interest’—duty not to harm the child, self-interest in compensation—should be ‘removed’ because the parents cannot be trusted to resolve it fairly[154] is paternalistic and inaccurate.

Sixth, if no offset is allowed for emotional benefits,[155] or if an offset is allowed and the ‘overestimation’ solution is adopted, then there is no suggestion the parents are *emotionally* worse off through the child’s existence; merely that they are *financially* worse off (which is obvious and inoffensive). In any case, it is precisely if the court *fails* to award damages that the parents may be worse off because of the child. An award of damages would *itself* be a benefit flowing from the child’s existence, and would ensure the parents are *not* worse off.

Seventh, knowledge that upbringing costs came from a doctor would be no more distressing than knowledge that they came from lottery winnings, or a kindly stranger. The funds *come* from a doctor, but thereupon *be*come the parents’, and the child will happily enjoy the benefits that flow from them. If the parents invest the damages imperfectly, the child will enjoy less than the full benefit;[156] but invariably the child will still benefit substantially[157]—and, as long as this is so, the child is unlikely to be overly distressed about how much more money should have been spent.

Finally, the ‘distress’ argument, like ‘blessing’ and ‘commodification’, entails there must be *no* recovery for wrongful birth. So long as *some* damages are recoverable, the child may read the court’s judgment and discover he was unplanned and that the parents sued for associated costs—a discovery that can be prevented only by barring recovery completely.[158] This, however, seems plainly unjust. The assertion (made in an attempt to avoid this injustice) that recovery of pregnancy costs might nevertheless be allowed as ‘a not unreasonable compromise’[159] simply ignores the inconsistency between that compromise and the ‘distress’ argument.

#### 2 *Justice and proportionality*

##### (a) *Intuitions about justice and assumption of responsibility*

To some judges, recovery of upbringing costs is intuitively inappropriate.
The fact that the doctor would have to pay for food and entertainment is said to ‘prompt questions as to the nature of the entire claim’.[160] Or it is said most people would ‘[i]nstinctively’ think an award of upbringing costs immoral.[161] Such intuitions, however, are too controversial to justify a departure from normal principles:[162] ‘Intuitive feelings for justice seem a poor substitute for a rule antecedently known, more particularly where all do not have the same intuitions.’[163]

Equally unconvincing is the claim that ‘[t]he doctor does not assume responsibility’ for upbringing costs.[164] If this means the doctor does not *intentionally* assume responsibility, then it is irrelevant, since intention is not an element of negligence; while if it is simply an assertion, based on intuition, that the doctor has no *legal* responsibility for upbringing costs, then this assumes what is at stake.[165] In any case, intuition-based arguments can have even less force in Australia, where policy considerations have less prominence.[166] Any option on damages could be supported by appeal to intuition: one simply asserts the chosen option is ‘intuitively correct’. But intuition-based arguments against recovery of upbringing costs are unpersuasive because they merely *state* a conclusion without *justifying* it—without identifying, in other words, the features and principles that *make* an award of upbringing costs inappropriate. This unprincipled approach has led, in the UK, to confusion and arbitrariness.[167]

##### (b) *Proportionality and moderation of damages*

In *Cattanach*, Heydon J warned that ‘if the law permits recovery [of upbringing costs] at all, damages will be sought [and awarded] in immoderate amounts which may become...unreasonable.’[168] ‘Rich parents’ might seek to recover ‘the cost of expensive clothes, toys, pastimes, presents and parties of the type which the planned siblings of the unplanned child had enjoyed or were going to enjoy.’[169] Claims for house extensions, larger family cars, boarding school, upbringing past age 18, and tertiary education—perhaps at Princeton—might likewise produce very substantial damages.[170] Heydon J did not see this as ‘in itself necessarily an argument against recovery’[171] (why he mentioned it is thus unclear), but others have argued that ‘the expense of child rearing would be wholly disproportionate to the doctor’s culpability’.[1] That is, ‘the extent of the liability’, if upbringing costs were awarded, would be ‘disproportionate to...the extent of the negligence.’[172] This argument, if accepted, would justify awarding pregnancy costs without upbringing costs (since pregnancy costs are presumably not ‘disproportionate’ to culpability). It would preclude recovery of extra disability costs, since these would often *exceed* the already ‘disproportionate’ upbringing costs of a normal child.[173]

Despite certain isolated statements,[174] there is no common law principle that damages must be proportionate to culpability. Particularly where vulnerable people are injured, ‘the damages recoverable may [greatly exceed] the tortfeasor’s initial culpability’.[175] In *Rogers v Whitaker*,[176] for example, the defendant’s negligence involved failure to disclose a 1 in 14,000 risk of sympathetic ophthalmia—a risk many specialists of his kind would also not have disclosed. This barely culpable failure brought damages, in 1992, of $808,564.38. Any principle of proportionality would have excluded that amount.
More generally, acceptance in Australia of the ‘egg shell skull’ rule shows that damages are limited by remoteness, not magnitude;[1] hence there is no general principle of proportionality.[177] The spectre of extravagant claims is thus a mere ‘useful polemical device’,[178] ‘irrelevant to legal principle’.[179]

In any case, extravagant claims face difficulties. Plaintiffs are compensated for *reasonable*, not ideal, requirements.[180] Thus in *Sharman* the plaintiff, a quadriplegic, was awarded damages based on future life in hospital rather than (greater) damages for future life at home. The claim for damages based on life at home was unreasonable because life at home would merely increase her happiness, not her health.[181] Had life at home been expected to increase her health, a cost-benefit analysis would follow: if cost is high and benefits speculative, or if less expenditure would produce almost the same benefit, then that part of the claim is unreasonable.[1]

Adapting this to wrongful birth, it appears upbringing costs would be limited to preservation of the child’s health: that which merely increases happiness is unreasonable. However, *Cattanach* went beyond this, allowing $200 for an overseas holiday.[1] Perhaps the reasonable view is that upbringing costs are limited to what is *reasonably necessary* for the child’s *reasonable*, rather than ideal, welfare. There is a lack of authority here; but a three-step test could apply.

First, consider the level of welfare the plaintiffs plan to give the child (as reflected in the claim for damages), and ask whether this level of welfare, by the standards of modern Australian society, is reasonable rather than ideal or extravagant. To the extent it is ideal or extravagant, the claim is unreasonable. Exclusive private schooling, a Princeton education, a Ferrari on one’s 18th birthday, or frequent overseas holidays, would all be excluded on this basis.

Second, ask whether it is reasonably necessary for the plaintiffs to purchase the claimed items in order to achieve the planned level of welfare. The claim is unreasonable to the extent that:

• the plaintiffs could achieve the same level of welfare for the child by buying fewer or less expensive items;
• the claimed items involve substantial cost with little increase in welfare; or
• the child, by the standards of modern Australian society, could reasonably pay for an item himself.

So for example if, on the evidence, state schooling would be virtually as beneficial as private schooling, the cost of the latter would be disallowed. Also, the child in most cases can reasonably pay for tertiary education (via HECS) and at least most of the cost of a wedding (through earnings), so these would not be recoverable.

Third, one would ask whether the claimed upbringing items correspond to the plaintiffs’ pre-negligence socio-economic level. To the extent that the claim for damages contemplates a wealthier upbringing than a child born to such plaintiffs would ordinarily expect, the claim is unreasonable, since it would, if accepted, cause the plaintiffs to *increase* their socio-economic position and so *profit* from the negligence.[1] However, no claim is unreasonable if it reflects the minimum necessary to meet the plaintiffs’ legal obligations to the child.
In short, it is incorrect to say a restriction to ‘moderate’ and ‘reasonable’ damages is ‘wholly unsound in law.’[182] Claims for damages cannot be extravagant or ideal, and so *are* confined to what is ‘reasonable’—though ‘reasonable’ may still be substantial.

#### 3 *Miscellaneous policy arguments*

##### (a) *Exaggeration of habits and weaknesses*

Following his discovery that ‘[p]ersonal injury litigation...is not fought in an altruistic way’,[1] Heydon J feared that if recovery of upbringing costs were permitted, plaintiffs would exaggerate ‘family habits’ (amounts spent on upbringing) and ‘children’s weaknesses’ (items requiring additional expenditure) in order to secure greater damages.[183] Exaggeration could not be countered, as in other cases, by ‘objective assessments of medical science,’[184] and may—to reprise the ‘distress’ argument—damage the child’s ‘self-esteem’,[185] if by reading the court’s judgment he learns of his weaknesses or fails to live up to exaggerated expectations.[186]

The principle behind this argument appears to be that a given class of legal action—for example, wrongful birth actions—should be disallowed unless the risk of plaintiffs lying in order to secure greater compensation can be countered by ‘objective assessments of medical science’. Any such principle, however, is refuted by ‘failure to warn’ cases. The plaintiff must prove in such cases that he would have acted differently had proper warning been given.[187] How the plaintiff *would* have acted depends on his beliefs, desires, temperament: matters generally incapable of objective medical assessment, so that any lies by the plaintiff about how he *would* have acted cannot be countered by ‘objective assessments of medical science’. Yet, while courts are wary of the danger of self-serving testimony,[188] recovery in ‘failure to warn’ cases is *allowed*. As a matter of legal coherence, the same approach—allowing recovery while wary of exaggeration—should apply to wrongful birth. In addition, there *are* checks on exaggeration in wrongful birth, as courts would rarely accept a child has weaknesses requiring substantial additional expenditure unless a qualified practitioner gave objective evidence to that effect.

As for self-esteem, the child’s peers will already have pointed out any weaknesses,[189] and so the court’s judgment tells him nothing new. Similarly, parental expectations are usually expressed, so the child will already know if he has failed to live up to them; whereas if the expectations mentioned in the judgment have *not* since been mentioned, the child will realise they no longer are, or never were, held. Either way, the judgment will hardly distress the child.

##### (b) *Coherence with wrongful life claims*

Most common law jurisdictions disallow ‘wrongful life’ claims, where a disabled child who owes his very existence to medical negligence sues the negligent doctor for the costs of the disability.[190] Two grounds are often cited: the sanctity of life; and the notion that the child suffers no damage through the negligence, because without the negligence he would not even exist. It has been argued:

> [I]t might seem somewhat inconsistent to allow a [wrongful birth] claim by the parents while [a wrongful life claim by] the child, whether healthy or disabled, is rejected.
> Surely the parents’ [wrongful birth] claim is equally repugnant to ideas of the sanctity and value of human life and rests, like that of the child, on a comparison between a situation where a human being exists and one where it does not.[191]

It is incorrect, however, to ‘invoke the broad values which few would deny and then glide to the conclusion’ that they preclude the plaintiff’s claim.[1] To say the child’s life results in compensable damage hardly commits one to saying the child’s life is not valuable or sacred.[192] Moreover, the problem in wrongful life has been to show the plaintiff has suffered damage, given that without the negligence he would not even exist. This problem does not arise in wrongful birth, because without the negligence the plaintiff *would* still exist. Finally, one can *accept* the alleged inconsistency and reverse the logic: since wrongful birth claims *should* be allowed, wrongful life claims should too.[193] This, it will be argued, is the correct conclusion.

#### 4 *Conclusion*

There are no rationally persuasive policy grounds for accepting any option other than that dictated by normal principles: full recovery with no set-off. *Cattanach* was therefore correctly decided, making denial of wrongful birth claims ‘the business, if of anyone, of Parliament not the courts’.[194] The almost hysterical reaction to *Cattanach* in some quarters is thus based not on sound legal principle but, it would seem, on two factors: general aversion to litigiousness (the suspicion, however misguided, that novel negligence actions are always driven by profit, and that wrongful birth plaintiffs must accordingly view their child as a mere cash fund); and religious dogma (the conviction that motherhood is the God-given or ‘natural’ state of women, so that children, ‘a gift from above’,[1] must be treated at all times as a ‘blessing’ and never as a basis for complaint or compensation).[1] Neither factor provides a reason to depart from established legal principle.

## **III WRONGFUL LIFE**

### **A** *Definitions and Approaches*

#### 1 *Wrongful life defined*

*Wrongful life* occurs where an unplanned disabled child owes his very existence to medical negligence: had the negligence not occurred, the child would never have been born. The negligence may occur as for wrongful birth: negligent diagnosis or advice concerning sterilisation, pregnancy, disability or contraception; or negligent performance of sterilisation or abortion.[1] Commonly, a doctor negligently fails to diagnose rubella, where diagnosis would have led to lawful[1] termination: because the diagnosis is not made, a child is born with severe disabilities caused by the rubella. In other cases the disability is genetic. The common feature is that, had proper diagnosis, advice, sterilisation or abortion been given, the parents—who did not want a child, or at least not a disabled child—would have prevented or terminated the pregnancy, so the disabled child would never have been born. (Wrongful life thus contrasts with more straightforward cases where, but for the negligence, the child would have been born without disability.)

In a *wrongful life action*, the disabled child sues the negligent doctor in respect of the damage caused by the disability; this would generally include pain, suffering, and ‘disability costs’—the extra financial costs attributable to the disability, such as the cost of nursing care (these costs are ‘extra’ compared to the costs a non-disabled person would incur).
The label ‘wrongful life’ is an entrenched and convenient shorthand, though it also misleads: the notion that a person’s *life* could be *wrongful* is counterintuitive and renders the plaintiff’s claim suspect from the outset. What is wrongful is the *negligence*, not the child’s life;[195] and it is precisely by focusing on the plaintiff’s *life* (as a whole), rather than on negligent causation of physical damage, that courts have been led to misapply ordinary principles and thus deny recovery.

#### 2 *Options for recovery of damages*

The reasonable options appear to be: damages for pain, suffering, and disability costs; disability costs only; or no recovery. It will be argued, however, that in extraordinary cases the child could also recover upbringing costs. Economic losses—disability and upbringing costs—might or might not be offset against economic benefits the child will receive through such sources as employment and statutory welfare benefits.

#### 3 *Existing authorities: UK, US and Canada*

In the UK, wrongful life claims were statute-barred shortly after *McKay*,[196] leaving this as the leading authority. *McKay* held that, while the doctor may owe a duty of care to the foetus (or rather, to the born person the foetus will become), no damage in wrongful life cases can be established. To establish damage, one must show the plaintiff is worse off, on account of the negligence, than he would have been without it. Yet, in wrongful life, the plaintiff would not even *exist* without the negligence. Hence, for ‘damage’ to be suffered, the plaintiff would have to be worse off existing than not existing. But comparing existence with non-existence (in order to say existence might be worse) is, it was held, impossible;[1] so no damage can be established. This ‘non-existence’ argument has been highly influential. *McKay* also raised worries about children suing their parents for wrongful life,[197] and the difficulty of specifying ‘how gravely deformed’ the child must be before a wrongful life claim would be recognised.[1] The child’s claim was thus summarily dismissed.

In the US, three states allow recovery of disability costs only,[198] while others bar wrongful life actions for essentially the reasons in *McKay*.[199] In Canada, wrongful life has not been considered at federal level, but lower courts, including one appellate court, have followed *McKay*.[200]

#### 4 *Existing authorities: Australia*

A NSW wrongful life claim[201] was summarily dismissed on the authority of *McKay*. In a Queensland case where wrongful life was pleaded but not pursued, the judge indicated *McKay* would have been followed.[2] In 2002, three matters were heard together in the NSW Supreme Court.[202] Studdert J, also following *McKay*, held that while the doctor has a duty of care to the foetus (or rather, to the born person the foetus will become), this is merely a duty not to damage or injure; and, since no damage in wrongful life cases can be established, the action must fail. Studdert J also raised policy concerns about contravening the sanctity of human life, harming the self-esteem of the disabled, and allowing children to sue their parents for wrongful life.[203] Two of the plaintiffs appealed Studdert J’s decision.
The NSW Court of Appeal dismissed the appeal.[2] Spigelman CJ held that since a wrongful life plaintiff must assert ‘that it would be preferable [*for the plaintiff*] if she or he had not been born’, and since this assertion raises ‘highly contestable’ ethical issues on which ‘[t]here is no widely accepted ethical principle’, the doctor should owe no duty of care to the child;[2] for imposing a duty of care ‘must reflect values generally, or at least widely, held in the community.’[2] Further, ‘in order to constitute damage which is legally cognisable...it must be established that non-existence is preferable to life with the disabilities *to the child*’;[204] but as the plaintiffs failed to argue that non-existence would be preferable for them, no damage had been shown.[205] Ipp JA likewise accepted the ‘non-existence’ argument in *McKay*, holding that since it is ‘impossible to use non-existence as a comparator’,[206] it is impossible to demonstrate either that damage has occurred or what the appropriate measure of damages would be.[207] Further, wrongful life actions offend the ‘weighty’ principle of the sanctity of life.[208]

Mason P in dissent held that since wrongful birth and wrongful life both involve ‘losses stemming from the creation of life’ by medical negligence, they are essentially similar causes of action and it would be ‘incoherent’ to disallow recovery for wrongful life given that recovery for wrongful birth is allowed.[2] Further, compensation would be allowed on ordinary principles: wrongful life involves physical damage[2] (in the form of disability) that is reasonably foreseeable (the plaintiffs ‘are persons whom the medical practitioners would have known as likely to come into being and as likely to suffer and have special needs of care if certain steps were not taken’[209]), reasonably preventable (by ‘giv[ing] advice and treatment to the mothers that would have prevented the suffering presently endured by the [plaintiffs]’[210]), and caused by the doctor’s conduct (since the doctors ‘omitted to give [such] advice and treatment’[211]). As in *Cattanach* (where there was no offset for the speculative emotional benefits brought by the child), there is no need for a speculative offset for the putative benefits of existence over non-existence.[212] Having thus rejected the ‘non-existence’ argument as well as other anti-recovery arguments,[213] Mason P concluded that damages should be awarded.

It will now be argued that normal principles do indeed permit recovery of damages (Section 2); and that the policy arguments against recovery are weak (Section 3).

### **B** *Do Normal Tort Law Principles Support Recovery of Damages?*

#### 1 *Duty and breach of duty: physical damage*

##### (a) *Duty to the foetus*

In wrongful life, the doctor’s conduct occurs before the plaintiff’s birth. At common law the foetus has no rights until birth,[2] but doctors have a duty not to damage the foetus, since such damage may cause damage to the born person the foetus will become.
A duty of care may thus be owed to a person not yet born.[214] Similarly, a duty of care may be owed to a person not yet conceived, since present actions may cause damage to the person once conceived and born.[2] For example, a baby food manufacturer who negligently allows toxins into the food will be liable for injuries caused to babies who ingest the food two years from now, even though some of those babies have not yet been conceived.[215] In either case the duty is to prevent physical damage to the person who may later be born. Conversely, if the doctor’s conduct *cannot* cause physical damage to the person who may later be born, there is simply no duty of care (unless on the basis of pure economic loss). *A fortiori* there would be no *breach* of duty of care, no causation of damage, and no liability (again, unless on the basis of pure economic loss). So liability in wrongful life depends crucially on whether the doctor’s conduct can cause physical damage to the plaintiff.

##### (b) *Physical damage contrasted with damage to the value of one’s life as a whole*

*McKay* and subsequent cases have effectively ignored the issue of physical damage and have instead focused on the plaintiff’s life as a whole. They have required the plaintiff to show ‘damage’ in the sense of his very existence, his life as a whole, being worse off through the negligence than it would otherwise have been (hence the need to show his existence is worse than non-existence, since non-existence is what ‘would otherwise have been’). Yet this requirement is misguided, since *in no other case* is the plaintiff required to prove ‘damage’ in this sense. The *Cattanach* plaintiffs, for example, were certainly not required to show their lives *as a whole* were worse off for the birth of their son (and the ‘blessing’ argument, the claim that their lives were *better* off, was rejected as irrelevant[216]). As a matter of legal coherence, the same definition of ‘damage’ must apply to wrongful life as applies in other cases. Damage in law is either physical damage or pure economic loss,[217] and in determining whether either of these has occurred it is *not necessary* to look to the plaintiff’s life as a whole. Contra *McKay*, then, the wrongful life plaintiff *need not show* his life as a whole is worse off as a result of the negligence, and *a fortiori* need not show his existence is worse than non-existence. He need only show, as with other plaintiffs,[2] that he suffers reasonably foreseeable, reasonably preventable physical damage caused by the defendant’s conduct.

##### (c) *Can the doctor in wrongful life cause physical damage to the child?*

In wrongful life, the plaintiff suffers disability. Disability is a recognised head of physical damage.[2] Thus, the plaintiff suffers physical damage. But since damage in law is always damage *to a born person*,[218] the damage is better characterised as *disability suffered by a born person*. If the doctor takes reasonable care (by giving proper advice, diagnosis or treatment), the plaintiff will never be born, and so damage (to a born person) will never occur. If the doctor *fails* to take reasonable care (by *failing* to give proper advice, diagnosis or treatment), physical damage—disability suffered by a born person—*will* occur.
Thus the doctor can allow or prevent physical damage, and so can be held as a matter of common sense and the ‘but for’ test to have *caused* that damage (in that his conduct is *a* cause of it).[2] Of course, the doctor does not cause the viral or genetic condition that produces disability;[219] and so his conduct is not the *sole* or *direct biological* cause of disability. But the crucial point[220] is that the doctor’s conduct is still *a* cause of *physical damage* (in this case, disability suffered by a born person). So the doctor in wrongful life *can* cause physical damage to the child.

##### (d) *Physical damage and non-existence*

The ‘non-existence’ argument in *McKay* claims, contrary to the preceding argument, that the doctor’s conduct *cannot* cause damage, because *without* the conduct the plaintiff would not even exist.[221] Although *McKay* incorrectly focused on the plaintiff’s life as a whole, the ‘non-existence’ argument might equally be thought to show the wrongful life plaintiff suffers no *physical* damage.[222] The issue, then, is whether conduct can cause physical damage to the plaintiff in circumstances where the plaintiff, without that conduct, would not even exist.

This certainly seems possible. For example, many wrongful life plaintiffs suffer brain damage. Brain damage is a form of physical damage, since the brain is a physical thing. So many wrongful life plaintiffs suffer physical damage. As argued, the doctor can allow or prevent that damage, and so on ordinary principles causes it.[2] So the doctor’s conduct *can* cause physical damage *even though* the plaintiff, without that conduct, would not exist.

More generally, as a matter of common sense (which can be the only guide, since neither authorities nor dictionaries define ‘physical damage’ in detail), malformed body or brain parts are physical damage *even where* the alternative is non-existence. For example, suppose rescuer X saves baby Y’s life by dragging Y from the path of a speeding train. As a result, part of Y’s body or brain—say, a foot that was wedged in the track and had to be forcibly freed—becomes malformed (and thus physically damaged). Here, X’s conduct plainly causes physical damage to Y—the malformed body or brain part—*even though*, without X’s conduct, Y would not exist (and, if that were the only way to save Y, *could* not exist). Likewise, in wrongful life, the doctor’s conduct causes physical damage to the plaintiff—the malformations of brain or body, occurring in a born person, that comprise the disability—*even though*, without the conduct, the plaintiff would not exist (and, if the disability is genetic, *could* not exist). In both cases (the X and Y case and wrongful life), physical damage is caused *even though* the alternative is non-existence.

Of course, the rescuer X could escape liability for the physical damage caused to Y. This, however, is not because X did not cause physical damage to Y (as argued, X *did* cause such damage), but rather because, in negligence, one may permissibly risk or cause *lesser* damage (such as a malformed foot) in order to prevent *greater* damage (such as a person’s death).[223] In contrast, since non-existence (in the sense of never having been born) is not legally recognised damage, one cannot say that the doctor in wrongful life may permissibly risk or cause lesser damage (disability) in order to prevent greater damage (non-existence).
Put another way, the clear social utility of saving a person’s life justifies the rescuer’s causation of the malformed foot; whereas giving negligent diagnosis, advice or treatment and thereby risking unwanted pregnancy has no clear social utility and hence does not justify the doctor’s causation of disability. So the rescuer X, on normal principles, could escape liability in respect of the physical damage he causes;[2] but the doctor in wrongful life could not necessarily do the same.

It might be objected that the rescue analogy is imperfect for a further reason: had the rescue not occurred, Y would have *ceased* to exist; whereas in wrongful life, had the doctor’s conduct not occurred, the plaintiff *never would* have existed. This, however, is simply not true of rubella cases, where the plaintiff is already conceived when the negligence occurs and hence *would* still have existed (as an embryo or foetus, though not as a legal person[224]) had the conduct not occurred. Thus at the very least the *rubella* plaintiff’s disabilities would still count as physical damage. Could one then claim that if a failed sterilisation plaintiff suffers the *very same* sorts of disabilities—but suffers them genetically—then these disabilities do *not* count as physical damage, since without the conduct the plaintiff really would never have existed? This distinction is untenable: as a matter of common sense, if the disability is the same in each case, it is physical damage in each case (and so for example, if brain damage is physical damage when suffered by a rubella plaintiff, then it is still physical damage when suffered by a failed sterilisation plaintiff). Accordingly, one cannot say it is only rubella plaintiffs who suffer physical damage: other types of wrongful life plaintiff can also suffer physical damage.

Thus, in wrongful life the doctor’s conduct *can* cause physical damage to the child. That damage—disability suffered by a born person—will occur whenever a disabled child (trivially or severely disabled) results from the negligence.

##### (e) *Existence worse than non-existence*

Given the preceding argument, a wrongful life plaintiff seeking to demonstrate *damage* (in the sense of legally recognised damage) need only show he suffers malformations of brain or body: again, contra *McKay*, the plaintiff need *not* show his existence is worse than non-existence. Nevertheless, it appears the child would also suffer physical damage if his disabilities are *so* severe that they *do* restrict him to a life worse than non-existence: a life with so much pain, suffering and indignity, and so little pleasure or meaningful activity, that it genuinely would be better *for the child* if he did not exist. The damage in such cases is simply the state of (physically) existing.

The problem of comparing severely disabled existence with non-existence (so as to say existence might be worse) can be solved by placing the value of non-existence at zero. The value of non-existence *must* be zero, because non-existence is nothingness, and so has no value—*zero* value. Fixing non-existence at zero value, one can then ask whether the bad things in the child’s life outweigh the good; and, if they do, non-existence would be better.
It may be objected here that non-existence simply cannot be compared to anything else, and that any attempt to do so (for example, by giving it zero value) is misguided.[225] Yet this objection misfires, since comparisons with non-existence are both common and necessary in common sense and in law.[226] For example, one is glad—better off—to exist now than to have been killed five minutes ago. Yet, had one been killed, one would not now exist. So, in saying one is better off alive, one is comparing existence with non-existence and judging that one is better than the other. Similarly, disabled existence has been held a ‘gift’,[227] a great benefit, something better than the alternative (which could only be non-existence). Here, one is again comparing existence with non-existence and judging which is better or worse. Or again: in passive euthanasia, courts have held that, given low enough quality of life, continued life may be *worse*, or at least no better, than non-existence (death).[228] In all these cases, comparisons with non-existence—and judgments about which is better or worse—are possible. Logically, then, if one’s quality of life were as bad as or worse than in the passive euthanasia cases—as with exceptionally severe disability—then life could be *even worse* than non-existence.

The objection that it is outside judicial competence to assess whether the plaintiff’s existence is worse than non-existence[2] ignores the fact that virtually the same assessment is made in passive euthanasia. The objection that disabled existence can be compared with *ceasing* to exist but not with *never existing* (so that existence might be worse than one but not the other) will again produce absurd distinctions between rubella and failed sterilisation plaintiffs.

So, in exceptionally severe cases, the doctor’s conduct *does* cause the plaintiff physical damage, that damage being the state of (physically) existing.

##### (f) *Reasonable steps to avoid physical damage*

Since the doctor’s conduct may cause reasonably foreseeable physical damage to the plaintiff—the damage being disability, or else existence worse than non-existence—the doctor owes the child a duty of care and must take reasonable steps to prevent that damage. The only way to prevent the damage is to prevent an unwanted disabled child from being born; hence that is the content of the duty.[229] *Reasonable* steps to prevent the birth of an unwanted disabled child would be the exercise of reasonable care and skill in providing diagnosis, advice, sterilisation or abortion;[230] for reasonable care in these matters will reduce the chance of an unwanted disabled child being born (by *increasing* the chance of effective contraception or abortion), and thus reduce the risk of physical damage (disability) occurring. It would not, of course, be reasonable to provide misleading advice or to lobby for an abortion. Hence the duty in wrongful *life*—providing adequate diagnosis, advice, sterilisation or abortion—has the same content as in wrongful *birth*;[231] though in wrongful life the duty is owed *to* the future child *via* the parents.[232]

#### 2 *Damage, causation, remoteness and damages*

Where a duty of care is owed and breached, and a disabled child results, physical damage—disability, or existence worse than non-existence—is caused.
That damage is of a reasonably foreseeable kind (being the very sort of thing that might result from failure to provide proper advice, diagnosis or treatment), as are the pain, suffering and disability costs that flow from it.[233] Accordingly, the defendant is liable on normal principles for those costs.[234] At this point, however, the ‘non-existence’ argument resurfaces. Damages must, as far as money can, place the plaintiff in the same position as if the negligence had not occurred.[235] Thus, to determine the appropriate measure of damages (in a case where the plaintiff claims for pain, suffering and economic loss), one determines in what respects the plaintiff’s actual state involves additional pain, suffering or economic loss as compared with the hypothetical state he would have been in had the negligence not occurred; and one then determines an amount of money to compensate for that additional pain, suffering or economic loss. Put another way, ‘placing the plaintiff in the same position as if the negligence had not occurred’ means the plaintiff’s actual level of pain, suffering and economic loss, when combined with the award of damages, should be *no worse* from the (reasonable) plaintiff’s point of view than the level of pain, suffering and economic loss in the hypothetical situation where the negligence does not occur. Inevitably, then, in wrongful life one must compare the plaintiff’s actual state with a hypothetical state of non-existence (since non-existence is the state he would be in had the negligence not occurred); and it may be objected that this comparison is impossible, so that no damages can be awarded.[236] There is no impossibility, however. A nonexistent person incurs no pain, suffering or economic loss; a wrongful life plaintiff does. So a wrongful life plaintiff’s actual state involves additional pain, suffering and economic loss (such as disability costs) as compared with the hypothetical state he would have been in had the negligence not occurred; hence one can then determine, in the usual way, an amount to compensate for that additional pain, suffering and economic loss. Put another way, one should ensure the plaintiff’s actual level of pain, suffering and economic loss, when combined with the award of damages, is *no worse* from the (reasonable) plaintiff’s point of view than the level of pain, suffering and economic loss that occurs in the hypothetical state of non-existence (namely, *no* pain, suffering or economic loss). Again, damages for pain, suffering and economic loss can thus be calculated in the normal way, and indeed will be much the same as if the hypothetical situation involved a healthy, living plaintiff who likewise suffers no negligently caused pain, suffering or economic loss. Importantly, however, the wrongful life plaintiff’s level of earnings and earning capacity, even if they are precisely zero, will never be *worse* than zero; hence, in terms of economic loss, the plaintiff could not recover for loss of earnings or loss of earning capacity—merely for economic losses taking the form of expenditure, such as disability costs. Could the wrongful life plaintiff also recover *upbringing* costs (which are gratuitous care costs and thus treated as a form of economic loss suffered by the plaintiff[237])? 
If the plaintiff’s claim is based on mere disability, rather than on existence being worse than non-existence, then the defendant is *not* liable for upbringing costs, since upbringing costs are not caused by the physical damage (disability) and so are not consequential loss. Put another way, upbringing costs result from the plaintiff’s very existence, hence are not consequential upon the physical damage complained of (disability), and so cannot be recovered. However, if the plaintiff’s claim is based on existence being worse than non-existence, the physical damage is not *disability* but *existence itself* (in a severely disabled state). Consequential damage—for which the defendant is liable—would then include all foreseeable damage flowing from that existence: pain, suffering, disability costs *and upbringing costs*. Damages in such a case should allow for sufficient care and treatment to ensure the child’s life will be no worse than non-existence (the state he would be in but for the negligence). In short: where the child is disabled (to any degree), the doctor is liable for pain, suffering, and disability costs. Where the child’s life is worse than non-existence, the doctor is liable for pain, suffering, disability and upbringing costs. #### 3 *Offsetting benefits and harms* On normal principles, financial benefits caused by the negligence are, except for gifts or insurance payouts, offset against financial costs.[238] Benefits gained through legislation *may* reduce damages, depending on statutory intention.[239] In wrongful life, the negligence causes the child’s existence, hence also causes any financial benefits existence may bring. Thus, disability and upbringing costs would be reduced according to any financial benefits the plaintiff will likely receive through employment and, depending on statutory intention, welfare benefits. For mild disabilities, expected benefits would often offset financial losses to zero. It may be objected here that since a living person, no matter how badly disabled, can receive welfare benefits, whereas a nonexistent person cannot, therefore a living person is necessarily economically better off than a nonexistent person—better off than zero—so that the wrongful life plaintiff would not be entitled to any damages for economic loss (such as disability or upbringing costs). In response, however, the fact that the negligence causes *some* economic benefits does not entail that those benefits completely offset the plaintiff’s economic losses.[240] If the plaintiff’s economic losses (the costs of the disability and, in some cases, upbringing) exceed what the plaintiff does or can gain through employment and welfare, then the plaintiff suffers an overall economic loss and hence *is* economically worse off than zero as a result of the negligence: the plaintiff will always remain, as it were, ‘in the red’ (whereas a nonexistent person would suffer no such overall loss). So the objection fails to show a living person is necessarily economically better off than zero, and indeed merely restates the point, already conceded, that because the earnings and earning capacity of a living person will never be worse than zero, a wrongful life plaintiff could not recover damages for loss of earnings or loss of earning capacity. 
Finally, the breach of duty in wrongful life—failure to provide adequate diagnosis, advice or sterilisation—may *benefit* the plaintiff overall, since the breach brings with it the emotional and other benefits of existence, and these may in some sense outweigh the costs of the disability. Similarly, the breach in wrongful *birth* may benefit the plaintiffs overall, since the breach brings with it the emotional benefits of a child, and these may in some sense outweigh the costs of pregnancy and upbringing. However, as *Cattanach* made clear, emotional benefits do not negate liability, and do not reduce damages for pain, suffering, or economic loss.[241]

#### 4 *Pure economic loss and wrongful life*

If, contra the above, wrongful life involves no physical damage, damages might still be recovered through the principles governing pure economic loss. If the doctor’s conduct causes the birth of a child (healthy or disabled), the child will suffer economic loss: upbringing costs and, if disabled, disability costs. The *Perre* factors are satisfied as for wrongful birth, except that there is no ‘known reliance’ (the unborn plaintiff cannot possibly rely on anything). Given that all the factors the plaintiff possibly *can* satisfy *are* satisfied, a duty is most likely owed to the child to prevent economic loss. *Reasonable* steps to prevent such loss would again involve reasonable care and skill in providing the parents with diagnosis, advice,[242] sterilisation or abortion; for this reduces the likelihood that the unwanted child will be born and suffer economic loss. If reasonable care is not taken, and as a result the child (healthy or disabled) is born, the doctor will be liable for upbringing costs, including, if the child is disabled, disability costs. Liability would then be offset as for consequential loss, so that in the case of a healthy or mildly disabled child financial losses would likely be offset to zero. Thus, while a claim for pure economic loss precludes damages for pain and suffering, it would potentially *allow* a disabled plaintiff to recover upbringing costs, which (as argued) could not be recovered in a claim for consequential loss.

### **C** *Are There Sound Policy Arguments Against Recovery?*

#### 1 *Contravening the sanctity of life*

Allowing wrongful life claims—in particular, imposing a duty to prevent the unwanted disabled child’s birth—has been said to contradict the sanctity of human life.[243] This seems, however, to be another invalid slide from general values to a particular legal conclusion.[244] If ‘sanctity of life’ means that preventing birth—through abortion, say—is *in general* immoral, then this is too controversial to be given legal force; moreover, *Cattanach* showed there *can* be a duty to prevent the birth of unwanted children. If ‘sanctity of life’ means human life must *always* be judged better than non-existence (just as some think a healthy child must *always* be judged a ‘blessing’), then this contradicts common sense and the passive euthanasia cases. If ‘sanctity of life’ means courts should nevertheless *pretend* human life is always better than non-existence, then it is not clear why courts should entertain a falsehood, particularly where this prevents compensation on normal principles; the court should rather *award* damages to ensure the child’s existence is *not* worse than non-existence.
At any rate, most wrongful life claims would be brought on the basis of mere disability, or, failing that, pure economic loss; and in these cases there is no suggestion that the child’s existence is worse than non-existence. Even if wrongful life actions did at some level infringe the sanctity of life, they would at a deeper level uphold it; for the law would in effect be saying that it values the lives of the disabled enough to ensure they can recover the cost of the care and treatment they deserve.[245]

#### 2 *Offending the disabled*

To accept that disabled existence might be worse than non-existence would, it is said, ‘be offensive’ to disabled people,[246] reducing their ‘self esteem’ and standing in society.[247] This objection does not apply to claims brought on the basis of mere disability or pure economic loss, since in these cases there is no suggestion that the child’s existence is worse than non-existence. Further, common sense and the passive euthanasia cases show severely disabled existence *can* be worse than non-existence; normal principles should not be overturned merely because some people find this reality offensive. Also, a plaintiff claiming on the basis that his existence is worse than non-existence evidently believes this is so, if he is capable of understanding the matter at all; hence a court allowing the claim merely repeats what the plaintiff already believes, or asserts what he cannot understand.

#### 3 *Appeals to the afterlife*

Those who believe the harms of this life will be outweighed by the joys of an afterlife would evidently deny that non-existence is ever worse than existence (at least in the very long run). Thus, it is said, for courts to allow claims based on existence being worse than non-existence would adjudicate the afterlife issue in favour of non-believers; and, as ‘a worldly court’ cannot do this—it must remain neutral—such claims must be disallowed.[248]

Again, this objection gives no reason to disallow claims brought on the basis of mere disability or pure economic loss, since there is no suggestion in such cases that the plaintiff’s existence is worse than non-existence. In any case, to use unreal speculations about the fate of the dead as an excuse to ignore the very real needs and suffering of the living is manifestly unjust. If such speculations were allowed, the following would always be a good defence: ‘Yes, Your Honour, I have negligently caused the plaintiff all manner of economic loss in *this* life. But he may receive extra riches in heaven to compensate! To allow recovery of damages would be to decide the “extra riches in heaven” issue in favour of non-believers; and this a worldly court cannot do.’[249] The sane option is, plainly, to ignore such speculations. Tort law deals with *this* life, not the next life; and so if *this* life, considered in itself, is worse than non-existence, then compensation should be payable subject to normal principles.

#### 4 *Actions against parents*

If wrongful life claims were accepted, the child, it is said, could sue not only the doctor but the mother, ‘in the event that the mother was perceived to act unreasonably’ in failing to abort; this could cause substantial ‘disturbance of family life’.[250] However, as noted in the discussion of wrongful birth, courts are loath to find that failure to have an abortion is unreasonable.
Moreover, the argument, if accepted, would merely prevent children suing mothers; it would not prevent children suing doctors.[251]

#### 5 *Trivial disability*

Griffiths LJ in *McKay* noted the difficulty of specifying ‘how gravely deformed’ the plaintiff must be before a wrongful life claim would be possible.[252] The arguments presented here avoid this difficulty: a wrongful life action is possible wherever an unwanted child is born through medical negligence and suffers a malformation of brain or body that results in pain, suffering or economic loss. A mere constitutional weakness—for example, having weaker arms or eyes than average—is not, of course, physical damage; but a slightly deformed ear would be, and if it causes pain, suffering or economic loss (such as medical costs), then it could ground an action for wrongful life. Wrongful life actions would not, therefore, be restricted to the severely disabled: even the trivially disabled could claim.

It may be objected that this is absurd. But that is hardly clear. Normal principles place no lower limit on the *degree* of physical damage required to ground an action in negligence—one could sue, for example, for the minor pain and suffering of a stubbed toe—and so it is to be expected that a case for wrongful life based on normal principles will likewise place no lower limit on the necessary degree of damage. There is no more injustice or absurdity here than in other categories of negligence. Further, trivial disabilities will produce trivial damages, since such disabilities will cause little pain and suffering and the plaintiff’s likely earnings will offset financial losses to zero. This too is neither unjust nor absurd.

There does remain the issue of ‘how gravely deformed’ a plaintiff must be before his existence would be judged worse than non-existence (so as to allow recovery of upbringing costs). This issue, while difficult, is no harder than in passive euthanasia, and should be approached by courts in the same cautious and compassionate way.[253] In practice the issue may be easier to decide, since the outcome concerns damages rather than life or death.

#### 6 *Conclusion*

Recovery of damages for wrongful life is counterintuitive, but a careful application of normal principles shows this is the correct position. Once the ‘non-existence’ argument is exposed as irrelevant (since it incorrectly focuses on the plaintiff’s life as a whole, rather than on negligent causation of physical damage), the way to recovery is clear. Typically in wrongful life, damages should be recoverable for pain, suffering, and disability costs, with disability costs offset according to income the plaintiff will receive through employment and, perhaps, statutory welfare benefits. In extraordinary cases where existence is worse than non-existence, the plaintiff could also recover upbringing costs. However, since wrongful life will not *normally* allow recovery of upbringing costs—whereas wrongful birth normally *will*—the disabled child and parents would need to bring a combined action for both birth torts in order to secure full compensation.

[*] BA(Hons), LLB(Hons) (ANU). Address for correspondence: [email protected]. The author (who is responsible for any errors or omissions in this article) wishes to thank Professor Jim Davis and Dr Joachim Dietrich of the ANU Faculty of Law for helpful comments on an earlier version of this article.

[1] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003).
[2] John Anderson, *Cattanach Decision: Statement by the Acting Prime Minister* (Press Release, 17 July 2003), <http://www.ministers.dotars.gov.au/ja/releases/2003/july/a80_2003.htm> (last visited Mar. 8, 2005).
[3] Harriton v. Stephens; Waller v. James; Waller v. Hoolahan, [2004] NSWCA 93 (NSW Court of Appeal, 2004) (Spigelman CJ and Ipp JA; Mason P dissenting).
[4] M Pelly, Sydney Morning Herald, *Tougher Limits on Suing Doctors*, 30 April 2004, <http://www.smh.com.au/articles/2004/04/29/1083224520781.html> (last visited Mar. 8, 2005).
[5] Laura Hoyano, *Misconceptions about Wrongful Conception*, 65 MODERN L. REV. 883, 886 n. 26 (2002); *see also*: JOHN SEYMOUR, CHILDBIRTH AND THE LAW ch 5 (2000).
[6] *See:* Medical Practitioners Act, 1930, ss55A-55E (ACT); Crimes Act, 1900, ss82-84 (NSW); Criminal Code Act, 1983, ss172-174 (NT); Criminal Code Act, 1899, ss224-226 (Qld); Criminal Law Consolidation Act, 1935, ss81-82A (SA); Criminal Code Act, 1924, ss134-135 (Tas); Crimes Act, 1958, ss65-66 (Vic); Health Act, 1911, s334 (WA). *See also*: R v. Wald, (1971) 3 DCR (NSW) 25 (District Court of NSW, 1971); R v. Davidson [1969] VicRp 85; (1969) VR 667 (Supreme Court of Victoria, 1969); CES v. Superclinics (Australia) Pty Ltd (1995) 38 NSWLR 47 (NSW Supreme Court, 1995). If the termination would not have been lawful, the defendant may rely on the defence of illegality; *see* Superclinics.
[7] In addition to pregnancy and upbringing costs, the father can apparently recover for ‘loss of consortium’; see Cattanach v. Melchior (2003) 199 ALR 131; [2003] HCA 38, [14]-[15] (High Court of Australia, 2003). This is an award of damages ‘for all practical, domestic disadvantages suffered by a husband in consequence of the impair[ment, during or after pregnancy, of the] health or bodily condition of his wife’; *see* Toohey v. Hollier [1955] HCA 3; (1955) 92 CLR 618 (High Court of Australia, 1955). The viability of claims for loss of consortium was not on appeal in Cattanach, but Gleeson CJ appeared doubtful, noting ‘such claims are now anomalous, and bear a proprietorial character inconsistent with current ideas as to the relationship between husband and wife’.
[8] *See:* Cattanach v. Melchior (2003) 199 ALR 131; [2003] HCA 38, [285] (Callinan J) (High Court of Australia, 2003); McFarlane v. Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961 (House of Lords, 1999); [2000] 2 AC 59, 76G (House of Lords, 2000) (Lord Steyn), 91G-92A (Lord Hope), 99B-C (Lord Clyde); Kealey v. Berezowski (1996) 136 DLR (4th) 708 (Ontario Court, General Division), 723f-724d (Lax J); L Hoyano, *Misconceptions about Wrongful Conception*, 65 MODERN L. REV. 883-906, 884 (2002); A Maclean, ‘*McFarlane v Tayside Health Board: A Wrongful Conception in the House of Lords?*’ *Web Journal of Current Legal Issues* [2000] 3 Web JCLI [s 1] <http://webjcli.ncl.ac.uk/2000/issue3/maclean3.html> (last visited 8 March 2005).
[9] Cattanach v. Melchior (2003) 199 ALR 131; [2003] HCA 38, (High Court of Australia, 2003) [57], [68] (McHugh and Gummow JJ), [193] (Hayne J); SEYMOUR, *supra* note 5, at 75.
[10] Cattanach v. Melchior (2003) 199 ALR 131; [2003] HCA 38, (High Court of Australia, 2003) [138] (Kirby J); *see also* Melchior v. Cattanach [2001] QCA 246, (Queensland Court of Appeal, 2001) [151] (Thomas JA).
[11] Scuriaga v. Powell (1979) 123 SJ 406 (QBD).
[12] Udale v. Bloomsbury Health Authority [1983] All ER 522; [1983] 1 WLR 1098 (QBD).
[13] Thake v.
Maurice [1986] 1 QB 644; [1986] 1 All ER 497 (CA); Allen v. Bloomsbury Health Authority [1993] 1 All ER 651; (1992) 13 BMLR 47 (Queen’s Bench Division); Fish v. Wilcox [1994] 5 Med LR 230 (CA).
[14] Emeh v. Kensington and Chelsea and Westminster Area Health Authority [1985] 1 QB 1012; [1984] 3 All ER 1044 (CA).
[15] Robinson v. Salford Health Authority [1992] 3 Med LR 270 (QBD).
[16] Benarr v. Kettering Health Authority [1988] 138 NLJ 179 (QBD).
[17] Nunnerley v. Warrington Health Authority [2000] Lloyd’s Rep Med 170 (QBD); Taylor v. Shropshire Health Authority [2000] Lloyd’s Rep Med 96 (QBD). But *see also* claims denied for causation reasons: Salih v. Enfield Health Authority [1991] 3 All ER 400 (CA); R v. Croydon Health Authority [1998] Lloyd’s Rep Med 44 (CA).
[18] McFarlane v. Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59 (UK House of Lords, 2000), 76C (Lord Slynn), 83D-E (Lord Steyn), 97C (Lord Hope). Applied in *Greenfield v Irwin* [2001] EWCA Civ 113; [2001] 1 WLR 1279 (CA).
[19] Rand v. East Dorset Health Authority [2000] Lloyd’s Rep Med 181 (QBD); Hardman v Amin [2000] Lloyd’s Rep Med 498 (QBD); Parkinson v St James and Seacroft University Hospital NHS Trust [2001] EWCA Civ 530; [2001] 3 All ER 97; Groom v Selby [2001] Lloyd’s Rep Med 39 (QBD).
[20] Rees v. Darlington Memorial Hospital NHS Trust [2002] EWCA Civ 88; [2002] All ER 177 (Court of Appeal, 2002).
[21] Rees v. Darlington Memorial Hospital NHS Trust [2003] UKHL 52, [9] (House of Lords, 2003) (Lord Bingham), [18] (Lord Nicholls), [114] (Lord Millett), [143] (Lord Scott); [39] (Lord Steyn), [68] (Lord Hope), [96]-[98] (Lord Hutton), dissenting.
[22] Rees v. Darlington Memorial Hospital NHS Trust [2003] UKHL 52, (House of Lords, 2003) [35] (Lord Steyn), [57] (Lord Hope), [91] (Lord Hutton), [112] (Lord Millett); [9] (Lord Bingham), [18] (Lord Nicholls), [145] (Lord Scott), dissenting.
[23] Rees v. Darlington Memorial Hospital NHS Trust [2003] UKHL 52, (House of Lords, 2003) [7] (Lord Bingham), [17] (Lord Nicholls).
[24] *See:* Rees v. Darlington Memorial Hospital NHS Trust [2003] UKHL 52, (House of Lords, 2003) [40]-[47] (Lord Steyn), [70]-[77] (Lord Hope); Cattanach v. Melchior (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [91] (McHugh and Gummow JJ), [165] (Kirby J).
[25] Rees v. Darlington Memorial Hospital NHS Trust [2003] UKHL 52, (House of Lords, 2003) [8] (Lord Bingham), [17] & [19] (Lord Nicholls), [123]-[125] (Lord Millett), [148] (Lord Scott).
[26] Arizona (University of Arizona Health Sciences Center v. Superior Court 667 P2d 1294, 1299 [1983]); Connecticut (Ochs v. Borelli 445 A 2d 883, 886 [1982]); Minnesota (Sherlock v. Stillwater Clinic 260 NW 2d 169, 175-76 [1977]).
[27] California (Custodio v. Bauer 251 Cal.App 2d 303, 59 Cal.Rptr 463 [1967]); New Mexico (Lovelace Medical Center v. Mendez 805 P 2d 603 [1991]); Oregon (Zehr v. Haugen 871 P 2d 1006 [1994]); Wisconsin (Marciniak v. Lundborg 450 N.W.2d 243 [1990]).
[28] Including: Alabama (Boone v. Mullendore 416 So 2d 718 [1982]); Alaska (M.A. v. United States 951 P 2d 851 [1998]); Arkansas (Wilbur v. Kerr 628 SW 2d 568 [1982]); District of Columbia (Flowers v. District of Columbia 478 A 2d 1073 [1984]); Florida (Fassoulas v. Ramey 450 So 2d 822 [1984]); Georgia (Atlanta Obstetrics & Gynecology Group v. Abelson 398 SE 2d 557 [1990]); Illinois (Cockrum v. Baumgartner 447 NE 2d 385 [1983], cert denied, 464 U.S. 846, 104 S.Ct.
149, 78 L.Ed 2d 139 (1983)); Iowa (Nanke v. Napier 346 NW 2d 520 [1984]); Kansas (Johnston v. Elkins 736 P 2d 935 [1987]); Kentucky (Schork v. Huber 648 SW 2d 861 [1983]); Louisiana (Pitre v. Opelousas General Hospital 530 So 2d 1151 [1988]); Maine (Macomber v. Dillman 505 A 2d 810 [1986]); Michigan (Rouse v. Wesley 494 NW 2d 7 [1992]); Missouri (Girdley v. Coats 825 S.W 2d 295 [1992]); Nebraska (Hitzemann v. Adam 518 NW 2d 102 [1994]); Nevada (Szekeres v. Robinson 715 P 2d 1076 [1986]); New Hampshire (Kingsbury v. Smith 442 A 2d 1003 [1982]); New Jersey (Gracia v. Meiselman 531 A 2d 1373 [1987] (obiter)); New York (O’Toole v. Greenberg 477 NE 2d 445 [1985]); North Carolina (Jackson v. Bumgardner 347 SE 2d 743 [1986]); Ohio (Johnson v. University Hospitals of Cleveland 540 NE 2d 1370 [1989]); Oklahoma (Wofford v. Davis 764 P 2d 161 [1988]); Pennsylvania (Butler v. Rolling Hill Hospital 582 A 2d 1384 [1990]); Rhode Island (Emerson v. Magendantz 689 A 2d 409 [1997]); Tennessee (Smith v. Gore 728 SW 2d 738 [1987]); Texas (Terrell v. Garcia 496 SW 2d 124 [1973]); Utah (C.S. v. Nielson 767 P 2d 504 [1988]); Virginia (Miller v. Johnson 343 SE 2d 301 [1986]); Washington (McKernan v. Aasheim 687 P 2d 850 [1984]); West Virginia (James G. v. Caserta 332 SE 2d 872 [1985]); Wyoming (Beardsley v. Wierdsma 650 P 2d 288 [1982]). Authorities collected in: Chaffee v Seslar (Unreported, Indiana Supreme Court, 15 April 2003), <http://www.marciaoddi.com/cgi-local/blogdocs/Chaffee.pdf> (last visited 8 March 2005).
[29] See: Bader v. Johnson 675 NE 2d 1119 (Indiana Court of Appeal, 1997).
[30] Colp v. Ringrose (1976) 3 L Med Q 72 (Alberta SCTD) (obiter); Doiron v. Orr (1978) 86 DLR (3d) 719 (Ontario HC); Cataford v. Moreau (1978) 114 DLR (3d) 585 (Québec SC) (obiter). *See also*: Keats v. Pearce (1984) 48 Nfld & PEIR 102 (Newfoundland SCTD) (upbringing costs disallowed because plaintiff could have mitigated losses through abortion); Fredette v. Wiebe (1986) 29 DLR (4th) 534, 4 BCLR (2d) 184 (SC) (upbringing costs disallowed because plaintiff would have had children and incurred those costs anyway).
[31] Suite v. Cooke [1993] RJQ 514, 15 CCLT (2d) 15 (SC), affirmed [1995] RJQ 2765 (CA).
[32] Joshi v Wooley (1995) 4 BCLR (3d) 208 (SC).
[33] Cherry v. Borsman (1990) 75 DLR (4th) 668 (SC), varied (1992) 94 DLR (4th) 487 (CA).
[34] Kealey v Berezowski (1996) 136 DLR (4th) 708 (Ontario Court, General Division); MS v. Baker [2001] ABQB 1032, [2002] 4 WWR 487 (obiter).
[35] Mummery v. Olsson [2001] OJ No 226 (Ontario Superior Court of Justice); *MY v Boutros* [2002] ABQB 362, 11 CCLT (3d) 271; Bevilacqua v. Altenkirk [2004] BCSC 945 (awarded damages for pain, suffering and inconvenience, but no pecuniary damages); Roe v. Dabbs [2004] BCSC 957 (awarded damages for pain, suffering, inconvenience, and loss of income during pregnancy, but no other pecuniary damages).
[36] Krangle v. Brisco 2002 CanLII 9 (SCC); [2002] 1 SCR 205 (Supreme Court of Canada). The Court disallowed upbringing costs past the age of majority (19), since those costs (given the facts and British Columbia legislation) would be borne by the state, not the parents. *Krangle* was followed in: Zhang v. Kan [2003] BCJ No 164; [2003] BCSC 5 (extra disability costs awarded to age 45 but discounted by 70% because the state would likely bear those costs); and Jones v. Rostvig [2003] BCJ No 1840; [2003] BCSC 1222 (extra disability costs awarded to age 25, when the court expected the son to move into a state-funded group home).
[37] Cattanach v.
Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003). Useful summary and analysis are given in: J Seymour, *Cattanach v Melchior: Legal Principles and Public Policy* 11(3) Torts LJ 208 (2003); and P Phillips, *Medical Negligence and Wrongful Birth: Cattanach v Melchior — A Discussion of the Medical, Legal and Policy Issues*, 15(3) INSURANCE L.J. 203 (2004).
[38] CES v. Superclinics (1995) 38 NSWLR 47 (NSW Supreme Court, 1995).
[39] Dahl v. Purnell (1992) 15 Qld Lawyer Reps 33 (healthy child); Veivers v. Connolly [1995] 2 Qd.R 326 (Townsville SC) (disabled child); Melchior v Cattanach [2000] QSC 285; (2001) Aust Torts Reports 81-597 (Queensland Supreme Court, 2001) (healthy child); Melchior v Cattanach [2001] QCA 246 (Queensland Court of Appeal, 2001); see also Murray v Whiting [2002] QSC 257 (Queensland Supreme Court, 2002).
[40] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003) [11]-[12] (Gleeson CJ).
[41] Melchior v. Cattanach, [2000] QSC 285; (2001) Aust Torts Reports 81-597 (Queensland Supreme Court, 2000).
[42] Melchior v. Cattanach [2001] QCA 246 (QLD Court of Appeal).
[43] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003).
[44] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [51] & [71]-[72] (McHugh and Gummow JJ), [176] & [179] (Kirby J), [299] (Callinan J). See also: Melchior v. Cattanach [2001] QCA 246, (QLD Court of Appeal, 2001) [83] (Davies JA); McFarlane v. Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961 (House of Lords, 1999); [2000] 2 AC 59, 74C-D (UK House of Lords, 2000) (Lord Slynn), 82E & 84C-E (Lord Steyn), 107B-C (Lord Millett); Parkinson v. St James and Seacroft University Hospital NHS Trust [2001] EWCA Civ 530; [2001] 3 All ER 97, 118b-c (Hale LJ); Rees v. Darlington Memorial Hospital NHS Trust [2002] EWCA Civ 88; [2002] All ER 177, 189b (Waller LJ); Emeh v Kensington and Chelsea and Westminster Area Health Authority [1985] 1 QB 1012, 1028E-F (Purchas LJ); [1984] 3 All ER 1044 (CA).
[45] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [87]-[90] (McHugh and Gummow JJ); [103]-[105] (Kirby J); [297]-[298] (Callinan J).
[46] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003) [67] (McHugh and Gummow JJ).
[47] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [66] & [72] (McHugh and Gummow JJ).
[48] *For example*: Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003) [35] (Gleeson CJ); Rees v. Darlington Memorial Hospital NHS Trust [2003] UKHL 52, (House of Lords, 2003) [16] (Lord Nicholls).
[49] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [77] (McHugh and Gummow JJ).
[50] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [148] (Kirby J).
[51] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [149], [151], [154], [176] (Kirby J).
[52] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [299] & [302] (Callinan J).
[53] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [302] (Callinan J).
[54] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [299] (Callinan J).
[55] Cattanach v.
Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [296] (Callinan J).
[56] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [59] (McHugh and Gummow JJ), [149] & [179] (Kirby J), [295] (Callinan J).
[57] *But cf* Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [37] (Gleeson CJ) (rejecting the ‘coal miner’ analogy often used to deny there should be any offset; see Part II, B.5, below).
[58] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [35] & [36] & [38] (Gleeson CJ), [249]-[262] (Hayne J), [306]-[403] (Heydon J).
[59] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [35] (Gleeson CJ).
[60] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [258] (Hayne J).
[61] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [322] (Heydon J). Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [9] & [19] (Gleeson CJ).
[62] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [32] (Gleeson CJ).
[63] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [39] (Gleeson CJ).
[64] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [193] (Hayne J).
[65] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, (High Court of Australia, 2003) [192] (Hayne J).
[66] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [195]-[222] (Hayne J).
[67] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [322] (Heydon J).
[68] Dyson Heydon, *Judicial Activism and the Death of the Rule of Law*, 47 QUADRANT 9-10 (Jan-Feb 2003).
[69] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [149] (Kirby J).
[70] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [51] (McHugh and Gummow JJ).
[71] Tame v. State of New South Wales; Annetts v. Australian Stations Pty Ltd (2002) 191 ALR 449; [2002] HCA 35, (High Court of Australia, 2002) [27] & [32] (Gleeson CJ), [53] (Gaudron J), [103] (McHugh J), [239] (Gummow and Kirby JJ), [304] (Hayne J), [358] (Callinan J); March v. E & MH Stramare Pty Ltd [1991] HCA 12; (1991) 171 CLR 506; (1991) 99 ALR 423, (High Court of Australia, 1991) 430-1 (Mason CJ; Gaudron J agreeing), 435-6 (Deane J; Gaudron J agreeing), 436 (Toohey J).
[72] Giannarelli & Shulkes v. Wraith (1988) 165 CLR 543; (1988) 81 ALR 417 (High Court of Australia, 1988); Gala v. Preston [1991] HCA 18; (1991) 172 CLR 243; (1991) 100 ALR 29 (High Court of Australia, 1991).
[73] *See also:* Melchior v Cattanach [2001] QCA 246, (Queensland Court of Appeal, 2001) [36] (McMurdo P).
[74] Wyong Shire Council v. Shirt [1980] HCA 12; (1980) 146 CLR 40 (High Court of Australia, 1980); March v. E & MH Stramare Pty Ltd [1991] HCA 12; (1991) 171 CLR 506; (1991) 99 ALR 423 (High Court of Australia, 1991).
[75] Overseas Tankship (UK) Ltd v. Miller Steamship Co Pty Ltd (The Wagon Mound (No2)) [1966] UKPC 1; [1967] 1 AC 617 (Privy Council, 1967); [1967] ALR 97; [1966] 2 All ER 709; Mahony v. J Kruschich (Demolitions) Pty Ltd [1985] HCA 37; (1985) 156 CLR 522; (1985) 59 ALR 722 (High Court of Australia, 1985).
[76] *See*: Mahony v.
J Kruschich (Demolitions) Pty Ltd [1985] HCA 37; (1985) 156 CLR 522; (1985) 59 ALR 722, (High Court of Australia, 1985) 725-6 (Gibbs CJ, Mason, Wilson, Brennan and Dawson JJ); March v. E & MH Stramare Pty Ltd [1991] HCA 12; (1991) 171 CLR 506; (1991) 99 ALR 423 (High Court of Australia, 1991), 426 & 431-2 (Mason CJ; Toohey and Gaudron JJ agreeing).
[77] Melchior v Cattanach [2000] QSC 285, (Queensland Supreme Court, 2000) [57] (Holmes J); (2001) Aust Torts Reports 81-597; Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [161] (Kirby J); McFarlane v. Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 74D-F (UK House of Lords, 2000) (Lord Slynn), 113F-G (Lord Millett); CES v. Superclinics (Australia) Pty Ltd (1995) 38 NSWLR 47, (NSW Supreme Court, 1995) 79B-D (Kirby ACJ); Emeh v. Kensington and Chelsea and Westminster Area Health Authority [1985] 1 QB 1012, 1019E-F (Waller LJ), 1024G-1025A (Slade LJ), 1027D-E (Purchas LJ); [1984] 3 All ER 1044 (CA). *Contra:* CES v Superclinics (Australia) Pty Ltd (1995) 38 NSWLR 47, (NSW Supreme Court, 1995) 84G-85A (Priestley JA).
[78] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [220]-[222] (Hayne J), [301] (Callinan J); Melchior v. Cattanach [2000] QSC 285, (Queensland Supreme Court, 2000) [57] (Holmes J); (2001) Aust Torts Reports 81-597; McFarlane v. Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59 (UK House of Lords, 2000), 81D-F (Lord Steyn), 113A-B (Lord Millett); Kealey v Berezowski (1996) 136 DLR (4th) 708, 740g-741b; CES v Superclinics (Australia) Pty Ltd (1995) 38 NSWLR 47, (NSW Supreme Court, 1995) 79B-D (Kirby ACJ); Emeh v. Kensington and Chelsea and Westminster Area Health Authority [1985] 1 QB 1012, 1019E-F (Waller LJ), 1024G-H (Slade LJ); [1984] 3 All ER 1044 (CA); SEYMOUR, *supra* note 5, at 80-81. *Contra*: CES v Superclinics (Australia) Pty Ltd (1995) 38 NSWLR 47, (NSW Supreme Court, 1995) 87D (Meagher JA).
[79] *See*: Perre v. Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36 (High Court of Australia, 1999).
[80] Schloendorff v. Society of New York Hospital (1914) 211 NY 125; (1914) 105 NE 92; Health & Community Services (NT), Department of v. J W B & S M B (Marion's case) [1992] HCA 15; (1992) 175 CLR 218; (1992) 106 ALR 385, (High Court of Australia, 1992) 392 (Mason CJ, Dawson, Toohey and Gaudron JJ), 452 (McHugh J).
[81] Parkinson v. St James and Seacroft University Hospital NHS Trust [2001] EWCA Civ 530; [2001] 3 All ER 97 (Supreme Court of Judicature, Civil Division, Court of Appeal, 2001), Hale LJ [56]-[75]; EILEEN MCDONAGH, BREAKING THE ABORTION DEADLOCK: FROM CHOICE TO CONSENT, 69-78 & 84-91 (1996).
[82] McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, (UK House of Lords, 2000) 74B-C (Lord Slynn), 81F-G (Lord Steyn), 86F-H (Lord Hope), 102G-H (Lord Clyde), 107F-G (Lord Millett); Melchior v Cattanach [2001] QCA 246, (Queensland Court of Appeal, 2001) [77] (Davies JA).
[83] Cattanach v. Melchior, (2003) 199 ALR 131; [2003] HCA 38 (High Court of Australia, 2003), [148] (Kirby J). See also: [193] (Hayne J); CES v. Superclinics (Australia) Pty Ltd (1995) 38 NSWLR 47, (NSW Supreme Court, 1995) 72E-F (Kirby ACJ); Walkin v South Manchester Health Authority [1995] 4 All ER 132; [1995] 1 WLR 1543, 1552F (Auld LJ), 1553B (Roch LJ), 1555G-H (Neill LJ) (CA).
[84] McFarlane v.
Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 79E-F (Lord Steyn), 89D & 96H-97A (Lord Hope); but cf 83H-84A (Lord Steyn) and 108H-109A (Lord Millett) (both noting the invasion of the mother's bodily interests). *See also*: Allen v Bloomsbury Health Authority [1993] 1 All ER 651, 658e (Brooke J) (obiter); (1992) 13 BMLR 47 (QBD). [85] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [9] & [19] (Gleeson CJ), [67] (McHugh and Gummow JJ), [302] (Callinan J), Heydon J not deciding. Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [19] (Gleeson CJ) (emphasis added). [86] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [149] (Kirby J). [87] Perre v Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36. [88] *See also*: Melchior v Cattanach [2000] QSC 285, [61]-[62] (Holmes J); (2001) Aust Torts Reports 81-597; Melchior v Cattanach [2001] QCA 246, [44]-[45] (McMurdo P), [98] (Davies JA). But cf the different approach favoured in: Perre v Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36, [259]-[273] (Kirby J); Caparo Industries Plc v Dickman [1990] UKHL 2; [1990] 2 AC 605, 617-618; Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [121]-[122] (Kirby J). [89] Perre v Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36, [10] (Gleeson CJ), [30] (Gaudron J), [124]-[126] (McHugh J). [90] Perre v Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36, [11] (Gleeson CJ), [105] (McHugh J). [91] Perre v Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36, [15] (Gleeson CJ), [42] (Gaudron J), [50] (McHugh J), [215]-[216] (Gummow J). Perre v Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36, [50] & [105] (McHugh J), [205] (Gummow J). [92] Perre v Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36, [10] (Gleeson CJ), [50] (McHugh J), [335]-[337] & [341]-[342] (Hayne J). [93] Perre v Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36, [197] (Gummow J). [94] Perre v Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36, [50] & [105] (McHugh J), [211] (Gummow J), [346] (Hayne J). [95] Bryan v Maloney (1995) 182 CLR 609, 618 (Mason CJ, Deane and Gaudron JJ), quoting Ultramares Corporation v Touche 174 NE 441 (1931), 444 (Cardozo CJ). Both quoted in Perre v Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36, [32] (Gleeson CJ), [106] (McHugh J), [243] (Kirby J), [329] (Hayne J), [393] (Callinan J). [96] *Cf* Perre v Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36, [120] & [123] (McHugh J) (plaintiff's inability to protect itself in contract may be a reason to impose a duty of care). [97] *See*: Cattanach v
Melchior (2003) 199 ALR 131; [2003] HCA 38, [48] (Gummow and McHugh JJ), [276] (Callinan J); Griffiths v Kerkemeyer [1977] HCA 45; (1977) 139 CLR 161; (1977) 15 ALR 387. [98] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [33] (Gleeson CJ). [99] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [20] & [32] (Gleeson CJ). [100] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [32] (Gleeson CJ). [101] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [34] (Gleeson CJ). Indeterminacy 'does not mean magnitude': [32] (Gleeson CJ). Cf [306]-[311] & [393] (Heydon J) (similar concerns, but apparently relating to magnitude). [102] Perre v Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36, [2] (Gleeson CJ), [59] (McHugh J). [103] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [138] (Kirby J). [104] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [282] (Callinan J). [105] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [173]-[175] (Kirby J). [106] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [9] (Gleeson CJ), [310] (Heydon J). [107] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [151] (Kirby J). [108] The spectre of siblings claiming for loss of enjoyment of life because they must now spend less time with parents—Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [310] (Heydon J)—is legally absurd, since there is no general duty to avoid emotional distress (short of psychiatric injury) to others: Tame v State of New South Wales; Annetts v Australian Stations Pty Ltd (2002) 191 ALR 449; [2002] HCA 35, [7] (Gleeson CJ), [44] (Gaudron J), [193] (Gummow and Kirby JJ), [243] (Hayne J); Frost v Chief Constable of South Yorkshire Police [1998] UKHL 45; [1999] 2 AC 455, 469 (Lord Goff of Chieveley). [109] On whether the doctor would be liable to the *child* for upbringing costs, *see*: below Part II, B.2. [110] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [87] (McHugh and Gummow JJ), [173] (Kirby J). [111] Public Trustee v Zoanetti [1945] HCA 26; (1945) 70 CLR 266, 278 (Dixon J). [112] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [90] (McHugh and Gummow JJ). *See also*: McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 103A-C (Lord Clyde); Melchior v Cattanach [2001] QCA 246, [56] (McMurdo P), [88] (Davies JA). [113] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [90] (McHugh and Gummow JJ). [114] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [173]-[175] (Kirby J), citing Public Trustee v
Zoanetti [1945] HCA 26; (1945) 70 CLR 266, 278 (Dixon J) and Sharman v Evans [1977] HCA 8; (1977) 138 CLR 563, 578; (1977) 13 ALR 57. [115] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [297]-[298] (Callinan J). [116] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [37] (Gleeson CJ) (emphasis added). [117] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [67]-[68] (McHugh and Gummow JJ), [148] (Kirby J). [118] *See, for example*, the Taxation Laws Amendment (Baby Bonus) Act 2002 (Cth). [119] National Insurance Co of New Zealand v Espagne [1961] HCA 15; (1961) 105 CLR 569, 573 (Dixon CJ); [1961] ALR 627. [120] National Insurance Co of New Zealand v Espagne [1961] HCA 15; (1961) 105 CLR 569; [1961] ALR 627; Manser v Spry [1994] HCA 50; (1994) 181 CLR 428; (1994) 124 ALR 539, 543-5 (Mason CJ, Brennan, Dawson, Toohey and McHugh JJ). [121] *See*: above, Part II, A.2. [122] McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 111F (Lord Millett). [123] Public Health Trust v Brown 388 So 2d 1084 (1980), 1085-6; Cockrum v Baumgartner 447 NE 2d 385 (1983). [124] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [79] (McHugh and Gummow JJ), [148] (Kirby J), [196] (Hayne J), [350] (Heydon J); Melchior v Cattanach [2001] QCA 246, [51] (McMurdo P), [81]-[82] (Davies JA); Melchior v Cattanach [2000] QSC 285, [51] (Holmes J); (2001) Aust Torts Reports 81-597; McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 75B (Lord Slynn), 100D-E (Lord Clyde); CES v Superclinics (Australia) Pty Ltd (1995) 38 NSWLR 47, 73G-74D (Kirby ACJ); Thake v Maurice [1986] 1 QB 644, 666G (Peter Pain J); [1986] 1 All ER 497 (CA). [125] McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 113H-114B (Lord Millett); CES v Superclinics (Australia) Pty Ltd (1995) 38 NSWLR 47, 87A-B (Meagher JA); Udale v Bloomsbury Health Authority [1983] All ER 522; [1983] 1 WLR 1098 (QBD), 1109F (Jupp J); *see also*: Kealey v Berezowski (1996) 136 DLR (4th) 708, 732a-b. [126] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [120] (McHugh and Gummow JJ); Thake v Maurice [1986] 1 QB 644, 667G-668B (Peter Pain J); [1986] 1 All ER 497 (CA); *contra*: McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 114E-115A (Lord Millett). [127] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [79] (McHugh and Gummow JJ), [164]-[165] (Kirby J); CES v Superclinics (Australia) Pty Ltd (1995) 38 NSWLR 47, 74A-B (Kirby ACJ); Thake v Maurice [1986] 1 QB 644, 666G (Peter Pain J); [1986] 1 All ER 497 (CA). [128] Melchior v Cattanach [2001] QCA 246, [82] (Davies JA). [129] McFarlane v
Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 111C (Lord Millett). [130] McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 111D & 114B (Lord Millett). [131] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [78] (McHugh and Gummow JJ), [164] & [166] (Kirby J); Melchior v Cattanach [2001] QCA 246, [29] (McMurdo P). [132] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [257]-[258] (Hayne J); cf [36]-[37] (Gleeson CJ) (also favouring this approach). [133] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [38] (Gleeson CJ), [247] (Hayne J), [356] (Heydon J); McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 97D (Lord Hope). [134] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [261] (Hayne J), [353] (Heydon J); *see also* [35] (Gleeson CJ) and Melchior v Cattanach [2001] QCA 246, [197] (Thomas JA) (both raising similar concerns). [135] McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 111F (Lord Millett); Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [259]-[260] (Hayne J), [367]-[370] (Heydon J); CES v Superclinics (Australia) Pty Ltd (1995) 38 NSWLR 47, 87C (Meagher JA); Udale v Bloomsbury Health Authority [1983] All ER 522; [1983] 1 WLR 1098 (QBD), 1109D-E (Jupp J). [136] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [262] (Hayne J); see also [404]-[405] (Heydon J). [137] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [144] (Kirby J); see also [200] (Hayne J). [138] Griffiths v Kerkemeyer [1977] HCA 45; (1977) 139 CLR 161; (1977) 15 ALR 387; Van Gervan v Fenton [1992] HCA 54; (1992) 175 CLR 327; (1992) 109 ALR 283; Kars v Kars [1996] HCA 37; (1996) 187 CLR 354; (1996) 141 ALR 37. [139] McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 114F (Lord Millett). [140] *See*: Maclean, *supra* note 8. [141] *See*: Thake v Maurice [1986] 1 QB 644, 667F (Peter Pain J), 682E-G & 683D (Kerr LJ; Neill and Nourse LJJ agreeing); [1986] 1 All ER 497 (CA); Kealey v Berezowski (1996) 136 DLR (4th) 708, 738g-739a (Lax J); Parkinson v St James and Seacroft University Hospital NHS Trust [2001] EWCA Civ 530; [2001] 3 All ER 97, 122g-j & 123c-d (Hale LJ). [142] *See*: McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 111F (Lord Millett); Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [355]-[356] (Heydon J). [143] McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 114D-E (Lord Millett); *contra*: Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [262]-[263] (Hayne J). [144] *See* Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [165] (Kirby J).
[145] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [372]-[402] (Heydon J); Melchior v Cattanach [2001] QCA 246, [169] (Thomas JA); CES v Superclinics (Australia) Pty Ltd (1995) 38 NSWLR 47, 86B-C (Meagher JA); Udale v Bloomsbury Health Authority [1983] All ER 522; [1983] 1 WLR 1098 (QBD), 1109D (Jupp J). [146] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [203] (Hayne J) (original emphasis). [147] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [79] (McHugh and Gummow JJ); see also [145] & [152] (Kirby J), [203] (Hayne J). [148] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [390] ff (Heydon J). [149] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [203] (Hayne J); Melchior v Cattanach [2001] QCA 246, [94] (Davies JA); Emeh v Kensington and Chelsea and Westminster Area Health Authority [1985] 1 QB 1012, 1021E (Waller LJ); [1984] 3 All ER 1044 (CA); Thake v Maurice [1986] 1 QB 644, 667C-D (Peter Pain J); [1986] 1 All ER 497 (CA). [150] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [301] (Callinan J). [151] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [145] (Kirby J); Melchior v Cattanach [2001] QCA 246, [59] (McMurdo P); McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 75E (Lord Slynn); Thake v Maurice [1986] 1 QB 644, 667C (Peter Pain J); [1986] 1 All ER 497 (CA). [152] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [145] (Kirby J). [153] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [399] (Heydon J). [154] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [400] (Heydon J). [155] Melchior v Cattanach [2001] QCA 246, [97] (Davies JA). [156] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [401] (Heydon J). [157] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [176] (Kirby J). [158] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [396] & [410]-[411] (Heydon J). [159] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [396] (Heydon J). [160] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [36] (Gleeson CJ); but cf [2] (Gleeson CJ) (rejecting such appeals to intuition). See also Melchior v Cattanach [2001] QCA 246, [196], [198] (Thomas JA). [161] McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 82D (Lord Steyn). [162] Parkinson v St James and Seacroft University Hospital NHS Trust [2001] EWCA Civ 530; [2001] 3 All ER 97, [82] (Hale LJ). [163] National Insurance Co of New Zealand v Espagne [1961] HCA 15; (1961) 105 CLR 569, 572 (Dixon CJ); [1961] ALR 627. [164] McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59,
76C (Lord Slynn). [165] Hoyano, *supra* note 5, at 883-906, 887. [166] Sullivan v Moody; Thompson v Connon (2001) 207 CLR 562; (2001) 183 ALR 404, 415 (Gleeson CJ, Gaudron, McHugh, Hayne and Callinan JJ). [167] Hoyano, *supra* note 5, at 905; Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [163]-[166] (Kirby J). *See also*: Rees v Darlington Memorial Hospital NHS Trust [2003] UKHL 52. [168] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [393] (Heydon J). See also: McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 91C-D (Lord Hope), 106A-B (Lord Clyde); Allen v Bloomsbury Health Authority [1993] 1 All ER 651, 662d-f; (1992) 13 BMLR 47 (QBD). [169] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [306] (Heydon J). [170] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [306]-[309] (Heydon J). [171] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [311] (Heydon J). [172] McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 91E (Lord Hope); see also Kealey v Berezowski (1996) 136 DLR (4th) 708 (Ontario Court, General Division), 741b-c. [173] Hoyano, *supra* note 5, at 891. [174] Caltex Oil (Australia) Pty Ltd v The Dredge 'Willemstad' [1976] HCA 65; (1976) 136 CLR 529, 551-552 (Gibbs J), 591 (Mason J); Perre v Apand Pty Ltd (1999) 198 CLR 180; (1999) 164 ALR 606; [1999] HCA 36, [427] (Callinan J). [175] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [177] (Kirby J). See also: McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 109E (Lord Millett); Parkinson v St James and Seacroft University Hospital NHS Trust [2001] EWCA Civ 530; [2001] 3 All ER 97, 121j-122a (Hale LJ); L Hoyano, *Misconceptions about Wrongful Conception*, 65(6) MLR 883-906, 887 (2002). [176] Rogers v Whitaker [1992] HCA 58; (1992) 175 CLR 479; (1992) 109 ALR 625. Applied in: Rosenberg v Percival [2001] HCA 18; (2001) 205 CLR 434; (2001) 178 ALR 577; Naxakis v Western General Hospital (1999) 197 CLR 269; (1999) 162 ALR 540; Chappel v Hart (1998) 195 CLR 232; (1998) 156 ALR 517. [177] *See also*: Tame v State of New South Wales; Annetts v Australian Stations Pty Ltd (2002) 191 ALR 449; [2002] HCA 35, [192]-[193] (Gummow and Kirby JJ). [178] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [210] (Hayne J). [179] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [154] (Kirby J). [180] Sharman v Evans [1977] HCA 8; (1977) 138 CLR 563; (1977) 13 ALR 57, 66 (Gibbs and Stephen JJ), citing Arthur Robinson (Grafton) Pty Ltd v Carter [1968] HCA 9; (1968) 122 CLR 649 at 661; [1968] ALR 257, 267 (Barwick CJ). [181] Sharman v
Evans [1977] HCA 8; (1977) 138 CLR 563; (1977) 13 ALR 57, 60 (Barwick CJ), 66 (Gibbs and Stephen JJ). [182] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [393] (Heydon J). [183] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [341] (Heydon J). Heydon J identified many 'temptations' which, as he thought, wrongful birth plaintiffs would be unable to resist (at [338], [334]-[336], [363], [369], [371], [401]). His Honour did not, however, go so far as to suggest (what at any rate seems no *less* convincing) that wrongful birth plaintiffs, following an award of damages, would be tempted to make their child mysteriously 'disappear' so that the plaintiffs could then enjoy their family habits and pastimes in peace. [184] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [341] (Heydon J). [185] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [346] & [371] (Heydon J). [186] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [401] (Heydon J). [187] Rogers v Whitaker [1992] HCA 58; (1992) 175 CLR 479; (1992) 109 ALR 625, 635 (Mason CJ, Brennan, Dawson, Toohey and McHugh JJ; Gaudron J agreeing); Chappel v Hart (1998) 195 CLR 232; (1998) 156 ALR 517, 520 (Gaudron J), 527 (McHugh J), 538 (Gummow J), 547-8 (Kirby J), 554-5 (Hayne J); Rosenberg v Percival [2001] HCA 18; (2001) 205 CLR 434; (2001) 178 ALR 577, 581 (Gleeson CJ), 582-3 (McHugh J), 597 (Gummow J), 618 (Kirby J), 629 (Callinan J). [188] Chappel v Hart (1998) 195 CLR 232; (1998) 156 ALR 517, 547-8 (Kirby J); Rosenberg v Percival [2001] HCA 18; (2001) 205 CLR 434; (2001) 178 ALR 577, 615-618 (Kirby J), 629 & 632 (Callinan J). See also: Tame v State of New South Wales; Annetts v Australian Stations Pty Ltd (2002) 191 ALR 449; [2002] HCA 35, [194] (Gummow and Kirby JJ). [189] CES v Superclinics (Australia) Pty Ltd (1995) 38 NSWLR 47, 86C (Meagher JA). [190] *See*: Part II.A.3 and III.A.4. [191] F. A. TRINDADE AND PETER CANE, THE LAW OF TORTS IN AUSTRALIA 434 (3d ed. 1999). Quoted with approval in McFarlane v Tayside Health Board [1999] UKHL 50; [1999] 4 All ER 961; [2000] 2 AC 59, 83F-G (Lord Steyn); quoted in Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [408] (Heydon J). [192] Many who invoke 'sanctity of life' considerations against recovery of damages for wrongful birth would presumably also be opposed to abortion. Yet, ironically, if recovery for wrongful birth were disallowed, greater numbers of potential parents may be led to seek abortion in order to avoid the (unrecoverable) costs of raising a child. [193] *Cf* Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [93], [166] (Mason P) (applying a similar logic). [194] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [180] (Kirby J). [195] Edwards v Blomeley [2002] NSWSC 460, [6]. [196] McKay v Essex Area Health Authority [1982] 1 QB 1166 (CA), 1177H-1178C (Stephenson LJ).
See: Congenital Disabilities (Civil Liability) Act 1976 (UK) s 4(5). [197] McKay v Essex Area Health Authority [1982] 1 QB 1166 (CA), 1181A-B (Stephenson LJ). [198] California (Curlender v Bio-Sciences Laboratories 106 Cal App 3d 811, 165 Cal Rptr 477 (1980); Turpin v Sortini 643 P 2d 954 (1982)); New Jersey (Procanik v Cillo 478 A 2d 755 (1984)); Washington (Harbeson v Parke-Davis Inc 656 P 2d 483 (1983)). *See also*: SEYMOUR, *supra* note 5, at 108-11. [199] Including: Alabama (Elliott v Brown 361 So 2d 546 (1978)); Arizona (Walker v Mart 790 P 2d 735 (1990)); Colorado (Lininger v Eisenbaum 764 P 2d 1202 (1988)); Delaware (Garrison v Medical Center of Delaware, Inc 581 A 2d 288 (1989)); Florida (Kush v Lloyd 616 So 2d 415 (1992)); Georgia (Atlanta Obstetrics & Gynecology Group v Abelson 398 SE 2d 557 (1990)); Idaho (Blake v Cruz 698 P 2d 315 (1984)); Illinois (Cockrum v Baumgartner 95 Ill 2d 193, 200-01 (1983); Siemieniec v Lutheran General Hospital 512 NE 2d 691 (1987)); Indiana (Cowe v Forum Group, Inc 575 NE 2d 630 (1991)); Kansas (Bruggeman v Schimke 718 P 2d 635 (1986)); Louisiana (Petre v Opelousas General Hospital 517 So 2d 1019 (1987), reversed in part on other grounds, 530 So 2d 1151 (1988)); Massachusetts (Viccaro v Milunsky 551 NE 2d 8 (1990)); Michigan (Taylor v Kurapati 600 NW 2d 670 (1999)); Missouri (Wilson v Kuenzi 751 SW 2d 741 (1988)); Nevada (Greco v United States 893 P 2d 345 (1995)); New Hampshire (Smith v Cote 513 A 2d 341 (1986)); New York (Becker v Schwartz 386 NE 2d 807 (1978)); North Carolina (Azzolino v Dingfelder 337 SE 2d 528 (1985)); Ohio (Hester v Dwivedi 733 NE 2d 1161 (2000)); Pennsylvania (Ellis v Sherman 515 A 2d 1327 (1986)); Texas (Nelson v Krusen 678 SW 2d 918 (1984)); West Virginia (James G v Caserta 332 SE 2d 872 (1985)); Wisconsin (Dumer v St Michael's Hospital 233 NW 2d 372 (1975)); Wyoming (Beardsley v Wierdsma 650 P 2d 288 (1982)). *See*: Edwards v Blomeley [2002] NSWSC 460, [35]; SEYMOUR, *supra* note 5, at 107-8. [200] Arndt v Smith [1994] 8 WWR 568 (British Columbia Supreme Court), overturned on another issue [1995] 2 SCR 539; Jones v Rostig (1999) 44 CCLT (2d) 312 (British Columbia Supreme Court); Lacroix v Dominique [2001] MBCA 122 (Manitoba Court of Appeal); Mickle v Salvation Army Grace Hospital (1988) 166 DLR (4th) 743 (Ontario General Division). These cases are collected in: Edwards v Blomeley [2002] NSWSC 460, [26]-[32]. [201] Bannerman v Mills (1991) ATR 81-079. [202] Edwards v Blomeley [2002] NSWSC 460; Harriton v Stephens [2002] NSWSC 461; Waller v James [2002] NSWSC 462. [203] Edwards v Blomeley [2002] NSWSC 460, [119]; Harriton v Stephens [2002] NSWSC 461, [71]; Waller v James [2002] NSWSC 462, [66]. [204] Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [43] (Spigelman CJ), original emphasis. [205] Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [46] (Spigelman CJ). [206] Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [266] (Ipp JA). [207] Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [234]-[237], [271], [279], [320]-[321] (Ipp JA). As Spigelman CJ noted (see at [6]), Ipp JA routinely confuses damage (loss or injury) with damages (an amount awarded in compensation for loss or injury); so it is not always clear which one he means. But it emerges (at [279]) that he sees both as problematic.
Ipp JA also argued (at [337]) that 'at the present time, when legislatures throughout the country have legislated or have foreshadowed legislation restricting liability for negligence...it would be quite wrong to expand, by judicial fiat, the law of negligence into new areas.' To this Mason P replied (at [164]): 'I know of no legal principle that directs the common law to pause or to go into reverse simply because of an accumulation of miscellaneous statutory overrides.' [208] Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [303], [348] (Ipp JA). [209] Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [108] (Mason P). [210] Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [116] (Mason P). [211] Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [116] (Mason P). [212] Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [161]-[162] (Mason P). [213] Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [121], [124], [135], [139], [141] (Mason P). [214] Watt v Rama [1972] VicRp 40; [1972] VR 353; Burton v Islington Health Authority [1992] EWCA Civ 2; (1993) QB 204; De Martell v Merton & Sutton Health Authority [1992] EWCA Civ 2; (1993) QB 204; Edwards v Blomeley [2002] NSWSC 460, [54]. [215] X & Y v Pal (1991) 23 NSWLR 26, 40 (Clarke JA); Waller v James [2002] NSWSC 462, [17]. [216] *See*: above, Part II.C.1.(a). [217] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [67] (McHugh and Gummow JJ). [218] *See*: Paton v British Pregnancy Advisory Service Trustees & Another (1979) 1 QB 276 at 279; (1978) 2 All ER 987; C v S [1987] 1 All ER 1230; (1988) 1 QB 135, 140 (Heilbron J); Re F (in utero) (1988) Fam 122, 138 (May LJ); B v Islington Health Authority (1991) 1 QB 638; De Martell v Merton & Sutton Health Authority [1992] EWCA Civ 2; (1993) QB 204, 213 (Phillips J); Burton v Islington Health Authority [1992] EWCA Civ 2; (1993) QB 204, 226 (Dillon LJ); Re MB [1997] 8 Med LR 217; St George's Healthcare NHS Trust v S [1998] 3 All ER 673; [1998] 3 WLR 936 (CA); Attorney-General (Qld); Ex rel Kerr v T (1983) 1 Qd R 396, 400 (Qld SC, Williams J), 406 (Qld CA); (1983) 57 ALJR 285, 287 (High Court, Gibbs CJ); Yunghanns v Candoora No 19 Pty Ltd [1999] VSC 524, [75]-[86]. To illustrate: if a negligently performed amniocentesis causes the foetus temporary pain or deformity but has no effects at or after birth, the person once born could hardly sue for negligence; and this precisely because damage before birth is not damage in law (or in other words, damage in law is always damage to the body, property or finances of a legal person: a legal non-person cannot suffer legally recognised damage). [219] McKay v Essex Area Health Authority [1982] 1 QB 1166 (CA), 1178E-F (Stephenson LJ); Edwards v Blomeley [2002] NSWSC 460, [69]; Harriton v Stephens [2002] NSWSC 461, [25]-[27]; Waller v James [2002] NSWSC 462, [39] & [43]. [220] *See*: March v E & MH Stramare Pty Ltd [1991] HCA 12; (1991) 171 CLR 506, 511; (1991) 99 ALR 423, 426-7; Medlin v State Government Insurance Commission [1995] HCA 5; (1995) 182 CLR 1, 6-7; 127 ALR 180, 183-4; Henville v Walker [2001] HCA 52; (2001) 206 CLR 459, 480 & 490; 182 ALR 37, 50-1 & 59. *See also*: Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [121] (Mason P).
[221] McKay v Essex Area Health Authority [1982] 1 QB 1166 (CA), 1181D-F (Stephenson LJ), 1189C-D (Ackner LJ), 1191H-1193A (Griffiths LJ); Edwards v Blomeley [2002] NSWSC 460, [72]-[76]; Harriton v Stephens [2002] NSWSC 461, [33]; Waller v James [2002] NSWSC 462, [49]; Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [234]-[237], [271], [279], [320]-[321] (Ipp JA); see also [43], [46] (Spigelman CJ). [222] Cf Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [42]-[43] (Spigelman CJ) (appearing to endorse this logic). Discussions about non-existence are liable to invoke philosophical speculation rather than established legal principle, and a court wishing to avoid such speculations could simply accept the point (already made: above, Part III.B.1.(c)) that disability is a recognised head of physical damage. [223] *See*: Watt v Hertfordshire County Council [1954] EWCA Civ 6; [1954] 2 All ER 368; [1954] 1 WLR 835; Marshall v Curry [1933] 3 DLR 260 (Nova Scotia SC); Department of Health & Community Services (NT) v JWB & SMB (Marion's case) [1992] HCA 15; (1992) 175 CLR 218; (1992) 106 ALR 385, 452 (McHugh J); Krishna v Loustos [2000] NSWCA 272; [2000] ACL Rep 300 NSW 73. [224] *See*: SEYMOUR, *supra* note 5, at 160-164, 176. For the purposes of this paper I assume the common sense view that we were once foetuses. [225] *Cf* Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [147] (Mason P) (noting this objection), [266], [271] (Ipp JA) (endorsing essentially this objection). [226] *See also*: Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [157] (Mason P). [227] McKay v Essex Area Health Authority [1982] 1 QB 1166 (CA), 1193B (Griffiths LJ). [228] Penny Dimopoulos & Mirko Bagaric, *The Moral Status of Wrongful Life Claims*, 32 COMMON LAW WORLD REV. 35, 58-60 (2003). *See*: Airedale NHS Trust v Bland [1992] UKHL 5; [1993] AC 789; [1993] 1 All ER 821; Re a Ward of Court (1995) 50 BMLR 140; Re J (A Minor) [1990] 3 All ER 930; [1991] 2 WLR 140; Re C (A Minor) [1989] 2 All ER 782; Gardner; Re BWV [2003] VSC 173, [43] (obiter); Hunter Area Health Service v Marchlewski (2000) 51 NSWLR 268; [2000] NSWCA 294, [91] (obiter). Note that quality of life is measured objectively or externally (that is, by persons other than the patient), since in most cases the patient is permanently unconscious and hence cannot assess his own quality of life; though the point is still to determine what is in the patient's best interests. [229] Note that the duty of care arises only where the disabled child is unwanted. If the disabled child is wanted, the doctor's conduct will have no bearing on whether the child exists (and suffers physical damage)—in which case there is no possibility of causing physical damage and hence no duty of care (unless on other grounds). [230] *See*: Rogers v Whitaker [1992] HCA 58; (1992) 175 CLR 479; (1992) 109 ALR 625. [231] *See*: above, Part II.A.1. [232] In Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [25]-[28], Spigelman CJ held that because the relationship between the wrongful life plaintiff and the doctor is mediated through the parents, that relationship is insufficiently 'direct' to create a duty of care.
This seems implausible: if a doctor advises a pregnant woman (who opposes abortion) to spend lots of time with people who recently contracted rubella, the resulting disabled child (who, but for the negligence, would have been born without disabilities) could plainly sue the doctor for negligence—meaning there is sufficient 'directness' of relationship in this case; yet the relationship is surely no less 'direct' in wrongful life. Note also that the parents need not be seen as somehow acting on behalf of the potential child (contra the apparent view of Spigelman CJ at [27]); one need only recognise that the information provided to the parents by the doctor will affect whether an unwanted disabled child comes into existence and so suffers physical damage (disability). [233] Costs flowing from (caused by) the disability are those a non-disabled person in the same position as the plaintiff (same except for the disability) would not incur—for example, nursing costs. Comparison between the plaintiff and a non-disabled person is legitimate here, since one is merely asking what flows from the disability (and this involves considering what would happen with versus without the disability). Earlier it was asked what flows from the negligence. The appropriate comparison there is between disabled existence and never being born (non-existence), since these are what would happen with versus without the negligence. [234] *See*: Overseas Tankship (UK) Ltd v Miller Steamship Co Pty Ltd (The Wagon Mound (No 2)) [1966] UKPC 1; [1967] 1 AC 617; [1967] ALR 97; Mahony v J Kruschich (Demolitions) Pty Ltd [1985] HCA 37; (1985) 156 CLR 522; (1985) 59 ALR 722. [235] Livingstone v Rawyards Coal Co [1880] UKHL 3; (1880) 5 App Cas 25, 39 (Lord Blackburn); Lee Transport Co Ltd v Watson [1940] HCA 27; (1940) 64 CLR 1, 13-14 (Dixon J); Butler v Egg and Egg Pulp Marketing Board [1966] HCA 38; (1966) 114 CLR 185, 191; [1966] ALR 1025; Todorovic v Waller [1981] HCA 72; (1981) 150 CLR 402; (1981) 37 ALR 481, 486 (Gibbs CJ and Wilson J), 510 (Mason J), 527-8 (Brennan J); Haines v Bendall [1991] HCA 15; (1991) 172 CLR 60, 63; 99 ALR 385, 386 (Mason CJ, Dawson, Toohey and Gaudron JJ); Manser v Spry [1994] HCA 50; (1994) 181 CLR 428; (1994) 124 ALR 539, 543 (Mason CJ, Brennan, Dawson, Toohey and McHugh JJ). [236] *See*: Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [214]-[232] (Ipp JA). [237] *See*: Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [48] (Gummow and McHugh JJ), [276] (Callinan J); Griffiths v Kerkemeyer [1977] HCA 45; (1977) 139 CLR 161; (1977) 15 ALR 387. [238] National Insurance Co of New Zealand v Espagne [1961] HCA 15; (1961) 105 CLR 569, 573 (Dixon CJ); [1961] ALR 627. [239] National Insurance Co of New Zealand v Espagne [1961] HCA 15; (1961) 105 CLR 569; [1961] ALR 627; Manser v Spry [1994] HCA 50; (1994) 181 CLR 428; (1994) 124 ALR 539, 543-5 (Mason CJ, Brennan, Dawson, Toohey and McHugh JJ). [240] So much was assumed in Public Trustee v Zoanetti [1945] HCA 26; (1945) 70 CLR 266. [241] Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [90] (McHugh and Gummow JJ), [173] (Kirby J), [297]-[298] (Callinan J). [242] *See*: Rogers v Whitaker [1992] HCA 58; (1992) 175 CLR 479; (1992) 109 ALR 625.
[243] McKay v Essex Area Health Authority [1982] 1 QB 1166 (CA), 1180G (Stephenson LJ), 1188B-C (Ackner LJ); Edwards v Blomeley [2002] NSWSC 460, [119]; Harriton v Stephens [2002] NSWSC 461, [71]; Waller v James [2002] NSWSC 462, [66]; Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [23] (Spigelman CJ), [303], [314], [348] (Ipp JA). [244] *See*: Cattanach v Melchior (2003) 199 ALR 131; [2003] HCA 38, [77] (McHugh and Gummow JJ) (rejecting such inferences). [245] *Cf* Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [124] (Mason P): 'It is one of the hallmarks of a compassionate society that care and treatment is made available to the severely disabled. To suggest that [wrongful life plaintiffs] are somehow impugning life itself by seeking just recompense for even the cost of care is quite irrational, indeed disturbing.' [246] Edwards v Blomeley [2002] NSWSC 460, [75]. [247] Edwards v Blomeley [2002] NSWSC 460, [119]; Harriton v Stephens [2002] NSWSC 461, [71]; Waller v James [2002] NSWSC 462, [66]. [248] Edwards v Blomeley [2002] NSWSC 460, [75]. [249] *Cf* Edwards v Blomeley [2002] NSWSC 460, [75] (the logic appears parallel). [250] Edwards v Blomeley [2002] NSWSC 460, [119]; Harriton v Stephens [2002] NSWSC 461, [71]; Waller v James [2002] NSWSC 462, [66]; see also McKay v Essex Area Health Authority [1982] 1 QB 1166 (CA), 1181A-B (Stephenson LJ). [251] Harriton v Stephens; Waller v James; Waller v Hoolahan [2004] NSWCA 93, [139] (Mason P). [252] McKay v Essex Area Health Authority [1982] 1 QB 1166 (CA), 1193C-D (Griffiths LJ). [253] *See*: Airedale NHS Trust v Bland [1992] UKHL 5; [1993] AC 789; [1993] 1 All ER 821; Re a Ward of Court (1995) 50 BMLR 140; Re J (A Minor) [1990] 3 All ER 930; [1991] 2 WLR 140; Re C (A Minor) [1989] 2 All ER 782; Gardner; Re BWV [2003] VSC 173, [43] (obiter); Hunter Area Health Service v Marchlewski (2000) 51 NSWLR 268; [2000] NSWCA 294, [91] (obiter).
true
true
true
Australasian Legal Information Institute (AustLII), a joint facility of UTS and UNSW Faculties of Law.
2024-10-12 00:00:00
2024-01-05 00:00:00
null
null
null
null
null
null
14,830,012
https://blog.newskysecurity.com/iot-thermostat-bug-allows-hackers-to-turn-up-the-heat-948e554e5e8b
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
153,745
http://www.lo-fi-librarian.co.uk/?p=878
lo-fi-librarian.co.uk
null
Buy this domain. lo-fi-librarian.co.uk
true
true
true
This domain may be for sale!
2024-10-12 00:00:00
2024-01-01 00:00:00
null
null
null
null
null
null
34,868,093
https://www.bloomberg.com/news/articles/2023-02-20/china-blasts-us-for-military-cultural-hegemony-as-ties-sour
Bloomberg
null
To continue, please click the box below to let us know you're not a robot. Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy. For inquiries related to this message please contact our support team and provide the reference ID below.
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
20,097,720
https://dougscripts.com/itunes/2019/06/first-thoughts-about-music-app/
Doug's AppleScripts
Doug Adams
# First Thoughts About Music.app Apple debuted Music.app and Apple TV.app at WWDC yesterday. The macOS 10.15 Developer Beta was released and I've installed it. All in all, I'd say things are looking good for scripting the media apps in macOS 10.15. **There is no iTunes.app in Catalina.**Surprisingly! I thought Apple would keep a "legacy" iTunes around, in the same way QuickTime Player 7 and Aperture were allowed to languish. But I'm guessing the new new media apps work well enough that such a strategy was deemed unnecessary.- Music and Apple TV have AppleScript support. Podcasts app does not. **Current iTunes scripts will not work with Music.app or AppleTV.app.**At least, not without some slight modifications. Music.app's scripting definitions file is virtually the same as iTunes (likewise the Apple TV.app). So scripts that target application "iTunes" will need to target application "Music" or "Apple TV". There may be other changes necessary.**The Music.app Script Menu Lives!**Simply create the "Music/Scripts/" folders in the Library folder, put at least one script in it and the Script menu will appear in Music.app. I haven't tried this with the Apple TV app, but I'm betting it works the same.**There doesn't appear to be an automatically updated "iTunes Library.xml" file.**Holding out hope this will be available in a later version (or perhaps I'll stumble over something). This XML file is used by third-party apps to quickly get information about the library. XML files can be user-exported but not having an automatically updated XML is inconvenient. This was being phased out anyway. I haven't tested the iTunesLibrary/ITLibrary framework under Catalina yet which may be a workaround.**No Column Browser (sad).** If you are a rabid iTunes user and are chomping at the bit to try Music.app: TAKE THAT BIT OUT OF YOUR MOUTH! I would not recommend using the Developer or Public Betas on a main machine "just to see". You will not be able to go back without enormous difficulty. If you must install a beta, use a separate partition or virtual machine. Otherwise, wait until the official release. I will try and update some scripts and apps for Music.app in the coming days and weeks. My Summer is going to be quite busy!
true
true
true
Download over 500 AppleScripts for the Mac, plus get tips and information on writing your own. This site is published by Doug Adams.
2024-10-12 00:00:00
2019-06-04 00:00:00
https://dougscripts.com/…sicon600x600.png
null
null
2001-2024
null
null
20,282,341
https://kulinacs.com/pip-install-is-code-execution/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,703,452
https://medium.com/@SeanAmmirati/science-of-growth-the-second-launch-to-accelerate-growth-fbd662f32c66
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,308,684
https://medium.com/@PaulStollery/all-of-the-fucks-given-online-in-2016-58c60edd6e44#.rcukgrwhr
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,614,831
https://www.newyorker.com/tech/elements/the-antisocial-media-app
The Antisocial-Media App
Mark O’Connell
A few weeks ago, I was in a café across the street from my house, having just put in an order for the first cappuccino of the day, when a woman walked in with her young son. I recognized him as one of the children who is regularly looked after by the same child minder as my own son. The woman, on the other hand, I had never seen before. The boy obviously recognized me, too, because as his mother was placing her coffee order he smiled up at me and said hello. I returned the greeting, using his name, and felt immediately awkward, because as far as his mother was concerned I was just some random man she’d never laid eyes on. Instead of simply leaving that ambiguous and faintly sinister note ringing in the air between us, I decided to tell her that her son and I knew each other from the child minder’s; but, flustered, I somehow managed to introduce myself to her as “Mike’s mum.” I tried to clarify that I was, of course, Mike’s dad, but the chat was by that point completely beyond salvage. We awaited our respective coffees in tense and complicated silence, before going our separate ways. Since then, I have, thankfully, not run into this woman again, but given the fact that we live in the same neighborhood, and that our sons are in child care together, I probably won’t be able to avoid her indefinitely. I recently found myself thinking of this when I learned of the existence of a smartphone app called Cloak. The app’s tagline is “Incognito mode for real life,” and it offers its users the ability to “avoid exes, co-workers, that guy who likes to stop and chat—anyone you’d rather not run into.” (Facebook’s newly announced Nearby Friends feature does essentially the same thing, though a spokesperson suggested that the app could be used “to make last-minute plans to meet up with a friend who happens to be in the same place you’re headed to.”) Cloak works by linking with your Instagram and Foursquare accounts to uncover the locations of these undesirables and revealing their avatars on a map, thereby empowering you to give them as wide a berth as possible; in this sense, it’s like a contemporary urban version of those maps from the Middle Ages, with their admonitory illustrations of dragons and sea monsters: “Here Be Vague Acquaintances.” Obviously, Cloak doesn’t really do anything that Instagram and Foursquare (and now Nearby Friends) don’t already inherently allow. In the same way that these social-media services can always be adapted to antisocial ends, Cloak itself can just as easily be used to contrive casual run-ins. But the technology is explicitly marketing itself as a means of social evasion, and this seems to account for its appeal. There’s a certain novelty value in the idea of an anti-social-media app, a kind of satirical inversion of the assumption of gregariousness built into your Facebook and your Foursquare. According to the New York *Times*, Cloak was downloaded from iTunes nearly three hundred thousand times its first three weeks. Clearly, there are a lot of people out there looking to avoid running into other people. When I first heard about Foursquare, shortly before its launch, in 2009, I felt preëmptively oppressed by the very notion of such a thing. “Why would I want you to be able to use your phone to hunt me down and socialize with me?” I thought. It never occurred to me, at the time, that I could use this same gadget to actively avoid running into people. If it *had* occurred to me, I might have invented Cloak on the spot. 
So I was curious enough about Cloak to download it onto my phone. I quickly realized, though, that it was essentially useless to me, what with my nonexistent Instagram network and my total absence from Foursquare. (Cloak offers integration with neither Facebook nor Twitter, rendering it even less valuable to the average chat avoider.) And, even if I were a prolific and widely connected user of these services, the app would still only be able to steer me clear of other people who happened to use them, too. “All Clear: There’s nobody nearby,” Cloak informs me as I write, flagrantly ignoring the presence of my next-door neighbor, whom I can see, through my window, has just stepped out for a cigarette. I find the idea of a genuinely effective chat-evasion technology initially appealing and ultimately troubling. If there were such a thing as a sat-nav device that planned my route from point A to point B while steering me clear of person X or Y, I can easily imagine getting quite a lot of use out of it. And that’s also exactly why I find the idea disconcerting, and why I’m glad that Cloak doesn’t really function for me in any sort of profitable way. In “River of Shadows,” her book about the pioneering photographer Eadweard Muybridge, Rebecca Solnit writes about how the development of new technologies in the nineteenth century—railroad networks, telegraphy, photography—was routinely referred to by the stock phrase “annihilation of time and space.” This annihilation, she writes, “is what most new technologies aspire to do: technology regards the very terms of our bodily existence as burdensome.… What distinguishes a technological world is that the terms of nature are obscured; one need not live quite in the present or the local.” The Internet has accelerated this process to a remarkable degree, alleviating more fully than ever before the burden of bodily existence. It allows us to be where we are not, but this also means not being where we are. By generating a kind of omnipresence—whereby we are always available, visible, contactable, all of us *there* all the time—the technologies that mediate our lives also cause us to disappear, to vanish into a fixed position on the timeline or the news feed. (“You are invisible,” runs the weirdly urgent message on my Gmail chat sidebar. “Go visible.”) Existing online means inhabiting a series of cloaks, a whole complex ontology of lurking and attenuated presence. And there is now that strange new sense of guilty truancy from leaving e-mails and phone calls unanswered while conspicuously tweeting or posting on Facebook—a social breach for which we don’t yet seem to have developed any sort of etiquette. When I open up Cloak on my phone, I see a stylized map of Dublin in night-vision green and black, with a single pulsing dot at its center, just north of the Liffey: the pulsing dot of myself, right at the cloaked location of my presence, or my absence. “0 People,” reads the counter at the top of the screen. “All Clear: There’s nobody nearby” reads like such a strange, sad message, such a lonely thing to have achieved through technological control of our social environments. Looking at that screen makes me want to place my phone face down on my desk, go out into the street, and walk around until I bump into someone I know. *Mark O’Connell is Slate’s books columnist, and a staff writer for The Millions. You can follow him on Twitter @mrkocnnll.* *Photograph by Ferdinando Scianna/Magnum.*
true
true
true
The new app Cloak allows you to avoid running into exes, co-workers, and “that guy who likes to stop and chat.”
2024-10-12 00:00:00
2014-04-18 00:00:00
https://media.newyorker.…it/cloak-app.jpg
article
newyorker.com
The New Yorker
null
null
2,756,965
http://www.jonathanbrun.com/2011/07/help-i-need-somebody.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
5,783,309
http://www.kennykellogg.com/2013/05/two-videos-every-tuesday_28.html
Two Videos Every Tuesday
Unknown
Kenny Kellogg Scott Orn's Personal Blog Tuesday, May 28, 2013 Two Videos Every Tuesday Back again with "Two Videos Every Tuesday." If you missed " 7 Articles on Sunday " take a quick peek. First up is a neat video that Evan Spiridellis linked to called Second Wind . Evan's friend made the video last year and it's terrific. Second, is Jerry Seinfeld's " Comedians in Cars Getting Coffee " with Michael Richards. Newer Post Older Post Home
true
true
true
A blog about Ben's Friends patient support networks, technology, startups, music and san francisco.
2024-10-12 00:00:00
2013-05-28 00:00:00
https://blogger.googleus…ael+richards.png
null
kennykellogg.com
kennykellogg.com
null
null
37,507,592
https://www.bbc.com/worklife/article/20230912-the-companies-sticking-to-fully-remote-work
The companies staying fully remote
Alex Christian
# The companies sticking to fully remote work **As many firms head back to the office, there are a few staunchly keeping their teams virtual. Are they the last of a dying breed, or trendsetters?** The days of fully remote set-ups are past their peak for most employees. According to July 2023 LinkedIn data, seen by BBC Worklife, there has been a 50% year-over-year decrease in remote roles advertised on the platform in the US, and a 21.5% drop in the UK. Yet even as more firms issue hard-line return-to-office mandates and clamp down on employees working from home, there are some companies still steadfastly remaining – or even switching to – remote set-ups. The trend skews towards technology firms, particularly startups, who are inherently tech-enabled, and have invested in their remote workforces. During the pandemic, for instance, Airbnb pivoted to a "Live and Work Anywhere" programme, which it’s retained to date. Online real estate marketplace Zillow switched to a permanent work-from-anywhere policy in 2020, subsequently announcing that it would pay the same wage to employees who moved away from its Seattle headquarters. Yet there are also some companies – both multinational corporations and startups – still embracing remote employment. These arrangements could be the last of a dying breed. Yet companies' refusals to give in to the return-to-office trend may also serve as proof of concept that tethering workers to desks like they were before the pandemic isn't the only way – or even the best way – to work. **'An opportunity to embrace a better way of working'** During pandemic lockdowns, some companies found the switch to remote working relatively seamless. Founded in 1995, Vista produces physical and digital marketing items for small businesses. Its 6,700-strong workforce includes in-person employees at printing plants in North America and Europe as well as distributed teams scattered across various finance, HR and product design departments in 15 countries. Headquartered in Venlo, Netherlands, Vista switched to a remote-first model for its office workers in August 2020. "Working from home was initially a necessity, but we didn't look at it as a stop-gap," says Massachusetts-based Dawn Flannigan, vice president of human resources for Vista's parent company Cimpress. "Instead, our thinking was that we were presented with an opportunity to embrace a better way of working and rethink our talent strategy." Pivoting to virtual work meant overhauling decades' worth of in-person practices, says Flannigan. "We wanted to move down the asynchronous path of non-linear workdays, so teams could work more flexibly. That meant relying more on documentation, training employees on collaborative tools and setting better agendas so there were fewer meetings, and building new teams responsible for making people effective at remote-first working." In doing so, Vista has been able to access deeper talent pools, improve results and boost employee engagement, says Flannigan. She cites a June 2023 internal survey of its workforce, in which 87% of staff said the company's remote-first policy improved their 'work-life harmony' ("calling it 'work-life balance' implies the scale has to tip one way or another," she adds). Without fixed in-person office days, Vista has also reconfigured its onboarding. Its former offices around the world have been turned into collaboration centres with hot desks, enabling teams to choose to occasionally work in person and brainstorm. 
"We've designed a 100-day programme specifically to train new starters in adapting to remote-first working," says Flannigan. "It includes networking opportunities, where we encourage team members to meet in a collaborative space, introduce themselves and build their working relationships." Tomas Chamorro-Premuzic, professor of business psychology at University College London, says large organisations that traditionally depend on in-person working are likely just as capable as Vista at creating a remote-first workplace. Yet they still often prefer in-office patterns, he adds, not because they’re necessarily limited by virtual working, but because they're actually constrained by a predisposition towards physical workspaces. "The tools, advantages and motives for a company to be fully remote are the same for a large conglomerate as they are for a small tech startup," says Chamorro-Premuzic. "But the main difference is that older businesses [stick] to their own traditions and habits." In short, companies taking Vista's path are rare – but it's still possible to go remote. It may just mean re-evaluating the entrenched office norm. **Starting out remote** Although new companies may set up in-office mandates from the start, many are launching businesses as fully remote outright. For example, skills-based recruitment startup TestGorilla was founded in 2019 in Amsterdam with a workforce that gets their jobs done anywhere – and as remote hiring boomed through the pandemic, so did its growth. Others started with in-office setups but switched to remote early on in their growth. Riannon Palmer (pictured below) is the founder of PR agency Lem-uhn. Launched in 2021, it initially had a London workspace under an informal flexible-hybrid policy, in which staff chose their in-office workdays. But in May 2023, as the company grew, Palmer decided to set a fully remote model, subsequently launching a work-from-anywhere policy. The choice to be office-free is philosophical, rather than financial, says Palmer. "It's about making people happier and more engaged in their work, and reducing presenteeism. Being a remote company means trusting employees to know where they work best and enabling them to be productive – whatever that looks like to them." By having a flexible-working model, Palmer says Lem-uhn is at a hiring advantage, given most PR firms have issued return-to-office mandates. "It means you can attract top, diverse talent even when you're just starting out. We've had one employee join who is still able to dog-sit around the UK, a working mother that can pick their child up during the school run and recent job candidates that are more introverted. Being remote means having more inclusivity and variety in teams." As firms U-turn on remote working or stiffen their return-to-office mandates, leaders' justification often focuses on how in-person collaboration is integral to learning and development, long-term planning and workplace culture. But Flannigan says all three features can be fostered among distributed teams. "Culture isn't defined by walls. It's based on values – the way you work." **'Being remote is an operating model'** More than three years since most of the workforce worked from home out of necessity, employers such as Vista and Lem-uhn are choosing virtual-working as their ideology. Consequently, they've made distinct, existential decisions in how they operate as businesses, explains Chamorro-Premuzic. This means that remote-first organisations often share similar qualities. 
"Fully remote firms tend to be more employee-centric, in that an all-virtual model is more likely to please workers, who prefer autonomy," says Chamorro-Premuzic. "And they'll likely be more trusting: they don't need to see where and how people work. So, they'll likely treat employees as adults and give them freedom and flexibility, which generally boosts performance." As the number of work-from-home roles dwindles, demand is vastly outstripping supply: LinkedIn US data shows that 44% of all job applications on the platform in July were for remote roles. Chamorro-Premuzic says this relative scarcity means these employers will be increasingly coveted in the labour market. In many cases, these organisations appeal to employees who not only want to work flexibly, but also want to do so for forward-thinking companies. "It goes further than just saying 'work from home and be behind a monitor'," says Flannigan. "Being remote is an operating model, not a perk. It's inspiring and empowering people to work wherever is most productive for them, and measuring them by results rather than an office parking lot."
https://lwn.net/SubscriberLink/891273/c6f4105af9c36f5b/
# Fedora considers deprecating legacy BIOS

By Jake Edge
April 20, 2022

A proposal to "deprecate" support for BIOS-only systems for Fedora, by no longer supporting new installations on those systems, led to a predictably long discussion on the Fedora devel mailing list. There are, it seems, quite a few users who still have BIOS-based systems; many do not want to have to switch away from Fedora simply to keep their systems up to date. But, sometime in the future, getting rid of BIOS support seems inevitable since the burden on those maintaining the tools for installing and booting those systems is non-trivial and likely to grow over time. To head that off, a special interest group (SIG) may form to help keep BIOS support alive until it really is no longer needed.

#### Proposal

The proposal to "Deprecate Legacy BIOS" was, as usual, posted on behalf of its owners, Robbie Harwood, Jiří Konečný, and Brian C. Lane, by Fedora program manager Ben Cotton. It currently targets Fedora 37, which is due in October, though there is reason to believe the change will not happen quite that soon. The reasons for removing the support are described in the proposal:

> UEFI is defined by a versioned standard that can be tested and certified against. By contrast, every legacy BIOS is unique. Legacy BIOS is widely considered deprecated (Intel, AMD, Microsoft, Apple) and on its way out. As it ages, maintainability has decreased, and the status quo of maintaining both stacks in perpetuity is not viable for those currently doing that work.
>
> [...] While this will eventually reduce workload for boot/installation components (grub2 reduces surface area, syslinux goes away entirely, anaconda reduces surface area), the reduction in support burden extends much further into the stack - for instance, VESA support can be removed from the distro.

The proposal says that, eventually, BIOS support will have to be removed, but that maintaining the ability to boot existing Fedora systems with BIOS is meant to help smooth the process. Fedora already has requirements that effectively restrict it to systems made after 2006, so this change would extend those restrictions somewhat further. Intel stopped shipping the last vestiges of BIOS support in 2020 (as have other vendors, and Apple and Microsoft), so this is clearly the way things are heading - and therefore aligns with Fedora's "First" objective.

The subject was raised back in 2020, which also led to a (predictably) long thread, though it was not a change proposal at that time. Some of the "relevant points from that thread" are listed in the Feedback section of the proposal. There are, of course, machines that are BIOS-only, and any kind of hardware deprecation for the distribution is impossible "without causing some amount of friction". In addition, there is no way to migrate from a BIOS installation to a UEFI-based system, since "repartitioning effectively mandates a reinstall". In particular, a UEFI partition would need to be added to the system.

The Contingency Plan section of the proposal describes an ugly future for the status quo, but also a possible way forward:

> Leave things as they are. Code continues to rot. Community assistance is required to continue the status quo.
> Current owners plan to orphan some packages regardless of whether the proposal is accepted.
>
> Another fallback option could be, if a Legacy BIOS SIG organizes, to donate the relevant packages there and provide some initial mentoring. Longer term, packages that cannot be wholly donated could be split, though it is unclear whether the synchronization thereby required would reduce the work for anyone.

#### Affected systems

As might be guessed, multiple replies from users of affected systems were seen in the thread, starting with Neal Gompa. He said that he is sympathetic to the change, but thinks that it is "way too early to do across the board". He has a system that struggles to boot Linux with UEFI and his workarounds are beyond the abilities of most users.

David Airlie said that he recently worked on a project to rewrite the Mesa driver for a wide range of Intel GPUs. Many of the systems he used to develop and validate those drivers are pre-UEFI systems, but it was "of great benefit to me and the community" that he could use Fedora for that work. For future projects, he would have to consider moving away from Fedora, which would be a sad state of affairs.

Hans de Goede said that he had several systems at home that were BIOS-only and happily running Fedora now. Obsoleting them just contributes to the e-waste problem; "Looking specifically at fixed PCs and not laptops this proposal would (eventually) turn 2/5 PCs in my home unusable, which really is unacceptable IMHO." He also pointed to Airlie's work on Intel GPUs, saying that "it seems rather silly to drop support for this hw after just investing a significant chunk of time to breathe new [life] into their GPU support".

But it is not just desktops and laptops that are affected by a change of this sort. Fedora is also installed on cloud servers and virtual machines of various sorts, some of which do not support anything other than booting via BIOS. The proposal noted that at the time of the 2020 discussion, Amazon's AWS did not support UEFI, but that has changed. Marc Pervaz Boocha pointed out that many virtual private server (VPS) providers do not support UEFI, giving Linode and Vultr as examples. Dominik "Rathann" Mierzejewski reported that OVH is also affected:

> OVH is another big provider and they don't offer UEFI boot with their VPS range. I've just confirmed it with their support. Since one of my servers is hosted by OVH, I'd have to either migrate to another hosting provider or migrate off Fedora. Which is ironic, considering my involvement in Fedora.

Stewart Smith added some "thoughts both from an EC2 [Elastic Compute Cloud] perspective, and an Amazon Linux as a downstream of Fedora perspective". Most EC2 instance types boot using BIOS by default, though many can use UEFI and new types are likely to get UEFI support, but there are "a *lot* of instance types, a whole bunch of which are less likely to support UEFI". There is no installer for cloud images; instead they use the Amazon Machine Image (AMI) format, but "AMIs that don't run on all instance types tend to cause confusion, no matter how [clearly] you document the limitations". So Amazon Linux has an interest in ensuring that BIOS booting still works well, "likely for a decent number of years to come (however much I wish this wasn't the case)".

Gompa complained that the change is not really a deprecation, but is instead a removal, because the lack of packages and tooling that support BIOS "makes several scenarios (including recovery) harder". It also puts the burden on users to determine if their hardware can boot and install Fedora, but UEFI has not yet reached a critical mass where it can be assumed to work.
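In practice, telling the two boot paths apart on a running system is straightforward; this is a minimal sketch of the usual sysfs check, not anything taken from the proposal itself:

```sh
#!/bin/sh
# If the kernel was started via UEFI, the firmware exposes /sys/firmware/efi;
# if that directory is absent, the system came up via legacy BIOS (or a CSM).
if [ -d /sys/firmware/efi ]; then
    echo "Booted via UEFI"
else
    echo "Booted via legacy BIOS (or UEFI CSM)"
fi
```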
Alberto Abrao agreed with Gompa, noting that this change would "leave behind a LOT of serviceable hardware", especially in the server space:

> Ironically, Fedora is one of the distributions out there that allows me to extract the most out of older hardware. It would be a terrible loss to have to move to a different one, but it's hard to reason purchasing new hardware - especially right now, with pandemic-related supply issues still ongoing - to keep up with this change.

In that message, Gompa also said that he is "a fan of using UEFI instead of BIOS" and that he has done work to add UEFI support to Fedora cloud images. He just does not believe that it is time, yet, to make that switch. Part of the reason is that he believes the UEFI experience on Fedora is not all that good.

#### A wander into secure boot

In his original reply to the proposal, Gompa also asked about Fedora support for NVIDIA drivers under UEFI secure boot. Peter Robinson said that was out of the scope of the proposal, since users can disable secure boot if they need support for drivers, like NVIDIA's, that are not signed by the Fedora kernel-module keys. But Gompa replied that it is sometimes easier to have users fall back to booting from BIOS; furthermore:

> You're right that these are different problems, but I've also seen very little appetite for reducing the suffering of Fedora Linux users on UEFI Secure Boot with the *most common issue* we have: an NVIDIA driver that doesn't do anything because of the lockdown feature. If you're planning to say that UEFI is the only way to boot, then that means you need to be prepared to accept that our UEFI experience is *worse* than our BIOS one right now, and someone needs to take ownership to improve it.

Harwood, who is one of the feature owners, said that NVIDIA users can either use the open-source nouveau driver (which is signed), sign their own copy of the driver "(involves messing with certificates, so not appropriate for all users)", or disable secure boot. But several people pointed out that nouveau does not really solve the problem, especially for more recent NVIDIA hardware; it does not provide access to accelerated graphics operations for recent GPUs and tends to have quite a few bugs even for the older models.

Michael Catanzaro pointed out that the options Harwood described are problematic from a user-experience perspective:

> The user experience requirement is: user searches for NVIDIA in GNOME Software and clicks Install. No further action should be necessary. We didn't make the NVIDIA driver available from the graphical installer with the intention that arcane workarounds would be required to use it.

But Adam Jackson thought that the NVIDIA problem was not Fedora's to solve. There are technical means to make it work and "NVIDIA are the ones with the private source code". But Chris Murphy sees things differently, noting that Fedora is sending mixed messages:

> When users have a suboptimal experience by default, it makes Fedora look bad. We can't have security concerns overriding all other concerns. But it's really pernicious to simultaneously say security is important, but we're also not going to sign proprietary drivers. This highly incentivizes the user to disable Secure Boot because that's so much easier than users signing kernel modules and enrolling keys with the firmware, and therefore makes the user *less safe*.

On the other hand, Fedora does not actually have a full secure boot implementation, Lennart Poettering said, so by disabling it "you effectively lose exactly nothing in terms of security right now". The reason is that the initrd is not signed, so it can be subverted:

> What good is a trusted boot loader or kernel if it then goes on loading an initrd that is not authenticated, super easy to modify (I mean, seriously, any idiot script kiddie can unpack a cpio, add some shell script and pack it up again, replacing the original one) – and it's the component that actually reads your FDE LUKS password.
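For readers who want to know where their own machine stands in this debate, the shim project's mokutil tool can report the firmware's current state; a quick, hedged example (assuming the mokutil package is installed):

```sh
# Query the secure boot state as the firmware reports it.
mokutil --sb-state
# Prints "SecureBoot enabled" or "SecureBoot disabled" on most systems.
```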
#### SIG

In his message linked above, De Goede volunteered to form a SIG along the lines suggested in the proposal. He noted that previous attempts to keep 32-bit x86 alive when Fedora removed support for i686 had run aground, but felt that BIOS is different:

> Legacy BIOS boot support is basically only about the image-creation tools + the bootloader. As various people have mentioned in the thread BIOS support is still very much a thing in data-centers, so I expect the upstream kernel community to keep the kernel working with this for at least a couple of years. Whereas both the kernel + many userspace apps were breaking on i686.

After Fedora project leader Matthew Miller started a new thread about the possibility of having a video call to discuss the BIOS issue, which was generally seen as not being something that would do much more than rehash what had already been aired, De Goede renewed his call for a Legacy BIOS SIG. He envisioned it as a lightweight organization that focused on testing Fedora on BIOS systems, particularly the next version of Fedora early in its development, in order to file and fix bugs.

The new thread largely went over the same ground as the first, at some length, naturally, though there was also some concrete planning of what a new SIG would do—and how. Harwood said that what is needed is a place to assign bugs and a way to get those problems addressed. "The overall goal of the SIG needs to be to reduce load on existing bootloader contributors." But Harwood also wondered whether Fedora should redefine its support for BIOS:

> Given there is consensus that legacy BIOS is on its way out, we think Fedora release criteria in this area should be re-evaluated. Not only does support change from "fully supported" to "best effort", but we should re-evaluate what is/isn't release blocking, and probably clarify who owns what parts.

Several people disagreed with that characterization of the status of BIOS. Chris Adams said: "I don't think this statement is true, unless Fedora doesn't want to be considered for a bunch of popular VM hosts (e.g. Linode and such) that have no stated plans to support UEFI." Perhaps BIOS support for physical hardware could be phased out, though. De Goede agreed that BIOS support is still needed in some contexts:

> Given what the server product folks have indicated that BIOS boot support is quite important for them I'm not sure if changing the release criteria is in order. I do agree that any blocker bugs related to legacy BIOS booting should be assigned to; and taken care of by the legacy BIOS boot SIG.

The change proposal will be discussed and decided by the Fedora engineering steering committee (FESCo), but its prospects do not look good.
The FESCo ticket for the proposal has already gotten five "-1" votes from members, and there are nine on the committee. Those votes are not binding, but it still looks pretty unlikely this change will go through—at least for Fedora 37. But the change will, seemingly, happen eventually. One of the complaints seen in the ticket (and threads) is that the proposal calls it a deprecation, but that is not really what it entails; new versions of Fedora will not be able to be installed on the older hardware, which could be problematic if an in-place upgrade goes awry. In addition, Fedora versions are only supported for about a year, so at some point upgrading may not really be an option if BIOS support is removed. That means users of those systems would have to migrate to a different distribution or keep running an unsupported version as it slowly bitrots.

In a blog post about the change, Cotton pointed out that while he did not think the distribution "should abandon old hardware willy-nilly", there is a balance to be struck:

> I think some distros should strive to provide indefinite support for older hardware. I don't think *all* distros need to. In particular, Fedora does not need to. That's not what Fedora is. "First" is one of our Four Foundations for a reason. Other distros focus on long-term support and less on integrating the latest from upstreams. That's good. We want different distros to focus on different benefits.

With luck, the SIG will take over any packages needed to keep BIOS functioning, since their owners plan to orphan them regardless of the outcome of the proposal. Keeping BIOS alive on Fedora for another few years—how many is difficult to guess—would seem to have a fair number of benefits at this point, though there is obviously a maintenance burden associated with it as well. If the change does not get approved this time around, we will likely see the proposal recur for Fedora 38 (or later), but if the SIG takes off, that may be postponed for some time. When it does come up again, however, we can probably expect another lengthy discussion in Fedora-land.

#### Comments

EDK2 is only one part of UEFI.

In the entire thread, there was *no* commitment by the proposers to improve the experience *with* UEFI on Fedora Linux. There are several dimensions there to improve, but the fundamental assumption was that the current experience is fine, when it is clearly not.

Fedora will still work on non-UEFI platforms; it's just considered unsupported. Improving the UEFI experience can and should be handled by a different change proposal in Fedora (not that I'm seeing how, since the experience from the installer is not "bad"; the rest is relevant to upstream, as in no downstream contributor is going to change that, let alone some individuals doing a change proposal in Fedora, which is just a marketing gimmick for the distribution).

Posted Apr 24, 2022 6:48 UTC (Sun): The installer still supports installing Fedora on non-UEFI platforms. What their change proposal effectively does is open up the door for *other* change proposals to make further changes in Fedora, which more or less will involve the elimination of the technical debt that has been gathered in Fedora since its inception around legacy BIOS support. At best it flips a switch or two for components defaulting to UEFI, but the fact is various upstreams have already started defaulting to UEFI and will eventually do code cleanups that drop any support for legacy BIOS regardless of what downstreams "feel" about it. Those distributions that feel so strongly about it will have to carry and maintain everything themselves; they should not expect upstreams to do that *for them*.

Posted Apr 21, 2022 9:41 UTC (Thu): The fact is it's quite easy these days to set up a "refurbish" shop that plays into people's/companies' environmental guilt, that buys used, known-exploitable hardware, then refurbishes it, injects it with persistent BIOS exploitation, and resells it to that target audience ("green" individuals/companies).
And for whatever reason people seem to be under the assumption that the BIOS is not used after boot, which could not be further from the truth, since the OS needs the BIOS for various reasons, and since it does, the OS always *trusts* the BIOS. One (popular) way to exploit that fact is to use the BIOS-32 calls OSes have and do a direct-to-kernel binary execution from there (here you can see where the Linux kernel detects and calls this service [1]), and so forth (people can educate themselves on how best to exploit legacy/CSM BIOS; there are enough examples out there and in use in the wild). Anyway, Fedora could not only be leading the way in moving the Linux ecosystem towards UEFI, but also get rid of so much technical debt in Fedora in the process, and just leave the legacy users to some distribution better suited for that (some LTS distro like CentOS, RHEL, etc.). But it seems that the "my use case matters the most even though I don't contribute jack to Fedora or the ecosystem in general" crowd wins again, and this will be postponed to F40 and this whole dialog had again in the community. <sigh>

[1] https://github.com/torvalds/linux/blob/b253435746d9a4a701...

Posted Apr 21, 2022 10:34 UTC (Thu): I'm not sure if you are aware of it, but every time any distro mentions anything about deprecating 32-bit support, a squeal like from a pig is heard in the void and someone's pony dies, so 32-bit still seems to be alive and well to me :)

Posted Apr 21, 2022 10:32 UTC (Thu): Typical security person mindset.

Posted Apr 21, 2022 12:12 UTC (Thu): There is nothing hypothetical about them at all. Well-funded (law enforcement) agencies have them in their arsenal to circumvent hard-drive encryption and in covert operations (confiscating a computer, inserting the exploit, releasing the subject and monitoring the subject), as well as pretending to be a hardware sales company that sells drug cartels or other questionable businesses hardware (used or new) to infiltrate and monitor their operations, while the other, "darker" side uses those exploits to extort or otherwise profit from them. These attacks are very much real and have existed in the wild for over two decades. And I'm not sure what you mean by throwing away a perfectly good computer. What do you consider a perfectly good computer? I'm all for recycling and reuse, but the fact is computers don't last forever; a computer's longevity is based on its usage, the environment it resides in, and the quality of the components it's made of, so it can last as little as a couple of days or as long as one or two decades. In other words, people's mileage may vary in that regard depending on the manufacturer or even just product lines between manufacturers. The fact is distributions cannot be expected to support old hardware forever, since it increases their maintainership burden and will hinder the adoption of new technologies, so it's better to just use a distribution that is tailored to such use cases for that target audience (like an LTS distribution, or something like, I guess, Slackware, which presumably looks and operates just like it did when it was initially created; at least its website is most certainly from that era).

Posted Apr 21, 2022 13:39 UTC (Thu): Yes they've existed, no they're not hypothetical, but they're totally outdated and pointless nowadays, when it's both ultra-cheap and effective to develop browser malware, and that has become by far the most effective way to steal users' information, to the point that it's an industry now (look for "malware as a service"). Sorry, but I do not want to mess with that secure boot. It's only as secure as my ability to use it properly, which is basically zero.
However, it surely guarantees that I will eventually lose my data when not being able to recover my system after some bugs, crashes or other issues with my machine. We should not force the user to endure pain that is designed to "protect them", against their will, from attacks that are not relevant to them. We should instead educate users on where the risks are and how to care about what matters. We'd already make much bigger progress if people stopped reading HTML e-mails... While I had been seriously considering migrating from Slackware to Fedora a few months ago, when slack15 was really longing to come, at least this discussion just convinced me that it was absolutely not a good idea!

Posted Apr 21, 2022 18:01 UTC (Thu): If that happens to be Slack, good for you; if it happens to be Fedora, great; if that happens to be *BSD, awesome. But please don't fall into this whole "I would have moved to x distro" or, worse, "If x feature is implemented I stop using the x distro" crowd.

Posted Apr 21, 2022 20:29 UTC (Thu): I won't be that guy to say that if you don't have anything to hide then law enforcement will leave you alone. But I will say that if you do have something to hide from law enforcement, then it's your problem to get a sufficiently modern system to keep them out.

Posted Apr 21, 2022 22:17 UTC (Thu): This is exactly why I'm not using secure boot -- but I note that secure boot works for millions of people perfectly well. I suspect the reason why is simply that they are not systems hackers regularly futzing with early boot (most people aren't). Those people (like us) who *are* systems hackers regularly futzing with early boot should either learn how secure boot works or simply not use it, but that doesn't mean it's not appropriate for the vast majority who aren't routinely doing things like that. (As for the added security: many of the attacks secure boot protects against are physical, and I'm not worried about those: anyone who can attack systems I care about that way has broken into my house and I have much bigger problems. But I'm not sure what to do about the possibility of remote attackers implanting persistent malware in my UEFI firmware or something. Secure boot would protect against that, but I still have it turned off because it would also make it more likely to turn a moderately bad boot problem into a disastrous one, and frankly the system failing to boot because of UEFI malware *is* a disaster, arguably worse for me than it booting with the malware active would be. It's a tradeoff... how common *is* UEFI malware anyway? Is it even a threat worth worrying about for someone like me who is basically a random boring person and thus unlikely to be of interest to major governments unless they are malware-implanting literally everyone in the population?)

Posted Apr 22, 2022 0:23 UTC (Fri): This makes perfect sense. You have to protect against two kinds of security failures: granting access to people who shouldn't have it and denying access to people who should. Getting locked out of your own system and losing data - or losing a lot of time going through some complicated procedure to recover your data - is a security failure just as surely as letting script kiddies in is. For your personal threat model, losing access is a much bigger danger, so it makes sense to take steps to mitigate it by turning off secure boot, even if that increases your chances of getting pwned.

Posted Apr 22, 2022 0:49 UTC (Fri): > how common *is* UEFI malware anyway?

Heh, come to think of it, my desktop's firmware doesn't really do anything to indicate secure-boot is off, so if someone just went and disabled it I might not know! Same with my laptop. My Surface tablet does indicate this in firmware (the boot-splash gains a huge red warning).
I think secure-boot (implemented well - like, you gotta sign the initrd, come on) is probably useful against, say, customs officials quickly adding a boot-kit onto your disk, though, which is worth defending against for a lot of people.

Posted Apr 22, 2022 15:47 UTC (Fri): This was just in the news: Ars Technica: Hackers can infect >100 Lenovo models with unremovable malware.

Posted Apr 22, 2022 20:31 UTC (Fri): https://www.dell.com/support/kbdoc/en-is/000188682/dsa-20... Many OEMs are using Insyde: https://cybersecurityworldconference.com/2022/02/02/exper... Insyde Software's security advisory can be found here: https://www.insyde.com/security-pledge. A report issued by the U.S. Department of Homeland Security (DHS) and Department of Commerce says: "Firmware presents a large and ever-expanding attack surface, as the population of electronic devices grows. Securing the firmware layer is often overlooked, but it is a single point of failure in devices and is one of the stealthiest methods in which an attacker can compromise devices at scale. Over the past few years, hackers have increasingly targeted firmware to launch devastating attacks." https://www.dhs.gov/sites/default/files/2022-02/ICT%20Sup... And the list goes on...

Posted Apr 25, 2022 3:17 UTC (Mon): These examples just show that the most effective fix against all such problems is to refuse UEFI and revert back to BIOS instead.

Posted Apr 25, 2022 14:09 UTC (Mon): I think you should separate the XX century from the XXI century here. I still remember the MS-6309. The year 2000 edition had a nice, simple jumper which made the ROM read-only. Yes, certain changes in configuration caused complaints at boot, but it was a simple matter of changing its position for one boot and returning it afterwards. It was as protected from malware as one can imagine. And then version 5 from 2001 (or was it 2002?) not only lacked the jumper in that place, it refused to boot if you shorted the two nubs which were left in its place! So, *please* don't tell me about the problematic situation with BIOS. It wasn't problematic when people cared. It, of course, became problematic when people started thinking only about flexibility and forgot that it's not a good idea to have computers which are trivially bricked.

Posted Apr 25, 2022 14:26 UTC (Mon): Meanwhile, the other 99.999% of motherboards lacked that feature, including, as you mentioned, later versions of that same motherboard. One data point does not a generalization make. BIOS is layered-hacks-on-top-of-layered-hacks going all the way back to 1982. [1] It's long past time to shoot it in the head. And it's also why, to this day, our bleeding-edge Ryzen processors still pretend to be a 44-year-old 16-bit i8086 when powering up. [1] As in when Compaq released PC clones using a clean-room reverse-engineered BIOS.

Posted Apr 25, 2022 15:14 UTC (Mon): Sorry, but no. It's most definitely not 99.999%. I know for a fact that you had to replace ROM chips on the Risc PC and the Amiga (in fact you can still buy replacement chips), and I have seen a "Flash Protect" switch on lots and lots of motherboards made in the XX century. I remember that one specifically because it was a surprise to me that they would remove it. Yes, so what? It works. It's secure (more secure than the XXI-century abomination). And easy to provide in a virtual environment. EFI is a huge mess with only one redeeming quality: it can support >4TB SSDs. That's great, but I'm not sure all that pointless complexity is worth it. An insecure-by-design POS which *can not be protected by design* — and I'm supposed to use it for the sake of "security"? Puhlease. Sure, I use EFI when I have no choice, but that doesn't mean it's not a POS. So what? One doesn't need that many transistors to implement that mode, and today there are billions of them on any x86 CPU.

Posted Apr 25, 2022 15:41 UTC (Mon): Note, though, that on all modern x86 hardware platforms, "traditional" BIOS is implemented as a module running atop UEFI; so you get all the vulnerabilities of UEFI, plus extra holes due to the CSM that implements the BIOS interface.
Which, in turn, makes the claims about BIOS being more secure questionable - you're talking about an additional layer atop UEFI, which can have its own vulnerabilities, plus you have the full stack of UEFI beneath it to compromise.

Posted Apr 25, 2022 15:46 UTC (Mon): That's very true, sure. I see no reason to use the BIOS interface on a system where it's emulated via the CSM. But I don't think it's implemented that way in virtual systems and other small systems where Linux can still run. Although I wonder how many of those are out there which may not just run Linux, but specifically Fedora. It's pretty heavy nowadays.

Posted Apr 25, 2022 18:24 UTC (Mon): Many of us have had something much more robust than a jumper: an EPROM that required UV light to erase it, and a special programmer delivering 21V to the VPP pin to program it :-) There was no need for a jumper, and as an added bonus, not being upgradable in the field tended to make them less bogus (at least they were more tested than my Core i7's AMI BIOS).

Posted Apr 28, 2022 12:55 UTC (Thu): On the contrary: https://www.theregister.com/2022/04/27/microsoft-linux-vu... which is a vulnerability within systemd and only happens on UEFI hardware.

Posted Apr 28, 2022 13:47 UTC (Thu): The vulnerability is actually with networkd-dispatcher, which is developed (and distributed!) independently from systemd. It's not even widely packaged in distributions! It's not a "systemd vulnerability" any more than a vulnerability in NetworkManager or Apache (or any other random daemon) can be called a "systemd vulnerability." Meanwhile, I see nothing in the article about how this vulnerability can only affect UEFI systems -- it seems to involve relatively run-of-the-mill symlink traversal, and the CVE descriptions are still redacted. Can you point us towards some sort of supporting evidence for your assertion?

Posted Apr 28, 2022 21:09 UTC (Thu): Interesting; even Keith Richards' drugs aren't strong enough to reach that conclusion. In other words, please explain to the audience here on LWN how a security flaw in networkd-dispatcher has anything to do with systemd, and how it would only be happening on UEFI hardware. I eagerly await your response on the matter backing that up...

Posted Apr 22, 2022 0:42 UTC (Fri): If you are buying hardware from the FBI (or generally if the actual, no shit, manufacturer of the hardware is attacking you) you are completely screwed; even remote attestation won't save you (because it'll attest it is unmodified from what you bought!). I don't really see what this has to do with removing BIOS; sure, you can do hardware attacks. Even secure boot can't prevent these attacks without robust attestation support, or extreme levels of tivoization. You _definitely_ can't if you don't even bother to sign the initrd. I think many modern systems will verify firmware code before allowing it to be flashed, even code that implements the legacy BIOS interfaces (of course, with physical access and enough sophistication you can just replace whatever is doing that verification or storing those keys, though sometimes this ends up being the entire processor). In any event, without an acute awareness of these kinds of attacks and a whole bunch of supporting infrastructure, I don't think most systems will be able to prevent this, as it involves some significant usability (and freedom) tradeoffs to protect against an attack model that is uncommon (and if that model applies to you then you had better be well aware of that fact, otherwise you've already lost).
Posted Apr 22, 2022 1:22 UTC (Fri): Not entirely. If you're on Intel and Boot Guard is enabled, the firmware will be measured before it's executed. If you're specifically targeted with modified firmware (even if it's signed appropriately), then those measurements will be different and there'll be reasons to ask questions. Of course, if the vendor just ships backdoored firmware to everyone, that doesn't help (but it does increase the probability that someone will notice the backdoor) - and if Intel is in on it, then obviously all bets are off.

Posted Apr 21, 2022 10:23 UTC (Thu): Those companies should have the required means to prevent this from happening - or is the (f)oss ecosystem supposed to be handholding businesses these days?

Posted Apr 21, 2022 10:28 UTC (Thu): There is a technical issue too, since UEFI is considerably slower to boot and has much greater complexity.

Posted Apr 21, 2022 11:13 UTC (Thu): UEFI is just one step short of becoming a mandatory requirement in government contracts, and usually if, for example, the U.S. government says jump, the vendors jump, and businesses had better be in shape when that call to jump comes. Anyway, I'm still not seeing why *we* (and by we I mean the entire (f)oss ecosystem) should care; it's companies' own responsibility to keep themselves relevant and competitive on the market. We do not exist to do that for them.

> There is a technical issue too, since UEFI is considerably slower to boot and has much greater complexity.

This is the direction the industry is taking (and has been for quite a while, as you yourself are very much aware), so there is no choice other than to put in the work to mitigate any downsides it brings. People might not like it, and businesses might choose to stick their heads in the sand and ignore it, but it is what it is.

Posted Apr 22, 2022 3:13 UTC (Fri): Or the FOSS community can put its weight where it is most useful. There's no value in forcing cloud computing to move from BIOS to UEFI, and as long as Linux supports it, these cloud providers can continue using it. Not permitting AWS to run away with everything is of value for the FOSS community; it reduces the leverage Amazon has over the community, and more generally the FOSS community is opposed to monopolies.

Posted Apr 22, 2022 7:21 UTC (Fri): And now the industry is faced with deep, firmware- and hardware-level attacks, and the reason for that is there is a major firmware security gap with zero visibility into it. In this interconnected world we (now?) live in, organizations and businesses need to be able to verify that the internal components of their purchased computing devices are genuine and have not been altered during the manufacturing and distribution processes (for example, manufacturers swapped out components during the chip shortage for different ones, from different suppliers) or otherwise modified throughout the device's life cycle in their infrastructure, or during the warranty of the device, as a part of their business solution(s). That the firmware that the everyday device relies on - be it the "smart" coffee machine, corporate laptops, the network equipment, the servers or even the personal/corporate EVs (you don't want your Teslas to run rampage, plowing down pedestrians like an elephant tramping over ants, do you, or turn left when they should be turning right after the vehicle received an altered firmware update over the air, right?)
has not been tampered with. All these devices have firmware on them, so it's vital that organizations and businesses have their cybersecurity supply chain in check. The Unified Extensible Firmware Interface is a critical component in the cybersecurity supply chain, since it's an integrated part of the infrastructure that runs this fairly newly founded interconnected world, and I would most certainly think the FOSS community would want to be an integrated part of helping to push, validate, secure and otherwise strengthen the cybersecurity supply chain, as well as phase out solutions like legacy BIOSes that are no longer being maintained and are considered obsolete in the industry, since more often than not Linux is the underlying technology running this new world. I'm not too overly concerned with cloud vendors or other businesses that do not have their cybersecurity supply chain in check, since the free market will take care of those that don't (they break the root of trust and will just be stuck with financially starved clients). For example, if I were a government entity or a car manufacturer like Toyota, Volvo or Tesla, I would not be doing any business with companies that don't have their cybersecurity supply chain in check, and one of the things I would be checking for would be whether the relevant vendor supported UEFI and which hardware it ran on in its infrastructure (all the way down to the component level), etc., so I could validate and ensure that *my* cybersecurity supply chain was in check and *my* root of trust was secure. I would exclude IBM/RH/Amazon from every government/business contract, since it's quite apparent, based on the feedback here and on fedora-devel, that they are not ready for this. Their cybersecurity supply chain is not in order, thus cannot be trusted, and won't be for years (from the looks of it), so I would choose a different vendor that provided me with a secure root of trust. (There is a business opportunity there for those who can get ahead of large vendors like Amazon, for example.) Fedora will not be allowed to make the required change to prepare RHEL due to the legacy trolls that come crawling out of the last century, screaming ME ME ME, MY USECASE, FOREVER!!!! while riding on their rotting, rusting, steam-powered technology devices, demanding that the technology universe remain frozen in time and be supported forever, as-is (another example is Adam trying to deprecate the legacy xorg driver), as opposed to thinking a bit further than their own noses, ahead into the future, to the state the industry is in now and evolving into. The Fedora solution: MOAR community bureaucracy (because that has worked so well!) by forming a "Legacy SIG" to deal with the problem. So RHEL 10? Nope, probably more like RHEL 11...

Posted Apr 22, 2022 8:22 UTC (Fri):
Sometimes you're even forced to abandon hardware by lack of support from new software, and for what justification, beyond "look how great the new version is" ? I used to have machines that took 3 seconds to start to boot from power-on in the past. At work in the lab we have a UEFI machine that takes more than one minute doing whatever in your back before trying to boot, and you have just 2 seconds to decide what device to boot from so you're forced to stay in front counting in your head. That's one of the machines I run test kernels on... It's a good example of crap I don't need and that significantly degrades my experience by preventing me from remotely booting test kernels. It's not about wanting to stay in the previous century, it's a concrete example of "improvements" that I didn't need and that makes users suffer for no reason except some vendors forcefully pushing that down their customers' throat. It would be nice if software developers could sometimes try to argument their improvements as benefits perceived by their *users* and not only by themselves as software maintainers. Just claiming that "new feature X is much better and if you don't want to adopt it we'll remove the previous one and you'll have no other choice" isn't exactly how the free software movement started, quite the opposite in fact. I remember the time when we were proud to recycle old machines to make powerful Linux servers. Nowadays some linux distros force you to trash powerful machines. There must be something really wrong with that policy. The only reason I'm thinking about is the distro's policy possibly being dictated by too powerful companies whose business does not benefit from small systems. But surely I'm wrong on all that line and it's normal to trash perfectly working hardware, I'm the only one concerned about e-waste and with purposely spending my money to buy less pleasant replacement hardware... Posted Apr 25, 2022 7:30 UTC (Mon) by That would be insecure working hardware, connected to the network that requires more "power" than current solutions, which adds to the environmental problem. The people/companies that live by a concept called "carbon footprint" installed by the oil companies as part of an deceptive PR campaigns [1] on their behalf, the biggest one in history and claim that they care about the environment should not be buying/using anything involving computers,solar panels, tv's etc since in the devices manufacturing process is an chemical ( that is among few that was conveniently left out of the Kyoto Protocol international climate change agreement ) called Nitrogen Trifluoride(NF3) is being used. "The gas is 17,000 times more potent as a global warming agent than a similar mass of carbon dioxide. It survives in the atmosphere about five times longer than carbon dioxide" [2] People wont like what I'm about to say but the fact is we are way beyond point of no return for our planet at this point so if people genuenly care about the environment then they should disconnect, find another line of work and go and live like an Amish in the literal sense and figuring out solution that a) change the worlds economy in an instant and b) reduce the human population because *none* of the solution out there fix anything ( but they do produce profit ), at best they just delay it. 
This whole act of people pretending to be "green" to ease their environmental guilt - which has been installed by their government in conjunction with the oil companies and a trillion-dollar environmental industry - is just ridiculous.

1. https://mashable.com/feature/carbon-footprint-pr-campaign...
2. https://scripps.ucsd.edu/news/potent-greenhouse-gas-more-...
3. https://www.marketquest.biz/report/108079/global-nitrogen...

Posted Apr 25, 2022 8:49 UTC (Mon): Sources: https://en.wikipedia.org/wiki/Nitrogen_trifluoride#Greenh... https://unfccc.int/process-and-meetings/transparency-and-... Perfect is the enemy of good. If we only accept perfect solutions we'll never get anywhere. I don't disagree with your sentiment that we're already basically screwed, but I don't think that's an excuse to just give up completely.

Posted Apr 25, 2022 17:36 UTC (Mon): It's not like we are getting anywhere without the solutions being perfect now, is it. And it's not like the real/perfect solutions aren't known - changing the world's economy and reducing the human population - they just can't realistically be implemented. Well, technically they can: the former will never be allowed to happen and the latter will never be socially accepted.

> I don't disagree with your sentiment that we're already basically screwed,

Biodiversity, pollution of earth, water and air - yup, we are pretty fucked.

> but I don't think that's an excuse to just give up completely.

True; we can always hope that a good-size meteor hits the planet.

Posted Apr 25, 2022 18:22 UTC (Mon): To the best of my knowledge, we've undershot pretty much every prediction since, and the forecast for 2100 or 2150 is only 3Bn ... Populations naturally boom and bust, and it looks like we might be hitting bust, luckily for Earth ...

Posted Apr 25, 2022 19:08 UTC (Mon): And who does not want to go out in a blaze of glory of kinetic energy = mass/2 * velocity^2? It sure beats dying of old age, doesn't it...

Posted Apr 26, 2022 12:50 UTC (Tue): (There is probably also something related to falling death rates: if you know that most of your children will survive, you'll have fewer. But the economic incentive is substantial even in the absence of that.)

Posted Apr 25, 2022 14:20 UTC (Mon): This actually is the part I agree with :-) Note that I wasn't doing some greenwashing, just explaining that I'm not seeing any reason to trash perfectly working hardware (which comes with costs and waste) just for the sake of making some developers' lives easier when they claim that my hardware is dead.

Posted Apr 25, 2022 15:31 UTC (Mon): Running costs are only part of the equation. How much environmental damage is done making new and disposing of obsolete hardware? Surely the extra costs of running old hardware may well be cheaper ... Much as I dislike the waste-to-power plant just down the road (and even more so because we're a poor East London borough having all of posh West London's waste dumped on us), the fact is it is very environmentally friendly - it means a load of waste is converted efficiently to energy and CO2 rather than inefficiently converted to methane ... and it leaves a load of oil/coal unmined and in the ground. If you're being green, you need to look at the big picture, not just minimise your costs at the expense of creating even bigger costs elsewhere (cough cough, the government pricing our steel industry out of existence, cough cough).

Just in case you haven't heard of it, does `sudo grub2-reboot` work for you? You run it from the Linux command line, specifying a grub menu entry, then reboot: Grub2 uses that menu entry just that once.

Posted Apr 25, 2022 12:15 UTC (Mon): Works for me on Fedora.
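A hedged sketch of that one-shot workflow - the entry title below is a placeholder, and grub2-reboot only takes effect when GRUB_DEFAULT=saved is configured, which is essentially the caveat raised in the next comments:

```sh
# List what GRUB currently has saved, then pick an entry for the next boot only.
sudo grub2-editenv list                          # shows saved_entry / next_entry
sudo grub2-reboot "Fedora Linux (test kernel)"   # a menu index such as 2 also works
sudo reboot
# After that single boot, GRUB falls back to the saved default entry.
```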
Posted Apr 25, 2022 14:23 UTC (Mon): However, it only works when the machine reaches grub, and that one is particularly stubborn and decides to boot again from the first disk, because boot priorities are reset every second boot or so... I simply gave up doing kernel testing on that one.

Posted Apr 25, 2022 15:45 UTC (Mon): Last time I read the grub documentation, it told you how to change the default boot entry from entry one, and then had the caveat "this only works if ...". Of course, my system breaks that "if"...

Posted Apr 22, 2022 1:06 UTC (Fri): This applies to laptops, desktops, and server hardware. I haven't used a system that doesn't support EFI-only mode (i.e. no oproms or anything required) for at least 10 years. In any event, I would like Fedora to stop silently installing in BIOS mode; I've made that mistake several times, and you need to reinstall the whole OS to fix it.

Posted Apr 22, 2022 10:52 UTC (Fri): BIOS is _dead_ in the sense that new hardware doesn't support it and it's not being developed - and when Linux eventually is forced to move to attestation by market pressure, it'll be dead anyway. Although I have a 25-year-old laptop somewhere in my junkpile - I'm hardly expecting to resurrect it - and it's been some years since I've _needed_ BIOS (except for a couple of HP Microservers, which are annoying in that firmware is a paid-for support-contract upgrade).

Posted Apr 21, 2022 12:23 UTC (Thu): I would suggest looking at some maps with rivers on them.

Posted Apr 21, 2022 16:08 UTC (Thu): The criticism isn't of the wording, it's of the content. Water frequently does not take the shortest route, as you can determine by looking at a map. Instead, rivers meander all over the place. A river can even erode through the soil into the underlying rock, causing its old meandering path to be literally set in stone. There may be some truth to the idea of people behaving like water, but it has more to do with their path being heavily determined by history. Both will usually take an established route even when it is long and convoluted. It's only the occasional dramatic event that causes rivers - or people - to change their course.

Posted Apr 21, 2022 17:38 UTC (Thu): Those that are bothered by the inaccuracy of that can replace "least resistant" with "the distribution of flow that will lead to the least 'total' resistance", but most people should have gotten the gist of what I meant.

Posted Apr 21, 2022 18:18 UTC (Thu): That said, why should UEFI miracles start happening when RHEL 10 gets released?

In the Phoronix forum thread about the same topic, "skeevy420" mentions the Clover bootloader, which has the ability to emulate a UEFI. I'm sharing the link to 1) spread the knowledge (it was the first time I read about that project) and 2) possibly get some interesting takes from users of that project :)

It is also possible to have two different partition tables (GPT and legacy) on the same disk.

Posted Apr 27, 2022 11:57 UTC (Wed): An installer can be booted as legacy HD-media, legacy optical media, UEFI HD-media or UEFI CD. In principle it would be possible for an installed system to support both UEFI and legacy. I do see a few issues, though (a sketch of the registration step from the first issue follows this list):

1. Operating systems using UEFI installed on fixed drives are supposed to register themselves with the firmware (which requires the installer to be running in UEFI mode) rather than relying on a fixed entry point on the drive. In practice it's possible to boot using the "removable media path" even on a fixed drive, but it's not the official way to do things.
2. While the combination of BIOS boot and GPT is technically feasible, it's not exactly standard; I'm not sure if grub supports it or if a hybrid partition table (which is its own can of worms) is needed.
3. There is no guarantee that what works in one mode will work in the other.
4. Multiboot becomes even more of a can of worms than usual.
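As a hedged illustration of that registration step - the device, partition number and loader path here are all placeholders - a manually created UEFI boot entry usually goes through efibootmgr:

```sh
# Register a boot entry with the firmware; assumes the ESP is partition 1
# of /dev/sdX and uses Fedora's shim as the loader (both are examples).
sudo efibootmgr --create --disk /dev/sdX --part 1 \
     --label "Fedora" --loader '\EFI\fedora\shimx64.efi'
sudo efibootmgr   # with no arguments: list entries and the current BootOrder
```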
Posted Apr 27, 2022 14:55 UTC (Wed): GRUB supports booting from a GPT disk in BIOS mode, but you will need an extra BIOS boot partition. A disk with an MBR partition table usually contains a gap between the boot record and the first partition, which GRUB takes advantage of. GPT-partitioned disks do not (typically) have this "no man's land", so a separate partition is used instead. Using a partition is cleaner anyway, and GPT also does not have a practical limit on the number of partitions a disk can contain, so one extra partition doesn't make a big difference.

Posted Apr 26, 2022 12:43 UTC (Tue): This fubar is becoming rarer now because few machines with existing DOS partition tables are getting GPT-reformatted, and the installers that do that are using wipefs or equivalent, and thus are *actually* wiping out the old one properly. But oh, good grief, was it confusing for a few years.
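A hedged sketch of the extra BIOS boot partition described above - sgdisk comes from the gdisk package, /dev/sdX is a placeholder, and these commands are destructive to that disk:

```sh
# Create the small partition GRUB embeds its core image into when
# BIOS-booting from GPT, then a root partition, then install GRUB.
sudo sgdisk --new=1:0:+1MiB --typecode=1:ef02 /dev/sdX   # BIOS boot partition
sudo sgdisk --new=2:0:0     --typecode=2:8300 /dev/sdX   # Linux filesystem
sudo grub2-install --target=i386-pc /dev/sdX             # Fedora's GRUB install
```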
( like LTS distribution or something like I guess slackware which presumably looks and operate just like it did when it was initally created at least it's website is most certainly from that era ) ## Fedora considers deprecating legacy BIOS **wtarreau** (subscriber, #51152) [Link] (20 responses) > There is nothing hypothetical about them at all. However it surely guarantees that I will eventually lose my data when not being able to recover my system after some bugs, crashes or other issues with my machine. ## Fedora considers deprecating legacy BIOS **johannbg** (guest, #65743) [Link] (1 responses) ## Fedora considers deprecating legacy BIOS **jwarnica** (subscriber, #27492) [Link] I won't be that guy to say that if you don't have anything to hide then law enforcement will leave you alone. ## Fedora considers deprecating legacy BIOS **nix** (subscriber, #2304) [Link] (17 responses) ## Fedora considers deprecating legacy BIOS **rgmoore** (**✭ supporter ✭**, #75) [Link] Secure boot would protect against that, but I still have it turned off because it would also make it more likely to turn a moderately bad boot problem into a disastrous one, and frankly the system failing to boot because of UEFI malware *is* a disaster, arguably worse for me than it booting with the malware active would be. ## Fedora considers deprecating legacy BIOS **bartoc** (subscriber, #124262) [Link] > how common *is* UEFI malware anyway? ## Fedora considers deprecating legacy BIOS **abatters** (**✭ supporter ✭**, #6932) [Link] (14 responses) ## Fedora considers deprecating legacy BIOS **johannbg** (guest, #65743) [Link] (13 responses) https://www.insyde.com/security-pledge devices grows. Securing the firmware layer is often overlooked, but it is a single point of failure in devices and is one of the stealthiest methods in which an attacker can compromise devices at scale. Over the past few years, hackers have increasingly targeted firmware to launch devastating attacks." ## Fedora considers deprecating legacy BIOS **wtarreau** (subscriber, #51152) [Link] (12 responses) ## Fedora considers deprecating legacy BIOS **Cyberax** (**✭ supporter ✭**, #52523) [Link] ## Fedora considers deprecating legacy BIOS **mjg59** (subscriber, #23239) [Link] (10 responses) ## Fedora considers deprecating legacy BIOS **khim** (subscriber, #9252) [Link] (6 responses) **please** don't tell about the problematic situation with BIOS. It wasn't problematic when people cared. It **is**, of course, became problematic when people started thinking only about flexibility and forgot that it's not a good idea to have computers which are trivially bricked.## Fedora considers deprecating legacy BIOS **pizza** (subscriber, #46) [Link] (4 responses) >Year 2000 edition had a nice, simple jumper which made ROM read-only. Yes, certain change in configuration cause complaints at boot, but it was a simple matter of changing its position for one boot and return it back after that. > Meanwhile, the other 99.999% of motherboards lacked that feature, including, as you mentioned, later versions of that same motherboard. One data point does not a generalization make. ## Fedora considers deprecating legacy BIOS **khim** (subscriber, #9252) [Link] (3 responses) **surprise** to me that they would remove it.**can not be protected by design** — and I'm supposed to use for sake of “security”? 
Puhlease.## Fedora considers deprecating legacy BIOS **farnz** (subscriber, #17727) [Link] (1 responses) ## Fedora considers deprecating legacy BIOS **khim** (subscriber, #9252) [Link] ## Fedora considers deprecating legacy BIOS **ms-tg** (subscriber, #89231) [Link] ## Fedora considers deprecating legacy BIOS **wtarreau** (subscriber, #51152) [Link] ## Fedora considers deprecating legacy BIOS **stock** (guest, #5849) [Link] (2 responses) contrary : https://www.theregister.com/2022/04/27/microsoft-linux-vu... which is vulnerability within systemd and only happens on UEFI hardware. ## Fedora considers deprecating legacy BIOS **pizza** (subscriber, #46) [Link] ## Fedora considers deprecating legacy BIOS **johannbg** (guest, #65743) [Link] which is vulnerability within systemd and only happens on UEFI hardware. ## Fedora considers deprecating legacy BIOS **bartoc** (subscriber, #124262) [Link] (1 responses) ## Fedora considers deprecating legacy BIOS **mjg59** (subscriber, #23239) [Link] ## Fedora considers deprecating legacy BIOS **sub2LWN** (subscriber, #134200) [Link] ## Fedora considers deprecating legacy BIOS **LtWorf** (subscriber, #124958) [Link] ## Clouds and VPSes **rwmj** (subscriber, #5474) [Link] (39 responses) ## Clouds and VPSes **johannbg** (guest, #65743) [Link] (36 responses) ## Clouds and VPSes **rwmj** (subscriber, #5474) [Link] (24 responses) ## Clouds and VPSes **johannbg** (guest, #65743) [Link] (18 responses) ## Clouds and VPSes **dvdeug** (subscriber, #10998) [Link] (17 responses) ## Clouds and VPSes **johannbg** (guest, #65743) [Link] (16 responses) ## Clouds and VPSes **LtWorf** (subscriber, #124958) [Link] ## Clouds and VPSes **LtWorf** (subscriber, #124958) [Link] ## Clouds and VPSes **wtarreau** (subscriber, #51152) [Link] (13 responses) ## Clouds and VPSes **johannbg** (guest, #65743) [Link] (9 responses) It's use increases as the world is being deliberately pushed into adopting more technology as can be clearly seen in the market projections for the chemical [3]. 2. https://scripps.ucsd.edu/news/potent-greenhouse-gas-more-... 3. https://www.marketquest.biz/report/108079/global-nitrogen... ## Clouds and VPSes **kleptog** (subscriber, #1183) [Link] (6 responses) https://en.wikipedia.org/wiki/Nitrogen_trifluoride#Greenh... https://unfccc.int/process-and-meetings/transparency-and-... ## Clouds and VPSes **johannbg** (guest, #65743) [Link] (5 responses) ## Clouds and VPSes **Wol** (subscriber, #4433) [Link] (4 responses) Wol ## Clouds and VPSes **johannbg** (guest, #65743) [Link] (1 responses) ## Clouds and VPSes **Wol** (subscriber, #4433) [Link] Wol ## Clouds and VPSes **nix** (subscriber, #2304) [Link] (1 responses) ## Clouds and VPSes **JanC_** (guest, #34940) [Link] ## Clouds and VPSes **wtarreau** (subscriber, #51152) [Link] (1 responses) ## Clouds and VPSes **Wol** (subscriber, #4433) [Link] Wol ## Clouds and VPSes **james** (subscriber, #1325) [Link] (2 responses) ... a UEFI machine that takes more than one minute doing whatever in your back before trying to boot, and you have just 2 seconds to decide what device to boot from so you're forced to stay in front counting in your head. Just in case you haven't heard of it, does `sudo grub2-reboot` work for you? 
You run it from the Linux command line, specifying a grub menu entry, then reboot: Grub2 uses that menu entry just that once.

## Clouds and VPSes

**wtarreau** (subscriber, #51152) [Link]

## Clouds and VPSes

**Wol** (subscriber, #4433) [Link]

Wol

## Clouds and VPSes

**bartoc** (subscriber, #124262) [Link] (4 responses)

## Clouds and VPSes

**rwmj** (subscriber, #5474) [Link] (3 responses)

## Clouds and VPSes

**amacater** (subscriber, #790) [Link]

## Clouds and VPSes

**mjg59** (subscriber, #23239) [Link] (1 responses)

## Clouds and VPSes

**johannbg** (guest, #65743) [Link]

## Clouds and VPSes

**leoluk** (guest, #97665) [Link] (10 responses)

## Clouds and VPSes

**johannbg** (guest, #65743) [Link] (6 responses)

## Clouds and VPSes

**dullfire** (guest, #111432) [Link] (5 responses)

## Clouds and VPSes

**smoogen** (subscriber, #97) [Link] (4 responses)

## Clouds and VPSes

**Wol** (subscriber, #4433) [Link]

Wol

## Clouds and VPSes

**rgmoore** (**✭ supporter ✭**, #75) [Link] (2 responses)

## Clouds and VPSes

**sfeam** (subscriber, #2841) [Link]

## Clouds and VPSes

**johannbg** (guest, #65743) [Link]

## Clouds and VPSes

**rwmj** (subscriber, #5474) [Link] (2 responses)

## Clouds and VPSes

**johannbg** (guest, #65743) [Link] (1 responses)

## Clouds and VPSes

**bartoc** (subscriber, #124262) [Link]

## Clouds and VPSes

**cortana** (subscriber, #24596) [Link]

## Clouds and VPSes

**dmoulding** (subscriber, #95171) [Link]

In the Phoronix forum thread about the same topic, "skeevy420" mentions the Clover bootloader, which has the ability to emulate UEFI.

## Fedora considers deprecating legacy BIOS

**Lionel_Debroux** (subscriber, #30014) [Link] (1 responses)

I'm sharing the link to 1) spread the knowledge (it was the first time I read about that project) and 2) possibly get some interesting takes from users of that project :)

## Fedora considers deprecating legacy BIOS

**mattdm** (subscriber, #18) [Link]

## sneaky dual-boot

**ballombe** (subscriber, #9523) [Link] (5 responses)

It is also possible to have two different partition tables (GPT and legacy) on the same disk.

## sneaky dual-boot

**pabs** (subscriber, #43278) [Link] (3 responses)

## sneaky dual-boot

**lsl** (guest, #86508) [Link]

## sneaky dual-boot

**plugwash** (subscriber, #29694) [Link] (1 responses)

Legacy optical media
UEFI HD-media
UEFI CD

2. While the combination of BIOS boot and GPT is technically feasible, it's not exactly standard; I'm not sure if grub supports it or if a hybrid partition table (which is its own can of worms) is needed.
3. There is no guarantee that what works in one mode will work in the other.
4. Multiboot becomes even more of a can of worms than usual.

## sneaky dual-boot

**jem** (subscriber, #24231) [Link]

> While the combination of BIOS boot and GPT is technically feasible, it's not exactly standard; I'm not sure if grub supports it or if a hybrid partition table (which is its own can of worms) is needed.

## sneaky dual-boot

**nix** (subscriber, #2304) [Link]

## Fedora considers deprecating legacy BIOS

**yuhong** (guest, #57183) [Link]
true
true
true
null
2024-10-12 00:00:00
2022-04-20 00:00:00
null
null
null
null
null
null
12,841,261
https://www.bloomberg.com/view/articles/2016-10-28/want-more-startups-build-a-better-safety-net
Bloomberg
null
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
24,133,389
https://github.com/bevyengine/bevy
GitHub - bevyengine/bevy: A refreshingly simple data-driven game engine built in Rust
Bevyengine
Bevy is a refreshingly simple data-driven game engine built in Rust. It is free and open-source forever!

Bevy is still in the early stages of development. Important features are missing. Documentation is sparse. A new version of Bevy containing breaking changes to the API is released approximately once every 3 months. We provide migration guides, but we can't guarantee migrations will always be easy. Use only if you are willing to work in this environment.

**MSRV:** Bevy relies heavily on improvements in the Rust language and compiler. As a result, the Minimum Supported Rust Version (MSRV) is generally close to "the latest stable release" of Rust.

- **Capable**: Offer a complete 2D and 3D feature set
- **Simple**: Easy for newbies to pick up, but infinitely flexible for power users
- **Data Focused**: Data-oriented architecture using the Entity Component System paradigm
- **Modular**: Use only what you need. Replace what you don't like
- **Fast**: App logic should run quickly, and when possible, in parallel
- **Productive**: Changes should compile quickly ... waiting isn't fun

- **Features:** A quick overview of Bevy's features.
- **News**: A development blog that covers our progress, plans and shiny new features.
- **Quick Start Guide:** Bevy's official Quick Start Guide. The best place to start learning Bevy.
- **Bevy Rust API Docs:** Bevy's Rust API docs, which are automatically generated from the doc comments in this repo.
- **Official Examples:** Bevy's dedicated, runnable examples, which are great for digging into specific concepts.
- **Community-Made Learning Resources**: More tutorials, documentation, and examples made by the Bevy community.

Before contributing or participating in discussions with the community, you should familiarize yourself with our **Code of Conduct**.

- **Discord:** Bevy's official discord server.
- **Reddit:** Bevy's official subreddit.
- **GitHub Discussions:** The best place for questions about Bevy, answered right here!
- **Bevy Assets:** A collection of awesome Bevy projects, tools, plugins and learning materials.

If you'd like to help build Bevy, check out the **Contributor's Guide**. For simple problems, feel free to open an issue or PR and tackle it yourself! For more complex architecture decisions and experimental mad science, please open an RFC (Request For Comments) so we can brainstorm together effectively!

We recommend checking out the Quick Start Guide for a brief introduction. Follow the Setup guide to ensure your development environment is set up correctly. Once set up, you can quickly try out the examples by cloning this repo and running the following commands:

```
# Switch to the correct version (latest release, default is main development branch)
git checkout latest
# Runs the "breakout" example
cargo run --example breakout
```

To draw a window with standard functionality enabled, use:

```
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .run();
}
```

Bevy can be built just fine using the default configuration on stable Rust. However, for really fast iterative compiles, you should enable the "fast compiles" setup by following the instructions here.

This list outlines the different cargo features supported by Bevy. These allow you to customize the Bevy feature set for your use-case.

Bevy is the result of the hard work of many people. A huge thanks to all Bevy contributors, the many open source projects that have come before us, the Rust gamedev ecosystem, and the many libraries we build on.

A huge thanks to Bevy's generous sponsors.
Bevy will always be free and open source, but it isn't free to make. Please consider sponsoring our work if you like what we're building. This project is tested with BrowserStack. Bevy is free, open source and permissively licensed! Except where noted (below and/or in individual files), all code in this repository is dual-licensed under either: - MIT License (LICENSE-MIT or http://opensource.org/licenses/MIT) - Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0) at your option. This means you can select the license you prefer! This dual-licensing approach is the de-facto standard in the Rust ecosystem and there are very good reasons to include both. Some of the engine's code carries additional copyright notices and license terms due to their external origins. These are generally BSD-like, but exact details vary by crate: If the README of a crate contains a 'License' header (or similar), the additional copyright notices and license terms applicable to that crate will be listed. The above licensing requirement still applies to contributions to those crates, and sections of those crates will carry those license terms. The license field of each crate will also reflect this. For example, `bevy_mikktspace` has code under the Zlib license (as well as a copyright notice when choosing the MIT license). The assets included in this repository (for our examples) typically fall under different open licenses. These will not be included in your game (unless copied in by you), and they are not distributed in the published bevy crates. See CREDITS.md for the details of the licenses of those files. Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
true
true
true
A refreshingly simple data-driven game engine built in Rust - bevyengine/bevy
2024-10-12 00:00:00
2020-01-18 00:00:00
https://repository-images.githubusercontent.com/234798675/18016580-dab2-11ea-9864-452f7149499c
object
github.com
GitHub
null
null
16,491,950
https://www.simform.com/building-scalable-application-aws-platform/
How to Build a Scalable Application up to 1 Million Users on AWS
Jignesh Solanki
## How to Build a Scalable Application up to 1 Million Users on AWS

Suppose you've built a web application and started getting a few customers. After some feedback and suggestions, you are ready with a full-fledged product. Now, your marketing team shares your app on Product Hunt to acquire new customers. Suddenly, thousands of visitors are using your app, and at one point they are unable to use it.

You've tested your app and it is working fine. So what happened?

"This is not a bug but a problem of scalability. Your cloud architecture is not designed to scale with increasing load."

I've seen many companies that focus more on features and less on scalability. Creating applications that are both resilient and scalable is an essential part of any application architecture. In this blog, you will learn how to build a highly scalable web application architecture that can scale with increasing load.

## What is a Scalable Application?

Scalability refers to the ability of a system to give reasonable performance under growing demands (larger data sets, higher request rates, a combination of size and velocity, etc.). It should work well with 1 user or 1 million users and handle spikes in traffic automatically. By adding and removing resources only when needed, scalable apps consume only the resources necessary to meet demand.

When talking about scalability in cloud computing, you will often hear about two main ways of scaling – horizontal or vertical. Let's look deeper into these terms.

### Vertical scaling (Scaling Up)

Scaling up or vertical scaling refers to resource maximization of a single unit to expand its ability to handle the increasing load. In hardware terms, this includes adding processing power and memory to the physical machine running the server. In software terms, scaling up may include optimizing algorithms and application code.

### Horizontal scaling (Scaling Out)

Scaling out or horizontal scaling refers to adding more units to the app's cloud architecture. This means adding more units of smaller capacity instead of a single unit of larger capacity. The requests for resources are then spread across multiple units, reducing the excess load on a single machine.

## Scalable Web Architecture

Whether you are looking to scale horizontally or vertically, a scalable web architecture will be the base you need. A scalable architecture enables web applications to adapt to user demand and offer the best performance.

All the components in a monolithic architecture are tightly coupled, so a single failure in any of the several layers can cause the entire application to fail. On the other hand, a scalable web architecture requires a loosely coupled structure of different modules interacting through lightweight mechanisms. A typical example of scalable web architecture is one using the LAMP (Linux, Apache, MySQL, PHP) stack. It offers vital benefits like scalability, high availability, loosely coupled services (fault tolerance), and distributed computing.

**Scalability- **A scalable web architecture like LAMP can enable horizontal scalability. The distributed system can quickly expand or contract resource pools as per scaling needs by adding or removing nodes.

**High Availability- **Higher uptime and lower downtime are essential to any business. Especially for eCommerce businesses, an hour's downtime can lead to massive revenue loss.
A scalable web architecture needs to consider redundancy of critical components and rapid disaster recovery, even in a partial system failure, to ensure higher availability.

**Fault-tolerant- **There is no single point of failure, and the system should be able to run efficiently even if a component fails.

**Share Nothing – **This is an architecture pattern where each component is independent and capable of carrying out specific functions. It can be achieved for different web application layers like,

- For the database layer: through partitioning or sharding
- For the cache layer: Memcache client-side partitioning that stores a portion of application data in the cache, while the rest is managed by the RDBMS (Relational Database Management System)
- For the computing layer: job queues that hold tasks until the bare minimum for execution is met

A typical scalable web architecture will have four key layers,

- Web servers
- Database servers
- Load balancers
- Shared file servers

Each of these layers is scaled independently, with the database layer being the toughest to scale. Employing the master-slave replication approach is an excellent way to scale databases efficiently. Each master node is a powerful machine that can read/write data, whereas a slave node can only read data. At the same time, a load balancer will ensure the distribution of load across master nodes (see the read-routing sketch after the examples below).

For optimal performance, the caching hierarchy is important. Here, the application's local cache can handle common tasks like bootstrapping, while application server caches can handle complex and big queries. Further, job queues are a great way to control the high-CPU operations needed for complex data processing. Finally, a Content Distribution Network (CDN) can be leveraged for static public content, and the system design should be fault-tolerant.

Similarly, when it comes to cloud-based scalable architectures, Amazon Web Services has everything you may need. Beyond application scalability, AWS also offers several service integrations that enhance functionality. As mentioned earlier, a scalable application needs different layers to scale independently rather than as a stack of tightly coupled components. Now, let's look at some successful applications that leveraged such scalable web architecture.

**Examples of Top Scalable Applications:**

- **Evernote-** A note-taking web app that uses a microservices architecture to improve the velocity of developing new features and support more than 5 million notes and 12 billion file attachments
- **Bitly-** The URL shortening service leverages a microservices architecture to create decentralized structures and isolate the failure of components, supporting 20 billion clicks per month.
- **Dropbox-** Uses an orchestration service like Atlas to manage distributed database services through self-contained clusters to store more than one million files every 15 minutes.
- **Slack-** Leverages microservices to enable real-time communication, video calls, and screen sharing for more than 12 million daily active users.
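To make the read/write split concrete, here is a minimal, illustrative Python sketch of the routing idea described above: writes go to the single primary (master) while reads rotate across replicas. The hostnames are placeholders, and a real deployment would more likely use a database proxy or a driver-level feature than hand-rolled routing.

```python
import itertools

class ReadWriteRouter:
    """Send writes to the primary; spread reads across replicas round-robin."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)  # endless round-robin iterator

    def host_for(self, sql):
        # Naive classification: only plain SELECTs are safe to serve from a replica.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._replicas)
        return self.primary

# Hypothetical hostnames, for illustration only.
router = ReadWriteRouter(
    primary="db-primary.internal",
    replicas=["db-replica-1.internal", "db-replica-2.internal"],
)

print(router.host_for("SELECT * FROM orders"))          # db-replica-1.internal
print(router.host_for("SELECT * FROM users"))           # db-replica-2.internal
print(router.host_for("INSERT INTO orders (id) VALUES (1)"))  # db-primary.internal
```

Note that replication lag means even this split can serve stale reads; routing a user's reads to the primary for a short window after their own writes is a common refinement.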
## Challenges to Consider When Building Scalable Web Applications

There are several challenges to building a scalable web application, like the choice of framework, testing, deployment, and even infrastructure needs.

### Right Framework

Choosing the proper framework for a scalable web application can be challenging. First, you need to analyze the system architecture along with the application features and technology stack. After this, one can decide whether a particular framework meets all the requirements.

Frameworks like Django and Ruby on Rails are great options for building scalable web applications. However, if you compare Ruby on Rails with Node.js, the latter offers an excellent capacity to run complex asynchronous queries. Other frameworks you can try are ASP.NET, AngularJS, React.js, Laravel, etc.

### Testing Needs

Testing is an essential part of the entire web application development lifecycle. Load testing simulations are especially crucial because they allow you to understand the scaling capabilities of your application. They enable the measurement of response times, throughput rates, and resource consumption to identify the app's breaking point.

However, the complexity of the protocols on which the load tests are based makes them challenging to execute. For example, if you are performing HTTP-based load testing, there are several dynamic parameters to consider, like cookies or session IDs. There are also several other performance-testing requirements for measuring and enhancing app scalability. From operating system compatibility of the application to real-time error detection, you need to create tests for several different conditions and even simulate the app usage experience of many users.

One possible solution is choosing Test-Driven Development (TDD) to leverage an iterative development approach. It is a software development approach that relies on converting web app requirements into test cases before fully developing them. It allows developers to use different test cases and track all the web app development through iterations. Due to the "test first" approach of TDD, there is rapid feedback on the functionality of applications predefined by test code. So, when you scale the application, if a function does not perform as expected, you can quickly change the code before the next iteration.

### Deployment Challenges

Deployment of multiple services interacting with each other needs a reliable communication mechanism, which is challenging. That is why you need to test these communication mechanisms and orchestration services simultaneously to ensure fewer disruptions. Failing to do this efficiently can slow down your development process due to parallel development and testing activities. One possible solution is the use of CI/CD systems. Such a system will inherently allow you to build features without worrying about testing and deployment activities.

### Infrastructure Needs

It is challenging to forecast the exact number of users, so gauging the actual infrastructure requirements to scale accordingly is hard. Depending on the situation, you can employ two types of scaling – horizontal and vertical. Both have unique use cases, discussed below.

You can choose horizontal scaling when,

- You can't risk downtime due to a single point of failure
- Your scope of upgrading the application is higher
- You don't want vendor lock-in and want to explore multiple vendor services
- You want to improve the resilience of systems by using multiple services

At the same time, you can choose vertical scaling when,

- You need a simple architecture with reduced operational costs
- You want a system that can scale without consuming much power
- You need an easy-to-install and scalable system with lower licensing costs
- You want the compatibility of applications maintained.

## Characteristics of the Scalable Application

What does scalability look like?
There are some areas where an app needs to excel to be considered scalable.

### Performance

First and foremost, the application must operate well under stress with low latency. The speed of a website affects usage and user satisfaction, as well as search engine rankings, a factor that directly correlates to revenue and retention. As a result, creating a scalable web application architecture that is optimized for fast responses and low latency is key.

### Availability and Reliability

These are closely related and equally necessary. Scalable apps rarely if ever go down under stress. They need to reliably produce data upon request and not lose stored information.

### Manageability

The manageability of the cloud architecture equates to the scalability of operations: maintenance and updates. Things to consider for manageability are the ease of diagnosing and understanding problems when they occur, the ease of making updates or modifications, and how simple the system is to operate (i.e., does it routinely operate without failure or exceptions?).

### Cost

Highly scalable applications don't have to be unreasonably expensive to build, maintain, or scale. Planning for scalability during development allows the app to expand as demand increases without causing undue expenses.

You have plenty of options when choosing a cloud provider for a high-performance web application architecture. The three leading cloud computing vendors, AWS, Microsoft Azure and Google Cloud, each have their own strengths and weaknesses that make them ideal for different use cases. In this blog, I've chosen AWS to show you how to build a scalable web application.

AWS is a subsidiary of the renowned company Amazon; it provides different cloud-centered services for various requirements. AWS holds the highest market share in cloud computing, at 33%. It provides excellent documentation on each of its services, helpful guides and white papers, and reference architectures for common apps.

## Steps to build a scalable application based on increasing users from 1 to 1 million

### 1 User

### Initial Setup of Cloud Architecture

You are the only one using the app, on localhost. The start can be as simple as deploying an application in a box. Here you need to use the following AWS services to get started.

- Amazon Machine Images (AMI)
- Amazon EC2
- Amazon VPC
- Amazon Route 53

### Amazon Machine Images (AMI)

An Amazon Machine Image (AMI) gives the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance. An AMI includes a template for the root volume of the instance, launch permissions that control which AWS accounts can use the AMI to launch instances, and a block device mapping that specifies the volumes to attach to the instance when it's launched.

### Amazon Elastic Compute Cloud (Amazon EC2)

Amazon Elastic Compute Cloud provides scalable computing capacity in the AWS cloud. This eliminates upfront hardware investment, so you can develop and deploy applications faster.

### Amazon Virtual Private Cloud (Amazon VPC)

Amazon Virtual Private Cloud lets you launch AWS resources in a virtual network. It gives complete control over the virtual networking environment, including selection of an IP address range, subnet creation, and configuration of route tables and network gateways.

### Amazon Route 53

Amazon Route 53 is a highly available and scalable cloud DNS web service. Amazon Route 53 effectively connects user requests to infrastructure running in AWS – such as Amazon EC2 instances, Elastic Load Balancing load balancers or Amazon S3 buckets.
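As a concrete starting point, this single-box setup can be scripted with the AWS SDK for Python (boto3). The sketch below is illustrative rather than a production template; the AMI ID, key pair, security group ID and tag values are placeholders you would replace with your own.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one small instance from an AMI (all IDs below are placeholders).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # your AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",                  # an existing EC2 key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "my-app-web-1"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched", instance_id)
```

From here, a Route 53 record pointing your domain at the instance's public address completes the one-user architecture.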
Here you need a bigger box. You can simply choose a larger instance type, which is called vertical scaling. At the initial stage, vertical scaling is enough, but we can't scale vertically indefinitely. Eventually, you'll hit a wall. Also, it doesn't address failover and redundancy.

### USERS > 10

### Create multiple hosts and choose the database

First, you need to choose the database, as users are increasing and generating data. It's advisable to start with a SQL database, for the following reasons:

- Established and well-worn technology.
- Community support and the latest tools.
- We aren't going to break SQL DBs in our first 10 million users.

Note: you can choose a NoSQL database if your users are going to generate a large volume of data in various forms.

At this stage, you have everything in a single bucket. This architecture is harder to scale and complex to manage in the long run. It's time to introduce a multi-tier architecture to separate the database from the application.

### USERS > 100

### Store the database on Amazon RDS to ease operations

When users increase to 100, database deployment is the first thing that needs to be done. There are two general directions for deploying a database on AWS. The first option is to use a managed database service such as Amazon Relational Database Service (Amazon RDS) or Amazon DynamoDB; the second is to host your own database software on Amazon EC2.

- Amazon RDS
- Amazon DynamoDB

### Amazon RDS

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Amazon RDS provides six familiar database engines to choose from, including Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB.

### USERS > 1000

### Create multiple availability zones to improve availability

With the current architecture, you may face availability issues: if the host for your web app fails, the app goes down. So you need another web instance in another Availability Zone, where you will also put the slave database for RDS.

- Elastic Load Balancer (ELB)
- Multi-AZ Deployment

Here you have to use an Elastic Load Balancer (ELB) to balance the load between the two web host instances in the two AZs.

### Elastic Load Balancer (ELB)

ELB distributes the incoming application traffic across EC2 instances. It is horizontally scaled, imposes no bandwidth limit, supports SSL termination, and performs health checks so that only healthy instances receive traffic. This configuration has 2 instances behind the ELB, but we can have thousands of instances behind the ELB. This is horizontal scaling.

At this stage, you have multiple EC2 instances serving thousands of users, which ultimately increases your cloud cost. To reduce the cost, you have to optimize instance usage based on the varying load.
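The load-balancer wiring can also be scripted. Below is an illustrative boto3 sketch, using the modern `elbv2` API (Application Load Balancers) rather than classic ELB; every resource ID is a placeholder. It creates a load balancer spanning two subnets in different AZs, a target group with a health check, and registers the two web instances behind an HTTP listener.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Application Load Balancer across two subnets in different AZs (placeholder IDs).
lb = elbv2.create_load_balancer(
    Name="my-app-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

# Target group with a health check, so only healthy instances receive traffic.
tg = elbv2.create_target_group(
    Name="my-app-web",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckPath="/healthz",
)["TargetGroups"][0]

# Register the two web instances and forward all HTTP traffic to them.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-aaaa1111"}, {"Id": "i-bbbb2222"}],
)
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```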
#### Check How We Helped Our Client, Swift Shopper, To Build E-commerce Solutions On the AWS Stack

### Users: 10,000s – 100,000

### Move static content to object-based storage for better performance

To improve performance and efficiency, you'll need to add more read replicas to RDS. This will take load off the write master database. Furthermore, you can reduce the load on web servers by moving static content to Amazon S3 and Amazon CloudFront.

- Amazon S3
- Amazon CloudFront
- Amazon DynamoDB
- Amazon ElastiCache

### Amazon S3

Amazon S3 is object-based storage. It is not attached to an EC2 instance, which makes it best suited to storing static content like JavaScript, CSS, images and videos. It is designed for 99.999999999% durability and can store multiple petabytes of data.

### Amazon CloudFront

Amazon CloudFront is a Content Delivery Network (CDN). It retrieves data from an Amazon S3 bucket and distributes it to multiple data center locations. It caches content at the edge locations to provide users with the lowest-latency access.

Furthermore, to reduce the load on database servers, you can use DynamoDB (a managed NoSQL database) to store session state. For caching data from the database, you can use Amazon ElastiCache.

### Amazon DynamoDB

Amazon DynamoDB is a fast and flexible NoSQL database service for applications that need consistent, single-digit-millisecond latency. It is a fully managed cloud database and supports document and key-value store models.

### Amazon ElastiCache

Amazon ElastiCache is Caching-as-a-Service. It removes the complexity associated with deploying and managing a distributed cache environment. It's a self-healing infrastructure: if nodes fail, new nodes are started automatically.

### Users > 500,000

### Setting up Auto Scaling to meet varying demand automatically

At this stage, your architecture is quite complex to manage with a small team, and without proper monitoring and analysis it's difficult to move forward.

- Amazon CloudWatch
- AWS Elastic Beanstalk
- AWS OpsWorks
- AWS CloudFormation
- AWS CodeDeploy

Now that the web tier is much more lightweight, it's time for Auto Scaling! "Auto Scaling is nothing but an automatic resizing of compute clusters based on demand."

Auto Scaling enables "just-in-time provisioning," allowing users to scale infrastructure dynamically as load demands. It can launch or terminate EC2 instances automatically based on spikes in traffic, and you pay only for the resources needed to handle the load. For monitoring you can use the following AWS services:

### Amazon CloudWatch

Amazon CloudWatch provides a rich set of tools to monitor the health and resource utilization of various AWS services. The metrics collected by CloudWatch can be used to set up alarms, send notifications, and trigger actions upon alarms firing. Amazon EC2 sends metrics to CloudWatch that describe your Auto Scaling instances. The Auto Scaling group can include multiple AZs, up to as many as are in the same region. Instances can pop up in multiple AZs not just for scalability, but for availability. We need to add monitoring, metrics, and logging to optimize Auto Scaling efficiently.

- **Host-level metrics.** Look at a single instance's CPU within an Auto Scaling group and figure out what's going wrong.
- **Aggregate-level metrics.** Look at metrics on the Elastic Load Balancer to understand the performance of the entire set of instances.
- **Log analysis.** Look at what the application is telling you using **CloudWatch Logs**. **CloudTrail** helps you analyze and manage logs.

If you have set up region-specific configurations in CloudWatch, it is not easy to combine metrics from different regions within an AWS monitoring tool. In that case you can use Loggly, a log management tool. You can send logs and metrics from CloudWatch and CloudTrail to Loggly and unify these logs with other data for a better understanding of your infrastructure and applications.
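To make the scaling loop concrete, here is an illustrative boto3 sketch (the group and policy names are placeholders) that attaches a target-tracking policy to an existing Auto Scaling group. With target tracking, Auto Scaling creates and manages the underlying CloudWatch alarms itself, adding instances when average CPU rises above the target and removing them when it falls.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep the group's average CPU near 50%; the CloudWatch alarms behind
# this policy are created and managed by Auto Scaling automatically.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-app-asg",       # an existing ASG (placeholder)
    PolicyName="target-50-percent-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```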
Squeeze as much performance as you can from your configuration; Auto Scaling can help with that. You don't want systems that are at 20% CPU utilization.

The infrastructure is getting big; it can scale to thousands of instances. We have read replicas, we have horizontal scaling, but we need some automation to help manage it all; we don't want to manage each individual instance. Here are some automation tools:

### AWS Elastic Beanstalk

AWS Elastic Beanstalk is a service that allows users to deploy code written in Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, NGINX, Passenger, and IIS.

### AWS OpsWorks

AWS OpsWorks provides a unique approach to application management. Additionally, AWS OpsWorks auto-heals the application stack, scales based on time or workload demand, and generates metrics to facilitate monitoring.

### AWS CloudFormation

AWS CloudFormation provisions resources using a template in JSON format. You have the option to choose from a collection of sample templates to get started on common tasks.

### AWS CodeDeploy

AWS CodeDeploy is a platform service for automating code deployment to Amazon EC2 instances and instances running on premises.

### Users > 1 million

### Use Service-Oriented Architecture (SOA) for better flexibility

To serve more than 1 million users you need to use a Service-Oriented Architecture (SOA) while designing large-scale web applications.

- Amazon Simple Queue Service (SQS)
- Amazon Simple Notification Service (SNS)
- AWS Lambda

In SOA, we need to separate each component from the respective tiers and create separate services. The individual services can then be scaled independently. Web and application tiers will have different resource requirements and different services. This gives you a lot of flexibility for scaling and high availability. AWS provides a host of generic services to help you build SOA infrastructure quickly. They are:

### Amazon Simple Queue Service (SQS)

It is a simple and cost-effective service to decouple and coordinate the components of a cloud application. Using SQS, sending, storing, and receiving messages between software components of any size can be executed easily.

### Amazon Simple Notification Service (SNS)

With SNS you can send messages to a large number of subscribers. The benefits are easy setup, smooth operation, and high reliability in sending notifications to all endpoints.

### AWS Lambda

AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume – there is no charge when your code is not running. You can also build a serverless architecture composed of functions that are triggered by events.

#### Let's Develop a Highly Scalable Application Together!

### Conclusion

The decision about how to approach scaling should be made upfront, because you never know when you are going to get popular! Also, crashing (or even just slow) pages leave your users unhappy and your app with a bad reputation. It ultimately affects your revenue. Learning how to build scalable websites takes time, a lot of practice, and a good amount of money. For companies that have such a need and are struggling to get this done, Simform is your best option for such a project. We've created scalable web apps for clients having millions of users. Contact us for a consultation and price quote.
true
true
true
Want to build a scalable application on AWS? Here is the step by step guide to building a scalable application to serve upto 1 million users.
2024-10-12 00:00:00
2018-12-10 00:00:00
https://www.simform.com/…ers-on-AWS-1.png
article
simform.com
Simform - Product Engineering Company
null
null
21,394,174
https://www.cnn.com/2019/10/29/tech/facebook-california-candidate-false-ads/
He’s running for governor to run false ads on Facebook. Now Facebook is stopping him | CNN Business
Donie O'Sullivan
A San Francisco man tried to call out Facebook’s controversial false ads policy by running false ads of his own. Now the company is barring him from doing so — and the man told CNN Business he’s now considering legal action. Adriel Hampton, a political activist, registered as a candidate for California’s 2022 gubernatorial election on Monday so he could take advantage of Facebook’s policy allowing politicians and political candidates to run false ads on the platform. On Tuesday evening, a Facebook (FB) spokesperson told CNN Business, “This person has made clear he registered as a candidate to get around our policies, so his content, including ads, will continue to be eligible for third-party fact-checking.” Ads from people and groups other than politicians are subject to fact-checking. Hampton had pulled the stunt with the hope of forcing the company to change its policy and stop allowing politicians to run false ads. Responding to Facebook’s decision, Hampton told CNN Business, “They have made a policy specific to me and I am running for California governor to regulate Facebook.” “I am going to look at suing them – I will immediately seek all available legal options,” he added. Hampton said he plans on recruiting other candidates across the country to run false ads on Facebook. “The genesis of this campaign is social media regulation and to ensure there is not an exemption in fact-checking specifically for politicians like Donald Trump who like to lie online,” he told CNN Business after registering his candidacy on Monday. “I think social media is incredibly powerful,” he said. “I believe that Facebook has the power to shift elections.” Hampton is the treasurer of “The Really Online Lefty League” PAC, which last Thursday began running a false ad on Facebook claiming that Senator Lindsey Graham backed the Green New Deal. The ad spliced together different audio of Graham speaking to make it sound like he said, “Simply put, we believe in the Green New Deal.” Graham never said that. The ad was eventually flagged by Facebook’s fact-checkers and was removed. While Facebook allows politicians to lie in ads, it does not allow PACs or other political groups to do so. While his main motivation for running for governor was to run Facebook ads, Hampton had suggested his campaign could become a fully fledged gubernatorial campaign. “Don’t count me out,” he said. His stunt came as Facebook executives face mounting pressure, including from inside the company, to make sweeping changes to how the social media site runs political ads. On Monday night, the top Democrat on the Senate Intelligence Committee Sen. Mark Warner wrote to Facebook CEO Mark Zuckerberg and warned that Facebook’s policies risked undermining American norms of “transparency, public deliberation and debate, openness, diversity of opinion, and accountability.” The Warner letter came just hours after The New York Times reported that more than 200 Facebook employees had signed a letter also raising concerns about the company’s political ad policy. On Tuesday, two senior Facebook employees wrote in a USA Today op-ed defending the company’s policy, “Anyone who thinks Facebook should decide which claims by politicians are acceptable might ask themselves this question: Why do you want us to have so much power?”
true
true
true
A San Francisco man tried to call out Facebook’s controversial false ads policy by running false ads of his own. Now the company is barring him from doing so — and the man told CNN Business he’s now considering legal action.
2024-10-12 00:00:00
2019-10-29 00:00:00
https://media.cnn.com/ap…100,c_crop/w_800
article
cnn.com
CNN
null
null
10,323,608
http://www.theverge.com/2015/9/30/9416579/spotify-discover-weekly-online-music-curation-interview
How Spotify’s Discover Weekly cracked human curation at internet scale
Ben Popper
# Tastemaker

## How Spotify's Discover Weekly cracked human curation at internet scale

### By Ben Popper | Photography by Alex Welsh

**In the '90s,** Aby Ngana Diop was the queen of taasu, a practice of ritual poetry performed by female griots in Senegal. Diop's distinctive vocals made her a sought-after performer at the weddings and funerals of the rich and powerful, but only a single album of her work is widely available — *Liital*, originally released in 1994. *Liital* took the traditional spoken word art form and merged it with the raucous modernity of electronic synth and drum loops. The record propelled her to superstar status in Senegal. Sadly, Diop died just three years later.

I hadn't heard of Diop until two months ago, when someone put her on a mixtape for me. It was the first track on the playlist, and my jaw just dropped. I got out of my chair and paced around. I played the track again. She's rapping and shouting and singing over an instrumental that mixes dancehall, electro, and traditional African drum patterns. It is very weird, very rough music, and I'm not suggesting it's something most people would, or should, like. But for me, it was perfect.

With a full-time job and two young children, these days I don't have much time to seek out new artists. But discovering new music remains a very powerful experience. Streaming services know this, and since most have very similar pricing and catalogs, curation has emerged as one of the most important areas of differentiation between them. With millions of tracks available to a subscriber of Spotify, Rdio, or any other major service — more than you could finish in a lifetime — the battleground is shifting from access to curation. Every major streaming service touts its ability to learn your taste and recommend the right song at the right time. And they all use a mix of human curators and computer algorithms to target their suggestions. But increasingly, there is a divide in the industry over which half of that equation should lead and which half should follow.

This past June, legendary producer and major label insider Jimmy Iovine unveiled Apple Music as the grand finale of the tech titan's semi-annual product showcase. "It's a revolutionary music service curated by the leading music experts who we helped handpick," he declared, placing Apple firmly in the human curation camp. Apple's curation service, Iovine promised, would match the song you hear to the mood and the moment. "Algorithms alone can't do that emotional task. You need a human touch."

Watching a live stream of the event, I found myself nodding along in agreement. Increasingly, I've turned to streaming services for recommendations. For the most part I've found them far too safe and uninspired — the equivalent of a wedding DJ who isn't going to risk clearing the dance floor. My best discoveries still came from talking with friends. All of which brings me back to Aby Ngana Diop.
I got that mixtape a month and a half after I heard Iovine speak. That first track was a risky selection, and the rest of the playlist was, too. It felt like an intimate gift from someone who knew my tastes inside and out, and wasn’t afraid to throw me a curveball. But the mix didn’t come from a friend — it came from an algorithm. To be exact, it came from a new Spotify service called Discover Weekly, which automatically generates a personally tailored playlist of two to three hours of music for me every week. Using Discover for the first time felt revelatory, like the first time I left AltaVista and Ask Jeeves for Google Search. The tracks it suggested weren’t all perfect, but the ones it got right cut through the clutter of stale and timid recommendations I got from most music services. They connected with such force that I didn’t mind the misses. Spotify has found a new way to tap the collective intelligence of its 75 million users, turning their taste into a data layer that can be used to better personalize everyone’s experience. I’m not the only one who noticed. "It’s good. It’s better than I thought it would be," says tech entrepreneur and web pioneer Anil Dash. To Dash, Discover Weekly felt carefully tended, even though it was being produced by machines. "They're as good as DJs — at scale." "At first I was a bit skeptical," says Billy Chasen, founder of Turntable.fm. "I was definitely in the camp that believes you really need people to help find music for you, [that] algorithms can only take you so far." After a few weeks using Discover, Chasen was a believer. "One [song] every week is a real gem. All of a sudden it’s like, this thing is awesome." When Iovine called Apple Music a "revolutionary new music service," people in the audience actually laughed. The company’s take is starkly traditional, a return to hand-crafted playlists and top-down DJs that radio spent decades trying to hone. But while Iovine’s claim of "revolution" seemed hyperbolic, his assessment of the central challenge was spot on: the only song more important than the one you’re listening to is the one that comes next. Whoever perfects this act of curation could win the streaming music war. And with Discover Weekly, Spotify may have just out-maneuvered the competition. **For many decades,** the radio DJ was a critical gatekeeper between artist and fan — Elvis Presley was a little-known crooner until Memphis DJ Dewey Phillips threw his first single, "That’s Alright," on repeat for two hours during his popular afternoon broadcast. The radio DJ was also a curator for the masses, shaping the evolution of genres by playing unknown tracks that crossed their desk or promoting local acts they had come to love and champion. But with its limited channels, radio had to be many things to many people. You could tune it to your taste by spinning the dial to find your favorite station or DJ, but the closest commercial radio got to personalization was letting listeners call in to request their favorite song. At a macro level, however, the radio industry had developed an art and a science to curation, much of which still undergirds attempts to personalize and recommend songs in the digital world. By the late 1980s, radio stations had largely replaced the personal preference of the DJ for playlist selection. Specialists known as "programmers" (not the computer kind) decided what was played, and relied on a practice called "clocking" to sequence it. 
A rock show, for instance, would divide songs into buckets: a power hit at the top of the hour to suck you in, an old favorite to build trust, then an obscure nugget to surprise and delight you. Stations ran through the cycle in a carefully timed and arranged order, tweaking and refining the mix based on the relatively limited audience response data. As radio stations consolidated over the decades, companies cut down on local hosts and standardized playlists across their properties in order to maximize efficiency — and profits. "You end up with very little variation from market to market when it comes to not just music but also personality," says Matt Bates, the global head of curation at Rdio, who worked for more than a decade in terrestrial radio.

But as traditional radio was homogenizing, the rise of the internet and digital music was opening up enormous new avenues for music curation and personalization. Online, users were no longer limited by the range of a radio tower, and broadcasters were no longer limited to producing a single, monolithic stream. For the first time in history, it was possible to send unique recommendations to millions of listeners, and to collect detailed data on their listening habits. "In the late '90s, early 2000s, people saw Google, they saw MP3s and they put two and two together and said, 'Hey, how do we build search engines for music?'" recalls J. Stephen Downie, a professor of library science at the University of Illinois and founder of the International Society of Music Information Retrieval. That meant a basic system that could pull up a song or artist when you searched for them in a library. But it also meant creating a system for generating data about how music sounded — and creating a program to power recommendation.

### Curation Challenge

To get a sense of how different music services try to understand my preferences and recommend me music, I created brand new profiles for each. I then made a playlist of the 50 artists Spotify had identified as best representing my taste. It was an eclectic mix, covering noise pop, funk rock, bossa nova, and folk. Here's how they stacked up.

### Rdio

Rdio's recommendations felt accurate, but impersonal. The service did some basic collaborative filtering to come up with its personalized suggestions: because I like Sly and The Family Stone, I like funk and soul. From there it recommended seven albums, all greatest-hits collections of extremely well-known artists. But I love the slider that lets you adjust between familiar favorites and serious exploration. With that filter set to hardcore discovery and some judicious skipping, I found myself quickly enjoying the feed.

### Apple

Apple gave me a broad selection of handmade playlists, many of which fit my taste. The hip hop offerings took me behind the boards with the producers of my favorite songs, and for indie rock there was a good, if generic, lazy Sunday morning mix. But I also got numerous playlists prominently featuring Dr. Dre, even though I own none of his albums and don't remember selecting west coast rap as one of my favorite genres during the onboarding process. Personalization and nepotism don't mix.

### Tidal

Tidal recognized I was the kind of guy who enjoys hearing a hip hop track and the original song it samples. I am also a fan of 60s soul. The playlists Tidal offered me were hit or miss, but at least they fit my taste. I have zero clue, on the other hand, why Tidal suggested a Metalcore megamix or New Jack Swing mixes.
And the Lana Del Rey-inspired playlist featuring Judy Garland and Cher was about as far from my sweet spot as you can get.

### Google Play Music

The recommendations here were limited to radio streams. It's very basic collaborative filtering and doesn't feel like it's offering much guidance about what I should listen to or why it's right for me. But Google does have an "I'm feeling lucky" button, just like it does on its flagship search engine. That played a personalized stream, and several of the selections were songs I had also been given by Discover Weekly. To me this suggests that Google, not surprisingly, will be Spotify's toughest competitor.

**Today, the world** of curation in streaming music is a spectrum, with each service offering its own blend of human editorial and algorithmically generated selections. One branch of digital curation pursues this goal through acoustic analysis. "I run a team of about 30 music analysts, and their job is to listen to each song we're putting on the service," says Eric Bieschke, the chief data scientist and VP of playlists at Pandora, a company considered to be the grandaddy of music streaming. "They do a detailed analysis that describes what kind of instruments are in the song, what those instruments are doing. They detail harmony, melody, rhythm, what the voices sound like." That technique, while refined over the years, is fundamentally unchanged since the company's inception 15 years ago.

Pandora takes that data and uses it to relate songs to one another, and to understand a listener's taste. You start a radio stream with a "seed": the track, artists, or album that it can recommend against. From there, as you decide which songs to like, dislike, skip, or repeat, algorithms track your taste and recommend songs with matching attributes. This approach has been successful: today, Pandora is the world's largest streaming service, with 79 million listeners tuning in for an average of 20 hours a month.

Other streaming services also use some version of Pandora's "adjustable radio" approach. You input an artist, album, or genre, and it cues up a never-ending stream of tracks pegged off that clue. Google Play Music and Rdio both offer this — Rdio has a station, You.fm, that users can dip in and out of without any seed at all. It starts with its knowledge of your taste, then adjusts what you want to hear based on your device or the time of day.

Slacker Radio decided to take a different approach, leaning heavily on terrestrial radio techniques to batch songs. Instead of starting with a seed, listeners pick a station with a traditional format — blues, hard rock, easy listening — which feature big buckets of songs chosen and clocked by a human with experience in terrestrial radio. From there, Slacker uses algorithms to personalize the station based on a user's activity.

All these services used a technique called collaborative filtering to improve their recommendations. Collaborative filtering means that individuals who like James Brown will probably like Otis Redding. More complex implementations can pick up on deeper patterns: people who like these three particular albums from The White Stripes also like The Dead Weather.

Still other services abandoned personalization entirely and banked on genres, mood, or activity. Startups like 8Track and Songza, acquired by Google last year, offered pre-made collections for Study Hall Focus, Backyard BBQ, and Rainy Sunday Afternoon. You could even search for playlists made by users like you.

These dueling approaches ran into two problems.
Human curation doesn't scale, which is why Pandora has just 2 million songs and Apple Music just a few thousand playlists. Algorithms, on the other hand, have little grasp of the cultural and historical attributes of songs. In trying to help you discover new music, they can throw together tracks that miss your taste by a mile, that happened to be on the same compilation but are otherwise entirely unrelated, or that sound terrible next to one another.

Until Discover Weekly launched this past July, no major streaming service had tried to combine a static playlist with personalized recommendations that fit a single user. There have been individually personalized streams, like Rdio's You.fm, and plenty of static playlists created by humans. Apple Music and Amazon Prime, for instance, broadly target playlists to users based on the artists and albums they play, but you and I won't receive different versions of Apple's "Hip-Hop Essentials" based on our taste in rap or our listening history.

The reason no one attempted something like Discover Weekly until now is because a static, personalized playlist is very risky. A radio stream usually begins with a prompt from the user and can adjust in real time based on a user's feedback. Discover Weekly, by contrast, is two hours of music you get once a week with no real explanation of why you're getting these tracks, or how to influence that process. Just like handing a mixtape to your crush in real life, once you finalize the playlist, you're committed. Somehow Spotify's algorithms manage to deliver me a consistently great experience. I visited Spotify's New York office to find out how it worked.

**There is a room** on the third floor of Spotify's New York City headquarters that is in particularly high demand. "Once I turn on the sound system, you'll see why," says Matthew Ogle, the product lead on Discover Weekly. The room's walls are well padded and decorated in old album covers — two very comfy chairs sit in front of a pair of massive speakers. Ogle wears Elvis Costello glasses; dressed nattily in a white dress shirt buttoned to the top, slim-fit jeans, and expensive sneakers, he sported the millennial business-casual attire I recognized from the young record executive on Amazon's *Transparent*. Before Spotify, Ogle worked at Last.fm and ran his own startup, This Is My Jam. We sat down to play each other some of our favorite tracks from Discover Weekly.

I cued up "Today" by Tom Scott. It's from his album called *The California Dreamers* and opens on a mellow psychedelic guitar riff with dulcet harmonies. The first time I heard it, tucked into the middle of my seventh edition of Discover Weekly, I assumed it was going to be some lovely mid-tempo psych-folk. That made sense, considering my listening history. But at the 1-minute mark, a saxophone solo comes in. Something about the rough tone of the horn stirred a flicker of recognition in me. Thirty seconds later, an absolutely vicious lick on the sax caused my jaw to contort, a condition my wife lovingly refers to as stank face.

Listening to it again on the massive speakers at Spotify's office, the effect was electric. "Today" contains the sample to "They Reminisce Over You," a hip-hop classic I've spun on Spotify dozens of times. Spotify knew I had never heard "Today," at least not on their service, and that I was therefore ripe to be thrilled at connecting the dots.
It was a recommendation driven less by the way the music sounds, or genre, than by the cultural and historical web that gives music so much of its power.

The technology that makes Discover Weekly possible comes in part from a Boston-based startup called The Echo Nest, which Spotify acquired in March of 2014. Ogle worked there for two years before joining Spotify, and since the acquisition, Spotify has put The Echo Nest employees in charge of its biggest and most important new curation and discovery products.

The Echo Nest co-founder and CEO Brian Whitman started out as an experimental electronic musician, but was inspired by the rise of Napster and streaming radio. He switched gears and eventually got a PhD in machine listening from MIT. There, he created a program that could crawl the web, reading and interpreting music blogs and reviews, learning what songs critics thought were "edgy" or "old school." After graduating in 2005, Whitman helped found The Echo Nest, which combined that technology with acoustic analysis and collaborative filtering. The company became one of the best in the business and helped power recommendation systems for Rdio, Spotify, Deezer, iHeartRadio, and Rhapsody. But it never had a massive user base of its own that it could leverage to build new tools. "You have really good people, you have some really good algorithms. They're only [as] good as the data that you have," says Professor Downie.

That changed when The Echo Nest became part of Spotify and could tap its 75 million users. The combination of The Echo Nest technology and Spotify's massive data trove led to Discover Weekly. Here's how it works: Spotify has built a taste profile for each user based on what they listen to. It assigns an affinity score to artists, which is the algorithm's best guess of how central they are to your taste. It also looks at which genres you play the most to decide where you would be willing to explore new music. The algorithms behind Discover Weekly find users who have built playlists featuring the songs and artists you love. It then goes through songs that a number of your kindred spirits have added to playlists but you haven't heard, knowing there is a good chance you might like them, too. Finally, it uses your taste profile to filter those findings by your areas of affinity and exploration.
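The co-occurrence logic is easy to illustrate. Below is a toy Python sketch of the idea as described here; it is not Spotify's actual code, and the playlists and track names are made up.

```python
from collections import Counter

# Toy stand-ins for user-made playlists (Spotify mines roughly 2 billion of them).
playlists = [
    ["they_reminisce_over_you", "today_tom_scott", "liital"],
    ["they_reminisce_over_you", "today_tom_scott"],
    ["they_reminisce_over_you", "hotline_bling"],
    ["unrelated_track_a", "unrelated_track_b"],
]

my_library = {"they_reminisce_over_you"}  # tracks this user already plays

# Every time a kindred spirit playlists an unheard track alongside one of
# mine, that's a vote that the track will work for me too.
votes = Counter()
for playlist in playlists:
    if my_library & set(playlist):        # we share at least one track
        for track in playlist:
            if track not in my_library:   # only count tracks I haven't heard
                votes[track] += 1

print(votes.most_common(3))
# [('today_tom_scott', 2), ('liital', 1), ('hotline_bling', 1)]
```

A production system would add weighting, popularity normalization, and the taste-profile filtering described above, but the "shared playlist as a vote" intuition is the same.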
"I think we'll always have to start with patterns that map to the way people actually relate to music."It’s still humans who are doing the song selection and arranging, but instead of outside experts, it’s users like you and me. Generating a human-curated playlist for each of Spotify’s users would be a challenge of mammoth proportion. "We probably can’t hire enough editors to do that," says Ogle. So Spotify uses each of its users as one cog in a company-wide curatorial machine. "The answer was staring us in the face: playlists, since the beginning, have been more or less the basic currency of Spotify. Users have made more than 2 billion of them." In effect, Discover Weekly sidesteps the man versus machine debate and delivers the holy grail of music recommendation: human curation at scale. **Discover Weekly is amazing,** but it’s far from perfect. For every gem Spotify gave me, there was one mediocre track and another dud. The songs were also often sequenced terribly, with tracks coming one after another that simply did not flow. That uncertainty is probably why no other streaming music service currently attempts this feat. "If we give you two songs you like and one you don’t, we’ve failed," Pandora’s Bieschke told me. Jimmy Iovine took a similar tack when introducing Apple Music. There is no greater sin, he declared, than the buzzkill of a bad song after a good one. To avoid the buzzkill pitfall, Apple uses human editors. But that drastically limits the number of playlists it can produce — even Apple can’t afford to hire a dedicated editor to make a weekly playlist for each listener. As a result, Apple’s broad, safe method is far less likely to surprise and delight. "When you’re trying to analyze large sets of data, you do tend to flatten out the spikes that might be interesting," explains Turntable.fm’s Chasen. "If we give you two songs you like and one you don’t, we’ve failed." Whitman is currently taking the lead on another product Spotify recently introduced, Fresh Finds. The system crawls tens of thousands of blogs, news sites, and reviews to figure out what tastemakers are talking about and generate a trending chart of unknown music that’s about to catch on. It’s human curation, but gathered by an automated system that doesn’t even know their names. "We think of it as a data layer to tap into, their taste," says Whitman. As streaming services like Apple and Tidal recruit young artists for exclusives on their platform, Whitman hopes his algorithms will eventually help Spotify source the best new acts. "Fresh Finds is an amazing tool we use internally for things like artist relations. We have nothing to announce right now, but I think in the future you’ll see more about how we can use these predictive algorithms to make sure these artists come to Spotify." You can imagine the traditional A&R department at a major record label salivating at the predictive possibilities. If the most important thing in music is the song that comes on next, identifying my favorite track before it has been created offers Spotify a massive advantage. When I was visiting Ogle at Spotify headquarters, he showed me a favorite track Discover Weekly had recommended him. It was a meandering organ solo over a spare whurlitzer beat. The song, 1972’s "Why We Can’t Live Together," was a strange hit from session musician Timmy Thomas. I couldn’t place it until Ogle cued "Hotline Bling," a massive hit from Drake currently in constant rotation on the radio. That tiny wurlitzer was the sample anchoring the beat. 
I assumed that the obscure track had been recommended to Ogle because of how often he played the song that sampled it. "Actually no, I got this during testing, a couple of months before that song came out," Ogle said. He cracked a broad smile. "I like to think maybe Drake got the idea the same way."

Correction: Although iHeartRadio controls roughly 800 stations and dominates many major markets, it does not control a majority of radio stations in the US, as previously stated.

Product by Frank Bi. Edited by Michael Zelenko.
true
true
true
In the ‘90s, Aby Ngana Diop was the queen of taasu, a practice of ritual poetry performed by female griots in Senegal. Diop’s distinctive vocals made her a sought-after performer at the weddings...
2024-10-12 00:00:00
2015-09-30 00:00:00
https://cdn.vox-cdn.com/…0.1443587375.jpg
article
theverge.com
The Verge
null
null
30,915,169
https://www.youtube.com/watch?v=_JzfROUDsK0
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
25,230,796
https://developer.apple.com/documentation/hypervisor
Hypervisor | Apple Developer Documentation
null
This page requires JavaScript. Please turn on JavaScript in your browser and refresh the page to view its content.
true
true
true
Build virtualization solutions on top of a lightweight hypervisor, without third-party kernel extensions.
2024-10-12 00:00:00
null
https://docs.developer.a…developer-og.jpg
website
apple.com
Apple Developer Documentation
null
null
22,147,337
https://ncatlab.org/
nLab nLab
null
This is a wiki for collaborative work on *Mathematics*, *Physics*, and *Philosophy* — especially (but far from exclusively) from the higher structures point of view: with a sympathy towards the tools and perspectives of homotopy theory/algebraic topology, homotopy type theory, higher category theory and higher categorical algebra.

The nLab records and explores a wide range of mathematics, physics, and philosophy. Along with work of an expository nature, original material can be found in abundance, as can notes from evolving research. Where mathematics, physics, and philosophy arise in other fields, computer science and linguistics for example, the nLab explores these too. If you take a little time to find something in the literature, or to work through a proof or example, or to find an intuitive or new way to think about something, add a note on it to the nLab! Others will benefit, and you may well find that it proves useful to you too. A fundamental idea behind the nLab is that the linking between pages of a wiki is a profound way to record knowledge, placing it in a rich context. For more, see also Urs Schreiber’s thoughts at What is… the nLab?.

Most nLab pages have a corresponding discussion thread at the nForum, linked to in the menus at the top and bottom of the page. For all but the most trivial edits (correcting spelling or punctuation, etc.), we ask that you announce your changes at the nForum using the box at the bottom of the edit page. Typically this would be a short note summarising what you have done, but it can also include what you plan to do, or what you would like others to do! See Welcome to the nForum for more on the nForum. If you have any questions about an nLab page, or would like to discuss or make a comment upon something in it, whether because you feel that something is wrong or missing or for any other reason, you are encouraged and very welcome to use the nForum discussion thread for the page for this purpose too. If the discussion thread does not yet exist, feel free to create it manually: use the ‘nLab - Latest changes’ category, and give it the same title (case-sensitive!) as the nLab page.

The pages of the nLab have almost always been edited by several people. In almost all cases, we recommend that you use the nForum rather than contact an author of an edit directly, but if you do have a reason to do this, the history of edits to the page with the details of who made them can be found by clicking on ‘History’ at the bottom of the page. If after looking around for a while you feel like contributing yourself, you are very welcome to do so! See below for more on how to actually edit, as well as the HowTo. If you make any edits to the nLab, please follow the guidelines for announcing your changes at the nForum given above. If you are unsure whether what you would like to contribute is appropriate, please ask at the nForum before making the edit.

The nLab should be viewable and editable in any modern web browser on any device. For technical reasons, browsers which are able to render MathML, such as Firefox, may render very large pages more quickly than other browsers, but these pages are few and far between, and most people will be able to use the browser of their choice. 
Editing the nLab these days can be done more or less as in LaTeX. Macros are not currently available, but mathematics can be entered within dollar signs as usual, theorem environments, sections and tables of contents can be defined as in LaTeX, commutative diagrams and figures can be created in TikZ or XyMatrix, and so on. For simple formatting, such as italics, and for some other things such as tables, Markdown is used. See the HowTo for more on editing the nLab; a small illustrative sketch of the syntax appears at the end of this page.

One goal of the nLab is to help make information widely available and usefully related to other information. In this, users and contributors are expected to follow traditional academic practice: Using and distributing content obtained from the nLab is free and encouraged if you acknowledge the source, as usual in academia. (There is currently no consensus on a more formal license statement, but if it matters check if relevant individual contributors state such on their nLab homepages.) If you cite a page you may want to point to a specific version of it, because nLab pages can change. You can find a list of all the versions of a page by clicking on the **History** link at the bottom of the page itself.

Conversely, any content contributed to the nLab is publicly available and you should be aware that others may use your contributions (whatever you decide to do with their content elsewhere) and indeed may edit them. In the first case you trust that users will cite your contributions properly, in the second that they will respect and only improve on them. At the same time, you are expected to properly acknowledge sources of information for material entered into the nLab. Usually this works well. If there is need for discussion, the nForum is the forum to turn to. If serious problems arise, the steering committee might intervene.

The domain `ncatlab.org` is owned by Urs Schreiber. The nLab server is currently hosted at Carnegie Mellon University, kindly provided by Steve Awodey. The nLab’s software and technical administrative matters are in the hands of the nLab’s technical board. Much of the software behind the nLab has been written especially for the nLab: the latest source can be found at GitHub. It was originally an instance of Instiki. The nLab page style is due to Jake Bian, originating with his Kan browser extension. The nLab logo is due to David Roberts, inspired by Matisse’s painting *La Gerbe*. Besides being an inside reference to higher structures known as *gerbes*, the logo represents maybe computational trinitarianism or the progression of modalities or generally the unity of diverse mathematical phenomena revealed by the nPOV.

The default venue for all communication regarding the nLab is the nForum. When posting there you get to choose a “category”-label for your message: Latest edit logs are to be posted under *Latest changes*. Organizational matters (such as concerning user accounts) are best posted under *nLab Organisation*. Technical bug reports or software feature requests are best raised in *nLab Technical Matters* or directly on github.com/ncatlab/nlab/issues. Please do not try to contact the technical team on matters that are not purely technical: Policy decisions are made by the active nLab community or, if all fails, by the Steering Committee.

The nLab is a community undertaking. But for all matters that do require that the nLab is represented to the outside by an official decision-taking body, we have the steering committee. 
*Nobody “is in charge of the nLab”.* But the steering committee is the closest approximation to a body being in charge that we have. Last revised on March 30, 2024 at 02:03:46. See the history of this page for a list of all contributions to it.
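As a rough, hypothetical illustration of the edit syntax described earlier on this page, and not authoritative nLab markup (the HowTo documents the real details), a small edit mixing Markdown formatting with LaTeX-style math and environments might look something like this:

```latex
\section{Idea}

A *monoid* is a set $M$ equipped with an associative product and a unit $e \in M$.

\begin{theorem}
  For any set $X$, the free monoid on $X$ is the set of finite lists of
  elements of $X$, with concatenation as the product and the empty list as unit.
\end{theorem}
```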
true
true
true
null
2024-10-12 00:00:00
2024-03-30 00:00:00
null
null
null
null
null
null
21,857,013
https://www.usenix.org/conference/raid2019/presentation/zhang-li
{CryptoREX}: Large-scale Analysis of Cryptographic Misuse in {IoT} Devices
Li Zhang; Jiongyi Chen; Wenrui Diao; Shanqing Guo; Jian Weng; Kehuan Zhang
Li Zhang, *Jinan University;* Jiongyi Chen, *The Chinese University of Hong Kong;* Wenrui Diao and Shanqing Guo, *Shandong University;* Jian Weng, *Jinan University;* Kehuan Zhang, *The Chinese University of Hong Kong*

Cryptographic functions play a critical role in the secure transmission and storage of application data. Although most crypto functions are well-defined and carefully implemented in standard libraries, in practice, they could be easily misused or incorrectly encapsulated due to their error-prone nature and the inexperience of developers. This situation is even worse in the IoT domain, given that developers tend to sacrifice security for performance in order to suit resource-constrained IoT devices. Given the severity and the pervasiveness of such bad practice, it is crucial to raise public awareness about this issue, find the misuses and shed light on best practices.

In this paper, we design and implement CryptoREX, a framework to identify crypto misuse of IoT devices under diverse architectures and in a scalable manner. In particular, CryptoREX lifts binary code to a unified IR and performs static taint analysis across multiple executables. To aggressively capture and identify misuses of self-defined crypto APIs, CryptoREX dynamically updates the API list during taint analysis and automatically tracks the function arguments. Running on 521 firmware images with 165 pre-defined crypto APIs, it successfully discovered 679 crypto misuse issues in total, which on average costs only 1,120 seconds per firmware image. Our study shows 24.2% of firmware images violate at least one misuse rule, and most of the discovered misuses were unknown before. The misuses could result in sensitive data leakage, authentication bypass, password brute-force, etc. Our findings highlight the poor implementation and weak protection in today's IoT development.

## Open Access Media

USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone.

```bibtex
author = {Li Zhang and Jiongyi Chen and Wenrui Diao and Shanqing Guo and Jian Weng and Kehuan Zhang},
title = {{CryptoREX}: Large-scale Analysis of Cryptographic Misuse in {IoT} Devices},
booktitle = {22nd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2019)},
year = {2019},
isbn = {978-1-939133-07-6},
address = {Chaoyang District, Beijing},
pages = {151--164},
url = {https://www.usenix.org/conference/raid2019/presentation/zhang-li},
publisher = {USENIX Association},
month = sep
}
```
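To make the flavor of such misuse rules concrete, here is a toy, self-contained Python sketch of the classic "no hard-coded key or IV" check over an invented, pre-digested mini-IR. It is only loosely inspired by the abstract above and is in no way CryptoREX's actual implementation, which operates on lifted binary code with full taint tracking.

```python
# Toy misuse detector over an invented IR: a list of call sites where each
# argument is tagged with whether it traces back to a constant literal.
# Illustration only; not CryptoREX's code or IR.

# Map crypto sinks to the index of the security-sensitive argument.
# (Real signatures: AES_set_encrypt_key(userKey, bits, key) takes key
# material as arg 0; EVP_EncryptInit(ctx, cipher, key, iv) takes the IV
# as arg 3.)
CRYPTO_SINKS = {"AES_set_encrypt_key": 0, "EVP_EncryptInit": 3}

def find_hardcoded_material(call_sites):
    findings = []
    for func, args in call_sites:
        idx = CRYPTO_SINKS.get(func)
        if idx is None or idx >= len(args):
            continue
        origin, value = args[idx]
        if origin == "const":  # sensitive argument flows from a literal
            findings.append((func, value))
    return findings

call_sites = [
    ("AES_set_encrypt_key", [("const", b"0123456789abcdef"), ("const", 128),
                             ("var", "aes_key_struct")]),
    ("EVP_EncryptInit", [("var", "ctx"), ("var", "cipher"),
                         ("var", "key"), ("var", "iv")]),
]
print(find_hardcoded_material(call_sites))  # flags the hard-coded AES key
```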
true
true
true
null
2024-10-12 00:00:00
2019-01-01 00:00:00
null
null
null
null
null
null
6,642,779
https://medium.com/better-humans/9925f0cec821
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,251,491
http://patrickcollison.com/post/government-internet
Government and the internet
null
# Government and the internet

Over the last two decades, tension between government and the changes caused by the internet has been a recurring theme. Today, they’re almost seen as opposing forces. This is somewhat strange when you think about it. Most technologies don’t cause so much ongoing upheaval. Why is the internet so challenging? I decided to make a list of reasons and came up with 11. Though none of them are very novel, I found the catalog interesting. (For one thing, I expected fewer.)

1. Better communication tools mean that events simply happen faster. The Arab Spring would probably have been less problematic for incumbent governments if things had happened more slowly. They’d have had more time to react. But governments still operate on timescales of days and weeks while it’s becoming easier for events to play out hour-by-hour.

2. Communication and publication can no longer be censored. Wikileaks, the Arab Spring, and Snowden’s revelations all depend on governments being unable to prevent mass dissemination of information.

3. Individuals can now cast a spotlight on government action. (That is, publication isn’t just uncensorable; it’s also decentralized.) With the internet, we can rapidly rewire our communication networks when a node becomes an important source of information. (This is most noticeable during live events, when people rapidly figure out which Twitter users are worth paying attention to.) If a government does something wrong, and even if the mainstream publishers ignore it, the information can still be rapidly broadcast to millions.

4. In a global community, local aberrations stick out more. Because online news and social structures don’t pay much attention to national boundaries, laws and government actions are increasingly held to international standards. People are more likely to know when they’re being screwed over by their governments.

5. It’s harder for governments to broadcast directly to their citizens. Fifty years ago, it was easy: just run a prime-time TV segment. You can still do that today, but many fewer people will pay attention.

6. Governments are more fragile and hence weaker. It’s almost as easy to leak a database as it is a file. It’s much harder for governments to maintain secret structures, and they must contend with the omnipresent risk of a calamitous leak.

7. Governments are more powerful and hence more likely to overreach. Because it’s now far easier to eavesdrop on communications, maintain intrusive databases, etc., it’s much more tempting to do it. Thirty years ago, you needed to adopt extreme GDR-style tactics to eavesdrop on everyone. It was logistically prohibitive, and most governments would probably reconsider when they realized what doing so would actually entail. Today, technology improvements mean that it takes much less effort—and evidently that it feels much less wrong.

8. Because a lot of internet activity is oblivious to national borders, jurisdictional questions are now much murkier. Internet companies and services aren’t so much proactively global as casually global: the default is to be available everywhere. As a result, internet services don’t consider the legal implications of expansion into each jurisdiction. They don’t really expand into jurisdictions at all. They’re just *there*. We don’t have a good framework for regulating these entities. Should a US website be subject to Australian libel law? (To the union of all libel laws?) 
The most obvious solution is to have an internet company be subject only to the laws of the country in which it resides, but even this doesn’t work: companies can easily operate from multiple countries. And, of course, countries affected by a non-resident service aren’t likely to be satisfied by this solution.

9. A lot of laws make less sense today. Remix culture, Airbnb, YouTube, Lyft, and Bitcoin challenge existing regulation. The harm of bad laws is often invisible, which makes weighing the trade-offs difficult. (How *many* good things aren’t happening and how good would they be?) It’s hard to argue against regulation before the potential of a new technology has been realized—the possibilities are, by definition, merely hypothetical.

10. Since a lot of industries are being reshaped by technology, more incumbents are seeking regulatory protection. Governments sometimes acquiesce and enact bad laws (DMCA), which in turn angers citizens.

11. Legislators lack the conceptual framework to reason effectively about internet and software issues. I think this might be the biggest problem of all. As industry insiders, we have an advantage: we know it’s inane to talk about "getting data back"; we know that metadata and data are often distinctions without differences; we know that large datasets are very hard to anonymize; we know that a large dataset will rarely be used only for its originally intended purpose. We know this simply because we’ve watched these issues play out many times before. Politicians haven’t, and when policy questions hinge on understanding technology, they don’t tend to fare well.

Looking back over these, the main thing that strikes me is that it’s very hard to imagine any of them going away or being resolved any time soon. The internet was an epochal jolt and our governing structures have yet to catch up. We’re now twenty years in; given the pace of change, I expect these issues to persist for most of my life.

I find it interesting that the internet is still politicians’ example of choice when they need to show how the future is exciting. The US is today, in Obama’s words, “the nation of Google and Facebook.” The problems don’t simply stem from obliviousness. Even leaders who recognize the internet’s importance and value are stuck in an edifice almost guaranteed to yield a dissonant relationship.
true
true
true
null
2024-10-12 00:00:00
2013-08-21 00:00:00
null
null
null
null
null
null
16,424,203
http://www.pnas.org/content/early/2018/02/16/1713501115
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,851,063
http://geometrygames.org/HyperbolicBlanket/index.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
27,479,526
https://www.cnbc.com/2021/06/11/amazon-to-overtake-walmart-as-largest-us-retailer-in-2022-jpmorgan.html
Amazon will overtake Walmart as the largest U.S. retailer in 2022, JPMorgan predicts
Annie Palmer
Amazon is on track to overtake Walmart as the largest U.S. retailer in 2022, according to JPMorgan research released Friday. Amazon's U.S. retail business is the "fastest growing at scale," according to the company's analysts. Between 2014 and 2020, Amazon's U.S. gross merchandise volume, or GMV — a closely watched industry metric used to measure the total value of goods sold over a certain time period — has grown "significantly faster" than both U.S. adjusted retail sales and U.S. e-commerce, the analysts said. Neither Amazon nor Walmart break out GMV in their quarterly earnings results, but JPMorgan estimates Amazon's GMV is growing faster than its largest retail competitor. JPMorgan analysts said Amazon's GMV in 2020 climbed 41% year over year to $316 billion, while Walmart's GMV is estimated to have grown 10% year over year to $439 billion in 2020. "Based on current estimates, we believe Amazon could surpass Walmart to become the largest U.S. retailer in 2022," J.P. Morgan analysts Christopher Horvers and Doug Anmuth wrote Friday. Horvers and Anmuth highlighted a few factors they believe are driving Amazon's top-line growth, including an expansion into "large and under-penetrated categories" such as grocery and apparel, strong growth of third-party seller sales and the "Prime flywheel." Amazon CEO Jeff Bezos said in April the company now has more than 200 million Prime subscribers, up from 150 million at the beginning of 2020. The coronavirus pandemic rapidly accelerated the adoption of e-commerce and cemented Amazon's dominance in the retail space. Stuck-at-home consumers turned to Amazon for a plethora of goods ranging from toilet paper to workout gear. They also relied on Amazon for services they might not have otherwise considered, such as online grocery delivery. Amazon's pandemic-fueled sales surge has helped it grow its slice of the e-commerce market. JPMorgan estimates Amazon expanded its share of the U.S. e-commerce market to 39% in 2020, up from 24% in 2014. The accelerated adoption of e-commerce has also provided a lift to other areas of Amazon's business. Amazon is on track to "become one of the largest delivery companies" in the U.S., analysts at Bank of America wrote in research published Tuesday. Amazon is estimated to deliver 7 billion packages in 2021, surpassing the roughly 6 billion packages UPS is expected to deliver in the U.S. this year, the analysts wrote, citing figures from MWPVL International, a supply chain and logistics consulting firm. In recent years, Amazon has quietly built a shipping operation that rivals the likes of UPS, FedEx and the U.S. Postal Service. It maintains an ever-increasing network of warehouses and last-mile delivery stations, and a sprawling logistics operation with airplanes, trucks and vans. This has allowed Amazon to deliver most of its own orders. Amazon currently delivers packages for other businesses in the U.K. and could one day expand that service to the U.S. MWPVL estimates Amazon handled about 5 billion of the 7.35 billion packages it shipped in 2020. UPS and USPS handled the other 1.25 billion and 1.1 billion, respectively, according to Bank of America analysts.
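The 2022 crossover is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below simply compounds the 2020 GMV figures quoted above at each company's 2020 growth rate, a crude simplifying assumption (real growth rates change year to year), but it reproduces the shape of JPMorgan's call:

```python
# Back-of-the-envelope projection of the GMV figures quoted above, under the
# crude assumption that each company's 2020 growth rate simply holds.
amazon_gmv, walmart_gmv = 316.0, 439.0   # 2020 US GMV estimates, in $B
amazon_growth, walmart_growth = 1.41, 1.10

for year in (2021, 2022):
    amazon_gmv *= amazon_growth
    walmart_gmv *= walmart_growth
    print(f"{year}: Amazon ~${amazon_gmv:.0f}B vs Walmart ~${walmart_gmv:.0f}B")
# 2021: Amazon ~$446B vs Walmart ~$483B  (Walmart still ahead)
# 2022: Amazon ~$628B vs Walmart ~$531B  (Amazon passes Walmart)
```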
true
true
true
Amazon is on track to surpass Walmart as the largest U.S. retailer by 2022, JPMorgan analysts wrote in a note published Friday.
2024-10-12 00:00:00
2021-06-11 00:00:00
https://image.cnbcfm.com…47&w=1920&h=1080
article
cnbc.com
CNBC
null
null
25,753,797
https://www.businesswire.com/news/home/20210112006080/en/Visa-and-Plaid-Announce-Mutual-Termination-of-Merger-Agreement
Visa and Plaid Announce Mutual Termination of Merger Agreement
null
SAN FRANCISCO--(BUSINESS WIRE)--Visa Inc. (NYSE: V) and Plaid today announced that the companies have terminated their merger agreement and agreed with the Department of Justice to dismiss the litigation related to the proposed transaction. The proposed transaction was first announced on January 13, 2020.

“We are confident we would have prevailed in court as Plaid’s capabilities are complementary to Visa’s, not competitive,” said Al Kelly, Chairman and CEO of Visa Inc. “We believe the combination of Visa with Plaid would have delivered significant benefits, including greater innovation for developers, financial institutions and consumers. However, it has been a full year since we first announced our intent to acquire Plaid, and protracted and complex litigation will likely take substantial time to fully resolve.”

Mr. Kelly added, “We are focused on accelerating our business by advancing our broader strategy and continuing to drive Visa’s three growth pillars: consumer payments, new flows, and value added services. We have great momentum to build upon. Over the past year, our Visa Direct solution moved money around the world using multiple card, ACH and RTP networks, growing nearly 70 percent. In addition, our value added services revenue has grown in the mid-to-high-teens. We have great respect for Plaid and the business they have built and look forward to our continued partnership.”

“This past year saw an unprecedented uptick in demand for the services powered by Plaid, and our priority is to support the hundreds of millions of people who now rely on fintech,” said Zach Perret, CEO and co-founder of Plaid. “We made great strides last year, growing our customers by more than sixty percent and adding hundreds of banks to our platform. While Plaid and Visa would have been a great combination, we have decided to instead work with Visa as an investor and partner so we can fully focus on building the infrastructure to support fintech.”

**Webcast and Conference Call Information**

Visa’s executive management team will host a live audio webcast beginning at 5:00 p.m. Eastern Time (2:00 p.m. Pacific Time) today to discuss the announcement. All interested parties are invited to listen to the live webcast at http://investor.visa.com. A replay of the webcast will be available for 30 days. Investor information is also available at http://investor.visa.com.

**About Visa Inc.**

Visa Inc. (NYSE: V) is the world’s leader in digital payments. Our mission is to connect the world through the most innovative, reliable and secure payment network - enabling individuals, businesses and economies to thrive. Our advanced global processing network, VisaNet, provides secure and reliable payments around the world, and is capable of handling more than 65,000 transaction messages a second. Our relentless focus on innovation is a catalyst for the rapid growth of digital commerce on any device, and a driving force behind the dream of a cashless future for everyone, everywhere. As the world moves from analog to digital, Visa is applying our brand, products, people, network and scale to reshape the future of commerce. For more information, visit usa.visa.com/about-visa.html, usa.visa.com/visa-everywhere/blog.html and @VisaNews.

**About Plaid**

Plaid is a data network that powers the fintech tools millions of people rely on to live healthier financial lives. 
Plaid works with thousands of fintech companies like Venmo, SoFi, and Betterment, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use. Plaid’s network covers 11,000 financial institutions across the US, Canada, UK and Europe. Headquartered in San Francisco, the company was founded in 2013 by Zach Perret and William Hockey.
true
true
true
Visa Inc. (NYSE: V) and Plaid today announced that the companies have terminated their merger agreement and agreed with the Department of Justice to d
2024-10-12 00:00:00
2021-01-12 00:00:00
https://www.businesswire…wlogo_square.png
article
businesswire.com
Businesswire
null
null
11,877,212
https://medium.com/data-bits/are-smaller-mobile-games-more-successful-d21379095d10#.qgcg25i86
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
20,999,678
https://developer.apple.com/documentation/contacts/cnlabelcontactrelationeldercousinmotherssiblingsdaughterorfatherssistersdaughter
CNLabelContactRelationElderCousinMothersSiblingsDaughterOrFathersSistersDaughter | Apple Developer Documentation
null
This page requires JavaScript. Please turn on JavaScript in your browser and refresh the page to view its content.
true
true
true
The label for the contact’s mother’s sibling’s daughter or father’s sister’s daughter.
2024-10-12 00:00:00
null
https://docs.developer.a…developer-og.jpg
website
apple.com
Apple Developer Documentation
null
null
6,114,083
http://www.economist.com/news/leaders/21582258-it-not-just-detroit-american-cities-and-states-must-promise-less-or-face-disaster
The Unsteady States of America
null
Leaders | America’s public finances

# The Unsteady States of America

## It is not just Detroit. American cities and states must promise less or face disaster

This article appeared in the Leaders section of the print edition under the headline “The Unsteady States of America”
true
true
true
It is not just Detroit. American cities and states must promise less or face disaster
2024-10-12 00:00:00
2013-07-27 00:00:00
/engassets/og-fallback-image.png
Article
economist.com
The Economist
null
null
5,261,603
http://barrydahlberg.com/notes/eyes.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
29,403,883
https://ae.studio/blog/nlb-challenge
Building the best model of brain activity: We're Leading The Neural Latents Benchmark Challenge
Brent Hodgeson
# Building the best model of brain activity: We're Leading The Neural Latents Benchmark Challenge

# "We don't quit at halftime" (But we are in the lead)

From returning agency to a human being with paralyzed limbs, to restoring auditory function, to treating depression, the concept of a brain-computer interface (BCI) presents a mind-boggling set of possibilities. One day, rather than pounding keyboards and sliding mice to convey our intentions to our laptops, our thoughts will simply elicit the desired responses. Perfect transcription fluency1 would revolutionize how human beings invest their professional and leisure hours and redefine the term “able-bodied.”

Realizing the promise of BCI means understanding the activity of the 80 billion neurons firing in our brains, then translating the complexity of that activity into “latent” patterns. For instance, while we speak to one another, you have no idea whatsoever about the specific neurons firing in my brain, but you can discern a “latent” pattern, like the nodding of my head in agreement when you describe how a Bruce Springsteen concert is a religious experience or the visceral, snarling anger on my face when you tell me you root for the Dallas Cowboys.

On September 5th, 2021, while most of us were fixated on the beginning of the upcoming football season, a less publicized, but perhaps more important, competition began. The Neural Latents Benchmark Challenge begins with data from (primarily) Utah Arrays implanted in monkey brains. Electrodes read the behavior of 100+ monkey neurons. The mathematical challenge is simply stated: determine whether an additional few dozen neurons are active or not. As an analogy, consider an enormous checkerboard grid. Now remove the colors. The grid contains 80 billion squares. You are told the color (red or black) of ~100 squares. You are instructed to determine the color (red or black) of an additional few dozen squares.

This is how science extends its knowledge of complex systems. Once upon a time, your humble blogger was a hydrologist studying moisture in the earth’s topsoil. Measurements are relatively simple - dig a few holes, install a few sensors, record some data, take a shower, and dig a few more holes the next day. Easy, right? Well, there’s a problem. Sensors are expensive, and flights to test sites in the US of A were expensive enough2, but what if the desired location of measurement is in Mongolia? Also, the surface of the earth is covered with a complex set of soil textures, topographies, vegetation, and climates. It would require billions of sensors to measure it all in any detail. So what’s a scientist to do?

The short answer is to build models that extend the reach of sensors. If we can measure at 5cm depth, what can a model infer about 10cm and 20cm? If we can measure *here*, how might we estimate activity 10m over? How about 100m? What about a location 1,000km away that happens to be quite similar in terms of texture, topography, and climate? The NLB competition challenges neuroscientists to engage in a similar pursuit to extend our ability to measure the only landscape more complex than the globe - the brain.

There are numerous datasets of this type, such as MC Maze, which was gathered by monitoring the neurological activity of a monkey brain solving a maze. Results from five benchmark models were provided by the competition’s administrators; in other words, take the state-of-the-art models from the most recent research papers, and throw down the gauntlet. 
This becomes the baseline any would-be challengers must surpass. *On November 18th, 2021, the first competitor to surpass that baseline was none other than AE*. Our machine learning researchers took the lead, besting the performance of the best model of neural activity known to modern science on this dataset. And if we are ultimately the best modelers when the competition concludes on January 7th, 2022, we’ll reveal the entirety of our methods. No peer-reviewed publication hidden behind a paywall, no obfuscation. Just the best approach, available to all.

What does this mean? It means, among all machine learning outfits interested in BCI, our models are the most adept at recognizing patterns in neural data. It means we grasp the state of the art and possess the creativity and technical chops to improve upon it. More to come, more to learn, and like any compelling competition, there will be updates and reporting. Perhaps no one is following their fantasy machine learning team on Sunday afternoons as winter comes. But give or take a few expletives lobbed at screens, there might be no competition more capable of impacting the future of our species.

Ultimately, amidst the impossible complexity of 80 billion neurons, patterns emerge. It is the ability to recognize those patterns that unlocks the potential to alleviate suffering and extend the capabilities of human beings. The tools with which those patterns are recognized are those found in the insights and mathematics of machine learning. The collaboration of AE's data scientists and neuroscience experts tops a leaderboard today, and might just increase agency for all human beings tomorrow.

*AE thanks Joel Ye and Chethan Pandarinath for their work on Neural Data Transformers. Their open-source, easy-to-use codebase was foundational for the work described above (in our repo).*

1 The accuracy and speed with which ideas are translated into intention and action. I have thoughts in my head, but before you can grasp those thoughts, first I must find the appropriate words, then use my inefficient fingers to type those words, then you must read those pixels upon the screen, process their meaning, and ultimately, grasp the idea I intended to convey (hopefully). The quantity of time and energy devoted to transcription fluency between human beings or between human beings and machines is staggering. What if, on a not-so-distant day, the thought, the intent, could be transcribed fluently, instantaneously, to the recipient?

2 One test site was located in Marena, OK. That municipality contained one home, which may or may not have been inhabited. Around the turn of the 20th century, the US government offered these parcels, at no cost, to any family willing to work the land and grow food. Many tried. They discovered that, even at a price of “free,” it still wasn’t worth it. They left. The government retained the land as a hydrological test site. Circa 2014, I wandered these plots on hot summer days, equipment in hand, gathering data. The science was fascinating. Oklahoma is about as dull and dusty as Steinbeck would have you believe. 
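For readers curious what "determine whether held-out neurons are active" looks like in code, here is a deliberately simple caricature of the benchmark setup: synthetic spike counts and a plain ridge-regression baseline. It bears no resemblance to the models that actually top the leaderboard; it only illustrates the shape of the prediction problem.

```python
# Caricature of the NLB-style task: predict held-out neurons' spike counts
# from held-in neurons. Synthetic data and a linear baseline, for illustration only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_bins, n_latent, n_in, n_out = 2000, 5, 100, 30

latent = rng.normal(size=(n_bins, n_latent))          # shared hidden drive
rates_in = np.exp(0.5 * latent @ rng.normal(size=(n_latent, n_in)))
rates_out = np.exp(0.5 * latent @ rng.normal(size=(n_latent, n_out)))
spikes_in = rng.poisson(rates_in)                     # the ~100 observed neurons
spikes_out = rng.poisson(rates_out)                   # the few dozen to infer

model = Ridge(alpha=1.0).fit(spikes_in[:1500], spikes_out[:1500])
pred = model.predict(spikes_in[1500:])
corr = np.corrcoef(pred.ravel(), spikes_out[1500:].ravel())[0, 1]
print(f"held-out correlation: {corr:.2f}")            # a weak but real signal
```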
true
true
true
"We don't quit at halftime" (But we are in the lead)
2024-10-12 00:00:00
2021-11-23 00:00:00
https://ae.studio/blog/c…abay-34514-1.jpg
article
ae.studio
AE Studio
null
null
16,481,562
https://www.backblaze.com/blog/choosing-data-center/
What to Consider When Choosing a Data Center
Roderick Bauer
Though most of us have never set foot inside of a data center, as citizens of a data-driven world we nonetheless depend on the services that data centers provide almost as much as we depend on a reliable water supply, the electrical grid, and the highway system. Every time we send a tweet, post to Facebook, check our bank balance or credit score, watch a YouTube video, or back up a computer to the cloud, we are interacting with a data center.

In this series, *The Challenges of Opening a Data Center*, we’ll talk in general terms about the factors that an organization needs to consider when opening a data center and the challenges that must be met in the process. Many of the factors to consider will be similar for opening a private data center or seeking space in a public data center, but we’ll assume for the sake of this discussion that our needs are more modest than requiring a data center dedicated solely to our own use (i.e. we’re not Google, Facebook, or China Telecom). Data center technology and management are changing rapidly, with new approaches to design and operation appearing every year. This means we won’t be able to cover everything happening in the world of data centers in our series; however, we hope our brief overview proves useful.

## What is a Data Center?

A data center is the structure that houses a large group of networked computer servers typically used by businesses, governments, and organizations for the remote storage, processing, or distribution of large amounts of data. While many organizations will have computing services in the same location as their offices that support their day-to-day operations, a data center is a structure dedicated to 24/7 large-scale data processing and handling.

Depending on how you define the term, there are anywhere from half a million data centers in the world to many millions. While it’s possible to say that an organization’s on-site servers and data storage can be called a data center, in this discussion we are using the term data center to refer to facilities that are expressly dedicated to housing computer systems and associated components, such as telecommunications and storage systems. The facility might be a private center, which is owned or leased by one tenant only, or a shared data center that offers what are called “colocation services,” and rents space, services, and equipment to multiple tenants in the center.

A large, modern data center operates around the clock, placing a priority on providing secure and uninterrupted service, and generally includes redundant or backup power systems or supplies, redundant data communication connections, environmental controls, fire suppression systems, and numerous security devices. Such a center is an industrial-scale operation often using as much electricity as a small town.

### Types of Data Centers

There are a number of ways to classify data centers according to how they are set up and used. These factors include:

- Whether they are owned or used by one or multiple organizations
- Whether and how they fit into a topology of other data centers
- Which technologies and management approaches they use for computing, storage, cooling, power, and operations
- How green they are, which has become more important and visible in recent years

Data centers can be loosely classified into three types according to who owns them and who uses them. **Exclusive Data Centers** are facilities wholly built, maintained, operated and managed by the business for the optimal operation of its IT equipment. 
Some of these centers are well-known companies such as Facebook, Google, or Microsoft, while others are less public-facing big telecoms, insurance companies, or other service providers.

**Managed Hosting Providers** are data centers managed by a third party on behalf of a business. The business does not own the data center or space within it. Rather, the business rents the IT equipment and infrastructure it needs instead of investing in the outright purchase of what it needs.

**Colocation Data Centers** are usually large facilities built to accommodate multiple businesses within the center. The business rents its own space within the data center and subsequently fills the space with its IT equipment, or possibly uses equipment provided by the data center operator. Backblaze, for example, doesn’t own its own data centers but colocates in data centers owned by others. As Backblaze’s storage needs grow, Backblaze increases the space it uses within a given data center and/or expands to other data centers in the same or different geographic areas.

#### Availability is Key

When designing or selecting a data center, an organization needs to decide what level of availability is required for its services. The type of business or service it provides likely will dictate this. Any organization that provides real-time and/or critical data services will need the highest level of availability and redundancy, as well as the ability to rapidly fail over (transfer operation to another center) when and if required. Some organizations require multiple data centers not just to handle the computer or storage capacity they use, but to provide alternate locations for operation if something should happen temporarily or permanently to one or more of their centers.

Organizations operating data centers that can’t afford any downtime at all will typically operate data centers that have a mirrored site that can take over if something happens to the first site, or they operate a second site in parallel to the first one. These data center topologies are called Active/Passive and Active/Active, respectively. Should disaster or an outage occur, disaster mode would dictate immediately moving all of the primary data center’s processing to the second data center.

While some data center topologies are spread throughout a single country or continent, others extend around the world. Practically speaking, data transmission speeds put a cap on how far apart centers can be and still operate with the appearance of simultaneous operation. Linking two data centers located apart from each other — say no more than 60 miles, to limit data latency issues — together with dark fiber (leased fiber optic cable) could enable both data centers to be operated as if they were in the same location, reducing staffing requirements yet providing immediate failover to the secondary data center if needed. This redundancy of facilities and ensured availability is of paramount importance to those needing uninterrupted data center services.

### LEED Certification

Leadership in Energy and Environmental Design (LEED) is a rating system devised by the United States Green Building Council (USGBC) for the design, construction, and operation of green buildings. Facilities can achieve ratings of certified, silver, gold, or platinum based on criteria within six categories: sustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, and innovation and design. 
Green certification has become increasingly important in data center design and operation as data centers require great amounts of electricity and often cooling water to operate. Green technologies can reduce costs for data center operation, as well as make the arrival of data centers more amenable to environmentally conscious communities. The ACT, Inc. data center in Iowa City, Iowa was the first data center in the U.S. to receive LEED-Platinum certification, the highest level available.

(Pictured: the ACT data center, exterior and interior.)

## Factors to Consider When Selecting a Data Center

There are numerous factors to consider when deciding to build or to occupy space in a data center. Aspects such as proximity to available power grids, telecommunications infrastructure, networking services, transportation lines, and emergency services can affect costs, risk, security and other factors that need to be taken into consideration.

The size of the data center will be dictated by the business requirements of the owner or tenant. A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19-inch rack cabinets, which are usually placed in single rows forming corridors (so-called aisles) between them. This allows staff access to the front and rear of each cabinet. Servers differ greatly in size, from 1U servers (i.e. one “U” or “RU” rack unit measuring 44.50 millimeters or 1.75 inches) to Backblaze’s Storage Pod design that fits a 4U chassis, to large freestanding storage silos that occupy many square feet of floor space.

### Location

Location will be one of the biggest factors to consider when selecting a data center and encompasses many other factors that should be taken into account, such as geological risks, neighboring uses, and even local flight paths. Access to suitable available power at a suitable price point is often the most critical factor and the longest lead time item, followed by broadband service availability.

With more and more data centers available providing varied levels of service and cost, the choices increase each year. Data center brokers can be employed to find a data center, just as one might use a broker for home or other commercial real estate. Websites listing available colocation space, such as UpStack.com, or entire data centers for sale or lease, are widely used. A common practice is for a customer to publish its data center requirements, and the vendors compete to provide the most attractive bid in a reverse auction.

#### Business and Customer Proximity

The center’s closeness to a business or organization may or may not be a factor in the site selection. The organization might wish to be close enough to manage the center or supervise the on-site staff from a nearby business location. The location of customers might be a factor, especially if data transmission speeds and latency are important, or the business or customers have regulatory, political, tax, or other considerations that dictate areas suitable or not suitable for the storage and processing of data.

#### Climate

Local climate is a major factor in data center design because the climatic conditions dictate what cooling technologies should be deployed. In turn this impacts uptime and the costs associated with cooling, which can total as much as 50% or more of a center’s power costs. 
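To put rough numbers on that cooling claim, here is a quick illustrative calculation using the industry's standard PUE metric (total facility power divided by IT power). All figures below are invented for the example, not measurements from any real facility:

```python
# Illustrative only: what cooling overhead does to the power bill.
# PUE = total facility power / IT power; all numbers below are invented.
it_load_kw = 1000        # servers, storage, networking
cooling_kw = 500         # cooling at ~50% of the IT load
overhead_kw = 100        # lighting, UPS conversion losses, etc.

total_kw = it_load_kw + cooling_kw + overhead_kw
pue = total_kw / it_load_kw
annual_kwh = total_kw * 24 * 365
print(f"PUE = {pue:.2f}")                                      # 1.60
print(f"Annual bill at $0.08/kWh: ${annual_kwh * 0.08:,.0f}")  # ~$1.12M
```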
The topology and the cost of managing a data center in a warm, humid climate will vary greatly from managing one in a cool, dry climate. Nevertheless, data centers are located in both extremely cold regions and extremely hot ones, with innovative approaches used in both extremes to maintain desired temperatures within the center.

#### Geographic Stability and Extreme Weather Events

A major obvious factor in locating a data center is the stability of the actual site as regards weather, seismic activity, and the likelihood of weather events such as hurricanes, as well as fire or flooding. Backblaze’s Sacramento data center describes its location as one of the most stable geographic locations in California, outside fault zones and floodplains. Sometimes the location of the center comes first and the facility is hardened to withstand anticipated threats, such as Equinix’s NAP of the Americas data center in Miami, one of the largest single-building data centers on the planet (six stories and 750,000 square feet), which is built 32 feet above sea level and designed to withstand category 5 hurricane winds.

(Pictured: Equinix’s “NAP of the Americas” data center in Miami.)

Most data centers don’t have the extreme protection or history of the Bahnhof data center, which is located inside the ultra-secure former nuclear bunker Pionen, in Stockholm, Sweden. It is buried 100 feet below ground inside the White Mountains and secured behind 15.7 in. thick metal doors. It prides itself on its self-described “Bond villain” ambiance.

(Pictured: the Bahnhof data center under the White Mountains in Stockholm.)

Usually, the data center owner or tenant will want to take into account the balance between cost and risk in the selection of a location. The Ideal quadrant below is obviously favored when making this compromise.

Cost = Construction/lease, power, bandwidth, cooling, labor, taxes
Risk = Environmental (seismic, weather, water, fire), political, economic

Risk mitigation also plays a strong role in pricing. The extent to which providers must implement special building techniques and operating technologies to protect the facility will affect price. When selecting a data center, organizations must make note of the data center’s certification level on the basis of regulatory requirements in the industry. These certifications can ensure that an organization is meeting necessary compliance requirements.

#### Power

Electrical power usually represents the largest cost in a data center. The cost a service provider pays for power will be affected by the source of the power, the regulatory environment, the facility size and the rate concessions, if any, offered by the utility. At higher level tiers, battery, generator, and redundant power grids are a required part of the picture. Fault tolerance and power redundancy are absolutely necessary to maintain uninterrupted data center operation.

Parallel redundancy is a safeguard to ensure that an uninterruptible power supply (UPS) system is in place to provide electrical power if necessary. The UPS system can be based on batteries, saved kinetic energy, or some type of generator using diesel or another fuel. The center will operate on the UPS system with another UPS system acting as a backup power generator. If a power outage occurs, the additional UPS system power generator is available. Many data centers require the use of independent power grids, with service provided by different utility companies or services, to protect against loss of electrical service no matter what the cause. 
Some data centers have intentionally located themselves near national borders so that they can obtain redundant power from not just separate grids, but from separate geopolitical sources. Higher redundancy levels required by a company will invariably lead to higher prices. If one requires high availability backed by a service-level agreement (SLA), one can expect to pay more than a company with less demanding redundancy requirements.

### Continue with Part 2 of The Challenges of Opening a Data Center

That’s it for part 1. Read the second part here, where we’ll take a look at some other factors to consider when moving into a data center, such as network bandwidth, cooling, and security. In future posts, we’ll take a look at what is involved in moving into a new data center, what it takes to keep a data center running, and some of the new technologies and trends affecting data center design and use. Here’s a tip on finding all the posts tagged with *data center* on our blog. Just follow https://www.backblaze.com/blog/tag/data-center/.
true
true
true
There are numerous factors that go into choosing the right data center for your organization's needs. Location, proximity, power sources, and bandwidth are just a few of the factors that you must take into consideration. Let's take a closer look.
2024-10-12 00:00:00
2018-02-27 00:00:00
/blog/wp-content/uploads/2018/02/picking-datacenter.jpg
article
backblaze.com
Backblaze Blog | Cloud Storage & Cloud Backup
null
null
37,045,636
https://philip.greenspun.com/blog/2023/07/29/biking-without-bike-infrastructure-the-netherlands/
Biking without bike infrastructure: the Netherlands
Philg
In Danish happiness: bicycle infrastructure I described the Danish system of road/curb/bike path/curb/sidewalk. What if a significant percentage of a society used bicycles for transportation, but nobody bothered to build infrastructure? That’s the Netherlands! I recently visited a friend in Delft, a university town south of Amsterdam. There are generally no curbs at all in the downtown area. The road is informally divided into car/bike/pedestrian, but these divisions can change depending on what exactly is sticking out from a house, possibly forcing pedestrians into the bike area, or whether a truck is trying to use the road. The risk of injury has ballooned in the last few years due to the popularity of cargo bikes and electric bikes. Instead of getting hit by a 200 lb. person-bike combo going 8 mph you’ll get hit by a 400 lb. person-small person-groceries-bike combo going 15 mph. “Trouble in cyclists’ paradise: Amsterdam accused of favouring pedestrians” (Guardian 2021) describes the increasing conflict between walkers and bikers in Amsterdam. There aren’t as many collisions as you’d imagine, but pedestrians are required to be constantly mindful. This works for the Dutch, but tourists frequently wander casually into near-collisions with cyclists. What the cyclists have gained is balanced by a loss of mental peace and capacity among pedestrians. Here’s a narrow street designed for pedestrians in The Hague: The bicycle is being used for transportation, not recreation, so it might be whipping by these pedestrians at 10-20 mph. Here are the two transportation modes interacting in Delft: Maybe those white boxes are supposed to delineate between walking and biking? Or maybe there are two lanes for opposite directions? I didn’t figure it out. Just a few of the bikes parked near the Amsterdam Zuid secondary station: What if you choose “neither” but don’t have a car and/or don’t want to pay what my rich local friend said were insanely expensive parking fees? Take the tram! My take-away: the Danes did it right.

I lived in Amsterdam for a year, and yes, cyclists pay no heed to pedestrians whatsoever. The Dutch have this theory that abolishing the separation between sidewalk and street will force car drivers to pay attention. They claim this has actually reduced accidents, but I suspect cherry-picking of data, as is usually the case with urban planners and biking advocates. It’s unfair to say they have no bike infrastructure, however. They have an entire network of highways for bikes only, parallel to and completely separate from the car highway system. The typical Dutch person will think nothing of biking 30 miles, or the distance between Amsterdam and The Hague, as a regular commute.

Fazal: I’m not sure this purported “network of [bike] highways” exists other than for commuting in/out of Amsterdam. Google Maps never showed me anything like that when I played around with getting cycling directions to/from Delft.

Phil, we spent multiple weeks biking around the Netherlands (and Belgium, and France, but I digress), and I can absolutely confirm that there are separate bike “highways” almost everywhere in the Netherlands, including in and out of Delft. Google Maps is a bit deceiving because many of the bike paths are within a meter or two of a major roadway, separated by a grass strip, so at first glance it looks like Google is routing you down the road. But it’s not.
We became so accustomed to having our own separate bike “roads” through the Netherlands that when we got to Belgium, it almost felt insulting: “What? No bike roads? We have to share roads with CARS?”. Except, of course, the bicycle infrastructure in Belgium is still insanely good. Just not as insanely good as in the Netherlands.

“The typical Dutch person will think nothing of biking 30 miles or the distance between Amsterdam and The Hague as a regular commute.” – How does it work in the rain? 30 miles on a bike means roughly a 3-hour commute. And it could make for a soapy business meeting if and when you get there. No wonder the Netherlands’ economy trails those of Austria and Germany and is not even close to Switzerland’s GDP per capita. The Netherlands used to be one of the world’s top manufacturers.

At least speed limits are lower for cars. Try that on a 50mph rural Fl*rida road.

“My take-away: the Danes did it right.” I am sure they did, but this post is not about them. As much as I love to bike and love Dutch culture, I think the Dutch got it wrong on transportation. bsd

Faizal: “pay no heed” is generous. On the Upper West Side getting on a bike, any bike, turns you into an asshole who is not obligated to stop at red lights or slow down for babies being pushed in strollers. And this is across the board, to include DoorDouche delivery ebikes going 25-35 mph in the “bike lanes” (disregarding direction arrows). Peace.
true
true
true
In Danish happiness: bicycle infrastructure I described the Danish system of road/curb/bike path/curb/sidewalk. What if a significant percentage of a society used bicycles for transportation, but n…
2024-10-12 00:00:00
2023-07-29 00:00:00
https://i0.wp.com/philip…0x1140.jpg?ssl=1
article
greenspun.com
Philip Greenspun’s Weblog
null
null
38,951,084
https://www.wired.com/story/scammy-ai-generated-books-flooding-amazon/
Scammy AI-Generated Book Rewrites Are Flooding Amazon
Kate Knibbs
When AI researcher Melanie Mitchell published *Artificial Intelligence: A Guide for Thinking Humans* in 2019, she set out to clarify AI’s impact. A few years later, ChatGPT set off a new AI boom—with a side effect that caught her off guard. An AI-generated imitation of her book appeared on Amazon, in an apparent scheme to profit off her work. It looks like another example of the ecommerce giant’s ongoing problem with a glut of low-quality AI-generated ebooks.

Mitchell learned that searching Amazon for her book surfaced not only her own tome but also another ebook with the same title, published last September. It was only 45 pages long and it parroted Mitchell’s ideas in halting, awkward language. The listed author, “Shumaila Majid,” had no bio, headshot, or internet presence, but clicking on that name brought up dozens of similar books summarizing recently published titles.

Mitchell guessed the knock-off ebook was AI-generated, and her hunch appears to be correct. WIRED asked deepfake-detection startup Reality Defender to analyze the ersatz version of *Artificial Intelligence: A Guide for Thinking Humans*, and its software declared the book 99 percent likely AI-generated. “It made me mad,” says Mitchell, a professor at the Santa Fe Institute. “It’s just horrifying how people are getting suckered into buying these books.”

Amazon took down the imitation of Mitchell’s book after WIRED contacted the company. “While we allow AI-generated content, we don't allow AI-generated content that violates our Kindle Direct Publishing content guidelines, including content that creates a disappointing customer experience,” Amazon spokesperson Ashley Vanicek says.

But Mitchell is far from the only AI researcher apparently targeted using the same technology they work on. Search Amazon for pioneering computer scientist Fei-Fei Li’s new memoir *The Worlds I See: Curiosity, Exploration, and Discovery in the Age of AI* and over a dozen different summaries come up. Unlike the knockoff of Mitchell’s book, the summaries of Li’s announce themselves as such. One, forthrightly titled *Summary and Analysis of The Worlds I See*, has a product description that begins: “DISCLAIMER!! THIS IS NOT A BOOK BY FEI-FEI LI, NOR IS IT AFFILIATED WITH THEM.IT IS AN INDEPENDENT PUBLICATION THAT SUMMARIZES FEI-FEI LI BOOK IN DETAILS.IT IS A SUMMARY.”

Yet these books, too, appear to be AI-generated and to add little value for readers. Reality Defender analyzed a sample of the *Summary and Analysis* book and found it was also likely AI-generated. “A complete and total rewriting of the text. Like, someone queried an LLM to rewrite the text, not summarize it,” Reality Defender head of marketing Scott Steinhardt says. “It’s like a KidzBop version of the real thing.” Reached for comment over email, Li distilled her reaction into a single emoji: 🤯.

Sleazy book summaries have been a long-running problem on Amazon. In 2019, *The Wall Street Journal* found that many used deliberately confusing cover art and text, irking writers including entrepreneur Tim Ferriss. “We, along with some of the publishers, have been trying to get these taken down for some time now,” says Authors Guild CEO Mary Rasenberger. The rise of generative AI has supercharged the spammy summary industry. “It is the first market we expected to see inundated by AI,” Rasenberger says. She says these schemes fit the strengths of large language models, which are passable at producing summaries of work they’re fed, and can do it fast.
The fruits of this rapid-fire generation are now common in searches for popular nonfiction titles on Amazon. AI-generated summaries sold as ebooks have been “dramatically increasing in number,” says publishing industry expert Jane Friedman—who was herself the target of a different AI-generated book scheme. That’s despite Amazon in September limiting authors to uploading a maximum of three books to its store each day. “It's common right now for a nonfiction author to celebrate the launch of their book, then within a few days discover one of these summaries for sale.”

Writer Sarah Stankorb is one such author. This summer, she published *Disobedient Women: How a Small Group of Faithful Women Exposed Abuse, Brought Down Powerful Pastors, and Ignited an Evangelical Reckoning*. Summaries appeared on Amazon within days. One, which she suspects was based on an advance copy commonly distributed to reviewers, appeared the day *before* her book came out.

Stankorb was aghast. *Disobedient Women* was the product of years of careful reporting. “It’s disturbing to me, and on multiple moral levels seems wrong, to pull the heart and sensitivity out of the stories,” she says. “And the language—it seemed like they just ran it through some sort of thesaurus program, and it came out really bizarre.”

Comparing the texts side by side, the imitation is blatant. Stankorb’s opening line: “In my early days reporting, I might do an interview with a mompreneur, then spend the afternoon poring over Pew Research Center stats on Americans disaffiliating from religion.” The summary’s opening line: “In the early years of their reporting, they might conduct a mompreneur interview, followed by a day spent delving into Pew Research Center statistics about Americans who had abandoned their religious affiliations.” Reality Defender rated the summary of Stankorb’s book as 99 percent likely AI-generated.

When Mitchell found out about her own AI imitator, she vented in a post on X, asking, “Is this legal?” Now, she has doubts she could successfully take anyone to court over this. “You can’t copyright the title of a book, I’ve been told,” Mitchell says. Unsure of whether she has any recourse, she hasn’t contacted Amazon.

Some copyright scholars say that a summary is legal as long as it refrains from explicit word-for-word plagiarism. Kristelia Garcia, an intellectual property law professor at Georgetown University, draws a comparison with the original blockbusters of the summary world: CliffsNotes, the long-running study guide series that provides student-friendly explanations of literature. “CliffsNotes aren't legal because they're fair use. They're legal because they don't actually copy the books. They just paraphrase what the book is about,” Garcia says via email.

Other IP experts aren’t so sure. There’s a big difference, after all, between CliffsNotes—which provide substantive analysis of a book in addition to summarizing it and are written by humans—and this newer wave of summaries clogging up Amazon. “Simply summarizing a book is harder to defend,” says James Grimmelmann, a professor of internet law at Cornell University. “There is still substantial similarity in the selection and arrangement of topics and probably some similarity in language as well.” Rasenberger of the Authors Guild sees a 2017 case where Penguin Random House sued authors who created children’s editions of its titles as a precedent that could help writers fight AI-generated summaries.
The court found that the children’s summaries were not legal, because they were primarily devoted to retelling copyrighted stories. Until an author actually files a lawsuit against the creator of one of these new generation summaries, their legality remains an open question. And although Amazon did take down Mitchell’s summary, it has not announced plans to proactively monitor this wave of summaries. “I hate that this is the new reality, but it would likely take a significant and recurring PR nightmare for a change in policy to occur at Amazon,” Friedman says. Right now, there’s nothing much stopping the next AI ebook hustler from creating a new summary and uploading it tomorrow. “It’s ridiculous that Amazon doesn’t seem to be doing much to stop it,” Mitchell says. Then again, the publishing industry doesn’t seem to know quite how to handle this, either. Mitchell remembers the resigned way her agent responded when she wrote to her about the AI imitation: “This is just the new world we’re living in.”
true
true
true
Authors keep finding what appear to be AI-generated imitations and summaries of their books on Amazon. There's little they can do to rein in the rip-offs.
2024-10-12 00:00:00
2024-01-10 00:00:00
https://media.wired.com/…ss-484917294.jpg
article
wired.com
WIRED
null
null
9,480,455
http://securityaffairs.co/wordpress/36505/cyber-crime/political-malvertising-campaign.html
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
10,586,921
https://www.madetech.com/news/making-multiple-browserify-bundles-with-gulp
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,522,363
https://medium.com/humans-of-xero/technical-debt-f5158cc9ca07
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,914,404
http://signaltower.co/2013/12/10/michael-fitzgerald-show-up/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,718,451
http://vospe.com/2010/09/22/artsy-hacks-maria-zeldis/#more-556
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,347,330
http://theideaisshit.com
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,328,379
http://www.ocf.berkeley.edu/~broockma/broockman_skovron_asymmetric_misperceptions.pdf
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,991,328
https://twitter.com/twiet/status/417812596667191296
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
20,541,995
https://medium.engineering/microservice-architecture-at-medium-9c33805eb74f
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,526,693
http://www.nextbigwhat.com/startup-funding-lateral-means-297/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,334,752
https://twitter.com/#!/search/twilio%20basket
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
2,559,045
http://chrisyeh.blogspot.com/2011/05/angel-investing-is-like-nba-draft.html
Angel Investing Is Like The NBA Draft
null
null
true
true
true
Each year, NBA scouts devote their time to finding flaws in players. Landry Fields lacks athleticism. DeJuan Blair is too short. And each ye...
2024-10-12 00:00:00
2011-05-01 00:00:00
null
null
blogspot.com
chrisyeh.blogspot.com
null
null
12,132,700
http://journals.aps.org/prx/abstract/10.1103/PhysRevX.6.031007
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
297,524
http://www.independent.co.uk/life-style/gadgets-and-tech/news/google-once-reviled-computer-superpowers-but-domination-is-just-what-it-is-achieving-921451.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,960,317
https://techcrunch.com/2024/01/11/amazon-owned-audible-lays-off-5-of-staff/
Amazon-owned Audible lays off 5% of staff | TechCrunch
Amanda Silberling
Audible, the Amazon-owned audiobook company, is laying off 5% of its staff, according to a leaked memo obtained by Business Insider. Per the memo, Audible CEO Bob Carrigan praised staff for a strong 2023 and assured them that the business was in good shape… but, due to the “increasingly challenging landscape,” the company is still making cuts. This is a common refrain in the tech sphere, where companies across all sectors have been beleaguered by ongoing layoffs since 2022. Audible did not respond to requests for comment.

dear staff, thank you for making so much money last year now, we're firing a lot of you because our parent company thinks it will make it look better love, ur boss https://t.co/dT0zey1S7r pic.twitter.com/7qmzINkKs9 — alex (tired) (@alex) January 11, 2024

Amazon has been cutting back its workforce aggressively over the last year. Across the company, in 2023, Amazon laid off around 27,000 employees across multiple departments, including AWS, Twitch and advertising. Now, just this week, Twitch laid off another 500 employees, and Amazon’s MGM Studios and Prime Video let go of “several hundred” employees. It’s an ominous time for Amazon’s entertainment products.

Prime Video aside, all of these organizations at Amazon — Twitch, MGM Studios and Audible — came to the company via acquisition. Most recently, Amazon spent $8.5 billion on MGM in 2022, which brought more than 4,000 films and 17,000 TV shows to the Prime Video streaming service. Twitch, the gaming-focused livestream platform, came to Amazon for about $1 billion in 2014. Audible has been part of Amazon since 2008, when it was acquired for $300 million.

Twitch is massively popular, but the costs of operating a huge livestreaming service are high, so the company has remained unprofitable. For MGM and Prime Video, senior vice president Mike Hopkins said the division is making cuts to “reduce or discontinue investments in certain areas while increasing our investment and focus on content and product initiatives that deliver the most impact.”
true
true
true
Audible, the Amazon-owned audiobook company, is laying off 5% of its staff, according to a leaked memo obtained by Business Insider. Per the memo, Audible
2024-10-12 00:00:00
2024-01-11 00:00:00
https://techcrunch.com/w…s-1179901012.jpg
article
techcrunch.com
TechCrunch
null
null
7,909,781
http://jetpack.me/2014/06/18/how-to-install-stats-and-analytics-on-your-wordpress-site-with-jetpack/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,635,461
https://maikel.pro/blog/how-apts-can-intercept-messaging-apps/#whatsappsmsreactivationande2eencryption
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
24,634,437
https://medium.com/cantors-paradise/mandelbulb-three-dimensional-fractals-d74c0317b76d
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,638,827
http://alexn.id.au/2013/10/30/the-end-of-coding/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,161,006
https://medium.com/sample-collection/what-i-ve-learned-about-giving-feedback-to-peers-707515d5e927
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,515,404
https://www.archaeology.org/news/8795-200612-bulgaria-ritual-pits
News - Thracian Ritual Pits Discovered in Bulgaria - Archaeology Magazine
Jessica Esther Saraceni
# Thracian Ritual Pits Discovered in Bulgaria

News June 11, 2020

BURGAS, BULGARIA—*The Sofia Globe* reports that 14 ritual pits dated to about the fifth century B.C. are being excavated at a construction site on southern Bulgaria’s Black Sea coast. At least ten additional pits have also been found. Miroslav Klasnakov of the Regional History Museum said human and animal bone, coal, and fragments of ceramic bowls and amphoras have been recovered. Most of the pottery was made locally, he added. To read about an anthropomorphic oil vessel found in a third-century A.D. Thracian burial, go to "Bath Buddy."
true
true
true
BURGAS, BULGARIA—The Sofia Globe reports that 14 ritual pits dated to about the fifth century […]
2024-10-12 00:00:00
2020-06-11 00:00:00
https://archaeology.org/…a-Ritual-Pit.jpg
article
archaeology.org
Archaeology Magazine
null
null
34,259,773
https://cointelegraph.com/news/lastpass-data-breach-led-to-53k-in-bitcoin-stolen-lawsuit-alleges
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
10,845,077
https://www.washingtonpost.com/news/soloish/wp/2016/01/05/my-tinder-date-with-pharma-bro-martin-shkreli/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
31,028,433
https://www.theguardian.com/us-news/2018/oct/19/billionaires-wealth-richest-income-inequality
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
17,886,727
https://medium.com/@per.fredelius/gui-s-are-more-user-friendly-than-terminals-because-of-discoverability-and-limiting-false-dcc8ed62884e
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,662,219
https://www.kickstarter.com/projects/830527119/point-a-softer-take-on-home-security
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,840,611
http://www-igm.univ-mlv.fr/~lecroq/string/
EXACT STRING MATCHING ALGORITHMS Animation in Java
null
Christian Charras - Thierry Lecroq Laboratoire d'Informatique de Rouen Université de Rouen Faculté des Sciences et des Techniques 76821 Mont-Saint-Aignan Cedex FRANCE
true
true
true
EXACT STRING MATCHING ALGORITHMS Animation in Java
2024-10-12 00:00:00
1997-01-01 00:00:00
null
null
null
null
null
null
10,558,886
https://www.youtube.com/playlist?list=PLe-h9HrA9qfA9qK9eTbDW5c5_qrRmoR34
Wrangle Conference 2015
null
null
true
true
true
Share your videos with friends, family, and the world
2024-10-12 00:00:00
2024-10-10 00:00:00
https://i.ytimg.com/vi/x…ince_epoch=20008
website
youtube.com
YouTube
null
null
881,069
http://www.b-list.org/weblog/2009/oct/14/registration/
django-registration
null
# django-registration So, life has been eventful lately. There was DjangoCon, which was awesome even though I came away deeply unhappy with how my talk turned out; due to a lot of hectic things going on, it fell far below the standard I usually like to enforce for myself. I’ve got a couple things cooking for PyCon, though, which will hopefully make up for it. Things are starting to ramp up for the Django 1.2 development cycle, which is looking to be chock full of awesomeness. There’s quite a lot of interestingness going on at work which, unfortunately, I can’t really talk much about right now but which is bringing with it a general “OMG BUSY” status. And, of course, there was last week’s emergency Django security release. What little free time I’ve had has been divided between two personal projects. One is finishing up the code repository for the second edition of Practical Django Projects (which I’d hoped to get some serious time on last weekend; the security stuff kinda crapped all over that, so I’m hoping to get back to it Saturday or Sunday, while curled up on my couch with a bunch of football games). The other is a complete rewrite of my most popular open-source application, django-registration. As luck would have it, I’ve recently been able to devote a bit of work time to it and get it nearly finished, so I’d like to take a minute to talk about that. I’ve always tried to have django-registration offer some opportunities for customizing how users sign up, because that’s a feature which tends to have such varied requirements from site to site. There’s always been, for example, the ability to specify custom form classes, custom templates and custom redirects for actions which require them. But at its core, django-registration has always had the default workflow — two-phase register-then-activate — more or less hard-coded in, and that’s a problem: there are lots of use cases where it just doesn’t fit, and where working around django-registration’s assumptions was either difficult or impossible. So about six months ago I quietly forked the repository and started working on a new, more flexible approach, inspired by Django’s own pluggable authentication system, which in turn owes a lot to various similar systems which preceded it. My goal was to move all the actual logic of user registration into a standard interface, then rewrite django-registration to work with anything which implements that interface. The result, today, is the first alpha release of django-registration 0.8, which you can download from the project page at Bitbucket. A copy of the in-progress documentation is also up, and is built using Sphinx, which is frankly awesome for this. Browsing around the documentation (pay particular attention to the release notes and upgrade guide) should give you a feel for what’s new and what’s changed. All the actual work of registering (and, optionally, activating) accounts is now handled by swappable backend classes; to plug in a new workflow, simply write a backend which implements it. The full backend API, for those of you who’d like to try your hand at it, consists of six methods, two of which are optional. Between them, I’m pretty sure they cover the whole range of things people will need or want to do with user registration. The original two-phase workflow used by older releases is now implemented as a bundled backend, and is mostly backwards-compatible: you’ll need to add one new template, and make a couple changes to one other, but aside from that things should Just Work(TM). 
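The post doesn’t spell those six methods out here, so as a rough illustration of the shape of a swappable backend, here is a minimal sketch. The class and method names are guesses based on the workflow described above, not the documented 0.8 interface; consult the project’s Sphinx docs for the real API:

```python
# Illustrative sketch of a pluggable registration backend. Names are
# invented for illustration; see the django-registration 0.8 docs for
# the actual backend API.
from django.contrib.auth.models import User


class SimpleBackend:
    """A one-step workflow: accounts are active immediately on signup."""

    def registration_allowed(self, request):
        # Hook for closing registration, e.g. behind a settings flag.
        return True

    def register(self, request, **cleaned_data):
        # Create and return the new user from validated form data.
        return User.objects.create_user(
            cleaned_data["username"],
            cleaned_data["email"],
            cleaned_data["password1"],
        )

    def activate(self, request, *args, **kwargs):
        # A one-step workflow has no separate activation phase.
        raise NotImplementedError
```

A two-phase backend like the bundled default would instead create an inactive account and send an activation email in `register`, and do the real work in `activate`.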
If you’re feeling adventurous and you’d like to try it out, grab the 0.8 alpha package and install it. Note that you’ll need to download it directly from Bitbucket, or point your favorite package-install tool at the Bitbucket URL; this package is not on PyPI and won’t be until 0.8 goes final. If you spot bugs, head on over to the issue tracker and let me know about them. Between now and the final 0.8 release — and I don’t yet know when that’ll happen — I’ve got a few more things I’d like to get done: - Adding at least one other backend bundled with django-registration, as an example of how to do that. - Collecting and checking in updated translations (there are two new strings, and a few that no longer need explicit translation because they’re translated by Django itself). I’m looking at Transifex (which is both incredibly awesome, and Django-powered) as an option for making this easier all around. - Expanding the documentation and smoothing out any wrinkles in the upgrade process; it isn’t 100% painless and can’t be since the changes under the hood were so big, but I suspect there’s still room for improvement. Once it’s out the door, django-registration 0.9 will begin; I’m hoping it’ll collect fixes for any bugs which have crept through unnoticed, permanently stabilize the backend API, and provide a stepping stone toward a 1.0 release. If you’re interested in helping, feel free to grab a copy of the code from the Mercurial repo (if you have a Bitbucket account, you can just fork directly) and dive in.
true
true
true
So, life has been eventful lately. There was DjangoCon, which was awesome even though I came away deeply unhappy with …
2024-10-12 00:00:00
2024-06-03 00:00:00
null
null
b-list.org
b-list.org
null
null
35,032,112
https://staysaasy.com/business/2023/03/01/culture-viruses.html
Culture Viruses
null
In large organizations, culture is key. The values and habits of an organization, and what it rewards and punishes, are the background radiation driving towards discrete outcomes in a world of infinite possibility. Your culture is a living thing - it changes and adapts to new teammates, external forces, and the broader environment. And sometimes it gets sick; sometimes your culture gets a virus.

A Culture Virus is a contagious idea that hooks into your culture like a pathogen, passing from person to person, and very often preying on the weak and struggling - the people who are susceptible to convenient excuses. Below we’ll work through examples of common culture viruses that can occur as companies grow. These elements of culture aren’t matters of style – if you let them creep into your company, they will meaningfully deflate, devalue, and debase your company.

## Culture Virus Examples

### Culture Virus One: Lack of accountability

Lack of accountability is the most common culture virus in existence. The most common and problematic cause for a lack of accountability is when organizations don’t have easy ways to understand whose fault something is. As organizations grow, often they’ll transform into something where the specific outcomes for the business can be divorced from people’s day-to-day work. On software teams, long build times are a common cause here: I delivered my thing 4 months ago, it’s not my fault they screwed it up downstream; I got bad requirements, look upstream for the problem. Sometimes build times are so long you can just blame people who aren’t at the company anymore.

Another common case is people gaslighting the company about the difficulty of their job - you know what outcomes they own, but they always claim the work was much more difficult than you imagine. This is particularly problematic in roles where skills are specialized or isolated, and when it’s done at a level where things are never quite bad enough to justify diving in and figuring out just how difficult the role really is.

This virus reaches a fever pitch in two modes:

- When teams just constantly blame other teams for everything, constantly.
- When there’s so little accountability that teams never blame other teams, because nobody is being held to account for anything. Often the company sees that kind of civility as evidence things are going well, because people fighting becomes the only short-term metric they have for inter-team collaboration. As a result, this failure mode is actually worse.

### Culture Virus Two: Burnout, Resourcing, and The Impossible

A second common culture virus straddles the line between illogical burnout, resourcing, and the impossible. This virus looks like different forms of saying “there is too much work” way too soon.

Burnout is really real. Many people are overworked and tired. However, sometimes a team or a group catches a burnout Culture Virus. Pathogenic burnout seems to show up constantly, regardless of current conditions or workload, and when you zoom in on specific causes or hours worked, it just doesn’t add up. Often this is accompanied by claims that are found to be obviously false, e.g. people saying things like “I worked literally all night” when you’re positive that’s not true.

Another version of the “there’s too much stuff” virus claims that there’s never enough people, or not the right people. Specialization can become a hammer for every solution: we need a QA team, operational staff, a chief of staff, an administrative assistant, an event coordinator, etc.
Or sometimes people start to treat people like resources that add and subtract to get outcomes - I need one more engineer to do project X (when in reality, output is determined much more by the mountain of decisions made in each project than by the resourcing input).

The final virus in this trio is fatalism: “We tried that already.”, “It’s impossible.” People will say this about things that are clearly possible. The overwhelming majority of outcomes any software startup might want to achieve are achievable with enough effort; very few things are impossible. Whether people use this as an attempt at short-circuiting conversations, or to express severity, or simply don’t realize things can be done - once this becomes part of the lexicon it can spread like wildfire.

All of these viruses lead to the same place - people always, constantly, pathologically saying no, we can’t do more, we don’t have enough. This virus is particularly challenging because often these issues are 100% real, and it can take a lot of work to understand and be decisive on whether it’s real or a culture virus.

### Culture Virus Three: Toxic Positivity

The final culture virus we’ll examine is Toxic Positivity. This one is interesting because it’s more often found in leaders. To win at a startup you need a very unreasonable willingness to keep going, to move forward, to make it work. Like, a crazy amount. Burn the boats. There’s no going back. The problem is that this fanatical commitment often blurs into toxic positivity - not admitting when things are wrong. Theranos is a great example here.

It is critically, critically important that your leadership group never loses hold of the truth. It’s important that they never lose hold of the ability to make a nuanced and accurate analysis of what’s working and what’s not. If you do lose this, you’ve caught a culture virus, and it can become the way of business that problems aren’t acknowledged, issues are brushed off, glaring gaps are quietly ignored. Other symptoms of this virus include the rapid radicalization of arguments, e.g. “I think the home page doesn’t look good” gets the response “Well maybe you shouldn’t work at this company.”

### Fighting Culture Viruses

Your organization needs to have an immune system for Culture Viruses. You must have leaders and team members roaming the proverbial organizational body like white blood cells, ready to remove any signs of culture viruses. It’s important that your approach to fighting culture viruses is always-on. Approaches that don’t remove the virus give it a chance to adapt and learn how to shape-shift.

While much of leadership is a game of moderation, fighting culture viruses is the one place you must have no chill. You cannot let people leave the conversation having merely agreed to disagree. You must be clear there is no other acceptable point of view, and you’re willing to back that up with your reward system. It’s important, however, that you enforce these cultural values evenly, without malice or anger. You have to be willing to hire and fire based on them, but you need to avoid doing it in a way that is culty or problematic.

Other things you can do to help fight culture viruses:

- Meaningful equity stakes help. Accountability diffusion often happens when people perceive paths to optimizing their compensation that don’t strictly align with adding value to the company.
- Leadership really matters. One bad leader can change the culture for a huge number of employees.
Finding the right leaders and ensuring you have the right amount of visibility and oversight into the culture they’re instilling is important. Outcomes can be due to many reasons, but if the culture underneath is rotten, problems will eventually arise.

For the culture viruses above, the following can be done:

- Accountability: Be insistent that people show business value from their work for as long as humanly possible. If it ever becomes totally impractical, keep it as the default expectation and be reasonable in interpreting aberrations. The minute you let people get credit for things that are totally divorced from business value, you’ve started to lose the war.
- Possibility/Resourcing: Avoid anything that looks like resourcing math as a valid argument for engineering projects (e.g. 2 eng = X, 3 eng = Y). Avoid operating in a way where people are incentivized to get you to agree to smaller business outcomes - e.g. if “finishing your OKRs” is valued more than “your OKRs delivered the amount of value we wanted”, you’re in trouble.
- Positivity: Especially with leadership discussions, be clear that a nuanced discussion is happening and desired, and potentially even say out loud that the goal is to build understanding, not enforce points of view. Also: leaders are often very busy - avoid having nuanced discussions in time-restricted forums (including instant messaging).

### You Have To Be Right, You Have To Be Careful

Given you have to have no chill with Culture Viruses, you have to make sure you’re right about what you’re going after, and you have to do it appropriately. Be very careful in your diagnosis, but decisive in your execution. In fact, cult-style culture enforcement is a culture virus in itself. My best advice here is:

- Remove emotion from the equation. A good culture is like a river, not a hero or a villain - it just keeps going because that’s the way it is.
- Enforce your culture values (and fight culture viruses) more than you talk about them.
- Beware of, look for, and prevent people from using your culture as a way to complain about or attack others.

In general, always think of culture as a way to promote, not to punish; halting culture viruses isn’t foundationally about punishing, it’s just about stopping the behavior.
true
true
true
A Culture Virus is a contagious idea that hooks into your culture like a pathogen, passing from person to person, and very often preying on the weak and struggling - the people who are susceptible to convenient excuses..
2024-10-12 00:00:00
2023-03-01 00:00:00
https://staysaasy.com/as…ack-ogimage.jpeg
article
staysaasy.com
Stay SaaSy
null
null
15,350,706
https://futurism.com/astronauts-could-use-tv-video-games-to-combat-isolation-in-space/
Astronauts could use TV, video games to combat isolation in space
The Conversation
## Long-Range Space Journey No one knows for sure what a long-range space journey will be like for the people on board. Nobody in the history of our species has ever had to deal with the “Earth-out-of-view” phenomenon, for instance. How will it feel to live in close quarters with a small group, with no escape hatch? How will space travelers deal with the prospect of not seeing family or friends for years, or even ever again? How will they occupy themselves for years with nothing much to do? Researchers do know some things from observing astronauts who’ve stayed in space stations revolving around Earth for long periods of time, people who spent a lot of time shut off from the outside world in isolated regions (such as on polar expeditions) and from experiments with simulated Mars missions. Because astronauts would have a lot of free time to fill, some researchers have casually suggested sending along a selection of books and films or even bespoke video games. As a social scientist who studies media use and its effects on behavior, I believe television could help. Recreating the media environment from before we had permanent, continuous access to anything we want to watch or listen to might be just the thing to help space travelers cope with a loss of a sense of space and time, with loneliness, privacy issues, boredom and more. ## Floating Rudderless in Space and Time In space, the distinction between days of the week, day and night, or morning and noon will be mostly meaningless. Before DVDs and streaming, television helped us structure our time. For some, “lunch” was when a particular game show came on. “Evening” started with the news. “Thursday” was when the next episode of our favorite drama finally arrived. Seasonal programming split the year into chunks (Halloween, Thanksgiving, Christmas). Annual events, such as the Super Bowl, helped us realize yet another year had passed. A media system that recreates structured access would help define time in space, something unlimited access to a random list of movies would not. Knowing that you were watching something that millions of others were watching at the same time created a particular group feeling – think tuning in to a royal wedding or a presidential funeral. It remains to be seen how today’s fracturing of the media landscape has changed that. Interestingly, one of the earliest occasions where millions around the world shared a bond in front of their or their neighbor’s TV was the first lunar landing. ## Out of Reach, Out of Touch One reason prisoners like to watch television is that it shows them how the world outside is evolving. If we don’t want long-range space travelers to return feeling like aliens, they will need to keep up-to-date with what’s happening back on Earth. Television news has an “agenda setting” effect: It tells viewers not only what is going on, but also what matters to people, and public opinion about current events. Entertainment media, from reality shows to game shows to drama, display how fashion, vocabulary and even accents are evolving. Tuning in to what’s going on back home is also a way to counteract the “Earth-out-of-view” phenomenon. The feeling of being on top of what’s happening on Earth may help keep the psychological connection to the home planet active and strong. ## Heritage Maintenance Members of a crew are likely to have different cultural backgrounds. The distinctions are biggest if they come from far-flung countries or different language families. 
Immigrants, for instance, use the media to integrate more quickly into their new culture. But exposure to home media is also a way to keep a connection to (and derive support from) the culture of origin. Imagine a crew consisting mainly of people from the United States, but with one member from, say, Japan. It will be equally important to facilitate integration and bonding by making media content available that everyone can consume as a group as it is to make specific content available that (in this example) may cater to a person who grew up in Japan.

## Balancing Solitude and Community

As individuals, astronauts will crave autonomy and privacy. Media can help create “alone time.” Being immersed in a book, a movie or music (using headphones) helps lock out the environment, as every teenager knows. At the same time, astronauts as a group will need to work on interconnection to be successful. Even though media are often blamed for dissolving social cohesion, they can also create and reinforce powerful feelings of community and group cohesion. Even in families where everyone has their own smartphone and a TV set, a lot of group viewing occurs because members enjoy being in each other’s company. Spectator sports, in particular, can create strong bonds. Of course, it makes sense that individuals’ interests differ among groups, cultures and genders, as well as with personal preferences. A supportive media access program will need careful pretesting long before the journey starts.

## Building On How We Already Use Media

Media can do much more. People turn to media for mood management, either when they feel down or want to relax. The distraction caused by media is usually seen as negative for people trying to avoid overeating, but if food is bland and monotonous, that might be a good thing. There are dangers, too. Media can distract from necessary tasks, affect sleep or lead to addiction-like behaviors. News from Earth or exposure to social media could induce fear and anxiety for loved ones.

There is, finally, a more mundane but perhaps also more fundamental reason to incorporate media into the daily lives of future Mars travelers. They will be drawn from a generation that grew up immersed in and surrounded by media access and content. Recreating a reasonable facsimile of that environment may go a long way toward making astronauts feel a little bit more at home out there.
true
true
true
Keep in mind, astronauts would have a lot of downtime as they travel through space.
2024-10-12 00:00:00
2017-09-27 00:00:00
https://wordpress-assets…n_cupola_iss.jpg
article
futurism.com
Futurism
null
null
20,487,383
https://www.nytimes.com/2019/07/20/science/nasa-food-gardening-mars.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
2,081,224
http://www.techvibes.com/blog/five-sins-of-tech-journalism-2011-01-07
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
26,859,728
https://lwn.net/Articles/853244/
Debian's election results
null
# Debian's election results

Posted Apr 18, 2021 15:16 UTC (Sun) by

There's a better link for visualizing the outcome here: https://www.debian.org/vote/2021/vote_002#outcome

I think calling it a "strong result" is misleading. With 420 voters, "No statement" beat:
- "Call for Stallman's resignation" by 8 votes,
- "Call on the FSF to further its governance processes" by 1 vote.

Also, "Call for Stallman's resignation" strongly beat "Discourage collaboration", which beat "further governance processes". So a 1 vote swing would have created a Condorcet cycle between all 4 of these options! That's as close to a tie as you can possibly get without having an actual tie.

OTOH, all the pro-Stallman options got buried under "Further discussion" by massive supermajorities.

My interpretation: if Debian had to take a side, they would be strongly against Stallman continuing at the FSF. But there isn't a strong consensus about how to do that, or if it's appropriate for Debian to take a side at all, plus there's a minority who does support Stallman, so putting all those factors together "No statement" narrowly won out.

Posted Apr 18, 2021 15:34 UTC (Sun) by

Visualizing the vote results gives me a headache. I understand the motivation for complicated voting protocols, but yikes... this is crazy.

Posted Apr 18, 2021 16:39 UTC (Sun) by

Posted Apr 19, 2021 3:07 UTC (Mon) by

Posted Apr 19, 2021 16:17 UTC (Mon) by

Only 2 votes have had the final winner defeat another contender by a single-digit margin: the 2006 and 2007 DPL elections of Anthony Towns and Sam Hocevar (respectively), both of whom nearly lost to Steve McIntyre. However, even then the margins were 6 and 8, as opposed to a single vote. Further, in both cases the Schwartz set would STILL have been trivial, since Steve defeated all other candidates.

This vote, by contrast, had the winning option nearly lose to TWO candidates: the call for RMS's resignation and the call for some introspection by the FSF. In the latter case, that was a one-vote margin. Had either margin been switched, the Schwartz set would contain five (!) options.

So yeah. Pretty controversial stuff. It is definitely true that the "support RMS" options were VERY unpopular, but a few votes could have changed the result from "Debian issues no statement" to anything from "Debian signs the letter demanding the entire board of the FSF resign" to "Debian asks the FSF to have a good hard think about these things".

Posted Apr 19, 2021 17:14 UTC (Mon) by

This is true for the "introspection" option. But if the "RMS resignation" option had defeated the "no statement" option, the Schwartz set would be trivial. The "RMS resignation" option defeats all options except the "no statement" option. Switching the one-vote margin in the other direction would not change much, as this would be the weakest link in the created cycles.

Posted Apr 19, 2021 17:41 UTC (Mon) by

Playing around a bit with the numbers (disclaimer: I'm not sure I've understood the algorithm well enough to guarantee this is correct): if 9 additional people had voted 2 > 7 > FD (and the rest somewhere in that chain, it doesn't really matter where), then option 2 would have won. That means that if 5 existing people had voted 2 > 7 instead of 7 > 2, then a completely different option would have won. While this isn't a margin of 1, this is still very close.
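For anyone who wants to poke at margins like these themselves, the pairwise arithmetic is easy to reproduce. Here is a minimal sketch with toy ballots; it is not Debian's actual tally software, which also applies quorum and supermajority rules:

```python
# Minimal Condorcet-style pairwise tally over ranked ballots (toy data).
from itertools import combinations
from collections import Counter

def pairwise_wins(ballots):
    """ballots: dicts mapping option -> rank (lower = more preferred).
    Returns a Counter of (a, b) -> number of ballots preferring a to b."""
    wins = Counter()
    options = set().union(*ballots)
    for ballot in ballots:
        for a, b in combinations(options, 2):
            ra = ballot.get(a, float("inf"))  # unranked = below all ranked
            rb = ballot.get(b, float("inf"))
            if ra < rb:
                wins[(a, b)] += 1
            elif rb < ra:
                wins[(b, a)] += 1
    return wins

ballots = [
    {"no statement": 1, "resign": 2},
    {"resign": 1, "no statement": 2},
    {"no statement": 1},                 # "resign" left unranked
]
wins = pairwise_wins(ballots)
for (a, b), n in sorted(wins.items()):
    print(f"{a} > {b}: {n} ballots (margin {n - wins[(b, a)]})")
```

With margins this small, flipping a couple of ballots flips a pairwise defeat, which is exactly how the near-cycle described above arises.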
Posted Apr 18, 2021 16:40 UTC (Sun) by

Posted Apr 19, 2021 8:36 UTC (Mon) by

Posted Apr 18, 2021 17:20 UTC (Sun) by

Posted Apr 18, 2021 17:34 UTC (Sun) by

> OTOH, all the pro-Stallman options got buried under "Further discussion" by massive supermajorities.

But this one is very telling indeed.

Posted Apr 18, 2021 18:08 UTC (Sun) by

I have a different opinion -- and expectation. I think the result is very clear (the Debian project agrees this is not something it should take a project-wide position on). Other than that, the project _is_ clearly split in two camps: the pro-RMS statements (options 5 and 6) didn't reach a majority, but the division is quite clear (the number of people ranking 5 and 6 over 1, 2, 3 and 4 is not that small). I do feel that, even with the meta-discussion in this regard, this illustrates Condorcet's strength: It gives a clear view of the collective preferences and gives more legitimacy to the whole decision process. Divisive, yes, hard, yes, but it clearly illustrates the project's standing.

Posted Apr 18, 2021 18:56 UTC (Sun) by

Huh. A majority of only eight votes over "Call for RMS to resign", and one vote over "Hey FSF you can do better than that", isn't something I would call "very clear". However, the winning result is the one that (I hope) most people of both the pro-RMS and anti-RMS "camps", as it were, can live with, and (within the context of Debian) agree to disagree but otherwise move on.

Posted Apr 18, 2021 20:10 UTC (Sun) by

But the vote tally can also give an indication about the individual opinions within the project -- and there it is very clear that people unhappy with the FSF outnumber people in agreement with the FSF's most recent decisions by a margin of at the very least 2:1. If I were at the FSF I'd be very worried about this going forward, because while Debian is not representative of the entire free software movement, these results are probably going to be at least somewhat indicative of the sentiment at large.

Posted Apr 18, 2021 23:14 UTC (Sun) by

Posted Apr 19, 2021 4:18 UTC (Mon) by

I agree that Debian is not representative of the entire free software movement, but that's because they're closer philosophically to the FSF than the vast majority of the movement. They still call their Linux distribution GNU/Linux, for example. For them to come anywhere close to criticizing the FSF should be understood as a big warning sign.

Posted Apr 19, 2021 5:55 UTC (Mon) by

Posted Apr 19, 2021 23:48 UTC (Mon) by

I think that is wrong. Afaik, most GNU documentation does not use invariant sections and is included in Debian's free section: https://www.gnu.org/licenses/fdl-howto-opt.en.html

Posted Apr 20, 2021 0:20 UTC (Tue) by

cpp-doc gcc-doc gdb-doc make-doc tar-doc

It would be great to see these packages moved to Debian main.

Posted Apr 21, 2021 8:51 UTC (Wed) by

But they don't stop at putting documentation in non-free. For Gforth, Debian excised the documentation completely (it has no invariant sections). My impression is that a portion of DDs has wanted to distance Debian from GNU and the FSF for many years, and the documentation issue is one area where they won. The present issue is one where they lost, but it was close.
Posted Apr 21, 2021 14:29 UTC (Wed) by

So while the project decided it is a free license when no invariant sections are present, and that means one can upload such works into the Debian archive, that does not mean that people who didn't or don't agree with that position will be willing to maintain packages containing such licensed works, and they might thus remove or not install them as would be done with other non-free files. I don't think that, at the time, the GFDL issue was an excuse to distance from GNU or the FSF, though; it was more a disagreement with the desire for it to be solved. To me it resembled more the situation when Debian dropped KDE due to licensing concerns, which ended up with those concerns being eventually addressed. <https://www.debian.org/News/1998/19981008.en.html>

Posted Apr 19, 2021 7:23 UTC (Mon) by

I think your implicit assumption that the FSF is concerned with free software will not prove correct.

Posted Apr 20, 2021 6:21 UTC (Tue) by

But, taking into account the running campaign to discredit them, I think that it would be far better to put humour aside and make complaints explicit, and verifiable. Otherwise it's just throwing dirt on their name.

Posted Apr 18, 2021 19:03 UTC (Sun) by

Posted Apr 18, 2021 21:18 UTC (Sun) by

It's much more likely to be a compromise option that both pro-Stallman and anti-Stallman people can accept (of which the latter clearly outnumbered the former in the vote).

Posted Apr 19, 2021 6:09 UTC (Mon) by

It's not a position that I personally agree with, but it's a reasonable, thoughtful, and consistent position in its own right. It happens to also be a compromise, but I don't think striking a compromise was the primary reason why a lot of advocates of that position hold it.

Posted Apr 19, 2021 8:11 UTC (Mon) by

Personally, I don't see the difference between this and FD, so I'm surprised they ranked so dramatically differently.

Posted Apr 19, 2021 8:38 UTC (Mon) by

Usually, this doesn't matter, because options which lose to FD tend to lose to everything else, too. But if there is a Condorcet cycle which involves FD (which should be quite rare in practice), then any option which loses to FD is eliminated before the cycle-breaking machinery even gets involved. This ensures that an option which is actively opposed by a simple majority cannot win under any circumstances. It also means that, if you want a "do nothing" option, you need to nominate it separately.

Posted Apr 19, 2021 9:57 UTC (Mon) by

I never understood exactly why the voting system handles FD in a way that breaks with the Condorcet criterion (it is possible to use FD to vote tactically, and I've had votes among a very small number of people where it mattered), but I've been told it was discussed on -vote back in the day.

Posted Apr 19, 2021 15:56 UTC (Mon) by

This is a form of tactical voting, but crucially, it does not require you to reverse the order of any of your true preferences (provided you *don't* think of FD as a "real" outcome).

Posted Apr 19, 2021 18:38 UTC (Mon) by

I think the effect on elections without a supermajority requirement was largely a side effect.

Posted Apr 19, 2021 19:18 UTC (Mon) by

Posted Apr 19, 2021 21:29 UTC (Mon) by

I think the main difference is that if you have three choices (A: try to solve some problem by a means requiring a 2:1 supermajority, B: solve it by ordinary means, and FD), then the new system allows a third of the voters to veto A (by voting FD over it), but as long as that doesn't happen the choice between A and B will effectively be by simple majority.
While with the old system B would have had a built-in advantage over A, even if everyone involved was happy in principle to do the thing that requires the supermajority (overrule someone, or whatever) if A won.

Posted Apr 19, 2021 21:33 UTC (Mon) by

Posted Apr 19, 2021 21:42 UTC (Mon) by

So I suppose when they decided they didn't like that happening, it was a natural idea to start to treat FD specially and apply the supermajority requirement only there.

Posted Apr 19, 2021 10:23 UTC (Mon) by

It's really interesting, because Debian is an ideal setting for this kind of vote: voters who actually care about the issue, technically highly sophisticated, etc. Yet even so, they're not very good at filling out a ranked ballot.

Posted Apr 19, 2021 10:53 UTC (Mon) by

Posted Apr 20, 2021 19:22 UTC (Tue) by

Posted Apr 20, 2021 20:06 UTC (Tue) by

Posted Apr 20, 2021 20:44 UTC (Tue) by

Posted Apr 19, 2021 14:04 UTC (Mon) by

> Personally, I don't see the difference between this and FD

Both options start from the idea that none of the statements are good, but they go in opposite directions after that. Making no statement is saying there's never going to be a statement that encapsulates Debian's position, so they should shut up about it and go back to making a distribution. Further discussion is saying there's hope of coming up with a statement that encapsulates Debian's position, so they should talk about it more. So people who are tired of arguing about it would prefer no statement to further discussion.

Posted Apr 19, 2021 7:46 UTC (Mon) by

0. For the sake of the hypothetical, imagine that "Call on the FSF to further its governance processes" had defeated "No statement" by 1 vote (instead of the other way around). Now we have a four-outcome cycle as you describe.
1. Delete all outcomes other than the four involved in the cycle.
2. Delete the edge from "...further its governance processes" to "No statement" (because it has the smallest weight, at 1 vote).
3. Now there's no cycle anymore, and you still get "No statement" as the final winner.

So it would have taken a swing of more than one vote to actually change the outcome.

Nevertheless, I agree with your conclusion: not only did Debian choose not to explicitly support Stallman, those options were overwhelmingly defeated. "No statement" won a much narrower victory over the options which explicitly opposed Stallman (yes, it was more than one vote, but "more than one vote" is not what most people would think of as a wide margin). Arguably, they don't need to issue a statement; those numbers speak for themselves. Everyone will interpret the outcome slightly differently, but it is very difficult to interpret it as substantively supportive of RMS (procedurally, maybe, but not substantively).

Posted Apr 19, 2021 9:54 UTC (Mon) by

Posted Apr 19, 2021 18:20 UTC (Mon) by

would actually be #4 "Call on the FSF to further its governance processes", since it's the total weight that counts there, not the difference, see: https://lists.debian.org/debian-vote/2021/04/msg00384.html

Posted Apr 19, 2021 19:20 UTC (Mon) by

Posted Apr 20, 2021 9:05 UTC (Tue) by

Posted Apr 20, 2021 12:36 UTC (Tue) by

The full weight is irrelevant IMHO. We want to know how tight the vote was. You can easily replace the single number with an explicit difference if you want, but what would that help?

Posted Apr 20, 2021 13:09 UTC (Tue) by

But as said, in the case of loops, it's not so.

Posted Apr 20, 2021 9:28 UTC (Tue) by

Posted Apr 18, 2021 20:27 UTC (Sun) by

From an outsider perspective: is there anywhere where the differences between individual candidates are highlighted? Sure, you can read how they answered various questions, but I didn't see any of the candidates directly interacting with the responses of the other candidates.
Don't get me wrong: I'm not saying they should throw mud at each other like in an ugly political TV appearance -- but it would be nice to see candidates interacting more directly, such as "I agree with you here because X", "I disagree with you here because Y", "While I agree with you in general, I wouldn't put my focus here because A", "I think you're not putting enough focus on this because B", things like that. Still respectful, still friendly, but making differences between candidates more explicit.

After looking at the past couple of DPL elections (at least at what could be easily found), if I had to decide between individual candidates here, I'd have a really hard time making up my mind in many of them, unless I knew the candidates on a personal level. (And I seriously doubt that ~1000 DDs could know every candidate for DPL personally.)

Posted Apr 18, 2021 21:22 UTC (Sun) by

For an example of a rebuttal, see Jonathan Carter's platform page from the 2020 elections: https://www.debian.org/vote/2020/platforms/jcc

Posted Apr 18, 2021 22:28 UTC (Sun) by

Posted Apr 19, 2021 5:32 UTC (Mon) by

Someone votes 22222212, someone 88888818, a third one ------1-. Are these votes different or not?

Posted Apr 19, 2021 5:58 UTC (Mon) by

Posted Apr 21, 2021 6:54 UTC (Wed) by

The one with the hyphens is different, I believe. Isn't it so that a dash means "completely unacceptable" while the highest number means "least acceptable"?

In practice the dashes are used to calculate quorum. If an option has too many dash votes it does not reach quorum and cannot be accepted even if it would win enough one-on-one races.

That leads to the question of tactical voting: not many voters use dashes, but rather high numbers (relative within their own vote). So if you cast a dash for a "bad" option, but the option reaches quorum anyway, your vote is "lost". Had you voted a high number (again, relative within your own ballot), your vote would still be counted.

That's at least my understanding. It could be wrong. In all ballots I can participate in in real life I have only 1 or 2 votes, for better or worse.

Posted Apr 21, 2021 10:04 UTC (Wed) by

So, if a ballot puts e.g. 8 for FD, and leaves option X unranked, that does count against X for majority, and means X is "completely unacceptable" for the voter. But the same could be achieved by putting 7 for FD and 8 for X, or any other vote that puts FD above X.

The email call for votes (https://lists.debian.org/debian-vote/2021/04/msg00218.html) says that:

> You may skip numbers, leave some choices unranked, and rank options
> equally. Unranked choices are considered equally the least desired
> choices, and ranked below all ranked choices.

And e.g. the irrelevance of an all-equal ballot was just confirmed by the Project Secretary (https://lists.debian.org/debian-vote/2021/04/msg00429.html):

> Ranking all options the same has no effect on the result. It does
> not have an effect on the quorum or majority. The only effect it has
> is that more people voted.

Point 1 says the same as the call for votes, and points 2 and 3 clearly say quorum and majority are based on defeating FD.

Also see the formal description in the Debian Constitution, A.6. Vote Counting (https://www.debian.org/devel/constitution).

Posted Apr 21, 2021 17:22 UTC (Wed) by

Posted Apr 21, 2021 17:39 UTC (Wed) by

I'm not sure arguments from personal incredulity are all that persuasive when it comes to voting. People tend to underestimate how much other people's preferences and thought processes differ from their own, and think that mixes of opinions that other people sincerely hold are nonsensical or obvious errors. This comes up all the time in politics, and I think it argues for a lot of humility and deference to other people's ballots as cast.

Yes, people possibly made errors; that certainly happens. But it feels more productive to me to focus on the errors that people identify themselves. If a voter says "ugh, I meant to express preference A but got confused by the ballot and expressed preference B instead," that's a valuable data point.
But saying "this voter who I have never talked to voted something that I find nonsensical and therefore obviously made a mistake" is not helpful and usually isn't even true.

Posted Apr 24, 2021 21:14 UTC (Sat) by

That's a really good way of putting it, and (meta) it even does well at assuming good faith despite disagreeing with said people. I'm reminded of a conversational technique of putting the opponent's position into your own words and checking they're happy with this interpretation before continuing. It's a lot of effort that is often missed in short-form prose, but it inevitably raises the quality of conversation, e.g. by realising there is no underlying problem, just disagreement on definitions of terms.

So please continue to wade into controversial topics with your opinions as much as possible, because whenever I see your name I know I'm about to read a measured response which I may or may not agree with.

Posted Apr 22, 2021 9:47 UTC (Thu) by

Posted Apr 19, 2021 6:21 UTC (Mon) by

For (potentially) 1018 voters whose votes are traceable to them (see also the discussion in debian-vote) this allows for a wide spread of opinion. I think this is the largest number of options we've had to deal with on a General Resolution, and it was quite important to ensure that you had them in order: the potential for inadvertent error was quite high. There was also discussion that this whole way of voting should be explained to new developers as part of the onboarding process.

[Full disclosure: Debian developer for a long time but haven't voted in every GR and haven't checked whether we had more options on ballots in the past]

Posted Apr 19, 2021 15:39 UTC (Mon) by

The exact numbers used are NOT compared across votes, just used *within that specific vote* to order the choices. So, these vote examples actually mean the same: I (the voter) prefer *this* higher-ranked option above all others, and I (the voter) am not disclosing how I feel about the other options (either because I don't want to, or because they're actually all the same as far as I am concerned).

Posted Apr 20, 2021 4:30 UTC (Tue) by

The FSF, on the other hand, has a completely opaque governance structure, with the voting power being a closely-held secret.

The original post didn't call the statement vote a strong result, only the project leader vote. The difference there is: Option 1 defeats Option 2 by (312 - 102) = 210 votes, which I would call a strong result.
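Several comments above quote margins of the form "Option A defeats Option B by N votes". As a minimal sketch (an illustration only, not Debian's actual devotee implementation; the ballot format and option names here are invented), such pairwise tallies fall out of ranked ballots like this:

```python
from itertools import combinations

# Each ballot maps option -> rank (lower = more preferred).
# Unranked options are treated as ranked below all ranked ones,
# matching the rule quoted from the call for votes above.
def pairwise_wins(ballots, options):
    wins = {(a, b): 0 for a in options for b in options if a != b}
    for ballot in ballots:
        for a, b in combinations(options, 2):
            ra = ballot.get(a, float("inf"))
            rb = ballot.get(b, float("inf"))
            if ra < rb:
                wins[(a, b)] += 1
            elif rb < ra:
                wins[(b, a)] += 1
    return wins

ballots = [
    {"no_statement": 1, "call_to_resign": 2},
    {"call_to_resign": 1},          # everything else left unranked
    {"no_statement": 1},
]
w = pairwise_wins(ballots, ["no_statement", "call_to_resign"])
# "A defeats B by N votes" is the difference of the two tallies:
print(w[("no_statement", "call_to_resign")] - w[("call_to_resign", "no_statement")])
```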
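A companion sketch covers the two mechanics debated in the thread: the special treatment of FD ("Further Discussion") and the breaking of a Condorcet cycle by discarding the weakest defeat. This is a rough illustration under simplified assumptions, not the procedure from Constitution A.6 verbatim (which also involves quorum and supermajority ratios):

```python
def winner(options, wins, fd="FD"):
    # Gate: any option beaten head-to-head by FD is dropped before
    # cycle resolution, so a simple majority against it always sticks.
    alive = [o for o in options
             if o == fd or wins[(o, fd)] > wins[(fd, o)]]
    # Defeats among the survivors, strongest first. Strength here is
    # the raw number of votes for the winning side ("total weight"),
    # one of the two conventions argued about above.
    defeats = sorted(
        ((a, b) for a in alive for b in alive
         if a != b and wins[(a, b)] > wins[(b, a)]),
        key=lambda pair: wins[pair], reverse=True)
    # Cycle-breaking: drop the weakest remaining defeat until at
    # least one option is undefeated; those options win.
    while True:
        beaten = {b for _, b in defeats}
        unbeaten = [o for o in alive if o not in beaten]
        if unbeaten or not defeats:
            return unbeaten or alive
        defeats.pop()

opts = ["A", "B", "FD"]
w = {("A", "B"): 6, ("B", "A"): 4, ("A", "FD"): 7, ("FD", "A"): 3,
     ("B", "FD"): 5, ("FD", "B"): 5}
print(winner(opts, w))   # ['A']  (B fails the FD gate: 5 is not > 5)
```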
---

**Nathan Myhrvold: 'Nasa doesn't want to admit it's wrong about asteroids'**

By Zoë Corbyn, The Guardian, 24 June 2018 (https://amp.theguardian.com/science/2018/jun/24/nathan-myhrvold-inventor-microsoft-patent-nasa-asteroid-data-pizza-science)

*The maverick inventor, ex-Microsoft executive and 'patent troll' is battling Nasa on its asteroid data and exploring pizza science.*
Nathan Myhrvold is the former chief technology officer of Microsoft, founder of the controversial patent asset company Intellectual Ventures and the main author of the six-volume, 2,300-page Modernist Cuisine cookbook, which explores the science of cooking. Currently, he is taking on Nasa over its measurement of asteroid sizes.

**For the past couple of years, you've been fighting with Nasa about its analysis of near-Earth asteroid size. You've just published a 33-page scientific paper criticising the methods used by its Neowise project team to estimate the size and other properties of approximately 164,000 asteroids. You have also published a long blog post explaining the problem. Where did Nasa go wrong, and is it over- or underestimating size?**

Nasa's Wise space telescope [Wide-field Infrared Survey Explorer] measured the asteroids in four different wavelengths in the infrared. My main beef is with how they analysed that data. What I think happened is they made some poor choices of statistical methods. Then, to cover that up, they didn't publish a lot of the information that would help someone else replicate it. I'm afraid they have both over- and underestimated. The effect changes depending on the size of the asteroid and what it's made of. The studies were advertised as being accurate to plus or minus 10%. In fact, it is more like 30-35%. That's if you look overall. If you look at specific subsets, some of them are off by more than 100%. It's kind of a mess.

**Why does it matter? As you admit in your blogpost, obsessively arguing over the size of rocks that are millions of miles away may appear esoteric.**

Asteroids are very important. They tell us a lot about the origin of the solar system. The work of hundreds of scientists has been based on this Neowise data. Also, a major asteroid impact will affect life on Earth at some point, just as it has multiple times in the past. The data are critical to assessing this risk. If there is a mistake by a factor of two in the diameter, which there are for a bunch, that's a mistake of a factor of eight in the energy [impact energy scales with mass, and hence with the cube of the diameter: 2³ = 8].

**You must have spent many hours and a lot of money working on this. You even hired lawyers to help lodge freedom of information requests. Has it been worth it?**

I am guilty of being stubborn. Once this started, I was left with a stark choice: give up or keep going. If they had ever said: "You probably have a point, we'll look into it in our next version", I probably would have stopped. But because of the way they reacted, I felt I couldn't let it go. I am not hoping for it, but if in those thousands of asteroids there is [one] heading towards us and we can better estimate its size, I will feel vindicated.

**Nasa's reported response has been to stand by the data and the analysis performed by the Neowise team. Can we trust Nasa after this?**

They need to have an independent investigation of these results. When my preprint paper came out in 2016, they said: "You shouldn't believe it because it's not peer-reviewed." Well, now it has been peer reviewed. How Nasa handles it at this stage will be very telling. People have suggested to me the reason Nasa doesn't want to admit that anything is wrong with the data is that they're afraid it would hurt the chances of Neocam, an approximately $500m (£380m) telescope to find asteroids that might hit Earth, proposed by the same group who did the Neowise analysis.

**This isn't the first time you have found fault in scientists' analyses.
In 2013, you challenged estimates of the growth rate of dinosaurs, which led to some corrections by the papers' authors. Do you enjoy being a thorn in their side?**

Agatha Christie books and other mystery novels feature an amateur crime solver: a "Miss Marple". That's me, it seems. In the case of the dinosaurs, I made some material changes in the field. I feel the same thing will happen with the asteroids. When I wrote the paper [challenging the dinosaur analysis], a lot of other dinosaur people said: "I'm glad you did that, I suspected the numbers were wrong." But either they weren't confident enough with statistics, or they were afraid. Similarly, when my asteroid preprint first came out, I got emails from several people in that community saying the same. It's hugely liberating [to be an outsider]. Since I have already given myself tenure at the University of Nathan, I don't have to worry about looking for grants like others might.

**In 2015, you built this robotic model of a dinosaur tail that showed a dinosaur could break the sound barrier when it cracked its tail. You also fund palaeontology digs and I believe you keep a lifesize T-rex skeleton in your living room. Are you working on anything dinosaur-related at the moment?**

One thing is a big comparative study to try to understand why dinosaurs got so damn big. A lot of it has to do not with why dinosaurs got so big, but why mammals are small. Mammals have some problems when they're very large -- like pregnancy -- that dinosaurs didn't have. For dinosaurs, laying eggs made it a lot easier to be big.

**In 2000, you left Microsoft and set up Intellectual Ventures, which primarily buys and licenses patents. The business is often vilified as one of the world's biggest "patent trolls". Why do you think people find it so loathsome?**

I fundamentally think what we do is good. It is hard for me to get too worked up about figuring out why it is bad. Any patent holder who enforces their rights gets called a patent troll. Silicon Valley feels very threatened by anything that could challenge its authority. If you are one of the big companies, like Google or Apple, almost no one can challenge you in the market that you're in. But if somebody has a patent, they can ask for a bunch of money. The more you can get a return from an invention, the better off the world will be. It will lead to more inventions being funded and more inventing.

**How many patents do you have?**

I'd guess altogether we have around 30,000. Around 3,000 of those my company invented. Personally I probably have about 800. It changes every Tuesday, which is the day the patent office issues patents. We still only have a drop in the bucket. There are around 4m active US patents.

**Intellectual Ventures also has this "global good" arm that is collaborating with Bill Gates. What are some of the inventions coming out of that, and how close are they to fruition?**

We've invented a vaccine container that keeps vaccine cold even without refrigeration. That has played a crucial role in the Ebola outbreaks and has been rolled out across Africa for other diseases. It has made an enormous difference in people's lives. One that's coming is we've invented a very cheap, noninvasive test for cervical cancer. It does remarkably better than any other technique. We designed it for developing countries but there is interest in using it here in the US, too. We take a picture of the cervix.
We have a machine-learning algorithm that learned how to diagnose cervical cancer by looking at pictures. Our latest thing is a plastic speculum, a rubber band, and an iPhone. It's really cheap and it only takes a minute. We're working on diagnostic trials.

**You also have commercial spinouts from Intellectual Ventures. One of them is developing a new kind of nuclear reactor. What's different about it, and how far are you towards building it?**

For a very long time there was really not that much effort put into designing radical new kinds of reactors. Instead, everyone made tiny little modifications to old reactors. We realised that, so we got into the game. There are a tremendous number of opportunities to make reactors that are cheaper, safer, and which can do really cool things like burn nuclear waste as fuel. That's really important because the world has thousands of tonnes of nuclear waste sitting around that is otherwise a big problem. Turning that into fuel would be fantastic, and we are looking at projects in China and other parts of the world. Nuclear reactors take a long time to build, but we're hoping in the 2020s we'll see our first.

**President Trump is going after China's intellectual property theft. Given your experience, can he succeed in curbing it?**

The theft of intellectual property by Chinese companies is a very serious issue. It's not just private companies in China or little companies. A large amount of it is state-owned enterprise. So, it really is the Chinese government doing it. Exactly how to solve that issue, I don't know. You need the Chinese government to be very serious about it, but so far they haven't been. In my experience in business, you mostly do better negotiating with quiet diplomacy, not with brinksmanship. But I've never built luxury hotels and golf courses. Maybe it is different there.

**Your cookbooks, most recently *Modernist Bread*, cost hundreds of dollars, weigh kilograms and are full of striking photos that you took. You're in Brazil researching your next book, about pizza. What does Brazil have to do with pizza?**

The US has had pizza since lots of immigrants from Italy brought it with them in the late 19th century. Well, so, too, for Brazil and Argentina, which is our next stop. One of the things about pizza is, when it goes to some new area, people tend to mutate it and develop their own unique style. One of the styles of Brazilian pizza is an ultra-thin crust: thinner than a cracker, and stiff, with a thin layer of toppings. Also, because they have lots of tropical ingredients, you get pizzas with bananas on. Sliced hard-boiled eggs are also pretty common.

**You use innovative photography techniques and custom-made equipment to show food in a whole different way. Many people post pictures of their brunch on Instagram. Do you have any tips for better shots?**

It's really hard to get a good shot with the flash that's on your cell phone, so you usually want to turn that off. Very bright, directional light also usually looks bad: you get harsh shadows. You want a broad light source near your food. Get near a window. Of course, some restaurants, for reasons I still don't understand, make it so damn dark. I think it's supposed to be romantic or something.

**What's next due for a "Nathaning"?**
It is really hard for me to predict what's going to be next. Many of the things I am doing now I wouldn't have predicted I would be doing a couple of years ago. On the cooking front, at some point we have to address dessert, but we haven't figured out how yet. We're still working on pizza.
---

**Breaking News, World News and Video from Al Jazeera**

Al Jazeera front-page snapshot, retrieved 12 October 2024 (http://english.aljazeera.net/watch_now/)

Opinion: Do not believe the Syrian regime's promises of amnesty (Hadi Al Bahra)

Live updates: 'Bodies in the streets', at least 200 killed in Israeli siege of north Gaza

- Five members of the same family killed in central Gaza
- Israeli army says 40 rockets fired from Lebanon
- WATCH: At least 30 Palestinians killed in Israeli attacks across Gaza since dawn
- Eight wounded after Israeli strike on Lebanon's Nabatieh
- Kuwait condemns Israel's attacks on UN peacekeepers
- Katz stands firm on Guterres ban despite widespread support for UN chief
---

**The Logic of Modesty: Why it Pays to Be Humble**

Neuroscience News, 28 May 2018 (https://neurosciencenews.com/the-logic-of-modesty-why-it-pays-to-be-humble/)
*Summary: Researchers have developed a new evolutionary game theory model which may help explain why many of us hide our good deeds.*

Source: Institute of Science and Technology Austria.

**Why do people make anonymous donations, and why does the public perceive this as admirable? Why do we downplay our interest in a potential partner, if we risk missing out on a relationship? A team of scientists, consisting of Christian Hilbe, a postdoc at the Institute of Science and Technology Austria (IST Austria), Moshe Hoffman, and Martin Nowak, both at Harvard University, has developed a novel game theoretic model that captures these behaviors and enables their study. Their new model is the first to include the idea that hidden signals, when discovered, provide additional information about the sender. They use this idea to explain under which circumstances people have an incentive to hide their positive attributes.**

People often take actions that may be costly at first, but lead to reputational benefits in the long run. However, if good reputations are important, why are there numerous situations in which people hide accomplishments or good characteristics, like when we donate anonymously? Similarly, we often emphasize subtlety in art or fashion, avoid appearing over-eager, or otherwise obscure something positive. Why do others consider this behavior commendable?

The team's key insight into this societal puzzle is that "burying" a signal (i.e. obscuring information) is a signal in and of itself. This additional signal can have several interpretations: for instance, the sender may be unconcerned with those who might have been impressed, but who miss subtle messages (like an artist disregarding the philistine masses). Alternatively, the sender might be confident that those who matter to them will find out anyway (for instance, only those who have the taste and/or necessary wealth will recognize a designer bag without an obvious logo).

The scientists succeeded in formalizing these ideas in a new evolutionary game theory model they call the "signal-burying game", which they detail in a paper published today in *Nature Human Behaviour*. In this game, there are different types of senders (high, medium, and low), and different types of receivers (selective and unselective). The sender and the receiver do not know the other's type. To convey their type, senders may pay a cost to send a signal. Signals may be sent clearly or be buried. When a signal is buried, it has a lower probability of being observed by any kind of receiver. In particular, buried signals entail the risk that receivers will never learn that the sender has sent a signal at all. After the sender has made his signaling decision, receivers decide whether or not to engage in an economic interaction with the sender. The game has an element of risk, and therefore, senders and receivers must develop strategies to maximize their payoff.

"We wanted to understand what strategies would evolve naturally and be stable," explains Christian Hilbe, co-first author of the paper and postdoc in the research group of Krishnendu Chatterjee at IST Austria.
"In particular, is it possible to have a situation where high-level senders always choose to bury their signals, mid-level senders always send a clear signal, and low-level senders send no signal at all?" This would correspond to situations that come up in real life, and is one of the key distinguishing features of their model: they allow for strategies that target specific receivers at the risk of losing others.

In their simulations, players started off neither sending nor receiving signals. Then, with some probability, a player either selects a random strategy (representing mutation) or imitates another player (representing a learning process biased towards strategies with higher payoff). In their simulations, the scientists found that populations quickly settled at the strategy described above.

The team also developed several extensions to the model, enabling them to cover more general scenarios. First, they added different levels of obscurity: senders could choose from several revelation probabilities. "We found that in this case, high senders tend to be modest... but not too modest," adds Hilbe. "Even if you're humble, you don't try to be holier-than-thou." It is moreover possible to increase the number of types of senders and receivers, as well as introduce subtleties in the preferences of the receivers.

Using their new model, Hilbe, Hoffman, and Nowak were able to put a different perspective on various common situations: a donor giving anonymously, an academic not disclosing their degree, an artist creating art with hidden messages, and a possible partner hiding their interest, among others. Evolutionary game theory shows that, in the end, these puzzling social behaviors make sense.

**Funding:** John Templeton Foundation, Office of Naval Research, ISTFELLOW Program, and the Austrian Science Fund funded this study.

**Source:** Elisabeth Guggenberger, Institute of Science and Technology Austria

**Original Research:** "The signal-burying game can explain why we obscure positive traits and good deeds" by Moshe Hoffman, Christian Hilbe & Martin A. Nowak in *Nature Human Behaviour*. Published May 28, 2018. **doi:** 10.1038/s41562-018-0354-z

**Abstract**

**The signal-burying game can explain why we obscure positive traits and good deeds**

People sometimes make their admirable deeds and accomplishments hard to spot, such as by giving anonymously or avoiding bragging. Such 'buried' signals are hard to reconcile with standard models of signalling or indirect reciprocity, which motivate costly pro-social behaviour by reputational gains. To explain these phenomena, we design a simple game theory model, which we call the signal-burying game.
This game has the feature that senders can bury their signal by deliberately reducing the probability of the signal being observed. If the signal is observed, however, it is identified as having been buried. We show under which conditions buried signals can be maintained, using static equilibrium concepts and calculations of the evolutionary dynamics. We apply our analysis to shed light on a number of otherwise puzzling social phenomena, including modesty, anonymous donations, subtlety in art and fashion, and overeagerness.
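To make the incentive structure concrete, here is a toy expected-payoff comparison for a high-type sender, assuming the equilibrium beliefs described above (a buried signal is read as "high", a clear signal as "merely medium"). All numbers are invented for illustration and are not the paper's parameterization:

```python
# Toy payoff comparison under assumed equilibrium beliefs.
p_buried_seen = 0.3      # buried signals are observed with low probability
cost_signal = 1.0
reward_selective = 5.0   # value of engaging a selective (high-value) receiver
reward_unselective = 2.0 # value of engaging an unselective receiver

# Bury: only receivers who notice the signal engage, but those who do
# infer a high type, so the selective receivers engage as well.
payoff_bury = p_buried_seen * (reward_selective + reward_unselective) - cost_signal

# Send clearly: everyone sees it, but it is read as "medium", so only
# the unselective receivers engage.
payoff_clear = reward_unselective - cost_signal

print(round(payoff_bury, 2), round(payoff_clear, 2))  # 1.1 vs 1.0
```

With these (arbitrary) numbers, burying edges out signalling clearly, which is the qualitative effect the model is built to capture: losing inattentive receivers can be worth it if burying itself carries information.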
---

**Research: One in four people learnt to code during COVID-19 lockdown**

By Ryan Daws, Developer Tech News, 14 September 2020 (https://developer-tech.com/news/2020/sep/14/research-one-four-people-learnt-code-covid19-lockdown/)
Some people who found themselves with extra time during the COVID-19 lockdown put it to good use. Research from digital transformation firm BoxBoat suggests that around one in four people spent time learning coding languages during the lockdown.

The most commonly learned programming language was Python, followed by Java and C++. The greatest motivations for people setting out to improve their skills were career development (55%), personal development (46%), and improving job search prospects (33%).

YouTube, and other freely available content, was the top source of training material for most (66%) people boosting their skills. However, around one in three turned to paid resources.

Despite their necessity, lockdowns had a devastating impact on global economies. The data shows that one silver lining is that much of the workforce became more skilled during the downtime. Around 70 percent of people say their technology skills have "moderately or greatly" improved since the COVID-19 pandemic. (Many certainly learnt what Zoom is, with usage of the video chat tool quadrupling between the start of the pandemic and March 2020.)

There is a notable difference between generations who reported an upskilling during their downtime. Millennials reported the most (72%) improvement, closely followed by Generation X (70%). However, just 56 percent of baby boomers reported improving their tech skills.
---

**Google Explains Why Its Cloud Service Is Different When It Comes To Lock-In**

TechCrunch, 14 July 2013 (http://techcrunch.com/2013/07/14/google-explains-why-its-cloud-service-is-different-when-it-comes-to-lock-in/)
A Google engineer concluded on his Google+ page last week that a cloud platform can't be built without some form of lock-in. That's evidently true, but there really is one main reason for making such a point. Google wants to show that it is not much different from its competitors when it comes to this hot topic with cloud customers.

The post by Google Engineering Director Peter Magnusson has to be read with a dash of skepticism. Magnusson does focus on the lock-in issue with Google App Engine (GAE), but he also uses the topic to show the difference between its managed services and the infrastructure as a service (IaaS) from a provider like AWS. He says without restrictions Google could not offer the service that it does. What they do try to do is offer alternatives, such as services like AppScale.

Reading into what Magnusson says, it appears that some customers are not accustomed to the restrictions that come with Google App Engine: the run-time environment is different, and customers can't make system calls, write directly to the file system, or choose their own operating system. In the end, Magnusson says that lock-in is inevitable for Google to offer the service that it does with GAE:

You can't build an innovative platform without some measure of lock-in. For example, the Datastore is unlike any other NoSQL or NearSQL service or stack out there. It has multi-datacenter synchronous replication, cursors, distinct queries, projection queries, zigzag merge join, transactional tasks, automatic sharding/load balancing, managed secondary indexes, and multi-row and multi-machine transactions. Our developers don't want us to dumb down those features to some lowest common denominator; they want us to build more features that leverage the unique underlying software systems.

GAE compares to other PaaS providers such as Cloudbees, Heroku, and AppFog, acquired in June by CenturyLink. Google abstracts the actual hands-on work of managing the infrastructure that is a hallmark of other services. That means there are certain aspects of Google's app service that have to be restricted. These restrictions allow Google to take responsibility for a customer's firewall, most denial of service attacks, viruses -- the list goes on. The upside, Magnusson argues, is that for most mobile and web apps, instances can be started quickly, allowing apps to scale fast. Google manages that scale and the problems that arise when, for example, an app has to be moved to another data center.

But the real problem is closed, custom APIs that lock customers into a proprietary platform. People on Twitter who responded to my question about the issue pretty much universally agreed that some form of lock-in is inevitable.

@alexwilliams Lock-in level depends far more on cust use case than platform. Data gravity doesn't impact all, tooling helps migrations — Pete Johnson (@nerdguru) July 10, 2013

AWS was cited as an example of a lock-in service:

@alexwilliams the AWS strategy of creating an ecosystem of systems to architect for is de facto lock-in — Jack Clark (@mappingbabel) July 9, 2013

ThinkJar Founder and Principal Analyst Esteban Kolsky said minimizing lock-in comes with open standards, which are not as much a reality at this early stage in the market: while in theory there is no lock-in when using common standards, there are few organizations that are using them.
It is in the vendor's best interest to build a certain amount of lock-in for their platform, else it would be too easy to leave and hard to predict revenues (based on the current models of licensing). If their revenues were based on pay-as-you-go/rent/use, as the cloud promises (which we are starting to see in smaller vendors), then the issue of lock-in could not exist, as renting power means you should be able to rent it from more than one vendor (again, mostly theory at this time; it applies far more to platforms than to infrastructure, where lock-in still remains quite strong -- see Amazon v Rackspace for an example). There is also a small element of it being in the user's best interest, so that leaving is not so easy and they are not pressured, especially in IT. So the issue becomes one of convenience for the user/vendor versus what the stakeholders' best interests are, as with anything that is enterprise software.

The lock-in issue is always there for the cloud service providers. OpenStack is bound to have lock-in with the platforms emerging from the likes of Red Hat, IBM, Cloudscaling, HP and Rackspace. Magnusson admits as much but says it is just too early in the game for standards to play a meaningful role. That's one way to look at the issue. And the position certainly supports the numerous vendors in the market that have learned to provide services for moving data and apps from cloud to cloud. These companies represent a growing ecosystem that will continue to thrive as long as the cloud vendors continue to focus on making interoperability a challenging feat even for the most sophisticated of customers.
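To make the lock-in point concrete, here is a sketch of what App Engine-specific application code looked like, using the legacy `google.appengine.ext.ndb` datastore API. It runs only inside the App Engine runtime, which is precisely the point being argued: the model class, the query syntax and the storage semantics are all Datastore-specific, so moving to another platform means rewriting rather than redeploying.

```python
# Legacy Google App Engine datastore code (Python runtime era).
# This executes only inside the App Engine sandbox: ndb models are
# backed by Google's Datastore, not by any portable database layer.
from google.appengine.ext import ndb

class Greeting(ndb.Model):
    content = ndb.StringProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)

def latest_greetings(n=10):
    # Query syntax and index behavior are Datastore-specific; a port
    # to another cloud would have to re-express this on a new store.
    return Greeting.query().order(-Greeting.created).fetch(n)
```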
---

**Two-stage Training for Chinese Dialect Recognition**

Zongze Ren, Guofu Yang, Shugong Xu, arXiv:1908.02284 (https://arxiv.org/abs/1908.02284)
# Computer Science > Computation and Language

[Submitted on 6 Aug 2019 (v1), last revised 10 Aug 2019 (this version, v2)]

# Title: Two-stage Training for Chinese Dialect Recognition

Abstract: In this paper, we present a two-stage language identification (LID) system based on a shallow ResNet14 followed by a simple 2-layer recurrent neural network (RNN) architecture, which was used for the Xunfei (iFlyTek) Chinese Dialect Recognition Challenge and won first place among 110 teams. The system first trains an acoustic model (AM) with connectionist temporal classification (CTC) to recognize the given phonetic sequence annotation, and then trains another RNN to classify the dialect category using the intermediate features from the AM as inputs. Compared with a three-stage system we further explore, our results show that the two-stage system can achieve high accuracy for Chinese dialect recognition under both short-utterance and long-utterance conditions, with less training time.

Submission history: from Zachary Ren. [v1] Tue, 6 Aug 2019 04:28:56 UTC (537 KB); [v2] Sat, 10 Aug 2019 09:28:00 UTC (537 KB)
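A minimal sketch of the two-stage idea described in the abstract follows. This is not the authors' code; the encoder is a stand-in for their ResNet14, and the layer sizes are placeholders rather than the paper's configuration:

```python
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    """Stage 1: acoustic model trained with CTC on phone labels."""
    def __init__(self, n_mels=80, hidden=256, n_phones=100):
        super().__init__()
        self.encoder = nn.Sequential(          # stand-in for ResNet14
            nn.Conv1d(n_mels, hidden, 5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 5, padding=2), nn.ReLU())
        self.head = nn.Linear(hidden, n_phones + 1)  # +1 for CTC blank

    def forward(self, x):                      # x: (batch, n_mels, time)
        feats = self.encoder(x).transpose(1, 2)    # (batch, time, hidden)
        return feats, self.head(feats)

class DialectClassifier(nn.Module):
    """Stage 2: 2-layer RNN over the AM's intermediate features."""
    def __init__(self, hidden=256, n_dialects=10):
        super().__init__()
        self.rnn = nn.GRU(hidden, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_dialects)

    def forward(self, feats):
        _, h = self.rnn(feats)
        return self.out(h[-1])                 # logits per dialect

am, clf = AcousticModel(), DialectClassifier()
ctc = nn.CTCLoss(blank=100)

x = torch.randn(4, 80, 200)                    # fake batch of log-mel frames
feats, logits = am(x)

# Stage 1: CTC loss against (fake) phone-sequence annotations.
log_probs = logits.log_softmax(-1).transpose(0, 1)   # (time, batch, classes)
targets = torch.randint(0, 100, (4, 30))
loss_ctc = ctc(log_probs, targets,
               input_lengths=torch.full((4,), 200, dtype=torch.long),
               target_lengths=torch.full((4,), 30, dtype=torch.long))

# Stage 2: classify dialect from the (frozen) AM's features.
dialect_logits = clf(feats.detach())
```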
---

**Web3 is Going Just Great**

By Molly White, 19 January 2022 (https://web3isgoinggreat.com?id=2022-01-19-1)
## Kingfund Finance rug pulls for $141,000

Kingfund Finance suddenly drained more than 300 WBNB (about $141,000) from their project. This happened a few days after users began to report being blocked by the project's Twitter account and kicked from its Telegram channel for reporting issues with unavailable funds, apparently an attempt to buy time as they prepared for their exit. Around the time of the rug pull, they took their Twitter and website offline.

## Multichain publicly announces a vulnerability, and is quickly hacked by attackers using it

## Mastercard spins a partnership with Coinbase as addressing "accessibility" and "inclusivity"

## Once popular play-to-earn game BNB Heroes rug pulls after a period of inactivity from the team

## Creator of "MetaBirkins" NFTs writes that he "won't be intimidated" by a trademark lawsuit from Hermès

I, for one, am very curious to see how the litigation plays out. In the meantime, the Rarible landing page for the collection displays an error message stating, "This user or item has been temporarily blocked from public access".

## At least $34 million is stolen from users of Crypto.com

Although some users reported funds missing from their wallets, including one investor who reported $16.3 million missing, Crypto.com announced that "All funds are safe". Over the next few days this was revealed to be untrue; as of January 20, the total estimated funds stolen from the platform had reached $30 million. Large amounts of stolen funds were quickly laundered through Tornado Cash, a popular crypto mixer.

## Mysterious NFT project NotASecretNFT gets people to authorize a shady contract after leaving clear clues to their intentions

## CryptoBurgers play-to-earn game is hacked shortly after launch

## SpiceDAO wins a $3 million auction to buy an extremely rare storyboard book of Dune, only to learn that owning a book doesn't confer them copyright

The book is the storyboard volume for Alejandro Jodorowsky's unmade *Dune* adaptation. In a celebratory tweet the group wrote, "We won the auction for €2.66M. Now our mission is to: 1. Make the book public (to the extent permitted by law) 2. Produce an original animated limited series inspired by the book and sell it to a streaming service 3. Support derivative projects from the community". They were quickly informed that buying the physical book did not somehow confer to them copyright or licensing rights (much like how buying an NFT does not automatically confer you the rights to the underlying artwork!). You'd think they might have checked that first.
---

**Sumo Logic adds unified logs and metrics solution for apps running on Kubernetes and Docker**

By Jenna Barron, SD Times, 27 November 2017 (https://sdtimes.com/sumo-logic-kubernetes-docker/)
Sumo Logic is trying to improve the customer experience for applications that run on Kubernetes and Docker. Its new unified logs and metrics solution uses open-source and native integrations to streamline the data ingestion process. The company’s usage data shows that microservices and container adoption has been increasing. Amazon EC2 adoption in AWS is at 57 percent, while Kubernetes adoption is at 35 percent, according to the company. “We believe that our customers are already using Docker and Kubernetes in a significant way and we are the first vendor to bring together all of the machine data – logs, metrics, and events – from these technologies,” said Kalyan Ramanathan, VP of product marketing at Sumo Logic. “That makes us a very good choice for customers who are deploying Docker technologies and managing Docker with Kubernetes.” IT teams need a way to keep up with the challenges of microservices and containers. Sumo Logic’s solution provides this by showing a real-time view of applications running on containers. It allows IT teams to build, run, secure, and manage applications no matter what the underlying infrastructure or stack is. Sumo Logic will be showcasing its solution at AWS re:Invent this week in Las Vegas.
---

**Google Play is changing how app ratings work**

By Sarah Perez, TechCrunch, 8 May 2019 (https://techcrunch.com/2019/05/08/google-play-is-changing-how-app-ratings-work/)
Two years ago, Apple changed the way its app store ratings worked by allowing developers to decide whether or not their ratings would be reset with their latest app update -- a feature that Apple suggests should be used sparingly. Today, Google announced it's making a change to how its Play Store app ratings work, too. But instead of giving developers the choice of when ratings will reset, it will begin to weight app ratings to favor those from more recent releases.

"You told us you wanted a rating based on what your app is today, not what it was years ago, and we agree," said Milena Nikolic, an engineering director leading Google Play Console, who detailed the changes at the Google I/O developer conference today.

She explained that, soon, the average rating calculation for apps will be updated for all Android apps on Google Play. Instead of a lifetime cumulative value, the app's average rating will be recalculated to "give more weight" to the most recent users' ratings. With this update, users will be able to better see, at a glance, the current state of the app -- meaning, any fixes and changes that made it a better experience over the years will now be taken into account when determining the rating.

"It will better reflect all your hard work and improvements," touted Nikolic of the updated ratings.

On the flip side, however, this change also means that once high-quality apps that have since failed to release new updates and bug fixes will now have a rating that reflects their current state of decline.

It's unclear how much the change will more broadly impact Google Play Store SEO, where today app search results are returned based on a combination of factors, including app names, descriptions, keywords, downloads, reviews and ratings, among other factors.

The updated app rating system was one of numerous Google Play changes announced today, along with the public launch of dynamic delivery features, new APIs, refreshed Google Play Console data, custom listings and even "suggested replies" -- like those found in Gmail, but for responding to Play Store user reviews.

End users of the Google Play Store won't see the new, recalculated rating until August, but developers can preview their new rating today in their Play Store Console.
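Google has not published the actual formula, but the idea of weighting recent ratings more heavily can be illustrated with a simple exponential-decay average; the half-life below is an arbitrary assumption for the sketch:

```python
from math import exp

def weighted_rating(ratings, half_life_days=180):
    # ratings: list of (stars, age_in_days); newer ratings decay less.
    decay = lambda age: exp(-age * 0.6931 / half_life_days)  # 0.6931 ~ ln 2
    num = sum(stars * decay(age) for stars, age in ratings)
    den = sum(decay(age) for _, age in ratings)
    return num / den

# Fifty old 2-star ratings and fifty recent 4.5-star ratings: the
# recency weighting pulls the displayed average close to 4.5 rather
# than the lifetime mean of 3.25.
old_bad, recent_good = (2.0, 900), (4.5, 15)
print(round(weighted_rating([old_bad] * 50 + [recent_good] * 50), 2))
```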
---

**Where Did the Amaterasu Particle Come From?**

Michael Unger, Glennys R. Farrar, arXiv:2312.13273 (https://arxiv.org/abs/2312.13273)
# Astrophysics > High Energy Astrophysical Phenomena

[Submitted on 20 Dec 2023 (v1), last revised 7 Feb 2024 (this version, v2)]

# Title: Where Did the Amaterasu Particle Come From?

Abstract: The Telescope Array Collaboration recently reported the detection of a cosmic-ray particle, "Amaterasu", with an extremely high energy of $2.4\times10^{20}$ eV. Here we investigate its probable charge and the locus of its production. Interpreted as a primary iron nucleus or slightly stripped fragment, the event fits well within the existing paradigm for UHECR composition and spectrum. Using the most up-to-date modeling of the Galactic magnetic field strength and structure, and taking into account uncertainties, we identify the likely volume from which it originated. We estimate a localization uncertainty on the source direction of 6.6\% of $4\pi$ or 2726 deg$^2$. The uncertainty of magnetic deflections and the experimental energy uncertainties contribute about equally to the localization uncertainty. The maximum source distance is 8-50 Mpc, with the range reflecting the uncertainty on the energy assignment. We provide sky maps showing the localization region of the event and superimpose the location of galaxies of different types. There are no candidate sources among powerful radio galaxies. An origin in AGNs or star-forming galaxies is unlikely but cannot be completely ruled out without a more precise energy determination. The most straightforward option is that Amaterasu was created in a transient event in an otherwise undistinguished galaxy.

Submission history: from Michael Unger. [v1] Wed, 20 Dec 2023 18:52:53 UTC (1,884 KB); [v2] Wed, 7 Feb 2024 11:01:43 UTC (1,884 KB)
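As a quick consistency check on the quoted solid angle (an editorial back-of-envelope, not from the paper): the full sky covers $4\pi \times (180/\pi)^2 \approx 41{,}253$ deg$^2$, and $0.066 \times 41{,}253 \approx 2{,}723$ deg$^2$, which matches the stated 2726 deg$^2$ up to rounding of the 6.6\% figure.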
true
true
true
The Telescope Array Collaboration recently reported the detection of a cosmic-ray particle, "Amaterasu", with an extremely high energy of $2.4\times10^{20}$ eV. Here we investigate its probable charge and the locus of its production. Interpreted as a primary iron nucleus or slightly stripped fragment, the event fits well within the existing paradigm for UHECR composition and spectrum. Using the most up-to-date modeling of the Galactic magnetic field strength and structure, and taking into account uncertainties, we identify the likely volume from which it originated. We estimate a localization uncertainty on the source direction of 6.6\% of $4π$ or 2726 deg$^2$. The uncertainty of magnetic deflections and the experimental energy uncertainties contribute about equally to the localization uncertainty. The maximum source distance is 8-50 Mpc, with the range reflecting the uncertainty on the energy assignment. We provide sky maps showing the localization region of the event and superimpose the location of galaxies of different types. There are no candidate sources among powerful radio galaxies. An origin in AGNs or star-forming galaxies is unlikely but cannot be completely ruled out without a more precise energy determination. The most straightforward option is that Amaterasu was created in a transient event in an otherwise undistinguished galaxy.
2024-10-12 00:00:00
2023-12-20 00:00:00
/static/browse/0.3.4/images/arxiv-logo-fb.png
website
arxiv.org
arXiv.org
null
null
17,069,641
https://daily.jstor.org/what-really-happened-to-the-megafauna/
What Really Happened to the Megafauna - JSTOR Daily
James MacDonald
What happened to the huge mammals of the Pleistocene? Mastodons, gomphotheres, glyptodonts, and many others disappeared roughly 10,000 years ago, an event known as the Late Quaternary Extinction (LQE). (Note: The Pleistocene is an epoch in the Quaternary Period.) The discovery in White Sands, New Mexico, of a giant ground sloth footprint with a human footprint inside it lends support to a widely believed hypothesis: ancient humans may have hunted the massive beast. But could human hunters really have wiped out all of the megafauna? The answer, apparently, is yes. Ancient humans armed with stone weapons and tools would have had a difficult time hunting their massive prey, but according to scientists Paul L. Koch and Anthony D. Barnosky, there is no question that they did. Stone points have been found inside megafaunal remains, and butchering sites have been discovered. Humans may also have had indirect impacts besides hunting, modifying landscapes and destroying habitats through fire and settlement. But was that enough to cause a global extinction event? At the same time as the LQE, the climate was undergoing rapid changes following the end of the Last Glacial Maximum (LGM). To try to sort out whether climate or humans were responsible, Christopher Sandom and colleagues reviewed the existing literature on the LQE in *Proceedings: Biological Sciences*. They examined the distribution of humans as well as the distribution of mega-mammals at the time of extinction, climate patterns in each region, and the relative impact of humans or climate in previous studies. Their conclusion: the LQE is linked to humans in most areas. Mammals went extinct as humans expanded, and they went extinct in areas where the climate was stable but humans were new. Even when resources still existed for some of these animals, they went extinct anyway. The climate did have some influence in Eurasia, where mega-mammals were forced into small ranges and could not cope with both a changing climate and pressure from humans. Humans were not responsible for the extinction of every last population of megafauna. Many smaller populations went extinct before humans even colonized those areas, so clearly other factors were responsible. Some went extinct even before the LGM. But taken on a global scale, at least according to Sandom et al., the Late Quaternary Extinctions were primarily driven by humans.
true
true
true
Could humans be responsible for the extinction of megafauna like giant sloths and mastodons?
2024-10-12 00:00:00
2018-05-14 00:00:00
https://daily.jstor.org/…ths_1050x700.jpg
article
jstor.org
JSTOR Daily
null
null
22,773,155
https://insights.lytho.com/5-ideas-for-remote-employee-engagement
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
26,606,038
https://www.msn.com/en-us/news/world/releasing-container-ship-from-suez-canal-could-capsize-it/ar-BB1f0aBC
MSN
null
null
true
true
false
null
2024-10-12 00:00:00
2024-10-12 00:00:00
null
null
null
null
null
null
9,812,351
http://antiifcampaign.com/
What is Anti-IF Programming?
null
Defusing the IF Strategy Originated by Francesco Cirillo in 2007, the Anti-IF Programming approach has transformed how many perceive IFs and conditionals in software design. Explore its history. On this page, we delve into the essence of this approach. **The IF Dilemma** At the heart of software design lies a simple yet potentially dangerous tool: the conditional 'IF' statement. Undeniably foundational, its use to handle changes in a growth context, dubbed the "IF Strategy," can be a silent saboteur, complicating code and tangling logic. The IF Strategy can lead to debugging problems, never-delivered user stories, technical debt, entangled design, friction in the team, and other inefficiencies. In essence, the IF Strategy can escalate costs and delay software delivery times while degrading internal quality. **The Pitfalls of the "IF Strategy"** Why do developers opt for the IF Strategy when faced with changes? Simply put, it's the most immediate or primitive approach. Suppose a statement performs a specific action, and there's a desire for it to function differently under certain conditions. The immediate solution might be to add a conditional, such as an IF or a switch, around that statement and introduce an alternative action. The job seems done. This solution appears efficient in the short term, considering the time to implement the change. If the software never changed again, the IF Strategy would be ideal. But software constantly changes and evolves. Like a snowball gathering momentum, an over-reliance on this strategy can quickly blur logic, diminish code readability, and undermine its maintainability and testability. Nested IFs, in particular, can confuse even the most seasoned developers and introduce duplication and bugs. In the ever-changing landscape of software development, the IF Strategy can lead to catastrophic consequences, especially as the software grows—making it a poor choice when considering long-term repercussions. Often, the choice of the IF Strategy arises from time pressures: "The deadline is tomorrow; how do we make this change?" But frequently, it also stems from a lack of design knowledge. Many developers are unaware of how to implement a change using a design solution different from the IF Strategy. In both scenarios, opting for the IF Strategy means overlooking alternative solutions. Defusing the IF Strategy requires training, and that's where Anti-IF Programming steps in. It serves as a training tool, ensuring that even under time pressures, developers take a moment to reflect and evaluate a series of valuable design alternatives for software growth instead of defaulting to the IF Strategy. **The Promise of Anti-IF Programming** Anti-IF Programming isn't advocating for the outright abandonment of 'IF' statements. Instead, it champions their mindful use, particularly in software growth scenarios. This philosophy urges developers to explore alternative design methods, ones better suited to handling growth, change, and complexity, without succumbing to the pitfalls of the "IF Strategy." By steering clear of an excessive maze of conditionals, developers can embrace patterns, polymorphism, and other robust strategies. The result? A codebase that's flexible, adaptable, and easy to test—a place where errors are minimised, comprehension is amplified, and maintenance is streamlined. **Conclusion** Anti-IF Programming serves as a guiding light, steering developers toward neater, more intuitive coding practices.
It's an open invitation to rise above conventional challenges and traverse the complex world of software development with clarity and assurance. Simplify your codebase by removing excessive nested conditions and redundancies, making it easier to navigate and modify. Enhance the clarity of your code with consistent structuring and naming conventions, allowing both novice and expert developers to understand its flow quickly. By reducing complexity and promoting best coding practices, experience a significant drop in software defects and logic errors, ensuring a more robust application. Design your software to handle evolving requirements gracefully; this resilience means less overhaul and rework when changes arise. With a combination of streamlined code and a clear design, accelerate your software development cycle, enabling quicker releases and feature updates. Finally, achieve the software development trifecta of quality, speed, and cost: by emphasizing efficiency and effectiveness, deliver high-quality software promptly without breaking the budget. This last benefit arises from the cumulative effect of the points above.
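To make the alternative concrete, here is a minimal sketch in C (the subscription-plan domain, the names, and the discounts are all invented for illustration; this is not taken from the campaign's materials). A table of function pointers plays the role that polymorphism plays in object-oriented languages: supporting a new case means adding a row to the table rather than editing every branch that inspects a plan name.

```c
#include <stdio.h>
#include <string.h>

/* Each plan's pricing rule lives behind one function pointer,
 * instead of in an if/else chain repeated at every call site. */
typedef double (*price_fn)(double base);

static double price_basic(double base)   { return base; }
static double price_pro(double base)     { return base * 0.90; } /* 10% off */
static double price_premium(double base) { return base * 0.80; } /* 20% off */

/* The table replaces the conditional: one row per plan. */
static const struct {
    const char *name;
    price_fn    price;
} plans[] = {
    { "basic",   price_basic   },
    { "pro",     price_pro     },
    { "premium", price_premium },
};

static price_fn lookup_plan(const char *name)
{
    for (size_t i = 0; i < sizeof plans / sizeof plans[0]; i++)
        if (strcmp(plans[i].name, name) == 0)
            return plans[i].price;
    return NULL; /* unknown plan: the caller decides how to fail */
}

int main(void)
{
    price_fn price = lookup_plan("pro");
    if (price)
        printf("pro plan: %.2f\n", price(100.0)); /* prints 90.00 */
    return 0;
}
```

The one remaining IF (the unknown-plan check) is a boundary condition rather than a growth point, which is exactly the distinction the approach draws: conditionals themselves are fine; using them as the default mechanism for absorbing new requirements is not.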
true
true
true
Explore Anti-IF Programming, a strategy to mitigate the risks of excessive conditional statements in software design. Discover effective alternatives for managing growth, change, and complexity.
2024-10-12 00:00:00
2024-01-01 00:00:00
assets/images/anti-if-programming-social.png
null
null
null
null
null
4,720,059
http://www.indiegogo.com/happyedit?c=home
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
35,784,027
https://twitter.com/NickADobos/status/1653251763674419203
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
8,915,929
http://torrentfreak.com/the-internet-would-never-have-existed-without-the-copyright-monopoly-150118/
"The Internet Would Never Have Existed Without The Copyright Monopoly" * TorrentFreak
Rick Falkvinge
I had an interesting exchange of opinions with a copyright industry lawyer the other day. In what appeared to be a private conversation on Twitter between colleagues, I was called out as evil, with the claim that all the anti-copyright-monopoly sentiment on the Internet came from me personally. Of course, knowing how Twitter works, anybody mentioning my name gets an immediate highlight on my screen, and so I took the liberty of butting in to the conversation a few seconds later. I explained patiently that the Pirate Party could not possibly exist if there wasn’t already a widespread sense of information liberty; that the sentiment of the Internet already was that the copyright monopoly was there to constrict and punish rather than anything else. To my surprise, the copyright industry lawyer responded that the entire Internet would not have existed *at all* without the copyright monopoly. This was a statement that would have been trivial to ridicule to smithereens (“please explain how Al Gore fits into it?”), but it made me genuinely curious. How do these people think, anyway? So when I asked which part of the Internet Mr. Shrum referred to – DARPA (ARPAnet), TCP/IP, or CERN [sic, referring to the birth of WWW but I didn’t write that out], he surprised me even more by saying “All of them”. Somewhat to my surprise, this lawyer also picked up on the “monopoly” moniker as can be seen above, not trying to argue against that characteristic at all. So being aware that there was a monopoly, this copyright industry lawyer still argued that no part of the Internet would have been created without that monopoly. Of course, this goes completely counter to actual history: particularly with regard to the World Wide Web, which was specifically created in Switzerland to *circumvent* the monopoly previously held by the University of Minnesota in the US, where a similar technology by the name of Gopher had been developed. When somebody claims exclusive rights to a standard on the Internet, that standard is generally dropped like a bad habit and replaced by something else immediately. That has happened several times, and the WWW standard was such a replacement. However, I gained a lot of understanding from this short exchange. It would appear the people we are debating in the copyright industry are reasoning something like this: 1 – the Progress Clause (the justification for the copyright monopoly in Article I, Section 8 of the US Constitution, allowing Congress to create exclusive rights in order to “Promote the Progress of Science and the Useful Arts”) means that *any* law created using that justification automatically has the *effect* of also promoting such progress in all its applications and spheres of influence. 2 – therefore, anything created in an environment where such a law exists, created under a monopoly regime covering expressions of ideas, could not have been conceived *without* the existence of the law in question. If this is the actual reasoning – and it would appear that it is – then it becomes comprehensible why net liberty activists who fight for the freedom to create without permission are seen as evil by the copyright industry. If they genuinely believe that everything that exists was created because the copyright monopoly exists, then somebody who wants to take away that monopoly regime would plunge the world into darkness where nothing more is created, ever. Stop laughing. This explains the worldview we’re going up against when discussing the topic, and as such, it was valuable to understand.
It fits well with my observation from a few years back that copyright monopoly maximalists act like religious fundamentalists – especially given the apparent lack of any need to check actual facts, when all the “facts” could be easily deduced from the certainty that everything created was created because there is a copyright monopoly. At some point later in the discussion, a colleague in the copyright industry butted in and subtly suggested that this lawyer might want to stop arguing the point that the Internet was created because of the copyright monopoly before reading up on the actual history. That was the end of that discussion. About The Author Rick Falkvinge is a regular columnist on TorrentFreak, sharing his thoughts every other week. He is the founder of the Swedish and first Pirate Party, a whisky aficionado, and a low-altitude motorcycle pilot. His blog at falkvinge.net focuses on information policy.
true
true
true
This week I've learned something new. Apparently, some copyright industry lawyers genuinely believe that everything created today could not possibly have been created without a strong copyright monopoly regime.
2024-10-12 00:00:00
2015-01-18 00:00:00
null
article
torrentfreak.com
Torrentfreak
null
null
34,301,577
https://greatergood.berkeley.edu/article/item/how_power_shapes_trust
How Power Shapes Trust
Oliver Schilke, Martin Reimann
One of the ongoing themes of the current presidential campaign is that Americans are becoming increasingly distrustful of those who walk the corridors of power—Exhibit A being the Republican presidential primary, in which three of the top four candidates are Washington outsiders. Yet at the same time, these three—Donald Trump, Ben Carson and Carly Fiorina—are very powerful in their own way, be it as a billionaire entertainment and real estate mogul, a neurosurgeon and conservative media pundit, or a former corporate chief executive officer. So what makes us place trust in powerful people? We performed four studies on the nexus between power and trust and found some surprising results. #### Rational actor theory Typically, when asked, many people will be reluctant to admit that they would place blind trust in somebody who is in a high-power position. Too many stories of politicians and top executives abusing their power run through the media. Making oneself vulnerable to such power holders thus doesn’t seem like the sensible choice. Rational actor theories agree with this anecdotal wisdom: they suggest that people will be trustworthy toward someone else only if being so is instrumental in maintaining that relationship. Given that powerful people tend to have many partners to choose from, they place—relatively speaking—less value on any particular relationship, reducing the likelihood that they will behave in a trustworthy fashion. In other words, powerful individuals can afford to betray others—they can always find new people to work with. Rational actor theories further assume that the less powerful party to an exchange will predict this behavior and, as a result, place less trust in their more powerful counterpart. However, our research shows that this is not the case. In fact, we observe exactly the opposite pattern. Over a wide variety of different experimental paradigms and measures, we find that less powerful actors place more trust in others than more powerful actors do. That is, trust is greater when power is low rather than high. In a negotiation setting, for example, low-power negotiators perceived their partners to be more trustworthy than high-power negotiators did. In an investment game, low-power players entrusted their partners with more money than did high-power players. But why would that be the case? Why didn’t we find results consistent with predictions from rational actor theories? #### Studies in power and trust We conducted four experiments that tested whether being in a weaker or stronger power position is linked to differences in trust perceptions and behaviors. Study one used an established negotiation task, in which participants were asked to negotiate over a consignment of cellphones. We put participants either in a low- or high-power position, depending on the viability of their fall-back option (another buyer) in case the negotiations with the partner failed. Negotiators in the high-power position trusted their negotiation partner significantly less (as measured through a perceptual survey scale of trust) than did participants in the low-power condition. Study two was based on an investment task (also known as the “trust game”), in which participants had the option to either keep a monetary endowment to themselves or invest it in a partner, in which case the money was tripled, but it would be up to the partner to decide whether to send back some of the money or keep all the money to him- or herself.
Some participants were put in the position of the “power player,” enabling them to switch partners if they wanted to. We found that power players sent significantly less money – and thus trusted less – than non-power players. Studies three and four both used a scenario in which participants assumed the role of a typist offering services to a new potential client. They could either provide a free sample to the client and thus trust that the client would come back with follow-up jobs, or they could save their time and not provide a free sample (and thus not trust the client). Participants were given several pieces of information about themselves and the new client, for example, how urgently they needed the typing business and how much the client depended on their services, which allowed us to manipulate power. In both studies, participants in the high-power condition were significantly less trusting. Study four furthermore provided insight into the causal chain connecting power and trust. It revealed that having low power amplifies people’s hope that their exchange partner will turn out to be benevolent, which then leads to their decision to trust. #### Motivated cognition In essence, people who lack power are motivated to see their partner as more trustworthy in order to avoid the anxiety inherently attached to their feelings of dependence. This is known as “motivated cognition.” These power-disadvantaged actors thus effectively protect themselves by perceiving power holders in a positive light, even if little or no relevant information would support such perceptions. Their hope that their powerful partner will be trustworthy dominates their cognition and decision-making. The powerful partner, on the other hand, has no reason to engage in significant motivated cognition. In sum, the decision to place trust seems to be based more on one’s motivation to protect oneself from unwanted realities than on relatively rational calculations of the other party’s deliberations.
Recent findings show that many power holders are actually admired and even seen in a very positive light. Yet this seemingly high regard for power holders contradicts the low levels of trust people have in their leaders. Americans consistently show very little trust in the federal government, with just 24 percent saying they trusted it in 2014. A possible explanation for these divergent perspectives is provided by social distance. Following this idea, while self-reported trust in anonymous political decision makers in far-away Washington may be at all-time lows, trust in local politicians with whom people have interpersonal interactions is often high. To conclude, on the most general level, our findings may help explain why societies with stark hierarchical differences can function and endure. In a counterfactual world where people low in power refused to place trust in power holders, many of the advantages of hierarchies (such as improved coordination, reduced conflict, and stability) might not be attainable. These considerations underline the centrality of “irrational” acts of trust for the existence of a relatively stable society. *This article originally appeared in the UK-based publication The Conversation, an independent source of news and views from the academic and research community.*
true
true
true
A new study suggests that people with less power actually tend to put more faith in others.
2024-10-12 00:00:00
2015-10-09 00:00:00
https://ggsc.s3.us-west-…d77eb1e16b58.jpg
article
berkeley.edu
Greater Good
null
null
9,796,321
http://www.linuxjournal.com/article/1059
The ELF Object File Format: Introduction
Eric Youngdale
# The ELF Object File Format: Introduction Now that we are on the verge of a public release of ELF file format compilers and utilities, it is a logical time to explain the differences between **a.out** and ELF, and discuss how they will be visible to the user. As long as I am at it, I will also guide you on a tour of the internals of the ELF file format and show you how it works. I realize that Linux users range from people brand new to Unix to people who have used Unix systems for years—for this reason I will start with a fairly basic explanation which may be of little use to the more experienced users, because I would like this article to be useful in some way to as many people as possible. People often ask why we are bothering with a new file format. A couple of reasons come to mind—first, the current shared libraries can be somewhat cumbersome to build, especially for large packages such as the X Window System that span multiple directories. Second, the current **a.out** shared library scheme does not support the **dlopen()** function, which allows you to tell the dynamic loader to load additional shared libraries. Why ELF? The Unix community seems to be standardizing on this file format; various implementations of SVr4 such as MIPS, Solaris, and Unixware currently use ELF; SCO will reportedly switch to ELF in the near future; and there are rumors of other vendors switching to ELF. One interesting side note—Windows NT uses a file format based upon the COFF file format, the SVr3 file format that the Unix community is abandoning in favor of ELF. Let us start at the beginning. Users will generally encounter three types of ELF files—**.o** files, regular executables, and shared libraries. While all of these files serve different purposes, their internal structures are quite similar. Thus we can begin with a general description, and proceed to a discussion of the specifics of the three file types. Next month, I will demonstrate the use of the readelf program, which can be used to display and interpret various portions of ELF files. One universal concept among all different ELF file types (and also **a.out** and many other executable file formats) is the notion of a section. This concept is important enough to spend some time explaining. Simply put, a section is a collection of information of a similar type. Each section represents a portion of the file. For example, executable code is always placed in a section known as **.text**; all data variables initialized by the user are placed in a section known as **.data**; and uninitialized data is placed in a section known as **.bss**. In principle, one could devise an executable file format where everything is jumbled together—MS-DOS binaries come to mind. But dividing executables into sections has important advantages. For example, once you have loaded the executable portions of an executable into memory, these memory locations need not change. (In principle, program executable code could modify itself, but this is considered to be extremely poor programming practice.) On modern machine architectures, the memory manager can mark portions of memory read-only, such that any attempt to modify a read-only memory location results in the program dying and dumping core. Thus, instead of merely saying that we do not expect a particular memory location to change, we can specify that any attempt to modify a read-only memory location is a fatal error indicating a bug in the application.
That being said, typically you cannot individually set the read-only status for each byte of memory—instead you can individually set the protections of regions of memory known as pages. On the i386 architecture the page size is 4096 bytes—thus you could indicate that addresses **0-4095** are read-only, and bytes **4096** and up are writable, for example. Given that we want all executable portions of an executable in read-only memory and all modifiable locations of memory (such as variables) in writable memory, it turns out to be most efficient to group all of the executable portions of an executable into one section of memory (the **.text** section), and all modifiable data areas together into another area of memory (henceforth known as the **.data** section). A further distinction is made between data variables the user has initialized and data variables the user has not initialized. If the user has not specified the initial value of a variable, there is no sense wasting space in the executable file to store the value. Thus, initialized variables are grouped into the **.data** section, and uninitialized variables are grouped into the **.bss** section, which is special because it doesn't take up space in the file—it only tells how much space is needed for uninitialized variables. When you ask the kernel to load and run an executable, it starts by looking at the image header for clues about how to load the image. It locates the **.text** section within the executable, loads it into the appropriate portions of memory, and marks these pages as read-only. It then locates the **.data** section in the executable and loads it into the user's address space, this time in read-write memory. Finally, it finds the location and size of the **.bss** section from the image header, and adds the appropriate pages of memory to the user's address space. Even though the user has not specified the initial values of variables placed in **.bss**, by convention the kernel will initialize all of this memory to zero. Typically each **a.out** or ELF file also includes a symbol table. This contains a list of all of the symbols (program entry points, addresses of variables, etc.) that are defined or referenced within the file, the address associated with the symbol, and some kind of tag indicating the type of the symbol. In an **a.out** file, this is more or less the extent of the information present; as we shall see later, ELF files have considerably more information. In some cases, the symbol tables can be removed with the strip utility. The advantage is that the executable is smaller once stripped, but you lose the ability to debug the stripped binary. With **a.out** it is always possible to remove the symbol table from a file, but with ELF you typically need some symbolic information in the file for the program to load and run. Thus, in the case of ELF, the strip program will remove a portion of the symbol table, but it will never remove all of the symbol table. Finally, we need to discuss the concept of relocations. Let us say you compile a simple “hello world” program: main( ) { printf("Hello World\n"); } The compiler generates an object file which contains a reference to the function **printf** . Since we have not defined this symbol, it is an external reference. The executable code for this function will contain an instruction to call **printf**, but in the object code we do not yet know the actual location to call to perform this function. 
The assembler notices that the function **printf** is external, and it generates a relocation, which contains several components. First, it contains an index into the symbol table—this way, we know which symbol is being referenced. Second, it contains an offset into the **.text** section, which refers to the address of the operand of the call instruction. Finally, it contains a tag which indicates what type of relocation is actually present. When you link this file, the linker walks through the relocations, looks up the final address of the external function **printf**, then patches this address back into the operand of the call instruction so the call instruction now points to the actual function **printf**. **a.out** executables have no relocations. The kernel loader cannot resolve any symbols and will reject any attempt to run such a binary. An **a.out** object file will of course have relocations, but the linker must be able to fully resolve these to generate a usable executable. So far everything I have described applies to both **a.out** and ELF. Now I will enumerate the shortcomings of **a.out** so that it is clearer why we would want to switch to ELF. First, the header of an **a.out** file (struct exec, defined in **/usr/include/linux/a.out.h**) contains limited information. It only allows the above-described sections to exist and does not directly support any additional sections. Second, it contains only the sizes of the various sections, but does not directly specify the offsets within the file where the sections start. Thus the linker and the kernel loader have some unwritten understanding about where the various sections start within a file. Finally, there is no built-in shared library support—**a.out** was developed before shared library technology existed, so implementations of shared libraries based on **a.out** must abuse and misuse some of the existing sections in order to accomplish the tasks required. About 6 months ago, the default file format switched from ZMAGIC to QMAGIC files. Both of these are **a.out** formats, and the only real difference is the different set of unwritten understandings between the linker and kernel. Both formats of executable have a 32-byte header at the start of the file, but with ZMAGIC the **.text** section starts at byte offset 1024, while with QMAGIC the **.text** section starts at the beginning of the file and includes the header. Thus ZMAGIC wastes disk space, but, more importantly, the 1024-byte offset used with ZMAGIC makes efficient buffer cache management within the kernel more difficult. With a QMAGIC binary, the mapping from the file offset to the block representing a given page of memory is more natural, and should allow for some performance enhancements in the kernel. ELF binaries are also formatted in a natural way that is compatible with possible future changes to the buffer cache. I have said that shared library support in **a.out** is lacking—while this is true, it is not impossible to design shared library implementations that work with **a.out**. The current Linux shared libraries are certainly one example; another example is SunOS-style shared libraries which are currently used by BSD-*du-jour*. SunOS-style shared libraries contain a lot of the same concepts as ELF shared libraries, but ELF allows us to discard some of the really silly hacks that were required to piggyback a shared library implementation onto **a.out**.
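To make the contrast concrete, here is a short C sketch (mine, not from the original article) that reads the fixed-size header at the front of an ELF file using the structures glibc ships in <elf.h>, and prints the file offsets of the section and program header tables: exactly the information that struct exec never recorded.

```c
/* Build with: cc -o elfhdr elfhdr.c */
#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf32_Ehdr ehdr; /* 32-bit variant, matching the i386 systems discussed here */
    if (fread(&ehdr, sizeof ehdr, 1, f) != 1 ||
        memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0) {
        fprintf(stderr, "%s: not an ELF file\n", argv[1]);
        fclose(f);
        return 1;
    }
    /* e_type distinguishes the three file types discussed above:
     * ET_REL (.o files), ET_EXEC (executables), ET_DYN (shared libraries). */
    printf("type:            %u\n", (unsigned)ehdr.e_type);
    printf("machine:         %u\n", (unsigned)ehdr.e_machine);
    printf("section headers: %u at file offset %lu\n",
           (unsigned)ehdr.e_shnum, (unsigned long)ehdr.e_shoff);
    printf("program headers: %u at file offset %lu\n",
           (unsigned)ehdr.e_phnum, (unsigned long)ehdr.e_phoff);
    fclose(f);
    return 0;
}
```

Running it over a .o file, an executable, and a shared library shows the same self-describing header serving all three roles.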
Before we go into our hands-on description of how ELF works, it would be worthwhile to spend a little time discussing some general concepts related to shared libraries. Then when we start to pick apart an ELF file, it will be easier to see what is going on. First, I should explain a little bit about what a shared library is; a surprising number of people look at shared libraries as sort of black boxes without a good understanding of what goes on inside. Most users are at least aware of the fact that if they mess up their shared libraries, the system can become nearly unusable. This leads most people to treat them with a certain reverence. If we step back a little bit, we recall that non-shared libraries (also known as static libraries) contain useful procedures that programs might wish to make use of. Thus the programmer does not need to do everything from scratch, but can use a set of standard well-defined functions. This allows the programmer to be more productive. Unfortunately, when you link against a static library, the linker must extract all library functions you require and make them part of your executable, which can make it rather large. The idea behind a shared library is that you would somehow take the contents of the static library (not literally the contents, but usually something generated from the same source tree), and pre-link it into some kind of special executable. When you link your program against the shared library, the linker merely makes note of the fact that you are calling a function in a shared library, so it does not extract any executable code from the shared library. Instead, the linker adds instructions to the executable which tell the startup code in your executable that some shared libraries are also required, so when you run your program, the kernel starts by inserting the executable into your address space, but once your program starts up, all of these shared libraries are also added to your address space. Obviously some mechanism must be present for making sure that when your program calls a function in the shared library, it actually branches to the correct location within the shared library. I will be discussing the mechanics of this for ELF in a little bit. Now that we have explained shared libraries, we can start to discuss some of the general concepts related to how shared libraries are implemented under ELF. To begin with, ELF shared libraries are position independent. This means that you can load them more or less anywhere in memory, and they will work. The current **a.out** shared libraries are known as fixed address libraries: each library has one specific address where it must be loaded to work, and it would be foolish to try to load it anywhere else. ELF shared libraries achieve their position independence in a couple of ways. The main difference is that you should compile everything you want to insert into the shared library with the compiler switch **-fPIC**. This tells the compiler to generate code that is designed to be position independent, and it avoids referencing data by absolute address as much as possible. Position independence does not come without a cost, however. When you compile something to be PIC, the compiler reserves one machine register ( **%ebx** on the i386) to point to the start of a special table known as the global offset table (or GOT for short). That this register is reserved means that the compiler will have less flexibility in optimizing code, and this means that it will take longer to do the same job. 
Fortunately, our benchmark indicates that for most normal programs the drop in performance is less than 3% for a worst case, and in many cases much less than this. Another ELF feature is that its shared libraries resolve symbols and externals at run time. This is done using a symbol table and a list of relocations which must be performed before the image can start to execute. While this sounds like it could be slow, a number of optimizations built into ELF make it fairly fast. I should mention that when you compile PIC code into a shared library, there are generally very few relocations, one more reason why the performance impact is not of great concern. Technically, it is possible to generate a shared library from code that was not compiled with **-fPIC**, but an incredible number of relocations would need to be performed before the shared library was usable, another reason why **-fPIC** is important. When you reference global data within a shared library, the assembly code cannot simply load the value from memory the way you would do with non-PIC code. If you tried this, the code would not be position independent and a relocation would be associated with the instruction where you were attempting to load the value from the variable. Instead, the compiler/assembler/linker create the GOT, which is nothing more than a table of pointers, one pointer for each global variable defined or referenced in the shared library. Each time the library needs to reference a given variable, it first loads the address of the variable from the GOT (remember that the address of the GOT is always stored in **%ebx** so we only need an offset into the GOT). Once we have this, we can dereference it to obtain the actual value. The advantage of doing it this way is that to establish the address of a global variable, we need to store the address in only one place, and hence we need only one relocation per global variable. We must do something similar for functions. It is critical that we allow the user to redefine functions which might be in the shared library, and if the user does, we want to force the shared library to always use the version the user defined and never use the version of the function in the shared library. Since the function could conceivably be used lots of times within a shared library, we use something known as the procedure linkage table (or PLT) to reference all functions. In a sense this is nothing more than a fancy name for a jumptable, an array of jump instructions, one for each function that you might need to go to. Thus if a particular function is called from thousands of locations within the shared library, control will always pass through one jump instruction. This way, you need only one relocation to determine which version of a given function is actually called, and from the standpoint of performance this is about as good as you are going to get. Next month, we will use this information to dissect real ELF files, explaining specifics about the ELF file format. **Eric Youngdale** Eric Youngdale has worked with Linux for over two years, and has been active in kernel development. He also developed the current Linux shared libraries.
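Finally, to show what the **dlopen()** support mentioned at the start of this article buys, here is a minimal sketch of run-time loading in C; the choice of the math library and of **cos** is just an example (build with `cc -o dltest dltest.c -ldl`).

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Ask the dynamic loader to map an additional shared library
     * after the program has already started running. */
    void *handle = dlopen("libm.so", RTLD_LAZY); /* on some systems: "libm.so.6" */
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    /* Resolve a symbol in the freshly loaded library by name. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }
    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}
```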
true
true
true
null
2024-10-12 00:00:00
1995-04-01 00:00:00
null
null
linuxjournal.com
linuxjournal.com
null
null
10,446,350
http://7webpages.com/blog/image-duplicates-detection-python/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,216,995
http://www.wired.com/wiredscience/2011/11/humans-social/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,902,305
http://www.nytimes.com/2016/06/14/business/media/trump-kicks-phony-and-dishonest-washington-post-off-his-campaign.html?_r=0
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
26,823,029
https://www.theverge.com/22376272/lenovo-thinkpad-x12-detachable-review
The ThinkPad X12 Detachable is Lenovo’s latest take on the Surface Pro
Monica Chin
You could line up most ThinkPad models of the past few years and the average laptop buyer might have difficulty telling the difference between them. Lenovo has the look, feel, and features of its premium business line down to a science, and it has attracted a devoted base of fans in doing so. Over the past year, Lenovo has made several attempts to move the ThinkPad package into less traditional, more portable form factors, from the razor-thin X1 Nano to the pricey but groundbreaking X1 Fold. With the ThinkPad X12 Detachable, the company is once again taking direct aim at Microsoft’s Surface Pro line. As you probably figured out from its name, the X12 Detachable is a ThinkPad-branded and ThinkPad-looking 12.3-inch Windows tablet with a detachable keyboard deck. Once you’re aware of that information, there’s not much about the X12 Detachable that will surprise you. It has many of the same strengths that its ThinkPad siblings do, including the camera shutter, the discrete clickers, the keyboard nub, and the black-and-red color scheme that ThinkPad fans will know and love. It also comes with some unique drawbacks that are inherent to its form factor — small screen, shallow keyboard, limited ports, and so on. But if you’re in the market for a detachable PC with business features and strong specs, there’s no reason the X12 Detachable shouldn’t be on your list. Buying a portable, detachable machine sometimes means compromising on specs and performance, but that’s certainly not the case here. The X12 Detachable comes with Intel’s latest 11th Gen processors and runs Windows 10 Pro. The base model has an MSRP of $1,819 but is currently listed at $1,091. (This is just how Lenovo does their pricing — don’t think about it too hard.) It comes with a Core i3-1110G4, 8GB of RAM (soldered), and 256GB of storage. I’m testing a more expensive Core i5 model with 16GB of RAM and 512GB of storage, which is currently listed at $1,331.40. The most comparable Surface Pro 7 Plus models are currently listed at $849.99 and $1,399.99 respectively. Those prices are deceptive, however, because all of the X12 Detachable models currently listed on Lenovo’s site include a stylus and keyboard cover in the box; you have to buy them separately for the Surface Pro 7 Plus, and they add at least an extra $99.99 and $97.49 to the price. That means my X12 model is actually a couple hundred bucks less expensive than the most comparable Surface Pro (which also has less storage). The Core i5-1130G7 in my test unit offers the various amenities of the 11th Gen line, including Intel’s powerful Xe integrated graphics and support for Thunderbolt 4 and Wi-Fi 6. Using it was a good, smooth experience: I can’t imagine that anyone using the X12 for standard business work with Chrome tabs, streaming, Zoom calls, and the like will encounter any performance issues. The ThinkPad’s fan was consistently running during my use, but it wasn’t loud enough to be bothersome, and the device never heated up. This isn’t a laptop you’d want to use for any kind of heavy gaming, but the Iris Xe graphics can lend a hand with lighter creative work. I used the device to process and lightly edit a batch of photos, and while it wasn’t the snappiest experience I’ve ever had, it was workable for my amateur needs. Anyone who does professional graphic work should consider a system with a GPU, of course. The X12’s keyboard and pen are both fine, and among the better accessories I’ve used with detachable PCs. 
The stylus, which lives in a handy loop on the right side of the keyboard deck, didn’t give me any problems and has two buttons that you can map to your taste in Lenovo’s Pen Settings software. The keys are a bit cramped and flat, as is often the case with folio keyboards, but typing was a comfortable experience overall. They’re also backlit, which you don’t see on every detachable keyboard. The touchpad is a bit small (I often hit the clickers while scrolling) and not the smoothest around, but that’s also par for the course with this sort of device. If the touchpad isn’t your thing, you can use the TrackPoint in the center of the keyboard. One other part of the chassis worth calling out is the 1920 x 1280 display. Like the Surface Pro line, the X12 has a 3:2 aspect ratio, which is my favorite aspect ratio (yes, I have a favorite). It’s much roomier than a traditional 16:9 display, providing noticeably more vertical space. The top and side bezels are chunky, which may put some people off, but that’s understandable since the device is also meant to function regularly as a tablet (and you need something to hold). The panel itself is also quite nice. It gets decently bright, maxing out at 380 nits in my testing, which should be enough for use outdoors and in other bright settings (provided that you’re not doing creative work). It covers 73 percent of the DCI-P3 spectrum, which is on par with what we’ve seen from the Surface Pro 7 Plus. Videos and webpages all looked great, with bright colors and not too much glare. The X12 has a number of modern security features, which will mostly be of interest to business customers. There’s a match-on-chip fingerprint reader, which enables all enrollment, storage, and authentication to happen within the sensor itself, as well as a dTPM 2.0 chip and Lenovo’s self-healing BIOS, which Lenovo guarantees “will recover and self-heal when corrupted or maliciously attacked.” Like previous ThinkPads, the X12 also has an IR camera that supports Windows Hello facial recognition and includes a physical shutter to block it for privacy. I will note that the shutter is a bit small and difficult to swap back and forth, even with my tiny fingers. (There’s also no shutter for the rear-facing camera.) One area where Lenovo is lagging behind Microsoft is upgradeability. The Surface Pro 7 Plus comes with a removable SSD that’s easy to access. The X12’s SSD is very difficult to get at, and doing so would likely void its warranty, per analysis from the experts at *Tom’s Hardware*. This likely will not be a deciding factor for most consumers but could be an important consideration for some business users. The X12 Detachable has a fairly small 42Wh battery. So I was pleasantly surprised to see how long it lasted — especially since the X1 Fold, the last ThinkPad-branded tablet I reviewed, averaged under five hours to a charge. With the screen around 200 nits of brightness, using the X12 as my daily driver for Chrome tabs, Spotify streaming, Zoom calls, and the like, I averaged around seven hours and 50 minutes to a charge. This is right around what my colleague Tom Warren got out of the Surface Pro 7 Plus, and it means you’ll get close to a full day out of the X12 if your workload is similar to mine. I think Lenovo’s done what it set out to do with the ThinkPad X12 Detachable. The device makes a clear case for itself as a system with the performance, build quality, and business features of a ThinkPad in a uniquely portable and versatile form factor. 
Its areas of weakness aren’t disasters, and they’re not areas where other detachables do much better. And it comes with some unique benefits, including Lenovo’s suite of business features and the bundled keyboard and stylus. When comparing the X12 to the Surface Pro 7 Plus, the latter factor is what really seals the deal for me. The merits of the two laptops’ respective enterprise features and upgradability benefits will likely vary between companies. But for a consumer like me, the fact that the accessories you need are bundled into the X12’s price makes it a significantly better deal than its Microsoft competitor without too many drawbacks. And if you’re a ThinkPad fan who’s been hoping for a capable detachable with all your favorite features, your wish has certainly come true. *Photography by Monica Chin / The Verge*
true
true
true
ThinkPad power, Surface Pro convenience.
2024-10-12 00:00:00
2021-04-13 00:00:00
https://cdn.vox-cdn.com/…03_4517_0003.jpg
article
theverge.com
The Verge
null
null