# Filter bubble
A **filter bubble** or **ideological frame** is a state of intellectual isolation[1] that can result from personalized searches, recommendation systems, and algorithmic curation. The search results are based on information about the user, such as their location, past click-behavior, and search history.[2] Consequently, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles, resulting in a limited and customized view of the world.[3] The choices made by these algorithms are not always transparent.[4] Prime examples include Google Personalized Search results and Facebook's personalized news-stream.
However, there are conflicting reports about the extent to which personalized filtering happens and whether such activity is beneficial or harmful, with various studies producing inconclusive results.
The term *filter bubble* was coined by internet activist Eli Pariser circa 2010. In his influential book of the same name, *The Filter Bubble* (2011), Pariser predicted that individualized personalization by algorithmic filtering would lead to intellectual isolation and social fragmentation.[5] The bubble effect may have negative implications for civic discourse, according to Pariser, but contrasting views regard the effect as minimal[6] and addressable.[7] According to Pariser, users get less exposure to conflicting viewpoints and are isolated intellectually in their informational bubble.[8] He related an example in which one user searched Google for "BP" and got investment news about British Petroleum, while another searcher got information about the Deepwater Horizon oil spill, noting that the two search results pages were "strikingly different" despite use of the same key words.[8][9][10][6] The results of the 2016 U.S. presidential election have been associated with the influence of social media platforms such as Twitter and Facebook,[11] which in turn has called into question the effects of the "filter bubble" phenomenon on user exposure to fake news and echo chambers[12] and spurred new interest in the term,[13] with many concerned that the phenomenon may harm democracy and well-being by making the effects of misinformation worse.[14][15][13][16][17][18]
## Concept
Pariser defined his concept of a filter bubble in more formal terms as "that personal ecosystem of information that's been catered by these algorithms."[8] An internet user's past browsing and search history is built up over time when they indicate interest in topics by "clicking links, viewing friends, putting movies in [their] queue, reading news stories," and so forth.[19] An internet firm then uses this information to target advertising to the user, or make certain types of information appear more prominently in search results pages.[19]
This process is not random, as it operates under a three-step process, per Pariser, who states, "First, you figure out who people are and what they like. Then, you provide them with content and services that best fit them. Finally, you tune in to get the fit just right. Your identity shapes your media."[20] Pariser also reports:
According to one *Wall Street Journal* study, the top fifty Internet sites, from CNN to Yahoo to MSN, install an average of 64 data-laden cookies and personal tracking beacons. Search for a word like "depression" on Dictionary.com, and the site installs up to 223 tracking cookies and beacons on your computer so that other Web sites can target you with antidepressants. Share an article about cooking on ABC News, and you may be chased around the Web by ads for Teflon-coated pots. Open—even for an instant—a page listing signs that your spouse may be cheating and prepare to be haunted by DNA paternity-test ads.[21]
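Pariser's three-step description can be rendered as a simple feedback loop. The sketch below is purely illustrative and assumes nothing about any real platform; all names (`UserProfile`, `recommend`, the topic labels) are invented:

```python
from collections import Counter

class UserProfile:
    """Illustrative user model: topic affinities learned from click behavior."""
    def __init__(self):
        self.interests = Counter()

    def observe(self, clicked_topics):
        # Step 1: infer who the user is from what they click.
        self.interests.update(clicked_topics)

def recommend(profile, items, k=2):
    # Step 2: serve the content that best fits the inferred identity.
    def fit(item):
        return sum(profile.interests[t] for t in item["topics"])
    return sorted(items, key=fit, reverse=True)[:k]

# Step 3: clicks on recommendations feed back into the profile, tuning
# the fit -- and progressively narrowing what is shown next.
profile = UserProfile()
profile.observe(["finance"])
items = [{"title": "BP investment news", "topics": ["finance"]},
         {"title": "Oil spill update", "topics": ["environment"]}]
print([i["title"] for i in recommend(profile, items, k=1)])  # ['BP investment news']
```

Even in this toy version, the loop only ever reinforces topics the user has already clicked, which is the narrowing Pariser describes.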
Analysis of link-click data gathered through site-traffic measurements shows that filter bubbles can be collective or individual.[22]
As of 2011, one engineer had told Pariser that Google looked at 57 different pieces of data to personally tailor a user's search results, including non-cookie data such as the type of computer being used and the user's physical location.[23]
Pariser's idea of the filter bubble was popularized after his TED talk in May 2011, in which he gave examples of how filter bubbles work and where they can be seen. In a test seeking to demonstrate the filter bubble effect, Pariser asked several friends to search for the word "Egypt" on Google and send him the results. Comparing two of the friends' first pages of results, while there was overlap between them on topics like news and travel, one friend's results prominently included links to information on the then-ongoing Egyptian revolution of 2011, while the other friend's first page of results did not include such links.[24]
In *The Filter Bubble*, Pariser warns that a potential downside to filtered searching is that it "closes us off to new ideas, subjects, and important information,"[25] and "creates the impression that our narrow self-interest is all that exists."[9] In his view, filter bubbles are potentially harmful to both individuals and society. He criticized Google and Facebook for offering users "too much candy and not enough carrots."[26] He warned that "invisible algorithmic editing of the web" may limit our exposure to new information and narrow our outlook.[26] According to Pariser, the detrimental effects of filter bubbles include harm to the general society in the sense that they have the possibility of "undermining civic discourse" and making people more vulnerable to "propaganda and manipulation."[9] He wrote:
A world constructed from the familiar is a world in which there's nothing to learn ... (since there is) invisible autopropaganda, indoctrinating us with our own ideas.
— Eli Pariser in *The Economist*, 2011[27]
Many people are unaware that filter bubbles even exist. This can be seen in an article in The Guardian, which reported that "more than 60% of Facebook users are entirely unaware of any curation on Facebook at all, believing instead that every single story from their friends and followed pages appeared in their news feed."[28] In brief, Facebook decides what appears in a user's news feed through an algorithm that takes into account "how you have interacted with similar posts in the past."[28]
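That ranking signal, "how you have interacted with similar posts in the past," can be illustrated with a toy similarity score. This is a hedged sketch, not Facebook's actual algorithm; every name and feature vector in it is invented:

```python
import math

def cosine(a, b):
    # Similarity between two topic-weight vectors stored as dicts.
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Posts the user previously interacted with (invented feature vectors).
past = [{"politics": 1.0, "local": 0.5}]

def feed_score(post):
    # Score a candidate by its closest match among past interactions.
    return max(cosine(post, p) for p in past)

candidates = {
    "city council vote": {"politics": 0.9, "local": 0.8},
    "gardening tips": {"hobby": 1.0},
}
for title in sorted(candidates, key=lambda t: feed_score(candidates[t]), reverse=True):
    print(title, round(feed_score(candidates[title]), 2))
```

Posts resembling past interactions float to the top; content dissimilar to the user's history scores zero and sinks.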
### Extensions of concept
A filter bubble has been described as exacerbating a phenomenon called *splinternet* or *cyberbalkanization*,[Note 1] which happens when the internet becomes divided into sub-groups of like-minded people who become insulated within their own online community and fail to get exposure to different views. This concern dates back to the early days of the publicly accessible internet, with the term "cyberbalkanization" being coined in 1996.[29][30][31] Other terms have been used to describe this phenomenon, including "ideological frames"[9] and "the figurative sphere surrounding you as you search the internet."[19]
The concept of a filter bubble has been extended into other areas, to describe societies that self-segregate not only by political views but also by economic, social, and cultural situations.[32] That bubbling results in a loss of the broader community and creates the sense that, for example, children do not belong at social events unless those events were especially planned to be appealing to children and unappealing to adults without children.[32]
Barack Obama's farewell address identified a similar concept to filter bubbles as a "threat to [Americans'] democracy," i.e., the "retreat into our own bubbles, ...especially our social media feeds, surrounded by people who look like us and share the same political outlook and never challenge our assumptions... And increasingly, we become so secure in our bubbles that we start accepting only information, whether it's true or not, that fits our opinions, instead of basing our opinions on the evidence that is out there."[33]
### Comparison with echo chambers
[edit]Both "echo chambers" and "filter bubbles" describe situations where individuals are exposed to a narrow range of opinions and perspectives that reinforce their existing beliefs and biases, but there are some subtle differences between the two, especially in practices surrounding social media.[34][35]
Specific to news media, an echo chamber is a metaphorical description of a situation in which beliefs are amplified or reinforced by communication and repetition inside a closed system.[36][37] Based on the sociological concept of selective exposure theory, the term is a metaphor based on the acoustic echo chamber, where sounds reverberate in a hollow enclosure. With regard to social media, this sort of situation feeds into explicit mechanisms of *self-selected personalization*, which describes all processes in which users of a given platform can actively opt in and out of information consumption, such as a user's ability to follow other users or select into groups.[38]
In an echo chamber, people are able to seek out information that reinforces their existing views, potentially as an unconscious exercise of confirmation bias. This sort of feedback regulation may increase political and social polarization and extremism. This can lead to users aggregating into homophilic clusters within social networks, which contributes to group polarization.[39] "Echo chambers" reinforce an individual's beliefs without factual support. Individuals are surrounded by those who acknowledge and follow the same viewpoints, but they also possess the agency to break outside of the echo chambers.[40]
On the other hand, filter bubbles are implicit mechanisms of *pre-selected personalization*, where a user's media consumption is created by personalized algorithms; the content a user sees is filtered through an AI-driven algorithm that reinforces their existing beliefs and preferences, potentially excluding contrary or diverse perspectives. In this case, users have a more passive role and are perceived as victims of a technology that automatically limits their exposure to information that would challenge their world view.[38] Some researchers argue, however, that because users still play an active role in selectively curating their own newsfeeds and information sources through their interactions with search engines and social media networks, they directly assist in the filtering process performed by AI-driven algorithms, thus effectively engaging in self-segregating filter bubbles.[41]
Despite their differences, these terms are used hand-in-hand in both academic and platform studies. It is often hard to distinguish between the two concepts in social network studies because of limited access to the filtering algorithms, which might otherwise enable researchers to compare and contrast the agency involved in the two concepts.[42] This type of research will continue to grow more difficult to conduct, as many social media networks have also begun to limit the API access needed for academic research.[43]
## Reactions and studies
### Media reactions
There are conflicting reports about the extent to which personalized filtering happens and whether such activity is beneficial or harmful. Analyst Jacob Weisberg, writing in June 2011 for *Slate*, did a small non-scientific experiment to test Pariser's theory which involved five associates with different ideological backgrounds conducting a series of searches, "John Boehner," "Barney Frank," "Ryan plan," and "Obamacare," and sending Weisberg screenshots of their results. The results varied only in minor respects from person to person, and any differences did not appear to be ideology-related, leading Weisberg to conclude that a filter bubble was not in effect, and to write that the idea that most internet users were "feeding at the trough of a *Daily Me*" was overblown.[9] Weisberg asked Google to comment, and a spokesperson stated that algorithms were in place to deliberately "limit personalization and promote variety."[9] Book reviewer Paul Boutin did a similar experiment to Weisberg's among people with differing search histories and again found that the different searchers received nearly identical search results.[6] Interviewing programmers at Google, off the record, journalist Per Grankvist found that user data used to play a bigger role in determining search results but that Google, through testing, found that the search query is by far the best determinant of what results to display.[44]
There are reports that Google and other sites maintain vast "dossiers" of information on their users, which might enable them to personalize individual internet experiences further if they chose to do so. For instance, the technology exists for Google to keep track of users' histories even if they don't have a personal Google account or are not logged into one.[6] One report stated that Google had collected "10 years' worth" of information amassed from varying sources, such as Gmail, Google Maps, and other services besides its search engine,[10] although a contrary report was that trying to personalize the internet for each user was technically challenging for an internet firm to achieve despite the huge amounts of available data.
Analyst Doug Gross of CNN suggested that filtered searching seemed to be more helpful for *consumers* than for *citizens*: it would help a consumer looking for "pizza" find local delivery options based on a personalized search and appropriately filter out distant pizza stores.[10]
Organizations such as the *Washington Post*, *The New York Times*, and others have experimented with creating new personalized information services, with the aim of tailoring search results to those that users are likely to like or agree with.[9]
### Academic studies and reactions
A scientific study from Wharton that analyzed personalized recommendations also found that these filters can create commonality, not fragmentation, in online music taste.[45] Consumers reportedly use the filters to expand their taste rather than to limit it.[45] Harvard law professor Jonathan Zittrain disputed the extent to which personalization filters distort Google search results, saying that "the effects of search personalization have been light."[9] Further, Google provides the ability for users to shut off personalization features if they choose[46] by deleting Google's record of their search history and setting Google not to remember their search keywords and visited links in the future.[6]
A study from *Internet Policy Review* addressed the lack of a clear and testable definition for filter bubbles across disciplines; this often results in researchers defining and studying filter bubbles in different ways.[47] The study also noted a lack of empirical data for the existence of filter bubbles across disciplines[12] and suggested that the effects attributed to them may stem more from preexisting ideological biases than from algorithms. Similar views can be found in other academic projects, which also address concerns with the definitions of filter bubbles and the relationships between ideological and technological factors associated with them.[48] A critical review of filter bubbles suggested that "the filter bubble thesis often posits a special kind of political human who has opinions that are strong, but at the same time highly malleable" and that it is a "paradox that people have an active agency when they select content but are passive receivers once they are exposed to the algorithmically curated content recommended to them."[49]
A study by Oxford, Stanford, and Microsoft researchers examined the browsing histories of 1.2 million U.S. users of the Bing Toolbar add-on for Internet Explorer between March and May 2013. They selected 50,000 of those users who were active news consumers, then classified whether the news outlets they visited were left- or right-leaning, based on whether the majority of voters in the counties associated with user IP addresses voted for Obama or Romney in the 2012 presidential election. They then identified whether news stories were read after accessing the publisher's site directly, via the Google News aggregation service, web searches, or social media. The researchers found that while web searches and social media do contribute to ideological segregation, the vast majority of online news consumption consisted of users directly visiting left- or right-leaning mainstream news sites and consequently being exposed almost exclusively to views from a single side of the political spectrum. Limitations of the study included selection issues such as Internet Explorer users skewing higher in age than the general internet population; Bing Toolbar usage and the voluntary (or unknowing) sharing of browsing history selecting for users who are less concerned about privacy; the assumption that all stories in left-leaning publications are left-leaning, and the same for right-leaning; and the possibility that users who are *not* active news consumers may get most of their news via social media, and thus experience stronger effects of social or algorithmic bias than those users who essentially self-select their bias through their choice of news publications (assuming they are aware of the publications' biases).[50]
A study by Princeton University and New York University researchers aimed to study the impact of filter bubble and algorithmic filtering on social media polarization. They used a mathematical model called the "stochastic block model" to test their hypothesis on the environments of Reddit and Twitter. The researchers gauged changes in polarization in regularized social media networks and non-regularized networks, specifically measuring the percent changes in polarization and disagreement on Reddit and Twitter. They found that polarization increased significantly, by 400%, in non-regularized networks, while polarization increased by 4% in regularized networks and disagreement by 5%.[51]
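For intuition, a minimal sketch of that kind of setup follows, assuming the `networkx` library; the parameters and the disagreement measure are invented for illustration and are not the researchers' actual model or code:

```python
import networkx as nx

# Two ideological communities with dense within-group and sparse
# cross-group ties (parameters chosen for illustration only).
sizes = [50, 50]
probs = [[0.20, 0.01],
         [0.01, 0.20]]
G = nx.stochastic_block_model(sizes, probs, seed=42)

# Assign each community an opposing opinion and measure "disagreement"
# as the share of edges connecting nodes with different opinions.
opinion = {n: 1 if G.nodes[n]["block"] == 0 else -1 for n in G.nodes}
cross = sum(1 for u, v in G.edges if opinion[u] != opinion[v])
print(f"cross-camp edges: {cross / G.number_of_edges():.1%}")
```

Raising the off-diagonal probability in `probs` (more cross-group ties) raises the share of disagreeing edges, which is the intuition behind comparing regularized and non-regularized networks.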
### Platform studies
While algorithms do limit political diversity, some of the filter bubble is the result of user choice.[52] A study by data scientists at Facebook found that users have one friend with contrasting views for every four Facebook friends that share an ideology.[53][54] No matter what Facebook's algorithm for its News Feed is, people are more likely to befriend/follow people who share similar beliefs.[53] The nature of the algorithm is that it ranks stories based on a user's history, resulting in a reduction of "politically cross-cutting content by 5 percent for conservatives and 8 percent for liberals."[53] However, even when people are given the option to click on a link offering contrasting views, they still default to their most viewed sources.[53] "[U]ser choice decreases the likelihood of clicking on a cross-cutting link by 17 percent for conservatives and 6 percent for liberals."[53] A cross-cutting link is one that introduces a point of view different from the user's presumed point of view, or what the website has pegged as the user's beliefs.[55]
A study by Levi Boxell, Matthew Gentzkow, and Jesse M. Shapiro suggests that online media is not the driving force of political polarization.[56] The paper argues that polarization has been driven by the demographic groups that spend the least time online. The greatest ideological divide is experienced among Americans older than 75, of whom only 20% reported using social media as of 2012. In contrast, 80% of Americans aged 18–39 reported using social media as of 2012. The data suggest that the younger demographic was no more polarized in 2012 than it had been when online media barely existed in 1996. The study highlights differences between age groups and how news consumption remains polarized as people seek information that appeals to their preconceptions. Older Americans usually remain stagnant in their political views as traditional media outlets continue to be a primary source of news, while online media is the leading source for the younger demographic. Although algorithms and filter bubbles weaken content diversity, this study reveals that political polarization trends are driven primarily by pre-existing views and failure to recognize outside sources.
A 2020 study from Germany utilized the Big Five psychology model to test the effects of individual personality, demographics, and ideologies on user news consumption.[57] Basing their study on the notion that the number of news sources users consume affects their likelihood of being caught in a filter bubble—with higher media diversity lessening the chances—their results suggest that certain demographics (higher age and male) along with certain personality traits (high openness) correlate positively with the number of news sources consumed by individuals. The study also found a negative ideological association between media diversity and the degree to which users align with right-wing authoritarianism. Beyond offering different individual user factors that may influence the role of user choice, this study also raises questions and associations between the likelihood of users being caught in filter bubbles and user voting behavior.[57]
The Facebook study found that it was "inconclusive" whether or not the algorithm played as big a role in filtering News Feeds as people assumed.[58] The study also found that "individual choice," or confirmation bias, likewise affected what gets filtered out of News Feeds.[58] Some social scientists criticized this conclusion, because the point of protesting the filter bubble is that the algorithms and individual choice work together to filter News Feeds.[59] They also criticized Facebook's small sample size, about "9% of actual Facebook users," and the fact that the study results are "not reproducible" because the study was conducted by "Facebook scientists" who had access to data that Facebook does not make available to outside researchers.[60]
Though the study found that only about 15–20% of the average user's Facebook friends subscribe to the opposite side of the political spectrum, Julia Kaman from Vox theorized that this could have potentially positive implications for viewpoint diversity. These "friends" are often acquaintances with whom we would not likely share our politics without the internet. Facebook may foster a unique environment where a user sees and possibly interacts with content posted or re-posted by these "second-tier" friends. The study found that "24 percent of the news items liberals saw were conservative-leaning and 38 percent of the news conservatives saw was liberal-leaning."[61] "Liberals tend to be connected to fewer friends who share information from the other side, compared with their conservative counterparts."[62] This interplay has the ability to provide diverse information and sources that could moderate users' views.
Similarly, a study of Twitter's filter bubbles by New York University concluded that "Individuals now have access to a wider span of viewpoints about news events, and most of this information is not coming through the traditional channels, but either directly from political actors or through their friends and relatives. Furthermore, the interactive nature of social media creates opportunities for individuals to discuss political events with their peers, including those with whom they have weak social ties."[63] According to these studies, social media may be diversifying information and opinions users come into contact with, though there is much speculation around filter bubbles and their ability to create deeper political polarization.
One driver and possible solution to the problem is the role of emotions in online content. A 2018 study shows that different emotions of messages can lead to polarization or convergence: joy is prevalent in emotional polarization, while sadness and fear play significant roles in emotional convergence.[64] Since it is relatively easy to detect the emotional content of messages, these findings can help to design more socially responsible algorithms by starting to focus on the emotional content of algorithmic recommendations.
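A hedged sketch of that design direction: a recommender could re-weight items by their detected emotion, down-weighting the emotions the study associates with polarization. The labels, weights, and items below are invented for illustration and are not from the study:

```python
# Invented emotion categories and weights, for illustration only.
POLARIZING = {"joy"}             # the study links joy to emotional polarization
CONVERGING = {"sadness", "fear"} # and sadness/fear to emotional convergence

def adjusted_score(base, emotion, penalty=0.3, bonus=0.1):
    # Re-weight a recommendation by the detected emotion of its content.
    if emotion in POLARIZING:
        return base * (1 - penalty)
    if emotion in CONVERGING:
        return base * (1 + bonus)
    return base

items = [("rally highlights", 0.9, "joy"),
         ("flood relief appeal", 0.7, "sadness")]
ranked = sorted(items, key=lambda it: adjusted_score(it[1], it[2]), reverse=True)
print([title for title, _, _ in ranked])  # the polarizing item drops below
```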
Social bots have been utilized by different researchers to test polarization and related effects that are attributed to filter bubbles and echo chambers.[65][66] A 2018 study used social bots on Twitter to test deliberate user exposure to partisan viewpoints.[65] The study claimed it demonstrated partisan differences between exposure to differing views, although it warned that the findings should be limited to party-registered American Twitter users. One of the main findings was that after exposure to differing views (provided by the bots), self-registered Republicans became more conservative, whereas self-registered liberals showed less ideological change, if any at all. A different study from the People's Republic of China utilized social bots on *Weibo*—the largest social media platform in China—to examine the structure of filter bubbles with regard to their effects on polarization.[66] The study draws a distinction between two conceptions of polarization: one in which people with similar views form groups, share similar opinions, and block themselves from differing viewpoints (opinion polarization), and one in which people do not access diverse content and sources of information (information polarization). By utilizing social bots instead of human volunteers and focusing more on information polarization than on opinion polarization, the researchers concluded that there are two essential elements of a filter bubble: a large concentration of users around a single topic and a uni-directional, star-like structure that impacts key information flows.
In June 2018, the platform DuckDuckGo conducted a study of personalization in Google web search results. For this study, 87 adults in various locations around the continental United States googled three keywords at the exact same time: immigration, gun control, and vaccinations. Even in private browsing mode, most people saw results unique to them. Google included certain links for some participants that it did not include for others, and the News and Videos infoboxes showed significant variation. Google publicly disputed these results, saying that Search Engine Results Page (SERP) personalization is mostly a myth. Google Search Liaison Danny Sullivan stated that "Over the years, a myth has developed that Google Search personalizes so much that for the same query, different people might get significantly different results from each other. This isn't the case. Results can differ, but usually for non-personalized reasons."[67]
When filter bubbles are in place, they can create specific moments that scientists call "whoa" moments: when an article, ad, or post related to one's current action or use of an object appears on one's screen. Scientists coined the term after a young woman, going through her daily routine that included drinking coffee, opened her computer and noticed an advertisement for the same brand of coffee she was drinking: "Sat down and opened up Facebook this morning while having my coffee, and there they were two ads for Nespresso. Kind of a 'whoa' moment when the product you're drinking pops up on the screen in front of you."[68] "Whoa" moments occur when people are "found," meaning that advertising algorithms target specific users based on their "click behavior" to increase their sales revenue.
Several designers have developed tools to counteract the effects of filter bubbles (see § Countermeasures).[69] Swiss radio station SRF voted the word *filterblase* (the German translation of filter bubble) word of the year 2016.[70]
## Countermeasures
### By individuals
In *The Filter Bubble: What the Internet Is Hiding from You*,[71] internet activist Eli Pariser highlights how the increasing occurrence of filter bubbles further emphasizes the value of one's bridging social capital, as defined by Robert Putnam. Pariser argues that filter bubbles reinforce a sense of social homogeneity, which weakens ties between people with potentially diverging interests and viewpoints.[72] In that sense, high bridging capital may promote social inclusion by increasing our exposure to a space that goes beyond self-interests. Fostering one's bridging capital, such as by connecting with more people in an informal setting, may be an effective way to reduce the filter bubble phenomenon.
Users can take many actions to burst through their filter bubbles, for example by making a conscious effort to evaluate what information they are exposing themselves to, and by thinking critically about whether they are engaging with a broad range of content.[73] Users can consciously avoid news sources that are unverifiable or weak. Chris Glushko, the VP of Marketing at IAB, advocates using fact-checking sites to identify fake news.[74] Technology can also play a valuable role in combating filter bubbles.[75]
Some browser plug-ins aim to help people step out of their filter bubbles and make them aware of their personal perspectives; these tools show content that contradicts the user's beliefs and opinions. In addition to plug-ins, there are apps created with the mission of encouraging users to open up their echo chambers. News apps such as *Read Across the Aisle* nudge users to read different perspectives if their reading pattern is biased towards one side or ideology.[76] Although apps and plug-ins are tools humans can use, Eli Pariser stated, "certainly, there is some individual responsibility here to really seek out new sources and people who aren't like you."[52]
Since web-based advertising can further the effect of the filter bubbles by exposing users to more of the same content, users can block much advertising by deleting their search history, turning off targeted ads, and downloading browser extensions. Some use anonymous or non-personalized search engines such as YaCy, DuckDuckGo, Qwant, Startpage.com, Disconnect, and Searx in order to prevent companies from gathering their web-search data. Swiss daily *Neue Zürcher Zeitung* is beta-testing a personalized news engine app which uses machine learning to guess what content a user is interested in, while "always including an element of surprise"; the idea is to mix in stories which a user is unlikely to have followed in the past.[77]
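The "element of surprise" idea can be sketched as reserving a share of feed slots for stories outside the user's predicted interests. The 20% ratio and all names below are assumptions for illustration, not NZZ's implementation:

```python
import random

def build_stream(predicted, outside, n=25, surprise_ratio=0.2):
    # Fill most slots with predicted-interest stories, but reserve the rest
    # for stories the user's history gives no reason to expect.
    n_surprise = max(1, int(n * surprise_ratio))
    stream = predicted[: n - n_surprise]
    stream += random.sample(outside, min(n_surprise, len(outside)))
    random.shuffle(stream)
    return stream

predicted = [f"tech story {i}" for i in range(30)]
outside = [f"sports story {i}" for i in range(10)]
print(build_stream(predicted, outside)[:5])
```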
The European Union is taking measures to lessen the effect of the filter bubble. The European Parliament is sponsoring inquiries into how filter bubbles affect people's ability to access diverse news.[78] Additionally, it introduced a program aimed at educating citizens about social media.[79] In the U.S., the CSCW panel suggests the use of news aggregator apps to broaden media consumers' news intake. News aggregator apps scan all current news articles and direct readers to different viewpoints on a given topic. Users can also use a diversity-aware news balancer, which visually shows media consumers whether they lean left or right in their news reading, indicating a right lean with a bigger red bar and a left lean with a bigger blue bar. A study evaluating this news balancer found "a small but noticeable change in reading behavior, toward more balanced exposure, among users seeing the feedback, as compared to a control group".[80]
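A minimal sketch of such a balance indicator, assuming hand-labeled source leanings; the outlet labels and the text-bar rendering are invented for illustration, not the study's tool:

```python
def lean_score(reading_log, source_lean):
    # source_lean maps an outlet to a value in [-1.0 (left), +1.0 (right)].
    scores = [source_lean[s] for s in reading_log if s in source_lean]
    return sum(scores) / len(scores) if scores else 0.0

def render_bar(score, width=20):
    # Map [-1, 1] onto a fixed-width bar; more R's = right-leaning reading.
    right = int((score + 1) / 2 * width)
    return "L" * (width - right) + "R" * right

log = ["outlet_a", "outlet_a", "outlet_b"]
leans = {"outlet_a": -0.6, "outlet_b": 0.4}  # invented labels
s = lean_score(log, leans)
print(f"lean {s:+.2f}  {render_bar(s)}")
```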
### By media companies
In light of recent concerns about information filtering on social media, Facebook acknowledged the presence of filter bubbles and has taken strides toward removing them.[81] In January 2017, Facebook removed personalization from its Trending Topics list in response to problems with some users not seeing highly talked-about events there.[82] Facebook's strategy is to reverse the Related Articles feature that it had implemented in 2013, which would post related news stories after the user read a shared article. Now, the revamped strategy would flip this process and post articles from different perspectives on the same topic. Facebook is also attempting to go through a vetting process whereby only articles from reputable sources will be shown. Along with the founder of Craigslist and a few others, Facebook has invested $14 million into efforts "to increase trust in journalism around the world, and to better inform the public conversation".[81] The idea is that even if people are only reading posts shared from their friends, at least these posts will be credible.
Similarly, as of January 30, 2018, Google has also acknowledged the existence of filter bubble difficulties within its platform. Because current Google searches pull algorithmically ranked results based upon "authoritativeness" and "relevancy", which show and hide certain search results, Google is seeking to combat this. By training its search engine to recognize the intent of a search inquiry rather than the literal syntax of the question, Google is attempting to limit the size of filter bubbles. The initial phase of this training was planned for the second quarter of 2018. Questions that involve bias and/or controversial opinions will not be addressed until a later time, which leaves a larger problem still unresolved: whether the search engine acts as an arbiter of truth or as a knowledgeable guide by which to make decisions.[83]
In April 2017, news surfaced that Facebook, Mozilla, and Craigslist had contributed the majority of a $14M donation to CUNY's "News Integrity Initiative," aimed at eliminating fake news and creating more honest news media.[84]
Later, in August, Mozilla, maker of the Firefox web browser, announced the formation of the Mozilla Information Trust Initiative (MITI). MITI would serve as a collective effort to develop products, research, and community-based solutions to combat the effects of filter bubbles and the proliferation of fake news. Mozilla's Open Innovation team leads the initiative, striving to combat misinformation with a specific focus on literacy, research, and creative interventions.[85]
## Ethical implications
As the popularity of cloud services increases, personalized algorithms used to construct filter bubbles are expected to become more widespread.[86] Scholars have begun considering the effect of filter bubbles on the users of social media from an ethical standpoint, particularly concerning the areas of personal freedom, security, and information bias.[87] Filter bubbles in popular social media and personalized search sites can determine the particular content seen by users, often without their direct consent or cognizance,[86] due to the algorithms used to curate that content. Self-created content manifested from behavior patterns can lead to partial information blindness.[88] Critics of the use of filter bubbles speculate that individuals may lose autonomy over their own social media experience and have their identities socially constructed as a result of the pervasiveness of filter bubbles.[86]
Technologists, social media engineers, and computer specialists have also examined the prevalence of filter bubbles.[89] Mark Zuckerberg, founder of Facebook, and Eli Pariser, author of *The Filter Bubble*, have expressed concerns regarding the risks of privacy and information polarization.[90][91] The information of the users of personalized search engines and social media platforms is not private, though some people believe it should be.[90] The concern over privacy has resulted in a debate as to whether or not it is moral for information technologists to take users' online activity and manipulate future exposure to related information.[91]
Some scholars have expressed concerns regarding the effects of filter bubbles on individual and social well-being, i.e. the dissemination of health information to the general public and the potential effects of internet search engines to alter health-related behavior.[16][17][18][92] A 2019 multi-disciplinary book reported research and perspectives on the roles filter bubbles play in regards to health misinformation.[18] Drawing from various fields such as journalism, law, medicine, and health psychology, the book addresses different controversial health beliefs (e.g. alternative medicine and pseudoscience) as well as potential remedies to the negative effects of filter bubbles and echo chambers on different topics in health discourse. A 2016 study on the potential effects of filter bubbles on search engine results related to suicide found that algorithms play an important role in whether or not helplines and similar search results are displayed to users and discussed the implications their research may have for health policies.[17] Another 2016 study from the Croatian Medical journal proposed some strategies for mitigating the potentially harmful effects of filter bubbles on health information, such as: informing the public more about filter bubbles and their associated effects, users choosing to try alternative [to Google] search engines, and more explanation of the processes search engines use to determine their displayed results.[16]
Since the content seen by individual social media users is influenced by algorithms that produce filter bubbles, users of social media platforms are more susceptible to confirmation bias,[93] and may be exposed to biased, misleading information.[94] Social sorting and other unintentional discriminatory practices are also anticipated as a result of personalized filtering.[95]
In light of the 2016 U.S. presidential election, scholars have likewise expressed concerns about the effect of filter bubbles on democracy and democratic processes, as well as the rise of "ideological media".[11] These scholars fear that users will be unable to "[think] beyond [their] narrow self-interest" as filter bubbles create personalized social feeds, isolating them from diverse points of view and their surrounding communities.[96] For this reason, an increasingly discussed possibility is to design social media with more serendipity, that is, to proactively recommend content that lies outside one's filter bubble, including challenging political information and, eventually, to provide empowering filters and tools to users.[97][98][99] A related concern is in fact how filter bubbles contribute to the proliferation of "fake news" and how this may influence political leaning, including how users vote.[11][100][101]
Revelations in March 2018 of Cambridge Analytica's harvesting and use of user data from at least 87 million Facebook profiles during the 2016 presidential election highlight the ethical implications of filter bubbles.[102] Christopher Wylie, co-founder turned whistleblower of Cambridge Analytica, detailed how the firm had the ability to develop "psychographic" profiles of those users and use the information to shape their voting behavior.[103] Access to user data by third parties such as Cambridge Analytica can exacerbate and amplify the existing filter bubbles users have created, artificially increasing existing biases and further dividing societies.
## Dangers
Filter bubbles have stemmed from a surge in media personalization, which can trap users. The use of AI to personalize offerings can lead to users viewing only content that reinforces their own viewpoints without challenging them. Social media websites like Facebook may also present content in a way that makes it difficult for users to determine the source of the content, leading them to decide for themselves whether the source is reliable or fake.[104] That can lead to people becoming used to hearing what they want to hear, which can cause them to react more radically when they see an opposing viewpoint. The filter bubble may cause the person to see any opposing viewpoints as incorrect and so could allow the media to force views onto consumers.[105][104][106]
Researchers explain that the filter bubble reinforces what one is already thinking.[107] This is why it is extremely important to utilize resources that offer various points of view.[107]
## See also
- Algorithmic curation
- Algorithmic radicalization
- Allegory of the Cave
- Attention inequality
- Communal reinforcement
- Content farm
- Dead Internet theory
- Deradicalization
- Echo chamber (media)
- False consensus effect
- Group polarization
- Groupthink
- Infodemic
- Information silo
- Media consumption
- Narrowcasting
- Search engine manipulation effect
- Selective exposure theory
- Serendipitous discovery, an antithesis of filter bubble
- *The Social Dilemma*
- Stereotype
## Notes
1. The term *cyber-balkanization* (sometimes with a hyphen) is a hybrid of *cyber*, relating to the internet, and *Balkanization*, referring to that region of Europe that was historically subdivided by languages, religions and cultures; the term was coined in a paper by MIT researchers Van Alstyne and Brynjolfsson.
## References
1. Technopedia, "Definition – What does Filter Bubble mean?". Archived 2017-10-10 at the Wayback Machine. Retrieved October 10, 2017. "A filter bubble is the intellectual isolation that can occur when websites make use of algorithms to selectively assume the information a user would want to see, and then give information to the user according to this assumption ... A filter bubble, therefore, can cause users to get significantly less contact with contradicting viewpoints, causing the user to become intellectually isolated."
2. Bozdag, Engin (September 2013). "Bias in algorithmic filtering and personalization". *Ethics and Information Technology*. **15** (3): 209–227. doi:10.1007/s10676-013-9321-6. S2CID 14970635.
3. *The Huffington Post*, "Are Filter-bubbles Shrinking Our Minds?" Archived 2016-11-03 at the Wayback Machine.
4. Encrypt, Search (February 26, 2019). "What Are Filter Bubbles & How To Avoid Them". *Search Encrypt Blog*. Archived from the original on February 25, 2019. Retrieved March 19, 2019.
5. Kitchens, Brent; Johnson, Steve L.; Gray, Peter (December 1, 2020). "Understanding Echo Chambers and Filter Bubbles: The Impact of Social Media on Diversification and Partisan Shifts in News Consumption". *MIS Quarterly*. **44** (4): 1619–1649. doi:10.25300/MISQ/2020/16371. S2CID 229294134.
6. Boutin, Paul (May 20, 2011). "Your Results May Vary: Will the information superhighway turn into a cul-de-sac because of automated filters?". *The Wall Street Journal*. Archived from the original on April 5, 2015. Retrieved August 15, 2011. "By tracking individual Web browsers with cookies, Google has been able to personalize results even for users who don't create a personal Google account or are not logged into one."
7. Zhang, Yuan Cao; Séaghdha, Diarmuid Ó; Quercia, Daniele; Jambor, Tamas (2012). "Auralist: Introducing serendipity into music recommendation". *Proceedings of the fifth ACM international conference on Web search and data mining*. pp. 13–22. doi:10.1145/2124295.2124300. ISBN 9781450307475. S2CID 2956587.
8. Parramore, Lynn (October 10, 2010). "The Filter Bubble". *The Atlantic*. Archived from the original on August 22, 2017. Retrieved April 20, 2011. "Since December 4, 2009, Google has been personalized for everyone. So when I had two friends this spring Google 'BP,' one of them got a set of links that was about investment opportunities in BP. The other one got information about the oil spill."
9. Weisberg, Jacob (June 10, 2011). "Bubble Trouble: Is Web personalization turning us into solipsistic twits?". *Slate*. Archived from the original on June 12, 2011. Retrieved August 15, 2011.
10. Gross, Doug (May 19, 2011). "What the Internet is hiding from you". *CNN*. Archived from the original on April 9, 2016. Retrieved August 15, 2011. "I had friends Google BP when the oil spill was happening. These are two women who were quite similar in a lot of ways. One got a lot of results about the environmental consequences of what was happening and the spill. The other one just got investment information and nothing about the spill at all."
11. Baer, Drake. "The 'Filter Bubble' Explains Why Trump Won and You Didn't See It Coming". *Science of Us*. Archived from the original on April 19, 2017. Retrieved April 19, 2017.
12. DiFranzo, Dominic; Gloria-Garcia, Kristine (April 5, 2017). "Filter bubbles and fake news". *XRDS*. **23** (3): 32–35. doi:10.1145/3055153. S2CID 7069187.
13. Jackson, Jasper (January 8, 2017). "Eli Pariser: activist whose filter bubble warnings presaged Trump and Brexit: Upworthy chief warned about dangers of the internet's echo chambers five years before 2016's votes". *The Guardian*. Archived from the original on March 7, 2017. Retrieved March 3, 2017. "'If you only see posts from folks who are like you, you're going to be surprised when someone very unlike you wins the presidency,' Pariser tells The Guardian."
14. El-Bermawy, Mostafa M. (November 18, 2016). "Your Filter Bubble is Destroying Democracy". *Wired*. Archived from the original on March 9, 2017. Retrieved March 3, 2017. "The global village that was once the internet ... digital islands of isolation that are drifting further apart each day ... your experience online grows increasingly personalized."
15. Baer, Drake (November 9, 2016). "The 'Filter Bubble' Explains Why Trump Won and You Didn't See It Coming". *New York Magazine*. Archived from the original on February 26, 2017. Retrieved March 3, 2017. "Trump's victory is blindsiding ... because, as media scholars understand it, we increasingly live in a 'filter bubble': The information we take in is so personalized that we're blind to other perspectives."
16. Holone, Harald (June 2016). "The filter bubble and its effect on online personal health information". *Croatian Medical Journal*. **57** (3): 298–301. doi:10.3325/cmj.2016.57.298. PMC 4937233. PMID 27374832.
17. Haim, Mario; Arendt, Florian; Scherr, Sebastian (February 2017). "Abyss or Shelter? On the Relevance of Web Search Engines' Search Results When People Google for Suicide". *Health Communication*. **32** (2): 253–258. doi:10.1080/10410236.2015.1113484. PMID 27196394. S2CID 3399012.
18. "Medical Misinformation and Social Harm in Non-Science Based Health Practices: A Multidisciplinary Perspective". *CRC Press*. Archived from the original on August 4, 2020. Retrieved April 22, 2020.
19. Lazar, Shira (June 1, 2011). "Algorithms and the Filter Bubble Ruining Your Online Experience?". *Huffington Post*. Archived from the original on April 13, 2016. Retrieved August 15, 2011. "a filter bubble is the figurative sphere surrounding you as you search the Internet."
20. Pariser, Eli (May 12, 2011). *The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think*. Penguin. ISBN 9781101515129. Archived from the original on January 19, 2021. Retrieved October 11, 2020.
21. "How Filter Bubbles Distort Reality: Everything You Need to Know". July 31, 2017. Archived from the original on July 3, 2019. Retrieved June 23, 2019.
22. Nikolov, Dimitar; Oliveira, Diego F.M.; Flammini, Alessandro; Menczer, Filippo (December 2, 2015). "Measuring online social bubbles". *PeerJ Computer Science*. **1**: e38. arXiv:1502.07162. Bibcode:2015arXiv150207162N. doi:10.7717/peerj-cs.38.
23. Pariser, Eli (March 2011). "Beware online 'filter bubbles'". Archived from the original on May 28, 2018. Retrieved May 30, 2018.
24. Pariser, Eli (March 2011). "Beware online 'filter bubbles'". *TED.com*. Archived from the original on September 22, 2017. Retrieved September 24, 2017.
25. "First Monday: What's on tap this month on TV and in movies and books: The Filter Bubble by Eli Pariser". *USA Today*. 2011. Archived from the original on May 3, 2011. Retrieved April 20, 2011. "Pariser explains that feeding us only what is familiar and comfortable to us closes us off to new ideas, subjects and important information."
26. Bosker, Bianca (March 7, 2011). "Facebook, Google Giving Us Information Junk Food, Eli Pariser Warns". *Huffington Post*. Archived from the original on March 13, 2011. Retrieved April 20, 2011. "When it comes to content, Google and Facebook are offering us too much candy, and not enough carrots."
27. "Invisible sieve: Hidden, specially for you". *The Economist*. June 30, 2011. Archived from the original on July 3, 2011. Retrieved June 27, 2011. "Mr Pariser's book provides a survey of the internet's evolution towards personalisation, examines how presenting information alters the way in which it is perceived and concludes with prescriptions for bursting the filter bubble that surrounds each user."
28. Hern (May 22, 2017). "How social media filter bubbles and algorithms influence the election". *The Guardian*. Archived from the original on May 31, 2018. Retrieved May 30, 2018.
29. Van Alstyne, Marshall; Brynjolfsson, Erik (March 1997) [Copyright 1996]. "Electronic Communities: Global Village or Cyberbalkans?" (PDF). Archived (PDF) from the original on April 5, 2016. Retrieved September 24, 2017.
30. Van Alstyne, Marshall; Brynjolfsson, Erik (November 1996). "Could the Internet Balkanize Science?". *Science*. **274** (5292): 1479–1480. Bibcode:1996Sci...274.1479V. doi:10.1126/science.274.5292.1479. S2CID 62546078.
31. Pham, Alex; Healey, Jon (September 24, 2005). "Systems hope to tell you what you'd like: 'Preference engines' guide users through the flood of content". *Chicago Tribune*. Archived from the original on December 8, 2015. Retrieved December 4, 2015. "...if recommenders were perfect, I can have the option of talking to only people who are just like me....Cyber-balkanization, as Brynjolfsson coined the scenario, is not an inevitable effect of recommendation tools."
32. Menkedick, Sarah (May 14, 2020). "Why are American kids treated as a different species from adults?". *Aeon*. Archived from the original on May 15, 2020. Retrieved May 15, 2020.
33. Obama, Barack (January 10, 2017). *President Obama's Farewell Address* (Speech). Washington, D.C. Archived from the original on January 24, 2017. Retrieved January 24, 2017.
34. Hosanagar, Kartik (November 25, 2016). "Blame the Echo Chamber on Facebook. But Blame Yourself, Too". *Wired*. Archived from the original on September 25, 2017. Retrieved September 24, 2017.
35. DiFonzo, Nicholas (April 21, 2011). "The Echo-Chamber Effect". *The New York Times*. Archived from the original on June 13, 2017. Retrieved September 24, 2017.
36. sdf (June 23, 2004). "John Gorenfeld, Moon the Messiah, and the Media Echo Chamber". *Daily Kos*. Archived from the original on May 2, 2016. Retrieved September 24, 2017.
37. Jamieson, Kathleen Hall; Cappella, Joseph N. (July 22, 2008). *Echo Chamber: Rush Limbaugh and the Conservative Media Establishment*. Oxford University Press. ISBN 978-0-19-536682-2. Retrieved September 24, 2017.
38. "What are Filter Bubbles and Digital Echo Chambers?". *Heinrich-Böll-Stiftung*, Tel Aviv, Israel. Retrieved March 8, 2023.
39. Cinelli, Matteo; De Francisci Morales, Gianmarco; Galeazzi, Alessandro; Quattrociocchi, Walter; Starnini, Michele (March 2, 2021). "The echo chamber effect on social media". *Proceedings of the National Academy of Sciences*. **118** (9): e2023301118. Bibcode:2021PNAS..11823301C. doi:10.1073/pnas.2023301118. ISSN 0027-8424. PMC 7936330. PMID 33622786.
40. Colleoni, Elanor; Rozza, Alessandro; Arvidsson, Adam (April 2014). "Echo Chamber or Public Sphere? Predicting Political Orientation and Measuring Political Homophily in Twitter Using Big Data". *Journal of Communication*. **64** (2): 317–332. doi:10.1111/jcom.12084. hdl:10281/66011.
41. Ekström, Axel G.; Niehorster, Diederick C.; Olsson, Erik J. (August 1, 2022). "Self-imposed filter bubbles: Selective attention and exposure in online search". *Computers in Human Behavior Reports*. **7**: 100226. doi:10.1016/j.chbr.2022.100226. ISSN 2451-9588. S2CID 251434172.
42. Reviglio, Urbano; Agosti, Claudio (April 2020). "Thinking Outside the Black-Box: The Case for 'Algorithmic Sovereignty' in Social Media". *Social Media + Society*. **6** (2): 205630512091561. doi:10.1177/2056305120915613. hdl:2434/840214. ISSN 2056-3051. S2CID 219019544.
43. "Twitter's plan to cut off free data access evokes 'fair amount of panic' among scientists". *www.science.org*. Retrieved March 8, 2023.
44. Grankvist, Per (February 8, 2018). *The Big Bubble: How Technology Makes It Harder to Understand the World*. United Stories Publishing. p. 179. ISBN 978-91-639-5990-5.
45. Hosanagar, Kartik; Fleder, Daniel; Lee, Dokyun; Buja, Andreas (December 2013). "Will the Global Village Fracture into Tribes: Recommender Systems and their Effects on Consumers". *Management Science, Forthcoming*. SSRN 1321962.
46. Ludwig, Amber. "Google Personalization on Your Search Results Plus How to Turn it Off". NGNG. Archived from the original on August 17, 2011. Retrieved August 15, 2011. "Google customizing search results is an automatic feature, but you can shut this feature off."
47. Bruns, Axel (November 29, 2019). "Filter bubble". *Internet Policy Review*. **8** (4). doi:10.14763/2019.4.1426. hdl:10419/214088.
48. Davies, Huw C (September 2018). "Redefining Filter Bubbles as (Escapable) Socio-Technical Recursion". *Sociological Research Online*. **23** (3): 637–654. doi:10.1177/1360780418763824. S2CID 149367030. Archived from the original on January 19, 2021. Retrieved August 29, 2020.
49. Dahlgren, Peter M. (January 29, 2021). "A critical review of filter bubbles and a comparison with selective exposure". *Nordicom Review*. **42** (1): 15–33. doi:10.2478/nor-2021-0002.
50. Flaxman, Seth; Goel, Sharad; Rao, Justin M. (2016). "Filter Bubbles, Echo Chambers, and Online News Consumption". *Public Opinion Quarterly*. **80** (S1): 298–320. doi:10.1093/poq/nfw006. S2CID 2386849.
51. Chitra, Uthsav; Musco, Christopher (2020). "Analyzing the Impact of Filter Bubbles on Social Network Polarization". *WSDM '20: Proceedings of the 13th International Conference on Web Search and Data Mining*. pp. 115–123. doi:10.1145/3336191.3371825.
52. "5 Questions with Eli Pariser, Author of 'The Filter Bubble'". *Time*. May 16, 2011. Archived from the original on April 14, 2017. Retrieved May 24, 2017.
53. Bleiberg, Joshua; West, Darrell M. (May 24, 2017). "Political polarization on Facebook". *Brookings Institution*. Archived from the original on October 10, 2017. Retrieved May 24, 2017.
54. Bakshy, E.; Messing, S.; Adamic, L. A. (June 5, 2015). "Exposure to ideologically diverse news and opinion on Facebook". *Science*. **348** (6239): 1130–1132. Bibcode:2015Sci...348.1130B. doi:10.1126/science.aaa1160. PMID 25953820. S2CID 206632821.
55. Lumb, David (May 8, 2015). "Why Scientists Are Upset About The Facebook Filter Bubble Study". Archived from the original on November 11, 2017. Retrieved November 10, 2017.
56. Oremus, Will (April 5, 2017). "The Filter Bubble Revisited". *Slate Magazine*. Archived from the original on February 6, 2020. Retrieved March 2, 2020.
57. Sindermann, Cornelia; Elhai, Jon D.; Moshagen, Morten; Montag, Christian (January 2020). "Age, gender, personality, ideological attitudes and individual differences in a person's news spectrum: how many and who might be prone to 'filter bubbles' and 'echo chambers' online?". *Heliyon*. **6** (1): e03214. Bibcode:2020Heliy...603214S. doi:10.1016/j.heliyon.2020.e03214. PMC 7002846. PMID 32051860.
58. Pariser, Eli (May 7, 2015). "Fun facts from the new Facebook filter bubble study". *Medium*. Archived from the original on November 11, 2017. Retrieved October 24, 2017.
59. Lumb, David (May 8, 2015). "Why Scientists Are Upset About The Facebook Filter Bubble Study". *Fast Company*. Archived from the original on October 23, 2017. Retrieved October 24, 2017.
60. Pariser, Eli (May 7, 2015). "Did Facebook's Big Study Kill My Filter Bubble Thesis?". *Wired*. Archived from the original on November 11, 2017. Retrieved October 24, 2017.
61. "Contrary to what you've heard, Facebook can help puncture our political 'bubbles'". *Vox*. Archived from the original on June 13, 2018. Retrieved May 30, 2018.
62. Bakshy, E.; Messing, S.; Adamic, L. A. (2015). "Exposure to ideologically diverse news and opinion on Facebook". *Science*. **348** (6239): 1130–1132. Bibcode:2015Sci...348.1130B. doi:10.1126/science.aaa1160. PMID 25953820. S2CID 206632821.
63. Barberá, Pabló (August 2015). "How Social Media Reduces Mass Political Polarization. Evidence from Germany, Spain, and the U.S." CiteSeerX 10.1.1.658.5476.
64. Hilbert, M.; Ahmed, S.; Cho, J.; Liu, B.; Luu, J. (2018). "Communicating with Algorithms: A Transfer Entropy Analysis of Emotions-based Escapes from Online Echo Chambers". *Communication Methods and Measures*. **12** (4): 260–275. doi:10.1080/19312458.2018.1479843. Archived 2021-01-19 at the Wayback Machine; https://www.martinhilbert.net/communicating-with-algorithms/ Archived 2019-05-09 at the Wayback Machine.
65. Bail, Christopher; Argyle, Lisa; Brown, Taylor; Chen, Haohan; Hunzaker, M.B.F.; Lee, Jaemin (2018). "Exposure to opposing views on social media can increase political polarization" (PDF). *Proceedings of the National Academy of Sciences*. **115** (37): 9216–9221. Bibcode:2018PNAS..115.9216B. doi:10.1073/pnas.1804840115. PMC 6140520. PMID 30154168. Archived (PDF) from the original on April 10, 2020. Retrieved April 22, 2020.
66. Min, Yong; Jiang, Tingjun; Jin, Cheng; Li, Qu; Jin, Xiaogang (2019). "Endogenetic structure of filter bubble in social networks". *Royal Society Open Science*. **6** (11): 190868. arXiv:1907.02703. Bibcode:2019RSOS....690868M. doi:10.1098/rsos.190868. PMC 6894573. PMID 31827834.
67. Statt, Nick (December 4, 2018). "Google personalizes search results even when you're logged out, new study claims". *The Verge*. Archived from the original on July 31, 2020. Retrieved April 22, 2020.
68. Bucher, Taina (February 25, 2016). "The algorithmic imaginary: exploring the ordinary effects of Facebook algorithms". *Information, Communication & Society*. **20** – via Taylor & Francis Online.
69. "How do we break filter bubble and design for democracy?". March 3, 2017. Archived from the original on March 3, 2017. Retrieved March 3, 2017.
70. "'Filterblase' ist das Wort des Jahres 2016". December 7, 2016. Archived from the original on December 20, 2016. Retrieved December 27, 2016.
71. Pariser, Eli (May 2011). *The Filter Bubble: What the Internet Is Hiding from You*. New York: Penguin Press. p. 17. ISBN 978-1-59420-300-8.
72. Baron, Stephen; Field, John; Schuller, Tom (November 30, 2000). "Social capital: A review and critique". *Social Capital: Critical Perspectives*. Oxford University Press. ISBN 9780199243679.
73. "Are we stuck in filter bubbles? Here are five potential paths out". *Nieman Lab*. Archived from the original on March 4, 2017. Retrieved March 3, 2017.
74. Glushko, Chris (February 8, 2017). "Pop the Personalization Filter Bubbles and Preserve Online Diversity". *Marketing Land*. Archived from the original on March 15, 2017. Retrieved May 22, 2017.
75. Ritholtz, Barry (February 2, 2017). "Try Breaking Your Media Filter Bubble". *Bloomberg*. Archived from the original on August 21, 2017. Retrieved May 22, 2017.
76. "A news app aims to burst filter bubbles by nudging readers toward a more 'balanced' media diet". *Nieman Lab*. Archived from the original on May 15, 2017. Retrieved May 24, 2017.
77. Ciobanu, Mădălina (March 3, 2017). "NZZ is developing an app that gives readers personalised news without creating a filter bubble: The app uses machine learning to give readers a stream of 25 stories they might be interested in based on their preferences, but 'always including an element of surprise'". Journalism.co.uk. Archived from the original on March 3, 2017. Retrieved March 3, 2017. "...if, based on their consumption history, someone has not expressed an interest in sports, their stream will include news about big, important stories related to sports..."
78. Albeanu, Catalina (November 17, 2016). "Bursting the filter bubble after the US election: Is the media doomed to fail? At an event in Brussels this week, media and politicians discussed echo chambers on social media and the fight against fake news". Journalism.co.uk. Archived from the original on March 10, 2017. Retrieved March 3, 2017. "...EU referendum in the UK on a panel at the 'Politicians in a communication storm' event... On top of the filter bubble, partisan Facebook pages also served up a diet heavy in fake news..."
**^**"European Commission". Archived from the original on March 4, 2017. Retrieved March 3, 2017.**^**Resnick, Paul; Garrett, R. Kelly; Kriplean, Travis; Munson, Sean A.; Stroud, Natalie Jomini (2013). "Bursting your (Filter) bubble".*Proceedings of the 2013 conference on Computer supported cooperative work companion - CSCW '13*. p. 95. doi:10.1145/2441955.2441981. ISBN 978-1-4503-1332-2. S2CID 20865375.- ^
**a**Vanian, Jonathan (April 25, 2017). "Facebook Tests Related Articles Feature to Fight Filter Bubbles".**b***Fortune.com*. Archived from the original on September 25, 2017. Retrieved September 24, 2017. **^**Sydell, Laura (January 25, 2017). "Facebook Tweaks its 'Trending Topics' Algorithm to Better Reflect Real News". KQED Public Media. NPR. Archived from the original on February 26, 2018. Retrieved April 5, 2018.**^**Hao, Karen. "Google is finally admitting it has a filter-bubble problem".*Quartz*. Archived from the original on May 4, 2018. Retrieved May 30, 2018.**^**"Facebook, Mozilla and Craigslist Craig fund fake news firefighter". Archived from the original on November 23, 2018. Retrieved January 14, 2019.**^**"The Mozilla Information Trust Initiative: Building a movement to fight misinformation online".*The Mozilla Blog*. Archived from the original on January 14, 2019. Retrieved January 14, 2019.- ^
**a****b**Bozdag, Engin; Timmerman, Job. "Values in the filter bubble Ethics of Personalization Algorithms in Cloud Computing".**c***ResearchGate*. Archived from the original on December 14, 2020. Retrieved March 6, 2017. **^**Al-Rodhan, Nayef. "The Many Ethical Implications of Emerging Technologies".*Scientific American*. Archived from the original on April 8, 2017. Retrieved March 6, 2017.**^**Haim, Mario; Graefe, Andreas; Brosius, Hans-Bernd (March 16, 2018). "Burst of the Filter Bubble?".*Digital Journalism*.**6**(3): 330–343. doi:10.1080/21670811.2017.1338145. S2CID 168906316.**^**"The Filter Bubble raises important issues – You just need to filter them out for yourself".*Rainforest Action Network*. Archived from the original on April 8, 2017. Retrieved March 6, 2017.- ^
**a**Sterling, Greg (February 20, 2017). "Mark Zuckerberg's manifesto: How Facebook will connect the world, beat fake news and pop the filter bubble".**b***Marketing Land*. Archived from the original on March 8, 2017. Retrieved March 6, 2017. - ^
**a**Morozov, Evgeny (June 10, 2011). "Your Own Facts".**b***The New York Times*. Archived from the original on March 4, 2017. Retrieved March 6, 2017. **^**Hesse, Bradford W.; Nelson, David E.; Kreps, Gary L.; Croyle, Robert T.; Arora, Neeraj K.; Rimer, Barbara K.; Viswanath, Kasisomayajula (December 12, 2005). "Trust and Sources of Health Information: The Impact of the Internet and Its Implications for Health Care Providers: Findings From the First Health Information National Trends Survey".*Archives of Internal Medicine*.**165**(22): 2618–24. doi:10.1001/archinte.165.22.2618. PMID 16344419.**^**El-Bermawy, Mostafa (November 18, 2016). "Your filter bubble is destroying democracy".*Wired*. Archived from the original on March 9, 2017. Retrieved March 6, 2017.**^**"How to Burst the "Filter Bubble" that Protects Us from Opposing Views".*MIT Technology Review*. Archived from the original on January 19, 2021. Retrieved March 6, 2017.**^**Borgesius, Frederik; Trilling, Damian; Möller, Judith; Bodó, Balázs; de Vreese, Claes; Helberger, Natali (March 31, 2016). "Should we worry about filter bubbles?".*Internet Policy Review*. Archived from the original on March 20, 2017. Retrieved March 6, 2017.**^**Pariser, Eli (2011).*The Filter Bubble: How the New Personalized Web is Changing What We Read and How We Think*. New York: Penguin Press. ISBN 978-1-59420-300-8.**^**"In praise of serendipity".*The Economist*. March 9, 2017. Archived from the original on January 15, 2019. Retrieved January 14, 2019.**^**Reviglio, Urbano (June 2019). "Serendipity as an emerging design principle of the infosphere: challenges and opportunities".*Ethics and Information Technology*.**21**(2): 151–166. doi:10.1007/s10676-018-9496-y. S2CID 57426650.**^**Harambam, Jaron; Helberger, Natali; van Hoboken, Joris (November 28, 2018). "Democratizing algorithmic news recommenders: how to materialize voice in a technologically saturated media ecosystem".*Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences*.**376**(2133): 20180088. Bibcode:2018RSPTA.37680088H. doi:10.1098/rsta.2018.0088. PMC 6191663. PMID 30323002.**^**Herrman, John (August 24, 2016). "Inside Facebook's (Totally Insane, Unintentionally Gigantic, Hyperpartisan) Political-Media Machine".*The New York Times*. Archived from the original on October 19, 2017. Retrieved October 24, 2017.**^**Del Vicario, Michela; Bessi, Alessandro; Zollo, Fabiana; Petroni, Fabio; Scala, Antonio; Caldarelli, Guido; Stanley, H. Eugene; Quattrociocchi, Walter (January 19, 2016). "The spreading of misinformation online".*Proceedings of the National Academy of Sciences*.**113**(3): 554–559. Bibcode:2016PNAS..113..554D. doi:10.1073/pnas.1517441113. PMC 4725489. PMID 26729863.**^**Granville, Kevin (March 19, 2018). "Facebook and Cambridge Analytica: What You Need to Know as Fallout Widens".*The New York Times*. Archived from the original on October 19, 2018. Retrieved October 19, 2018.**^**Meredith, Sam (April 10, 2018). "Facebook-Cambridge Analytica: A timeline of the data hijacking scandal".*CNBC*. Archived from the original on October 19, 2018. Retrieved October 19, 2018.- ^
**a**Gross, Michael (January 2017). "The dangers of a post-truth world".**b***Current Biology*.**27**(1): R1–R4. Bibcode:2017CBio...27...R1G. doi:10.1016/j.cub.2016.12.034. **^**"How Filter Bubbles Distort Reality: Everything You Need to Know".*Farnam Street*. July 31, 2017. Archived from the original on May 20, 2019. Retrieved May 21, 2019.**^**Dish, The Daily (October 10, 2010). "The Filter Bubble".*The Atlantic*. Archived from the original on August 22, 2017. Retrieved May 21, 2019.- ^
**a**"Filter Bubbles & Confirmation Bias - Fake News (And how to fight it) - LibGuides at Miami Dade College Learning Resources". Archived from the original on October 23, 2020. Retrieved October 22, 2020.**b**
## Further reading
- Pariser, Eli. *The Filter Bubble: What the Internet Is Hiding from You*. Penguin Press (New York, 2011). ISBN 978-1-59420-300-8
- Green, Holly (August 29, 2011). "Breaking Out of Your Internet Filter Bubble". *Forbes*. Retrieved December 4, 2011.
- Friedman, Ann (2014). "Going Viral". *Columbia Journalism Review*. **52** (6): 33–34.
- Bozdag, Engin; van den Hoven, Jeroen (December 2015). "Breaking the filter bubble: democracy and design". *Ethics and Information Technology*. **17** (4): 249–265. doi:10.1007/s10676-015-9380-y.
- boyd, danah m.; Ellison, Nicole B. (October 2007). "Social Network Sites: Definition, History, and Scholarship". *Journal of Computer-Mediated Communication*. **13** (1): 210–230. doi:10.1111/j.1083-6101.2007.00393.x. S2CID 52810295.
- Nguyen, Tien T.; Hui, Pik-Mai; Harper, F. Maxwell; Terveen, Loren; Konstan, Joseph A. (2014). "Exploring the filter bubble: The effect of using recommender systems on content diversity". *Proceedings of the 23rd International Conference on World Wide Web*. pp. 677–686. doi:10.1145/2566486.2568012. ISBN 9781450327442. S2CID 16747810.
- Resnick, Paul; Garrett, R. Kelly; Kriplean, Travis; Munson, Sean A.; Stroud, Natalie Jomini (2013). "Bursting your (Filter) bubble: Strategies for promoting diverse exposure". *Proceedings of the 2013 Conference on Computer Supported Cooperative Work Companion*. pp. 95–100. doi:10.1145/2441955.2441981. ISBN 9781450313322. S2CID 20865375.
- Liao, Q. Vera; Fu, Wai-Tat (2013). "Beyond the filter bubble: Interactive effects of perceived threat and topic involvement on selective exposure to information". *Proceedings of the SIGCHI Conference on Human Factors in Computing Systems*. pp. 2359–2368. doi:10.1145/2470654.2481326. ISBN 9781450318990. S2CID 8504434.
- Holone, Harald (2016). "The filter bubble and its effect on online personal health information". *Croatian Medical Journal*. **57** (3): 298–301. doi:10.3325/cmj.2016.57.298. PMC 4937233. PMID 27374832.
## External links
- Beware Online Filter Bubbles. TED Talks, March 2011
http://singularityhub.com/2014/07/06/virtual-reality-needs-an-immersive-3d-soundscape/
|
What's Missing from Virtual Reality? Immersive 3D Soundscapes
|
Jason Dorrier
|
When you imagine virtual reality, chances are you’ve got visuals in mind. Or maybe you fantasize about a virtual sense of touch. Another key component? Immersive audio.
Tom Simonite of *MIT Technology Review *recently got a demo of some pretty impressive 3D sound in Microsoft’s Silicon Valley lab. After using scanning technology Kinect to make a digital 3D model of his head and shoulders, the team gave Simonite a pair of special sensor-laden headphones plugged into some specialized software.
The result was more than simple surround sound.
Using the 3D model of his head, motion sensors in the headphones, and a camera to track Simonite’s movement, the system precisely adjusted sounds in the headphones to make it seem like they were coming from specific (previously silent) points in space—a voice on a cardboard radio, music from a speaker, and chirps from a stuffed bird.
You may have experienced a simple version of this effect when listening to music that pans left and right on headphones. Panning makes it sound like music is emanating from various points in between your ears.
But instead of me describing it to you, check out this early (but impressive) demo of positional sound for Oculus. (Best with headphones.)
Currently, positional 3D audio in video games is informed by average physiological models of the head. But the shape and position of our ears and the anatomy of our head changes how sound reaches our ear canals. Microsoft researchers, Ivan Tashev and David Johnston, say customization makes positional audio far more accurate.
The pair used 250 physiological profiles of people’s heads and ears to write their software. Added to a simple 3D scan, the software can accurately tailor 3D sound to trick the ears into thinking it originates in very specific locations out in space.
“Essentially we can predict how you will hear from the way you look,” Tashev told *MIT Technology Review*. “We work out the physical process of sound going around your head and reaching your ears.”
But why is Microsoft working on 3D sound at all? Presumably for the same reason they developed Kinect—for their gaming business. If you’ve got a gaming platform, chances are you’re anticipating the imminent arrival of virtual reality.
Virtual reality has long been a dream of gamers and technologists. What’s changed?
Last year, Oculus made waves when they showed off their head-mounted virtual reality Rift device. Rift is immersive virtual reality, but more than that, it’s *affordable* immersive virtual reality. The firm, since acquired by Facebook, is aiming for a $300 consumer ready virtual reality gaming system later this year or early next.
Mobile computing has driven a decade of miniaturization and declining prices in sensors, processors, and screens. Oculus uses cheap but super-precise motion sensors to track head position, powerful processors to manipulate visuals to give the illusion of immersive 3D, and high definition screens to deliver quality imagery.
Microsoft’s 3D audio currently uses motion sensors embedded in the headphones and a camera to detect a user’s head position—but according to the *MIT Technology Review *article, sensors like those in the Rift would also work. The system would combine position data with the physiological profile of a user taken in the beginning using Kinect.
Positional audio is not a new field—for example, check out this 2007 YouTube video of QSound’s Virtual Barbershop—but it is rapidly improving thanks to more widely available and affordable sensors.
Microsoft hopes to create the necessary physiological profile for 3D sound filters simple enough users can make one at home using naught but Kinect. Though at-home solutions might not match the highest quality lab-created filters, they would be good enough to greatly increase the sense of immersion in virtual worlds.
And, of course, Microsoft isn’t alone. The firm’s gaming rival and maker of the Playstation video game platform, Sony, earlier this year unveiled their virtual reality venture, Project Morpheus, and likewise hinted they are working on positional 3D audio.
Richard Marks, senior director of research and development at Sony Computer Entertainment America, told *Polygon* that along with visuals, his firm views sound as an equally important component to the virtual experience. According to Marks, Sony is working to create positional audio that adapts to a players’ head orientation “creating a highly realistic audio environment within an immersive 360-degree virtual world.”
No doubt Sony and Microsoft will be joined by independent developers. Even as Oculus has shown they’ve increasingly got virtual reality visuals nailed, there’s plenty of room for work on other tools and peripherals—specialized treadmills, body tracking systems, 3D audio, more intuitive interfaces—to better transport our senses into the virtual.
*Image Credit: Janus Sandsgaard/Flickr; Christopher Michel/Flickr*
| true | true | true |
When you imagine virtual reality, chances are you’ve got visuals in mind. Or maybe you fantasize about a virtual sense of touch. Another key component? Immersive audio. Tom Simonite of MIT Technology Review recently got a demo of some pretty impressive 3D sound in Microsoft’s Silicon Valley lab. After using scanning technology Kinect to make a […]
|
2024-10-12 00:00:00
|
2014-07-06 00:00:00
|
article
|
singularityhub.com
|
Singularity Hub
| null | null |
|
41,176,976 |
https://kotaku.com/elden-ring-shadow-erdtree-speedruns-skips-tournament-1851610294
|
Incredible New Elden Ring Glitch Is Letting Players Fly Through Shadow Of The Erdtree
|
Moises Taveras
|
The *Elden Ring** *speedrunning community is hungry for new exploits to crack the open-world FromSoftware game wide open and it might’ve just discovered one. As part of an ongoing $10,000 tournament, someone has found a way to displace the player character just enough to allow them to walk on air and skip huge chunks of the recently released expansion, *Shadow of the Erdtree*. Unbelievably, the tournament isn’t even over yet, but the community’s already found “the holy grail of DLC skips.”
As first reported by *GamesRadar+*, the glitch (which has been dubbed Legasus), involves interrupting the animation of summoning the player’s spectral steed, Torrent. Discovered by the player Joo, the glitch can be initiated in a variety of ways, including the use of the DLC’s perfume weapons or hitting a Site of Grace where players can rest.
**Order Elden Ring: Shadow of the Erdtree:** Amazon | Best Buy | Humble Bundle
If the player manages to interrupt the mounting animation at the exact right frame, they can achieve a “limited no-gravity” effect that simply allows the player to walk on air. The initial animation isn’t so much canceled as it is “paused” while the player commits to another action. That last bit is especially important because in order to maintain the Legasus effect, you can never idle.
There are actually a million caveats to making Legasus work, which you can read up on here, but the gist of it is that movement isn’t exactly free once you manage to activate the glitch. You can only move forward or step backward and you must always be doing something like holding guard with a shield out. If you idle, you’ll be restored back to the point at which you activated the glitch, and if you accidentally break the effect somewhere else, you naturally risk plummeting to your death. Additionally, in order to turn, you have to use a bow, aim in the direction you’d like to go, and then dodge roll that way.
Once you get it working though, you can get pretty far, and players have been using it to get into areas like *Shadow of the Erdtree*’s Abyssal Woods (which apparently sucks ass) while skipping segments of it that might be a pain in the ass. That’s because you can also “unload” Torrent, which commits to the rest of the summoning animation and breaks the glitch exactly where you are, allowing players to tactically position themselves to die near checkpoints and respawn there.
The tournament that prompted this find is being held by Distortion2, one of the most prominent speedrunners of FromSoftware’s catalog, and a guy who’s beaten *Elden Ring *with nothing more than his character’s ass. The competition, which is going till August 11, challenges other runners to finish *Shadow of the Erdtree *under some barbaric conditions of Distortion’s choosing. The DLC must be completed at rune level 1 using weapons that can only be found in the expansion, and players can only raise their Scadutree Blessing—a separate leveling system implemented in *Shadow of the Erdtree*—one level per every defeated boss fight.
Yeah, it’s fucking brutal, but it also seems like exactly the kind of masochistic gauntlet that Souls runners love to take on. Given these early results, it seems like the tournament has already accomplished exactly what Distortion wanted, and kickstarted a new race to find cool tech for the next great *Elden Ring *speedrun.
| true | true | true |
A speedrunning tournament whose prize pool is $10,000 is already yielding great results
|
2024-10-12 00:00:00
|
2024-07-31 00:00:00
|
article
|
kotaku.com
|
Kotaku
| null | null |
|
8,610,581 |
https://dist-systems.bbn.com/projects/CRASH/news.shtml
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
5,386,204 |
http://luvit.io/
|
Familiar API
| null |
Asynchronous I/O for Lua
Install## Familiar API
Luvit implements the same APIs as Node.js, but in Lua!
This helps teams migrate without having to learn a new way of programming.
## Async Choice
Choose your async model; we don’t mind; we encourage experimentation.
If you don’t like callbacks and event emitters, use coroutines and write blocking style code without actually blocking your event loop!
## Modular Core
The various projects in the luvit ecosystem can be mixed and matched to build the ideal runtime for your application.
- Use luv directly in luajit
- Use lit without node apis
- The possibilities are endless
## Using the Node-Style APIs
The `luvit`
CLI tool can be used as a scripting platform just like `node`
. This
can be used to run lua scripts as standalone servers, clients, or other tools.
This simple web server written in Luvit responds with Hello World
for every
request.
```
local http = require('http')
http.createServer(function (req, res)
local body = "Hello world\n"
res:setHeader("Content-Type", "text/plain")
res:setHeader("Content-Length", #body)
res:finish(body)
end):listen(1337, '127.0.0.1')
print('Server running at http://127.0.0.1:1337/')
```
And run this script using `luvit`
.
```
> luvit server.lua
Server running at http://127.0.0.1:1337/
```
This script is a standalone HTTP server, there is no need for Apache or Nginx to act as host.
## Using Third-Party Libraries
Luvit also has a package system that makes it easy to publish and consume libraries.
For example, @creationix has made a set of libraries that use coroutines instead of callbacks for async I/O and published these to lit.
Using `lit`
install `creationix/weblit`
to use an express-like framework built
on top of coroutines.
```
> mkdir myapp && cd myapp
> lit install creationix/weblit
> vim server.lua
> luvit server.lua
```
The `server.lua`
file will contain:
```
local weblit = require('weblit')
weblit.app
.bind({host = "127.0.0.1", port = 1337})
-- Configure weblit server
.use(weblit.logger)
.use(weblit.autoHeaders)
-- A custom route that sends back method and part of url.
.route({ path = "/:name"}, function (req, res)
res.body = req.method .. " - " .. req.params.name .. "\n"
res.code = 200
res.headers["Content-Type"] = "text/plain"
end)
-- Start the server
.start()
```
This very site is being served by `weblit`
and its source can be found at
https://github.com/luvit/luvit.io
## Permissive License
Luvit is licensed under the Apache 2.0 License to The Luvit Authors
. This
was done to make the project as accessible as possible to users and
contributors.
## Dive In
Join us on freenode IRC at #luvit, the Luvit Mailing list, or the Discord Server
We’ll be publishing tutorials here at the luvit blog soon, stay tuned.
| true | true | true | null |
2024-10-12 00:00:00
|
2018-01-01 00:00:00
| null | null | null | null | null | null |
210,846 |
http://firewatching.com/ambient/2008/06/06/products-that-wouldnt-exist-if-their-creators-listened-to-hasnt-someone-done-that-already/
| null | null | null | true | true | false | null |
2024-10-12 00:00:00
| null | null | null | null | null | null | null |
19,207,126 |
https://www.washingtonpost.com/technology/2019/02/19/password-managers-have-security-flaw-you-should-still-use-one/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
19,807,151 |
http://www.davecooper.org/injecting-mock-data-into-applications-in-2019
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
23,465,070 |
https://www.ibm.com/blogs/policy/facial-recognition-susset-racial-justice-reforms/
|
Home - IBM Policy
|
– Christopher A Padilla; Vice President; IBM Government; Regulatory Affairs
|
**Categories:**Artificial Intelligence, Policy Positions, Statements & Reactions, Workforce Policy
Read IBM's playbook and policy recommendations to close the talent gap and help drive the change needed for Europe's workforce.
**Categories:**Europe, Policy Lab Perspectives, Workforce Policy
As nations move toward cyber incident reporting implementation, and others contemplate their own national cybersecurity policies around cyber reporting, IBM urges lawmakers to examine what is already working.
**Categories:**Cybersecurity, Policy Lab Perspectives
**Categories:**Artificial Intelligence, Policy Positions
**Categories:**Cybersecurity, Hybrid Cloud, Policy Lab Perspectives
**Categories:**Policy Positions
More Articles
-
IBM Statement on the EU-US Data Privacy Framework Adequacy Decision -
IBM comments on Dept. of Labor's proposed national apprenticeships rules -
IBM Welcomes Sen. Klobuchar's Protect Elections from Deceptive AI Act -
3 practical ways to strengthen EU-US trade and generate near-term benefits for the transatlantic economy
Engaging in worldwide policy advocacy to drive growth and innovation in the digital economy. With dedicated resources in the Americas, Europe, Africa, and Asia, IBM is driven by the mutual objectives of global consistency and local relevancy.
The IBM Policy Lab is a forum providing policymakers with a vision and actionable recommendations to harness the benefits of innovation while ensuring trust in a world being reshaped by data.
For more than a century, IBM has earned the trust of our clients by responsibly managing their most valuable data, and we have worked to earn the trust of society by ushering powerful new technologies into the world responsibly and with clear purpose.
IBM’s core values include a commitment to trust and personal responsibility and a pursuit of innovation that matters to our company and the world. Our values reflect the corporation’s long-standing policy against political contributions of any kind, even when permitted by law.
Subscribe to the IBM Policy Lab Newsletter
The IBM Policy Lab bi-weekly newsletter covers the world's most pressing tech policy topics, from AI to quantum to 5G. Authored by IBM Policy Lab co-directors Jean-Marc Leclerc, in Brussels, and Ryan Hagemann, in D.C., you'll also receive our latest IBM Policy Lab white papers and tech policy news from around the world.
| true | true | true | null |
2024-10-12 00:00:00
|
2022-11-03 00:00:00
| null |
website
|
ibm.com
|
IBM Policy
| null | null |
15,185,753 |
http://www.sfchronicle.com/business/article/Universities-rush-to-add-data-science-majors-as-12170047.php#photo-14022384
|
Universities rush to add data science majors as demand explodes
|
Isha Salian
|
In spring 2016, UC Berkeley’s first Foundations of Data Science course attracted around 300 students. This semester, nearly 1,000 have enrolled — and university officials are working to create a data science undergraduate major, the first new major for the College of Letters and Science in at least 16 years.
“No program has grown this fast at Berkeley,” said David Culler, interim dean of the Division of Data Sciences, which was established in December. The first students could graduate with a data science major as early as May next year, he said, and certainly by 2019 (a minor is also planned).
Across the UC system, campuses are quickly adding data science programs in response to soaring workplace demand. UC San Diego is starting a data science undergraduate major and minor this fall. UC Davis opened a statistical data science track within its statistics major effective this year. And at UC Santa Cruz, a new D3 Research center — short for Data, Discovery and Decisions — pairs students with companies to work on research projects using data science skills.
Advertisement
Article continues below this ad
## More on the UC System
UC Irvine was the first in the University of California system to create a data science major in fall 2015.
“This is the national conversation at pretty much all of the leading universities,” Culler said.
Data science uses the modeling and analysis skills of statistics combined with the programming and machine learning tools of computer science to find patterns in data and extract insights. But the third element that makes data science unique, according to Cathryn Carson, faculty lead for the program, is the need for expertise in a particular field of inquiry — which could be anything from medicine to linguistics to economics.
Sam Lau, 21, was a teaching assistant in fall 2015, when Berkeley tested a pilot of its data science course, in which around 100 students enrolled. “At first, I totally thought data science was a buzzword — one of the words people made up to describe people who know machine learning and statistics and all that stuff,” he said.
Advertisement
Article continues below this ad
Now Lau, who taught the course this summer, thinks data science is an essential skill and is particularly impressed by how the class attracts students of all backgrounds. Recently, the data science foundations course attracted more economics majors than computer science students, Culler said.
The Cal course teaches statistical thinking and coding using the programming language Python. Around half the students also enroll for “connector courses” that apply data science skills to topics like smart cities or literature. Companies including Microsoft, Intel and Google are providing funding for the university’s data sciences division and cloud services for its classes, Culler said.
The job title “data scientist” topped Glassdoor’s 50 Best Jobs in America for the second consecutive year in 2017. Data engineer and analytics manager both cracked the top five. A study this year co-authored by IBM projects that the demand for data scientists and data engineers will grow 39 percent by 2020, when the number of annual job openings for data professionals reaches 2.72 million.
“We hire data scientists all the time,” said Jon Rooney, head of product marketing at Splunk, a San Francisco company that sells data analytics software. Machine-learning algorithms can automate some analyses and lower the training barrier for employees working with data, but many companies will still need a “vanguard” of skilled data scientists, he said.
Michaela Palmer, a UC Berkeley fourth-year student majoring in geography, said taking the introductory data science course “was life-changing, honestly.” Palmer hopes to work on geospatial data analytics after graduation, and is considering pursuing a graduate degree in computer science. “Data is everywhere,” she said. “Knowing how to handle it is such an important skill.”
Advertisement
Article continues below this ad
In addition to its new undergraduate major and minor, UC San Diego is creating a data science institute, funded largely by a $75 million donation this year from Taner Halicioglu, an alumnus who was Facebook’s first full-time hire after its founders.
“You launch a new major maybe once a decade, once every two decades,” said Rajesh Gupta, a UC San Diego professor who chaired the computer science and engineering department during the creation of the data science program. “This has to last for a very long time ... regardless of what’s needed in the market today.”
Gupta believes the last new major at UC San Diego was nanoengineering — which admitted its first freshman class in 2010, according to the department’s website. He said the data science program could offset some of the overwhelming enrollment in computer science classes.
Not every university thinks data science should be its own major, since many of the underlying skills overlap with existing computer science and statistics programs.
“Everyone is grappling with what (data science) is,” said UC Santa Cruz computer science Professor Lise Getoor, who also directs the university’s D3 Research center, which was established in June. “Is it just a renaming of old stuff, or is it a new thing?”
Advertisement
Article continues below this ad
UC Santa Cruz does not plan on forming a data science major in the near future, said Abel Rodriguez, applied mathematics and statistics professor and associate director of the research center. The university’s statistics department was created as recently as 2006, he said, and UC Santa Cruz introduced an undergraduate requirement in statistical reasoning a few years ago.
Rodriguez said the university’s focus is instead on integrating material from computer science into the statistics program, and vice versa. “Rather than trying to create this new coursework, it’s just trying to make sure that we have more communication and more cross-fertilization of these skills.”
*
Isha Salian is a San Francisco Chronicle staff writer. Email: [email protected]
*
| true | true | true |
In spring 2016, UC Berkeley’s first Foundations of Data Science course attracted around...
|
2024-10-12 00:00:00
|
2017-09-05 00:00:00
|
article
|
sfchronicle.com
|
San Francisco Chronicle
| null | null |
|
14,607,032 |
https://www.jci.org/articles/view/92087
|
CRISPR/Cas9-mediated gene editing ameliorates neurotoxicity in mouse model of Huntington’s disease
|
The Journal; Su Yang; Renbao Chang; Huiming Yang; Ting Zhao; Yan Hong; Ha Eun Kong; Xiaobo Sun; Zhaohui Qin; Peng Jin; Shihua Li; Xiao-Jiang Li
|
Advertisement
Brief ReportNeuroscience Free access | 10.1172/JCI92087
1Department of Human Genetics, Emory University School of Medicine, Atlanta, Georgia, USA.
2Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, China.
3University of Chinese Academy of Sciences, Beijing, China.
4Department of Mathematics and Computer Sciences, and
5Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA.
6Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China.
Address correspondence to: Xiao-Jiang Li or Shihua Li, 347 Whitehead Building, 615 Michael Street, Atlanta, Georgia 30322, USA. Phone: 404.727.3290; Email: [email protected] (X.J. Li); Phone: 404.712.2304; Email: [email protected] (S. Li).
**Authorship note:** S. Yang and R. Chang contributed equally to this work.
Find articles by Yang, S. in: JCI | PubMed | Google Scholar
1Department of Human Genetics, Emory University School of Medicine, Atlanta, Georgia, USA.
2Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, China.
3University of Chinese Academy of Sciences, Beijing, China.
4Department of Mathematics and Computer Sciences, and
5Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA.
6Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China.
Address correspondence to: Xiao-Jiang Li or Shihua Li, 347 Whitehead Building, 615 Michael Street, Atlanta, Georgia 30322, USA. Phone: 404.727.3290; Email: [email protected] (X.J. Li); Phone: 404.712.2304; Email: [email protected] (S. Li).
**Authorship note:** S. Yang and R. Chang contributed equally to this work.
Find articles by Chang, R. in: JCI | PubMed | Google Scholar
1Department of Human Genetics, Emory University School of Medicine, Atlanta, Georgia, USA.
2Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, China.
3University of Chinese Academy of Sciences, Beijing, China.
4Department of Mathematics and Computer Sciences, and
5Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA.
6Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China.
Address correspondence to: Xiao-Jiang Li or Shihua Li, 347 Whitehead Building, 615 Michael Street, Atlanta, Georgia 30322, USA. Phone: 404.727.3290; Email: [email protected] (X.J. Li); Phone: 404.712.2304; Email: [email protected] (S. Li).
**Authorship note:** S. Yang and R. Chang contributed equally to this work.
Find articles by Yang, H. in: JCI | PubMed | Google Scholar
1Department of Human Genetics, Emory University School of Medicine, Atlanta, Georgia, USA.
2Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, China.
3University of Chinese Academy of Sciences, Beijing, China.
4Department of Mathematics and Computer Sciences, and
5Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA.
6Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China.
Address correspondence to: Xiao-Jiang Li or Shihua Li, 347 Whitehead Building, 615 Michael Street, Atlanta, Georgia 30322, USA. Phone: 404.727.3290; Email: [email protected] (X.J. Li); Phone: 404.712.2304; Email: [email protected] (S. Li).
**Authorship note:** S. Yang and R. Chang contributed equally to this work.
Find articles by Zhao, T. in: JCI | PubMed | Google Scholar
1Department of Human Genetics, Emory University School of Medicine, Atlanta, Georgia, USA.
2Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, China.
3University of Chinese Academy of Sciences, Beijing, China.
4Department of Mathematics and Computer Sciences, and
5Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA.
6Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China.
Address correspondence to: Xiao-Jiang Li or Shihua Li, 347 Whitehead Building, 615 Michael Street, Atlanta, Georgia 30322, USA. Phone: 404.727.3290; Email: [email protected] (X.J. Li); Phone: 404.712.2304; Email: [email protected] (S. Li).
**Authorship note:** S. Yang and R. Chang contributed equally to this work.
Find articles by Hong, Y. in: JCI | PubMed | Google Scholar
1Department of Human Genetics, Emory University School of Medicine, Atlanta, Georgia, USA.
2Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, China.
3University of Chinese Academy of Sciences, Beijing, China.
4Department of Mathematics and Computer Sciences, and
5Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA.
6Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China.
Address correspondence to: Xiao-Jiang Li or Shihua Li, 347 Whitehead Building, 615 Michael Street, Atlanta, Georgia 30322, USA. Phone: 404.727.3290; Email: [email protected] (X.J. Li); Phone: 404.712.2304; Email: [email protected] (S. Li).
**Authorship note:** S. Yang and R. Chang contributed equally to this work.
Find articles by Kong, H. in: JCI | PubMed | Google Scholar
1Department of Human Genetics, Emory University School of Medicine, Atlanta, Georgia, USA.
2Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, China.
3University of Chinese Academy of Sciences, Beijing, China.
4Department of Mathematics and Computer Sciences, and
5Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA.
6Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China.
Address correspondence to: Xiao-Jiang Li or Shihua Li, 347 Whitehead Building, 615 Michael Street, Atlanta, Georgia 30322, USA. Phone: 404.727.3290; Email: [email protected] (X.J. Li); Phone: 404.712.2304; Email: [email protected] (S. Li).
**Authorship note:** S. Yang and R. Chang contributed equally to this work.
Find articles by Sun, X. in: JCI | PubMed | Google Scholar
1Department of Human Genetics, Emory University School of Medicine, Atlanta, Georgia, USA.
2Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, China.
3University of Chinese Academy of Sciences, Beijing, China.
4Department of Mathematics and Computer Sciences, and
5Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA.
6Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China.
Address correspondence to: Xiao-Jiang Li or Shihua Li, 347 Whitehead Building, 615 Michael Street, Atlanta, Georgia 30322, USA. Phone: 404.727.3290; Email: [email protected] (X.J. Li); Phone: 404.712.2304; Email: [email protected] (S. Li).
**Authorship note:** S. Yang and R. Chang contributed equally to this work.
Find articles by Qin, Z. in: JCI | PubMed | Google Scholar |
1Department of Human Genetics, Emory University School of Medicine, Atlanta, Georgia, USA.
2Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, China.
3University of Chinese Academy of Sciences, Beijing, China.
4Department of Mathematics and Computer Sciences, and
5Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA.
6Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China.
Address correspondence to: Xiao-Jiang Li or Shihua Li, 347 Whitehead Building, 615 Michael Street, Atlanta, Georgia 30322, USA. Phone: 404.727.3290; Email: [email protected] (X.J. Li); Phone: 404.712.2304; Email: [email protected] (S. Li).
**Authorship note:** S. Yang and R. Chang contributed equally to this work.
Find articles by Jin, P. in: JCI | PubMed | Google Scholar
1Department of Human Genetics, Emory University School of Medicine, Atlanta, Georgia, USA.
2Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, China.
3University of Chinese Academy of Sciences, Beijing, China.
4Department of Mathematics and Computer Sciences, and
5Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA.
6Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China.
Address correspondence to: Xiao-Jiang Li or Shihua Li, 347 Whitehead Building, 615 Michael Street, Atlanta, Georgia 30322, USA. Phone: 404.727.3290; Email: [email protected] (X.J. Li); Phone: 404.712.2304; Email: [email protected] (S. Li).
**Authorship note:** S. Yang and R. Chang contributed equally to this work.
Find articles by Li, S. in: JCI | PubMed | Google Scholar
1Department of Human Genetics, Emory University School of Medicine, Atlanta, Georgia, USA.
2Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, Beijing, China.
3University of Chinese Academy of Sciences, Beijing, China.
4Department of Mathematics and Computer Sciences, and
5Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, Georgia, USA.
6Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China.
Address correspondence to: Xiao-Jiang Li or Shihua Li, 347 Whitehead Building, 615 Michael Street, Atlanta, Georgia 30322, USA. Phone: 404.727.3290; Email: [email protected] (X.J. Li); Phone: 404.712.2304; Email: [email protected] (S. Li).
**Authorship note:** S. Yang and R. Chang contributed equally to this work.
Find articles by Li, X. in: JCI | PubMed | Google Scholar
**Authorship note:** S. Yang and R. Chang contributed equally to this work.
Published June 19, 2017 - More info
Huntington’s disease is a neurodegenerative disorder caused by a polyglutamine repeat in the Huntingtin gene (*HTT*). Although suppressing the expression of mutant HTT (mHTT) has been explored as a therapeutic strategy to treat Huntington’s disease, considerable efforts have gone into developing allele-specific suppression of mHTT expression, given that loss of *Htt* in mice can lead to embryonic lethality. It remains unknown whether depletion of HTT in the adult brain, regardless of its allele, could be a safe therapy. Here, we report that permanent suppression of endogenous mHTT expression in the striatum of mHTT-expressing mice (HD140Q-knockin mice) using CRISPR/Cas9-mediated inactivation effectively depleted HTT aggregates and attenuated early neuropathology. The reduction of mHTT expression in striatal neuronal cells in adult HD140Q-knockin mice did not affect viability, but alleviated motor deficits. Our studies suggest that non–allele-specific CRISPR/Cas9-mediated gene editing could be used to efficiently and permanently eliminate polyglutamine expansion–mediated neuronal toxicity in the adult brain.
Expansion of a CAG/glutamine repeat in various genes causes at least 9 different neurodegenerative diseases, including Huntington’s disease (HD). In HD, the expanded CAG repeat encodes a polyglutamine (polyQ) tract in the N-terminal region of huntingtin (*HTT*) and leads to a wide range of cellular dysfunctions (1). The gain of toxic function of mutant huntingtin (mHTT) has led to considerable efforts to use siRNA, antisense oligonucleotides, or CRISPR/Cas9 to selectively suppress the expression of mHTT (2–4). Indeed, siRNA and antisense oligonucleotides have shown promising therapeutic effects in HD mice that express transgenic mHTT (2, 5). However, this relies on SNPs that are specific to the mutant allele. Also, whether this strategy can be successfully used in HD mice that express mHTT at the endogenous level remains unknown, since normal and expanded alleles of the endogenous *Htt* gene are not readily distinguished by siRNA and antisense oligonucleotides (6).
Our recent studies using conditional *Htt*-KO mice revealed that depletion of normal HTT in adult mouse brains does not affect animal survival, growth, or neuronal viability (7). In addition, knockin (KI) mice that express N-terminal mHTT have shown that the N-terminal region of HTT is not essential for early embryonic development (8). These findings suggest that removal of N-terminal HTT containing the polyQ domain, regardless of its allele, could be a potential therapeutic strategy to treat HD. Here, we report that permanent suppression of the endogenous expression of mHTT via CRISPR/Cas9 in the striatum of HD140Q-KI mice, which express a human HD *HTT*, can effectively deplete HTT aggregates and early neuropathology, even after the formation of abundant HTT aggregates. Reducing HTT expression in striatal neuronal cells does not affect the viability of the adult HD140Q-KI mice, but alleviates their motor deficits and neurological symptoms. Our findings suggest that depletion of HTT via CRISPR/Cas9 in a non–allele-specific manner can efficiently and permanently eliminate polyQ expansion–mediated neuronal toxicity in the adult brain. This also opens up a new avenue for treating other neurodegenerative diseases caused by the gain-of-function mechanism.
To delete the polyQ domain of mHTT using CRISPR/Cas9, we designed 4 guide RNAs (gRNAs) to target the DNA regions (T1, T2, T3, and T4) flanking the CAG repeat in exon 1 of human *HTT* (Figure 1A and Supplemental Figure 1A; supplemental material available online with this article; https://doi.org/10.1172/JCI92087DS1). We transfected HEK293 cells stably expressing exon 1 of human *HTT* containing 120 CAG repeats with each of the 4 gRNAs and Cas9. Western blotting showed a reduction of mHTT in the transfected cells (Supplemental Figure 1, B and C). We also tested the activities of combining 2 HTT-gRNAs in the stable HEK293 cells. A combination of T1 and T3 HTT-gRNAs led to the greatest reduction in mHTT (Supplemental Figure 1, D and E) and was used for our subsequent studies.
CRISPR/Cas9 depletes the ubiquitous expression of mHTT in homozygous HD140Q-KI mouse striatum. (**A**) Schematics of the designed HTT-gRNA (T1 and T3). (**B**) Immunofluorescence showing the transduction of AAV-HTT-gRNA in the striatum and part of the cortex. Ctx, cortex; Str, striatum; CC, corpus callosum; LV, lateral ventricle. Scale bar: 100 μm. (**C**) Different brain regions from 9-month-old homozygous HD140Q-KI mice injected with AAV-CMV-Cas9 and AAV-HTT-gRNA (T1 and T3) or control-gRNA were analyzed by Western blotting with 1C2 for mHTT and antibodies against Cas9, GFAP, NeuN, p62, caspase 3, and cleaved caspase 3. Vinculin was used as a loading control. Hip, hippocampus. (**D**) Low- and high-magnification images show the reduction of nuclear HTT and HTT aggregates in the AAV-HTT-gRNA/AAV-CMV-Cas9–injected area in 9-month-old homozygous HD140Q-KI mice compared with the contralateral striatum injected with AAV-HTT-gRNA only. Arrow indicates a remaining cell with nuclear HTT inclusion. Scale bar: 10 μm. The red dashed outline indicates the injected region where mHTT aggregates are markedly reduced. (**E**) Double immunostaining confirmed the depletion of mHTT in the area expressing HTT-gRNA in the injected striatum of 9-month-old homozygous HD140Q-KI mice. The striatum of a HD140Q-KI mouse injected with AAV-CMV-Cas9 only was used as a control. Scale bar: 20 μm.
We next tested the effect of CRISPR/Cas9-mediated HTT depletion in HD140Q-KI mice. In this KI mouse model, exon 1 of human *HTT* with 140 CAG repeats replaces exon 1 of endogenous mouse *Htt* (9), resulting in the expression of full-length mHTT with 140Q under the control of the endogenous mouse *Htt* promoter. In HD140Q-KI mice, accumulated mHTT in striatal neuronal nuclei is detectable between 4 and 6 months and forms obvious aggregates at 9 to 10 months (8, 10–12). We focused on the striatum to investigate the effect of removing mHTT. Two gRNAs (T1 and T3) are expressed under the U6 promoter in an adeno-associated virus (AAV) vector that also expresses red fluorescent protein (RFP) (AAV-HTT-gRNA), and Cas9 is expressed in another AAV vector under the CMV promoter (AAV-CMV-Cas9) (13). The 2 viruses were mixed at a ratio of 1:4 for stereotaxic injection into mouse striatum (Supplemental Figure 2A). After 3 weeks, Western blotting verified that RFP and Cas9 were predominantly expressed in the injected striatum (Supplemental Figure 2B).
We injected AAV-HTT-gRNA and AAV-CMV-Cas9 into one side of the striatum in homozygous HD140Q-KI mice at the age of 3 or 9 months. The contralateral striatum was injected with AAV-HTT-gRNA or AAV-CMV-Cas9 alone, which allowed us to rigorously examine the efficiency of HTT-gRNA/Cas9–mediated mHTT knockdown. HD140Q-KI mice are known to develop age-dependent motor deficits and nuclear accumulation of mHTT (9, 14). We found that most of the striatum and the needle pathway in the cortex and hippocampus were transduced by AAVs 3 weeks after injection (Figure 1B). Western blotting showed that HTT-gRNA, but not control-gRNA, caused a significant reduction of mHTT in the striatum of 9-month-old HD140Q-KI mice (Figure 1C and Supplemental Figure 2C). Compared with the contralateral striatum injected with HTT-gRNA alone, immunostaining revealed a dramatic decrease in the nuclear accumulation and aggregation of mHTT in the HTT-gRNA/Cas9–injected striatum (Figure 1D). Double immunofluorescence staining further verified that the decrease in mHTT staining is dependent on the expression of HTT-gRNA (Figure 1E). In HD KI mouse brain, a well-known early neuropathology is reactive astrocytes (15, 16). In brain regions transduced by HTT-gRNA/Cas9, attenuation of the increased glial fibrillary acidic protein (GFAP) was associated with knockdown of mHTT compared with brain regions injected with control-gRNA/Cas9 (Figure 1C and Supplemental Figure 2C), indicating that a reduction of mHTT alleviated reactive astrocytes. We also checked several other proteins such as NeuN (a neuronal marker), p62 (an autophagy marker) and caspase 3 (an apoptosis marker), which are frequently studied in neurodegenerative diseases, and found that these proteins remained unchanged (Figure 1C and Supplemental Figure 2C). The results were corroborated by immunohistochemical studies using GFAP and NeuN antibodies (Supplemental Figure 3, A and B).
Most HD patients are heterozygous for the HD gene mutation. Also, neurons are preferentially affected in HD. Thus, we tested the therapeutic potential of CRISPR/Cas9 in heterozygous HD140Q-KI mice, using AAV-HTT-gRNAs (T1 and T3) with AAV-Cas9 that was expressed under the neuronal methyl-CpG–binding protein (*Mecp2*) promoter (AAV-MECP2-Cas9) (Figure 2A). As a control, AAV-control-gRNA with AAV-MECP2-Cas9 were used. These viruses were mixed at a ratio of 1:4 (gRNA/Cas9) and injected into both sides of the striatum of 9-month-old heterozygous HD140Q-KI mice to maximize the therapeutic effects. Immunostaining of the injected striatum revealed the presence of RFP in dopamine- and cAMP-regulated phosphoprotein as well as dopamine- and cAMP-regulated neuronal phosphoprotein (DARPP-32) (Figure 2B) and NeuN-positive (Supplemental Figure 4) neurons, indicating that medium spiny neurons in the striatum had been transduced by the injected AAVs.
Behavioral analysis of heterozygous HD140Q-KI mice with depletion of neuronal HTT in the striatum by AAV-HTT-gRNA/AAV-MECP2-Cas9 injection. (**A**) Schematics showing the viral vectors used. HA, human influenza hemagglutinin; ITR, inverted terminal repeat; KASH, Klarsicht, ANC-1, Syne Homology; WPRE, woodchuck hepatitis virus post-transcriptional regulatory element. (**B**) Double immunostaining with anti–DARRP-32 indicated that medium spiny neurons expressed AAV-HTT-gRNA. Scale bar: 10 μm. (**C**) Motor functions of heterozygous HD140Q-KI mice injected with AAV-HTT-gRNA/AAV-MECP2-Cas9 (KI HTT-gRNA) or AAV-control-gRNA/AAV-MECP2-Cas9 (KI control-gRNA) and WT mice injected with AAV-control-gRNA/AAV-MECP2-Cas9 (WT) were evaluated using rotarod, balance beam, and grip strength tests at different time points after injection (*n* = 12 for each group; **P* < 0.05, ***P* < 0.012, and ****P* < 0.001, by 2-way ANOVA with Bonferroni’s test, comparing the KI HTT-gRNA group with the KI control-gRNA group).Data represent the mean ± SEM.
We examined whether CRISPR/Cas9-mediated neuronal mHTT depletion had any impact on the motor function of HD140Q-KI mice. We were able to obtain 24 nine-month-old heterozygous KI mice for examination, at which age the mice show abundant nuclear accumulation of mHTT in striatal neurons and also develop obvious motor dysfunction. These KI mice were injected with either AAV-HTT-gRNA/AAV-MECP2-Cas9 or AAV-control-gRNA/AAV-MECP2-Cas9, and their behaviors were monitored for 3 months. In HD140Q-KI mice, motor dysfunction has been well documented using rotarod, balance beam, and grip strength tests (9, 17). We found that HTT-gRNA/Cas9 could significantly improve performance in these tests and alleviate the motor deficits of HD140Q-KI mice compared with control-gRNA/Cas9–injected KI and WT mice (Figure 2C). In addition, knocking down HTT expression also attenuated body weight reductions (Figure 2C). The efficiency of mHTT reduction in the striatum varied among the individual mice tested (Supplemental Figure 5A). We found that the percentage of mHTT reduction in each mouse correlated with its rotarod and balance beam performance (Supplemental Figure 5, B and C).
Many studies have shown rare off-target effects when specific gRNAs are used (18, 19). Whole-genome sequencing analysis using genomic DNA extracted from HTT-gRNA/Cas9–injected striatum verified that genomic mutations predominantly occurred around the HTT-gRNA–targeted sequences in the *Htt* gene, but not in potential off-target loci (Supplemental Figure 6). This result was further corroborated by a T7E1 assay showing a lack of DNA mutations in selected potential off-target loci (Supplemental Figure 7A). In addition, DNA sequencing confirmed the presence of frameshift mutations around the targeted region of the HTT-gRNA (Supplemental Figure 7B). Western blotting analysis of individual injected mice showed an obvious reduction of mHTT in the striatum and the part of the cortex containing the injection pathway compared with mHTT levels in the hippocampus (Figure 3, A and C). Double immunostaining confirmed a specific reduction of mHTT by HTT-gRNA, but not control-gRNA, in the injected striatal area (Figure 3, B and C). Furthermore, AAV-MECP2-Cas9, which selectively depleted HTT in neuronal cells, could also reduce reactive astrocytes but did not alter expression of the neuronal marker NeuN (Figure 3, D and E). We also examined striatal volume and brain weight and did not find significant differences between HTT-gRNA– and control-gRNA–injected HD140Q-KI mice (Supplemental Figure 8). These results suggest that neuronal mHTT causes the early neuropathological increase in reactive astrocytes in HD140Q-KI mouse brains, which can be diminished by eliminating the expression of HTT. Taken together, these findings show that removal of endogenous HTT in neuronal cells via CRISPR/Cas9 can efficiently alleviate mHTT-mediated neuropathology in HD140Q-KI mice.
Removal of mHTT in neuronal cells alleviates neuropathology in the striatum of 13-month-old heterozygous HD140Q-KI mice. (**A**) Western blotting shows the reduction of mHTT in brain tissues from 3 heterozygous HD140Q-KI (KI-1, KI-2, and KI-3) and WT mice. Antibody 2166 was used to show both mHTT and WT HTT; antibody 1C2 was used to show only mHTT. Replicate samples run on separate blots are presented. (**B**) Double immunostaining with the 1C2 antibody confirmed the depletion of mHTT by AAV-HTT-gRNA. A heterozygous HD140Q-KI mouse injected with AAV-control-gRNA served as a control. Scale bar: 20 μm. (**C**) Quantitative assessments of the relative ratio of mHTT to total HTT in **A** (left; *n* = 8; ****P* < 0.001, by 1-way ANOVA with Tukey’s test) and the relative levels of mHTT staining in **B** (right; *n* = 8; ****P* < 0.001, by Student’s *t* test). (**D**) Double immunostaining of striatum (from mice injected at 9 months of age and examined at 13 months of age) shows decreased GFAP levels with HTT-gRNA compared with control-gRNA. There was no difference in NeuN staining. Scale bars: 20 μm. (**E**) Quantitative assessment of the relative levels of GFAP and NeuN staining (*n* = 8). The staining intensity for each mouse was the average from three ×10 images. ****P* < 0.001, by 1-way ANOVA with Tukey’s test. Data represent the mean ± SEM.
Although shutting off the expression of transgenic mHTT can alleviate neurological symptoms in HD mice (20, 21), whether reducing the expression of endogenous HTT can be used to treat HD without deleterious effects remained unknown. When we used homozygous HD140Q-KI mice, in which both alleles of the mHTT gene could be disrupted by CRISPR/Cas9, we found that removal of HTT did not affect the expression of NeuN or caspase 3. Instead, mHTT depletion significantly reduced reactive astrocytes, an early pathological event in HD KI mouse brains (15, 16). The results also support our recent findings that depletion of endogenous mouse HTT in adult neurons is nondeleterious and that the function of HTT is cell type and age dependent (7). Using heterozygous HD140Q-KI mice to further analyze pathology and behavior, we also verified that CRISPR/Cas9 could effectively alleviate HD-related phenotypes. At the age of 9 months, HD140Q-KI mice show abundant nuclear mHTT accumulation in the striatum and obvious motor deficit phenotypes (8, 10–12). Our findings revealed that CRISPR/Cas9-mediated gene inactivation could reverse the neuropathology and behavioral phenotypes even when the mice were 9 months old, suggesting that old neuronal cells still have the ability to clear accumulated mutant proteins and repair early injury once the expression of the mutant protein is blocked. Thus, reducing mHTT expression in the brains of elderly HD patients might still be effective in alleviating neurological symptoms.
Given that CRISPR/Cas9 can permanently eliminate the expression of targeted genes, it should deplete mHTT more efficiently than previous therapeutic approaches, which require continuous administration. Also, the severe neurological symptoms of many neurodegenerative diseases are often associated with the preferential vulnerability of selective neuronal populations. The use of specific promoters allows CRISPR/Cas9 to target specific types of neurons. Thus, using CRISPR/Cas9 to inhibit mutant protein expression in specific brain regions opens up a new avenue for treating HD as well as other neurodegenerative diseases caused by a toxic gain of function of mutant genes.
Study approval. All procedures were performed in accordance with NIH guidelines and the US Public Health Service’s Guide for the Care and Use of Laboratory Animals and were approved by the IACUC of Emory University, which is accredited by the Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC).
Whole-genome sequencing. Whole-genome sequencing data have been deposited in the NCBI’s Sequence Read Archive (SRA accession number SRP105422).
Statistics. Statistical significance was determined by 2-tailed Student’s *t* test, 1-way ANOVA, or 2-way ANOVA using GraphPad Prism 5.0 (GraphPad Software). A *P* value of less than 0.05 was considered statistically significant.
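The authors ran these tests in GraphPad Prism; for readers checking comparable group data in Python, the same classes of tests are available in SciPy and statsmodels. A minimal sketch on made-up values (the group means, spreads, and sizes below are assumptions for illustration only):

```python
# Illustrative only: the paper used GraphPad Prism; this sketch runs the
# same classes of tests in Python on hypothetical group values.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
wt = rng.normal(1.0, 0.10, 8)        # hypothetical WT group (n = 8)
ki_ctrl = rng.normal(1.6, 0.15, 8)   # hypothetical KI control-gRNA group
ki_htt = rng.normal(1.1, 0.12, 8)    # hypothetical KI HTT-gRNA group

print(stats.ttest_ind(ki_ctrl, ki_htt))       # 2-tailed Student's t test
print(stats.f_oneway(wt, ki_ctrl, ki_htt))    # 1-way ANOVA across 3 groups
values = np.concatenate([wt, ki_ctrl, ki_htt])
groups = ["WT"] * 8 + ["KI ctrl"] * 8 + ["KI HTT"] * 8
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # Tukey's post hoc test
```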
SY, RC, SL, and XJL designed the study. SY, RC, HY, TZ, and YH performed experiments and collected the data. HEK, XS, ZQ, and PJ performed whole-genome sequencing–related work. SY, RC, SL, and XJL analyzed the data. SY, RC, and XJL wrote the manuscript.
This work was supported by grants from the NIH (NS036232 and NS101701, to XJL, and NS095279, to SHL) and the National Natural Science Foundation of China (grant 91332206).
Address correspondence to: Xiao-Jiang Li or Shihua Li, 347 Whitehead Building, 615 Michael Street, Atlanta, Georgia 30322, USA. Phone: 404.727.3290; Email: [email protected] (X.J. Li); Phone: 404.712.2304; Email: [email protected] (S. Li).
**Conflict of interest:** The authors have declared that no conflict of interest exists.
**Reference information:** *J Clin Invest.* 2017;127(7):2719–2724. https://doi.org/10.1172/JCI92087.
**Viva Tectrix VR**
From early in 1992 through mid-1998 I spent the majority of my waking
hours writing software for interactive exercise machines, which saw
the light of day as the Tectrix VR Bike and VR Climber products. The
machines didn't take the world by storm, but there were some really
cool aspects to what we did. In 1998 Tectrix was acquired by Cybex
International, who promptly cancelled the VR machines and removed all
evidence of their existence from the corporate web site, so I feel the
need of a place to point people when I want to shamelessly bore
them.
**News**
2007-01-04
**World Pack CD available for download**
I just got another request for updated software. It's a dirty shame
that not all VR machines have had the latest (released in June 1998)
software! I've posted a CD image you can use to burn the latest
World Pack -- see the new download page.
2004-11-26
**No more wiki**
This used to all be in a wiki (TWiki), but apparently some script
kiddie has been attacking its (many) security holes. Plus, I never
really liked TWiki; it was confusing and ugly. It didn't seem like
anybody was getting the hang of editing the pages, so RIP TWiki.
I've grabbed most of the wiki content and made it available as
conventional web pages. If you see something that needs editing or
want to add something, send me an email.
# LibreOffice 7.6 Community: Release Notes
## Writer
- Added page number wizard in Insert menu for easy one-step insertion of the page number in the header/footer tdf#86630 (Paris Oplopoios / Justin Luth, Collabora)
- The Paragraph Style dropdown (in the Formatting toolbar) now gradually replaces the default list with styles used in the document rather than always showing the full list at the top. tdf#152666 (Heiko Tietze, TDF)
- Character properties of the paragraph marker from DOCX are now also remembered in ODT. blog post (Miklos Vajna, Collabora)
- Citation handling: added plumbing in Writer to build Zotero-like functionality blog post (Miklos Vajna, Collabora)
- Table of Figures can be generated more flexibly based on a paragraph style, not only by categories or object names. tdf#153090 (Michael Stahl, allotropia)
- Bibliography entries can now be edited directly from a bibliography table. tdf#72955 (Vojtěch Doležal)
- Bibliography marks now (by default) hyperlink to matching row in a bibliography table. The click behaviour can be changed to open the "Display URL", open the newly introduced "Target URL", or "None". tdf#153396 (Vojtěch Doležal)
- Non-breaking space character is now indicated using a degree symbol, when non-printing characters (formatting marks) are displayed. 28675af8 (Vojtěch Doležal)
- Start of multi-page floating tables in Writer commits, blog post 1, blog post 2, blog post 3 blog post 4 blog post 5 blog post 6 (Miklos Vajna, Collabora)
- The Accessibility Check has been moved to the sidebar to allow easier usage while editing the document. tdf#142978 (Samuel Mehrbrodt, allotropia)
- Now if you have a hidden section in your document and try (possibly accidentally) to delete it, then Writer will show a warning message tdf#130199 (Balazs Varga, allotropia)
- DOI citation recognition in Tools/AutoCorrect: Create a hyperlink pointing to doi.org for DOI citation tdf#145925 (Baole Fang)
- Added highlighting for used Paragraph and Character styles and highlighting for used Direct Formatting in text. tdf#38194, tdf#106556 (Jim Raykowski)
- keyboard navigation through forms: tab key now circularly navigates through content controls/fieldmarks. The modern content controls have a tabIndex field, which allows for precise ordering of keyboard navigation. The tabIndex also allows a control to be skipped - which is useful to avoid getting stuck in rich text controls (since the tab key needs to insert a tab character in that case). Form developers can specify the tabIndex via the content control properties UI. tdf#151548 (Justin Luth, Collabora)
- Tracked tables (László Németh, NISZ)
- Track table columns (follow-up to tracked table rows) commits
- Show tables with change tracking colors within a single text change tdf#155187
- Fix lost change tracking of tables within a single text change at table editing tdf#147180 and DOCX export tdf#150824 tdf#155187
- The position of the cursor within the page is now exposed via accessibility APIs, so screen readers like NVDA can announce it. tdf#136760 Related NVDA change (Michael Weghorn)
### Spell checking
- Phrase checking: multi-word dictionary items of Hunspell and custom dictionaries are accepted tdf#154499 (László Németh, FSF.hu)
- New options for proofreading: optional disabling of the recognition of possible hyphenated and closed compound words, i.e. rule-based compounding, which had allowed typos to be accepted in Danish, Dutch, German, Hungarian, Norwegian, Swedish, etc., and, in the case of hyphenated compound words, in English and all the other languages tdf#136306 (László Németh, FSF.hu)
## Calc
- Fixed conditional border color export to xlsx tdf#152581 (Attila Szűcs, Collabora)
- Evaluate formula inputs in Validity… tdf#150098 (Balázs Varga, allotropia)
- Number format:
  - `?` is now supported on export to ODF to represent an integer digit, replaced by a blank if it is a non-significant zero tdf#118324 (Laurent Balland)
  - decimals for formats in seconds without truncation, like `[SS].00`, are now accepted tdf#150028 (Laurent Balland)
- Delete Sheet tdf#153709 (Laurent Balland):
- if Sheet is empty, no confirmation message is displayed
- confirmation message is adapted to the number of selected sheets
- Sheet copied to another document now retains a user-defined print range tdf#66613 (Andreas Heinisch)
- Save solver settings to file tdf#38948 (Rafael Lima)
- Added support for drawing styles for shapes and comments. This includes a dedicated style for comments that makes it possible to customize the default look and text formatting of new comments. The old workaround involving editing the default cell style was removed. tdf#89369, commits (Maxim Monastirsky)
- New comment indicator scales with zoom, making it more visible at higher zoom factors. tdf#91415 (Heiko Tietze, TDF)
- The color for the text overflow and comment indicators can now be changed via ▸ ▸ ▸ (labelled "Text Overflow" and "Comment" respectively), which is also where the text overflow indicator can be turned off (moved from ▸ ▸ ▸ ). tdf#154080 (Heiko Tietze, TDF)
- Pop-up comments now show authorship metadata. (Not visible when all comments are shown, to preserve layout.) tdf#73537 (Balázs Varga, allotropia)
- Export all page styles in Calc even if they are not in use tdf#154445 (Andreas Heinisch)
- Automatic hyperlinks now stand out more in Calc tdf#153880 (Balázs Varga, allotropia)
- Added pivot table compact layout. 2f8d1 (Dennis Francis, Collabora)
- Add Poisson distribution to Random Number Generators ( ▸ ▸ ) tdf#154872 (Bartosz Kosiorek)
- Sorting by color is now possible in AutoFilter tdf#95520 (Samuel Mehrbrodt, allotropia)
- Filter/sort by color considers colors set by number format tdf#144549 (Samuel Mehrbrodt, allotropia)
- The Import Text dialog (as CSV file or as Unformatted Text) has a new option to not detect numbers in scientific notation. This option is only available if "Detect special numbers" is off tdf#154131 (Laurent Balland)
- add “formula marker” feature like in Quattro Pro tdf#97551 (feature requested by Brolin Empey in 2016, feature added by Grigory A. Mozhaev in 2023)
- Fix theme background color lost during XLSX export tdf#91332 (Tünde Tóth, NISZ)
- If you have enabled the 'Protect Size' option for a shape or form control and you see a different size after reopening the document, please do the following:
If it is a form control, put the form in 'Design Mode'. This is an icon on the 'Form Controls' toolbar. Then right-click on the shape or form control and open the 'Anchor' item in the context menu. If the anchor is set to 'To cell (resize with cell)', you are affected by the following problem.
The settings 'Protect size' and anchor 'To cell (resize with cell)' contradict each other. Unfortunately, versions prior to 7.6 had this contradiction written to the file in different ways, so an automatic repair is not possible. Version 7.6 automatically writes the 'To Cell' anchor to the file if the shape or form control is size protected.
To repair your file, set the anchor type to 'To cell'. Then uncheck the 'Protect Size' option and set the shape to the desired size and position. Now you can re-enable the 'Protect size' option. Then save the file. The shape or form control now has an unchanged size on reopening, not only in version 7.6, but also in older versions.
Possibly the problem was created because the 'Control Properties' dialog contains an 'Anchor' dropdown field with the entry 'To cell', but this is actually the anchor type 'To cell (resize with cell)'.
Please excuse that we did not detect the problem earlier.
## Impress & Draw
- Display soft breaks as line breaks at fontwork. tdf#148000 (Attila Szűcs, Collabora)
- "Show Layer" is now directly available from the right-click menu instead of having to set as "Visible" in tdf#113439 (Heiko Tietze, TDF) ▸ .
- Navigation panel for switching slides while viewing a presentation; this option is enabled via a checkbox. tdf#154839 (Amin Irgaliev, Vladislav Tarakanov)
- Objects can now be listed in "front to back" order in the Navigator ( ▸ ▸ ▸ ), showing the top-most object at the top of the list. tdf#154604 (Jim Raykowski)
- PDFium import now supports free text annotations, and export now supports ink, free text and polygon/polyline annotations (Jaume Pujantell, Collabora)
- Added support to open multi image tiff files tdf#155444 (Rashesh Padia, Collabora)
- The auto-fitting text scaling algorithm has been changed so that it works similarly to MS Office. Text scaling now separates the spacing scale (paragraph and line spacings) from the font scale, where the spacing scale can only be 100%, 90%, or 80% and font scaling is rounded to the nearest point size. Spacing in the horizontal direction (for example bullet size and various indents) is no longer scaled. (Tomaž Vajngerl, Collabora)
- Fix missing diacritics in slideshow animations that work letter-by-letter. tdf#113290 (Khaled Hosny, TDF)
- Fix squashed display of emojis and glyphs taken from fallback fonts in slideshow on Windows. tdf#147999 (Khaled Hosny, TDF)
- Fix missing CJK emphasis marks in slideshow on Linux. tdf#43671 (Khaled Hosny, TDF)
- Fix gaps between justified Arabic letters in slideshow. tdf#155810 (Khaled Hosny, TDF)
- Fix connectors when importing them as connectors instead of shapes. tdf#149756 tdf#154363 (Tibor Nagy, NISZ)
## Base
- Fixed bug tdf#43369: specific UI for collecting PostgreSQL connection settings (Nirnay Korde)
- Added Firebird's DATEDIFF function to the set of functions that can be used in the query designer (without the need to run SQL directly). tdf#104918 (Juan C. Sanz)
- Added Firebird's DATEADD function to the set of functions that can be used in the query designer (without the need to run SQL directly). tdf#156534 (Juan C. Sanz)
- Added MariaDB/MySQL functions TIMESTAMPDIFF and TIMESTAMPADD to the set of functions that can be used in the query designer (without the need to run SQL directly) (Juan C. Sanz)
## LibreOffice Help
LibreOffice Help now describes access to commands from several interfaces: menus, the tabbed interface, keyboard, toolbars, the status bar, and more.
Help contents updates and fixes:
- E. Rathke
- L. Balland
- O. Hallot
- S. Chaiklin
- S. Horacek
- S. Schroeder
- M. Kaganski
- R. Lima
- A. Romedenne
- Bogdan Buzea
- Adolfo Jayme Barrientos
- Juan C. Sanz
## Core / General
- Added support for zoom gestures when using touchpads in the main view. (Povilas Kanapickas)
- Exporting to PDF updates the last printed time in document properties. tdf#134901 (Justin Luth)
- Added support for document themes (Tomaž Vajngerl, Collabora)
- Import and export of theme definition for OOXML format
- Import and export of theme definition for ODF
- Support for changing the theme in Writer, with various colors extended to support theme color definitions
- Added theme colors in the color picker in Writer and Calc
- Added new Theme dialog to change the currently used theme
- Also added the possibility to define new theme colors for a theme
- The sidebar theme deck has been adapted to work with document themes as well
- Added support for multicolor gradients (Armin Le Grand, allotropia)
- LibreOffice 7.6 has a new feature called "multicolor gradients" (MCGR) implemented by Armin Le Grand. A multicolor gradient still goes from a starting color to final color, but now additional colors are possible in between.
- Although the 'Gradient' tab in the 'Area' dialog has not yet been adapted to the new feature, you can use such gradients. The document File:InfoPresentation MultiColor Gradients LO76.odp lists hints on what you can already do, and the 'Gradient' list in the dialog contains three multicolor gradient examples.
- You can create and modify multicolor gradients using macros, see the 'Gradient2' struct and the associated 'ColorStop' struct in the SDK API reference https://api.libreoffice.org/docs/idl/ref/index.html. Find more details and some primitive example macros in the file File:MacrosForMCGR.odp.
- Since this is a new feature, you might find errors. In this case, please help improve the feature by reporting the issue in our bug tracking system "Bugzilla" https://wiki.documentfoundation.org/Bugzilla. When doing so, mention 'MCGR' in the subject line.
- Some notes:
- LibreOffice versions prior to 7.6 cannot interpret multicolor gradients. They will display a gradient made from the first and last color.
- You need to use "1.3 Extended (recommended)" file format. This is the default setting, so don't worry.
- Some gradient properties in ODF (LibreOffice) and OOXML (Microsoft Office) are basically incompatible. This problem is not solved by multicolor gradients.
- Each view of a document can now have its own language-specific accelerator manager (Gökay Şatır, Collabora).
- Entering a group once again dims the objects that are not included in it. tdf#122735 (Armin Le Grand, allotropia).
- Fix text layout issues when using qt5/qt6 VCL plugins (as opposed to kf5/kf6 plugins). tdf#151925 tdf#151273 (Khaled Hosny, TDF)
- Fix overlapping text issue with some Graphite fonts. tdf#137553 (Khaled Hosny, TDF)
- Fix interaction between complex text fonts and Unicode superscript numbers. tdf#52577 (Khaled Hosny, TDF)
- Fix font fallback of Unicode character from higher planes inside right-to-left text . tdf#153440 (Khaled Hosny, TDF)
- Fix issue with misspelling red line covering parts of right-to-left text. tdf#151968 (Khaled Hosny, TDF)
- Compress full width CJK punctuation when punctuation compression is enabled. tdf#129810 (Khaled Hosny, TDF)
- Fix rendering of Tangut and Khitan Small Script in vertical text. tdf#114432 (Khaled Hosny, TDF)
- Available since 7.6.1: Fix broken contextual text rendering between Narrow No-Break Space and Mongolian letters. tdf#107612, tdf#112594 (Khaled Hosny, TDF)
- Fix broken text rendering when mixing higher Unicode planes with other complex text. tdf#139863 (Khaled Hosny, TDF)
- Don’t insert extra space between Indic and non-Indic text. tdf#89288 (Khaled Hosny, TDF)
- Don’t require installing Hunspell spelling dictionary for every Arabic locale, installing only “ar” dictionary will work for all Arabic locales. tdf#64830 (Khaled Hosny, TDF)
- Fix vertical displacement of vertical text on macOS. tdf#149297 (Khaled Hosny, TDF)
- Don’t use Private Use Area characters for bulleted lists, use the proper Unicode code points. tdf#133089 (Khaled Hosny, TDF)
- Categorized link targets when linking to a presentation. (Szymon Kłos, Collabora)
## Filters
### General OOXML filters
- Added support for OOXML files created in zip64 format tdf#82984, tdf#94915 (Attila Szűcs, Collabora)
- Lots of fixes for frames defined by DOC/X's framePr. Issues fixed include lost frames, combined frames that should be separate, split frames that should be combined, overlapping frames, ignored parent styles, lost relative positioning, wrong absolute positioning, and lost rotation. tdf#154129, tdf#154703 (Justin Luth, Collabora)
- Export to PDF v.1.7 by default. e624e (Michael Stahl, allotropia)
- Tagged PDF is now produced by default, for improved accessibility. (To further improve your PDF's accessibility, the PDF/UA option is available in the export dialog and will trigger the Accessibility Check tool). tdf#39667 (Samuel Mehrbrodt, allotropia)
- Exporting as a hybrid PDF now stores the original ODF document as a PDF compatible file attachment. e052f (Tomaž Vajngerl, Collabora)
- Fix glyph size mismatch and overlap when printing of variable fonts. tdf#156151 (Khaled Hosny, TDF)
- Fix missing or incorrect overline color when exporting to PDF. tdf#48707 (Khaled Hosny, TDF)
- Fix position of CJK emphasis marks when exporting to PDF. tdf#115321 (Khaled Hosny, TDF)
- Fix blank text for the default instance of CFF2 variable fonts when exporting to PDF. tdf#155161 (Khaled Hosny, TDF)
- Fix underline position of Liberation fonts when exporting to PDF. tdf#154235 (Khaled Hosny, TDF)
### EMF/EMF+
- Implement EMR_POLYDRAW record. tdf#142249 (Bartosz Kosiorek)
- Add missing EmfPlusDrawCurve implementation. tdf#143877 (Bartosz Kosiorek)
- Performance boost for EMF+ images containing EmfPlusRecordTypeDrawBeziers records. tdf#154789 (Bartosz Kosiorek)
### SVG
- Support feColorMatrix, feGaussianBlur, feDropShadow, feFlood, feOffset. tdf#156066 (Xisco Fauli, TDF)
## GUI
- The recent documents picklist under ▸ now shows the 5 most recent module-specific items first. The list can be configured using the **ShowCurrentModuleOnly** expert option to show only files that can be handled by the current module. tdf#56696 (Andreas Heinisch)
- Documents in the Start Center can now be pinned to show them at the beginning of the recently opened document list. To pin a document, hover the corresponding document and click on the pin icon in the top left corner. The selected document will then be shown in a separate line at the beginning of the list, along with already pinned documents. tdf#38742 (Andreas Heinisch)
- Keyboard navigation for the Special Characters dialog ( ▸ ) has been improved, and the currently selected character is now correctly announced by screen readers. tdf#153806 tdf#153918 (Michael Weghorn)
- The title of styles in the Fontwork dialog ( ▸ ) is now announced by screen readers. tdf#153657 (Michael Weghorn)
- ▸ ▸ (in Writer) and ▸ (elsewhere) was removed from the main menu and toolbars. However, the corresponding command `InsertObjectFloatingFrame` is still available in ▸ . tdf#155006 (Caolán McNamara, Red Hat)
- Sets of "Automatic" application colors can now be chosen independently from the Application Color scheme in tdf#152184 (Heiko Tietze, TDF) ▸ ▸ ▸ . Pick between "Dark" or "Light" automatic colors, or alternatively follow the system's theme with "System".
### Changes in UI strings
- Rename "Square" and "Quadratic" gradient styles to "Rectangular" and "Square (Quadratic)", respectively. tdf#154071 (Regina Henschel)
- "Quotations" paragraph style renamed to "Block Quotation". tdf#150994 (Rafael Lima)
## Localization
- Improved predefined outline styles for en_US (and most locales reference this, so will automatically benefit). MLA/Chicago-compliant choice now available, and Roman numeral levels are now right-aligned. Also fixed indents in the "Numbering IVX" style, and made the "Numbering ivx" style usable. The outline numbering button was also added to the toolbar. tdf#56258 (Justin Luth, Collabora)
### Improvements to proofing tools and language support
#### Dictionaries
- Danish dictionary was updated. (Stavekontrolden)
### New languages/locales with locale data
Available as default document language and for locale specific formatting.
- **Morisyen** {mfe-MU} [0x06B2]. tdf#154832 (Jean-Yves; Eike Rathke, Red Hat)
- **Santali** {sat-IN} [0x0646]. tdf#154987 (Prasanta Hembram; Eike Rathke, Red Hat)
### Additional languages in the language list
Available for text attribution.
- **Saraiki** {skr-PK} [0x06B0], CTL, RTL. (Eike Rathke, Red Hat)
- **Rohingya Hanifi** {rhg-Rohg-MM} [0x06B1], CTL, RTL. tdf#154031 (Eike Rathke, Red Hat)
## Scripting
### The ScriptForge libraries
An extensible and robust collection of macro scripting resources for LibreOffice to be invoked from user Basic or Python scripts. (Jean-Pierre Ledure)
The libraries expose a total of **31 services**, each with a rich set of methods and properties.
**New in LibreOffice 7.6**:
- The (new) **FormDocument** service (a form document is also known as a "Base form", but this is confusing): open (even without first opening the Base document container), close, print, export to PDF, menubar management, access to individual controls.
- The (new) **Toolbar** and **ToolbarButton** services: hide/show built-in or custom toolbars, hide/show individual toolbar buttons, get or set the script or command to execute when clicked.
- In the **Calc** service: ranges may be sorted on any number of keys. Also a new **RemoveDuplicates** method, to clear or to compact ranges, keeping only one copy of identical records.
- A new **Echo** method in the **Document** service to freeze screen updates during scripts or to change the actual mouse pointer.
- Many improvements on the **Dialog** and **DialogControl** services:
  - Support of the **Hyperlink** control type
  - Dialog controls may be resized. The height and width are expressed in *Map AppFont units*, like in the Basic IDE.
  - All the **On properties** (to specify the script to be executed when an event occurs) are now editable.
  - Dialog controls may be **created dynamically**.
  - Dialog controls may be cloned with the new **CloneControl** method.
  - A dialog can be **created** from scratch.
  - Tabulations between controls are defined at once by the new **OrderTabs** method.
The whole set of services (except when better done by native built-in functions) is made available for Python scripts with identical syntax and behaviour as in Basic.
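For instance, a user Python script can obtain a ScriptForge service and call it just as Basic would. A minimal sketch, assuming the script is run from inside LibreOffice (Tools ▸ Macros ▸ Python), where the scriptforge module is available:

```python
from scriptforge import CreateScriptService

def say_hello(*args):
    # The "Basic" service exposes Basic-like helpers such as MsgBox
    bas = CreateScriptService("Basic")
    bas.MsgBox("Hello from ScriptForge in Python")
```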
The English version of the **documentation** of the ScriptForge libraries (7.6) is fully integrated in the LibreOffice **help pages** (https://help.libreoffice.org/7.6/en-US/text/sbasic/shared/03/lib_ScriptForge.html?DbPAR=BASIC). Their translation into other languages is underway. (Alain Romedenne, Rafael Lima)
### Java
- The property `userClassPath` in the `javasettings_$OS_$ARCH.xml` file now supports (non-nested) bootstrap variables (allowing relative paths to be entered). 7795a (Samuel Mehrbrodt, allotropia)
### VBA Support
- Added support for ExportAsFixedFormat VBA function to Export As PDF. tdf#149786 (Balázs Varga, allotropia)
## Feature Removal / Deprecation
- The option for making PDF the default print job format has been removed; PDF is always used. Support for PostScript as a print job format is deprecated and will be removed in a later release. 2a405 and c3a4f
- The shortcut visibility setting (for context menus) has been removed from ▸ ▸ ▸ and defaults to the desktop environment's. The expert configuration setting `ShortcutsInContextMenus` is still available if needed (`0` to hide, `1` to show, `2` for default). tdf#152898 (Caolán McNamara, Collabora)
## LOK API
- Add memory trimming functionality for idle documents (Michael Meeks, Collabora)
- Avoid running graphics tests on startup in LOK mode (Michael Meeks, Collabora)
- Avoid unnecessary, slow whole-Writer-document off-screen renders in some cases (Michael Meeks, Collabora)
- Swap out compressed graphics in LOK mode as well as de-compressed versions (Michael Meeks, Collabora)
- Performance improvements for headless cairo rendering, avoiding PDF code-paths (Michael Meeks, Collabora)
## Platform Compatibility
### Mac
- LibreOffice 7.6 requires macOS 10.15 or newer to run.
## API Changes
- New Writer UNO command `.uno:HighlightCharDF` to highlight direct formatting where it is used in the document. (Jim Raykowski) tdf#106556
- `css.qa.XDumper::dump` got a `kind` parameter. 56e17
- Removed `.uno:CharBackgroundExt`'s secondary use to set background color. Instead use `.uno:CharBackColor` for 7.6+. tdf#85592
- Deprecated `.uno:BackColor` for setting background color in Writer. Instead use the universal `.uno:CharBackColor` for 7.6+. tdf#85592
- The C functions `rtl_string_newFromStr` and `rtl_uString_newFromStr`, and the C++ constructors for `rtl::OString(char const *)` and `rtl::OUString(sal_Unicode const *)` wrapping those functions, no longer support the undocumented behavior of accepting a null pointer string argument and treating it as an empty string. (Such calls had already been diagnosed with `std::abort` in debug builds since LibreOffice 7.2.) 6028e
- The `Gradient2` struct and the `ColorStop` struct were added to support multicolor gradients. Search for MCGR to get the related commits. For more about multicolor gradients, look at the Core / General section.
## Why I love Raku

Damian Conway
I've been quietly playing along at home with the Weekly Challenge, and this week's first task was:
Write a script that finds the first square number that has at least 5 distinct digits.
The solution to that is *(obviously!)* to lazily square every number from 1 to infinity,
then comb through each square's digits looking for five or more unique numerals,
and immediately output the first such square you find.
Which translates directly to Raku:
`1..∞ ==> map {$^n²} ==> first {.comb.unique ≥ 5} ==> say();`
But the elegance of that solution is ** not** why I love Raku.
I love Raku because, if that solution seems too scary to you (too infinite, too lazy, too concurrent, too pipelined, too Unicoded, too declarative, too functional, too much like something that an Erlang guru would code), then Raku will equally allow you to write a plain and simple version: one that's imperative, iterative, block structured, variable-driven, pure ASCII, and more-or-less exactly what you'd write in Perl, or even in C:
```
loop (my $n=1 ;; $n++) {
my $n_squared = $n ** 2;
my %unique-digits;
for (split '', $n_squared, :skip-empty) {
%unique-digits{$_}++
}
if (%unique-digits >= 5) {
say $n_squared;
last;
}
}
```
Or you could just as easily write a solution somewhere between those two extremes, at whatever level of complexity and decomposition happens to be the sweet spot in your personal comfort zone. For example:
```
sub find_special_square {
for 1..Inf -> $n {
return $n²
if $n².comb.unique >= 5
}
}
say find_special_square();
```
More than any other language I know, Raku lets you write code in precisely
the way that suits you best, at whatever happens to be your (team's) current level
of coding sophistication, and in whichever style you will later find most readable
...and therefore easiest to maintain.
And ** that's** why I love Raku.
Don't use for-loops for unbounded limits or for loops that exit prematurely. Only use them for loops whose limits are known and that will run to completion.
# Flaky Tests - A War that Never Ends

The Code Gang
Don’t you hate it when things are not deterministic? A test should consistently pass or fail if no code changes are applied. We should run our tests against a controlled environment and make assertions against an expected output. We may use a test fixture as a baseline for running tests. A test fixture is a fixed state, so the results should be repeatable. A flaky test is a test which could fail or pass for the same configuration. Such behavior can be harmful to developers because test failures do not always indicate bugs in the code. Our test suite should act like a bug detector. Non-determinism can plague any kind of test, but it’s particularly prone to affect tests with a broad scope, such as acceptance and functional/UI tests.
A good suite of tests should let you decide whether the code is ready to be released. When I have a test suite that I can trust, a successful test run gives me the green light to proceed with a release. It gives me confidence that I can refactor the code safely. In TDD, we should run all our tests after every code change. This is not always possible, but at least every now and then we have to run the whole suite of tests, and at a minimum we have to ensure that all our tests run successfully after committing our changes. If a test consistently fails, it is not a flaky test and must not be confused with one.
But how could you introduce a flaky test? Some common causes, each discussed below, are tests that don't properly wait for asynchronous content, time bombs that depend on the system clock, state leaking between tests, and hidden assumptions about test execution order.
*Continuous Integration* is the practice of merging all developer working copies into a shared mainline several times a day. A flaky test can block or delay development until it is spotted and resolved. The problem is that you do not know whether you caused the test failure or whether the test is flaky. There is no easy way to deal with flaky tests, but there are some practices that can help you spot them and deal with them.
As a very first step, re-run all failed tests with a clean system state. This is an easy way to determine whether the failed tests are consistently failing or flaky. A successful re-run does not mean that you can ignore the flaky test; it is simply an easy way to confirm that the test is indeed flaky, and you still have to deal with it. There are tools that support automatically re-running failed tests in development or CI environments that can help you get through.
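As a concrete illustration (assuming a Python suite that uses pytest with the pytest-rerunfailures plugin), a known-flaky test can be retried automatically:

```python
# Sketch assuming a pytest suite with the pytest-rerunfailures plugin
# installed (pip install pytest-rerunfailures).
import random
import pytest

@pytest.mark.flaky(reruns=3, reruns_delay=2)  # retry up to 3 times, 2 s apart
def test_flaky_dependency():
    # random.random() stands in for a call to a nondeterministic dependency
    assert random.random() > 0.1
```

The same plugin can also retry every failure in CI via `pytest --reruns 3`, without touching the test code.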
Place any spotted flaky test in a quarantined area. Teams should follow a strict process when spotting a flaky test. After you record it, you can also place the test in the quarantined area. This lets others know that the test is possibly flaky and will be investigated. But the main reason is that all other healthy tests remain trusted. This does not mean that you can postpone the investigation; someone has to pick it up shortly. You can enforce this by setting either a limit on the number of quarantined items or a time limit on how long a test may stay in quarantine.
Running tests frequently in scheduled builds at different times of day can reveal flaky tests. It is better to spot a flaky test early rather than have it emerge during a release.
In order to deal with them, you should somehow record all the tests that are flaky. Upon a failure, you have to gather all related data: logs, memory dumps, the current system state, or even screenshots in UI tests, all of which can help you investigate later what went wrong. A ticketing system works fine for storing all that data. This will also tell you how many flaky tests there are. You can create a new ticket for each flaky test so someone will pick it up.
When you have identified that a test is flaky, and the test has lived long in your codebase, you should try to figure out when it was introduced. For example, if the test has failed in your CI pipeline again, you can try to find out what code changes could have affected its behavior.
Tests that make assertions on dynamic content have to wait for the content to load. Putting a test to sleep for some time is not a good practice. UI tests are slow enough, and you don't want to make them even slower. You could use callbacks if the dynamic content provider supplies them. If there are no callbacks, you can poll in small wait intervals, as sketched below. The wait interval is the minimum time that you have to wait when content is not available, thus it should be short. But it should also be easily configurable: the test environment can change, so the wait interval will need tweaking over time.
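A minimal polling helper along these lines might look as follows (a sketch; the names and the UI call in the final comment are illustrative, not from any particular framework):

```python
import time

def wait_until(condition, timeout=10.0, interval=0.25):
    """Poll `condition` every `interval` seconds until it returns truthy,
    raising once `timeout` seconds have elapsed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval)  # the short, configurable wait interval
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# e.g. in a UI test: wait_until(lambda: page.has_text("Welcome"), timeout=5)
```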
Tests that usually pass but rarely fail are hard to reproduce. This is where the data we said should be gathered earlier can help: once we spot such tests, we have what is needed to reproduce the faulty scenario. Another way to investigate them is to run the test repeatedly until you end up with a failure, then do some post-mortem analysis to identify the root cause. Unfortunately, this procedure doesn't always succeed, but it costs little to try while you are investigating possible causes.
The best way to deal with time bombs is to wrap the system clock with routines that can be replaced with a seeded value for testing. You can use this clock stub to time travel to a particular instant and freeze there, giving your tests complete control over time's movements. That way you can synchronize your test data to the values in the seeded clock.
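For example (a sketch in Python; the class and function names are mine, not from any particular library):

```python
import datetime

class SystemClock:
    """Production clock: delegates to the real system time."""
    def now(self):
        return datetime.datetime.now(datetime.timezone.utc)

class FrozenClock:
    """Test double: always reports the seeded instant."""
    def __init__(self, frozen_at):
        self.frozen_at = frozen_at
    def now(self):
        return self.frozen_at

def is_expired(issued_at, clock, ttl=datetime.timedelta(hours=1)):
    return clock.now() - issued_at > ttl

# Time-travel to one second before expiry and freeze there:
issued = datetime.datetime(2024, 1, 1, tzinfo=datetime.timezone.utc)
clock = FrozenClock(issued + datetime.timedelta(minutes=59, seconds=59))
assert not is_expired(issued, clock)
```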
As said, a carelessly written test that does not clean up its state after execution can waste a lot of your time while you try to figure out why other tests are failing. Those tests might also wrongly assume that the system is in a vanilla state. One way to expose this kind of flakiness is to rerun all your tests in the same order in which they failed: a test might pass when run separately yet fail under a specific execution order. In general, you should configure your tests to run in random order to identify tests that are affected by other badly written tests. Most testing libraries provide a way to execute tests in random order. Use this option, as it will force you to write more resilient and stable tests.
When you have a big suite of tests, it is hard to avoid having flaky tests, especially among UI/integration tests. Usually, the rate at which flaky tests are introduced matches the rate at which they are dealt with. There should be a level of awareness in the team about flaky tests, and guarding the tests should be part of the team culture; after all, it is the team's productivity that gets affected. When you get used to seeing your pipeline red, you inevitably pay less attention to other problems as well. One recurring problematic test becomes unreliable, so unreliable that you ignore whether it passes or fails. To make things worse, others will also look at the red pipeline, notice that the failures are in non-deterministic tests, and soon lose the discipline to take any action. Once that discipline is lost, a failure in the healthy deterministic tests will get ignored too. A red pipeline should be like an alert. It is like the traffic lights: red means we should not continue development!
As a rule of thumb, if you face a flaky test, do not assume that it is a test problem. You should suspect the production code first and then the test. Sometimes a flaky test is flawless and has just revealed a bug in your code. Just remember, a bug's best place to hide is a flaky test, where developers assume something is wrong with the test rather than the code.
Eradicating Non-Determinism in Tests
No more flaky tests on the Go team
# Was the 1966-1982 Stock Market Really That Bad?

Ben Carlson
“Investment success accrues not so much to the brilliant as to the disciplined.” – William Bernstein
In January of 1966 the Dow Jones Industrial Average hit a level of 990. It would continue trading in a range of roughly 600 to 1,000 over the following 17 years. It once again reached 990 in December of 1982 before finally breaking out and heading higher.
The Dow never dropped below 1,000 again.
This long, drawn-out sideways market is one of the ultimate devil's advocate positions for those who like to argue against stocks being a solid long-term investment. Although this was technically a sideways market, we need to put some context around this time frame.
First of all, the Dow isn’t the only way to gauge the stock market. It’s a price-weighted index consisting of only 30 blue chip stocks, but it’s mostly used for nostalgic purposes today. It has a really long historical track record so it still gets publicity.
If we instead look at the S&P 500 from 1966 to 1982, things don’t look too bad from a nominal perspective:
The Dow went sideways, but the S&P actually earned a respectable 6.8% return in that time. Dividends and earnings also showed relatively healthy annual growth rates. The S&P 500 went from a price level of 92 to 140 so three-quarters of the performance came from dividend payments.
But those numbers don’t tell the entire story as inflation was out of control, especially in the late 70s and early 80s. Here’s why inflation was the real widow maker that caused this sideways environment in real terms:
One of the main aims of long-term investing is to beat inflation over time to increase your standard of living. The reason this was such a frustrating investing environment was that stocks only broke even after accounting for inflation while bonds lost nearly 40% in real terms.
So we went from stocks are still a decent investment in chart number one to stocks are a horrible investment in chart two. Let’s look at one more piece of data before putting it all together from a retirement savings perspective:
Runaway inflation is a scary issue to deal with. But you can see that wages were probably the main culprit as they grew much faster than both stocks and inflation.
The median family income was $6,900 in 1966. Assuming someone socked away 15% of annual earnings, that means in 1966 they would have been saving $1,035 per year.
With no changes to that percentage over time, the amount saved would have compounded by 8.8% per year based on wage growth. So by 1982, the amount saved jumped to nearly $4,000 a year (almost $10,000 in today's dollars).
Increasing the savings rate by just 20% of each annual raise (so keeping the remaining 80% for spending purposes) means that the 15% savings rate would have grown to 20% of income by 1982 (or almost $6,000 in 1982 terms and $15,000 in today's dollars). The quick check below confirms the base-case arithmetic.
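A short sketch using the article's rounded inputs:

```python
# Checking the article's compounding, with its rounded inputs.
savings_1966 = 6900 * 0.15                        # 15% of the $6,900 median income
savings_1982 = savings_1966 * (1 + 0.088) ** 16   # 8.8%/yr wage growth, 16 years
print(round(savings_1966), round(savings_1982))   # ~1035 and ~3990: "nearly $4,000"
```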
That’s where investors made up ground from the underwhelming performance in stocks. Remember, the stock market is simply a place to park your savings over time. Most likely, the amount you save will have a far greater impact on your ending portfolio balance than a few extra basis points of investment performance.
It’s interesting to note that the 1966-82 period of low stock returns, high inflation, and high wage growth is basically the exact opposite of the current environment of high stock returns, low inflation and stagnating wages.
You’ll notice that out of these three the one you have the most control over is how much you save.
Many smart people in the industry are predicting lower investment returns over the next decade or so. Who knows what will happen, but it makes sense to prepare for that possibility.
One of the most interesting scenarios over the next few years would be if the economic recovery really takes off, the job market improves and wage increases ultimately cause lower stock market returns. In that situation everybody is confused and many investors are left extremely frustrated.
We’ll never see another environment exactly like the 1966-1982 period, but we will definitely see periods of underwhelming market performance. A successful investment plan includes preparing yourself for a number of different scenarios so you don’t overreact when things don’t go as planned.
As always, long-term performance is mostly about your reactions, not necessarily your actions.
# Vigilante Malware

James Darvell
Vigilante. The word itself conjures up images of a man in a mask, leaping across rooftops as he chases wrongdoers, dancing with the devil in the pale moonlight. In films and on TV, the vigilante is usually the character we support. But would you welcome a vigilante into your home in real life?
The question is not as hypothetical as it may seem. In a fascinating turn of events, security firm Symantec recently published the story of an exceptional piece of malware that goes by the name Linux.Wifatch.
Wifatch was discovered a year ago by an independent researcher, but Symantec has spent more time studying it after it infected one of the company's honeypot machines. Wifatch targets embedded Linux devices, such as home Wi-Fi routers and Internet of things (IoT) devices. Once it has gained a foothold on a device, it alters other software and connects to a peer-to-peer network, downloading payloads and receiving commands from the malware's author.
Wifatch is designed to avoid casual detection. The process runs under a false name and is designed to crash any debugging tools that try to inspect the process in memory. Tracking down the location of the files on the filesystem is not easy, and when you do find them, you need to reverse-engineer the compression routine to discover what's inside.
None of that makes the malware exceptional, however. Instead, it's the nature of the payloads that it downloads. You see, while other malware downloads viruses and other horrible exploits, Wifatch installs security patches, terminates insecure services, such as telnet, and eliminates any other malware infections it might find. It also alerts users to update their firmware and change their passwords. In other words, Wifatch seems to be working to make infected systems more secure.
But is Wifatch a good thing? Don't forget that it propagates by exploiting your system, installs itself on your devices without your consent, and then makes changes and modifications without your knowledge.
And that brings us back to the nature of vigilantes: people who fight evil from the shadows, who consider themselves "above" the law, recognizing no authority other than their own conscience.
There are established ways that security hackers can contribute to the common good. Discovering security holes and then publicly publishing them to security boards helps software developers improve their products, leading to safer systems for all of us. The key points here are that all of those actions are open. Information is publicly shared with people so they can make their own security decisions.
To me, Wifatch is more like a doctor who creeps into your house when you're asleep, silently injects you with the latest vaccines, throws away your cigarettes and leaves a note in your fridge recommending a healthier diet.
# dev.to-clone
A DEV.to clone created with MongoDB, Express, React, Node, and Socket.io
- UI: React
- Routing: React Router
- Real-time Notifications: Socket.io
- Backend: Express
- Database: MongoDB
- ORM: Mongoose
- Image hosting: Cloudinary
- Login / Signup
- Google / Facebook / Twitter / GitHub OAuth
- Create / Remove / Update / Delete Post
- Like / Unicorn / Bookmark Post
- Reading List
- Create / Add Tags to Post
- Follow Tags
- Find Posts by Tags
- Comment / Replies
- Like Comment
- Edit / Delete Comment
- View Profile
- Edit Profile
- Follow User
- Search Posts
- Real-time Notifications
- Skeleton Loading
Clone the repo to your local machine using `https://github.com/eknoorpreet/dev.to-clone`
Install npm dependencies in both `client` and `server` subdirectories using `npm install`
```
$ cd server && npm install
$ cd client && npm install
```
Set up a MongoDB database either locally or online via MongoDB Atlas
Create a Cloudinary account
Create a new project on Google Cloud Platform
Create a `.env` file in both `client` and `server` subdirectories
Set up the following environment variables
In `client/.env`:
```
REACT_APP_BASE_URL=http://localhost:5000/api
REACT_APP_SOCKET_IO_URL=http://localhost:5000
REACT_APP_GOOGLE_CLIENT_ID=<GOOGLE_CLIENT_ID>
REACT_APP_GITHUB_CLIENT_ID=<GITHUB_CLIENT_ID>
REACT_APP_FB_APP_ID=<FACEBOOK_CLIENT_ID>
```
In `server/.env`:
```
DB_USER = //user name for db
DB_PASSWORD = //password for db
DB_NAME = // name for db
JWT_KEY = //random string
COOKIE_KEY = //random string;
NODE_ENV = 'development';
CLIENT_URL = //the port of React app, ex: 'http://localhost:3000';
//cloundiary will provide you with the following credentials
CLOUDINARY_CLOUD_NAME = //cloud name
CLOUDINARY_API_KEY = //API key
CLOUDINARY_API_SECRET; //API secret
//Google will provide you with the following credentials
GOOGLE_API_KEY = //API key
//Github will provide you with the following credentials
GH_CLIENT_ID = //Github's Client ID
GH_CLIENT_SECRET = //Github's Client Secret
// Twitter will provide you with the following credentials
TWITTER_CONSUMER_KEY = //Twitter's Consumer key
TWITTER_CONSUMER_SECRET = //Twitter's Consumer Secret
```
Finally, run `npm start` in both `client` and `server` subdirectories
```
$ cd server && npm start
$ cd client && npm start
```
# Dabl 17+
## Live Face To Face
## Tavish Software Inc.
- Free
## Description
Welcome to Dabl, the place to date honestly!
In a dating scene defined by snap judgements, Dabl is the place where the real you is all that matters. Photos are great, but you can show so much more. We believe genuine connection is born from being honest in who you are and what you really want.
That's why we’ve given our users the spotlight.
VIDEOS
Tired of guessing what your match is like based on photos and text? We were too. Dabl videos let you show your fun side. People are expressive and we want to highlight that. Whether you're telling a crazy story or showing off those culinary skills we want to see what makes you great, and we bet your dates will too.
QUESTIONS
There’s a time for small talk and a time for pickup lines, but it gets a little repetitive don't you think? Dabl asks questions (some goofy, some serious) because we believe this is how you connect on a deeper level.
FREE
Dabl is free to download and use, and no, that doesn't mean ads and limits.
If you’re ready to date beyond swipes and photos download Dabl today!
## What’s New
Version 5.0.2
Bug fixes and library video selection.
## Ratings and Reviews
### Unique and interesting
Feel like this app is more for meeting people than just another swipe dating app. Ya see both sexes so I feel like it’s more focused on meeting people with similar interests which is great.
### Great to see app developers listening to feedback
EDIT: App has completely changed, lost old account, has become a dating app now? And can’t make a simple profile without videos. Completely ruined unfortunately, seems to have strayed away from original intent which in itself isn’t too bad but not being able to make a profile without videos is a bad thing.
Original:
After the latest update, the app is now arguably even better than it was originally. The only minor issue I have is that there are many users and it takes a long time to keep swiping through them all with no idea how many more there are. If there was some way to filter/sort them by common interests etc. that would be nice.
### Developer Response
Hello Heisenberg646,
Thank you for your honest feedback: after speaking with more of our users we are pushing an update in which you see cool people in your neighborhood, not just folks you cross paths with.
Apologies for the direction we went down. We believe in admitting to our mistakes and fixing them as soon as we can, so keep a lookout 👀
### Found other 215 kids!
I joined dabl and went live with the MATH215 tag. Within a day I had a few other 215 kids reach out with the same tag and now we do work together! 5/5
### Developer Response
Thanks Derek! Appreciate the kind words.
## App Privacy
The developer, Tavish Software Inc., indicated that the app’s privacy practices may include handling of data as described below. For more information, see the developer’s privacy policy.
### Data Used to Track You
The following data may be used to track you across apps and websites owned by other companies:
- Identifiers
### Data Linked to You
The following data may be collected and linked to your identity:
- Location
- Contact Info
- User Content
- Identifiers
- Usage Data
- Sensitive Info
- Diagnostics
- Other Data
Privacy practices may vary, for example, based on the features you use or your age. Learn More
## Information
- Seller
- Tavish Software Inc.
- Size
- 40.1 MB
- Category
- Lifestyle
- Compatibility
- iPhone
- Requires iOS 14.1 or later.
- iPod touch
- Requires iOS 14.1 or later.
- Apple Vision
- Requires visionOS 1.0 or later.
- Languages
- English
- Age Rating
- 17+ Frequent/Intense Mature/Suggestive Themes
- Copyright
- © 2021 Tavish Software Inc.
- Price
- Free
| true | true | true |
Welcome to Dabl, the place to date honestly! In a dating scene defined by snap judgements, Dabl is the place where the real you is all that matters. Photos are great, but you can show so much more. We believe genuine connection is born from being honest in who you are and what you really want.…
|
2024-10-12 00:00:00
|
2022-03-22 00:00:00
|
website
|
apple.com
|
App Store
| null | null |
|
9,691,595 |
http://www.wired.com/2015/06/kaspersky-finds-new-nation-state-attack-network/
|
Kaspersky Finds New Nation-State Attack—In Its Own Network
|
Kim Zetter
|
Researchers at Kaspersky Lab in Russia have discovered yet another new nation-state attack attributed to members of the infamous Stuxnet and Duqu gang. But this time the perpetrators were hiding in plain sight---inside the security firm's own networks.
Kaspersky says the attackers became entrenched in its networks some time last year. For what purpose? To siphon intelligence about nation-state attacks the company is investigating---a case of the watchers watching the watchers who are watching them. They also wanted to learn how Kaspersky's detection software works so they could devise ways to avoid getting caught. Too late, however: Kaspersky found them recently while testing a new product designed to uncover exactly the kind of attack the intruders had launched.
The attackers appear to be the same group that created Duqu, spyware discovered in 2011 that was used to hack a certificate authority in Hungary, as well as targets in Iran and Sudan, and that shared a number of similarities with Stuxnet, the famed digital weapon that sabotaged Iran's nuclear program. The team's handiwork popped up again in 2012 in two sophisticated spy tools Kaspersky helped expose---the massive Flame surveillance platform that infected thousands of victims over a period of five years and the mysterious Gauss attack, which contained a payload so securely locked that it's yet to be deciphered.
The hack against Kaspersky bears some of the hallmarks of the 2011 Duqu attack, including sharing an algorithm and large amounts of the same code. But where the original Duqu consisted of just six modules, Duqu 2.0, as Kaspersky is calling it, is a massive, 19-megabyte toolkit with plugins for various reconnaissance and data theft activities. All of these are stored in and operated stealthily from inside an infected machine's memory in order to bypass detection tools that might otherwise uncover them if they were stored on the machine's hard drive. The attackers also appear to have used at least three zero-day exploits to conduct their attack, as well as a clever technique to surreptitiously extract data remotely and communicate with infected machines.
"The entire code of this [attack] platform is some of the best we have seen ever," Costin Raiu, director of the company's Global Research and Analysis Team, told WIRED. "It is incredibly well written. Almost no mistakes anywhere."
Kaspersky is still trying to determine how much data the attackers stole. The thieves, as with the previous Duqu 2011 attack, embedded the purloined data inside blank image files to slip it out, which Raiu says "makes it difficult to estimate the volume of information that was actually transferred." But at least, he says, it doesn't appear that the attackers were out to infect Kaspersky customers through its networks or products. Kaspersky claims to have more than 400 million users worldwide.
Kaspersky wasn't the only victim of Duqu 2.0. Based on data the company collected from its customers, the attackers also struck a series of hotels and conference venues, each of them a location where members of the UN Security Council met in the past year to negotiate Iran's nuclear program. That program is a recurring interest for the attackers behind the Duqu code, which shouldn't come as a big surprise. The US and Israel reportedly were behind Stuxnet, but various researchers have long suspected that Israel alone was behind the Duqu code. The focused spying on the nuclear negotiations, from which Israel was excluded, would seem to support this theory.
Additionally, the security firm Symantec, which obtained samples of Duqu 2.0 provided by Kaspersky, uncovered more victims of the targeted attack code among its own customers, and found that some of these victims were in the US---a fact that would be cause for even more concern if the attack were perpetrated by the US government.
Over the last five years, Kaspersky has made a name for itself exposing one nation-state attack after another, including Stuxnet, Duqu, Flame, Gauss, Regin and the Equation Group---many of them seemingly launched by the US and its UK and Israeli allies. It was perhaps inevitable that Kaspersky eventually would be targeted itself.
Kaspersky uncovered the breach after an engineer, testing a new product on a company server, spotted anomalous traffic that caused him to further investigate. Eventually the company determined that a couple dozen Kaspersky systems had been infected. The company won't say when exactly the intrusion began to preserve the integrity of the investigation, but Raiu says they're working with law enforcement agencies in several countries to track the breach of Kaspersky as well as other victims. The company has also filed police complaints in Russia and the UK, where it also has an office.
The infection of Kaspersky unfolded like a precision campaign. The attackers first targeted an employee in one of the company's Asia-Pacific offices, likely using a spear-phishing attack and zero-day exploit to breach the system. The employee's machine had all the latest software patches installed, but zero-day exploits target vulnerabilities that are yet unknown to a software maker, and therefore have no patches available to seal them.
Another indication that a spear-phishing email was used was the fact that while Kaspersky was investigating the breach, the attackers wiped the mailbox and browsing history from the infected employee's system, preventing Kaspersky from fully analyzing it.
The wipe occurred just four hours before Kaspersky identified the employee's machine as "patient zero," suggesting the intruders knew they'd been caught and were racing to eliminate evidence before Kaspersky could find it. Raiu suspects they may have been tipped off when Kaspersky disconnected many of its critical systems from the Internet after discovering the breach. He notes, however, that the company has backups and logs of the employee's system, and once they're able to compile and review them, he's confident they'll produce evidence of how the attackers got in.
From this first infected system, the attackers leapfrogged to others in the network, likely using a second zero-day exploit to do this. "We were able to map the malware jumping from one computer to another based on event logs," Raiu says.
He thinks they used an exploit targeting a vulnerability in the Kerberos protocol, which Microsoft patched last November after the attackers had already used it. The hole would have allowed them to gain elevated privileges on a domain controller server, which would have provided them with credentials to target other systems. Although Kaspersky found no samples of such an exploit on their system, they saw indications that a domain controller attack had occurred.
Once the attackers found a computer of interest, they used another zero-day exploit to install their toolkit in memory from kernel mode, the deepest layer of a machine. Kaspersky reported this zero-day to Microsoft several weeks ago, for which the software vendor issued a patch yesterday. Kaspersky had waited for Microsoft to issue the patch before going public with news of the breach and the zero-day exploit.
Jumping into kernel-mode to install malware like this will often trigger a detection system like Kaspersky's, so the attackers used a creative technique to bypass Kaspersky's antivirus software and trick it into believing the behavior was normal. The malware in fact checked for the presence of more than a dozen antivirus products from different vendors to determine the best method to bypass detection. Kaspersky has described these techniques in a blog post and paper published today, which also discuss all the ways in which Duqu 2011 and 2015 are alike.
Once the toolkit was loaded into the infected machine's memory and launched, all traces of the installer and malware were erased from the hard disk. The fact that the attackers ran their entire operation from memory after this step is a sign, Raiu says, that they had high confidence in their code and the stability of their platform.
Not every system got the full 19-megabyte package. In some cases, the attackers only installed a small backdoor. These are the systems they used only to explore further into a network. But once they found a system of interest, they installed the full package. There appeared to be no middle ground, Raiu notes.
"It's pretty crazy. It has a lot of modules that may not be necessarily relevant to us, but nevertheless they deployed the entire payload packet [on our systems]," he says. Ordinarily, attackers install as few tools as possible to maintain a low profile. But Raiu says the attackers probably didn't care in this case because they believed their chances of being detected were "close to zero."
This was one risky move the attackers took. But another one was storing all of their malware only in memory. This meant that any time an infected system got rebooted, the malware would disappear. With nothing on disk to re-install it, the attackers ran the risk of losing the infected machine. So to combat this, they stored a driver on another machine on the network, and any time an infected machine got rebooted, the driver could reach out to a domain controller on the network and relaunch an infection on the cleaned machine.
The same driver also served a second purpose. It helped the attackers communicate stealthily and remotely with infected networks. Often, criminal hackers will have every infected machine on a network communicate with their external command-and-control server. But large amounts of traffic like this can raise alerts. So the Duqu 2.0 attackers limited the traffic by using this driver to tunnel communication to and from the network.
They would first send one of two "magic strings" to the driver---either "romanian.anti-hacker" or "ugly.gorilla"---from an IP address in Jakarta or Brazil. The strings triggered the driver to add the IP addresses to a whitelist so communication to them wouldn't be flagged. Then they used Windows pipes sessions to tunnel through the driver to communicate with other machines on the network. They also siphoned data out of the network in this way, in order to shield their activity. Instead of multiple machines communicating with the external command servers, only the machine with the driver would be seen communicating with it.
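To make that gatekeeping concrete, here is a purely conceptual sketch reconstructed from the description above; Duqu's actual component was a Windows kernel driver, not JavaScript, and none of these function or variable names come from the malware itself.

```
// Conceptual reconstruction only: not Duqu's actual code.
const MAGIC = new Set(['romanian.anti-hacker', 'ugly.gorilla']);
const whitelisted = new Set();

function relayToInfectedPeers(payload) {
  /* tunnel over Windows pipe sessions to other machines (omitted) */
}

function onInboundPacket(srcIp, payload) {
  if (MAGIC.has(String(payload).trim())) {
    whitelisted.add(srcIp); // this sender's traffic is no longer flagged
  } else if (whitelisted.has(srcIp)) {
    relayToInfectedPeers(payload); // one visible endpoint for the whole network
  }
  // everything else is ignored, keeping the driver's footprint minimal
}
```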
The 19-megabyte assault kit contains a complete set of specialized modules designed to map systems and networks, harvest passwords and other credentials, snap screenshots, read and write content and siphon text from emails and documents, among other things. They've found modules for infecting both the 32-bit and 64-bit versions of Windows, but so far found no modules for infecting Mac systems.
Some of the modules are so sophisticated that Kaspersky hasn't been able to reverse-engineer them yet. One of them appears to be designed to interact with some type of SCADA system, Raiu says. "This could be a security system in a hotel or surveillance or security related. But it can also be some kind of a new Stuxnet payload."
The attackers were primarily interested in Kaspersky's work on APT nation-state attacks--especially with the Equation Group and Regin campaigns. Regin was a sophisticated spy tool Kaspersky found in the wild last year that was used to hack the Belgian telecom Belgacom and the European Commission. It's believed to have been developed by the UK's intelligence agency GCHQ.
The Equation Group is the name Kaspersky gave an attack team behind a suite of different surveillance tools it exposed earlier this year. These tools are believed to be the same ones disclosed in the so-called NSA ANT catalogue published in 2013 by journalists in Germany. The interest in attacks attributed to the NSA and GCHQ is not surprising if indeed the nation behind Duqu 2.0 is Israel.
The Duqu 2.0 attackers were also curious about a new secure operating system Kaspersky is developing for use in industrial control systems and critical infrastructure and they also wanted to study its KSN system. The Kaspersky Security Network is an opt-in system that gathers data from customers about new threats infecting them. The company uses it to create maps outlining the geographical reach of various threats. "It's one of our essential core technologies for fighting APT [advanced persistent threats]," Raiu says.
Their curiosity wasn't limited to Kaspersky's systems, though. Kaspersky found Duqu 2.0 infections on about a dozen customers, though the company won't identify the countries where they reside. Victims uncovered so far fall into two types: those who appear to have some connection to Iran's nuclear program; and technology companies that appear to have been attacked for some utilitarian purpose. One victim in this category is an industrial control system manufacturer in the Asian Pacific. "They are a very, very interesting target. We don't know if they are the final target or because they make interesting hardware that they sell to other countries," Raiu says. The attackers also targeted a telecom in the Middle East.
There was one victim, however, that didn't fit the profile of other targets. Raiu says this was an international gathering for the 70th anniversary of the liberation of the Auschwitz-Birkenau concentration camps. The focus in this case may have been on the scores of VIPs who attended the event, including presidents and prime ministers. "Pretty much everyone was there with the exception of Obama and Putin," Raiu notes.
In addition to all of these targets, Symantec uncovered victims in the UK, Sweden, Hong Kong and India. Notably, it found telecom victims in Europe and Africa, an electronics firm in South East Asia, and multiple infections in the US, including one organization where developers working on mobile platforms were infected. Some of the infections dated back to 2013, according to Vikram Thakur, senior manager for the company's Security Response team.
Based on the number of victims found so far, Kaspersky estimates that the total number is likely less than 100.
But perhaps the most interesting targets were the venues hosting the P5+1 meetings. P5+1 refers to the five permanent members of the UN Security Council plus Germany, who have been in negotiations with Iran over its nuclear activities. Raiu wouldn't identify the hacked venues, but the negotiations have occurred in many places over the last 18 months, including the Coburg Palace Hotel in Vienna; the Montreux Plaza Hotel, Hotel Intercontinental, and President Wilson Hotel in Geneva; the Beau-Rivage Palace Hotel in Lausanne and the Al Bustan Palace Ritz-Carlton Hotel in Muscat, Oman.
Earlier this year, the *Wall Street Journal* reported that Israel had spied on the closed-door talks about Iran's nuclear program, but was vague on details about how this might have occurred. The Duqu 2.0 spy operation is a possible clue.
Raiu says each of the infections began within three weeks before the P5+1 meetings occurred at that particular location. "It cannot be coincidental," he says. "Obviously the intention was to spy on these meetings."
Initially Kaspersky was unsure all of these infections were related, because one of the victims appeared not to be part of the nuclear negotiations. But three weeks after discovering the infection, Raiu says, news outlets began reporting that negotiations were already taking place at the site. "Somehow the attackers knew in advance that this was one of the [negotiation] locations," Raiu says.
Exactly how the attackers spied on the negotiations is unclear, but the malware contained modules for sniffing WiFi networks and hijacking email communications. But Raiu believes the attackers were more sophisticated than this. "I don't think their style is to infect people connecting to the WiFi. I think they were after some kind of room surveillance---to hijack the audio through the teleconference or hotel phone systems."
One thing is clear: with Kaspersky's exposure of Duqu 2.0, the attackers will now have to find a new tool to conduct their espionage. Though given the recent proficiency of Kaspersky and other companies in discovering these tools, it may not be long before the next one is exposed, too.
| true | true | true |
Kaspersky says the attackers became entrenched in its networks some time last year.
|
2024-10-12 00:00:00
|
2015-06-10 00:00:00
|
article
|
wired.com
|
WIRED
| null | null |
|
10,681,634 |
https://github.com/mafintosh/peerflix
|
GitHub - mafintosh/peerflix: Streaming torrent client for node.js
|
Mafintosh
|
Streaming torrent client for Node.js
```
npm install -g peerflix
```
Peerflix can be used with a magnet link or a torrent file. To stream a video with its magnet link use the following command.
```
peerflix "magnet:?xt=urn:btih:ef330b39f4801d25b4245212e75a38634bfc856e" --vlc
```
Remember to put `"` around your magnet link, since it usually contains `&`.
`peerflix` will print a terminal interface. The first line contains the address of an HTTP server. The `--vlc` flag ensures VLC is opened when the torrent is ready to stream.
To stream music with a torrent file use the following command.
```
peerflix "http://some-torrent/music.torrent" -a --vlc
```
The `-a` flag ensures that all files in the music repository are played with VLC. Otherwise, if the torrent contains multiple files, `peerflix` will choose the biggest one.
To get a full list of available options run peerflix with the help flag.
```
peerflix --help
```
Examples of usage could be:
```
peerflix magnet-link --list # Select from a list of files to download
peerflix magnet-link --vlc -- --fullscreen # will pass --fullscreen to vlc
peerflix magnet-link --mplayer --subtitles subtitle-file.srt # play in mplayer with subtitles
peerflix magnet-link --connection 200 # set max connection to 200
```
If you want to build your own app using streaming BitTorrent in Node, you should check out torrent-stream; a minimal sketch follows.
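The sketch below is based on torrent-stream's documented API (verify the exact signatures against its own README) and reuses the magnet link from the example above.

```
var torrentStream = require('torrent-stream');

var engine = torrentStream(
  'magnet:?xt=urn:btih:ef330b39f4801d25b4245212e75a38634bfc856e'
);

engine.on('ready', function () {
  // Mimic peerflix's default behavior: pick the biggest file in the torrent.
  var file = engine.files.reduce(function (a, b) {
    return a.length > b.length ? a : b;
  });
  file.select(); // prioritize downloading this file's pieces
  file.createReadStream().pipe(process.stdout);
});

engine.on('idle', function () {
  engine.destroy(); // all selected pieces have been downloaded
});
```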
Chromebooks are set to refuse all incoming connections by default - to change this:
```
sudo iptables -P INPUT ACCEPT
```
If you want to use peerflix on your Chromecast, check out peercast or castnow.
MIT
| true | true | true |
Streaming torrent client for node.js. Contribute to mafintosh/peerflix development by creating an account on GitHub.
|
2024-10-12 00:00:00
|
2013-03-14 00:00:00
|
https://opengraph.githubassets.com/2bf9fee82f42fdf944d3f1c7e90e038bce32fdf307594b75f92e9c077fdb5b7e/mafintosh/peerflix
|
object
|
github.com
|
GitHub
| null | null |
39,651,869 |
https://www.wsj.com/articles/underdog-who-beat-biden-in-american-samoa-used-ai-in-election-campaign-b0ce62d6
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
3,577,397 |
http://blogs.scientificamerican.com/observations/2012/02/09/a-deceptive-individual-steve-jobss-fbi-file/?WT.mc_id=SA_Facebook
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,431,184 |
http://www.forbes.com/sites/walterloeb/2013/09/23/nordstrom-how-to-remain-relevant-in-a-tech-savvy-world/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,041,778 |
http://news.google.com/newspapers?id=MDtkAAAAIBAJ&sjid=a-QDAAAAIBAJ&pg=5471,3084268
|
The Sydney Morning Herald - Google News Archive Search
| null |
The Sydney Morning Herald - Aug 5, 1987
| true | true | true | null |
2024-10-12 00:00:00
|
2024-01-01 00:00:00
| null | null | null | null | null | null |
10,891,329 |
http://phys.org/news/2016-01-gravitational-rumors-ripple-science-world.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
30,514,533 |
https://lareviewofbooks.org/article/a-private-gentleman-on-the-trials-of-harry-s-truman/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
40,370,793 |
https://health.osu.edu/wellness/exercise-and-nutrition/cooking-cicadas
|
How to cook cicadas, which, yes, are edible
|
Jim Warner
|
Looking for something unusual to wow — or gross out — family and friends at your next backyard barbecue? How about cooking up some earthy, crunchy cicadas?
When broods of cicadas emerge on their 17-year cycles, it can be a unique opportunity to try an unusual, low-fat protein source.
Some things crossed my mind when I started writing this:
Narrowing of the mind, widening of the waist. I consider cicadas occasional gifts from the food gods, along with morels, asparagus and ramps.
As a kid growing up in a small town in Pennsylvania, I remember the sounds of the “locusts,” as we called them, as we were riding our bikes, building forts in the woods or hanging out at the swimming hole. It was the sound of summer.
We’d take the empty cicada shells and scare my friend’s younger sister by putting them in her hair, or just showing her one in our hand. She’d run away screaming. (I still feel kind of bad about that.)
My first experience with eating what I’ll call my “gateway bug” was a gift of spicy crickets from a couple of our chefs at the Ohio State Wexner Medical Center. I thought they were great, but I had to add them to some salty snack mix. As I grew used to the flavor and idea of eating bugs, I soon was able to proudly pop a handful of crickets into my mouth for a quick protein-packed snack.
Cicadas and many other insects such as ants, crickets and grasshoppers are great sources of protein, hugely abundant and earth friendly. Insects have been delicacies for many cultures for thousands of years. Even today, some people enjoy the guilty pleasure of chocolate-dipped bugs, eating them just like chocolate-covered pretzels.
Your best bet is to find a wooded area away from older homes to lessen the chances for potential lead absorption from chipped paint. Steer clear of well-manicured yards due to the potential for lawn chemicals and other contaminants the cicadas may have absorbed.
Cicadas are at their most tender just out of the shell. The tough exoskeleton is not very tasty. Watch them climb up a tree and begin to molt from their outer shell. When they’re out of their shell, gently grab the soft bodies, blanch them in boiling water for one minute, then put them into a zip-lock bag and place them in the freezer before preparing them to cook.
If you fancy soft shell crabs, which are Atlantic Blue crabs that have just molted their old shell, then you may enjoy a cicada stir fry. Cicadas are related to shrimp, crayfish, lobsters, and other arthropods, so if you’ve ever eaten those, you’re just one step away from trying cicadas.
I’m tempted to say they taste just like chicken, but they do have a nutty flavor and a nice crunch when sautéed in olive oil with a few seasonings tossed in for good measure. Old Bay seasoning is always a winner. However, I’m not so sure you can eat them without accompaniments. Go ahead and sauté them for a minute or two and top a nice dish of leafy greens with some crispy cicadas.
But please don’t use ranch dressing. After all, they’ve been waiting 17 years for this big dance, and you shouldn’t humiliate them one last time. A bit of extra virgin olive oil, some fresh lemon juice and a few cracks of black pepper will do just fine.
Believe it or not, yes. *Cooking with Cicadas* by R. Scott Frothingham includes gourmet recipes and explains how to prepare cicadas for snacks, meals and desserts. This cookbook features a variety of recipes from Italian and Moroccan to Asian and Mexican with enticing names including Cicada Frittata, Pasta a la Cicada, Cicada Curry, Cicada Tacos, Cicada Pad Thai and Caramel Cicada Crunch.
University of Maryland graduate school student Jenna Jadin published *Cicada-Licious: Cooking and Enjoying Periodical Cicadas* in 2004. Her book starts with a disclaimer to first consult with your doctor before eating cicadas, particularly for potential allergic reactions to substances within the cicadas.
People in many countries today consume cicadas, from Thailand to Mexico to northern African regions. Bug love has even made it to the great American pastime: a fresh cup of cooked insects is now available for $4 at a Major League Baseball concession stand. According to ESPN, the Seattle Mariners are selling toasted gluten-free grasshoppers tossed in chili-lime salt.
My suggestion for those first-timer cicada connoisseurs is to go slowly, have a friend nearby for support, spice ‘em up and enjoy. Otherwise, you’ll have to wait 17 more years before you can try eating something this exotic again.
| true | true | true |
Looking for something unusual to wow – or gross out – family and friends at your next backyard barbecue? How about cooking up some earthy, crunchy cicadas?
|
2024-10-12 00:00:00
|
2024-04-02 00:00:00
|
website
|
health.osu.edu
|
The Ohio State University
| null | null |
|
37,494,652 |
https://magickimg.com/face2famous/
|
magickimg
| null |
# Face2Famous
## How to Transform Your Face into an Iconic Figure with AI?
Unleash your inner art aficionado. With Magickimg's Face2Famous, your face melds with world-renowned artworks and legendary figures, creating a fusion masterpiece.
### 01. Upload
Capture your best angle and introduce it to Face2Famous. We smoothly process both JPG and PNG image types.
### 02. AI Transforms Photo
Choose from an array of timeless art pieces or iconic figures. Watch as our AI crafts a fusion that's nothing short of magic.
### 03. Download
Marvel at your transformed portrait and share this unique blend with the world. Be the talk of the town with your very own art fusion!
## Amazing Quality
- AI Art Generator
- Photo Restoration
- Headshot Generator
- Face2Emoji
- Face2Famous
- Face2Cartoon
- Remove Background
- Colorize Image
## Features
### AI Image Enhancement
Obtain excellent results through Magickimg's advanced artificial intelligence algorithm. Discover new potential in every image.
### Transform With Deep Learning
Let our intelligent deep learning systems evolve the way you work with photos. Magickimg drives incredible improvements.
### Fast, Easy to Use
Enhance images in just a few clicks with Magickimg's user-friendly interface. Designed for simplicity and individual needs.
### Free Trial
Try Magickimg for free, then choose the subscription plan that's right for you. The choice is yours.
### Unrivaled Image Resolution
Take image quality to new heights with Magickimg's unmatched AI-powered resolution improvements. Crisper, sharper, more vivid.
### Secure and Reliable Service
Your privacy and data security are our top priority. Magickimg processes your photos securely for peace of mind.
Frequently asked questions
## Everything you need to know
## What is magickimg?
## Is this app free?
## What do you do with my photos after they're generated?
Uploaded and generated files are deleted after 1 hour. We do not store your photo data, so don't forget to download your files.
## Does Magickimg have an affiliate program?
## Can I request a refund?
Subscribers can cancel subscription at anytime, with cancellations taking effect after the current billing cycle ends.
Please ensure to fully evaluate our services before subscribing, as our high GPU processing costs prevent us from offering refunds.
Something we didn't cover? We're happy to hear your feedback.
| true | true | true |
Boost Your Images Powered by AI - magickimg
|
2024-10-12 00:00:00
| null | null | null |
Lvwzhen
| null | null |
|
30,529,819 |
https://www.theguardian.com/commentisfree/2022/mar/02/civilised-european-look-like-us-racist-coverage-ukraine
|
They are ‘civilised’, ‘European’ and ‘look like us’: the racist coverage of Ukraine | Moustafa Bayoumi
|
Moustafa Bayoumi
|
While on air, CBS News senior foreign correspondent Charlie D’Agata stated last week that Ukraine “isn’t a place, with all due respect, like Iraq or Afghanistan, that has seen conflict raging for decades. This is a relatively civilized, relatively European – I have to choose those words carefully, too – city, one where you wouldn’t expect that, or hope that it’s going to happen”.
If this is D’Agata choosing his words carefully, I shudder to think about his impromptu utterances. After all, by describing Ukraine as “civilized”, isn’t he really telling us that Ukrainians, unlike Afghans and Iraqis, are more deserving of our sympathy?
Righteous outrage immediately mounted online, as it should have in this case, and the veteran correspondent quickly apologized, but since Russia began its large-scale invasion on 24 February, D’Agata has hardly been the only journalist to see the plight of Ukrainians in decidedly chauvinistic terms.
The BBC interviewed a former deputy prosecutor general of Ukraine, who told the network: “It’s very emotional for me because I see European people with blue eyes and blond hair … being killed every day.” Rather than question or challenge the comment, the BBC host flatly replied, “I understand and respect the emotion.” On France’s BFM TV, journalist Phillipe Corbé stated this about Ukraine: “We’re not talking here about Syrians fleeing the bombing of the Syrian regime backed by Putin. We’re talking about Europeans leaving in cars that look like ours to save their lives.”
In other words, not only do Ukrainians look like “us”; even their cars look like “our” cars. And that trite observation is seriously being trotted out as a reason for why we should care about Ukrainians.
There’s more, unfortunately. An ITV journalist reporting from Poland said: “Now the unthinkable has happened to them. And this is not a developing, third world nation. This is Europe!” As if war is always and forever an ordinary routine limited to developing, third world nations. (By the way, there’s also been a hot war in Ukraine since 2014. Also, the first world war and second world war.) Referring to refugee seekers, an Al Jazeera anchor chimed in with this: “Looking at them, the way they are dressed, these are prosperous … I’m loath to use the expression … middle-class people. These are not obviously refugees looking to get away from areas in the Middle East that are still in a big state of war. These are not people trying to get away from areas in North Africa. They look like any.” Apparently looking “middle class” equals “the European family living next door”.
And writing in the Telegraph, Daniel Hannan explained: “They seem so like us. That is what makes it so shocking. Ukraine is a European country. Its people watch Netflix and have Instagram accounts, vote in free elections and read uncensored newspapers. War is no longer something visited upon impoverished and remote populations.”
What all these petty, superficial differences – from owning cars and clothes to having Netflix and Instagram accounts – add up to is not real human solidarity for an oppressed people. In fact, it’s the opposite. It’s tribalism. These comments point to a pernicious racism that permeates today’s war coverage and seeps into its fabric like a stain that won’t go away. The implication is clear: war is a natural state for people of color, while white people naturally gravitate toward peace.
It’s not just me who found these clips disturbing. The US-based Arab and Middle Eastern Journalists Association was also deeply troubled by the coverage, recently issuing a statement on the matter: “Ameja condemns and categorically rejects orientalist and racist implications that any population or country is ‘uncivilized’ or bears economic factors that make it worthy of conflict,” reads the statement. “This type of commentary reflects the pervasive mentality in western journalism of normalizing tragedy in parts of the world such as the Middle East, Africa, south Asia, and Latin America.” Such coverage, the report correctly noted, “dehumanizes and renders their experience with war as somehow normal and expected”.
More troubling still is that this kind of slanted and racist media coverage extends beyond our screens and newspapers and easily bleeds and blends into our politics. Consider how Ukraine’s neighbors are now opening their doors to refugee flows, after demonizing and abusing refugees, especially Muslim and African refugees, for years. “Anyone fleeing from bombs, from Russian rifles, can count on the support of the Polish state,” the Polish interior minister, Mariusz Kaminski, recently stated. Meanwhile, however, Nigeria has complained that African students are being obstructed within Ukraine from reaching Polish border crossings; some have also encountered problems on the Polish side of the frontier.
In Austria, Chancellor Karl Nehammer stated that “of course we will take in refugees, if necessary”. Meanwhile, just last fall and in his then-role as interior minister, Nehammer was known as a hardliner against resettling Afghan refugees in Austria and as a politician who insisted on Austria’s right to forcibly deport rejected Afghan asylum seekers, even if that meant returning them to the Taliban. “It’s different in Ukraine than in countries like Afghanistan,” he told Austrian TV. “We’re talking about neighborhood help.”
Yes, that makes sense, you might say. Neighbor helping neighbor. But what these journalists and politicians all seem to want to miss is that the very concept of providing refuge is not and should not be based on factors such as physical proximity or skin color, and for a very good reason. If our sympathy is activated only for welcoming people who look like us or pray like us, then we are doomed to replicate the very sort of narrow, ignorant nationalism that war promotes in the first place.
The idea of granting asylum, of providing someone with a life free from political persecution, must never be founded on anything but helping innocent people who need protection. That’s where the core principle of asylum is located. Today, Ukrainians are living under a credible threat of violence and death coming directly from Russia’s criminal invasion, and we absolutely should be providing Ukrainians with life-saving security wherever and whenever we can. (Though let’s also recognize that it’s always easier to provide asylum to people who are victims of another’s aggression rather than of our own policies.)
But if we decide to help Ukrainians in their desperate time of need because they happen to look like “us” or dress like “us” or pray like “us,” or if we reserve our help exclusively for them while denying the same help to others, then we have not only chosen the wrong reasons to support another human being. We have also, and I’m choosing these words carefully, shown ourselves as giving up on civilization and opting for barbarism instead.
-
Moustafa Bayoumi is the author of the award-winning books How Does It Feel To Be a Problem?: Being Young and Arab in America and This Muslim American Life: Dispatches from the War on Terror. He is professor of English at Brooklyn College, City University of New York. He is a contributing opinion writer at Guardian US
| true | true | true |
Are Ukrainians more deserving of sympathy than Afghans and Iraqis? Many seem to think so
|
2024-10-12 00:00:00
|
2022-03-02 00:00:00
|
article
|
theguardian.com
|
The Guardian
| null | null |
|
614,909 |
http://blog.jerodsanto.net/2009/05/expand-your-twitter-network-in-less-than-15-lines-of-ruby/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
40,576,663 |
https://arxiv.org/abs/2105.05398
|
Sound, Precise, and Fast Abstract Interpretation with Tristate Numbers
|
Vishwanathan; Harishankar; Shachnai; Matan; Narayana; Srinivas; Nagarakatte; Santosh
|
# Computer Science > Programming Languages
[Submitted on 12 May 2021 (v1), last revised 15 Dec 2021 (this version, v3)]
# Title:Sound, Precise, and Fast Abstract Interpretation with Tristate Numbers
Abstract: Extended Berkeley Packet Filter (BPF) is a language and run-time system that allows non-superusers to extend the Linux and Windows operating systems by downloading user code into the kernel. To ensure that user code is safe to run in kernel context, BPF relies on a static analyzer that proves properties about the code, such as bounded memory access and the absence of operations that crash. The BPF static analyzer checks safety using abstract interpretation with several abstract domains. Among these, the domain of tnums (tristate numbers) is a key domain used to reason about the bitwise uncertainty in program values. This paper formally specifies the tnum abstract domain and its arithmetic operators. We provide the first proofs of soundness and optimality of the abstract arithmetic operators for tnum addition and subtraction used in the BPF analyzer. Further, we describe a novel sound algorithm for multiplication of tnums that is more precise and efficient (runs 33% faster on average) than the Linux kernel's algorithm. Our tnum multiplication is now merged in the Linux kernel.
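For illustration, a tnum can be sketched as a (value, mask) pair: bits set in mask are unknown, and the rest are fixed by value. The snippet below is my own transcription into JavaScript BigInt of the addition operator in that style (the verified algorithm itself lives in the kernel's kernel/bpf/tnum.c), not code taken from the paper.

```
// 64-bit wraparound helper.
const U64 = (x) => BigInt.asUintN(64, x);

// x is represented by tnum t iff x agrees with t.value on all known bits.
const tnumContains = (t, x) => U64(x & ~t.mask) === t.value;

// Abstract addition in the style of the kernel's tnum_add: carries that
// could run through an unknown bit widen the mask.
function tnumAdd(a, b) {
  const sm = U64(a.mask + b.mask);
  const sv = U64(a.value + b.value);
  const sigma = U64(sm + sv);
  const chi = U64(sigma ^ sv);        // positions where a carry may differ
  const mu = U64(chi | a.mask | b.mask);
  return { value: U64(sv & ~mu), mask: mu };
}

// {value: 4n, mask: 1n} represents {4, 5}; adding the constant 2 gives {6, 7}.
console.log(tnumAdd({ value: 4n, mask: 1n }, { value: 2n, mask: 0n }));
```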
## Submission history
From: Santosh Nagarakatte [view email]
**[v1]** Wed, 12 May 2021 01:58:27 UTC (2,459 KB)
**[v2]** Mon, 13 Dec 2021 16:05:18 UTC (3,366 KB)
**[v3]** Wed, 15 Dec 2021 23:18:39 UTC (591 KB)
| true | true | true |
Extended Berkeley Packet Filter (BPF) is a language and run-time system that allows non-superusers to extend the Linux and Windows operating systems by downloading user code into the kernel. To ensure that user code is safe to run in kernel context, BPF relies on a static analyzer that proves properties about the code, such as bounded memory access and the absence of operations that crash. The BPF static analyzer checks safety using abstract interpretation with several abstract domains. Among these, the domain of tnums (tristate numbers) is a key domain used to reason about the bitwise uncertainty in program values. This paper formally specifies the tnum abstract domain and its arithmetic operators. We provide the first proofs of soundness and optimality of the abstract arithmetic operators for tnum addition and subtraction used in the BPF analyzer. Further, we describe a novel sound algorithm for multiplication of tnums that is more precise and efficient (runs 33% faster on average) than the Linux kernel's algorithm. Our tnum multiplication is now merged in the Linux kernel.
|
2024-10-12 00:00:00
|
2021-05-12 00:00:00
|
/static/browse/0.3.4/images/arxiv-logo-fb.png
|
website
|
arxiv.org
|
arXiv.org
| null | null |
38,033,859 |
https://citizenlab.ca/2023/10/finding-you-teleco-vulnerabilities-for-location-disclosure/
| null | null | null | true | false | false | null | null | null | null | null | null | null | null | null |
40,155,599 |
https://crypto-integrate.netlify.app/
|
Crypto Integrate - Cryptocurrency Community and Airdrops
| null |
Discuss cryptocurrency and participate in airdrops.
Crypto Integrate is a vibrant community where crypto enthusiasts come together to discuss the latest trends, news, and insights in the cryptocurrency world. Additionally, we regularly host airdrops, providing members with opportunities to earn free tokens.
Our community is dedicated to fostering knowledge sharing and collaboration among members, whether you're a seasoned trader or just getting started in the world of cryptocurrency.
Join us today and become a part of the growing community!
Join our active community of crypto enthusiasts from around the world.
Participate in regular airdrops and earn free tokens.
Access educational content and stay updated with the latest trends.
Learn about crypto from scratch. No prior knowledge of crypto is needed; this course goes through all the fundamentals you need to know.
Take Course
| true | true | true |
Join Crypto Integrate's Discord server to discuss cryptocurrency, participate in airdrops, and access free educational content.
|
2024-10-12 00:00:00
|
2024-01-01 00:00:00
|
website
|
yourwebsite.com
|
yourwebsite.com
| null | null |
|
4,757,648 |
http://www.html5rocks.com/en/tutorials/casestudies/jamwithchrome-audio/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
35,303,902 |
https://www.bbc.com/travel/article/20230320-eugnie-brazier-the-legendary-mother-of-french-cuisine
|
Eugénie Brazier: The legendary 'mother of French cuisine'
|
Anna Richards
|
# Eugénie Brazier: The legendary 'mother of French cuisine'
**Uneducated, a single mother and the first person ever to receive six Michelin stars, Eugénie Brazier was a tour de force. So why doesn't the world know about her?**
With more restaurants per capita than any other French city and the home of Rue du Bœuf (the street with the most Michelin stars in the country), Lyon is France's undisputed gastronomic capital. And although the city has become synonymous with the name Paul Bocuse (1926-2018) – with five restaurants falling under the late chef's brand, and even Halles de Lyon – Paul Bocuse (an indoor food market) bearing his name – its culinary legacy began long before he rose to fame.
Known as "the mother of French cooking", Eugénie Brazier (or Mère Brazier) never completed primary school and was forced to leave home at 19 after becoming pregnant. Yet, by the time she turned 40, she was running two restaurants and was the most decorated chef in the world. In 1933, she would become the first person to receive six stars in the Michelin Guide, a record that remained unchallenged until Alain Ducasse matched her in 1998. She was also largely responsible for teaching Bocuse his trade.
Brazier was no doubt a tour de force. So, why, then, have her achievements been largely forgotten, while those of chefs like Bocuse have been lauded?
One of her restaurants, the currently two-starred La Mère Brazier, is still running to this day under the guidance of chef Mathieu Viannay. Inside, the 1933 Michelin guide sits proudly in a glass case, while a photo of Brazier in a starched white blouse lines a sliding door. Although Brazier's legacy is kept alive in the restaurant, few people know about her important contributions to French gastronomy. Viannay believes this is due to the time she was living in.
"Brazier is well-known to anyone who knows the history of French cuisine," Viannay said. "When I reopened the restaurant in 2008, articles came out in 80 different countries. But Brazier came from a time when chefs weren't in the media."
Given that famous male French culinary names like François Pierre de la Varenne, Marie-Antoine Carême and Auguste Escoffier all pre-dated Brazier but are much better known globally, the timeframe can't be the only reason for her relative anonymity.
"Her gender had a huge role to play," explained food historian Dr Annie Gray. "France's culinary scene was largely split into two categories: *haute-cuisine*, prepared by those with classical training (mostly men); and *cuisine de la grand-mère*, grandmother's style cooking, usually accompanied by the stereotypical image of the buxom woman at the stove."
In the 19th and early 20th Centuries, the route to becoming a top chef in France followed strict rules. Boys aged between 10 and 13 would start apprenticeships in kitchens, working their way up the ranks. Training would follow, largely in Paris, but often with a spell in Nice and on the Normandy coast, working in casino resorts. Women weren't made apprentices, and Brazier was no exception.
Growing up in the early 1900s, her family lived on a farm in La Tranclière, 56km north-east of Lyon. Under her mother's instruction, Brazier began to cook as soon as she could hold a spoon. By the age of five, she could make two types of tarts, although she wasn't allowed to light the oven. She was responsible for the family pigs, and her schooling was sporadic at best. She only attended classes during winter when there was less work to do on the farm.
Brazier's mother died when she was just 10, and she took a job at a neighbouring farm to help provide for her family. But in 1914, the 19-year-old Brazier became pregnant out of wedlock and her father kicked her out, as it was considered scandalous in those times. To make ends meet, Brazier got a housekeeping job with a wealthy Lyonnaise family, the Milliats, placing her son, Gaston, in a *pensionnat* (boarding school). She travelled with the family each year as they spent winters in Cannes in southern France, and eventually took on the additional role of cook once the family decided to live there year-round. With no cookbooks to consult, she would ask merchants or local hotel staff for recipes and recreate them from memory.
After World War One, Brazier, now a more seasoned cook, started working in the kitchen of Mère Filloux, a restaurant in Lyon's Brotteaux neighbourhood with an all-female staff, which was common at the time. Typically, *bouchons *(traditional restaurants) were run by women called "Lyonnaise mothers", who served offal and offcuts of meat to hungry businessmen and silk workers.
By 1922, Brazier had saved enough money working at Mère Filloux and other restaurants to buy a grocery shop, which she turned into a small restaurant. There, she began making a name for herself preparing dishes like crayfish in mayonnaise, roast pigeon and country-style peas and carrots. She later moved to a larger restaurant on Rue Royale in central Lyon, which is the site of the present-day La Mère Brazier. In 1928 she opened a second restaurant, also called La Mère Brazier, with a farm and cookery school, in the hills 19km outside Lyon at Col de la Luère.
Being outside Paris was both key and detrimental to her success. The Michelin Guide (originally a motoring handbook designed to boost sales of Michelin tyres) inspired people to travel more, and as Lyon was a popular stop for motorists heading south from Paris, the notoriety of the city's restaurants – including Brazier's – grew. However, Paris was home to the great culinary schools like Le Cordon Bleu, and it held the crown for haute cuisine, which was more highly regarded than the traditional style of cooking largely found in Lyon.
"Brazier's dishes remained firmly and unapologetically rooted in Lyonnaise cuisine, familiar and recognisable dishes that didn't try to approach the gilded cuisine of Paris," said Maryann Tebben, author of Savoir-Faire: A History of Food in France. "Bocuse was also based in Lyon, but [after training with Brazier] he apprenticed with [famous chef and restaurateur] Fernand Point and worked at the Lucas Carlton restaurant in Paris. His Parisian training was in full view."
After the outbreak of World War Two, when France fell to German occupation, Lyon stood in Vichy (so-called "free") France. Brazier was allowed to continue operations, but quickly fell afoul of the Nazis, after complaining that stringent rationing was affecting the quality of her food. The restaurant closed in 1941 for the duration of the war and Brazier was imprisoned, although she never disclosed why.
After Brazier resumed work at the end of the war, she began to train aspiring chefs at her farm-restaurant in Col de la Luère. Paul Bocuse and Bernard Pacaud (founder and chef of L'Ambroisie in Paris) were among her protégées.
In 1953, the director of New York's Waldorf Astoria hotel tried to hire Brazier to run their restaurant, offering a hefty annual salary. Brazier declined, refusing to uproot. She was even offered the Legion of Honour, the highest French order of merit, but again declined, saying that the award should be "reserved for more important things than cooking well".
Brazier died aged 81 in 1977, leaving the running of her restaurant to her granddaughter, Jacotte. In 2004, the restaurant closed, remaining empty until 2008, when it was bought by Viannay.
For Viannay, the restaurant's history is of paramount importance. He describes himself as "a gatekeeper", knowing the institution will live on long after he is gone.
The simplicity of ingredients and elements of Brazier's traditional style of cooking are two things that he has kept consistent since Brazier's time. Although he's modernised the menu, old favourites such as Bresse chicken and *cervelle de canut *(a soft Lyonnais cheese infused with herbs) still regularly feature on the menu.
While Brazier's legacy lives on through the restaurant, the gender divide in the culinary world still exists, as only around 6% of Michelin-starred restaurants in France are helmed by women. French chef Anne-Sophie Pic, who has followed in Brazier's footsteps as a culinary pioneer, is currently the only women in France to have a three-Michelin-starred restaurant.
"Brazier deserves to be on the podium with the grandfathers of French cuisine," said Gray. "With restaurants like noma closing, the age of ridiculously intensive preparation is over. There's room for French cuisine to take a look at itself and change."
| true | true | true |
Uneducated, a single mother and the first person ever to receive six Michelin stars, Eugénie Brazier was a tour de force. So why doesn't the world know about her?
|
2024-10-12 00:00:00
|
2023-03-21 00:00:00
|
newsarticle
|
bbc.com
|
BBC
| null | null |
|
35,941,884 |
https://www.seattletimes.com/business/to-stem-the-housing-crisis-religious-congregations-are-building-homes/
|
To stem the housing crisis, religious congregations are building homes
|
EDEN STIFFMAN
|
The crowd that prayed together at Arlington Presbyterian Church’s Sunday worship service had dwindled from more than 100 to a few dozen. Donations dropped, and for years, congregation members grappled with how to reinvent their nearly century-old Northern Virginia church.
Neighbors’ stories guided the church’s radical transformation. As church members spoke with people who worked nearby, they heard a common concern: People were struggling to afford to live there.
“Those stories broke their hearts,” says the Rev. Ashley Goff, pastor since 2018. “They really felt this call by God to do something very dramatic about the lack of affordable housing.”
After some contentious discussions, the church reached a decision to use the greatest asset it had: real estate. In 2016 the church sold its land and historic stone building to the Arlington Partnership for Affordable Housing, a nonprofit developer, for $8.5 million.
The church was razed. In its place now stands Gilliam Place, a six-story complex with 173 apartments. The building, with ground-floor space rented by the church for services, offers homes to people who earn 60% or less of the area’s median income.
Hundreds of faith groups are using their property to build homes. For cash-poor congregations that face declining revenue and member participation and rising maintenance costs, developing housing can offer a financial benefit while also expanding their social mission.
Most faiths embrace helping the vulnerable, and faith-based organizations have long provided housing. But it’s rare that religious leaders have real-estate-development expertise and resources to navigate the often-challenging financial and political barriers that come with planning and building apartments or houses.
Nonprofits and foundations have stepped in to help. Enterprise Community Partners, the Local Initiatives Support Corporation, and other groups provide religious leaders with training, connections to developers, legal advice, and financial support to help them make informed decisions about whether they should use their land for housing. Then, the nonprofits guide leaders through the complex development process.
Enterprise, one of the biggest nonprofits working on housing issues, has run its Faith-Based Development Initiative since 2006. Capital One, Bank of America, and local grant makers, including the Blank Foundation in Atlanta and New York’s Trinity Church Wall Street, provided support. In 2022, Wells Fargo gave $8.5 million to help the program expand nationally from the mid-Atlantic region where it began.
Houses of worship in Atlanta, Baltimore, Miami, New York, Seattle, and Washington are participating now. Grantmakers and local governments have committed roughly $12 million to the program for the next several years.
So far, the effort has created or preserved 1,500 affordable rental apartments in the Baltimore-Washington region. More than 1,000 homes are in various stages of development in other parts of the country, and the potential for more is huge.
“Even if just 10% of the faith-owned land got activated tomorrow for affordable housing, we’re talking about potentially hundreds of thousands of units around the country,” says the Rev. David Bowers, an Enterprise vice president and leader of Faith-Based Development Initiative. In the Washington metropolitan area alone, the Urban Institute identified nearly 800 vacant parcels owned by faith-based institutions, most of which are already zoned for residential buildings. Assuming multifamily housing could be built on that land, it could support building 43,000 to 108,000 new low-cost housing units.
Meanwhile, Local Initiatives Support Corporation, a nonprofit community-development financial institution, is helping churches explore housing projects in New York and the San Francisco area. And Yes in God’s Back Yard, backed by the grant-maker coalition Catalyst of San Diego & Imperial County, has ambitious goals for faith groups in Southern California.
Most faith groups don’t opt to sell their land and tear down their sanctuary space as Arlington Presbyterian did. Rather, they want to maintain control of the land and take better advantage of underused property like parking lots or classrooms.
Congregations and other faith-based organizations have a long history of filling housing needs through land donations, Habitat for Humanity projects, and providing shelter for people who are homeless. Many churches in Black neighborhoods have been involved in those efforts, and these congregations are a priority for Enterprise, as they’ve historically had less access to financial resources to support their growth, Bowers says.
Leaders from more than 250 houses of worship across the country have participated in Enterprise training sessions. Black churches represent around 80%. The rest include a mix of churches and a few mosques and synagogues.
“Part of our work is to get more faith communities from all kinds of walks involved,” Bowers says. “When you have declining memberships and you see your building space very underutilized, it becomes pretty stark.”
Some faith organizations that build housing rely on the Low-Income Housing Tax Credit, the country’s largest affordable-housing subsidy program. But the process of applying for government tax credits can be sluggish, says Monica Ball, who leads community outreach for Yes in God’s Back Yard, or YIGBY. The group’s name is a play on NIMBY, or Not in My Back Yard, the acronym used to describe residents who object to new housing or other development where they live.
YIGBY helps faith leaders navigate the home-building process. Instead of relying on tax credits for development, the group hopes to demonstrate how foundations, corporations, and wealthy people can help increase the supply of affordable housing without necessarily spending a dime. Using a construction loan guarantee, foundations or donors pledge to repay a loan with their endowment or other assets. This helps developers access the funds they need while removing risk for the lender.
YIGBY is helping Bethel African Methodist Episcopal, San Diego’s oldest Black church, build 26 new one-bedroom apartments for homeless veterans and older people. The region’s severe shortage of housing means that many veterans who receive a housing voucher from the Department of Veterans Affairs often can’t find a place to rent. Housing analysts estimate the San Diego region needs to build more than 13,000 new homes annually to meet demand.
Banks are often reluctant to lend to first-time developers, so YIGBY has turned to donors and low-interest loans to help finance Bethel’s project using a construction loan guarantee. Andy Ballester, a co-founder of the crowdfunding site GoFundMe, set aside around $5.3 million — an amount equivalent to the value of the construction loan. That money acts as insurance for the bank and will be tapped into only if the developer fails to make an interest payment on the loan.
So why haven’t more faith groups built new housing to address the shortage?
“It’s just a simple time and money and expertise disconnect,” Ball says. And while these challenges aren’t unique to houses of worship, the need to get zoning approvals from the government and deal with neighbors who resist new development often presents obstacles.
Sometimes houses of worship are at an advantage when they try to work through local opposition, Bowers says. “If people perceive the house of worship as an anchor institution and a good neighbor in that community, sometimes they have goodwill that they’ve accrued over time, and that may help.”
Places of worship are “in need of revenue and relevance,” Ball says.
“When you’re in the middle of a housing crisis, if you’ve got land, the best way to generate revenue and become socially relevant is build housing.”
40,861,120 |
https://x-dev.pages.jsc.fz-juelich.de//2022/07/13/transformers-matmul.html
|
A mathematician’s introduction to transformers and large language models
|
Carolin
|
# A mathematician's introduction to transformers and large language models
# About
This blog post is based on a presentation I gave at the “New Trends in Computational Science in Engineering and Industrial Mathematics” workshop in Magdeburg on 01/07/2022. My goal is to give a brief introduction to the state of current large language models, the OpenGPT-X project, and the transformer neural network architecture for people unfamiliar with the subject.
- About
- What is a language model?
- Deep learning architectures
- Attention please!
- From attention to transformers
- Recent developments in large language models
- Takeaways and learnings
- Sources
The audience at the workshop had a mathematics background and was assumed to have a good understanding of linear algebra, but not necessarily of neural networks. Basically, the target audience is past me from before I started working on this project with the goal of understanding the math behind transformers. The questions I want to answer are:
- Where are matrix products performed in training large language models?
- What makes transformers well-suited for high performance computing (HPC)?
If you find any mistakes or unclear points feel free to let me know in order to improve this post.
# What is a language model?
Natural language processing (NLP) deals with making human language accessible to computation.1 2 Having a computer understand what you say can help in many situations. Applications of NLP include intelligent speakers, chatbots, translation, text generation, summarization and much more.
A language model forms the backbone of these applications. A language model is just a probability distribution. Given a sequence of words \(w_{1:(t-1)}=(w_1,\dots,w_{t-1})\), a language model gives, for every word in the vocabulary \(V\), the probability that it follows this sequence,

\[P(w_t| w_{1:(t-1)}),\qquad w_1,\dots,w_{t-1},w_{t}\in V.\]

With such a language model one can generate new texts: start with a sentence, choose the word with the highest probability (or sample according to the probabilities), and feed the appended sequence back into the model to generate the next word. The language model can also be used to assign a probability to a whole sentence (using the chain rule of conditional probabilities) as

\[P(w_{1:n}) = \prod_{i=1}^{n} P(w_i|w_{1:(i-1)}).\]

One can imagine this to be helpful in grammar correction, for example.
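To make the definition concrete, here is a toy sketch of a bigram language model, where the next word depends only on the current word. This is my own illustration, not from the workshop material; the vocabulary and probability table are made up:

```python
import numpy as np

# Toy vocabulary; <s> and </s> mark the start and end of a sentence.
vocab = ["<s>", "the", "cat", "sat", "down", "</s>"]
idx = {w: i for i, w in enumerate(vocab)}

# P[i, j] = P(next word = vocab[j] | current word = vocab[i]); each row sums to 1.
# A real language model would produce these probabilities with a neural network.
P = np.array([
    [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],   # <s>  -> the
    [0.0, 0.0, 0.9, 0.0, 0.1, 0.0],   # the  -> cat | down
    [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],   # cat  -> sat
    [0.0, 0.3, 0.0, 0.0, 0.7, 0.0],   # sat  -> the | down
    [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],   # down -> </s>
    [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],   # </s> -> </s>
])

def generate(max_len=10):
    """Greedy generation: repeatedly append the most probable next word."""
    words = ["<s>"]
    while words[-1] != "</s>" and len(words) < max_len:
        words.append(vocab[int(np.argmax(P[idx[words[-1]]]))])
    return words

def sentence_probability(words):
    """Chain rule: P(w_1..w_n) = prod_i P(w_i | w_{i-1})."""
    return float(np.prod([P[idx[a], idx[b]] for a, b in zip(words, words[1:])]))

print(generate())  # ['<s>', 'the', 'cat', 'sat', 'down', '</s>']
print(sentence_probability(["<s>", "the", "cat", "sat", "down", "</s>"]))  # 0.63
```

Sampling from the rows of \(P\) instead of taking the argmax gives more varied text; that is the "sample according to probabilities" option mentioned above.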
There are different ways to arrive at such a language model. One could think about putting all rules of grammar and the meaning of words into a computer program. However, this is extremely difficult to do. The approach that caught on in recent years and produced very impressive language models does not require encoding explicit grammar or world knowledge. Instead, neural networks are trained on huge amounts of text and learn to form proper sentences just from the data they see.
In order to understand the broader context of the transformer architecture in NLP applications, we clarify some terms related to training and application of large language models.
- *Pre-training*: The goal of pre-training is to provide a general language model that has a good understanding of how language is used in a variety of settings.
- *Fine-tuning*: In fine-tuning, a pre-trained model is trained further on a (comparatively) small set of task-specific data. Before the emergence of pre-trained models, neural networks were trained from scratch for each specific application (also called a *downstream task*). Using a pre-trained model makes more efficient use of compute resources and can avoid overfitting. Fine-tuning can involve continued training of the whole network or only parts of it (*layer freezing*). This step is also called adaptation and may also include adapting the neural network's architecture.
- *Inference*: When the model is deployed, for example in the form of a chatbot in an online shop, inference describes computing the output (the answer of the chatbot) given a user's input, using the trained model. This corresponds to a forward pass of the neural network.
The learning methodology described by the first two steps (pre-training followed by fine-tuning) is called sequential transfer learning.3
All these steps need computing resources. The computational device of choice is typically the GPU due to the massive parallelism it provides and hardware features that make it extremely efficient in performing matrix multiplications. We will see below (in the section Attention please!) how matrix multiplications form the core of training the model. Pre-training of large models is the most computationally demanding step and happens on a supercomputer such as JUWELS at Forschungszentrum Jülich using lots (hundreds) of GPUs in parallel. Fine-tuning and inference may happen on server systems with a handful of GPUs.
# Deep learning architectures
Neural networks are everywhere. You might be familiar with the basic ideas. There are many great resources to learn the foundations.4 5 The goal of training a neural network is to learn input-output relations from data. When a neural network is well-trained, a vector representing input data is fed to an input layer, which in illustrations is conventionally drawn on the left. Then it is processed by passing through several hidden layers until it reaches an output layer. Moving from one layer to the next means multiplying the vector with a matrix, adding another vector and applying a non-linear activation function. This is called a forward pass or forward propagation.
The elements of the matrices are called weights, the elements of the additive vector are called biases. Weights and biases are the parameters that are learned during training. For your training data, the output given by the network should closely match the real desired output, i.e. the loss function (a measure of the difference between the network's output and the desired output) should be minimal. If this is not yet the case, we change the parameters to achieve a smaller loss. This is done using gradient descent. The gradient of the loss function with respect to the parameters is computed. The parameters are updated by subtracting the gradient multiplied by a step size (called the learning rate). The actual computation of the gradients uses the chain rule from calculus and involves starting at the output layer and moving backwards through the network. This is why computing the gradients is called backward propagation.
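As a minimal illustration of this loop, here is a sketch (my own, with an identity activation, a mean-squared-error loss and made-up data) of a forward pass, a hand-derived backward pass and a gradient-descent update:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-layer "network": out = x W + b, trained on random data
# purely to illustrate forward pass, loss, gradients and the update step.
X = rng.normal(size=(8, 4))          # 8 samples, 4 features
Y = rng.normal(size=(8, 2))          # desired outputs
W = rng.normal(size=(4, 2)) * 0.1    # weights
b = np.zeros(2)                      # biases
lr = 0.1                             # learning rate (step size)

for step in range(100):
    # Forward pass (identity activation for simplicity)
    out = X @ W + b
    loss = ((out - Y) ** 2).mean()   # mean squared error

    # Backward pass: gradients of the loss w.r.t. W and b (chain rule by hand)
    grad_out = 2 * (out - Y) / out.size
    grad_W = X.T @ grad_out
    grad_b = grad_out.sum(axis=0)

    # Gradient descent update: move *against* the gradient
    W -= lr * grad_W
    b -= lr * grad_b

print(f"final loss: {loss:.4f}")
```

One rarely derives gradients by hand; as noted further below, frameworks such as PyTorch or TensorFlow provide automatic differentiation.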
In practice, more useful heuristics are added to this process, and it works very well for many tasks. However, it is difficult to use the fully-connected neural network for NLP tasks. One problem is that the input size is fixed, and we would like to process longer as well as shorter word sequences as input. In general, a dense neural network does not represent the nature of language very well.
Luckily, this standard feed-forward neural network is only the most basic neural network architecture of many that were devised over the years for various applications.
In the field of NLP and language modelling, until recently, sequential models were the state of the art. These include *recurrent neural networks* (RNNs) and *long short-term memory* (LSTM) networks.6
RNNs apply the same neural network (with learned parameters) to every word in a sequence of words. Additionally, this neural network takes an internal state as input, which comes as output from the neural network associated to the previous word. This way the network can learn to use information from earlier words in the sequence. When one writes down the gradient of the loss function with respect to the parameters using the chain rule, one can see that the newest word has the most influence. The influence of the previous words diminishes exponentially. Intuitively, this makes sense: For choosing the next word, the most recent word is on average more important than a word further in the past. However, in practice, language is more nuanced. Some specific words in the past can be very important for choosing future words, and a smart neural network should know how to look for them. Just think of a very long relative clause for example. Older words having less influence on the gradients is therefore more of a bug than a feature, and this is called the *vanishing gradients* problem.
LSTMs alleviate this issue by introducing an extra cell state (serving as “memory”) whose exact influence is determined by gates that are defined by more learnable parameters.
One drawback remains: Both RNNs and LSTMs process their input data sequentially. Consider the forward pass: In order to apply the neural network (a series of matrix multiplications) to an input word vector \(x_i\), we also need the result from applying the network to the previous word vector \(x_{i-1}\). We cannot stack the word vectors together in a matrix and apply the neural network all at once.
Formulating algorithms to use matrix-matrix products as their main computational element is a good step towards the efficient use of modern compute hardware. This is true from the small scale of a single processor to the large scale of supercomputers using thousands of GPUs. Matrix-matrix products are the key.
Realizing this need, researchers started “having intuitions” about neural network architectures that employ these operations to learn to pay *attention* to other relevant words.
# Attention please!
The so-called *attention* mechanism had been employed in the context of sequence models to give the model the opportunity to learn which words are relevant for the next word. The landmark paper “Attention is all you need” (2017) 7 showed that you do not need a recurrent network structure, and that the attention mechanism (together with some other tricks like positional encoding) is powerful enough for impressive results. The resulting neural network architecture is called a transformer.
In the following we describe a forward-pass through a (self-)attention layer, which forms the central element of a transformer block. A neural network architecture is called a transformer when it consists of several transformer blocks. Backpropagation is taken care of by using the automatic differentiation engines of frameworks such as PyTorch or TensorFlow.
Consider a sequence of input tokens \(x_1,\dots, x_n\in\mathbb{R}^{n_\text{model}}\) represented by vectors. Tokens are the smallest building blocks into which word sequences are divided for processing. The process of getting a sequence of tokens (represented as a series of integers referring to a vocabulary) from a text string is called tokenization. The vector representation of a token is called an embedding and spatially encodes the meaning of tokens and their relationships to each other. In the case of transformers, word embeddings are also learned during pre-training. You can think of this as a matrix with learned entries being multiplied with a one-hot vector, i.e. choosing row \(i\) when the token is encoded as integer \(i\). A one-hot vector is called a (standard) unit vector in numerical linear algebra.
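A short sketch (with made-up sizes) of the claim that an embedding lookup is the same as multiplying a one-hot row vector with the embedding matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, n_model = 6, 4
E = rng.normal(size=(vocab_size, n_model))  # learned embedding matrix

token_id = 3
one_hot = np.zeros(vocab_size)
one_hot[token_id] = 1.0

# The matrix product selects row `token_id`; real implementations
# therefore use a plain lookup E[token_id] instead of a multiplication.
assert np.allclose(one_hot @ E, E[token_id])
```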
The processing of the first three input vectors \(x_1, x_2, x_3\) to generate an output vector \(y_3\) proceeds as follows.2
Among the learned parameters of a transformer block are three matrices \(W_k\), \(W_q\) and \(W_v\). They transform an input vector \(x_i\) into three vectors \(k_i\), \(q_i\) and \(v_i\). The convention is to treat the vectors as row vectors and apply the matrix from the right:

\[k_i \leftarrow x_i W_k\in\mathbb{R}^{1\times d_k},\quad q_i \leftarrow x_i W_q \in\mathbb{R}^{1\times d_k},\quad v_i \leftarrow x_i W_v \in\mathbb{R}^{1\times d_v}, \\ \text{for } i=1,\dots, n.\]

The vectors \(k_i\), \(q_i\) and \(v_i\) are called keys, queries and values. There is some intuition behind these names that imagines the attention mechanism as retrieving information similar to a database. But I did not find this very helpful in understanding what is going on, so I will not go into more detail here.
To compute the output vector \(y_i\), one first computes scalar products of the query vector \(q_i\) with all previous key vectors \(k_1,\dots, k_i\). In order to prevent numerical overflow, the results are scaled by \(\sqrt{d_k}^{-1}\). Then the softmax activation function is applied.

\[\alpha_{i,j} \leftarrow \frac{q_i k_j^{T}}{\sqrt{d_k}}\quad \text{for }j=1,\dots, i,\\ a_{i,j} \leftarrow \text{softmax}(\alpha_{i,j}) = \frac{\exp{(\alpha_{i,j})}}{\sum_{j=1}^i{\exp{(\alpha_{i,j})}}}\quad \text{for }j=1,\dots, i.\]

The softmax function, applied to a set of \(n\) values, returns \(n\) values between 0 and 1 that sum to one. Larger values are mapped closer to one and smaller values are mapped closer to zero, following a sigmoid shape. In a regular "max" function the largest value is mapped to 1 and all smaller values are mapped to 0. The name "softmax" comes from it being a "softer" version of this.
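A quick numerical sketch of this behaviour (my own example; subtracting the maximum is a standard stabilization trick and does not change the result, since softmax is shift-invariant):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift by the max to avoid overflow
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])
print(softmax(z))        # [0.659 0.242 0.099] -- a "soft" version of the max
print(softmax(10 * z))   # approx. [1. 0. 0.]  -- sharpens toward a hard max
```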
Now the output vector is given as a weighted sum of the value vectors, with the scalars \(a_{i,j}\) as weights.

\[y_i \leftarrow \sum_{j=1}^i a_{i,j} v_j \quad \text{for }i=1,\dots, n.\]

The beauty of the attention mechanism is now that we can consider all input vectors at once by stacking them on top of each other, forming a matrix
\[X = \begin{bmatrix} - x_1 -\\ \vdots\\ - x_{n} - \end{bmatrix}\in\mathbb{R}^{n\times n_\text{model}}.\]

Keys, queries and values of all input vectors are computed via matrix-matrix multiplication as

\[K= \begin{bmatrix} - k_1 -\\ \vdots\\ - k_{n} - \end{bmatrix} \leftarrow XW_k \in\mathbb{R}^{n\times d_k},\quad Q=\begin{bmatrix} - q_1 -\\ \vdots\\ - q_{n} - \end{bmatrix} \leftarrow XW_q \in\mathbb{R}^{n\times d_k}, \\ V=\begin{bmatrix} - v_1 -\\ \vdots\\ - v_{n} - \end{bmatrix} \leftarrow XW_v\in\mathbb{R}^{n\times d_v}.\]

The scalars \(a_{i,j}\) can now be computed as a softmax applied to the rows of a matrix-matrix product

\[A = [a_{i,j}]_{i,j=1,\dots,n} \leftarrow \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right) \in\mathbb{R}^{n\times n}.\]

The next step is the summation of the value vectors, weighted with the values \(a_{i,1},\dots,a_{i,n}\) (row \(i\) of \(A\)). This is realized for all vectors \(y_1,\dots,y_n\) at once by – you guessed it – another matrix-matrix product. So in total we have

\[Y = \begin{bmatrix} - y_1 -\\ \vdots\\ - y_{n} - \end{bmatrix} \leftarrow \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V \in\mathbb{R}^{n\times d_v}.\]

Further remarks on simplifications we made for clarity in the equations:
- The softmax in the last assignments is not a matrix function. Instead it is just a shorthand for applying the softmax function to the rows of the matrix, i.e. \(a_{i,j} \leftarrow \frac{\exp{(\alpha_{i,j})}}{\sum_{j=1}^i\exp{(\alpha_{i,j})}}.\)
- The self-attention mechanism we described when working with vectors is called *masked* self-attention. This means that computing the output \(y_i\) only requires the inputs \(x_1,\dots,x_i\). However, when we wrote down the computations using matrices, we dropped this restriction, and the query, key and value vectors of \(x_{i+1},\dots,x_n\) are also used to compute \(y_i\). When training a neural network as a language model predicting the next word, this can be undesirable. Then the upper triangular part of the scalar-product matrix \(A\) represents "the future" and should not be used. To this end, the upper right half of the matrix is *masked*, i.e. the values are set to \(-\infty\). With the convention \(\exp{(-\infty)}=0\), these values do not contribute to the softmax. (A code sketch of the masked variant follows after these remarks.) In transformer architectures intended for encoding information from language, such as BERT, masking during training is realized differently. In this case the model is allowed to see context on the right side of a token.
- Any matrix multiplication can also involve adding a bias vector (for low-level enthusiasts: in typical `gemm` fashion), which is not stated here explicitly.
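Putting the matrix formulation and the mask together, here is a minimal single-head sketch in NumPy. This is my own code, not from the post: the names, sizes and random data are made up, and a real implementation would of course use a framework such as PyTorch:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_rows(z):
    # Row-wise softmax; masked entries are -inf and exp(-inf) = 0,
    # so they get zero attention weight.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def masked_self_attention(X, W_k, W_q, W_v):
    """Single-head masked self-attention, following the equations above."""
    K, Q, V = X @ W_k, X @ W_q, X @ W_v          # n x d_k, n x d_k, n x d_v
    d_k = K.shape[1]
    scores = Q @ K.T / np.sqrt(d_k)              # n x n matrix of alpha_{i,j}
    n = scores.shape[0]
    # Mask "the future": entries above the diagonal are set to -inf.
    scores = np.where(np.tril(np.ones((n, n), dtype=bool)), scores, -np.inf)
    A = softmax_rows(scores)                     # attention weights a_{i,j}
    return A @ V                                 # n x d_v output Y

n, n_model, d_k, d_v = 5, 8, 4, 4
X = rng.normal(size=(n, n_model))
W_k, W_q, W_v = (rng.normal(size=(n_model, d)) for d in (d_k, d_k, d_v))
Y = masked_self_attention(X, W_k, W_q, W_v)
print(Y.shape)   # (5, 4)
```

Note that the only expensive operations here are `gemm`-style matrix products, which is exactly the property discussed above that makes transformers a good fit for GPUs.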
# From attention to transformers
Transformer neural networks arrange attention layers and other network layers in various configurations. A number \(h\) of attention layers (*attention heads*) are connected in parallel to form *multi-headed attention*. Every head has independent training parameters. The attention heads' outputs (matrices of dimension \(n \times d_v\)) are concatenated, forming a matrix of dimension \(n\times h d_v\). This matrix is brought back into the right shape by multiplying it with another trained matrix \(W_O\in\mathbb{R}^{hd_v\times n_\text{model}}\).
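Continuing the sketch above (and reusing its `masked_self_attention`, `rng`, `X` and the dimensions), multi-headed attention then amounts to running the heads independently, concatenating, and projecting with \(W_O\):

```python
def multi_head_attention(X, heads, W_O):
    """heads: a list of (W_k, W_q, W_v) tuples, one per attention head."""
    # Each head produces an n x d_v matrix; concatenation gives n x (h*d_v),
    # and W_O projects back to the model dimension n_model.
    Y = np.concatenate([masked_self_attention(X, *w) for w in heads], axis=1)
    return Y @ W_O

h = 2
heads = [tuple(rng.normal(size=(n_model, d)) for d in (d_k, d_k, d_v)) for _ in range(h)]
W_O = rng.normal(size=(h * d_v, n_model))
print(multi_head_attention(X, heads, W_O).shape)  # (5, 8): same shape as the input X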
Multi-headed attention together with normalization layers, feed-forward layers, and residual connections forms a transformer block. The input and the output of a transformer block have the same shape, so they can be connected in series. For example for GPT-1 a transformer block is repeated 12 times. In order to generate a probability distribution for the next word in a sequence, one more linear transformation layer and a softmax is employed at the very end.
The exact transformer architecture can vary and depends on the training objective. The original paper (*Attention is all you need*) considered machine translation. Here, an encoder-decoder structure makes sense: First the sentence in the original language is encoded using a stack of transformer blocks as described above. Both directions of information flow are allowed. The decoder’s structure is mostly similar except that the self-attention is masked and there is a second (multi-head) attention layer in each transformer block. In contrast to the forms of attention we discussed before, this is not *self*-attention, but instead attention is paid to the outputs of the encoder: The output vectors of the encoder are used to compute key and value vectors which serve as input for the decoder’s attention block.
I would suggest not to think too much about whether a network architecture is an "encoder" (BERT)8 or a "decoder" (GPT)9, and not to try to relate them to the encoder-decoder architecture from the *Attention is all you need* paper. They are similar in the main ideas, and the details vary anyway. The main difference is the masking during training, as described above. My theory is that BERT decided to call itself an encoder mainly to get an "E" for its acronym, to keep this running gag about Sesame Street characters going.
# Recent developments in large language models
In 2018 the GPT (*Generative Pre-trained Transformer*) model 9 by the company OpenAI started an avalanche of publications describing pre-trained neural networks based on the transformer architecture. Now models could become more powerful just by throwing more compute power and data at them. Larger and larger models were trained. The BERT (*Bidirectional Encoder Representations from Transformers*)8 model by Google followed in the same year (2018). Both have similar architectures corresponding to a series of transformer blocks, making them simpler than the encoder-decoder architecture presented in *Attention is all you need*.
Each year, larger and more powerful models followed. GPT-2 10 was published in 2019. GPT-3 11 followed in 2020 and showed great power in solving a variety of language-related tasks. Modern large language models (since GPT-3) already show impressive performance on downstream tasks even without the fine-tuning step. To achieve this, in-context learning is incorporated in the pre-training loop and at inference time. This is called meta-learning in the GPT-3 paper.11 Here, examples of the task and solution (e.g. sentiment analysis) are shown as part of the input at the forward pass (in pre-training or at inference). Showing a few examples at inference time is called few-shot learning. One-shot learning shows just one example and zero-shot learning shows no example.
Even though GPT-3 was developed by a company with “Open” in its name, the trained model is not in fact open, but only accessible for a fee.
In 2022 the OpenGPT-X project, funded by the German Federal Ministry of Economics and Climate Protection (BMWK), was launched with the goal to provide an independent and open large language model based in Europe and trained on English and German data. Other efforts to provide models of similar capabilities as GPT-3 more openly include the BigScience Research Workshop and OPT (*Open Pretrained Transformer*) by Meta.12
# Takeaways and learnings
- Large language models have an incredibly wide range of applications. They will play a big role in our everyday lives very soon.
- OpenGPT-X is the European answer to GPT-3.
- Everybody interested in large-scale deep learning should look into the transformer architecture.
I recently moved from numerical linear algebra, developing algorithms for solving structured eigenvalue problems, towards natural language processing with a focus on high performance computing. In my native language I would call a principal component analysis a singular value decomposition. This is why I have an instinct to look for matrices everywhere. I want to conclude by sharing some of my personal learnings from switching fields.
- AI research is extremely fast-paced. There are new interesting preprints coming out every week and it is hard to keep up. However, I have the feeling that the algorithms are on some level still immature, just because the field is so young. Compared to algorithms from applied mathematics (say Krylov subspace methods, to name just one example), the transformer architecture feels unpolished and arbitrary. There is a lot of research to be done on WHY it works as well as it does.
- The open source spirit is alive and strong. The common development of code bases across multiple companies such as Nvidia, Microsoft, Meta, and HuggingFace is something I could not have imagined to be a reality before seeing it with my own eyes.
- Both these factors contribute to a wide availability of not only research publications but also didactic materials teaching state-of-the art research in an accessible manner.
# Sources
1. Coursera course by Andrew Ng: Sequence models
2. Book by Dan Jurafsky and James H. Martin: Speech and Language Processing (3rd ed. draft)
3. Presentation by Thomas Wolf: An Introduction to Transfer Learning in NLP and HuggingFace
4. Lecture series by Sebastian Raschka: Deep learning lecture videos, in particular lecture L19: Self-attention and transformer networks
5. Lecture series by MIT: Introduction to Deep Learning, in particular lecture 2 by Ava Soleimany: Deep Sequence Modeling
6. Blog post by Christopher Olah: Understanding LSTMs
7. Original transformer paper: Attention is all you need, 2017
8. BERT paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2018
9. GPT-1 paper: Improving Language Understanding by Generative Pre-Training, 2018
10. GPT-2 paper: Language Models are Unsupervised Multitask Learners, 2019
11. GPT-3 paper: Language models are few shot learners, 2020
12. OPT paper: OPT: Open Pre-trained Transformer Language Models, 2022
21,771,147 |
https://hbr.org/2017/11/many-strategies-fail-because-theyre-not-actually-strategies
|
Many Strategies Fail Because They’re Not Actually Strategies
|
Freek Vermeulen
|
## Summary.
Many strategy execution processes fail because “new strategies” are often not strategies at all. A real strategy involves a clear set of choices that define what the firm is going to do and what it’s not going to do. Many strategies fail to get implemented because they do not represent such a set of clear choices. And many so-called strategies are in fact goals. “We want to be the number one or number two in all the markets in which we operate” is one of those. It does not tell you what you are going to do; all it does is tell you what you hope the outcome will be. But you’ll still need a strategy to achieve it. Another reason why many implementation efforts fail is that executives see it as a pure top-down, two-step process: “The strategy is made; now we implement it.” That’s unlikely to work. A successful strategy execution process is seldom a one-way trickle-down cascade of decisions.
Many strategy execution processes fail because the firm does not have something worth executing.
30,589,751 |
https://www.theguardian.com/music/2022/mar/07/bandcamp-sells-to-epic-can-a-video-game-company-save-independent-music
|
Bandcamp sells to Epic: can a video game company save independent music?
|
Chal Ravens
|
Musos and gamers were left scratching their heads last Wednesday as Bandcamp, the online record store hailed by independent artists as a bankable alternative to the razor-thin royalties of streaming, announced its acquisition by Epic Games, makers of the online gaming phenomenon Fortnite.
Bandcamp CEO Ethan Diamond framed the deal as a boon for artists, saying that the two US companies shared a vision of building “the most open, artist-friendly ecosystem in the world”. A blogpost from Epic underlined the need for “fair and open platforms” to enable “creators to keep the majority of their hard-earned money”.
But Bandcamp users reacted with shock and disappointment to the sale of the indie juggernaut, lamenting the loss of “our” store, as drummer and Spotify critic Damon Krukowski tweeted.
“We all just got sold,” lamented media theorist McKenzie Wark. Bemused gamers and tech experts, meanwhile, wondered what possible uses a company such as Epic – itself 40% owned by Chinese gaming megacorp Tencent – might have for the direct-to-fan marketplace for MP3s of niche musical genres like vaporwave and chiptune.
Since its founding in 2008, Bandcamp has become a cornerstone of the global underground music economy through its “pay-what-you-want” download structure and low commissions, taking a 15% cut of every sale. During the pandemic, the San Francisco-based company earned kudos for waiving that fee on the first Friday of every month, generating millions of dollars for artists, as well as donating to racial justice campaigns including the NAACP Legal Defense Fund.
Epic has its own track record as a plucky indie, having remained largely in the hands of CEO Tim Sweeney since its founding in 1991. Like Bandcamp, Epic takes a relatively small cut from developers, setting its commission at 12% compared with Apple’s 30%. Epic even took Apple to court – and lost, at great expense – to accuse the tech giant of monopolising the mobile gaming industry. But the company’s fortunes have soared in recent years after the success of Fortnite and other free-to-play games. Now valued at $28bn, and facing its own controversies over data collection and Store exclusivity, it’s no longer a feisty underdog.
Many independent artists, already squeezed by the collapse of physical sales and – since Covid – a long hiatus from touring, see the Bandcamp sale as another disappointment in a long tradition of indie sellouts. But tech experts and neophyte musicians have also been speculating on the possibility of new integrations between Bandcamp’s vast catalogue of music and Epic’s cutting-edge game technology. These range from live-streaming events such as Fortnite’s virtual concerts, which have seen artists Travis Scott and Marshmello putting on trippy performances for millions of emoting avatars, to social spaces such as Party Worlds, designed for virtual hangouts rather than combat and destruction.
Beyond the busy world of Fortnite, Epic could also be thinking about easy routes into licensing music for software developers using Unreal Engine, the open and free-to-use development platform built by Epic’s Sweeney. There are obvious opportunities for integration with Harmonix, the games studio behind Guitar Hero and Rock Band.
More importantly, Sweeney is a longtime proponent of virtual reality and its buzzy reincarnation as the metaverse, the promise of a virtual 3D environment built from interconnecting spaces and social networks. “The most plausible way the metaverse is going to rise,” Sweeney said in 2020, “isn’t from one company, even Epic, building this thing and forcing everybody to use it. It’s going to be from more and more companies and brands connecting their products and services.”
In that sense, Epic’s acquisition can also be read as a play for the hearts and minds of the next generation of musicians and music lovers who have grown up in virtual worlds such as Fortnite. Rather than attempting to convert digital natives into record-buying traditionalists, the smart move could be to meet Generation Z where they already are, among the 160 million players inside the Epic Games Store.
Selling to Epic may also have presented itself as the least worst option for a company under pressure to provide returns for its early investors. Bandcamp received venture capital backing in its early years, and though the precise numbers involved are unknown, market logic dictates that VC-backed startups eventually start looking for an exit: either float on the stock market, which works for companies showing impressive growth and significant future valuation, or sell up.
Whether or not that’s the case for Bandcamp, it won’t do much to cheer up the musicians and fans who long ago identified Silicon Valley economics as the source of their woes. On social media, the deal was met with cynicism by some of the musicians who make up Bandcamp’s global community of creator-consumers. “It’s VC-funded and has always been part of platform capitalism, where growth is paramount,” wrote experimental artist Zola Jesus.
The irony is that Bandcamp has always positioned itself as a “community” rather than a marketplace, yet that community has not been given a say in the fate of the value it has created. The solution, say some artists, is to take back control – either by moving over to a platform like Resonate, a streaming cooperative owned by its users, or exploring new Web3 protocols for collective ownership. Austin Robey, co-founder of alternative music platform Ampled, advocates for an “exit to community”, a model where startups are taken over by the users and stakeholders who depend on the product or service they’ve created.
Still, no other indie platform has yet achieved anything like the scale or appeal of Bandcamp, and the shift to alternative platforms will require a leap of technical literacy that most artists and fans aren’t ready to make. In the short term, Bandcamp remains the slickest direct-to-fan operation in town – and is still paying out millions to artists.
“The products and services you depend on aren’t going anywhere,” assured Diamond in his statement. Nothing will change in the short term, is the promise – although it’s the same one that accompanies every similar acquisition. Those old enough to remember losing their MySpace music overnight may be feeling itchy. Word to the wise, advised one suspicious user, “download all your Bandcamp MP3s if you haven’t yet”.
39,718,876 |
https://www.johndcook.com/blog/2024/03/15/experiences-with-thread-programming-in-microsoft-windows/
|
Experiences with Thread Programming in Microsoft Windows
|
Wayne Joubert
|
Lately I’ve been helping a colleague to add worker threads to his GUI-based Windows application.
Thread programming can be tricky. Here are a few things I’ve learned along the way.
**Performance**. This app does compute-intensive work. It is helpful to offload this very compute-heavy work to a worker thread. Doing this frees the main thread to service GUI requests better.
**Thread libraries**. Windows has multiple thread libraries, for example Microsoft Foundation Class (MFC) library threads and C++ standard library threads. It is hazardous to use different thread libraries in the same app. In the extreme case, different threading runtimes, such as GOMP and LOMP (used in the GCC and LLVM compiler families, respectively), keep track of threads in different ways. Mixing them in the same code can cause hazardous silent errors.
**Memory fences** are a thing. Different threads can run on different processor cores and hold variables in different respective L1 caches that are not flushed (to maintain high performance). An update to a variable by one thread is not guaranteed to be visible to other threads without special measures. For example, one could safely transfer information using `::PostMessage`
coupled with a handler function on the receiver thread. Or one could send a signal using an MFC `CEvent`
on one thread and read its `Lock`
on the other. Also, a thread launch implicitly does a memory fence, so that, at least then, the new thread is guaranteed to correctly see the state of all memory locations.
**GUI access** should be done from the master thread only, not a worker thread. Doing so can result in deadlock. A worker thread can instead `::PostMessage`
to ask the master thread to do a GUI action.
**Thread launch.** By default `AfxBeginThread`
returns a thread handle which MFC takes care of deleting when no longer needed. If you want to manage the life cycle of the handle yourself, you can do something like:
```
// Create the thread suspended so its settings can be adjusted before it runs.
myWorkerThreadHandle = AfxBeginThread(myFunc, myParams,
    THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
// Keep the CWinThread object alive; we manage its lifetime ourselves.
myWorkerThreadHandle->m_bAutoDelete = false;
// Now let the thread start executing.
myWorkerThreadHandle->ResumeThread();
```
**Joint use of a shared library** like the DAO database library has hazards. One should beware of using the library to allocate something in one thread and deallocating it in another, as the allocation will likely live in a thread-local heap or stack instead of a shared thread-safe heap, resulting in a crash.
**Initialization**. One should call `CoInitializeEx(NULL, COINIT_APARTMENTTHREADED)`
and `AfxDaoInit()`
(if using DAO) at thread initialization on both master and worker threads, and correspondingly `CoUninitialize()`
and `AfxDaoTerm()`
at completion of the thread.
**Monitoring of thread state** can be done with
`WaitForSingleObject(myWorkerThreadHandle->m_hThread, 0)`
to determine if the thread has completed or `WaitForSingleObject(myWorkerThreadHandle->m_hThread, INFINITE)`
for a blocking wait until completion.
**Race conditions** are always a risk but can be avoided by careful reasoning about execution. Someone once said up to 90% of code errors can be found by desk checking [1]. Race conditions are notoriously hard to debug, partly because they can occur nondeterministically. There are tools for trying to find race condition errors, though I’ve never tried them.
So far I find no rigorous specification of the MFC threading model online that touches on all these concerns. Hopefully this post is useful to someone else working through these issues.
##### References
[1] Dasso, Aristides., Funes, Ana. Verification, Validation and Testing in Software Engineering. United Kingdom: Idea Group Pub., 2007, p. 163.
33,902,481 |
https://www.tobiwrites.com/p/unshackling-myself-from-golden-handcuffs
|
🤑💰 unshackling myself from golden handcuffs
|
Tobi Ogunnaike
|
# 🤑💰 unshackling myself from golden handcuffs
### leaving a high-paying job to wriggle around and figure life out.
**I.**
I’m often asked the same question:
How did you leave that much $$ on the table?
Having worked in the temple of tech, I’m intimately familiar with the reasoning behind this question. I worked in a land that treats TC (total comp) as the cost of salvation. Get the right offer or job title and all your worries will be quelled and you too can cry yourself to sleep on your satin, anti-aging pillowcase. See, the tenets of this game are quite simple—you hitch a ride that puts coins in your wallet, or you build that train yourself. Sure, there’s fancy talk of missions and visions, but the cold truth is we were all attracted by the rewards. At least partially. The culture is one of ever-increasing numbers—more users, more ad revenue, bigger paychecks, more stock. The game is the game. You’re expected to take pitstops, to swap jerseys with rival teams and to flirt with the opposition to get counter offers. This expansionist, accumulative culture isn’t unique to tech per se, but tech offers the addiction of RSUs (stocks)—the sweetest drug in all the world.
RSUs are like little balls of decadent chocolate that slowly fall into your mouth on a predetermined schedule. A Payroll rep blindfolds you and carefully places sinful treats on your tongue. (Okay, so maybe that was a dream I once had.) Imagine a conveyor belt that brings self-dispensing sweets to your doorstep. A fondant pie this month, a human-sized tiramisu the next. Mileage will vary depending on the weight of your grant. But it’s simply a prize you’re given for remaining employed until a certain date. On vest day, there is no fanfare—a dull email crashes your inbox and the ganache will ask you to devour it. There’s a mad rush the first time this happens. That chocolate truffle is now laced with caffeine, pumping adrenaline through your veins. But like any drug, your body starts to normalise the effects after a while. You get used to it. You yearn for a higher dose for the same high. You crave that asymptotic high—one you will certainly chase but will never quite reach.
The tech world is inundated with talk about “exits”. Basically, an exit is when you cash out from the game and get to do whatever you want. But I’ve always found this weird. If we’re constantly talking about our desire to leave…doesn’t that tell us something about where we currently are? Doesn’t that imply displeasure with your current location? Where are we trying to go? Is that place inaccessible to us right now?
To its credit, tech douses you with many addictive comforts and rituals to soften any discomfort. You’ve heard about the beer and wine on tap, but do you know about the dizzying array of sparkling waters, smoothies and sunflower seeds? The ridiculously-complicated machine that makes custom mocktails? The free cafe where buzzing baristas will prepare any drink of your imagination. The nap room (yes, you read that right). The library where you never study. The catered lunches with multiple meat, pescatarian, vegetarian and vegan options. The Uber Eats discounts. The shipping room that turns every half-baked adult into a returns specialist. The soft seating and plush poufs that make sure you never rest your bum on bare metal. The salad bar with a devious supply of avocado. Even the bloody stationery was perfect. Those firm-grip-yet-soft-touch pens stare you down and demand you jot down all your bad ideas.
You don’t need me to tell you that these perks are not heartfelt gifts borne of love. I mean, your company doesn’t hate you. The truth is more blasé. A company is a legal construction that was birthed in Delaware. How could it have feelings for you - good or bad? To them, you’re irrelevant in the same refreshing way that you don’t matter to Jupiter. The money, the treats, the perks are the terms of a deal. You bring your smarts, energy and optimism and hopefully, that helps the greater organism to expand. The hope is that the perks keep you at the office long enough to do more work, maybe even better work than if you had to stress about how to ship your ASOS returns.
**In any game, especially if it’s a game worth playing, there is that which is seen and that which is unseen. **The seen stuff shines bright in daylight and everyone can see its lustre**.** Then there’s that which is unseen, undercounted and often ignored. In this little dance between sought-after employee and opulent tech company, the seen cost is quite obvious. The companies shell boatloads of cash to feed, flirt with and seduce employees. The unseen costs lurk underneath the surface and are silently borne by employees. But nobody really talks about this in the open.
Money is supposed to be freeing. It’s supposed to let you imagine better futures that work on your terms. It should bring you closer to living in alignment with your interests and your values. It should not be bondage. For far too many, a high-paying job becomes a prison that drains your life satisfaction and wellbeing. I know several people in this boat. How could they be imprisoned by money, you ask? Well, the covid pandemic was an accelerant. When you strip away all the frills and glitz from your job and are faced with the reality of how you spend your workdays, the outcome is often disappointing. The happy hours, the perks and the lunches medicated you into managing until the weekend. But when your work is one lifeless, inconsequential Zoom call after another, then even the weekend is too far away.
Trapped by the weight of their grants, they lied to themselves about when they would leave. “Oh I’ll just wait for two more vests then I’ll leave”. I’m not judging, I did this very thing myself. Constantly drawing new lines in the sand. Doing all kinds of justification gymnastics to tell myself that it would be worth it in the end. Spreadsheets tried to talk sense to my logical brain, and narratives swayed and seduced my emotional brain. The reality is that it’s difficult to give up money. And that makes sense from an evolutionary perspective. We’re wired to protect ourselves and loved ones. Remember, for most of history, we did not live very long. Surviving the winter was not a foregone conclusion. If you have extra fat or extra grain, your brain wants you to store that for the days of rain, cold and famine. But the issue is your brain is running outdated software. It doesn’t know how to account for the realities of today, the true costs and value of money, your mental health and the other opportunities you have.
**So what do you when your money becomes your prison?** When you’re zooming through life endlessly yearning for weekends that are always too short? When you’ve run out of vices to quell the pain? How do you reckon with the finiteness of your life - you don’t know when you’re going to die so what the hell is the point of accumulating wealth if you hate your present days? Is that future money worth it if you’re gonna be depressed when you’re wealthy?
It is here that I want you to sit for a moment. In this uncomfortable space where you reckon with the value of money in your life. I won’t pretend to know what season you’re in or what your priorities are. You could have huge financial obligations that demand you earn as much as legally possible. Or you might enjoy the corporate game and dream of fishing for bigger salmon in the open seas. Who am I to tell you otherwise?
But I know that there will be others. Others who feel completely disconnected with the stories we tell ourselves about work. Those who continue to excel at work and are rewarded with more work that they hate. Those who recoil at the idea of being called a “resource”. Those who dream of using the totality of their skills at work - not just an engineer, not just a writer, not something that exists on a career ladder. Those who don’t fit squarely into the neatly divided holes that the corporate world prescribes. Those who feel at odds with the options they have, who know that there must be much more out there for them. Those too talented, too inquisitive, too rebellious to sit and twiddle their thumbs with inconsequential work.
Consider the end of your life. If that’s too dark for your taste, consider someone else on their death bed. What do you guess they might regret about their life? Shots they never took? Grudges they held too long? Money they didn’t earn? Work they hadn’t done? Hopefully, you and yours are not dying anytime soon. But research consistently tells us that the dying often regret having the courage to live their authentic life. The life truest to themselves not the life expected by others or dictated by society. Not the one where you’re merely adopting the narrow, singular definitions of success that society prescribes us. So don’t shortchange yourself by endlessly sacrificing the present for a tomorrow you’re not certain you’ll have. Spend your today doing what you were put on this planet to do, or at least trying to figure out what that is.
### II.
“Beach balls always rise to the surface”.
My therapist said these words to me on a random Tuesday. I was taken aback because I live in foggy San Francisco where there are no actual beaches. Instead, there are stretches of sand that present as mirages, tempting us to dip our toes in the water only to be met with frigid cold. So I asked her to explain the concept. She obliged and I felt completely understood.
I tried many things before I decided to leave my job. I used two coaching services ironically provided by my ex-employer. I wrote painfully honest journal entries for my therapist. You know the kind that you try to edit before sending because you’re concerned about what it might say about you? I found a few friends who liked their jobs and badgered them with questions. *“So what do you mean, when you say you enjoy your job?”* I talked about leaving for so long that it had become a running joke with my close friends that it would never happen. I’d be there when the stage curtains were being closed, turning the lights off and drinking the applause as the credits rolled in.
In the summer, I was in denial. I thought changing roles would help mute the pain and propel me forward. Over the years, I’d been told by many people - mentors, coworkers and a manager - that I’d be a great product manager. So one day, I climbed out of my abyss and told my manager that I wanted to pursue that direction. He supported it. Until I reckoned with myself in the shower one day. Would this new role *really* change much? Was I avoiding something? Pursuing a shiny distraction instead of doing the introspection I desperately needed? Setting a new goal and going after it was intoxicating, but I could see myself three months in feeling the same way. I decided not to go down this route.
I learned how to game my own mind. I’d look at the calendar and JIRA board and would see several things I didn’t want to do. So I’d incentivise myself with rewards from other parts of my life. “*It’s Monday so at least you’re playing soccer tonight*”. Just hold it together until the evening and you will feel a bit better. Voila! A future dose of dopamine…something to look forward to. Then I’d wear my corporate makeup and put on a theatrical performance to try to drag myself across the finish line. Often I’d fail. Focus and motivation were at an all-time low. My workdays were simply the hours I spent waiting for things that would give me a little life. What a horrible way to live. Whether it was soccer, or hanging with friends, I tried to stuff the void with things I liked. But all fun things end at some point and that bleak Monday cloud always found a way to stalk, return and hover. It would be a stretch to say I was surviving. I was a tiny notch above functional.
Then, I came to understand what my therapist was saying about beach balls. I’ll spare you the high school physics lesson about upthrusts and pressure. But the gist is this: **no matter how hard you try to submerge a beach ball underwater, it shoots back up**. I had succeeded at temporarily convincing myself to shut up and do the work. To follow the incentives and play the game for the rewards. To sacrifice the short term for future fruits. I had quietened my dissenting voice that yearned for other fruits. Even if I didn’t know what those fruits were, I knew this current thing wasn’t for me. While my logical brain could accept this tradeoff, my soul could not. It flared up in bouts of indifference, inaction, depressive thoughts and general sadness. And this was profoundly uncomfortable for more than a year. When I calculated my expected payouts, I never counted the cost of misery and dejection. Just because those don’t come with price stickers doesn’t make them any less costly.
I had my fears about leaving. I mean, tech companies are decimating entire teams and slowing hiring across the board. And here I was, voluntarily trying to leave my (as-far-as-I-could-tell) secure job. The timing felt odd. But once I took the time to evaluate my fears, I realised that they were mostly unfounded. Fears try to help us protect ourselves from danger, but they are low resolution; they can’t see the real details. At the time, I was concerned about next steps - what I’d do, where I’d go, how much money I’d need, how I’d explain this, how long I’d need before getting a “real job”. But none of those were real concerns; I made simple backup plans in case things got really bad - in case we got into a two-year recession and engineers suddenly became undesirable. And I convinced myself that I could get another tech job next year if I really needed to. So then it became a simple question: could I bet on myself? Could I take a leap of faith to figure out my journey? What would it look like for me to take advantage of this time? Whisper it, but what if I actually thrived on this journey? How could I say no to this opportunity? Saying no would be saying yes to toiling and lingering in the hopeless place with no direction. I couldn’t do that to myself.
Ultimately, I decided to leave because I needed to. For my own survival and because the cost of staying was far higher than I could be paid. My light would be too dimmed, outlook too pessimistic and aura too muted, and to what end?
What’s the point of flying first class on a crashing plane?
# Goodbye to the Dried Office Mangoes

*By Will Gottsegen, The Atlantic*
Google is clawing back its famously lavish employee perks, sending a message that might be more symbolic than practical.
Even as the whole of Silicon Valley grapples with historic inflation, a bank crash, and mass layoffs, Google’s woes stand apart. The explosion of ChatGPT and artificial intelligence more broadly has produced something of an existential crisis for the company, a “code red” moment for the business. “Am I concerned? Yes,” Sundar Pichai, Google’s CEO, told *The New York Times*. But Google employees are encountering another problem: “They took away the dried mango,” says a project manager at Google’s San Francisco office, whom I agreed not to name to protect the employee from reprisal. At least at that office, the project manager said, workers are seeing less of long-cherished food items—not just the mangoes, but also the Maui-onion chips and the fun-size bags of M&Ms.
Cost-cutting measures have gutted some of Google’s famous perks. In a company-wide email last month, Chief Financial Officer Ruth Porat announced rollbacks on certain in-office amenities, including company-sponsored fitness classes, massages, and the availability of so-called microkitchens: pantries stocked with everything from low-calorie pork rinds to spicy Brazilian flower buds. These perks have long been an inextricable part of Google’s culture, even in an industry flush with nap pods and coffee bars—a way to recruit top talent and keep coders happy during long days in the office. “The idea was ‘We’re going to make it so wonderful to be here that you never need to leave,’” Peter Cappelli, a professor of management at the University of Pennsylvania’s Wharton School, told me. “Are they giving up on that idea?”
Google told me they’re still committed to perks, and indeed, the free meals are still around. “As we’ve consistently said, we set a high bar for industry-leading perks, benefits and office amenities, and will continue that into the future,” Google spokesperson Ryan Lamont said in an email. But the cutbacks are seemingly coming at an inopportune time: If there was ever a moment when Google needed to recruit top talent, it’s now. Although overall demand for software engineers has slowed, money and jobs are still flocking to a buzzy new breed of generative AI. OpenAI, after all, makes a point of matching Google’s daily meals and handing out “fresh-baked cookies.” Google’s new attitude toward perks may be an admission of what was true all along: Perks are *perks*, just expendable add-ons. They’re nice to have in the good times but hardly essential in the bad.
The world of HR has long claimed that happy workers are productive workers, but Google treated that idea like a mantra, creating offices that were less like cubicle-packed grids and more like adult playgrounds (complete with in-office slides and rock-climbing walls). As part of what the company refers to as “Googley extras,” it has given employees free yoga and Pilates classes, fully comped team trips, and even once-a-week eyebrow shaping. Other big companies, and even start-ups flush with venture-capital cash, realized that to have a shot at competing for talent, they’d need to start subsidizing the same sort of lifestyle. Massages and macchiatos were just the start: Apple has hosted private concerts with artists such as Stevie Wonder and Maroon 5; Dropcam, a start-up Google bought in 2014 (whose tech it has recently decided to phase out), reportedly offered each employee a free helicopter ride, piloted by the CEO, to a destination of their choosing. Others, such as WeWork, simply handed out tequila around the clock.
The Googley extras aren’t gone, by any means, but they’re no longer guaranteed. Google’s infamous shuttle buses, known to clog San Francisco streets as they ferry employees to and from the office, are running less frequently, and traditional laptops have become a privilege reserved for employees in engineering roles. Everyone else must now make do with slightly wimpier netbooks. Part of this reduction in amenities has to do with the new reality of hybrid work, which has itself become a perk. It makes sense to trim the shuttle-bus schedule if fewer people are taking the bus to work every day. Same goes for the reported reduction in in-office muffins, although understanding the rationale behind the crackdown doesn’t necessarily make it sting any less.
It’s not just Google, either. “My sense is that [perks] are being pulled back broadly,” Cappelli said. “So many public companies feel that they have to look like they’re belt-tightening for investors.” After just a year, Salesforce has abandoned its “Trailblazer Ranch,” a 75-acre retreat meant to host guided nature walks, group cooking classes, sessions for meditation, and “art journaling.” Over at Meta, already a year out from its decision to cancel free laundry and dry-cleaning services, employees are expressing similar frustrations over snacks.
Still, it all cuts a little deeper at Google. That’s in part because Google has taken such care to cement its reputation as the best place in the world to work, the plushest employer in a sea of plush. As any Google employee will insist, the lunches were never as good at Apple or Microsoft. The message is perhaps symbolic as much as practical. Muffins are not a real financial concern for Alphabet, Google’s $1.3 trillion parent company, which could very much still cash in on the new AI boom. But for the company’s workers, it’s not the muffins themselves, but their *absence*, that may end up having the greatest impact. “The way it is conveyed to people matters as much as the perks themselves,” Cappelli said. If an abundance of perks signals care and intention, what might a lack of perks represent? “You’re sending the opposite signal: ‘We don’t really care about you so much, and that’s why we’re taking it away.’”
Flashy perks helped produce an illusion of safety that couldn’t last. Surface-level penny-pinching is ultimately about assuring investors that costs are under control; employees’ annoyance is just part of the bargain. You’ll know your employer really means business when it lays off your whole team. And if Google is willing to cut down on some of its most visible perks just as generative AI threatens to upend its business, then maybe it’s not too concerned about OpenAI outdoing it in the snack department. The end of muffins and dried-mango slices amounts to a gesture more than anything else—a way of reminding current employees that these are lean times, and they should start acting like it.
# Cube Slam
June 2013 | By Google
Cube Slam is a video game that you can play face-to-face against your friends. It’s a Chrome Experiment built using WebRTC, an open web technology that lets you video chat right in the browser without installing any plug-ins. That means you can quickly and easily play Cube Slam with your friends, no matter where they are in the world, just by sharing a link.

To win Cube Slam, hit the cube against your friend’s screen three times until the screen explodes. Shields, obstacles, and gravity fields change with every new level, and you can unlock power-ups including fireballs, lasers, multi-balls, mirrored controls, bulletproof shields, fog, ghost balls, time bombs, resized paddles, extra lives and death balls––though you might want to avoid the death balls. If none of your friends are online, you can always play against Bob the Bear and see what level you can reach. If you install the Cube Slam app, you can even play Bob when you’re offline.

Cube Slam’s graphics are rendered in WebGL and CSS 3D, and its custom soundtrack is delivered dynamically through Web Audio. WebRTC, which enables the two-person game, is available on desktop Chrome and Chrome OS, and will be available on mobile later this year. In the meantime, you can play Cube Slam against Bob the Bear on your phone or tablet.
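For readers curious what the plugin-free connection looks like under the hood, here is a minimal sketch of the WebRTC offer/answer handshake the game depends on. Cube Slam itself runs as JavaScript in the browser; this sketch uses the third-party Python library aiortc purely as a stand-in, the `game-state` channel name is an invented placeholder, and the signaling step (what sharing the game link accomplishes) is left abstract.

```python
# Sketch of the WebRTC handshake behind a shared-link game session
# (assumptions noted above). Requires the third-party aiortc package.
import asyncio
from aiortc import RTCPeerConnection

async def make_offer():
    pc = RTCPeerConnection()
    # A data channel carries game events (paddle moves, ball state) peer-to-peer.
    channel = pc.createDataChannel("game-state")

    @channel.on("open")  # fires only once a remote peer has answered
    def on_open():
        channel.send("hello")

    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    # This SDP blob is what each peer must relay to the other via a signaling
    # service; in Cube Slam, sharing the game URL plays that role.
    print(pc.localDescription.sdp)
    await pc.close()

asyncio.run(make_offer())
```

Since no remote peer ever answers in this sketch, the channel never opens; a real session would deliver the SDP to the second player, apply their answer with `setRemoteDescription`, and only then start exchanging game state.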
# Students of color are getting flagged to their teachers because testing software can’t see them

*By Mitchell Clark, The Verge*
Proctorio, a piece of exam surveillance software designed to keep students from cheating while taking tests, relies on open-source software that has a history of racial bias issues, according to a report by *Motherboard*. The issue was discovered by a student who figured out how the software did facial detection, and discovered that it fails to recognize black faces over half the time.
Proctorio, like other programs of its kind, is designed to keep an eye on students while they’re taking tests. However, many students of color have reported issues getting the software to see their faces, sometimes having to resort to extreme measures to get it to recognize them. This can cause real problems for students: Proctorio will flag them to instructors if it doesn't detect their face.
After anecdotally hearing about these issues, Lucy Satheesan decided to look into the facial detection methods that the software was using. She discovered that it looked and performed identically to OpenCV, an open-source computer vision program that can be used to recognize faces (which has had issues with racial bias in the past). After learning this, she ran tests using OpenCV and a data set designed to validate how well machine vision algorithms deal with diverse faces. According to her second blog post, the results were not good.
Not only did the software fail to recognize black faces more than half the time, it wasn’t particularly good at recognizing faces of any ethnicity — the highest hit rate was under 75 percent. In its report, *Motherboard* contacted a security researcher, who was able to validate both Satheesan’s results and analysis. Proctorio itself also confirms that it uses OpenCV on its licenses page, though it doesn't go into detail about how.
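To make the shape of that experiment concrete, here is a minimal sketch of a detection-rate test built on OpenCV's stock Haar-cascade face detector. The dataset layout is a hypothetical stand-in, and the detector parameters are common defaults rather than Satheesan's actual configuration, which the reports don't spell out.

```python
# Measure how often a face detector finds a face, per demographic group.
# Folder layout and parameters are assumptions; see the note above.
import glob
import cv2

# This cascade file ships with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detection_rate(image_paths):
    """Fraction of readable images in which at least one face is detected."""
    total, hits = 0, 0
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if gray is None:  # skip unreadable files
            continue
        total += 1
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:  # each benchmark image contains exactly one face
            hits += 1
    return hits / total if total else 0.0

# Hypothetical layout: one folder per demographic group in a face benchmark.
for group_dir in sorted(glob.glob("dataset/*/")):
    rate = detection_rate(glob.glob(group_dir + "*.jpg"))
    print(f"{group_dir}: {rate:.1%} of faces detected")
```

A per-group report like this is exactly where skewed hit rates, such as black faces being detected less than half the time, become visible.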
In a statement to *Motherboard*, a Proctorio spokesperson said that Satheesan’s tests prove the software only looks to detect faces, not recognize the identities associated with them. While that may be a (small) comfort for students who may rightly be worried about privacy issues related to proctoring software, it doesn't address the accusations of racial bias at all.
This isn’t the first time Proctorio has been called out for failing to recognize diverse faces: the issues that it caused students of color were cited by one university as a reason why it would not renew its contract with the company. Senator Richard Blumenthal (D-CT) even called out the company when talking about bias in proctoring software.
While racial bias in code is nothing new, it’s especially distressing to see it affecting students who are just trying to do their school work, especially in a year where remote learning is the only option available to some.
# Intel Skylake & Broxton To Require Graphics Firmware Blobs

*By Michael Larabel, Phoronix*
Intel's upcoming Skylake and Broxton hardware will require some binary-only firmware blobs by the i915 DRM kernel graphics driver.
Rodrigo Vivi of Intel's Open-Source Technology Center sent in the pull request for landing these binary files into the **linux-firmware** repository. Up to now there have been no i915 blobs within the linux-firmware tree.

These first i915 DRM firmware blobs are for Skylake and Broxton, covering the GuC and the DMC. DMC in this context is the Display Microcontroller, which is present in Skylake (Gen9) and newer and is used within the display engine to save and restore its state when entering low-power states and then resuming. The DMC is basically saving/restoring display registers across low-power states, separate from the kernel.

The GuC engine on Skylake is responsible for workload scheduling on the parallel graphics engines. Intel explained on 01.org, "GuC is designed to perform graphics workload scheduling on the various graphics parallel engines. In this scheduling model, host software submits work through one of the 256 graphics doorbells and this invokes the scheduling operation on the appropriate graphics engine. Scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress and notifying host SW when work is done." The same page also seems to indicate that these firmware blobs are *required* by the DRM driver rather than being an optional add-on.

The license of these firmware blobs also indicates that redistribution is only allowed in binary form without modification. Beyond that, "no reverse engineering, decompilation, or disassembly of this software is permitted."

These new firmware blobs will certainly leave some open-source enthusiasts less excited about Skylake, Broadwell's successor that begins shipping later this year; Broxton, meanwhile, is the new Atom SoC built on the Goldmont architecture and will feature Skylake graphics. If there's any good news in the situation, it's that Intel is at least shipping these firmware files early, unlike NVIDIA, which still hasn't released the GTX 900 Maxwell firmware files needed by the Nouveau driver to provide open-source hardware acceleration for its months-old hardware. AMD also tends to be timely in releasing the necessary binary-only GPU firmware files for its open-source Linux driver.
# From Radio to Porn, British Spies Track Web Users’ Online Identities

*By Ryan Gallagher, The Intercept*
There was a simple aim at the heart of the top-secret program: Record the website browsing habits of “every visible user on the internet.”
Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media, and news websites, search engines, chat forums, and blogs.
The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ.
The revelations about the scope of the British agency’s surveillance are contained in documents obtained by *The Intercept* from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
Amid a renewed push from the U.K. government for more surveillance powers, more than two dozen documents disclosed today by *The Intercept* reveal for the first time several major strands of GCHQ’s existing electronic eavesdropping capabilities.
One system builds profiles showing people’s web browsing histories. Another analyzes instant messenger communications, emails, Skype calls, text messages, cellphone locations, and social media interactions. Separate programs were built to keep tabs on “suspicious” Google searches and usage of Google Maps.
The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails, and internet browsing logs of Brits, Americans, and any other citizens — all without a court order or judicial warrant.
Metadata reveals information about a communication — such as the sender and recipient of an email, or the phone numbers someone called and at what time — but not the written content of the message or the audio of the call.
As of 2012, GCHQ was storing about 50 billion metadata records about online communications and web browsing activity every day, with plans in place to boost capacity to 100 billion daily by the end of that year. The agency, under cover of secrecy, was working to create what it said would soon be the biggest government surveillance system anywhere in the world.
### Radio radicalization
The power of KARMA POLICE was illustrated in 2009, when GCHQ launched a top-secret operation to collect intelligence about people using the internet to listen to radio shows.
The agency used a sample of nearly 7 million metadata records, gathered over a period of three months, to observe the listening habits of more than 200,000 people across 185 countries, including the U.S., the U.K., Ireland, Canada, Mexico, Spain, the Netherlands, France, and Germany.
A summary report detailing the operation shows that one aim of the project was to research “potential misuse” of internet radio stations to spread radical Islamic ideas.
GCHQ spies from a unit known as the Network Analysis Center compiled a list of the most popular stations that they had identified, most of which had no association with Islam, like France-based Hotmix Radio, which plays pop, rock, funk, and hip-hop music.
They zeroed in on any stations found broadcasting recitations from the Quran, such as a popular Iraqi radio station and a station playing sermons from a prominent Egyptian imam named Sheikh Muhammad Jebril. They then used KARMA POLICE to find out more about these stations’ listeners, identifying them as users on Skype, Yahoo, and Facebook.
The summary report says the spies selected one Egypt-based listener for “profiling” and investigated which other websites he had been visiting. Surveillance records revealed the listener had viewed the porn site Redtube, as well as Facebook; Yahoo; YouTube; Google’s blogging platform, Blogspot; the photo-sharing site Flickr; a website about Islam; and an Arab advertising site.
GCHQ’s documents indicate that the plans for KARMA POLICE were drawn up between 2007 and 2008. The system was designed to provide the agency with “either (a) a web browsing profile for every visible user on the internet, or (b) a user profile for every visible website on the internet.”
The origin of the surveillance system’s name is not discussed in the documents. But KARMA POLICE is also the name of a popular song released in 1997 by the Grammy Award-winning British band Radiohead, suggesting the spies may have been fans.
A verse repeated throughout the hit song includes the lyric, “This is what you’ll get, when you mess with us.”
### The Black Hole
GCHQ vacuums up the website browsing histories using “probes” that tap into the international fiber-optic cables that transport internet traffic across the world.
A huge volume of the internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis.
Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” — a term the agency uses to refer to metadata records — with about 10 billion new entries added every day.
As of March 2009, the largest slice of data Black Hole held — 41 percent — was about people’s internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the internet anonymously.
Throughout this period, as smartphone sales started to boom, the frequency of people’s internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data.
By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.”
GCHQ is able to identify a particular person’s website browsing habits by pulling out the raw data stored in repositories like Black Hole and then analyzing it with a variety of systems that complement each other.
KARMA POLICE, for instance, works by showing the IP addresses of people visiting websites. IP addresses are unique identifiers that are allocated to computers when they connect to the internet.
In isolation, IPs would not be of much value to GCHQ, because they are just a series of numbers — like 195.92.47.101 — and are not attached to a name. But when paired with other data they become a rich source of personal information.
To find out the identity of a person or persons behind an IP address, GCHQ analysts can enter the series of numbers into a separate system named MUTANT BROTH, which is used to sift through data contained in the Black Hole repository about vast amounts of tiny intercepted files known as cookies.
Cookies are automatically placed on computers to identify and sometimes track people browsing the internet, often for advertising purposes. When you visit or log in to a website, a cookie is usually stored on your computer so that the site recognizes you. It can contain your username or email address, your IP address, and even details about your login password and the kind of internet browser you are using — like Google Chrome or Mozilla Firefox.
For GCHQ, this information is incredibly valuable. The agency refers to cookies internally as “target detection identifiers” or “presence events” because of how they help it monitor people’s internet use and uncover online identities.
If the agency wants to track down a person’s IP address, it can enter the person’s email address or username into MUTANT BROTH to attempt to find it, scanning through the cookies that come up linking those identifiers to an IP address. Likewise, if the agency already has the IP address and wants to track down the person behind it, it can use MUTANT BROTH to find email addresses, usernames, and even passwords associated with the IP.
Once the agency has corroborated a targeted person’s IP address with an email address or username, it can then use the tiny cookie files associated with these identifiers to perform a so-called pattern of life analysis showing the times of day and locations at which the person is most active online.
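To make the linking technique concrete, here is a toy sketch with entirely invented data. It shows only the shape of the two lookups described, correlating an identifier with IP addresses and bucketing activity by hour, and says nothing about GCHQ's actual systems.

```python
# Toy illustration of cookie-based identity linking (all data invented).
from collections import defaultdict

# Each intercepted-cookie record: (hour_of_day, ip_address, identifier).
intercepted_cookies = [
    (9, "195.92.47.101", "alice@example.org"),
    (13, "195.92.47.101", "alice@example.org"),
    (22, "81.2.69.160", "bob1984"),
]

def ips_for_identifier(records, identifier):
    """All IP addresses a given username or email has been seen from."""
    return {ip for _, ip, ident in records if ident == identifier}

def activity_by_hour(records, identifier):
    """Crude 'pattern of life': how often the identifier appears per hour."""
    hours = defaultdict(int)
    for hour, _, ident in records:
        if ident == identifier:
            hours[hour] += 1
    return dict(hours)

print(ips_for_identifier(intercepted_cookies, "alice@example.org"))
# {'195.92.47.101'}
print(activity_by_hour(intercepted_cookies, "alice@example.org"))
# {9: 1, 13: 1}
```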
In turn, the usernames and email and IP addresses can be entered into other systems that enable the agency to spy on the target’s emails, instant messenger conversations, and web browsing history. All GCHQ needs is a single identifier — a “selector,” in agency jargon — to follow a digital trail that can reveal a vast amount about a person’s online activities.
A top-secret GCHQ document from March 2009 reveals the agency has targeted a range of popular websites as part of an effort to covertly collect cookies on a massive scale. It shows a sample search in which the agency was extracting data from cookies containing information about people’s visits to the adult website YouPorn, search engines Yahoo and Google, and the Reuters news website.
Other websites listed as “sources” of cookies in the 2009 document (see below) are Hotmail, YouTube, Facebook, Reddit, WordPress, Amazon, and sites operated by the broadcasters CNN, BBC, and the U.K.’s Channel 4.
In one six-month period between December 2007 and June 2008, the document says, more than 18 billion records from cookies and other similar identifiers were accessible through MUTANT BROTH.
The data is searched by GCHQ analysts in a hunt for behavior online that could be connected to terrorism or other criminal activity. But it has also served a broader and more controversial purpose — helping the agency hack into European companies’ computer networks.
In the lead-up to its secret mission targeting Netherlands-based Gemalto, the largest SIM card manufacturer in the world, GCHQ used MUTANT BROTH in an effort to identify the company’s employees so it could hack into their computers.
The system helped the agency analyze intercepted Facebook cookies it believed were associated with Gemalto staff located at offices in France and Poland. GCHQ later successfully infiltrated Gemalto’s internal networks, stealing encryption keys produced by the company that protect the privacy of cellphone communications.
Similarly, MUTANT BROTH proved integral to GCHQ’s hack of Belgian telecommunications provider Belgacom. The agency entered IP addresses associated with Belgacom into MUTANT BROTH to uncover information about the company’s employees. Cookies associated with the IPs revealed the Google, Yahoo, and LinkedIn accounts of three Belgacom engineers, whose computers were then targeted by the agency and infected with malware.
The hacking operation resulted in GCHQ gaining deep access into the most sensitive parts of Belgacom’s internal systems, granting British spies the ability to intercept communications passing through the company’s networks.
### Cryptome surveillance
In March, a U.K. parliamentary committee published the findings of an 18-month review of GCHQ’s operations and called for an overhaul of the laws that regulate the spying. The committee raised concerns about the agency gathering what it described as “bulk personal datasets” being held about “a wide range of people.” However, it censored the section of the report describing what these “datasets” contained, despite acknowledging that they “may be highly intrusive.”
The Snowden documents shine light on some of the core GCHQ bulk data-gathering programs that the committee was likely referring to — pulling back the veil of secrecy that has shielded some of the agency’s most controversial surveillance operations from public scrutiny.
KARMA POLICE and MUTANT BROTH are among the key bulk collection systems. But they do not operate in isolation — and the scope of GCHQ’s spying extends far beyond them.
The agency operates a bewildering array of other eavesdropping systems, each serving its own specific purpose and designated a unique code name, such as:

- SOCIAL ANTHROPOID, which is used to analyze metadata on emails, instant messenger chats, social media connections and conversations, plus “telephony” metadata about phone calls, cellphone locations, and text and multimedia messages.
- MEMORY HOLE, which logs queries entered into search engines and associates each search with an IP address.
- MARBLED GECKO, which sifts through details about searches people have entered into Google Maps and Google Earth.
- INFINITE MONKEYS, which analyzes data about the usage of online bulletin boards and forums.
GCHQ has other programs that it uses to analyze the content of intercepted communications, such as the full written body of emails and the audio of phone calls. One of the most important content collection capabilities is TEMPORA, which mines vast amounts of emails, instant messages, voice calls, and other communications and makes them accessible through a Google-style search tool named XKEYSCORE.
As of September 2012, TEMPORA was collecting “more than 40 billion pieces of content a day” and it was being used to spy on people across Europe, the Middle East, and North Africa, according to a top-secret memo outlining the scope of the program. The existence of TEMPORA was first revealed by *The Guardian* in June 2013.
To analyze all of the communications it intercepts and to build a profile of the individuals it is monitoring, GCHQ uses a variety of different tools that can pull together all of the relevant information and make it accessible through a single interface.
SAMUEL PEPYS is one such tool, built by the British spies to analyze both the content and metadata of emails, browsing sessions, and instant messages as they are being intercepted in real time.
One screenshot of SAMUEL PEPYS in action shows the agency using it to monitor an individual in Sweden who visited a page about GCHQ on the U.S.-based anti-secrecy website Cryptome.
### Domestic spying
Partly due to the U.K.’s geographic location — situated between the United States and the western edge of continental Europe — a large amount of the world’s internet traffic passes through its territory across international data cables.
In 2010, GCHQ noted that what amounted to “25 percent of all internet traffic” was transiting the U.K. through some 1,600 different cables. The agency said that it could “survey the majority of the 1,600” and “select the most valuable to switch into our processing systems.”
Many of the cables flow deep under the Atlantic Ocean from the east coast of the U.S., landing on the white-sand beaches of Cornwall in the southwest of England. Others transport data between the U.K. and countries including France, Belgium, Germany, the Netherlands, Denmark, and Norway by crossing below the North Sea and coming aground at various locations on England’s east coast.
According to Joss Wright, a research fellow at the University of Oxford’s Internet Institute, tapping into the cables allows GCHQ to monitor a large portion of foreign communications. But the cables also transport masses of wholly domestic British emails and online chats, because when anyone in the U.K. sends an email or visits a website, that person’s computer will routinely send and receive data from servers that are located overseas.
“I could send a message from my computer here [in England] to my wife’s computer in the next room, and on its way it could go through the U.S., France, and other countries,” Wright said. “That’s just the way the internet is designed.”
In other words, Wright adds, that means “a lot” of British data and communications transit across international cables daily, and are liable to be swept into GCHQ’s databases.
GCHQ is authorized to conduct dragnet surveillance of the international data cables through so-called external warrants that are signed off on by a government minister.
The external warrants permit the agency to monitor communications in foreign countries as well as British citizens’ international calls and emails — for example, a call from Islamabad to London. They prohibit GCHQ from reading or listening to the content of “internal” U.K. to U.K. emails and phone calls, which are supposed to be filtered out from GCHQ’s systems if they are inadvertently intercepted unless additional authorization is granted to scrutinize them.
However, the same rules do not apply to metadata. A little-known loophole in the law allows GCHQ to use external warrants to collect and analyze bulk metadata about the emails, phone calls, and internet browsing activities of British people, citizens of closely allied countries, and others, regardless of whether the data is derived from domestic U.K. to U.K. communications and browsing sessions or otherwise.
In March, the existence of this loophole was quietly acknowledged by the U.K. parliamentary committee’s surveillance review, which stated in a section of its report that “special protection and additional safeguards” did not apply to metadata swept up using external warrants and that domestic British metadata could therefore be lawfully “returned as a result of searches” conducted by GCHQ.
Perhaps unsurprisingly, GCHQ appears to have readily exploited this obscure legal technicality. Secret policy guidance papers issued to the agency’s analysts instruct them that they can sift through huge troves of indiscriminately collected metadata records to spy on anyone regardless of nationality. The guidance makes clear that there is no exemption or extra privacy protection for British people or citizens from countries that are members of the Five Eyes, a surveillance alliance that includes the U.K., as well as the U.S., Canada, Australia, and New Zealand.
“If you are searching a purely Events only database such as MUTANT BROTH, the issue of location does not occur,” states one internal GCHQ policy document, which is marked with a “last modified” date of July 2012. The document adds that analysts are free to search the databases for British metadata “without further authorization” by inputting a U.K. “selector,” meaning a unique identifier such as a person’s email or IP address, username, or phone number.
Authorization is “not needed for individuals in the U.K.,” another GCHQ document explains, because metadata has been judged “less intrusive than communications content.” All the spies are required to do to mine the metadata troves is write a short “justification” or “reason” for each search they conduct and then click a button on their computer screen.
Intelligence GCHQ collects on British persons of interest is shared with the domestic security agency MI5, which usually takes the lead on spying operations within the U.K. MI5 conducts its own extensive domestic surveillance as part of a program called DIGINT (digital intelligence).
GCHQ’s documents suggest that it typically retains metadata for periods between 30 days and six months. It stores the content of communications for a shorter period of time, varying from three to 30 days. The retention periods can be extended if deemed necessary for “cyberdefense.”
One secret policy paper dated January 2010 lists the wide range of information the agency classes as metadata — including location data that could be used to track your movements; your email, instant messenger, and social networking “buddy lists”; logs showing who you have communicated with by phone or email; the passwords you use to access “communications services” (such as an email account); and information about websites you have viewed.
Records showing the full website addresses you have visited — for instance, www.gchq.gov.uk/what_we_do — are treated as content. But the first part of an address you have visited — for instance, www.gchq.gov.uk — is treated as metadata.
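That content/metadata line maps directly onto the structure of a URL, as a quick sketch with Python's standard library shows, using the article's own example address:

```python
# Splitting a visited URL into the part treated as metadata (the host)
# and the part that makes the record content (the full path).
from urllib.parse import urlparse

parsed = urlparse("https://www.gchq.gov.uk/what_we_do")
print(parsed.netloc)  # 'www.gchq.gov.uk' -> classed as metadata
print(parsed.path)    # '/what_we_do'     -> the detail that makes it content
```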
In isolation, a single metadata record of a phone call, email, or website visit may not reveal much about a person’s private life, according to Ethan Zuckerman, director of the Center for Civic Media at Massachusetts Institute of Technology.
But if accumulated and analyzed over a period of weeks or months, these details would be “extremely personal,” he told *The Intercept*, because they could reveal a person’s movements, habits, religious beliefs, political views, relationships, and even sexual preferences.
For Zuckerman, who has studied the social and political ramifications of surveillance, the most concerning aspect of large-scale government data collection is that it can be “corrosive toward democracy” — leading to a chilling effect on freedom of expression and communication.
“Once we know there’s a reasonable chance that we are being watched in one fashion or another, it’s hard for that not to have a ‘panopticon effect,’” he said, “where we think and behave differently based on the assumption that people may be watching and paying attention to what we are doing.”
### Light oversight
A GCHQ spokesperson declined to answer any specific questions for this story, citing a “long-standing policy” not to comment on intelligence matters. The spokesperson insisted in an emailed statement that GCHQ’s work is “carried out in accordance with a strict legal and policy framework, which ensures that our activities are authorized, necessary, and proportionate, and that there is rigorous oversight.”
It is unclear, however, whether in practice there are sufficient internal checks in place to ensure GCHQ’s spies don’t abuse their access to the troves of personal information.
According to the agency’s documents, just 10 percent of its “targeting” of individuals for surveillance is audited annually, and a random selection of metadata searches are audited every six months.
When compared to surveillance rules in place in the U.S., GCHQ notes in one document, the U.K. has “a light oversight regime.”
The more lax British spying regulations are reflected in secret internal rules that highlight greater restrictions on how NSA databases can be accessed. The NSA’s troves can be searched for data on British citizens, one document states, but they cannot be mined for information about Americans or other citizens from countries in the Five Eyes alliance.
No such constraints are placed on GCHQ’s own databases, which can be sifted for records on the phone calls, emails, and internet usage of Brits, Americans, and citizens from any other country.
The scope of GCHQ’s surveillance powers explain in part why Snowden told *The Guardian* in June 2013 that U.K. surveillance is “worse than the U.S.” In an interview with *Der Spiegel* in July 2013, Snowden added that British internet cables were “radioactive” and joked, “Even the Queen’s selfies to the pool boy get logged.”
In recent years, the biggest barrier to GCHQ’s mass collection of data does not appear to have come in the form of legal or policy restrictions. Rather, it is the increased use of encryption technology that protects the privacy of communications that has posed the biggest potential hindrance to the agency’s activities.
“The spread of encryption … threatens our ability to do effective target discovery/development,” says a top-secret report co-authored by an official from the British agency and an NSA employee in 2011.
“Pertinent metadata events will be locked within the encrypted channels and difficult, if not impossible, to prise out,” the report says, adding that the agencies were working on a plan that would “(hopefully) allow our Internet Exploitation strategy to prevail.”
*Documents published with this article*:
- TDI Introduction
- TINT External July 2009
- Social Anthropoid Briefing
- Sensitive Targeting Authorisation
- QFD BLACKHOLE Technology Behind INOC
- Pull Steering Group Minutes
- Access: Vision 2013
- Op Highland Fling Event Log
- Operational Engineering November 2010
- NGE BLACK HOLE ConOp
- Next Generation Events
- Events Analysis
- Legalities
- JCE UK Legalities Context
- HRA Auditing
- GCHQ Analytic Cloud Challenges
- Events
- Demystifying NGE Rock Ridge
- Data Stored in BLACK HOLE
- Cyber Defence Operations Legal Policy
- Crypt Discovery Activity
- Content-Metadata Matrix
- Cloud Developers Exchange July 2011
- Broadcast Analysis
- Blazing Saddles Tools
- Architecture Risk 2012
- ADD SD BLACK HOLE
- 200G Iris Access
# Akrotiri (prehistoric city)
| | |
|---|---|
| Location | Santorini, Greece |
| Region | Aegean Sea |
| Coordinates | 36°21′05″N 25°24′13″E / 36.35139°N 25.40361°E |
| Type | Settlement |
| **History** | |
| Founded | c. 5000–4001 BCE |
| Abandoned | 16th century BCE |
| Cultures | Cycladic |
| Events | Theran eruption |
| **Site notes** | |
| Excavation dates | since 1967 |
| Condition | Ruins |
**Akrotiri** (Greek: Ακρωτήρι, pronounced [akroˈtiri]) is the site of a Cycladic Bronze Age settlement on the volcanic Greek island of Santorini (Thera). The name comes from the nearby village of Akrotiri.
The settlement was destroyed in the Theran eruption sometime in the 16th century BCE[2] and buried in volcanic ash, which preserved the remains of fine frescoes and many objects and artworks. Akrotiri has been excavated since 1967 after earlier excavations on Santorini.
## History
The earliest evidence for human habitation of Akrotiri can be traced back as early as the fifth millennium BCE, when it was a small fishing and farming village. By the end of the third millennium, this community developed and expanded significantly. One factor for Akrotiri's growth may be the trade relations it established with other cultures in the Aegean, as evidenced by fragments of foreign pottery at the site. Akrotiri's strategic position on the primary sailing route between Cyprus and Minoan Crete also made it an important point for the copper trade,[3] thus allowing it to become an important centre for processing copper, as proven by the discovery of moulds and crucibles there. Akrotiri's prosperity continued for about another 500 years; paved streets, an extensive drainage system, the production of high-quality pottery and further craft specialization all point to the level of sophistication achieved by the settlement.
This all came to an end, however, in the 16th century BCE with the volcanic eruption of Thera. There is a variety of dating evidence for the eruption, but its exact year is not known. Radiocarbon dating places it most probably between 1620 and 1530 BCE, which is also in accord with the date range of 1570 to 1500 BCE suggested by similarities of the material culture with other sites in the Aegean. Unusual growth patterns observed in tree rings in 1597, 1560, 1546 and 1544 BCE are consistent with a major volcanic event in any of those years. The latter three dates might be the best candidates as they are also considered possible for Egyptian New Kingdom records that are thought to refer to the eruption.[2]
## Cycladic settlement
The Akrotiri excavation site is of a Cycladic cultural settlement on the Greek island of Santorini, associated with the Minoan civilization due to inscriptions in Linear A, and close similarities in artifact and fresco styles.[4] The excavation is named for a modern village situated on a hill nearby. The name of the site in antiquity is unknown.
Akrotiri was buried by the massive Theran eruption in the middle of the second millennium BCE[5] (during the Late Minoan IA period); as a result, like the Roman ruins of Pompeii after it, it is remarkably well-preserved. Frescoes,[6] pottery, furniture, advanced drainage systems and three-story buildings have been discovered at the site.[7]
### Excavations
The earliest excavations on the island of Santorini were conducted by French geologist F. Fouque in 1867 after some local people found old artifacts at a quarry. Later, in 1895–1900, the digs by German archeologist Baron Friedrich Hiller von Gaertringen revealed the ruins of ancient Thera on Mesa Vouno, which date from the archaic period, long after the Minoan eruption.[8] A little later, R. Zahn excavated in the locality of Potamos, near Akrotiri, under the auspices of the German Archaeological Institute at Athens.
The extensive modern excavation was started in 1967 by Spyridon Marinatos and revealed the full value of the site. Marinatos's choice of location proved correct: just a few hours into the excavation, the remains of the buried city began to be discovered. The next step was to determine the extent of the city, which took two whole seasons of work at the site, in 1967 and 1968. In the early years of the excavation, a great deal of attention was paid to organizing proper facilities for the dig, including substantial workshops, laboratories for storage, repair and treatment, and areas for examination of archaeological finds.[9] Because the site was preserved in thick volcanic debris, many of the buildings survived to a height of more than a single story, creating unique challenges for excavation; Marinatos experimented with tunnelling into the pumice, but this technique was later abandoned.
In 1975, after Marinatos' death, Christos Doumas took over as the head of excavations.
Excavated artifacts have been installed in a museum distant from the site (Museum of Prehistoric Thera), with many objects and artworks presented. Only a single gold object has been found, hidden beneath flooring, and no uninterred human skeletal remains have been found. This indicates that an orderly evacuation was performed with little or no loss of life.
In 2005, a new roof structure meant to protect the site collapsed just before its completion, killing one visitor.[1][10]
The collapse caused no damage to the antiquities.[1][13] The site was closed to visitors between 2005 and April 2012,[1][10][11] and excavations halted for lack of funding between 2005 and 2016, when they resumed with sponsor support.[1][12]
In October 2018, a small shrine with a marble figurine of a woman was discovered in the "House of the Thrania" which is located near Xeste 3, where a golden goat was found in 1999.[14]
## Frescoes
The frescoes in Akrotiri are especially important for the study of Minoan art because they are much better preserved than those that were already known from Knossos and other sites on Crete, which have nearly all survived only in small fragments, usually fallen to the ground.
All of the pigments used by the artists at Akrotiri for painting the frescoes appear to be mineral based, which accounts for the excellent preservation of the pieces. The colors used in Theran painting include white, yellow, red, brown, blue and black. The technique used is not true fresco, except in a few isolated instances; rather, painting seems to have begun while the plaster was still wet, with the artist making no effort to keep it wet and content to complete the work on a dry surface. As a result, often on the same fresco, the paint has penetrated the plaster in some areas but flakes off easily in others.
Specialized techniques were required once it was discovered, early in the excavation process, that the site contained numerous well-preserved fresco wall paintings. Tassos Margaritoff, one of the leading restorers of Byzantine frescoes, has been the supervisor of the Akrotiri project.
The first fragments of fresco were discovered in 1968 in Sector Alpha; they depict the head of an African, the head of a blue monkey and some large flying blue birds.[9]
In 1969, the fresco of the Blue Monkeys was discovered in Room Beta 6, generating great excitement at the site.[9] The rocky landscape the monkeys are depicted climbing resembles the volcanic rocks still found near the site today.
In 1970, the Spring Fresco was uncovered in Room Delta 2. It is the first fresco to have been found perfectly preserved and still standing in its original position.[9] The supporting wall of the fresco was not in the best condition, and the fresco therefore had to be removed immediately in order to preserve it. Rescuing the fresco was a delicate procedure that allowed the archaeologists and restorers to develop invaluable experience.
A few other frescoes including The Fisherman and the Lady from the House of Ladies have been found standing, though detached from the wall.
## Artifacts
The excavations at Akrotiri have produced a large variety of artifacts, revealing numerous varieties of Late Cycladic (LC I) pottery from the area. Pottery is the most common and most enduring commodity in the culture of the majority of ancient societies and is thus of great importance to archaeologists in interpreting ancient Greek societies. At Akrotiri, pottery is particularly abundant because of the circumstances surrounding the demise of the town: its sudden evacuation meant that inhabitants were only able to take their most valuable possessions with them.[9]
Serving a multitude of purposes, pottery can tell a great deal about the society in which it was produced. Large jars were used as containers for storage of goods, while others like stirrup jars were designed for the transportation of certain commodities. There were also vessels for preparing and cooking food, for eating and drinking, and for many other purposes, including bathtubs, braziers, oil lamps, beehives and flower pots. Most evidently, the shape, size and perhaps even the decoration of the vases were closely related to their use in the ancient world.[9]
As regards furniture, the volcanic ash that engulfed the city often penetrated the houses in large quantities, and these layers of fine volcanic dust preserved negatives of the wooden objects that had disintegrated. Using these negatives as molds, liquid plaster of Paris can be poured in to produce casts of parts, or even of entire pieces of furniture such as beds, tables, chairs or stools. Offering tables are among the most common finds at Akrotiri; they were either made of clay or coated with plaster, were decorated in the same technique as the wall paintings, and consisted simply of three highly decorated legs and a top.[9]
## Connecting path
There is a path descending from the first houses of the modern settlement to the parking lot of the excavations, connecting the town of Akrotiri to the old excavation site. The path was signposted and reopened in September 2012 and now undergoes regular maintenance thanks to international volunteers. The local population was the first supporter of this initiative and remains in charge of the path's upkeep, working alongside the volunteers. The path is suitable for mountain biking, hiking and many other activities.
## See also
- Castle of Akrotiri, a ruined castle in Akrotiri, Santorini
- Akrotiri, Santorini, a village located north of the ancient settlement
- *Summer Lovers*, a 1982 Randal Kleiser film with scenes filmed at Akrotiri
- Akrotiri Boxer Frescoes, one of the frescoes at Akrotiri
- List of Aegean Frescoes
## References
1. Coverage of the 2005 roof collapse and its aftermath:
   - "Disaster on Santorini". *Kathimerini*, 24 September 2005. Archived from the original on 2005-10-04. Retrieved 3 November 2023.
   - "Greek archaeological site roof collapses". *Times of Malta*, 24 September 2005. Retrieved 3 November 2023.
   - "Body of tourist found after roof collapse". *Irish Examiner*, 24 September 2005. Retrieved 3 November 2023.
   - "Man dies in Greece ruin cave-in". *news.bbc.co.uk*, 26 September 2005. Archived from the original on 2021-02-01. Retrieved 3 November 2023.
   - "About Akrotiri accident". *Greeka*, 18 October 2005. Retrieved 3 November 2023.
   - "Akrotiri roof was 'overloaded'". *Kathimerini*, 12 January 2006. Retrieved 3 November 2023.
   - "Roof of Akrotiri will soon be replaced". *Greeka*, 23 November 2010. Retrieved 3 November 2023.
   - "Trial for fatal Santorini roof collapse to begin next week". *Kathimerini*, 3 March 2011. Retrieved 3 November 2023.
   - "Greece reopens Bronze Age site on Santorini island". *Reuters*, 11 April 2012. Retrieved 3 November 2023.
   - Hawdon-Earl, Sarah; Tsavdaridis, Konstantinos Daniel (2018). "Form Finding and Dimensioning of Reinforced Concrete Shell Roof for Akrotiri (Santorini)". *Journal of the International Association for Shell and Spatial Structures* 59(4): 276–285. doi:10.20898/j.iass.2018.198.014.
2. Pearson, Charlotte; Brewer, Peter; Brown, David; Heaton, Timothy; Hodgins, Gregory; Jull, Timothy; Lange, Todd; Salzer, Matthew (2018). "Annual radiocarbon record indicates 16th century BCE date for the Thera eruption". *Science Advances* 4(8): eaar8241. doi:10.1126/sciadv.aar8241. PMC 6093623. PMID 30116779.
3. Knappett, Carl; Evans, Tim; Rivers, Ray (2008). "Modeling Maritime Interactions in the Aegean Bronze Age". *Antiquity* 82(318): 1009–1024 [p. 1020]. doi:10.1017/S0003598X0009774X.
4. Doumas, Christos G. (1983). *Thera – Pompeii of the Ancient Aegean*. London.
5. McCoy, Floyd W.; Heiken, Grant (2000). *Volcanic Hazards and Disasters in Human Antiquity*.
6. Doumas, Christos G. (1991). *The Wall Paintings of Thera*. Athens.
7. Hogan, C. Michael. "Akrotiri". The Modern Antiquarian.
8. Centro Universitario Europeo per i Beni Culturali di Ravello (2005). *Ancient Buildings and Earthquakes: The Local Seismic Culture Approach*. Edipuglia srl. ISBN 8872284031.
9. Doumas, Christos (1983). *Thera: Pompeii of the Ancient Aegean*. New York: Thames and Hudson.
10. "Ancient Akrotiri reopened to visitors". *The Greek Island Specialists*, 11 April 2012. Archived from the original on 30 May 2016. Retrieved 14 May 2016.
11. "Άνοιξε ο αρχαιολογικός χώρος του Ακρωτηρίου Σαντορίνης" ["The archaeological site of Akrotiri on Santorini has opened"]. *Ta Nea*, 10 April 2012. Archived 13 April 2012 at the Wayback Machine.
12. "Santorini: Dreams do Come True Sometimes".
13. "Trial for fatal Santorini roof collapse to begin next week: Twelve people to appear in court in connection to the death of one tourist and damage to ancient site". *Kathimerini*, 3 March 2011; archived by WebCite. Those charged with criminal negligence were charged with "damaging a monument".
14. "Santorini Excavation Brings to Light Impressive New Findings". *Greek Reporter*, 12 October 2018.
## Further reading
- Doumas, Christos G. (1983). *Thera, Pompeii of the Ancient Aegean: Excavations at Akrotiri 1967–1979*. London: Thames and Hudson.
- Morgan, Lyvia (1988). *The Miniature Wall Paintings of Thera: A Study in Aegean Culture and Iconography*. New York: Cambridge University Press. ISBN 0521247276.
## External links
- Media related to **Akrotiri** at Wikimedia Commons
- "TimeTable". *ktel-santorini.gr*. KTEL Buses. Retrieved 3 November 2023.
- "Akrotiri of Thera". *Odysseus: Archaeological Sites*. Greek government, Ministry of Culture and Sports, 2012. Archived from the original on 2007-09-03. Retrieved 2007-09-02.
- Akrotiri on the Visit Santorini tourist information website
# Bank of England's chief economist warns A.I. could threaten ‘large’ amount of jobs

Ryan Browne, CNBC, 20 August 2018
https://www.cnbc.com/2018/08/20/bank-of-england-haldane-ai-could-threaten-large-amount-of-jobs.html
The Bank of England's Chief Economist Andy Haldane warned on Monday that the rise of artificial intelligence (AI) threatens to replace a huge number of jobs.
Haldane said that the so-called Fourth Industrial Revolution — a digitally-driven paradigm shift similar to previous industrial revolutions in the West — had the potential to displace numerous jobs and leave people "technologically unemployed."
"Each of those [industrial revolutions] had a wrenching and lengthy impact on the jobs market, on the lives and livelihoods of large swathes of society," Haldane told the BBC.
The BOE economist cautioned that previous industrial revolutions resulted in "heightened social tensions," "financial tensions" and "inequality." The First Industrial Revolution, which took place during the Victorian era, transformed Britain's economy, leading to the creation of ground-breaking industrial innovations including the steam train and advanced machine tools, all the while resulting in layoffs, especially in industries like textiles.
"This is the dark side of technological revolutions and that dark side has always been there," Haldane added. "That hollowing out is going to be potentially on a much greater scale in the future, when we have machines both thinking and doing — replacing both the cognitive and the technical skills of humans."
While Haldane did not pinpoint a figure for the number of jobs he thought might be replaced, he said it would likely be "at least as large" as the unemployment levels from previous industrial revolutions. The economist said there was a need for the creation of new jobs and upskilling in order to avoid redundancies.
Haldane is not alone in warning of the impact of AI on the labor market. It is one of the biggest concerns held by experts in the field.
Research firm Gartner has predicted that AI will create 2.3 million jobs and eliminate 1.8 million — a net increase of 500,000 jobs — by 2020. However, that net figure doesn't change the fact that AI would still cause steep layoffs around the world.
And some are less optimistic. Deutsche Bank's former chief executive, John Cryan, warned last year that "a lot of people" in the banking industry would lose their jobs due to automation. He suggested that thousands of his own employees could be replaced by AI.
Some commentators — particularly within the tech industry — argue that the introduction of a universal basic income will be necessary to offset the effects of mass job losses. Finland had trialed the scheme, which promotes a universal welfare system in place of all existing benefit programs, but earlier this year said it would not extend the program and would end payments to recipients at the start of 2019.
# The Reporter

Cecilia E Rouse, NBER, 23 July 2024
https://www.nber.org/reporter/2017number4/black.html
# Program Report: Children and Families
On July 1, the Program on Children was renamed the Program on Children and Families. This change, which better captures the range of research carried out by its 171 affiliates, in part marks a return to the program’s roots. In 1993, the late Alan Krueger launched an NBER project on the Economics of Families and Children. It subsequently became a program and has been known as the Program on Children since 1997.[1] Broadening the program name recognizes the complex web of interactions, economic and otherwise, that involve children. Economic and other forces that affect families can have important effects on children, and developments involving children in turn have significant influence on the wellbeing of adult family members.
In the eight years since our last program report, scholars affiliated with the program have authored 919 working papers on a wide array of topics. We begin this report with a sampling of their continuing research in core areas, such as the long-term consequences of early-life conditions and the effects of public programs affecting children. We then summarize studies on a number of issues that are attracting growing attention, including gun violence, mental health, access to abortion services, and the long-lived effects of the COVID-19 pandemic.
## Consequences of Early-Life Conditions and Policies
It is now well established that events in early life, including in utero, can have both immediate and lasting impacts on children and, therefore, on families. Douglas Almond, Janet Currie, and Valentina Duque review some of the literature and conclude that even relatively mild adverse shocks in early life can have substantial negative impacts. These effects are heterogeneous, reflecting differences in children’s endowments, family budget constraints, and the technologies of production.[2] This observation has inspired work investigating how the social safety net influences child health and wellbeing.

Historically, much of the economic research on the safety net focused on how unconditional assistance to low-income families might affect parental behavior. It was thought that by reducing labor supply and marriage, safety net programs might sustain poverty rather than alleviate it. However, as Anna Aizer, Hilary Hoynes, and Adriana Lleras-Muney show, recent research on the impact of the safety net for children and families has focused more on its impact on child outcomes, as shown in Figure 1.[3] This shift in research emphasis roughly coincides with the launch of the NBER research program.

The newer work has shown significant positive effects of safety net programs on short-run child outcomes, as well as on longer-term measures. For example, the Supplemental Security Income (SSI) program is an important safety net program that began to serve larger numbers of children after 1990. Manasi Deshpande and Michael Mueller-Smith find that removing children from the SSI program at age 18 increased the likelihood of criminal charges and incarceration for crimes associated with income generation by 60 percent.[4]

Research on the social safety net has increasingly focused on the long-term impact of initiatives such as the Food Stamp Program rolled out in the US in the 1960s. Martha Bailey, Maya Rossin-Slater, Reed Walker, and Hoynes document significant increases in adult human capital, economic self-sufficiency, and longevity among children exposed to the program early in life, as shown in Figure 2.[5]

Researchers continue to explore the impact of the safety net on parental behavior. Many papers find little effect of safety net programs on labor supply and marriage rates. These include work by Elizabeth Ananat, Benjamin Glasner, Christal Hamilton, and Zachary Parolin;[6] Shari Eli, Aizer, and Lleras-Muney;[7] and Jason Cook and Chloe East.[8] However, Kevin Corinth, Bruce Meyer, Matthew Stadnicki, and Derek Wu conduct simulations and find that an unconditional child allowance could reduce employment, thus offsetting some of the effects of the allowance in alleviating child poverty.[9] Concerns about these effects have contributed to a change in the structure of the US safety net after 1990 so that spending goes increasingly to families with earners who are more likely to have incomes above the poverty line, as documented by Diane Whitmore Schanzenbach and Hoynes.[10]

Researchers continue to find large positive effects of cash transfers on children, in part through effects on parental behavior. For example, Lindsey Bullinger, Analisa Packham, and Kerri Raissian show that unconditional cash payments from the Alaska Permanent Fund reduced child maltreatment in that state, as seen in Figure 3.[11] The effects of cash transfers on family functioning could potentially be more important than any documented effects on labor supply.

The social safety net can potentially improve parents’ mental health, which may be a pathway for improvements in child outcomes. Lucie Schmidt, Lara Shore-Sheppard, and Tara Watson show in a simulation that a $1,000 increase in cash and food benefits reduced severe psychological distress by 8.4 percent.[12] These effects were most pronounced for single mothers with low levels of education. Manudeep Bhuller, Gordon Dahl, Katrine Loken, and Magne Mogstad link parental distress with child wellbeing: A domestic violence incident leads to a 30 percent increase in mental health visits among adult victims and a 19 percent increase in such visits among victims’ children.[13]
## Early Childhood Education
Promising results from two model programs implemented in the US in the 1960s helped to generate significant interest in early childhood education. Work by Jorge Luis García, James Heckman, and Victor Ronda documents the lasting effects of one such program for African American participants and their children. They find that both parents and children completed more schooling and were more likely to be employed later in life.[14]

Since the first evaluations of these small model programs, other work examining the impact of the Head Start program, which now serves nearly 800,000 children, has also documented significant long-term benefits. Shuqiao Sun, Brenden Timpe, and Bailey use restricted linked census data and the roll-out of Head Start across counties to estimate that access to Head Start led to a half-year increase in schooling and a 40 percent increase in college completion.[15]

While evaluations of small model programs and Head Start suggest positive and long-lasting gains, some other early childhood programs have smaller benefits. Elizabeth Cascio reviews the research and concludes that the effectiveness of early childhood education depends on the quality of the program and the environment that children would have spent time in absent the program.[16] Greg Duncan, Ariel Kalil, Mari Rege, and Mogstad explore the heterogeneity in estimated impacts and conclude that investments in skill-specific curricula may be especially important.[17]

Variation in the quality of early childcare environments can have implications for inequality and social mobility. Sarah Flood, Joel McMurry, Aaron Sojourner, and Matthew Wiswall find that children from families with higher socioeconomic status (SES) are more likely to receive high-quality care, which may exacerbate inequalities.[18] Jonathan Borowsky, Jessica Brown, Elizabeth Davis, Chloe Gibbs, Chris Herbst, Sojourner, Erdal Tekin, and Wiswall model the implications of expanding childcare subsidies for low-income families and conclude that it would increase maternal employment and shift more low-income children into high-quality care.[19]
## Health Insurance and Healthcare
Several recent studies find positive impacts of access to medical care, especially for historically marginalized Black children. Esra Kose, Siobhan O’Keefe, and Maria Rosales-Rueda demonstrate that increasing access to medical care via the rollout of community health centers improved birth outcomes.[20] Despite the demonstrated benefits of public health insurance coverage during pregnancy, undocumented women remain largely ineligible in the United States. Work by Sarah Miller, Laura Wherry, and Gloria Aldana shows that Medicaid coverage of undocumented women increases prenatal care, with positive impacts on birth weight.[21]

Expansions in Medicaid access can even have intergenerational impacts. East, Marianne Page, Wherry, and Miller estimate the intergenerational impact of Medicaid expansions in utero and in early life. They document that the offspring of children exposed to Medicaid expansions early in life are themselves born healthier, as shown in Figure 4.[22]

Despite Medicaid’s benefits, many eligible children are not enrolled. Iris Arbogast, Anna Chorniy, and Currie show that regulations increasing the administrative burden of enrollment reduce health insurance coverage among children by six percent in the six months following a new regulation. These effects were especially pronounced among Hispanic children.[23]
## Intergenerational Effects
The growing availability of large datasets that allow linkages across generations has enabled researchers to explore the intergenerational effects of childhood events. Krzysztof Karbownik and Anthony Wray link childhood hospitalizations in 1870–1902 London with later outcomes.[24] Boys admitted to the hospital before the age of 12 were three percentage points more likely to experience downward occupational mobility than their brothers, explaining 11 percent of the downward occupational mobility in England at this time. Using historical data on US Civil War veterans linked with that of their children and grandchildren, Dora Costa documents that the grandchildren of men who experienced severe conditions, including nutritional deprivation as prisoners of war (POWs), lost roughly a year of life at age 45 compared to grandsons of veterans who were not POWs.[25]

Finally, Gordon Dahl and Anne Gielen study a Dutch reform of disability insurance that resulted in an increase in employment and earnings. They find that children of affected adults had increased schooling attainment and better health and labor market outcomes.[26]
## Parental Investments
What explains the intergenerational persistence of shocks to health and wellbeing? One potential mediator is parental investment behavior. García, Frederik Bennhoff, and Duncan Leaf document that a child’s participation in a model early childhood program has spillover benefits to siblings.[27] They show that the program affects parental decision-making and likely increases parental investments in all children in the household. Susan Mayer, William Delgado, Lisa Gennetian, and Kalil focus on differences in time investments in children by maternal education. They find that college-educated mothers spend more time, even though they do not appear to place a high value on the time spent.[28]

There are often important differences in parental investment decisions within families. Rebecca Dizon-Ross and Seema Jayachandran ask how mothers and fathers differ in their propensity to invest in their sons and daughters in Uganda.[29] They find that differences in spending across siblings are driven by fathers spending less on daughters.

The opioid epidemic severely disrupted many parents’ capacity to invest in their children. Building on research showing that the epidemic was initially caused by lax prescribing, Kasey Buckles, William Evans, and Ethan Lieber ask how variation across states in the ease with which doctors could prescribe opioids relates to overdose death rates. They conclude that the epidemic led to an additional 1.5 million children living apart from a parent and in a household headed by a grandparent.[30]

Parents with more resources invest more in their children, but how do parents respond to reductions in household resources? Marianne Bitler, Krista Ruffini, Lisa Schulkind, Barton Willage, Currie, and Hoynes find that when household benefits fall as a result of children aging out of the Special Supplemental Nutrition Program for Women, Infants, and Children at age 5, the caloric intake of adult women in the household falls but that of children does not, suggesting that mothers protect their children.[31]
## Adolescence
Adolescence is increasingly understood to be a crucial period of growth and development. A number of studies highlight the impact of interventions during this period. Sara Heller evaluates two experiments that provided summer jobs to youth and finds large declines in criminal violence. There was little heterogeneity across implementations of the programs but significant heterogeneity across individual youths: those with the highest probability of negative outcomes benefitted the most.[32] Keyoung Lee, Aizer, Eli, and Lleras-Muney study the Great Depression-era Civilian Conservation Corps and find it had significant positive effects on longevity, lifetime earnings, and disability, even though there was little short-term effect on employment or wages.[33] Jonathan Guryan et al. find that high-impact tutoring during adolescence can increase test scores by 15–37 percent of a standard deviation, which is comparable to successful early childhood interventions.[34] Some recent studies have focused on adolescent girls in developing countries. Eric Edmonds, Benjamin Feigenberg, and Jessica Leight show that teaching life skills to girls in Indian schools reduced dropout.[35] Manisha Shah, Jennifer Seager, Joao Montalvao, and Markus Goldstein find that an intervention focused on goal setting reduced intimate partner violence among adolescent girls in sub-Saharan Africa.[36]
## Emerging Areas
Emerging areas of research among program affiliates include child mental health, abortion access, gun violence, and the impact of COVID-19 on families and children.
## Mental Health
The mental health of children is critical to their wellbeing. Rossin-Slater, Molly Schnell, Hannes Schwandt, Sam Trejo, and Lindsey Uniat find that exposure to school shootings led to an increase in youth antidepressant prescriptions, as shown in Figure 5.[37] In follow-up work, Marika Cabral, Bokyoung Kim, Rossin-Slater, Schnell, and Schwandt link exposure to school shootings in Texas to lower educational attainment and worse economic outcomes at age 25.[38] This work underscores an important link between violence, mental health, and future economic outcomes.

Monica Deza, Thanh Lu, and Johanna Catherine Maclean use a county-level, two-way fixed effects analysis to show that higher availability of office-based mental healthcare is associated with fewer juvenile arrests.[39] But even conditional on access, there are significant SES differences in the types of mental healthcare that children receive. Paul Kurdyak, Jonathan Zhang, and Currie find that among Canadian children with the same health insurance coverage and the same mental health diagnoses, low SES children are more likely to be prescribed drugs with dangerous side effects.[40] One way to improve mental health treatment is to encourage adherence to treatment guidelines. Emily Cuddy and Currie show that treating adolescents with mental health conditions in a way that is consistent with treatment guidelines improves health outcomes.[41]
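As a rough illustration of the design mentioned above (a generic sketch, not necessarily the authors' exact specification), a county-level two-way fixed effects analysis regresses the outcome on the treatment variable plus county and time fixed effects:

$$\text{Arrests}_{ct} = \beta \, \text{OfficeBasedCare}_{ct} + \alpha_c + \tau_t + \varepsilon_{ct}$$

Here $\alpha_c$ absorbs time-invariant differences across counties, $\tau_t$ absorbs shocks common to all counties in period $t$, and $\beta$ is therefore identified from within-county changes in the availability of office-based care over time.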
Perhaps unsurprisingly, there are strong intergenerational correlations in mental health. Aline Bütikofer, Rita Ginja, Karbownik, and Fanny Landaud find that in Norway, a parental mental health diagnosis is associated with a 40 percent higher probability that a child has a mental health diagnosis.[42] They also find that early childhood intervention for children whose parents have been diagnosed with a mental health condition can reduce the association between parental and child mental health diagnoses by almost half.
## Abortion Access
A growing body of work has explored the ramifications of reduced access to abortion on families and children. Joanna Lahey and Marianne Wanamaker find that abortion restrictions in the late nineteenth century led to increased child mortality.[43] Jason Lindo, Caitlin Myers, Andrew Schlosser, and Scott Cunningham show that recent abortion clinic closures in Texas have reduced geographic access and increased births.[44] Stephanie Fischer, Heather Royer, and Corey White find that these clinic closures also have reduced take-up of family planning.[45] Diana Green Foster, Miller, and Wherry study women who were denied an abortion because they just missed the maximum gestational age cutoff and find that these women experience a large increase in financial distress over the next several years.[46]
## Gun Violence and Its Impacts
In 2023, deaths from firearms became the leading cause of child death in the US. Previously mentioned work documents the impact of school shootings on adolescent mental health and future economic outcomes. Bahadir Dursun, Michael Hatch, Tekin, and Currie demonstrate that exposure to the Beltway sniper attacks in utero negatively affected newborn health, as shown in Figure 6.[47]

What explains gun violence, and what can be done to reduce it? Evans, Craig Garthwaite, and Timothy Moore link gun violence today to the crack cocaine epidemic of the 1980s and 1990s, which especially ravaged Black communities. They show that murder rates for young Black males doubled in a city when the crack epidemic started and remained 70 percent higher 17 years later, largely due to increased gun ownership. They show that today, gun violence explains 10 percent of the racial gap in male life expectancy.[48] Monica Bhatt, Max Kapustin, Marianne Bertrand, Christopher Blattman, and Heller evaluate a program that provides paid employment combined with therapy and other social supports to at-risk, young, primarily Black men and find that it reduced shooting and homicide arrests by 65 percent.[49]
## COVID-19
The COVID-19 pandemic significantly disrupted schooling, causing alarming declines in test scores as well as concerns about behavior and mental health. Clare Halloran, Claire Hug, Rebecca Jack, and Emily Oster find that during the 2021–22 school year, 20 percent of English test score losses and 37 percent of math losses were recovered.[50] Anna Gassman-Pines, Ananat, John Fitz-Henley II, and Jane Leer use parent survey data to document that remotely schooled children experienced more disruption and displayed worse behavior.[51] Benjamin Hansen, Joseph Sabia, and Jessamyn Schaller find that teen suicide rates plummeted in March 2020, when the pandemic closed schools, and rose when schools reopened.[52]

Multiple studies have measured the pandemic’s impact on fertility. Melissa Schettini Kearney and Phillip Levine document a drop of nearly 100,000 births between August 2020 and February 2021, followed by a rebound of about 30,000 births between March and September 2021.[53] Bailey, Currie, and Schwandt show that 60 percent of the decline was driven by births to foreign-born mothers. Moreover, an initial decline of 30,000 in births to native-born mothers was more than offset by an increase of 71,000 births by 2021.[54]
## Concluding Comments
Economists have long been interested in children and families, but research was scattered across subdisciplines. Development economists thought about stunting and malnutrition, labor economists researched education and discrimination, health economists focused on medical care, demographers studied fertility, and public economists emphasized transfer programs. The Program on Children and Families unites these perspectives and promotes cross-fertilization. The result can be seen in the increasing number of studies that examine multiple outcomes and in the growing internationalization of the field. This richness of perspectives has been complemented by remarkable new data combining information from multiple sources in order to enable research spanning decades, generations, and multiple outcomes. In the coming decade, these sources may facilitate research into vulnerable groups that have seldom been studied, including Native American children, children suffering homelessness, foster children, and the forcibly displaced.
## Endnotes
1. In October 1993, Krueger convened an NBER meeting on “Economics of Families and Children” (https://www.nber.org/sites/default/files/2019-09/Winter%201993%284%29.pdf, page 20). Krueger was tapped for service at the US Department of Labor the next year, and in December 1994, Lawrence Katz organized a conference that gathered the researchers associated with an NBER grant-supported project on “The Well-Being of Children” (https://www.nber.org/sites/default/files/2019-09/reporter1995-01.pdf, page 43). Katz convened another such meeting in May 1995 (https://www.econstor.eu/bitstream/10419/62108/2/1995_summer.pdf, page 39). By November 1996, when the group met again, the project had become the “Program on the Well-Being of Children,” and Jonathan Gruber had been named program director (https://www.nber.org/sites/default/files/2019-09/reporter1997-01.pdf, page 39). Gruber was tapped for a role at the US Treasury shortly thereafter, and Janet Currie became the program director. She organized a November 1997 meeting of the “Program on Children” (https://www.nber.org/sites/default/files/2019-09/reporter1998-01.pdf, page 34). Gruber returned to academia, and to his role as program director, in 1998 and organized a November 1998 meeting of the “Program on Children” (https://www.nber.org/sites/default/files/2019-08/winter1998-1999_1.pdf, page 42).
2. “Childhood Circumstances and Adult Outcomes: Act II,” Almond D, Currie J, Duque V. NBER Working Paper 23017, January 2017, and *Journal of Economic Literature* 56(4), December 2018, pp. 1360–1446.
3. “Children and the US Social Safety Net: Balancing Disincentives for Adults and Benefits for Children,” Aizer A, Hoynes HW, Lleras-Muney A. NBER Working Paper 29754, February 2022, and *Journal of Economic Perspectives* 36(2), Spring 2022, pp. 149–174.
4. “Does Welfare Prevent Crime? The Criminal Justice Outcomes of Youth Removed from SSI,” Deshpande M, Mueller-Smith MG. NBER Working Paper 29800, February 2022, and *The Quarterly Journal of Economics* 137(4), November 2022, pp. 2263–2307.
5. “Is the Social Safety Net a Long-Term Investment? Large-Scale Evidence from the Food Stamps Program,” Bailey MJ, Hoynes HW, Rossin-Slater M, Walker R. NBER Working Paper 26942, April 2020, and *The Review of Economic Studies* 91(3), May 2024, pp. 1291–1330.
6. “Effects of the Expanded Child Tax Credit on Employment Outcomes: Evidence from Real-World Data from April to December 2021,” Ananat E, Glasner B, Hamilton C, Parolin Z. NBER Working Paper 29823, March 2022.
7. “The Incentive Effects of Cash Transfers to the Poor,” Aizer A, Eli S, Lleras-Muney A. NBER Working Paper 27523, July 2020.
8. “The Effect of Means-Tested Transfers on Work: Evidence from Quasi-Randomly Assigned SNAP Caseworkers,” Cook JB, East CN. NBER Working Paper 31307, May 2024.
9. “The Anti-Poverty, Targeting, and Labor Supply Effects of Replacing a Child Tax Credit with a Child Allowance,” Corinth K, Meyer BD, Stadnicki M, Wu D. NBER Working Paper 29366, March 2022.
10. “Safety Net Investments in Children,” Hoynes HW, Schanzenbach DW. NBER Working Paper 24594, May 2018.
11. “Effects of Universal and Unconditional Cash Transfers on Child Abuse and Neglect,” Bullinger LR, Packham A, Raissian KM. NBER Working Paper 31733, September 2023.
12. “The Effect of Safety Net Generosity on Maternal Mental Health and Risky Health Behaviors,” Schmidt L, Shore-Sheppard L, Watson T. NBER Working Paper 29258, January 2023, and *Journal of Policy Analysis and Management* 42(3), Summer 2023, pp. 706–736.
13. “Domestic Violence and the Mental Health and Well-being of Victims and Their Children,” Bhuller M, Dahl GB, Loken KV, Mogstad M. NBER Working Paper 30792, December 2022, and *The Journal of Human Resources* 59(S), April 2024, pp. S152–S186.
14. “The Lasting Effects of Early Childhood Education on Promoting the Skills and Social Mobility of Disadvantaged African Americans,” García JL, Heckman JJ, Ronda V. NBER Working Paper 29057, July 2021, and *Journal of Political Economy* 131(6), June 2023, pp. 1477–1506.
15. “Prep School for Poor Kids: The Long-Run Impacts of Head Start on Human Capital and Economic Self-Sufficiency,” Bailey MJ, Sun S, Timpe BD. NBER Working Paper 28268, December 2020, and *American Economic Review* 111(12), December 2021, pp. 3963–4001.
16. “Early Childhood Education in the United States: What, When, Where, Who, How, and Why,” Cascio E. NBER Working Paper 28722, April 2021.
17. “Investing in Early Childhood Development in Preschool and at Home,” Duncan G, Kalil A, Mogstad M, Rege M. NBER Working Paper 29985, May 2022.
18. “Inequality in Early Care Experienced by US Children,” Flood S, McMurry JFS, Sojourner A, Wiswall MJ. NBER Working Paper 29249, September 2021, and *Journal of Economic Perspectives* 36(2), Spring 2022, pp. 199–222.
19. “An Equilibrium Model of the Impact of Increased Public Investment in Early Childhood Education,” Borowsky J, Brown JH, Davis EE, Gibbs C, Herbst CM, Sojourner A, Tekin E, Wiswall MJ. NBER Working Paper 30140, June 2022.
20. “Does the Delivery of Primary Health Care Improve Birth Outcomes? Evidence from the Rollout of Community Health Centers,” Kose E, O’Keefe SM, Rosales-Rueda M. NBER Working Paper 30047, May 2022.
21. “Covering Undocumented Immigrants: The Effects of a Large-Scale Prenatal Care Intervention,” Miller S, Wherry L, Aldana G. NBER Working Paper 30299, July 2022.
22. “Multi-generational Impacts of Childhood Access to the Safety Net: Early Life Exposure to Medicaid and the Next Generation’s Health,” East CN, Miller S, Page M, Wherry LR. NBER Working Paper 23810, September 2017, and *American Economic Review* 113(1), January 2023, pp. 98–135.
23. “Administrative Burdens and Child Medicaid and CHIP Enrollments,” Arbogast I, Chorniy A, Currie J. NBER Working Paper 30580, April 2024, and *American Journal of Health Economics* 10(2), Spring 2024, pp. 237–271.
24. “Educational, Labor-market and Intergenerational Consequences of Poor Childhood Health,” Karbownik K, Wray A. NBER Working Paper 26368, February 2021.
25. “Health Shocks of the Father and Longevity of the Children’s Children,” Costa D. NBER Working Paper 29553, January 2024.
26. “Persistent Effects of Social Program Participation on the Third Generation,” Dahl GB, Gielen A. NBER Working Paper 32212, March 2024.
27. “The Dynastic Benefits of Early Childhood Education: Participant Benefits and Family Spillovers,” García JL, Bennhoff FH, Leaf DE. NBER Working Paper 31555, August 2023.
28. “Education Gradients in Parental Time Investment and Subjective Well-being,” Kalil A, Mayer S, Delgado W, Gennetian LA. NBER Working Paper 31712, September 2023.
29. “Dads and Daughters: Disentangling Altruism and Investment Motives for Spending on Children,” Dizon-Ross R, Jayachandran S. NBER Working Paper 29912, April 2022.
30. “The Drug Crisis and the Living Arrangements of Children,” Buckles K, Evans WN, Lieber EMJ. NBER Working Paper 27633, August 2020, and *Journal of Health Economics* 87, January 2023, Article 102723.
31. “Mothers as Insurance: Family Spillovers in WIC,” Bitler M, Currie J, Hoynes HW, Ruffini KJ, Schulkind L, Willage B. NBER Working Paper 30112, June 2022, and *Journal of Health Economics* 91, September 2023, Article 102784.
32. “When Scale and Replication Work: Learning from Summer Youth Employment Experiments,” Heller S. NBER Working Paper 28705, April 2021, and *Journal of Public Economics* 209, May 2022, Article 104617.
33. “Do Youth Employment Programs Work? Evidence from the New Deal,” Aizer A, Eli S, Lleras-Muney A, Lee K. NBER Working Paper 27103, July 2020.
34. “Not Too Late: Improving Academic Outcomes Among Adolescents,” Guryan J, Ludwig J, Bhatt MP, Cook PJ, Davis JMV, Dodge K, Farkas G, Fryer Jr RG, Mayer S, Pollack H, Steinberg L. NBER Working Paper 28531, March 2021, and *American Economic Review* 113(3), March 2023, pp. 738–765.
35. “Advancing the Agency of Adolescent Girls,” Edmonds EV, Feigenberg B, Leight J. NBER Working Paper 27513, July 2020, and *The Review of Economics and Statistics* 105(4), July 2023, pp. 852–866.
36. “Sex, Power, and Adolescence: Intimate Partner Violence and Sexual Behaviors,” Shah M, Seager J, Montalvao J, Goldstein M. NBER Working Paper 31624, November 2023.
37. “Local Exposure to School Shootings and Youth Antidepressant Use,” Rossin-Slater M, Schnell M, Schwandt H, Trejo S, Uniat LM. NBER Working Paper 26563, December 2019, and *PNAS* 117(38), September 2020, pp. 23484–23489.
38. “Trauma at School: The Impacts of Shootings on Students’ Human Capital and Economic Outcomes,” Cabral M, Kim B, Rossin-Slater M, Schnell M, Schwandt H. NBER Working Paper 28311, January 2021.
39. “Office-Based Mental Healthcare and Juvenile Arrests,” Deza M, Lu T, Maclean JC. NBER Working Paper 29465, November 2021, and *Health Economics* 31(S2), August 2022, pp. 69–91.
40. “Socioeconomic Status and Access to Mental Health Care: The Case of Psychiatric Medications for Children in Ontario Canada,” Currie J, Kurdyak P, Zhang J. NBER Working Paper 30595, October 2022, and *Journal of Health Economics* 93, January 2024, Article 102841.
41. “Rules vs. Discretion: Treatment of Mental Illness in US Adolescents,” Cuddy E, Currie J. NBER Working Paper 27890, October 2020.
42. “(Breaking) Intergenerational Transmission of Mental Health,” Bütikofer A, Ginja R, Karbownik K, Landaud F. NBER Working Paper 31446, July 2023, and *The Journal of Human Resources* 59(S), April 2024, pp. S108–S151.
43. “Effects of Restrictive Abortion Legislation on Cohort Mortality Evidence from 19th Century Law Variation,” Lahey JN, Wanamaker MH. NBER Working Paper 30201, July 2022.
44. “How Far Is Too Far? New Evidence on Abortion Clinic Closures, Access, and Abortions,” Lindo JM, Myers C, Schlosser A, Cunningham S. NBER Working Paper 23366, May 2017, and *The Journal of Human Resources* 55(4), October 2020, pp. 1137–1160.
45. “The Impacts of Reduced Access to Abortion and Family Planning Services on Abortion, Births, and Contraceptive Purchases,” Fischer S, Royer H, White C. NBER Working Paper 23634, July 2017, and *Journal of Public Economics* 167, November 2018, pp. 43–68.
46. “The Economic Consequences of Being Denied an Abortion,” Miller S, Wherry LR, Foster DG. NBER Working Paper 26662, January 2020, and *American Economic Journal: Economic Policy* 15(1), February 2023, pp. 394–437.
47. “The Hidden Cost of Firearm Violence on Infants In Utero,” Currie J, Dursun B, Hatch M, Tekin E. NBER Working Paper 31774, March 2024.
48. “Guns and Violence: The Enduring Impact of Crack Cocaine Markets on Young Black Males,” Evans WN, Garthwaite C, Moore TJ. NBER Working Paper 24819, July 2018, and *Journal of Public Economics* 206, February 2022, Article 104581.
49. “Predicting and Preventing Gun Violence: An Experimental Evaluation of READI Chicago,” Bhatt MP, Heller SB, Kapustin M, Bertrand M, Blattman C. NBER Working Paper 30852, January 2023, and *The Quarterly Journal of Economics* 139(1), February 2024, pp. 1–56.
50. “Post COVID-19 Test Score Recovery: Initial Evidence from State Testing Data,” Halloran C, Hug CE, Jack R, Oster E. NBER Working Paper 31113, April 2023.
51. “Effects of Daily School and Care Disruptions During the COVID-19 Pandemic on Child Mental Health,” Gassman-Pines A, Ananat E, Fitz-Henley II J, Leer J. NBER Working Paper 29659, January 2022.
52. “In-Person Schooling and Youth Suicide: Evidence from School Calendars and Pandemic School Closures,” Hansen B, Sabia JJ, Schaller J. NBER Working Paper 30795, December 2022, and *The Journal of Human Resources* 59(S), April 2024, pp. S227–S255.
53. “The US COVID-19 Baby Bust and Rebound,” Kearney MS, Levine PB. NBER Working Paper 30000, July 2023, and *Journal of Population Economics* 36, July 2023, pp. 2145–2168.
54. “The COVID-19 Baby Bump: The Unexpected Increase in US Fertility Rates in Response to the Pandemic,” Bailey MJ, Currie J, Schwandt H. NBER Working Paper 30569, August 2023.
# NeoAxis Engine 2023.1 Released

NeoAxis, 13 March 2023
https://www.neoaxis.com/news/neoaxis_engine_2023_1_released
NeoAxis company releases a new version of NeoAxis Engine, a versatile real-time platform for making 3D, 2D games and apps. The release includes significant licensing changes, graphics improvements, a new physics engine, multiplayer support and many new add-ons. Now available for free are a road constructor, a fence constructor, a building constructor, a vegetation generator and an initial version of a traffic system.
## Licensing changes
After experimenting with licensing, we decided to continue the project in the most open format possible. This release removes the Pro license and open-sources the editor. In addition, previously paid add-ons are now free and provided with source code from the start.
## Improved rendering system
One of the big improvements is the addition of a virtualized geometry system that allows you to quickly render a very large number of objects. We called it NeoAxis Levels.
## Scalability
A very large number of optimizations have been made to increase scalability. Rendering, physics, sound and the editor have all been optimized to handle many objects.
There is a new City Demo to show off engine features and scalability. The demo will be gradually improved.
## Multiplayer
Added support for multiplayer. Almost all components support synchronization over the network; the remaining ones will be supported a little later.
## New powerful physics engine
NeoAxis Engine now uses Jolt Physics! It offers an excellent feature set and strong multithreaded performance. Both single and double precision are supported.
## Vehicles
Thanks to the new physics engine, adding full support for vehicles was straightforward.
## Vegetation generator
The SDK now includes a vegetation generator. This is a large set of tools that allows you to generate any kind of vegetation from grass to trees. The source code is also included.
The updated Nature Demo now uses procedurally generated vegetation that was created in the editor. This does not look very realistic yet; it will be improved step by step.
## Road constructor
The previously paid road add-on is now free, and its source code is open as well.
## Fence constructor
The previously paid fence add-on is now free, and its source code is open as well.
## Building constructor
A building constructor is a new add-on in the SDK.
## Weapons, bullets, explosions
The game framework has been improved. Now you can create shooters even more easily.
## Battle Demo and more samples
Demo scenes have been updated, there are many new ones.
## Next plans
Plans for upcoming releases include finishing cloud services and full global illumination; see the roadmap for details.
# The 3 ingredients of an effective company t-shirt, and why yours sucks ass

Taylor West, Taylormetric, 14 October 2012
http://taylormetric.com/marketing/the-3-ingredients-of-an-effective-company-t-shirt/
On Thursday, I joined a couple thousand other people in attending the ATX Startup Crawl, as part of Austin Startup Week.
In case you’ve never been to a crawl before, here’s a quick rundown: It’s essentially a giant party showcasing local startups (as well as a few well-established companies), complete with free food and booze. Companies participate by running shuttle buses between their headquarters, or by renting small tables at the main hub location where they can network with attendees, hand out free swag, get some free publicity, and then stumble home drunk at the end of the night.
This was the second crawl that Capital Factory hosted this year; the first one taking place the day before SXSW 2012 began. And on that note, Kudos to Joshua Baer and his crew at CF, as the whole Austin startup scene is very clearly benefiting from their hard work.
### Mission: Update Wardrobe
Before mingling with old friends and making new connections, my first order of business as a veteran of these events was to quickly grab a cold beer and then visit the various tables for a fashion refresh, *before* the smaller or ill-prepared companies ran out of their newly minted tees.
Of course, just about any company that survives beyond their LLC filing sooner or later has some form of corporate identity and/or swag printed up. Big deal.
But, if you’ve spent any time in or around a bonafide start up community, like in Austin, “The Valley”, or Boulder, you are likely familiar with the reigning king of tech swag — *the almighty t-shirt* (apparently “beer koozies” are now a close 2nd…). Heck, many startups pass out t-shirts before they complete their MVP, my last company included (R.I.P.).
This phenomenon is easy to explain. You see, a t-shirt is a real physical thing. A collection of threads and fibers that can be seen and touched, in a way that ones and zeros cannot. It’s a step, albeit a small one, towards *actually existing*. It simultaneously serves as a badge of pride for the team, AND, as a long-term branding & marketing opportunity on the backs of supporters (as well as beggars like me, whose entire wardrobe could double as an Austin startup directory).
And so, the first order of business for the team at Matttresss: “The Pinterest for dating” — is to spew their bright pastel logo into a Photoshop file, and send it off to be printed on ultra-soft-eco-friendly-organically-grown-PETA-approved t-shirts, which cost more than the company will eventually raise in their seed round.
Prioritizing a bit of expensive swag isn’t really the issue here though. Just as long as it’s thoughtfully designed — as in — by utilizing a functional brain somewhere in the process.
### How To Design A Killer T-Shirt
Here are the two ingredients that comprise an effective company tee, and one bonus ingredient that should be strived for:
1. **Quality Material:** A soft, high quality t-shirt is a thing of beauty. Even fashion-comatose geeks subconsciously desire that nice American Apparel or Next Level feel, over some cheap sheet of stiff cotton that hangs to their knees, such as a lower-end Gildan. After all, what good are your new t-shirts if nobody wears them? Fortunately, most of the startups around here have NAILED this, as I alluded to earlier.
2. **Clear & Consistent Branding:** The message on your company t-shirt should fit your brand! A well-sized logo and appropriate tagline for context, printed on the front of a fabric with good color contrast, is a sure-fire way to get this right — especially if you feel like you're stretching for something more creative. You're creating a walking billboard with limited useable area, so resist the temptation to print some ambiguous statement in 16pt font, or a cryptic symbol that only your psychic Aunt Ellie understands. THIS is where many companies fail miserably.
3. **The "It" Factor (Bonus):** This is what separates the men from the boys, and is appropriately difficult to pull off. This is the ingredient that causes strangers to actually notice your t-shirt. It can be something funny, thought provoking, curious, or any number of other well-executed elements that clearly separate you from the crowd and grab attention. Done properly, a tee with this factor is worn and noticed more often (increasing impressions), discussed more often (click throughs?), and potentially acted on more consistently (visits, follows, sales, conversions). If you want your swag to function at its highest level, this is the key ingredient.
If you got the first two right, good work — your t-shirt is passable, people will look good in it, and it will help to increase awareness and brand your new company. Find a tasteful way to add your web address or twitter account to it, and it might even drive some conversions.
If people are laughing, asking questions, and commenting favorably and *often* about your t-shirt — then this is an indication that you nailed all three ingredients. Congratulations, you have the holy grail of wearable swag!
Unfortunately, many companies fail this test, sometimes even at older companies with spotless track records in the swag department. The newly-acquired pile on my desk from Thursday contains proof. **Wasting precious time and cash on seriously bad swag that fulfills only one or fewer of the above IS a problem.**
Here’s some real world cost data to drive this point home.
I was recently in charge of marketing for one of the best damn teams I’ve ever been a part of, at WP Engine. These guys are undoubtedly the best WordPress hosting company in the world (shameless plug), and getting better every moment.
The last batch of t-shirts I worked on at WPE before my departure (which I’ll write about soon) ran the company approximately $6,000 for 1000 shirts, which they buzz through very quickly on their conference and event trail. And this is quite a bit cheaper than they were prior to finding a better vendor.
While this IS a lot of cash to spend on t-shirts, it was well worth it for WPE because their shirts were awesome — totally nailing the 3 ingredients above.
However, not every startup company is well funded or succeeds at doubling revenue every few months like WPE, and so careful attention must be paid to how effective each dollar is that’s being spent, even if it’s just a ballpark guess. This amount of money could absolutely make bigger waves spent elsewhere if your shirt sucks ass. This is the case at WP Engine now with their latest shirts, which definitely lack $6/ea worth of impact!
### Good Shirt, Bad Shirt
Let’s look at WP Engine’s previous t-shirt and compare it to their new one. Then we’ll evaluate how handing out the new one is akin to burning six one-dollar bills each time.
In case you’re brain dead (as if anyone is actually reading my first blog anyway!) and don’t understand what’s good about this shirt, allow me to explain… EVERYTHING is good about this shirt.
- It’s comfy, fitted, soft, and durable
- It fits the brand perfectly both design-wise, and through it’s smart way of conveying one of the startup’s biggest benefits: speed. The logo and tagline are adequately sized and placed smartly on the back of the shirt (not shown).
- Major “it” factor. I wore this shirt damn near everyday for months (clean ones, of course!), and I stopped counting how many conversations and laughs I got out of it from strangers after the very first day! No joke. The ONLY item in my wardrobe that even comes close to this shirt as a conversation starter, is my pair of Vibram “Five-Finger” shoes.
So what about the new shirt? Let’s go down the list.
- **Comfort and fit?** *Bingo.* They never skimp in this department.
- **Branding/Clarity?** *Double-Fail.* Aside from the little bit of color, the design absolutely fails in fitting their brand. Nothing at WPE has ever been wildly modern, contemporary, or splashy — and that's a *good* thing. It's also impossible to read at a distance, and the logo and tagline are tiny and placed in terribly awkward locations — which is only really appropriate on more formal or embroidered shirts in terms of effectiveness. When the shirt is on your body, the design gets REALLY awkward looking.
- **"It" Factor?** *No Chance.* They actually achieved the opposite! Not a single conversation, smirk, or smile have I experienced with this shirt in the few days I've worn it in public. That is, unless you count the two people, plus my wife, who have commented on how ugly it is, and how it makes me look like I'm "trying too hard" — think: Jersey Shore, or "Bro-wear". Add a gold necklace, a watch the size of your fist, and some flashy sunglasses and you're all set!
It completely fails at conveying to people how much WPE, it’s product, and it’s culture totally kick ass — which their past t-shirts did in spades. It’s an incredibly easy shirt to ignore completely. My dear friends, if you read this — I don’t care how many people have complimented you on this design — I promise you you’re better off handing out the old ones, including “Fireball Proof”, until you can come up with something better. Shit, I’m tempted to design something for you myself.
If you’re reading this and you’re one of those design agnostic people (aka engineers), who insist that “good” design is a matter of opinion, I have news for you — you’re wrong. There are cold hard truths about good design that *usually* must be obeyed to achieve a positive impact — especially if you’re a bit handicapped in this skill to begin with.
The most important truth in this circumstance, and the only one I’ll get into here for the sake of brevity:
**Simple is King.**
Simplicity is always in style. It will always be appreciated by greater numbers. Simplicity is easy to understand, easy to talk about, easy to evangelize. This is true in many more areas than just design.
If you’re not 110% confident that your new design will be a massive hit, scrap it *now*, and start over with something clear and simple. You’ll be glad you did.
Let’s wrap this up with my favorites…
The best shirt handed out at the Startup Crawl on Thursday — “I SHOP IN MY UNDERWEAR.” by adlucent. Simple, funny, and totally smart for their brand. My wife is wearing hers at a local poker game as I write this.
Also, OtherInbox was handing out their most recent “email” shirt, which I think will become a fast classic for them.
**Agree or disagree with something I said? Have a favorite shirt you want to share?**
**Let me have it in the comments section!**
| true | true | true |
On Thursday, I joined a couple thousand other people in attending the ATX Startup Crawl, as part of Austin Startup Week. In case you’ve never been to a crawl before, here’s a quick run…
|
2024-10-12 00:00:00
|
2012-10-14 00:00:00
|
http://taylormetric.com/wp-content/uploads/2012/10/adlucent-i-shop-in-my-underwear-225x300.jpg
|
article
|
taylormetric.com
|
Taylormetric
| null | null |
37,417,369 |
https://medicalxpress.com/news/2023-09-discovery-kind-cell-neuroscience.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
30,190,216 |
https://www.cnbc.com/2022/02/02/facebook-says-apple-ios-privacy-change-will-cost-10-billion-this-year.html
|
Facebook says Apple iOS privacy change will result in $10 billion revenue hit this year
|
Kif Leswing
|
Facebook parent Meta said on Wednesday that the privacy change Apple made to its iOS operating system last year will decrease the social media company's sales this year by about $10 billion.
"We believe the impact of iOS overall is a headwind on our business in 2022," Meta CFO Dave Wehner said on a call with analysts after the company's fourth-quarter earnings report. "It's on the order of $10 billion, so it's a pretty significant headwind for our business."
Facebook's admission is the most concrete data point so far on the impact to the advertising industry of Apple's App Tracking Transparency feature, which reduces targeting capabilities by preventing advertisers from accessing an iPhone user identifier.
Meta shares sank 23% in extended trading on Wednesday after the company warned about numerous challenges and came up short on user numbers. Facebook said first quarter revenue will be $27 billion to $29 billion, while analysts were expecting that number to exceed $30 billion.
Wehner said the $10 billion revenue hit this year is merely a best guess.
"We're just estimating what we think is the overall impact of the cumulative iOS changes to where the 2022 revenue forecast is," Wehner said. "If you aggregate the changes that we're seeing on iOS, that's the order of magnitude. We can't be precise on this. It's an estimate."
Apple first introduced the ATT feature in iOS 14.5, which was released for iPhones last year. It's also included in iOS 15, which is running on 72% of modern iPhones, according to Apple.
ATT consists of popups that ask users whether they want to be tracked when opening up an app. If the user says no, the app developer can no longer access the IDFA, a device ID that's used to target and measure the effectiveness of online ads.
A study from ad measurement firm AppsFlyer in October suggested that 62% of iPhone users were choosing to opt-out of sharing their IDFA.
The privacy feature disrupts the behind-the-scenes mechanics of many mobile ads, especially those that confirm whether a purchase or download was made. iPhone apps with targeted advertising can instead use SKAdNetwork, an Apple tool built as an alternative, which Apple says is more private.
Online advertising companies have voiced their displeasure with the feature since it was first announced in June 2020, but Facebook has been the loudest in its criticism. In December 2020, Facebook ran a marketing campaign including full-page ads in major newspapers blasting the feature and saying that the change was about "profit, not privacy."
The next day, Apple CEO Tim Cook used Facebook's app in a tweet as an example of how the feature works.
Sheryl Sandberg, Facebook's operating chief, said on Wednesday that ATT would hurt small businesses that rely on digital advertising to grow and are much more dependent than larger companies on personalized ads. It's a theme Facebook has hit repeatedly in its attacks on Apple.
Sandberg said the changes are diminishing the accuracy of Facebook's ads, driving up prices based on an outcome like a sale or download. She also said that measuring whether those conversions occur is becoming more difficult.
A day before Facebook's results, Alphabet blew past estimates with its fourth-quarter numbers, and cited strength in e-commerce ads, an area where Facebook saw weakness.
Wehner suggested that Apple's changes aren't having the same impact on search as they are on other types of apps. He referenced how much money Google makes for Apple as the default search engine on the Safari browser.
"Given that Apple continues to take billions of dollars a year from Google Search, the incentive clearly is for this policy discrepancy to continue," Wehner said.
| true | true | true |
Apple's privacy feature disrupts the behind-the-scenes mechanics of many mobile ads, especially those that confirm whether a purchase or download was made.
|
2024-10-12 00:00:00
|
2022-02-02 00:00:00
|
article
|
cnbc.com
|
CNBC
| null | null |
|
1,813,350 |
http://www.andrewfashion.com/2009/12/05/how-i-made-2-5-million/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
25,268,435 |
https://www.tag1consulting.com/blog/drupalcon-europe-2020-coming
|
DrupalCon Europe 2020 is coming
| null |
DrupalCons are unquestionably the biggest events of the year in the Drupal community. It’s an opportunity for developers, designers, users, customers, and businesses to get together and talk about everything Drupal! This year has been a difficult one for conferences, as everyone cancels, reschedules, or moves online. DrupalCon has been no exception to this, bringing both of this year’s conferences online, in an effort to continue fostering the project’s strong community.
Always wanted to go to DrupalCon Europe but couldn’t afford the travel time and expense? DrupalCon this year gives people a unique opportunity - one of the biggest obstacles to DrupalCon attendance is getting there. With the conference being online, the difficulties and expenses of travel are reduced, because DrupalCon comes to you. With the reduced cost of travel, it makes more sense than ever for companies to encourage their employees to attend, and to purchase tickets for those attendees. And, if your company is looking to channel those resources it would’ve spent on travel? Sponsor DrupalCon, or help fund scholarships.
Because DrupalCon is virtual, people who wouldn’t normally travel have the opportunity to attend, giving a potentially larger group to discuss initiatives, work on updates and changes, or just get to know people you might not have a chance to meet otherwise! Supporting DrupalCon now, as an attendee, a company, or a sponsor, helps keep the community active, and moving forward. Meeting new people, attending talks and events, and generally being present helps you make connections you might not ever have made otherwise. Attending this conference also helps ensure there will be additional ones in the future, when we can be together in person.
## Who goes to DrupalCon?
As part of the community, Tag1 offered its employees free tickets to DrupalCon, encouraging everyone who was able to attend to take part in the online event. Almost two dozen individuals took the company up on the offer, from Jeremy Andrews in Tuscany, Italy, to Travis Whitehead and Narayan Newton on the US West coast. Users from around the world come to DrupalCon to meet and talk with people who share the same passion they do - to better the Drupal open source content management system.
## What about the talks?
Tag1 Consulting is a long time participant and sponsor of DrupalCon, and often has multiple speakers giving talks. For this year’s conference, Tag1 has these speakers on the docket:
- Fabian Franz, VP of software engineering, continues the talk he gave at DrupalCon Global 2020, with the next part of his deep dive into Drupal’s caching system. His previous talk is a great place to start for those who are new to Drupal, or anyone having a tough time figuring out how to approach caching in Drupal.
- Michael Meyers, Managing Director, and Jeremy Andrews, CEO, will be talking about the Drupal 7 and D8 end of life, why it matters to you, and what your options are if you’re still running Drupal 7. Tag1 is a pioneer of the Drupal extended support programs. In the talk they’ll tell you everything you need to know about how Extended Support works, and how Tag1 Consulting can help you navigate the challenges ahead with Tag1 Quo, or by helping you plan and execute an upgrade.
- Narayan Newton, Tag1 CTO and Drupal Association Systems Coordinator, is expected to participate in the Drupal.org Update panel, which will discuss everything going on with Drupal.org, and what the future holds.
This year’s conference has five different speaker tracks, covering everything from business to development to content creation and community building.
The Tag1 DrupalCon attendees have one major focus - Drupal itself, but each individual person has their areas of interest as well. Unsurprisingly, people tend to be interested in the Driesnote, because it is a “good gauge of where Drupal is and where it's going next.”
Some specific talks our team here at Tag1 look forward to are:
- The initiative leads keynote, which should cover where many of the strategic initiatives for Drupal 10 are, and how you can get involved in them.
- Similarly, Gábor Hojtsy’s session about Drupal 10, and what it’ll take to get there.
- Kevin Bridges is giving a talk called Drupal is dead. Long live Drupal, which looks to be a big picture talk about Drupal, its place in open source, the projects it is built on, and how helping those projects helps Drupal in the future.
- At Tag1, we help clients upgrade or migrate their platforms, making the talk Setting up your digital project for success: Lessons learned from a 70-site migration to Drupal of a global brand highly relevant for anyone involved in migrations, from developers to project managers.
- Your first steps to a successful content strategy (workshop) goes far beyond just the content, relating each step of your website’s lifecycle to your overall strategy.
- Tag1 leadership has a strong interest in the makers and builders track, primarily on the topics in backend development and Devops & Infrastructure.
- And lest we forget that work isn’t everything, Michael Schmid is helping remind us that there are some best (and worst) strategies for dealing with high-demand work.
## What else is there to look forward to?
Like the larger community, many of the Tag1 team members look forward to different things with each DrupalCon. Some look forward to giving talks, or talks given by others. Some enjoy the camaraderie of a convention where everyone is focused on improving the software they use every day.
One of the biggest challenges or drawbacks of any convention (aside from the inevitable con-crud) is the distance involved. While there are generally two DrupalCons a year, one in North America and one in Europe, the majority of attendees are generally localized to those regions. Long travel times, visas, and expenses can make these trips prohibitive. But those who do attend nearly always say it’s worth the effort. One of DrupalCon’s best tracks has always been the “Hallway track”, where you can walk up to nearly any group, and they’re likely to be talking about something you’re at least familiar with or may have an interest in. It can be a way to quickly meet new people, grab a handful of folks for a quick discussion on a particular topic, or make plans for the evening. DrupalCon’s conferencing software should still enable these very informal chances to ‘meet’ new people.
Somewhere in between the scheduled talks and the hallway track are BoF (Birds of a Feather) sessions, where people have small group sessions on whatever interests them, to brainstorm solutions, or just hang out with like-minded people or special interest groups. Past DrupalCons have included BoFs like a knitter and other crafters meetup, the Women of Drupal (which has since grown far beyond a BoF!), to module tutorials and development, to workshops on tools to help you in your day to day work.
Finally, there are events like the famous DrupalCon trivia night (watch out for webchick, she knows everything). Events like this help bring new and long-term members of the community together beyond work, going beyond just Drupal.
For many, DrupalCon is a chance to get together, and meet up with new friends and old. It’s a chance for new folks in the community to meet experienced folks, find projects to work on, and mentors. For folks who are nervous about walking up to a big group, a virtual conference enables you to join in, at whatever comfort level works for you - even without a camera on, if you don’t want it! For distributed companies like Tag1, it’s a chance to spend time with the people you work with, and really get to know them. It’s a good time for job seekers to meet potential employers, and for some, it’s a chance to get deep into projects that could use collaborative help. Big project decisions get made at DrupalCon, and being there is a chance to be part of those decisions. For many DrupalCon attendees, the conference is as much about the people as it is about the code.
*For more of Tag1's content for and about DrupalCon, see DrupalCons!*
Photo by Fikri Rasyid on Unsplash
| true | true | true |
DrupalCons are unquestionably the biggest events of the year in the Drup
|
2024-10-12 00:00:00
|
2020-12-01 00:00:00
|
article
|
tag1consulting.com
|
Tag1 Consulting
| null | null |
|
9,891,031 |
http://www.strangecompany.org/why-the-guy-who-coined-machinima-is-now-making-live-action-films/#
|
Why The Guy Who Coined "Machinima" Is Now Making Live-Action Films
| null |
## Why The Guy Who Coined “Machinima” Is Now Making Live-Action Films
I made my first Machinima – in-game animation – film in 1997. That was three years before I coined a word – Machinima – to describe the weird animation stuff I was doing, based on the suggestion of fellow Machinima pioneer Anthony Bailey.
I worked in the medium between then and 2014. But now, a year later, I’ve got 4 story projects going and none of them is Machinima. On Monday, I release my first ever live-action (fiction) film, *HOWTO: Demon Summoning* (click here to get updated when it comes out). It’s set in a world of Lovecraftian horrors and disgruntled techies, it uses a lot of technology I developed in Machinima, but it’s not Machinima.
Why?
Part of it is that Machinima falls uncomfortably between the “film” and “game” stools, which causes problems of its own. That’s something I’ll talk about another time.
But the main reason is that it feels like film – or, more accurately, digital video – is finally taking up the mantle which I felt Machinima alone held for so long: that of a truly democratic filmic medium.
It’s not so much that Machinima got worse; it’s that film got far, far better.
## What Do I Mean By A Democratic Medium?
But wasn’t film democratised way back when, in the early 2000s or even the late 90s?
Not at all. It became **more** accessible, certainly. Digital video opened up the possibility of creating a feature film to thousands of people who couldn’t consider it before. But it still had a long way to go.
The reason for the massive glut of indie horror movies and ‘mumblecore’ films – naturalistic dialogue, no lighting, few characters – is that digital video was a breakthrough, but it wasn’t the only breakthrough that live-action film needed. Cameras were cheaper, tape was far cheaper than film, and non-linear editing meant you could edit your film on a home computer rather than a Steenbeck rented by the hour, but plenty of things were still slow and awkward:
Lighting was a huge bugbear. Cameras required tons of light, and the only lights available were power-heavy tungstens, which would blow out ordinary 13-amp fuses.
Special effects were getting more sophisticated at the high end, but they still really weren’t any more practical at a garage-band level. Photorealistic 3D rendering took forever and required extremely expensive tools.
Props and set decoration required hours or days of scouring local shops or navigating the rudimentary options online.
Moving the camera was a nightmare. You could pick it up for the Blair Witch / Paranormal Activity vomitcam look, or you could stick it on a tripod. But if you wanted to do long, sophisticated moving shots, you were looking at a Hollywood budget again.
Consumer or prosumer digital video cameras still weren’t capable of film quality. They didn’t have interchangeable lenses, and they didn’t produce footage that looked as nice as film.
“Fix it in post” was still a phrase cursed in the industry. You could do wonders with film thanks to new digital tools – but those digital tools were expensive, slow and unreliable.
It was possible to make **a film** for very little money, and have it stand up – but not any film. The film had to be very precisely tailored for the medium.
You could tell a story, **but not any story**.
And that’s the reason I went into Machinima, made my first feature film using the medium, engaged Brian Blessed and Joanna Lumley to make a WoW movie, and so on. I was fully aware of digital video, but I didn’t want to either massively restrict my stories or go into the traditional movie death spiral of endless fundraising and zero filmmaking.
## In 2015, The Barriers Are Lifting
Those barriers, and the many other problems in live-action film, certainly haven’t vanished in 2015. But it feels to me like we’ve reached the tipping point, and that’s why I’m diving into real-world filmmaking at last.
Embarrassingly for someone best known for doing clever things with 3D animation packages, the biggest changes have come from much simpler special effects tools. In particular, *Adobe After Effects*, the mostly-2D compositing tool, has changed filmmaking and continues to do so.
Here’s an example of genius pioneer Gareth Edwards’ work using *After Effects* and similar tools to create massive battles with a tiny crew. I first became aware of Edwards when Justin Hall, whom I worked with on my feature film *Bloodspell*, edited Edwards’ first feature, *Monsters*. Subsequently, watching his work is one of the major reasons I’ve made the switch:
A lot of simple tools come together at once to cause this tipping point – and it’s not necessarily that the tools are available now, but that they’re *evolved* now.
It’s not just greenscreening, for example: that’s been around forever. But it’s also the evolution of greenscreening tools from massively time-intensive pains in the ass to things that I can use to create a 15-minute lecture in a single day from greenscreen footage.
3D camera move tracking has been around forever too, but it used to be useless unless you wanted to spend a month on a single shot. Now I can take a video with my DSLR, run it through some simple software and have a Minecraft zombie walking around, perfectly synced, within half a day:
Desk Zombie from Strange Company on Vimeo.
*HOWTO: Demon Summoning* wouldn’t have been possible to make without that tech. I’d still have been tracking my handheld camera shots for my CGI character next July.
And so it goes on. Lighting is one of the really big things that pushed me into live-action: where film used to require those massive, hot, dangerous lights, I can now use a Sony a7s camera capable of shooting really nice footage using only the light of the stars and the moon.
We shot much of *HOWTO: Demon Summoning* by the light of 4 tealight candles.
Likewise, if I need light, I can use a magic battery-powered stick – the Westcott Icelight – that generates pro-photographer quality light for hours with no leads and almost no weight. That made a hell of a difference on another upcoming short, *Dangerous Treasures*, where we had to rapidly move from room to room setting up shots, and the Icelight meant I could do that in minutes.
Where I would once have had to employ a team of sweating grips and miles of track, I can now use a gimbal: essentially, a robot which stabilises my camera and transforms wobbly handheld movements into silky-smooth dolly shots. On *HOWTO* that let me move the camera with the same sort of freedom I’m used to having in a 3D program, rather than being limited to the old-school low-budget approach of tripod or wobbly handheld.
And where I would have had to employ props masters just to source all the weird and wonderful props I needed, I can now just search on eBay or Amazon for bizarre sacrificial knives or props for a crack pipe.
It’s becoming amazing.
## And this is just the beginning
Experienced filmmakers and videographers will be getting terribly cynical at this point. Filmmaking is, as of 2015, certainly not yet cheap, effortless or completely without budgetary constraints.
Tools are still finicky and require powerful PCs. Cameras are expensive, have poor UI, and require a massive pile of support equipment. Certainly, you can make a film with a crew of two, but it’s not easy.
All of that is true, no question. But the thing is, we’re not at the end of this journey. We’re at the beginning.
Cameras are going to get smaller, faster and easier to use. MUCH smaller, faster and easier. We’re already seeing feature films shot on an iPhone which get into *Sundance*, and this isn’t an ‘in spite of how they look’ deal, either.
That was shot on an iPhone 5s. It’s universally acknowledged that the camera on the iPhone 6 is far better.
For a look at the future, look at GoPro. For those who aren’t aware, GoPro produce ‘action cameras’ – incredibly ruggedised cameras designed for use by people engaging in action sports. Note that I didn’t say, ‘people filming people engaging in action sports’. I mean: ‘for use whilst you’re climbing, skydiving or otherwise doing something that really requires your full attention’.
GoPro have just brought out a new camera. It’s absolutely tiny. It produces very nice footage; it’s not competing with top-end cameras, but the footage is perfectly usable for professional production.
And it has one button. Press to start recording, press again to stop.
That’s the level of ease we’re heading toward.
I just bought a new camera to use as a B camera, and immediately used it extensively in *Dangerous Treasures*. BTW, it’s also my phone.
I wrote a piece a month or so ago looking at the rise of robot cameras – things that can film, move and follow action autonomously. The first generation of those will spend a lot of time crashing into trees. The seventh generation won’t.
And so on.
We’re on the slope now. We’re gathering speed toward a world where making a film isn’t easy – it’ll never be easy to create good art – but it’s no harder than cracking open a copy of *Scrivener* is for authors.
And that’s why I’m suddenly interested.
Want to see the first example of what I can achieve with this stuff? As I said, *HOWTO: Demon Summoning* is coming out on Monday. It’s a fun tale of a startup founder who got screwed by his colleagues, and now he’s out for revenge – with the aid of the Dark Arts, and by following a handy YouTube tutorial. What could possibly go wrong? Click here to get updated when it comes out.
And I’d love to hear what you think. Am I being too optimistic, or is this stuff really happening?
| true | true | true |
I made my first Machinima – in-game animation – film in 1997. That was three years before I coined a word – Machinima – to describe the weird animation stuff I was doing, based on the suggestion of fellow Machinima pioneer Anthony Bailey. I worked in the medium between then and 2014. But now, a year later, I’ve got 4 story projects going and none of them is Machinima. On Monday, I release my first ever live-action (fiction) film, HOWTO: Demon Summoning ( click here to get updated when it comes out ).
|
2024-10-12 00:00:00
|
2015-07-09 00:00:00
| null |
article
|
strangecompany.org
|
strangecompany.org
| null | null |
15,324,256 |
https://www.neowin.net/news/canonical--microsoft-make-azure-tailored-linux-kernel
|
Canonical & Microsoft make Azure-tailored Linux kernel
|
Paul Hill
|
Canonical has announced that it is joining forces with the Microsoft Azure team to build an Azure-tailored Linux kernel for Ubuntu Cloud Images of Ubuntu 16.04 LTS. The new kernel will receive the same level of support and security maintenance offered to supported kernels but is not yet compatible with the Canonical Livepatch Service – if this doesn't matter to you, then you’ll be able to enjoy the improvements the kernel brings.
The kernel is provided by the linux-azure kernel package, here are the highlights from Canonical:
- Infiniband and RDMA capability for Azure HPC to deliver optimised performance of compute intensive workloads on Azure A8, A9, H-series, and NC24r.
- Full support for Accelerated Networking in Azure. Direct access to the PCI device provides gains in overall network performance offering the highest throughput and lowest latency for guests in Azure. Transparent SR-IOV eliminates configuration steps for bonding network devices. SR-IOV for Linux in Azure is in preview but will become generally available later this year.
- NAPI and Receive Segment Coalescing for 10% greater throughput on guests not using SR-IOV.
- 18% reduction in kernel size.
- Hyper-V socket capability – a socket-based host/guest communication method that does not require a network.
- The very latest Hyper-V device drivers and feature support available.
The new kernel package will be used by default in any Ubuntu 16.04 LTS image brought up from the Azure portal after the 21st of September. In order to check whether you’re running the Azure kernel, go to the command line and run **uname -r**; the output should be something like **4.11.0-1011-azure**, where ‘azure’ is the key part of the string.
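For example, on an instance running the tailored kernel, the check might look something like this (the exact version string will vary as updates ship):

```
$ uname -r
4.11.0-1011-azure
```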
In order to revert to a standard kernel without the benefits listed above, enter the following commands in the terminal:
```
$ sudo apt install linux-virtual linux-cloud-tools-virtual
$ sudo apt purge linux*azure
$ sudo reboot
```
Instances using the Azure tailored Ubuntu kernel will be supportable through Canonical’s Ubuntu Advantage service which is available from Canonical's online shop.
Source: Ubuntu Insights
| true | true | true |
Canonical and Microsoft have come together to build an Azure-tailored Linux kernel which provides several benefits over the standard kernel including an 18% reduction in the size of the software.
|
2024-10-12 00:00:00
|
2017-09-22 00:00:00
|
article
|
neowin.net
|
Neowin
| null | null |
|
2,380,704 |
http://daggle.com/better-letter-nyt-readers-digital-subscriptions-2514
|
A Better Letter To New York Times Readers About Digital Subscriptions
| null |
Today, the New York Times is taking a major step forward as we introduce digital subscriptions in the United States and the rest of the world. Since we first announced the plan 11 days ago, we’ve heard from so many of you, our readers. We’ve also heard from a bunch of noisy bloggers, but they just rip us off anyway, so we’re ignoring them.
We’re grateful for the feedback from our loyal readers (not those blogger brats) and, most of all, for your commitment to the The Times. So grateful, indeed, that we think you should start paying us, even though we’ll still be showing you all those ads.
As you may know, on March 17, we introduced digital subscriptions in Canada. That’s because we figured, “Who gives a crap if the Canadians complain?” Plus, Canadians are known for being pretty polite. We figured we’d be good there.
Officially, the Canadian launch allowed us to test our systems and fine-tune the user interface and customer experience. Today, we are launching globally. I know, I said that already in my lead, but I enjoy repetition.
## Memorize These Print Subscription Costs!
If you are a home delivery subscriber of The Times [we like to say “The Times” as if there are no other “Times” newspapers out there], you will continue to have full and free access to our news, information, opinion and other features on your computer, smartphone and tablet. International Herald Tribune subscribers will also receive free access to NYTimes.com.
We have three home delivery options, so you’ll pay:
- $193 for Monday-Friday delivery
- $270 for Friday-Sunday delivery
- $385 for all seven days in the week
Now memorize those figures, which we’ve shoved over on a page where they’re only accessible after you enter a ZIP code, and where you can’t easily compare them to our three different digital subscription prices. As a news publication, we wouldn’t want to make any of this stuff easy. We can do interactive graphics on elections and nuclear meltdowns, but our pricing plans? Maybe we’ll do a flowchart in the future.
If you are not a home delivery subscriber, you will have free access to 20 articles (including slide shows, videos and other features) each month.
By the way, because we break our stories up into two, three or more “pages” on the web for no other reason than to shove more ads your way, you won’t really get 20 “articles” but rather 20 page views. Bonus tip: even if you use the “print” option to view an article on one single page, that will have cost you a second click.
## Our 20 Article “Limit” [Chuckle]
If you go over those 20 articles, you’ll be asked to become a digital subscriber. You won’t be able to view any more articles on our site, sorry. No ifs, ands or buts. Except…
If you use our smartphone or tablet apps, the Top News section will remain free. But that’s it! Except….
If you come through links from search engines, blogs and social media, you’ll be able to read any article, even if you’ve used up your 20 limit already. Except….
If you come to the site through ANY link, you’ll be able to read any article, even if it’s not a link from search engines, blogs or social media sites. Except….
If you come from search engines, even though we already said you’ll be able to read any article even if you’ve hit your 20 per month limit, we actually meant any article except if you’ve already read five articles via search engines on that day. Get it? No? Yeah, it makes our heads hurt, too.
Why are search engine links so special? The short story is we’re pretty messed up about all this stuff. The long story, well, there’s a link below.
## People Who Don’t Know We Exist [Shudder] Deserve Freebies
We’re doing all this, giving away all this free access, because we don’t think “new” and “casual” users will:
- Cough up the same money that you, our loyal users will
- Or link to us giving us all those ad views that we earn money from, but not enough money, apparently
Our home page and all section fronts will remain free to browse for all users at all times. That’s because those new and casual freeloaders never come to our home page or section pages. But our regular users do, and we hope you’ll keep doing that, use up your free clicks, and pay to get rid of a barrier that a 12-year-old could figure out.
## Real Readers Whip Out Their Wallets
But you’re not 12. You’re 55, and paying the money is worth it to you. Pity you’re not really our future, though. Then again, we’ll be long gone before that issue gets even worse. Let those suck-head social media yapping editors and reporters deal with it when their time comes around; they think they’re so smart.
## Memorize These Digital Subscription Costs!
How about those digital options? Well, you can buy:
- $195 for web and smartphone app access
- $260 for web and tablet app access
- $455 for web, smartphone & tablet app access
## Killing Trees Saves Us $70 Per Person In Journalism Production Costs
Now I hear you asking yourself: is it true? I can get a human being to throw a hard copy of the New York Times on my porch seven days a week for $385 — and that comes with digital access on ANY device — but if I just want digital access, it costs me $70 more?
Yes. You see, despite all our yapping that we don’t make enough money off digital visitors, if we can just throw more paper copies on porches that people don’t actually read, we still make money, because we can continue to sell overpriced print ads to all our print advertisers as if they are somehow more valuable unseen on dead trees than when viewed through electronic pixels.
## Displaying Content In Tablet Apps Costs $65 More In Journalism Production Costs
I hear you asking: why should I pay $65 more to view the exact same content on my tablet app versus my smartphone app?
Um, because we can do that? Look, we don’t really have much control over all this digital stuff, so we take what we can get.
## Access To Both Tablets & Smartphones Costs $260 More In Journalism Production Costs
And I hear you asking, why’s it cost $260 per year more to view things on both your smartphone and your tablet?
Again, because we can. Why don’t you just take the paper edition, smart ass?
## Coming In Next Week’s Wall Street Journal, A Guide To The New York Times Paywall
As you have seen during this recent period of extraordinary global news, The Times is uniquely positioned to keep you informed. Except about all these plans, and how they make much sense. In that regard, we’re banking on a convoluted system that will let anyone who doesn’t want to pay to keep reading whatever they want while those who don’t know better, or those who figure “What the hell, I just want that nag screen to go away,” to pay.
| true | true | true | null |
2024-10-12 00:00:00
|
2011-03-28 00:00:00
|
article
|
dannysullivan.com
|
dannysullivan.com
| null | null |
|
40,925,886 |
https://www.theverge.com/2024/7/9/24194970/google-one-free-dark-web-monitoring
|
Google’s dark web monitoring service will soon be free for all users
|
Jennifer Pattison Tuohy
|
Since last year, Google has monitored the dark web for leaks of Google One subscribers’ stolen account information, such as phone numbers and physical addresses. But, starting later this month, Google’s dark web reports will be available to anyone with a Google account.
According to a Google support page about the transition, the free service will be part of Google’s “results about you” page. This is where you can currently check for information Google has indexed that contains personal contact info like your home address, phone number, or email address and request it be removed so that it doesn’t surface in search results. Google says the move will create a “combined solution to help users protect their online presence.”
Of course, several services — both paid and free, like Have I Been Pwned? — will scan the dark web for your data and send you alerts. But, for Google users, combining the company’s two monitoring features into a single place to view potential personal information leaks makes sense.
This does mean that both perks added last spring for the more than 100 million paid-up Google One subscribers (plans start at $1.99 a month) have been removed. Last month, Google announced that the other addition, its VPN by Google One service, will shut down later this year.
It’s unlikely these were the reasons anyone signed up for Google One in the first place, but it could be disheartening to see benefits disappear without a corresponding drop in price.
The main reason to sign up for Google One is to get more storage for your Google account, including photos and Gmail storage. While there are other perks — including premium Google Meet video calling features, the ability to share your storage with up to five people, and enhanced appointment scheduling in Google Calendar — none are *that* compelling. Google’s Gemini-powered AI features might be something you’d consider paying for, but those require higher tiers of Google One, starting at $19.99 a month.
| true | true | true |
Google can tell you if your info shows up in a data leak.
|
2024-10-12 00:00:00
|
2024-07-09 00:00:00
|
article
|
theverge.com
|
The Verge
| null | null |
|
862,656 |
http://www.spectrum.ieee.org/robotics/humanoids/the-reality-of-robot-surrogates
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
6,366,278 |
http://www.apple.com/uk/iphone-5s/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
4,004,506 |
http://summitcountyvoice.com/2012/05/20/colorado-2012-solar-eclipse-photos/
|
summitcountyvoice.com
| null |
Buy this domain.
summitcountyvoice.com
| true | true | true |
This domain may be for sale!
|
2024-10-12 00:00:00
|
2024-01-01 00:00:00
| null | null | null | null | null | null |
11,002,774 |
http://hardmath123.github.io/armchair-philosophy.html
|
Armchair Philosophy
| null |
# Armchair Philosophy
An allegory.
#### Friday, January 29, 2016 · 4 min read
The IKEA Poäng is perhaps the company’s most comfortable and best-named product: a chic, springy twist to the classic light armchair. The Poäng comes in five or six different color schemes: generally variations on white, beige, red, and coffee.
But what if it didn’t?
Let’s imagine an alternate universe, where the Poäng is advertised as a medium of expression. Let’s imagine a world where the Poäng seat covers are made of dye-able canvas. A world where customers are encouraged to decorate their armchairs to reflect their own personalities.
Sounds like fun, doesn’t it? Well, uh, let’s see what happens. I present to you an allegory in twelve parts.
**January.** The concept is first revealed during the keynote at the IKEA
Worldwide Developers Conference. The Twitterverse explodes. *The New York
Times* says, “What a time to be alive!”
**February.** IKEA sells out within the first 24 hours of sales; customers
waiting in line report being “disappointed, but contently stuffed with
meatballs”. Television commercials begin to feature contemporary artists
decorating their Poängs. There are rumors of AMC Theaters planning to
license Poängs for their cinemas. BuzzFeed publishes ten of their best
Poäng-assembling tips and tricks (you won’t *believe* #4).
**March.** Almost everyone now owns a Poäng. A dark blue Poäng
with the Presidential Seal is spotted in the White House.
**April.** One’s Poäng-decoration becomes a profound statement of his or
her identity. After all, an armchair is where you spend some of your most
important hours. Reading, chatting, watching TV: these are all best done from a
familiar environment that should be optimized for your lifestyle.
A Berkeley establishment begins to sell tie-dyed Poäng covers.
**May.** Genres emerge.
There are the loud, skeuomorphic Poängs with too much color and design. These generally belong to young children who decorate their Poängs in Crayola colors.
Then there are the average adults, who choose the most suburban colors they can find. Navy blue? Perfect. Olive green? Sounds like home.
Finally, there are the artistic adults, who go for a more refined look. They pick neutral but subtle color schemes with tasteful accents.
**June.** The Average Adults realize that their Poängs look outmoded
compared to the beautiful Poängs of the Artistic Adults. Pastel colors are
the “in” thing, according to several popular Poäng-centered Instagram
accounts.
**July.** The development of Poäng plugins spawns a new industry. Embedded
hardware for Poäng covers becomes cheap, resulting in increasingly
sophisticated Poängs.
**August.** The genres begin to homogenize into something the Chair Gurus call
the “material design revolution”. A combination of color palettes and design
guidelines assembled by experienced superstar designers guides every new
Poäng design.
An NPR survey reveals that while over 40% of the US population owns a Poäng, only 12% of Poäng-owners report sitting in their armchairs regularly.
**September.** IKEA begins selling readymade Poängs designed painstakingly
by expert designers and artists. They even deliver it—assembled—to your
doorstep. Most people choose to buy the readymade Poängs because they are
low-maintenance and don’t require as much effort to set up. They are also
stunningly beautiful, and the experienced designers probably took care of a lot
of corner-cases that you, as an amateur, wouldn’t really think of.
**October.** Hand-decorated Poängs begin to look passé. Many of
them lack essential armchair features such as cupholders and localization
settings. They also ignore common best practices in the industry. Marketing
professionals say that hand-decorated Poängs are a poor business choice
for furnishing your waiting room because they “project an outdated look to
potential customers”.
“Don’t roll your own paint,” preaches one blog post that tops Hacker News.
Google publishes a framework to develop apps for the front end of Poängs. They call it PoAngularJS. The average chair now weighs significantly more than the average American.
**November.** IKEA sells one kind of Poäng now. Customers have occasional
problems with them, but you can find workarounds online. Besides, everything
else is so user-friendly. It’s really just a couple little things that bother
you, like the Wi-Fi crashing every once in a while.
Very few hand-decorated Poängs exist, mostly in educational institutions.
Old people complain that “see, them chairs had *character* in them”, but
they’ve been saying that for centuries.
**December.** IKEA discontinues the Poäng. Usage of armchairs is
deprecated in favor of the “one-person couch”, which is a remarkable new piece
of technology destined to revolutionize the way we think about sitting.
Nobody really remembers how to put together an old-fashioned armchair (just like they don’t remember how to build a gramophone). Some engineers work together to build their own version of the Poäng called the LibreChair. However, it is only used by hardcore carpentry enthusiasts since the manual is twelve pages long and building it requires you to weave your own cloth.
**Epilogue.** Let’s talk about customization. The etymology of the word
*custom* can be traced to the
Latin *consuetudo*, which means “habit”. But it means more than “habit”. It
means “experience”, “tradition”, “convention”, “familiarity”, “companionship”,
“conversation”… even “love affair”.
And it’s this dichotomy between the *individual* and the *communal* that makes
the idea of “customization” (which is so central to hackerdom) paradoxical. Our
identity is as much our own as not; we forfeit our identity to others.
There’s something to be said about having a fortress of solitude. A world which you control, which you make your own with endless tweaks towards your ideals of perfection. Programmers don’t need to carve their fortresses out of rocky cliffs; they can find solace in editors, shells, browsers, and personal websites.
The key is in *customization*.
Yet *even though* we spend hours making our tools “our own” with color schemes,
macros, and key bindings, we *still* choose to publish our dotfiles as
open-source “projects” on Github. We scarcely bother to read the original
documentation of our software, choosing instead to search for solutions written
already on StackOverflow. We happily hand over our content to the corporate
Cerberus that calls itself Medium. We choose to adhere to style guides written
by people who are not us. We foist upon others screenshots of artistically
themed editors that are no better than gilded toothbrushes. We steal
boilerplate and eye-candy from others, believing somehow that we’re doing
ourselves favors.
It’s foreign, it’s homogeneous, it’s both beautiful and sickening: like a fortress made of cotton candy.
| true | true | true | null |
2024-10-12 00:00:00
|
2016-01-29 00:00:00
| null | null |
github.io
|
hardmath123.github.io
| null | null |
37,562,847 |
https://www.theguardian.com/technology/2023/sep/18/elon-musk-accuses-george-soros-foundation-of-wanting-to-destroy-western-civilisation
|
Elon Musk hits out at Soros foundation before meeting Israel’s Netanyahu
|
Hibaq Farah
|
Elon Musk has accused George Soros’s foundation of wanting to destroy western civilisation, as the tech tycoon prepares to meet the Israeli prime minister, Benjamin Netanyahu, in California.
Musk made the comment in reply to a post by a user sharing footage of people arriving on the Italian island of Lampedusa from north Africa that referred to a “George Soros led invasion” of Europe.
“The Soros organization appears to want nothing less than the destruction of western civilization,” X’s owner posted.
Soros, a Hungarian-American businessman and philanthropist, has been the subject of multiple antisemitic conspiracy theories.
His Open Society Foundations, which supports civil society groups including some working on issues affecting the safety and wellbeing of migrants, refugees and asylum seekers, is a regular target of the far right.
Musk has been accused by civil rights groups of amplifying antisemitism on his platform – which he denies. He is scheduled to meet Netanyahu on Monday for talks that both men have said will focus on artificial intelligence technology, and not the Anti-Defamation League (ADL), with whom Musk is feuding over antisemitism on X.
However, the Washington Post reported last week that the meeting was the latest step in a campaign by Musk’s Jewish friends and allies to stave off the mounting criticism surrounding the increase in antisemitic speech on X.
Musk has threatened to sue the ADL, a US-based civil rights group that campaigns against antisemitism and bigotry, and accused it of trying to “kill” the platform by “falsely accusing it and me of being antisemitic”.
Civil rights groups, including the Center for Countering Digital Hate and the ADL, have issued findings that the volume of hate speech on X has grown under the stewardship of Musk.
| true | true | true |
X owner makes comment on his platform as he prepares to meet Israeli prime minister Benjamin Netanyahu
|
2024-10-12 00:00:00
|
2023-09-18 00:00:00
|
article
|
theguardian.com
|
The Guardian
| null | null |
|
26,445,443 |
https://fossil-scm.org/home/doc/trunk/www/concepts.wiki
|
Fossil Concepts
| null |
## 1.0 Introduction
Fossil is a software configuration management system. Fossil is software that is designed to control and track the development of a software project and to record the history of the project. There are many such systems in use today. Fossil strives to distinguish itself from the others by being extremely simple to set up and operate.
This document is intended as a quick introduction to the concepts behind Fossil.
See also:
## 2.0 Composition Of A Project
[Diagram: two “Remote Repository” cylinders and a “Local Repository” cylinder exchanging changes over HTTPS, with the local repository tied to two local source trees.]
A software project normally consists of a "source tree". A source tree is a hierarchy of files that are used to generate the end product. The source tree changes over time as the software grows and expands and as features are added and bugs are fixed. A snapshot of the source tree at any point in time is called a "version" or "revision" or a "baseline" of the product. In Fossil, we use the name "check-in".
A "repository" is a database that contains copies of all historical check-ins for a project. Check-ins are normally stored in the repository in a highly space-efficient compressed format (delta encoding). But that is an implementation detail that you the user need not worry over. Think of the repository as a safe place where all your old check-ins are securely stored away and available for retrieval whenever you need them.
A repository in Fossil is a single file on your disk. This file might be rather large (dozens or hundreds of megabytes for a large or long running project) but it is nevertheless just a file. You can move it around, rename it, write it out to a memory stick, or do anything else you normally do with files.
Each source tree that is controlled by Fossil is associated with a single repository on the local disk drive. You can tie two or more source trees to a single repository if you want (though one tree per repository is the most common configuration). So a single repository can be associated with many source trees, but each source tree is associated with only one repository.
Fossil source trees may not overlap. A Fossil source tree is identified by a file named "_FOSSIL_" (or ".fslckout", but this article will always use the name "_FOSSIL_") in the root directory of the source tree. Every file that is a sibling of _FOSSIL_ and every file in every subfolder is considered potentially a part of the source tree. The _FOSSIL_ file contains (among other things) the pathname of the repository with which the source tree is associated. On the other hand, the repository has no record of its source trees. So you are free to delete a source tree or move it around without consequence. But if you move or rename or delete a repository, then any source trees associated with that repository will no longer be able to locate their repository and will stop working.
When multiple developers are working on the same project, each developer typically has his or her own local repository and an associated source tree in which to work. Developers share their work by "syncing" the content of their local repositories either directly or through a central server. Changes can "push" from the local repository into a remote repository. Or changes can "pull" from a remote repository into a local repository. Or one can do a "sync" which is a shortcut for doing both a push and a pull at the same time. Fossil also has the concept of "cloning". A "clone" is like a "pull", except that instead of beginning with an existing local repository, a clone begins with nothing and creates a new local repository that is a duplicate of a remote repository.
Communication between repositories is normally via HTTPS. (SSH is also supported, as is unencrypted HTTP.) Remote repositories are identified by URL. You can also point a web browser at a repository and get human-readable status, history, and tracking information about the project.
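As a concrete illustration, a first-time contributor might clone a project and open a working tree with commands like the following (the URL and file names here are placeholders, not a real project):

```
fossil clone https://example.com/project project.fossil
mkdir project-tree
cd project-tree
fossil open ../project.fossil
fossil sync
```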
### 2.1 Identification Of Artifacts
A particular version of a particular file is called an "artifact". Each artifact has a universally unique name which is the SHA1 or SHA3-256 hash of the content of that file expressed as either 40 or 64 characters of lower-case hexadecimal. (See the hash policy document for information on which algorithm is used, when.) Such a hash is referred to as the Artifact ID. These hash algorithms were created with Fossil's purpose in mind: to provide a highly forgery-resistant identifier for a blob of data, such as a file. Given any file, it is simple to find the artifact ID for that file. But given an artifact ID, it is computationally intractable to generate a file that will have that same artifact ID.
Artifact IDs look something like this:
```
6089f0b563a9db0a6d90682fe47fd7161ff867c8
59712614a1b3ccfd84078a37fa5b606e28434326
19dbf73078be9779edd6a0156195e610f81c94f9
b4104959a67175f02d6b415480be22a239f1f077
997c9d6ae03ad114b2b57f04e9eeef17dcb82788
```
When referring to an artifact using Fossil, you can use a unique prefix of the artifact ID that is four characters or longer. This saves a lot of typing. When displaying artifact IDs, Fossil will usually only show the first 10 digits since that is normally enough to uniquely identify a file.
Changing (or adding or removing) a single byte in a file results in a completely different artifact ID. And since the artifact ID is the name of the artifact, making any change to a file results in a new artifact. In this way, artifacts are immutable.
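To make the naming scheme concrete, here is a minimal Python sketch of how an artifact ID is derived for a file hashed with SHA3-256 (the file name is just an illustration; Fossil computes this internally, and SHA1-era artifacts would use `hashlib.sha1` instead):

```python
import hashlib

def artifact_id(path):
    """Return the SHA3-256 artifact ID: 64 lowercase hex digits of the file's raw bytes."""
    with open(path, "rb") as f:
        return hashlib.sha3_256(f.read()).hexdigest()

aid = artifact_id("main.c")
print(aid)        # the full 64-character artifact ID
print(aid[:10])   # the 10-digit prefix Fossil usually displays
```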
A repository is really just an unordered collection of artifacts. New artifacts can be added to the repository, but existing artifacts can never be removed. (Well, almost never. There is a "shunning" mechanism that allows spam or other inappropriate content to be removed if absolutely necessary, but such removal is discouraged.) Fossil is designed in such a way that it can be handed a set of artifacts in any order and it can figure out the relationship between those artifacts and reconstruct the complete development history of a software project.
### 2.2 Manifests
Associated with every check-in is a special file called the "manifest". The manifest is a listing of all other files in that source tree. The manifest contains the (complete) artifact ID of each file and the name of the file as it appears on disk, and thus serves as a mapping from artifact ID to disk name. The artifact ID of the manifest is the identifier for the entire check-in. When you look at a "timeline" of changes in Fossil, the ID associated with each check-in or commit is really just the artifact ID of the manifest for that check-in.
The manifest file is not normally a real file on disk. Instead, the manifest is computed in memory by Fossil whenever it needs it. However, the "fossil setting manifest on" command will cause the manifest file to be materialized to disk, if desired. Both Fossil itself and SQLite cause the manifest file to be materialized to disk so that the makefiles for these projects can read the manifest and embed version information in generated binaries.
Fossil automatically generates a manifest whenever you "commit" a new check-in. So this is not something that you, the developer, need to worry with. The format of a manifest is intentionally designed to be simple to parse, so that if you want to read and interpret a manifest, either by hand or with a script, that is easy to do. But you will probably never need to do so.
In addition to identifying all files in the check-in, a manifest also contains a check-in comment, the date and time when the check-in was established, who created the check-in, and links to other check-ins from which the current check-in is derived. There are also a couple of checksums used to verify the integrity of the check-in. And the whole manifest might be PGP clearsigned.
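Since the format is line-oriented, a script can pull the interesting pieces out of a manifest with very little code. The sketch below is illustrative only; the card letters (F for a file, C for the comment, D for the date, P for parent check-ins, U for the user) follow the Fossil file-format documentation, and real manifests contain additional card types that this sketch simply skips:

```python
def parse_manifest(text):
    """Extract a few common cards from a Fossil manifest (illustrative sketch)."""
    info = {"files": {}, "parents": []}
    for line in text.splitlines():
        card, _, rest = line.partition(" ")
        if card == "F":                       # F <filename> <artifact-id> [permissions]
            name, artifact = rest.split(" ")[:2]
            info["files"][name] = artifact
        elif card == "C":                     # check-in comment; spaces are escaped as \s
            info["comment"] = rest.replace("\\s", " ")
        elif card == "D":                     # date/time the check-in was established
            info["date"] = rest
        elif card == "P":                     # artifact IDs of parent check-ins
            info["parents"] = rest.split(" ")
        elif card == "U":                     # user who created the check-in
            info["user"] = rest
    return info
```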
### 2.3 Key concepts
- A **check-in** is a set of files arranged in a hierarchy.
- A **repository** keeps a record of historical check-ins.
- Repositories share their changes using **push**, **pull**, **sync**, and **clone**.
- A particular *version* of a particular file is an **artifact** that is identified by an **artifact ID**.
- Artifacts tracked by Fossil are inherently immutable.
- Fossil automatically generates a **manifest** file that identifies every artifact in a check-in.
- The artifact ID of the manifest is the identifier of the check-in.
## 3.0 Fossil - The Program
Fossil is software. The implementation of Fossil is in the form of a single executable named "fossil" (or "fossil.exe" on Windows). To install Fossil on your system, all you have to do is obtain a copy of this one executable file (either by downloading a pre-compiled version or compiling it yourself) and then putting that file somewhere on your PATH.
Fossil is completely self-contained. It is not necessary to install any other software in order to use Fossil. You do **not** need CVS, gzip, diff, rsync, Python, Perl, Tcl, Java, Apache, PostgreSQL, MySQL, SQLite, patch, or any similar software on your system in order to use Fossil effectively. You will want to have some kind of text editor for entering check-in comments. Fossil will use whatever text editor is identified by your VISUAL environment variable. Fossil will also use GPG to clearsign your manifests if you happen to have it installed, but Fossil will skip that step if GPG is missing from your system.
You can optionally set up Fossil to use external "diff" programs,
though Fossil has an excellent built-in "diff" algorithm that works
fine for most people. If you happen to have Tcl/Tk installed on your
system, Fossil will use it to generate a graphical "diff" display when
you use the --tk option to the "diff" command, but this too is entirely
optional.
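As a sketch, configuring an external diff program and invoking the graphical diff might look like the following (the program name is illustrative, and you should run "fossil help settings" to confirm the setting names your version supports):

    fossil setting diff-command "meld"
    fossil diff --tk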
To uninstall Fossil, simply delete the executable.
To upgrade an older version of Fossil to a newer version, just replace the old executable with the new one. You might need to run "**fossil all rebuild**" to restructure your repositories after an upgrade. Running "all rebuild" never hurts, so when upgrading it is a good policy to run it even if it is not strictly necessary.
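For example, an upgrade on a Unix-like system might be just (the file names and path are illustrative):

    mv fossil-new /usr/local/bin/fossil
    fossil all rebuild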
To use Fossil, simply type the name of the executable in your shell, followed by one of the various built-in commands and arguments appropriate for that command. For example:
fossil help
In the next section, when we say things like "use the **help**
command" we mean to use the command name "help" as the first
token after the name of the Fossil executable, as shown above.
## 4.0 Workflow
*[Diagram: a remote repository and a local repository exchange changes via **push**, **pull**, and **clone**; the local source tree and the local repository exchange changes via **commit**, **open**, **update**, and **merge**.]*
Fossil has two modes of operation: *autosync* and *manual-merge*. Autosync mode is reminiscent of CVS or SVN in that it automatically keeps your changes synchronized with your co-workers through the use of a central server. Manual-merge mode is the standard workflow for Git or Mercurial, in which your local repository develops independently of your coworkers' repositories and you share and merge your changes manually. An interesting feature of Fossil is that it supports both the autosync and manual-merge workflows.
The default setting for Fossil is to be in autosync mode. You can change the autosync setting or check the current autosync setting using commands like:
    fossil setting autosync on
    fossil setting autosync off
    fossil settings
The author finds that projects run more smoothly in autosync mode, since autosync helps to prevent pointless forking and merging and helps keep all collaborators working on exactly the same code rather than on their own personal forks of it. In the author's view, manual-merge mode should be reserved for disconnected operation.
### 4.1 Autosync Workflow
- Establish a local repository using either the **new** command to start a new project, or the **clone** command to make a clone of a repository for an existing project.
- Establish one or more source trees using the **open** command with the name of the repository file as its argument.
- The **open** command in the previous step populates your local source tree with a copy of the latest check-in. Usually this is what you want. In the rare cases where it is not, use the **update** command to switch to a different check-in. Use the **timeline** or **leaves** commands to identify alternative check-ins to switch to.
- Edit the code. Add new files to the source tree using the **add** command. Omit files from future check-ins using the **rm** command. (Even when you remove files from future check-ins, those files continue to exist in historical check-ins.) Test your changes.
- Create a new check-in using the **commit** command. You will be prompted for a check-in comment and also for your GPG key if you have GPG installed. The commit copies the edits you have made in your local source tree into your local repository. After your commit completes, Fossil will automatically **push** your changes back to the server you cloned from or whatever server you most recently synced with.
- When your coworkers make their own changes, you can merge those changes into your local source tree using the **update** command. In autosync mode, **update** will first go back to the server you cloned from or with which you most recently synced, and pull down all recent changes into your local repository. Then it will merge those changes into your local source tree. If you do an **update** and find that it messes something up in your source tree (perhaps a co-worker checked in incompatible changes), you can use the **undo** command to back out the changes.
- Repeat all of the above until you have generated great software. (A condensed transcript of this workflow appears below.)
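As a rough sketch of an autosync session, the commands might look like the following transcript (the URL, directory, file name, and comment are all illustrative):

    fossil clone https://example.com/project project.fossil
    mkdir project && cd project
    fossil open ../project.fossil
    # ...edit files, then...
    fossil add src/feature.c
    fossil commit -m "Implement feature"   # autosync pushes automatically
    fossil update                          # pulls coworkers' changes, then merges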
### 4.2 Manual-Merge Workflow
When autosync is disabled, the **commit** command is decoupled from
**push** and the **update** command is decoupled from **pull**.
That means you have to do a few extra steps in order to accomplish the
**push** and **pull** tasks manually.
- Establish a local repository using either the **new** command to start a new project, or the **clone** command to make a clone of a repository for an existing project. The default setting for a new repository is with autosync on, so you will need to turn it off using the **setting autosync off** command with a **-R** option to specify the repository.
- Establish one or more source trees by changing your working directory to where you want the root of the source tree to be, then issuing the **open** command with the name of the repository file as its argument.
- The **open** command in the previous step populates your local source tree with a copy of the latest check-in. Usually this is what you want. In the rare cases where it is not, use the **update** command to switch to a different check-in. Use the **timeline** or **leaves** commands to identify alternative check-ins to switch to.
- Edit the code. Add new files to the source tree using the **add** command. Omit files from future check-ins using the **rm** command. (Even when you remove files from future check-ins, those files continue to exist in historical check-ins.) Test your changes.
- Create a new check-in using the **commit** command. You will be prompted for a check-in comment and also for your GPG key if you have GPG installed. The commit copies the edits you have made in your local source tree into your local repository.
- Use the **push** command to push your changes out to a server where your co-workers can access them.
- When co-workers make their own changes, use the **pull** command to pull those changes into your local repository. Note that **pull** does not move the changes into your local source tree, only into your local repository.
- Once changes are in your local repository, use the **update** command to merge them into your local source tree. If you merge in some changes and find that they do not work out or are not to your liking, you can back them out using the **undo** command.
- If two or more people run **commit** against the same check-in, the result is a fork, which you may want to resolve by running **merge** followed by another **commit**.
- Repeat all of the above until you have generated great software. (A condensed transcript of this workflow appears below.)
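As a rough sketch of a manual-merge session (again with illustrative names):

    fossil clone https://example.com/project project.fossil
    fossil setting autosync off -R project.fossil
    mkdir project && cd project
    fossil open ../project.fossil
    # ...edit files, then...
    fossil commit -m "Fix a bug"   # no automatic push in manual-merge mode
    fossil push                    # share your changes with the server
    fossil pull                    # fetch coworkers' changes into the repository
    fossil update                  # merge those changes into the source tree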
## 5.0 Setting Up A Fossil Server
With other configuration management software, setting up a server is a lot of work that normally takes time, patience, and considerable system knowledge. Fossil is designed to avoid this frustration: setting up a server with Fossil is ridiculously easy. You have four options:
- **Stand-alone server.** Simply run the fossil server or fossil ui command from the command line.
- **CGI.** Install a 2-line CGI script on a CGI-enabled web server like Apache.
- **SCGI.** Start an SCGI server using the fossil server --scgi command for handling SCGI requests from web servers like Nginx.
- **Inetd or Stunnel.** Configure programs like inetd, xinetd, or stunnel to hand off HTTP requests directly to the fossil http command.
See the How To Configure A Fossil Server document for details.
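For instance, the stand-alone option can be as simple as the following (the port number and repository path are illustrative):

    fossil server --port 8080 /home/user/project.fossil

and the CGI option needs only a script along these lines, with both paths adjusted for your system:

    #!/usr/bin/fossil
    repository: /home/user/project.fossil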
## 6.0 Review Of Key Concepts
- The **fossil** program is a self-contained stand-alone executable. Just put it somewhere on your PATH to install it.
- Use the **clone** or **new** commands to create a new repository.
- Use the **open** command to create a new source tree.
- Use the **add** and **rm** (or **delete**) commands to add and remove files from the local source tree.
- Use the **commit** command to create a new check-in.
- Use the **update** command to merge in changes from others.
- The **push** and **pull** commands can be used to share changes manually, but these things happen automatically in the default autosync mode.
| true | true | true | null |
2024-10-12 00:00:00
|
2024-10-12 00:00:00
| null | null | null | null | null | null |
5,128,247 |
http://phenomena.nationalgeographic.com/2013/01/23/shakespeares-sonnets-and-mlks-speech-stored-in-dna-speck/
|
Shakespeare’s Sonnets and MLK’s Speech Stored in DNA Speck
|
Ed Yong
|
# Shakespeare’s Sonnets and MLK’s Speech Stored in DNA Speck
When Nick Goldman first opened the package, he couldn’t quite believe that it contained anything at all, much less all of Shakespeare’s sonnets. The parcel had come from a facility in the US and arrived at the European Bioinformatics Institute in the UK, in March 2012. It contained a series of small plastic vials, at the bottom of which were… apparently nothing. It was Goldman’s colleague Ewan Birney who showed him the tiny dust-like specks that he had missed.
These specks were DNA, and they contained:
- All of the Bard’s 154 sonnets.
- A 26-second clip of Martin Luther King’s legendary “I have a dream” speech
- A PDF of James Watson and Francis Crick’s classic paper where they detailed the structure of DNA
- A JPEG photo of Goldman and Birney’s institute
- A code that converted all of that into DNA in the first place
The team sent the vials off to a facility in Germany, where colleagues dissolved the DNA in water, sequenced it, and reconstructed all the files with 100 percent accuracy. It vindicated the team’s efforts to encode digital information into DNA using a new technique—one that could be easily scaled up to global levels. And it showed the potential of the famous double-helix as a way of storing our growing morass of data.
## A better format
DNA has several big advantages over traditional storage media like CDs, tapes or hard disks. For a start, it takes up far less space. Goldman’s files came to 757 kilobytes and he could barely see them. For a more dramatic comparison, CERN, Europe’s big particle physics laboratory, currently stores around 90 petabytes of data (a petabyte is a million gigabytes) on around 100 tape drives. Goldman’s method could fit that into 41 grams of DNA. That’s a cupful.
DNA is also incredibly durable. As long as it is kept in cold, dry and dark conditions, it can last for tens of thousands of years with minimal care. “The experiment was done 60,000 years ago when a mammoth died and lay there in the ice,” says Goldman. Readable DNA fragments have been recovered from such mammoths, as well as a slew of other prehistoric creatures. “And those weren’t even carefully prepared samples. If you did that under controlled circumstances, you should be good for more than 60,000 years.”
(For those of you wondering if the information would mutate, it can’t. It’s not inside a living thing, and not being copied. It’s just the isolated non-living molecule.)
And using DNA would finally divorce the thing that stores information from the things that read it. Time and again, our storage formats become obsolete because we stop making the machines that read them—think about video tapes, cassettes, or floppy disks. That’s a faff—it means that archivists have to constantly replace all their equipment, and laboriously rewrite their documents in the new format du jour, all at great expense. But we will always want to read DNA. It’s the molecule of life. Biologists will always study it. The sequencers may change, but as Goldman says, “You can stick it in a cave in Norway, leave it there in a thousand years, and we’ll still be able to read that.”
## The code
DNA has a proven track record for storing information. It already stores all the instructions necessary to build one of you, or a giraffe, or an oak tree, or a beetle (oh so many beetles). To exploit it, all you need to do is to convert the binary 1s and 0s that we currently use into the As, Gs, Cs and Ts of DNA.
A Harvard scientist called George Church did exactly that last year. He used a simple cipher, where A and C represented 0, and G and T represented 1. In this way, he encoded his new book, some image files, and a Javascript programme, amounting to 5.2 million bits of information.
Goldman and Birney have encoded the same amount, but with a more complex scheme. In their system, every byte—a string of 8 ones or zeroes—is converted into five DNA letters. These strings are designed so that there are never any adjacent repeats. This makes it easier for sequencing machines to read and explains why they had a far lower error rate (that is, none) compared to Church’s method.
Using their cipher, they converted every stream of data into a set of DNA strings. Each one is exactly 117 letters long and contains indexing information to show where it belongs in the overall code. The strings also overlap, so that every bit is covered by four separate strings. Again, this reduces error. Any mistake would have to happen on four separate strings, which is very unlikely.
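To make the flavour of such a repeat-avoiding code concrete, here is a toy Python sketch (a simplification for illustration, not the published Goldman–Birney scheme): each incoming base-3 digit selects one of the three DNA letters that differ from the previously written letter, so the output can never contain two identical adjacent letters.

    def encode_trits(trits):
        """Toy rotating cipher: each trit (0, 1, or 2) picks one of the
        three bases that differ from the previously emitted base, so no
        two adjacent output letters are ever the same."""
        bases = "ACGT"
        prev = "A"  # arbitrary starting context
        out = []
        for t in trits:
            choices = [b for b in bases if b != prev]  # always 3 options
            prev = choices[t]
            out.append(prev)
        return "".join(out)

    print(encode_trits([0, 2, 1, 0]))  # -> "CTCA": no adjacent repeats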
Accuracy aside, Goldman’s coding system has a more fanciful advantage—it should be apocalypse-proof. Imagine that there’s a calamity that wrecks human civilisation, creating a huge discontinuity in our technology. The survivors rebuild and eventually relearn what DNA is and how to decode it. Maybe they find some of these stores, locked away in a vault. “They’d quickly notice that this isn’t DNA like anything they’ve seen,” says Goldman. “There are no repeats. Everything’s the same length. It’s obviously not something from a bacterium or a human. Maybe it’s worth investigating. Of course you’d need to send some sort of Rosetta stone to tell people how to decode the message…”
## Scaling up
Goldman calculated that this method could be feasibly scaled up to cover all of the world’s data (which currently stands at around 3 zettabytes—3 million million gigabytes). For now, the big problems are cost and speed. It’s still expensive to read DNA, and *really* expensive to write it. The team estimate that you would pay $12,400 to encode every megabyte of data, and $220 to read it back, based on current costs. But those costs are falling exponentially, far faster than those of other electronics.
If you use DNA, you face a steep one-time cost of writing the data. If you use other technologies, you face the recurring costs of having to re-write the data into whatever new format has arrived. It’s the ratio between these two prices that drives the economics of DNA storage.
At the moment, DNA only becomes cost-effective if you want to store things for 600 to 5,000 years—that’s the threshold where the one-time cost outweighs all the constant re-writing. But if the price of writing DNA falls by 100 times in the next decade, as it assuredly will, then DNA becomes a cost-effective option for storing anything beyond 50 years. “Maybe you’d store your wedding videos,” says Goldman.
DNA technology is also getting faster, but for now, it only makes sense to use it for data that you want to keep for a very long time but aren’t going to access very often.
CERN’s a good example. By 2015, the Large Hadron Collider will be collecting around 50 to 60 petabytes every year—that’s a lot of tape! They also have to migrate their entire archives to new media every four to five years, to save space and avoid the cost of maintaining old equipment. And although people rarely use old data, it has to be kept for at least 20 years, and probably even longer. DNA could be a perfect means of storing these archives (although CERN’s senior computer scientist German Cancio tells me that it will still have to be read and verified every 2 years).
**Reference: **Goldman, Bertone, Chen, Dessimoz, LeProust, Sipos & Birney. 2013. Towards practical, high-capacity, low-maintenance information storage in synthesized DNA. Nature http://dx.doi.org/10.1038/nature11875
| true | true | true |
When Nick Goldman first opened the package, he couldn’t quite believe that it contained anything at all, much less all of Shakespeare’s sonnets. The parcel had come from a facility in the US and arrived at the European Bioinformatics Institute in the UK, in March 2012. It contained a series of small plastic vials, at […]
|
2024-10-12 00:00:00
|
2013-01-23 00:00:00
|
article
|
nationalgeographic.com
|
nationalgeographic.com
| null | null |
|
27,672,672 |
https://en.wikipedia.org/wiki/Lists_of_disasters
|
Lists of disasters - Wikipedia
| null |
# Lists of disasters
The following are **lists of disasters**.
## Natural disasters
[edit]A natural disaster is the highly harmful impact on a society or community following a natural hazard event. These lists are lists of natural disasters:
- List of avalanches
- List of blizzards
- List of derecho events
- List of droughts
- Lists of earthquakes
- List of fires
- List of floods
- List of heat waves
- List of ice storms
- List of landslides
- List of natural disasters by death toll
- List of solar storms
- Lists of tornadoes and tornado outbreaks
- Lists of retired tropical cyclone names
- List of historical tropical cyclone names
## Disasters caused by accidental human action
[edit]These are lists of disasters caused by accidental human action.
### Transport
[edit]- List of aviation accidents and incidents
- List of elevator accidents
- List of maritime disasters
- List of rail accidents
- List of road accidents
- List of spaceflight-related accidents and incidents
### Industrial
[edit]- List of industrial disasters
- List of natural gas and oil production accidents in the United States
- List of structural failures and collapses
- List of explosions
- List of major power outages
- List of mining disasters
- Nuclear and radiation accidents
- List of oil spills
### Health
[edit]- List of famines
- List of food contamination incidents
- List of epidemics and pandemics
- List of mass evacuations
- List of medicine contamination incidents
- List of methanol poisoning incidents
## Disasters caused by deliberate human action
[edit]These are lists of disasters caused by deliberate human action or public endangerment or culpable negligence.
- List of amusement park accidents
- List of economic crises
- List of environmental disasters
- List of explosions
- List of fires
- List of fireworks accidents and incidents
- List of man-made mass poisoning incidents
- List of orphan radioactive source incidents
- List of crushes (almost all caused by failures of management)
- List of military disasters
- List of riots
- List of terrorist incidents
- List of wars
- List of anthropogenic disasters by death toll
## By location
[edit]- List of disasters in Antarctica
- List of disasters in Australia
- List of disasters in Canada
- List of disasters in Croatia
- List of disasters in Great Britain and Ireland
- List of disasters in Haiti
- Lists of disasters in Indonesia
- List of disasters in New Zealand
- List of disasters in Pakistan
- List of disasters in the Philippines
- List of disasters in Poland
- List of disasters in South Korea
- List of disasters in Thailand
- List of disasters in the United States
| true | true | true | null |
2024-10-12 00:00:00
|
2008-01-06 00:00:00
|
website
|
wikipedia.org
|
Wikimedia Foundation, Inc.
| null | null |
|
13,100,770 |
http://www.atomico.com/state-of-european-tech/2016
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
342,812 |
http://www.ft.com/cms/s/0/f1f94dca-a1e9-11dd-a32f-000077b07658.html
|
Senior Republicans endorse Obama
| null |
Senior Republicans endorse Obama
| true | true | true | null |
2024-10-12 00:00:00
|
2024-01-01 00:00:00
| null |
website
| null |
Financial Times
| null | null |
36,863,177 |
https://web.archive.org/web/20230725143403/https://www.nytimes.com/2023/07/25/opinion/karp-palantir-artificial-intelligence.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
18,166,266 |
https://medium.com/@dd_labs/openanalytica-free-and-open-source-ad-targeting-for-campaigns-eaa2d60aaa2e
|
OpenAnalytica — Free and Open Source Ad Targeting for Campaigns
|
Digital Director
|
# OpenAnalytica — Free and Open Source Ad Targeting for Campaigns
*Slice the voter file into important demographic groups for easy upload and accurate targeting on social media platforms like Facebook, YouTube, and Twitter.*
## Why We Built It
Digital Directors and the political campaigns they run face a unique marketing challenge unlike most businesses — they must reach a wide range of voter demographics across a wide range of issues in a short amount of time. Advertising methods used in industry are increasingly being used in political campaigns with significant impact. Yet, most campaigns from local school board elections to U.S. Senate races are not using best practices in digital advertising.
That’s why we built a tool for highly motivated campaign *digital directors* who understand the importance of data driven campaign messaging but may lack the experience or funds to implement best practices.
## How It Works
Before you get started, the tool will require you or a friend to be able to **a)** run a Python script and **b)** work with advanced social media targeting settings like the Facebook Ad Manager.
OpenAnalytica is a Jupyter notebook with associated Python (& Pandas) scripts for taking a free NationBuilder voter file and generating voter demographic segments as CSV files for use in ad targeting on various social media platforms such as Facebook, YouTube, and Twitter.
The tool provides:
- Data quality analysis and data cleaning of the voter file
- Data visualizations across various voter demographics and elections
- Segmented CSV files grouped by voter age, gender, and party affiliation
- Tutorial for using the newly generated CSV files for Facebook ad targeting
In summary, this tool should automate a large chunk of the workflow for those looking to run targeted political ads. *Digital Directors, Campaign Managers, Social Media Consultants, and even Volunteers can use this tool to reach real people who vote*.
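As a rough illustration of the kind of segmentation such a script performs, a minimal pandas sketch might look like this (the column names and age bands are hypothetical; real NationBuilder exports differ):

    import pandas as pd

    # Hypothetical columns; adapt to your actual voter-file export.
    voters = pd.read_csv("voter_file.csv")
    voters["age_band"] = pd.cut(voters["age"],
                                bins=[18, 35, 50, 65, 120],
                                labels=["18-34", "35-49", "50-64", "65+"])

    # Write one CSV per (age band, gender, party) segment for ad uploads.
    for (band, gender, party), seg in voters.groupby(["age_band", "gender", "party"]):
        if len(seg):
            seg.to_csv(f"segment_{band}_{gender}_{party}.csv", index=False)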
## Stay In Touch
We hope others find this work useful and encourage those looking to collaborate to email us at [email protected] or fork the repo. You can also follow the project on Twitter @dd_labs where we’ll share additional tools, tips, and tricks in the final weeks before the U.S. midterm elections and long term plans for the project.
-DD
| true | true | true |
Slice the voter file into important demographic groups for easy upload and accurate targeting on social media platforms like Facebook…
|
2024-10-12 00:00:00
|
2018-10-08 00:00:00
|
article
|
medium.com
|
Medium
| null | null |
|
25,556,502 |
https://psankar.blogspot.com/2020/12/repairability.html
|
Repairability
| null | null | true | true | false |
My Macbook Pro I have a Macbook pro retina 15 inch, that I bought in 2016. A few days back, the battery started bulking up and the laptop ha...
|
2024-10-12 00:00:00
|
2020-12-01 00:00:00
| null | null |
blogspot.com
|
psankar.blogspot.com
| null | null |
5,393,334 |
http://m.smh.com.au/environment/animals/extinct-frog-hops-back-into-the-gene-pool-20130315-2g68x.html
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
9,184,371 |
http://www.speakingtree.in/spiritual-slideshow/seekers/mysticism/how-did-rama-die-did-he-live-long-after-sita/256510
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
13,275,239 |
https://signalvnoise.com/moonlighting-managers-aint-got-no-time-for-bullshit-3645882c7137#.2iybkvffv
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
15,195,320 |
http://www.reuters.com/investigates/special-report/usa-taser-legal/
| null | null | null | false | false | false | null | null | null | null | null | null | null | null | null |
28,548,966 |
https://techcrunch.com/2021/09/16/andreessen-horowitz-a16z-first-india-investment/
|
A16z in talks to back CoinSwitch Kuber in first India investment | TechCrunch
|
Manish Singh
|
A16z is inching closer to making its first investment in a startup in India, the world’s second-largest internet market that has produced over two dozen unicorns this year.
The Menlo Park-headquartered firm is in final stages of conversations to invest in Indian crypto-trading startup CoinSwitch Kuber, three sources familiar with the matter told TechCrunch. The proposed deal values the Bangalore-based firm at $1.9 billion, two sources said. Coinbase is also investing in the new round, one of the sources said.
CoinSwitch Kuber was valued at over $500 million in a round in April this year when it raised $25 million from Tiger Global. If the deal with a16z materializes, it will be CoinSwitch Kuber’s third financing round this year.
TechCrunch reported last week that CoinSwitch Kuber was in talks to raise its Series C funding at up to a $2 billion valuation. The report, which didn’t identify a lead investor, noted that the Indian startup had engaged with Andreessen Horowitz and Coinbase in recent weeks.
Usual caveats apply: Terms of the proposed deal may change or the talks may not result in a deal. The author reported some details about the deal on Wednesday.
The startup declined to comment. Coinbase and a16z as well as existing investors Tiger Global and Sequoia Capital India did not respond to requests for comment.
The investment talks come at a time when CoinSwitch Kuber has more than doubled its user base in recent months — even as local authorities push back against crypto assets. Its eponymous app had over 10 million users in India last month, up from about 4 million in April this year, the startup said in a newspaper advertisement over the weekend.
A handful of crypto startups in India have demonstrated fast-paced growth in recent years — while impressively keeping their CAC very low — as millions of millennials in the South Asian nation kickstart their investment journeys. Several funds including those with big presence in India such as Accel, Lightspeed, WEH and Kalaari recently began working on their thesis to back crypto startups, TechCrunch reported earlier.
B Capital backed CoinDCX, a rival of CoinSwitch Kuber that has amassed 3.5 million users, last month in a $90 million round that valued CoinDCX at about $1.1 billion.
Policymakers in India have been debating on the status of digital currencies in the South Asian market for several years. India’s central bank, Reserve Bank of India, has expressed concerns about private virtual currencies though it is planning to run trial programs of its first digital currency as soon as December.
About 27 Indian startups have become a unicorn this year, up from 11 last year, as several high-profile investors — and global peers of Andreessen Horowitz — such as Tiger Global and Coatue have increased the pace of their investments in the South Asian market. Apna announced earlier on Thursday that it had raised $100 million in a round led by Tiger Global at $1.1 billion valuation, becoming the youngest Indian firm to attain the unicorn status.
Groww, an investment app for millennials, is in talks to raise a new financing round that would value it at $3 billion, TechCrunch reported on Wednesday. The startup has engaged with Coatue in recent days, the report said.
Andreessen Horowitz triples down on blockchain startups with massive $2.2 billion Crypto Fund III
| true | true | true |
A16z is inching closer to making its first investment in a startup in India, the world's second-largest internet market that has produced over two dozen
|
2024-10-12 00:00:00
|
2021-09-16 00:00:00
|
article
|
techcrunch.com
|
TechCrunch
| null | null |
|
22,339,260 |
https://twitter.com/id_aa_carmack/status/1228824996228800513
|
x.com
| null | null | true | true | false | null |
2024-10-12 00:00:00
| null | null | null | null |
X (formerly Twitter)
| null | null |