| Column | Type | Values |
| --- | --- | --- |
| id | int64 | 3 to 41.8M |
| url | stringlengths | 1 to 1.84k |
| title | stringlengths | 1 to 9.99k |
| author | stringlengths | 1 to 10k |
| markdown | stringlengths | 1 to 4.36M |
| downloaded | bool | 2 classes |
| meta_extracted | bool | 2 classes |
| parsed | bool | 2 classes |
| description | stringlengths | 1 to 10k |
| filedate | stringclasses | 2 values |
| date | stringlengths | 9 to 19 |
| image | stringlengths | 1 to 10k |
| pagetype | stringclasses | 365 values |
| hostname | stringlengths | 4 to 84 |
| sitename | stringlengths | 1 to 1.6k |
| tags | stringclasses | 0 values |
| categories | stringclasses | 0 values |
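For reference, here is a minimal sketch of one record with this schema as a Python dataclass. The field names and types are taken from the columns above; treating the nullable string columns as `Optional` is an assumption based on the `null` values visible in the rows below.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    """One scraped-page record, mirroring the schema above."""
    id: int                        # int64, observed range 3 to 41.8M
    url: str
    title: Optional[str]           # null when the page was not parsed
    author: Optional[str]
    markdown: Optional[str]        # extracted page body, up to ~4.36M chars
    downloaded: bool
    meta_extracted: bool
    parsed: bool
    description: Optional[str]
    filedate: str                  # e.g. "2024-10-12 00:00:00"
    date: Optional[str]            # publication date, e.g. "2021-12-30 00:00:00"
    image: Optional[str]           # og:image URL
    pagetype: Optional[str]        # e.g. "article", "website", "object"
    hostname: Optional[str]
    sitename: Optional[str]
    tags: Optional[str]            # empty in this dump (0 classes)
    categories: Optional[str]      # empty in this dump (0 classes)
```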
36,437,768
https://github.com/vateseif/lmpc-py/tree/main/src/gpt/pid_tuning
lmpc-py/src/gpt/pid_tuning at main · vateseif/lmpc-py
Vateseif
We read every piece of feedback, and take your input very seriously. To see all available qualifiers, see our documentation.
true
true
true
Contribute to vateseif/lmpc-py development by creating an account on GitHub.
2024-10-12 00:00:00
2023-04-15 00:00:00
https://opengraph.githubassets.com/c877e68b7f7014247908078c161087c7bd875e0a1a2ef3e09a9d868f09f00798/vateseif/lmpc-py
object
github.com
GitHub
null
null
25,456,068
https://newsroom.ibm.com/Homomorphic-Encryption-Services
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,710,816
http://www.siliconbeat.com/2012/03/15/hiring-of-kevin-rose-by-google-sends-all-the-wrong-signals-to-silicon-valley/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
29,743,111
https://www.beginningwithi.com/2021/12/30/problematic-employers-in-tech/
Problematic employers in tech
Deirdre Straughan
Once upon a time, a company that many – especially those who worked there – felt to be nice, good, and generally on the right side of tech history was acquired by a company that many – including some who worked there – felt to be evil, rapacious, soulless, and in other ways reflective of its founder. Yes, I’m talking about the acquisition of Sun Microsystems by Oracle. Many stories could be told about this acquisition, but for the purposes of this piece I will focus on the schism between those who left Oracle immediately, and those who did not. A bunch of us – including one Very Opinionated Engineer (we’ll call him VOE) – left Oracle with varying degrees of haste, and ended up working together at a small cloud company. VOE was a wellspring of vituperative reminiscences about the good old days at Sun, and particularly the people he had despised there. The cadre of people he despised quickly grew to include anyone who remained at Oracle. VOE would expound at length about how the remainers were cowards happy to suck the Oracle teat, or were too incompetent to get jobs elsewhere, or or or… They were certainly lesser people than he was, in his opinion. He was happy to share those thoughts in public, and frequently did. I was aware of the human factors behind some folks’ remaining at Oracle. Some had green card applications in flight, involving both themselves and their families. Getting a green card in the US is notoriously a very long process. For most types of green card, if you change employer, you have to start all over again, often after a one-year waiting period at your new company – they want to be sure you’ll stick before they make that investment of attention and money. It’s usually better to grit your teeth and stay with an employer you don’t like, however long it takes, until the green card is issued. Some people had health situations. Until the ACA was passed in March 2010 (a bit after the Oracle acquisition was formally completed), it was possible for a health insurer to turn you down for a pre-existing health condition. If you suffered from a complicated medical issue, it had long been very risky to change jobs. I don’t remember now how long it took for the “no more exclusions for pre-existing conditions” clause of the ACA to take effect, but, had I been in that situation at the time, I would have been very cautious about a job change. Many people stayed at Oracle out of dedication to and entrenchment in the technologies they had been working on for years at Sun. Some outside of Oracle were equally dedicated to Solaris: we were trying to keep it alive and available via an open source fork called illumos. (This effort was perhaps doomed from the start, but at least it gave me a good case study in what *not* to do in open source.) There were undoubtedly other personal situations I wasn’t aware of that kept some ex-Sun folks at Oracle, and some actually liked Oracle well enough to stay, even for years. Bad companies are not uniformly bad – you may find yourself in a relatively healthy pocket of an otherwise toxic company. (The reverse can also be true.) VOE was correct that some people remained because they weren’t likely to get a better offer elsewhere. As a young, straight, white, cis male, it was easy for him to assume that those folks were incompetent: he had never been handicapped by the sexism, racism, ageism, homophobia, and other forms of discrimination that were and still are endemic in tech. Whatever employment choices people make are personal, and I don’t question them. 
For example, I do not question my friends who work for Facebook/Meta or other “problematic” companies. After all, I worked for Amazon, a company that many are coming to abhor (even as many still love it). I had many reasons for taking that job and staying in it, but the biggest was sheer survival – like many underrepresented folks in tech, I have not had a wide choice in the jobs I took to provide for myself and my family. I tell you all this to make a point, in my own long-winded way: people choose to work for companies for many reasons, reasons that an outside observer may not be aware of (nor is it any of their damned business anyway). Piling on anyone, particularly a non-young-CIS-straight-white-male person, about their choice of employer is… not helpful. Not everyone has the same scope of choices that you do, and even when they do, they may make different choices than you would. When you attack an individual for taking a job, you may be punching down at some of the most vulnerable people in tech, who don’t necessarily have the luxury of choice that you do. You are only demonstrating your own privilege to those who do not share it. Instead, address your concerns to the companies and the leadership that made those companies problematic in the first place. That might make tech a healthier place for all, where none of us have to make invidious choices just to stay employed. Well said! Love your insights and your writing We are skilled workers, high up in the economy of an incredibly wealthy country. While I understand that there are many, many ways to be absolutely compelled to stay at a bad employer, I don’t think that means that everyone gets a free pass. Everyone works because they are forced to, but the choices we make in where we work matter, and the things we do in exchange for the money we need matter. Tech workers can inflict some truly impressive suffering across wide swathes of people, and I think that we are responsible for that. Working to enable Amazon’s continued immiseration of its workers is unconscionable. How many people are you willing to step on for your own good? I personally have arrived at the answer of “some”, and I think it’s reasonable and good to ask hard questions of those who have arrived at the answer of “as many as I need to”. As a straight white male (without CIS degree), with the aforementioned proclivity for assuming the world has shared my privilege – you make an excellent point. Additionally, and as a parent, I have come to realize that the job of management is hard, and that fostering dialogue and empowering the less privileged, provides the only path toward equality and equity within a system. I agree that that is a very reasonable question to ask. There were many other points I could have made in this piece, including cases where one person getting a FAANG job leads to a whole extended family finding a path out of poverty. Not many of us are willing to pass up that opportunity, even when we know it will cause suffering elsewhere.
true
true
true
null
2024-10-12 00:00:00
2021-12-30 00:00:00
null
null
beginningwithi.com
beginningwithi.com
null
null
39,141,462
https://github.com/bieganski/asstrace
GitHub - bieganski/asstrace: A stateful strace-like - Linux syscall tampering-first strace-like tool.
Bieganski
`asstrace` stands for **a** **s**tateful **strace**-like: a Linux syscall tampering-first, `strace`-like tool. As opposed to `strace`, `asstrace` alters binary behavior by acting as a "man in the middle" between the binary and the operating system. If your goal is to understand why some black-box binary is not working as expected, then `strace` with all its advanced features is the way to go. `asstrace` is designed to **provide a convenient way of altering binary behavior and sharing it with other people**. It doesn't change the binary itself, but allows for manipulating the behavior of system calls that the binary executes. `asstrace` is designed to work with `Linux`. Currently `x86` and `RISC-V` are supported.

- A legacy executable whose source code is not available no longer works on modern workstations, as it assumes the presence of some special files (sockets, character device special files, etc.). We can intercept all system calls touching that particular device and provide our own implementation that emulates the device (all the emulation is in user mode).
- A black-box executable does not work because an IP address and port are hardcoded inside the binary and are no longer accessible, as the service moved to a different server. We can intercept network system calls that try to access the non-existing address, and change it so that the new address is used.
- A black-box executable does some computation, and as a result it creates a single output file. During computation it creates lots of meaningful temporary files, but unfortunately it deletes them all before the output is produced. Using `asstrace` we can intercept all `unlink` system calls and cause them to do nothing. This way no temporary files get removed! [go to example]

In this example we run `gcc`, but prevent it from deleting temporary files. The command used: `echo "int main();" | ./asstrace.py -q -ex 'unlink:nop:msg=prevented {path} from deletion' -- gcc -o a.out -x c -c -`

Often, in order to get some functionality, we need to hook more than a single syscall. For this purpose `asstrace` defines the concept of groups, available via the `-g` CLI param. Here we use `pathsubst`, which hooks `open`, `openat`, `faccessat2` and `statx`. The command used is `./asstrace.py -qq -g 'pathsubst:old=zeros,new=abc' -- cat zeros`

In this example we manipulate the `ls -1` command, so that for each regular file that it prints it will include metadata: the number of lines. The command used: `./asstrace.py -qq -x examples/count_lines.py ls -1`

The code of the `write` syscall hook in the `count_lines` example is slightly more complicated, and thus not suitable for `-ex` as previously. Instead we have a Python file that can use the `API` functionality:

```
# examples/count_lines.py
from pathlib import Path
from asstrace import API

# defining a function called asstrace_X will make a hook for the syscall named 'X'.
# the hook will be executed before each entry to 'X'.
def asstrace_write(fd, buf, num, *_):
    if fd != 1:  # not interesting to us - we care about stdout only.
        API.invoke_syscall_anyway()  # jump to 'write' with default params
        return
    path = Path(API.ptrace_read_mem(buf, num)[:-1].decode("ascii"))  # strip '\n' and decode from bytes
    if not path.is_file():  # probably a directory - follow default execution path
        API.invoke_syscall_anyway()
        return
    try:
        num_lines = len(path.read_text().splitlines())
    except UnicodeDecodeError:  # raw-bytes file - number of lines doesn't make sense for it.
        API.invoke_syscall_anyway()
        return
    # if we are here, it means that our file is regular, UTF-8, and has 'num_lines' lines.
    # print it to stdout instead of the default 'buf'.
    res_str = f"{path}({num_lines})\n"
    print(res_str, end="")
    # the 'ls -1' program will think that it has written 'len(res_str)' characters,
    # as the 'write' syscall returns the number of characters really written (see 'man write').
    return len(res_str)
```

```
-ex 'open,openat:delay:time=0.5'      - invoke each 'open' and 'openat' syscall as usual, but sleep for 0.5s before each invocation
-ex 'unlink:nop'                      - 'unlink' syscall will not have any effect. value '0' will be returned to userspace.
-ex 'mmap:nop:ret=-1'                 - 'mmap' syscall will not have any effect. value '-1' will be returned to userspace (fault injection; see 'man mmap').
-ex 'open:nop:ret=-1' -ex read:detach - fail each open, detach on first read
```

When invoked without the `-q` or `-qq` params, `asstrace.py` will print all executed syscalls to stderr, in a similar manner as `strace` does (but without fancy beautifying):

```
m.bieganski@test:~/github/asstrace$ ./asstrace.py ls
openat(0xffffff9c, 0x7f4883e8d660, 0x80000, 0x0, 0x80000, 0x7f4883e8d660) = 0x3
read(0x3, 0x7ffd70b6e9b8, 0x340, 0x0, 0x80000, 0x7f4883e8d660) = 0x340
pread64(0x3, 0x7ffd70b6e5c0, 0x310, 0x40, 0x7ffd70b6e5c0, 0x7f4883e8d660) = 0x310
pread64(0x3, 0x7ffd70b6e580, 0x30, 0x350, 0x7ffd70b6e5c0, 0x0) = 0x30
pread64(0x3, 0x7ffd70b6e530, 0x44, 0x380, 0x7ffd70b6e5c0, 0x0) = 0x44
newfstatat(0x3, 0x7f4883ebdee9, 0x7ffd70b6e850, 0x1000, 0x7f4883e8d660, 0x7f4883eca2e0) = 0x0
pread64(0x3, 0x7ffd70b6e490, 0x310, 0x40, 0xc0ff, 0x7f4883e8db08) = 0x310
mmap(0x0, 0x228e50, 0x1, 0x802, 0x3, 0x0) = 0x7f4883c00000
mprotect(0x7f4883c28000, 0x1ee000, 0x0, 0x802, 0x3, 0x0) = 0x0
...
```

See the user guide for more details.

- MIT license
- to make `asstrace` run on your Linux only a single file is needed (`asstrace.py`)*
- no external Python dependencies - no need for `requirements.txt` etc.
- no native code - only the CPython interpreter is required
- cross platform - adding a new target is as simple as defining the CPU ABI:

```
CPU_Arch.riscv64: CPU_ABI(
    user_regs_struct_type=riscv64_user_regs_struct,
    syscall_args_registers_ordered=[f"a{i}" for i in range(6)],
    syscall_number="a7",
    syscall_ret_val="a0",
    syscall_ret_addr="ra",
    pc="pc",
)
```

- the `*` gotcha is that it additionally needs `syscall_names.csv`. It either finds it locally (which will work if you obtained `asstrace` via `git clone`) or downloads it directly from GitHub (the URL is hardcoded in `asstrace.py`).
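As a further illustration of the Python hook API, here is a minimal sketch (hypothetical file name `examples/skip_unlink.py`) that does what the `-ex 'unlink:nop:msg=...'` one-liner above does, but from a hook. It relies only on `API.ptrace_read_mem` and the return-value convention shown in `count_lines`; the fixed 256-byte read is an assumption of this sketch, so treat it as unverified against the current `asstrace` API.

```python
# examples/skip_unlink.py (hypothetical) - prevent a traced program from deleting files
from asstrace import API

# asstrace_unlink is called before each entry to the 'unlink' syscall.
def asstrace_unlink(path_ptr, *_):
    # Read a chunk of tracee memory at the path pointer and cut at the first NUL.
    # 256 bytes is an arbitrary bound chosen for this sketch.
    raw = API.ptrace_read_mem(path_ptr, 256)
    path = raw.split(b"\x00", 1)[0].decode(errors="replace")
    print(f"prevented {path} from deletion")
    # Not calling API.invoke_syscall_anyway() skips the real syscall;
    # the returned value (0 = success) is what the traced program sees.
    return 0
```

Under the same assumptions, it would be invoked as `./asstrace.py -qq -x examples/skip_unlink.py rm somefile` (file name illustrative).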
true
true
true
A stateful strace-like - Linux syscall tampering-first strace-like tool. - bieganski/asstrace
2024-10-12 00:00:00
2024-01-12 00:00:00
https://opengraph.githubassets.com/b7e0f318d54e60eebaad330f03e56b553ef3e0947da051da62e573803d78324f/bieganski/asstrace
object
github.com
GitHub
null
null
5,833,878
http://insidescoopsf.sfgate.com/blog/2013/06/04/questioning-the-reviews-on-opentable/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,948,083
http://www.vox.com/2015/7/24/9035803/fossil-fuel-companies-cost-of-carbon
Fossil fuel companies impose more in climate costs than they make in profits
David Roberts
It is fairly well understood by now that releasing carbon dioxide and other greenhouse gases into the atmosphere imposes an economic cost, in the form of climate change impacts. In most cases, however, those responsible for carbon emissions are not required to pay that cost. Instead, it’s borne mainly by the world’s poor and low-lying countries, and of course by future generations, as many of the worst impacts of climate change will emerge years after the emissions that drive them. # Fossil fuel companies impose more in climate costs than they make in profits People sometimes refer to the unpaid cost of carbon pollution as a subsidy, or an “implicit subsidy,” to polluting businesses. The IMF recently issued a report saying that total worldwide subsidies to energy, mainly fossil fuel energy, amounted to *$5.2 trillion a year*. The reason that number is so high is that the IMF includes implicit subsidies — the social costs imposed by businesses (including climate damages) that they don’t have to pay for. Vox’s Brad Plumer raised some questions about whether that’s a misleading use of the term “subsidy.” Whatever you call it, though, it makes for an unsustainable situation, literally. It can’t go on. As climate change gets worse and the chance to avoid harsh impacts dwindles, governments are getting serious about putting some sort of price on carbon emissions, whether explicit (a tax) or implicit (regulations). Soon, a quarter of the world’s carbon emissions will be priced in some way. Businesses that now emit carbon pollution for free (or cheap) will soon see their costs rise. In other words, carbon pollution is a business risk. It’s a bubble that’s going to pop, probably soon. The Carbon Tracker Initiative has popularized a term for this looming liability: “unburnable carbon.“ ## With proper accounting, the fossil fuel business doesn’t look like such a moneymaker There’s been a lot of work recently trying to quantify carbon risk. A recent contribution to that conversation was released by Chris Hope and colleagues at the University of Cambridge Judge Business School: “Quantifying the implicit climate subsidy received by leading fossil fuel companies.“ It attempts to put a number on the carbon risk facing the world’s top 20 fossil fuel companies, the ones most directly vulnerable to a price on carbon. The results suggest that those companies are in a perilous situation. Hope took a fairly simple approach: He multiplied the carbon emissions embedded in the companies’ products by the “social cost of carbon,” i.e., the net economic, health, and environmental cost of a ton of carbon dioxide. He ran the calculation for data from 2008 to 2012 and took the results as a rough proxy for the level of carbon risk facing each company. (See the technical addendum below for more details on this calculation.) The results are pretty startling. To wit: “**For all companies and all years, the economic cost to society of their CO2 emissions was greater than their after‐tax profit**, with the single exception of Exxon Mobil in 2008” (my emphasis). In other words, if these fossil fuel companies had to pay the full cost of the carbon emissions produced by their products, none of them would be profitable. 
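To make the arithmetic concrete, here is a minimal Python sketch of the Hope-style comparison with hypothetical company figures; only the $122-per-tonne social cost of CO2 for 2012 is taken from the study (see the technical addendum below), everything else is made up for illustration.

```python
# Sketch of the implicit-subsidy comparison described above.
# Only the 2012 social cost of CO2 comes from the study quoted in the
# technical addendum; the company figures are hypothetical.
SCC_2012 = 122.0              # USD per tonne of CO2 in 2012 (per the study)

embedded_co2_tonnes = 400e6   # hypothetical: 400 Mt of CO2 embedded in products sold
after_tax_profit = 30e9       # hypothetical: $30B after-tax profit

climate_cost = embedded_co2_tonnes * SCC_2012   # $48.8B of unpaid social cost

print(f"climate cost: ${climate_cost / 1e9:.1f}B, after-tax profit: ${after_tax_profit / 1e9:.1f}B")
print("implicit subsidy exceeds profit" if climate_cost > after_tax_profit else "profit exceeds climate cost")
```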
It’s even worse for pure coal companies, for which “**the economic cost to society exceeds total revenue in all years**, with this cost varying between nearly $2 and nearly $9 per $1 of revenue.” Total revenue, Hope and colleagues note, represents “employment, taxes, supply purchases, and indirect employment” — everything that coal companies contribute to the economy. It turns out the costs they impose through carbon emissions are larger than all those contributions combined. (For oil and gas companies, carbon costs generally range from 10 to 50 percent of total revenue.) This is a somewhat idealized exercise, obviously. Fossil fuel companies are unlikely to bear the entire cost of carbon, if and when it is imposed. The cost of carbon itself is highly uncertain (see technical addendum) and theoretically varies based on geography and income. Nonetheless, this kind of calculation is helpful in indicating the comparative level of risk among fossil fuel companies (see the paper for a ranking) and the materiality of that risk. It shows that the carbon bubble is very large indeed. It’s also a good reminder that we are, in carbon terms, eating the seed corn, using up resources that only appear cheap because we’re shifting the costs to poor and future people, who don’t have the political power to stop us. It is grossly irresponsible. Hope’s results depend entirely on his estimate of the social cost of carbon dioxide (SCCO2), which he pegs at “$105 per tonne of CO2 in 2008.” Here’s a note from the study about how that figure was chosen: Several estimates of the SCCO2 have been made over the last decade or so. The US Environmental Protection Agency (EPA) uses a central value of $39 per tonne of CO2 (in $2011) at a 3% discount rate. A recent study which tried to include the effects of climate change on economic growth as well as consumption estimated a value of $220 per tonne of CO2 in 2015. Here we use the mean estimate from business‐as‐usual emissions in the default PAGE09 model, one of the three models used by the EPA, of $105 per tonne of CO2 in 2008. The SCCO2 increases in real terms as the world gets richer, and as the emission date gets closer to the time at which the most severe impacts of climate change are expected to occur. We assume it rises at 2.3% per year in real terms to $122 per tonne in 2012. As the wide range of possible values shows, calculating the SCCO2 is a fraught undertaking. It not only involves estimating the timing and severity of climate impacts, which are notoriously uncertain, but it also means choosing a discount rate, which determines how much you discount future harms relative to present harms. Think of it as a negative interest rate. (I once wrote a long, otter-filled post about discount rates and their role in climate economics, if you have an hour to spare and want to know more.) If you choose a high discount rate — say, 5 percent — you’re saying that the value of harms falls quickly as they move into the future. It’s worth very little to you to prevent damages in, say, 2100. If you choose a 0 percent discount rate, you’re saying future damages are worth exactly as much as damages today; it’s worth spending $1 today to prevent $1 of damage in 2100. Which discount rate you choose completely shapes the results of your climate economic model. A high discount rate justifies only a modest carbon price, while a low discount rate justifies rapid, substantial action to reduce emissions. What is the correct discount rate? 
There’s much debate over that, but the answer, in short, is that there isn’t one. It’s a matter of values and risk tolerance, which are inevitably somewhat subjective, shaped by socioeconomic circumstance. If discount rates depend on values, and the cost of climate change mitigation depends on discount rates, then the cost of climate change mitigation depends on values — there is no “objective” measurement of the cost of climate action. Put more bluntly: We can’t know how much it will cost to tackle climate change, not in advance, not with any confidence. For all our faux-precise economic modeling, we’re acting, as humans always do, on some mix of educated guesses, fears, hopes, and instincts. Nobody wants to hear that, though, so I’m sticking it here at the end of the technical addendum.
true
true
true
Vox is a general interest news site for the 21st century. Its mission: to help everyone understand our complicated world, so that we can all help shape it. In text, video and audio, our reporters explain politics, policy, world affairs, technology, culture, science, the climate crisis, money, health and everything else that matters. Our goal is to ensure that everyone, regardless of income or status, can access accurate information that empowers them.
2024-10-12 00:00:00
2015-07-24 00:00:00
https://platform.vox.com…,69.808027923211
article
vox.com
Vox
null
null
19,089,718
https://www.linuxjournal.com/content/what-really-ircs-me-slack
Search
Kyle Rankin
# What Really IRCs Me: Slack *Find out how to reconnect to Slack over IRC using a Bitlbee libpurple plugin.* I'm an IRC kind of guy. I appreciate the simplicity of pure text chat, emoticons instead of emojis, and the vast array of IRC clients and servers to choose from, including the option to host your own. All of my interactive communication happens over IRC either through native IRC channels (like #linuxjournal on Freenode) or using a local instance of Bitlbee to act as an IRC gateway to other chat protocols. Because my IRC client supports connecting to multiple networks at the same time, I've been able to manage all of my personal chat, group chat and work chat from a single window that I can connect to from any of my computers. Before I upgraded to IRC, my very first chat experience was in the late 1990s on a web-based Java chat applet, and although I hold some nostalgia for web-based chat because I met my wife on that network, chatting via a web browser just seems like a slow and painful way to send text across the internet. Also, shortly after we met, the maintainers of that network decided to shut down the whole thing, and since it was a proprietary network with proprietary servers and clients, when they shut it down, all those chat rooms and groups were lost. What's old is new again. Instead of Java, we have JavaScript, and kids these days like to treat their web browsers like Emacs, and so every application has to run as a web app. This leads to the latest trend in chat: Slack. I say the *latest* trend, because it wasn't very long ago that Hipchat was hip, and before that, even Yammer had a brief day in the sun. In the past, a software project might set up a channel on one of the many public or private IRC servers, but nowadays, everyone seems to want to consolidate their projects under Slack's infrastructure. This means if you joined a company or a software project that started during the past few years, more likely than not, you'll need to use Slack. I'm part of a few Slack networks, and up until recently, I honestly didn't think all that much about Slack, because unlike some other proprietary chat networks, Slack had the sense to offer IRC and XMPP gateways. This meant that you weren't required to use its heavy web app, but instead, you could use whatever client you preferred yet still connect to Slack networks. Sure, my text-based IRC client didn't show animated Giphy images or the 20 party-parrot gifs in a row, but to me, that was a feature. Unfortunately, Slack could no longer justify the engineering effort to backport web chat features to IRC and XMPP, so the company announced it was shutting down its IRC and XMPP gateways. When Slack first announced it was shutting down the IRC gateway, I wasn't sure what I would do. I knew that I wouldn't use the web app, so I figured if an alternative didn't come around, I'd just forget about the Slack networks I was a part of, just like when that old Java chat shut down. Fortunately, the FLOSS community saved the day, and someone wrote a plugin that uses the libpurple library (a kind of Rosetta stone plugin framework for chat used by programs like Pidgin and Bitlbee to allow access to ICQ, MSN, Yahoo and other dead proprietary chat networks). Although using the direct IRC gateway was easier, setting this up on Bitlbee wasn't so bad. So, in this article, I describe how to do exactly that. Why Not Weechat?I know that many console-based chat fans have switched to Weechat as their IRC client, and it has a native Slack plugin. 
That's great, but I've been using Irssi for something like 15 years, so I'm not about to switch clients just for Slack's sake. Anyway, with the Bitlbee program, you can connect to Slack using your preferred IRC client whether that's Irssi, Xchat or even MIRC (no judgment). Install the Slack libpurple Plugin for BitlbeeSince the Slack Bitlbee plugin uses libpurple, the first step is to make sure you install a Bitlbee package that has libpurple built in. On Debian-based distributions, this means replacing the basic bitlbee package with bitlbee-libpurple if you don't already have it installed. This package should set up a local network service listening on the IRC port automatically. I cover how to use Bitlbee in detail in my past article "What Really IRCs Me: Instant Messaging", so I recommend you refer to that article for more details. Once you are connected to Bitlbee, you should be able to issue a ``` help purple ``` command and get a list of existing libpurple plugins that it has installed: ``` `````` 19:23 @greenfly| help purple 19:23 @ root| BitlBee libpurple module supports the ↪following IM protocols: 19:23 @ root| 19:23 @ root| * aim (AIM) 19:23 @ root| * bonjour (Bonjour) 19:23 @ root| * gg (Gadu-Gadu) 19:23 @ root| * novell (GroupWise) 19:23 @ root| * icq (ICQ) 19:23 @ root| * irc (IRC) 19:23 @ root| * msn (MSN) 19:23 @ root| * loubserp-mxit (MXit) 19:23 @ root| * myspace (MySpaceIM) 19:23 @ root| * simple (SIMPLE) 19:23 @ root| * meanwhile (Sametime) 19:23 @ root| * jabber (XMPP) 19:23 @ root| * yahoo (Yahoo) 19:23 @ root| * yahoojp (Yahoo JAPAN) 19:23 @ root| * zephyr (Zephyr) 19:23 @ root| ``` Note that Slack isn't yet on this list. The next step is to build and install the Slack libpurple plugin on your machine. To do this, make sure you have general build tools installed on your system (for Debian-based systems, the build-essential package takes care of this). Then install the libpurple-devel or libpurple-dev package, depending on your distro. Finally, pull down the latest version of the plugin from GitHub and build it: ``` `````` $ git clone https://github.com/dylex/slack-libpurple.git $ cd slack-libpurple $ sudo make install ``` (Note: if you don't have system-level access, you can run ``` make install-user ``` instead of `sudo make install` to install the plugin locally.) Once the install completes, you should have a new library file in /usr/lib/purple-2/libslack.so. Restart Bitlbee, and you should see a new plugin in the list: ``` `````` 19:23 @greenfly| help purple 19:23 @ root| BitlBee libpurple module supports the ↪following IM protocols: 19:23 @ root| 19:23 @ root| * aim (AIM) 19:23 @ root| * bonjour (Bonjour) 19:23 @ root| * gg (Gadu-Gadu) 19:23 @ root| * novell (GroupWise) 19:23 @ root| * icq (ICQ) 19:23 @ root| * irc (IRC) 19:23 @ root| * msn (MSN) 19:23 @ root| * loubserp-mxit (MXit) 19:23 @ root| * myspace (MySpaceIM) 19:23 @ root| * simple (SIMPLE) 19:23 @ root| * meanwhile (Sametime) 19:23 @ root| * slack (Slack) 19:23 @ root| * jabber (XMPP) 19:23 @ root| * yahoo (Yahoo) 19:23 @ root| * yahoojp (Yahoo JAPAN) 19:23 @ root| * zephyr (Zephyr) 19:23 @ root| ``` Configure Slack in Bitlbee Once you have the Slack module set up, the next step is to configure it like any other Bitlbee network. 
First, create a new Bitlbee account that corresponds to your Slack account from the Bitlbee console: ``` `````` account add slack [email protected] ``` Next, you'll need to add what Slack calls a Legacy API token, which tells me at some point Slack will deprecate this and leave us out in the cold again. To do this, make sure you are logged in to Slack in your web browser, and then visit https://api.slack.com/custom-integrations/legacy-tokens. On that page, you will have the ability to generate API tokens for any Slack networks where you are a member. Once you have the API token, go back to your Bitlbee console and set it: ``` `````` account slack set api_token xoxp-jkdfaljieowajfeiajfiawlefje account slack on ``` If this is the only Slack account you have created, it will label it as "slack", and you can refer to it that way. Otherwise, you'll need to type `account list` in the Bitlbee console and see how Bitlbee numbered your slack account, and then replace `slack` in the above commands with the number associated with that account. Unfortunately, unlike with the IRC gateway, this plugin doesn't connect you to any channels in which you are active automatically. Instead, once your Bitlbee client connects, you need to tell Bitlbee about any particular channels you want to join. You can do this with the standard Bitlbee `chat add` command. So for instance, to add and join the #general channel most Slack networks have, you would type: ``` `````` chat add slack general /join #general ``` Note that like with the other previous commands, you may need to replace `slack` with the number associated with your account if you have multiple Slack networks defined. If you want Bitlbee to rejoin a particular room automatically whenever you connect, you can type: ``` `````` channel general set auto_join true ``` Repeat this for any other channels you want to auto-join. ConclusionOkay, so maybe this article was a little bitter compared to others I've written. I can't help it. It really bothers me when companies use their control over proprietary software, networks or services to remove features upon which people depend. I've also seen so many proprietary chat networks come and go while IRC stays around, that I just wish people would stick with IRC, even if they don't get the animated smiley emoji that turns around in a circle. I'm very thankful for a solid community of developers who are willing to pore through API docs to build new third-party plugins when necessary.
true
true
true
null
2024-10-12 00:00:00
2018-07-30 00:00:00
null
null
linuxjournal.com
linuxjournal.com
null
null
4,061,085
http://daltoncaldwell.com/oh-the-places-youll-go/
Oh, the Places You'll Go! • Dalton Caldwell
null
# Oh, the Places You’ll Go! A few years ago I went through an incredibly difficult period in my life. During this difficult time, I had a newborn son, which made everything both easier and harder. As a parent, I spend a great deal of time reading my son various books, but during this dark time, there was one specific book that came to hold more and more meaning to me as I read it. That book was Dr. Seuss’ “Oh, the Places You’ll Go”. As I regularly read the book to my pre-lingual son, I began to take notice that it captured Truth about life. To be completely honest, during this difficult period, I got to the point where I had trouble reading the whole book to him without choking up. Sure, laugh if you want. Oh, the places you’ll go! There is fun to be done! There are points to be scored. There are games to be won. And the magical things you can do with that ball will make you the winningest winner of all. Fame! You’ll be famous as famous can be, with the whole wide world watching you win on TV.Except when they don’t. Because, sometimes, they won’t. I’m afraid that some times you’ll play lonely games too. Games you can’t win ‘cause you’ll play against you. I bring this up because it’s so clear to me that this humble children’s book is Great, and I want to discuss the creative process that creates Greatness. I am not sure if Dr. Seuss realized that this particular book would hold deep significance to anyone, or that generations of young people would be given this book as a graduation present. But that is exactly what happened. Understanding the backstory of how a person creates something that is Great is a topic that I am obsessed with. Whether it is music, books, art, software, athletics, you name it, how and why is it that a work that is an order of magnitude Greater than what would be predicted pops into existence? What does it feel like to *be them* in their Great moment of creation? As I have referenced on my blog before, Greatness is often banal. The creators of Greatness appear to be just as oblivious to the importance of what they are doing as everyone else is. During my tenure in the music industry, my favorite part was getting to meet people that created truly Great music. The same goes with having the privilege of knowing many of the most interesting people in the technology business. (I am going out of my way not to namedrop here, so please take my word for it.) What is fascinating to me is that Great creation stories all sound surprisingly similar. Something along the lines of “yeah we went in the studio and put down some tracks, and they sounded pretty good, and we had to redo a couple of things, and then when put out the album.” Disappointing, right? David Foster Wallace wrote an essay that touched on the topic of why locker room interviews with athletes are always so terrible and uninsightful. DFW’s thesis was that the **athletes are in fact 100% accurate at communicating what they were thinking and experiencing while taking the game winning shot**. For example, when an athlete is interviewed and says things like “well, we just went out there to play today, and we got some good momentum and powered through the other team,” it’s not that the athlete is a moron lacking the cognitive capacity to accurately explain to us what happened out on the field that day. Rather, it’s that these interviews really, truly are an accurate description of what was going on in their head during the game. *It’s our fault for expecting a compelling narrative*. 
Our expectation of divining some deep insight into their creative process is fundamentally flawed. They were just out there doing their thing, just like they always do, and it worked. The main takeaway that I have been able to synthesize from all of this data is this: Greatness always comes from someone with a finely honed craft, a craft honed to the point of muscle memory. In baseball, you can’t be thinking about which hand goes where on the bat, and how wide your stance is, and where your feet are placed if you want to hit a fastball. All of those decisions have to be muscle memory, and you *must* have a clear head that is simply thinking about “showing up to play.” Similarly, in software, you can’t be thinking about which programming language you are using, and whether you are using MongoDB or MySQL, or whether photogrid layouts are the hot new thing or not. You will never hit the proverbial fastball if that is the sort of junk filling your head. **Rather, creating and shipping products needs to be muscle memory.** You just need to have clear eyes, a full heart, and be ready to show up and play. Kid, you’ll move mountains.
true
true
true
A few years ago I went through an incredibly difficult period in my life. During this difficult time, I had a newborn son, which made everything both easier and harder. As a parent, I spend a great deal of time reading my son various books, but... | Dalton Caldwell | Partner @ Y Combinator
2024-10-12 00:00:00
2012-06-02 00:00:00
https://svbtleuserconten…0xsuap_large.png
article
daltoncaldwell.com
Dalton Caldwell on Svbtle
null
null
1,746,324
http://techcrunch.com/2010/10/01/the-ugliest-girl-at-the-dance-how-yahoo-destroyed-yelps-google-acquisition/
The Ugliest Girl At The Dance: How Yahoo Destroyed Yelp's Google Acquisition | TechCrunch
Michael Arrington
A fascinating footnote to the failed Google acquisition of Yelp last December: a Yahoo counteroffer killed the deal, say two sources with knowledge of the situation. As of December 17 Yelp was in the final stages of negotiations to sell to Google for $550 million. But just three days later the deal was off. So what happened during those three days? Yahoo came in with an offer to buy Yelp for $750 million – $200 million more than Google had offered. Yelp, via their investment bank, asked Google if they wanted to match it. Google declined, and one source says they didn’t actually believe there was a competing offer. Here’s where things got interesting. The Yelp management team apparently refused to work for Yahoo and wanted to take the Google offer. The Yelp board of directors, faced with a fiduciary duty to act in the best interests of all stockholders, couldn’t approve a Google deal when a competing deal was available at a $200 million higher price. So with the Yelp management team refusing to take the Yahoo offer, and the Yelp board of directors unable to accept the Google offer, everything froze and a deal never happened. The NY Times discovered many of these details on December 21 last year, but either didn’t know or didn’t name Yahoo as the competing buyer. And there are supposedly people at Google who still believe Yelp never actually had a competing offer at all and simply over-negotiated. Our sources, however, swear the Yahoo offer was very real. If Yahoo did make the counteroffer, the whole situation is a sad reflection on the company. Even with the Yelp management team knowing that they couldn’t take the Google offer, they still walked away from a huge sale just because they couldn’t stomach working at Yahoo. Foursquare apparently made a similar decision just a few months later, walking away from a $100 million or so Yahoo offer even though they knew Facebook would soon jump squarely into their market. The saddest part of the story is this – things have only gotten worse at Yahoo since then. There isn’t really a whole lot left to say. Stick a fork in this one – it’s done.
true
true
true
A fascinating footnote to the failed Google acquisition of Yelp last December: a Yahoo counteroffer killed the deal, say two source with knowledge of the situation. As of December 17 Yelp was in the final stages of negotiations to sell to Google for $550 million. But just three days later the deal was off. So what happened during those three days? Yahoo came in with an offer to buy Yelp for $750 million - $200 million more than Google had offered. Yelp, via their investment bank, asked Google if they wanted to match it. Google declined, and one source says they didn't actually believe that there actually was a competing offer.
2024-10-12 00:00:00
2010-10-01 00:00:00
https://techcrunch.com/w…0/deadyahoo1.jpg
article
techcrunch.com
TechCrunch
null
null
5,273,467
http://benhoskin.gs/2013/02/24/ruby-2-0-by-example
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
20,893,820
https://www.universetoday.com/143256/the-spaceline-an-elevator-from-the-earth-to-the-moon/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
901,854
http://www.cloudera.com/hadoop-training
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
39,078,044
https://www.deprocrastination.co/blog/block-out-input-free-time
Block out input-free time
null
# Block out input-free time How much time do you spend aimlessly scrolling? Looking for new and interesting information? Checking notifications, likes, or emails? For many of us, the answer is hours every day. The question is: is that time well spent? Probably not. Now, there's nothing wrong with occasionally watching a YouTube video or two in the evening (so long as it doesn't cut into your sleep schedule.) However, always seeking something *new* is not good for us. Our screens have become addictive. They are visually stimulating, offering fast feedback for our actions. *Tap* and you're rewarded with infinite potentially interesting videos or games. Worse, our screens control our behavior by controlling the choices presented to us. ## The default choice architecture When you have "nothing to do," what do you do? You probably take your phone, or open the browser and are presented with choices. Facebook, Instagram, Twitter, TikTok, email,... Here's why it's bad: those choices aren't designed to help you. When we're presented with options, we choose from them. We rarely create *our own* additional and better options. That would take effort. We'd have to think. Our actions are guided by the interfaces we're looking at every day. It's always easier to tap an icon and watch something than figure out what you need to do next. Yet, the latter is much better for us. The latter helps us make our lives better and easier in the future. ### The missing apps Here are some "apps" that you don't see when you're looking for something to do. - Tidy up my room - Take out the trash - Wash the dishes - Think about what you want to accomplish this year - Re-decorate the room - Stretch for a bit - Set a goal and put it on a calendar You don't see the above when you open your laptop or unlock your phone. That's a shame, because these actions would actually make your life a bit better, unlike doomscrolling. What you see influences what you do. If you're not reminded of productive actions by your surroundings, you'll take unproductive actions instead, against your interests, just because they are the easiest choice at the moment. When we're not using our computer to produce something or to connect with someone, our time is often better spent off-screen. Many on-screen actions are optimized to suck away our attention, not use it to improve our lives. That's one argument for less screen time. Here's another one. ## The catch-22 of digital distractions You don't feel great. Your life is not like the life of the celebrities you see on social media. You feel bad about yourself. So you escape. You watch something. You scroll. You turn off your brain and let the memes and silly GIFs take over. It takes your mind off your life. Then you get tired or you finally fall asleep, way too late, disrupting your sleep schedule. The next day, you wake up and are back at square 0. You still don't feel that great about yourself... Digital distractions are self-perpetuating misery machines. ### Zombie mode: Low effort, low value entertainment You don't feel great, so you choose to get distracted. But then you become distracted, which causes you to not feel great. After all, you haven't made any real progress. So the cycle continues. You rarely get off the Internet feeling excited and ready to take on the world. No, you feel distracted. Unsure what to do. Unfocused. That mental state is not great. And it's not useful. That's what we call Zombie mode: passively consuming online media to get cheap dopamine. Let's stop this cycle. "How?" 
you ask. ## Let your mind wander The solution is simple: get away from screens. With the arrival of information technologies, information has become less and less connected to our daily lives. In other words, less relevant. The news, social media, and much of the Internet are utterly irrelevant to your life right now. **You don't need more information, you need to do more with the information you already have.** And to do that, you need time to process it. Time to take the general lessons of others and figure out how you can implement them in your own life. Time to turn the information into action. Time to go from passive to active. ### Unprocessed thoughts = heavy burden When you've spent the whole day binging a Netflix show, you won't have time to process the fact that you're getting out of shape, or that you've let your room be a mess, or that you still have not messaged John,... There's no time left for it; you're always distracted. Always "not feeling like" doing what needs to be done. Over time, unfinished business accumulates in our mind. Promises we've made, things we meant to do, requests from others. When we don't create time to let all those things unwind in our mind and deal with them, they become a heavy burden in our mind. ### Afraid of your thoughts? When the unprocessed thoughts accumulate over long periods of time, they can turn into the fear of being alone with your own thoughts. The idea of thinking about what we need to deal with becomes so stressful that we want to avoid it as much as we can. We fill our time with podcasts, YouTube videos, Instagram scrolling, watching TikToks, and other passive activities because **we fear the moments when our mind is free of external inputs.** The only moments when we're alone are when falling asleep or in the shower, and some people fill even those moments with music or podcasts. Needless to say, this is terrible for us. ## Most digital tools aren't suited for figuring things out The problem is that our digital environment is not well suited to letting our mind wander and being intentional. **When you want to think, reflect, or figure things out, a screen is not your friend.** (The one exception may be the blank screen of a distraction-free editor.) To mull over your current circumstances and formulate what to do, go away from screens for extended periods of time. Create big chunks of time (1 hour or more) when you have "nothing to do." Create that space and then don't fill it with low-effort unproductive actions! Instead, let your mind be. Let yourself wander a bit. And then act on thoughts that are at least mildly productive. Let yourself process the accumulated unfinished business. Unpack your baggage. Here's one way to turn this into practice. ### ⏱ Block out an Input-free Hour Set a timer for 60 minutes, and go away from any screens until the timer goes off. Do anything, except for staring at screens. Don't consume any information from the outside, only from your own mind. No podcasts, no news, no social media. Tidy up, nap, go for a walk. If you want to, write down some thoughts, but don't look up external information. For 60 minutes, do not consume any more information; deal with the stuff already on your mind. Put it on your calendar, or do it on the fly by setting a time. You can try it right now. Stop reading. Set a timer for 1 hour on your phone, put your phone screen-side down and walk away from it. ## Block out input-free time in your life Let your mind wander. 
Get away from time-sucking apps and do something offline that will make your life a bit better tomorrow, instead of waking up back at square 1 every day. **You don't need more information. You need to do more with the information you already have.**
true
true
true
How much time do you spend aimlessly scrolling?
2024-10-12 00:00:00
2024-01-15 00:00:00
https://www.deprocrastin…ock_out_time.png
website
deprocrastination.co
deprocrastination.co
null
null
7,897,856
http://bytebuddy.net
Byte Buddy
null
Byte Buddy is a code generation library for creating Java classes during the runtime of a Java application and without the help of a compiler. Other than the code generation utilities that ship with the Java Class Library, Byte Buddy allows the creation of arbitrary classes and is not limited to implementing interfaces for the creation of runtime proxies.
true
true
true
null
2024-10-12 00:00:00
2014-01-01 00:00:00
null
null
null
null
null
null
39,430,062
https://www.aserto.com/blog/attributes-authorization-when-to-use
When do you need attributes in fine-grained authorization services?
null
Fine-grained authorization is the process of verifying that a subject (typically a user) has permission to perform an action on a specific resource (for example, a document). Attribute-based access control (ABAC) was invented to scale from role-based (RBAC) approaches, which suffered from scaling issues (“role explosion”). In traditional ABAC systems, this was done by matching up attributes on a user with attributes on a resource. For example, a user could have a “top secret clearance” attribute, and would then be able to access any document that had a “top secret” attribute. If a domain was easy to organize into a discrete number of attributes on both resources and users, ABAC delivered on its promise. ## Attributes suffer from scaling issues In practical scenarios, however, the ABAC approach doesn’t always scale. For example, if I want to lock down access to a *specific* file to a *specific* set of users, I would need to model a specific attribute for that file (e.g. “file-1”), and ensure that only users that are granted the same attribute have access. Used in this way, attributes are no different than roles, and subject to the same “role explosion” problem: a user needs to be assigned explicit access (through a role or an attribute) to every single resource they have access to. This is simply not a scalable model. ## Relationships can help Organizing resources and users into hierarchies can help. Defining the *relationships* that link these hierarchies as a first-class concept can prove to be a powerful tool for wrangling complexity. For example, if I can place files in a folder, assign users to a group, and assign the group a “viewer” role on the folder, I can eliminate the direct assignment of roles or attributes to users and resources, and instead model these with transitive relationships. Furthermore, I can nest groups or folders, and delegate the transitive evaluation to a purpose-built graph processor, instead of evaluating these “by hand.” This is known as relationship-based access control (ReBAC). It is the model underneath Google Drive and Google Docs, popularized by Google’s Zanzibar paper, and has proven to be intuitive to grok for users and admins alike. ## Attributes still matter But in most real-world scenarios, relationships aren’t the whole story. When should we continue to use attributes? ### Deny scenarios - user “kill switch” As we highlighted above, authorizing access to documents in a document management system is a great scenario for relationship-based access control, since users, groups, folders, and files all fit neatly within hierarchies. But what if we wanted to disable a particular user - for example, because they exhibited suspicious behavior? We don’t want to remove all of the relationships they have to groups, and then have to recreate them. Instead, we’d like to have an “override” attribute on the user (for example, “disabled”), and be able to immediately disable access to any resource, even if the user has a relationship to it. ### Access through expression evaluation Often, access control logic needs to evaluate a user against a numerical or boolean expression - such as their approval limits. For example, a claims adjuster could have authority over claims under $50,000, but any claim above this needs to go to a manager. Numerical expressions are much easier to model using attributes than relationships: the user has an approval limit attribute ($50,000), and the claim has a value ($40,000). 
The authorization logic evaluates an expression along the lines of `user.approval_limit > claim.value` . This is much easier to specify than modeling different types of relationships that depend on approval limits - for example, “can_approve_under_50k”, “can_approve_under_75k”, etc. Numerical expression evaluation scales much better than static relationship assignment, and should be used in these scenarios. ### Environmental attributes Sometimes access control logic needs to evaluate external data, such as the user’s timezone, the current date/time, or the IP address. For example, a contractor should only be able to access a document during business hours, while connected through a VPN, and in a particular geographic area. These environmental attributes are not directly related to the user or the resource - they embody other elements that may figure into authorization decisions, such as the device, location, and time. Traditionally, this is where attribute-based access control shines. Modeling this with relationships creates significant challenges, and may require inventing synthetic relationships (user to timezone, user to IP address, etc.) that need to be managed by the application, adding complexity instead of helping manage it. Environmental attributes are clearly a case where attributes are preferable to relationships. ## Combining attributes and relationships A truly flexible authorization model allows both relationship-based and attribute-based access control to be combined into a single authorization policy. Fortunately, the Topaz authorization engine is purpose-built for this! The three scenarios we described are easy to express as Topaz policies. ### User “kill switch” This policy allows access to a specific document if the user has a relationship to it that carries the “can_read” permission, AND if the user doesn’t have the “disabled” property. ``` allowed { # check if the user can_read the document ds.check({ "subject_type": "user", "subject_id": input.user.id, "relation": "can_read", "object_type": "document", "object_id": input.resource.document_id }) # also check whether the user is not disabled !input.user.properties.disabled } ``` ### Expression evaluation This policy allows a claims adjuster to approve a claim if the claims adjuster is an owner of the claim, AND the claim value is under the adjuster’s approval limit. ``` allowed { # check if the user is the owner of the claim ds.check({ "subject_type": "user", "subject_id": input.user.id, "relation": "owner", "object_type": "claim", "object_id": input.resource.claim_id }) # get the claim object claim := ds.object({ "object_type": "claim", "object_id": input.resource.claim_id }) # check claim value is under user's approval limit claim.claim_value < input.user.properties.approval_limit } ``` ### Environmental attributes This policy allows access to a specific document if the user has a relationship to it that carries the “can_read” permission, AND the access is performed on a workday. ``` allowed { # check if the user can_read the document ds.check({ "subject_type": "user", "subject_id": input.user.id, "relation": "can_read", "object_type": "document", "object_id": input.resource.document_id }) # check if access is during a workday workdays := ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"] now := time.now_ns() day := time.weekday(now) day == workdays[_] } ``` ## Try Topaz! To get started with Topaz, follow the quickstarts and explore the tutorials! 
And if you have any questions, feel free to ask them in the Topaz community slack. Happy hacking!
true
true
true
Attribute-based access management and relationship-based access control are two popular authorization models. In this post, we review the best use cases for each, and how to enjoy the benefits of both using Topaz OSS authorizer
2024-10-12 00:00:00
2024-01-17 00:00:00
https://cdn.sanity.io/im…dde-2000x663.png
website
aserto.com
When do you need attributes in fine-grained authorization services?
null
null
39,262,619
https://www.theregister.com/2024/02/01/selfhealing_microgrid_sandia/
Boffins develop a cheap way of building self-healing grids
Brandon Vigliarolo
# Affordable, self-healing power grids are closer than you think ## They're just an algorithm away, national lab engineer tells El Reg When the first commercial coal-fired electric power plants came online, starting with the Holborn Viaduct power station that supplied electricity to the City of London in January 1882, the world was changed forever. Fast forward 142 years, and the world has changed a lot. When it comes to the power grids that distribute electricity to homes and businesses, however, a lot less has changed. Sure, there have been tweaks here and there, and new forms of electricity generation have been introduced, but by and large the design is the same. Our current electrical paradigm isn't sustainable. In just 142 short years, power generation from burning fossil fuels has changed the world's climate, necessitating yet another wave of electrification – this time from clean, renewable energy sources including solar and wind. With that new energy paradigm comes the need for a new grid, and with it a host of challenges to overcome. Sometime soon, large regional power grids supplied by a few solitary fuel-burning giants will hopefully be gone. In their place will be interconnected microgrids fueled by smaller distributed power generation plants, such as wind and solar farms, Dr Michael Ropp, an electrical engineer at the Sandia National Lab over in America, told *The Register* this week. Rethinking the grid isn't simple. Grids are mostly designed with single one-way power lines feeding AC current from power plants to customers. Renewable energy sources like solar and wind typically produce direct current electricity, requiring an inverter to turn it into alternating current. All those distributed inverters spread over a whole bunch of small grids make it much easier for a grid to end up in a loop, as power flows in different directions among small, interconnected systems. Keeping a bunch of microgrids playing nice with each other – and not destabilizing due to the creation of unintentional closed loops – will be tricky, if not impossible, without a bunch of new tech. The US power system as it stands isn't designed for such decentralization. That's where Ropp and his fellow engineers at Sandia and its partner facilities come in. They've been working on methods to create the ideal self-healing power grid, and they think they've found a far more reliable way to do it than has been tried to date. In theory, the technology could be deployed almost anywhere, depending on the circumstances. ### The modern self-healing grid: Not sci-fi, but not cheap There's no need to wait for a future of electrical lines filled with self-replicating nanites for a self-healing grid to become a thing – it's not even a new concept. Development of such power-shifting systems has been a stated priority of Uncle Sam since the codification of the US Energy Storage Competitiveness Act of 2007, which was designed to spur development of a number of electrical innovations – self-healing grids among them. The US code defines a self-healing grid as one "capable of automatically anticipating and responding to power system disturbances, while optimizing the performance and service of the grid to customers." Such technology has even been deployed by power providers like Charlotte, North Carolina-based Duke Energy in several states. Duke's system is typical of existing self-healing grids.
It involves "remote sensors and monitoring, as well as advanced communication systems that deliver real-time information from thousands of points along the grid … to make real-time decisions to keep power reliable," according to its website. Self-healing grid technology, said Duke, can reduce the number of customers affected by an outage, decrease the time necessary to locate a problem, speed up power restoration, and reduce downtime due to natural disasters and other events. Of course, those advancements aren't without their own impediments. Such self-healing grid technology is expensive and – like the current grid – centralized, so a failure could knock the entire thing offline. Networks of fiber optic cables, monitoring equipment, and lots of other costly hardware are necessary to make self-healing grids like Duke's possible. Using traditional telecommunications to monitor the grid also means there's a potential for cyber attacks, and scaling such systems is a further problem. "In a major problem situation of any type, you may lose those communications," Ropp told *The Register*. "And in some cases, those comms are expensive." With those drawbacks in mind, it would be hard to justify wide-scale deployment of such self-healing technologies to modernize the grid – especially given so many clean energy projects are already behind schedule and threaten to derail clean energy goals. Ropp and his fellow researchers want to solve this problem by looking at ways to prevent the formation of closed loops without needing the total situational awareness provided by such self-healing designs. "We're trying to figure out how to avoid creating a loop if the only information I can see is the information right where I am," explained Ropp, emphasizing that a key goal is avoiding reliance on a system of expensive communications equipment. ### The future-future grid is already ready Ropp and his Sandia-led team of researchers, with collaboration from boffins at New Mexico State University, have developed a method of detecting potential disruptions between microgrids using nothing but software algorithms. Better yet, the system doesn't require any new hardware, and could be readily deployed on relays – the microprocessor controls for grid switches that reconfigure electrical systems in various ways. "New software on existing hardware was our focus on this project," Ropp said. "Almost all of what we're doing is deployable on existing commercial hardware." *Pictured: Sandia National Lab's Dr Michael Ropp, who led development of an algorithm that could make future electrical grids self-healing without the need for new hardware.* As described in a pair of papers published in 2022 and 2023, Ropp and his team have sussed out a system that works at each relay switch – without any knowledge of the rest of a larger system of microgrids between which relays would form bridges. By looking at the frequency of voltages on either side of a relay and running the measurements through an algorithm, Ropp's software arrives at a correlation coefficient between the two sides that determines whether two microgrids should be disconnected to prevent a loop forming.
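To give a flavor of that relay-local decision - and only a flavor, since the article does not spell out the exact statistic, window, threshold, or trip rule used in the Sandia papers - correlating the frequency measurements seen on each side of a relay might look something like this in Python:

```python
# Hypothetical sketch of a relay-local loop check: correlate the voltage
# frequency samples measured on each side of the relay and compare the
# result against a threshold. The threshold value and the decision rule
# below are placeholders, not the published Sandia algorithm.

from statistics import correlation  # Pearson's r, Python 3.10+

FREQ_SIDE_A = [59.98, 59.99, 60.01, 60.02, 60.00, 59.97]  # Hz samples
FREQ_SIDE_B = [59.97, 59.99, 60.02, 60.01, 60.00, 59.98]

CORRELATION_THRESHOLD = 0.9  # assumed value, for illustration only

def loop_risk(side_a, side_b, threshold=CORRELATION_THRESHOLD):
    """Return True if the two sides look like parts of the same
    interconnected system, i.e. this relay risks closing a loop."""
    return correlation(side_a, side_b) > threshold

if __name__ == "__main__":
    if loop_risk(FREQ_SIDE_A, FREQ_SIDE_B):
        print("trip relay: potential unintentional loop")
    else:
        print("relay can stay closed")
```

In the real system the measurement windows, filtering, and trip logic would come from the published algorithms and be validated in the lab and utility testing described below.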
Each microgrid relay, equipped with the necessary code to make that determination, could act independently to prevent grid malfunction. According to Sandia, those algorithms could be used to determine when a portion of a grid should be shut off to maintain power supplies to critical resources (like hospitals), and could reorganize to avoid damaged microgrids – much like the existing centralized systems in use by companies like Duke. With the hardware necessary for such a system largely in place, this isn't a distant, far-term project – it could be in place in less than five years, it's claimed. "We use a lot of existing functions that are already used in the power system – we just use them in new ways to try to detect new things," Ropp told us. "We came up with [the loop detector] ourselves to solve a specific problem, but the whole idea is that this is something that can be practically applied on power systems tomorrow. We've got the technology, we're ready to go with this." Of course, testing will be needed to ensure the preliminary results demonstrated in the papers bear out in the real world. "We want to pound the living daylights out of it to make sure that it really does work," Ropp explained. "We're confident, but we haven't done larger scale testing yet." The team is already setting up test facilities at Sandia, and has partnered with several utility companies around the US to ensure the concept works across different power system design philosophies. Ropp isn't sure where the tech may be deployed first, but suggested it could end up being tested in multiple locations, once validated in the lab. As for whether we can make the transition from our old centralized electrical paradigm to a world of distributed generation and microgrids, Ropp has faith we can, with the matter all boiling down to how affordable we can make it. "What we're trying to do is to create solutions that allow us to meet the challenge without breaking the bank," he declared, "and [the self-healing grid algorithm] is what that's all about." ®
true
true
true
They're just an algorithm away, national lab engineer tells El Reg
2024-10-12 00:00:00
2024-02-01 00:00:00
https://regmedia.co.uk/2…shutterstock.jpg
article
theregister.com
The Register
null
null
16,177,828
https://insidehpc.com/2018/01/enabling-fpgas/
Enabling FPGAs - High-Performance Computing News Analysis | insideHPC
MichaelS
*Sponsored Post* In the past few years, accelerators that speed up certain classes of problems have made headway from the lab to production environments. Field Programmable Gate Arrays (FPGAs) are an exciting technology that allows hardware designers to create new digital circuits through a programming environment. Compared to hardware that is designed once, or software which must adhere to the hardware architecture, an FPGA allows developers to draw a circuit to solve a specific problem. FPGAs are programmed to simulate a hardware circuit. The design of this circuit only lasts while power is supplied to the FPGA; once it is powered off, the design is gone. Downloading the design from an application or at system boot time essentially turns on the FPGA so it can be used by an application. FPGAs contain a highly parallel architecture which enables very high performance for certain applications with very low latencies. Intel is leading the way in giving a wider range of developers the tools necessary to create innovative uses of FPGAs. FPGAs are becoming an essential tool in diverse fields such as autonomous driving, cloud computing and accelerated networking. In all of these domains, very large amounts of data are produced, which is where FPGAs can contribute significantly. Automated vehicles deal with about 1 gigabyte of data per second, and the system must process all of this data in almost real time. Only fractions of a second are available for the system to detect and react to an incident. New wireless network standards are demanding tremendous performance, just as it is estimated that the data handled on wireless networks will exceed wired data in the not too distant future. As many industries move towards cloud computing, FPGAs located in large datacenters will be able to provide the enormous processing power that is needed to address large-scale requirements. Previously, programming an FPGA required knowledge of the underlying hardware systems, using programming languages that were familiar to hardware designers. In order to broaden the appeal to software designers, new programming environments need to be developed that software developers are more familiar with. Software stacks are being developed by Intel that allow a developer to create an application that is implemented in the FPGA. The lower-level architectural expertise is embedded in the libraries themselves. By allowing different types of developers access to the software stack at different levels, a wider range of applications can be created that run on FPGAs. Intel is creating a number of technologies that will enable more development of applications that run on FPGAs. From the actual hardware itself to programming tools and libraries, a complete stack is being made available for a wide range of developers. The tools that Intel is creating allow for more optimized and simplified hardware interfaces and software Application Programming Interfaces (APIs). Developers can create applications that are highly tuned to the problem at hand, and expect very high performance. Intel helped create the Open Programmable Acceleration Engine (OPAE), which can handle many tasks, including all the details of the FPGA reconfiguration process.
OPAE also contains drivers, libraries, and example programs that developers can quickly become productive with. Intel is leading the FPGA world with highly programmable FPGA solutions. Moore's Law has given FPGA designers a lot to work with, and the designers, in turn, have used those transistors to add features with software programmability in mind. Intel has released a framework to help system owners, application developers, and FPGA programmers interact in a standard way – a real advantage when using Intel FPGAs: a common developer interface, tools, and IP make it easier to leverage FPGAs and reuse code.
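To make the "load a design, then use it from an application" workflow concrete, here is a deliberately simplified Python sketch. Every name in it is hypothetical - this is not the OPAE API or any real Intel toolchain, just the shape of the flow described above: configure the FPGA with a design at startup, then hand work to the accelerator.

```python
# Purely illustrative pseudo-driver: the class and method names below are
# invented for this sketch and do not correspond to OPAE or any real SDK.

class FakeFpga:
    """Stand-in for an FPGA device handle."""

    def __init__(self):
        self.design_loaded = None

    def load_design(self, path):
        # In a real flow this is where the circuit design ("bitstream")
        # is downloaded to the device; it is lost again at power-off.
        self.design_loaded = path
        print(f"FPGA configured with design from {path}")

    def accelerate(self, data):
        # Hand a batch of work to the configured circuit.
        if self.design_loaded is None:
            raise RuntimeError("no design loaded")
        return [x * 2 for x in data]  # placeholder for the offloaded computation


if __name__ == "__main__":
    fpga = FakeFpga()
    fpga.load_design("filter_design.bin")   # done at app start or system boot
    print(fpga.accelerate([1, 2, 3]))       # the application uses the accelerator
```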
true
true
true
Field Programmable Gate Arrays (FPGAs) are an exciting technology that allows hardware designers to create new digital circuits through a programming environment. Compared to hardware that is designed once or software which must adhere to the hardware architecture, an FPGA allows developers to draw a circuit to solve a specific problem.
2024-10-12 00:00:00
2018-01-18 00:00:00
https://insidehpc.com/wp…08/stratix10.jpg
article
insidehpc.com
High-Performance Computing News Analysis | insideHPC
null
null
28,116,795
https://www.bloomberg.com/news/articles/2021-08-06/amazon-lottery-offers-vaccinated-workers-cars-500-000-cash
Bloomberg
null
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
2,778,664
http://www.rackspacestartups.com
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,808,406
https://jacobinmag.com/2020/07/international-poverty-line-ipl-world-bank-philip-alston
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
7,888,618
http://www.bloomberg.com/news/2014-06-13/priceline-to-buy-opentable-in-deal-valued-at-2-6-billion.html
Bloomberg
null
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
16,716,156
https://www.theverge.com/2018/3/30/17177758/destiny-2-bungie-go-fast-update-teachable-moment-controversy-game-industry
The never-ending Destiny 2 controversy is a teachable moment for the game industry
Nick Statt
Bungie’s latest and most significant attempt to address vocal player complaints surrounding *Destiny 2* landed on Tuesday, a little more than six months after the game’s initial launch. And yet inevitably, the vociferous community has regressed in a mere few days to its previous state of near-perpetual outrage. The update, nicknamed the “Go Fast” update for how it was designed to address complaints about speed and competitive intensity, made a number of big changes to how powerful certain firearms were and how effectively and frequently the game’s signature brand of sci-fi superpowers could be wielded, among other crucial gameplay changes. But the end result is a similar level of frustration from players who feel Bungie isn’t going far enough and, more importantly, that it continues to misread player expectations. At this point, *Destiny 2* feels chained down by its own fundamental design decisions, ones that are all but impossible to uproot and alter without a full-scale reboot of the title in the style of Square Enix’s infamous overhaul of *Final Fantasy XIV*. Players want Bungie to effectively roll the game back to the state it was in during the peak of the original *Destiny*, an unlikely course of events for a studio that has only just begun examining its most critical missteps. Those range from a lack of meaningful in-game activities and bland rewards, to a sluggish and uninspired weapons system. Bungie’s Destiny 2 troubles are instructive for the rest of the games industry Of course, “video game players are mad at video game” isn’t exactly a novel narrative, and it’s certainly not specific to games like *Destiny 2*. But what makes Bungie’s efforts with the sequel to its shooter / MMO hybrid so interesting is how instructive it is for the rest of the game industry. So many video games today are created as persistent, ever-evolving products that can be altered in subtle and drastic ways through post-launch expansions, updates, and patches. Look at Epic Games’ *Fortnite*, a game that responded to an industry trend last year and has since blown up into a worldwide phenomenon thanks in part to a breakneck and radically creative update cycle. But what if the game maker, at the highest possible level, misunderstands what players actually want, and doesn’t listen to or trust those players when they verbalize those demands? No amount of nimble iteration or cool new features can bridge a gap of trust. And that’s what Bungie appears to suffer from today, with a player base that almost refuses to believe the company has the best interests of the game at heart and wants accordingly to act in good faith. We don’t know how much money *Destiny 2* is making, or how many people play it every day or month. Bungie won’t say, and it could be that the game is healthy and revenue is flowing in from its in-game Eververse store. But from even just a cursory community snapshot, players are unhappy and the game feels as if it’s on a path toward an unsalvageable state. Bungie recognizes this, and members of the development team have become increasingly candid, almost sardonic, in on-camera interviews. Sandbox design lead Josh Hamrick described the team’s philosophy these days with the phrase, “What’s the worst that can happen?” in a YouTube breakdown of the “Go Fast” update earlier this week. So how did we get here, and how did we miss the warning signs? Since *Destiny 2*’s launch last September, the narrative around the title* *has shifted dramatically and so frequently that Bungie has often failed to keep up. 
But from the start, it’s centered on players wanting a hardcore experience similar to *Destiny 1*, and yet Bungie delivering a watered down, simplified version of that experience. Every time the company has found itself mired in controversy, the developer — via a purposefully rotating cast of public-facing voices — pledges to listen more to feedback. Yet Bungie has taken months to address player demands and to try to remedy the game’s lack of fun factor — the central problem at its core. For as many of these undelivered changes that you can chalk up to players not understanding the nuts and bolts of game development, there remains an equal number of seemingly simple, crowd-pleasing home runs Bungie could have made far earlier and yet inexplicably did not. Why, for instance, did it take Bungie six months to deliver a free-for-all “Rumble” playlist for its competitive Crucible multiplayer mode? Why did it take an equal amount of time to make the necessary “sandbox” changes, which determine the speed and variety of play styles the game incentivizes, to address the fact that almost everyone was using the same minuscule set of guns, armor, and abilities? Bungie misunderstood from the onset what it thought players wanted There remains a laundry list of requests players continue to ask for, and yet it may take Bungie months to deliver them alongside its next big content drop in May and an even larger one planned for September. Demanding the studio overhaul the game’s entire design and mechanical framework while also providing new activities to enjoy is a big ask. But players want nothing short of a miracle to save what many considered their primary post-work pastime. But again, what we’re really discussing here is not the specific changes it would take for Bungie to “fix” the game, whatever that means, or even really where it went wrong and how. (The two-primary weapon system is a likely culprit for the latter investigation, alongside a game design philosophy stubbornly rooted in simplicity at all costs.) We’re talking about a game developer that misunderstood from the onset what it thought players wanted, only to find out later on that it had made near-fatal mistakes. Many of the changes Bungie outlined in a development roadmap earlier this year involve implementing features the original game enjoyed and yet were taken out of the sequel. In an almost Gladwellian twist, what critics, players, and even the game makers themselves thought was a step in the right direction was in fact multiple steps backward. We just couldn’t see it in the lead up to or even during the launch. When I reviewed *Destiny 2* in September, I said it was everything fans had been asking for. I sincerely believed that: it had planetary fast-travel, a milestone system for simplified progression, and a more balanced and team-based competitive multiplayer mode. Everything we thought *Destiny 2* needed — less one-hit-kill combat in multiplayer, less frustrating systems for managing resources and powering up your character, less randomized loot drops — it turns out was the original game’s lifeblood.* * *Destiny 2* has been a counterintuitive failure unfolding for over half a year now, with the reality of the situation taking many thousands of in-game player hours to gel into a cohesive picture of dissatisfaction and unmet expectations. Sure, some players called it earlier than others, with frustrations bubbling up just weeks after launch. 
But not until the December expansion, *Curse of Osiris*, did it feel like the game had entered into an irreversible downward spiral. As one rather prescient fan wrote on Reddit back in December, Bungie was simply responding in the wrong way to the right issues, going too far in some respects and not far enough in others. "As I read many of the threads in this sub that discuss people's issues with *Destiny 2* I have realized that many of the drastic changes Bungie have made are the direct result of complaints that were made throughout the life of *Destiny 1*," wrote the user. "A lot of which I contributed to. I participated in conversations and made posts complaining about many of the things that Bungie tried to address in *D2* to strike that balance between the casual and hardcore player." While players can shoulder some of the blame here for vocally telling Bungie to move in every direction at once, the developer carries the more significant responsibility of figuring out what it is that makes its product successful and fun, and then figuring out how to improve those aspects of the product for everyone. *Destiny 2* has become a telling example of how a game maker can overestimate its ability to deliver something millions of people will enjoy without deeply engaging with those players, and without listening to or trusting the community when its members say they're unhappy. The biggest pitfall of the games-as-services shift in the industry over the past half-decade or so is that a developer can create a game with problems that minor and even major updates *can't* fix — a product so at odds with what players want that no amount of tweaking will repair its image in their eyes. And that, because of the very nature of the product, this situation might not become readily apparent to all parties involved until months after launch. The most successful games-as-platforms today, like *Fortnite* and Blizzard's *Overwatch*, don't just iterate quickly, take risks, and do so while providing extensive communication with players. The developers of those games at a fundamental level also understand what players want and how to give it to them. You can't adequately respond to player feedback if you're not even on the same page. Bungie has the resources and time to fix *Destiny 2*, though it will most likely happen when the pricier September expansion that mirrors 2015's *The Taken King* drops later this year. Weathering the storm until then won't be pleasant, but the beauty of games like *Destiny* is that they are never static. They can always change into something else. The team at Bungie just has to be willing to let go of what they thought they wanted to make, and turn their focus toward what made the original game work so well. Finding the answer is just a matter of tuning in to the community, and listening to what they have to say.
true
true
true
What we can take away from Bungie’s troubled shooter six months after launch
2024-10-12 00:00:00
2018-03-30 00:00:00
https://cdn.vox-cdn.com/…y_arcstrider.png
article
theverge.com
The Verge
null
null
8,298,304
http://gigaom.com/2014/09/10/the-bay-area-gets-the-european-internet-exchange-model-netflix-hopes-will-spread/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,688,606
https://turbo.art/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
8,589,205
http://zudepr.co.uk/curate-content-pr-how/
How to Curate Content & Influence People | Zude PR blog
null
**Interested?** Ring 0141 569 0342 or **get in touch** I’m not a religious man. But I admire the community aspect of religion. The social binding that comes through a good priest watching over his flock. I guess that’s where the relatively new idea of “curated content” comes from. Sharing the latest and greatest thinking from around the world to create a community around your brand. In my case Zude PR…rather than God. Curating content does not mean nicking other people’s ideas and passing them off as your own. Oh no. As I’ve blogged before, times have changed. With few exceptions, every business should be sharing what it knows in 2014. When I set up my startup public relations company in Glasgow seven months ago, I looked around at the tools in my PR and marketing armoury. How was I going to compete with my competitors? How was I going to raise my profile when, aside from launch day, my business hadn’t got many newsworthy stories to tell? Bigger, more established Glasgow PR companies had an in-built advantage. A tricky one. And the tried and tested media relations route was not the answer. One of the ways I addressed this imbalance was content curation. Meaning: finding interesting, on-brand stuff, injecting a bit of your own personality, and sharing it through social media. Here, the Internet was my friend and I have had, as my mentors, some of the best in the business. Not personally, you’ll understand. People like Kevan Lee, Jeff Bullas, Gini Dietrich, and the guys at PR Daily, Moz and Copyblogger, to name but a few. Most of these gurus blog daily and the content they produce is amazing. The great thing about being a content curator pro is that you must read every single thing you share on social media. This means that over a period of time you learn so much that you become completely au fait with your topic area. You are not only sharing all the latest thinking but also implementing it. Whether that’s for your own business or (in my case, as someone who provides content marketing services) your clients’. Would I tell every organisation to go down the content curation route? No. But most businesses could benefit. The amount of people I bump into now who marvel at my grasp of content marketing, SEO and PR…and wonder where the heck I find all this stuff. And it is this curated content, mixed with self-penned blogs, which has led to all my new business wins over the last seven months. It reminds people I’m here, tells them what I’m doing and gives them something useful. I don’t worry about giving it all away either. If any of my potential clients have got the time to read everything I curate, fair play to them. Otherwise, they may just hire someone like me to do it for them. **So, that’s my story. For the rest of this post I’m going to share with you the 21 tools I use to find all this amazing material.** **I’ll then finish off with my six top tips on how to curate content like a public relations pastor and create a congregation.** *And remember. The best priests are those who care for their parishioners. Imparting little homilies and wisdoms on a daily basis is all very well but – once a week – everyone’s going to expect an original sermon. Or in my case a blog (mea culpa, must try harder).* It’s worth stating here that before you begin sharing your thoughts and those of others with potential customers, you’d better understand what you stand for. Tone of voice is so important. My three brand pillars are trust, integrity and results. They colour my tone of voice on social media. 
I would add honesty to this, and also a little bit of humour (but not too much). I admire Buffer’s tone of voice, which is all about truthfulness and transparency. I am indebted to this great article (albeit I’m sure many of my clients would think sharing all staff salaries is a bridge too far). Once you’ve worked out your tone of voice, it’s all about working out what you would like to be known for. Who are your target audiences. Who is your target customer. This is the most important bit. I share content on media relations, content marketing, SEO, media crisis management, public relations, ethics, honesty, measurement, social media marketing, and Glasgow. Try to keep it wide enough that you have a steady stream of content but not so wide as people don’t know what you stand for, or what services you offer. You need a website. I don’t need to say that right? If you haven’t got one, stop reading now. You also need social media portals. Take advice from someone like me here. If you’re selling to other businesses, do you need a Facebook page? Sometimes yes but most times no. Set up enough social media portals that you are seeing the value from identifying great content. Most B2B companies need a verified Google+ company page (Google My Business), a LinkedIn personal and company page and a company or personal Twitter feed. And get them set up right. You can do it yourself but more often than not it’ll look a bit shonky. Best to hire a professional who will make you and your company look good. Also, make sure you join and contribute to relevant Google+ Communities and LinkedIn Groups. Then set yourself up with a social media post scheduler. I’m with Hootsuite Pro but I also have the free Buffer subscription. Don’t try to make curated content a key part of your marketing strategy without these time-saving tools. Pretty much every curated content tool nowadays can schedule BUT you still need Buffer or Hootsuite (I’ve not tried Sprout Social but I believe that is great too). The time Hootsuite Pro saves you for under £100 a year is immense. Right, ok, so here are the 21 ways I find content for Zude PR and its clients. Hope you find it useful. *I use the Chrome browser with extensions, and have an Android Nexus 7 tablet and Nexus 5 smartphone. I do the vast majority of content curation on my laptop. I won’t kid you, it takes a fair bit of time to sign up to all these tools and tell them what you want them to find. But, it’s worth it.* Buffer, the scheduling tool, also has a “suggestions” section. It makes 25 suggestions at a time. I find myself sharing one or two of these a week. They tend to be quirky and contain a few quotes. It gets to know you/your client pretty well after a time. Sharing is easy because the suggestions are made within the scheduling tool. Buffer is great. And why do I think this? Two words: content marketing. The Buffer blog is the only one I Twibble, such is my confidence that every post will be a great one. Again, Hootsuite has a “suggested” section which has been in beta for as long as I can remember. Like Buffer, it’s so easy to share. I find some of the 20 suggestions a bit salesy e.g. company x is launching a new social media product, and some of the content is sponsored. In addition, some of the content is too old. Ex-journalists tend to make great content marketers…and they would tell you that it pays to be first with the news. *I’m not going to go into the scheduling functionality of each one here. Some only post to two social networks (unless you upgrade). 
Some only share there and then. Some you can set to share in say 2/4/8 hours/whenever you want. Some you can change the text for each social media portal. I will focus on the content they find for you.* For me, ContentGems is the Daddy of content suggesters. For Zude PR, I have it set for two interests (the maximum on the free plan). It emails me every day with about 20 suggestions for each topic, and they are always on song. The articles it puts forward are content gems: it does what it says on the tin. My go-to content suggester. Swayy is also good. It’s a lot more visual than ContentGems and the rolling dashboard is a good feature. It takes me longer to find good content though and I often find myself flipping out before I’ve had time to explore its suggestions. Again, you receive a daily e-mail, but with just four or five pieces of suggested content, more often than not it misses the mark. *October 2015 update: perhaps this is why it shut down in July 2015, with its founders joining SimilarWeb.* Tried Beatrix, have an account, persevered with it a few times but now never use. They were good in offering me a free webinar but I found it a bit fiddly and there are only so many hours in the day. Klout is addictive; who works in social media and doesn’t know their Klout score:O). It is also a content suggester, and scheduler. A year ago, I used Klout to find content. But the more I used its content suggesting functionality, the less content I curated. I found myself swiping through loads of visually-well-laid out dashboard content to unearth that gem. I sometimes use it, as it does well in finding local (to Glasgow) material. *The more your Twitter network grows, the more useful these next two services are. I find them particularly good for quick and dirty scheduling. The key with curated content is not spending too much time while giving your flock the sort of tending they deserve. These two daily emails highlight the most-shared items by your Twitter followers. It’s likely that you will want to share too.* Simple sign-up process. Always on point. The content author is usually someone whose work you are familiar with. Take a quick squizz then share. Simples, is Daily Digg. Nuzzel is similar to Digg but, if anything more comprehensive. It suggests not only the most shared content of your Twitter followers but also the most shared content of their friends. *And like the two above, check out number nine and sign up to the daily newsletter.* Another daily email. Posts on Medium tend to be less commercial and more quirky/thoughtful…if sometimes outrageous. Watch for that, outrage will not form a key plank of your tone of voice. I write on Medium and it’s good to support what is an excellent, clean blogging platform. BuzzSumo is just an amazing piece of kit. With many uses. E.g. I’ve used it when writing this post to find out how other bloggers have approached the topic of curated content, and what have been the most popular posts over the past year. When curating content, I set it on the week filter on a Monday and, depending on time and client needs, check it daily for topic areas. It tells you how many “social shares” each piece of content has had. Not always but, as a rule, the higher the shares the better the content. I then click on the original webpage and press Hootlet in my extensions. Reference the author, add your own commentary and share to the appropriate networks. And bear in mind it’s not just a blog search tool. It covers infographics, podcasts and videos too. 
Feedly rocks; if you have time, that is. As you curate more and more content for your company, you’ll get to know which authors you like and which you don’t. If you think your followers will like their content, stick their RSS feed in your Feedly. **Pro tip:** if you’re using Chrome head over to the Chrome Store and get Feedly Mini. Make life easy for yourself. Your Feedly knows which posts you’ve read and which you haven’t. As you get better at curating content, your Feedly will be your go-to resource for unearthing those hidden gems. I schedule time every morning to rifle through my Feedly. I skip over the bloggers whose posts I’ve shared during the week and have a look at the more obscure ones. Those bloggers, like me, who only make time to post once a week or so. If I spot a gem, I buffer it up for later in the week. Reddit is a whole ecosystem in itself. There are subreddits for every interest, but don’t break the rules. I sometimes check in to see if I can spot some content which the Reddit community has voted up (a sure sign of content virality). But more often than not I just dip in when I’m trying to promote a blog post. A good search tool but the content you find tends to be a bit negative. Upworthy is definitely worth a look because there can be some real beauties in there. This one’s simple. The Latest gives you the 10 top trending links on Twitter, right now. I’ve just clicked on it and it tells me a new Google Calendar app will be out on Android v. soon. I won’t be sharing that, but I will be using it. The new brand tracker tool. It’s an app anyone can use to search any company/keyword on social media and the news. We all know being visual on social media begets more interaction. So why don’t you see many companies sharing slidedecks? There are hundreds of hidden gems to be found on Slideshare. The same goes for YouTube. Most people still see LinkedIn as a text-based rather dry networking tool for businesses. Not so. With the roll-out this year of LinkedIn Publishing it is now a sophisticated blogging platform. If you write good content, and it’s picked up by Pulse, the reach is phenomenal. And even though it’s barely a year old, the amount of content already on there is amazing. Worth a look at its blog post search tool (click search for posts) once a week in my opinion. I always find something relevant to share. The thing about content schedulers such as Hootsuite and Buffer is that they only post to your Google+ Company Page. That’s great, but, most people then neglect their personal page. I find the What’s Hot – Google+ hashtag search tool a great means of +1’ing/sharing content to those in my various Google+ Circles. There’s some interesting, visual content on Google+ and marketers ignore it at their peril. Want to find great blogs in any given topic area, look no further than Alltop. If a blog’s on Alltop it’s a good ‘un. Go one step further and stick it in your Feedly. You know it makes sense. I’m experimenting with this. I’ve noticed that quite a few influencers in my field are using Scoop.it and every time I Scoop content to my Scoop.it dashboard I get a Mention. That sentence sounds like I’ve spent far too much time in social media land. Scoop.it is another means of sharing content, inventively, on Twitter, and has good SEO value. So I’m for it, so far. I’m not sure yet whether Snip.ly is the missing tool in the effective content curator’s toolbox or a hammer to crack a nut. 
I’ve been using it for a month now and all I can say is, try it for size, use sparingly and see how it goes. I like it, as so do others. Put some time aside every week to buffer your content. I do Zude PR’s first thing on a Monday morning. I start with the timeous content which has come in over the weekend then switch to Feedly and BuzzSumo to look at older and more obscure material. That way, if I don’t have time to do any more curating during the week, I have a base level of curated content. I then know I won’t disappear off my congregation’s radar. Digg, Nuzzel, ContentGems and Swayy. Wait until the end of the day or deal with the emails when they come in. But try to be first with the news. Sharing is caring. Don’t go off piste. I’m a startup. There are so many things that affect my business on a daily basis; thing I’m interested in. But sharing productivity techniques is not what my brand is all about. Focus. Don’t just share. You’ll get used to it but all these suggestion and scheduling tools write your social media updates for you. Don’t just press send. Say why you’re sharing it. Inject some of your brand personality. Don’t overdo LinkedIn. Do overdo Twitter. Twitter’s like a river; LinkedIn isn’t. Get a system for storing your stuff on the go. I use Bookmarks on my browser and Pocket on my phone. This is where I head first on a Monday morning. **Interested in finding out more about content curation? These are the best recent posts on the subject.** https://blog.kissmetrics.com/double-social-media-content/ https://blog.bufferapp.com/guide-to-content-curation https://blog.bufferapp.com/17-unique-places-to-find-great-content-to-share http://alltopstartups.com/2014/09/22/content-discovery-curation-and-marketing/ https://zenoptimise.com/buzzsumo-better-content-marketeres-to-find-great-content-to-share **And here’s one that’s come in from my mate Jeff while I’ve been sleeping on this blog post:** http://www.jeffbullas.com/2014/11/05/spent-10-minutes-per-week-triple-twitter-followers/ *I’m getting an average of 20 people a day following me on Twitter at the moment, many influential, through the quality and relevance of the content I curate for Zude PR.* **My name’s Dave @zudepr** **I’m a multi-award-winning Glasgow PR guy offering media relations, content marketing and SEO advice to clients across the UK.** **I would be really interested in finding out others’ experience of curating content for their business or their clients.** **Head over to zudepr.co.uk if you are interested in finding out more or call me on +44 (0)1415690342. ** *P.S. I found myself curating my own content last Friday, after I was approached by PR industry bible PR Daily to publish one of my first blogs on their website.* *Loving the fact that my PR Daily post came through in my daily ContentGems email on Saturday. One of 20 recommended Public Relations articles. That’s the modern-day PR equivalent of having your press release rewritten and picked up by the Press Association or Reuters, still The Holy Grail when you’re trying to get a client’s story out there in traditional media.* ## 0 Comments
true
true
true
2014 startup Glasgow PR company founder David Sawyer shares 21 practical top tips on how to curate content and influence people.
2024-10-12 00:00:00
2014-11-11 00:00:00
https://zudepr.co.uk/wp-…-a-PR-pastor.jpg
article
zudepr.co.uk
Glasgow PR Agencies | Scottish Digital PR Company | Zude
null
null
7,558,399
https://medium.com/best-thing-i-found-online-today/c4d8c9ccce39
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
37,226,162
https://arxiv.org/abs/2308.08253
Benchmarking Neural Network Generalization for Grammar Induction
Lan; Nur; Chemla; Emmanuel; Katzir; Roni
# Computer Science > Computation and Language [Submitted on 16 Aug 2023 (v1), last revised 25 Aug 2023 (this version, v2)] # Title: Benchmarking Neural Network Generalization for Grammar Induction Abstract: How well do neural networks generalize? Even for grammar induction tasks, where the target generalization is fully known, previous works have left the question open, testing very limited ranges beyond the training set and using different success criteria. We provide a measure of neural network generalization based on fully specified formal languages. Given a model and a formal grammar, the method assigns a generalization score representing how well a model generalizes to unseen samples in inverse relation to the amount of data it was trained on. The benchmark includes languages such as $a^nb^n$, $a^nb^nc^n$, $a^nb^mc^{n+m}$, and Dyck-1 and 2. We evaluate selected architectures using the benchmark and find that networks trained with a Minimum Description Length objective (MDL) generalize better and using less data than networks trained using standard loss functions. The benchmark is available at https://github.com/taucompling/bliss. ## Submission history From: Nur Lan **[v1]** Wed, 16 Aug 2023 09:45:06 UTC (302 KB) **[v2]** Fri, 25 Aug 2023 13:40:31 UTC (302 KB)
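Since the abstract leans on these formal languages, here is a small illustrative Python sketch (my own, not code from the paper or its benchmark) showing what membership in two of them means: strings of the form $a^nb^n$, and Dyck-1, the language of balanced parentheses.

```python
# Illustrative only -- not from the paper's benchmark. Two of the formal
# languages named in the abstract, written as simple membership checks.

def is_anbn(s: str) -> bool:
    """True if s is a^n b^n for some n >= 0 (a block of a's followed by
    an equally long block of b's)."""
    n = len(s) // 2
    return len(s) == 2 * n and s == "a" * n + "b" * n

def is_dyck1(s: str) -> bool:
    """True if s is a balanced string over '(' and ')' (Dyck-1)."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:          # closed more than opened
                return False
        else:
            return False           # invalid symbol
    return depth == 0

if __name__ == "__main__":
    print(is_anbn("aaabbb"), is_anbn("aabbb"))    # True False
    print(is_dyck1("(()())"), is_dyck1("())("))   # True False
```

A generalization test in the spirit of the benchmark would train a network on strings up to some length and then probe it on much longer, unseen strings, scoring how far beyond the training range its membership judgments stay correct.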
true
true
true
How well do neural networks generalize? Even for grammar induction tasks, where the target generalization is fully known, previous works have left the question open, testing very limited ranges beyond the training set and using different success criteria. We provide a measure of neural network generalization based on fully specified formal languages. Given a model and a formal grammar, the method assigns a generalization score representing how well a model generalizes to unseen samples in inverse relation to the amount of data it was trained on. The benchmark includes languages such as $a^nb^n$, $a^nb^nc^n$, $a^nb^mc^{n+m}$, and Dyck-1 and 2. We evaluate selected architectures using the benchmark and find that networks trained with a Minimum Description Length objective (MDL) generalize better and using less data than networks trained using standard loss functions. The benchmark is available at https://github.com/taucompling/bliss.
2024-10-12 00:00:00
2023-08-16 00:00:00
/static/browse/0.3.4/images/arxiv-logo-fb.png
website
arxiv.org
arXiv.org
null
null
2,337,515
http://www.nytimes.com/2011/03/17/arts/design/arduinos-provide-interactive-exhibits-for-about-30.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,632,017
http://www.pbs.org/wgbh/pages/frontline/locked-up-in-america/#solitary-nation
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
4,549,623
http://www.virtualpants.com/post/31928535839/google-maps-app-for-ios-6-is-already-here
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,387,918
http://www.universetoday.com/2010/05/28/air-force-launches-next-generation-gps-satellite/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
32,703,800
https://www.theguardian.com/us-news/2022/sep/02/goat-cedar-county-fair-auction-california
A girl wanted to keep the goat she raised for a county fair. They chose to kill it
Dani Anguiano
When a young California girl purchased a baby goat last spring, the intention was to eventually sell it at a county fair livestock auction. But after feeding and caring for the animal for months, she bonded with the goat, named Cedar, and wanted to keep it. Instead, law enforcement officers allegedly travelled hundreds of miles to confiscate the pet, who was eventually slaughtered. The story is laid out in a lawsuit, first reported by the Sacramento Bee, filed by the child’s parent this week, in a case that has sparked outrage and criticism that the police and the county fair went too far to reclaim the goat and send a child’s beloved pet to slaughter. Jessica Long sued the Shasta county sheriff’s department seeking damages and accusing the agency of violating her daughter’s constitutional rights and wasting police resources by getting involved in a dispute between her family and a local fair association. In July, “two sheriff’s deputies left their jurisdiction in Shasta county, drove over 500 miles at taxpayer expense, and crossed approximately six separate county lines, all to confiscate a young girl’s beloved pet goat”, the lawsuit states. “As a result, the young girl who raised Cedar lost him, and Cedar lost his life.” According to the lawsuit, Long and her daughter purchased the baby goat while the child was enrolled in 4-H, a youth agriculture program popular in rural California. The intention of the program was that the goat would be raised by the family and eventually sold. But the girl, who is not even 10 years old, grew attached to Cedar. In June, when it was time to sell Cedar at a local fair livestock auction, she was “sobbing in his pen beside him”, the lawsuit states. “[The girl] and Cedar bonded, just as [she] would have bonded with a puppy. She loved him as a family pet,” according to the lawsuit. The family told the Shasta Fair Association that the girl, as was within her rights, did not want to continue with the sale of the goat. In another strange twist, the goat’s meat was due to be sold to the California state senator Brian Dahle, a Republican who is also running for governor. Long offered to “pay back” the fair for the loss of Cedar’s income, but the fair association ordered her to return the goat and said she would face charges of grand theft if she failed to do so, according to the complaint. She contacted Dahle’s office to explain the situation and representatives for the lawmaker said they would “not resist her efforts to save Cedar from slaughter”. She also appealed to the fair association. “Our daughter lost three grandparents within the last year and our family has had so much heartbreak and sadness that I couldn’t bear the thought of the following weeks of sadness after the slaughter,” Long said in a letter to the fair association. But the association was “unmoved”, according to the lawsuit, rejecting her offer and continuing to “threaten” Long with criminal charges. Instead, they opted to “avoid the courts and instead resort to the strong-arm tactics of involving law enforcement”, the lawsuit states. Despite having no warrant, according to the lawsuit, law enforcement seized Cedar from the Sonoma county farm and brought him to the Shasta district fairgrounds. He was eventually killed. “Cedar was her property and she had every legal right to save his life,” the lawsuit states. “Yet, the Shasta Fair Association disputed her contractual rights to do so. 
In response, two sheriff’s deputies unreasonably searched for and unreasonably seized Cedar, without a warrant.” The Shasta county sheriff’s office has told media outlets it will not comment on pending litigation. The fair association did not immediately respond to a request for comment.
true
true
true
A California lawsuit brought by the girl’s parents accuses law enforcement of traveling hundreds of miles to confiscate a beloved pet
2024-10-12 00:00:00
2022-09-02 00:00:00
https://i.guim.co.uk/img…69bacc2cd4cf5621
article
theguardian.com
The Guardian
null
null
21,957,478
https://www.nytimes.com/2020/01/03/us/military-draft-world-war-3.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,662,190
https://www.youtube.com/watch?v=3_ranbmH7PE
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,811,176
http://www.builtinchicago.org/blog/money-doesnt-matter-anymore
MONEY DOESN'T MATTER ANYMORE | Built In Chicago
null
** MONEY DOESN”T MATTER ANYMORE ** Today, for a startup, especially in the tech space, money just doesn’t matter anymore. There’s more money available today – even for mediocre stories and half-baked ideas - than anyone knows what to do with. And there doesn’t appear to be any end in sight with more and more investors than ever before all frantically chasing the shiny new things and the few deals that they hope are really exceptional. As always, it’s still a great big crapshoot in any event because (just like we say in the music business), it’s easy to tell when someone’s got a bad idea, but it’s a lot harder to figure out the one-in-a-million deal that’s gonna break through. So if you’ve got something special to sell and people are beating a path to your door, now’s the time to let them in. And there’s another game-changing aspect of the money game which is equally important. In addition to having fairly painless, reasonably-priced and readily-available access to a great deal of cash, virtually every startup today actually needs millions of dollars *less* to get their businesses up and operating. In fact, they can even get themselves far enough along the way to hit a few major milestones on what we used to call “chump change”. It’s not like the good old days when capital was a central concern (and critical to your business’s credibility and success) and you needed to raise a real war chest because – at least back then - you couldn’t launch your company on sweat, smoke and mirrors with a few servers rented from AWS. But today, for better or for worse, you can pretty much get the ball rolling with some relatively modest funding and then you just have to start praying hard for both traction and momentum. However, it’s still important to keep in mind that just because the barriers to entry are much lower; it doesn’t mean that it’s any easier to succeed. In fact, if you don’t have all the tools you need; it’s actually much harder to break through the noise, clutter and competition to get yourself and your business noticed. So, if money isn’t the be-all and end-all gating factor these days, what really does make the major difference in a startup’s likelihood of success? I’d say that it all comes down to how you handle your talent. You can teach someone all about technology, but you can’t teach talent. Talented and highly motivated people have always been and will always be the only, long-term, sustainable competitive advantage for a business and managing this particular resource is something that you need to do from the very first day of your business. In addition, we are starting to better understand that talent management is an ongoing, maybe every day, kind of job and not some kind of lay-away plan where once a year you try to make all the folks happy with raises or bonuses or options (or at least less unhappy) and then you generally try to forget about these things for the rest of the time or until something blows up in your face. We see this particular phenomena and the hyped-up emphasis on talent acquisition and accommodation in Major League Baseball right now where the balance of power (and compensation) has shifted dramatically from the on-field and dugout managers of the clubs who used to run the show to the corporate GMs who are the guys responsible for tracking down, tempting and securing the talent. Now I realize that there are already plenty of treatises, textbooks (remember those?) 
and thoughtful articles out there about the need for (and the clever ways of) attracting, nurturing and retaining talent, but these things are generally written by people sitting on the sidelines like corporate managers, business school professors and HR professionals. Frankly, it takes a lot more talent, strength and energy to start, grow or change a company than it does to run one. And, as I like to say about picking surgeons if I’m having an operation: I want the guy who’s done a hundred operations, not the guy who’s watched a thousand. My life, and the world of startups in general, are not about “say”, they’re all about “do”. So, I want to get down to brass tacks and into the trenches and talk about three critical things to keep in mind when you’re dealing with the people who will make or break your business.

(1) Exceptional Talent is a Package Deal

A very important part of your job is to make room for people. Talent comes in strange and wonderful packages and – while we’re happy to have the upsides – we are all too often not willing to understand that there are going to be trade-offs that come with the deal. You don’t get to pick and choose and you’ve got to make sure that there’s a place for everyone (including many who don’t speak, act or look like you) in your business whether or not they believe that bathing is optional or prefer working all night long to showing up before the bell rings in the morning. Productivity is what you’re looking for, not punctuality.

(2) Your Business is as Bad as Your Worst Employee

While it’s still true that the best and most talented software engineers’ contributions are a multiple of those made by the next group of smart programmers or designers, it turns out that there’s a more important overall consideration: the damage done by even a modestly underperforming employee is far more negative to the overall company efforts than the added benefit of those people punching above their weight. And tolerating mediocre performers is not only a horrible example for the rest of your folks, it’s a contagious disease that can sink your ship. This means that another part of your job – not the easiest and certainly not the most popular – is to promptly and regularly get rid of the losers. And this means even the people who are trying the hardest. It’s a sad thing to see people who have just enough talent to try, but not enough to succeed. Nonetheless, for your business to move forward, they need to move out and you have to be the agent of those changes. Waiting never helps in these cases. These situations don’t fix themselves and I have found over the last 50 years that I have never fired someone too soon. Think about it and get busy.

(3) Even Your Superstars Need Support

I used to say that talent and hard work are no match for self-confidence, but over the years, I have discovered that every one of us has serious moments of self-doubt and crises of confidence. With extremely talented people, it’s a special problem in their maturation and development. In their early years – whether it’s in business or in baseball – the superstars can mainly get by on their sheer talent alone at least until the going gets really tough and the competition starts to even out the score. Then, at some point, they fail – in a project or in a pitch – it’s inevitable and that’s where you need to be standing by to help.
Because it’s only after you have failed – only once your raw skills and talent have let you down – that you realize that the really great talents are those people who combine their talents with thought and preparation – those who can add the power of discipline to their talent are the ones we come to call geniuses down the line. But this is a precarious juncture for these people who’ve never before known a rainy day or caught a bad break and, without some support – whether they ask for it or not – there’s a risk that they can fall apart and never get their risk-taking confidence and their mojo back. If you’ve had it your own way for too long, you can come to believe (or at least convince yourself) that even luck is a talent. But it’s not. At these times, if you want to hang on to these precious people, you need to be there to help. PS: “You Get What You Work for, Not What You Wish for”
true
true
true
Learn more about MONEY DOESN'T MATTER ANYMORE.
2024-10-12 00:00:00
2014-05-24 00:00:00
https://cdn.builtin.com/…ail_fallback.jpg
article
builtinchicago.org
Built In Chicago
null
null
35,664,909
https://medium.com/@nayakdebanuj4/bw-trees-also-known-as-buzz-word-trees-7de93f70cce8
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
29,390,457
https://refiapp.io/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,444,026
http://orcz.com/GTA_V:_Java_Update_Coffeeshop
GTA V: Java Update Coffeeshop
null
java.update() is a coffeeshop in Los Santos. It is a reference to Java (the coffee bean) and Java (the programming language). Their signage combines both in a clever way, for example:

package java.cappuccino;
import java.shot.milk.foam;
Public class Breakfast extends Meal {
    Private double espresso
    private int bread;
    private int bacon;
    public void sandwich () {
        bread = 2;
        bacon = 2;
        return bread + bacon;
    }

The written code would not compile. The correct version might be:

    public int sandwich () {
        bread = 2;
        bacon = 2;
        return bread + bacon;
    }

A better solution that would compile could be this:

    public Sandwich makeSandwich() {
        Bread bread = new Bread(2);
        Bacon bacon = new Bacon(2);
        return new Sandwich(bread, bacon);
    }

You can find this shop on Boulevard Del Perro.
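For anyone who wants to see the joke actually compile outside the game, here is a minimal, self-contained Java sketch along the lines of the wiki's last suggestion. The `Bread`, `Bacon` and `Sandwich` classes are placeholders invented for this example (the in-game sign only shows a fragment), so treat this as an illustration rather than the signage code itself.

```java
// Hypothetical, self-contained version of the coffeeshop sign's joke code.
// Bread, Bacon and Sandwich are placeholder classes invented here so the
// example compiles; they are not part of the in-game signage.
public class Breakfast {

    static class Bread {
        final int slices;
        Bread(int slices) { this.slices = slices; }
    }

    static class Bacon {
        final int strips;
        Bacon(int strips) { this.strips = strips; }
    }

    static class Sandwich {
        final Bread bread;
        final Bacon bacon;
        Sandwich(Bread bread, Bacon bacon) { this.bread = bread; this.bacon = bacon; }
        @Override
        public String toString() {
            return "Sandwich with " + bread.slices + " slices of bread and "
                    + bacon.strips + " strips of bacon";
        }
    }

    // Mirrors the wiki's suggested fix: build the parts, return the whole.
    public Sandwich makeSandwich() {
        Bread bread = new Bread(2);
        Bacon bacon = new Bacon(2);
        return new Sandwich(bread, bacon);
    }

    public static void main(String[] args) {
        System.out.println(new Breakfast().makeSandwich());
    }
}
```

Compiling and running it prints a one-line description of the sandwich.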
true
true
true
null
2024-10-12 00:00:00
2013-10-09 00:00:00
null
null
null
Orcz.com, The Video Games Wiki
null
null
4,571,381
http://www.youtube.com/watch?feature=player_embedded&v=WlsahuZ_4oM
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,291,025
http://longnow.org/essays/richard-feynman-connection-machine/?again
Richard Feynman and The Connection Machine
null
by W. Daniel Hillis for Physics Today
Reprinted with permission from Phys. Today 42(2), 78 (1989). Copyright 1989, American Institute of Physics.
Photo by Faustin Bray
One day when I was having lunch with Richard Feynman, I mentioned to him that I was planning to start a company to build a parallel computer with a million processors. His reaction was unequivocal, "That is positively the dopiest idea I ever heard." For Richard a crazy idea was an opportunity to either prove it wrong or prove it right. Either way, he was interested. By the end of lunch he had agreed to spend the summer working at the company. Richard's interest in computing went back to his days at Los Alamos, where he supervised the "computers," that is, the people who operated the mechanical calculators. There he was instrumental in setting up some of the first plug-programmable tabulating machines for physical simulation. His interest in the field was heightened in the late 1970's when his son, Carl, began studying computers at MIT. I got to know Richard through his son. I was a graduate student at the MIT Artificial Intelligence Lab and Carl was one of the undergraduates helping me with my thesis project. I was trying to design a computer fast enough to solve common sense reasoning problems. The machine, as we envisioned it, would contain a million tiny computers, all connected by a communications network. We called it a "Connection Machine." Richard, always interested in his son's activities, followed the project closely. He was skeptical about the idea, but whenever we met at a conference or I visited CalTech, we would stay up until the early hours of the morning discussing details of the planned machine. The first time he ever seemed to believe that we were really going to try to build it was the lunchtime meeting. Richard arrived in Boston the day after the company was incorporated. We had been busy raising the money, finding a place to rent, issuing stock, etc. We set up in an old mansion just outside of the city, and when Richard showed up we were still recovering from the shock of having the first few million dollars in the bank. No one had thought about anything technical for several months. We were arguing about what the name of the company should be when Richard walked in, saluted, and said, "Richard Feynman reporting for duty. OK, boss, what's my assignment?" The assembled group of not-quite-graduated MIT students was astounded. After a hurried private discussion ("I don't know, you hired him..."), we informed Richard that his assignment would be to advise on the application of parallel processing to scientific problems. "That sounds like a bunch of baloney," he said. "Give me something real to do." So we sent him out to buy some office supplies. While he was gone, we decided that the part of the machine that we were most worried about was the router that delivered messages from one processor to another. We were not sure that our design was going to work. When Richard returned from buying pencils, we gave him the assignment of analyzing the router. The router of the Connection Machine was the part of the hardware that allowed the processors to communicate. It was a complicated device; by comparison, the processors themselves were simple. Connecting a separate communication wire between each pair of processors was impractical since a million processors would require $10^{12}$ wires.
Instead, we planned to connect the processors in a 20-dimensional hypercube so that each processor would only need to talk to 20 others directly. Because many processors had to communicate simultaneously, many messages would contend for the same wires. The router's job was to find a free path through this 20-dimensional traffic jam or, if it couldn't, to hold onto the message in a buffer until a path became free. Our question to Richard Feynman was whether we had allowed enough buffers for the router to operate efficiently. During those first few months, Richard began studying the router circuit diagrams as if they were objects of nature. He was willing to listen to explanations of how and why things worked, but fundamentally he preferred to figure out everything himself by simulating the action of each of the circuits with pencil and paper. In the meantime, the rest of us, happy to have found something to keep Richard occupied, went about the business of ordering the furniture and computers, hiring the first engineers, and arranging for the Defense Advanced Research Projects Agency (DARPA) to pay for the development of the first prototype. Richard did a remarkable job of focusing on his "assignment," stopping only occasionally to help wire the computer room, set up the machine shop, shake hands with the investors, install the telephones, and cheerfully remind us of how crazy we all were. When we finally picked the name of the company, Thinking Machines Corporation, Richard was delighted. "That's good. Now I don't have to explain to people that I work with a bunch of loonies. I can just tell them the name of the company." The technical side of the project was definitely stretching our capacities. We had decided to simplify things by starting with only 64,000 processors, but even then the amount of work to do was overwhelming. We had to design our own silicon integrated circuits, with processors and a router. We also had to invent packaging and cooling mechanisms, write compilers and assemblers, devise ways of testing processors simultaneously, and so on. Even simple problems like wiring the boards together took on a whole new meaning when working with tens of thousands of processors. In retrospect, if we had had any understanding of how complicated the project was going to be, we never would have started. I had never managed a large group before and I was clearly in over my head. Richard volunteered to help out. "We've got to get these guys organized," he told me. "Let me tell you how we did it at Los Alamos." Every great man that I have known has had a certain time and place in their life that they use as a reference point; a time when things worked as they were supposed to and great things were accomplished. For Richard, that time was at Los Alamos during the Manhattan Project. Whenever things got "cockeyed," Richard would look back and try to understand how now was different than then. Using this approach, Richard decided we should pick an expert in each area of importance in the machine, such as software or packaging or electronics, to become the "group leader" in this area, analogous to the group leaders at Los Alamos. Part Two of Feynman's "Let's Get Organized" campaign was that we should begin a regular seminar series of invited speakers who might have interesting things to do with our machine. Richard's idea was that we should concentrate on people with new applications, because they would be less conservative about what kind of computer they would use. 
For our first seminar he invited John Hopfield, a friend of his from CalTech, to give us a talk on his scheme for building neural networks. In 1983, studying neural networks was about as fashionable as studying ESP, so some people considered John Hopfield a little bit crazy. Richard was certain he would fit right in at Thinking Machines Corporation. What Hopfield had invented was a way of constructing an [associative memory], a device for remembering patterns. To use an associative memory, one trains it on a series of patterns, such as pictures of the letters of the alphabet. Later, when the memory is shown a new pattern it is able to recall a similar pattern that it has seen in the past. A new picture of the letter "A" will "remind" the memory of another "A" that it has seen previously. Hopfield had figured out how such a memory could be built from devices that were similar to biological neurons. Not only did Hopfield's method seem to work, but it seemed to work well on the Connection Machine. Feynman figured out the details of how to use one processor to simulate each of Hopfield's neurons, with the strength of the connections represented as numbers in the processors' memory. Because of the parallel nature of Hopfield's algorithm, all of the processors could be used concurrently with 100% efficiency, so the Connection Machine would be hundreds of times faster than any conventional computer. Feynman worked out the program for computing Hopfield's network on the Connection Machine in some detail. The part that he was proudest of was the subroutine for computing logarithms. I mention it here not only because it is a clever algorithm, but also because it is a specific contribution Richard made to the mainstream of computer science. He invented it at Los Alamos. Consider the problem of finding the logarithm of a fractional number between 1.0 and 2.0 (the algorithm can be generalized without too much difficulty). Feynman observed that any such number can be uniquely represented as a product of numbers of the form $1 + 2^{-k}$, where $k$ is an integer. Testing each of these factors in a binary number representation is simply a matter of a shift and a subtraction. Once the factors are determined, the logarithm can be computed by adding together the precomputed logarithms of the factors. The algorithm fit especially well on the Connection Machine, since the small table of the logarithms of $1 + 2^{-k}$ could be shared by all the processors. The entire computation took less time than division. Concentrating on the algorithm for a basic arithmetic operation was typical of Richard's approach. He loved the details. In studying the router, he paid attention to the action of each individual gate and in writing a program he insisted on understanding the implementation of every instruction. He distrusted abstractions that could not be directly related to the facts. When several years later I wrote a general interest article on the Connection Machine for [Scientific American], he was disappointed that it left out too many details. He asked, "How is anyone supposed to know that this isn't just a bunch of crap?" Feynman's insistence on looking at the details helped us discover the potential of the machine for numerical computing and physical simulation. We had convinced ourselves at the time that the Connection Machine would not be efficient at "number-crunching," because the first prototype had no special hardware for vectors or floating point arithmetic.
Both of these were "known" to be requirements for number-crunching. Feynman decided to test this assumption on a problem that he was familiar with in detail: quantum chromodynamics. Quantum chromodynamics is a theory of the internal workings of atomic particles such as protons. Using this theory it is possible, in principle, to compute the values of measurable physical quantities, such as a proton's mass. In practice, such a computation requires so much arithmetic that it could keep the fastest computers in the world busy for years. One way to do this calculation is to use a discrete four-dimensional lattice to model a section of space-time. Finding the solution involves adding up the contributions of all of the possible configurations of certain matrices on the links of the lattice, or at least some large representative sample. (This is essentially a Feynman path integral.) The thing that makes this so difficult is that calculating the contribution of even a single configuration involves multiplying the matrices around every little loop in the lattice, and the number of loops grows as the fourth power of the lattice size. Since all of these multiplications can take place concurrently, there is plenty of opportunity to keep all 64,000 processors busy. To find out how well this would work in practice, Feynman had to write a computer program for QCD. Since the only computer language Richard was really familiar with was Basic, he made up a parallel version of Basic in which he wrote the program and then simulated it by hand to estimate how fast it would run on the Connection Machine. He was excited by the results. "Hey Danny, you're not going to believe this, but that machine of yours can actually do something [useful]!" According to Feynman's calculations, the Connection Machine, even without any special hardware for floating point arithmetic, would outperform a machine that CalTech was building for doing QCD calculations. From that point on, Richard pushed us more and more toward looking at numerical applications of the machine. By the end of that summer of 1983, Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange. Feynman's router equations were in terms of variables representing continuous quantities such as "the average number of 1 bits in a message address." I was much more accustomed to seeing analysis in terms of inductive proof and case analysis than taking the derivative of "the number of 1's" with respect to time. Our discrete analysis said we needed seven buffers per chip; Feynman's equations suggested that we only needed five. We decided to play it safe and ignore Feynman. The decision to ignore Feynman's analysis was made in September, but by next spring we were up against a wall. The chips that we had designed were slightly too big to manufacture and the only way to solve the problem was to cut the number of buffers per chip back to five. Since Feynman's equations claimed we could do this safely, his unconventional methods of analysis started looking better and better to us. We decided to go ahead and make the chips with the smaller number of buffers. Fortunately, he was right. When we put together the chips the machine worked. 
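For readers who have not seen Hopfield's scheme before, the associative memory described a few paragraphs up can be sketched in a handful of lines. The Java toy below is only an illustration of the general idea (Hebbian storage of ±1 patterns, recall by repeatedly taking the sign of each unit's weighted input); it is not the Connection Machine implementation, and the pattern size and iteration count are arbitrary choices made for this example.

```java
import java.util.Arrays;

public class HopfieldSketch {
    final int n;
    final double[][] w;   // connection strengths between units

    HopfieldSketch(int n) {
        this.n = n;
        this.w = new double[n][n];
    }

    // Hebbian storage: add the outer product of each +-1 pattern, zero diagonal.
    void store(int[] pattern) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j) w[i][j] += pattern[i] * pattern[j];
    }

    // Synchronous recall: repeatedly set each unit to the sign of its weighted input.
    int[] recall(int[] probe, int iterations) {
        int[] s = probe.clone();
        for (int it = 0; it < iterations; it++) {
            int[] next = new int[n];
            for (int i = 0; i < n; i++) {
                double sum = 0;
                for (int j = 0; j < n; j++) sum += w[i][j] * s[j];
                next[i] = sum >= 0 ? 1 : -1;
            }
            s = next;
        }
        return s;
    }

    public static void main(String[] args) {
        int[] a = { 1, 1, 1, 1, -1, -1, -1, -1 };
        int[] b = { 1, -1, 1, -1, 1, -1, 1, -1 };
        HopfieldSketch net = new HopfieldSketch(a.length);
        net.store(a);
        net.store(b);
        int[] noisyA = { 1, 1, 1, -1, -1, -1, -1, -1 };  // pattern a with one unit flipped
        System.out.println(Arrays.toString(net.recall(noisyA, 5)));  // recovers a
    }
}
```

On the Connection Machine, as the essay notes, each neuron would be handled by its own processor; here the loops simply play all the processors' roles in turn.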
The first program run on the machine in April of 1985 was Conway's game of Life. The game of Life is an example of a class of computations that interested Feynman called [cellular automata]. Like many physicists who had spent their lives going to successively lower and lower levels of atomic detail, Feynman often wondered what was at the bottom. One possible answer was a cellular automaton. The notion is that the "continuum" might, at its lowest levels, be discrete in both space and time, and that the laws of physics might simply be a macro-consequence of the average behavior of tiny cells. Each cell could be a simple automaton that obeys a small set of rules and communicates only with its nearest neighbors, like the lattice calculation for QCD. If the universe in fact worked this way, then it presumably would have testable consequences, such as an upper limit on the density of information per cubic meter of space. The notion of cellular automata goes back to von Neumann and Ulam, whom Feynman had known at Los Alamos. Richard's recent interest in the subject was motivated by his friends Ed Fredkin and Stephen Wolfram, both of whom were fascinated by cellular automata models of physics. Feynman was always quick to point out to them that he considered their specific models "kooky," but like the Connection Machine, he considered the subject sufficiently crazy to put some energy into. There are many potential problems with cellular automata as a model of physical space and time; for example, finding a set of rules that obeys special relativity. One of the simplest problems is just making the physics so that things look the same in every direction. The most obvious pattern of cellular automata, such as a fixed three-dimensional grid, have preferred directions along the axes of the grid. Is it possible to implement even Newtonian physics on a fixed lattice of automata? Feynman had a proposed solution to the anisotropy problem which he attempted (without success) to work out in detail. His notion was that the underlying automata, rather than being connected in a regular lattice like a grid or a pattern of hexagons, might be randomly connected. Waves propagating through this medium would, on the average, propagate at the same rate in every direction. Cellular automata started getting attention at Thinking Machines when Stephen Wolfram, who was also spending time at the company, suggested that we should use such automata not as a model of physics, but as a practical method of simulating physical systems. Specifically, we could use one processor to simulate each cell and rules that were chosen to model something useful, like fluid dynamics. For two-dimensional problems there was a neat solution to the anisotropy problem since [Frisch, Hasslacher, Pomeau] had shown that a hexagonal lattice with a simple set of rules produced isotropic behavior at the macro scale. Wolfram used this method on the Connection Machine to produce a beautiful movie of a turbulent fluid flow in two dimensions. Watching the movie got all of us, especially Feynman, excited about physical simulation. We all started planning additions to the hardware, such as support of floating point arithmetic that would make it possible for us to perform and display a variety of simulations in real time. In the meantime, we were having a lot of trouble explaining to people what we were doing with cellular automata. Eyes tended to glaze over when we started talking about state transition diagrams and finite state machines. 
Finally Feynman told us to explain it like this, "We have noticed in nature that the behavior of a fluid depends very little on the nature of the individual particles in that fluid. For example, the flow of sand is very similar to the flow of water or the flow of a pile of ball bearings. We have therefore taken advantage of this fact to invent a type of imaginary particle that is especially simple for us to simulate. This particle is a perfect ball bearing that can move at a single speed in one of six directions. The flow of these particles on a large enough scale is very similar to the flow of natural fluids." This was a typical Richard Feynman explanation. On the one hand, it infuriated the experts who had worked on the problem because it neglected to even mention all of the clever problems that they had solved. On the other hand, it delighted the listeners since they could walk away from it with a real understanding of the phenomenon and how it was connected to physical reality. We tried to take advantage of Richard's talent for clarity by getting him to critique the technical presentations that we made in our product introductions. Before the commercial announcement of the Connection Machine CM-1 and all of our future products, Richard would give a sentence-by-sentence critique of the planned presentation. "Don't say `reflected acoustic wave.' Say [echo]." Or, "Forget all that `local minima' stuff. Just say there's a bubble caught in the crystal and you have to shake it out." Nothing made him angrier than making something simple sound complicated. Getting Richard to give advice like that was sometimes tricky. He pretended not to like working on any problem that was outside his claimed area of expertise. Often, at Thinking Machines when he was asked for advice he would gruffly refuse with "That's not my department." I could never figure out just what his department was, but it did not matter anyway, since he spent most of his time working on those "not-my-department" problems. Sometimes he really would give up, but more often than not he would come back a few days after his refusal and remark, "I've been thinking about what you asked the other day and it seems to me..." This worked best if you were careful not to expect it. I do not mean to imply that Richard was hesitant to do the "dirty work." In fact, he was always volunteering for it. Many a visitor at Thinking Machines was shocked to see that we had a Nobel Laureate soldering circuit boards or painting walls. But what Richard hated, or at least pretended to hate, was being asked to give advice. So why were people always asking him for it? Because even when Richard didn't understand, he always seemed to understand better than the rest of us. And whatever he understood, he could make others understand as well. Richard made people feel like a child does, when a grown-up first treats him as an adult. He was never afraid of telling the truth, and however foolish your question was, he never made you feel like a fool. The charming side of Richard helped people forgive him for his uncharming characteristics. For example, in many ways Richard was a sexist. Whenever it came time for his daily bowl of soup he would look around for the nearest "girl" and ask if she would fetch it to him. It did not matter if she was the cook, an engineer, or the president of the company. I once asked a female engineer who had just been a victim of this if it bothered her. "Yes, it really annoys me," she said. 
"On the other hand, he is the only one who ever explained quantum mechanics to me as if I could understand it." That was the essence of Richard's charm. Richard worked at the company on and off for the next five years. Floating point hardware was eventually added to the machine, and as the machine and its successors went into commercial production, they were being used more and more for the kind of numerical simulation problems that Richard had pioneered with his QCD program. Richard's interest shifted from the construction of the machine to its applications. As it turned out, building a big computer is a good excuse to talk to people who are working on some of the most exciting problems in science. We started working with physicists, astronomers, geologists, biologists, chemists --- everyone of them trying to solve some problem that it had never been possible to solve before. Figuring out how to do these calculations on a parallel machine requires understanding of the details of the application, which was exactly the kind of thing that Richard loved to do. For Richard, figuring out these problems was a kind of a game. He always started by asking very basic questions like, "What is the simplest example?" or "How can you tell if the answer is right?" He asked questions until he reduced the problem to some essential puzzle that he thought he would be able to solve. Then he would set to work, scribbling on a pad of paper and staring at the results. While he was in the middle of this kind of puzzle solving he was impossible to interrupt. "Don't bug me. I'm busy," he would say without even looking up. Eventually he would either decide the problem was too hard (in which case he lost interest), or he would find a solution (in which case he spent the next day or two explaining it to anyone who listened). In this way he worked on problems in database searches, geophysical modeling, protein folding, analyzing images, and reading insurance forms. The last project that I worked on with Richard was in simulated evolution. I had written a program that simulated the evolution of populations of sexually reproducing creatures over hundreds of thousands of generations. The results were surprising in that the fitness of the population made progress in sudden leaps rather than by the expected steady improvement. The fossil record shows some evidence that real biological evolution might also exhibit such "punctuated equilibrium," so Richard and I decided to look more closely at why it happened. He was feeling ill by that time, so I went out and spent the week with him in Pasadena, and we worked out a model of evolution of finite populations based on the Fokker Planck equations. When I got back to Boston I went to the library and discovered a book by Kimura on the subject, and much to my disappointment, all of our "discoveries" were covered in the first few pages. When I called back and told Richard what I had found, he was elated. "Hey, we got it right!" he said. "Not bad for amateurs." In retrospect I realize that in almost everything that we worked on together, we were both amateurs. In digital physics, neural networks, even parallel computing, we never really knew what we were doing. But the things that we studied were so new that no one else knew exactly what they were doing either. It was amateurs who made the progress. Actually, I doubt that it was "progress" that most interested Richard. 
He was always searching for patterns, for connections, for a new way of looking at something, but I suspect his motivation was not so much to understand the world as it was to find new ideas to explain. The act of discovery was not complete for him until he had taught it to someone else. I remember a conversation we had a year or so before his death, walking in the hills above Pasadena. We were exploring an unfamiliar trail and Richard, recovering from a major operation for the cancer, was walking more slowly than usual. He was telling a long and funny story about how he had been reading up on his disease and surprising his doctors by predicting their diagnosis and his chances of survival. I was hearing for the first time how far his cancer had progressed, so the jokes did not seem so funny. He must have noticed my mood, because he suddenly stopped the story and asked, "Hey, what's the matter?" I hesitated. "I'm sad because you're going to die." "Yeah," he sighed, "that bugs me sometimes too. But not so much as you think." And after a few more steps, "When you get as old as I am, you start to realize that you've told most of the good stuff you know to other people anyway." We walked along in silence for a few minutes. Then we came to a place where another trail crossed and Richard stopped to look around at the surroundings. Suddenly a grin lit up his face. "Hey," he said, all trace of sadness forgotten, "I bet I can show you a better way home." And so he did.
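The logarithm subroutine that Hillis describes above can also be sketched briefly. The Java version below is a guess at the spirit of the method rather than Feynman's actual code: it greedily factors an input in [1.0, 2.0) into terms of the form 1 + 2^-k and adds up precomputed logarithms of those factors. Doubles stand in for the fixed-point arithmetic in which each trial multiplication really is "a shift and a subtraction", and the step count of 40 is an arbitrary choice for this sketch.

```java
public class FeynmanLogSketch {
    // Precomputed table of ln(1 + 2^-k); per the essay, this small table is
    // what could be shared by all the processors.
    static final int STEPS = 40;
    static final double[] TABLE = new double[STEPS + 1];
    static {
        for (int k = 1; k <= STEPS; k++) {
            TABLE[k] = Math.log(1.0 + Math.pow(2.0, -k));
        }
    }

    // Approximate ln(x) for 1.0 <= x < 2.0 by greedily accumulating factors
    // of the form (1 + 2^-k). In fixed-point hardware the trial product
    // p * (1 + 2^-k) is just p plus p shifted right by k bits, and the test
    // against x is a subtraction; doubles keep the sketch short.
    static double log(double x) {
        double product = 1.0;
        double result = 0.0;
        for (int k = 1; k <= STEPS; k++) {
            double trial = product + product * Math.pow(2.0, -k); // p + (p >> k)
            if (trial <= x) {
                product = trial;
                result += TABLE[k];
            }
        }
        return result;
    }

    public static void main(String[] args) {
        for (double x : new double[] { 1.0, 1.25, 1.5, 1.999 }) {
            System.out.printf("ln(%.3f) ~ %.9f  (Math.log: %.9f)%n",
                    x, log(x), Math.log(x));
        }
    }
}
```

After 40 steps the remaining unfactored ratio is below 1 + 2^-40, so the answer agrees with the true logarithm to roughly twelve decimal places, using nothing heavier than the additions and comparisons the essay describes.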
true
true
true
null
2024-10-12 00:00:00
2016-01-01 00:00:00
null
null
null
null
null
null
1,609,234
http://www.primal.com/
Primal
null
Led by our Chief Scientist, Dr. Jimmy Lin—a global authority in Al and the David R. Cheriton Chair at the University of Waterloo—Primal Labs embodies our commitment to applying state-of-the-art Al techniques to solve real-world challenges. This applied research approach, supported by over 170 patents, directly drives the innovation behind Primal applications.
true
true
true
We create enterprise-grade AI solutions, delivering reliable results seamlessly integrated with corporate systems. Drawing from 15+ years of applied AI research, our cutting-edge approaches mitigate bias, incompleteness, and hallucinations inherent in many AI solutions.
2024-10-12 00:00:00
2024-09-12 00:00:00
null
website
null
null
null
null
5,024,650
http://www.usatoday.com/story/opinion/2013/01/06/black-boxes-cars-edr/1566098/
Editorial: 'Black boxes' are in 96% of new cars
USA TODAY
# Editorial: 'Black boxes' are in 96% of new cars

- Federal government has proposed that all new passenger vehicles be equipped with EDRs.
- But 96% of new cars already have them, as do at least 150 million older vehicles.
- Just 13 states have laws on the issue, and fewer offer strong privacy protection.

If you happen to read every word of your new car owner's manual, then you already know that your car may be monitoring your driving habits. If you're like most people on the planet, though, it will come as a surprise that a box the size of a deck of cards — called an event data recorder — is on board, tracking your seat belt use, speed, steering, braking and at least a dozen other bits of data. When your air bag deploys, the EDR's memory records a few seconds before, during and after a crash, much like an airliner's "black box." This is a handy tool for analyzing the cause and effect of crashes. It can be used to improve safety technology. But its presence is not entirely benign. The data have many other potential uses — for insurance companies, lawyers and police, for instance — and it's up for grabs. The EDR is the only part of your car that you don't necessarily own. Just 13 states have laws on the issue, and fewer — Oregon and North Dakota, for example — offer strong privacy protection. The devices, part of a car's electronic system, are almost impossible to remove. Last month, the federal government proposed that all new passenger vehicles be equipped with the devices. But 96% of new cars already have them, as do at least 150 million older vehicles. American makers, led by GM and Ford, have been putting them in cars since the mid-1990s. What the federal government ought to do is ensure that car buyers get prominent disclosure *before* they buy and that privacy protections are in place. But the trend is in the opposite direction. In 2006, when the National Highway Traffic Safety Administration first proposed regulating black boxes, it rejected calls for pre-purchase disclosure and opted for requiring a few obscure paragraphs in the owner's manual. It gave car makers six years to comply. The agency says it has no authority to regulate privacy. But it has not sought any, nor alerted Congress to the need for legislation. The chaotic results are apparent in courts and in high-profile crashes. In a 2011 crash, Massachusetts Lt. Gov. Timothy Murray, who said he was belted in and driving the speed limit, was contradicted by an EDR. The government-owned Ford Crown Victoria's recorder found that the car was traveling more than 100 mph and that Murray wasn't belted in. OK, so comeuppance for a politician doesn't sound so bad. But what about your own car? Should police be able to grab that data without a warrant? Should insurers access it so they can raise your rates? Courts are all over the place. Two New York courts have ruled warrants are not needed. Many prosecutors, not surprisingly, argue that drivers have no expectation of privacy on public roads. In California, an appeals court tossed out a drunken-driving manslaughter conviction because police failed to get a warrant for the box. Proponents of black boxes argue that they aren't all that intrusive. Maybe so, today. But technology never stands still. GPS in cellphones was originally advanced as a safety feature so callers to 9-1-1 could be quickly located. But location identification is now used in all sorts of third-party apps. People's movements are easily tracked. It wouldn't take much to tweak EDRs for equally broad uses.
They could record more. Some insurers are offering customers a cousin of the EDR, which tracks how a car is driven over a long period, so volunteer participants may qualify for lower rates. Two things are certain. Black boxes are here to stay. And without strict rules of the road, they are less a boon to safety than an intrusive hitchhiker.
true
true
true
You don't necessarily own what it records.
2024-10-12 00:00:00
2013-01-06 00:00:00
https://www.usatoday.com…t=pjpg&auto=webp
article
usatoday.com
USA TODAY
null
null
9,700,305
http://dn42.net/Home
Home
null
How-To Services Internal Historical External Tools dn42 is a big dynamic VPN, which employs Internet technologies (BGP, whois database, DNS, etc). Participants connect to each other using network tunnels (GRE, OpenVPN, WireGuard, Tinc, IPsec) and exchange routes thanks to the Border Gateway Protocol. Network addresses are assigned in the `172.20.0.0/14` range and private AS numbers are used (see registry) as well as IPv6 addresses from the ULA-Range (`fd00::/8` ) - see FAQ. A number of services are provided on the network: see internal (only available from within dn42). Also, dn42 is interconnected with other networks, such as ChaosVPN or some Freifunk networks. Still have questions? We have FAQs listed. dn42 can be used to learn networking and to connect private networks, such as hackerspaces or community networks. But above all, experimenting with routing in dn42 is fun! Participating in dn42 is primarily useful for learning routing technologies such as BGP, using a reasonably large network (> 1500 AS, > 1700 prefixes). Since dn42 is very similar to the Internet, it can be used as a hands-on testing ground for new ideas, or simply to learn real networking stuff that you probably can't do on the Internet (BGP multihoming, transit). The biggest advantage when compared to the Internet: if you break something in the network, you won't have any big network operator yelling angrily at you. dn42 is also a great way to connect hacker spaces in a secure way, so that they can provide services to each other. Have you ever wanted to SSH on your Raspberry Pi hosted at your local hacker space and had trouble doing so because of NAT? If your hacker space was using dn42, it could have been much easier. Nowadays, most end-user networks use NAT to squeeze all those nifty computing devices behind a single public IPv4 address. This makes it difficult to provide services directly from a machine behind the NAT. Besides, you might want to provide some services to other hackerspaces, but not to anybody on the Internet. dn42 solves this problem. By addressing your network in dn42, your devices can communicate with all other participants in a transparent way, without resorting to this ugly thing called NAT. Of course, this doesn't mean that you have to fully open your network to dn42: similarly to IPv6, you can still use a firewall (but you could, for instance, allow incoming TCP 22 and TCP 80 from dn42 by default). If your hackerspace is actually using dn42 to provide some services, please let us know! (on this wiki or on the mailing list). It's very rewarding when the network is actually used for something :) dn42 is operated by a group of volunteers. There is no central authority which controls or impersonates the network. Take a look at the contact page to see how to collaborate or contact us. The Getting started page helps you to get your first node inside the network. This wiki is the main reference about dn42. It is available in read-only mode from the Internet here or here or here or here or here or here or here (v6 only) and for editing from within dn42, at https://wiki.dn42 - https required for editing. An svg of the DN42 Logo is available here. Hosted by: BURBLE-MNT, GRMML-MNT, XUU-MNT, JAN-MNT, LARE-MNT, SARU-MNT, ANDROW-MNT, MARK22K-MNT | Accessible via: dn42, dn42.dev, dn42.eu, wiki.dn42.us, dn42.de (IPv6-only), dn42.cc (wiki-ng), dn42.wiki, dn42.pp.ua, dn42.obl.ong Last edited by **Marek Küthe**, 2024-02-28 14:31:15
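As a concrete illustration of the IPv4 range mentioned above (this sketch is not from the dn42 wiki itself), the following small Java program checks whether an address falls inside `172.20.0.0/14`, i.e. 172.20.0.0 through 172.23.255.255. The addresses in `main` are arbitrary examples.

```java
public class Dn42Range {
    // 172.20.0.0/14 covers 172.20.0.0 - 172.23.255.255 (the dn42 IPv4 space).
    static final int DN42_NET = toInt(172, 20, 0, 0);
    static final int PREFIX_LEN = 14;

    static int toInt(int a, int b, int c, int d) {
        return (a << 24) | (b << 16) | (c << 8) | d;
    }

    // True if the address falls inside 172.20.0.0/14.
    static boolean inDn42(int addr) {
        int mask = -1 << (32 - PREFIX_LEN);   // /14 netmask: 255.252.0.0
        return (addr & mask) == (DN42_NET & mask);
    }

    public static void main(String[] args) {
        System.out.println(inDn42(toInt(172, 22, 100, 1)));   // true
        System.out.println(inDn42(toInt(172, 24, 0, 1)));     // false
        System.out.println(inDn42(toInt(192, 168, 1, 1)));    // false
    }
}
```

The /14 mask keeps only the top 14 bits of the address, which is why everything from 172.20.x.x up to 172.23.x.x matches.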
true
true
true
null
2024-10-12 00:00:00
2024-02-28 00:00:00
null
null
null
null
null
null
23,593,253
https://metacpan.org/pod/release/XSAWYERX/perl-5.32.0/pod/perldelta.pod
perldelta
Sawyer X
# NAME perldelta - what is new for perl v5.32.0 # DESCRIPTION This document describes differences between the 5.30.0 release and the 5.32.0 release. If you are upgrading from an earlier release such as 5.28.0, first read perl5300delta, which describes differences between 5.28.0 and 5.30.0. # Core Enhancements ## The isa Operator A new experimental infix operator called `isa` tests whether a given object is an instance of a given class or a class derived from it: `if( $obj isa Package::Name ) { ... }` For more detail see "Class Instance Operator" in perlop. ## Unicode 13.0 is supported See https://www.unicode.org/versions/Unicode13.0.0/ for details. ## Chained comparisons capability Some comparison operators, as their associativity, *chain* with some operators of the same precedence (but never with operators of different precedence). `if ( $x < $y <= $z ) {...}` behaves exactly like: `if ( $x < $y && $y <= $z ) {...}` (assuming that `"$y"` is as simple a scalar as it looks.) You can read more about this in perlop under "Operator Precedence and Associativity" in perlop. ## New Unicode properties `Identifier_Status` and `Identifier_Type` supported Unicode is in the process of revising its regular expression requirements: https://www.unicode.org/draft/reports/tr18/tr18.html. As part of that they are wanting more properties to be exposed, ones that aren't part of the strict UCD (Unicode character database). These two are used for examining inputs for security purposes. Details on their usage is at https://www.unicode.org/reports/tr39/proposed.html. ## It is now possible to write `qr/\p{Name=...}/` , or `qr!\p{na=/(SMILING|GRINNING) FACE/}!` The Unicode Name property is now accessible in regular expression patterns, as an alternative to `\N{...}` . A comparison of the two methods is given in "Comparison of \N{...} and \p{name=...}" in perlunicode. The second example above shows that wildcard subpatterns are also usable in this property. See "Wildcards in Property Values" in perlunicode. ## Improvement of `POSIX::mblen()` , `mbtowc` , and `wctomb` The `POSIX::mblen()` , `mbtowc` , and `wctomb` functions now work on shift state locales and are thread-safe on C99 and above compilers when executed on a platform that has locale thread-safety; the length parameters are now optional. These functions are always executed under the current C language locale. (See perllocale.) Most locales are stateless, but a few, notably the very rarely encountered ISO 2022, maintain a state between calls to these functions. Previously the state was cleared on every call, but now the state is not reset unless the appropriate parameter is `undef` . On threaded perls, the C99 functions mbrlen(3), mbrtowc(3), and wcrtomb(3), when available, are substituted for the plain functions. This makes these functions thread-safe when executing on a locale thread-safe platform. The string length parameters in `mblen` and `mbtowc` are now optional; useful only if you wish to restrict the length parsed in the source string to less than the actual length. ## Alpha assertions are no longer experimental See "(*pla:pattern)" in perlre, "(*plb:pattern)" in perlre, "(*nla:pattern)" in perlre>, and "(*nlb:pattern)" in perlre. Use of these no longer generates a warning; existing code that disables the warning category `experimental::alpha_assertions` will continue to work without any changes needed. Enabling the category has no effect. ## Script runs are no longer experimental See "Script Runs" in perlre. 
Use of these no longer generates a warning; existing code that disables the warning category `experimental::script_run` will continue to work without any changes needed. Enabling the category has no effect. ## Feature checks are now faster Previously feature checks in the parser required a hash lookup when features were set outside of a feature bundle, this has been optimized to a bit mask check. [GH #17229] ## Perl is now developed on GitHub Perl is now developed on GitHub. You can find us at https://github.com/Perl/perl5. Non-security bugs should now be reported via GitHub. Security issues should continue to be reported as documented in perlsec. ## Compiled patterns can now be dumped before optimization This is primarily useful for tracking down bugs in the regular expression compiler. This dump happens on `-DDEBUGGING` perls, if you specify `-Drv` on the command line; or on any perl if the pattern is compiled within the scope of `use re qw(Debug DUMP_PRE_OPTIMIZE)` or `use re qw(Debug COMPILE EXTRA)` . (All but the second case display other information as well.) # Security ## [CVE-2020-10543] Buffer overflow caused by a crafted regular expression A signed `size_t` integer overflow in the storage space calculations for nested regular expression quantifiers could cause a heap buffer overflow in Perl's regular expression compiler that overwrites memory allocated after the regular expression storage space with attacker supplied data. The target system needs a sufficient amount of memory to allocate partial expansions of the nested quantifiers prior to the overflow occurring. This requirement is unlikely to be met on 64-bit systems. Discovered by: ManhND of The Tarantula Team, VinCSS (a member of Vingroup). ## [CVE-2020-10878] Integer overflow via malformed bytecode produced by a crafted regular expression Integer overflows in the calculation of offsets between instructions for the regular expression engine could cause corruption of the intermediate language state of a compiled regular expression. An attacker could abuse this behaviour to insert instructions into the compiled form of a Perl regular expression. Discovered by: Hugo van der Sanden and Slaven Rezic. ## [CVE-2020-12723] Buffer overflow caused by a crafted regular expression Recursive calls to `S_study_chunk()` by Perl's regular expression compiler to optimize the intermediate language representation of a regular expression could cause corruption of the intermediate language state of a compiled regular expression. Discovered by: Sergey Aleynikov. ## Additional Note An application written in Perl would only be vulnerable to any of the above flaws if it evaluates regular expressions supplied by the attacker. Evaluating regular expressions in this fashion is known to be dangerous since the regular expression engine does not protect against denial of service attacks in this usage scenario. # Incompatible Changes ## Certain pattern matching features are now prohibited in compiling Unicode property value wildcard subpatterns These few features are either inappropriate or interfere with the algorithm used to accomplish this task. The complete list is in "Wildcards in Property Values" in perlunicode. ## Unused functions `POSIX::mbstowcs` and `POSIX::wcstombs` are removed These functions could never have worked due to a defective interface specification. There is clearly no demand for them, given that no one has ever complained in the many years the functions were claimed to be available, hence so-called "support" for them is now dropped. 
## A bug fix for `(?[...])` may have caused some patterns to no longer compile See "Selected Bug Fixes". The heuristics previously used may have let some constructs compile (perhaps not with the programmer's intended effect) that should have been errors. None are known, but it is possible that some erroneous constructs no longer compile. `\p{`*user-defined*} properties now always override official Unicode ones *user-defined*} Previously, if and only if a user-defined property was declared prior to the compilation of the regular expression pattern that contains it, its definition was used instead of any official Unicode property with the same name. Now, it always overrides the official property. This change could break existing code that relied (likely unwittingly) on the previous behavior. Without this fix, if Unicode released a new version with a new property that happens to have the same name as the one you had long been using, your program would break when you upgraded to a perl that used that new Unicode version. See "User-Defined Character Properties" in perlunicode. [GH #17205] ## Modifiable variables are no longer permitted in constants Code like: ``` my $var; $sub = sub () { $var }; ``` where `$var` is referenced elsewhere in some sort of modifiable context now produces an exception when the sub is defined. This error can be avoided by adding a return to the sub definition: `$sub = sub () { return $var };` This has been deprecated since Perl 5.22. [perl #134138] ## Use of `vec` on strings with code points above 0xFF is forbidden Such strings are represented internally in UTF-8, and `vec` is a bit-oriented operation that will likely give unexpected results on those strings. This was deprecated in perl 5.28.0. ## Use of code points over 0xFF in string bitwise operators Some uses of these were already illegal after a previous deprecation cycle. The remaining uses are now prohibited, having been deprecated in perl 5.28.0. See perldeprecation. `Sys::Hostname::hostname()` does not accept arguments This usage was deprecated in perl 5.28.0 and is now fatal. ## Plain "0" string now treated as a number for range operator Previously a range `"0" .. "-1"` would produce a range of numeric strings from "0" through "99"; this now produces an empty list, just as `0 .. -1` does. This also means that `"0" .. "9"` now produces a list of integers, where previously it would produce a list of strings. This was due to a special case that treated strings starting with "0" as strings so ranges like `"00" .. "03"` produced `"00", "01", "02", "03"` , but didn't specially handle the string `"0"` . [perl #133695] `\K` now disallowed in look-ahead and look-behind assertions This was disallowed because it causes unexpected behaviour, and no-one could define what the desired behaviour should be. [perl #124256] # Performance Enhancements `my_strnlen` has been sped up for systems that don't have their own`strnlen` implementation.`grok_bin_oct_hex` (and so,`grok_bin` ,`grok_oct` , and`grok_hex` ) have been sped up.`grok_number_flags` has been sped up.`sort` is now noticeably faster in cases such as`sort {$a <=> $b}` or`sort {$b <=> $a}` . [GH #17608] # Modules and Pragmata ## Updated Modules and Pragmata Archive::Tar has been upgraded from version 2.32 to 2.36. autodie has been upgraded from version 2.29 to 2.32. B has been upgraded from version 1.76 to 1.80. B::Deparse has been upgraded from version 1.49 to 1.54. Benchmark has been upgraded from version 1.22 to 1.23. 
charnames has been upgraded from version 1.45 to 1.48. Class::Struct has been upgraded from version 0.65 to 0.66. Compress::Raw::Bzip2 has been upgraded from version 2.084 to 2.093. Compress::Raw::Zlib has been upgraded from version 2.084 to 2.093. CPAN has been upgraded from version 2.22 to 2.27. DB_File has been upgraded from version 1.843 to 1.853. Devel::PPPort has been upgraded from version 3.52 to 3.57. The test files generated on Win32 are now identical to when they are generated on POSIX-like systems. diagnostics has been upgraded from version 1.36 to 1.37. Digest::MD5 has been upgraded from version 2.55 to 2.55_01. Dumpvalue has been upgraded from version 1.18 to 1.21. Previously, when dumping elements of an array and encountering an undefined value, the string printed would have been `empty array` . This has been changed to what was apparently originally intended:`empty slot` .DynaLoader has been upgraded from version 1.45 to 1.47. Encode has been upgraded from version 3.01 to 3.06. encoding has been upgraded from version 2.22 to 3.00. English has been upgraded from version 1.10 to 1.11. Exporter has been upgraded from version 5.73 to 5.74. ExtUtils::CBuilder has been upgraded from version 0.280231 to 0.280234. ExtUtils::MakeMaker has been upgraded from version 7.34 to 7.44. feature has been upgraded from version 1.54 to 1.58. A new `indirect` feature has been added, which is enabled by default but allows turning off indirect object syntax.File::Find has been upgraded from version 1.36 to 1.37. On Win32, the tests no longer require either a file in the drive root directory, or a writable root directory. File::Glob has been upgraded from version 1.32 to 1.33. File::stat has been upgraded from version 1.08 to 1.09. Filter::Simple has been upgraded from version 0.95 to 0.96. Getopt::Long has been upgraded from version 2.5 to 2.51. Hash::Util has been upgraded from version 0.22 to 0.23. The Synopsis has been updated as the example code stopped working with newer perls. [GH #17399] I18N::Langinfo has been upgraded from version 0.18 to 0.19. I18N::LangTags has been upgraded from version 0.43 to 0.44. Document the `IGNORE_WIN32_LOCALE` environment variable.IO has been upgraded from version 1.40 to 1.43. IO::Socket no longer caches a zero protocol value, since this indicates that the implementation will select a protocol. This means that on platforms that don't implement `SO_PROTOCOL` for a given socket type the protocol method may return`undef` .The supplied *TO*is now always honoured on calls to the`send()` method. [perl #133936]IO-Compress has been upgraded from version 2.084 to 2.093. IPC::Cmd has been upgraded from version 1.02 to 1.04. IPC::Open3 has been upgraded from version 1.20 to 1.21. JSON::PP has been upgraded from version 4.02 to 4.04. Math::BigInt has been upgraded from version 1.999816 to 1.999818. Math::BigInt::FastCalc has been upgraded from version 0.5008 to 0.5009. Module::CoreList has been upgraded from version 5.20190522 to 5.20200620. Module::Load::Conditional has been upgraded from version 0.68 to 0.70. Module::Metadata has been upgraded from version 1.000036 to 1.000037. mro has been upgraded from version 1.22 to 1.23. Net::Ping has been upgraded from version 2.71 to 2.72. Opcode has been upgraded from version 1.43 to 1.47. open has been upgraded from version 1.11 to 1.12. overload has been upgraded from version 1.30 to 1.31. parent has been upgraded from version 0.237 to 0.238. perlfaq has been upgraded from version 5.20190126 to 5.20200523. 
PerlIO has been upgraded from version 1.10 to 1.11. PerlIO::encoding has been upgraded from version 0.27 to 0.28. PerlIO::via has been upgraded from version 0.17 to 0.18. Pod::Html has been upgraded from version 1.24 to 1.25. Pod::Simple has been upgraded from version 3.35 to 3.40. podlators has been upgraded from version 4.11 to 4.14. POSIX has been upgraded from version 1.88 to 1.94. re has been upgraded from version 0.37 to 0.40. Safe has been upgraded from version 2.40 to 2.41. Scalar::Util has been upgraded from version 1.50 to 1.55. SelfLoader has been upgraded from version 1.25 to 1.26. Socket has been upgraded from version 2.027 to 2.029. Storable has been upgraded from version 3.15 to 3.21. Use of `note()` from Test::More is now optional in tests. This works around a circular dependency with Test::More when installing on very old perls from CPAN.Vstring magic strings over 2GB are now disallowed. Regular expressions objects weren't properly counted for object id purposes on retrieve. This would corrupt the resulting structure, or cause a runtime error in some cases. [perl #134179] Sys::Hostname has been upgraded from version 1.22 to 1.23. Sys::Syslog has been upgraded from version 0.35 to 0.36. Term::ANSIColor has been upgraded from version 4.06 to 5.01. Test::Simple has been upgraded from version 1.302162 to 1.302175. Thread has been upgraded from version 3.04 to 3.05. Thread::Queue has been upgraded from version 3.13 to 3.14. threads has been upgraded from version 2.22 to 2.25. threads::shared has been upgraded from version 1.60 to 1.61. Tie::File has been upgraded from version 1.02 to 1.06. Tie::Hash::NamedCapture has been upgraded from version 0.10 to 0.13. Tie::Scalar has been upgraded from version 1.04 to 1.05. Tie::StdHandle has been upgraded from version 4.5 to 4.6. Time::HiRes has been upgraded from version 1.9760 to 1.9764. Removed obsolete code such as support for pre-5.6 perl and classic MacOS. [perl #134288] Time::Piece has been upgraded from version 1.33 to 1.3401. Unicode::Normalize has been upgraded from version 1.26 to 1.27. Unicode::UCD has been upgraded from version 0.72 to 0.75. VMS::Stdio has been upgraded from version 2.44 to 2.45. warnings has been upgraded from version 1.44 to 1.47. Win32 has been upgraded from version 0.52 to 0.53. Win32API::File has been upgraded from version 0.1203 to 0.1203_01. XS::APItest has been upgraded from version 1.00 to 1.09. ## Removed Modules and Pragmata Pod::Parser has been removed from the core distribution. It still is available for download from CPAN. This resolves [perl #119439]. # Documentation ## Changes to Existing Documentation We have attempted to update the documentation to reflect the changes listed in this document. If you find any we have missed, open an issue at https://github.com/Perl/perl5/issues. Additionally, the following selected changes have been made: ### perldebguts Simplify a few regnode definitions Update `BOUND` and`NBOUND` definitions.Add ANYOFHs regnode This node is like `ANYOFHb` , but is used when more than one leading byte is the same in all the matched code points.`ANYOFHb` is used to avoid having to convert from UTF-8 to code point for something that won't match. It checks that the first byte in the UTF-8 encoded target is the desired one, thus ruling out most of the possible code points. 
### perlapi `sv_2pvbyte` updated to mention it will croak if the SV cannot be downgraded.`sv_setpvn` updated to mention that the UTF-8 flag will not be changed by this function, and a terminating NUL byte is guaranteed.Documentation for `PL_phase` has been added.The documentation for `grok_bin` ,`grok_oct` , and`grok_hex` has been updated and clarified. ### perldiag Add documentation for experimental 'isa' operator (S experimental::isa) This warning is emitted if you use the ( `isa` ) operator. This operator is currently experimental and its behaviour may change in future releases of Perl. ### perlfunc `caller` - Like `__FILE__` and`__LINE__` , the filename and line number returned here may be altered by the mechanism described at "Plain Old Comments (Not!)" in perlsyn. `__FILE__` - It can be altered by the mechanism described at "Plain Old Comments (Not!)" in perlsyn. `__LINE__` - It can be altered by the mechanism described at "Plain Old Comments (Not!)" in perlsyn. `return` - Now mentions that you cannot return from `do BLOCK` . `open` - The `open()` section had been renovated significantly. ### perlguts No longer suggesting using perl's `malloc` . Modern system`malloc` is assumed to be much better than perl's implementation now.Documentation about *embed.fnc*flags has been removed.*embed.fnc*now has sufficient comments within it. Anyone changing that file will see those comments first, so entries here are now redundant.Updated documentation for `UTF8f` Added missing `=for apidoc` lines ### perlhacktips The differences between Perl strings and C strings are now detailed. ### perlintro The documentation for the repetition operator `x` have been clarified. [GH #17335] ### perlipc The documentation surrounding `open` and handle usage has been modernized to prefer 3-arg open and lexical variables instead of barewords.Various updates and fixes including making all examples strict-safe and replacing `-w` with`use warnings` . ### perlop 'isa' operator is experimental This is an experimental feature and is available when enabled by `use feature 'isa'` . It emits a warning in the`experimental::isa` category. ### perlpod Details of the various stacks within the perl interpreter are now explained here. Advice has been added regarding the usage of `Z<>` . ### perlport Update `timegm` example to use the correct year format*1970*instead of*70*. [GH #16431] ### perlreref Fix some typos. ### perlvar Now recommends stringifying `$]` and comparing it numerically. ### perlapi, perlintern Documentation has been added for several functions that were lacking it before. ### perlxs Suggest using `libffi` for simple library bindings. ### POSIX `setlocale` warning about threaded builds updated to note it does not apply on Perl 5.28.X and later.`Posix::SigSet->new(...)` updated to state it throws an error if any of the supplied signals cannot be added to the set. Additionally, the following selected changes have been made: ### Updating of links Links to the now defunct https://search.cpan.org site now point at the equivalent https://metacpan.org URL. [GH #17393] The man page for ExtUtils::XSSymSet is now only installed on VMS, which is the only platform the module is installed on. [GH #17424] URLs have been changed to `https://` and stale links have been updated.Where applicable, the URLs in the documentation have been moved from using the `http://` protocol to`https://` . 
This also affects the location of the bug tracker at https://rt.perl.org.Some links to OS/2 libraries, Address Sanitizer and other system tools had gone stale. These have been updated with working links. Some links to old email addresses on perl5-porters had gone stale. These have been updated with working links. # Diagnostics The following additions or changes have been made to diagnostic output, including warnings and fatal error messages. For the complete list of diagnostic messages, see perldiag. ## New Diagnostics ### New Errors Expecting interpolated extended charclass in regex; marked by <-- HERE in m/%s/ This is a replacement for several error messages listed under "Changes to Existing Diagnostics". `No digits found for %s literal` (F) No hexadecimal digits were found following `0x` or no binary digits were found following`0b` . ### New Warnings Code point 0x%X is not Unicode, and not portable This is actually not a new message, but it is now output when the warnings category `portable` is enabled.When raised during regular expression pattern compilation, the warning has extra text added at the end marking where precisely in the pattern it occurred. Non-hex character '%c' terminates \x early. Resolved as "%s" This replaces a warning that was much less specific, and which gave false information. This new warning parallels the similar already-existing one raised for `\o{}` . ## Changes to Existing Diagnostics Character following "\c" must be printable ASCII ...now has extra text added at the end, when raised during regular expression pattern compilation, marking where precisely in the pattern it occurred. - ...now has extra text added at the end, when raised during regular expression pattern compilation, marking where precisely in the pattern it occurred. - ...now has extra text added at the end, when raised during regular expression pattern compilation, marking where precisely in the pattern it occurred. "\c%c" is more clearly written simply as "%s" ...now has extra text added at the end, when raised during regular expression pattern compilation, marking where precisely in the pattern it occurred. Non-octal character '%c' terminates \o early. Resolved as "%s" ...now includes the phrase "terminates \o early", and has extra text added at the end, when raised during regular expression pattern compilation, marking where precisely in the pattern it occurred. In some instances the text of the resolution has been clarified. - As of Perl 5.32, this message is no longer generated. Instead, "Non-octal character '%c' terminates \o early. Resolved as "%s"" in perldiag is used instead. Use of code point 0x%s is not allowed; the permissible max is 0x%X Some instances of this message previously output the hex digits `A` ,`B` ,`C` ,`D` ,`E` , and`F` in lower case. Now they are all consistently upper case.The following three diagnostics have been removed, and replaced by `Expecting interpolated extended charclass in regex; marked by <-- HERE in m/%s/` :`Expecting close paren for nested extended charclass in regex; marked by <-- HERE in m/%s/` ,`Expecting close paren for wrapper for nested extended charclass in regex; marked by <-- HERE in m/%s/` , and`Expecting '(?flags:(?[...' 
in regex; marked by <-- HERE in m/%s/` .The `Code point 0x%X is not Unicode, and not portable` warning removed the line`Code points above 0xFFFF_FFFF require larger than a 32 bit word.` as code points that large are no longer legal on 32-bit platforms.- This error message has been slightly reformatted from the original `Can't use global %s in "%s"` , and in particular misleading error messages like`Can't use global $_ in "my"` are now rendered as`Can't use global $_ in subroutine signature` . Constants from lexical variables potentially modified elsewhere are no longer permitted This error message replaces the former `Constants from lexical variables potentially modified elsewhere are deprecated. This will not be allowed in Perl 5.32` to reflect the fact that this previously deprecated usage has now been transformed into an exception. The message's classification has also been updated from D (deprecated) to F (fatal).See also "Incompatible Changes". `\N{} here is restricted to one character` is now emitted in the same circumstances where previously`\N{} in inverted character class or as a range end-point is restricted to one character` was.This is due to new circumstances having been added in Perl 5.30 that weren't covered by the earlier wording. # Utility Changes ## perlbug The bug tracker homepage URL now points to GitHub. ## streamzip This is a new utility, included as part of an IO::Compress::Base upgrade. streamzip creates a zip file from stdin. The program will read data from stdin, compress it into a zip container and, by default, write a streamed zip file to stdout. # Configuration and Compilation *Configure* For clang++, add `#include <stdlib.h>` to Configure's probes for`futimes` ,`strtoll` ,`strtoul` ,`strtoull` ,`strtouq` , otherwise the probes would fail to compile.Use a compile and run test for `lchown` to satisfy clang++ which should more reliably detect it.For C++ compilers, add `#include <stdio.h>` to Configure's probes for`getpgrp` and`setpgrp` as they use printf and C++ compilers may fail compilation instead of just warning.Check if the compiler can handle inline attribute. Check for character data alignment. *Configure*now correctly handles gcc-10. Previously it was interpreting it as gcc-1 and turned on`-fpcc-struct-return` .Perl now no longer probes for `d_u32align` , defaulting to`define` on all platforms. This check was error-prone when it was done, which was on 32-bit platforms only. [perl #133495]Documentation and hints for building perl on Z/OS (native EBCDIC) have been updated. This is still a work in progress. A new probe for `malloc_usable_size` has been added.Improvements in *Configure*to detection in C++ and clang++. Work ongoing by Andy Dougherty. [perl #134171]*autodoc.pl*This tool that regenerates perlintern and perlapi has been overhauled significantly, restoring consistency in flags used in *embed.fnc*and Devel::PPPort and allowing removal of many redundant`=for apidoc` entries in code.The `ECHO` macro is now defined. This is used in a`dtrace` rule that was originally changed for FreeBSD, and the FreeBSD make apparently predefines it. The Solaris make does not predefine`ECHO` which broke this rule on Solaris. [perl #134218]Bison versions 3.1 through 3.4 are now supported. # Testing Tests were added and changed to reflect the other additions and changes in this release. Furthermore, these significant changes were made: *t/run/switches.t*no longer uses (and re-uses) the*tmpinplace/*directory under*t/*. This may prevent spurious failures. 
[GH #17424]Various bugs in `POSIX::mbtowc` were fixed. Potential races with other threads are now avoided, and previously the returned wide character could well be garbage.Various bugs in `POSIX::wctomb` were fixed. Potential races with other threads are now avoided, and previously it would segfault if the string parameter was shared or hadn't been pre-allocated with a string of sufficient length to hold the result.Certain test output of scalars containing control characters and Unicode has been fixed on EBCDIC. *t/charset_tools.pl*: Avoid some work on ASCII platforms.*t/re/regexp.t*: Speed up many regex tests on ASCII platform*t/re/pat.t*: Skip tests that don't work on EBCDIC. # Platform Support ## Discontinued Platforms ## Platform-Specific Notes - Linux - `cc` will be used to populate`plibpth` if`cc` is`clang` . [perl #134189] - NetBSD 8.0 - Fix compilation of Perl on NetBSD 8.0 with g++. [GH #17381] - Windows - The configuration for `ccflags` and`optimize` are now separate, as with POSIX platforms. [GH #17156]Support for building perl with Visual C++ 6.0 has now been removed. The locale tests could crash on Win32 due to a Windows bug, and separately due to the CRT throwing an exception if the locale name wasn't validly encoded in the current code page. For the second we now decode the locale name ourselves, and always decode it as UTF-8. [perl #133981] *t/op/magic.t*could fail if environment variables starting with`FOO` already existed.MYMALLOC (PERL_MALLOC) build has been fixed. - Solaris - `Configure` will now find recent versions of the Oracle Developer Studio compiler, which are found under`/opt/developerstudio*` .`Configure` now uses the detected types for`gethostby*` functions, allowing Perl to once again compile on certain configurations of Solaris. - VMS - With the release of the patch kit C99 V2.0, VSI has provided support for a number of previously-missing C99 features. On systems with that patch kit installed, Perl's configuration process will now detect the presence of the header `stdint.h` and the following functions:`fpclassify` ,`isblank` ,`isless` ,`llrint` ,`llrintl` ,`llround` ,`llroundl` ,`nearbyint` ,`round` ,`scalbn` , and`scalbnl` .`-Duse64bitint` is now the default on VMS. - z/OS - Perl 5.32 has been tested on z/OS 2.4, with the following caveats: Only static builds (the default) build reliably When using locales, z/OS does not handle the `LC_MESSAGES` category properly, so when compiling perl, you should add the following to your*Configure*options`./Configure <other options> -Accflags=-DNO_LOCALE_MESSAGES` z/OS does not support locales with threads, so when compiling a threaded perl, you should add the following to your *Configure*options`./Configure <other Configure options> -Accflags=-DNO_LOCALE` Some CPAN modules that are shipped with perl fail at least one of their self-tests. These are: Archive::Tar, Config::Perl::V, CPAN::Meta, CPAN::Meta::YAML, Digest::MD5, Digest::SHA, Encode, ExtUtils::MakeMaker, ExtUtils::Manifest, HTTP::Tiny, IO::Compress, IPC::Cmd, JSON::PP, libnet, MIME::Base64, Module::Metadata, PerlIO::via-QuotedPrint, Pod::Checker, podlators, Pod::Simple, Socket, and Test::Harness. The causes of the failures range from the self-test itself is flawed, and the module actually works fine, up to the module doesn't work at all on EBCDIC platforms. 
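The perlop and perldiag entries above describe the new experimental `isa` infix operator. A minimal sketch of using it on 5.32 (the `Animal` and `Dog` packages are illustrative assumptions, not part of the release notes):

```perl
use strict;
use warnings;
use v5.32;
use feature 'isa';
no warnings 'experimental::isa';    # the operator warns in this category by default

package Animal { sub new { return bless {}, shift } }
package Dog    { our @ISA = ('Animal'); }

my $pet = Dog->new;                 # Dog inherits Animal::new

print "a dog\n"       if $pet isa Dog;                    # true: exact class
print "an animal\n"   if $pet isa Animal;                 # true: follows inheritance, like ->isa()
print "not blessed\n" unless "plain string" isa Animal;   # unblessed values are simply false
```

Unlike calling the `isa` method on a value that might be undefined, the infix operator just evaluates to false for undefined or unblessed operands rather than dying.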
# Internal Changes `savepvn` 's len parameter is now a`Size_t` instead of an`I32` since we can handle longer strings than 31 bits.The lexer ( `Perl_yylex()` in*toke.c*) was previously a single 4100-line function, relying heavily on`goto` and a lot of widely-scoped local variables to do its work. It has now been pulled apart into a few dozen smaller static functions; the largest remaining chunk (`yyl_word_or_keyword()` ) is a little over 900 lines, and consists of a single`switch` statement, all of whose`case` groups are independent. This should be much easier to understand and maintain.The OS-level signal handlers and type (Sighandler_t) used by the perl core were declared as having three parameters, but the OS was always told to call them with one argument. This has been fixed by declaring them to have one parameter. See the merge commit `v5.31.5-346-g116e19abbf` for full details.The code that handles `tr///` has been extensively revised, fixing various bugs, especially when the source and/or replacement strings contain characters whose code points are above 255. Some of the bugs were undocumented, one being that under some circumstances (but not all) with`/s` , the squeezing was done based on the source, rather than the replacement. A documented bug that got fixed was [perl #125493].A new macro for XS writers dealing with UTF-8-encoded Unicode strings has been created " `UTF8_CHK_SKIP` " in perlapi that is safer in the face of malformed UTF-8 input than "`UTF8_SKIP` " in perlapi (but not as safe as "`UTF8_SAFE_SKIP` " in perlapi). It won't read past a NUL character. It has been backported in Devel::PPPort 3.55 and later.Added the `PL_curstackinfo->si_cxsubix` field. This records the stack index of the most recently pushed sub/format/eval context. It is set and restored automatically by`cx_pushsub()` ,`cx_popsub()` etc., but would need to be manually managed if you do any unusual manipulation of the context stack.Various macros dealing with character type classification and changing case where the input is encoded in UTF-8 now require an extra parameter to prevent potential reads beyond the end of the buffer. Use of these has generated a deprecation warning since Perl 5.26. Details are in "In XS code, use of various macros dealing with UTF-8." in perldeprecation A new parser function parse_subsignature() allows a keyword plugin to parse a subroutine signature while `use feature 'signatures'` is in effect. This allows custom keywords to implement semantics similar to regular`sub` declarations that include signatures. [perl #132474]Since on some platforms we need to hold a mutex when temporarily switching locales, new macros ( `STORE_LC_NUMERIC_SET_TO_NEEDED_IN` ,`WITH_LC_NUMERIC_SET_TO_NEEDED` and`WITH_LC_NUMERIC_SET_TO_NEEDED_IN` ) have been added to make it easier to do this safely and efficiently as part of [perl #134172].The memory bookkeeping overhead for allocating an OP structure has been reduced by 8 bytes per OP on 64-bit systems. eval_pv() no longer stringifies the exception when `croak_on_error` is true. [perl #134175]The PERL_DESTRUCT_LEVEL environment variable was formerly only honoured on perl binaries built with DEBUGGING support. It is now checked on all perl builds. Its normal use is to force perl to individually free every block of memory which it has allocated before exiting, which is useful when using automated leak detection tools such as valgrind. The API eval_sv() now accepts a `G_RETHROW` flag. 
If this flag is set and an exception is thrown while compiling or executing the supplied code, it will be rethrown, and eval_sv() will not return. [perl #134177]As part of the fix for [perl #2754] perl_parse() now returns non-zero if exit(0) is called in a `BEGIN` ,`UNITCHECK` or`CHECK` block.Most functions which recursively walked an op tree during compilation have been made non-recursive. This avoids SEGVs from stack overflow when the op tree is deeply nested, such as `$n == 1 ? "one" : $n == 2 ? "two" : ....` (especially in code which is auto-generated).This is particularly noticeable where the code is compiled within a separate thread, as threads tend to have small stacks by default. # Selected Bug Fixes Previously "require" in perlfunc would only treat the special built-in SV `&PL_sv_undef` as a value in`%INC` as if a previous`require` has failed, treating other undefined SVs as if the previous`require` has succeeded. This could cause unexpected success from`require` e.g., on`local %INC = %INC;` . This has been fixed. [GH #17428]`(?{...})` eval groups in regular expressions no longer unintentionally trigger "EVAL without pos change exceeded limit in regex" [GH #17490].`(?[...])` extended bracketed character classes do not wrongly raise an error on some cases where a previously-compiled such class is interpolated into another. The heuristics previously used have been replaced by a reliable method, and hence the diagnostics generated have changed. See "Diagnostics".The debug display (say by specifying `-Dr` or`use re` (with appropriate options) of compiled Unicode property wildcard subpatterns no longer has extraneous output.Fix an assertion failure in the regular expression engine. [GH #17372] Fix coredump in pp_hot.c after `B::UNOP_AUX::aux_list()` . [GH #17301]Loading IO is now threadsafe. [GH #14816] `\p{user-defined}` overrides official Unicode [GH #17025]Prior to this patch, the override was only sometimes in effect. Properly handle filled `/il` regnodes and multi-char foldsCompilation error during make minitest [GH #17293] Move the implementation of `%-` ,`%+` into core.Read beyond buffer in `grok_inf_nan` [GH #17370]Workaround glibc bug with `LC_MESSAGES` [GH #17081]`printf()` or`sprintf()` with the`%n` format could cause a panic on debugging builds, or report an incorrectly cached length value when producing`SVfUTF8` flagged strings. [GH #17221]The tokenizer has been extensively refactored. [GH #17241] [GH #17189] `use strict "subs"` is now enforced for bareword constants optimized into a`multiconcat` operator. [GH #17254]A memory leak in regular expression patterns has been fixed. [GH #17218] Perl no longer treats strings starting with "0x" or "0b" as hex or binary numbers respectively when converting a string to a number. This reverts a change in behaviour inadvertently introduced in perl 5.30.0 intended to improve precision when converting a string to a floating point number. [perl #134230] Matching a non- `SVf_UTF8` string against a regular expression containing unicode literals could leak a SV on each match attempt. [perl #134390]Overloads for octal and binary floating point literals were always passed a string with a `0x` prefix instead of the appropriate`0` or`0b` prefix. [perl #125557]`$@ = 100; die;` now correctly propagates the 100 as an exception instead of ignoring it. [perl #134291]`0 0x@` no longer asserts in S_no_op(). 
[perl #134310]Exceptions thrown while `$@` is read-only could result in infinite recursion as perl tried to update`$@` , which throws another exception, resulting in a stack overflow. Perl now replaces`$@` with a copy if it's not a simple writable SV. [perl #134266]Setting `$)` now properly sets supplementary group ids if you have the necessary privileges. [perl #134169]close() on a pipe now preemptively clears the PerlIO object from the IO SV. This prevents a second attempt to close the already closed PerlIO object if a signal handler calls die() or exit() while close() is waiting for the child process to complete. [perl #122112] `sprintf("%.*a", -10000, $x)` would cause a buffer overflow due to mishandling of the negative precision value. [perl #134008]scalar() on a reference could cause an erroneous assertion failure during compilation. [perl #134045] `%{^CAPTURE_ALL}` is now an alias to`%-` as documented, rather than incorrectly an alias for`%+` . [perl #131867]`%{^CAPTURE}` didn't work if`@{^CAPTURE}` was mentioned first. Similarly for`%{^CAPTURE_ALL}` and`@{^CAPTURE_ALL}` , though`@{^CAPTURE_ALL}` currently isn't used. [perl #134193]Extraordinarily large (over 2GB) floating point format widths could cause an integer overflow in the underlying call to snprintf(), resulting in an assertion. Formatted floating point widths are now limited to the range of int, the return value of snprintf(). [perl #133913] Parsing the following constructs within a sub-parse (such as with `"${code here}"` or`s/.../code here/e` ) has changed to match how they're parsed normally:`print $fh ...` no longer produces a syntax error.Code like `s/.../ ${time} /e` now properly produces an "Ambiguous use of ${time} resolved to $time at ..." warning when warnings are enabled.`@x {"a"}` (with the space) in a sub-parse now properly produces a "better written as" warning when warnings are enabled.Attributes can now be used in a sub-parse. [perl #133850] Incomplete hex and binary literals like `0x` and`0b` are now treated as if the`x` or`b` is part of the next token. [perl #134125]A spurious `)` in a subparse, such as in`s/.../code here/e` or`"...${code here}"` , no longer confuses the parser.Previously a subparse was bracketed with generated `(` and`)` tokens, so a spurious`)` would close the construct without doing the normal subparse clean up, confusing the parser and possible causing an assertion failure.Such constructs are now surrounded by artificial tokens that can't be included in the source. [perl #130585] Reference assignment of a sub, such as `\&foo = \&bar;` , silently did nothing in the`main::` package. [perl #134072]sv_gets() now recovers better if the target SV is modified by a signal handler. [perl #134035] `readline @foo` now evaluates`@foo` in scalar context. Previously it would be evaluated in list context, and since readline() pops only one argument from the stack, the stack could underflow, or be left with unexpected values on the stack. [perl #133989]Parsing incomplete hex or binary literals was changed in 5.31.1 to treat such a literal as just the 0, leaving the following `x` or`b` to be parsed as part of the next token. This could lead to some silent changes in behaviour, so now incomplete hex or binary literals produce a fatal error. [perl #134125]eval_pv()'s *croak_on_error*flag will now throw even if the exception is a false overloaded value. [perl #134177]`INIT` blocks and the program itself are no longer run if exit(0) is called within a`BEGIN` ,`UNITCHECK` or`CHECK` block. 
[perl #2754]`open my $fh, ">>+", undef` now opens the temporary file in append mode: writes will seek to the end of file before writing. [perl #134221]Fixed a SEGV when searching for the source of an uninitialized value warning on an op whose subtree includes an OP_MULTIDEREF. [perl #134275] # Obituary Jeff Goff (JGOFF or DrForr), an integral part of the Perl and Raku communities and a dear friend to all of us, has passed away on March 13th, 2020. DrForr was a prominent member of the communities, attending and speaking at countless events, contributing to numerous projects, and assisting and helping in any way he could. His passing leaves a hole in our hearts and in our communities and he will be sorely missed. # Acknowledgements Perl 5.32.0 represents approximately 13 months of development since Perl 5.30.0 and contains approximately 220,000 lines of changes across 1,800 files from 89 authors. Excluding auto-generated files, documentation and release tools, there were approximately 140,000 lines of changes to 880 .pm, .t, .c and .h files. Perl continues to flourish into its fourth decade thanks to a vibrant community of users and developers. The following people are known to have contributed the improvements that became Perl 5.32.0: Aaron Crane, Alberto Simões, Alexandr Savca, Andreas König, Andrew Fresh, Andy Dougherty, Ask Bjørn Hansen, Atsushi Sugawara, Bernhard M. Wiedemann, brian d foy, Bryan Stenson, Chad Granum, Chase Whitener, Chris 'BinGOs' Williams, Craig A. Berry, Dagfinn Ilmari Mannsåker, Dan Book, Daniel Dragan, Dan Kogai, Dave Cross, Dave Rolsky, David Cantrell, David Mitchell, Dominic Hargreaves, E. Choroba, Felipe Gasper, Florian Weimer, Graham Knop, Håkon Hægland, Hauke D, H.Merijn Brand, Hugo van der Sanden, Ichinose Shogo, James E Keenan, Jason McIntosh, Jerome Duval, Johan Vromans, John Lightsey, John Paul Adrian Glaubitz, Kang-min Liu, Karen Etheridge, Karl Williamson, Leon Timmermans, Manuel Mausz, Marc Green, Matthew Horsfall, Matt Turner, Max Maischein, Michael Haardt, Nicholas Clark, Nicolas R., Niko Tyni, Pali, Paul Evans, Paul Johnson, Paul Marquess, Peter Eisentraut, Peter John Acklam, Peter Oliver, Petr Písař, Renee Baecker, Ricardo Signes, Richard Leach, Russ Allbery, Samuel Smith, Santtu Ojanperä, Sawyer X, Sergey Aleynikov, Sergiy Borodych, Shirakata Kentaro, Shlomi Fish, Sisyphus, Slaven Rezic, Smylers, Stefan Seifert, Steve Hay, Steve Peters, Svyatoslav, Thibault Duponchelle, Todd Rinaldo, Tomasz Konojacki, Tom Hukins, Tony Cook, Unicode Consortium, VanL, Vickenty Fesunov, Vitali Peil, Yves Orton, Zefram. The list above is almost certainly incomplete as it is automatically generated from version control history. In particular, it does not include the names of the (very much appreciated) contributors who reported issues to the Perl bug tracker. Many of the changes included in this version originated in the CPAN modules included in Perl's core. We're grateful to the entire CPAN community for helping Perl to flourish. For a more complete list of all of Perl's historical contributors, please see the *AUTHORS* file in the Perl source distribution. # Reporting Bugs If you find what you think is a bug, you might check the perl bug database at https://github.com/Perl/perl5/issues. There may also be information at http://www.perl.org/, the Perl Home Page. If you believe you have an unreported bug, please open an issue at https://github.com/Perl/perl5/issues. Be sure to trim your bug down to a tiny but sufficient test case. 
If the bug you are reporting has security implications which make it inappropriate to send to a public issue tracker, then see "SECURITY VULNERABILITY CONTACT INFORMATION" in perlsec for details of how to report the issue. # Give Thanks If you wish to thank the Perl 5 Porters for the work we had done in Perl 5, you can do so by running the `perlthanks` program: `perlthanks` This will send an email to the Perl 5 Porters list with your show of thanks. # SEE ALSO The *Changes* file for an explanation of how to view exhaustive details on what changed. The *INSTALL* file for how to build Perl. The *README* file for general stuff. The *Artistic* and *Copying* files for copyright information.
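One of the bug fixes listed above is that `open my $fh, ">>+", undef` now opens its anonymous temporary file in append mode. A small sketch of the behaviour described there (the expected output in the comments follows from those semantics):

```perl
use strict;
use warnings;

# An undef filename gives an anonymous temporary file; with ">>+" it is
# readable and writable, and every write now seeks to end-of-file first.
open my $fh, '>>+', undef or die "cannot open anonymous temp file: $!";

print {$fh} "first\n";
seek $fh, 0, 0;             # rewind...
print {$fh} "second\n";     # ...but append mode still writes at end-of-file

seek $fh, 0, 0;
print while <$fh>;          # prints "first" then "second", in that order
```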
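Another fix above restores the traditional string-to-number behaviour for strings that merely look like hex or binary literals, which 5.30.0 had inadvertently changed. A quick sketch of the distinction (the values in the comments follow from the behaviour described above):

```perl
use strict;
use warnings;

my $str = "0x20";

# Implicit numification stops at the first non-numeric character, so this is 0
# (and warns that "0x20" isn't numeric) -- it is not treated as hex 32:
my $implicit = $str + 0;

# To interpret hex or binary strings, be explicit:
my $from_hex = hex($str);        # 32
my $from_bin = oct("0b101");     # 5; oct() understands 0x, 0b and leading-0 prefixes

printf "implicit=%d hex=%d bin=%d\n", $implicit, $from_hex, $from_bin;

# Note: this is about *strings*. An incomplete hex or binary *literal* in source
# code, such as a bare 0x or 0b, is now a fatal "No digits found" error (see above).
```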
true
true
true
what is new for perl v5.32.0
2024-10-12 00:00:00
2020-06-20 00:00:00
https://metacpan.org/sta…/images/dots.png
article
metacpan.org
MetaCPAN
null
null
25,988,153
https://twitter.com/MaxCRoser/status/1356203389630218243
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
6,933,102
http://bernsteinbear.com/blog/cicada/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
24,462,569
https://ssteo.github.io/2019-cybercentral/
The sweet and the bitter of cloud computing
null
**Sze Siong Teo | CyberCentral Summit 2019**

Transcript available at: https://www.linkedin.com/pulse/sweet-bitter-cloud-computing-sze-siong-teo/

- programming - 22 years
- devops - 5 years
- security - 8 years
- industry exp. - 14 years

#1 Credential leak

- https://hackerone.com/github/hacktivity (as of 24th March 2019)
- https://hackerone.com/gitlab/hacktivity (as of 24th March 2019)

#2 Rogue supply chain

#3 Misconfigured cloud resources

**Brute force with SecLists on below URL format:** http://*bucket_name*.s3-

**Same approach for filenames when directory listing not available:**

**(Under the hood: aws s3 ls [target_bucket_name])**

**S3Scanner** - https://github.com/sa7mon/S3Scanner

*“The strategy around Zero Trust boils down to don’t trust anyone. We’re talking about, ‘Let’s cut off all access until the network knows who you are. Don’t allow access to IP addresses, machines, etc. until you know who that user is and whether they’re authorized.’”*
true
true
true
null
2024-10-12 00:00:00
2018-12-09 00:00:00
null
null
null
null
null
null
31,107,390
http://jpkoning.blogspot.com/2022/04/a-sound-debasement.html
A sound debasement
JP Koning
An imitation English half noble issued by Philip the Bold, Duke of Burgundy, 1384-1404 [source] | [This is a republication of an article I originally wrote for the Sound Money Project. When we look back at old coinage systems, our knee-jerk reaction to the periodic debasements that these systems experienced is "ew, that's gross." But things were considerably more complex than that. This article tells the story of a healthy, or wise, coin debasement — Henry IV's debasement of the gold noble during the so-called "war of the gold nobles" between England and Burgundy in the late 1300s and early 1400s.] **A Sound Debasement** In his excellent article on medieval coinage, Eric Tymoigne makes the seemingly paradoxical claim that “debasements helped preserve a healthy monetary system.” A debasement of the coinage was the intentional reduction in the gold or silver content of a coin by the monarch by diminishing either the coin’s weight or its fineness. I’m going to second Tymoigne’s paradoxical statement and provide a specific example of how a debasement might have been a sound monetary decision. First, we need to review the basics of medieval coinage. In medieval times, any member of the public could bring raw silver or gold to the monarch’s mint to be coined. If a merchant brought a pound of silver bullion to the mint, this silver would be combined with base metals like copper to provide strength and from this mix a fixed quantity of fresh pennies — say 40 — would be produced. These 40 pennies would contain a little less than a pound of silver since the monarch extracted a fee for the mint’s efforts. The merchant could then spend these 40 new pennies into circulation. Coins were generally accepted by *tale*, or at their face value, rather than by weight. Shopkeepers simply looked at the markings on the face of the coin to verify its authenticity rather than laboriously weighing and assaying it. This was the whole point of having a system of coinage, after all: to speed up the process of transacting. As long as the monarch of the realm continued to mint the same fixed quantity of coins from a given weight of silver or gold, the standard would remain undebased. Sometimes, however, “coin wars” erupted between monarchs of different realms, the aggressors minting inferior copies of their victims’ coins. Since these wars hurt the domestic monetary system of the victim, some sort of response was necessary. One of the best lines of defense against an aggressive counterfeiter was a debasement. John Munro, an expert in medieval coinage, recounts the story of the “war of the gold nobles,” a coin war that broke out in 1388 when the Flemish Duke Philip the Bold began to mint decent imitations of the English gold noble. Flanders, comprising parts of modern-day Belgium and northern France, was a major center of trade and commerce on the Continent. Both the weight and fineness of Philip’s imitations were less than those of the original English noble. According to Munro’s calculations, by bringing a *marc de Troyes* of gold (1 *marc de Troyes* = 244.753 grams) to Philip’s mint in Bruges, a member of the public could get 31.163 counterfeit nobles. But if that same amount of gold were brought to the London mint, it would be coined into just 30.951 English nobles. Given that more Flemish nobles were cut from the same *marc* of gold than English nobles, each Flemish noble contained a little bit less of the yellow metal. Philip’s “bad” nobles soon began pushing out “good” English nobles, an instance of Gresham’s law. 
Given Philip’s offer to produce more nobles from a given amount of raw gold, it made a lot of sense for merchants to ship fine gold across the English Channel to Philip’s mints in Bruges and Ghent rather than bringing it to the London mint. After all, any merchant who did so got an extra 0.212 nobles for 244.753 grams of the gold they owned. By bringing the fakes back to England, merchants could buy around 1 percent more goods and services than they otherwise could. After all, Philip the Bold’s fake nobles were indistinguishable from real ones, so English shopkeepers accepted them at the same rate as legitimate coins. English nobles steadily disappeared as they were hoarded, melted down, or exported. Why spend a “good” coin — one that has more gold in it — when you can buy the exact same amount of goods with a lookalike that has less gold in it? Philip’s motivation for starting the war of the gold nobles was profit. By creating a decent knock-off of the English noble that had less gold in it, though not noticeably so, Philip provided a financial incentive for merchants to bring gold to his mints rather than competing English mints. Like all monarchs, Philip charged a toll on the amount of physical precious metals passing through his mints. So as throughput increased, so did his revenues. The health of the English monetary system deteriorated thanks to the coin war. With a mixture of similar but non-fungible coins in circulation, there would have been an erosion in the degree of trust the public had in the ability of a given noble to serve as a faithful representation of the official unit of account. Nor was the system fair, given that one part of the population (people who had enough resources to access fake coins) profited off the other part (people who did not have access). Finally, when Gresham’s law hits, crippling coin shortages can appear as the good coin is rapidly removed but bad coins can’t fill the vacuum fast enough. The English king’s efforts to ban Philip’s nobles had little effect. After all, gold coins have high value-to-weight ratios and are easy to smuggle. One line of defense remained: a debasement. In 1411, some 20 years after Philip the Bold had launched his first counterfeit, King Henry IV of England announced a reduction in the weight — and thus the gold content — of the English noble. This finally resolved the war of the nobles, says Munro. By reducing the noble’s gold content so that it was more in line with the gold content of the Flemish fakes, the English noble lost its “good” status. Merchants no longer had an incentive to visit Philip’s mints to get counterfeits, and English nobles once again circulated. The health of the English coinage system improved. We shouldn’t assume that all medieval debasements constituted good monetary policy. There were many coin debasements that were purely selfish efforts designed to provide the monarch with profits, often to fight petty wars with other monarchs. These selfish debasements hurt the coinage system since they reduced the capacity of coins to serve as trustworthy measuring sticks. As Munro points out, each medieval debasement needs to be analyzed separately to determine whether it was an attempt to salvage the monetary system or an attempt to profit. So why didn't Philip respond by knocking another percent off the weight of his coins? Presumably it would be just as undetectable and efficient the second time. ReplyDeletePresumably these coinage wars were embedded within the broader political relationships of the time. 
The English would have known that counterfeiting was occurring in Burgundy. If the Burgundians wanted good, or at least better, relationships with the English, the price might have included giving up on producing inferior imitations of the English gold noble. To be sure, you'd have to dig into the politics of the day.

If you're competing like that you end up giving more and more coins for the same amount of gold, while you need to keep the metal ratios similar enough to the other person's coin that your coin doesn't become distinguishable. I would think this would either become unprofitable, or there would be a slow decline in the coin's worth until no one wants it anymore.

A coin war is definitely not a healthy long-term pattern. As you say, the coin steadily loses value, and this hurts the population's trust in the monetary system. I suspect that over time, as states grew in power, they sought to solve these problems with diplomacy. And developments in coin production technology would have made it harder for casual counterfeiters to make good imitations.

> I would think this would either become unprofitable, or there would be a slow decline in the coin's worth until no one wants it anymore.

One percent can't hurt, right? Anyway the first one-percent drop worked for two decades; at that rate no individual ruler is going to see any major inflationary effect, but he's going to see lots of that sweet, sweet coin rolling in to his mint. Even if it's not as good as it was in his grandfather's day, who cares?

> developments in coin production technology

That can't well apply to this specific case, unless there was some striking breakthrough in the twenty-year period. And if there was, you'd think Henry would have used it.

"That can't well apply to this specific case, unless there was some striking breakthrough in the twenty-year period. And if there was, you'd think Henry would have used it."

I agree, probably not. Mind you, it's not just striking that determines a coin's counterfeitability. It's also the skill of the engravers and the quality of their engraving work. So a king who wanted an extra defence in a coin war might conceivably try to monopolize the best engraver.
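As a quick check of the mint figures quoted in the article above (my own arithmetic, using only the numbers given there):

```latex
\[
  31.163 - 30.951 = 0.212 \ \text{extra nobles per marc de Troyes}, \qquad
  \frac{0.212}{30.951} \approx 0.00685 \approx 0.7\%
\]
```

which is the "around 1 percent" advantage in purchasing power that the article attributes to minting at Bruges rather than London.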
true
true
true
An imitation English half noble issued by Philip the Bold, Duke of Burgundy, 1384-1404 [ source ] [This is a republication of an article I ...
2024-10-12 00:00:00
2022-04-15 00:00:00
https://blogger.googleus…lip_the_Bold.jpg
null
blogspot.com
jpkoning.blogspot.com
null
null
26,026,778
https://www.nytimes.com/2021/02/04/travel/coronavirus-vaccine-passports.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
879,556
http://www.nytimes.com/2009/10/13/opinion/13brooks.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,883,790
http://www.economist.com/news/finance-and-economics/21657458-price-comparison-websites-should-help-lower-prices-left-unchecked-they-may
Costly comparison
null
# Costly comparison ## Price-comparison websites should help lower prices. But left unchecked, they may raise them AT FIRST glance, price-comparison websites are an example of capitalism at its best. Savvy consumers can use them to hunt down the best available deal for insurance, electricity or a mortgage. Firms providing such items, terrified of losing customers, feel an obligation to improve their offerings all the time. But recent theory and practice suggest the reality is more complex: comparison sites are simultaneously friends and foes of competition. This article appeared in the Finance & economics section of the print edition under the headline “Costly comparison”
true
true
true
Price-comparison websites should help lower prices. But left unchecked, they may raise them
2024-10-12 00:00:00
2015-07-09 00:00:00
https://www.economist.co…711_FND000_1.jpg
Article
economist.com
The Economist
null
null
5,034,089
http://www.phoronix.com/scan.php?page=news_item&px=MTI3MDk
Portable OpenCL 0.7 Improves On OpenCL 1.2
Michael Larabel
# Portable OpenCL 0.7 Improves On OpenCL 1.2 Version 0.7 of POCL, the Portable OpenCL implementation targeting OpenCL 1.2 compliance, has been officially released. Portable OpenCL aims to be open-source, very portable, and improving performance through compiler optimizations and reducing target-dependent manual optimizations. Portable OpenCL was released in 2011 and released last August was Portable OpenCL 0.6 that began to implement the OpenCL 1.2 specification. POCL is built around the LLVM compiler infrastructure. Portable OpenCL 0.7 introduces support for LLVM 3.2 (the latest LLVM release from last month), support for generating the work group functions using simple/parallel loop structures, fixes for POCL on PowerPC32/PowerPC64/ARMv7, and initial Cell SPU support. The Cell SPU back-end is still very experimental and meant as an example of a heterogeneous POCL device driver, though with LLVM 3.2 the Cell back-end was dropped. In terms of the OpenCL 1.2 support, Portable OpenCL 0.7 doesn't yet implement the full specification and there are known bugs. However, POCL 0.7 is ready for wider-scale testing and is passing OpenCL tests from ViennaCL, Rodinia, Parboil, and the OpenCL Programming Guide samples as well as those from the AMD APP SDK. The Portable OpenCL 0.7 release announcement can be found on the LLVM mailing list. The POCL project is hosted on SourceForge. Interestingly, the development of the Portable OpenCL 0.7 release was sponsored by Nokia, namely the Radio Implementation Research Team from Nokia Research Center.
true
true
true
Version 0.7 of POCL, the Portable OpenCL implementation targeting OpenCL 1.2 compliance, has been officially released.
2024-10-12 00:00:00
2013-01-09 00:00:00
null
null
null
Phoronix
null
null
5,138,920
http://www.theregister.co.uk/2013/01/30/youtube_subscriptions_coming_says_report/
YouTube's hilarious cat videos could soon cost you $5 a month
Kelly Fiveash
# YouTube's hilarious cat videos could soon cost you $5 a month ## Ad giant Google 'experiments' with paid-for subscriptions YouTube is reportedly "experimenting" with the idea of charging people to watch some of the videos on its website. Google, which operates the vast library of funny cat footage, has asked 25 or so producers to put forward applications to create channels of videos that would cost viewers $1 to $5 a month to access. This is according to Advertising Age, which cited multiple people familiar with the dealings. The same report added that YouTube execs are also mulling over applying what in effect would be pay-per-view fees for live events, content libraries, self-help or financial advice programmes served up by the website. A Google spokesman told the magazine: We have long maintained that different content requires different types of payment models. The important thing is that, regardless of the model, our creators succeed on the platform. There are a lot of our content creators that think they would benefit from subscriptions, so we're looking at that. YouTube could start charging for such content as soon as this spring, apparently, and is likely to split the subscription revenues 45-55 favouring the filmmakers - this is similar to how money from advertising on free-to-watch videos is divvied up between the web giant and its content-uploading users. ®
true
true
true
Ad giant Google 'experiments' with paid-for subscriptions
2024-10-12 00:00:00
2013-01-30 00:00:00
null
article
theregister.com
The Register
null
null
20,400,669
http://mrmrs.cc/writing/chaos-design/
Chaos Design
null
## Chaos Design Why did we even invent computers in the first place? “…These facts seemed to me to throw some light on the origin of species—that mystery of mysteries, as it has been called by one of our greatest philosophers This excerpt is from the first paragraph in **The Origin of Species**. I’ve highlighted the word philosopher here because it’s easy to miss. But here is Charles Darwin giving a shout out to this unnamed person in the opening paragraph of his most seminal work. It turns out, he was referring to this guy, Sir John Herschel. In 1831 Herschel had authored A Preliminary Discourse on the Study of Natural Philosophy. A book that heavily influenced Darwin’s approach to science. So much so, in 1836 after 4 years of traveling on the HMS Beagle, Darwin wanted to do nothing more than visit Herschel in South Africa. Herschel was living in South Africa drawing and cataloging plants with his wife, to get away from the fast paced London lifestyle of the early 1800’s he could no longer take. Historians have noted how frustrating, that while Darwin kept a fairly detailed journal, there is no record that covers extensively what was discussed during their hillside chats. What we do know, is that during their meeting “Herschel inspired Darwin to apply the critical analysis of data associated with the physical sciences to the emerging life sciences…” Herschel himself was an accomplished astronomer. His earlier writing influenced generations of scientists, Darwin included. Scientific historians have noted “…astronomy has historically led the way in the development of scientific methodology, later applied to other disciplines.” We think of science as being a mature discipline but in reality Biology didn’t become mature until the mid 19th century. So here is Herschel, urging Darwin to study and borrow methodologies from other disciplines to advance his own. About a year after meeting with Herschel - Darwin drew this initial sketch, with the note “I think.” 23 years after their hillside chat, Darwin finally published the Origin of Species. Upon publishing, he sent a copy to Herschel with a note about Herschel’s influence on his work. "...Scarcely anything in my life made so deep an impression on me: it made me wish to try to add my mite to the accumulated store of natural knowledge." I think we could get some inspiration from this. How can we add something, even if it’s a small thing, to our accumulated store of design knowledge? What other disciplines can we learn from? Where can we apply their methodology to our work? London in 1821 according to this painting Now, it turns out this wasn’t the first time that Herschel had been involved in manifesting a big idea. 15 years prior in 1821, before he had retreated to South Africa, Herschel found himself sitting around a table in London with a friend, going over some tables of data. As one does with friends. Now there were two things in particular and about these tables of data that were troublesome. First off, the numbers were wrong. Secondly, it took people a lot of manual time to produce all of these inaccurate calculations. After finding yet another error, in a moment of pure frustration Herschel’s friend exclaimed: ‘I wish to God these calculations had been executed by steam’ ‘I think it’s possible’ replied Herschel That friend, was Charles Babbage, the man who invented the concept of a digital programmable computer. Hard not to love how his concept of automation is centered around steam. 
This drive to create the first computer was rooted in Babbage’s effort “to eliminate the risk of human error. The infallibility of machinery would eliminate the risk of error from calculation and transcription”. He saw the world how it was and saw a vision for a different world we could be living in. It’s hard not to appreciate the significance around the fact Herschel was present and involved in the origin story of two fairly significant ideas within the arc of human knowledge. The modern computer, and the theory of evolution. When we think about evolution we often think about this image. Or something like it. And this isn’t wrong, but it doesn’t fully illustrate what’s going on either. This is Darwin’s initial sketch again. While most depictions are linear, evolution is really a branching model and has been since the beginning. It’s important to remember when we talk about evolution we aren’t necessarily talking about getting rid of things. Although that can and does happen. Steam powered calculations was a really good idea. Good enough to catch the interest of a Count Ada Lovelace. She was the first person to recognize how powerful the idea could be if extended beyond just pure calculations. Together they pursued the design and construction of a programmable computer for years. Sadly their ideas **didn’t** evolve directly into the devices we know as computers. Their ideas died off. It wasn’t until after the computer was invented that their early ideas were recognized. It seems people didn’t have the same drive to fix spreadsheets that Babbage did. 194 years after this fateful event in London, which for reference is 41 years after the first personal computer, Paul Ford published a 40,000 word piece called What is Code. The first time I read it, one sentence in particular jumped off the page. “One study by a researcher at the University of Hawaii found that 88 percent of spreadsheets contain errors.” This sentence jumped off the page because I knew Babbage had wanted to fix the pesky problem of having errors in spreadsheets. 88%!? That seemed too high and I am skeptical. My curiosity was sparked. How important are these spreadsheets? How big are these errors? So I read Raymond Planko’s research paper from 1998 titled “What We Know About Spreadsheet Errors” which was featured in a special edition of “The Journal of End User Computing’s” on scaling up end user development. TL;DR the errors are very big and the spreadsheets are from Fortune 500 companies. This is a partial breakdown of errors from one spreadsheet - 10 errors of $100,000 - 6 errors of $10+ million - 1 error of $100+ million The auditor comments are more amusing than you might expect. “The investment’s value was overstated by 16%. Quite serious.” “One omission error would have caused an error of more than a billion dollars.” “Only significant errors” I’m curious as to why we have not moved the needle on fixing spreadsheets. Maybe that’s because spreadsheets are really hard to fix. Maybe it’s easier to get a car to drive itself. What would Babbage think? To see the wonders computers can do today contrasted with their failure to solve the original problem that frustrated him so. I’m interested in this sentiment. Where are we spending lots of rote time doing calculations incorrectly? As someone who has spent a lot of time refactoring css, a few things quickly come to mind. I want to revisit evolution, because part of what I’m here to speculate on with you is how things will evolve around us. 
In evolution **phyletic gradualism** is when change occurs ‘uniformly, slowly, and gradually.’. The idea of **punctuated equilibrium** is the idea that evolution happens in rapid short bursts followed by periods of stasis. “When significant evolutionary change occurs, the theory proposes that it is generally restricted to rare and geologically rapid events of branching speciation called cladogenesis.” I feel interface design has been at stasis for some time. It is hard to imagine over the next 20 years we are going to see a gradual advancement in how we do work. I’m observing a number of things that lead me to believe we are going to undergo a rapid burst of change. I suspect we might see a number of bursts in quick succession. What are the environmental/industry forces that might produce rapid evolutionary change in how we work and design? As astronomy helped inform other disciplines in the 1800s. Where can our relatively new discipline of digital design learn from today? ### We might start by taking a look at print We’re here to talk about interface design. An interface can be defined as “a point where two systems, subjects, organizations, etc. meet and interact” I’ve always thought about books as a point where an author and a reader can meet and interact. Books facilitate endless amounts of meetings across time and space. Cataloging an idea in a book can be such a powerful force, that it compels someone to sail from London to South Africa in 1836. So, if we are to learn from books, we might find ourselves asking - how has printing evolved since its inception and where is it going now? In “A Short History of the Printed Word.” Warren Chappel describes what he calls the three phases of print. ### The first phase “It involves the carving of whole pages into flat wooden blocks, and thus treating the written text like any woodcut illustration” This sounds like the first stage of many peoples design process. Thinking in pages. Or “large sections”. Makes sense as a starting point. It’s easy to wrap your head around a page. It is easy to have a meeting to look at “a page”. How many of you have ever created (or received) a design spec for an entire page to implement? How many of you have worked this way in the last 5 years? 2 years? 1 year? Last month? Will we look back on this as an ineffecient way to work? ### The second phase “The second phase depends on the carving and casting of individual letters or characters. Once these units of visible language have been cast in multiple copies, they can be endlessly assembled, disassembled, and reassembled into an infinite number of texts. That is what is meant by movable type.” #### Icons, Colors, and Typography Type-scale documention for BassCSS and Bootstrap It feels like we are collectively emerging into this second phase right now. The past decade has seen a proliferation of design systems and component libraries for the web. We’ve seen atomic/functional/oocss patterns go from gasp inducing horror shows, to the forefront of best practices. These collections are comprised of smaller pieces that can be cast in a multitude of copies. Saying movable type was a really big deal is an understatment. It should be noted, I am not implying that the advent of atomic/functional css is as culturally significant as the creation and adoption of movable type. I’m mostly interested in the parallels in workflows and mental models. Even though these evolutions of are happening more than 1,000 years apart - there are stark similarities. 
Rebass: A popular React component library

Lightning Design System

Ant Component Library

We see similar abstractions emerging. Spacing. Typography. Color. These can be used across components, pages, or even entire projects. These elements can be combined endlessly to produce a wide array of designs. The invention of movable type didn't inherently make the writing in books better. And it's important to note a new CSS or component architecture is not going to magically improve the quality of our interface design. But it can have far-reaching effects in lowering the barrier of entry for access, giving us more people to help solve some of these problems. What will these systems look like once they are more mature? What about a world where every component is already made (possible)? What types of problems will we have then? While there has been a lot of movement in this direction, we still lack the level of composability found in other disciplines.

Starting screen of Mixamo

Demo of uploading a 3D character and seamlessly composing fully configurable animations on top

Could we use computers to automate this transition from phase 1 to phase 2? In some ways I think CSS-in-JS solves this problem. It allows people to attach any amount of styles to an element or component, just to have them split up into single-purpose classes and reduced to the smallest code footprint that renders the fastest. But what about auditing the visual history of the web? How much can we learn from static CSS files? Using the Internet Archive's Wayback Machine and the cssstats CLI, we could download the history of a site and visualize how values change over time. Most companies have several websites with different front-end codebases. What if we could easily visualize all values across sites? And find where we are using common values? (A rough sketch of this idea follows below.)

Padding and margin changing over time

### The third phase

“The third phase has only just begun, but clearly it involves another fundamental shift. In this phase, texts and scripts alike are electronically described in forms that can be stored, transmitted, edited and printed at high speed, on complex but small devices…”

Digital interfaces allow for a fluidity to the printed word that is relatively new. In the past, you might spend 10-15 minutes picking a typeface and font size in Microsoft Word in preparation for printing it out and sharing with others. But when you publish on Twitter, Facebook, or Medium, you're removed from this part of the design process. Even on your own website, you don't have absolute control over how the typography will render for the end user, as your content will be consumed on multiple clients and devices. Set in serif, sans-serif, large, normal, or small. The user is able to configure a few basic things to choose between hundreds of designs. If they are looking at it in Safari they might just hit the reader button. Gone are your custom-selected web fonts, your colors, your visual brand. Personally I love this. I think users should be able to seamlessly bounce between how we have decorated our home if they want to visit, while also allowing them to adorn our content in their own comforts. If we peer through the proper lens we'll find this constraint, in this context, can be quite liberating. What can we do when **we only need to think about content**?

So what is our third phase? Will we start to enter it before we've finished entering the second? While Chappell has defined three phases in print, that doesn't mean we have the same limits.
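Returning to the idea above of auditing the visual history of a site: here is a rough sketch of what that could look like. The Wayback Machine and the cssstats npm module are real, but the snapshot URLs are placeholders and the exact shape of the stats object I read from is an assumption.

```js
// Sketch only: auditing how a site's CSS changes over time.
// Assumes Node 18+ (global fetch). The archived CSS URLs below are placeholders;
// in practice you might discover them via the Wayback Machine availability API,
// e.g. https://archive.org/wayback/available?url=example.com/site.css&timestamp=2016
const cssstats = require('cssstats');

// Hypothetical snapshots of the same stylesheet captured in different years.
const snapshots = [
  { year: 2016, url: 'https://web.archive.org/web/2016/https://example.com/site.css' },
  { year: 2019, url: 'https://web.archive.org/web/2019/https://example.com/site.css' },
];

async function audit() {
  for (const { year, url } of snapshots) {
    const css = await (await fetch(url)).text();
    const stats = cssstats(css);

    // Field names below are my assumption of the cssstats output shape.
    const colors = stats.declarations.properties.color || [];
    console.log(
      `${year}: ${stats.rules.total} rules, ` +
      `${new Set(colors).size} unique color values`
    );
  }
}

audit().catch(console.error);
```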
How many phases and shifts will we have? When I contemplate the future of humans interacting with computers, I'm consistently drawn to this quote from JCR Licklider:

“It can allow a decision maker to do almost nothing but decision making, instead of processing data to get into position to make the decision.”

That sounds like quite the dream. But it also sounds foreign to how I interact with a computer. I find myself and my teammates, even against our best efforts, performing a lot of rote tasks in an effort to get to a point where we can make a decision. And even then we aren't always successful in that task. While Babbage was the originator of the concept of the computer, JCR Licklider was certainly one of its most influential visionaries. It's hard to come across much early computer work that wasn't influenced by his thinking. Again, I wonder what Licklider would think about the current state of affairs. Computers no doubt can do AMAZING things. But does it feel like we are living up to our potential? Are we essentially driving a bunch of Ferraris around in 2nd gear?

I find that both building and designing is a constant cycle of having a question and trying to find the answer. When you have a question, the more steps/time involved to see something rendered that might answer that question, the slower your feedback loop. In my opinion, feedback loops for interface design seem unreasonably long. How fast and short can we make our feedback loops? Where are we missing critical feedback loops in our process? Which wheels are we unnecessarily inventing over and over again? As Alan Kay says, where are we reinventing flat tires? When you change something (code or design) you want instant feedback on the effect it will take in all possible contexts. The better you know **how** you are affecting a system, the less likely you are to break it. Oftentimes when we break an interface we're breaking the things we can't visualize during the development process. So what types of tools or processes can we create to augment this flow?

“You either see it in your tooling, or you see it in a bug report. And it's a lot more expensive when you see it in a bug report.” - James Culveyhouse

I love this quote. So what's it like to be good at interface design in 2019? As far as I understand it's pretty easy. You just need to make sure you get this simple checklist done.

- Loads instantly
- 60fps
- Usable with a keyboard
- Screenreader friendly
- Accessible contrast
- Award winning content
- Semantic HTML
- Works on the following screen sizes: All of them, also watches
- Works well in a low light environment
- Works well in a bright light environment
- Supports right to left text
- No scroll jank
- Handles variable length content
- Works if CSS doesn't load
- Works if JS doesn't load
- Well balanced if user has different zoom level
- Looks amazing
- Has proper error states
- Only use design system
- Hover state
- Active state
- Focus state
- Loading state
- Empty state
- Easy to use navigation
- No superfluous animations and handles reduce motion
- Consistent with other parts of interface
- Follows all of the fundamental UI patterns
- 100% accessible, no bugs
- Everything is fully documented
- Make it pop

Now this isn't the fun type of checklist that you can breeze through in a linear fashion. Sometimes fixing something in one place reduces the quality of some other tracked metric elsewhere.
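One small, concrete version of "instant feedback in all possible contexts" is to stop checking a component's states one at a time and instead render every combination at once. This is only a sketch; the component and its states are hypothetical, but the pattern applies to any component model.

```js
// Sketch: render every combination of a component's states side by side.
// `renderButton` stands in for whatever component you're actually building.
function renderButton({ label, state, disabled, dir }) {
  return `<button class="btn btn--${state}" ${disabled ? 'disabled' : ''} dir="${dir}">${label}</button>`;
}

// The dimensions we want instant feedback on.
const labels = ['Save', 'Enregistrer les modifications']; // short vs. long (French) content
const states = ['default', 'hover', 'focus', 'loading', 'error'];
const disabledOptions = [false, true];
const directions = ['ltr', 'rtl'];

// Cartesian product of all dimensions: 2 x 5 x 2 x 2 = 40 renderings.
const html = [];
for (const label of labels)
  for (const state of states)
    for (const disabled of disabledOptions)
      for (const dir of directions)
        html.push(renderButton({ label, state, disabled, dir }));

document.body.innerHTML = html.join('\n');
```

The point isn't the forty buttons; it's that the feedback loop for "did I break a state I wasn't looking at?" collapses from a bug report to a glance.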
We've talked about some high-level concepts and identified a few potential problems we could try to get computers to assist us in solving. And now it's time to look around at some other methodologies we might be able to make use of or draw inspiration from. Let's start by taking a look at some of our systems engineering friends and what they've been up to recently with Chaos Engineering.

#### Chaos Engineering is…

The discipline of experimenting on a system in order to build confidence in the system's capability to withstand turbulent conditions in production.

To make their systems stronger, companies like Netflix intentionally break services in production, at random. They wanted to “move from a development model that assumed no breakdowns, to a model where breakdowns were considered to be inevitable.” The thinking is that people will build better systems if they know failure is guaranteed to happen. Netflix found this created alignment around 'redundancy and process automation to survive such incidents'. What if we applied this thinking to how we build and design components and interfaces? What if we considered every state in which a user might experience our interface as an incident we're trying to survive? What if we had…

#### Chaos Design

The discipline of experimenting on a component in order to build confidence in the component's capability to withstand turbulent conditions in production.

Chaos design in practice: if we were to reword the above a bit, we might come up with something like this. To specifically address the uncertainty of distributed components at scale, Chaos Design can be thought of as the facilitation of experiments to uncover weaknesses in a component's implementation or design. Define a 'steady state' as some measurable output of a component that indicates normal behavior. Hypothesize that this steady state will continue in both the control group and the experimental group. The harder it is to disrupt the steady state, the more confidence we have in the behavior of the system. If a weakness is uncovered, we now have a target for improvement before that behavior manifests in the system at large.

To design stronger and more resilient systems, it can be necessary to spend the majority of your time working outside of the ideal-state happy path that contains the perfect set of data. You know. Meeting design stuff. Where everything magically works. Think of every characteristic of an interface you depend on to not 'fail' for your design to 'work.' Now imagine if these services were randomly 'failing' constantly during your design process. How might we design differently? How would our workflows and priorities change? Here's a potential list of things we might be relying on:

- CSS doesn't load
- JS fails to load
- CSS and JS fail to load
- Network speed
- Language / Variable content
- Left to Right text
- Presence of content (non-empty state)
- Length of data
- Data cleanliness
- Size of viewport
- Luminosity is low
- Ambient light is too bright
- Particular rendering engine
- :hover states
- Sight
- Mouse usage
- Business logic
- Permissions
- Plan level
- Quotas
- Network is online and transmitting data

When thinking about how to design for a world where we can't rely on these services 'not failing', how might we change the way we work? The difference from chaos engineering is that we can't flip these switches on production. These are actual things happening on production all of the time!
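Here is a rough sketch of what flipping a few of those switches during development could look like. It is an illustration, not a real tool: it picks one "turbulence" condition at random and applies it to the page with plain DOM APIs (the data-list attribute is a made-up convention for this sketch).

```js
// Sketch: inject one random "turbulence" condition into the page during development.
// Each condition below simulates something we usually assume won't "fail".
const conditions = [
  // CSS doesn't load: remove all stylesheets.
  () => document.querySelectorAll('link[rel="stylesheet"], style').forEach(el => el.remove()),

  // Right-to-left text.
  () => { document.documentElement.dir = 'rtl'; },

  // Variable-length content: stretch every button label.
  () => document.querySelectorAll('button').forEach(btn => {
    btn.textContent = btn.textContent + ' avec une étiquette beaucoup plus longue';
  }),

  // Empty state: strip the contents of anything marked as a data region.
  () => document.querySelectorAll('[data-list]').forEach(el => { el.innerHTML = ''; }),

  // Small viewport: constrain the layout to a narrow column.
  () => { document.body.style.maxWidth = '320px'; },
];

// Pick one condition at random and apply it.
const chaos = conditions[Math.floor(Math.random() * conditions.length)];
chaos();
```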
So how do we get our development environment to reflect the environments the component will actually encounter, the turbulence of production? Let's envision what a chaos design environment might look like.

What if for a three-hour period the ability to use a mouse or trackpad is disabled? What might happen? Would we spend that time making keyboard navigation better so that we can keep working on our design? If you're working on a modal and you can only launch it by clicking a button, you must fix the keyboard navigation to get back to the intended work. What if your screen didn't render anything for 4 hours and the only way to interact with the interface was with a screenreader? What if our interfaces were thrown into color-blind mode for a day at a time, simulating one of the ways 12% of the population perceives color? Would we design higher-contrast interfaces that didn't rely on people perceiving green to know something is “good” or “positive”? What if for random 1 to 2 hour intervals your display only showed the mobile version - or what it looked like on a large display? Or both at the same time? Would our components be more responsive? Would they be more suited for the ergonomics of each device size? What if for 1 day a week your interface only showed text in French? On average French is 20-30% longer than English. Would our interfaces be less likely to break with variable content? What if for random 1 to 2 hour intervals your interface only showed right-to-left text? Would our interfaces be more global?

Demo of component development environment at Cloudflare

We could even drive all of the probabilities for our chaos design system with data collected in the real world. Instead of spending 80% of your time designing on a high-resolution 27” screen, maybe the screen size your component renders to could match the frequency of real-world usage. This might sound like an awful way of working, as you lose a lot of control. But it does reflect the reality of the turbulence of production as we described before. And we might design stronger systems because of it. We haven't collectively found the discipline to solve these problems by accident. A lot of what we are talking about are universal problems we all have in development and that all of our customers have in production. Things we could build tooling for as a community.

Within our own businesses, there are a variety of states that we don't account for by default as well. Empty states. Different user permissions. Randomly populating our interfaces with content of the shortest and longest lengths we have stored in a database. Within enterprise software we often have users with different tier plans. Or different limits, quotas, and seats. And when we build UI are we building systems to think through these states we know will happen? Why does it seem smoothly handling empty states is the exception, not the rule?

View of github.com at various screen sizes

“The thought of every age is reflected in its technique.” - Norbert Wiener

Are we collectively happy with how our thoughts are manifesting as technique?

### Stasis

Anyone recognize this component? The classic color picker. Now we, as humans, have been making color pickers for decades.

Color pickers from 1999 and 2019

For the most part we keep building the same thing with the same functionality.
Personally I think it's weird that the default state of a color picker is “here is every color imaginable, there are more than 16 million, take your time.” What if, instead of spending time designing and building all of these color pickers that are all the same… we tried to make a better color picker? What types of feedback loops might we actually want in a color picker?

Demo of Kevin Gutowski's Contrast & Color Picker

We might want to know instantly about contrast. What is the current contrast with black and white for the selected color? Since we generally design against a background of white, light gray, dark gray, or black, even that might be a useful feedback loop.

- What about showing what the current colors will look like for people who are color blind?
- If we have a document palette - what if we exposed all the current accessible colors with what's currently highlighted?
- If we do select a color - what if other popular colors to pair it with were suggested?
- What if the color picker only showed colors that aren't currently being used on the web?

Given any two colors, we don't have a vector for determining whether the pairing is aesthetically pleasing. Or what kind of aesthetic it is. But what if we did track that kind of data? This screenshot is from a project called RandomA11y. It generates random pairs of accessible colors. And we wondered - what if people were able to vote? Could we train computers to get better at understanding how colors relate to each other? If this is something we can compute, could our UIs be even more dynamic and offload color as a user preference? Is this another way we can give control back to the user? And what if this was an API that others could consume?

```json
{
  "combos": 256319,
  "votes": 256630,
  "votes_per_combo": 0.9987881385652496,
  "up_votes": 130529,
  "down_votes": 126101,
  "latest_20": [
    {
      "id": 256496,
      "color_one": "#555ef9",
      "color_two": "#f3fde6",
      "created_at": "2019-06-10T15:49:37.058Z",
      "updated_at": "2019-06-10T15:49:37.058Z",
      "contrast": "8.41"
    }
  ]
}
```

You could connect these types of APIs to any design tool to improve feedback loops within color pickers or color palette generators. Read more about the API here.

### Living on the Edge

Edge computing is opening up a lot of new paths for us to affect interfaces. At Cloudflare we've got a product called Workers that allows you to write JavaScript at the edge. On the design team, we're interested in how we can make it easier to augment the view layer. When we load a site into this tool, we extract all the colors in the HTML and CSS and show them along the top, allowing them to be customized and previewed. When you press deploy, we deploy a new worker script with new mappings — “change #efefef to #cc23ef”. We wouldn't have to limit this to just color. We could make any change to a theme and have our designs normalized against scales. Imagine the potential for a brand update, where all of the nearest colors are updated automatically across all of your properties. You could imagine the ability to start augmenting your web page in interesting ways. In 3D design and photography you affect the color not just by changing the color of the materials and surfaces, but by applying light from multiple directions with different types of filters. You could affect the theme of your interface based on the calculation of the current time and where the light source would be.

Demo courtesy of @winkervsbecks

What else can we learn together?
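As a sketch of the “change #efefef to #cc23ef” idea (my own minimal illustration, not the actual tool described above), a Worker can sit in front of a site, read HTML and CSS responses, and swap colors on the way through. The color mapping is a made-up example.

```js
// Sketch: remap colors at the edge. The mapping below is invented for the example.
const colorMap = {
  '#efefef': '#cc23ef',
  '#333333': '#1a1a2e',
};

addEventListener('fetch', event => {
  event.respondWith(handle(event.request));
});

async function handle(request) {
  const response = await fetch(request);
  const type = response.headers.get('content-type') || '';

  // Only rewrite text responses (HTML and CSS); pass everything else through.
  if (!type.includes('text/html') && !type.includes('text/css')) {
    return response;
  }

  let body = await response.text();
  for (const [from, to] of Object.entries(colorMap)) {
    body = body.split(from).join(to); // naive global replace
  }

  return new Response(body, {
    status: response.status,
    headers: response.headers,
  });
}
```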
What other types of information can we make available to each other and contribute to our collective knowledge? These aren't trade secrets. It's not a zero-sum game. We share information now! But we do it at a small scale with slow feedback loops. Besides just two colors, what can we learn about how different visual properties relate to each other? This is part of why John Otander started to build Components AI. What can we actually track over time about how values and combinations of properties relate to each other? For us it's a natural extension of RandomA11y.

The above is a long list of properties in CSS. But it's not that long if you are a computer. On top of that, many of these properties are not needed when styling an element. When I'm styling a button, I don't expect to use volume. Or page-break. Or a number of other properties. So what if we documented what we know so far about styling components and created open templates for common components? Design then becomes configuring an obvious set of properties, instead of needing to guess and declare. The goal is not to eliminate options, it's to narrow focus on the essential, allowing for expansion and exploration if necessary. This idea of defining a component API has benefits extending beyond just these types of interfaces. Can we leverage teaching a computer what a button looks like in creative ways? Imagine having a design query language where you could ask to see all of the unique table styles in your app. Or all of the error states. Currently, doing these types of audits takes a lot of rote work - and the result is likely to be outdated the week after it's finished.

Collection of buttons from a single company

What if you controlled inputs for a generative component design tool by deciding whether you wanted to use the most popular or the least used values?

Interface from sliding through scales constructed from scraped CSS. Demo available here.

### Continuing to look elsewhere

If we take a look at what people are doing with machine learning, it's hard not to be intrigued by reinforcement learning.

#### “Reinforcement learning is trial and error at a vast scale”

From How to teach AI to play Games: Deep Reinforcement Learning

There are people trying to train computers on how to beat video games. And they are getting pretty good. Which is probably worth a whole talk in itself. Training computers to beat video games seems like a pretty obvious application the first time you see it. The first time I saw a demo of it in practice, though, this is the image that flashed in my head.

Screenshot of Lighthouse, a popular auditing tool for sites

What's our workflow when we are trying to optimize something on the web? A Lighthouse audit takes ~10-60 seconds to run. We check to see if the numbers have gone up or down. We make some adjustments. We re-run the Lighthouse audit. And we check to see if the numbers go up or down. Now you might be using something else to audit your code. But the workflow is probably similar. This workflow is ripe for distractions. Computers don't need to stop to check their email. Or reply to a ping in chat. This is the type of work that I just have a feeling computers are better suited for. Figuring out implementation details. Here we have a desired outcome. Four 100s. There's compelling work being done right now that is going to make this type of work even easier for a computer to do. This is emerging work now - but imagine where we'll be in 101 years!
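To make the "let the computer babysit the audit" idea concrete, here is a rough sketch using the Lighthouse Node module and chrome-launcher. The packages and the lhr.categories score shape are real as far as I know, but treat the exact options and the threshold logic as assumptions rather than a finished tool.

```js
// Sketch: re-run Lighthouse automatically and report category scores,
// instead of babysitting the audit by hand.
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

async function audit(url) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const { lhr } = await lighthouse(url, {
    port: chrome.port,
    onlyCategories: ['performance', 'accessibility', 'best-practices', 'seo'],
  });
  await chrome.kill();

  // Scores come back in the 0 to 1 range; multiply by 100 for the familiar numbers.
  return Object.fromEntries(
    Object.values(lhr.categories).map(c => [c.id, Math.round(c.score * 100)])
  );
}

// Desired outcome: four 100s. Run the audit and check whether we're there yet.
(async () => {
  const scores = await audit('https://example.com');
  console.log(scores); // e.g. { performance: 97, accessibility: 100, ... }
  const done = Object.values(scores).every(score => score === 100);
  console.log(done ? 'Four 100s' : 'Still work to do.');
})();
```

Wrapping this in a loop that tries candidate changes and keeps the ones that raise the scores is exactly the kind of trial-and-error work a computer is happy to grind through while we do something else.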
So what will the life of an interface designer be like in the year 2120? Or 2121, even? A nice round 300 years after Babbage first had the idea of calculations being executed by steam.

“…back/neck/wrist strain will live in the past because I'll be designing in a dimly lit room, in an inversion chair using mostly voice and gestural cues to control design software.” - Lauren LoPrete

The first time I saw this video I could feel the paradigm shift. It's from a study referenced in last year's article titled The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities. In it, the constraint is that the grip has been disabled, but the crane is still able to grab ahold of the ball and move it between any two points. Will we learn to apply this type of optimization learning to interface design? My mind wonders at the creative solutions computers might come up with to get a website to be fully responsive, performant, and accessible.

Project Dreamcatcher is a generative design project at Autodesk. They've started to incorporate some of that technology into other products, and the industrial design industry is already seeing real-world results. This is a tool that people use overhead, so weight is a primary concern. But they also have a constraint that it can't be any weaker. With this problem and constraint, they were able to use generative design to shave off 3 pounds. That's a 60% reduction in weight.

Redesigned component

“It's not brute force engineering. It's elegant. You define a problem and you get a solution set unlike anything you'd predict.” - Frank DeSantis, Vice President of Breakthrough Innovation

How could we develop a language where we design interfaces by defining constraints and desired outcomes?

### Future of interface design

So I've talked about how hard it is to be good here. But here's the thing - this is the least amount of stuff we are ever going to need to worry about. Interfaces are going to get more and more complex. The idea that people will still be sitting at a desk in front of two 27” monitors is hard for me to imagine. Like the third phase in print, I think much of our work will be augmented by the user. We see small-scale emerging hints at this with dark mode options and theming controls at the OS level. A new media query for what avatar shape users prefer. We open up these small options because no matter which one you choose, the interface's designers think it's good. But these are incredibly small sets of options if we were to calculate how many fully usable designs the user could pick from. The more we understand how things relate to each other, the more options we can offer up with confidence.

There are 128 combinations of color-based theming options from curated values

My best guess is that we will see augmented reality usage grow the most in the immediate future. As AR and VR become more prevalent, will interface design largely be world building? Will we interact with computers and machines by moving our bodies in expressive ways to manipulate our virtual environment?

**Gods & Monsters** / Ubisoft

Regardless of what the future brings, our problem space is growing every single day. And we need better feedback loops to handle the increasing amount of chaos. I'm pretty sure robots won't be taking away our jobs. But I do think they will take away some of our current work. I'm excited about that future though. I imagine we will spend more and more time defining a desired output along with what our constraints are.
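One way to picture "defining a desired output with constraints" is as data: a small set of curated values, the full space of combinations they generate, and a filter that keeps only the combinations satisfying the constraints. The curated values and the two constraints below are assumptions for the sake of illustration; the contrast math follows the WCAG formula.

```js
// Sketch: enumerate theme combinations from curated values, then keep
// only the combinations that satisfy our constraints.

// WCAG relative luminance and contrast ratio for hex colors like "#aabbcc".
function luminance(hex) {
  const [r, g, b] = [1, 3, 5].map(i => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrast(a, b) {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Curated values, made up for the example.
const curated = {
  background: ['#ffffff', '#f5f5f5', '#111111', '#1a1a2e'],
  text:       ['#111111', '#444444', '#ffffff', '#f5f5f5'],
  accent:     ['#0055ff', '#cc23ef', '#00a372', '#ff5c00'],
  radius:     ['0px', '4px'],
};

// 4 x 4 x 4 x 2 = 128 candidate themes.
const candidates = [];
for (const background of curated.background)
  for (const text of curated.text)
    for (const accent of curated.accent)
      for (const radius of curated.radius)
        candidates.push({ background, text, accent, radius });

// Constraints: readable body text, distinguishable accent.
const usable = candidates.filter(t =>
  contrast(t.background, t.text) >= 4.5 && contrast(t.background, t.accent) >= 3
);

console.log(`${usable.length} of ${candidates.length} themes satisfy the constraints`);
```

Every theme that survives the filter is one we could confidently hand to the user as a preference, which is the "offer up more options with confidence" idea in miniature.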
If you're an ad agency, maybe web performance is important but maybe not the MOST important thing. Maybe you're willing to have a 2MB website for the added payoff of high-definition visual shine. For many businesses, you don't need anywhere near 1MB to serve up a page that will allow you to communicate with an audience, and potentially, to receive their input as well. So maybe your biggest constraints are around your color palette and making sure your site is accessible and localized. Being able to fluidly evaluate and augment content in multiple contexts will allow us to spend more time deciding and less time processing data. I hope you'll join me in figuring out how to automate some of this work so we can build more resilient systems that fail less often. Someday I hope people get to use interfaces that always work, all of the time, no matter what. Maybe someday.
true
true
true
null
2024-10-12 00:00:00
2013-12-09 00:00:00
https://mrmrs.s3.us-west…/og-pen-plot.jpg
website
mrmrs.cc
mrmrs.cc
null
null
22,904,213
https://www.gilmorehealth.com/chinese-coronavirus-is-a-man-made-virus-according-to-luc-montagnier-the-man-who-discovered-hiv/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,719,347
http://www.insidehighered.com/blogs/education-oronte-churm/mike-daisey-liar-and-so-am-i
Mike Daisey Is a Liar, and So Am I
John Warner
Mike Daisey is a liar. John D'Agata is a liar. Greg Mortenson is a liar. James Frey is a liar. I am a liar. You are a liar. --- When I discuss ethics with my students, we try to suss out whether or not there are any universal truths, things which we all agree are wrong. They name the usual suspects, murder, stealing, assault, cheating, lying, etc… I then go down the line and ask how many of them have violated one of these rules at some point in their lives. They laugh as I ask about murder, and even assault. Some will admit to a petty theft of a sister's sweater or a candy bar from a convenience store. Lots will admit to having cheated in school at one point or another. (But never in my class, of course.) I leave lying out of the discussion, and then turn to other matters briefly before coming back to our list, and ask, “have any of you committed one of these terribly foul acts today?” Again, they laugh. Here, maybe I'll go to the board and tap the dry erase pen on the word “lying.” “Who's told a lie today?” I ask. It's fun to watch a room full of people think. It's one of the chief pleasures of teaching. After a few more beats as their faces softly collapse in that way that signals a small epiphany, I'll say, “I've told a lie today, probably several if I think about it, and you have too.” I ask them what they've lied about that day, and it's always trivial. A lie to mom and dad about what they did the night before, or to a friend about their availability for lunch. My own lies are the same, why I didn't make it to the previous night's hockey game, or if I noticed that we were almost out of milk before I poured the last of it on my cereal. In our minds, we use the “white lie” defense, a “diplomatic or well-intentioned untruth.” “Well-intentioned.” --- As many of you have probably already heard, the radio program *This American Life* was compelled to retract a previous story by the monologist Mike Daisey. The story was based on Daisey's one-man show, *The Agony and the Ecstasy of Steve Jobs*, which Daisey billed as being based on his own mission to China and his interactions with and interviews of workers at the infamous Foxconn plant in Shenzhen, China. In his theatrical performance (which I haven't seen), Daisey is said to tell the tales of mistreatment and abuse and inhuman working and living conditions the factory workers must endure for wages of $15 a day so that the West can be supplied with our electronic gadgets. Daisey's one-man show is apparently quite compelling. Theaters sell out show after show. Listening to even short clips demonstrates Daisey's mastery at holding audience attention. The facts of poor labor conditions in China in general, and at Foxconn in specific, have long been known -- that's what led Daisey to undertake his trip in the first place -- but not widely, not in a way that had caused any significant response. Daisey's monologue tells a tale of low-grade espionage, of using fake business cards and subterfuge to sneak past armed factory guards into tours for potential foreign investors, where Daisey and his intrepid interpreter and sidekick “Cathy,” would talk to 12- and 13-year-old girls who worked the assembly lines and slept ten to a room in 10 ft. by 10 ft. “concrete cells.” Listening to the original broadcast on *This American Life*, you can hear that Daisey is a master storyteller, making fine use of specific language, striking images and numerous dramatic pauses.
He has brought the plight of these workers to life, and in so doing made people care, made people pay attention. That’s hard to dispute. Well-intentioned. What’s also clear is that part of what’s so compelling about his stories is the audience’s understanding that what they’re about to hear really happened. Except that it didn’t. For one thing, only the police and military in China can carry firearms, not factory guards. According to the *TAL *follow-up story retracting the original, it is these small details, like the armed guards, or factory workers looking to unionize meeting in a Starbucks, that sparked Daisey’s undoing. Rob Schmitz, a China-based correspondent for “Marketplace” (produced by American Public Media and frequently, though not exclusively, heard on NPR-member stations) was suspicious about Daisey’s claims the moment he heard the story. He’d been to many factories in China and had never seen armed guards. And why would factory workers making $15 to $20 per day meet in a Starbucks, which is “pricier in China than in the U.S.”? (*Note: This post has been updated to clarify the nature of "Marketplace."*) Schmitz wanted to dig deeper, so he started in a logical place, trying to track down Daisey’s translator, “Cathy.” As Schmitz recounts in the episode: “I could pretend finding her took amazing detective work. But basically, I just typed ‘Cathy and translator and Shenzhen into Google. I called the first number that came up.” From there Daisey’s account unravels almost completely. Daisey is not telling the truth about the guns, or the age of the workers he talked to. He lies about having seen the living quarters first hand. He lies about how long he spent in China, about how many factories he visited, the number of people he talked to. He lies about meeting a man who had been poisoned by exposure to n-Hexane, a solvent used in manufacturing that also acts as a neurotoxin. Daisey describes meeting a man whose hands “shake uncontrollably” because of his exposure to n-Hexane. “Most of them … can’t even pick up a glass,” he says. This man may exist. And if he does, he may be representative of thousands of people maimed and marred by working in factories that provide us with our electronic gee-gaws. Daisey did not meet him. --- Essayist John D’Agata also alters or invents facts in his writing, most notably in his story of the proposed Yucca Mountain nuclear waste depository and Las Vegas, *About a Mountain*. D’Agata says, “I like playing with the idea of journalism and our expectation of journalism. So I like making something feel journalistic and then slowly reveal that that approach isn’t really going to give us as readers what we want from the text, that we need to try a different sort of essaying, and then the essays become a lot more associative and the perhaps become a bit more imaginative and start taking the problematic liberties.” D’Agata’s lies are in the service of what he feels is a larger truth, the experience of art: “I think it is art’s job to trick us. I think it is art’s job to lure us into terrain that is going to confuse us perhaps make us feel uncomfortable and perhaps open up to us possibilities in the world that we hadn’t earlier considered.” Well-intentioned. --- Listening to *This American Life *host Ira Glass and reporter Rob Schmitz confront Mike Daisey is painful. Daisey admits some of his lies, things like inflating the number of factories he visited, and that he’d never personally met anyone poisoned by n-Hexane. 
Schmitz and Glass confront him about this directly: Rob Schmitz: "So you lied about that. That wasn't what you saw." Mike Daisey: "I wouldn't express it that way." Rob Schmitz: "How would you express it?" Mike Daisey: "I would say that I wanted to tell a story that captured the totality of my trip. So when I was building the scene of that meeting, I wanted to have the voice of this thing that had been happening that everyone been talking about." Well-intentioned. --- The central story in Greg Mortenson's *Three Cups of Tea* recounts his failed attempt at ascending K2 and how he was separated from his climbing companions during his descent and wound up stumbling into the small village of Korphe, where they nursed him back to health with their tea, their warm blankets, their yak butter. As thanks, Mortenson promised he would build them a school. Through his position at the Central Asian Institute, and his own charity, Pennies for Peace, Mortenson generated tens of millions of dollars in donations, some of them quite literally pennies from American schoolchildren, designated to build schools in Pakistan. Well-intentioned. The central story in *Three Cups of Tea* is not true, even by Mortenson's contemporaneous account of his attempt on K2. -- Perhaps the most painful part of Mike Daisey's conversation with Ira Glass and Rob Schmitz is his refusal or inability to say that he “lied.” It is clear that he lied to the fact checkers to cover his initial lies, going so far as to tell them that Cathy, the translator Rob Schmitz found with one phone call, is unavailable. He lies on the spot, even as he's confronted with his lies, inventing justifications for continuing discrepancies between what he says happened, and what his constant companion Cathy recalls. It seems possible that Daisey has told this story so many times that he's no longer entirely sure what really happened and what he's made up. Daisey was “kind of sick about” the thought of being found out. Why? “Because I know that so much of this story is the best work I've ever made,” he says. Well-intentioned. -- There is a difference between John D'Agata and Mike Daisey. John D'Agata tells readers where he's deviated from the known knowns. He even participated in a book, *The Lifespan of a Fact*, about the arguments he had with one of his fact checkers over his alterations. The conversation between John D'Agata and his fact checker, Jim Fingal, over the course of seven years is an extended exploration of the nature of truth and what we can or can't know. *The Lifespan of a Fact* has generated significant online commentary regarding fact checking and the obligations writers have to their audiences. It's a complicated and worthwhile discussion. Well-intentioned. Though even the conversation in the book had to be “re-created” well after the fact, something D'Agata and Fingal's publisher, W.W. Norton, has been coy about. --- James Frey is a liar. In *A Million Little Pieces* he says he spent 87 days in jail. The reality is that it was hours. This is the central lie, but not the only one. It's not really worth counting them. Why did he lie about this? “I think one of the coping mechanisms I developed was sort of this image of myself that was greater, probably, than -- not probably -- that was greater than what I actually was. In order to get through the experience of the addiction, I thought of myself as being tougher than I was and badder than I was -- and it helped me cope. When I was writing the book ...
instead of being as introspective as I should have been, I clung to that image.” Well-intentioned. -- Wife (shaking a nearly empty carton): "Did you notice that we’re almost out of milk?" Me (eating sufficiently milk-moistened cereal, also lying): "No." Why did I lie? Because it was morning, and I didn’t want to get into a thing. Why get into a thing over something so small and insignificant as unspilled milk? Well-intentioned? --- This is the opening of James Frey’s *A Million Little Pieces*: “I wake to the drone of an airplane engine and the feeling of something warm dripping down my chin. I lift my hand to feel my face. My front four teeth are gone, I have a hole in my cheek, my nose is broken and my eyes are swollen nearly shut. I open them and I look around and I’m in the back of a plane and there’s no one near me. I look at my clothes and my clothes are covered with a colorful mixture of spit, snot, urine, vomit, and blood. I reach for the call button and I find it and I push it and I wait and thirty seconds later an Attendant arrives.” Let me get this straight. A commercial airline allowed a man missing four front teeth, with a broken nose, eyes swollen shut and clothes covered in excrement who also happens to be unconscious to board unaccompanied? Did it take The Smoking Gun or Oprah to tell us that Frey might’ve created something post-truth? -- Here is a transcript of what *This American Life* calls “one of the most emotional moments in Daisey’s show.” It is his retelling of showing an iPad to a former Foxconn factory worker whose hand was “mangled” in an accident, received no medical attention, and was subsequently fired for “working too slowly.” “I reach into my satchel, and I take out my iPad. And when he sees it, his eyes widen, because one of the ultimate ironies of globalism, at this point there are no iPads in China. …. He's never actually seen one on, this thing that took his hand. I turn it on, unlock the screen, and pass it to him. He takes it. The icons flare into view, and he strokes the screen with his ruined hand, and the icons slide back and forth. And he says something to Cathy, and Cathy says, ‘he says it's a kind of magic’.” When reached for comment by Rob Schmitz, Cathy, Mike Daisey’s native Chinese translator says, “This is not true. You know, it’s just like a movie scenery.” Why can she see it, but we don’t? -- Do you imagine that my wife doesn’t know I’m lying about the milk? -- After the initial confrontation over the “discrepancies” in the originally aired piece on *TAL*. Mike Daisey had some more things to say on the subject. Ira Glass thought he was coming in to admit to more fabrications. Instead, Daisey said this: “And everything I have done in making this monologue for the theater has been toward that end -- to make people care. I’m not going to say that I didn’t take a few shortcuts in my passion to be heard. But I stand behind the work. My mistake, the mistake that I truly regret is that I had it on your show as journalism and it’s not journalism. It’s theater. I use the tools of theater and memoir to achieve its dramatic arc and of that arc and of that work I am very proud because I think it made you care, Ira, and I think it made you want to delve. And my hope is that it makes – has made- other people delve.” --- This is a comment taken from Oprah’s own website that is entirely typical of the response to her public dressing down of James Frey: “Oprah's attack on James Frey was unnecessary and honestly just mean. 
Sure, he should have labeled the book as "based off a true story" but really what it comes down to is his books make a point, they touch and inspire people and give people hope. I don't think he should be brutally attacked on National TV just because Oprah feels embarrassed? I could care less about her embarrassment, she should applaud James Frey for fighting an addiction and overcoming as many things as he did. Get off your high horse, Oprah.” -- Thanks to the efforts of Greg Mortenson through Pennies for Peace and the Central Asia Institute, three schools were built in Kunar province in Pakistan. Mortenson told Charlie Rose it was eleven. -- So, clearly I’ve been thinking about these things. I’m wondering if maybe we are all better off with these lies having been told. After all, aren’t we better off knowing about the deplorable working conditions in Chinese manufacturing plants that make the things we acquire so voraciously? Shouldn’t this message be spread far and wide? Is it not better to have three schools in a violent region of Pakistan, rather than none? If James Frey’s embellished tale of overcoming addiction helps even one person achieve the same, isn’t that a good trade-off? Hasn’t John D’Agata done us a favor exposing what we already know, but don’t acknowledge often enough, that all stories are constructions, that nothing is truly reliable, that art should retain the right to wrong foot us? -- In each of these cases, we’re told, essentially, that the lies somehow make for a “better” story. James Frey needs to be in jail for nearly three months instead of three hours because an addict who only spends three hours in jail hasn’t actually hit rock bottom. Greg Mortenson needs to embellish the number of schools he’s had a hand in building because twenty schools is a great thing, but 100 schools is some kind of miracle. Mike Daisey and John D’Agata need to make us “care” and sticking to the facts isn’t going to cut through the clutter. These stories are important. What harm is there in giving them a little extra juice. It’s better for my marriage if I’m just absent-minded or careless about the remaining amount of milk, rather than a selfish SOB who takes what he wants and doesn’t consider others. -- The thing is, that these lies, these distortions, these fabrications, these untruths don’t make for a *better* story. They make for an *easier* one, a story with fewer thorns to swallow on the way down, a less complicated story. That workers in factories that make Apple products are subject to conditions we simply wouldn’t tolerate in our own country is indisputable. We should be as outraged as Mike Daisey wants us to be, except that the $15 a day these people work for may also be far better than the alternative, which is to starve on nothing. Ten people to a ten foot square room may be a better shelter than none. But in that story, there’s no clear enemy, no target, no one to name the monologue after. Greg Mortenson’s “miracle” makes us feel like we can accomplish anything, that the divides between cultures are bridgeable with time and knowledge and education. This is all true, except that the undertaking is a million times harder than the *better* story makes it out to be. The real story, the true story is not so reassuring. Victory is not guaranteed. In *A Million Little Pieces*, James Frey, through force of will, manages to triumph over his addictions. It’s a great story that shows that you can overcome anything if you put your mind to it. 
Except that you can't because that's a fairy tale. I love my wife. I would take a bullet for her, but the truth is, if it's between her and me for the last of the milk, I pick me. Maybe I'm just suspicious of these “better” stories because to me, the best stories are the most complicated ones, the ones that refuse to resolve in easy ways. Those are the stories that are most true because resolution is something that always remains just beyond our grasp. For example, the *TAL* story of the retraction of Mike Daisey's piece is far more riveting than the original tale of Daisey's trip to China, I promise. It would be a comforting story, an easy story to think that what ails these men is some kind of pathology unique to them, runaway egos, rooted in childhood psychic damage maybe. But who hasn't told a lie to look a little bigger, a smidge more important, to see the impressed looks in someone else's eyes? --- Let us also acknowledge that the rationale that we tell these lies in service of some greater truth is complete and utter bullshit. Mike Daisey and Greg Mortenson and John D'Agata and James Frey, and me will tell you that we tell the lie not to enrich ourselves, or for reasons of self-preservation, but because, in the words of Daisey, we “want to make people care.” This is convenient, and maybe we even believe it, but that doesn't make it true. It would even be handy to blame these lies on simple greed. Mortenson and Frey have profited to the tune of millions. It's possible Daisey is approaching that. But I think there's a deeper truth here, a motivation that extends beyond the transparent B.S. that these lies are in the service of a higher calling. What these lies invariably do to the stories is take the focus off the story itself, and place it on the storyteller. Even before Daisey's lies were exposed, his use of them served to make himself more central to the tale. The story is no longer about exploited workers, but about an intrepid and dogged Mike Daisey who cares so darn much he has to go and witness firsthand how his gadgets get made, and once there, connects so personally and profoundly with these workers, that only he can come back home and tell the story in a way that will change hearts and minds. Daisey isn't in it for the money, but for the ego. Similarly, Greg Mortenson's tale makes it clear that in some way he was special, he was chosen by fate to execute this mission. I bet Mortenson treasures his sixteen honorary degrees over his millions. For John D'Agata, his approach can't help but remind us we need writers to tell our stories, lest we forget. I'm tired of talking about James Frey, but you get the point. --- Let us also not forget that none of us will ultimately be punished for our lies. Greg Mortenson's Central Asia Institute and Pennies for Peace continue to operate. John D'Agata has drawn far more attention to himself and his book than would otherwise have been possible. I got the milk for my cereal, and my wife didn't threaten divorce. The men and women of *This American Life*, the launching pad for the entire career of noted fact stickler David Sedaris, will get great credit for their defense of adhering strictly to the truth and fessing up to their mistake. Theaters are lining up in support of Mike Daisey. He will undoubtedly work the controversy into the monologue in such a way that will highlight his “I did it to make you care” defense and in so doing engender even more sympathy for what he will say is his cause. It may be the new top applause line in the whole show.
But let's remember Mike Daisey's cause is Mike Daisey, not the exploited workforce of China. I'm still done talking about James Frey, but look him up, you won't be surprised that he's been doing just fine.
true
true
true
The title kind of says it all except that this one is almost 4,000 words and covers a lot of ground.
2024-10-12 00:00:00
2012-03-17 00:00:00
https://www.insidehigher…-no-wordmark.png
article
insidehighered.com
Inside Higher Ed | Higher Education News, Events and Jobs
null
null
9,244,252
http://www.developingandstuff.com/2015/03/export-list-from-wunderlist-even-if.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,699,264
https://twitter.com/kokoinkorea/status/1277901582802096129
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
33,089,356
https://github.blog/changelog/2022-10-04-introducing-actions-on-github-mobile/
Introducing Actions on GitHub Mobile · GitHub Changelog
Wp-Block-Co-Authors-Plus-Coauthors Is-Layout-Flow
GitHub's audit log allows admins to quickly review the actions performed by members of their Enterprise. It includes details such as who performed the action, what the action was, and when it was performed. GitHub's audit log also provides the ability to export audit log activity for your enterprise as a JSON or CSV file download. Moving forward, customers can expect to see the following enhancements to their audit log exports:

- Audit log exports will contain the same fields as the REST API and audit log streaming, bringing consistency across these three audit log consumption modalities.
- `actions` events will be present in audit log exports.
- For Enterprises that have enabled the feature to display IP addresses in their enterprise audit logs, IP addresses will be present in audit log exports.
- Audit log exports will be delivered as a compressed file.
- Audit log JSON exports will be formatted so that each line of the file contains a single event, rather than as a single JSON document with an array containing all the events as array elements.

This feature will be gradually enabled for an increasing percentage of GitHub Enterprise Cloud customers with a goal of 100% enablement by October 28, 2022. Should you encounter a problem with your audit log exports, please reach out to GitHub Support for assistance.
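For readers wondering what the line-delimited format means in practice: each line of the export is an independent JSON object, so the file can be parsed (or streamed) line by line. A small sketch follows; the file name and the `action` field are assumptions rather than GitHub's documented schema.

```js
// Sketch: parse a newline-delimited JSON (NDJSON) audit log export.
// "audit-log-export.json" and the `action` field are assumptions for the example.
const fs = require('fs');

const lines = fs.readFileSync('audit-log-export.json', 'utf8').split('\n');

const events = lines
  .filter(line => line.trim().length > 0) // skip blank lines
  .map(line => JSON.parse(line));         // one event per line

// Example: count events by action type.
const counts = {};
for (const event of events) {
  counts[event.action] = (counts[event.action] || 0) + 1;
}
console.log(counts);
```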
true
true
true
Introducing Actions on GitHub Mobile
2024-10-12 00:00:00
2022-10-04 00:00:00
https://user-images.gith…b20695d93c34.jpg
article
github.blog
The GitHub Blog
null
null
6,079,898
https://shop.lenovo.com/ae/en/smartphones/k-series/k900/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,454,270
https://medium.com/@rob_ellis/creating-a-chat-bot-42861e6a2acd
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
26,166,277
https://www.wsj.com/articles/online-trading-platform-will-let-investors-bet-on-yes-or-no-questions-11613557800
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,658,889
http://www.bbc.com/future/story/20140427-i-swapped-bodies-with-someone
‘I swapped bodies with someone’
Rose Eveleth
# ‘I swapped bodies with someone’ **What if you could experience life in another person’s body? A few have done so, discovers Rose Eveleth, and they report profound emotional changes.** If you could be anybody in the world, who would it be? This is usually just a theoretical question. The idea of suddenly taking the form of your neighbour, a celebrity or even your dog is fun to think about, but seemingly impossible to execute. Yet a few people have experienced what it might be like to step into the skin of another person, thanks to an unusual virtual reality device. “The first seconds are just overwhelming,” says Rikke Frances Wahl, a woman who temporarily became a man. “It feels weird. You start to feel more and more comfortable in it, and you start to really get the fantasy of how it would be if it were your body.” Wahl, an actress, model and artist, was one of the participants in a body swapping experiment at the Be Another lab, a project developed by a group of artists based in Barcelona. She acquired her new body using a machine called The Machine to be Another. The set-up is relatively simple. Both users don an Oculus Rift virtual reality headset with a camera rigged to the top of it. The video from each camera is piped to the other person, so what you see is the exact view of your partner. If she moves her arm, you see it. If you move your arm, she sees it. To get used to seeing another person’s body without actually having control of it, participants start by moving their arms and legs very slowly, so that the other can follow along. Eventually, this kind of slow, synchronised movement becomes comfortable, and participants really start to feel as though they are living in another person’s body. “It was so natural,” Wahl says, laughing, “and at the same time it was so unnatural.” When Wahl swapped with her partner, Philippe Bertrand, an artist who works at the Be Another lab, they wound up stripping down to just their underwear. This is the scene that Wahl remembers when she thinks back on the experience. “We were standing there just in underwear, and I looked down, and I saw my whole body as a man, dressed in underpants,” she says. “That’s the picture I remember best.” Intriguingly, using such technology promises to alter people’s behaviour afterwards – potentially for the better. Studies have shown that virtual reality can be effective in fighting implicit racism – the inherent bias that humans have against those who don’t look or sound like them. Researchers at the University of Barcelona gave people a questionnaire called the Implicit Association Test, which measures the strength of people’s associations between, for instance, black people and adjectives such as good, bad, athletic or clumsy. Then they asked them to control the body of a dark skinned digital avatar using virtual reality goggles, before taking the test again. This time, the participants’ implicit bias scores were lower. Another study showed that using the so-called “rubber hand illusion” – where a subject watches researchers manipulate a rubber hand placed such that it seems like their own – can have the same impact. When that rubber hand is a colour unlike their skin, participants scored lower on tests for implicit racism than when they watched a hand of the same skin colour. The idea is that once you’ve “put yourself in another’s shoes” you’re less likely to think ill of them, because your brain has internalised the feeling of being that person. 
The creators of the Machine to Be Another hope to achieve a similar result. “At the end of body swapping, people feel like hugging each other,” says Arthur Pointeau, a programmer with the project. “It’s a really nice way to have this kind of experience, and to force empathy onto a person’s brain.” Aside from empathy, the Be Another lab has used the technology in other situations in which swapping places might have a positive effect. They’ve allowed therapists to switch with their patients, to better understand being physically disabled, and had wheelchair users swap with dancers. And they would like to offer the machine to doctors to help treat those with eating disorders who might have distorted ideas of their own body. Wahl says that she’d jump at the chance to swap bodies with someone again. “I would really, really recommend it to everyone, everyone should try this thing,” she says. “We all have different feelings and points of views about things,” says Pointeau, “and it’s really strongly related to our bodily experience. With this kind of experience we can promote empathy, but also maybe help people better understand themselves too.”
true
true
true
What if you could experience life in another person’s body? A few have done so, discovers Rose Eveleth, and they report profound emotional changes.
2024-10-12 00:00:00
2014-04-28 00:00:00
https://ychef.files.bbci…351/p01y2vpf.jpg
newsarticle
bbc.com
BBC
null
null
37,850,220
https://vimeo.com/146524997
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
30,684,881
https://1password.com/developers/
1Password Developer – Secure Your Secrets | 1Password
null
# Build faster and more securely with 1Password Developer 1Password streamlines how you manage SSH keys, API tokens, and other infrastructure secrets throughout the entire software development life cycle – from your first line of code all the way into production. ## Eliminate the complexity of secrets management Access and work with secrets right where you need them – in your terminal, in your IDE, in your app, or in production – without ever exposing them in plaintext. ### Develop faster Simplify how you work with SSH keys, API tokens, and other secrets when building applications. Easily and securely authenticate CLI and SSH connections with biometrics, sign your Git commits, and more without leaving the terminal or IDE. ### Protect secrets Find secrets in your code and save them into 1Password’s end-to-end encrypted vaults, and then securely load them into environment variables, configuration files, and scripts without exposing any plaintext secrets in code. ### Deploy securely Avoid secrets sprawl by storing infrastructure secrets for your team in 1Password and then securely access them in your CI/CD workflows and other infrastructure tools like Kubernetes, Terraform, Ansible, and Pulumi. ### More than 750,000 developers trust 1Password ## Everything you need across the development lifecycle ### Your SSH keys, made easy Generate, import, and store SSH keys in 1Password for safekeeping, then scan your fingerprint to use them in any Git or SSH workflow with 1Password’s built-in SSH agent. ### Bring 1Password to the command line Securely access your secrets in 1Password during development. Eliminate plaintext secrets in code, automate administrative tasks, and sign into any CLI with your fingerprint. ### Build with 1Password SDKs Use open-source SDKs for Python, Javascript, and Go to integrate your applications with 1Password. SDKs can be embedded within your application to decrypt data when and where it’s needed, so every value stays secret until that moment. ### Secure your secrets, from code to cloud Eliminate secret sprawl by removing hard-coded credentials from your CI/CD pipelines, infrastructure, and applications. Use service accounts to directly integrate with 1Password, or deploy a 1Password Connect server to access secrets in your infrastructure with a private REST API. ## Build and contribute with 1Password ### Join the community Get access to developer betas, provide feedback, and connect with the community. ### Students get a free year of 1Password Protect your online life, at school and beyond. Claim a free one year subscription to 1Password with the GitHub Student Developer Pack. ## FAQs about 1Password for Developers ### Is 1Password Developer included in all plans? Yes, 1Password Developer is part of every plan including Individual, Family, Teams, Business, and Enterprise. ### What do I need to start using 1Password Developer? ### What is the 1Password Developer Portal? ### How do I get started with integrating 1Password into my application? ### Does 1Password have an API? ### What SDKs does 1Password provide? ### Are there rate limits for Service Accounts? ### What is 1Password Secrets Automation? ## News and updates for developers Subscribe to our developer newsletter and be the first to know about new betas, tools, and resources for developers.
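As one illustration of keeping plaintext secrets out of code (my own sketch, not an official 1Password example): with the 1Password CLI installed and signed in, a script can resolve a secret reference at runtime instead of hard-coding the value. The vault, item, and field names below are hypothetical.

```js
// Sketch: resolve a secret at runtime via the 1Password CLI (`op read`),
// so no plaintext credential lives in the codebase or the repo.
// The op://Engineering/Acme API/credential reference is a made-up example.
const { execFileSync } = require('child_process');

function readSecret(reference) {
  return execFileSync('op', ['read', reference], { encoding: 'utf8' }).trim();
}

const apiToken = readSecret('op://Engineering/Acme API/credential');

// Use the secret without ever writing it to disk or committing it.
console.log(`Token loaded, length: ${apiToken.length}`);
```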
true
true
true
Secure and optimize developer workflows with 1Password Developer. Store, manage, and deploy secrets at scale across web applications, CI/CD pipelines, Kubernetes, and more
2024-10-12 00:00:00
2024-01-01 00:00:00
https://images.ctfassets…hic-1200x630.jpg
website
1password.com
1Password
null
null
21,980,951
https://v1again.wordpress.com/2020/01/04/mentoring-entrepreneurs-customer-financing/
Mentoring Entrepreneurs: Customer Financing
View more posts
I am passionate about entrepreneurship. I truly love it. And I deeply believe in equality in our society. And so I decided to put those two beliefs together when I sent out the below tweet: Before I hit send on the Tweet, I thought that I would only get a few responses. I do not have a lot of followers. But it blew up. The response has been overwhelming. I started the mentoring yesterday and the first sessions have gone well. I wanted to share the advice that I shared because I am sure that it is applicable to other entrepreneurs. The first mentoring conversation was with Jose, who is running a SaaS business to generate customer referrals, ReferralVoodoo.com. He has a nice start to his business. He and his partner have bootstrapped it to 80 customers and 7 employees in only 10 months post launch. Really nice growth without any outside capital! Well done Jose. He has grown his referral software company in a very crowded space by focusing on small service providers, such as doctors' offices. Jose said that he wanted to discuss capital raising. He had just raised some money and was thinking about raising more. After discussing his business in a little more detail, I suggested a different path than a Seed Round…using Customer Financing. Customer Financing is when you get your customers to fund your growth. That’s the best form of financing because you are not selling equity in your company. So how do you get customers to fund your growth? Sell annual licenses. During our conversation, Jose mentioned to me that almost all of his customers were on monthly plans. Also, the ASP was about $250 – $400/month. He offered a 20% discount to purchase annual plans, but few customers took him up on it. I told him one possible reason that few customers chose that option is that it isn’t worth $50 – $80 in savings to give up the flexibility of being able to cancel the contract at any point. The savings are just not a significant enough incentive. I suggested that he run an experiment. Take a cohort of prospects and only offer them annual plans. Tell them that monthly plans are no longer an option. Do it for just a cohort as an experiment. Guess what happened… At the end of the day, ReferralVoodoo was a great product at a fair price. Taking away the option of monthly plans didn’t change the customers’ intent to purchase. They chose the monthly option because it was easier and it was given to them as a choice. But that choice was NOT required to close deals. The real benefit of an annual plan to Jose and his team is that pre-paid annual licenses create negative Working Capital. Meaning, you have cash for a contract that has not been fully used yet. Said differently, you have a year’s worth of cash flow from a customer up front. This allows you to invest that money in growth. I knew about this option because I did it myself at Validately. Switching to annual-only licenses for my SaaS company was the single best operational decision that I made in my entire career. We immediately went from NEEDING to raise capital (which we likely wouldn’t have been able to do given our churn metrics at the time) to being cash flow positive. We used the cash flow from customers pre-paying for annual licenses to grow the team. Our ARR went from ~$350k at the time of the switch to ~$4.5 million at the time of our exit only 3.5 years later. Also, our cash in the bank was at its highest level at the time of our exit. It was double the cash in the bank at the time of our prior capital raise.
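To make the working-capital point above concrete, here is a tiny illustrative calculation (the $300 midpoint and the single-customer framing are my own assumptions; only the $250 – $400 price range and the 20% discount come from the post):

```python
# Illustrative arithmetic only: day-one cash per customer on a monthly plan
# vs. a prepaid annual plan, using the price range and discount quoted above.
monthly_price = 300              # assumed midpoint of the $250-$400/month ASP
annual_discount = 0.20           # the 20% discount offered on annual plans

cash_day_one_monthly = monthly_price                               # one month collected up front
cash_day_one_annual = monthly_price * 12 * (1 - annual_discount)   # full year prepaid

print(f"Monthly plan, cash on day one:  ${cash_day_one_monthly:,.0f}")
print(f"Annual prepay, cash on day one: ${cash_day_one_annual:,.0f}")
print(f"Extra cash to reinvest now:     ${cash_day_one_annual - cash_day_one_monthly:,.0f}")
```

The discount ($60/month here) is small to the buyer, but the seller banks the better part of a year's revenue per customer on day one, which is exactly the negative working capital being described.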
One word of caution is that you should focus on the word EXPERIMENT in the advice above. I am a big believer that successful startups have to constantly be experimenting. And not just with product or UX, but also with pricing, sales and marketing. Hide the monthly option on your public pricing page for a quarter. See what happens. If it works for your business, it will dramatically change your company for the better. If you try it, let me know how it goes. Love the latest updates. Congrats on the sale of the company! Would Appreciate the continuity of having this in WP – but is there any chance you can post to Medium? I always like to refer to / share insightful content and have articles in one place Look forward to the next article regardless of location 🙂 happy new year! Do you mean have this blog on two platforms?
true
true
true
I am passionate about entrepreneurship. I truly love it. And I deeply believe in equality in our society. And so I decided to put those two beliefs together when I sent out the below tweet: Befor…
2024-10-12 00:00:00
2020-01-04 00:00:00
https://v1again.wordpres…t-3.26.27-pm.png
article
wordpress.com
V1Again
null
null
4,971,849
http://nymag.com/daily/intelligencer/2012/12/silicon-valleys-exclusive-shuttles.html
The Commuter Kings: Riding Along on Silicon Valley’s Exclusive Shuttles
Kevin Roose
It’s 7:55 a.m., and I can’t find the Facebook bus. On the corner of Cole and Haight, where it’s supposed to be picking me up, all I see is a Google bus. In that moment, before my morning coffee, it feels as if some kind of urban planning dam has been breached — the San Francisco version of the Lewis Black bit about seeing a Starbucks across the street from another Starbucks and realizing it’s the end of the universe. But it’s not. It’s a daily occurrence here, in the shadowy shuttle world of Silicon Valley. I’ve come to San Francisco to investigate the rise of these shuttles, which are used by tech companies to ferry thousands of their employees from the city to Silicon Valley and back. Google has them. Facebook has them. Apple, Genentech, and Electronic Arts have them. Every day, beginning at around 6 a.m., they pick up employees at dozens of stops around the city and deposit them an hour later on the manicured suburban campuses of their tech companies. At night, they reverse the route, with the last riders getting back to their city dwellings around midnight. Collectively, these buses represent a vast armada of plush, Wi-Fi-enabled chariots, delivering the precious brains of coders and other employees safely to their destinations without enmeshing them in the hassle of public transportation. I find the Facebook bus several minutes later — it’s idling across the street — and climb aboard. My host, a Facebooker who has kindly authorized me to ride with him on his Friday morning commute, explains that we should speak *sotto voce* unless we want to attract attention. “Talking is usually pretty discouraged,” he says. “People converse up until they hit the highway, and then the unspoken rule kicks in, and it goes quiet.” Party buses these are not. From interviews with a dozen or so shuttle commuters at various companies in Silicon Valley, I’d learned that most tech shuttles are noiseless, with employees checking their e-mail, pinning on Pinterest, or dozing on the ride to work. There are exceptions — one former Facebook employee used to break out cheap Champagne and plastic cups on the Friday after-work shuttle, and a couple of Googlers reported some “light PDA” between co-worker couples — but for the most part, placidity reigns. “I was once yelled at for streaming Obama’s acceptance speech at the DNC,” a Googler reported. “It was using too much bandwidth.” The Silicon Valley shuttle system began as an unofficial series of van rentals and car-pool arrangements between workers who lived in San Francisco and in the East Bay, but has recently blossomed into a full-fledged company fleet. Google says its buses — which cost upwards of $500,000 apiece — carry a combined 4,500 to 5,000 riders a day. Facebook says that between 40 and 47 percent of its employees use some form of alternative transportation, including six different shuttle routes, to get to work. Both companies employ transportation managers who use complicated tracking systems to figure out the best ways to hack traffic in real time and ensure that the shuttle-to-rider ratio stays optimized. The shuttles themselves are plush but nondescript. Google’s Van Hool buses have the company name stitched into every headrest, and Electronic Arts’ buses are wrapped with portraits of video-game characters. My Facebook shuttle, the one nicknamed “Big Blue,” was a double decker with a full-length sunroof on top and clean, new-looking seats. 
My host and I roll along quietly on our way to Menlo Park, passing stores like “Happy High Herbs” in the Haight and wall murals of Bob Marley on Valencia Street. A few more riders pile in at each stop — first two or three, then a small handful, then a dozen or so. The top level remains entirely empty, and the bottom level only reaches half capacity once we hit the Dolores Park stop, where a big group of sleepy-looking Facebookers piles on. Before I boarded, my host had asked me not to reveal the exact locations where Facebook’s shuttles stop. The reason, he explained, was that Facebook and other tech companies were currently haggling with city officials over where they were and weren’t allowed to pick up passengers. The relationship between city officials who run the Muni, San Francisco’s public bus system, and the largely unregulated world of private shuttles has historically been tense, and is worsening as shuttles become a more prominent feature of city life. At least one San Francisco official, Supervisor John Avalos, has called for more stringent oversight of the behemoth tech buses, including stopping them from picking up passengers at Muni stops. (The practice is already officially prohibited, but one city staffer told me it was currently treated with “a wink and a nod.”) The prominence of shuttles has also amplified caste divisions, and magnified a culture clash between longtime San Francisco residents and the tech gentrifiers who have begun flooding into the city limits in recent years. In August, a Google bus driver was caught on film threatening a bystander who was taking a photo of the bus parked in a Muni lane, blocking the way for public buses and bicyclists. For many city residents, it looked like the self-centered tech ethos made manifest. “There’s definitely a divide,” one city official told me. “People are waiting for their Muni bus, and they see a fancy Google bus delaying their bus, so people who don’t even work in San Francisco can get to their high-paid jobs on time.” Still, the private bus system carries substantial benefits. Large, fuel-efficient buses save untold carbon emissions, take thousands of cars off the road, and fill voids left by the relatively sparse Muni routes. Apartments near shuttle lines have seen their values rise, and more than one Silicon Valley worker told me that they’d picked a place to live based on an existing route. In fact, for many tech workers, the shuttles are the only thing that makes living in San Francisco possible. Brendon Harrington, Google’s director of transportation programs, talked up the green benefits of his work when we spoke. He touted Google’s biodiesel engines, solar-powered charging stations, and the thousands of tons of carbon emissions its shuttles save every year. And indeed, the San Francisco County Transportation Authority, which made a presentation about the private shuttles in October, estimates that the shuttles save a net 28.7 million VMT (vehicle miles traveled) and replace roughly 757,000 single-passenger car trips. “It’s been enormously helpful, both from an environmental standpoint and in terms of getting people to work,” he said. The buses have non-environmental benefits, too — at Google, for example, employees can rent them on the weekends to shuttle friends to a party, say, or host an outing to Napa Valley. But for most employees, the biggest boon is having a clean, well-lighted place to work on the morning commute. During my Facebook ride, nothing of note happened. 
A programmer in a red plaid shirt edited a Wikipedia page on his laptop, while a bespectacled bearded guy nodded off across the aisle. The only noise, other than the clicking of keys, was the cheery “Good morning!” given by the bus driver, as each new passenger boarded. You might expect more from the rolling offices of tech companies, which are known for giving free massages and gourmet meals to employees at their headquarters. But when I asked my host if there were any hidden perks aboard the Facebook bus, he laughed. “Yeah, we keep people who complain about the IPO in here,” he joked, pointing to a small cabinet. “They serve us drinks.”
true
true
true
How do the titans of tech get to work?
2024-10-12 00:00:00
2012-12-26 00:00:00
https://pyxis.nymag.com/…social.w1200.jpg
article
nymag.com
Intelligencer
null
null
6,804,037
http://retractionwatch.com/2013/11/25/want-to-report-a-case-of-plagiarism-heres-how/
Want to report a case of plagiarism? Here’s how
Author Ivan Oransky
If you’ve come across a case of plagiarism and want to report it to the proper authorities, a new article in the journal *Ethics & Behavior* would be a good place to start. Mark Fox, a professor of management and entrepreneurship at Indiana University, and Jeffrey Beall, a librarian at the University of Colorado, Denver, known for Beall’s List of questionable publishers, teamed up for the article. As they write in their abstract: Scholarly open-access publishing has made it easier for researchers to discover and report academic misconduct such as plagiarism. However, as the website Retraction Watchshows, plagiarism is by no means limited to open-access journals. Moreover, various web-based services provide plagiarism detection software, facilitating one’s ability to detect pirated content. Upon discovering plagiarism, some are compelled to report it, but being a plagiarism whistleblower is inherently stressful and can leave one vulnerable to criticism and retaliation by colleagues and others (Anderson, 1993; Cabral-Cardoso, 2004). Reporting plagiarism can also draw the threat of legal action. This article draws upon our experiences as plagiarism whistleblowers with several goals in mind: to help would-be whistleblowers be better prepared for making well-founded allegations; to give whistleblowers some idea of what they can expect when reporting plagiarism; and to give suggestions for reducing whistleblowers’ vulnerability to threats and stress. Of course, you could always *not* report plagiarism: One unfortunate alternative to reporting plagiarism is to do nothing. In some cases inaction may be partly motivated by colleagues who provide advice such as: “No one will thank you for this”, “Be very careful that this doesn’t hurt your career”, or “Don’t be surprised that this gets covered up if you do complain”. As the authors of a Special Report on plagiarism in the Chronicle of Higher Educationobserve: “academe often discourages victims from seeking justice, and when they do, tends to ignore their complaints — a kind of scholarly ‘don’t ask, don’t tell’ policy” (Bartlett and Smallwood, 2004, p. A8). However, inaction may lead those who have discovered plagiarism to experience a lingering unrest as to whether they have done the “right thing”. An Office of Research Integrity (1995) study provides some insight into whether whistleblowers regret their actions. For whistleblowers that experienced no negative actions, 86% would definitely blow the whistle again and a further 5% would probably do so. Surprisingly, 60% of those who suffered one or more adverse actions as a result of their whistleblowing would do so again; and 15% probably would do so. The paper includes sections with useful tips, ranging from “Be Aware of What Constitutes Plagiarism” to “Some Allegations Will Be Taken More Seriously Than Others” to “Be Prepared for the Threat of Legal Action” to “Publicizing Allegations Through Mainstream Media and Online.” That last section ends with Also, carefully consider what, if any, use you want to make of Internet: Do you want to create a blog that highlights the plagiarism? Do you want to make a website such as Retraction Watchaware of any articles that have been retracted as a result of your whistleblowing? Why, yes, you do, of course! We’ve also published a guide to reporting alleged misconduct, to which we would add trying PubPeer. The authors conclude: Do not assume that you will be applauded for raising allegations. 
In particular, it is unlikely that colleagues and friends of the plagiarist will applaud your actions. Indeed, they may retaliate by examining your own published works, so it is not a good idea to report plagiarism if you yourself have ever committed research misconduct. Others may wish that plagiarism allegations be dealt with quietly, as publicity may adversely affect the reputation of the institution or journals where the misconduct occurred. Having said this, you should keep in mind the benefits of reporting plagiarism, namely that this serves as a deterrent to others and helps maintain the integrity of the academic and scholarly record. The easiest part of whistleblowing on plagiarism discovered online is that one can make allegations anonymously, citing the source and the document containing plagiarized wording — and that can be investigated by the institutional or federal investigative officials without any need to know or communicate with the whistleblower. Using Retraction Watch, PubPeer, SciFraud or other blogs with a pseudonym allows such anonymity. All or almost all organizations have an explicit policy to take no notice of anonymous allegations. This practice goes back at least to Roman times. The organization can therefore ignore almost all allegations of wrongdoing that are reported to them. There is no benefit to championing the cause of a powerless and impotent whistleblower who feels they must remain anonymous for personal or professional safety. Therefore organizations and individuals treat such whistleblowers as if they have Ebola. I have no expertise in “Roman” history, ancient or modern, but in my 17 years of experience in the United States’ Office of Research Integrity (ORI), we routinely considered, and encouraged institutions to consider, anonymous allegations (especially ones of plagiarism). As noted in the preamble (Part III.C.) to the 2005 ORI regulation, http://ori.hhs.gov/FR_Doc_05-9643: “. . . it has been longstanding ORI practice to accept oral allegations, including oral, anonymous allegations . . . [which] may contain relatively complete information. . . . or lead to more complete information. . . . We also note that the Offices of the Inspector General at various Federal agencies routinely accept oral and anonymous allegations. . .” As noted in the paper that I wrote as an ORI official on such cases, published in Academic Medicine 73: 467-472 (1998) — http://journals.lww.com/academicmedicine/Abstract/1998/05000/Anonymity_and_pseudonymity_in_whistleblowing_to.9.aspx — “. . . from 1989 through 1997. . . the record shows that research institutions and the ORI have treated such allegations seriously. . . . however, very few anonymous complaints have been sufficiently substantive to be pursued in formal inquiries or investigations. . . ” In my ORI experience, few institutions have formal policies requiring that complainants have to identify themselves. I should not have written “explicit policy”. I think ‘de facto’ would be better. Don’t you think the low fraction of ORI reports attributed to anonymous reporting indicates that many anonymous communications are simply discarded? Well, I have done some whistleblowing before and I must say that almost all of the time anonymous complainers are ignored. Someone using an alias has to make an effort to get some answer (which 60% of the time will be ‘blablabla + what is your name, phone number and affiliation’), and a great effort to see some action taken, and a major effort to see positive action being taken.
I think ignoring the complaints work more at the individual-receiving-the-email level than on an official, organisation level. Oh yes, and many organisations have these encouraging statements that ‘any complaint will be taken seriously’ however have ignored complaints and asked for IDs. Also Pubpeer entries are usually still ignored. I have suffered negative effects, and I did and will do it again. I still believe in the readers who will heed the reported irregularity, and all works from that source will more rapidly fall into the big trash bin where most scientific literature eventually end. I think science only works well in the long term, but we can catalyse the self-correcting mechanisms with few simple actions. Sorry to hear that happened to you. People interested in the issues discussed here should have a look at what was reported by Jorge Luis Borges already in 1939 in his article “Pierre Menard, Author of the Quixote”. An extreme case of plagiarism! Two most recent examples of apparent self-plagiarism and data duplication: Case 1. Data presented during the Society for Neuroscience SfN2013 meeting (abstract 118.03, Dutta et al., Mtor pathway mediates diazoxide preconditioning in cultured neurons) http://www.abstractsonline.com/Plan/ViewAbstract.aspx?sKey=b63ffa4a-63ff-40d4-bb7a-72d0612504eb&cKey=c0396af2-1f2f-4263-9ef1-24a50024776b&mKey=%7b8D2A5BEC-4825-4CD6-9439-B42BB151D1CF%7d appears to be identical to the abstract presented by the same group at the American Physiological Society’s Experimental Biology EB2013 meeting earlier this year, and published in FASEB Journal (Dutta et al., 2013; 27: 691.1) http://www.fasebj.org/cgi/content/meeting_abstract/27/1_MeetingAbstracts/691.1?sid=72a6ec8f-2723-4fc6-a55b-b7b5be38b4c7 Case 2: Data presented during the SfN2013 meeting by the same group (abstract 144.05/06 by Rutkai et al.) http://www.abstractsonline.com/Plan/ViewAbstract.aspx?sKey=29fb0bb7-b3d1-45bb-8cd5-cc0a4f8334b8&cKey=8b3186fd-b1be-4f7d-965d-8ba86594f430&mKey=%7b8D2A5BEC-4825-4CD6-9439-B42BB151D1CF%7d appears to be highly similar to the abstract presented at the Experimental Biology EB2013 meeting earlier this year, and published in FASEB Journal (Rutkai et al., 2013; 27: 1131.10) http://www.fasebj.org/cgi/content/meeting_abstract/27/1_MeetingAbstracts/1131.10?sid=94027d44-8d11-4193-999d-40ee7ef378c2 mTOR Pathways I don’t think many people consider it unethical (or unusual) to present the same data at multiple meetings. too bad if they do not think it is unethical, because they should It also violates Ethical Policies of the conferences’ Societies too (which request all presented data to be novel) I’m with Dan Z on this one. However, a lot depends on where and how the abstracts will be published. If in a regular edition of a journal, searchable online, then it’s probably not OK to duplicate an abstract. If in a special edition abstract book, only available to those who attended the conference, then it doesn’t really count as a publication because it’s not on PubMed and not findable in the public domain. Abstracts typically do not show up on PubMed, and are not counted as publications on CVs, or counted toward h-index or other citation metrics. Some journals won’t even let you cite them in a paper. If you paint with a wide brush and classify all abstracts as published, then what if I print out an abstract I just wrote, give a copy to my colleague, and then file it on the shelf in my office. 
Technically it’s now published because someone saw it, but I don’t think you’d find anyone who would say it’s unethical to take such an item and publish it “for real” in a journal after that point. So, it really comes down to where you draw the line and classify something as published. For me, that line is whether it’s in PubMed, has a DOI, is in a journal with an ISBN, and is findable in the public domain. In the above cases CR lists, while technically the abstracts are available online, they are not listed on PubMed and are not found by Googling for the author’s names and key words. They’re in special editions of the journals which are only accessible and searchable through the proprietary web pages of the conferences in question. Now granted, the authors may still have broken the rules of the respective conferences, but it’s not a clear cut case of publishing the same thing twice because it could be argued that these are not true publications. On a side note, BTW, there’s technically nothing wrong with presenting the same scientific data at different conferences in the form of a talk. It’s often advantageous to present data to different audiences and get different reactions/feedback. Would it be right to say to a scientist “you gave that talk already, talk about something else or we’ll report you for self-plagiarism”? Try saying that to a stand-up comedian and see how far you get. This being the case, now consider the poster question again – if you can’t stop a person from presenting the same talk at 2 different conferences, why would you consider it OK to stop someone presenting the same poster twice? After all a poster is just like a talk, a kind of performance in front of an audience. Key note addresses, plenary lectures from senior scientists are identical for years….. How about practicing some measure of transparency with respect to the recycling of our previous work? Whether it is a paper being presented at a conference, a key note address or plenary lecture, simply include a note or a slide that indicates the degree of recycling, saying something to the effect that this paper has been presented (in part, perhaps) at such and such a conference or group, or that various slides from this presentation were presented elsewhere, etc., etc.? I also agree with Dan Z. So does Elsevier (and probably other publishers): here is the statement from the Elsevier Submission Declaration section of the Guide for Authors: “Submission of an article implies that the work described has not been published previously (except in the form of an abstract or as part of a published lecture or academic thesis or as an electronic preprint,…” Presenting the same talk at multiple meetings to presumably largely if not wholly different audiences and the discussion that occurs in doing so is good for science. That activity is not the same as republishing printed material. This is relevant to full-length articles but does not cover abstracts (which are a separate type of publications). If you present the same talk many times, it is fine – but if you present an abstract with real data several times – it is self-plagiarism, and we shall teach our students from Day 1 that this is a bad practice You have assumed that Elsevier is ethical. 
Alot of evidence points to the contrary, including flawed peer reviews, biased editors who suffer no punitive action by Elsevier, scandals related to the weapons industry by the parent company Reed-Elsevier, journals linked to conflicts of interest, and so much else that suggests that not all is well (ethically) at this publisher. Hence their massive marketing campaign to show that they are ethical. That means that Elsevier’s guidelines are somewhat folly. An abstract is simply a window onto the larger picture. So, indeed, if an abstract is ONLY an abstract, of a meeting, then that’s fine. However, usually an abstract is usually the “window” to what was presented at the meeting, either as a poster or as an oral presentation. If someone were to use, to use a clear example, Martin Luther King’s quotations from his speeches, without due attrribution or without the use of quotation marks, surely you will call this pure and blatant plagiarism. One could even say that Barack Obama is guilty of self-plagiarism because he does not list all the speeches where he first stated “Yes, we can”, and I assure you, that was self-plagiarized a zillion times. So, too is the recycling of one’s own words, either written or spoken, at a scientific meeting. Just because it can’t be tracked doesn’t mean that it didn’t exist. A self-plagiarized abstract is, in my view, an unethical act, because it most likely represents (very realistically) the self-plagiarism of ideas that were already shared (i.e., published as words or ideas) to the scientific public. Kenrod, I also strongly disagree with your notion that presenting the same presentation at different meetings is not plagiaristic in nature, and think that your concept of “is good for science” is problematic. This kind of reminds me of professors who just recycle the same notes, lectures, and speeches, year in, year out, at universities. Why aren’t they charged for gross self-plagiarism? Perhaps Miguel Roig could weigh in here, or Alan Price, based on their extensive experience. JATdS, the appropriateness of making the same or similar presentation in two or more conferences can be a somewhat complicated issue and one that I feel has yet to be fully addressed by the research integrity community. In fact, my sense is that there has been little formal discussion on the topic. No one can deny that there are benefits to communicating our data and/or ideas to the widest possible audience and I want to believe that it is, in part, on this basis that researchers present the same work in two or more conferences. But, I think that if an organization allows for the presentation of previously disseminated data, then it is still very important that such presentations be done with the utmost degree of transparency (see my earlier post). This especially important in terms of how we represent ourselves to those who must evaluate our research productivity for purposes of promotion and tenure, for multiple presentations of the same ideas/data, like duplicate publications, can also be used for purposes of vita padding. My general view about duplication of abstracts is this: As long as the audience (and the reader of the published abstract) is clearly and unambiguously alerted to the fact that the material was previously presented, then I don’t see such instances as self-plagiarism. 
But, I do have concerns with ‘duplicate’ conference presentations in situations when changes are made to the second version of the same presentation that make it appear as if it is new, when in fact it is not. For example, a second presentation may contain minor changes to the abstract, title, and/or to the authorship, that could conceivably lead to confusion as to the exact relationship between each set of data described in the slightly different abstracts. Then, there is the question of whether a second conference presentation which reuses data from an earlier presentation in different ways and/or combine some new data with old data that are then packaged as a ‘new’ presentation. The fact is that there can be all sorts of permutations of these conference presentations that could lead to much confusion (new added data with modified text and title, old data reanalyzed with new text and title and different authors, etc.). Mind you, some of these activities may be perfectly acceptable AS LONG AS THE READER IS MADE FULLY AWARE OF WHAT IS BEING DONE, but given that they may ultimately result in a published one paragraph abstract, can the material in these abstracts be misinterpreted? I think these scenarios can lead to problems. I note that these types of issues equally apply to actual publications and, again, the activities may be equally acceptable if readers are fully informed as to what is happening with the various data sets. Of course, if fully informed, most journals would reject this sort of duplication if there is no acceptable rationale for the different analyses, etc. Ultimately, I think the important question is not so much whether it is self-plagiarism to publish the same or similar abstracts in two different proceedings. The question for me is whether any such duplication, even if allowed by the different societies and with full transparency, can lead to confusion as to how the material presented is ultimately interpreted within the larger scientific record. If the practice leads to misinterpretation, and I think it has the potential to do so in some cases, then, it is problematic and should be curtailed. To JATdS– I have no experience with abstracts and “self-plagiarism” — as I have said before, I consider “self-plagiarism” to be a non-sequitur — there is only “duplicate or redundant publication” since “plagiarism” is the use of other person’s words or ideas without giving appropriate credit. I tend to agree with vhewig’s views (above), but I do not disagree with Miguel Roig’s thoughtful response. Alan Price Most conferences have specific policies regarding abstracts. Some meetings absolutely require that all data presented be new, while others do not. The former is more common when the abstracts are going to be printed in full in an ISI or Pubmed indexed publication, which is rare. In either case, the same data will most likely be published later in a journal article, which is the primary mode of scientific communication. Meeting presentations are not that serious – usually just an excuse to travel to the meeting destination. By the way, this could be a good Ask RW question: is it OK to “recycle” meeting abstracts/presentations? I reported a case of possible self-plagiarism just over a week ago, partially following recommendations from JATdS. I chose not to contact the authors directly, and I chose to do it anonymously. 
I emailed multiple editors to both of the journals involved, thinking it might be easy for a single editor to ignore an anonymous email but not so easy for multiple editors and journals. Besides providing links and citations to the two articles in question I very briefly described the issue. I received a reply from one of the editors of the journal with the most recent publication within two hours. That response was cc’d to the entire distribution list plus someone who appears to be on the staff of the publisher. The editor promised an inquiry and a response back to me. We’ll see how this goes. The keynote addresses and ‘overview’ abstracts may be similar for years – but presenting the same DATA twice is not. The Society trusts the investigators to present novel data, and not to present data already published with a different conference. We can take votes, but the fact remains: self-plagiarism is not acceptable. A graduate student would have been punished for this – why PIs are any different? Lets be honest, and change the culture Wow, this is absolutely amazing timing for this post. I came to this page specifically (I follow it regularly, and this was the first place I thought to go with this question…) because I stumbled across something that I am not sure what to do about. I am in the beginning stages of a new research project and have been doing some pretty extensive reading on one specific theoretical construct over the past several weeks. Just a few minutes ago, I was reading a dissertation on the topic and realized that what I was reading sounded familiar…TOO familiar. And I am not talking about a sentence or two…I am talking about entire paragraphs that were lifted (almost completely verbatim, including entire sentences that WERE verbatim) from another paper on the subject, including all of the citations and the same, very peculiar and not-by-accident word choices. And it’s not just the wording; they really took someone else’s original idea about the dimensional structure of a construct and claimed it as their own…for their dissertation. The original paper was published by a team of researchers in the U.K. in 2002, and the dissertation was written in 2010, by a student in the U.S., with no mention of contributions from any of the authors on the original publication. What would you do? Any advice (at all!) would be greatly appreciated. I have never encountered a situation like this and have no idea how to handle it. Thank you in advance 🙂 PS- I really love this blog! Sure. Call or write to the Research Integrity Officer for graduate students (may be the Associate Dean for Research at the Graduate School, or the Associate Vice President for Research of the University) — you should be able to find their name on the institution’s website (or write to me in confidence, and I will find it for you). Simply show the RIO what you have observed. So, in this kind of case, a dissertation has been accepted/published and one finds plagiarism (and other irregularities) years later, what happens? I know of a situation similar to publichealthwatch’s and the person has held several uni teaching positions (including one in which this person ironically served as ethics adviser), all based on a PhD not honestly earned. 
The paper titled “The Method of Control Solution to Deal with the Dynamics in Designing” , by Xiaolong Shen, Hunan Industry Polytechnic in the IEIT Journal of Adaptive & Dynamic Computing, 2012(4), 5–9, 2012, seems very similar to “Control of Designing” , by Hou Yuemin, Ji Linhong, in the Proceedings of the 10th International Conference on Frontiers of Design and Manufacturing, June 10~12, 2012, Chongqing I found an interesting case of plagiarism dating back over 100 years ago while preparing a book for republication on Createspace. While preparing this republication I ran across an apparent century old secret. The author of this book, Rev. Leighton Grane engaged in what appears to be plagiarism. I ran across this while searching for the book “A Crime Against the Soul” which he mentions in the below quote. It took quite a bit of time to find the book in question, as the actual work has a different name completely. I did find a passage by one Theodore T. Munger that was almost the same as the one in the book, only the time difference was off by about 20 years. Some words were different, and as it goes the two passages become less alike, but the same idea is there. I think when you read the two passages side by side you will see quite clearly that this is a classic case of plagiarism. Grane provides plenty of references for the works he used in this publication. So why not with this one? I can only guess from what I could learn from the original author is that Grane did not agree with Munger’s theological position. Grane was Anglican, while Munger was a Congregationalist. Anyone with a passing knowledge of English Church history will know that Congregationalists were borne out of confrontation with Anglicanism. It may be that Grane was not humble enough to admit his source of information was from someone he considered a theological opponent, or even a heretic. Of course this is just conjecture and you can make your own mind up about it. When, sixty or seventy years ago, the famous Caspar Hauser appeared in the streets of Nuremberg, released from a dungeon in which he had been confined from infancy, seeing no human face, hearing no human voice, nor ever seeing the full light of day, a distinguished German lawyer wrote a legal history of the case, which he called, A Crime Against the Life of a Soul. It was a fitting title. But it is even more accurately descriptive of the process to which so many voluntarily subject their own spiritual nature. Men muffle the voice of conscience, and next time its warning falls weak and muffled upon the will. And the weakness of remonstrance and the ease of resistance grow with every repetition of this wilfulness. Impulses of reverence are drowned in a sea of triviality: men walk by sight until the faculty of faith languishes and dies. Time and the petty concerns of this life so engross us that the intuitions of Immortality lose their force, and Eternity fades into nothingness. Paralysis sets in, not of the body perhaps, but (still more to be dreaded) of the soul, paralysis which can have no other end than spiritual death. 
Leighton Grane: The Hard Sayings of Jesus Christ, 1899 When, a half century ago, the famous Kaspar Hauser appeared in the streets of Nuremberg, having been released from a dungeon in which he had been confined from infancy, having never seen the face or heard the voice of man, nor gone without the walls of his prison, nor seen the full light of day, a distinguished lawyer in Germany wrote a legal history of the case which he entitled, A Crime against the Life of the Soul. It was well named. There is something unspeakably horrible in that mysterious page of history. To exclude a child not only from the light, but from its kind; to seal up the avenues of knowledge that are open to the most degraded savage; to force back upon itself every outgoing of the nature till the poor victim becomes a mockery before its Creator, is an unmeasurable crime; it is an attempt to undo God’s work. But it is no worse than the treatment some men bestow upon their own souls. If reverence is repressed, and the eternal heavens are walled out from view; if the sense of immortality is smothered; if the spirit is not taught to clothe itself in spiritual garments, and to walk in spiritual ways: such conduct can hardly be classed except as a crime against the life of the soul. Theodore T. Munger: The Freedom of Faith, 1883 What we need is a website where one can publish side by side plagiraized text and the alleged original text from where it was copied. Academics will never turn against each other pubically only anonymius website wehre you can clearly see it will force them to face reality It is really not a good idea to be a whistle blower on plagiarism, leave that work to editors of the journals and reviewers. Eh? Apparently the editors and reviewers missed the plagiarism, if someone else needs to report it! Sounds like something that a plagiarizer would say… Someone I know sometimes reports plagiarism to the journal itself. However, there were times that the journal responds quite unexpectedly, where even after showing them proof of misconduct (copied text, combining sentences from different sources, etc., possible falsification of results), the journal chooses to back the authors. If this happens, then should the whilstleblower just stop there?
true
true
true
If you’ve come across a case of plagiarism and want to report it to the proper authorities, a new article in the journal Ethics & Behavior would be a good place to start. Mark Fox, a prof…
2024-10-12 00:00:00
2013-11-25 00:00:00
http://www.retractionwatch.com/wp-content/uploads/2013/11/ethics-and-behavior.jpg
article
retractionwatch.com
Retraction Watch
null
null
1,158,049
http://codybrown.tumblr.com/post/419106809/nyc-needs-a-tech-startup-blog-lets-build-it
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,241,280
https://github.com/doyubkim/fluid-engine-dev
GitHub - doyubkim/fluid-engine-dev: Fluid simulation engine for computer graphics applications
Doyubkim
Jet framework is a fluid simulation engine SDK for computer graphics applications that was created by Doyub Kim as part of the book, "Fluid Engine Development". The code is built on C++11 and can be compiled with most of the commonly available compilers such as g++, clang++, or Microsoft Visual Studio. Jet currently supports macOS (10.10 or later), Ubuntu (14.04 or later), and Windows (Visual Studio 2015 or later). Other untested platforms that support C++11 also should be able to build Jet. The framework also provides Python API for faster prototyping. The latest code is always available from the `main` branch. Since the code evolves over time, the latest from the main branch could be somewhat different from the code in the book. To find the version that is consistent with the book, check out the branch `book-1st-edition` . - Basic math and geometry operations and data structures - Spatial query accelerators - SPH and PCISPH fluid simulators - Stable fluids-based smoke simulator - Level set-based liquid simulator - PIC, FLIP, and APIC fluid simulators - Upwind, ENO, and FMM level set solvers - Jacobi, Gauss-Seidel, SOR, MG, CG, ICCG, and MGPCG linear system solvers - Spherical, SPH, Zhu & Bridson, and Anisotropic kernel for points-to-surface converter - Converters between signed distance function and triangular mesh - C++ and Python API - Intel TBB, OpenMP, and C++11 multi-threading backends Every simulator has both 2-D and 3-D implementations. You will need CMake to build the code. If you're using Windows, you need Visual Studio 2015 or 2017 in addition to CMake. First, clone the code: ``` git clone https://github.com/doyubkim/fluid-engine-dev.git --recursive cd fluid-engine-dev ``` Build and install the package by running ``` pip install -U . ``` Now run some examples, such as: ``` python src/examples/python_examples/smoke_example01.py ``` For macOS or Linux: ``` mkdir build && cd build && cmake .. && make ``` For Windows: ``` mkdir build cd build cmake .. -G"Visual Studio 14 2015 Win64" MSBuild jet.sln /p:Configuration=Release ``` Now run some examples, such as: ``` bin/hybrid_liquid_sim ``` ``` docker pull doyubkim/fluid-engine-dev:latest ``` Now run some examples, such as: ``` docker run -it doyubkim/fluid-engine-dev [inside docker container] /app/build/bin/hybrid_liquid_sim ``` To learn how to build, test, and install the SDK, please check out INSTALL.md. All the documentations for the framework can be found from the project website including the API reference. Here are some of the example simulations generated using Jet framework. Corresponding example codes can be found under src/examples. All images are rendered using Mitsuba renderer and the Mitsuba scene files can be found from the demo repository. Find out more demos from the project website. Top-left: spherical, top-right: SPH blobby, bottom-left: Zhu and Bridson's method, and bottom-right: Anisotropic kernel This repository is created and maintained by Doyub Kim (@doyubkim). Chris Ohk (@utilForever) is a co-developer of the framework since v1.3. Many other contributors also helped improving the codebase including Jefferson Amstutz (@jeffamstutz) who helped integrating Intel TBB and OpenMP backends. Jet is under the MIT license. For more information, check out LICENSE.md. Jet also utilizes other open source codes. Checkout 3RD_PARTY.md for more details. I am making my contributions/submissions to this project solely in my personal capacity and am not conveying any rights to any intellectual property of any third parties. 
We would like to thank JetBrains for their support and allowing us to use their products for developing Jet Framework.
true
true
true
Fluid simulation engine for computer graphics applications - doyubkim/fluid-engine-dev
2024-10-12 00:00:00
2016-05-08 00:00:00
https://opengraph.githubassets.com/7272dcd4be4e438f7cd5dca9ce9bfd3c6237e1abae8478c886250542764176c9/doyubkim/fluid-engine-dev
object
github.com
GitHub
null
null
7,504,380
http://www.bloombergview.com/articles/2014-03-31/forget-bitcoin-african-e-money-is-the-currency-killer
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,421,024
http://labs.ft.com/2014/03/resimplifying-front-end-build-process-with-a-build-service/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,326,168
http://elixir-lang.org/blog/2017/01/05/elixir-v1-4-0-released/
Elixir v1.4 released
José Valim
# Elixir v1.4 released Elixir v1.4 brings new features, enhancements and bug fixes. The most notable changes are the addition of the `Registry` module, the `Task.async_stream/3` and `Task.async_stream/5` functions, which aid developers in writing concurrent software, and the new application inference and commands added to Mix. In this post we will cover the main additions. The complete release notes are also available. ## Registry The `Registry` is a new module in Elixir’s standard library that allows Elixir developers to implement patterns such as name lookups, code dispatching or even a pubsub system in a simple and scalable way. Broadly speaking, the Registry is a local, decentralized and scalable key-value process storage. Let’s break this down: - Local because keys and values are only accessible to the current node (as opposed to distributed) - Decentralized because there is no single entity responsible for managing the registry - Scalable because performance scales linearly with the addition of more cores upon partitioning A registry may have unique or duplicate keys. Every key-value pair is associated with the process registering the key. Keys are automatically removed once the owner process terminates. Starting, registering and looking up keys is quite straightforward: ``` iex> Registry.start_link(:unique, MyRegistry) iex> {:ok, _} = Registry.register(MyRegistry, "hello", 1) iex> Registry.lookup(MyRegistry, "hello") [{self(), 1}] ``` Finally, huge thanks to Bram Verburg, who has performed extensive benchmarks on the registry to show it scales linearly with the number of cores by increasing the number of partitions. ## Syntax coloring Elixir v1.4 introduces the ability to syntax color inspected data structures, and IEx automatically relies on this feature to provide syntax coloring for evaluated shell results: This behaviour can be configured via the `:syntax_colors` coloring option: ``` IEx.configure [colors: [syntax_colors: [atom: :cyan, string: :green]]] ``` To disable coloring altogether, simply pass an empty list to `:syntax_colors` . ## Task.async_stream When there is a need to traverse a collection of items concurrently, Elixir developers often resort to tasks: ``` collection |> Enum.map(&Task.async(SomeMod, :function, [&1])) |> Enum.map(&Task.await/1) ``` The snippet above will spawn a new task by invoking `SomeMod.function(element)` for every element in the collection and then await the task results. However, the snippet above will spawn and run concurrently as many tasks as there are items in the collection. While this may be fine on many occasions, including small collections, sometimes it is necessary to restrict the number of tasks running concurrently, especially when shared resources are involved. Elixir v1.4 adds `Task.async_stream/3` and `Task.async_stream/5` , which bring some of the lessons we learned from the GenStage project directly into Elixir: ``` collection |> Task.async_stream(SomeMod, :function, [], max_concurrency: 8) |> Enum.to_list() ``` The code above will also start the same `SomeMod.function(element)` task for every element in the collection, except it will also guarantee that we have at most 8 tasks being processed at the same time. You can use `System.schedulers_online` to retrieve the number of cores and balance the processing based on the number of cores available. The `Task.async_stream` functions are also lazy, allowing developers to partially consume the stream until a condition is reached.
Furthermore, `Task.Supervisor.async_stream/4` and `Task.Supervisor.async_stream/6` can be used to ensure the concurrent tasks are spawned under a given supervisor. ## Application inference In previous Mix versions, most of your dependencies had to be added both to your dependencies list and applications list. Here is how a `mix.exs` would look like: ``` def application do [applications: [:logger, :plug, :postgrex]] end def deps do [{:plug, "~> 1.2"}, {:postgrex, "~> 1.0"}] end ``` This was a common source of confusion and quite error prone as many developers would not list their dependencies in the applications list. Mix v1.4 now automatically infers your applications list as long as you leave the `:applications` key empty. The `mix.exs` above can be rewritten to: ``` def application do [extra_applications: [:logger]] end def deps do [{:plug, "~> 1.2"}, {:postgrex, "~> 1.0"}] end ``` With the above, Mix will automatically build your application list based on your dependencies. Developers now only need to specify which applications shipped as part of Erlang or Elixir that they require, such as `:logger` . Finally, if there is a dependency you don’t want to include in the application runtime list, you can do so by specifying the `runtime: false` option: ``` {:distillery, "> 0.0.0", runtime: false} ``` We hope this feature provides a more streamlined workflow for developers who are building releases for their Elixir projects. ## Mix install from SCM Mix v1.4 can now install escripts and archives from both Git and Hex, providing you with even more options for distributing Elixir code. This makes it possible to distribute CLI applications written in Elixir by publishing a package which builds an escript to Hex. `ex_doc` has been updated to serve as an example of how to use this new functionality. Simply running: ``` mix escript.install hex ex_doc ``` will fetch `ex_doc` and its dependencies, build them, and then install `ex_doc` to `~/.mix/escripts` (by default). After adding `~/.mix/escripts` to your `PATH` , running `ex_doc` is as simple as: ``` ex_doc ``` You can now also install archives from Hex in this way. Since they are fetched and built on the user’s machine, they do not have the same limitations as pre-built archives. However, keep in mind archives are loaded on every Mix command and may conflict with modules or dependencies in your projects. For this reason, escripts is the preferred format for sharing executables. It is also possible to install escripts and archives by providing a Git/GitHub repo. See `mix help escript.install` and `mix help archive.install` for more details. ## Summing up The full list of changes is available in our release notes. Don’t forget to check the Install section to get Elixir installed and our Getting Started guide to learn more. Happy coding!
true
true
true
Elixir v1.4 brings many improvements to the language, its standard library and the Mix build tool.
2024-10-12 00:00:00
2017-01-05 00:00:00
https://elixir-lang.org/…ixir-og-card.jpg
article
elixir-lang.org
The Elixir programming language
null
null
15,786,789
http://www.cbs46.com/story/36856477/mail-carriers-usps-warns-amazon-customers-will-get-free-stuff-if-mail-is-delivered-late
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
35,085,029
https://payments.posthaven.com/rc-w4d3-getting-chatgpt-to-categorize-programming-languages
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
35,782,275
https://containernet.github.io/
Containernet
null
Containernet is a fork of the famous Mininet network emulator and allows Docker containers to be used as hosts in emulated network topologies. This enables interesting functionalities to build networking/cloud emulators and testbeds. Containernet is actively used by the research community, focusing on experiments in the field of cloud computing, fog computing, network function virtualization (NFV) and multi-access edge computing (MEC). One example of this is the NFV multi-PoP infrastructure emulator which was created by the SONATA-NFV project and is now part of the OpenSource MANO (OSM) project. ## Features - Add and remove Docker containers to/from Mininet topologies - Connect Docker containers to the topology (to switches, other containers, or legacy Mininet hosts) - Execute commands inside containers by using the Mininet CLI - Dynamic topology changes - Add hosts/containers to a *running* Mininet topology - Connect hosts/Docker containers to a *running* Mininet topology - Remove hosts/Docker containers/links from a *running* Mininet topology - Resource limitation of Docker containers - CPU limitation with Docker CPU share option - CPU limitation with Docker CFS period/quota options - Memory/swap limitation - Change CPU/mem limitations at runtime! - Expose container ports and set environment variables of containers through Python API - Traffic control links (delay, bw, loss, jitter) - Automated installation based on Ansible playbook ## Installation Containernet comes with two installation and deployment options. ### Option 1: Bare-metal installation This option is the most flexible. Your machine should run Ubuntu **20.04 LTS** and **Python3**. First install Ansible: ``` sudo apt-get install ansible ``` Then clone the repository: ``` git clone https://github.com/containernet/containernet.git ``` Finally run the Ansible playbook to install required dependencies: ``` sudo ansible-playbook -i "localhost," -c local containernet/ansible/install.yml ``` After the installation finishes, you should be able to get started. ### Option 2: Nested Docker deployment Containernet can be executed within a privileged Docker container (nested container deployment). There is also a pre-built Docker image available on Docker Hub. **Attention:** Container resource limitations, e.g. CPU share limits, are not supported in the nested container deployment. Use bare-metal installations if you need those features. You can build the container locally: ``` docker build -t containernet/containernet . ``` or alternatively pull the latest pre-built container: ``` docker pull containernet/containernet ``` You can then directly start the default containernet example: ``` docker run --name containernet -it --rm --privileged --pid='host' -v /var/run/docker.sock:/var/run/docker.sock containernet/containernet ``` or run an interactive container and drop to the shell: ``` docker run --name containernet -it --rm --privileged --pid='host' -v /var/run/docker.sock:/var/run/docker.sock containernet/containernet /bin/bash ``` ## Get started Using Containernet is very similar to using Mininet. ### Running a basic example Make sure you are in the `containernet` directory. You can start an example topology with some empty Docker containers connected to the network: ``` sudo python3 examples/containernet_example.py ``` After launching the emulated network, you can interact with the involved containers through Mininet’s interactive CLI.
You can for example: - use `containernet> d1 ifconfig` to see the config of container`d1` - use `containernet> d1 ping -c4 d2` to ping between containers You can exit the CLI using `containernet> exit` . ### Running a client-server example Let’s simulate a webserver and a client making requests. For that, we need a server and client image. First, change into the `containernet/examples/basic_webserver` directory. Containernet already provides a simple Python server for testing purposes. To build the server image, just run ``` docker build -f Dockerfile.server -t test_server:latest . ``` If you have not added your user to the `docker` group as described here, you will need to prepend `sudo` . We further need a basic client to make a CURL request. Containernet provides that as well. Please run ``` docker build -f Dockerfile.client -t test_client:latest . ``` Now that we have a server and client image, we can create hosts using them. You can either checkout the topology script `demo.py` first or run it directly: ``` sudo python3 demo.py ``` If everything worked, you should be able to see following output: ``` Execute: client.cmd("time curl 10.0.0.251") Hello world. ``` ### Customizing topologies You can also add hosts with resource restrictions or mounted volumes: ``` # ... d1 = net.addDocker('d1', ip='10.0.0.251', dimage="ubuntu:trusty") d2 = net.addDocker('d2', ip='10.0.0.252', dimage="ubuntu:trusty", cpu_period=50000, cpu_quota=25000) d3 = net.addHost('d3', ip='11.0.0.253', cls=Docker, dimage="ubuntu:trusty", cpu_shares=20) d4 = net.addDocker('d4', dimage="ubuntu:trusty", volumes=["/:/mnt/vol1:rw"]) # ... ``` ## Documentation Containernet’s documentation can be found in the GitHub wiki. The documentation for the underlying Mininet project can be found on the Mininet website. ## Research Containernet has been used for a variety of research tasks and networking projects. If you use Containernet, let us know! ### Cite this work If you use Containernet for your work, please cite the following publication: M. Peuster, H. Karl, and S. v. Rossem: **MeDICINE: Rapid Prototyping of Production-Ready Network Services in Multi-PoP Environments**. IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Palo Alto, CA, USA, pp. 148-153. doi: 10.1109/NFV-SDN.2016.7919490. (2016) Bibtex: ``` @inproceedings{peuster2016medicine, author={M. Peuster and H. Karl and S. van Rossem}, booktitle={2016 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN)}, title={MeDICINE: Rapid prototyping of production-ready network services in multi-PoP environments}, year={2016}, volume={}, number={}, pages={148-153}, doi={10.1109/NFV-SDN.2016.7919490}, month={Nov} } ``` ### Publications - M. Peuster, H. Karl, and S. v. Rossem: MeDICINE: Rapid Prototyping of Production-Ready Network Services in Multi-PoP Environments. IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Palo Alto, CA, USA, pp. 148-153. doi: 10.1109/NFV-SDN.2016.7919490. IEEE. (2016) - S. v. Rossem, W. Tavernier, M. Peuster, D. Colle, M. Pickavet and P. Demeester: Monitoring and debugging using an SDK for NFV-powered telecom applications. IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Palo Alto, CA, USA, Demo Session. IEEE. (2016) - Qiao, Yuansong, et al. Doopnet: An emulator for network performance analysis of Hadoop clusters using Docker and Mininet. 
Computers and Communication (ISCC), 2016 IEEE Symposium on. IEEE. (2016) - M. Peuster, S. Dräxler, H. Razzaghi, S. v. Rossem, W. Tavernier and H. Karl: A Flexible Multi-PoP Infrastructure Emulator for Carrier-grade MANO Systems. In IEEE 3rd Conference on Network Softwarization (NetSoft) Demo Track . (2017) **Best demo award!** - M. Peuster and H. Karl: Profile Your Chains, Not Functions: Automated Network Service Profiling in DevOps Environments. IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Berlin, Germany. IEEE. (2017) - M. Peuster, H. Küttner and H. Karl: Let the state follow its flows: An SDN-based flow handover protocol to support state migration. In IEEE 4th Conference on Network Softwarization (NetSoft). IEEE. (2018) **Best student paper award!** - M. Peuster, J. Kampmeyer and H. Karl: Containernet 2.0: A Rapid Prototyping Platform for Hybrid Service Function Chains. In IEEE 4th Conference on Network Softwarization (NetSoft) Demo, Montreal, Canada. (2018) - M. Peuster, M. Marchetti, G. García de Blas, H. Karl: Emulation-based Smoke Testing of NFV Orchestrators in Large Multi-PoP Environments. In IEEE European Conference on Networks and Communications (EuCNC), Lubljana, Slovenia. (2018) - S. Schneider, M. Peuster,Wouter Tvernier and H. Karl: A Fully Integrated Multi-Platform NFV SDK. In IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN) Demo, Verona, Italy. (2018) - M. Peuster, S. Schneider, Frederic Christ and H. Karl: A Prototyping Platform to Validate and Verify Network Service Header-based Service Chains. In IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN) 5GNetApp, Verona, Italy. (2018) - S. Schneider, M. Peuster and H. Karl: A Generic Emulation Framework for Reusing and Evaluating VNF Placement Algorithms. In IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Verona, Italy. (2018) - M. Peuster, S. Schneider, D. Behnke, M. Müller, P-B. Bök, and H. Karl: Prototyping and Demonstrating 5G Verticals: The Smart Manufacturing Case. In IEEE 5th Conference on Network Softwarization (NetSoft) Demo, Paris, France. (2019) - M. Peuster, M. Marchetti, G. Garcia de Blas, Holger Karl: Automated testing of NFV orchestrators against carrier-grade multi-PoP scenarios using emulation-based smoke testing. In EURASIP Journal on Wireless Communications and Networking (2019) ## Other projects and links There is an extension of Containernet called vim-emu which is a full-featured multi-PoP emulation platform for NFV scenarios. Vim-emu was developed as part of the SONATA-NFV project and is now hosted by the OpenSource MANO project: For running Mininet or Containernet distributed in a cluster, checkout Maxinet. You can also find an alternative/teaching-focused approach for Container-based Network Emulation by TU Dresden in their repository. ## Contact ### Support If you have any questions, please use GitHub’s issue system. ### Contribute Your contributions are very welcome! Please fork the GitHub repository and create a pull request. Please make sure to test your code using ``` sudo make test ``` ### Lead developer Manuel Peuster - Mail: <manuel (at) peuster (dot) de> - Twitter: @ManuelPeuster - GitHub: @mpeuster - Website: https://peuster.de
true
true
true
Use Docker containers as hosts in Mininet emulations.
2024-10-12 00:00:00
2016-01-01 00:00:00
null
website
github.io
Containernet
null
null
3,732,710
https://github.com/strangeloop/2011-slides
GitHub - strangeloop/2011-slides: Strange Loop 2011 speaker slides
Strangeloop
This repo holds slides from the Strange Loop 2011 conference. Learn more at http://thestrangeloop.com. All slides are copyright the speaker (not Strange Loop) and unless otherwise specified by the author, all rights are reserved by the author. Links to video will be added as they are released. Enjoy! Alex Miller - Category Theory, Monads, and Duality in (Big) Data - Erik Meijer - We Really Don't Know How to Compute! - Gerald Sussman (video) - "Post-PC Computing" is not a Vision - Allen Wirfs-Brock (video) - Simple Made Easy - Rich Hickey (video) - Languages Panel - Sussman, Hickey, Wirfs-Brock, Andrescu, Pamer, Ashkenas, Wampler (video) - On Distributed Failures (and handling them with Doozer) - Blake Mizerany (video) - Wrap Your SQL Head Around Riak MapReduce - Sean Cribbs (video) - Testing, Testing, iOS - Heath Borders (video) - CSS3 and Sass - Mark Volkmann (video) - Functional Thinking - Neal Ford (video) - Ratpack: Classy and Compact Groovy Web Apps - James Williams (video) - Glu-ing The Last Mile - Ken Sipe (video) - Storm: Twitter's scalable realtime computation system - Nathan Marz (video) - Extreme Cleverness: Functional Data Structures in Scala - Daniel Spiewak (video) - Concurrent Caching at Google - Charles Fry (video) - An Introduction to Doctor Who (and Neo4j) - Ian Robinson (video) - Building Applications with jQuery UI - Scott González (video) - JVM dynamic languages interoperability framework - Attila Szegedi (video) - fog or: How I Learned to Stop Worrying and Love Cloud - Wesley Beary (video) - Scalaz: Purely Functional Programming in Scala - Runar Bjarnason (video) - Dynamo is not just for datastores - Susan Potter (video) - Airplane-Mode HTML5: Is your website mobile-ready? [lowres] - Scott Davis (video) - Chloe and the Realtime Web [lowres] - Trotter Cashion (video) - Generic Programming Galore using D - Andrei Alexandrescu (video) - Skynet: A Scalable, Distributed Service Mesh in Go - Brian Ketelsen (video) - A Tale of Three Trees - Scott Chacon (video) - Machine Learning Hack Fest - Hilary Mason *[no slides, no video]* - CoffeeScript, the Rise of "Build Your Own JavaScript" - Jeremy Ashkenas (video) - Parser Combinators: How to Parse (nearly) Anything - Nate Young (video) - New-age Transactional Systems - Not Your Grandpa's OLTP - John Hugg (video) - Vim: From Essentials to Mastery - Bill Odom (video) - Bringing Riak to the Mobile Platform - Kresten Krab Thorup (video) - The Kotlin Programming Language - Andrey Breslav (video) - Distributed STM: A new programming model for the cloud - Cyprien Noel (video) - Getting Truth Out of the DOM - Yehuda Katz (video) - Monads Made Easy - Jim Duey - Bitcoin: Giving Money an Upgrade - Eric Brigham - Learn to Play Go - Rich Hickey and Jeff Brown *[no slides, no video]* - Transactions without Transactions - Richard Kreuter (video) - Distributed Systems with Gevent and ZeroMQ - Jeff Lindsay (video) - Mirah for Android Development - Brendan Ribera (video) - Actor Interaction Patterns - Dale Schumacher *[talk had no slides]*(video) - Embedding Ruby and RubyGems Over RedBridge - Yoko Harada (video) - Hadoop and Cassandra sitting in a tree... 
- Jake Luciani (video) - Android App Assimilation - Logan Johnson (video) - Running a startup on Haskell - Bryan O'Sullivan (video) - Heresies and Dogmas in Software Development - Dean Wampler (video) - Core HTML5 Canvas: Mind-blowing Apps in Your Browser - David Geary (video) - Distributed Systems: The Stuff Nobody Told You - Shaneal Manek (video) - DataMapper on Infinispan: Clustered NoSQL - Lance Ball (video) - Building Polyglot Systems with Scalang - Cliff Moon (video) - Event-Driven Programming in Clojure - Zach Tellman (video) - A P2P Digital Self with TeleHash - Jeremie Miller (video) - Distributed Data Analysis with Hadoop and R - Jonathan Seidman, Ramesh Venkataramaiah (video) - Taming Android - Eric Burke (video) - The Once And Future Script Loader - Kyle Simpson @getify (video) - STM: Silver bullet or ... - Peter Veentjer - Teaching Code Literacy - Sarah Allen (video) - A Tale of Two Runtimes [lowres] - Matthew Taylor (video) - Why CouchDB? - Benjamin Young (video) - Running Heroku on Heroku - Noah Zoschke (video) - Applying Principles of Stage Magic to User Experience - Danno Ferrin (video) - Have Your Cake and Eat It Too: Meta-Programming Java [lowres] - Howard Lewis Ship (video) - Product Engineering [lowres] - Mike Lee (video) - Akka: Reloaded - Josh Suereth (video) - The Future of F#: Type Providers - Joe Pamer (video) - The Mapping Dilemma - David Nolen (video) - Clojure Part 1: Intro to Clojure - Stuart Sierra - Erlang: Language Essentials - Martin Logan - Machine Learning - Hilary Mason - Node.js Bootcamp - James Carr - Learn Scala Interactively with the Scala Koans - Dianne Marsh, Joel Neely, Daniel Hinojosa - Intermediate Android - Michael Galpin - Git Foundations - Matthew McCullough - HTML 5 - Nathaniel Schutta - Clojure Part 2: Building Analytics with Clojure - Aaron Bedra - Erlang: Production Grade - Eric Merritt GitHub - Haskell: Functional Programming, Solid Code, Big Data - Bryan O'Sullivan (GitHub) - Cascalog - Nathan Marz - Intro to Django - Jacob Kaplan-Moss - Getting Cozy with Emacs - Phil Hagelberg - Git Advanced - Matthew McCullough - jQuery - Nathaniel Schutta
true
true
true
Strange Loop 2011 speaker slides. Contribute to strangeloop/2011-slides development by creating an account on GitHub.
2024-10-12 00:00:00
2011-09-21 00:00:00
https://opengraph.githubassets.com/cee69214c16e393d52878b3917b2cb2aabe2e807e25730b4abd850ef7996f023/strangeloop/2011-slides
object
github.com
GitHub
null
null
36,748,740
https://www.sfgate.com/business/article/san-francisco-downtown-wake-up-call-cities-18202995.php
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
15,139,461
https://github.com/livetourlab/live-tour-lab
GitHub - SpectivOfficial/live-tour-lab: :heart::heart::heart: Framework for creating Live Tours. Add React VR components. :heart::heart::heart:
SpectivOfficial
LiveTourLab is a framework for creating Live VR Tours. 21 ready components, easily add your own React VR code. 10x more interactive than 360 videos 10x faster creation than game engine VR 10x more extensible than GUI authoring tools 100% cross-platform including custom code 100% standard camera compatible 100% open source 0 server lock-in with static build 0 effort to start, a lifetime to master The repo includes components for navigation, cards, preloading, blending photography and video and much more. Easily add your own React VR code. Once done, you define the tour in a separate JSON file, meaning you can use the same all-static code build for many tours: **Create a new React VR & LiveTourLab project** ``` npm install -g react-vr-cli react-vr init MyLiveTour cd MyLiveTour npm install live-tour-lab --save npm install ``` **index.vr.js** While waiting for install, open index.vr.js and change it to: ``` import React from 'react'; import { AppRegistry } from 'react-vr'; import { LiveTour } from 'live-tour-lab'; export default class MyLiveTour extends React.Component { render() { return ( <LiveTour tourURI='hello-world.json' /> ); } }; AppRegistry.registerComponent('MyLiveTour', () => MyLiveTour); ``` **Grab the Hello World Tour** Replace the React VR hello world with the LiveTourLab hello world: ``` rm -rf static_assets git clone https://github.com/livetourlab/hello-world.git static_assets ``` **Run your first Live Tour!** ``` npm start ``` Point the browser as instructed, see build progress in the terminal window. *Hello world is done. Now time for code!* **Create a new file Hero.js** Let's add a new component. Create a new file in your main directory, MyLiveTour/Hero.js with the contents below ``` import React from 'react'; import { asset, View, Animated, AnimatedImage, } from 'react-vr'; export default class Hero extends React.Component { static defaultProps = { op: 1, // opacity of hero picture width: 1, // width of hero picture height: 1, // height of hero picture rotateY: 0, // position src: null, // file name }; constructor(props) { super(); this.state = { rotAnim: new Animated.Value(0), }; } componentDidMount() { this.startAnimation(); } startAnimation() { Animated.timing( this.state.rotAnim, { toValue: 360, duration: 3000, } ) .start(() => { // Restart at end this.state.rotAnim.setValue(0); this.startAnimation(); }); } render() { return ( <Animated.Image style={{ position:'absolute', layoutOrigin: [0.5, 0.5, 0], width: this.props.width, height: this.props.height, transform: [ {rotateY: this.props.rotateY}, {translateZ: -3}, {rotateY: this.state.rotAnim}, {translateX: 0.5} ], opacity: this.props.op, }} source={ asset(this.props.src) } /> ); } } ``` **index.vr.js** Open index.vr.js again. Import your new component and send it as a child to LiveTourLab. Full code again: ``` import React from 'react'; import { AppRegistry } from 'react-vr'; import { LiveTour } from 'live-tour-lab'; import Hero from './Hero'; export default class MyLiveTour extends React.Component { render() { return ( <LiveTour tourURI='hello-world.json' > <Hero entries="heroes" /> </LiveTour> ); } }; AppRegistry.registerComponent('MyLiveTour', () => MyLiveTour); ``` **Edit static_assets/hello-world.json** You indicated above that the Hero component will take care of "heroes" entries. So locate the last scene "Backyard", and add a "heroes" section, as follows. The elements are sent as props to your Hero.js component. ``` ... 
{ "id":"Backyard", "photopanos":[ {"src":"1004-fraser-11-low.jpg","rotateY":350} ], "heroes": [ { "src": "boss.jpg", "width": 2, "height": 2, "rotateY": 35, "op": 0.9 }, { "src": "boss.jpg", "width": 2, "height": 2, "rotateY": 325 } ], "infos":[ ... ``` **Reload browser** Reload your browser window and enjoy! ;-) Now add a ?dev=1 to the URL: http://192.168.1.6:8081/vr/?dev=1#Backyard With the dev=1 flag, looking down and clicking the semi-transparent circular arrow reloads the json tour definition file, updating the scene definition while keeping all states intact. This works also a production build of the code. Try changing something in one of the "heroes" entries above and reload the json to instantly see the result. **Tour defaults** A lot of information was the same in the 2 hero entries. While you could change the defaults in your Hero.js component code, it is often the case that you want different looks in different tours. So go ahead and set a default for our tour instead. Add an entry to the defaults section at the top of the json file: ``` ... "defaults": { "heroes": { "src": "boss.jpg", "width": 2, "height": 2 }, "infos": { ... ``` Now, without changing the code, we can reduce the per-scene markup to just: ``` ... "heroes": [ { "rotateY": 35, "op": 0.1 }, { "rotateY": 325 } ], ... ``` While you can still override the defaults in individual entries eg. ``` ... "heroes": [ { "rotateY": 35, "height": 0.5, "width": 0.5 }, { "rotateY": 325 } ], ... ``` TV, YouTube and Netflix is turning the world population into passive addicts of entertainment. When my children grow up, I want media to instead fuel their imagination, let them be active participants, help them be present in the moment, and feel the impact of life changing experiences, even if far away. With love and respect, I invite you to take part in creating this entirely new media format. We are making a more interactive, more immersive, more extensible, faster to create, more standards-compatible format for experiences. Creating a new media format is big. It is so big, it is something that one of the giant companies would do in a gigantic project. So here I am, asking you to join me in doing just that, with the power of open source. If you want to do something big, you have to say so, and then stand up for it when people laugh at you, or you never get there. I have had the incredible blessing to get to play a part in changing the world once already, in another industry, and with your help, we can do it again. If you in any way feel inspired by VR and what it can do for mankind, please Star & Watch this Github repo and take part in its evolution. This is a humble beginning. It took us 20 years last time, but we reached almost every person on the planet, with twice as many users as Google & Facebook combined. Big things can be done. What you see in this repo is today 21 components making it easier to create tours. We needs 100'ds. As all cinematic VR content of today, the experience suffers from lack of parallax responsiveness, too slow hardware to keep up with the resolution needed, bulky media files, expensive cameras, and a million other issues. We will solve all of that. Dream big, start small, begin now. Again, I have put my heart and soul into this, please do me the honor of both Starring and Watching the Repo. Welcome to send me an email directly. I have always put pride in being accessible and I look forward to hearing from you: [email protected] Short term contribution wanted. 
If you have other ideas, please write to me as well. - 3PP video players for 2D videos - 3PP video players for Pano videos - Model: Pick up and rotate a product. - In-tour visual editor mode for the JSON (click-drag, add objects) - 2D UI in VR, with keyboard, 2D layout etc - Video alfa: Alfa channel support on videos that works cross-platform - Blink black: Nav fading to and from black upon scene change - Gaze toggle other objects than cards, eg video overlay - Prevent info popups from appearing first few moments of entering a new scene - Heat map recording - Pixel, Analytics recording - Navigation component peaking into next scene - Optimise convert options better for sharper 8k pano photos - Optimise ffmpeg options better for VP9 pano videos - Try-room, shopping assistant, dress up, checkout - VR chat - Game components - Avatar AI - Avatar Human - Specialized avatars: Dinner companion, Personal trainer, Executive coach etc. Long term contribution wanted: - Integrate with cameras manufacturer for dual-lens - Better file format for cinematic 3D supporting parallax movement - ...and much more :-) Done: - LiveTour - Navigation - Info popup - Flexible Card with Header, Content, Footer, Image, Video, Row, Buttons - Base background 360 photo - Base background 360 video - Anchor photo on background - Anchor video on background - Anchor video with auto-play sound on background - Sound - Pre-loader - Various dev tools In progress: - World rotation instead of scene rotation to avoid the rotation flicker or use fade-to-black // Anders Welcome to contribute to the LiveTourLab core by working the source code. Given the React VR project structure, I tried many different variants for folder structure, symlinks and hard links. Finally I ended up using a much simpler solution, which is what I recommend: Just move the live-tour-lab directory out of node_modules and into your project directory. Stand in the project folder, MyLiveTour in the getting started example above, and do: ``` # edit ./package.json and remove live-tour-lab from dependencies rm -rf node_modules/live-tour-lab git clone https://github.com/livetourlab/live-tour-lab.git npm start ``` That is it. Your project still runs. No need to symlink or manage dependencies. Now go ahead and edit the source. When you have produced something great, just push to github from inside the live-tour-lab directory (not project directory). Apache License Version 2.0 Please find the component reference documentation on: Thank you!
true
true
true
:heart::heart::heart: Framework for creating Live Tours. Add React VR components. :heart::heart::heart: - SpectivOfficial/live-tour-lab
2024-10-12 00:00:00
2017-08-16 00:00:00
https://opengraph.githubassets.com/4cc196863d859d05c6c80c3a0b7a5f92d13cdff0ea66ce0d57ef3ffbfd9df976/SpectivOfficial/live-tour-lab
object
github.com
GitHub
null
null
10,159,690
http://jonathancreamer.com/advanced-webpack-part-1-the-commonschunk-plugin/
Advanced WebPack Part 1 - The CommonsChunk Plugin
Jonathan Creamer
# Advanced WebPack Part 1 - The CommonsChunk Plugin "As a front end developer, I want to split my assets up into multiple bundles so that I can load only the JavaScript, and CSS needed for a page" For as long as I can remember in my career as a front end developer, one of the problems I've constantly been faced with was how to properly bundle assets for multi-page applications. There are many approaches to solving this problem, and it seems right now the most common one is to bundle CSS, and JavaScript separately and each into a single file. Generally through grunt, or gulp, all the CSS (SASS, LESS, etc) and JavaScript each get combined together into separate files, minified, and sent down to the client. This is a very good solution to the problem at hand, but there are a few tweaks that I think can help improve things. Some issues of this solution are: - There's only 1 file for ALL the CSS in your app - A larger initial download can slow the time to render your site - Unless you load asynchronously, these large files block downloading Enter WebPack. ## WebPack WebPack is a bundler for front end assets. It can bundle lots of things. Not just JavaScript and CSS either. It can do images, html, coffeescript, typescript, etc. It does this through the use of "loaders". A loader will allow you to target a specific file extension and pass it through that loader. ### Multiple Entries for multi-page Install WebPack as a global node.js module with: ``` npm install -g webpack ``` Now create a `webpack.config.js` file ``` module.exports = { entry: { "home": "js/home", "list": "js/list", "details": "js/details" } }; ``` ``` js/home/index.js js/home/home.scss js/list/index.js js/list/list.scss js/details/index.js js/details/details.scss ``` Here we'll have 3 pages. A good way to organize things is to put each of these into separate folders... Let's add the babel-loader to webpack so we can use ES2015 modules and classes... ``` loaders: [{ test: /\.js$/, exclude: /node_modules/, loader: "babel" }] ``` Now let's create a few components to use across our pages. Similar to the page organization, you can create a folder for components, and one for each component... ``` js/components/ js/components/header/index.js js/components/header/header.scss js/components/search/index.js js/components/search/search.scss ... ... ``` This type of organization will allow you to keep all the code for a given component in the same place. Then the JS code for a module can look like this... ``` // components/search/index import "search.scss"; // WAT export default class Search { constructor({ el }) { this.$el = el; this.$el.on("focus", ".search__input", this.searchActivate.bind(this)); } searchActivate() { // ... } } ``` Our header might then import the search component, and its styles ``` import "header.scss"; import Search from "../search"; export default class Header { constructor({ el }) { this.$el = el; this.search = new Search({ el: this.$el.find(".search") }); } } ``` The search and header are components that each page would need, so let's import them into each one of our pages... ``` import "home.scss"; import Header from "../components/header"; const header = new Header({ el: ".header" }); ``` Here's what's great about WebPack: it would seem that, since we've used the same module 3 times, we'd see it repeated 3 times when we build the bundle. That's where WebPack plugins come into play. ### WebPack Plugins There are bunches of different plugins for WebPack.
One of the coolest ones is the CommonsChunk plugin. WebPack defines each module of your code as a "chunk". The job of the CommonsChunk plugin is to determine which modules (or chunks) of code you use the most, and pull them out into a separate file. That way you can have a common file that contains both CSS and JavaScript that every page in your application needs. To get started... ``` var CommonsPlugin = require("webpack/lib/optimize/CommonsChunkPlugin"); // ... module.exports = { entry: { common: ["jquery"] }, plugins: [ new CommonsPlugin({ minChunks: 3, name: "common" }) ] }; ``` Require the plugin into your webpack.config file, then add a new `common` entry. You can preload the common chunk with stuff like jQuery that you may want on every page. You then need to create an instance of the plugin down in the array of plugins. You can specify the `minChunks` option in here as well. This option says, if any module is used X or more times, then take it out and pull it into the common chunk. The `name` must match the key in the `entry` object. Now the next time you run WebPack, you'll have another outputted chunk that contains jQuery as well as any module that you have used 3 or more times. So our header that we've used on every page would be pulled out into the common chunk. ### Conclusion It's always been a challenge to determine what pages need what JavaScript and styles. Thankfully WebPack's CommonsChunk plugin makes it pretty simple to do this out of the box with just a bit of configuration. This is part 1 of Advanced Webpack. There will be more to come! Be sure to check out the webpack express starter repository which will have some examples of things talked about throughout the series.
true
true
true
Part 1 of the Advanced Webpack Series. This post covers using the CommonsChunk plugin to enable you to extract common modules into a single file.
2024-10-12 00:00:00
2015-09-02 00:00:00
https://www.jonathancrea…MG_2073-crop.JPG
article
jonathancreamer.com
Jonathan Creamer
null
null
26,578,913
https://www.bbc.com/news/av/uk-48762211
'Shocking' fake takeaway sold on Uber Eats
null
# 'Shocking' fake takeaway sold on Uber Eats Food delivery service Uber Eats has tightened up the way restaurants join the platform after BBC News successfully registered a takeaway on the site with no hygiene inspection. The team was able to process orders with no identity checks, bank details or food hygiene rating. “Shocking” is how one food safety expert described the situation. Uber Eats says it was “deeply concerned by the breach of food safety policy” and now demands that all new sign-ups have a valid food hygiene rating.
true
true
true
A BBC News team set up a fake takeaway restaurant on Uber Eats and started selling burgers.
2024-10-12 00:00:00
2019-06-27 00:00:00
https://ichef.bbci.co.uk…_002950295-1.jpg
null
bbc.com
bbc.com
null
null
26,755,313
https://amifloced.org
Am I FLoCed?
Electronic Frontier Foundation
## Google is testing FLoC on Chrome users worldwide. Find out if you're one of them. Google is running a Chrome "origin trial" to test out an experimental new tracking feature called Federated Learning of Cohorts (aka "FLoC"). According to Google, the trial currently affects 0.5% of users in selected regions, including Australia, Brazil, Canada, India, Indonesia, Japan, Mexico, New Zealand, the Philippines, and the United States. This page will try to detect whether you've been made a guinea pig in Google's ad-tech experiment. ### What is FLoC? Third-party cookies are the technology that powers much of the surveillance-advertising business today. But cookies are on their way out, and Google is trying to design a way for advertisers to keep targeting users based on their web browsing once cookies are gone. It's come up with FLoC. FLoC runs in your browser. It uses your browsing history from the past week to assign you to a group with other "similar" people around the world. Each group receives a label, called a FLoC ID, which is supposed to capture meaningful information about your habits and interests. FLoC then displays this label to *everyone you interact with* on the web. This makes it easier to identify you with browser fingerprinting, and it gives trackers a head start on profiling you. You can read EFF's analysis and criticisms of FLoC here. The Chrome origin trial for FLoC has been deployed to millions of random Chrome users without warning, much less consent. While FLoC is eventually intended to replace tracking cookies, during the trial, it will give trackers access to even more information about subjects. The origin trial is likely to continue into July 2021, and may eventually affect as many as 5% of Chrome users worldwide. See our blog post about the trial for more information. ### How can I opt out? For now, the only way for users to opt out of the FLoC trial in Chrome is by disabling third-party cookies. This may reset your preferences on some sites and break features like single sign-on. You can also use a different browser. Other browsers, including independent platforms like Firefox as well as Chromium-based browsers like Microsoft Edge and Brave, do not currently have FLoC enabled. If you are a website owner, your site will automatically be included in FLoC calculations if it accesses the FLoC API or if Chrome detects that it serves ads. You can opt out of this calculation by sending the following HTTP response header: `Permissions-Policy: interest-cohort=()` ### What does my FLoC ID mean? If you have been assigned a FLoC ID, it means that your browser has processed your browsing history and assigned you to a group of “a few thousand” similar users. The FLoC ID is the label for your behavioral group. This numeric label is not meaningful on its own. However, large advertisers (like Google) and websites (like… Google) will be able to analyze traffic from millions of users to figure out what the members of a particular FLoC have in common. Those actors may use your FLoC ID to infer your interests, demographics, or past behavior. To get more technical: your browser uses an algorithm called SimHash to calculate your FLoC ID. The system currently uses the list of domains you’ve visited in the past 7 days as input, and recalculates the FLoC ID once a week. The current version of the trial places each user into one of over 33,000 behavioral groups. You can view the code for the FLoC component here. 
Google has said that it intends to experiment with different grouping algorithms, and different parameters, throughout the trial. ### Why does this matter? FLoC exists because Google acknowledges the privacy harms of third-party cookies, but insists on continuing to let advertisers target you based on how you browse the web. We are happy Google will finally restrict third-party cookies in Chrome, but the last thing it should do is introduce new tracking technology. FLoC has privacy problems of its own, and it will likely continue to enable discrimination and other harms of targeted ads. EFF believes browser developers should focus on providing a private, user-friendly experience without catering to the interests of behavioral advertisers. We should imagine a better future without the harms of targeted ads—and without Google’s FLoC. ### Learn more To learn more about browser fingerprinting, and discover how well-protected your own browser is, check out EFF's Cover Your Tracks project. For an overview of how third-party trackers collect, use, and abuse your information both on and off the web, read our whitepaper, Behind the One-Way Mirror: A Deep Dive Into the Technology of Corporate Surveillance. To block third-party trackers using cookies, fingerprinting, and other sneaky methods, install EFF's browser extension Privacy Badger.
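If you run your site from a small custom server rather than a typical web server config, the opt-out header mentioned above can be added in a few lines. A minimal sketch in Python (illustrative only; any server or framework can send the same `Permissions-Policy` header):

```python
#!/usr/bin/env python3
"""Serve static files while opting the origin out of FLoC cohort calculation."""
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class NoFLoCHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Tell Chrome not to include visits to this site in FLoC calculations
        self.send_header("Permissions-Policy", "interest-cohort=()")
        super().end_headers()

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8000), NoFLoCHandler).serve_forever()
```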
true
true
true
This page will check whether Google Chrome's experimental new ad-targeting technology is enabled in your browser.
2024-10-12 00:00:00
2021-03-01 00:00:00
https://amifloced.org/images/floc-4b.gif
website
amifloced.org
Am I FLoCed?
null
null
8,642,901
http://thermal.global
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
39,169,623
https://www.washingtonpost.com/wellness/2024/01/24/exercise-brain-volume-memory/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
21,579,969
https://techcrunch.com/2019/11/19/jinx-dog-food/
Former Casper execs are building a direct-to-consumer dog food startup called Jinx | TechCrunch
Anthony Ha
With Jinx, three former members of the Casper team are looking to bring what CEO Terri Rockovich called “the Casper playbook” to selling dog food. The startup has raised $5.65 million from an all-star list of investors including Alexis Ohanian of Initialized Capital, Align Ventures, Brand Foundry, Wheelhouse Group, Will Smith and his family, the rapper Nas, singer Halsey, YouTube star/late night host Lilly Singh and TV personality/former NFL star Michael Strahan. Rockovich previously served as vice president of acquisition and retention marketing at Casper, where she met her co-founders Sameer Mehta and Michael Kim. She said all three of them are “dog obsessives” who have experience trying to feed “picky eaters.” And they were “hungry for a brand that is skinned in a way that is a lot more relatable to millennial consumer.” It’s not just about taking regular dog food and selling it in a new way, either. Rockovich noted that an estimated 56% of dogs in the United States are overweight or obese. So Jinx’s staff nutritionist — working with a larger nutrition council — has developed a line of kibble and treats that she said is “packed with organic proteins, diversified proteins and easy-to-process carbohydrates for a moderately active animal.” Jinx plans to start selling its first products in January. Rockovich said it will target pet owners with a certain set of “lifestyle attributes” — like living in an apartment, hiring dog walkers and owning dogs who sleep in their beds — and educate them so they actually examine the ingredients of their dog food, whether they buy it from Jinx or someone else. “We understand the serious nature of creating something that goes into a body and kind of powers a lifestyle,” she said. “We’ve been so conscious of that. Frankly, it’s delayed our timeline — we know we have to get it right.” As for how much this will cost, Rockovich said Jinx will “fall in the premium category.” (If you’re familiar with premium dog food brands, she said Jinx pricing be somewhere between Blue Buffalo and Orijen.) And while the company will start off by selling directly to consumers through its website, Rockovich said her Casper experience has taught her the importance of having “some IRL presence, specifically in retail.” Ollie, a purveyor of ‘human grade’ pet food, just landed $12.6 million in fresh funding
true
true
true
With Jinx, three former members of the Casper team are looking to bring what CEO Terri Rockovich called "the Casper playbook" to selling dog food. The
2024-10-12 00:00:00
2019-11-19 00:00:00
https://techcrunch.com/w…x-Team-Photo.jpg
article
techcrunch.com
TechCrunch
null
null
3,496,309
http://synecdochic.dreamwidth.org/522290.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,567,561
https://sympa.inria.fr/sympa/arc/caml-list/2016-04/msg00075.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
25,410,175
https://github.com/bwasti/mac_benchmark
GitHub - bwasti/mac_benchmark: benchmarks for MacBooks (FMAs)
Bwasti
Currently benchmarks pipelined FMAs on ARM and Intel chips. To benchmark on a single thread,

```
git clone --recursive https://github.com/bwasti/mac_benchmark.git
cd mac_benchmark
make arm || make intel  # only one will work
./bench
```

For multiple threads, rebuild the binary:

```
T=-DTHREADS=$(sysctl -a | grep machdep.cpu.core_count | awk '{print $2}')
make arm CFLAGS=$T || make intel CFLAGS=$T
./bench
```

Results collected so far:

| Hardware | Chip | Single Core GFLOPs | Cores | All Cores GFLOPs |
|---|---|---|---|---|
| 2020 Macbook Air | M1 | 91 | 8 | 460 |
| 2019 16" Macbook Pro | 2.4 GHz 8-Core Intel Core i9 | 135 | 8 | 800 |

Please submit a pull request to update the README.
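For context, the measured numbers land close to the theoretical fp32 FMA peak of each chip. A back-of-the-envelope check (the per-core FMA pipe counts and clock speeds below are assumptions about the respective microarchitectures, not something this repo measures):

```python
# Theoretical peak = clock (GHz) * FMA pipes per core * fp32 lanes per pipe * 2 flops (mul + add)
def peak_gflops(ghz, fma_pipes, lanes_per_pipe):
    return ghz * fma_pipes * lanes_per_pipe * 2

# Assumed: M1 performance core at ~3.2 GHz with 4 x 128-bit NEON FMA pipes
print(peak_gflops(3.2, 4, 4))   # ~102 GFLOPs theoretical vs. 91 measured

# Assumed: mobile i9 turboing to ~4.3 GHz with 2 x 256-bit AVX2 FMA ports
print(peak_gflops(4.3, 2, 8))   # ~138 GFLOPs theoretical vs. 135 measured
```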
true
true
true
benchmarks for MacBooks (FMAs). Contribute to bwasti/mac_benchmark development by creating an account on GitHub.
2024-10-12 00:00:00
2020-12-13 00:00:00
https://opengraph.githubassets.com/133c852eb84e7db4b7933185b2cc2fbdf1a2306ab3d2088a63d87736aaac9429/bwasti/mac_benchmark
object
github.com
GitHub
null
null
16,697,069
https://www.theatlantic.com/photo/2018/03/bike-share-oversupply-in-china-huge-piles-of-abandoned-and-broken-bicycles/556268/?single_page=true
The Bike-Share Oversupply in China: Huge Piles of Abandoned and Broken Bicycles
Alan Taylor
Last year, bike sharing took off in China, with dozens of bike-share companies quickly flooding city streets with millions of brightly colored rental bicycles. However, the rapid growth vastly outpaced immediate demand and overwhelmed Chinese cities, where infrastructure and regulations were not prepared to handle a sudden flood of millions of shared bicycles. Riders would park bikes anywhere, or just abandon them, resulting in bicycles piling up and blocking already-crowded streets and pathways. As cities impounded derelict bikes by the thousands, they moved quickly to cap growth and regulate the industry. Vast piles of impounded, abandoned, and broken bicycles have become a familiar sight in many big cities. As some of the companies who jumped in too big and too early have begun to fold, their huge surplus of bicycles can be found collecting dust in vast vacant lots. Bike sharing remains very popular in China, and will likely continue to grow, just probably at a more sustainable rate. Meanwhile, we are left with these images of speculation gone wild—the piles of debris left behind after the bubble bursts.
true
true
true
Gigantic piles of impounded, abandoned, and broken bicycles have become a familiar sight in many Chinese cities, after a rush to build up its new bike-sharing industry vastly overreached.
2024-10-12 00:00:00
2018-03-22 00:00:00
https://cdn.theatlantic.…P-1/original.jpg
article
theatlantic.com
The Atlantic
null
null
23,949,762
https://blog.haschek.at/2020/the-encrypted-homelab.html
Christian Haschek's blog
Christian Haschek
If you are a homelabber or are in charge of a bunch of physical servers, I'm sure you are very security conscious. You wouldn't keep an SSH server up that allows root via password (or at all) and you won't even think about port-forwarding insecure protocols like FTP to your blinkenlights. But have you ever thought about a defense strategy against people taking your devices from you? How could you secure your data from burglars or police raids? # Encryption, obviously! We have known, and not just since Snowden, that the only working solution for protecting data is encryption. Be it transport encryption (SSH/HTTPS/Wireguard) or file encryption, the possible solutions are broad and the benefits great, but what works great for your home computer and phone has a large flaw for servers: ## Do I need to enter a password on every device after a power outage or reboot? This would be less than ideal (even though most secure). If we want encrypted servers, we need to find a way to automate reboots without having to plug in keyboards and type passwords on all servers. ## USB Dongles? Many disk encryption systems allow users to create USB drives that store the passwords or encryption keys for easy unlocking. This is a good first step. You have a USB drive with you, and instead of typing in passwords on all servers you plug in the drive before booting and the system unlocks automatically, or by pressing a button if you're using a yubikey. Obvious flaws are: - Sooner or later you're going to leave the USB drive somewhere and the system is useless - Doesn't work for police raids as they will take all USB drives too - Single point of failure, unless you make copies - then it's multiple points of "where's my other key" Okay so what else could we do? # Obvious solution: Send decryption keys via the network Hear me out, it might be less insane than you think. All we need is **one device** that hosts an NFS, SSH or HTTPS server serving the keys, and all servers have to have access to it. Then we configure the servers to stay in a loop until this server can be reached, grab the key, decrypt the drives and start the services that depend on storage (like Docker). This means we have one computer that, if powered on, will allow all servers to decrypt on boot. This computer obviously should be encrypted as well and should have a really good password. Or, if you're a fan of "security by obscurity", you might host the keys on a Raspberry Pi Zero that's hiding in a wall or hidden in another device like a coffee machine or - if you really trust your internet connection - can even be offsite. ## Ok but that means I need to be physically there to boot this key serving device, right? Ideally yes, but no! You can remotely decrypt a LUKS encrypted linux device using SSH! You can even use Google 2FA for this, so you can SSH in remotely to decrypt the device using 2FA. Another approach would be to host the key server at a different location, but this would mean that if your internet is out you can't reboot your devices. # Enough talk, let's build an example: ZFS Server and Synology NAS Let's consider a typical simple homelab that hosts 3 devices: - A Workstation that will host the keys - A Server with a ZFS storage pool - A Synology NAS ## Step 1: Set up the Workstation to serve keys This setup assumes that your workstation uses full disk encryption (or some other security for your device) and it will work on all operating systems.
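The end result of this step is a small key server. As a preview, here is a sketch of a slightly hardened, HTTPS-only variant of the plain-HTTP test server used further below (the certificate paths are placeholders; generate your own, for example with `openssl req -x509 -newkey rsa:4096 -nodes -keyout key.pem -out cert.pem -days 3650`):

```python
#!/usr/bin/env python3
"""Serve ~/keys over HTTPS only (sketch; the plain-HTTP one-liner below is for testing)."""
import http.server
import os
import ssl

os.chdir(os.path.expanduser("~/keys"))  # SimpleHTTPRequestHandler serves the current directory

httpd = http.server.ThreadingHTTPServer(
    ("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)

# Placeholder certificate/key paths, kept outside the served directory
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="/etc/keyserver/cert.pem", keyfile="/etc/keyserver/key.pem")
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)

httpd.serve_forever()
```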
### Creating the keys I use OpenSSL for this because it's installed on most linux distros. ``` mkdir ~/keys cd ~/keys openssl rand -out dataserver 512 openssl rand -out synology 128 ``` This will create two files: `dataserver` and `synology` . Why is the synology key only 128 bytes? I'll explain later when we're setting up the synology encryption. ### Serving these keys On this step you can use your imagination. You can serve these keys in multiple ways - Via a webserver - Creating a new user and serving the keys from SSH - Creating a NFS network share For demo purposes I'll serve them via a python-builtin webserver but if you really want to serve keys via a webserver this you better set up a real web server that's not accessible from the internet and enforces HTTPS. Run a local webserver using python (JUST FOR TESTING, DONT ACTUALLY SERVE YOUR KEYS USING UNENCRYPTED TRAFFIC) ``` cd ~/keys python -m SimpleHTTPServer 8080 ``` ## Step 2: Setting up the Synology NAS The idea is: - Creating an encrypted share using the GUI - Getting SSH access and creating a simple script that will decrypt the share on reboot - Making the decryption script run on boot The Synology NAS has a few limitations when it comes to encryption. We can't use the binary key we created earlier and also we can't use more than 64 characters for the encryption key. But it's okay we'll use what we have. ### Creating the share First let's translate the key we created in step 1 to a "password" we can use On the workstation run `cat ~/keys/synology | base64 -w 0 | tr -cd '[:alnum:]._-' | head -c 64` This will do several things: - base64 will create printable characters out of the binary key - the "-w 0" will truncate the base64 output to one line - the "tr .." command will remove any non-alphanumeric symbols (that are used by base64) - "head -c 64" will use the first 64 characters of the resulting string Copy and paste the output of the command (should be a 64 character long string) and use it as the passphrase when creating the encrypted share. ### Auto decrypting on boot First make sure you have enabled SSH access and can connect to the Synology NAS. **!! DO NOT DO THIS AS ROOT !!** Not just is it not secure but also after your Synology NAS updates it wipes the root folder (ask me how I know..) On the NAS: Create a script in your users home folder called `decrypt.sh` ``` #!/bin/sh while ! wget -q --spider http://10.0.0.1:8080/synology; do echo "Waiting for network" sleep 5 done key=$(curl --raw -s http://10.0.0.1:8080/synology | base64 -w 0 | tr -cd '[:alnum:]._-' | head -c 64) synoshare --enc_mount "secureshare" $key ``` Then make a symlink to this script in `/usr/local/etc/rc.d/` so it gets executed on boot. ``` chmod +x decrypt.sh sudo ln -s /home/yourusername/decrypt.sh /usr/local/etc/rc.d/decrypt.sh ``` Awesome! Now your synology nas should automatically decrypt the share on boot if your webserver is running. ## Step 3: Setting up your ZFS server The idea is identical to the Synology part. - Create a ZFS pool with encryption using our key - Create a script that runs on boot and decrypts the pool ### Important note Alpine Linux does not load the aes-ni module by default. 
If your CPU supports the AES-ni flag (check if `grep aes /proc/cpuinfo` outputs something), you also need to load the `aesni_intel` kernel module by using `modprobe aesni_intel` (gone after reboot) or by making it permanent with `echo 'aesni_intel' >> /etc/modules` ### Creating the encrypted dataset For testing purposes we won't use real hard disks but a 1G file as a test block device ``` cd /tmp/ truncate -s 1G block sudo zpool create testpool /tmp/block sudo zpool set feature@encryption=enabled testpool curl -s http://10.0.0.1:8080/dataserver | base64 | sudo zfs create -o encryption=on -o keyformat=passphrase -o compression=on testpool/secure ``` The last command does the magic. It gets the key from our webserver, runs it through base64 and then pipes the result to the zfs create command, which uses the piped key as the passphrase. So from now on we can decrypt automatically. ### The decrypt.sh script ``` #!/bin/sh while ! wget -q --spider http://10.0.0.1:8080/dataserver; do echo "Waiting for network" sleep 5 done curl -s http://10.0.0.1:8080/dataserver | base64 | sudo zfs load-key testpool/secure && sudo zfs mount testpool/secure ``` # TL;DR The idea is that you create encryption keys that get loaded over the network. It works for basically all encryption methods on Linux and for Synology NAS devices. It gives you a single point of failure (the key server), but that's kind of the idea. If that device is not running, all your data will just be random noise. Also, if I had encrypted all my computers in 2014, when I was raided by the police because they followed a wrong lead, I would have gotten my computers back much sooner than the 1 year it took them to look through all the files on all my hard drives. Comment using SSH! Info `ssh [email protected]`
true
true
true
Personal Blog of Christian Haschek
2024-10-12 00:00:00
2020-06-17 00:00:00
https://blog.haschek.at/…dposts/encds.jpg
article
haschek.at
Geek_At
null
null
40,392,759
https://ethz.ch/en/news-and-events/eth-news/news/2024/05/researchers-outsmarted-easyride-function-on-swiss-travel-app.html
Researchers outsmarted EasyRide function on Swiss travel app
Fabio Bergamin
# Researchers outsmarted EasyRide function on Swiss travel app Experiments by ETH Zurich computer security researchers showed that smartphones can be manipulated to allow the owner to ride Swiss trains for free. The researchers also highlighted ways of curbing such misuse. - Read - Number of comments ## In brief - Users with sufficient expertise can manipulate their smartphone’s location data. - This is how ETH Zurich researchers managed to trick the Swiss federal railways (SBB) app. They showed that it would be possible to ride the rails for free. - The researchers informed SBB. The company says it has now taken appropriate action. Now, this kind of ticket fraud would be detected at least after the fact and penalised. It makes travelling by train, bus and tram super easy: instead of buying a conventional ticket, people using the EasyRide function in the SBB app can start their journey with a single swipe on their smartphone. Once at their destination, they swipe the other way to check out again. A QR code visible in the app serves as their ticket. It confirms to the ticket inspector that they have activated the EasyRide function. During the journey, the app continuously transmits location data to an SBB server. The server uses this data to calculate the route travelled, allowing SBB to then bill the user for the fare. EasyRide has been available throughout Switzerland since 2018. Last year, however, ETH Zurich researchers managed to trick the system. The EasyRide function relies on smartphone location data, but users with specialised knowledge can manipulate this information. SBB says that it can now detect this kind of ticket fraud. **Ticket inspectors noticed nothing** A year ago, the situation was different: researchers and students belonging to the group led by Kaveh Razavi, Professor of Computer Security at ETH Zurich, suspected that the EasyRide function could be outsmarted, and so they put their suspicion to the test. They altered a smartphone so that its GPS data – which the SBB app accesses – was overwritten with fake but realistic-looking location information. This data simulated that the user was only moving around in a small area in a city without using public transport. The researchers used two approaches: In one case, a programme generated the fake location data directly on the smartphone. In the other case, the smartphone was connected to a server running the SBB app. This server generated fake location data and transmitted the EasyRide QR code to the smartphone. “Smartphone location data can be manipulated and cannot be relied upon entirely.”Michele Marazzi The ETH researchers tested their specially prepared smartphone on several train journeys from Zurich to the capital of a neighbouring canton. Their trickery went unnoticed by the ticket inspector and they were not contacted by SBB afterwards. Rather, SBB calculated the costs of the fake small-scale movements for which no public transport was used. In other words, the researchers were able to travel free of charge with EasyRide. They emphasise that while they showed the ticket inspector the EasyRide QR code, they were also in possession of a valid ticket at all times. **Today’s location data is untrustworthy** Although a person must have specialist knowledge to manipulate their smartphone, Razavi says, the necessary expertise is common among students doing a Bachelor’s in computer science. 
With the right amount of criminal ambition, it would even be possible to offer a smartphone program combined with an online service to supply tricksters lacking the requisite IT skills with fake, yet plausible, location data. “The basic truth is that smartphone location data can be manipulated and cannot be relied upon entirely,” says Michele Marazzi, a doctoral student in Razavi’s group. “So, app developers shouldn’t treat this data as trustworthy. That’s what we wanted our project to highlight.” When location data is used as the basis for calculating and billing a service, as in the SBB app, more attention must be paid to this vulnerability. **Comparison with trustworthy data required** The researchers propose two ways of solving the problem: either the location data must be verified using reliable positioning notifications, or smartphones must be designed to make such manipulation much more difficult. For the first approach, it would be possible to compare the data provided by the user’s smartphone with location data that the transport company trusts – such as that provided by the vehicle or a mobile device carried by the ticket inspector. The second approach is trickier: it would involve getting developers of smartphone hardware and operating systems on board and convincing them to deploy a new type of tamper-proof localisation technology. “But until that happens, all services that are obliged to rely on location information provided by smartphones have no choice other than to verify this data as best they can using a trustworthy source of location data,” says ETH professor Razavi. The ETH researchers informed SBB about the vulnerability in the EasyRide function, kept in touch with the company’s experts over the past year and presented them with their solutions for making the function more secure. SBB emphasises that it is an offence to use the EasyRide function in combination with manipulated location data. According to SBB, the company has improved the verification of the location data transmitted to the server following the information provided by the ETH Zurich research team. Instances of manipulation are now detected after the fact and offenders are prosecuted. For security reasons, SBB is not disclosing exactly how the checks are carried out.
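While the production checks are undisclosed, the first approach the researchers propose (comparing the phone's reported fixes with location data the operator trusts, such as the vehicle's own positions) is easy to sketch. A purely illustrative server-side plausibility check, with an invented 500 m tolerance and invented data shapes:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def looks_spoofed(phone_trace, vehicle_trace, tolerance_m=500):
    """Flag a journey whose phone fixes mostly disagree with the claimed vehicle.

    Both traces map a timestamp to a (lat, lon) tuple.
    """
    compared, misses = 0, 0
    for ts, (plat, plon) in phone_trace.items():
        if ts not in vehicle_trace:
            continue
        vlat, vlon = vehicle_trace[ts]
        compared += 1
        if haversine_m(plat, plon, vlat, vlon) > tolerance_m:
            misses += 1
    return compared > 0 and misses > compared / 2
```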
true
true
true
Experiments by ETH Zurich computer security researchers showed that smartphones can be manipulated to allow the owner to ride Swiss trains for free. The researchers also highlighted ways of curbing such misuse.
2024-10-12 00:00:00
2024-05-15 00:00:00
https://ethz.ch/en/news-…box.20535445.jpg
null
ethz.ch
ethz.ch
null
null
24,546,401
http://ballingt.com/next-python/
ballingt
null
The time may be ripe for a new Python implementation. A lot of keynotes lately have called for one anyway. They are joined — informally, not speaking in an official capacity — by Python core developers in issuing a wakeup call: where is Python in the browser? Where is Python on mobile devices? How could Python be 2x faster? Barry Warsaw at the PyCon 2019 Python Steering Council Panel Keynote: The language is pretty awesome. […] The interpreter, in a sense, is 28 years old. Such a new Python implementation might be faster, work on different platforms, or have a smaller end deliverable. It might accomplish these goals with a just-in-time or ahead-of-time compiler. WebAssembly might significantly influence its implementation. And critically, it might implement a different specification of Python. Wait, what? Will this still be Python? If the new implementation is useful enough and has a level of compatibility with CPython that the community can deal with, (Brett Cannon said something like this on a podcast a few months ago) then it might somehow be canonized. This “Optimizable Python” or “Restricted Python” or “Fast Python” or “Static Python” or “Boring Python,” subset could, once agreed upon, have its semantics shadowed in CPython in an optional mode. What might be up for debate? A few suggestions from Łukasz’s talk: - eval / exec (compiled in an environment that doesn’t allow setting regions as executable, like iOS or (perhaps? I haven’t looked) the webassembly spec. - the complexities and dynamism of the import system. - metaclasses - I don’t know what this enables, but it seems like a concession parts of the community might be willing to make - descriptors - dynamic attribute access So now that the Python 2 to 3 transition is wrapping up and the Python language’s governance issues have been dealt with, it’s time for some dramatic initiatives: let’s grab some stakeholders and come up with the parts of the Python spec to mark optional and get this into CPython so we can start porting code again! I propose `python -z` for zoom — there we go, ZoomPython! — because I don’t see `-z` in python or IPython command line tools. We’ll need some syntax like JavaScript’s `use strict` to mark code this way, I propose the magic string `# this code zooms` . Can the committee just tell us what the new spec is already? No! Or as Łukasz Langa says in response to a a better question after his keynote, “Yes, but the way you get there is to have an alternative platform that informs you what the constrained version of the language should be. If you try to predict the future of what are people are going to need, you’re likely to end up with a design that is artificial and not necessarily useful.” So we’re back to the hoping and waiting and wondering: where will the implementation proposal for FastPython come from? Despite some calls for financially support of such an effort it seems that leading the prototyping of a new language implementation is not at the top of the priority list for committee. Core developers and language steering committee members seem to believe that this kind of experimental project should come from the outside. (try searching for the word Community in the transcript of the podcast Brett was on) This makes sense to me. PyPy is the second-most popular Python implementation. It’s “bug-compatible” with CPython, including the C extension interface, making it a viable drop-in, faster replacement for many CPython programs. 
Can the committee just tell us what the new spec is already? No! Or as Łukasz Langa says in response to a better question after his keynote, “Yes, but the way you get there is to have an alternative platform that informs you what the constrained version of the language should be. If you try to predict the future of what people are going to need, you’re likely to end up with a design that is artificial and not necessarily useful.”

So we’re back to the hoping and waiting and wondering: where will the implementation proposal for FastPython come from? Despite some calls for financial support of such an effort, it seems that leading the prototyping of a new language implementation is not at the top of the priority list for the committee. Core developers and language steering committee members seem to believe that this kind of experimental project should come from the outside. (Try searching for the word Community in the transcript of the podcast Brett was on.) This makes sense to me.

PyPy is the second-most popular Python implementation. It’s “bug-compatible” with CPython, including the C extension interface, making it a viable drop-in, faster replacement for many CPython programs. It’s an incredible engineering effort, perhaps comparable in scope to the work optimizing JavaScript engines that made that language the fastest dynamically typed language in wide use. If dedicated graduate students and individual hackers, academic funding, and government grants could make so compliant and fast a Python implementation once, maybe that’s where the next implementation will come from too!

MicroPython is closer in design to an imagined implementation of the future: its behavior differs from CPython’s in a variety of cases, and it includes the ability to compile individual functions that do not use features like context managers and generators. MicroPython was initially a Kickstarter-backed effort, then later supported by the Python Software Foundation as part of its inclusion on the BBC Micro Bit. The development of MicroPython provides an example of how an alternate implementation might be started by a single individual.

But I think the most likely place for a new implementation to come from is a large company that uses Python and has a specific need for a new interpreter. This belief comes from my time at Dropbox, where I’ve seen how projects to improve languages can happen at a company of that size: since we had so many programmers working on so much Python code, better Python tooling would be so useful that a case could be made for doing it ourselves. At Dropbox this project has been the Mypy Python static type checker, but I could imagine similar projects to write language implementations. (I’m not imagining too hard; Dropbox is also supporting work on mypyc, a Python compiler I’ll discuss more in a future post.) If you are employed at such a company, it’s hard for me to know how to help you make the business case, but know that it has been done before! Please consider it.

Where would that be? A lot of companies! Some of my favorite corporate contributions to the Python community have come from Dropbox and Instagram, but Python isn’t a niche thing anymore and there must be dozens? hundreds? of companies with idiosyncratic business interests such that a Python implementation that ran in the browser, or ran faster, or ran sandboxed, would save them millions of dollars. In his inspirational keynote, Łukasz phrased this as a call to action: “This is where you come in. Truly tremendous impact awaits!” I don’t think I’ll be one of the call-answerers here, but I wish these implementers the best!

I think I support the apparent decision for the search for the next implementation not to be centrally directed; I agree that this can come from the community, and the proof of its usefulness can too. But there is something I think we can do centrally. Without pre-emptively deprecating Python language features or designating them as optional, Python can be made a more attractive implementation target by making it smaller in another way: separating the language from the standard library. Glyph proposes moving CPython toward a Kernel Python for a variety of reasons. I find that case convincing.
true
true
true
null
2024-10-12 00:00:00
2019-09-17 00:00:00
null
null
null
null
null
null
21,839,098
https://dev.to/tinacms/what-are-blocks-in-tinacms-1nm5
What are "Blocks" in TinaCMS?
DJ Walker
*"There are only two hard things in Computer Science: cache invalidation and naming things."* This axiom, attributed to Phil Karlton, resonates with anyone who has spent any amount of time working with software. The post you're currently reading is, in some ways, about the latter problem. One concept that we were eager to introduce to Tina is something we refer to as **Blocks fields**. We first introduced this concept in Forestry some time ago, and we think it’s a powerful idea. The challenge with Blocks is that it’s kind of an abstract idea, and thus was tagged with a similarly abstract name. **What are Blocks?** To put it succinctly, Blocks refers to a data structure that consists of an *array of unlike objects*. If you didn’t quite grok that, read on and I’ll do my best to explain why we introduced the Blocks concept to Tina and how it relates to other kinds of fields. ## Simple Fields and Compound Fields The field types we’ve implemented in Tina can be broadly grouped into two categories: **simple fields** and **compound fields**. The designation for whether a field is simple or compound has to do with the kind of data that the field represents. **Simple fields** are fields for data that can be represented as a single value, like a string or number. In computer science lingo, these are referred to as scalar values. An example of simple fields in Tina would be the text field, color field, or toggle. Even the markdown WYSIWYG can be considered a simple field, in spite of its complex frontend behavior, because the value it exports is just a big block of text. **Compound fields** are fields that can’t be represented by a single value. Data exported by a compound field is *structured.* When saved, a compound field’s data will be represented by a non-scalar data type such as an array or object. Compound fields are **fields composed of other fields**. The compound fields in Tina include the Group, Group List, and Blocks. ## *Groups* and *Group Lists* Tina’s **Group** field is a collection of **simple fields**. The fields that comprise a Group field can all be of the same type, or be of different types. Group fields are good for representing a single *entity* that is comprised of smaller pieces of data. Consider two ways to store a name in JSON. We could store the full name as a simple string: ``` { "name": "DJ Walker" } ``` Alternatively, we could contrive a simple data structure to store the name in a more semantic fashion: ``` { "name": { "first": "DJ", "last": "Walker" } } ``` We might use a simple text field in the first case, and a **Group** of two text fields for the second. ### Group Lists A **Group List** is similar to the **Group** field type, with an added dimension. Whereas the Group field represents a single entity, the Group List represents *multiple entities*. Let’s say, instead of a single name, we’re storing a list of names like this: ``` { "subscribers": [ { "first": "DJ", "last": "Walker" }, { "first": "Nolan", "last": "Phillips" } ] } ``` We could use a **Group List** here. All entities in the Group List have the same **shape**; in other words, each object in the array will have the same keys. This makes the Group List analogous to a two-dimensional data structure, like a spreadsheet or database table: first | last | ---|---| DJ | Walker | Nolan | Phillips | ## *Blocks*: Like a *Group List*, But Different Like the **Group List**, the **Blocks** structure represents multiple entities. 
### What are Blocks Useful For?

In practice, there are a couple of use cases uniquely suited to Blocks.

The primary motivation for the Blocks-style data structure was to facilitate a page builder experience. In our Tina Grande starter, a page can be strung together by adding different entities to a Blocks field, each one containing fields that configure a different part of the page.

Another way Grande makes use of Blocks is in its embedded form builder. Like the page builder, Grande approaches forms as a sequence of loosely-related, complex components (in this case, the form fields).

## Give Blocks a Chance

By now, you should have a better sense of what we mean when we talk about Blocks in Tina. If you want to see a glimpse of what you can do with a blocks-based content strategy, take a look at our inline Tailwind and Next.js demo and give it a try. If you still aren't quite sure how the Blocks field works, or want to share some ideas on using Blocks, swing by our community forum and make a post!

## Top comments (1)

The block thing is called Union in c++ and in GraphQL, maybe in more languages.
true
true
true
"There are only two hard things in Computer Science: cache invalidation and naming things." This axi...
2024-10-12 00:00:00
2019-12-19 00:00:00
https://dev-to-uploads.s…46ytpt1hl2rv.jpg
article
dev.to
DEV Community
null
null
25,507,306
https://www.entrepreneur.com/article/345866
The Rise of Alternative Venture Capital | Entrepreneur
Tristan Pollock
# The Rise of Alternative Venture Capital

A new age of startup investing has arisen amid the demands of entrepreneurs, and it is altering the traditional venture capital model as we know it.

Opinions expressed by Entrepreneur contributors are their own.

Once upon a time, there was a very clear definition of venture capital. It was used to fund many of the largest technology companies you know, like Facebook, Twitter and LinkedIn, which received funding from venture capital firms by the names of Sequoia Capital, Accel Partners and Benchmark Capital. These firms put in millions of dollars in supergiant rounds for a percentage of equity and got up to 1,000 times returns with an IPO that occurred in less than 10 years. If these venture capitalists (commonly called VCs) got lucky, they would have one, two or three of these moonshot successes in their fund portfolio. This would then give them the return on investment they needed to fall in line with their investors' expectations. That's it. That is how VC evolved until today, when the startup explosion hit.

The startup explosion in the last decade changed the trajectory of venture capital. Although big, successful deals in companies like Airbnb, Lyft and Uber still happened, there was a major increase in the number of startups being created around the U.S. and the world. In particular, there was a huge influx of startups in San Francisco and Silicon Valley. That's where the majority of risk-taking VCs were, after all.

Often in the last decade, you could try to raise funding as a startup founder anywhere else and run into risk-averse investors who had yet to understand the open-eyed model of venture capital. These investors wanted to see more revenue and startup investments heavily derisked in order to understand and evaluate them. It used to feel like as soon as you left California and went east, your investment terms gradually got worse from New York to London to Europe. In many places, it was nearly impossible to raise any funding at all with the same model that worked in Silicon Valley. That's why it has the reputation it does today.

**The heyday of venture capital**

Silicon Valley is still known for innovation, but San Francisco has become the hotbed of startups and venture capitalists. Many VCs kept their offices or homes in Silicon Valley cornerstones on Sand Hill Road in Menlo Park or Palo Alto or Mountain View but opened up hip new offices in the city to show face to the changing tide. Twitter, Uber, and Lyft decided to keep their offices in the city instead of moving to the valley like Facebook and Google.

Coupled with the increase of startups moving to San Francisco from around the world, the spike in technology jobs, and a huge swath of new VC funds entering the fray, the model, and the city, started to change. Startups now could get funding more easily. The supply of capital was high. There was a plethora of new investors, including accelerators, incubators, angels, angel networks, dumb money, old money and more VCs than you could count.

In many ways, this accelerated new technology services and products. It also started the rise of San Francisco becoming a cost-prohibitive place for many people and businesses, including many startup founders. But startup founders, being the entrepreneurs they are, found a way, whether that was funding or couch surfing. There was such a huge increase in funding mechanisms for startups, in fact, that many companies got funding that might not have otherwise. Diligence on startups in Northern California at this time was not as intense as it still was in markets like the East Coast or Southern California. Usually, just a pitch deck, a well-explained plan, novel technology, experienced founders, or a signaling investor could raise a $1 million seed round. No problem.

**The first evolution**

Amid all the startup world hullabaloo, the venture capital model started to take on different faces. AngelList and FundersClub saw the structure of a venture fund as an opportunity. A fund is made up of investors with a general partner who raises the money and does the due diligence on the startups in order for an investment to be made. Angel networks had already formed around this structure without forming VC funds, so it made natural entrepreneurial sense to simplify the fund creation process. These were the first online equity-based fundraising platforms.

At the time, raising funding for a private company publicly still had its legal restrictions. Without the right permit, it was illegal to fundraise online for equity. Kickstarter made its way around that by calling the investments donations and rewarding donors with gifts, but no equity traded hands. AngelList called their first online investment vehicle appropriately Invest Online. Then later, Syndicates. Syndicates exploded in number, just as venture funds and tech companies had.

This was a huge breakthrough, and democratization of startup investing occurred. Almost anyone could not only invest, but form a syndicate of investors that looked to them to bring interesting deals. The SEC still required accreditation by investors, but enforcement online was a different story. In 2019, AngelList reached nearly $1.8 billion in assets under management, which is on par with most major VC funds.

The venture capital scene would never be the same. Even though AngelList and other equity crowdfunding platforms improved on the fluidity of the model, the model was still mostly the same — an investor needs a big exit in order to return their fund. This left the door open to new styles of funding startups, and not just different size funds like Nano or Micro VCs. The excitement in startups was still rising, and so was the funding.

At the same time, many startup founders had been sucked in and chewed up in the traditional venture capital model. If their company wasn't on a trajectory of rocketship growth, often founders were forgotten by their investors. Their VCs had to focus on the top 1 percent of the portfolio that they needed to scale and bring the multiples for their fund. The startup that was pushed to scale so fast it broke was left behind. Thus began a revolt.

**The revolution begins**

The revolt began slowly and quietly. It started with startup founders who had moved to San Francisco and become disenchanted or disenfranchised, leaving the city or becoming tired of the traditional VC model. Many of these entrepreneurs had raised early-stage funding and burned out on growing at a rate that is extremely hard to maintain. Often the push to grow the company that fast would kill the company outright.

Some founders started different types of businesses in the Bay Area or back in their home city or country. Some built investment models to support their homegrown founder friends. Some looked to cryptocurrency and ICOs. Some might even have started revenue-stable lifestyle businesses, a type of business not favored in San Francisco until more recently.

Venture capital had become a stamp of approval. Your funding amount was your success. How could it be any other way? "Founder friendly" was starting to be heard on the streets of San Francisco more. Y Combinator and 500 Startups launched new convertible notes for early-stage investing, called the SAFE and KISS respectively, to give better terms to founders. Stripe built Stripe Atlas to help founders with the legal and financial requirements of starting a business. Financial institutions that had built their profits in different ways decided to be more helpful to the lucrative startup scene. So it began.

Many founders who wanted to still build successful tech companies in and outside of San Francisco demanded new terms, or flat-out avoided traditional venture capital. They wanted to build healthy revenues naturally. They wanted to maintain ownership and not give up 20-25 percent of their company for a seed round. They wanted acquisition optionality and to not be forced to only sell or IPO at a $1 billion valuation. They wanted flexibility and fairness most of all.

Then the stories of companies doing this started to become public. Tuft and Needle was a big one. It had considered venture capital but ended up building a smart, profitable business that sold for around $450 million with the founders still owning most of the company. Buffer was another sweetheart of the no- or low-funding company crowd who grew to 82 employees, is profitable and serves 75,000 customers. Countless other startups started to take notice, and so did the investors.

**The funders become the innovators**

The culmination of this pushback from founders was to create more solutions for the 99 percent of entrepreneurs. The unicorn outliers were too rare of a case study. There was a missed opportunity here.

One of the first innovators on the venture capital model was Indie.vc. Known by its burning unicorn image, Indie.vc has tested multiple versions of its fund with three different investment models. Currently, it's a 12-month program that supports entrepreneurs on a path to profitability. It invests between $100,000 and $1 million and always takes an equity stake. In addition, it takes a percentage of gross revenue. Indie.vc Founder Bryce Roberts calls their model Permissionless Entrepreneurship.

Another early innovator with a similar model is Earnest Capital, which created the Shared Earning Agreement. Also called an SEA or SEAL (for cuteness' sake), it is a venture investor model built upon a combination of equity and annual cash payments. "Shared Earnings is equity-like," explains Earnest Capital founder Tyler Tringa, "and only a percentage of 'profits' (technically 'Founder Earnings') is paid to the investor after everybody, including the founders, are compensated."

In between Earnest Capital and Indie.vc you have TinySeed, which describes itself as "the first startup accelerator designed for bootstrappers." The program is a 1-year, remote accelerator with 10-15 companies going through it at the same time. It based its terms on how Rand Fishkin raised venture capital for his company SparkToro: a 10 to 12 percent equity stake with a cut of dividends. For that, TinySeed invests $120,000 for the first founder and $60,000 per additional founder.

Alternative VC models are even expanding internationally, where these models are needed the most, with one of the first examples being Pick & Shovel Ventures in Australia, which sets an up-front multiple with the founder and takes 5 percent of monthly recurring revenue (MRR) after a 12-month holiday period. The founder then pays back the venture funding either through revenue or an exit.

"It's all about optionality," explains Pick & Shovel Ventures Founder Matt Allen. "Our business model works for profitable companies, companies that choose to raise and companies that exit early and create a windfall for the founders. I really want the founder to do what they feel is right and will support them in all aspects of that."

The thought behind these new forms of venture capital is that they can attract revenue-generating startups with interesting technology or a novel product with founders who want to continue thoughtfully growing their company while maintaining ownership. That doesn't mean the company won't be a $1 billion unicorn in Silicon Valley's eyes, but it does mean that their investor's venture capital model doesn't require them to be in order to make a return on investment that's favorable to all involved. It's still an experiment.

Another experiment is AI-backed investment firms like CircleUp. CircleUp uses proprietary algorithms to evaluate and identify consumer startups to which it should offer equity investments and working capital loans, typically to companies with $1 million to $15 million in revenue.

Corl is another example that uses an artificially-intelligent platform to finance businesses in the digital economy and shares in their future revenue. Their pitch is a no-brainer: "30 percent of businesses don't have the assets necessary for debt financing and 98 percent don't meet the venture requirements for equity financing. This has led to a $3 trillion global funding deficit." The model they use is RBF or revenue-based financing.

Revenue-based financing firms have also sprinted onto the scene in order to give other non-dilutive alternatives to startups. Most of these firms focus on earning commissions on revenues, so the startups they fund need to have a minimum level of annual revenue somewhere between $100,000 and $10,000,000. Not surprisingly, this is often ARR, or annual recurring revenue, that comes via predictable-revenue SaaS businesses. Although this suits a portion of the underserved startup scene, it doesn't address the majority of it and is one of many solutions a founder can choose from.

**The future is flexible**

In all senses of the word, alternative venture capital is flourishing. 2020 will be a year of major expansion. New methods and models are already launching in startup ecosystems across the globe in the footsteps of the first movers. These new founder-investor relationships seem to already be in a more empathetic, stable and healthy place than they often were before. As the model continues to evolve, the important thing to remember is that businesses can be built in many different ways. A founder's appetite for scaling culture can vary widely from high-growth blitzscaling to lifestyle living to slow-build big business. It's up to the founder and investor to strike a deal that supports the true mentality, cultural values and mission for both.
true
true
true
A new age of startup investing has arisen amid the demands of entrepreneurs, and it is altering the traditional venture capital model as we know it.
2024-10-12 00:00:00
2020-03-24 00:00:00
https://assets.entrepren…t=pjeg&auto=webp
article
entrepreneur.com
Entrepreneur
null
null
14,254,912
https://groups.google.com/forum/m/#!topic/kubernetes-dev/k58OyLT4wAU
Redirecting to Google Groups
null
null
true
true
false
null
2024-10-12 00:00:00
2024-10-03 00:00:00
null
null
google.com
groups.google.com
null
null
155,285
http://mashable.com/2008/04/04/google-we-bluffed/
Google On Spectrum Auction: We Bluffed
Stan Schroeder
The title pretty much sums up Google's story on their secretive (they had to keep quiet until now because of the FCC's anti-collusion rules) C Block bid which ended up in Verizon's hands in the end. By their own admission, they were aware that the chances of them actually winning the bid were slim, but they had to push it to $4.6 billion, since this price would "trigger the important 'open applications' and 'open handsets' license conditions." To do this, they had to use every trick in the poker book: they were "prepared to gain the nationwide C Block licenses at a price somewhat higher than the reserve price," and they raised their own bid even though no one was bidding against them "to ensure aggressive bidding on the C Block." Well played, G.
true
true
true
Google On Spectrum Auction: We Bluffed
2024-10-12 00:00:00
2008-04-04 00:00:00
https://helios-i.mashabl….v1647017001.jpg
article
mashable.com
Mashable
null
null
25,442,442
https://en.wikipedia.org/wiki/Phase_modulation
Phase modulation - Wikipedia
null
# Phase modulation

**Phase modulation** (**PM**) is a modulation pattern for conditioning communication signals for transmission. It encodes a message signal as variations in the instantaneous phase of a carrier wave. Phase modulation is one of the two principal forms of angle modulation, together with frequency modulation.

In phase modulation, the instantaneous amplitude of the baseband signal modifies the phase of the carrier signal while keeping its amplitude and frequency constant. The phase of the carrier signal is modulated to follow the changing signal level (amplitude) of the message signal. The peak amplitude and the frequency of the carrier signal are maintained constant, but as the amplitude of the message signal changes, the phase of the carrier changes correspondingly.

Phase modulation is an integral part of many digital transmission coding schemes that underlie a wide range of technologies like Wi-Fi, GSM and satellite television. However, it is not widely used for transmitting analog audio signals via radio waves. It is also used for signal and waveform generation in digital synthesizers, such as the Yamaha DX7, to implement FM synthesis. A related type of sound synthesis called phase distortion is used in the Casio CZ synthesizers.

## Foundation

In general form, an analog modulation process of a sinusoidal carrier wave may be described by the following equation:[1]

$$m(t) = A(t)\cos\bigl(\omega_c t + \phi(t)\bigr).$$

$A(t)$ represents the time-varying amplitude of the sinusoidal carrier wave, the cosine term is the carrier at its angular frequency $\omega_c$, and $\phi(t)$ is the instantaneous phase deviation. This description directly provides the two major groups of modulation, amplitude modulation and angle modulation. In amplitude modulation, the angle term is held constant, while in angle modulation the term $A(t)$ is constant and the second term of the equation has a functional relationship to the modulating message signal.

The functional form of the cosine term, which contains the expression of the instantaneous phase $\omega_c t + \phi(t)$ as its argument, provides the distinction of the two types of angle modulation, frequency modulation (FM) and phase modulation (PM).[2] In FM the message signal causes a functional variation of the carrier frequency. These variations are controlled by both the frequency and the amplitude of the modulating wave. In phase modulation, the instantaneous phase deviation $\phi(t)$ (phase angle) of the carrier is controlled by the modulating waveform, such that the principal frequency remains constant.

In principle, the modulating signal in both frequency and phase modulation may either be analog in nature, or it may be digital.

The mathematics of the spectral behaviour reveals that there are two regions of particular interest:

- For small amplitude signals, PM is similar to amplitude modulation (AM) and exhibits its unfortunate doubling of baseband bandwidth and poor efficiency.
- For a single large sinusoidal signal, PM is similar to FM, and its bandwidth is approximately $2(h + 1)f_M$, where $f_M = \omega_m / 2\pi$ is the frequency of the modulating sinusoid and $h$ is the modulation index defined below.

## Modulation index

As with other modulation indices, this quantity indicates by how much the modulated variable varies around its unmodulated level. It relates to the variations in the phase of the carrier signal:

$$h = \Delta\theta,$$

where $\Delta\theta$ is the peak phase deviation. Compare to the modulation index for frequency modulation.
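As a brief worked illustration (added here for clarity; the numbers are hypothetical, not from the article): a carrier phase-modulated by a single 1 kHz tone with a peak phase deviation of 2 radians can be written as

$$y(t) = A_c \cos\bigl(\omega_c t + \Delta\theta \sin(\omega_m t)\bigr), \qquad \Delta\theta = 2\ \text{rad}, \quad f_M = \frac{\omega_m}{2\pi} = 1\ \text{kHz},$$

so the modulation index is $h = \Delta\theta = 2$ and the approximate bandwidth given by the rule above is $2(h + 1)f_M = 2 \times 3 \times 1\ \text{kHz} = 6\ \text{kHz}$.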
## See also

- Automatic frequency control
- Modulation, for a list of other modulation techniques
- Modulation sphere
- Polar modulation
- Electro-optic modulator, for Pockels-effect phase modulation for applying sidebands to a monochromatic wave

## References

1. Klie, Robert H.; Bell Telephone Laboratories; AT&T (1977). *Principles*. Telecommunication Transmission Engineering. Vol. 1 (2nd ed.). Bell Center for Technical Education. ISBN 0-932764-13-4. OCLC 894686224.
2. Haykin, Simon (2001). *Communication Systems*. Wiley. p. 107. ISBN 0-471-17869-1.
true
true
true
null
2024-10-12 00:00:00
2001-12-02 00:00:00
null
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
35,601,017
https://webscrapeai.com/?ref=hn
Automate Your Data Collection With No-Code
null
Webscrape AI is the perfect tool for collecting data from the web without the hassle of manual scraping. No coding skills required. Simply enter the URL and the items you want to scrape, and our AI scraper will do the rest. Our AI scraper uses advanced algorithms to collect data accurately, so you can be confident in the results. With our AI scraper, you can automate your data collection process and free up your time to focus on other tasks. Our AI scraper allows you to easily customize your data collection preferences to suit your needs. Our AI scraper is an affordable solution for businesses of all sizes that want to collect data without breaking the bank. Our AI scraper uses state-of-the-art methods for data collection to ensure speedy collection of data.

Pricing Plans: Check out our pricing plans, billed monthly or annually at a discounted rate. $47/month or $87/month.

An AI scraper is a tool that uses artificial intelligence algorithms to automatically collect data from websites. Yes, it's legal to use an AI scraper to collect publicly available data. However, it's important to make sure you're not violating any website's terms of service. No, WebscrapeAi is designed to be user-friendly and easy to use by anyone, regardless of technical skills. WebscrapeAi can collect data from any website that doesn't require authentication or login credentials. However, it's important to check each website's terms of service before scraping. No, there is no limit on the amount of data you can scrape with WebscrapeAi. However, it's important to use the tool responsibly and not abuse any website's resources. WebscrapeAi is a software as a service (SaaS) tool and requires a subscription to use. You can choose from monthly or yearly subscriptions based on your needs.
true
true
true
Webscrape AI is the perfect tool for collecting data from the web without the hassle of manual scraping. No coding skills required.
2024-10-12 00:00:00
null
null
null
webscrapeai.com
webscrapeai.com
null
null
3,477,720
http://www.foxnews.com/world/2012/01/17/british-airways-flight-mistakenly-tells-passengers-plane-will-crash/?test=latestnews
British Airways flight mistakenly tells passengers plane will crash
NewsCore
LONDON – Passengers flying over the Atlantic were terrified when it was announced twice that their plane could be about to crash. British Airways (BA) Flight 206 was at 35,000 feet, halfway from Miami to London's Heathrow Airport, when the taped message was played by accident. Screams rang out as it was repeated straightaway. An Edinburgh man said, "It was about 3:00am. An alarm sounded, and we were told we were about to land in the sea. I thought we were going to die. My wife was crying, and passengers were screaming. Then they played an announcement telling us to just ignore the warnings." Another passenger said, "When we landed, they were handing out letters apologizing, but it was the worst experience of my life. I don't think BA should get away with this." A BA spokesman said of the scare en route to Heathrow on Friday, "The cabin crew canceled the announcement immediately and sought to reassure customers that the flight was operating normally. We apologize to customers for causing them undue concern." In August 2010, a message announcing, "We may shortly need to make an emergency landing on water," was played by mistake on a British Airways flight from Heathrow to Hong Kong.
true
true
true
Passengers flying over the Atlantic were terrified when it was announced twice that their plane could be about to crash.
2024-10-12 00:00:00
2015-03-26 00:00:00
http://video.foxnews.com/thumbnails/011712/640/360/al_plane_011712.jpg
article
foxnews.com
Fox News
null
null