Dataset schema:

| Column | Type | Range / classes |
| --- | --- | --- |
| id | int64 | 3 to 41.8M |
| url | string | lengths 1 to 1.84k |
| title | string | lengths 1 to 9.99k |
| author | string | lengths 1 to 10k |
| markdown | string | lengths 1 to 4.36M |
| downloaded | bool | 2 classes |
| meta_extracted | bool | 2 classes |
| parsed | bool | 2 classes |
| description | string | lengths 1 to 10k |
| filedate | string | 2 classes |
| date | string | lengths 9 to 19 |
| image | string | lengths 1 to 10k |
| pagetype | string | 365 classes |
| hostname | string | lengths 4 to 84 |
| sitename | string | lengths 1 to 1.6k |
| tags | string | 0 classes |
| categories | string | 0 classes |
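Rows with this schema can be inspected programmatically. A minimal sketch using the Hugging Face `datasets` library, assuming the dump is published as a dataset repo; the repo id `user/web-markdown-dump` is a hypothetical placeholder, and streaming avoids downloading the full dump:

```python
# Hedged sketch: stream a few rows of a dataset with the schema above.
# "user/web-markdown-dump" is a placeholder, not the real repo id.
from datasets import load_dataset

ds = load_dataset("user/web-markdown-dump", split="train", streaming=True)
for row in ds.take(3):
    # Rows where downloaded/meta_extracted/parsed are False carry null fields.
    print(row["id"], row["url"], row["parsed"], (row["markdown"] or "")[:80])
```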
Row: id 15,234,065 | url https://medium.com/@viniciusdacal/arc-simplifying-async-requests-in-redux-apps-e8052b874216 | downloaded false | meta_extracted false | parsed false | all other fields null
Row: id 26,007,936 | url https://www.youtube.com/watch?v=o1eLKODSCqw | downloaded false | meta_extracted false | parsed false | all other fields null
Row: id 15,907,296 | url https://developer.nvidia.com/tensorrt | title "NVIDIA TensorRT" | author null | markdown:
# NVIDIA TensorRT

NVIDIA® TensorRT™ is an ecosystem of APIs for high-performance deep learning inference. TensorRT includes an inference runtime and model optimizations that deliver low latency and high throughput for production applications. The TensorRT ecosystem includes TensorRT, TensorRT-LLM, TensorRT Model Optimizer, and TensorRT Cloud.

## NVIDIA TensorRT Benefits

### Speed Up Inference by 36X

NVIDIA TensorRT-based applications perform up to 36X faster than CPU-only platforms during inference. TensorRT optimizes neural network models trained on all major frameworks, calibrates them for lower precision with high accuracy, and deploys them to hyperscale data centers, workstations, laptops, and edge devices.

### Optimize Inference Performance

TensorRT, built on the CUDA® parallel programming model, optimizes inference using techniques such as quantization, layer and tensor fusion, and kernel tuning on all types of NVIDIA GPUs, from edge devices to PCs to data centers.

### Accelerate Every Workload

TensorRT provides post-training and quantization-aware training techniques for optimizing FP8, INT8, and INT4 for deep learning inference. Reduced-precision inference significantly minimizes latency, which is required for many real-time services, as well as autonomous and embedded applications.

### Deploy, Run, and Scale With Triton

TensorRT-optimized models are deployed, run, and scaled with NVIDIA Triton™ inference-serving software, which includes TensorRT as a backend. The advantages of using Triton include high throughput with dynamic batching, concurrent model execution, model ensembling, and streaming audio and video inputs.

## Explore the Features and Tools of NVIDIA TensorRT

### Large Language Model Inference

NVIDIA TensorRT-LLM is an open-source library that accelerates and optimizes inference performance of recent large language models (LLMs) on the NVIDIA AI platform. Developers can experiment with new LLMs for high performance and quick customization through a simplified Python API, and can accelerate LLM performance on NVIDIA GPUs in the data center or on workstation GPUs, including NVIDIA RTX™ systems on native Windows, with the same seamless workflow.

### Optimized Inference Engines

NVIDIA TensorRT Cloud is a developer service for compiling and creating optimized inference engines for ONNX models. Developers can use their own model and choose the target RTX GPU; TensorRT Cloud then builds the optimized inference engine, which can be downloaded and integrated into an application. TensorRT Cloud also provides prebuilt, optimized engines for popular LLMs on RTX GPUs. TensorRT Cloud is available in early access on NVIDIA GeForce RTX™ GPUs to select partners. Apply to be notified when it's publicly available.

### Optimize Neural Networks

NVIDIA TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques, including quantization, sparsity, and distillation. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM and TensorRT to efficiently optimize inference on NVIDIA GPUs.

### Major Framework Integrations

TensorRT integrates directly into PyTorch, Hugging Face, and TensorFlow to achieve 6X faster inference with a single line of code. TensorRT provides an ONNX parser to import ONNX models from popular frameworks into TensorRT. MATLAB is integrated with TensorRT through GPU Coder to automatically generate high-performance inference engines for NVIDIA Jetson™, NVIDIA DRIVE®, and data center platforms.

## World-Leading Inference Performance

TensorRT was behind NVIDIA's wins across all performance tests in the industry-standard MLPerf Inference benchmark. TensorRT-LLM accelerates the latest large language models for generative AI, delivering up to 8X more performance, 5.3X better TCO, and nearly 6X lower energy consumption. See All Benchmarks

### 8X Increase in GPT-J 6B Inference Performance

### 4X Higher Llama2 Inference Performance

### Total Cost of Ownership

### Energy Use

## Accelerate Every Inference Platform

TensorRT can optimize AI deep learning models for applications across the edge, laptops and desktops, and data centers. It powers key NVIDIA solutions, such as NVIDIA TAO, NVIDIA DRIVE, NVIDIA Clara™, and NVIDIA JetPack™. TensorRT is also integrated with application-specific SDKs, such as NVIDIA NIM, NVIDIA DeepStream, NVIDIA Riva, NVIDIA Merlin™, NVIDIA Maxine™, NVIDIA Morpheus, and NVIDIA Broadcast Engine. TensorRT gives developers a unified path to deploy intelligent video analytics, speech AI, recommender systems, video conferencing, AI-based cybersecurity, and streaming apps in production. From creator apps to games and productivity tools, TensorRT is embraced by millions of NVIDIA RTX, GeForce®, and Quadro® GPU users. Whether integrated directly or via the ONNX Runtime framework, TensorRT-optimized engines are weightless and compressed, empowering developers to incorporate AI-rich features without bloating app sizes.

## Read Success Stories

# Amazon

Discover how Amazon improved customer satisfaction by running inference 5X faster.

# American Express

American Express improves fraud detection by analyzing tens of millions of daily transactions 50X faster. Find out how.

# Zoox

Explore how Zoox, a robotaxi startup, accelerated their perception stack by 19X using TensorRT for real-time inference on autonomous vehicles.

## Widely Adopted Across Industries

## TensorRT Resources

### Read the Introductory TensorRT Blog

Learn how to apply TensorRT optimizations and deploy a PyTorch model to GPUs.

### Watch On-Demand TensorRT Sessions From GTC

Learn more about TensorRT and its features from a curated list of webinars at GTC.

### Get the Introductory Developer Guide

See how to get started with TensorRT in this step-by-step developer and API reference guide. Use the right inference tools to develop AI for any application on any platform.
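The ONNX import path described above can be sketched with the TensorRT Python API. A minimal, hedged example of building an engine from an ONNX file, assuming a TensorRT 8.x-style install; `model.onnx` and `model.engine` are placeholder file names, and the FP16 flag is illustrative rather than taken from the page:

```python
# Hedged sketch of the ONNX -> TensorRT engine flow described above
# (TensorRT 8.x-style Python API; file names are placeholders).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch network definition, as required for ONNX models.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # reduced-precision inference, per the page
serialized_engine = builder.build_serialized_network(network, config)

with open("model.engine", "wb") as f:
    f.write(serialized_engine)
```

The serialized engine can then be loaded by the TensorRT runtime, or served through Triton with TensorRT as the backend, as the page describes.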
downloaded true | meta_extracted true | parsed true | description "An SDK with an optimizer for high-performance deep learning inference." | filedate 2024-10-12 00:00:00 | date 2023-10-17 00:00:00 | image https://d29g4g2dyqv443.c…-og-1200x630.jpg | pagetype website | hostname nvidia.com | sitename "NVIDIA Developer" | tags null | categories null
Row: id 21,841,772 | url https://medium.com/@rbranson/rds-pricing-has-more-than-doubled-ef8c3b7e5218 | downloaded false | meta_extracted false | parsed false | all other fields null
Row: id 21,875,137 | url https://gitlab.com/afiorillo/gunicorn-torify | title "Andrew Fiorillo / gunicorn-torify · GitLab" | author null | markdown:
gunicorn-torify. SSH clone URL: [email protected]:afiorillo/gunicorn-torify.git. HTTPS clone URL: https://gitlab.com/afiorillo/gunicorn-torify.git
downloaded true | meta_extracted true | parsed true | description "GitLab.com" | filedate 2024-10-12 00:00:00 | date 2024-10-12 00:00:00 | image https://gitlab.com/asset…1570febab5d2.jpg | pagetype object | hostname gitlab.com | sitename "GitLab" | tags null | categories null
Row: id 15,101,640 | url http://hookify-cooking-programming.a3c1.starter-us-west-1.openshiftapps.com/ | downloaded false | meta_extracted false | parsed false | all other fields null
Row: id 34,770,296 | url https://phenaki.research.google | title "Phenaki: Generate Videos from Text - Google Research" | author null | markdown:
# Phenaki

Realistic video generation from open-domain textual descriptions

## Abstract

We present Phenaki, a model that can synthesize realistic videos from textual prompt sequences. Generating videos from text is particularly challenging due to various factors, such as high computational cost, variable video lengths, and limited availability of high-quality text-video data. To address the first two issues, Phenaki leverages its two main components:

- An **encoder-decoder model** that compresses videos to discrete embeddings, or tokens, with a tokenizer that can work with variable-length videos thanks to its use of causal attention in time.
- A **transformer model** that translates text embeddings to video tokens: we use a bi-directional masked transformer conditioned on pre-computed text tokens to generate video tokens from text, which are subsequently de-tokenized to create the actual video.

To address the data issues, we demonstrate that joint training on a large corpus of image-text pairs **and** a smaller number of video-text examples can result in generalization beyond what is available in the video datasets alone. When compared to prior video generation methods, we observed that Phenaki could generate arbitrarily long videos conditioned on an open-domain sequence of prompts in the form of time-variable text, or a story. To the best of our knowledge, this is the first time a paper studies generating videos from such time-variable prompts. Furthermore, we observed that our video encoder-decoder outperformed all per-frame baselines currently used in the literature on both spatio-temporal quality and number of tokens per video.

### Resources

### Example videos generated by Phenaki

#### Prompts:

"First person view of riding a motorcycle through a busy street." "First person view of riding a motorcycle through a busy road in the woods." "First person view of very slowly riding a motorcycle in the woods." "First person view braking in a motorcycle in the woods." "Running through the woods." "First person view of running through the woods towards a beautiful house." "First person view of running towards a large house." "Running through houses between the cats." "The backyard becomes empty." "An elephant walks into the backyard." "The backyard becomes empty." "A robot walks into the backyard." "A robot dances tango." "First person view of running between houses with robots." "First person view of running between houses; in the horizon, a lighthouse." "First person view of flying on the sea over the ships." "Zoom towards the ship." "Zoom out quickly to show the coastal city." "Zoom out quickly from the coastal city."

#### Prompts:

"Lots of traffic in futuristic city." "An alien spaceship arrives to the futuristic city." "The camera gets inside the alien spaceship." "The camera moves forward until showing an astronaut in the blue room." "The astronaut is typing in the keyboard." "The camera moves away from the astronaut." "The astronaut leaves the keyboard and walks to the left." "The astronaut leaves the keyboard and walks away." "The camera moves beyond the astronaut and looks at the screen." "The screen behind the astronaut displays fish swimming in the sea." "Crash zoom into the blue fish." "We follow the blue fish as it swims in the dark ocean." "The camera points up to the sky through the water." "The ocean and the coastline of a futuristic city." "Crash zoom towards a futuristic skyscraper." "The camera zooms into one of the many windows." "We are in an office room with empty desks." "A lion runs on top of the office desks." "The camera zooms into the lion's face, inside the office." "Zoom out to the lion wearing a dark suit in an office room." "The lion wearing looks at the camera and smiles." "The camera zooms out slowly to the skyscraper exterior." "Timelapse of sunset in the modern city."

We wanted to understand if it would be possible to leverage Imagen Video's ability to generate high-resolution videos with unprecedented photorealistic fidelity, and benefit from its underlying super-resolution modules to enhance Phenaki's output, with the objective of combining the strengths of these two approaches into something that could create beautiful visual stories. To achieve this, we feed Phenaki's output generated at a given time (plus the corresponding text prompt) to Imagen Video, which then performs spatial super-resolution. A distinct strength of Imagen Video, compared to other super-resolution systems, is its ability to incorporate the text into the super-resolution module. For an example of how the end-to-end system works in practice, see the previous example. The captions corresponding to this example are the following:

#### Prompts:

"very close up of Penguin riding wave on yellow surfboard" "penguin rides surf yellow surfboard unto beach. Penguin leaves yellow surfboard and keeps walking." "Penguin quickly walking on beach and camera following. Penguin waves to camera. Feet go by camera in foreground" "A penguin runs into a 100 colorful bouncy balls" "slow zoom out. penguin sitting on bird nest with a single colorful egg" "zoom out. Aerial view of penguin sitting on bird nest in rainbow antarctic glacier"

### Authors

Ruben Villegas, Mohammad Babaeizadeh, Pieter-Jan Kindermans, Hernan Moraldo, Han Zhang, Mohammad Taghi Saffar, Santiago Castro, Julius Kunze, Dumitru Erhan

### Acknowledgements

We give special thanks to the Imagen Video team for their collaboration and for providing their system to do super resolution, and to our artist friends Irina Blok and Alonso Martinez for extensive creative exploration of the system and for using Phenaki to generate some of the videos showcased here. We also want to thank Niki Parmar for initial discussions. Special thanks to Gabriel Bender and Thang Luong for reviewing the paper and providing constructive feedback. We appreciate the efforts of Kevin Murphy and David Fleet for advising the project and providing feedback throughout. We are grateful to Evan Rapoport, Douglas Eck and Zoubin Ghahramani for supporting this work in a variety of ways. Tim Salimans and Chitwan Saharia helped us with brainstorming and coming up with shared benchmarks. Jason Baldridge was instrumental for bouncing ideas. Alex Rizkowsky was very helpful in keeping things organized, while Erica Moreira and Victor Gomes ensured smooth resourcing for the project. Sarah Laszlo and Kathy Meier-Hellstern have greatly helped us incorporate important responsible AI practices into this project, which we are immensely grateful for. Finally, Blake Hechtman and Anselm Levskaya were generous in helping us debug a number of JAX issues.

**Credit for Phenakistoscope asset:** Creator: Muybridge, Eadweard, 1830-1904, artist. Title: The zoopraxiscope - a couple waltzing (No. 35., title from item.) Edits made: Extended background and converted file format to mp4
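The second component lends itself to a toy sketch. The page does not spell out the decoding procedure, so the following is a hedged, MaskGIT-style illustration of iterative masked-token generation, not the authors' code: a stand-in for the text-conditioned bidirectional transformer returns random logits, and over a few steps the most confident predictions are committed while the rest stay masked. All names, the codebook size, and the cosine schedule are illustrative assumptions:

```python
# Toy sketch (not Phenaki's code) of iterative masked video-token decoding.
# `transformer` is a random stand-in; the real model is conditioned on text tokens.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, SEQ, STEPS, MASK = 512, 128, 8, -1  # assumed toy sizes

def transformer(tokens):
    # Stand-in for the bidirectional masked transformer:
    # logits over the video-token codebook for every position.
    return rng.standard_normal((len(tokens), VOCAB))

tokens = np.full(SEQ, MASK)
for step in range(STEPS):
    logits = transformer(tokens)
    probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
    pred, conf = probs.argmax(-1), probs.max(-1)
    masked = tokens == MASK
    # Cosine schedule (an assumption): commit the most confident predictions,
    # leave progressively fewer positions masked each step.
    still_masked = int(np.cos((step + 1) / STEPS * np.pi / 2) * masked.sum())
    order = np.argsort(np.where(masked, conf, -np.inf))[::-1]
    n_commit = masked.sum() - still_masked
    tokens[order[:n_commit]] = pred[order[:n_commit]]

# `tokens` would then be de-tokenized by the video decoder into frames.
print("unfilled positions:", int((tokens == MASK).sum()))  # 0 after the loop
```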
downloaded true | meta_extracted true | parsed true | description "Discover Phenaki, a model that can generate realistic videos from text. Learn more about how Phenaki creates high quality video generation." | filedate 2024-10-12 00:00:00 | date null | image https://lh3.googleusercontent.com/NIhQPszQyev14tA6RN8KIEBm9hvd01JoDKQ98lKm9wJ_C2ccvUPOI75WfACMQfAtuqtGlX_P5z46pmXGhupHUcfa-ObomtVk8GN_K168=w2880-e365-pa-nu | pagetype website | hostname research.google | sitename "Phenaki: Generate Videos from Text - Google Research" | tags null | categories null
Row: id 34,547,546 | url https://github.com/struct-ure/kg | title "GitHub - struct-ure/kg: A self-contained, queryable knowledge graph of tech skills and IT stuff; maintained with git" | author "Struct-Ure" | markdown:
struct-ure/kg is a self-contained knowledge graph (KG) of tech skills and IT stuff (software, platforms, etc.). It presents a GraphQL API to retrieve information from the graph. Transparent management of the structure and content of the graph is accomplished using git: "editing" the KG is as simple as making changes to directories and files.

- simple to contribute (fork -> edit files -> submit pr)
- single Docker image
- multilingual
- integration with Wikidata
- GraphQL API

- replace simple tag concepts in your software with identifiers from the graph, e.g., `"C"` becomes `https://struct-ure.org/kg/it/programming-languages/c`
- query the entire graph to build a tree-control to present tech skills and software in a UI (for example: this)
- fork the repo and add your company/domain-specific knowledge for use within your organization
- find nodes that have a particular category, e.g., all database nodes that are graph-oriented
- find nodes by known aliases, e.g., "Golang" is an alias for the programming language "Go"
- let your AI infer relationships, e.g., "EC2" is a part of "AWS", which is part of "Cloud Computing", which is part of "IT"

`docker run -it -p 8080:8080 structureorg/kg`

Then query the graph using your favorite GraphQL tool at http://localhost:8080/graphql. See the /query folder for example queries. Images for both amd64 and arm64 are available on Dockerhub.

At present the KG contains over 1,700 concepts — everything from programming languages to electronic health care systems. While a promising start, there's still so much more to add! We're hopeful that domain experts, companies and tech enthusiasts will help move the KG forward. Additions and improvements to the KG are accomplished by editing the directory and file structure under the /root folder. Please fork this repo and create a pull request with your changes/additions. For more detail on conventions used in the /root folder and how to add/edit KG entries, please see CONTRIBUTING.md. For simple fixes (e.g., a spelling error), feel free to open an issue. Please use the Discussions board for questions and suggestions.

struct-ure/kg uses timestamp-based versioning in the YY.MM.DD format. The first version, 23.01.19, was tagged on January 19, 2023. You can query the KG's current version via GraphQL: `queryVersion {version}`.

struct-ure/kg is built upon Dgraph, a horizontally scalable and distributed GraphQL database with a graph backend. struct-ure/kg is deployed as a simple single-node cluster in its published Docker image. At present, the KG has fewer than 60k edges (Dgraph can support graphs with hundreds of millions of edges when deployed in a high-availability configuration). The tools used to build graph-compatible import files are written in Go.

Planned work:

- move the graph build and image publish steps to Github actions
- investigate other non-IT domains for inclusion into the KG
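The `queryVersion` query above can be exercised against the local container started with the `docker run` command. A minimal sketch in Python, assuming the container is up on port 8080; the response shape shown in the comment is an assumption, not taken from the README:

```python
# Hedged sketch: POST the README's queryVersion query to the local kg container.
import requests

resp = requests.post(
    "http://localhost:8080/graphql",
    json={"query": "{ queryVersion { version } }"},
    timeout=10,
)
resp.raise_for_status()
# Assumed response shape, e.g. {"data": {"queryVersion": [{"version": "23.01.19"}]}}
print(resp.json())
```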
downloaded true | meta_extracted true | parsed true | description "A self-contained, queryable knowledge graph of tech skills and IT stuff; maintained with git - struct-ure/kg" | filedate 2024-10-12 00:00:00 | date 2023-01-16 00:00:00 | image https://opengraph.githubassets.com/325b15a1ce4bf726ec547bc1c75c6e16452469f51ef44699f11f4f8be01a8414/struct-ure/kg | pagetype object | hostname github.com | sitename "GitHub" | tags null | categories null
Row: id 13,418,239 | url http://bryce.vc/post/155827298675/funding-independent-alternatives | title "Funding Independent Alternatives" | author "Brycedotvc" | markdown:
## Funding Independent Alternatives

A few weeks back, I was sitting at the counter of a local coffee shop waiting for my meeting to show up. As I sat, I overheard a conversation between the owner and a prospective vendor. The meeting ended with the vendor going in for the close. And their tactic struck me. "We're a small family-owned business. We only work with a small number of customers so we can provide them the absolute highest level of service and best possible products." With that they shook hands and walked out.

I have no idea if they closed the deal or not. My meeting arrived and my day went on, but I couldn't shake that pitch. In a tech world that touts growth hacking and new features and scale, it was kind of refreshing to hear someone sell smallness and independence and a personal touch. But, if you read the tech press you'd think that type of culture or relationship with customers wasn't possible online. You'd think that the only way to grow and defend your business was to weaponize your balance sheet. Pando recently wrote:

> Pattern matching is in; gut feelings are out. And Facebook, Google, and Apple are just too powerful to go around. What works now? Cash. Sh*t loads of cash. Paying to acquire users. If you are like me, you see more ads for meal kit services, startup mattress companies, upstart apparel companies, Pelotons, and the like than anything else on Facebook. That's because those kinds of ads are one of the only ways startups are growing right now. It's a playbook people have been terrified to return to after the dot com bust. The result? VCs are the kingmakers again in a way they haven't been for the last decade.

So, according to people whose business it is to invest money, the only way to truly compete is to raise more money. Thinking face emoji.

David Packard, of Hewlett Packard, was known to share a quote from an early customer of theirs which goes something like this: "More organizations die of indigestion than starvation." Apt in a world of software that is trying to eat everything as quickly as possible.

But, what if sh*t loads of cash wasn't the answer? The problem with sh*t loads of cash is that you have to do sh*t loads more things. Sh*t loads more hiring. Build sh*t loads more features. Enter sh*t loads more markets. Take on sh*t loads bigger leases. And take on sh*t loads more expectations and accelerated timelines of what success needs to look like.

What if there were something tech could learn from that vendor in that coffee shop? Nearly two years ago we wrote a post suggesting that there may be an opportunity for independent businesses to outmaneuver their ravenous Unicorn competition by staying smaller, profitable and personal longer:

> If you believe, as we do, that there will come a time when not having taken loads of funding, not selling out your users and not being forced to maximize shareholder value will be a competitive advantage then this type of designation might matter. Customers and users burned by cash burning startup after cash burning startup may start looking around for independent alternatives who aren't looking to sell them out, or sell out themselves, only to have the products they love and rely on killed by acquiring companies.

Tho that sentiment flies in the face of today's conventional wisdom, maybe, just maybe, weaponized balance sheets will backfire. Take a look at the Techcrunch Unicorn Leaderboard (yes, there is really something called a Unicorn Leaderboard). While these mystical creatures grapple with digesting tens, and often hundreds, of millions of investment we think there are opportunities for thoughtful, focused, slow followers building profitable independent alternatives. If you're one of them, get in touch.
downloaded true | meta_extracted true | parsed true | description "A few weeks back, I was sitting at the counter of a local coffee shop waiting for my meeting to show up. As I sat, I overheard a conversation between the owner and a prospective vendor. The meeting..." | filedate 2024-10-12 00:00:00 | date 2017-01-14 00:00:00 | image null | pagetype article | hostname bryce.vc | sitename "Tumblr" | tags null | categories null
Row: id 15,079,493 | url https://www.techinasia.com/5-rising-startups-india-aug-23-2017 | title "Tech in Asia" | author null | markdown "If you're seeing this message, that means JavaScript has been disabled on your browser. Please enable JavaScript to make this website work." | downloaded true | meta_extracted true | parsed true | description null | filedate 2024-10-12 00:00:00 | date 2020-01-01 00:00:00 | image null | pagetype null | hostname null | sitename "Tech in Asia" | tags null | categories null
Row: id 2,038,083 | url http://www.techdirt.com/articles/20101223/14325412399/anyone-notice-that-sites-dont-have-to-rely-google-so-much-traffic-any-more.shtml | title "Anyone Notice That Sites Don't Have To Rely On Google So Much For Traffic Any More?" | author "Mike Masnick" | markdown:
# Anyone Notice That Sites Don't Have To Rely On Google So Much For Traffic Any More?

### from the *the-value-of-earned-links* dept

One of the driving forces behind some of the legal attacks on Google is that Google is the de facto monopoly on being found online. We've heard over and over again a claim along the lines of "if you're not in Google, you're not online." And there may be *some* truth in that statement for many websites, but the rise of the social web appears to certainly be decreasing the reliance on Google for "being found." Nearly two years ago, we wrote about the increasing value of "earned" or "passed" links or media. That is, with social communications platforms like Facebook and Twitter, people are promoting various websites themselves and others are discovering them not because of Google, but because their friends, families and colleagues are recommending them. I have to admit that I was still a little skeptical of how big this would really be, but in the last year (and especially the last six months), I've really changed my mind, and that's because we're seeing evidence of it directly.

For years, our largest referrer every single day was Google. It wasn't even close. Every day, people came from Google (sometimes via searches, sometimes via things like Google Reader or iGoogle), and it simply dominated how people found us. Yet, these days, it's quite rare to see Google as the top referrer to Techdirt on any given day. Instead, it seems that every day we get an onslaught of traffic from at least one (and sometimes more) social communications platforms: StumbleUpon, Reddit, Twitter and Facebook now regularly come in as our biggest referrers. Google still drives a lot of traffic, but our traffic has certainly become a lot more diversified. And while those companies certainly are not "competitors" to Google in the traditional sense, when it comes to the question of "the only way to be found online is Google," I can say empirically that's simply not true for us.

Along those lines, however, I should note that the reason those social communications systems work is because of *people* who *like what we have to say and want to share it*. That doesn't work if your content sucks, so if your content sucks, you may still have to rely on Google (but, even then, part of what Google tries to do is make sure the sucky content gets dropped down as well, so the best solution might be to not have sucky content).

Related to all this, as we head into a brief holiday break (we'll be back next week, don't worry), I wanted to *thank* everyone who makes this community so fun and dynamic, and certainly the folks who made this story possible by regularly sharing our stories on those other platforms. That's mighty kind of you, and it is greatly appreciated. Finally, again, related to all of this, we *never* seem to post about the different ways to follow us online, even though most of you have probably figured it out on your own already. Of course, we have an RSS feed, a Twitter feed and a Facebook page (which often fails to update for reasons not at all clear to us). We also have an email list that sends out copies of each of the previous day's posts early in the morning (US time) the following day. You can sign up for that by putting your email address in the box in the upper right-hand corner of this page. Feel free to follow us (or not) however you prefer, and thanks for being a part of the community.

Filed Under: earned links, linking, passed links, referrals, sharing, social communities, traffic
Companies: facebook, google, reddit, stumbleupon, twitter

## Comments on "Anyone Notice That Sites Don't Have To Rely On Google So Much For Traffic Any More?"

## It was always foolish to rely on Google.

I've been a web programmer for nearly 20 years and I can attest the kids coming out into the working world, calling themselves web developers, have much, much growing up to do. Every day, websites I frequent to see what these kids are up to always include an "SEO improvement" blog entry, as though it's absolutely necessary to score high on search engine results. I've often challenged this mentality and am usually met with quite a bit of resistance, since my advice is "Build it, market it, and they will come." and their reply is "SEO enhancement is marketing." While I can agree to a point, it's completely wrong. First of all, Google's algorithms are proprietary, and therefore, any SEO enhancement is speculation, at best. Secondly, everyone else is doing the same SEO enhancements, so it's basically moot. I'm glad this article included many free tools web developers can use to truly market their next project. Although I personally don't use Twitter (nothing against it, just don't own a cell phone), I've seen its impact on many websites. I'm also in 100% agreement good content makes or breaks a website and if those eyeballs aren't clicking within moments of arriving, a dead web site is in the making. Nice write up. Can't wait to see what the kids, who call themselves web developers, think about it. PS: do note I know not every youngster falls into my generalization. I just wanted to save these tiring fingers a few keystrokes.

## Re: It was always foolish to rely on Google.

Like all thing's include a seed or ideas it take time to grow, good insight and beside there are a lot of idea out here, it just the top mangerment don't see so well

I always thought that mouth to mouth was a very powerful thing or in this case link to link or whatever. People send those things through e-mails, chats, forums, social forums and other places.

I normally follow you through the day via an igoogle plug in, but i do tend to repost everything interesting to facebook. Common courtesy to share important info. (besides it's better than the nutjob sites i see others sharing. At least you folks realize that unsourced news is just bloody gossip)

I guess I'm an odd sort. First thing is I never go to these social sites. I guess I value my privacy more than showing up. I also don't comment on sites that require email or registration to post. In fact I no longer use email, so I never get spam. There are other ways around that, which also don't include IM. Google is a very small part of my internet experience at best. Zilch is more often the rule than not. Nor do I tend to view ads. There's just been too many iframe attacks and they are too annoying to reading and clear thinking. So what it comes down to is that I find my way around by what I am interested in. Those blogs I go to, that have meat to them and food for thought, are also the ones that give sources to their topics. In the long run I find more concisely what I seek going this way that if I actually were searching with an engine for it. There are some things that just won't show up in a search engine, no matter how you state the search term.
## Re: Re:

I don't mean this as a harsh criticism as such, but I have to wonder about people who claim these things...

> I guess I value my privacy more than showing up.

You don't need to log into Reddit to read the stories, and you don't have to give any personal information to Twitter in order to follow people. Social sites don't need any more personal information than you want to give them – one of my Facebook friends uses a fake name, no contact info other than a disposable webmail account and doesn't allow anyone to friend him that he doesn't know very well offline. But, it remains a great way to organise my social life in situations where my friends are online but not easily contactable via phone or other means (e.g. at work).

> In fact I no longer use email, so I never get spam.

That's a bit extreme, but whatever. You could be missing out on a lot by not allowing people to easily contact you online, and it definitely limits your options in other ways. Surely a disposable webmail account on a site with a decent spam filter would be more productive?

> So what it comes down to is that I find my way around by what I am interested in.

It seems funny that you're basically saying that you won't allow anybody you know personally to contact you easily online, and are essentially guided by a small group of bloggers (who probably get their sources by the methods you're avoiding well before they mention them to you). Whatever you fancy, I guess, but it seems rather unwieldy and inaccurate to me.

> There are some things that just won't show up in a search engine, no matter how you state the search term.

There are also things that will never show up in the methods you're using, no matter how long you wait.

## Thanks for not sucking :)

Your experience is shared by several other content sites I'm aware of. Google may want to call you to the stand to testify on their behalf next time someone takes them to task over visitor traffic issues 😉 Also, really do appreciate your thoughtful commentary on the various important issues taking place online and off. Even where I may not always agree, your perspectives are thought provoking and interesting. It's always a pleasure to share your posts w/others. Happy Holidays to you and your team!

> the best solution might be to not have sucky content

If only the media industry realized this. Maybe they wouldn't have cancelled Firefly... Merry Christmas All! If you aren't fortunate enough to have snow: http://www.shipmentoffail.com/fails/2010/12/unassembled-snowman/

## As far as it goes we yet rely on google

We have been working on social networking for two years, but so far Google is our top visitor provider... hope we are able to push it further on 2011... anyway Spain has it's own rules 🙂

The question is how each of those sources get their traffic. Social networks and information networks get their source traffic from Google. Essentially, these other properties are just getting between you and google. Congrats, you have met the internet middle men.

## Re: Re:

> Social networks and information networks get their source traffic from Google.

Citation needed, I believe...

## Re: Re: Re:

> We have been working on social networking for two years, but so far Google is our top visitor provider

Post above mine.

## Re: Re: Re: Re:

I wish people would learn how to use the site sometimes, it's not hard. Your post was written as though it was responding to the original article, not a comment in the thread. Anyway, you still make no sense. Raul didn't make the claim that I was questioning ("Social networks and information networks get their source traffic from Google."), you did. Do you wish to supply a citation for your own claim, or is it just another of those unfounded assumptions that ACs like to make here?

## Re: Re: Re: Re:

Oh, and "another poster in the thread claimed that his site's top visitor provider is Google" doesn't count as a citation.

## beware malware ...

My niece, a 30-something, uses Facebook. She has had her computer infected with malware and effectively disabled with malware because she clicked a link on Facebook. ... Twice within a few months. She simply cannot resist following links to "cute kittens" and the like. She cannot be trained to NOT do it. Oh well.

## Re: beware malware ...

You could use that same argument against any website with outside links, which is pretty much all of them. It's not Facebook's fault. The blame lies somewhere between your niece and the malware site.

## Re: beware malware ...

She may not be "trainable", but that is not a problem with modern software anymore. Use the sandboxes that virus creators use to create their viruses.

- Sandboxie (paid, http://www.sandboxie.com/): sandboxes the browser. Its advantage is space; with virtual machines you need to give it gigabytes of space for it to work, this on the other hand uses little resources.
- Qemu (open source, http://www.qemu.org/): create a virtual machine that you can use to browse the internet and delete the virtual machine after; you can start a fresh copy every time, but it is difficult for layman people to transfer data to and from it, and you need to get a GUI for it separately like Virtual Machine Manager or Qemu Launcher.
- Xen (open source, http://www.xen.org/)
- UML (User Mode Linux, http://en.wikipedia.org/wiki/User-mode_Linux): same as Qemu.
- VMware (paid, http://www.vmware.com/): same as Qemu but it is much more nooby friendly. This is the virtualization for dummies along with VirtualBox.
- VirtualBox (open source and paid version available, http://www.virtualbox.org/): very good and free.
- Rollback RX (paid, but is the one some virus makers use to test things and it works wonders, http://www.rollbacksoftware.com/)

You don't need to expose yourself or your data anymore. Sandbox everything. http://en.wikipedia.org/wiki/Sandbox_(computer_security) Also something that works wonders is backup. Make a bit-by-bit copy of the state of the machine right after a fresh install and when everything is ok, you just make a copy and store it in a DVD-R and reinstall that every year or when problems arise. Do you ever wonder how internet coffee shops maintain their machines virus free? That is how. Also get a bootable disc OS different from the OS on the machine so you can boot from that disc and inspect the files from a different OS that probably won't be vulnerable to any virus inside that filesystem, so if you ever need to salvage some files from the disk before installing something over it, that is the way to do it.

Thank you Mike for the wonderful site and thoughtful analysis. I appreciate you taking the time to actually think about the news stories and making comments on them.

This is absolutely correct. Mark Suster has written some great pieces on the importance of Twitter that addresses precisely this issue. e.g. http://www.bothsidesofthetable.com/

Regularly lately I see links in Slashdot stories that link back here. Given the readership of Slashdot I can only assume this drives substantial traffic to this site. More often than not, though, the link is to talk about a story, first reported elsewhere and re-summarized here. The link and the Slashdot story do not refer to any additional commentary Techdirt makes. While Techdirt itself may yet link back to the originating report (often through several links to earlier Techdirt articles which I have to believe is also an SEO effort), it is nonetheless Techdirt that gets direct credit for reporting the story (and the resultant traffic) from a Slashdot reader's perspective. This mutual-admiration society of blogs has formed a new core of Internet information middlemen. Techdirt and Slashdot occupy privileged positions in this new world of middlemen due in large part to their longevity and the inertia of popularity. "Fans" enable this by lazily linking stories where they first read them rather than where they originated, and the middlemen abet them by not rectifying this, and, in some cases, creating a maze of internal links obfuscating the origin of the information. For a blog and a community that so often rails against and diminishes the contributions of (other) middlemen, I'm so often surprised at how well they are emulated.

## Re: Re:

I'm not sure you totally understand why 'we' generally appear not to like middlemen. Generally, middlemen make a process less efficient. This isn't always true. Middlemen, as wholesalers or retailers, came about because they WERE more efficient. Back then, and to an extent, still are today. We couldn't, or had a hard time, buying directly from manufacturers. And even when we COULD, personal shipping costs might far outweigh the unit shipping cost of a retailer. And even then, the retailer advertises and hires retail employees, and does a whole bunch of things, instead of the manufacturer, which also made the process easier, and often times cheaper. Still does. Generally, we like those middlemen. They make things easier and cheaper. Now, let's look at music. What once was a scarce resource, (records), is now an 'infinite' resource, (1's and 0's). We hate middlemen there, because they don't do anything. And they cost money. There are no shipping costs. The music is often its own advertisement, there is no need or use of hiring retail employees . . . middlemen, in music, just cost. In software, we see a similar story, most of the time. But, software fucks up, and is hard to understand at times, and sometimes the middleman, by providing support for a lot of programs and customers, actually ends up adding to the process. We like these middlemen, too, even though they also drive up the cost of the software. You are falling prey to the availability heuristic, in that we never complain about middlemen who we like. Similarly, we never complain about bias that we like. (sidenote: Excellent essay on the subject of bias, and 'good bias': http://nancyfulda.livejournal.com/279392.html) Now, let's look at TD and slashdot. They cost nothing to us. Big plus there. They aggregate news that interests us. Big plus there. Slashdot notsomuch, but TD often provides additional commentary, or aggregates several news stories on the subjects into one article. Big plus. The community. Go to the comments, we have a community, it's steady, the writers read and comment and argue with the rest of us . . . big plus. So yeah, TD is a middleman that we like, and on top of all that, TD even writes its own original articles. I hope that adequately explains why 'internet information middlemen' are appreciated.

(Further note: There are information middlemen we dislike or hate: they are the ones who do not link back to their sources, twist information, (or get it wrong), without apology, take credit for others' works, refuse to critically think, tunnel-vision and conspiracy theory themselves to death, and other such things)

## Re: Re: Re:

The logical flaw here is not the availability heuristic, but confirmation bias. You have already decided you like Techdirt and dislike other middlemen, and must then invent reasons to justify what you already "know." Your favorite middlemen are ruthlessly exploiting as many flaws in the human psyche as they can find, and you are part of that equation. They have convinced you that it is more valuable to put three pieces of someone else's expensive work next to each other than to do the work in the first place. They have convinced you that a bunch of people making mostly uninformed (but inflammatory) blather is more valuable than boring facts. They use base rhetoric to incite primal feelings of distrust of authority and righteous indignation to create an in-group as false as any in the Stanford Prison Experiment and you jump at the chance to join: they present you a guard's uniform and you cannot put it on fast enough. Then they explain to you that it's the prisoners' own fault that they are in prison and it is right that they be beaten. You parrot their rhetoric as if it were your own. Enjoy being the tool of someone else's agenda.

## Re: Re: Re: Re:

> The logical flaw here is not the availability heuristic, but confirmation bias. You have already decided you like Techdirt and dislike other middlemen, and must then invent reasons to justify what you already "know."

Confirmation bias exists, but it's not responsible for everything you disagree with, hon. I don't always agree with the Techdirt perspective on what they write about, but I absolutely appreciate that they think before they write. That's why I'm here, and that's why most of us are here.

> Your favorite middlemen are ruthlessly exploiting as many flaws in the human psyche as they can find, and you are part of that equation. They have convinced you that it is more valuable to put three pieces of someone else's expensive work next to each other than to do the work in the first place.

What expensive work are you talking about? You mean the 'expensive work' of parroting the RIAA, Congress, or whatever talking head is being quoted in the original 'news' articles that TD posts usually link to? And how is that not comparable to the 'expensive work' of getting enough education to be able to analyze what those talking heads are saying?

> They have convinced you that a bunch of people making mostly uninformed (but inflammatory) blather is more valuable than boring facts.

So posts that pick apart misinformation are blather? And the misinformation is fact? Lolwhut?

> They use base rhetoric to incite primal feelings of distrust of authority and righteous indignation to create an in-group as false as any in the Stanford Prison Experiment and you jump at the chance to join: they present you a guard's uniform and you cannot put it on fast enough. Then they explain to you that it's the prisoners' own fault that they are in prison and it is right that they be beaten. You parrot their rhetoric as if it were your own.

Did you take your meds today? Because your paranoia is showing.

> Enjoy being the tool of someone else's agenda.

I think you're just upset that we're not your tools, and we don't follow your agenda. God, us self-thinking folks, we suck so bad...

## Re: Re: Re: Re:

Says the person whose most recent triumph was collecting "favorite" posts on this site into yet another aggregation of aggregation. Sound and fury. We have redefined "yeah, what he said!" as "critical thinking." 1984 indeed. Perhaps every year we should have a guest "editor" make a post identifying his or her favorite posts consisting of favorite posts.

## Re: Re: Re: Re:

Oh, huh, I thought you might be someone who genuinely didn't understand something. Instead, you're an asshole. Go figure. Ah well, it's worth wasting the time to explain it, just in case someone with a genuine confusion comes along sometime.

## Re: Re: Re: Re:

Funny that you say this, because all of our favourite trolls (yourself apparently included) on this site are the exact opposite – they come to every comment section with the mindset that they must attack TechDirt at all costs. And when they can't find a flaw in the story, they decide to go off on a random tangent (such as yourself) and make random ad hominem attacks (like you just demonstrated), usually with some variation of "Dirty pirates" or "Anarchists" or "Socialists". A lot of readers here agree with a lot of what Mike says, but there is plenty that we will disagree with. The Anti-Mikes in the crowd, however, will disagree no matter what – hence why we can't take you seriously in the slightest. Oh, and Merry Christmas everyone ;).

## Re: Re: Re: Re:

That you must redefine "troll," "ad hominem," and "tangent" to make your point valid is telling. Why don't you impugn these ad hominem attacks (since they clearly are by your definition).

## Search vs Referral

The fact that you're getting a lot of play from the social sites is great for you, but it also is the type of site that you've created – a dynamic site with regular updates. Updates resonate with someone and they share it with their friends, friends who are also likely to have the story resonate with them. This means you've got a great site for "in the now". It would be interesting to see if your historic stories (things you've written about months or years ago) – if the users who hit them today are coming from Google search or more personal referrals. My guess would be that Google drives users to your older content while the social networks drive users to your most recent posts. After all – when you want to look something up, do you ping your buddies and wait for a response, or do you just "Google it"? Personally I can't see myself ever searching for a website via Facebook (though I guess it's possible). On the flip side, if you find a great topic of conversation, I bet you share it with all your buddies via your favorite social network. One more note. I'm "facebooked out". It seems many of the people who made Facebook a part of their lives early on – also have moved on. Not sure what that means, if anything, for the future of social networks. -CF

## Re: Search vs Referral

> your historic stories

One of the old historic techdirt stories drives requests to a web site I have log access to.

Merry Christmas and a Happy New Year to you and your family, Mike. Keep up the good work in the New Year.

## ?

You would think that by now there would be such a thing as a decentralized search engine application type-thing... Is there one?

## Re:

Thank you for that wonderful display of pathos. Grossly lacking in logos (you only address one of the no less than six arguments made in the parent post), but certainly an admirable example of an argument made of pathos.
## Re: “You have already decided you like Techdirt and dislike other middlemen, and must then invent reasons to justify what you already “know.”” “They have convinced you that a bunch of people making mostly uninformed (but inflammatory) blather is more valuable than boring facts” “They use base rhetoric to incite primal feelings of distrust of authority and righteous indignation to create an in-group as false as any in the Stanford Prison Experiment and you jump at the chance to join [ . . .]” These are all ad hominem fallacies. One thing I don’t like about calling anything ‘ad hominem’ is that people tend to think about absolutely disconnected examples: “You are stinky, therefore you are wrong”. Ad hominems, the ones that we need to look at and avoid, are the ones that, in different circumstances, could be valid. But as stated, depend on the attack on the person. Take the second post here. We have the booleans: P = TD is uninformed blather Q = Facts are more valuable than blather R = ‘We’ have been convinced of the value of TD falsely U = ‘We’ have been convinced of the value of TD Forming (P^Q^U) -> R There are also some things that are non-explicit that I believe most people would read: S = ‘We’ are stupid T = TD is inflammatory (S^T) -> U R -> S Even if we take only what is intended, you have presented thus far no proof that TD is uninformed blather. That’s the ad hominem there. But then we have, in the non-explicit part, statement S. Ad hominem. Obviously when you see it. The ‘circular part’ isn’t a problem in this case because we have not established that R is necessary for S, (Or S^T for U, or U for R). For that matter, you’re also guilty of “guilty by association”, (a form of ad hom, if you hadn’t realized), by grouping with people who agree with TD. Which I wouldn’t even look twice at, because I was arguing on ‘their’ behalf, even though I specifically distanced myself from the argument I was making, and because it doesn’t matter here. An ad hom of “guilt by association” follows: Source A makes Claim B Group C makes Claim B Source A is from Group C Just for people who like it stated clearly 🙂 Anyhoo, if you generally operate on the premise that everyone else is generally speaking, stupider than you, you may wish to examine the Dunning?Kruger effect. The effect is usually stated as that someone with little skill will overestimate their own skill in an area, and underestimate more skilled people’s, uh, skill. I think it is better worded that people who do not examine their examination of their own skill will mis-estimate skill levels. (And unskilled people are unable to properly examine their examination, and so . . .) Indeed: “Dunning et al. cite a study saying that 94% of college professors rank their work as “above average” (relative to their peers), to underscore that the highly intelligent and informed are hardly exempt.” Now, unless that bottom 6% are really, really horrible, or the above 94% extremely uniform, the average is generally above 50% of the pop, and below 50% of the pop. So yeah, be aware of how that might play into your own ideas and you might learn a lot. Anyhoo, the points are: That you have made what appear to be ad hominem fallacies both in explicit and non-explicit arguments, given that nothing is backing them up that you have posted. And that you may wish to examine your own arguments and prejudices/bias. I’m examining mine as well, and taking into account the possibility that I may be on the wrong side of the argument. Merry Christmas AC. 
## Search vs Referral That’s a pretty interesting hypothesis. What’s the data say, Mike? ## Re: That was a rather excessive analysis (which I stopped reading partway, by the way). The original post being analyzed could be analyzed in a much simpler way: it makes a wide array of assertions. In the absence of logos (no effort is made to support the assertions with facts or logic), the poster fills them with emotionally-charged terms in an attempt to win the argument by appeal to pathos. Sort of a brute-force approach to argumentation. I should note one other thing, as well: simply being insulting to one’s opponent is in the realm of pathos. Ad hominem is a defective (illogical) syllogism (logos) of the following form: Opponent is [something] (fact) People who are [something] don’t make rational arguments (implied assumption) Ergo opponent is wrong (conclusion) ## Re: I agree :p Except with excessive: I’m trying to explain to someone who doesn’t already know why his arguments aren’t valid. And also, IMO, examining other’s posts for ’emotionally-charged’ or ‘realm of pathos’ is generally useless because: a) The guilty party very rarely realizes his mistake, especially when pointed out so shortly b) It’s a very common fallacy, and although it doesn’t make an argument, it doesn’t make any other arguments by that person weaker c) It is occasionally valid. There are cases where questioning someone’s motives or intelligence is valid. With that in mind, I only argued for AC’s ad hominem because he explicitly did not recognize it, and I hope to help him by explaining it throughly, and I hope, correctly. ## If only it wasn't for piracy If it wasn’t for the fact that Firefly was a pirate’s favourite, and people actually paid to watch it, they wouldn’t have cancelled Firefly. Blame the pirates. ## If only it wasn't for piracy Indeed. And you can blame the pirates that there will never be a sequel to Avatar, Dexter, etc. ## If only it wasn't for piracy In case it was not obvious, Avatar is presently the most pirated movie ever, and Dexter is consistently among the most pirated TV series year-to-year and is also released through paid cable. ## Re: I don’t mean this as a harsh criticism as such, but I have to wonder about people who claim these things…The reluctance to use True Names is a longstanding one. It predates the web. Its efficacy is debatable, of course. ## beware malware ... Why don’t you install something like Linux on her machine? I’m sure Ubuntu would help keep her virus free. ## I don't think so I think that’s not really true although you have your basis for that. People still most specially internet marketers want to see their site in google. If you cannot find your site on google search engine then it’s pretty much a dead site and you don’t want that. Although, facebook and twitter does a lot of great wonders when it comes to traffic still you cannot just focus on that because it’s not enough still. ## Re: Regularly lately I see links in Slashot stories that link back here. Given the readership of Slashdot I can only assume this drives substantial traffic to this siteSlashdot does drive a decent amount of traffic, but significantly less than the sources listed above. But, yes, we’re happy on those occasions when Slashdot sends us traffic as well. More often than not, though, the link is to talk about a story, first reported elsewhere and re-summarized here. The link and the Slashdot story do not refer to any additional commentary Techdirt makes.Hmm. 
Well, that usually depends on who writes the submission, so not sure about that, but in looking over the last couple of stories that did get onto /., I don’t really believe your statements are accurate. For example, just looking at the two stories that were on /. in the last week: http://yro.slashdot.org/story/10/12/24/0412241/Will-Patents-Make-NCAA-Football-Playoffs-Impossible About the football playoff patent. That links to a few primary sources, and links to Techdirt because we’re the only source which actually found the patent in question. http://yro.slashdot.org/story/10/12/20/2139201/DHS-Seized-Domains-Based-On-Bad-Evidence About the DHS domain seizures. Links to a number of primary sources, and only links to the Techdirt post where we analyzed DHS’s argument. In my experience, it’s rare that /. will link to us *except* if the Techdirt post adds something to the story. So not really sure the basis of your claim. This mutual-admiration society of blogs has formed a new core of Internet information middlemen. Techdirt and Slashdot occupy privileged positions in this new world of middlemen due in large part to their longevity and the inertia of popularity.Inertia of popularity. Great phrase. If only there were anything behind it. I would suggest the next time you work hard for years to build something successful, ask how you would feel if someone without any knowledge of the situation claims your success is due entirely to inertia. For a blog and a community that so often rails against and diminishes the contributions of (other) middlemen, I’m so often surprised at how well they are emulated.Again, I believe you have your facts wrong. We have never said that “middlemen” as a whole are a problem. We have only suggested that middlemen who act as *gatekeepers* or *monopolists* who limit markets and limit efficiency are a problem. We’re all for middlemen who are *enablers*. http://www.techdirt.com/articles/20091208/0259297245.shtml http://www.techdirt.com/articles/20100811/18040910598.shtml http://www.techdirt.com/articles/20070201/004218.shtml I enjoy constructive criticism, but your criticism does not appear to be based on facts, but on some sort of views you have about us that are not supported by what is actually happening. ## Search vs Referral It would be interesting to see if your historic stories (things you’ve written about months or years ago) – if the users who hit them today are coming from Google search or more personal referrals.That’s a good question. Obviously, you’re right that a lot of the traffic is to recent stories, but not always. StumbleUpon, for example, quite frequently will suddenly drive a lot of traffic to an older story. Also, just this past week, we got a *TON* of traffic to a story from 2009 from Facebook, and I have no idea why. I asked around to see if there was any way to figure out where that traffic was coming from, and the answer was no. So it’s a mystery to me, but apparently someone on Facebook with a ridiculous number of followers mentioned a story from 2009 (about ad blockers) and it drove an astounding amount of traffic. So, it’s not always new stuff. ## Re: Here are 11 instances (all but one in 2010, I believe?) where Slashdot linked a Techdirt article, but did not seem to do so for the Techdirt-original content, and where they did not link the source from which the information being referred to came from. In my experience, it’s rare that /. 
> will link to us *except* if the Techdirt post adds something to the story.

All Techdirt posts "add something" to the original story: a resummary, commentary, synthesis, etc. I did not and do not dispute this. In the links below, references in the slashdot article to anything beyond the original resummary are minimal. The slashdot reader is reading a summary of a summary of a summary of an event.

– – –

Link to a Techdirt article with little/no reference to original Techdirt content; Techdirt links to original story at NASAWatch and Bloomberg. No link to either source from Slashdot.

http://science.slashdot.org/story/10/10/07/0157204/Astronaut-Sues-Dido-For-Album-Cover?from=rss

– – –

"TechDirt is reporting…" which is true, but Techdirt links to BusinessWeek as the source of the information that it is reporting. This slashdot article includes an excerpt from the Techdirt commentary, but still no link to the original source of the information on which the commentary is being made.

http://news.slashdot.org/article.pl?sid=09/02/23/1657242

– – –

First link is to Techdirt. No link in the Slashdot story to Techdirt's source link at mobiledia.com. No additional summary of Techdirt commentary.

http://apple.slashdot.org/story/10/11/30/145211/Apple-Sues-Steve-Jobs-Figurine-Maker-Over-Likeness?from=rss

– – –

"Techdirt catches Amazon…" Rather, MSNBC catches Amazon? No link to the MSNBC article from Slashdot.

http://yro.slashdot.org/story/10/05/12/0052242/Amazon-Is-Collecting-Your-Kindle-Highlights-amp-Notes

– – –

Two Techdirt links here in an article with minimal reference to original Techdirt commentary. Origin link for the first Techdirt entry is a blog, which is itself a summary of a New York Times article. Origin link for the second Techdirt entry is an opinion piece from firstamendmentcenter.org. No links to any non-Techdirt sources in the slashdot article.

http://news.slashdot.org/article.pl?sid=10/09/01/1631229

– – –

"Techdirt has details…" Actual origins of the details, according to Techdirt origin links, are Slashfilm and Deadline, but the Slashfilm article also links back to Deadline. No links to Deadline from the slashdot article.

http://entertainment.slashdot.org/story/10/07/09/1621218/Hollywood-Accounting-mdash-How-Harry-Potter-Loses-Money?from=rss

– – –

Summary of a summary of a summary of an article originally from People's Daily Newspaper in China. As someone points out in the comments, the slashdot story links (only) to a Techdirt article, which links to a Christian Science Monitor article, which does not link to the People's Daily article.

http://entertainment.slashdot.org/story/10/10/22/237244/Chinas-Official-Newspaper-Pans-iPad-mdash-Too-Locked-Down

– – –

Slashdot article summarizes a few facts that Techdirt links back to Marketwire for. No link to Marketwire in the slashdot article.

http://yro.slashdot.org/story/10/12/02/1647222/Jailtime-For-Jailbreaking

– – –

The information about the lawsuit came from Techdirt origin link thenextweb.com. No link to it from the slashdot article. Some Techdirt commentary is discussed in the slashdot article, but the link refers to the original news, not the commentary – the part of the slashdot article about the Techdirt "deconstruction" of the patent is not linked at all.

http://yro.slashdot.org/story/10/07/24/1759200/Company-Claims-Patent-On-Spam-Filtering-Sues-World?from=rss

– – –

Article notes that Texas judge cites Mr. Spock. Techdirt cites Science Fiction Writers of America as source. No link to SFWA from Slashdot article.
http://idle.slashdot.org/story/10/10/31/1955240/Texas-Supreme-Court-Cites-Mr-Spock

– – –

Only one link to Techdirt article. Techdirt links to BoingBoing. No link to BoingBoing story from slashdot.

http://yro.slashdot.org/story/10/10/21/0019257/All-Your-Stonehenge-Photos-Are-Belong-To-England

~ ~ ~ ~

> I would suggest the next time you work hard for years to build something successful, ask how you would feel if someone without any knowledge of the situation claims your success is due entirely to inertia.

I said "in large part," not "entirely." In searching for the above links, I noted that Techdirt has had a front-page link from a slashdot article more than 20 times since October. Even one link from slashdot is enough to bring most ordinary hosting providers to their knees. The network effect is working in your favor. You have clearly crossed a huge tipping point, yet you seem ashamed of this? Hell, if I were you, I would exploit it like crazy. In fact, in my own life, I do (and I have far less inertia to exploit, but I have some).

I can quantifiably tell you that the bonus in my current success due to inertia is about 100% – I do twice as well as colleagues that work just as hard as me and are just as productive, solely because of reputation. Reputation is like money – it's much easier to make some when you have some already, so if I play my cards right I can widen that gap over time. If "someone without any knowledge of the situation claims [my] success is due entirely to inertia" my response would be "not entirely, but about half of it." I will happily debate the fairness and justness of this. I have qualms about it. But I have no illusions that I'm 2X as productive as my colleagues, and I would not at all be hurt by the accusation that I'm not.

A "famous" guy (in a particular field) gave a talk recently that I attended. It was pretty good – about as good as most of the talks I go to. About 200 people showed up. If anybody but Mr. Famous were giving that same talk, 20 people would have shown up. So whereas my fame gives me a 100% advantage, Mr. Famous' fame gives him a 1000% advantage. I don't begrudge Mr. Famous his extra 180 attendees; I'm happy for him. But now let's suppose Mr. Famous starts going around telling people that he has special insight about giving great talks – after all, he has 10X more attendees. Would it be impolite for someone to take Mr. Famous aside and suggest to him that maybe it's not his talks that are 10X better, or that maybe they are better, but not 10X – perhaps much less?

Stephen King claims to have published novels under the name Richard Bachman to get around a one-book-per-author-per-year limit imposed by his publisher. According to Wikipedia, in the introduction to *The Bachman Books* he also claims to have been trying to disambiguate skill and luck in his success. Wikipedia claims that sales of *Thinner* were a respectable 28,000 when published as Bachman, and 10 times that when people learned that Bachman was King. Though 28,000 sales shows that King has substantial raw skill, 90% of his success in terms of book sales is attributable to inertia. The book did not get 10X better when people learned who Richard Bachman was.

Since you asked and I answered as candidly as I could, how much of your current success would you attribute to inertia vs. how much would you attribute to just being more insightful/interesting/intelligent than other bloggers?
## Re:

"While Techdirt itself may yet link back to the originating report (often through several links to earlier Techdirt articles which I have to believe is also an SEO effort)"

In my experience, Mike always links back to the originating story in the first post on a subject, then links back to previous TD posts in subsequent articles. Nothing wrong with that.

"it is nonetheless Techdirt that gets direct credit for reporting the story"

This is an opinion blog, not a reporting resource. If people choose to link back here instead of the original article, it's usually because they find Mike's commentary useful. Same with every other blog that gets posted to Slashdot or other aggregators.

""Fans" enable this by lazily linking stories where they first read them rather than where they originated, and the middlemen abet them by not rectifying this"

Quick question: how are Mike or other people working at TD meant to "rectify" links sent to a news aggregator?

"For a blog and a community that so often rails against and diminishes the contributions of (other) middlemen, I'm so often surprised at how well they are emulated."

As is stated in other replies, the "middlemen" criticised here are usually those who add nothing to the original product. The marketers, labels, lawyers and others who get between me and the music I wish to listen to or the movies I wish to view are just that – in the way. Sites like TD and SD add value, in the form of making stories easier to find and adding both editorial commentary (TD) and community participation (SD and TD). I'm still free to browse any primary news site for my information if I wish, and I can find my way back to original stories quite easily if I wish as well.

This is by far the most stupid article I have read

## Re:

Thank you for your insightful and in-depth analysis of what was wrong with the article. Your keystrokes have enlightened us all.

/moron

## Re:

Firefly was cancelled because it was pants.

## beware malware ...

Because Rex Karz is old and hates social networks. He probably doesn't have a niece or, at least, doesn't have one who infects her PC through using social networks.

## Google more effective for large sites?

I think Google may drive more traffic for large, established sites like TD than small start-up sites. Both large and small sites rely on their fan base to link to friends via email, social network, personal blogs, etc. However, large sites have the benefit of being on the radar – they are more likely to come up in casual conversation, the news and other media. Greater exposure means more Google searches by naive (in the scientific sense) people whose interest has been piqued. Small sites don't have this visibility, and are more reliant on traffic from non-Google referrals.

This is just a theory though, so feel free to poke holes in it. Also, at the very bottom of it all is the assumption that you have good content.

## Dont sing it - - BING IT..

Everyone knows google is not a monopoly, so the entire point of this article is pointless.. google is a search engine, and that is all it is, you use it to find new things, but everything you know about you most probably go to directly.

People do not generally use google as a starting point for their browsing, unless they are looking up a new subject. Then they will probably use BING, as BING Is Not Google.

But it's not a monopoly, it's not even very important, and to say people are looking to google to see how 'famous' they are, that is a joke..
I have probably used Google 3 times in the past year, and that was only to see if it had a better search result than BING.. It did not.. it is not even a very effective referral system.

Some web sites also have a very vague subject matter, or theme. Like Techdirt: if you did not know the name techdirt, what term would you put in a google search to find the type of site TD is? IE, people do not find web sites like this from google.

But according to Mike, Bit-Torrent is a search engine, so there is no reason why I cannot go to a bit torrent site, and do a search on it for Techdirt, and I'm sure I will get LOTS (ie NONE) hits.. Because we all know Mike seems to have a misunderstanding of what search engines are, or what a monopoly is..

## It's because Mike works for Google, he has to apologise for them.. he's under orders.

> One of the driving forces behind some of the legal attacks on Google is that Google is the de facto monopoly on being found online.

What legal actions, Mike??

> I mean the EU conducting.. it has taken the step that everyone expected and has begun investigating Google for supposedly anti-competitive practices with regards to "competing search engines" such as Foundem.

So the EU is conducting an investigation. That is **NOT**, that is not, a legal attack. It is an investigation.

So you are SO keen to apologise for google that you will even start to do that before there is any indication that there is an issue.. But just because it's your precious Google, you jump to their defense, preemptively, just to make sure you get in first..

I know you think google can do no wrong, after all they send you money every week, you would not want to upset your overlords.. why don't you post a disclaimer stating that this web site is financed by Google, and therefore you have a financial interest in promoting google.. No one wants to shoot the golden goose, do they??

How much do they pay you for your political support? I hope it's a lot, because the price you pay for it is your reputation.. but who cares about things like that, right MIKE???

## Dont sing it - - BING IT..

"But according to Mike, Bit-Torrent is a search engine, so there is no reason why I cannot go to a bit torrent site, and do a search on it for Techdirt, and I'm sure I will get LOTS (ie NONE) hits.. Because we all know Mike seems to have a misunderstanding of what search engines are, or what a monopoly is.."

Seriously… how do you function in society without being able to read?

## Dont sing it - - BING IT..

Who said anything about functioning?

## If only it wasn't for piracy

> If it wasn't for the fact that Firefly was a pirate's favourite, and people actually paid to watch it, they wouldn't have cancelled Firefly. Blame the pirates.

Wow. Not sure how a television show that played on Fox for 10 episodes, out of order, and was preempted a number of times by football and other events – thus never getting a chance to gain popularity despite a small but growing audience – was cancelled because of pirates. After all, everyone I know who loves Firefly (including myself) actually has two or three licensed copies of the show (one to keep, one to watch, and one to lend to others.) Speaking of which, I think someone borrowed my third copy and hasn't given it back yet… anyone here know where it went?

Or did I miss the sarcasm tag?

## Re:

> Firefly was cancelled because it was pants.

Firefly was cancelled because the morons at Fox were pants. FIFY. You're welcome.
## If only it wasn't for piracy

uh, Avatar was the second-highest-grossing film of the year, and Dexter had the most watched finale of the year, as well.

…Your point?

## It's because Mike works for Google, he has to apologise for them.. he's under orders.

Note that the investigation is on antitrust grounds. That means that the EU Commission believes that Google is unfairly using its market dominance. That means they're looking for illegal behaviour.

Google is becoming less relevant and less dominant. Can you see where this is going?

## Re:

It's nice to see that other people can work out what Mike does for a living.. And the reporting on reporting on reporting, and often simply referring back to his own comments as confirmation of the "facts", I find to be a source of amusement. But what is amusing to me would or could be dangerous to others who hang off Mike's every word.. and believe that Mike can do no wrong.

It's good that people are willing to try to track down the original source; it would be nice if Mike could take the time to do that.. The quality of his 'work' would massively improve.. But if all you are interested in is the number of page hits you get, you will end up feeding the lowest common level.. Which is exactly what you have fallen into, Mike: you tend to play to the majority, disregarding any actual facts, and trading your reputation for a quick dollar.

## Re:

Classic, that is exactly what happens here. I used to have a nice link to a paper that is about how to form a terror or extremist group.

There are 4 primary stages. It goes right into how there is one 'leader' who dictates policy to the group, and he expects the group to comply with his ideals. He uses isolation techniques (like linking to his own works) to control the flow of information. He uses repetition: he harps on the same points and subjects as if they are mantras, because that is what they are. He expects his followers to recite those mantras and have them imprinted on them. And you see that here all the time..

Another classic web site that employs the same techniques is "TECHRIGHTS", ex Boycott Novell.. He uses the same techniques, and I'm sure on many occasions they work together to form or try to form opinions within their cult followers.

http://emotionalsurvival.com/extremist_groups.htm

DYNAMICS OF DOMESTIC TERRORIST GROUPS

The Lethal Triad: Isolation

> Isolation represents a key component in the restructuring or indoctrination phase of most groups. Isolation appears to be the most powerful of the social dynamics operating in radical group processes. This practice protects members from the contaminating influences of the outside world. Simultaneously, the isolated individual gets bombarded by cause-related information in the form of "literature" or lectures by the group's hierarchy. Although some groups APPEAR to be vociferous consumers of information from such sources as public access television, shortwave radio, and the internet, the group's leadership CENSORS all of the information before disseminating it to group members.

> As isolation increases, CRITICAL THINKING DECREASES. Without access to alternative information sources, members encode new belief systems. Group tenets never are challenged, only recited. Platitude conditioning replaces reasoning processes. Although the isolation process itself is not pathological, the end result IS.. The extent of the deprivation and isolation yields an individual who responds to the group mandate with no individual thinking or decision making.
> Group leaders actively discourage critical, self-contained thought.

PROJECTION

Stage 1. The group projects responsibility for its decisions and direction onto the LEADER. (Mike).

Stage 2. The group projects the cause for its perceived grievances onto some outside entity (anyone who agrees with copyright laws, or laws in general: the RIAA, MPAA, US Government, EU). Mike is an expert at this part of projection techniques..

> Each group requires a single authoritarian leader, who assumes absolute control of all group functions and decision-making processes. As members surrender critical thinking, they elevate the group leader to the status of absolute authority. Members abdicate all decision making and critical thinking to the group leader. REALITY TESTING DOES NOT OCCUR.

PATHOLOGICAL ANGER

Stage 3 of the lethal triad..

1. Isolation
2. Projection
3. Pathological Anger

> Collectively, group members see themselves as victims of an outside force. As they project blame onto this entity, they grow emotionally volatile. As their anger grows, group members believe they are in a position of "RIGHTEOUSNESS" or "JUSTIFICATION".

> Because of their isolation, group members come into significant contact only with others who share their world view and emotional reaction to it.. (you see that here all the time). They neither test nor challenge the group hypothesis and feel no sense of individual accountability.

So there you have it. I was paraphrasing, but you get the idea, and you can read it all yourself if you wish, just follow the link.. But you can easily see how Mike and TD and TechRights conduct their activities. How they do it, and how it works, and especially how it stops critical thinking.. exactly what Mike is shooting for..

## Re:

…and you are? You put an awful lot of time and effort into trying to bash Mike; why not use that positively and start your own blog?

First of all, congratulations on this blog. I found it by chance as a friend recommended it to me on Twitter, and I'm glad he did.

I totally agree with you. Although Google is still a major player on the Internet and websites owe much of their traffic to it, it is also true that social media and interesting content can increase and diversify any site's traffic. As you say, having interesting content is a must-have if you want your site to attract new visitors and to keep those users that already know you coming through to your site.

## Facebook is unstable, unreliable

"and a Facebook page (which often fails to update for reasons not at all clear to us)"

Facebook seems to suffer from lots of known bugs that they never address! One of the more serious bugs occurs when I try to post a link to a youtube page: there is a 5% chance that the link will pull a completely unrelated youtube video instead, and facebook will – for some strange reason – change the link you entered to reflect the new unrelated video! No matter how many times you go back, and remove and reinsert the same youtube link, facebook will take offense to it, and replace it with a completely unrelated video, again and again!

It seems to be link specific, although I can't remember specific examples right now (none of them infringed on copyrights or were in any way controversial), but I've faced this problem with at least 4 videos in the past year.

Also, their status alerts seem to get messed up every once in a while! Sometimes the latest alert will appear along the bottom, buried amongst the older alerts.
I’m quite horrified that facebook overtook google to become the most visited site on the net, to be quite honest! Socially shared content is a good idea (like techdirt for example), since it is discussion based content, you get more information and discussion about a certain topic, than you ever would if you just read headline news on google! However, facebook falls short in that regard, I don’t think its as developed as it should be to handle this kind of responsibility! My primary source is still igoogle, and unless I’m looking for videos of kittens in hats, or a dancing bear, I’m not likely to ever switch to social media for serious information. Search Engines companies remain in my opinion the driving force behind the internet, and even if google dies, search engines won’t die with it. Yahoo will rise to take its place, and so on. ## What is Twitter, anyways?? Oh, and maybe this is a sign of my old age (I’m only 30 though!), but I completely fail to understand the appeal of Twitter! It seems to be more hype than anything! I can understand celebrities using it to let all their fans know what time they had a bowel movement this morning (I guess..), but other than that, it seems to be quite useless! The only time I thought “I kind of get it now!” was when Bill Gates used it to “communicate” with people (in less than 140 characters) over the Haiti earthquake efforts! But short of that one time thing with Bill Gates, every time I read an article talking about twitter and its effects, it seems to always greatly exaggerate its appeal, clout and usefulness. I keep going back to the site, trying to find anything useful or appealing, and end up giving up each and every time. It seems to be nothing more than vain and vague chit-chat with no real value! Quite frankly, I think Twitter is to google, as Tabloids are to Newspapers: Full of junk, written in the simplest, most inelegant form possible, in order to cater to the majority demographic with the least attention spans! ## Re: (late to the party as usual) mike sayz: Inertia of popularity. Great phrase. If only there were anything behind it. i think there is something behind it, and i think that you are a great example of this. think of it like this: there is a reason we refer to it as “15 minutes of fame.” your 15 minutes are (supposedly) easy to get, but then, poof! are over. now inertia (the physics thing) is based on mass. if i do something small, i am only going to be popular for a minute (or 15 in this case). but if i take my time, build up something grand, now i have a lot. it is harder to stop a boulder than a billard ball, ask sysiphus. so ‘inertia of popularity’ i believe is a good phrase that does have merit. it implies something more than a quick fix to the top, say like a one hit wonder. it emphasises the work that has already been done. it does not imply that you have stopped working. Indeed if you weren’t factually correct (except that nook article which still upsets me) and insightful we would stop coming. and you stick with your game plan. think of Digg, they changed, changed in a way the users didn’t like thinking that THEIR inertia would just keep everyone onboard. it did not. sometimes the rock can be stopped. ## Re: Says the person whose most recent triumph was collecting “favorite” posts on this site into yet another aggregation of aggregation.Actually, my most recent triumph was my four-year-old identifying xin an equation.Sound and fury. 
> We have redefined "yeah, what he said!" as "critical thinking." *1984* indeed.

If that's how you defined it, based on my post, then you obviously didn't read it. My post simply didn't contain any critical thinking. I just pointed out that my favorite posts did. Also, *1984* was about the balance of power, not critical thinking.

> Perhaps every year we should have a guest "editor" make a post identifying his or her favorite posts consisting of favorite posts.

That would be pretty neat, actually. Like a 'Best of 2010' post. Good idea, Anonymous.

## Re:

Really? are you really claiming that Mike is a Terrorist Extremist who has captured us all in his bunker for brainwashing? really darryl? it seems you have crossed some magical dividing line and begun your descent into the dark valleys of total insanity.

I am not too worried about hurting your feelings now, since you will simply write it off as my cult brainwashing. Gee, sure is a great way to avoid dealing with things. Many industries agree with you: "those are not under-served customers, those are Pirates~!"

Hey, whatever helps you keep the voices from screaming in your head, or whatever other condition is at play here.

## Re:

Mike links more to outside sources than his own, and Techdirt isn't the only site – or even in the minority of sites – that links to its own previous articles on a subject. That's not some sort of psychological tool. It's just helpful.

Roflmao, the only people who see themselves as victims are the trolls, like yourself. As for the rest of us, we're not pathologically angry at the RIAA or any of those guys. Some of us even feel sorry for them.

> Although some groups APPEAR to be vociferous consumers of information from such sources as public access television, shortwave radio, and the internet, the group's leadership CENSORS all of the information before disseminating it to group members.

When Mike blocks my Internet, I'll let you know. Until then, lolwhut?

> As isolation increases, CRITICAL THINKING DECREASES. Without access to alternative information sources, members encode new belief systems. Group tenets never are challenged, only recited. Platitude conditioning replaces reasoning processes.

Darryl, this is a fairly good description of what's happened to you, not us. Are you Australian? It would explain how this happened to you…

> Because of their isolation, group members come into significant contact only with others who share their world view and emotional reaction to it.. (you see that here all the time).

No, you don't see it here. First, no one here comes into significant contact with each other. For the most part, we don't even know each other's names, marital status, profession, location… Next, we're not isolated. We all live in an actual *real* world, where we live and interact with *real* people, most of whom are not Techdirt readers.

> How they do it, and how it works, and especially how it stops critical thinking.. exactly what Mike is shooting for..

Because Mike is a super-villain? Oh, noes!!! Actually, this blog is a part of the Floor64 business model, so it's more about money than some nefarious evil effort to make people feel better about pirating mp3s.

## Re:

> The book did not get 10X better when people learned who Richard Bachman was.

No, it simply became 10x better publicized, which has nothing to do with inertia. Nice try, though.

## What is Twitter, anyways??

I use it as a news feed. I follow a number of news sources, who then post a link on Twitter when they post an article.
I also follow some folks who irl-follow our Congress, which means I can find out what's happening there right away. Of course, there's also OMG Facts and Weird News… 🙂

Anyway, it's like the world's best 'newspaper', available at any time, with up-to-date news.

## Re:

Lol. He could call it "Darryl's Delusions" or "Verbal Diarrhea".

## Re:

You bring up a potentially useful point. What you summarize describes a good half the sites on the internet. It'd be a lot easier to find things on the web if we killed off all of those for terrorism.

## If only it wasn't for piracy

You were so close, but managed to miss it. My point is that the level of piracy and the financial success of something are largely orthogonal values, correcting wordsworm's misconception.

## Re:

i think richard bachman = stephen king was really all the publicity it got. and the publishing world is all about inertia. if you look at the best selling authors from the last ten years you'll see the same people over and over again. does stephen king sell books because they are good or because he is stephen king? that is what he was trying to answer as bachman (which is also a different writing style, so not just king under a different name). did he answer it? who knows. doesn't change the fact that the best seller lists for years now have had king, and nora roberts, and james patterson and danielle steele on them.

this is how the publishing industry works, more so than music. we are boring at what books we buy. someone will own all of tom clancy's books but never once try out a ludlum. read stephen king? but not jack ketchum? (that is a crime by the way) people buy 'safe' books. terry brooks on the NYT best seller's list (my favorite author, btw), no shock there. brian keene or wrath white? now we are talking. they push the safe authors because we buy them. sure individually we all buy other writers, but as a whole these core writers sell millions of books each year.

what does that have to do with google? probably not much, but more to point out the example is bad. and as i stated above (comment 67) i do think the 'inertia' metaphor is a good one and that mike simply took it wrong. more work = more mass = more inertia. less work = less mass = 15 minutes of fame.

## Re:

> i think richard bachman = stephen king was really all the publicity it got.

Yes, that's what I said. The Bachman books got a ton of publicity when the author's name was revealed.

> does stephen king sell books because they are good or because he is stephen king?

If Stephen King appeared out of nowhere with bestselling books, I'd say it was because he was Stephen King. But since he took years, starting with short stories in magazines, to become popular, I'd say it's because he's an excellent writer who has built up a large audience.

> that is what he was trying to answer as bachman

Yes, and Bachman was still popular, and wrote many books before people figured it out. Bachman was more popular than King was, before he was King. I think that might answer your earlier question, as well. 🙂

> (which is also a different writing style, so not just king under a different name)

As someone who currently owns every single title ever published by Stephen King/Richard Bachman, I disagree.

> did he answer it? who knows. doesn't change the fact that the best seller lists for years now have had king, and nora roberts, and james patterson and danielle steele on them.

And you think that being really good shouldn't result in being really popular, or what?
Only new authors should have best-selling novels, or only unpopular authors? I guess I don't understand what you're saying.

> this is how the publishing industry works, more so than music. we are boring at what books we buy. someone will own all of tom clancy's books but never once try out a ludlum. read stephen king? but not jack ketchum? (that is a crime by the way) people buy 'safe' books.

Safe doesn't equal bad, you know. In light of this fact, why is it bad that people repeatedly choose to purchase books by the authors that they know and love? That's like saying that people that love wool coats should spurn wool coats because cashmere is pretty warm, too, or that wool isn't really as good as people say it is because it's popular. Neither coats nor literature are zero-sum games.

> they push the safe authors because we buy them. sure individually we all buy other writers, but as a whole these core writers sell millions of books each year.

Yes, but they also push unlikely breakout authors like J. K. Rowling and Stephenie Meyer, who also make the bestseller list.

> what does that have to do with google? probably not much, but more to point out the example is bad. and as i stated above (comment 67) i do think the 'inertia' metaphor is a good one and that mike simply took it wrong. more work = more mass = more inertia. less work = less mass = 15 minutes of fame.

It's a bad example because Techdirt isn't the Stephen King or the Stephenie Meyer of blogs. It's somewhere in between, and on the upswing, meaning no inertia.

## What is Twitter, anyways??

My suspicion is that everyone, or nearly everyone, who produces any significant number of tweets (or updates on any type of social media) is a narcissist. Personally I use it for my linkspam: posting links to various things I find interesting during the course of my day; before getting on Twitter this was usually done via IRC or IM to my friends. This is, of course, based on the assumption that people find my opinions on what is interesting of value.

## Re:

you said:

> As someone who currently owns every single title ever published by Stephen King/Richard Bachman, I disagree.

desperation and the regulators were written specifically to prove this point. he wanted to tell the two stories with a different voice, and used bachman to do it.

http://bookstove.com/book-talk/the-life-death-and-afterlife-of-richard-bachman/

glad you kept with king, i stopped after bag of bones (which was fantastic), mainly because i found small press horror.

and any object has inertia (sorry, i am an engineer), even when it is accelerating. inertia works both ways: it makes it keep moving, but makes it harder to change velocity. (inertia as in science, not inertia as in laymen speak)

think of it like this. i read two posts on a site i've never been to. one of them has factually incorrect information that is pointed out in the comments, and no changes are made to the post. i am probably not coming back to that blog. their inertia was small. but i go to techdirt, same situation, i come back. why? lots of inertia, we all make mistakes, etc. etc. why the difference in opinion? there is more work, more effort on both sides between me and techdirt (admittedly mostly with techdirt, i just read), so i am willing to ignore an error.

same thing the other way. say i find some great scoop and post it on my no-name blog. i could submit it everywhere and it might get mentioned, but mostly it will be ignored. i don't have any inertia. but techdirt posts it, and suddenly it is on slashdot, on giz, etc, etc.
to further the metaphor, they have the inertia to get over the obstacles of trust and believability.

which is all really just academic. it was merely pointing out it was a good metaphor for how hard work will create something that keeps going, while a hard push could create a fad, but those don't last. so, again, techdirt has lots of inertia, which is good! it represents a lot of hard work!

## Re:

> desperation and the regulators were written specifically to prove this point. he wanted to tell the two stories with a different voice, and used bachman to do it.

If that was his point, then he missed it, because it's obvious that they're by the same person, even if you remove all of the refs to one another. It's the same voice, both times. Or maybe I'm just 'tone-deaf'?

> (inertia as in science, not inertia as in laymen speak)

Lol, I see how you were using it, but that's not how the OP used it.

Oh, thanks for the author suggestion, btw. Never heard of Jack Ketchum, but will check him out. 🙂

> so, again, techdirt has lots of inertia, which is good!

LOL, hope the crazy OP read that.

## Re:

completely unrelated: jack ketchum, wrath white, brian keene, f paul wilson, tom monteleone, tom passarella (lots more too)

## Re:

> LOL, hope the crazy OP read that.

Remember, boys and girls, ad hominem attacks are perfectly OK if they are done by someone you like against someone you don't.

Another great example of inertia: Rose, being one of the dozen or so named regular posters here, has a posse in the other 11, +/- 10 Anonymous Cowards. She knows that her inertia with the group renders her mostly immune to being called out on baseless attacks like this (calling me crazy and asking if I've taken my meds), whereas anybody else (especially someone outside the groupthink, like me) would get nailed to the wall.

Inertia is a fine way to exploit a defect in the human ability to evaluate value. You can use positive inertia to get more credit than anybody else would for doing the exact same (positive) thing, and take less of a hit when doing the exact same (negative) thing. This seems to be the way of the world. As I said, I will happily argue whether it is fair, whether it's good for organizations and societies, and what you should do about it.

Malcolm Gladwell, in *Blink*, relates that men were well-known to be better violin players than women for years. Orchestra tryouts confirmed it. Right up until they put a screen in front of the auditioners. Then, things changed.

Men's inertia gave them an advantage. I call this a defect in perception, but I doubt everyone here would. Some here would call it just a characteristic of perception, and insist that the best strategy is not to fight it, but to just accept it and deal with it. Some would even say that fighting it is detrimental to society. After all, if the goal is to maximally satisfy listeners, and listeners were more satisfied when it was a man playing and they knew it, why should we make concessions for women? Hell, maybe we should tell female violin players that if they want to compete, they should get sex-change operations, or just get another job entirely, since there are plenty of men who want to play violin that will gladly take their places.

It is one particular kind of attitude to argue that we should just get over it. I find it a little disturbing, especially when people start celebrating this "characteristic of perception," and calling anyone who suggests blind auditions a troll, but I can at least see the point.
But you cross another line entirely when you start arguing that men really do play better than women, that inertia has little or nothing to do with it, and that women should start to learn to play the violin the "manly way."

## What is Twitter, anyways??

"I can understand celebrities using it to let all their fans know what time they had a bowel movement this morning (I guess..), but other than that, it seems to be quite useless!"

You need to subscribe to better people, or at least avoid the more vapid celebrities you seem to be following… Other than Kevin Smith's brand of humour, I don't think I've ever read a tweet about someone going to the toilet, but I have gotten a lot of breaking news and useful information not immediately present elsewhere.

## Re:

> it implies something more than a quick fix to the top, say like a one hit wonder. it emphasises the work that has already been done.

Yes, you have to build momentum. But what this means is that somebody equally (or more) insightful for a day, a week, or a month, will probably be ignored. Do we celebrate this?

> but if i take my time, build up something grand, now i have a lot.

No, it's more than that. It's that everything you do from that point onward is worth more than it would be otherwise. It means you are getting lots of credit and benefit now for work you did in the past (oh wait, I thought that was a terrible, terrible thing?)

> indeed if you weren't factually correct (except that nook article which still upsets me) and insightful we would stop coming.

Really, would you? How many times would an article have to be factually incorrect for you to stop coming? How much would the content of the site have to change for you to stop coming? My guess: lots.

> think of Digg: they changed, changed in a way the users didn't like, thinking that THEIR inertia would just keep everyone onboard. it did not. sometimes the rock can be stopped.

Yes, they're still the 131st most popular site on the Internet. That rock really pulled a 180. How many months has it been since it was basically universally agreed that Digg screwed the pooch? How many Internet businesses do you think would absolutely kill to have the traffic that Digg has even after they screwed the pooch? My guess would be all but about ~130 of them.

## Dont sing it - - BING IT..

"I have probably used Google 3 times in the past year, and that was only to see if it had a better search result than BING.. "

So, you admit that you had a completely free choice to use another search engine without penalty, you just chose not to exercise that choice. You also admit that you can, without penalty, use Bing instead of Google as your primary search engine. Maybe you should read up on what a monopoly is, because that's a description of the exact opposite.

"But according to Mike, Bit-Torrent is a search engine, so there is no reason why I cannot go to a bit torrent site, and do a search on it for Techdirt, and I'm sure I will get LOTS (ie NONE) hits.. "

So… you're implying that because a site that searches for .torrent files won't pick up a site written in .html, that somehow proves it's not a search engine? Weird. You also make a rather stupid mistake in your very premise, of course – BitTorrent is a protocol, not a search engine.

Sometimes, I really do hope you're paid to write this crap. Surely nobody can be typing so many words to be so utterly wrong for free, right?
## Re:

> Remember, boys and girls, ad hominem attacks are perfectly OK if they are done by someone you like against someone you don't.

I didn't say that you're crazy because I disagree with you, or in response to your argument. I said you're crazy because you genuinely seem to be crazy and paranoid. Remember, insults in and of themselves are not automatically an ad hominem attack. Or am I not allowed to make observations even after rationally refuting your argument?

> Another great example of inertia: Rose, being one of the dozen or so named regular posters here, has a posse in the other 11, +/- 10 Anonymous Cowards. She knows that her inertia with the group renders her mostly immune to being called out on baseless attacks like this…

And yet you're still able to post this comment? How am I 'mostly immune' to your response?

> …whereas anybody else (especially someone outside the groupthink, like me) would get nailed to the wall.

This has more to do with the position of your comment than the background of the commenter. You're much more likely to be 'nailed to the wall' when your post is closer to the top, which it was. Try waiting two days and posting crazy shit at the bottom, like this almost totally OT lit sidebar that another AC and I slipped into. No one will respond, and then maybe you'll feel better.

> This seems to be the way of the world. … Malcolm Gladwell… Completely unrelated crap… Yada yada… …cross another line… …the "manly way."

See? Crazy. Not because I disagree with you (I'm neutral regarding this part of your post), but because your post really seems crazy.

I found your web site on Yahoo.

## Re:

i don't celebrate the loss of anything. but there is a very low signal to noise ratio these days in general. it is hard for anything to be filtered through. i have read comments on obscure sites that are way more insightful than paid journalists on major newspapers. was that comment lost because only a dozen people read it?

to turn it around, why shouldn't what i have done in the past make what i have done in the present worth more? (or less?)

one of the reasons i started coming to techdirt was because of the way the facts are presented along with the commentary. while there are times i haven't agreed with the commentary, the facts are there for me to draw my own conclusions. i brought up the nook article because the facts weren't presented correctly. now that is one article in a few years of reading (that i know of) that did not have a correction. seems good to me. again, if that had been the second or third article i had read, i'd probably not be back.

so, if mike decided that the facts were irrelevant and he was going to make a monster out of *insert company here*, his inertia would not carry me through that. there is a reason i do not read political blogs (admittedly i do spend a lot of time on fark politics threads, but that is different. i do that for the lulz). now, i would probably stick around with techdirt longer than i would say ars or giz, but again, i have more respect for mike (he has more inertia for me) than say jesus diaz.

i used to go to digg constantly. it was more visited by me than facebook, slashdot and fark combined. now i go maybe once a month, if that. glad they are picking back up, but they lost me. they lost a lot of people. it used to be that front page articles had 1000+ diggs; now you only need 150 or so to get there. i wasn't saying "OMGZ DIGGZ IS TEH DEADZ", i was pointing to a recent example of a group believing their inertia was enough.
a 180 lb defender can take down a 250 lb offensive player running at full speed if he knows how to hit him.

## What is Twitter, anyways??

Their main website is unappealing and uninteresting, and I seem to always get lost in it: their search tool doesn't seem to do what I thought it would do! I thought if I entered "Bill Gates", then it should lead me to Bill Gates' twitter account! (you would think) But instead, it pulled up a bunch of entries that happen to mention "Bill Gates", written by a bunch of nobodies! You would think they would at least put the official "Bill Gates" account on top of the results! I tried and I tried to find Bill Gates' twitter from their home page, but I couldn't seem to find it, and finally just gave up!

I can understand the appeal of getting minute by minute news from congressmen – actually I'm very interested in that – but I can't seem to find my way to a single twitter account, short of searching the net, going through each congressman trying to find out if they have a twitter account, getting that account name, and then finally going back and searching for that account name on Twitter, if I can get their stupid search tool to work properly!

I can certainly understand its usefulness for reporters and politicians and public figures, so maybe my opposition is mainly an innate hatred of the execution of Twitter, more than the idea itself!! I think the main website should be completely redesigned and re-organized, with some big name accounts being promoted on the main website to drag you in. The last thing I'm interested in is seeing what a few million strangers are "tweeting" about. Facebook seems to do a good job of that, along with pictures, videos, relationships.. etc. But unless it's a public figure I'm following, I still don't see the appeal.

Every time I go to the twitter website thinking "Maybe it's about time I figure this thing out", I spend some time going from one account to the next, trying to find anything interesting, and finally giving up! Quite frankly, I'm waiting for another company to take this same idea and execute it better. As far as Twitter is concerned, I'm not impressed.

But just out of curiosity, can you send me some links to some "interesting" twitter accounts that I can follow?

## What is Twitter, anyways??

lol .. I don't follow anyone on twitter, that's the point.

"breaking news and useful information not immediately present elsewhere"

How? please share your secret! Their homepage is freakin' useless!! I can't seem to find anything of value there!

## What is Twitter, anyways??

"lol .. I don't follow anyone on twitter, that's the point."

Yet, you claim to be an expert in what's posted there? Interesting.

"How? please share your secret! Their homepage is freakin' useless!! I can't seem to find anything of value there!"

I follow numerous respected news sources including the BBC and The Guardian, along with numerous other news sites. I follow various sites that use Twitter to give early warnings on system outages and bugs. I follow movie blogs that tend to have scoops hours before the mainstream press pick up on stories, and many blogs seem to get their Twitter post out before the RSS update.

Just because YOU haven't worked out how to use a tool, that doesn't make it worthless.

## What is Twitter, anyways??

"Yet, you claim to be an expert in what's posted there? Interesting."

I NEVER claimed that. If you read any of my posts, you will see that I'm claiming total ignorance on Twitter!
“I follow numerous respected news sources including …” wow, I’m very impressed. How do I find them? Send me a link or something! My post wasn’t meant to mock, so don’t get so defensive! I’m genuinely asking for a way to find interesting twitter accounts to follow, and you responded by giving me your twitter resume! ## Re: You work for google and that was sarcasm wasn’t it … 😉 ## Re: Please stop feeding the sub bridge types please … (ie don’t feed the trolls) ## I don't think so Why in the hell was this comment flagged?? ## What is Twitter, anyways?? Fair enough, “expert” wasn’t the word I meant to use, but seriously? It’s not a hard system to use, and 2 seconds in Google would get you a how-to guide such as this one: http://mashable.com/guidebook/twitter/ Sorry if I come across as a little snarky, but you are loudly complaining about not being able to understand a site that millions of other people have worked out, while sitting in front of the planet’s biggest source of information on how to do stuff. I find that annoying. As for finding useful links, it’s down to who you want to find. There’s a search box, and a directory under “find people” or “browse interests”. You can look at what people have tweeted publicly before following them, then you’ll get instant updates whenever anyone you follow sends a tweet. Once you’re following a few people you’ll see retweets and mentions of other users you might find interesting. A few of the more mainstream users I’m following: @BBCBreaking, @guardian, @BoingBoing, @lonelyplanet, @newscientist, @ThatKevinSmith, @IMDb, @eddieizzard, @edgarwright ## sandbox Also something that works wonders is backup. Make bit by bit copy of the state of the machine right after a fresh install and when everything its ok, you just make a copy and store it in a DVD-R and reinstall that every year or when problems arise, do you ever wonder how internet coffee shops maintain their machines virus free?, that is how, also get a bootable disc OS different from the OS on the machine so you can boot from that disc and inspect the files from a different OS that probably won’t be vulnerable to any virus inside that filesystem so if you ever need to salvage some files from the disk before installing something over it that is the way to do it.
true
true
true
One of the driving forces behind some of the legal attacks on Google is that Google is the de facto monopoly on being found online. We've heard over and over again a claim along the lines of…
2024-10-12 00:00:00
2010-12-23 00:00:00
https://www.techdirt.com…t-logo-white.png
article
techdirt.com
Techdirt
null
null
3,086,438
http://type.method.ac/
Kern Type
null
KERNTYPE: a letter spacing game

Keyboard shortcuts: TAB selects the next letter; SHIFT + TAB selects the previous letter; ENTER advances to the next screen; ← nudges left 1px; SHIFT + ← nudges left 10px; → nudges right 1px; SHIFT + → nudges right 10px.

Your score is given out of 100, and you can share a link to your solution. KernType is brought to you by Method of Action.
true
true
true
A game to learn how to kern type
2024-10-12 00:00:00
null
http://type.method.ac/ogimage.png
website
method.ac
type.method.ac
null
null
21,817,592
https://github.com/feross/simple-peer
GitHub - feross/simple-peer: 📡 Simple WebRTC video, voice, and data channels
Feross
We are hiring a peer-to-peer WebRTC mobile Web application expert.

DFINITY is building an exciting peer-to-peer WebRTC-based mobile Web app to help improve democracy on the Internet Computer blockchain. The mobile web app connects groups of up to four people in a peer-to-peer WebRTC audio and video call so that they can mutually prove unique personhood. We are looking for a software engineer or consultant who can help us solve (platform-dependent) reliability issues of our implementation. We are interested in applicants with substantial WebRTC experience for mobile Web apps, experience with different communication patterns (e.g., peer-to-peer, server relay), and substantial problem-solving skills. Having experience in automated testing of this type of applications is a plus. Pay is extremely competitive for the right expertise. For details, please see the full job description.

- concise, **node.js style** API for WebRTC
- **works in node and the browser!**
- supports **video/voice streams**
- supports **data channel**
  - text and binary data
  - node.js duplex stream interface
- supports advanced options like:
  - enable/disable trickle ICE candidates
  - manually set config options
  - transceivers and renegotiation

This package is used by WebTorrent and many others.

- install
- examples
- api
- events
- error codes
- connecting more than 2 peers?
- memory usage
- connection does not work on some networks?
- Who is using `simple-peer`?
- license

```
npm install simple-peer
```

This package works in the browser with browserify. If you do not use a bundler, you can use the `simplepeer.min.js` standalone script directly in a `<script>` tag. This exports a `SimplePeer` constructor on `window`. Wherever you see `Peer` in the examples below, substitute that with `SimplePeer`.

Let's create an html page that lets you manually connect two peers:

```
<html>
<body>
  <style>
    #outgoing {
      width: 600px;
      word-wrap: break-word;
      white-space: normal;
    }
  </style>
  <form>
    <textarea id="incoming"></textarea>
    <button type="submit">submit</button>
  </form>
  <pre id="outgoing"></pre>
  <script src="simplepeer.min.js"></script>
  <script>
    const p = new SimplePeer({
      initiator: location.hash === '#1',
      trickle: false
    })

    p.on('error', err => console.log('error', err))

    p.on('signal', data => {
      console.log('SIGNAL', JSON.stringify(data))
      document.querySelector('#outgoing').textContent = JSON.stringify(data)
    })

    document.querySelector('form').addEventListener('submit', ev => {
      ev.preventDefault()
      p.signal(JSON.parse(document.querySelector('#incoming').value))
    })

    p.on('connect', () => {
      console.log('CONNECT')
      p.send('whatever' + Math.random())
    })

    p.on('data', data => {
      console.log('data: ' + data)
    })
  </script>
</body>
</html>
```

Visit `index.html#1` from one browser (the initiator) and `index.html` from another browser (the receiver).

An "offer" will be generated by the initiator. Paste this into the receiver's form and hit submit. The receiver generates an "answer". Paste this into the initiator's form and hit submit.

Now you have a direct P2P connection between two browsers!

This example creates two peers **in the same web page**. In a real-world application, *you would never do this*. The sender and receiver `Peer` instances would exist in separate browsers. A "signaling server" (usually implemented with websockets) would be used to exchange signaling data between the two browsers until a peer-to-peer connection is established.
```
var Peer = require('simple-peer')

var peer1 = new Peer({ initiator: true })
var peer2 = new Peer()

peer1.on('signal', data => {
  // when peer1 has signaling data, give it to peer2 somehow
  peer2.signal(data)
})

peer2.on('signal', data => {
  // when peer2 has signaling data, give it to peer1 somehow
  peer1.signal(data)
})

peer1.on('connect', () => {
  // wait for 'connect' event before using the data channel
  peer1.send('hey peer2, how is it going?')
})

peer2.on('data', data => {
  // got a data channel message
  console.log('got a message from peer1: ' + data)
})
```

Video/voice is also super simple! In this example, peer1 sends video to peer2.

```
var Peer = require('simple-peer')

// get video/voice stream
navigator.mediaDevices.getUserMedia({
  video: true,
  audio: true
}).then(gotMedia).catch(() => {})

function gotMedia (stream) {
  var peer1 = new Peer({ initiator: true, stream: stream })
  var peer2 = new Peer()

  peer1.on('signal', data => {
    peer2.signal(data)
  })

  peer2.on('signal', data => {
    peer1.signal(data)
  })

  peer2.on('stream', stream => {
    // got remote video stream, now let's show it in a video tag
    var video = document.querySelector('video')

    if ('srcObject' in video) {
      video.srcObject = stream
    } else {
      video.src = window.URL.createObjectURL(stream) // for older browsers
    }

    video.play()
  })
}
```

For two-way video, simply pass a `stream` option into both `Peer` constructors. Simple!

Please notice that `getUserMedia` only works in pages loaded via **https**.

It is also possible to establish a data-only connection at first, and later add a video/voice stream, if desired.

```
var Peer = require('simple-peer')

// create peer without waiting for media
var peer1 = new Peer({ initiator: true }) // you don't need streams here
var peer2 = new Peer()

peer1.on('signal', data => {
  peer2.signal(data)
})

peer2.on('signal', data => {
  peer1.signal(data)
})

peer2.on('stream', stream => {
  // got remote video stream, now let's show it in a video tag
  var video = document.querySelector('video')

  if ('srcObject' in video) {
    video.srcObject = stream
  } else {
    video.src = window.URL.createObjectURL(stream) // for older browsers
  }

  video.play()
})

function addMedia (stream) {
  peer1.addStream(stream) // <- add streams to peer dynamically
}

// then, anytime later...
navigator.mediaDevices.getUserMedia({
  video: true,
  audio: true
}).then(addMedia).catch(() => {})
```

To use this library in node, pass in `opts.wrtc` as a parameter (see the constructor options):

```
var Peer = require('simple-peer')
var wrtc = require('wrtc')

var peer1 = new Peer({ initiator: true, wrtc: wrtc })
var peer2 = new Peer({ wrtc: wrtc })
```

### peer = new Peer([opts])

Create a new WebRTC peer connection.

A "data channel" for text/binary communication is always established, because it's cheap and often useful. For video/voice communication, pass the `stream` option.

If `opts` is specified, then the default options (shown below) will be overridden.
```
{
  initiator: false,
  channelConfig: {},
  channelName: '<random string>',
  config: { iceServers: [{ urls: 'stun:stun.l.google.com:19302' }, { urls: 'stun:global.stun.twilio.com:3478?transport=udp' }] },
  offerOptions: {},
  answerOptions: {},
  sdpTransform: function (sdp) { return sdp },
  stream: false,
  streams: [],
  trickle: true,
  allowHalfTrickle: false,
  wrtc: {}, // RTCPeerConnection/RTCSessionDescription/RTCIceCandidate
  objectMode: false
}
```

The options do the following:

- `initiator` - set to `true` if this is the initiating peer
- `channelConfig` - custom webrtc data channel configuration (used by `createDataChannel`)
- `channelName` - custom webrtc data channel name
- `config` - custom webrtc configuration (used by `RTCPeerConnection` constructor)
- `offerOptions` - custom offer options (used by `createOffer` method)
- `answerOptions` - custom answer options (used by `createAnswer` method)
- `sdpTransform` - function to transform the generated SDP signaling data (for advanced users)
- `stream` - if video/voice is desired, pass stream returned from `getUserMedia`
- `streams` - an array of MediaStreams returned from `getUserMedia`
- `trickle` - set to `false` to disable trickle ICE and get a single 'signal' event (slower)
- `wrtc` - custom webrtc implementation, mainly useful in node to specify in the wrtc package. Contains an object with the properties `RTCPeerConnection`, `RTCSessionDescription`, and `RTCIceCandidate`
- `objectMode` - set to `true` to create the stream in Object Mode. In this mode, incoming string data is not automatically converted to `Buffer` objects.

`peer.signal(data)`: Call this method whenever the remote peer emits a `peer.on('signal')` event. The `data` will encapsulate a webrtc offer, answer, or ice candidate. These messages help the peers to eventually establish a direct connection to each other. The contents of these strings are an implementation detail that can be ignored by the user of this module; simply pass the data from 'signal' events to the remote peer and call `peer.signal(data)` to get connected.

`peer.send(data)`: Send text/binary data to the remote peer. `data` can be any of several types: `String` , `Buffer` (see buffer), `ArrayBufferView` (`Uint8Array` , etc.), `ArrayBuffer` , or `Blob` (in browsers that support it). Note: If this method is called before the `peer.on('connect')` event has fired, then an exception will be thrown. Use `peer.write(data)` (which is inherited from the node.js duplex stream interface) if you want this data to be buffered instead.

`peer.addStream(stream)`: Add a `MediaStream` to the connection.

`peer.removeStream(stream)`: Remove a `MediaStream` from the connection.

`peer.addTrack(track, stream)`: Add a `MediaStreamTrack` to the connection. Must also pass the `MediaStream` you want to attach it to.

`peer.removeTrack(track, stream)`: Remove a `MediaStreamTrack` from the connection. Must also pass the `MediaStream` that it was attached to.

`peer.replaceTrack(oldTrack, newTrack, stream)`: Replace a `MediaStreamTrack` with another track. Must also pass the `MediaStream` that the old track was attached to.

`peer.addTransceiver(kind, init)`: Add an `RTCRtpTransceiver` to the connection. Can be used to add transceivers before adding tracks. Automatically called as necessary by `addTrack` .
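As a sketch of how the track methods fit together, the following swaps an outgoing camera track for a screen capture on a live connection. The variable names are illustrative, the signaling is elided, and `getDisplayMedia` support varies by browser:

```
var Peer = require('simple-peer')

navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then(camStream => {
  var peer = new Peer({ initiator: true, stream: camStream })
  // ... signaling ...

  // later, swap the outgoing camera track for a screen-capture track
  navigator.mediaDevices.getDisplayMedia({ video: true }).then(screenStream => {
    var oldTrack = camStream.getVideoTracks()[0]
    var newTrack = screenStream.getVideoTracks()[0]
    peer.replaceTrack(oldTrack, newTrack, camStream)
  })
})
```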
`peer.destroy([err])`: Destroy and cleanup this peer connection. If the optional `err` parameter is passed, then it will be emitted as an `'error'` event on the stream.

`Peer.WEBRTC_SUPPORT`: Detect native WebRTC support in the javascript environment.

```
var Peer = require('simple-peer')

if (Peer.WEBRTC_SUPPORT) {
  // webrtc support!
} else {
  // fallback
}
```

`Peer` objects are instances of `stream.Duplex` . They behave very similarly to a `net.Socket` from the node core `net` module. The duplex stream reads/writes to the data channel.

```
var peer = new Peer(opts)
// ... signaling ...
peer.write(Buffer.from('hey'))
peer.on('data', function (chunk) {
  console.log('got a chunk', chunk)
})
```

`Peer` objects are instances of `EventEmitter` . Take a look at the nodejs events documentation for more information. Example of removing all registered **close**-event listeners: `peer.removeAllListeners('close')`

`peer.on('signal', data => {})`: Fired when the peer wants to send signaling data to the remote peer. **It is the responsibility of the application developer (that's you!) to get this data to the other peer.** This usually entails using a websocket signaling server. This data is an `Object` , so remember to call `JSON.stringify(data)` to serialize it first. Then, simply call `peer.signal(data)` on the remote peer. (Be sure to listen to this event immediately to avoid missing it. For `initiator: true` peers, it fires right away. For `initiator: false` peers, it fires when the remote offer is received.)

`peer.on('connect', () => {})`: Fired when the peer connection and data channel are ready to use.

`peer.on('data', data => {})`: Received a message from the remote peer (via the data channel). `data` will be either a `String` or a `Buffer/Uint8Array` (see buffer).

`peer.on('stream', stream => {})`: Received a remote video stream, which can be displayed in a video tag:

```
peer.on('stream', stream => {
  var video = document.querySelector('video')

  if ('srcObject' in video) {
    video.srcObject = stream
  } else {
    video.src = window.URL.createObjectURL(stream)
  }

  video.play()
})
```

`peer.on('track', (track, stream) => {})`: Received a remote audio/video track. Streams may contain multiple tracks.

`peer.on('close', () => {})`: Called when the peer connection has closed.

`peer.on('error', (err) => {})`: Fired when a fatal error occurs. Usually, this means bad signaling data was received from the remote peer. `err` is an `Error` object.

Errors returned by the `error` event have an `err.code` property that will indicate the origin of the failure. Possible error codes:

- `ERR_WEBRTC_SUPPORT`
- `ERR_CREATE_OFFER`
- `ERR_CREATE_ANSWER`
- `ERR_SET_LOCAL_DESCRIPTION`
- `ERR_SET_REMOTE_DESCRIPTION`
- `ERR_ADD_ICE_CANDIDATE`
- `ERR_ICE_CONNECTION_FAILURE`
- `ERR_SIGNALING`
- `ERR_DATA_CHANNEL`
- `ERR_CONNECTION_FAILURE`

The simplest way to connect more than two peers is to create a full-mesh topology. That means that every peer opens a connection to every other peer. To broadcast a message, just iterate over all the peers and call `peer.send` (see the sketch below). So, say you have 3 peers. Then, when a peer wants to send some data it must send it 2 times, once to each of the other peers. So you're going to want to be a bit careful about the size of the data you send. Full mesh topologies don't scale well when the number of peers is very large. The total number of edges in the network will be `n * (n - 1) / 2`, where `n` is the number of peers.
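To make the broadcast idea concrete, here is a minimal sketch. The `peers` array and `trackPeer` helper are bookkeeping you would write yourself; only the `send` call and the events are provided by this library:

```
// illustrative mesh bookkeeping -- not part of simple-peer itself
var peers = []

function trackPeer (peer) {
  // add a peer once its data channel is ready, drop it when it closes
  peer.on('connect', function () { peers.push(peer) })
  peer.on('close', function () {
    var i = peers.indexOf(peer)
    if (i !== -1) peers.splice(i, 1)
  })
}

function broadcast (message) {
  // in a mesh of n peers, this sends the message n - 1 times
  peers.forEach(function (peer) { peer.send(message) })
}
```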
For clarity, here is the code to connect 3 peers together:

```
// These are peer1's connections to peer2 and peer3
var peer2 = new Peer({ initiator: true })
var peer3 = new Peer({ initiator: true })

peer2.on('signal', data => {
  // send this signaling data to peer2 somehow
})

peer2.on('connect', () => {
  peer2.send('hi peer2, this is peer1')
})

peer2.on('data', data => {
  console.log('got a message from peer2: ' + data)
})

peer3.on('signal', data => {
  // send this signaling data to peer3 somehow
})

peer3.on('connect', () => {
  peer3.send('hi peer3, this is peer1')
})

peer3.on('data', data => {
  console.log('got a message from peer3: ' + data)
})
```

```
// These are peer2's connections to peer1 and peer3
var peer1 = new Peer()
var peer3 = new Peer({ initiator: true })

peer1.on('signal', data => {
  // send this signaling data to peer1 somehow
})

peer1.on('connect', () => {
  peer1.send('hi peer1, this is peer2')
})

peer1.on('data', data => {
  console.log('got a message from peer1: ' + data)
})

peer3.on('signal', data => {
  // send this signaling data to peer3 somehow
})

peer3.on('connect', () => {
  peer3.send('hi peer3, this is peer2')
})

peer3.on('data', data => {
  console.log('got a message from peer3: ' + data)
})
```

```
// These are peer3's connections to peer1 and peer2
var peer1 = new Peer()
var peer2 = new Peer()

peer1.on('signal', data => {
  // send this signaling data to peer1 somehow
})

peer1.on('connect', () => {
  peer1.send('hi peer1, this is peer3')
})

peer1.on('data', data => {
  console.log('got a message from peer1: ' + data)
})

peer2.on('signal', data => {
  // send this signaling data to peer2 somehow
})

peer2.on('connect', () => {
  peer2.send('hi peer2, this is peer3')
})

peer2.on('data', data => {
  console.log('got a message from peer2: ' + data)
})
```

If you call `peer.send(buf)` , `simple-peer` is not keeping a reference to `buf` and sending the buffer at some later point in time. We immediately call `channel.send()` on the data channel. So it should be fine to mutate the buffer right afterward. However, beware that `peer.write(buf)` (a writable stream method) does not have the same contract. It will potentially buffer the data and call `channel.send()` at a future point in time, so definitely don't assume it's safe to mutate the buffer.

If a direct connection fails, in particular because of NAT traversal and/or firewalls, WebRTC ICE uses an intermediary (relay) TURN server. In other words, ICE will first use STUN with UDP to directly connect peers and, if that fails, will fall back to a TURN relay server. In order to use a TURN server, you must specify the `config` option to the `Peer` constructor. See the API docs above and the sketch below.
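For illustration, a TURN server is supplied through the standard `iceServers` list. The `turn.example.com` host and its credentials below are placeholders, not a real service:

```
var Peer = require('simple-peer')

var peer = new Peer({
  initiator: true,
  config: {
    iceServers: [
      { urls: 'stun:stun.l.google.com:19302' },
      {
        urls: 'turn:turn.example.com:3478', // placeholder TURN server
        username: 'exampleUser',            // placeholder credential
        credential: 'examplePassword'       // placeholder credential
      }
    ]
  }
})
```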
Who is using `simple-peer` ?

- WebTorrent - Streaming torrent client in the browser
- Virus Cafe - Make a friend in 2 minutes
- Instant.io - Secure, anonymous, streaming file transfer
- Zencastr - Easily record your remote podcast interviews in studio quality.
- Friends - Peer-to-peer chat powered by the web
- Socket.io-p2p - Official Socket.io P2P communication library
- ScreenCat - Screen sharing + remote collaboration app
- WebCat - P2P pipe across the web using Github private/public key for auth
- RTCCat - WebRTC netcat
- PeerNet - Peer-to-peer gossip network using randomized algorithms
- lxjs-chat - Omegle-like video chat site
- Whiteboard - P2P Whiteboard powered by WebRTC and WebTorrent
- Peer Calls - WebRTC group video calling. Create a room. Share the link.
- Netsix - Send videos to your friends using WebRTC so that they can watch them right away.
- Stealthy - Stealthy is a decentralized, end-to-end encrypted, p2p chat application.
- oorja.io - Effortless video-voice chat with realtime collaborative features. Extensible using react components 🙌
- TalktoMe - Skype alternative for audio/video conferencing based on WebRTC, but without the loss of packets.
- CDNBye - CDNBye implements WebRTC datachannel to scale live/vod video streaming by peer-to-peer network using bittorrent-like protocol
- Detox - Overlay network for distributed anonymous P2P communications entirely in the browser
- Metastream - Watch streaming media with friends.
- firepeer - secure signalling and authentication using firebase realtime database
- Genet - Fat-tree overlay to scale the number of concurrent WebRTC connections to a single source (paper).
- WebRTC Connection Testing - Quickly test direct connectivity between all pairs of participants (demo).
- Firstdate.co - Online video dating for actually meeting people and not just messaging them
- TensorChat - It's simple - Create. Share. Chat.
- On/Office - View your desktop in a WebVR-powered environment
- Cyph - Cryptographically secure messaging and social networking service, providing an extreme level of privacy combined with best-in-class ease of use
- Ciphora - A peer-to-peer end-to-end encrypted messaging chat app.
- Whisthub - Online card game Color Whist with the possibility to start a video chat while playing.
- Brie.fi/ng - Secure anonymous video chat
- Peer.School - Simple virtual classroom starting from the 1st class including video chat and real time whiteboard
- FileFire - Transfer large files and folders at high speed without size limits.
- safeShare - Transfer files easily with text and voice communication.
- CubeChat - Party in 3D 🎉
- Homely School - A virtual schooling system
- AnyDrop - Cross-platform AirDrop alternative with an Android app available at Google Play
- Share-Anywhere - Cross-platform file transfer
- QuaranTime.io - The Activity board-game in video!
- Trango - Cross-platform calling and file sharing solution.
- P2PT - Use WebTorrent trackers as signalling servers for making WebRTC connections
- Dots - Online multiplayer Dots & Boxes game. Play Here!
- simple-peer-files - A simple library to easily transfer files over WebRTC. Has a feature to resume file transfer after uploader interruption.
- WebDrop.Space - Share files and messages across devices. Cross-platform, no installation alternative to AirDrop, Xender. Source Code
- Speakrandom - Voice-chat social network using simple-peer to create audio conferences!
- Deskreen - A desktop app that helps you to turn any device into a secondary screen for your computer. It uses simple-peer for sharing entire computer screen to any device with a web browser.
- *Your app here! - send a PR!*

MIT. Copyright (c) Feross Aboukhadijeh.
true
true
true
📡 Simple WebRTC video, voice, and data channels. Contribute to feross/simple-peer development by creating an account on GitHub.
2024-10-12 00:00:00
2014-06-26 00:00:00
https://opengraph.githubassets.com/7adb2eafbe69f57325dc31b9d289d2f8aad3bd5b1efc8771be7797298f307adc/feross/simple-peer
object
github.com
GitHub
null
null
9,098,976
http://techcrunch.com/2015/02/23/snowden-does-reddit/
Snowden Does Reddit | TechCrunch
Alex Wilhelm
Edward Snowden, who you might have heard of by now, took to Reddit today along with journalists Glenn Greenwald and Laura Poitras. Poitras won the Academy Award for Best Documentary Feature last night. Poitras’ winning film, CITIZENFOUR, covers when Greenwald, the filmmaker, and Snowden were together in Hong Kong, right before the documents were leaked and the world changed. I’m no film critic, but I can understand why the film won the award — it’s a raw look at a moment in history that has proven to be geopolitically pivotal, leading to change at the level of nations and multinational corporations. The Reddit session is much of what you would expect — you can read the full episode here — but there is one Snowden answer I think is worth highlighting in response to a question concerning how to bring domestic surveillance back to the fore of discussion, and perhaps to make it into an issue for the 2016 presidential election. Here’s Snowden, at full length: This is a good question, and there are some good traditional answers here. Organizing is important. Activism is important. At the same time, we should remember that governments don’t often reform themselves. One of the arguments in a book I read recently (Bruce Schneier, “Data and Goliath”), is that perfect enforcement of the law sounds like a good thing, but that may not always be the case. The end of crime sounds pretty compelling, right, so how can that be? Well, when we look back on history, the progress of Western civilization and human rights is actually founded on the violation of law. America was of course born out of a violent revolution that was an outrageous treason against the crown and established order of the day. History shows that the righting of historical wrongs is often born from acts of unrepentant criminality. Slavery. The protection of persecuted Jews. But even on less extremist topics, we can find similar examples. How about the prohibition of alcohol? Gay marriage? Marijuana? Where would we be today if the government, enjoying powers of perfect surveillance and enforcement, had — entirely within the law — rounded up, imprisoned, and shamed all of these lawbreakers? Ultimately, if people lose their willingness to recognize that there are times in our history when legality becomes distinct from morality, we aren’t just ceding control of our rights to government, but our agency in determing thour [sic] futures. How does this relate to politics? Well, I suspect that governments today are more concerned with the loss of their ability to control and regulate the behavior of their citizens than they are with their citizens’ discontent. How do we make that work for us? We can devise means, through the application and sophistication of science, to remind governments that if they will not be responsible stewards of our rights, we the people will implement systems that provide for a means of not just enforcing our rights, but removing from governments the ability to interfere with those rights. You can see the beginnings of this dynamic today in the statements of government officials complaining about the adoption of encryption by major technology providers.
The idea here isn’t to fling ourselves into anarchy and do away with government, but to remind the government that there must always be a balance of power between the governing and the governed, and that as the progress of science increasingly empowers communities and individuals, there will be more and more areas of our lives where — if government insists on behaving poorly and with a callous disregard for the citizen — we can find ways to reduce or remove their powers on a new — and permanent — basis. Our rights are not granted by governments. They are inherent to our nature. But it’s entirely the opposite for governments: their privileges are precisely equal to only those which we suffer them to enjoy. We haven’t had to think about that much in the last few decades because quality of life has been increasing across almost all measures in a significant way, and that has led to a comfortable complacency. But here and there throughout history, we’ll occasionally come across these periods where governments think more about what they “can” do rather than what they “should” do, and what is lawful will become increasingly distinct from what is moral. In such times, we’d do well to remember that at the end of the day, the law doesn’t defend us; we defend the law. And when it becomes contrary to our morals, we have both the right and the responsibility to rebalance it toward just ends. Shorter Snowden: Governments don’t reform themselves, so we’ll have to do it with technology; our rights are inherent, and if we have no option other than direct action to enforce change, we might have to push. The exchange led to one of the single best exchanges that I have ever seen on Reddit (preserved in the original post as a screenshot, followed by an animated gif). Snowden, you’ll note, was careful not to specify how we might go about a “rebalance” of the power of government and the rights of the private citizenry. If he had, it would have been instant fodder for the Right to claim that all along, despite all evidence to the contrary, Snowden has indeed been an activist for intra-national regime change. A coup, in other words. Instead, Snowden points out that sometimes you will not be handed the change you are looking for. Answering the same question, Greenwald made a salient point about the hydra that is consensus, a word that has become oddly enshrined in our modern political vernacular as a Good [Edited for length]: The key tactic DC uses to make uncomfortable issues disappear is bipartisan consensus. When the leadership of both parties join together – as they so often do, despite the myths to the contrary – those issues disappear from mainstream public debate. […] The problem is that the leadership of both parties, as usual, are in full agreement: they love NSA mass surveillance. So that has blocked it from receiving more debate. That NSA program was ultimately saved by the unholy trinity of Obama, Nancy Pelosi and John Boehner, who worked together to defeat the Amash/Conyers bill. […] That’s why the Dem efforts to hand Hillary Clinton the nomination without contest are so depressing. She’s the ultimate guardian of bipartisan status quo corruption, and no debate will happen if she’s the nominee against some standard Romney/Bush-type GOP candidate. Some genuine dissenting force is crucial. Finally, Poitras made an excellent point concerning her status as a journalist, and the different work of the filmmaker: Thanks for the kind words. I definitely consider myself a journalist, as well as an artist and a filmmaker.
In my mind, it’s not a question about whether I am one or the other. Documentary films needs [sic] to do more than journalism – they need to communicate something that is more universal. Poitras went on to note that not only does she have more Snowden footage that she may release, but that she shot an interview with Wikileaks’ Julian Assange that she “realized in the edit room was a separate film.” Most recently in the NSA leak saga, NSA documents sourced from Snowden and published by Greenwald showed that the NSA had compromised the security of a SIM card company and stolen encryption keys, rendering the security of perhaps billions of phones partially moot.
true
true
true
Edward Snowden, who you might have heard of by now, took to Reddit today along with journalists Glenn Greenwald and Laura Poitras. Poitras won an Oscar last night for the Academy Award for Best Documentary Feature. Poitras' winning film, CITIZENFOUR, covers when Greenwald, the filmmaker, and Snowden were together in Hong Kong, right before the documents were leaked and the world changed.
2024-10-12 00:00:00
2015-02-23 00:00:00
https://techcrunch.com/w…t-3-27-19-pm.png
article
techcrunch.com
TechCrunch
null
null
17,844,572
https://hbr.org/2018/07/do-your-employees-feel-respected
Do Your Employees Feel Respected?
Kristie Rogers
#### In Brief

##### The Deficit

A respectful workplace brings enormous benefits to organizations, but efforts to provide one often fall short. That’s partly because leaders have an incomplete understanding of respect.

##### The Fix

Research shows that employees value two distinct types of respect. *Owed respect* is accorded equally to all members of a work group or an organization. *Earned respect* recognizes individuals who display valued qualities or behaviors and acknowledges that each employee has specific strengths and talents.

##### The Idea in Practice

At Televerde, a technology-focused B2B marketing firm staffed by female prison inmates, regular displays of owed and earned respect have created an extraordinarily engaged workforce responsible for impressive profitability and growth. And recidivism among Televerde’s inmate employees is 80% lower than the national rate.

When you ask workers what matters most to them, feeling respected by superiors often tops the list. In a recent survey by Georgetown University’s Christine Porath of nearly 20,000 employees worldwide, respondents ranked respect as the most important leadership behavior. Yet employees report more disrespectful and uncivil behavior each year.
true
true
true
Lessons from a firm staffed by prison inmates.
2024-10-12 00:00:00
2018-07-01 00:00:00
https://hbr.org/resource…1804C_WILSON.png
article
hbr.org
Harvard Business Review
null
null
26,769,076
http://grammarly.com/blog/engineering/running-lisp-in-production/
Running Lisp in Production
Vsevolod Dyomkin
At Grammarly, the foundation of our business, our core grammar engine, is written in Common Lisp. It currently processes more than a thousand sentences per second, is horizontally scalable, and has reliably served in production for almost three years. We noticed that there are very few, if any, accounts of how to deploy Lisp software to modern cloud infrastructure, so we thought that it would be a good idea to share our experience. The Lisp runtime and programming environment provides several unique—albeit obscure—capabilities to support production systems (for the impatient, they are described in the final chapter). ## Wut Lisp?!! Contrary to popular opinion, Lisp is an incredibly practical language for building production systems. There are, in fact, many Lisp systems out there: When you search for an airline ticket on Hipmunk or take a Tube train in London, Lisp programs are being called. Our Lisp services are conceptually a classical AI application that operates on huge piles of knowledge created by linguists and researchers. It’s mostly a CPU-bound program, and it is one of the biggest consumers of computing resources in our network. We run these services on stock Linux images deployed to AWS. We use SBCL for production deployment and CCL on most of the developers’ machines. One of the nice things about Lisp is that you have an option of choosing from several mature implementations with different strengths and weaknesses: In our case, we optimized for processing speed on the server and for compilation speed in the dev environment (the reason this is critical for us is described in the later section). ## A stranger in a strange land At Grammarly, we use many programming languages for developing our services: In addition to JVM languages and JavaScript, we also develop in Erlang, Python, and Go. Proper service encapsulation enables us to use whatever language and platform makes the most sense. There is a cost to maintenance, but we value choice and freedom over rules and processes. We also try to rely on simple language-agnostic infrastructure tools. This approach spares us a lot of trouble integrating this zoo of technologies in our platform. For instance, StatsD is a great example of an amazingly simple and useful service that is extremely easy to use. Another one is Graylog2; it provides a brilliant specification for logging, and although there was no ready-made library for working with it from CL, it was really easy to assemble from the building blocks already available in the Lisp ecosystem. This is all the code that was needed (and most of it is just “word-by-word” translation of the spec): ``` (defun graylog (message &key level backtrace file line-no) (let ((msg (salza2:compress-data (babel:string-to-octets (json:encode-json-to-string #{ :version "1.0" :facility "lisp" :host *hostname* :|short_message| message :|full_message| backtrace :timestamp (local-time:timestamp-to-unix (local-time:now)) :level level :file file :line line-no }) :encoding :utf-8) 'salza2:zlib-compressor))) (usocket:socket-send (usocket:socket-connect *graylog-host* *graylog-port* :protocol :datagram :element-type '(unsigned-byte 8)) msg (length msg)))) ``` One of the common complaints about Lisp is that there are no libraries in the ecosystem. As you see, five libraries are used just in this example for such things as encoding, compression, getting Unix time, and socket connections. Lisp libraries indeed exist, but like all library integrations, we have challenges with them as well. 
For instance, to plug into the Jenkins CI system, we had to use xUnit, and it was not so straightforward to find the spec for it. Fortunately, this obscure Stack Overflow question helped—we ended up having to build this into our own testing library should-test. Another example is using HDF5 for machine learning models exchange: It took us some work to adapt the low-level HDF5-cffi library to our use case, but we had to spend much more time upgrading our AMIs to support the current version of the C library. Another principle that we try to follow in Grammarly platform is maximal decoupling of different services to ensure horizontal scalability and operational independence. This way, we do not need to interact with databases in the critical paths in our core services. We do, however, use MySQL, Postgres, Redis, and Mongo, for internal storage, and we’ve successfully used CLSQL, postmodern, cl-redis, and cl-mongo to access them from the Lisp side. We rely on Quicklisp for managing external dependencies and a simple system of bundling library source code with the project for our internal libraries or forks. The Quicklisp repository hosts more than a thousand Lisp libraries—not a mind-blowing number, but quite enough for covering all of our production needs. For deployment into production, we use a universal stack: The application is tested and bundled by Jenkins, put on the servers by Rundeck, and run there as a regular Unix process by Upstart. Overall, the problems we face with integrating a Lisp app into the cloud world are not radically different from the ones we encounter with many other technologies. If you want to use Lisp in production—and to experience the joy of writing Lisp code—there is no valid technical reason not to! ## The hardest bug I’ve ever debugged As ideal as this story is so far, it has not been all rainbows and unicorns. We’ve built an esoteric application (even by Lisp standards), and in the process have hit some limits of our platform. One unexpected thing was heap exhaustion during compilation. We rely heavily on macros, and some of the largest ones expand into thousands of lines of low-level code. It turned out that the SBCL compiler implements a lot of optimizations that allow us to enjoy quite fast generated code, but some of them require exponential time and memory resources. Unfortunately, there’s no way to influence that by turning them off or tuning somehow. However, there exists a well-known general solution, `call-with-* style` , in which you trade off a little performance for better modularity (which turned out crucial for our use case) and debuggability. Less surprising than compiler taming, we have spent some time with GC tuning to improve the latency and resource utilization in our system. SBCL provides a decent generational garbage collector, although the system is not nearly as sophisticated as in the JVM. We had to tune the generation sizes, and it turned out that the best option was to use an oversize heap: Our application consumes 2–4 gigabytes of memory but we run it with 25G heap size, which automatically results in a huge volume for the nursery. Yet another customization we had to make—a much less obvious one—was to run full GC programmatically every N minutes. With a large heap, we have noticed a gradual memory usage buildup over periods of tens of minutes, which resulted in spans of more time spent in GC and decreased application throughput. Our periodic GC approach got the system into a much more stable state with almost constant memory usage. 
On the left, you can see how an untuned system performs; on the right, the effect of periodic collection. Of all these challenges, the worst bug I’ve ever seen was a network bug. As usual with such stories, it was not a bug in the application but a problem in the underlying platform (this time, SBCL). And, moreover, I was bitten by it twice in two different services. But the first time I couldn’t figure it out, so I had to develop a workaround. As we were just beginning to run our service under substantial load in production, after some period of normal operation all of the servers would suddenly start to slow down and then would become unresponsive. After much investigation centering on our input data, we discovered that the problem was instead a race condition in low-level SBCL network code, specifically in the way the socket function `getprotobyname` , which is non-reentrant, was called. It was quite an unlikely race, so it manifested itself only in the high-load network service setup when this function was called tens of thousands of times. It knocked off one worker thread after another, eventually rendering the service comatose. Here’s the fix we settled on; unfortunately, it can’t be used in a broader context as a library. (The bug was reported to SBCL maintainers, and there was a fix there as well, but we are still running with this hack, just to be sure :).

```
#+unix
(defun sb-bsd-sockets:get-protocol-by-name (name)
  (case (mkeyw name)
    (:tcp 6)
    (:udp 17)))
```

## Back to the future

Common Lisp systems implement a lot of the ideas of the venerable Lisp machines. One of the most prominent ones is the SLIME interactive environment. While the industry waits for LightTable and similar tools to mature, Lisp programmers have been silently and haughtily enjoying such capabilities with SLIME for many years. Witness the power of this fully armed and operational battle station in action. But SLIME is not just a Lisp’s take on an IDE. Being a client-server application, it allows you to run its back-end on the remote machine and connect to it from your local Emacs (or Vim, if you must, with SLIMV). Java programmers can think of JConsole, but here you’re not constrained by the predefined set of operations and can do any kind of introspection and changes you want. We could not have debugged the socket race condition without this functionality. Furthermore, the remote console is not the only useful tool provided by SLIME. Like many IDEs, it has a jump-to-source function, but unlike Java or Python, I have SBCL’s source code on my machine, so I often consult the implementation’s sources, and this helps understand what’s going on much better. For the socket bug case, this was also an important part of the debugging process. Finally, another super-useful introspection and debugging tool we use is Lisp’s TRACE facility. It has completely changed my approach to debugging—from tedious local stepping to exploring the bigger picture. It was also instrumental in nailing our nasty bug. With `trace` , you define a function to trace, run the code, and Lisp prints all calls to that function with arguments and all of its returns with results. It is somewhat similar to `stacktrace` , but you don’t get to see the whole stack and you dynamically get a stream of traces, which doesn’t stop the application. `trace` is like `print` on steroids; it allows you to quickly get into the inner workings of arbitrarily complex code and monitor complicated flows. The only shortcoming is that you can’t trace macros.
Here’s a snippet of tracing I did just today to ensure that a JSON request to one of our services is properly formatted and returns an expected result:

```
0: (GET-DEPS ("you think that's bad, hehe, i remember once i had an old 100MHZ dell unit i was using as a server in my room"))
1: (JSON:ENCODE-JSON-TO-STRING #<HASH-TABLE :TEST EQL :COUNT 2 {1037DD9383}>)
2: (JSON:ENCODE-JSON-TO-STRING "action")
2: JSON:ENCODE-JSON-TO-STRING returned ""action""
2: (JSON:ENCODE-JSON-TO-STRING "sentences")
2: JSON:ENCODE-JSON-TO-STRING returned ""sentences""
1: JSON:ENCODE-JSON-TO-STRING returned "{"action":"deps","sentences":["you think that's bad, hehe, i remember once i had an old 100MHZ dell unit i was using as a server in my room"]}"
0: GET-DEPS returned ((("nsubj" 1 0) ("ccomp" 9 1) ("nsubj" 3 2) ("ccomp" 1 3) ("acomp" 3 4) ("punct" 9 5) ("intj" 9 6) ("punct" 9 7) ("nsubj" 9 8) ("root" -1 9) ("advmod" 9 10) ("nsubj" 12 11) ("ccomp" 9 12) ("det" 17 13) ("amod" 17 14) ("nn" 16 15) ("nn" 17 16) ("dobj" 12 17) ("nsubj" 20 18) ("aux" 20 19) ("rcmod" 17 20) ("prep" 20 21) ("det" 23 22) ("pobj" 21 23) ("prep" 23 24) ("poss" 26 25) ("pobj" 24 26))) ((<you 0,3> <think 4,9> <that 10,14> <'s 14,16> <bad 17,20> <, 20,21> <hehe 22,26> <, 26,27> <i 28,29> <remember 30,38> <once 39,43> <i 44,45> <had 46,49> <an 50,52> <old 53,56> <100MHZ 57,63> <dell 64,68> <unit 69,73> <i 74,75> <was 76,79> <using 80,85> <as 86,88> <a 89,90> <server 91,97> <in 98,100> <my 101,103> <room 104,108>))
```

So to debug our nasty socket bug, I had to dig deep into the SBCL network code and study the functions being called, then connect via SLIME to the failing server and try tracing one function after another. And when I got a call that didn’t return, that was it. Finally, consulting `man` to find out that this function isn’t re-entrant, and encountering some references to that in SBCL’s source code comments, allowed me to verify this hypothesis. That said, Lisp proved to be a remarkably reliable platform for one of our most critical projects. It is quite fit for the common requirements of modern cloud infrastructure, and although this stack is not very well-known and popular, it has its own strong points—you just have to learn how to use them. Not to mention the power of the Lisp approach to solving difficult problems—which is why we love it. But that’s a whole different story, for another time.
true
true
true
At Grammarly, the foundation of our business, our core grammar engine, is written in Common Lisp. It currently processes more than a thousand sentences per…
2024-10-12 00:00:00
2015-06-26 00:00:00
null
article
grammarly.com
Grammarly Engineering Blog
null
null
1,108,692
http://gregosuri.com/how-facebook-uses-erlang-for-real-time-chat
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
19,724,136
https://zoekeating.tumblr.com/post/184071393729/songwriters-you-have-a-choice
Songwriters, you have a choice
Zoekeating
### Songwriters, you have a choice

*The last few years, I’ve taken a bit of a break from Thought Leading. I’ve had my own personal struggles and my thought leading has been directed at overcoming the disruption wrought upon life by my husband’s cancer and death. I don’t mean to bring up my personal tragedy to get your attention but I do feel the need to explain why I haven’t been as active an artists’ advocate as I once was.* *However, important things are happening in the world of music royalties and I can’t sit on the sidelines.* *—* If you make music for a living, you might have been aware of the passage of the Music Modernization Act last year. The law sets up a non-profit entity called the Mechanical Licensing Collective (MLC) to issue blanket licenses to streaming services and to collect and pay the owners of songs. **If you are a self-published, DIY songwriter, that is you.** Much like SoundExchange collects and distributes your digital performance royalties, this new MLC will collect and distribute your digital mechanical royalties. This stuff, and mechanical royalties in particular, can be mind-numbingly boring but believe me, if you make a living off your songs, you need to pay attention. Your royalties are at stake and you have a short window of time to act. Two groups have submitted proposals to the copyright office to run the MLC: the National Music Publishers Association (NMPA) and the American Music Licensing Collective (AMLC). **I have joined the board of the AMLC because I believe they will get mechanical royalties to the songwriters who earned them.** There is a pot of an estimated $1.2 billion in unmatched mechanical royalties that have yet to be paid to the people who earned them. The streaming services were required to pay the royalties, not to match them. Making a system for connecting songs to owners and getting these black-box royalties to the people who earned them will be primary tasks of the new MLC. **Why should DIY songwriters care?** Millions of songs are recorded every year and the vast majority of them are by “self-published” songwriters and composers like me. We control our own copyrights and are not represented by the major music publishers in the NMPA. We are the ones who will rely on the MLC to get us royalties that in many cases, we haven’t been paid before. I would bet my favorite pair of shoes that self-published songwriters like me wrote the songs that generated that pot of royalties. The music publishers in the NMPA have direct deals with the streaming services. They have been collecting their royalties and will continue to do so without help of the MLC. This is the part that worries me: written into the law, and in fact lobbied for by the NMPA, is language that indicates board members of the MLC are able to recommend the pot of unmatched royalties be liquidated and distributed to themselves by market share. This gaping hole in the law should make all DIY songwriters sit up and pay attention. The board of the MLC will get to say what happens to that estimated billion dollars and to all unmatched royalties going forward. The publishers in the NMPA will not use the MLC yet they can recommend liquidating the pot of unmatched royalties and distributing it to themselves? Will they have any incentive to do the work required to match these royalties to the songwriters who should get it?
Without question, the AMLC has the least conflict of interest, the best technology proposal and the least incentive to recommend directing other people’s royalties to themselves, not to mention their budget is a fraction of the one proposed by the NMPA. There are other things too. The AMLC doesn’t aim to make a single corporate-controlled database containing information about every composition in the world, which the NMPA does. I think we have experienced enough corporate consolidation of data, thank you very much. Instead, the AMLC’s proposal is for a decentralized network that pulls together data from the 100+ global music rights organizations and will use dynamic indexing, normalization and intelligent matching algorithms to connect songs with owners. **I trust the AMLC to get me my mechanical royalties.** If you are a songwriter, **you have only until April 22** to tell the Register of Copyrights which group you think should handle your mechanical royalties. **Click here** to make a comment with the copyright office. **Want to learn more?** Tomorrow, April 10 at 5:30pm Central time, the AMLC is holding a town hall. I’ll be there by video conference. You can join on your phone or computer and ask questions by going here: **https://zoom.us/j/188377751** *—* *To survive in this era as an artist you have to maximize all possible revenue streams: live performances, sync licensing, subscriptions, merchandise, performance royalties, sound recording royalties, mechanicals. It takes some work to collect all the pieces of your royalty pie. Someday, I hope those of us who own our copyrights will be able to enter all our information once instead of many, many times in many, many places.* *Imagine being able to identify yourself, your songs, your percentage ownership if you collaborated with someone, and then imagine collecting all the royalties — for the performance, for the recording, for the composition — without having to pay a hefty percentage for the privilege?* *That won’t be happening anytime soon. The royalty collection systems are complex and like other complex systems, many parties benefit from that complexity (healthcare anyone?).*
true
true
true
The last few years, I’ve taken a bit of a break from Thought Leading. I’ve had my own personal struggles and my thought leading has been directed at overcoming the disruption wrought upon life and by...
2024-10-12 00:00:00
2019-04-09 00:00:00
null
article
tumblr.com
Tumblr
null
null
27,525,719
https://www.meetup.com/Kotlin-Mumbai/events/278621237/
Login to Meetup | Meetup
null
true
true
true
Not a Meetup member yet? Log in and find groups that host online or in person events and meet people in your local community who share your interests.
2024-10-12 00:00:00
2024-01-01 00:00:00
https://secure.meetupsta…/meetup-logo.jpg
article
meetup.com
Meetup
null
null
7,287,804
http://www.cnet.com.au/microsoft-supportive-of-nokia-using-android-339346695.htm
CNET: Product reviews, advice, how-tos and the latest news
Jon Reed
true
true
true
Get full-length product reviews, the latest news, tech coverage, daily deals, and category deep dives from CNET experts worldwide.
2024-10-12 00:00:00
2024-10-12 00:00:00
https://www.cnet.com/a/i…t=675&width=1200
website
cnet.com
CNET
null
null
5,408,292
http://www.freshtilledsoil.com/webrtc-video-chat-demo-between-nexus-tablet-and-macbook-air/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,200,850
https://www.reuters.com/article/us-usa-huawei-tech-license-idUSKBN22R38V
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
38,337,844
https://www.cmaps.io/
Note-taking as it should be
null
Seamlessly organize your life, ideas, thoughts, and to-do lists by using Nodeland as a mind map generator. Visualize your information in a connected graph for better clarity and management. Automatically generate comprehensive maps on any topic, and fill or explain any node within your graph. It makes it easier to understand, expand, and retain information. Keep your documents at your fingertips. You can access and edit your content from any device, ensuring you’re always in sync and productive. Simplify and visualize complex information by breaking down lengthy documents into dynamic, interactive mind maps. The nodeland platform emerged from extensive research and development in the field of education, recognizing that human thinking is inherently non-linear and associative. We often make connections between different pieces of information without even realizing it. At nodeland, we believe that the best way to learn is by making these connections explicit and visible. Unlike traditional linear note-taking apps like Notion and Evernote, nodeland introduces a revolutionary connection-based model. Everything you write is seamlessly integrated into a dynamic mind map, fostering a more natural and effective learning process. Our platform's standout feature is the AI-powered assistant, tailored to enhance your learning journey. Struggling with a topic? Ask your AI assistant for detailed textual explanations. Unsure where to start studying? Let the AI curate a comprehensive concept map, guiding you through the material in a personalized way. For the hands-on learners, we invite you to explore our range of mind map examples and see firsthand how nodeland can improve your note-taking experience.
true
true
true
Improve your thinking, learning and organization by connecting everything you write.
2024-10-12 00:00:00
2024-01-01 00:00:00
/logo_gradient.svg
null
nodeland.io
Nodeland
null
null
19,170,540
https://getpublii.com/blog/speed-up-startup-to-render.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
40,102,665
https://www.nytimes.com/2007/05/08/health/08fat.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,899,201
https://medium.com/sov-global/sov-2018-summary-and-2019-plans-73c13ec851fc
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,128,495
http://www.nytimes.com/interactive/2014/01/24/opinion/food-chains-extra-virgin-suicide.html?_r=0
Opinion | Extra Virgin Suicide (Published 2014)
Nicholas Blechman; NICHOLAS BLECHMAN
Much of the oil sold as Italian olive oil does not come from Italy, but from countries like Spain, Morocco and Tunisia. After being picked, the olives are driven to a mill … where they are cleaned, crushed and pressed. The oil is then pumped into a tanker truck … and shipped to Italy, the world’s largest importer of olive oil. Meanwhile, shipments of soybean oil or other cheap oils are labeled olive oil, and smuggled into the same port. At some refineries, the olive oil is cut with cheaper oil. Other refineries are even worse. They mix vegetable oils with beta-carotene, to disguise the flavor, and chlorophyll for coloring, to produce fake olive oil. Bottles are labeled “Extra Virgin” and branded with “Packed in Italy” or “Imported from Italy.” (Oddly, this is legal, even if the oil does not come from Italy — although the source countries are supposed to be listed on the label.) The “olive oil” is shipped around the world, to countries like the U.S., where one study found that 69 percent of imported olive oil labeled “extra virgin” did not meet, in an expert taste and smell test, the standard for that label. To combat fraud, a special branch of the Italian Carabinieri is trained to detect bad oil. Lab tests are easy to fake, so instead the police rely on smell. Police officers regularly raid refineries in an attempt to regulate the industry. But producers — many of whom have connections to powerful politicians — are rarely prosecuted. All this fraud, however, has created a drop in olive oil prices. Corrupt producers have undermined themselves, effectively committing economic suicide. The End Nicholas Blechman is an illustrator and the art director of the New York Times Book Review. An earlier version of this graphic contained several errors. Olives that are used in substandard oil are typically taken to mills days, weeks or even months after being picked — not “within hours.” The graphic conflated two dubious practices that can be found in parts of the olive oil industry. Some producers mix olive oil with soybean or other cheap oils, while others mix vegetable oils with beta carotene and chlorophyll to produce fake olive oil; the two practices are not usually combined. Olive oil bottled in Italy and sold in the United States may be labeled “packed in Italy” or “imported from Italy” — not “produced in Italy” — even if the oil does not come from Italy. (However, the source countries are supposed to be listed on the label.) A 2010 study by researchers at the University of California, Davis, found that 69 percent of imported olive oil labeled “extra virgin” did not meet, in an expert taste and smell test, the standard for that label. The study suggested that the substandard samples had been oxidized; had been adulterated with cheaper refined olive oil; or were of poor quality because they were made from damaged or overripe olives, or olives that had been improperly stored or processed — or some combination of these flaws. It did not conclude that 69 percent of olive oil for sale in the United States was doctored. Finally, the graphic incorrectly cited Tom Mueller, who runs the blog Truth in Olive Oil, as the source of the information. While Mr. Mueller’s blog and other writings were consulted in preparation of the graphic, several of his findings were misinterpreted.
true
true
true
The adulteration of Italian olive oil.
2024-10-12 00:00:00
2014-01-24 00:00:00
https://static01.nyt.com…826&k=ZQJBKqZ0VN
article
nytimes.com
The New York Times
null
null
24,615,077
https://www.cnbc.com/2020/09/28/uber-granted-temporary-london-license.html
Uber granted 18-month London license as judge overturns ban
Ryan Browne
LONDON — Uber won its legal fight to continue operating in London on Monday, as a judge overturned a ban on the ride-hailing app by the city's transport regulator and granted it a new 18-month license. Last year, Transport for London (TfL) stripped Uber of its license for a second time — it first declined to renew Uber's London license in 2017 — citing a "pattern of failures" that had put passengers at risk. The watchdog said a glitch in Uber's systems allowed unauthorized drivers to upload their photos to other driver accounts and fraudulently pick up passengers in at least 14,000 journeys. Handing down his decision at the Westminster Magistrates' Court on Monday, Judge Tan Ikram said he had "sufficient confidence" Uber "no longer poses a risk to public safety." "Despite their historical failings, I find (Uber), now, to be a fit and proper person to hold a London PHV (private hire vehicle) operator's licence," Ikram said in his judgement. Uber's new London license will last 18 months and comes with several conditions jointly agreed by Uber and TfL. Uber shares climbed 6% in premarket trading following the decision, but later settled to trade 3% higher. "This decision is a recognition of Uber's commitment to safety and we will continue to work constructively with TfL," said Jamie Heywood, regional general manager for Northern and Eastern Europe. "There is nothing more important than the safety of the people who use the Uber app as we work together to keep London moving." Uber had tried to allay the regulator's passenger safety concerns, introducing a new system in April to verify drivers' identities through a mix of facial recognition and human reviewers. Despite losing its license, the company was still able to operate in London as it appealed the ban. London is Uber's largest market by far in Europe. The company has racked up around 3.5 million users and 45,000 drivers in the U.K. capital since launching there in 2012. Uber is the city's top ride-hailing player but faces heavy competition from several new operators including India's Ola, Estonia's Bolt and Germany's Free Now. "We fully support the initial action taken by Transport for London and the high standard to which TfL holds all ride-hailing and taxi firms in the capital," said Mariusz Zabrocki, UK general manager at Free Now. "At the same time, we welcome Uber back among 'fit and proper' operators. Regardless of the decision taken in this specific case, we hope the process has sent a clear and impactful message to all operators – that cutting corners and potential endangerment of drivers and passengers will not be tolerated in London." ## 'A disaster for London' Monday's ruling removes a key source of regulatory uncertainty for Uber. But the firm still faces a number of legal battles around the world. In California, Uber is fighting a lawsuit that would see its drivers reclassified as employees. It is also fighting a similar case in the U.K.'s Supreme Court, where drivers want to be treated as workers entitled to protections like a minimum wage and holiday pay. A loss for Uber would hold significant consequences for the so-called gig economy. A verdict is expected later this year. The decision to restore Uber's London license drew a fierce reaction from the city's iconic black cab industry, which has frequently clashed with Uber over regulation and competitive fares. "Today's decision is a disaster for London," said Steve McNamara, general secretary of the Licensed Taxi Drivers' Association. 
"Uber has demonstrated time and time again that it simply can't be trusted to put the safety of Londoners, its drivers and other road users above profit. Sadly, it seems that Uber is too big to regulate effectively, but too big to fail."
true
true
true
A judge on Monday found Uber "fit and proper" to hold a London operator's license, despite what he called "historical failings."
2024-10-12 00:00:00
2020-09-28 00:00:00
https://image.cnbcfm.com…45&w=1920&h=1080
article
cnbc.com
CNBC
null
null
22,019,417
https://aeon.co/ideas/ghosts-visions-and-near-death-experiences-can-be-therapeutic
Ghosts, visions and near-death experiences can be therapeutic | Aeon Ideas
Andreas Sommer
*Photo by JR Korpa/Unsplash*

‘If the *fruits for life* of the state of conversion are good, we ought to idealise and venerate it, even though it be a piece of natural psychology; if not, we ought to make short work with it, no matter what supernatural being may have infused it.’ From *The Varieties of Religious Experience* (1902) by William James

There is a long tradition of scientists and other intellectuals in the West being casually dismissive of people’s spiritual experiences. In 1766, the German philosopher Immanuel Kant declared that people who claim to see spirits, such as his contemporary, the Swedish scientist Emanuel Swedenborg, are mad. Kant, a believer in the immortality of the soul, did not draw on empirical or medical knowledge to make his case, and was not beyond employing a fart joke to get his derision across: ‘If a hypochondriac wind romps in the intestines it depends on the direction it takes; if it descends it becomes a f–––, if it ascends it becomes an apparition or sacred inspiration.’

Another ‘enlightened’ enemy of other-worldly visions was the chemist and devout Christian, Joseph Priestley. His own critique of spirit seership in 1791 did not advance scientific arguments either, but presented biblical ‘proof’ that the only legitimate afterlife was the bodily resurrection of the dead on Judgment Day.

However, there is good cause to question the overzealous pathologisation of spiritual sightings and ghostly visions. About a century after Kant and Priestley scoffed at such experiences, William James, the ‘father’ of American scientific psychology, participated in research on the first international census of hallucinations in ‘healthy’ people. The census was carried out in 1889-97 on behalf of the International Congress of Experimental Psychology, and drew on a sample of 17,000 men and women. This survey showed that hallucinations – including ghostly visions – were remarkably widespread, thus severely undermining contemporary medical views of their inherent pathology.

But the project was unorthodox in yet another respect because it scrutinised claims of ‘veridical’ impressions – that is, cases where people reported seeing an apparition of a loved one suffering an accident or other crisis, which they had in fact undergone, but which the hallucinator couldn’t have known about through ‘normal’ means. The vicinity of such positive findings with ‘ghost stories’ was reason enough for most intellectuals not to touch the census report with a bargepole, and the pathological interpretation of hallucinations and visions continued to prevail until the late-20th century.

Things slowly began to change in about 1971, when the *British Medical Journal* published a study on ‘the hallucinations of widowhood’ by the Welsh physician W Dewi Rees. Of the 293 bereaved women and men in Rees’s sample, 46.7 per cent reported encounters with their deceased spouses. Most important, 69 per cent perceived these encounters as helpful, whereas only 6 per cent found them unsettling. Many of these experiences, which ranged from a sense of presence, to tactile, auditory and visual impressions indistinguishable from interactions with living persons, continued over years.

Rees’s paper inspired a trickle of fresh studies that confirmed his initial findings – these ‘hallucinations’ seem neither inherently pathological nor therapeutically undesirable. On the contrary, whatever their ultimate causes, they often appear to provide the bereaved with much-needed strength to carry on.
Rees’s study coincided with writings by a pioneer of the modern hospice movement, the Swiss-American psychiatrist Elisabeth Kübler-Ross, in which she emphasised the prevalence of comforting other-worldly visions reported by dying patients – an observation supported by later researchers. Indeed, a 2010 study in the *Archives of Gerontology and Geriatrics* addressed the need for special training for medical personnel regarding these experiences, and in recent years the academic literature on end-of-life care has recurrently examined the constructive functions of death-bed visions in helping the dying come to terms with impending death.

Kübler-Ross was also among the first psychiatrists to write about ‘near-death experiences’ (NDEs) reported by survivors of cardiac arrests and other close brushes with death. Certain elements have pervaded popular culture – impressions of leaving one’s body, passing through a tunnel or barrier, encounters with deceased loved ones, a light representing unconditional acceptance, insights of the interconnectedness of all living beings, and so on.

Once you ignore the latest clickbait claiming that scientists studying NDEs have either ‘proven’ life after death or debunked the afterlife by reducing them to brain chemistry, you start to realise that there’s a considerable amount of rigorous research published in mainstream medical journals, whose consensus is in line with neither of these popular polarisations, but which shows the psychological import of the experiences. For instance, although no two NDEs are identical, they usually have in common that they cause lasting and often dramatic personality changes. Regardless of the survivors’ pre-existing spiritual inclinations, they usually form the conviction that death is not the end.

Understandably, this finding alone makes a lot of people rather nervous, as one might fear threats to the secular character of science, or even an abuse of NDE research in the service of fire-and-brimstone evangelism. But the specialist literature provides little justification for such worries. Other attested after-effects of NDEs include dramatic increases in empathy, altruism and environmental responsibility, as well as strongly reduced competitiveness and consumerism.

Virtually all elements of NDEs can also occur in psychedelic ‘mystical’ experiences induced by substances such as psilocybin and DMT. Trials at institutions such as Johns Hopkins University in Baltimore and Imperial College London have revealed that these experiences can occasion personality changes similar to those following NDEs, most notably a loss of fear of death and a newfound purpose in life. Psychedelic therapies are now becoming a serious contender in the treatment of severe conditions including addictions, post-traumatic stress disorder and treatment-resistant depressions.

This brings us back to James, whose arguments in *The Varieties of Religious Experience* for the pragmatic clinical and social value of such transformative episodes have been mostly ignored by the scientific and medical mainstream. If there really are concrete benefits of personality changes following ‘mystical’ experiences, this might justify a question that’s not usually raised: could it be harmful to follow blindly the standard narrative of Western modernity, according to which ‘materialism’ is not only the default metaphysics of science, but an obligatory philosophy of life demanded by centuries of supposedly linear progress based on allegedly impartial research?
Sure, the dangers of gullibility are evident enough in the tragedies caused by religious fanatics, medical quacks and ruthless politicians. And, granted, spiritual worldviews are not good for everybody. Faith in the ultimate benevolence of the cosmos will strike many as hopelessly irrational. Yet, a century on from James’s pragmatic philosophy and psychology of transformative experiences, it might be time to restore a balanced perspective, to acknowledge the damage that has been caused by stigma, misdiagnoses and mis- or overmedication of individuals reporting ‘weird’ experiences. One can be personally skeptical of the ultimate validity of mystical beliefs and leave properly theological questions strictly aside, yet still investigate the salutary and prophylactic potential of these phenomena.

By making this quasi-clinical proposal, I’m aware that I could be overstepping my boundaries as a historian of Western science studying the means by which transcendental positions have been rendered inherently ‘unscientific’ over time. However, questions of belief versus evidence are not the exclusive domain of scientific and historical research. In fact, orthodoxy is often crystallised collective bias starting on a subjective level, which, as James himself urged, is ‘a weakness of our nature from which we must free ourselves, if we can’.

No matter if we are committed to scientific orthodoxy or to an open-minded perspective on ghostly visions and other unusual subjective experiences, both will require cultivating a relentless scrutiny of the concrete sources that nourish our most fundamental convictions – including the religious and scientific authorities on which they rest perhaps a little too willingly.
true
true
true
Ghostly hallucinations and other unusual experiences can be therapeutic – we should be careful not to overpathologise them
2024-10-12 00:00:00
2020-01-06 00:00:00
https://images.aeonmedia…y=75&format=auto
article
aeon.co
Aeon Magazine
null
null
22,243,954
https://vicki.substack.com/p/one-very-bad-apple
One very bad Apple
Vicki Boykis
**Art: Apple Gatherers, Camille Pissaro, 1891**

My fifth grade teacher, Mr. Stains, had a big energy about him. He imparted two American cultural norms upon me. First, he taught me about the religion of American football. (If you’re also interested in learning more, I recommend Billy Lynn’s Long Halftime Walk.) And second, he taught me about bumper stickers. He loved collecting them, and had a bunch tacked up on his bulletin board. One of his favorites was, “Just because you’re paranoid, doesn’t mean they’re not out to get you, because they are.”

The phrase has lately turned from a funny school memory into a jaded way of viewing a world that’s stacked against the end-consumer. Which is probably what Joseph Heller intended when he originally wrote Catch-22.

**Bugs don’t get through closed windows**

Recently, I’ve been thinking about this phrase in the context of Apple’s “commitment to privacy.” Apple has always made a marketing pitch that it was the most secure platform. This all started with Steve Jobs, who was famously obsessive about having complete control over every aspect of the hardware and software. Apple started this process early on. By late 2013, when Apple released its iOS 7 system, the company was encrypting by default all third-party data stored on customers’ phones. Since Apple is closed, it’s harder for hackers to get in. Security also means that Apple itself can’t reverse-engineer the code to see underlying messages.

**The Apple of my FBI**

This theory was put to a horrific stress test in 2015, when two shooters killed 14 people and injured 22 in San Bernardino, California. The shooters were killed in a shoot-out, and the police recovered three phones from the crime scene. One of them, the shooter’s work phone, was still intact, and locked with a numeric passcode. In the aftermath, the shooting was declared an act of terrorism. As a result, the federal government wanted to get involved. In 2016, the FBI asked Apple to unlock the phone. Apple did not want to unlock the phone.

The iPhone was locked with a four-digit passcode that the FBI had been unable to crack. The FBI wanted Apple to create a special version of iOS that would accept an unlimited combination of passwords electronically, until the right one was found. The new iOS could be side-loaded onto the iPhone, leaving the data intact. But Apple had refused. Cook and his team were convinced that a new unlocked version of iOS would be very, very dangerous. After thinking on the issue with a small group, at 4:30 in the morning, Tim Apple released a statement that talked about the vital need for encryption, and the threat against data security that the FBI’s request had resulted in.

Specifically, the FBI wants us to make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation. In the wrong hands, this software — which does not exist today — would have the potential to unlock any iPhone in someone’s physical possession. The FBI may use different words to describe this tool, but make no mistake: Building a version of iOS that bypasses security in this way would undeniably create a backdoor. And while the government may argue that its use would be limited to this case, there is no way to guarantee such control.

**The Apple of my iMessage**

This was perhaps the first time I heard any large company CEO talking about privacy and actually putting his money where his mouth was. I was super impressed.
That year, I switched over to the iPhone. With everything I read after that, I became reassured, both by Apple and by third-party commentators, that Apple had no interest in anything except user privacy, because they didn’t need to sell data.

“The truth is we could make a ton of money if we monetized our customer, if our customer was our product,” Cook said. “We’ve elected not to do that.”

“Privacy to us is a human right. It’s a civil liberty, and something that is unique to America. This is like freedom of speech and freedom of the press,” Cook said. “Privacy is right up there with that for us.”

And if I didn’t believe Tim, there were his privacy czars.

Indeed, any collection of Apple customer data requires sign-off from a committee of three “privacy czars” and a top executive, according to four former employees who worked on a variety of products that went through privacy vetting. Approval is anything but automatic: products including the Siri voice-command feature and the recently scaled-back iAd advertising network were restricted over privacy concerns, these people said.

And if I didn’t believe Tim and the privacy czars, in 2016 Apple’s machine learning teams started talking publicly about the differential privacy they were working on. Differential privacy is the practice of adding enough fake data, or noise, to a given machine learning algorithm that you can no longer tie back data to individuals, but it still allows ML predictions to work. Combined with federated learning, where machine learning models are trained and run directly on the mobile device without directly connecting to Apple’s server, differential privacy results in really good, strong privacy. (A toy sketch of the noise-adding idea appears at the end of this piece.)

Differential privacy [2] provides a mathematically rigorous definition of privacy and is one of the strongest guarantees of privacy available. It is rooted in the idea that carefully calibrated noise can mask a user’s data. When many people submit data, the noise that has been added averages out and meaningful information emerges.

But trusting a company is one thing. Trusting an independent person that you yourself trust is another. And Maciej Cegłowski said he trusted Apple.

Maciej runs Pinboard. He’s written essays that I’ve linked to so many times that I should be giving him royalties. Some of my favorites include Haunted by Data, the Website Obesity Crisis, and Build a Better Monster, which I saw him deliver live in Philly. Right now he’s in Hong Kong doing some of the best reporting on the protests, in spite of the fact that he is not a journalist. In 2017, Maciej started working with political campaigns and journalists on securing their devices. He’s since advocated many times for people to use iPhones.

So, if Tim, the developers, the data scientists, the journalists, and Maciej were all telling me that I should use an Apple phone, I was going to use an Apple phone.

**Doubts**

And things were great for a year or so. But then, the paranoia started creeping in. First, cryptographers were not really happy with the way iMessages were encrypted. In 2016, they wrote a paper about ways you could exploit iMessage.

In this paper, we conduct a thorough analysis of iMessage to determine the security of the protocol against a variety of attacks. Our analysis shows that iMessage has significant vulnerabilities that can be exploited by a sophisticated attacker. The practical implication of these attacks is that any party who gains access to iMessage ciphertexts may potentially decrypt them remotely and after the fact.
The researchers, including Matthew Green, went on to say, “Our main recommendation is that Apple should replace the entirety of iMessage with a messaging system that has been properly designed and formally verified.” That sounds serious? But I’m not a cryptography expert. So I let that one slide.

Then, there were reports that Apple contractors were listening to Siri.

According to that contractor, Siri interactions are sent to workers, who listen to the recording and are asked to grade it for a variety of factors, like whether the request was intentional or a false positive that accidentally triggered Siri, or if the response was helpful.

But I have never, ever turned on Siri, and the reports said that Apple anonymized the commands. I let it slide.

Then, there was the story that you still had to opt out of ad tracking on your iPhone because third-party apps were collecting stuff about you.

You might assume you can count on Apple to sweat all the privacy details. After all, it touted in a recent ad, “What happens on your iPhone stays on your iPhone.” My investigation suggests otherwise. iPhone apps I discovered tracking me by passing information to third parties — just while I was asleep — include Microsoft OneDrive, Intuit’s Mint, Nike, Spotify, The Washington Post and IBM’s The Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.

Then, there were reports that Apple was running part of iCloud on AWS, a move that is fraught with its own security considerations and implications, not the least of which is that Apple is trusting a significant part of its infrastructure to a competitor.

Then, there was the paper put out (by Google, but still) about how Apple’s intelligent tracking protection on Safari leaks data.

Then, the last straw this January, when it was revealed that Apple was not encrypting iCloud backups, at the request of the FBI. And Apple is again facing pressure, as the FBI is asking it to hand over phones related to another shooting that happened this January in Florida.

And, finally, whatever happened to the 5C iPhone that Tim Apple so valiantly fought the FBI for? The government was able to backdoor into it anyway, without Apple’s help.

**Why, Apple, why?**

In theory, we should all be very, very mad at Apple, which is playing in a big game of cross-hatch. At the same time that it is giving over unencrypted data to governments, has a big presence in China by doing what the Chinese government asks it to do, and allowing targeted advertising, it’s doing an enormous pro-privacy advertising campaign. Just look at this:

And look at this ad that it put up at CES in Las Vegas this year, a trade show which it hasn’t even attended as a vendor in years.

**What’s up**

All we have to do is look at the workhorse models of Normcorian analysis: Apple’s history, its 10-K forms, and systems theory.

First, I think it’s important to differentiate two things: the privacy of Apple systems themselves (iCloud, iMessage, the phone), and the third-party app ecosystem on which the company also relies. The first is entirely Apple’s responsibility.

Apple started out with the Jobsian premise of being closed, because being closed means you can control every aspect. Its corporate structure was basically just product decisions coming down from on high. But when Tim Cook took over, the small, sleek systems that Steve had such a tight grip over exploded in size.
For example, Apple’s employee numbers have increased exponentially, from 20k to over 100k, since 2008. This in and of itself makes things hard to manage. There are 800 people working on the new iPhone camera alone. Imagine how many moving pieces that is. Now, add in iMessage. Add in the browser. Look! This diagram shows how many services are involved, and this is just for the AUDIO part of the phone. (Remember how complicated Ring is? Now multiply that by a million.)

So, how many people are working on the iOS ecosystem? Anywhere upwards of 5,000 (based on a super loose Google search). The phone itself and the OS have also grown exponentially more complicated. And the services Apple is offering are also a lot more diverse. It’s not just hardware anymore. It’s also software, streaming services, and peripherals like Watch.

However, Apple is facing immense competition.

The markets for the Company’s products and services are highly competitive and the Company is confronted by aggressive competition in all areas of its business.

This means the company has to move quickly and differentiate. It’s already gotten planned obsolescence down to a science for its current phone owners:

the Company must continually introduce new products, services and technologies, enhance existing products and services, effectively stimulate customer demand for new and upgraded products and services

So it has to lure new phone owners. And what better way to do that than to play up the privacy angle, something that’s been a growing concern for consumers? In the GDPR and CCPA era, privacy is a competitive advantage.

Given all of this, it’s impossible to monitor every single security threat. So when Apple goofs on a privacy thing, I assume that half of it was intentional, and half was because a group of project managers simply can’t oversee all the ways that a privacy setting can go wrong on a phone connected to the internet. As Bruce Schneier said, security is really hard. And this is not even going into all the possible attack vectors that arise when a phone actively connects to the internet, including GPS tracking, which affects every phone.

Second, when the app store launched in 2008, there were 500 apps. There are now over 2 million, and each of them could be tracking user data in any number of given ways. Apple has to tread a fine line. If it locks everything down at the phone level, apps would get angry and leave the platform. If it doesn’t, people get angry. Like YouTube, Apple has to tread a fine line between being permissive enough and being cancelled.

The truth of the matter is that any modern cell phone is like a very data-rich, leaky sieve, constantly giving out information about you in any number of ways, to any number of parties, without incentives for companies to rein it in. Which is why Edward Snowden had lawyers put their phones in the fridge.

Speaking of the NSA, Apple’s finally gotten so big that many governmental agencies are listening and making demands to get access to phones. There is the FBI. Then there is China, for whom Apple has already made a ton of concessions. And now Apple has a Russia problem, too. Russia recently mandated that all cell phones pre-load software that tells the Russian government what’s up.
In November 2019, Russian parliament passed what’s become known as the “law against Apple.” The legislation will require all smartphone devices to preload a host of applications that may provide the Russian government with a glut of information about its citizens, including their location, finances, and private communications.

So, here we are, in 2020, with Apple in a bit of a pickle. It’s becoming so big that it’s not prioritizing security. At the same time, it needs to advertise privacy as a key differentiator as consumer tastes change. And, at the same time, it’s about to get cancelled by the FBI, China, and Russia. And while it’s thinking over all of these things, it’s royally screwing over the consumer who came in search of a respite from being tracked.

And what is the consumer doing? Well, this one in particular has limited ad tracking, stopped iCloud backups of messages, and has resumed her all-encompassing paranoia.

**What I’m reading lately:**

I’m on Python Bytes this week! Big if True:

**The Newsletter:**

This newsletter is about a different angle on tech news. It goes out once a week to free subscribers, and once more to paid subscribers. If you like it, forward it!

**The Author:**

I’m a data scientist. Most of my free time is spent wrangling a preschooler and a baby, reading, and writing bad tweets. Find out more here or follow me on Twitter.
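As promised above, here is a toy sketch of the differential-privacy idea described in this piece: each user perturbs their own value with calibrated Laplace noise before it leaves the device, so a single report reveals little, while the average over many reports stays accurate. This is a minimal illustration under assumed parameters (the epsilon, sensitivity, and simulated data are all made up for the example), not Apple's actual mechanism.

```python
import numpy as np

def privatize(value, sensitivity=1.0, epsilon=0.5):
    """Add Laplace noise calibrated to sensitivity / epsilon.

    Smaller epsilon means more noise and stronger privacy. These
    parameter values are illustrative assumptions, not Apple's settings.
    """
    return value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Simulate 10,000 users, each holding a private value in [0, 1].
true_values = np.random.rand(10_000)

# Each user noises their own value before reporting it.
noisy_reports = np.array([privatize(v) for v in true_values])

# A single report says little about its user...
print("one user, true vs reported:", true_values[0], noisy_reports[0])

# ...but across many users the noise averages out.
print("true mean:    ", true_values.mean())
print("reported mean:", noisy_reports.mean())
```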
true
true
true
Why is Apple's commitment to privacy going down the drain?
2024-10-12 00:00:00
2020-02-04 00:00:00
https://substackcdn.com/…1999_764x900.png
article
substack.com
Normcore Tech
null
null
18,382,539
https://blog.delibr.com/pos-should-write-explicit-questions-with-decisions-to-structure-the-conversations-around-new-features/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,802,136
http://io9.com/humanity-is-now-officially-ready-for-suspended-animatio-1581727874
io9 | We Come From the Future
Justin Carter
Prime Video remains committed to its expensive Tolkien adaptation—though there's still no official announcement about season three. Todd Phillips' Joker saga took a...surprising turn, but Connor Storrie thinks the film was fated to not sit well with audiences. Art the Clown may be jumping to a new medium, but he's still gonna do what he does best and raise bloody hell. If Captain America 4 and the Snow White remake weren't enough for ya, Lilo & Stitch and Freaky Friday 2 are also hitting theaters next year. The Warner Bros. streamer has beefed up its horror selection in the name of spooky season. Also, Amazon expects its upcoming Tomb Raider series to be a big hit. No, it's not between Wade and Logan. Halloween is coming up fast—and there's no time like right now to kick off your scary streaming binge. Fans have plenty of My Hero Academia content to look forward to (and dread) before its anime finale. Amazon MGM's live-action feature based on the beloved anime just took a mecha-tastic step forward. Anime has far more to offer than shonen battle anime, and Dan Da Dan is proof of that. Not even an infinite number of alternate realities can help most versions of Harry Kim get a promotion, it seems. The latest episode of Marvel's witchy Disney+ series invites you to a slumber party massacre. Ashlee Lhamon's "Caesura" is io9's featured Lightspeed Magazine story for October. Another classic Star Wars game is getting the updated re-release treatment, with some changes to its anachronistic take on the prequel-era Jedi. And see how the world's getting on without Superman in a tease of what's coming in Superman & Lois' final season. The DC spinoff sequel co-starring Phoenix and Lady Gaga is now in theaters. Lego just released Luke Skywalker's lightsaber from Return of the Jedi as a set and it's got a hidden gem, literally. For the first time in the fantasy gaming series' history, players will not be menaced by giant, eight-legged beasties. There's plenty of other things trying to eat you, though. The badges for the 2025 convention, designed by artist TAKUMI, render Star Wars heroes and villains alike in the style of period Japanese art. After WandaVision's tight ties to Marvel's movies, Jac Schaeffer is glad for some breathing room on the witchy Disney+ series.
true
true
true
null
2024-10-12 00:00:00
2024-10-12 00:00:00
https://gizmodo.com/app/…zmodo-social.jpg
article
gizmodo.com
Gizmodo
null
null
5,486,965
http://www.xconomy.com/san-francisco/2013/04/03/tv-apps-aim-to-channel-the-flood-of-online-video/
Home | Informa Connect
null
#### We are Informa Connect

# Live events, digital content and training for professionals who want to achieve more.

Search live and on-demand events, training and other content. See upcoming events.

## Choose your interest

Find out about our industry events, digital content, and on-demand experiences, providing you with exceptional insights, connections, and commercial edge.

Upcoming events: Attend our next events, either in person, online or on-demand.

Upcoming Courses: Attend our training courses, either in person, online or on-demand.

Trending News & Insights: See what your industry is talking about right now.

## About Informa Connect

Providing professionals with access to extraordinary people and exceptional insight.
true
true
true
null
2024-10-12 00:00:00
2024-10-11 00:00:00
https://informaconnect.c…9c4763325724.png
website
informaconnect.com
informaconnect.com
null
null
18,759,568
https://muriz.me/products/lemon/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
10,635,620
http://www.wikitract.com/neural-random-access-machines/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,795,372
http://byutifu.com/live.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,347,921
https://writing.kemitchell.com/2018/10/24/How-to-Speak-Copyleft.html
How to Speak Copyleft
All content by Kyle E Mitchell; Who Is Not Your Lawyer
# How to Speak Copyleft

*the missing vocabulary of copyleft design*

This post is part of a series, SSPL.

What does the Open Source Definition have to say about how strong copyleft licenses can be? As it turns out, not much that’s helpful, and quite a bit that’s not. The Definition’s twenty-year-old criteria at best mildly suggest limits on open source copyleft, without offering any terms in which to analyze or express them. This post offers the missing vocabulary for copyleft licenses and copyleft-license limits, as a bridge to forthcoming posts on the Definition and what those limits should be.

## Copyleft

We can generalize copyleft software licenses as permissive public software licenses with additional rules requiring licensees to share other work alike. Lawrence Rosen, former counsel for the Open Source Initiative, made this explicit in the tiny textual difference between his permissive Academic Free License and copyleft Open Software License.

## Design

To implement a copyleft rule, a drafter must make four independent *design decisions*:

- *Trigger*: When must licensees share other work?
- *Reach*: What other work must licensees share?
- *Licensing*: On what terms must licensees share that work?
- *Distribution*: How must licensees share source for that work?

Different copyleft licenses answer these questions differently, implementing different *design choices* in legal terms. When two copyleft licenses make the same general design choices, we can say they share a common *design approach*, even if they implement their design choices in entirely different language.

## Strength

Generally, the more situations in which a copyleft implementation triggers, the more code it reaches, the more specific the license terms it requires, and the more broadly it requires source to be distributed, the *stronger* that copyleft license is. Calling a copyleft license strong thus gives us a general hint about its design, and the expected effects of that design, in practice. But calling a copyleft license strong does *not* tell us which particular design choices were made.

For example, AGPLv3 and OSL 3.0 are currently considered strong copyleft licenses. Their triggers differ. AGPLv3 triggers on providing a network service only if the licensee made changes to the software as originally provided, while OSL triggers on “External Deployment”, with or without changes. Their reaches also vary. AGPLv3 reaches all “Corresponding Source”, including “all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts”, while OSL reaches only the work licensed and copyright “derivative works” of it. OSL’s trigger is stronger than AGPLv3’s, but AGPLv3’s reach is stronger than OSL’s.

## Effectiveness

Calling a copyleft license strong doesn’t tell us which technical and legal circumstances its design choices depend on for effect, either. We can’t decide whether a strong license is *effective* in practice without knowing both which design choices it made, and how those choices play out in current technical and legal circumstances. Over time, as circumstances change and reviewers find more gaps in a license’s design and language, a license that began very strong may come to seem weak, and a license that started off effective may become ineffective, especially in particular circumstances. GPLv2 was once the strongest well known copyleft license.
Since the appearance of network-copyleft licenses like AGPLv1, and the changes in the software industry those licenses responded to, GPLv2 no longer seems that strong. Neither is GPLv2 so effective as it once was, for the run of all software. In particular, GPLv2 is largely ineffective for software libraries widely used in network services.

## Maintenance

Open source licenses, like open source programs, require maintenance. Software, the tools and processes we use to make it, and the ways we use it all change, and so does the law. Changes in operating environment require changes in design and implementation, to keep achieving the effects developers want. Copyleft and permissive licenses share maintenance tasks in common, like modernization of notice conditions and response to patent risk. Copyleft licenses face additional problems peculiar to the rules that distinguish them from permissive licenses. Each copyleft design decision poses a maintenance burden, to ensure specific, effective design choices remain available to copyleft license implementers. Implementations of specific design choices likewise require maintenance. Finally, design approaches require maintenance, as well.

## Regulation

The open development community as a whole, or groups within it, might choose to *regulate* maintenance of copyleft, restricting the acceptable copyleft rules drafters can write in new licenses, and thereby the strength and effectiveness of copyleft licenses available to those who can’t or won’t write their own. In theory, regulation could weaken *or* strengthen copyleft. But as a practical matter, sufficiently weak copyleft licenses have the same effects as permissive licenses, and permissive licenses are broadly accepted. So copyleft regulation functions to weaken accepted copyleft approaches and implementations.

The most important school of copyleft regulation has long been a form of self-regulation: “software freedom” as defined by the Free Software Foundation. This comes through most clearly in how FSF-drafted licenses permit the creation, use, and internal distribution of “private changes” by individuals and even very large organizations, without distributing or licensing source, even though copyleft could require doing so. The FSF has rejected stronger copyleft licenses from other drafters that do require sharing of private changes: Plan 9, Open Watcom, and RPL. The Open Source Initiative approved those licenses, indicating acceptance of looser copyleft regulation than FSF.

The underlying law that software licenses rely upon could also regulate copyleft. That regulation might take the form of substantive limits on what copyleft licenses can accomplish, no matter how they’re worded, as well as limits on specific wording, that lawyers could draft around. For example, the doctrine of copyright misuse might eventually regulate copyleft reach. US courts disfavor copyright misuse, leaving it largely unclear, but a new development could impose a limit that drafters could not work around. In contrast, the fact that copyright licenses as such cannot control the use of software as such could limit copyleft triggers. But drafters have worked around that limit, leveraging the fact that in order to use software, licensees have to copy it. Copyleft drafters write their use rules as copying rules, implicitly or explicitly.

For any proposed copyleft regulation, we could ask whether it excludes particular choices for a copyleft design decision, or overall design approaches, by some more general criteria.
A rule prohibiting copyleft from triggering on use of a program would regulate trigger choices. As a consequence of regulating triggers, the rule would also regulate design approaches, excluding all those that incorporate use-based triggers, be their other design choices weak or strong. But a rule that if a copyleft license triggers on use, it must limit reach to changes and additions to the licensed work itself would regulate design approaches. Copyleft licenses might reach more work without triggering on use, or trigger on use without reaching past changes and additions, but could not do both.

## Purpose

Strong copyleft serves two general user bases: software freedom activists and privately motivated upstarts. Activists use strong copyleft to exclude proprietary software creators, whose practices they condemn, and with whom they compete, from the benefits of their work. Upstarts also use strong copyleft to exclude others from the benefits of their work, to bestow competitive advantage on open developers, and to motivate submission of improvements back to their projects, all while preserving some advantages of permissive licensing. Some developers also use strong copyleft as the basis for “dual licensing” or “selling exceptions”: vending private licenses that permit use in proprietary software, which their public copyleft licenses do not.

If a copyleft license’s trigger is too weak to cover the most valuable use of a piece of software, the copyleft license loses effectiveness for many purposes. For example, a copyleft trigger on distribution of software and work based on it, like GPLv2’s, fails to trigger on use of that library to provide a web server application, since the web application’s creator never distributes software to others. A stronger copyleft license with a stronger trigger, like OSL, makes copyleft effective for such a library once more.

Even when a copyleft license’s trigger is effective, weakness in reach, licensing, and distribution can render it ineffective for purpose overall. For example, if a copyleft license’s trigger covers use of a library to provide a web application, but it reaches only the original library code and any changes to the library, not other parts of the web application, meeting the copyleft requirement may be trivial, in effect affording everyone, even closed developers, the benefit of the program. The license may continue to motivate submission of improvements back. But it may also facilitate denying software freedom by keeping the rest of the web application code closed, permitting use of the copyleft software to compete against its creator, whether the creator competes with businesses, other open projects, or both, and dodging the need for a paid, proprietary-use license, frustrating these other possible purposes.

## Repurposing

Open source history is full of examples of developers applying a license drafted for one purpose for other, very different purposes. Developers *repurpose* licenses because their effects, at least at the time, meet their needs, even if the use case never occurred to the drafter, and because they lack the resources to draft and socialize licenses of their own. In particular, for lack of a stable organization or coalition representing upstart copyleft users, many such users repurpose the Free Software Foundation’s activist copyleft licenses. Linus Torvalds describes GPLv2 as a license that requires sharing changes back, brushing off the rest of the license as irrelevant legal details.
Securing changes back was not the purpose of GPLv2, and no provision of GPLv2 says anything about sending code back to the licensor. Most dramatically, because of the FSF’s philosophical position on private changes, it avoided having GPLv2 trigger when a developer merely makes changes; GPLv2 triggers only when a change is made *and* the changed version is distributed. GPLv2’s preamble says a great deal that Linus may not agree with. According to the FSF, GPLv3 achieves the purposes set out in GPLv2’s preamble better than GPLv2 did. But Linus rejects GPLv3 in very strong terms.

Commercial firms that want to share with the open community, but not make their work available to competitors, often repurpose whatever the strongest FSF-drafted copyleft license happens to be at the time. Rationales vary. Some share Linus’ desire to receive patches back. Some want to empower open developers to compete with closed, for strategic reasons. Some want to deny their work to their competitors. The FSF’s writing indicates that it *did* intend its licenses to exclude proprietary competitors. On that point, business users and the FSF’s purposes overlap. But while the FSF’s philosophical writings don’t condemn dual licensing, or “selling exceptions”, as a practice, making that possible was not among FSF’s purposes in drafting licenses. The FSF itself does not dual license the projects that it stewards.

To be fair, the FSF also repurposes licenses. In cases where a free standard competes with a proprietary standard, the FSF recommends the Apache License, Version 2.0, to maximize adoption, including among very large firms. The Apache Foundation did not draft its license to serve the FSF’s activist purposes. It just so happens that in specific circumstances, its license has effects the FSF wants.
## Prior Art

With this vocabulary in hand, we can generalize innovative copyleft licenses by their purposes and design decisions:

| License | Trigger | Reach | Licensing | Distribution |
|---|---|---|---|---|
| GPLv2 (activist) | distribution of a copy | original code and work based on it | same terms | copies to recipients |
| LGPLv2 (activist, compromise) | distribution of a copy | original code and work based on it, but not work merely using the library | same terms or GPLv2+ | copies to recipients |
| CPL (upstart, compromise) | distribution of a copy | original code and work based on it, but not in separate modules | same terms | copies |
| AGPLv1 (activist, hardline) | distribution of a copy | original code and work based on it | same terms | copies to recipients and remote users of the program |
| Sybase/Watcom (upstart) | non-research, non-personal use | original code and work based on it | same terms | publication |
| OSL 3.0 (research) | distribution of a copy or providing a network service, with or without changes | original code and work based on it | same terms | copies to recipients and remote users of the program |
| EPLv1 (upstart, compromise) | distribution of a copy | original code and work based on it, but not in separate modules | same terms | copies |
| RPL 1.5 (upstart, hardline) | non-research, non-personal use | original code, work based on it, and code necessary to run work based on it | same terms | publication |
| GPLv3 (activist) | distribution of a copy | original code, work based on it, and all code needed to run and develop work | same terms or AGPLv3 | copies to recipients |
| AGPLv3 (activist) | distribution of a copy or providing a network service, with changes | original code, work based on it, and all code needed to run and develop work | same terms or GPLv3 | copies to recipients and remote users of the program |
| MPL 2.0 (activist, compromise) | distribution of a copy | original code and changes to it | same terms, GPLv2+, LGPLv2.1+, or AGPLv3 | copies to recipients |

Among these examples, the strongest design choices are:

- *Trigger*: Sybase, RPL, and OSL, which cover modified and unmodified use
- *Reach*: (A)GPLv3, which reach beyond work based on the software to other work
- *Distribution*: Sybase and RPL, which require publication, rather than just providing or offering downstream

Among these examples, the strongest design approaches are RPL, Sybase/Watcom, OSL, and AGPLv3.

Your thoughts and feedback are always welcome by e-mail.
true
true
true
the missing vocabulary of copyleft design
2024-10-12 00:00:00
2018-10-24 00:00:00
null
null
null
/Dev/Lawyer
null
null
23,566,294
https://www.cnbc.com/2020/06/18/facebook-removes-trump-ads-with-symbols-used-by-nazis.html
Facebook removes Trump ads containing symbols used by Nazis to identify political prisoners
Salvador Rodriguez
Facebook on Thursday announced that it removed ads from President Donald Trump's campaign that contained a symbol associated with Nazis.

"We removed these posts and ads for violating our policy against organized hate," a Facebook spokesman said in a statement. "Our policy prohibits using a banned hate group's symbol to identify political prisoners without the context that condemns or discusses the symbol."

The ads contained red downward-pointing triangles, which the Nazis used to mark political prisoners. The symbol was used in ads by Trump, Vice President Mike Pence and the Trump campaign. One variation of the ad was targeted primarily to male voters in Texas and had the potential to reach more than 1 million users, according to Facebook's Ad Library.

Campaign officials contended that the symbol is sometimes used by antifa, a loosely organized coalition of anti-fascists, although it does not appear to be commonly used. Trump has, without evidence, blamed antifa for violence at protests over the police killing of George Floyd, an unarmed Black man.

ADL, an organization that fights anti-Semitism, condemned the ads in a statement.

"Whether aware of the history or meaning, for the Trump campaign to use a symbol – one which is practically identical to that used by the Nazi regime to classify political prisoners in concentration camps – to attack his opponents is offensive and deeply troubling," said ADL CEO Jonathan Greenblatt. "It is not difficult for one to criticize their political opponent without using Nazi-era imagery. We implore the Trump campaign to take greater caution and familiarize themselves with the historical context before doing so. Ignorance is not an excuse for appropriating hateful symbols."

Facebook's removal of the ad comes after the company was criticized by its own employees and business partners for not removing or moderating a post by Trump that said "when the looting starts, the shooting starts."

Facebook employees who protested the decision argued that the post from Trump violates Facebook's community standards, which prohibit language that incites serious violence. Facebook has also come under fire for its policy of allowing political ads containing false statements, with CEO Mark Zuckerberg maintaining that the company should not be "arbiters of truth."
true
true
true
The ads contained red downward-pointing triangles, which the Nazis used to mark political prisoners.
2024-10-12 00:00:00
2020-06-18 00:00:00
https://image.cnbcfm.com…08&w=1920&h=1080
article
cnbc.com
CNBC
null
null
29,776,360
https://www.nasa.gov/sites/default/files/atoms/files/niac_ono_comethitchhiker_phasei_finalreport_tagged.pdf
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
7,779,969
http://finance.yahoo.com/news/trustegg-partners-national-black-church-183321932.html;_ylt=AwrSyCM38HxTsCYAhriTmYlQ
TrustEgg Partners With the National Black Church Initiative to Open One Million New Savings Accounts for Children
Marketwired
# TrustEgg Partners With the National Black Church Initiative to Open One Million New Savings Accounts for Children

SAN DIEGO, CA--(Marketwired - May 20, 2014) - Today, TrustEgg, an online savings platform, announced a partnership with the National Black Church Initiative (NBCI), a faith-based coalition of 34,000 churches, to open one million savings accounts for members' children. The collaboration will leverage NBCI's 15.7-million membership base to help bridge the socioeconomic divide affecting the black faith community while showing families that saving for the future is simple and important.

TrustEgg helps families save for a child's future in a simple, smart and safe way. The easy, online option gives families access to a well-respected fund at Vanguard that earns a market rate of return. There is no minimum and the fees are typically much lower than hiring a financial advisor. TrustEgg also capitalizes on social media elements and can be easily shared with friends and family who can contribute to the account.

"We're thrilled to partner with the NBCI to share this simple tool with their large membership base, helping families on their path to financial security," said Jeff Brice, CEO and Founder of TrustEgg. "TrustEgg is accessible to anyone, independent of background or current wealth status, to start an account. We're excited to see this partnership change the national savings rate and put one million kids on the path to financial stability."

Generally, African American families save substantially less over the course of their lives, risking the financial health of their children and themselves. The NBCI hopes to change these statistics for the better with this new partnership.

"This is a wonderful addition to NBCI's portfolio of financial wellness plans. Combining TrustEgg's expertise and tools with our 15.7 million-strong membership base opens new doors for the African American community," said Rev. Anthony Evans, NBCI President. "As a leader in housing financial issues for the Black Church, we are in a strong position to convey the importance of savings and TrustEgg's tools finally present this important resource in a way that speaks our congregants' language. We are thrilled to see the benefits this partnership will undoubtedly create."

Over the next few months, both organizations will work together through grassroots efforts to engage as much of the black faith community as possible, carrying the message that early financial planning delivers great rewards.

**About TrustEgg:**

TrustEgg is the simplest way to save for a child's future. TrustEgg enables anyone to create a Trust for their child in minutes for free with no minimum. The Trust can then be shared with friends and family. TrustEgg is a better savings option for the 70+ million children in the U.S. Learn more at www.trustegg.com

**About NBCI:**

The National Black Church Initiative (NBCI) is a coalition of 34,000 African American and Latino churches working to eradicate racial disparities in healthcare, technology, education, housing and the environment. NBCI's mission is to provide critical wellness information to all of its members, congregants, churches and the public. The National Black Church Initiative's methodology is utilizing faith and sound health science. The National Black Church Initiative's purpose is to partner with major organizations and officials whose main mission is to reduce racial disparities in the variety of areas cited above.
NBCI offers faith-based, out-of-the-box and cutting edge solutions to stubborn economic and social issues. NBCI's programs are governed by credible statistical analysis, science based strategies and techniques, and methods that work. Learn more at www.naltblackchurch.com.
true
true
true
Today, TrustEgg, an online savings platform, announced a partnership with the National Black Church Initiative , a faith-based coalition of 34,000 churches, to open one million savings accounts for member's ...
2024-10-12 00:00:00
2014-05-20 00:00:00
https://s.yimg.com/cv/ap…go-1200x1200.png
article
yahoo.com
Yahoo Finance
null
null
15,730,050
https://www.nytimes.com/2017/11/18/nyregion/new-york-subway-system-failure-delays.html?_r=0
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,192,706
https://www.theatlantic.com/business/archive/2016/12/pyschology-white-collar-criminal/503408/?single_page=true
The Psychology of White-Collar Criminals
Eugene Soltes
# The Psychology of White-Collar Criminals

*A researcher reflects on conversations with nearly 50 convicted executives about why they did what they did.*

Two leading executive headhunters once wrote a book called *Lessons From the Top: The Search for America’s Best Business Leaders* that celebrated 50 titans of industry. Readers were encouraged “to learn from and pattern themselves” after the leadership qualities displayed by these executives. Yet within a few years of the book’s 1999 publication, three of those 50 were convicted of white-collar crimes and headed to prison, and three more faced tens of millions of dollars in fines for illicit activity. It was an extraordinary rate of failure for executives once deemed the “very best—and most successful—business leaders in America.”

I’ve spent much of the last seven years investigating why so many respected executives engage in white-collar crime. Why is it that fraud, embezzlement, bribery, and insider trading often seem like disturbing norms among the upper echelons of business? Some commentators like to chalk up these executives’ failures to greed. Others argue that the extraordinary harm these executives do suggests that there’s something constitutionally different about them—that they are bad apples. Still others wonder if perhaps these individuals were blinded by ambition and just unable to admit failure.

In order to test these theories, I interacted extensively with nearly 50 of the most prominent executives convicted of white-collar offenses. Many of them had lost the confidence that they once displayed. Isolated from the business community that had placed them on a pedestal, many openly shared their views and perspective with me. Over the phone, by letter, and during visits to prison, I little by little began to better understand them.

At first, I was struck by their lack of remorse regarding either their actions or the harm those actions had caused. One executive even joked with me about how he’d been practicing with his $1,000-an-hour lawyer to convincingly, albeit falsely, express regret during his upcoming parole hearing. Troublingly, those who received lenient sentences for testifying against others often told me stories that differed from their sworn testimony.

Many of the convicted executives I spent time with described their conduct bluntly. “Morals go out the window when the pressure is on,” explained Steven Hoffenberg, who confessed to running a Ponzi scheme that stole from thousands of investors in his company, the Towers Financial Corporation. “When the responsibility is there and you have to meet budgetary numbers, you can forget about morals.”

The reactions to engaging in crime were not always as I expected, either. David Myers, the former controller of WorldCom, recalled thinking that he was “helping people and doing the right thing” while perpetrating one of the largest accounting frauds in history. In his mind, the fraud was superficially sustaining the company, its stock price, and the jobs of its employees.

Some former executives defiantly denied that they did anything criminal. “I was in a good career making a couple million a year,” explained one executive who helped devise millions of dollars in illicit tax structures (and spoke on the condition of anonymity, given his continuing legal situation), “so it’s not that I’m going to risk everything to go do something shady or illegal.” Others felt that they were unfairly and selectively prosecuted for behavior that was ubiquitous in their industry.
Most, however, accepted that they did something wrong. Yet, in spite of this recognition, it wasn’t clear even to the executives themselves why they made decisions that looked so thoughtless. After successful careers characterized by decades of careful decision-making, they found their own basic failures just as startling as others did.

When prosecutors try to explain white-collar misconduct, they often describe it as resulting from a cost-benefit calculation. A 1976 *Wall Street Journal* piece distinguished corporate crimes from other kinds of offenses by saying that “unlike the tempestuous and murderous spouse or the impoverished and desperate mugger, suite criminals are sophisticated and deliberative businessmen who engage in crime only after carefully calculating the benefits and costs.”

This idea that white-collar offenders weigh expected costs against expected benefits comports with notions of how executives ought to make decisions. The explanation is also rooted in the influential work of Gary Becker, the University of Chicago economist who was awarded the Nobel Prize for, among other things, mathematically modeling crime based on such trade-offs. Becker’s work contrasted with decades of prior scholarship that characterized criminals as somehow psychologically aberrant. Instead, he argued that crime could be explained by seeing criminals not as physically or psychologically different kinds of people, but rather as individuals who simply viewed the costs and benefits of criminal activity differently.

Motivated by this theory, I initially thought that if I could understand how executives thought about the costs and benefits of engaging in illicit conduct, I’d come to appreciate why they decided to act criminally. Perhaps they saw the rewards of hitting bonus targets or trading ahead of a deal as outweighing the potential repercussions. Maybe the executives just didn’t think they would get caught, so they underestimated the potential costs during their calculation.

The problem was that the more I listened, the more their criminal decisions didn’t look like carefully deliberative cost-benefit calculations at all. “At the time this was going on,” Scott London, a KPMG executive convicted of insider trading, told me, “I just never really thought about the consequences.” This executive’s remark wasn’t unique. For instance, Sam Waksal, the former CEO of ImClone Systems who shared inside information with his daughter in a scandal that would infamously also engulf Martha Stewart, was surprised that many viewed his actions as “some kind of giant byzantine idea that [he] was trying to perpetrate.” Waksal understood that calling his daughter and telling her to dump her shares was wrong. Since he knew the SEC monitors this kind of trading, his decision couldn’t possibly represent the careful reasoning of a self-made man who prided himself on his intellectual prowess. Had he actually put his mind to it, presumably he could have devised a better fraud. “I don’t know what I was thinking,” he lamented. “I wasn’t, sadly.”

If it’s bewildering that intelligent, even brilliant, people can fail to anticipate this devastation—not only to their firms, investors, and employees, but also to themselves—that’s because people assume they always act with careful thought and analysis. As a species, though, humans are incredibly poor at actually understanding their own decision-making processes.
In fact, many decisions, even consequential ones, arise not from deliberation or reflection but from intuitions and gut instincts. But if these executives relied on intuition when making their criminal decisions, why didn't they sense the possible consequences for themselves and for others? To outsiders, the harm caused by white-collar crimes is obvious. Economically, there's relatively little difference between embezzling money from shareholders and stealing it from their wallets. But there's a critical difference between a physical, intimate crime like taking someone's wallet and the white-collar variant. The perpetrators of white-collar crimes are physically, psychologically, and even temporally distant from their victims. An embezzler doesn't have to get close to victims, touch them, or see their reactions. As a consequence, embezzling doesn't trigger the same visceral response as robbery. Indeed, the former executives I came to know were unable to relate to those they had harmed. "It was, in my mind, a very small thing dealing with small dollars," London, of KPMG, explained as he described the impact of his insider trading on his amorphous victims. Others, like Andrew Fastow, the former CFO of Enron, were being honored by the likes of *CFO* magazine at the same time that they were engaging in fraud, perversely suggesting that their actions might be viewed positively by others. "People thought this stuff was frickin' brilliant," Fastow recalled of his excitement. Usually, a gut feeling that something will be harmful is enough of a deterrent. But when the harm is distant or abstract, this internal alarm doesn't always go off. This absence of intuition about the harm creates a particular challenge for executives. Today, managerial decisions impact ever-greater numbers of people, and the distance between executives and the people their decisions affect continues to grow. In fact, many of the people most harmed or helped by executives' decisions are those they will never identify or meet. In this less intimate world, age-old intuitions are not always well suited to sense the kinds of potential harms that people can cause in the business world. Reflecting on these limits to human intuition, I came to a conclusion that I found humbling. Most people like to think that they have the right values to make it through difficult times without falling prey to the same failures as the convicted executives I got to know. But those who believe they would face the same situations with their current values and viewpoints tend to underestimate the influence of the pressures, cultures, and norms that surround executive decision making. Perhaps a little humility is in order, given that people seem to have some difficulty predicting how they'd act in that environment. "What we all think is, 'When the big moral challenge comes, I will rise to the occasion,' [but] there's not actually that many of us that will actually rise to the occasion," as one former CFO put it. "I didn't realize I would be a felon."
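An aside for readers who want the "cost-benefit calculation" framing above made explicit: Becker's model is conventionally written as an expected-utility comparison. What follows is the standard textbook rendering of his 1968 formulation, not a formula that appears in this article; the symbol names are the usual ones.

```latex
% Expected utility of committing an offense, in Becker's notation:
%   p : probability of being caught and convicted
%   Y : the offender's gain if the offense succeeds
%   f : the monetary equivalent of the punishment
%   U : the offender's utility function
EU = p\,U(Y - f) + (1 - p)\,U(Y)
```

On this account, an offense is committed only when this expected utility exceeds the utility of the legitimate alternative, so raising either the odds of conviction or the penalty should deter. The article's point is precisely that no such computation preceded these executives' crimes.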
true
true
true
A researcher reflects on conversations with nearly 50 convicted executives about why they did what they did.
2024-10-12 00:00:00
2016-12-14 00:00:00
https://cdn.theatlantic.…SS2/original.jpg
article
theatlantic.com
The Atlantic
null
null
24,278,633
https://medium.com/@karti/managers-can-only-influence-not-enforce-bc0818efa953
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
10,187,461
http://www.usgamer.net/articles/in-their-own-words-an-oral-history-of-diablo-ii-with-david-brevik-max-schaefer-and-erich-schaefer
An Oral History of Diablo II With David Brevik, Max Schaefer, and Erich Schaefer
Kat Bailey
# An Oral History of Diablo II With David Brevik, Max Schaefer, and Erich Schaefer The battles with Blizzard, the year of crunch, and the shower ideas behind one of gaming's most enduring pillars. *This article first appeared on USgamer, a partner publication of VG247. Some content, such as this article, has been migrated to VG247 for posterity after USgamer's closure - but it has not been edited or further vetted by the VG247 team.* **A little more than fifteen years ago, Blizzard North set to work on what would become one of the most popular and enduring action RPGs ever - Diablo II.** Finally stable after the success of Diablo, Blizzard North looked to create an even bigger sequel. The project's principals included David Brevik, Max Schaefer, and Erich Schaefer - Blizzard North's co-founders and the designers of the original game. They eventually found themselves mired in an overwhelming project, one that demanded 18-hour days to complete. Here, in their own words, is the story of the development of Diablo II, and what it was like to be at Blizzard North during those days, from early 1997 to mid-2000. ## January 1997: The Aftermath of Diablo *Diablo was released on December 31, 1996. It was met with critical acclaim, validating a difficult development process that had put Condor on the brink of insolvency several times. Following their acquisition by Blizzard and the subsequent success of Diablo, the newly renamed Blizzard North began looking ahead to the future.* **Max Schaefer** It was a very weird time for us, because we signed to make Diablo first as an independent contractor. We were not part of Blizzard when we set out to make Diablo, and about halfway through they decided to buy us out. But that really changed the project entirely, so we almost started over halfway through, because now we were part of Blizzard, we didn't have any budgetary constraints like we did before. And then they added on the whole concept of doing Battle.net to it, so it was a time of intense change and action. All I remember in the aftermath was that it was so foreign to us the way that it kind of blew up before we put it out, that we were all kind of waiting to see what happens now. I remember that sense of, "Well, what happens now?" That's the thing I remember most about it, is the anticipation of this great unknown sequence of events that was now going to take place. **Erich Schaefer** Yeah, I remember the immediate aftermath. We were crunching trying to get it done leading right up to Christmas Eve. We took Christmas Day off, and then came back in on the 26th thinking, "OK, here we go, let's tackle the rest of the bugs, let's try to finish this thing up." And I remember that that morning it turned out we were pretty much done. There were no more deadly bugs. So, we were like, "OK. We were sitting there on the 26th, kinda, we're done, what do we do now?" And that just felt weird, because we were in such a daze from such a long crunch to get the thing done. Should we just send people home? Should we take the week off? It was kind of a weird haze going on and nobody knew what to do. So, we did end up pretty much taking the rest of the week through New Year's, then came back and said, "Well, I wonder what's going to happen." **Max Schaefer** It was no foregone conclusion that we would do a Diablo II. In fact, I think that we had decided that it would probably be fun to try something else.
But, with the obvious popularity of Diablo 1 and lack of a clear idea of what else to do, I remember we did make a pretty quick decision that we were going to launch into Diablo II, and do it in a way that addressed some of the issues that were coming up in Diablo 1. We had no idea that people were actually going to play this game, much less try to cheat it. I think we kind of launched the advent of cheating in internet games. **Erich Schaefer** It didn't take us too long to get to [the point of wanting to make Diablo II]. I think, as I recall, and this is a long time ago now, I was sort of always pro-Diablo 2. Even at the end of Diablo 1, I had a big wish list that sort of turned into the Diablo II design document of what we would do from there. So, from my point of view, I figure it was more relief that everyone else got on board. **David Brevik** We were all really happy with the success of Diablo. It was much more successful than we imagined it would be. But, we were kind of ready to move on to Diablo II because of some of the problems with Diablo 1. When we made Diablo 1, we just put in Battle.net, and we made it multiplayer about six months before we released the game, and it was peer-to-peer, not client-server, which means that every computer is in charge of all of its own information, including your character, so, it could easily be hacked. And when we made Battle.net, we knew the Internet existed and all this kind of thing, but it was like, if someone wants to hack their game, it's fine. They can ruin their own experience. But, then we realized, oh, crap, they can take that hack and they can put it on the internet, and now everybody has it. It wasn't just one person who was going to ruin their experience. So there was a real desire after seeing some of the feedback and seeing what was going on with the hacking to fix this in a real way. Given that, the team was pretty excited about moving on and working on a second version where we had the time to do a better job. We could do more as far as streaming levels, we could do more as far as keeping it in the world in a more cohesive fashion, as well as put in running and some of these things that we really wanted that didn't make Diablo 1, but were things that we wanted all along. And, so, I think the team was really eager to do that. So it was an exciting time. People were really happy with the success. Things were really good for Blizzard in general. StarCraft was on the horizon, so it was a really fun time. **Erich Schaefer** We moved offices a couple of times in that period. I'm not sure exactly when that happened, but I think we finished Diablo 1 with about 14 or 15 people on the staff, and then finished Diablo II with about, I'm going to say about 45. And I think there was a pretty steady ramp. We didn't immediately hire up a ton of people to get going. I think it was sort of a steady ramp up during the Diablo II development. The number one personality was definitely David. Without him, there's no way this could have ever happened. He was the fearless leader of us all, and drove the development day to day, especially when things were going weird or we had to get back on track. So, I think, to me, he was the shining leader that made this thing work out.
He had the technical skills to get anything done when we did have a hump, when it, oh, it turns out we were going to make this game a multiplayer game, or we were going to turn this thing that started out, famously, people talk about as a turn-based game, it became real time halfway through, and it only took Dave about a day to do that. We had a great crew of artists that I loved working with. Michio Okamura was our character artist, and just loved working with him. We went back and forth with character designs all the time. He designed the original Diablo himself, and most of the monsters. We actually hired him when we were doing superhero games, and I don't think he had any job experience before that. We were doing the Justice League Task Force game, and he could kind of draw superheroes. So we kind of based that art on his drawings, and then he transitioned right into the Diablo stuff really well. Rick Seis was sort of Dave's right-hand man, a programmer, great guy to work with. **Max Schaefer** It was a different time then. We hired a weird mix of friends of ours from the past and guys we had just met. [Erich] mentioned Michio... Michio didn't work a computer. He did line drawings that we would turn into things, and he did character designs, all just by hand. He was a tremendous artist. We hired other people that had rudimentary computer skills, but it's not like today, where you come out with a Master's degree from some university in some of these topics. Everyone was sort of self-taught and just winging it. It was a very colorful group of guys, and we were all doing this for the first time, and we were all just seeing where it went. So it wasn't like, again, today, where half the staff has worked on other major titles, everyone is super well-trained and specialized. It was definitely more free-wheeling back then. But, it was really tight at that time. I think as we grew into Diablo II, we got a little bit more into factions and issues and troubles, but back in the early times when there was just the 14 of us, or whatever it was, we would hang out together after work, and we would play NHL Hockey tournaments on the Sega Genesis in the office. When there's only that many people, you know everybody, you talk to everybody every day, you hang out after work, and that was... I remember that being very good times. A lot of those people, it's funny, are still in the business today. I think that we kind of represent an old guard, but a lot of those guys are still making games and still doing this as a career, even though, when we started that, nobody had thought of this really as a long term career. **Erich Schaefer** One guy we should definitely mention too, that Max still works with to this day, is Matt Uelmen. He did all our sound effects and stuff back in the day, too. He came on early, maybe our fifth or sixth hire, even though we weren't sure we even needed a sound or music guy. But he kind of hassled us so much that we finally hired him, and he's just a great guy to work with, and we still work with him to this day, at least at Runic. Just a brilliant musician. Usually the smartest guy in the office on a lot of topics. But just also the classic music guy. He has a lot of eccentricities and personality quirks. He was also an incredibly good tester. He could figure out how to exploit or break your game better than anybody.
**Max Schaefer** We hired a lot; we had to fill out the staff quite a bit at this point, so there was Stieg Hedlund as a designer, the Scandizzo brothers, a lot of guys. Yeah, I'm just thinking back. It got a lot bigger for Diablo II. It was a lot bigger task, and we were trying to do a lot more. We had to get a bigger office, and then we got a bigger office than that, after that. So, yeah, it was a time of growth. Probably a little bit more management than we were used to. **Erich Schaefer** None of us had any management experience, and kind of still don't. We did a lot of things wrong. I think the classic thing we did wrong was just mismanaging the time and the scheduling, and we started to crunch to finish the game, what, a year or two, a year and a half before it came out, thinking that, hey, we're pretty close, we're four months away if we can really push and get this thing done. And then after about four months of crunching we were nowhere near, and we had to call an end to the crunch for a while. So, that was just classic time mismanagement and just a bad prediction on our part. Happens all the time, but I think that was one of the worst examples of all time that I'm familiar with. **Max Schaefer** It all kind of stems from, you start into a project like this, and you realize all the cool things that you can do. And we're like, well, we have the money and the time now, so why not add this feature or that feature? And it just adds up quickly, and all of a sudden, a two-year project becomes a three-year project, and like Erich says, we spent way too much time crunching at the end. Actually, some of the darker times were the end of Diablo II, just because of the growth of the company and the growth of the project to the point where everyone was having to work seven days a week, all waking hours, for almost a year. ## Planning and Production: New Worlds *Blizzard North had big dreams for Diablo II. After taking a few months to recover from the grind of Diablo, they returned to begin work on the sequel.* **Erich Schaefer** The first six months were very slow. Through the middle of Diablo 1, the company had a very relaxed atmosphere. We would spend the whole day playing other games. We would all take trips to go watch movies, and go to Fry's and pick up the latest games. In a way, we wasted a ton of time. I think it was a good waste of time, and fun, but finishing Diablo 1 was such a grueling experience for us, especially, having not really done something that big before that, we took it real easy the first six months of '97. We got back and just goofed around a lot. So, imagine, once you get into that mode, it takes a while to ramp back up. So I think it was a bit of a slow, sluggish start. I wouldn't say it was bad, though. I think we needed it. **Max Schaefer** Yeah, we needed it. And there's always this time when you're working with new tech and new systems, that it takes a while for the programming staff and the engineering staff to have things in place for the designers and artists to actually work on stuff. So it's kind of natural between projects that there's a little bit of downtime. But it's always a bit of a challenge to get it ramped back up. **Erich Schaefer** I think by the second half, and again, this is a long time ago, but I think by the second half, we had a solid Act 1, we had the town, the surrounding landscapes, and I think the initial dungeons. And I think we had all that stuff by the end of '97, so we really had our feel and our look and knew what we were doing by then.
We went back and forth a lot on whether we should wait to announce until we were almost done, or whether we should try to build hype early on. I do remember, there was concern that we were announcing it too early, but at the same time, I remember we announced StarCraft so early they didn't even have a game. They literally had some placeholder art on top of the Warcraft engine. And it looked completely different. And so, at the time, we were like, "Hey, let's just announce these things, and build hype from the very beginning. Maybe it even gives us a little push to get the game done ourselves." And then, after those two examples, Diablo 2, which was announced way too early, and StarCraft, which was announced before they even started, I think we then changed tactics and kind of said, "Hey, let's just wait until four months before we know the game's done." But I know they've gone back and forth since. Most of those we do consider were announced too early, though. **Max Schaefer** Yeah, although, we did have some extra considerations with Diablo 1, because it wasn't too long after it was released. It was gaining popularity, and we realized that we were dealing with a pretty significant cheating problem that was impacting people's online experience, and we wanted to have an answer for that. We wanted to say, "Hey, yeah, for Diablo 2, we're going to a client-server system instead of a peer-to-peer system, that should take care of it." We thought it would be easy to take care of the cheats then. Not so much. But, we did want to have a public message to address it, and part of that was, "Hey, we're making Diablo 2 now, and it's not going to have this problem." **Erich Schaefer** We had this wish list of where we wanted to go [with Diablo II], and one of the obvious things was that we wanted it to not just be one town with one dungeon. We were going to make it throughout a series of lands. We ended up with three, plus the finale in Hell, but, we didn't really know. So, at first we were sort of just compiling a lot of visual ideas and where we wanted to go. **David Brevik** One of the things that we wanted to do that never got done in Diablo II was that we were trying to design this thing called Battle.net Town. You know, my idea was to get into a world, and you don't leave the world. I wanted to try to break that reality, break that illusion that we're not in a contiguous world. I wanted you to start in that world, not start in the Battle.net chat lobby, but start in that world. And Battle.net Town was going to be a glorified chat room where you could move around, but you could still go to the pub and chat with people, or go to a vendor, and then you would go to a specific person and you would travel to Act 1. So, you never left the world, and that was kind of one of the things that didn't make it, but that was one of the things that we wanted the most, that I personally wanted the most, which was, I really wanted this kind of, more of an immersive world feeling to the game than we had, and make it more expansive. We're not just in the dungeon beneath this town; there's a big world out there.
"Yeah, that's great. Doesn't make any sense in the story, but what the hell, it's kind of cool." - David Brevik We had been playing Ultima Online, and that had definitely some influence on this design, where you could walk from one part of the continent to the other. I was thinking, "Wow could we do this and mix it with Diablo?" And that was the way that we wanted to go, but we never really fully realized that dream. We got pretty close, in that we were able to stream a lot of the levels and things like that, but we never got to the point where you could go walk from the Sisters of the Sightless Eye Cathedral to the desert or whatever. That wasn't really fully realized for Blizzard until World of Warcraft. **Max Schaefer** The biggest thing was that we had the outdoors. We were going to expand from just levels of dungeon underneath a church and do real landscapes, real outdoor landscapes. So I was tasked with organizing the background team, and that was by far the biggest topic for us: "Now that we're going to be outdoors, what does that mean? Where should we go, where should we take this?" It really came down to, where would it be cool to be? It would be cool to start out in the Irish countryside, just like Diablo 1's town was in. But then, it's wide open, we should do desert levels, we could do rainforests or whatever. I think, again, we might have been one of the first, setting a long, now, long tradition, of having your second area in an RPG be the desert. You see that all over the place now. People start out in sort of an Irish countryside, then they go to a desert, then they go to something else. I remember, it was a whole lot of fun just pouring through books and looking at landscapes and looking at architecture and deciding where we wanted it to play. Where did we want to kill monsters? I think doing a desert is always challenging, because it's mostly sand, and so you have to break it up with enough features that it's interesting. Sort of the same with a rainforest, in that it's so densely packed with foliage that you kind of have to find clear areas that you play in. It's sort of the opposite of the desert. Instead of trying to invent stuff to constrain you and keep you along the path, you have to clear out areas and make it plausible that you're in a rainforest, and yet you can see your character and there's room to fight monsters. To that end, we had the rainforest mostly take place along a riverbed. **Erich Schaefer** That sort of reminds me of another challenge we had. The gameplay was fine in the dungeon in Diablo 1, but we were bringing it to the outdoors with just wide open fields. We asked ourselves, "What are the boundaries? What are the obstacles?" If it's just a field, there's no real tactics where you move around. So we put a lot of work into what should the sizes of these outdoors be and giving them boundaries and borders that made sense so you didn't want to wander forever. So we came up with all kinds of weird low walls that surrounded a lot of our outdoors that don't really make any sense. I remember at the time a lot of people were very negative on it. It was like, why couldn't we just hop over the walls? Itt's going to drive people crazy that we're limiting their area. I thought it was a big concern. Turns out, it felt really natural, and I don't think it ever got brought up again. The idea of taking it to the outdoors was one of the big early challenges. **Max Schaefer** And it took a lot of iteration, too. 
I remember we had way too big an area for a while, and claustrophobic areas. It took a lot of testing and play to figure out, when do you need a choke point just to reorient everyone along the path, and how big an area could be before you miss things or you're lost or you don't know where you've been? It was just a lot of play back and forth and tweaking of the size of things to get it right, because it is really different being outdoors than being down in a dungeon. **Erich Schaefer** Also, we had to invent, as far as I know, we probably invented the idea of a quest log to keep track of what quests you were on, which ones you had finished. For ourselves, at least, we had to invent that. There was no such thing in Diablo 1. It was just, you could do whatever was there, basically. There were some quests, but they didn't really operate in parallel, and you couldn't track them and do them or not do them. So, that was all new territory: how people know what to do, where to go. The waypoint system for getting around, we had to invent all that stuff. So, none of those were in the design document. They just kind of... they were problems that came up as we started to play, started to make the big area, and we were walking out there in the field. Hey, how are we ever going to get back to town? How are we going to get back to here? Those sorts of things came up as we developed; we didn't plan them out beforehand. **David Brevik** There was no definitive design document [for Diablo II]. There was a loose structure. "Hey, I know we wanted to go here for Diablo 1, this is kind of the first act. This is kind of the story we're telling there." And then we get to the second act, and "Hey, where do we want to go," and then, "Oh, I got this idea for this level, it's going to be in space, it's going to be awesome." And we'd mock it up. "Yeah, that's great. Doesn't make any sense in the story, but what the hell, it's kind of cool." So, we'd put that in, and then, "Oh, let's go to the jungle. And we'll have little tiki men," or whatever the hell you want to call them there. Little goblins. And we'll have frogs and stuff. And then eventually, we knew we wanted to get to Hell, and what that would be like. We had sort of a loose structure to it, but there was no definitive design document, no specific things. Every day we would just iterate on it and change it how we pleased. And it's like, "Oh, I got this great idea, we're going to change it to be this way." And that was really more the way that we operated. It was like, as we played the game, we designed the game for features that we felt would be interesting additions to what we're doing, or make whatever it is that's broken better. **Erich Schaefer** Yeah, I think, actually, that's not that strange. We've never had, in any of the games since, the Torchlight or Hellgate or Rebel Galaxy here, I've never done a design document. Maybe that's my failure as a designer, in a way. At times we sort of fake one almost. We kind of said, "Let's just make sure that new hires have some idea of what's going on, or whoever our partners are have some confidence that we have anything." But, all our games, we've just sort of made as we go along. **Max Schaefer** It's part of our style that we try to have a running build going all the time, so that we can prove out our concepts just basically by playing what we have, and it becomes readily apparent when everyone's playing the build what it needs. Do we need more variety in backgrounds, do we need certain kinds of monsters for this area?
Once you have the framework in there, the details kind of fill themselves in. **Erich Schaefer** We sort of just plan for the month ahead, or the week ahead, even, at all times. "Hey, we need more monsters in this area, let's get a batch of monsters going." And again, I've said it a bunch of times, it's the iterative style, where we constantly just evaluate the game. What's fun about the game, what's not? If this isn't fun, let's get rid of it, let's replace it with a different thing. And if we had a design document, we might be working on that thing that wasn't fun forever. Without the design document, we don't have to worry about that. At the same time, it makes bigger teams much harder, of course. It's hard to manage leads that then have to talk to their group and schedule out months ahead. That's pretty hard to do in our style. That's why, I think, that's why right now I'm working with two guys. It's just me and [Double Damage Games co-founder Travis Baldree], and it works fabulously with this iterative kind of style. But it really takes some key people on the team, again, Dave Brevik is the great example with Diablo, to be able to just adapt on the fly. He will fix things that morning and get us going for the rest of the week on this task. So, it's a very designer-and-developer-heavy style, not a producer-heavy one. We did run into trouble and got too big, and Blizzard North, even after Diablo 2, got even bigger and ran into some real troubles with this style, but I think the style is key for my enjoyment and it worked out for our best game, so I think it's good, even though I wouldn't exactly advise it. ## The Birth of the Skill Tree and the Development of Diablo II's Classes *The skill tree is now commonly known as an "RPG element," but it didn't become a genre fixture until Diablo II. As with many of Diablo's best ideas, it came to David Brevik in the shower.* **David Brevik** One of the things I became infamous for is that I would often come in, and, I don't know about often, but every now and then I would come in and say, "So, I had this idea in the shower this morning." It was like, "Oh, Dave's shower idea has hit the game hard, because it's some major change to the game." **Erich Schaefer** At some point early on we went with the skill tree idea. We didn't start with that. That was a brainstorm by Brevik again. He was like, "Hey, let's make skill trees that are similar to tech trees in [Civilization II]," which I believe we were playing at the time. So that sort of set the pace. Then we started to think, "OK, what would be on the trees for these various characters? Should they have shared abilities like they did in Diablo 1, or should they have their own skills entirely?" I think one of the cool things, before I get to the specific classes, is something that, again, I think we kind of came up with (I'm sure there's examples elsewhere, but at least for ourselves): the warrior classes use spells just like the mages. So, before that, warrior classes in RPGs would just come in and hack on guys. Maybe they had some ability or something, but they didn't have a raft of skills they would commonly use like we ended up doing in Diablo 2. **David Brevik** I have no idea where I thought of that idea. I mean, again, I thought of it in the shower. But I don't know where it came from. It came from the fact that I didn't like how we were going up levels and getting these skills, and it didn't feel like there was enough creativity or choice or things like that.
We wanted to give people this sense of, "How do I choose how to play my character?" We had all of these skills, and it was kind of a mess, and there were all of these potential builds; how do we organize it in such a way that allows people to easily identify them? One of the great things about Diablo 1, but one of the problems with Diablo 1, is that, from my hardcore nerdy perspective, you could make so many different builds, because everybody could do everything. But from kind of a general audience view, they were overwhelmed with all the possibilities. So we wanted to narrow that down into classes, yet still give a lot of flexibility within that class to kind of customize yourself and make you different from everybody else. And that's really the concept behind it. It was, "I'm going to make my character very different; even though I'm a Paladin and you're a Paladin, my Paladin plays very different from your Paladin, because of the choices I've made." That was really where the idea came from, and this was just a way to organize that idea. **Max Schaefer** I remember even on a fundamental level, kind of one of our goals with the character classes is that they would be slightly different than the stereotypical RPG character classes, but recognizable enough that you would know what they do. You should be able to look at the character and kind of get a sense of the way they play and what they do. That was a principle that another one of our key guys, Matt Householder, termed "familiar novelty," where you're seeing something new that you haven't seen in a game, but you understand what it is right away. **David Brevik** One of the things that he did that was really good for design for us was the way that he sort of set up the spreadsheets, because we used Excel to do all the data. And the way that we set up the balance, and the way that we did skills, I think [Stieg Hedlund] really brought that to us. So, we were able to put in more content with the way that he designed the way that we were going to do the data stuff for the levels and the difficulty, and things like that. That was definitely a big factor as well, because he helped to contribute in many other ways with design on all sorts of stuff, from monsters to whatever, all sorts of things. A lot of the radical ideas, the more radical design ideas, came from either myself or Erich Schaefer, I would say, the most. **Erich Schaefer** The classes themselves developed during play. I think it was largely Dave and I saying, "Hey, what would be fun to do with this guy?" and just cooking up skills on the fly; but a lot of times, most classes had advocates in the office, and people were big Paladin fans, or big Necromancer fans. They would just throw out ideas to do. The classes developed as we went based on the artists, Kelly Johnson making some of these characters. Just the way they moved, and the way that they sort of looked, kind of developed, "Hey, this is what this guy would do. Obviously he would have a shield slam." So, I think, again, a cool part of our iterative process is just like, "What would be fun to do with this character now? How can we go even more gonzo when he levels up, and gets even cooler things to do?" I can't remember any sharp disagreements [over classes]. I remember at the end, when there was a lot of concern over balance, and I remember, I'm not going to name names, but, people would say, "This skill is way too good." And we would argue about the balance of skills.
And I think we ended up patching the game many times, but there were some really bad balance problems, due to just kind of weird arguments at the end, and since there were only two or three or four of us who really had in-depth knowledge of how to play these characters towards the end game, we made some weird decisions just based on personalities. I don't remember, I think everybody was pretty much on board with the looks and the feels of the characters as we went along, though. **Max Schaefer** I remember, I think we did spend more time going back and forth, I think, on the look of the Sorceress. That was championed mostly by Mike Dashow. It ended up wonderful, but I do remember going back and forth on that. The Paladin looked great from the get-go. The Barbarian too. The Necromancer was such a weird class, nobody had any expectations or really strong feelings on what he should look like. And, what was the other one, the Amazon kind of did herself as well. You start with concept sketches, and then first in-game models, and there's always a little bit of, "Hey, that didn't look like it did in the sketch. Why is that?" "Well, our camera angle and the scale makes it such that… That type of clothing doesn't look right anymore… Or, the way they move or walk looks a little bit weird." And normally, you put it up on screen, and everyone agrees, "Hey, that doesn't look right." "OK, we'll go back and fix it." And that's just the standard way that things work. I know we worked on the Paladin's animations for a while, because his look didn't seem quite right, but his walk was always spot on. I maybe even have an incorrect memory about this, but I remember we went back and forth about the Sorceress quite a bit before it was nailed down. **Erich Schaefer** Yeah, and I kind of remember a lot of joking that it looked like the Necromancer was wearing a skirt. We ended up leaving it, because enough of us liked it, but that was not necessarily a contentious point. But it was pretty funny. **David Brevik** It's amazing how many new standard things came out of Diablo and Diablo II. It surprises me all the time. The rarity thing [for loot], for example, just kind of made sense. In some roguelikes, they would have your common item and your magic item, so they were different colors of text or whatever. If it was a magic item it was blue, if it was a normal item it was white, that kind of thing. And, so, we took it a step further and went with the rarity levels, where each level of rarity has a different color. That really has stuck with games, and then they took it to a whole new level with World of Warcraft, and it really became standardized roleplaying stuff ever since then. ## The Big Crunch *As Diablo II's development rolled into 1999, Blizzard North pushed hard to have the game out by the end of the year. But no matter how close the end seemed, it always remained just out of reach. As the months passed, they found themselves mired in what seemed like a neverending crunch.* **Max Schaefer** It was probably June of '99 that we started the crunch in earnest, the real, seven days a week, all waking hours, driving home at midnight and coming back at eight. **David Brevik** With no real deadline, and no real way that we were managing our time, because we didn't estimate any tasks or anything like that, we just didn't estimate our time correctly.
And the thing that sucked about that, though, was that we knew we weren't going to make it at our current rate, but we believed that we could do it if we just started crunching, we believed that we could make the end of 1999. And so, we started working really hard on meeting that deadline, and we started working, I was working every day. I took off, the last year of Diablo 2, I took off four days the whole year. I worked every other day, and most days I averaged about 14 hours a day. So, we worked incredibly hard to try to get this thing out by November or December that year, and so we were working, I don't know, we were crunching for six months, or something like that. We started in, like, May. By October, the end of September, early October, I remember the phone call with Mike saying, "You know, you guys are not going to make it, and we're going to delay this thing to next year." I couldn't accept it, I couldn't believe it, it didn't really register. It was like, "That's impossible, we're going to make it. We're going to do this, we're close." But, somebody who has a view from an outside perspective sees how far away you really are. So, they said, "We're going to take a few extra months. Three months, four months, six months, whatever. We're going to try and do this, but you guys need to continue and kind of finish it up." So we ended up extending the crunch. I took off Christmas Day and I came back the next day, and things like that. It was really brutal. **Erich Schaefer** We were already going to go overtime. We knew we were taking too long. We were probably already over our estimated time, but nobody really cared, because the game was going really well. But I'm pretty sure it was '99, June, maybe even May, we said, "OK, we got to crunch to get this game done by the end of this year, by the end of '99." And so, we worked for four months, just really crunching too hard, and it took Blizzard South, it took this strike team to say, "You know what, you guys are not going to make it. This is not going to happen this year." That was super depressing, because we had been working so hard for four months, and we argued, but it took a day or two to sink in, that yeah, we're not going to make '99, we were crunching almost for nothing. Sure, we got a bunch of stuff done in this time, but, we're burned out and we're not going to make it, and that was a pretty depressing moment. We decided, OK, we're going to just work normally until the end of the year and get going again at the beginning of 2000. So that was my biggest moment of realizing, "Uh-oh, we're not even close. This is going to go a long time." It was a very depressing moment. **David Brevik** We were doing everything. "We got to do this, that, and the other thing. We have to get this fixed, and this has to happen." So, we were running around with our hair on fire for a year, and we were trying to- There was nothing that was one individual thing that we felt was going to hold us back. "Oh, this would have been complete if it weren't just for whatever. Oh, we were all sitting around just adding more content to the game, because we were all waiting on the Battle.net server to work," or something like that. There was nothing like that. Every part of it was behind schedule. Content was behind schedule. Monsters were behind schedule. Levels, story, cinematics, technology, it was all behind.
The game was always playable. So, as we put in new levels and things like that, you could play it every day and iterate on it, start your character out and play stuff. So you could play up to the point where the content kind of stopped, and so it gave us the chance to go back and actually complete the content. It wasn't one thing in particular, it was that the content was only so far. Like, Hell didn't exist, I don't think, by the end of 1999. I think that we put that all in in 2000. So, that whole act, and that whole section there. I don't think that it started going in until February or March, or something like that, if I remember correctly. **Erich Schaefer** It was the assets, and it was the iteration on it to make it play well, and then the balance of all the skills, and of course, now we're in Act 3, and you have to do a whole other part of the skill tree, playing in the actual levels that it's going to be requires going back and revisiting the skill tree again. So every time we'd put something in, you kind of had to do this recursive look through everything you've made to that point and make the proper adjustments. It was grueling. We didn't have development tools like we do nowadays. Everything was a grind. Getting assets into the game was done by hand by an engineer, and nothing was easy. Nothing came easy. I remember I was making the interface for [the skill trees]. There was a stone slab, and the skills would all fit into these sockets. And I thought, "OK, we're basically done, I'll just make this skill tree and this stone slab, and then we will fill it out with skills." At least ten, twenty times, almost per character, I'd have to redo the stone slab with different positions for the skills and different hierarchies between them. And we never had a tool to do this, so I had to render out a whole new stone slab every time we changed a single thing. I made a ton of stone skill-tree slab art over and over and over that all just got thrown away. So, that was a great example: if we had better tools, or if we knew what the process was going to be, I wouldn't have put all that work into it. **David Brevik** I think people were loopy. We worked really hard and we worked late, and people played music, and they would go out and get a dinner and come back. Then sometimes we would do things like, "Oh, it's midnight. Farscape is on, so let's watch that for an hour," or something like that. It was 95 percent work, but there were little breaks here and there. There was a lot of office camaraderie around the late nights. Not everybody stayed late, but there was a good crew of, let's say ten people in particular, that would stay late all the time. Some people would still stay late, don't get me wrong, stay until 9pm or 10pm, but that wasn't the 2am crew, or whatever. And then I'd get up at 6am and go to work and we'd do it all again. **Max Schaefer** We were always a couple months away, so if we just grind it out hard we'll be done with it. But it was mentally and physically exhausting. And people started to break down. People were getting sick all the time, they were breaking up with their girlfriends. It was bad. It was physically hard. **Erich Schaefer** It was physically hard. People would be sweating and you could just see the stress on their faces. We barely had relief.
If someone had to take a week off or couldn't stay a few nights, other people would say, "Well, why aren't those people staying? We've got to stay." And we, as managers, as the people in charge, we didn't have good answers for that. We didn't handle it very well. It wasn't all the worst times. Again, there was a lot of camaraderie, and there were good times to be had amongst this, but it's probably comparable to people on the front lines of a war. They get drunk and have a good time once in a while, but then it's back to the grind in the morning. What made it OK for me, and this is different for a lot of people, but I knew the game was going to be great. I knew we were on the right track. So, it was worth it. But that was not true for everybody. There were a lot of doubters who thought, well, I don't really like how it's going, either because they're stressed out or because they really didn't like it. So for those people, it would be even harder. I had total confidence, and that's what made it OK, that there was going to be an end and it's going to be great. But for people who didn't have that, it made it even worse. For me, personally, a really weird experience is that I got married in May of 2000, which we thought was a very safe date to set the ceremony. And it turned out, it wasn't a safe date. We were working right through that time. And so, my poor wife had to handle all the arrangements and took on all the stress of the wedding. A lot of people picked up [the slack], and my parents helped out a lot. But it was very odd that my wedding was coming up in the middle of this horrible crunch. So, at one point, I was just like, "OK, I'm leaving work, and I'm going to get my tuxedo, and the next day I'm getting married." I almost couldn't even think about the wedding. And then we kind of knew we were about done, so we were going to do a honeymoon at the end of June. And, we left on our honeymoon the week before we shipped. Now, all my stuff was really done, and it was just technical support things. But I just felt so weird. I'd been here this whole time, and here at the very last days, I'm just taking off and leaving the crew behind. ## Launch: Critics, Bugs, and the Stone of Jordan *Diablo II finally launched in the U.S. on June 29th, and in Europe on June 30th. The team wasn't finished yet, though. They still had an expansion to make, and there were bugs and hackers to deal with, too.* **Erich Schaefer** I was in Paris when Diablo II released. And on the Champs-Elysees there were big banners, and the big Virgin megastore had a huge Diablo display. There were even TV commercials for Diablo in Paris. It all just felt so cool. We knew immediately that it was selling really well. And then soon after, we learned it was selling really well in Korea. So, that was all great to hear, even though we were still fighting a lot of these technical issues. That would fall much more on Dave and the Battle.net team staff. Max and I were more on the creative end of things, and pretty much had washed our hands of the whole project. So it was easier for us than a lot of those guys, I'm sure. **David Brevik** The only thing I remember… there may have been some problems on launch day. We finally got those solved, and we brought everything back up, and got things running, and then there was this gold duping exploit, like, with splitting piles of gold.
And I just remember it, day two, I felt like the whole thing that we had worked for, where we were doing this client server thing where everybody could be protected, and then we had blown our economy, or whatever, on like, Day 2. We were like, "Oh, my God, what did we do, we've blown it." And so, it was just this emergency fix, getting those repaired and the exploit fixed, and stuff like that. Those are sort of the first memories I have of it. It goes straight into the frying pan of, you're in a live environment, and things are broken and you've got to fix them as fast as possible. Which was a new kind of concept for myself and the team, when you're doing these live products that are always on, that you have to respond to things in a much different way than you did with single player games, right? It's like, there's an exploit and we'll patch that at some point. We'll work on it over the next couple weeks, and maybe we'll put out a patch by the end of the month that fixes it. But if people want to destroy their game, I guess they can go ahead and do that. But most people won't. That was kind of the mentality, and then there was even less of a mentality of that if there was a bug or an exploit like that on a cartridge game or on a physical CD console game, you know, they weren't fixed ever. So, now, things are put out and they're patched regularly, but that was very uncommon practice back then. It was really unusual. One of the things that really separated Blizzard from a lot of the pack was the fact that we continued to support and patch our products and get rid of bugs and problems and include enhanced features and stuff like that even after the game launched. **Erich Schaefer** Yeah, I think the first week was kind of a disaster, wasn't it? I was sort of luckily gone on this honeymoon. **Max Schaefer** Yeah, it was a disaster, but it was a good disaster, because it was over capacity, and that's why we were having the problems. **Erich Schaefer** I think by now, everyone has their cheating Diablo 2 story. They don't talk as if Diablo's a crap game because there's cheating; now they kind of talk about the fun and the weird experiences with the cheating. So, it doesn't bug me anymore. **David Brevik** At some point the Stone of Jordan became a kind of unit of currency. We made a bunch of items, and sometimes when you're making items, you're making ones that are better than others, and [the Stone of Jordan] was deemed the most valuable item in the game. Even though it wasn't necessarily true, it was deemed as a rare item to find, as well as extremely valuable. So people started trading things for them, and things would cost Stones of Jordan. But then people started duping Stones of Jordan, so Stones of Jordan were everywhere, and there were lots of them in the game, because there were lots of duping bugs and things like that. Then we fixed the duping bugs, but there was still a lot of them out there.
So we came up with a way to remove Stones of Jordan out of the economy, a sort of Stone of Jordan sink to get rid of them, so that they weren't all over the place anymore. And that was this donation thing, where you could donate the Stone of Jordan and summon a Super Diablo or something like that. People today still, when they think of items in Diablo 2, it's the number one item that comes to mind, just because it was so valuable, and then used mainly as a currency for trading, and then just became the most popular duped item, and then there were just gobs of them in the economy. **Erich Schaefer** We were surprised when the SoJ became a unit of currency, but we were happy to see that dynamic occur. I always thought the gold cap was a bit too blunt an instrument, and it was neat to see players figure out this work-around. **David Brevik** It was a learning experience. It was the first time we had done client-server stuff, and so some of the mistakes that we made with the way that we were doing it left too many holes, and items were being generated, and we figured out a way to generate unique IDs for them, but that wasn't necessarily in the code at the beginning. And, trying to improve on those systems to make duping more difficult and fixing all the flaws and the places you could cancel, and actually transferring the items instead of destroying them and recreating them. When you're doing things like that, when you end up in a situation where one thing can get destroyed and another thing can get made, then the thing destroyed doesn't get destroyed because the player disconnected before that gets saved. There's all sorts of tricks that occur when you destroy the item and assume that it gets destroyed before you create the other version of it. So, it becomes a situation that's technically very challenging, but there's a lot that we learned there to make it more secure on future games that I worked on. But again, I really feel like when people mention Diablo 2, the number one item that comes to mind is the Stone of Jordan. And, even though it became this big duped thing, it was a staple, it was the item to find, the item to seek. You were always happy to get one, and it became super-popular. So, in a lot of ways, it legitimized your account once you had one. And so I still feel like today that people still seek that item, even though maybe it's not the best item in the world. **Max Schaefer** We were so in the infancy of this type of game at that point that no one could have anticipated the level of effort people put into cheating. I mean, we kind of did because we went to client-server as a model, but it's so complicated to do what we were doing that there's all kinds of little vulnerabilities all over the place, and, for the most part, you could play and avoid cheating problems. And, when people did figure out some horrible way to cheat, eventually it would get fixed by the Battle.net team. And certainly, we've learned a lot from there that we apply these days to help with cheating issues. But at the time, it just kind of came with the territory of making some of the first really giant global internet games where there's thousands of people trying to break it at any moment. That was a completely new and foreign thing, and the fact that it worked at all meant that we did a decent job of it. **Erich Schaefer** I do remember one story. I went to a GDC conference, after Diablo II, and one of the topics at one of the forums or whatever was, "Cheating in Games."
And I said, "Oh, I should go check out cheating in games, because that's right up our alley right now." And then in the roundtable they were having, everybody just was trashing Diablo the whole time. They were saying, "Here's what Diablo does and why they were stupid, and here's all these things." And I was just getting really mad sitting there, just starting to think in my head. I didn't say anything there, I think I was just in the audience. But I was just thinking, Why don't you guys make a decent game first, and then work on cheating? Which is not fair, and I'm sure a lot of them did make decent games. But I do remember being really mad that everyone was focused on the cheating, when, in my mind, the game was so fun, who cares about the cheating? **David Brevik** A lot of people thought Diablo II was worse [than its predecessor], which wasn't very fun. But I think that in a lot of ways you can't really make games for the critics, you got to make the game that you want. You have to be satisfied with the effort that you put in and the job that you feel that you and your team did. It's like, in a lot of ways, I hate to say it, but it's a little bit like golf. You have to be happy with your golf score, not compare it to everybody else's, or worry about pleasing everybody. Because you're not going to please everybody, and people are going to say stuff for whatever reason, and they may even change their mind. Yeah, they didn't like it at first, but now they really love it, because you made this one little change. And they were ranting and raving about 55 different things, but you make one little change, and they're like, "Oh, I love it now." So, I think that you're going to get criticism no matter what you do. And this happens, not just in video games, but in any kind of entertainment, movies or music or books or whatever. I liked that one, or I didn't like this one, and everybody's entitled to their opinion, and opinions are free, so people are going to go ahead and say them. It's how you let it affect you. You need to focus on: how happy are you with the effort that you put in and the product that you made, and are you proud of what you've done, and are a majority of the people happy with what you created? I think that's the best way to look at it. **Max Schaefer** There were always people who didn't like Diablo, and that was true of Diablo 1 as well. Some people were absolutely convinced that we were killing the RPG genre by making it too arcadey, and that it wasn't even an RPG, and screw this game. But you could always tell that that was a distinct minority opinion, because people were playing it in droves, and buying it in droves. Most of my time after release, I think, was spent in game, in the chat rooms, seeing what people were saying, and by and large those people were very excited. So, you could find the critics. And, we also were learning at this point that when you have a forum, a bulletin board service, on your website, that tends to aggregate negative opinion. So you could kind of get a false impression of what people were thinking by looking in our forums versus actually going in-game and seeing what people were doing in game and how much fun people were having. But, like Erich, I think that we were so exhausted. We knew the game was good, and we almost didn't care what people were saying, because God, we just needed to get some sleep.
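An editor's aside on the duping mechanics Brevik describes above: "destroy the item on one side, create it on the other" fails exactly when a disconnect lands between the two saves. The sketch below is a minimal, hypothetical illustration of the remedy he alludes to (unique item IDs plus a single transactional transfer); the table layout and function names are invented for this example and are not Blizzard's actual code.

```python
import sqlite3

# Hypothetical sketch of the dupe Brevik describes. If "destroy the item on
# the giver" and "create it on the receiver" are saved as two separate steps,
# a player who disconnects between them keeps the original while the copy
# also exists. Giving every item a unique ID and moving it in one transaction
# makes the transfer all-or-nothing.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, owner TEXT)")
conn.execute("INSERT INTO items VALUES (42, 'alice')")
conn.commit()

def transfer_item(item_id, src, dst):
    """Move one item between owners atomically; return True on success."""
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            cur = conn.execute(
                "UPDATE items SET owner = ? WHERE id = ? AND owner = ?",
                (dst, item_id, src),
            )
            if cur.rowcount != 1:
                # Source never owned the item (or a duped ID was replayed):
                # raising here rolls the whole transfer back.
                raise LookupError("item not owned by source")
    except LookupError:
        return False
    return True

print(transfer_item(42, "alice", "bob"))    # True: bob now owns item 42
print(transfer_item(42, "alice", "carol"))  # False: alice no longer owns it
```

Because the row is updated in place rather than deleted and re-created, there is never a moment when zero or two copies of item 42 exist, which is the property Brevik says the team had to retrofit after launch.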
## The Aftermath: Lord of Destruction, Patch 1.10, and Lessons Learned

*Work began on an expansion pack for Diablo II almost as soon as it launched. It would be released the following year, and a steady stream of new features would continue into 2003 with Patch 1.10.*

**David Brevik** We had almost always planned on doing an expansion, and it started pretty rapidly, because the total project length was maybe 14 months or something like that. So it was probably right around mid-summer that it really started in earnest, and we put people on it and we started making the new act and stuff. I think there were some ideas and concepts for it even right away, things that we wanted to fix. But the first focus was trying to fix some of the bugs and trying to stabilize the service and giving people a little bit of time off and a break, because people were burnt out. I was extremely burnt out. I think that that was really where the focus was for a few months after Diablo II. **Max Schaefer** We basically felt that we had left something on the table from Diablo 1 by not doing an expansion. We had farmed out an expansion for Diablo 1 that didn't turn out very well and wasn't a very good experience. So I think we had it in our heads that we weren't going to leave that opportunity on the table again, and that it would be relatively easy to do an expansion for Diablo II. So I don't recall there being any, "Hey, should we do this" or "shouldn't we do this." We knew we were going to roll off of Diablo II. And after we got caught up on sleep and repairing our familial relations, we knew that we were going to roll right into it. One of the main goals we had in the expansion was to really plan it out so that it wouldn't go spiraling out of control, and really endeavor seriously to make a project on time and without this crazy crunch at the end. So I think we kind of put the reins on being ambitious with the scope of the expansion as we went. We tried to stick to the script and hammer it out in a reasonable amount of time. **Erich Schaefer** Yeah, we learned some tough lessons on Diablo II that I think those guys really applied nicely to the expansion. I think almost everybody who remembers Diablo really remembers the expansion and what came after the 1.10 patch. Honestly, that's how I remember it, too. I blur in my mind what was in each. But, we made some really great improvements, fixed up a lot of balance issues. I think it was just worth the continued effort we put into it that whole time to make it a great game. It was fun, and it sold well, but there were a lot of problems with the initial release, and that fixed a good many of them. **David Brevik** [Runewords] were just one of the features that we had been working on: because we had these socketed things, we could come up with letter arrangements. They were thinking of different types of items to socket into things. I don't know who came up with it, but the idea was that we're going to make these words out of them, and different words would turn the item into something else. I was not involved in those designs, so I don't exactly know where they came from. I thought it was fascinating to not just know what you were socketing, but to have this puzzle associated with the socketing game. I think that this hearkened back to roguelikes, where the potions that you find or the scrolls or the items will all be unidentified, and you don't know what a potion is until you actually quaff it, and then you kind of find out.
And then, once you've discovered what it is, it'll be labeled from then on. So, it was random each time: the purple potion in one game would be an invisibility potion, but it would be poison in a different game. You didn't know what it was, and I loved that aspect of adventuring and trying to figure things out, and the mystery associated with those kinds of things. Capturing a little of that mysterious spirit in a game, especially in Diablo, was something that I was always looking for, and though we couldn't do it exactly the same way, I liked the idea of experimentation and mystery around things, so the audience finds out how to craft and discovers unique things about the game. So I love the idea of runewords, simply because that's the mystery that I think adds so much to the game, trial and error and discovering new stuff. It's so exciting to find recipes and whatnot; discovering the unknown is always something in gaming that I really enjoy. And you don't really have very many of those things anymore. Most people won't put them in the game, because they end up on the internet in ten seconds, so then it just becomes a hassle for people. People are upset that the interface isn't clear. It's kind of an era gone past, you know, games really don't do those kinds of things anymore, because they don't want to frustrate the players. The players want more instant gratification and ease of use than they did then. It was just a different time in gaming. **Max Schaefer** I think 1.10 added some longevity to the gameplay experience with the runewords and what have you, and the expansion pack really bumped the content level over the top. At that point, you had seven character classes that were fully fleshed out to play, you had a lot of really cool environments and a lot of dungeons and different kinds of monsters. It really kicked the amount of content over the top. And then, the other tweaks tightened the whole thing up. As a whole, it was a pretty good game. **David Brevik** It's funny because one of the things that people don't recall, and a lot of people today, especially younger people today, they don't even understand, is that the game was not super well-received when it came out, but when we put out the expansion a year later, that made a massive difference in people's opinions of the product. Because we were able to get a bunch of the bugs out, as well as improve and put in some critical new features that made the game much better, including a couple of new character classes and a bunch of new items to get, polish the game better and put in new story... all of these really great features that added a lot to the product. And then in 2003, the 1.10 patch changed a lot of things and put in synergies and all sorts of things. So with those kinds of things, people see where it is, but they don't see the journey. They only see it now in hindsight as, "Hey, it was a classic, and it came out day one as this amazing thing." I think people still see that today, with, like, World of Warcraft. They go and play World of Warcraft, and they can't even imagine how different the game was when it came out versus now, right? How many things have changed, and how flying didn't even exist, and people were like, "What? I don't even understand what that means." Because they forget the journey that a project takes to get where it is, and they look at it with these rose-colored glasses: "Well, how is it today versus the way that it was when it came out?" They evaluate it as it is today, versus the way that it was.
And again, as you get further and further away from something, you get these romantic notions about how wonderful something was, but then if you actually played it, you go back and go, "Oh, my God, I remember, I didn't like this, that, or the other thing." I think that still, there's more of a mythical whisper about how good it was versus maybe the reality. It's a pretty big accomplishment to make such a well-revered game. People will still stop and talk to me often about how much fun they have playing the game, and they still have it, and they have fond memories. A lot of times now, it's like, "That was my favorite game when I was in fifth grade." I'm like, "Uh, that makes me old." "My dad and I played that when I was a little kid" — I get that one too. So, it feels great to have been a part of that pretty magical experience. I feel super fortunate that I was able to do that, and we were super lucky to be involved. A lot of times, success like that is all about timing and luck and all sorts of things, and, we were very very fortunate to be part of that. I'm super proud of what we created and super proud to be part of that team, something that most game developers will never experience. **Max Schaefer** It was pretty solid [after Diablo II] that this was going to be our career, that this is what we do, and that we couldn't continue to do it that way. It was going to kill us, it was going to burn everyone out, we were going to have massive turnover, we were going to hate our lives and what we did. I think we got away with it a little bit, but it was sort of a period where, whatever cool idea we had, we would cram into the game. And, there was not a whole lot of discipline as to trying to keep the bounds of the project within reason. I guess it had not come up before. There was not the opportunity to go spinning out of control like that. I think with Diablo 1, we were just so happy to be released from our budgetary constraints and to be working officially as a Blizzard company at that point. It was enough work to just get our original idea down and working that the project stayed pretty contained, and it wasn't really until Diablo II that we started to feel like, "Okay, we know we can make this sort of thing, now what kind of crazy things can we think of to do with it?" And that process just kind of spiraled out of control. And so, the big lesson was: Design within your scope, and within your budget, and within the size of your team, and don't just do every cool thing that comes up, because you're going to come up with a whole lot more cool things than you have time to implement. It was such a wildly disproportionate success to anything that we could have expected or wanted, and then the whole way that we went from almost going out of business twenty times, because we didn't have any money and the milestone payments were late or whatever, to having this giant franchise that you made... that people are still talking about today. It's just so inconceivable that you can't look upon it with anything but fondness at this point.
Yeah, we screwed up and ground too hard for a while, and screwed up the schedules, and they were late, and there were little problems here and there with this and that, but overall, it's just all good, from my perspective. **Erich Schaefer** No regrets. It ruined a lot of lives, but it was worth it. *Thanks to David Craddock for providing the photos of Blizzard North. Concept art and sketches courtesy of Blizzard Entertainment.*
true
true
true
A little more than fifteen years ago, Blizzard North set to work on what would become one of the most popular and enduring action RPGs ever - Diablo II. Fina...
2024-10-12 00:00:00
2015-12-24 00:00:00
https://assetsio.gnwcdn.…pscale&auto=webp
article
vg247.com
VG247
null
null
5,200,545
http://www.wisebread.com/free-books-little-libraries-that-build-community-and-save-you-money
Free Books: Little Libraries That Build Community and Save You Money
Suzanne Favreau
One of my earliest childhood memories is standing in a parking lot in rural New Mexico with my mother, waiting for the Bookmobile to arrive. I loved the Bookmobile, which was a former delivery truck outfitted with bookshelves. In pre-Internet days, the Bookmobile brought the outside world to towns too teeny to support their own libraries or bookstores. Bookmobiles still exist, often serving as a library for schools in poor neighborhoods and as a books-on-wheels service for elderly readers with mobility issues. Alas, with all the state budget cuts, libraries across America are cutting back services or completely closing down branches. Due to soaring gas prices, mobile libraries face a financial double whammy. While some readers can still access a library via downloadable books, in many places, like on Indian reservations, the public library is also the Internet hotspot for the entire community. No library? No Internet, and no access to free books. (See also: 4 Reasons Why You Should Support Your Local Library) Putting aside all the amazing services and perks that public libraries provide — research material, Internet access, DVD rental, air conditioning — the big question for many people becomes: "How am I going to afford my reading habit?" Obviously, if you’ve got the money, you can buy books. But most library patrons are frugal readers who prefer to pay for their books with their tax dollars, not their grocery budgets. Also, many places don’t have a big enough local economy to support a bookstore. Sure, you can always buy books online. But, even one-cent books cost money to ship. And you have to pay for the Internet access that you use to download free books. Luckily, bookworms are a plucky bunch. Around the world people are creating their own book exchanges and sharing their reading wealth with their neighbors. Little Free Library is a charity that started in 2010 to encourage the construction of free book exchanges around the globe, promoting literacy and building community. In August 2012, the Little Free Library movement surpassed Andrew Carnegie’s record total of 2,509 libraries built! My friend Heather joined the Little Free Library as a Steward last year. A tax-deductible $34.95 got her a start-up kit with all sorts of helpful information, blueprints, and instructions on how to build her own library out of salvaged supplies, book plates, a numbered metal plaque for her library, and a GPS listing on the Little Free Library global network. More importantly, Heather gets the pleasure of knowing that she’s helping to subsidize libraries in poor areas, plus the accolades of all her neighbors. Interestingly enough, Heather’s little library isn’t cleaned out every day by book thieves or vandals. People really seem to grasp the concept that her tiny book shelf is a *lending library*. Heather’s library patrons are remarkably nice about returning books they have "checked out" of her little library once they are through reading them. If an actual lending library feels too overwhelming to manage, even at a miniature scale, or you lack yard space because you live in an apartment complex, you can join forces with a local coffee shop, hair salon, or car mechanic and curate a community book bank in a reader-friendly private business. My local gym owner operates a leave-a-book/take-a-book style free reading exchange in the dressing room. Fact — the gym bookcase is actually where I get all of my trashy ladymags for free.
While there is no expectation for people to actually return books to the gym, the owner and her wife make a point of keeping the shelf stocked with interesting reading material. If your local house of worship has a FREE box, you might ask if you can add a bookshelf in the same area. Books are more likely to be returned to book banks if people can make it part of their weekly routine. If you’ve ever traveled like a poor person, then you know that just about every youth hostel on the planet has a stack of random books in some corner that are free for the taking…often in more than one language. If you are a sporting type like me, you can actually track your holiday reading that you left behind in taxis, airports, and bus stops around the globe using bookcrossing.com. My friend Gwen, who lives in Germany, rides to work on a Hamburg *city bus* that has a built-in bookcase. (There’s some joke about German efficiency to be made from this, one that I just can’t think of right now.) At any rate, that is some fine German engineering. Why Americans have not demanded bookcases on all public transportation, I have no idea, but it is just more evidence that we’re losing the empire. International Catch and Release reading is always a treasure hunt, but it’s also one of my favorite ways of pinching pennies (or centimes, or lepta) when I travel. I used to buy guide books, but then I discovered that small hotel managers are usually more than happy to load me up with leftover maps and guides because their bookcases are crammed with travel books left behind by previous guests. Also, when I did my semester abroad in Florence, I packed a bunch of books that were set in Tuscany so I could create mini "book tour" walks where I’d try and track down all the literary locations I was reading about. I thought I was so clever until I arrived at school and discovered that generations of previous students had left behind multiple copies of "The Agony and the Ecstasy," "Portrait of a Lady," and anything ever written by John Ruskin on the school’s lending library shelf. Book geeks. We all think alike. Place-specific literature is just one more thing I now leave off my packing list when traveling in book-abundant areas. On New Year’s Day my husband and I host a book swap party and waffle extravaganza. (My husband’s waffles are legendary.) We invite all the readers in our life to come over with the good books that are taking up valuable shelf space in their homes. Everyone throws their books on the communal pile and takes what they want for free. People look forward to this party and actually "save up" their books for this event! Leftover books are donated to the Los Angeles Public Library Book Drive. *How much do you spend on your reading habit and where do you find inexpensive books?*
true
true
true
Around the world people are creating their own book exchanges. Learn how to find them, score cheap books, and meet your neighbors.
2024-10-12 00:00:00
2024-01-01 00:00:00
https://www.wisebread.co…55bd59f478_z.jpg
article
wisebread.com
Wise Bread
null
null
3,429,224
http://www.open-electronics.org/tidigino-the-arduino-based-gsm-remote-control/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,269,412
http://www.slideshare.net/al3x/the-how-and-why-of-scala-at-twitter
The Why and How of Scala at Twitter
Alex Payne
A presentation at Twitter's official developer conference, Chirp, about why we use the Scala programming language and how we build services in it. Provides a tour of a number of libraries and tools, both developed at Twitter and otherwise.

Slide 2: The Why and How of Scala at Twitter. Alex Payne (@al3x), Twitter Infrastructure

Slide 3: Who's this guy?
• Working at Twitter since 2007, before it was even a company.
• Spent a couple years on the API.
• Co-authored Programming Scala (O'Reilly, 2009).
• Into programming languages, scaling stuff.

Slide 5: So, like, Ruby or Scala?
• Both!
• Ruby mostly on the frontend.
• Scala mostly on the backend.
• Thrift and data stores to connect the two.

Slide 6: Elevator Pitch
• It's fun.
• It's fast.
• It runs on a great virtual machine (JVM).
• It's got an awesome community.
• It can borrow from Java.

Slide 7: No, seriously, why?
• A rich static type system that gets out of your way when you don't need it, is awesome when you do.
• Flexible syntax makes it easy to write readable, maintainable code.
• Traits for cross-cutting modularity.
• Lots of OOP goodness.

Slide 8: I'm not sold yet.
• How about the choice between immutable and mutable variables?
• Powerful functional programming: mapping, filtering, folding, currying, so much more.
• Lazy values.
• Pattern matching.
• XML literals, and clean syntax for working with XML documents built right in!

Slide 9: What about concurrency?
• You can do Actor concurrency (message passing), kinda like Erlang.
• Or, use threads.
• Or, use all the java.util.concurrent tools.
• Or, use other JVM concurrency frameworks (ex: Netty, Apache Mina).

Slide 13: We build services.
• Isolated components, can swap them out.
• Talk to the rest of the system via Thrift (for now, maybe Avro in the future).
• Independently tested for correctness, load.
• Can have custom operational properties.

Slide 14: Scala Services at Twitter
• Kestrel: queuing.
• Flock (and Gizzard): social graph store.
• Hawkwind: people search.
• Hosebird: streaming API.
• more in the works all the time...

Slide 16: We build libraries.
• Reusable chunks of code that we can share between projects.
• We open source them.
• We try to keep the "NIH" to a minimum, but we also have very particular requirements.

Slide 17: Ostrich
• Gather stats from inside your running process.
• Counters, gauges, and timings.
• Share stats many ways: JMX, JSON-over-HTTP, plain text Telnet-style socket, log files.

Slide 22: xrayspecs
• A set of extensions to the fantastic Specs BDD test framework.
• Handles testing concurrency, time, creating temporary folders.
• The concurrency features have been rolled into the main distribution of Specs!

Slide 23: xrayspecs Examples

    "response arrives from server" in {
      get("/example") must eventually(notBe(null))
    }

    "time should stop" in {
      Time.freeze()
      val nowSnapshot = Time.now
      Thread.sleep(30)
      Time.now mustEqual nowSnapshot
    }

Slide 24: scala-json
• A cleaned-up version of the official Scala JSON codec.
• Makes clever use of parser combinators – a good way to learn about them.
• Heavily tested, fixes a bunch of edge cases.

Slide 26: Other Twitter Libraries
• Naggati: build protocols for Apache Mina.
• Smile: a memcached client that uses Actors.
• Querulous: a nice database client.
• Jackhammer: a load testing framework (coming soon).
• probably stuff I'm leaving out...

Slide 27: Stuff We Use
• There's great open source code in the Scala community.
• We try to use and contribute back to third-party projects.
• Follow @implicit_ly for new Scala releases.
• Also subscribe to Repopular Scala.

Slide 28: sbt – the simple build tool
• Scala's answer to Ant and Maven.
• Sets up new projects.
• Maintains project configuration, build tasks, and dependencies in pure Scala. Totally open-ended.
• Interactive console.
• Will run tasks as soon as files in your project change – automatically compile and run tests!

Slide 29: specs
• As aforementioned: great way to do BDD testing.
• Set up pre- and post-conditions for individual tests, suites, etc.
• Extremely flexible and extensible.
• Great, responsive maintainer.
• Supports mocking (we like Mockito).

Slide 30: The IDE Question
• We've had the best luck with IntelliJ IDEA, but it's still pretty rough.
• Most of us use a plain text editor like Emacs, VIM, or TextMate.
• Combined with sbt, it's actually a really nice workflow!
true
true
true
The Why and How of Scala at Twitter - Download as a PDF or view online for free
2024-10-12 00:00:00
2010-04-15 00:00:00
https://cdn.slidesharecd…t=640&fit=bounds
website
slideshare.net
Slideshare
null
null
6,603,413
http://www.datamation.com/mobile-wireless/why-apple-is-gunning-for-microsoft-1.html
Why Apple Is Gunning for Microsoft | Datamation
Mike Elgan
Apple announced a few new products yesterday, including a new thin and light iPad Air model. But mostly, the event was an assault on its old rival Microsoft. Apple is always disciplined in its messaging, and the message was loud and clear: Microsoft has no vision and its software is wildly overpriced. Apple CEO Tim Cook had this to say about Microsoft at the announcement: “The competition is different. They’re confused. They chased after netbooks. Now they’re trying to make PCs into tablets and tablets into PCs. Who knows what they will do next?” Apple engineering VP Craig Federighi said: “The days of spending hundreds of dollars to get the most from your computer are gone.” And Senior Apple VP Eddy Cue said, “Others would have you spend a small fortune every year just to get their apps,” referring to Microsoft Office 365, which was displayed on the screen behind him. To emphasize its point on pricing, Apple appeared to cut the prices of the products that compete against Microsoft’s cash-cow Windows and Office products to zero. Apple announced yesterday that the new version of OS X, code-named Mavericks, would be free. In practical terms, it’s not much of a price drop. The previous version, OS X Mountain Lion, cost only $19.99. Still, even that is far less than what Microsoft charges for Windows 8, which starts at $119.99 and goes up to $199.99. Microsoft charges as much as they do because, well, that’s what they do for a living. They sell software, mostly. The Windows division earned more than $19 billion last year. Giving away Windows for free is not an option. The disparity between OS X’s new price of free and Windows’ price of $119.99 is an illusion, actually. OS X isn’t actually free, and Windows usually costs far less. Apple’s new pricing policy is about making less money and lowering revenue by the amount they used to make with OS purchases. The cost of OS X will be recouped by system and content sales. Mavericks is free in the same way a 16 GB subsidized iPhone 5S costs $199 instead of $649. A “subsidized” phone isn’t actually subsidized at all. Quite the opposite. The apparent price has been reduced, but then the consumer pays for it in their monthly wireless bill. Even after it’s paid off, the customer keeps paying for it. So the average “subsidized” iPhone costs far more than an unlocked iPhone. Likewise, about 65% of the revenue from the Windows division at Microsoft comes from sales of Windows – not to users but to PC and laptop makers, which pay a bulk rate for Windows that is far less than the retail consumer price. When a user buys a new PC or laptop, they get Windows “free” in the same way a new iMac or MacBook user gets OS X “free.” So OS X really costs more than free. And Windows usually costs less than $119.99. But the perception Apple’s new pricing policy creates strongly favors Apple in the minds of consumers. Microsoft’s complex and confusing pricing structure also works against Microsoft. Even when OS X cost money, it was one simple price for everybody. Windows, on the other hand, costs different prices for different versions when you buy from the Microsoft Store. And it costs different prices on other sites, where online retailers are trying to compete on price against each other. As a result, the purchase of Windows is often a negative experience as consumers experience the “paradox of choice,” as psychologist Barry Schwartz calls it, followed by buyer’s remorse.
The “paradox of choice” is a feeling of unhappiness caused by not being sure which version to get — save money on the basic version with fewer features or spend more and get Windows 8.1 Pro? Add Windows Media Center for an additional $99.99? Buyer’s remorse is that lingering feeling after purchase that one got the wrong version. Even when past versions of OS X cost actual money, the purchase was a good experience for consumers. One version meant no choice paralysis and no buyer’s remorse. But now that it appears to be free, upgraders will feel great after downloading it, whereas Windows upgraders and buyers will continue to feel bad after paying for Windows. Apple also recently made both iLife and iWork productivity suites free. (Note that these are free only for upgraders and new device buyers.) Office 365 now costs a $99 per year subscription fee, which means that, say, over a decade Microsoft customers will pay a whopping $1,000 for productivity suites competitors are charging nothing for. And although Microsoft’s Office is far more “feature rich” for some professionals, Apple’s alternatives are far simpler and easier to use for the majority of people. ### Why Microsoft is Now Enemy #1 I believe there are two reasons why Apple is suddenly gunning for Microsoft. The first is that Microsoft is currently in disarray. Microsoft CEO Steve Ballmer, who has held that position since 2000 and worked at Microsoft since 1980, announced in August that he would leave the company within a year. So the CEO of Microsoft is what they call in politics a “lame duck” leader — someone whose authority is weakened by the knowledge that he won’t be around much longer. Ballmer’s ability to rally and unite Microsoft’s warring divisions has been seriously compromised. Just as in boxing, where you really go after your opponent when he’s winded or injured, Apple is seizing the opportunity to kick Microsoft while it’s down for maximum effect. And second, with the decision to directly sell Surface tablets and the more recent decision to acquire Nokia, Microsoft has for the first time ever actually entered Apple’s business directly. Apple and Microsoft used to be “frenemies” engaged in “coopetition,” both competing and partnering, and each company offering software on the other’s platforms (in Apple’s case iTunes and Safari). Microsoft used to be a software company selling software to OEM partners and consumers, who actually bought their hardware from other companies. But the old PC business is in steep decline, thanks especially to consumer embrace of tablets like the Apple iPad. Microsoft’s response to this is to augment the old Microsoft model with a new Apple model of selling integrated hardware, software, services and content. While Google appears to be the major competitor, in fact Microsoft is now a more direct competitor. For example, Google’s massive market share leadership in smartphone operating systems mostly doesn’t benefit Google or take sales away from Apple. The majority of Android deployments are for third-world, no-name, zero-margin devices that don’t come with access to Google’s Play store. These users aren’t “customers” of either Google or Apple, and really aren’t relevant users in Apple’s mind. But buyers of Surface tablets or Nokia phones are, in fact, the exact same customers Apple is going after. Every sale of a Microsoft device is a lost sale for Apple — and a lost opportunity for future software, service and content sales. And that’s why Apple is now gunning for Microsoft.
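As a quick sanity check of the article's own figures (a throwaway calculation, nothing more):

```python
# Office 365 over a decade, per the article's $99/year figure
print(99 * 10)    # 990 -> the "whopping $1,000" is this, rounded up

# The "subsidized" iPhone 5S: sticker price vs. unlocked price;
# the gap is recovered through the monthly wireless bill
print(649 - 199)  # 450
```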
true
true
true
Apple announced a few new products yesterday, including a new thin and light iPad Air model. But mostly, the event was an assault on its old rival
2024-10-12 00:00:00
2013-10-23 00:00:00
null
article
datamation.com
Datamation
null
null
8,835,927
http://www.telegraph.co.uk/technology/apple/11321342/Apple-to-rebuild-historic-tech-barn-on-new-campus.html
Apple to rebuild historic tech barn on new campus
Rhiannon Williams
Apple has dismantled and plans to rebuild a historic barn that has stood on the site of its second Cupertino campus for 99 years. The Glendenning Barn served as a key socialising point within Silicon Valley during the decades when the 175-acre site belonged to rival tech giant HP. It played host to the company's annual picnic, employee reunions and "beer busts", former HP employee and Cupertino mayor Orrin Mahoney told the **Mercury News**. Apple carefully dismantled the structure piece by piece, numbering every nail and plank of wood in order to rebuild it exactly as it was in a new location near the Apple fitness centre, where it will be used to store landscaping supplies and sports equipment. The company acquired the site in 2006 after former chief executive Steve Jobs announced plans to build Apple Campus 2, one mile east of Apple's existing headquarters. Construction is expected to near completion in 2016. The doughnut-shaped four-story building will house around 14,000 Apple employees, and will be surrounded by 6,000 indigenous trees, including apple, cherry, plum, apricot and persimmon trees. The new environment, spearheaded by Jobs, is a return to the agricultural orchards and flowering trees that covered the Cupertino area of California before it became the world's technology hub. Shortly before his death in 2011, Jobs told the Cupertino City Council that the land was special to him, as he had idolised HP as an electronics-loving teenager. "It all used to be apricot trees and apricot orchards," he said. "We would like to put a new campus on that, so that we can stay in Cupertino. We've hired some great architects to work with, some of the best in the world I think, and we've come up with a design that puts 12,000 people in one building." The barn, which was built in 1916, belonged to the Glendennings, a pioneering family who used it to house horses and wagons during fruit harvests. HP bought the land in the early 70s, but retained the barn as the area was transformed into concrete buildings and car parks. Artists' impression of Apple Campus 2
true
true
true
Apple has dismantled a 99 year-old barn famed as a key site for Silicon Valley socialising on its Californian campus, with plans to rebuild it exactly as it was
2024-10-12 00:00:00
2015-01-02 00:00:00
https://www.telegraph.co…rn-_3152634b.jpg
story
telegraph.co.uk
The Telegraph
null
null
12,381,488
https://www.youtube.com/watch?v=t73n4SEz1Zo
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,956,240
http://techcrunch.com/2010/11/30/wikileaks-julian-assange/
WikiLeaks Founder Added To The Interpol Wanted List | TechCrunch
Alexia Tsotsis
Two days after Internet whistleblower WikiLeaks released 251,287 U.S. diplomatic cables to major media organizations including the *New York Times* and *Der Spiegel*, international police organization Interpol has placed founder Julian Assange on its wanted list for “Sex Crimes,” in a warrant issued by the Public Prosecution Office in Gothenburg, Sweden. While Assange might be facing criminal charges if he returns to his native Australia, and is under investigation in the US for espionage, the Interpol-mediated charges here are in connection with rape allegations made by two different Swedish women back in August. While Interpol makes it clear that its infamous Red Notice list does not function as an international arrest warrant, it does serve the purpose of broadcasting internationally that the person in question is a fugitive, and it can aid in the extradition process. Assange, who has previously denied the allegations, is rumored to currently be hiding in the United Kingdom, which as yet has not shown any signs of taking legal action. The @wikileaks Twitter account has remained dormant since news about the release went out.
true
true
true
Two days after Internet whistleblower WikiLeaks released 251,287 U.S. diplomatic cables to major media organizations including the New York Times and Der Spiegel, international police organization Interpol has placed founder Julian Assange on its wanted list for "Sex Crimes," in a warrant issued by the Public Prosecution Office in Gothenburg, Sweden. The Interpol mediated charges here are in connection with rape allegations made by two different Swedish women back in August.
2024-10-12 00:00:00
2010-11-30 00:00:00
https://techcrunch.com/w…t-3-00-51-pm.png
article
techcrunch.com
TechCrunch
null
null
30,101,446
https://www.grid.news/story/economy/2022/01/27/houses-are-expensive-everywhere-and-we-dont-have-enough-of-them/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
10,634,053
http://www.nytimes.com/2015/11/27/business/george-zimmer-former-face-of-mens-wearhouse-watches-his-old-company-struggle.html?ref=business&_r=0
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,055,947
https://www.axios.com/2022/12/19/twitter-investors-split-elon-musk
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
37,128,929
https://www.gosh.org/about-us/peter-pan/copyright/
Peter Pan copyright
null
# Peter Pan copyright ## Who owns the copyright to Peter Pan? JM Barrie gifted the rights to *Peter Pan* to Great Ormond Street Hospital (GOSH) in 1929. Over the years, this generous gift has provided vital support for the hospital’s work, helping to give seriously ill children lives that are fuller, funner and longer. ## Copyright Designs and Patents Act The original copyright for *Peter Pan* expired in the UK and Europe in 1987, 50 years after Barrie’s death. However, an amendment to the Copyright Designs and Patents Act (CDPA) in 1988 granted GOSH unique rights to royalties from stage performances, adaptations, publications, audiobooks, ebooks, radio broadcasts and films. ## Copyright in UK and Europe In 1996, the copyright term was extended to 70 years after the author's death in the EU, but *Peter Pan* entered the public domain in Europe on 31 December 2007. In the UK, the CDPA ensures GOSH continues to benefit from Barrie’s gift, helping to make the hospital extraordinary for even more patients and families. ## US copyright Although the novel *Peter Pan* (also known as *Peter and Wendy*) is in the public domain in the US, the play (and stage adaptations) is in copyright there until December 2023. This is because the novel was published in 1911, but the play itself was only published in 1928, so its copyright was extended by the new term of first date of publication plus 95 years (set by the Sonny Bono Copyright Extension Act of 1998 for works published between 1923 and 1977). ## What are royalties? Royalties are a percentage of ticket or book sales paid to GOSH Charity for performances, publications, or films based on Peter Pan. ## Credits We're grateful to those who helped to create the words and imagery of *Peter Pan*. All quotes and illustrations on this site are copyrighted.
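The expiry dates above follow directly from the stated term rules; as a quick check, using only the dates given on this page:

```python
# Term arithmetic for the dates cited above
barrie_died = 1937
play_published = 1928

print(barrie_died + 50)     # 1987: original UK/European expiry
print(barrie_died + 70)     # 2007: expiry under the extended EU term
print(play_published + 95)  # 2023: US expiry for the play text
```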
true
true
true
Great Ormond Street Hospital
2024-10-12 00:00:00
2007-12-31 00:00:00
https://www.gosh.org/sta…7901dd042d43.png
website
gosh.org
Great Ormond Street Charity
null
null
13,043,821
https://opensource.com/article/16/11/python-vs-r-machine-learning-data-analysis
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,640,136
http://www.naughtycomputer.uk/do_i_really_need_to_get_out_the_soldering_iron_again.html
Do I really need to get out the soldering-iron again?
null
16th July 2018 So I had this idea that it would be nice to listen to music when I get home from work every day, so I put all my ripped music onto my phone, and I bought some good quality over-ear headphones. I found I liked the great sound quality of the headphones, and the way they block atmospheric noise out a bit so I can just hear the music. It's a great experience to listen to some great music for 20 minutes when I get home to relax. However, I had a problem. When my phone plays certain tracks it makes an explosion of noise when switching songs, or pausing and resuming. It turns out that all music players on Android actually play music using the Android-media-player-service. Apparently this service has some sort of bug. Maybe it doesn't like the metadata in those tracks? I'd actually noticed this problem before, but these new headphones make it much more of a problem because they're much more sensitive, so the sound explosion is extremely loud relative to the music, scaring the hell out of me. This is not OK because the whole point is for me to be relaxing. Although Android is Free Software, meaning I can modify the code, it would probably take me months to learn enough about music decoding and the Android-media-player-service to write a fix. So my first idea was to just get one of those portable CD players. So I bought a medium-priced one from Argos. But this device has a problem too; it constantly feeds a hissing sound into the headphones. Like the phone's explosion problem, it's made worse by the sensitivity of the headphones, as the hiss has a constant amplitude, which is unaffected by the volume control. At this point I'm kind-of surprised at how hard it is to just listen to music with headphones. I'm sure I didn't have such bad problems doing this 15 years ago with a cheap MP3 player. My desktop has great sound quality but I don't want to be sitting at my computer chair while I listen to my music; I do that too much already. So I thought why don't I use my spare old laptop as a music-player? So I put a fresh install of Debian on it and installed mpd and ncmpc. It works, and frankly ncmpc has a much nicer interface than any Android music player app I've tried, or a CD player. But the sound quality on the laptop is just not quite up to scratch. I think the output buffer on the amplifier is just not very good and struggles with the dynamic load of the transducers. Right now I'm absolutely astonished at how difficult it appears to be to just listen to music with a good pair of headphones. Is it not 2018? The media is full of talk about preposterously ambitious ideas such as AI and self-driving cars and yet I can't even listen to a fuck-damn music track? O_o I've done my best to avoid this ... but it looks like I'm gonna have to solve this problem with A FUCKING SOLDERING IRON! So the circuit is this fucking simple OK? OK so now I made it with parts I had lying around and it's fucking primitive: It works completely great! I plug it into my laptop or CD player and it sorts out the sound quality and gets rid of all the hiss. Sounds fab, what else can I say? I'd like to point out that you could have built this in the 70s. The NE5532P op-amps I'm using were first made and sold in 1979. Interesting how a 1979-technology audio buffer can best a 2010 laptop or a 2018 CD-player. Of course I need to make a case and find a more sensible way to power it but right now I'm just glad I've got something that works.
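For anyone wondering why a simple buffer can "get rid of all the hiss", here is a toy model with assumed numbers (the post's actual resistor network and signal levels aren't reproduced here, so this only illustrates the mechanism, not the exact circuit):

```python
# Toy model: the player's hiss floor is constant and unaffected by
# its volume control. Attenuating everything ~10:1 at the buffer and
# turning the player's volume up restores the music but not the hiss.
attenuation = 10.0

hiss_mv = 2.0      # assumed fixed hiss floor out of the player
music_mv = 200.0   # assumed music level at the old volume setting

# Without the buffer:
print(music_mv / hiss_mv)        # 100.0 : signal-to-hiss ratio

# With the buffer: turn the player up ~10x for the same loudness.
music_out = (music_mv * attenuation) / attenuation   # loudness restored
hiss_out = hiss_mv / attenuation                     # hiss stays divided
print(music_out / hiss_out)      # 1000.0 : ten times better

# The same division tames the source's DC offset, matching the
# "divides the DC offset by 10" measurement in the post.
print(5.0 / attenuation)         # assumed 5.0 mV offset -> 0.5 mV
```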
If anyone reads this post I'm sure loads of people will tell me that my problems are all my own making and if only I invested in an iPhone all my problems would go away. Well you know what? APPLE IS A SYMBOL OF PRETENTIOUSNESS AND IGNORANCE - YOU DO NOT EVEN KNOW HOW YOUR PHONE WORKS - I DO NOT HAVE TO PAY A TAX TO APPLE TO LISTEN TO MY MUSIC. The rest of this article is technical notes on how the circuit works for anyone who's interested or wants to make one. This circuit is a buffer and it helps by two mechanisms: it attenuates the source's output (hiss and DC offset included) by about a factor of ten, so you turn the source up and the fixed hiss shrinks relative to the music; and its op-amps present a low-impedance output that drives the headphones' dynamic load properly. If you want to make this but don't want the quietening effect, I think it should be fine to just remove the four 10K resistors. This gives you a gain of 1, which means the voltage levels are unchanged. If you wanted the best possible sound you'd use some actual audio op-amps rather than these cheap NE5532P. And buffer the output of the op-amps somehow. But of course size and cost balloon if you start adding buffers and stuff. If you want more info on headphone amps check out Tangentsoft (And you thought the 90s web was dead?) I don't have an oscilloscope so I can't do any proper signal analysis on the circuit but frankly, if it sounds good, it is good as far as I'm concerned. And as I said it sounds great. Intrinsic DC-offset is unmeasurable with my equipment (0.0mV according to my voltmeter). Furthermore it divides the DC offset of the source by 10! There is however a small pop when it switches on and off. Here are the components before assembly. The headphone jacks are scavenged from an old computer: I arranged all the parts with the prototyping board balanced on a tray. The annoying thing about doing it this way is that you then have to turn it over to solder it, and the parts start falling out! By holding it on its side with a handy-andy I was able to get it all soldered up without too many things falling out: This circuit was easy to make; I didn't even bother to prototype it and yet it worked first time. Though note I have made audio amps before so I knew what I was doing. Now I just need to finish this off with a proper case and power-supply so it's not a pain-in-the-arse to use. Shame Maplin closed because now I have to order parts off the internet and WAIT. I'm beginning to notice a trend in the world-order whereby the shinier Apple's iPhones get, the worse everything else gets. I'm sure one day everyone will get their music injected directly from their iPhone to their brain by a needle. And anyone who tries to listen to music with their ears will be burned as a heretic for betraying "the Gods" (Apple, Spotify and other tech deities). So I did eventually finish it off with a proper enclosure. So that it can be repaired if necessary, I gave it a magnetically attached lid. By putting the two pairs of magnets different ways, I was able to make it repel if you try to put the lid on wrong. Quite cool :) The circuit is held in by screws and is connected to the power-supply through a mini terminal block so it can be removed for maintenance, or even replacement. Because it requires >10V, I had to use two PP3 batteries to power it. These battery holders are quite nice, just drawers that slide out. Battery life is only a few hours unfortunately because the circuit draws an enormous idle current of 23mA :o I've put some nice "Bumpon" rubber feet on the bottom. I designed a clever little circuit to drive the indicator LED. I wanted an indication of battery level. This is the schematic: What this does is: when the amp is turned on, it lights up for a moment, then turns off and stays off.
But the amount of time it comes on for depends on battery level. If the battery is good, it lights up for only a fraction of a second. But as the battery gets low, it stays on for longer and longer. If the battery level is unacceptable, it lights up forever. So that's that!
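A sketch of that indicator behaviour, with made-up voltage thresholds (the post's actual component values aren't given, so these numbers are assumptions only):

```python
def indicator_on_time(battery_v, good_v=18.0, min_v=14.0, base_s=0.2):
    # Hypothetical model: a brief power-on blink on a healthy battery
    # (two fresh PP3s), a longer blink as the voltage sags, and a
    # permanent light once the battery is no longer acceptable.
    if battery_v <= min_v:
        return float("inf")          # stays lit: change the batteries
    headroom = (battery_v - min_v) / (good_v - min_v)
    return base_s / headroom         # blink stretches as headroom shrinks

for v in (18.0, 16.0, 14.5, 13.0):
    print(v, "V ->", indicator_on_time(v), "s")
```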
true
true
true
null
2024-10-12 00:00:00
2018-07-16 00:00:00
null
null
null
null
null
null
794,395
http://www.markevanstech.com/2009/08/30/twitter-isnt-over-hyped-its-just-misunderstood/
SITUS TOTO 4D ❤ Trusted Online Togel Bookie 2024
Bos Bandar Togel
# SITUS TOTO 4D ❤ Trusted Online Togel Bookie 2024

Situs Toto 4D is a trusted online togel (lottery) bookie that offers many Toto 4D games, fast transaction processing, new-member bonuses, and the most complete range of togel markets.

## Situs Toto 4D ⚡️ Online togel bookie

| Category | Detail |
| --- | --- |
| Site name | markevanshub.com |
| Site URL | markevanshub.com |
| Location | Singapore |
| Game types | Toto 4D, Toto 5D, Toto 6D, online slots, live casino, "gacor" slots |
| Draw frequency | Daily, weekly, monthly |
| Draw time | Every day at 18:30 |
| Draw results | Available live on the site and the mobile app |
| Main features | |
| How to play | |
| Payment methods | |
| Betting limits | Minimum bet: IDR 10,000; maximum bet: IDR ~ |
| Bonuses and promotions | |
| Security | SSL encryption, 2-factor verification, user data protection |
| Customer support | |
| Mobile app | |
| Legality | Complies with gambling regulations in Singapore, officially licensed by the Singapore gambling authority |
| User reviews | |
| Official contact | |

In recent years, **Situs Toto 4D** and online togel have become some of the most popular games among the Indonesian public. Thanks to advances in technology, players can now easily place togel numbers from home without having to visit a physical bookie. However, although playing togel online offers plenty of convenience, it is important for players to choose a trusted 4D togel site that offers security and comfort. If you are looking for the best place to play togel 4D online, **Situs Toto 4D** is the right choice. This article discusses why Situs Toto 4D deserves to be your first choice, along with the advantages of playing at a trusted online togel bookie. We will also cover some tips and tricks for online togel, how to get the most out of bonuses, and the appealing features offered by the best togel sites.

## Why Choose Situs Toto 4D?

Playing at a trusted online togel site such as **Situs Toto 4D** brings many advantages. One of the main things to check when choosing an online togel bookie is an official license. Situs Toto 4D holds a license guaranteeing that it operates according to international security standards, making it a trusted togel bookie in Asia.

Security is the top priority when you play togel online, and **Situs Toto 4D** guarantees the security of players' personal data. All the information you provide, whether personal data or financial transactions, is protected with the latest encryption technology. This lets you play with peace of mind, without worrying that your data will fall into the wrong hands.

Beyond security, **Situs Toto 4D** also offers the most complete range of togel markets from around the world, including Singapore, Hongkong, Sydney, and Macau togel. With this variety of markets, you will never run out of options for placing your favourite numbers.

## Advantages of Playing at Situs Toto 4D

Here are some of the advantages that put **Situs Toto 4D** ahead of other online togel bookies:

- **Fast deposit and withdrawal processing:** One problem players often face is a slow withdrawal or payout process. At **Situs Toto 4D** you need not worry, because they offer a fast and easy withdrawal process. In addition, you can make deposits through various payment methods, including QRIS and e-wallets, which makes transactions more practical.
- **New-member bonuses and cashback:** If you are joining **Situs Toto 4D** for the first time, you will be welcomed with an attractive new-member bonus. On top of that, loyal players receive cashback and various other promotions, including a referral bonus that pays extra if you successfully invite friends to play.
- **Affordable minimum deposit:** Playing at a low-minimum-deposit bookie gives players flexibility, especially those trying their luck for the first time. At **Situs Toto 4D** you can start playing with a very affordable minimum deposit, letting anyone enjoy the excitement of togel 4D without worrying about large costs.
- **24-hour customer service:** One reason so many players choose **Situs Toto 4D** is customer service that is ready to help 24 hours a day. With friendly and responsive support staff, you can get help whenever you need it.

## Popular Togel Markets at Situs Toto 4D

One of the main reasons **Situs Toto 4D** is in such demand is the variety of togel markets on offer. You can try your luck in several of the most popular markets, including:

- **Singapore togel:** the most widely played togel market, with a reputation as a trusted market.
- **Hongkong togel:** a market known for fair and easy-to-predict draws.
- **Sydney togel:** for players who want to try a slightly different market with attractive prizes.
- **Macau togel:** a market offering varied numbers and tempting prizes.

## How to Maximise Bonuses at Situs Toto 4D

**Situs Toto 4D** is known for the various attractive promotions offered to new and long-standing players alike. To maximise the bonuses you receive, here are a few tips:

- **Use the new-member bonus:** When you first register at **Situs Toto 4D**, make sure you take advantage of the new-member bonus. It is usually added to your first deposit, giving you extra capital to play with.
- **Join the cashback programme:** Many online togel sites, including **Situs Toto 4D**, offer a cashback programme that returns part of your stake if you lose. It is a good way to reduce losses and give yourself a second chance.
- **Use the referral bonus:** If you have friends who are also interested in playing togel online, you can invite them through the referral programme. You will then receive a bonus every time a friend makes a deposit; the programme can become a source of extra income without you even playing.
- **Weekly and daily promos:** Always check the promo page at **Situs Toto 4D** for the latest offers. They frequently run weekly and daily promotions that can bring you extra profit.

## Tips for Playing Togel 4D Online

Playing togel online is certainly fun, but to improve your chances of winning there are a few tips you can follow:

- **Study the draw patterns:** Before placing a bet, it is worth studying the number patterns of the market you choose. Many togel sites provide accurate togel predictions you can use as a reference.
- **Manage your funds wisely:** One key to success in online togel is managing your bankroll wisely. Don't be tempted to place big bets if you are not yet confident in the numbers you have chosen.
- **Choose a bookie with big discounts:** Some togel bookies offer large discounts on certain bets. By taking advantage of these discounts, you can reduce your betting costs and increase your returns.
- **Play in familiar markets:** If you are new, it is advisable to start in markets you already know, such as Singapore or Hongkong togel. These markets are easier to predict because plenty of information is available about them.

## Enjoy Safe and Fast Play at Situs Toto 4D

Playing togel online is enjoyable, especially when you play at a trusted bookie such as **Situs Toto 4D**. With advantages such as fast deposits, easy withdrawals, new-member bonuses, and the most complete togel markets, **Situs Toto 4D** is the best choice for players who want a togel 4D experience that is safe, comfortable and profitable. So what are you waiting for? Register at **Situs Toto 4D** now and enjoy the various attractive promotions and the chance to win big prizes every day. And remember: always play responsibly, and make the most of the cashback bonuses and the referral programme for maximum benefit!
true
true
true
Situs Toto 4D is a trusted online togel bookie offering many Toto 4D games, fast transaction processing, new-member bonuses, and the most complete togel markets
2024-10-12 00:00:00
2024-08-01 00:00:00
https://drive.abbr.site/…ebp?format=1500w
product
markevanshub.com
SITUS TOTO 4D ❤ Trusted Online Togel Bookie 2024
null
null
12,632,586
https://techcrunch.com/2016/10/03/microsoft-expands-azure-datacenters-to-france-looks-to-beat-aws-on-image-of-trust/
Microsoft expands Azure data centers to France, launches trust offensive vs AWS, Google | TechCrunch
Ingrid Lunden
Companies like Microsoft, Amazon and Google continue to compete fiercely in the area of cloud services for consumers, developers and enterprises, and today Microsoft made its latest moves to lay out its bid to lead the race, while also launching a new mission to position itself as the cloud provider you can trust. Microsoft announced it would build its first Azure data center in France this year, as part of a $3 billion investment that it has made to build its cloud services in Europe. At the same time, the company also launched a new publication, *Cloud for Global Good*, with no fewer than 78 public policy recommendations in 15 categories like data protection and accessibility issues. The new expansion, investment and “trust” initiative were revealed by Microsoft CEO Satya Nadella, who was speaking at an event in Dublin, Ireland. He said that the expansion would mean that Microsoft covers “more regions than any other cloud provider… In the last year the capacity has more than doubled.” As a measure of how Microsoft and Amazon are intent on matching each other on service availability right now, the news of the French data center comes one month after Amazon announced that it would also be building a data center in France. Nadella, of course, did not mention AWS by name but that is the big elephant in the room for Microsoft. Nadella said today that Microsoft has data centers covering 30 regions across the globe, “more regions than any other cloud provider,” with the European footprint including Ireland, the Netherlands, the UK and Germany. In Germany, its data center is operated by Deutsche Telekom on Microsoft’s behalf in a trustee model, a move made both for “digital sovereignty and compliance,” Nadella said, “and a real world understanding of what the customer needs.” The popularity of cloud-based storage and services has grown exponentially in the last several years, fuelled by the rise of smartphones and tablets that rely on cloud-based architectures to run apps, as well as a rise of other consumer and enterprise services that have also taken a remote storage and processing approach to delivering software more efficiently. While Microsoft may have lost to companies like Google/Android and Apple when it came to building a mobile platform or phone that is widely used by the mass market, it’s hoping that its presence in cloud services will give it a place at the table for computing in the future. “We have a very particular point of view by what we mean by mobile first and cloud first,” Nadella said today. “It’s about the mobility of your experience across all devices in your life [and] the way to achieve that mobility … those experiences… is only possible because of the cloud.” As a business, it provides a steady stream of recurring revenue for companies like Amazon and Microsoft, and as such is a strong engine for their respective financial performance. Offering basic services in the cloud like instances for developers also lays the groundwork for upselling customers with a number of other features, ranging from other software and products through to more technology to improve apps, such as machine learning and artificial intelligence technology. (Nadella described his vision, for example, of “bots in every app”.) The business aspects of the cloud were less the focus of today’s presentation.
More to the point was a new, interesting position that Microsoft is laying out for itself as the “more trusted, more responsible and more inclusive” cloud provider, in the words of Nadella, presumably in contrast to others like Amazon and Google.

Microsoft has an interesting backstory when it comes to making news in Ireland. The country — in part due to its tax structure — has become the home for a number of major tech companies — not just Microsoft, but Facebook, Google, Apple and many more — when setting up their international headquarters covering global operations outside of the U.S., which means that regulatory questions that arise in Ireland over issues like data protection or paying taxes have larger reverberations beyond it.

In the case of Microsoft, the company was long embroiled in a case it was fighting against the U.S. government over data stored on servers in Ireland that the U.S. government wanted to access. (It won the case earlier this year.)

“Little did we know that this data center would lead to litigation against our own government… little did we know that if we persisted we would actually win the case,” said Brad Smith, Microsoft’s chief legal counsel and its president, in the presentation today. “People have rights and those rights need to be protected. We need to build a cloud that is responsible as well.”
true
true
true
Companies like Microsoft, Amazon and Google continue to compete fiercely in the area of cloud services for consumers, developers and enterprises, and
2024-10-12 00:00:00
2016-10-03 00:00:00
https://techcrunch.com/w…?resize=1200,645
article
techcrunch.com
TechCrunch
null
null
36,757,594
https://github.com/permitio/opal/releases/tag/0.7.0
Release v0.7.0 · permitio/opal
Permitio
# v0.7.0

## What's Changed

### Supporting a new policy engine: Cedar Agent

Cedar Agent provides the ability to run Cedar as a standalone agent (similar to how one would use OPA), which can then be powered by OPAL. OPAL manages the policies loaded into Cedar through git, just as it does for OPA, and can push data updates in real time from external data sources. Example OPAL configuration for Cedar can be found here. The Cedar policy language offers better readability, better performance for policy evaluation, and is analyzable via automated reasoning.

- Add a Cedar policy engine plugin by @shaulk in #461
- Shaul/per 5343 update cedar agent in opal by @shaulk in #463

### Small fixes and improvements

- Add platforms to build-push-action with amd64 and arm64 by @vivedo in #427
- [Snyk] Security upgrade setuptools from 39.0.1 to 65.5.1 by @RazcoDev in #323
- [Snyk] Security upgrade setuptools from 39.0.1 to 65.5.1 by @asafc in #324
- Bump json5 from 2.2.1 to 2.2.3 in /documentation by @dependabot in #354
- Bump webpack from 5.74.0 to 5.76.1 in /documentation by @dependabot in #410
- Upgrade GitHub Action by @cclauss in #417
- Docs: Add periodic_update_interval to data-sources.mdx by @roekatz in #458
- Oded/small docs fixes by @obsd in #283
- sort and add more questions by @orweis in #459
- Merge pull request #347 from permitio/improve-cli-windows-support by @orweis in #349
- Tests: Mark test_external_http_get flaky for retries by @roekatz in #460
- bump version: 0.7.0 by @asafc in #462

## New Contributors

**Full Changelog**: `0.6.1...0.7.0`
true
true
true
What's Changed Supporting a new policy engine: Cedar Agent Cedar Agent provides the ability to run Cedar as a standalone agent (Similar to how one would use OPA) which can then be powered by OPAL. ...
2024-10-12 00:00:00
2023-05-10 00:00:00
https://opengraph.githubassets.com/02354203bb4f12d30a6bfcaca368d4866a6c4f08e5b2c8ceb0c6583019ceea87/permitio/opal/releases/tag/0.7.0
object
null
GitHub
null
null
4,909,818
http://blog.wikibrains.com/wordpress/?p=32428174358
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
21,970,867
https://myrobotmower.com/irobot-terra-robot-lawn-mower/
iRobot Terra Robot Lawn Mower (First Impressions) - My Robot Mower
MyRobotMower
**The greatly anticipated robot lawn mower from iRobot is finally upon us.** After many years of waiting, with many people giving up hope it would ever happen, iRobot has finally announced that it will launch the Terra robot lawn mower later in 2019. It will first go on sale in Germany, along with a beta test program in the United States.

iRobot, the company behind the popular Roomba robot vacuum cleaner and Braava robot mop, has long been rumored to be interested in entering the growing robot lawn mower industry. Believe it or not, **it’s been over 10 years since rumors of a robot lawn mower from iRobot first surfaced.**

iRobot certainly has the credentials and experience to make a very good robot lawn mower, and it is one that I am very interested to see when it launches later this year. iRobot tends to focus on ease of use and simplicity in their products; certainly, their Roomba line of robot vacuum cleaners fits this bill and leads the market in this field.

**How Does The iRobot Terra Work?**

One of the key features of the iRobot Terra is that **it will not use a perimeter wire to mark the boundary of the lawn.** This is fairly groundbreaking, as up until now almost every other robot lawn mower requires you to laboriously place a perimeter wire around your lawn at a specific distance from the edge of the lawn. This allows robot lawn mowers to sense the edge of the lawn and cut as close to the edge as they are able. There is only one other model on the market at the moment, from Ambrogio, which does not use a perimeter wire. However, this model has a number of drawbacks which mean that it is not suitable for a lot of users, and I generally don’t recommend it to people over the models that use a perimeter wire. I wrote an article about why it is so difficult to make a robot lawn mower without perimeter wire here.

**The iRobot Terra will use a series of wireless beacons that you place around your yard instead of a perimeter wire.** These will provide a wireless, localized positioning signal to allow the iRobot Terra to find its way around your lawn. When installing the iRobot Terra, all you need to do is place these wireless beacons and then manually drive the robot lawn mower around your yard so that it can learn where the edge of your grass is. After this, you can let the iRobot Terra work away and it will cut your lawn without any user intervention. It will even drive around obstacles in your lawn, such as trees or garden furniture, thanks to its obstacle detection systems.

Another great feature of the iRobot Terra is that **it will cut your lawn in stripes, which is unlike most other robot lawn mowers available on the market.** With a few exceptions, robot lawn mowers use random computer algorithms to tell the machine what direction to mow in. This random pattern eventually leads to all of your grass being cut. The iRobot Terra, by contrast, will move up and down in stripes until it completes the job of cutting your grass. Whilst I don’t think this will produce a beautifully striped lawn, it will be more efficient and get your grass cut quicker than other robot lawn mowers available on the market today.

Whilst we don’t yet know the iRobot Terra’s full specifications, iRobot have given some indication regarding the safety features that will be built into every machine. The iRobot Terra will have a system that stops the blades from working if the machine is lifted or tilted, and there is also an emergency stop button which will deactivate the machine immediately.
The blades will also be tucked safely under the machine, meaning the risk of injury will be extremely low. Almost every robot lawn mower on the market today has excellent safety features and credentials. Of course, this is essential if companies want to convince us to welcome an automated blade-wielding machine into our gardens. Read my article on robot lawn mower safety features here.

## How Do You Control The iRobot Terra?

Whilst the specifics of this haven’t been announced, we do know that it will have smart features integrated. This will mean that you will be able to monitor, control and schedule the iRobot Terra from a smartphone or tablet. There are also likely to be manual controls on the machine itself, but we do not yet know the specifics of this. I will update this article as soon as new information becomes available.

**Why Did It Take iRobot So Long To Launch The Terra?**

iRobot has had a market-leading position in the robot vacuum cleaner market for years, and robot lawn mowers seem like such a natural extension of its existing product range. iRobot’s products are well liked and get great reviews. We’ve actually been waiting so long that many people have almost forgotten about the hype.

Whilst we don’t know all the reasons for the delay, we do know that a significant reason was that iRobot didn’t want to use a perimeter wire for edge-sensing. This posed a number of technical and regulatory problems. Unfortunately, iRobot’s proposal of using fixed outdoor radio beacons to transmit positioning information to the robot lawn mower required a waiver from the FCC. The FCC prohibit the use of fixed low-power radio transmitters without a license, and iRobot’s proposal was exactly this, so a waiver was needed before they could proceed with their plans. They met stiff opposition from the National Radio Astronomy Observatory, who lodged strong objections that the radio signals from these beacons would interfere with their radio telescopes. The FCC, however, sided with iRobot in 2015 and granted a waiver which enabled them to proceed with their plans.

Why, then, has it taken from 2015 to 2019 for them to bring a product to market? Unfortunately, only iRobot know. There are a number of technical challenges to building a robot lawn mower that doesn’t use perimeter wire.

- Firstly, most lawns are not bordered by walls, which provide the natural barriers that allow a robot vacuum cleaner to navigate.
- Secondly, an alternative positioning system needs to be very precise. Standard GPS is not accurate enough for this job, and even a few centimeters here or there can mean the difference between a strip of uncut grass and your robot lawn mower falling into a flower bed.

Even with today’s robot lawn mowers, it can take a bit of adjustment to get the perimeter wire set up perfectly to provide the best performance. iRobot has likely been waiting for the technology to mature so that they can bring a fully featured and convenient product to the robot lawn mower market.

I for one will be delighted to get my hands on this machine and put it through its paces as soon as possible once it launches in Europe. Interestingly, the product is currently slated to launch in Germany later this year and as a beta program in the United States. At this time there are no further plans regarding the launch of the iRobot Terra in other countries. There is also no word yet on pricing, and this will have a massive bearing on the uptake of this product.
You may be surprised that the iRobot Terra will be launching in Germany first, rather than in iRobot’s home market of the United States. However, this makes perfect sense, as the robot lawn mower market is much larger and more mature in Europe, and Germany and the Scandinavian countries are at the forefront of robot lawn mower adoption.

Whilst the robot lawn mower market has been growing strongly over the last 10 years, it still makes up quite a small proportion of all lawn mowers sold. Houses in Europe typically have smaller gardens that are better suited to the capabilities of robot lawn mowers, but growth in the United States, particularly over the last 2 or 3 years, has been significant. Husqvarna, John Deere, Worx and Honda are leading the charge to win over the American market.

iRobot will be hoping that their introduction to the market will signal explosive growth of robot lawn mower uptake if they can produce a compelling product. I feel that iRobot is ideally placed to take a large chunk of this market, and I very much hope that their introduction is like Apple’s entry into the smartphone market in 2007. Only time will tell how successful iRobot are with their new Terra robot lawn mower, but an additional competitor entering the market should stimulate competition and innovation, which can only be good for the market as a whole.

If you are trying to decide on a robot lawn mower and don’t want to wait to see how good the iRobot Terra is, read my comprehensive guide to all the best robot lawn mowers available today. Check back soon for more information about the iRobot Terra as I bring you the news, updates and hopefully a review of this product later in the year.
true
true
true
The iRobot Terra robot lawn mower has finally been announced. Read my first impressions of this long awaited robotic lawnmower.
2024-10-12 00:00:00
2019-01-30 00:00:00
https://myrobotmower.com…ot-lawnmower.jpg
article
myrobotmower.com
My Robot Mower
null
null
39,463,876
https://ktlthebest.github.io/posts/so-you-want-to-fp/
So You Want to Functional Programming
null
# So You Want to Functional Programming

Hello there, poor soul! How come you found your way to this post? Are you perhaps unhappy with the way programming is done nowadays? Did you perhaps hear someone say that functional programming has all the benefits like:

- Simplifying your projects
- Removing a class of errors and bugs
- Bringing back the joy of programming of the early days

Well, whether those claims are true or not, you will have to see them for yourself. One interesting thing happening is:

- people who are not into FP think those who are into FP are pretentious pricks who want to feel smarter than everyone else, while
- the guys who are into FP don’t understand all the resistance when there’s such a wealth of good things about FP and it is actually not as difficult as many people claim.

Well, it seems kinda accurate to me. There was a period in time when I was “aggressive” in trying to get people to learn Functional Programming. Because of me, some people are probably never crossing to our side. Oopsie-daisy. But in my defense, some of those people saw the benefits of those ideas, yet were at a stage in their lives when they couldn’t afford to learn functional programming. That is a valid gripe with functional programming that those on the other side have: there’s just too much to learn.

Of course, you can use `map`, `filter`, `fold`, recursion, immutability, etc. to spice up your code with some functional flavor. But the problem the more advanced among us have with this is that you won’t understand what you’re doing, or whether the solution you’re deploying is the proper one. Essentially, you’re not getting what you were promised: understanding. But it is too difficult to get all of it right away (thanks, OOP).

Blindly deploying all the tools that you’ve learned in hopes it will work is not such a good approach, because the chances are high it is not going to work, and somehow it becomes the fault of the FP folk who proposed the idea, not the person who couldn’t figure out how to use the idea properly. I’m speaking from the experience of a friend who tried to make his Dart code robust by employing an `Either` type, and who later regretted it, since Dart wasn’t developed with FP ideas in mind.

But the problem still persists: functional programming is difficult to grasp. In truth, functional programming ideas are simple, and possibly much simpler to understand than OOP ideas (I mean understand, not just “ahh, I see” and forget 15 minutes later). But for some people, their brains are adamant about not comprehending those simple ideas. And I don’t know why. Might make a good PhD thesis in Neuroscience, who knows. So I’ll try my best to explain functional programming principles… in Python. Nowadays everyone and their grandmother seems to know Python, so it should be no problem.

# What is the lesson?⌗

The most important takeaway from this post is the perspective. The correct one, hopefully. The problem with learning functional programming is that its ideas lie on several spectrums of logic, math, computer science and engineering, simultaneously. Its ideas are deep, essentially like calculus, and are re-discovered again and again, by many different people, independently. The book “Clean Code” is essentially a pseudo-introduction to functional programming (not that reading that book helped me understand how to go about my code as a kid, but I really liked it nonetheless).
So yeah, there are many ways to look at the concept through many different lenses, and tugging on one concept attracts other concepts and ideas. That’s why, when you’re learning functional programming, it feels like you have to learn a lot simultaneously. Because you kinda do. Additionally, functional programming tends to attract people who like to explore ideas, which is nice, but they make bad teachers. That’s why we need better teaching materials for functional programming.

At the end of your (beginning of a) journey into functional programming ~~if you make it~~, you get a sense of understanding. The world would never be the same. You’ll start to see things that weren’t there before. Like a sixth sense. And you’ll realize that the ideas were in fact quite simple (although not easy to grasp). You won’t get that understanding by just reading one blog post. Or two. Or ten. Or even one or two books. At least not for now.

People who get into functional programming claim that Elm is a great gateway drug into functional programming: it is easy and friendly. The problem is that it is quite niche (it is for client-side frontend and compiles to JavaScript). Unless you do just that, you’re stuck. So one of my aspirations is to create a general-purpose programming language that will be a bad functional programming language, but which will make a good start into FP, just like Elm does. Unfortunately there isn’t one (yet), so we’ll be using Python.

Remember, the most important takeaway of this blog post is the perspective. If you get the perspective, a lot of unconnected things will fall into place. So, let’s start.

# Functions, Purity and Side-effects⌗

Just like in OOP the most basic building block is an object (or a class, I don’t know), the most basic building block in functional programming is a… function. Don’t look at me like that: there are surprisingly many things that you can build with just functions. In fact, Alonzo Church, Alan Turing’s academic supervisor, discovered lambda calculus and showed that it is equivalent to *the* Turing Machine. Essentially, what your fancy C++ and other languages can do, functional programming can also do. The first important point:

```
Important point no. 1: Function is all you need
```

Whenever you feel the itch to overcomplicate your solution, try writing a simple function first. But not just any function. What we like in functional programming is a *pure* function. Now we are getting into jargon territory, so let’s quickly back it up with examples. Let’s imagine we are trying to write a function that takes the user’s name and prints the message of the day:

```
def motd(name):
    print("Hello, {}! Have a nice day!".format(name))

def main():
    motd("KtlTheBest")
```

Now, while this example is purely imaginary, the code is emblematic of what people would usually write. While it is fine, the question is: how do you *test* this? “What do you mean by *test*?” I hear you asking. What I’m asking is: why are you confident that this will work? “Well, I’ll run the code and see the result printed in the terminal…”. Yeah, but when I was initially writing this code, I made a mistake. I would have to run the code to see the bug. But what if the code were more involved? Say, it would have many functions (or better say *procedures*) doing many things, tightly coupled, and the only way you see the output is by pinging or calling some other guy sitting on another laptop checking the server response. Doesn’t sound so nice now, does it? “Well, that’s how the things are”.
But we as functional programmers would disagree. We would say that the reason you are having problems is because most of your functions are *impure*. For example:

- `print()` is an impure function.
- `randint()` is an impure function.
- `time.now()` is an impure function.
- `input()` is an impure function.
- A function that fails because it is Tuesday is an impure function.
- A function that reads from a global variable is an impure function.
- A function that changes the values of input arguments passed by reference is an impure function.

Already, in a small function of one line:

```
def motd(name):
    print("Hello, {}! Have a nice day!".format(name))
```

… you’re already experiencing problems. So, how do we make this testable?

```
def motd_pure(name):
    return "Hello {}! Have a nice day!".format(name)
```

What we are doing instead of printing is *returning a string to print*. Printing is not our responsibility now. Now, to verify the code, we can write something like this:

```
def main():
    message = motd_pure("KtlTheBest")
    assert message == "Hello, KtlTheBest! Have a nice day!"
    print(message)
```

What we did, essentially, is turn an impure function into a *pure* one. And as a consequence, this function became *testable*. We also wrote an `assert` statement that will let us print **only** if the function `motd_pure()` is correct (by some arbitrary definition). And if you have very keen eyes, you’ll realize that there’s a bug in `motd_pure`: I forgot a comma after `"Hello"`. But instead of blindly relying on my (or someone else’s) eyes, I can ask the computer to verify the function and be 100% sure it works only when it is correct. So with that, let’s get into some definitions.

## What is purity?⌗

When talking about purity, functional programmers refer to pure functions. By pure functions we mean functions in the mathematical sense:

```
f(x) = x * x
```

We can also say that for pure functions:

- The output *always* depends solely on the *input* arguments.
- There are no *observable* side-effects.

Let’s look at the function again. If we try to substitute numbers, we get different values:

```
f(1)  = 1 * 1   = 1
f(2)  = 2 * 2   = 4
f(10) = 10 * 10 = 100
f(n)  = n * n
```

Here, the result of `f(x)` depends solely on `x`. Another interesting observation is that we can evaluate `f(x)` infinitely many times, and the result would always be the same: `x * x`. This is a nice property to have. “Are there functions that don’t behave like that?” you may ask. To which I’ll show you this code:

```
DEBUG = False

def foo(x):
    if DEBUG == False:
        return x * x
    else:
        return -1 * x
```

Now, if we evaluate this function like this: `foo(3)`, we may get `9`, but when we run it again (say, after something flips `DEBUG` to `True`), `foo(3) = -3`. Now we have two different values for `foo(3)`. This is not a pure function. This function has an implicit *state*. Implicit, because it is not observable from the function signature (input and output arguments), but if you run it, you’ll feel the effects (for example, different values for the same input arguments).

The second point says that the function must not have *observable* side-effects. Let me show you:

```
def sum_left_to_right(l):
    sum = 0
    for i in range(len(l)):
        sum += l[i]
    return sum

def sum_right_to_left(l):
    sum = 0
    for i in range(len(l) - 1, -1, -1):
        sum += l[i]
    return sum
```

I’ll give you a little sneak-peek: *mutation* of a variable (i.e. `x = 1; x = 2; assert x != 1`) is a *side-effect* and thus leads to impure functions. But here the mutation is purely local and not observable from outside, so even though we are performing a side-effect internally, we essentially still have pure functions.
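To convince ourselves that these two functions really do behave identically from the outside, here is a quick self-contained check (my own sketch, not from the original post; it repeats the two definitions so you can paste and run it as-is):

```
def sum_left_to_right(l):
    sum = 0  # the local name `sum` shadows the builtin only inside the function
    for i in range(len(l)):
        sum += l[i]
    return sum

def sum_right_to_left(l):
    sum = 0
    for i in range(len(l) - 1, -1, -1):
        sum += l[i]
    return sum

# Because both functions are pure, testing them is trivial:
# call them on some inputs and compare the outputs.
for l in [[], [1], [1, 2, 3], [5, -2, 7, 0]]:
    assert sum_left_to_right(l) == sum_right_to_left(l) == sum(l)
print("both directions agree with the builtin sum")
```

That is the payoff of purity: the assertion needs no setup, no mocks and no environment; the inputs fully determine the outputs.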
Indeed, for the same list `l` the results of the functions would be the same. On a side note, we iterate in different orders in the two functions, but since the functions are pure and addition is *commutative*, i.e. `x + y = y + x`, we can say:

```
sum_left_to_right(l) == sum_right_to_left(l)
```

However, if we were to add `print("Currently on: {}".format(i))` into the loop, the functions would become impure.

```
def sum_left_to_right(l):
    sum = 0
    for i in range(len(l)):
        print("Currently on: {}".format(i))
        sum += l[i]
    return sum

def sum_right_to_left(l):
    sum = 0
    for i in range(len(l) - 1, -1, -1):
        print("Currently on: {}".format(i))
        sum += l[i]
    return sum
```

First of all, the functions are doing printing, which is a side-effect by definition. But even if we compare by the side-effects, they would still be different, even if the sum is the same:

```
def sum_left_to_right(l):
    sum = 0
    to_print = []
    for i in range(len(l)):
        # Currently on:
        to_print.append(i)
        sum += l[i]
    return (sum, to_print)

def sum_right_to_left(l):
    sum = 0
    to_print = []
    for i in range(len(l) - 1, -1, -1):
        # Currently on:
        to_print.append(i)
        sum += l[i]
    return (sum, to_print)
```

If we compare them, the results will not be the same (in general):

```
sum_left_to_right(l) != sum_right_to_left(l)   # whenever len(l) > 1
```

This is dandy and all, but what’s the use, you may ask? Well, to answer that, we need to cover one more case of side-effect: mutating variables. You may have seen this millions of times:

```
x = 1
x = x + 1
```

What this does is “create” a variable named `x` and assign it a value of `1`. After that it *mutates* its value to `x + 1`. From the mathematical point of view this expression doesn’t make sense. There’s no value of `x` in `ℤ`, `ℕ`, `ℚ`, `ℝ` or `ℂ` that has this property (except if you do arithmetic modulo 1, but that’s useless).

The thing is, the ability to freely mutate state is the source of many software bugs. Of course, you may not believe me, since I don’t have enough experience writing software, but if you try to look at it yourself, you’ll see that I’m right. As for me, every time I am forced to write in Python or any similar language with hard-to-understand semantics (i.e. you look at the code and have no idea what will happen), I dread inside. Let’s look at this code:

```
x = ...  # some value
f(x)
g(x)
# x = ???
```

The thing with Python is that you don’t know and can’t be sure. You don’t know, and the compiler doesn’t know either. Rebinding a simple value like an `int` inside a function doesn’t affect the caller:

```
def f(x):
    x = 2

x = 1
f(x)
assert x == 1
```

Same for rebinding a list:

```
def f(l):
    l = []

l = [1]
f(l)
assert l == [1]
```

However, mutations to the *contents* of the list are visible to the caller:

```
def f(l):
    l[0] = 2

l = [1]
f(l)
assert l == [2]
```

Don’t know about you, but this leaves a bad taste in my mouth. The reason is that the logic is purely artificial; somebody came up with those rules. That’s exactly the reason you have to do `===` instead of `==` in JavaScript, or that Java does **Referential Equality** instead of **Structural Equality** (that’s the reason you have to write `s1.equals(s2)` instead of `s1 == s2`).

The ability to freely mutate state, coupled with unintuitive semantics, makes *reasoning* about programs hard. Reasoning is the ability to tell whether the code is correct or not, especially useful when debugging. In fact, humans are really bad at reasoning. Most of the logical thinking that we do is in the prefrontal cortex, the front of the brain, its highest layers.
Essentially, this part of the brain evolved last and is quite recent. Forcing that part of the brain to work is quite difficult. We didn’t evolve to solve math problems naturally, compared to, say, breathing unconsciously; that’s why math is difficult in general. But we have tools to aid us in that. Here is the tool: `=`.

“Looks like… an assignment?” you ask yourself. No, no, it is not an assignment, it is *equality*. The ability to tell that two unrelated things are actually the same opens up a myriad of possibilities. On a side note, people who keep arguing that “you can’t understand me”, or for cultural isolationism, or whatever, are essentially robbing people of the tools necessary to understand the world, but that is a story for another day. For example:

```
sum(angles of triangle) = 180°
```

Here we establish an equality. While it may seem trivial, it is useful: if you know the first two angles, using this equality you can find the third angle:

```
a + b + c = 180  =>  c = 180 - a - b
```

Or take physics, for example. We all may have seen this equation:

```
F = ma
```

Newton’s Second Law. When I first saw this, I didn’t pay much attention. Of course, this equation gets introduced in a rather boring context, seemingly of no use outside of simple kinematics. But take another example:

```
F = kx
```

This is a description of Hooke’s law, i.e. what force the spring exerts when displaced by the distance `x`. Again, what of it? See, the `=` symbol is actually really powerful. Because we know that those two equations describe the same force, we can combine them:

```
F = ma
F = kx
-------
ma = kx
```

Seems boring, but with this we can find the answer to the question, “What is the acceleration of an object of mass `m` attached to a spring with a constant factor of `k` and displaced by `x` meters?”. Just basic algebra and voila!

```
(ma) / m = (kx) / m  =>  a = kx / m
```

Now just plug in numbers and find the answer. Interestingly, physics is quite “functional”, in a mathematical sense. The whole of science in general is established on the shoulders of equality, or rather equational reasoning. By measuring things and comparing them with others, or saying that one is equal to another, we establish connections. And it turns out those connections are quite strong. So, the next time you’re wondering why you need to learn physics: this is why. Understand that things are interconnected. Discover the power of equational reasoning. Let us move on.

Unfortunately, the application of equational reasoning in programming is rather limited. In the presence of side-effects it is even impossible:

```
print("Hello World!")
???
print("Goodbye World!")
```

How do you even compare those things? While comparison for those kinds of things is not well defined, we can compare values:

```
"Hello World!" != "Goodbye World!"
```

Side-effects complicate things: they complicate equational reasoning, and they complicate reasoning in general. And they put implicit restrictions that you can’t easily verify:

```
x = ...  # some value
f(x)
g(x)
```

Again, back to our example. Can you tell what this code does? Can you find a mistake in it? Probably not. What about this?

```
def f(x):
    x.close()

def g(x):
    x.write("Hello filesystem!")

x = open("file.txt", "w")
f(x)
g(x)
```

Now do you see the problem? We are writing to a closed file.
The correct order must be this:

```
def f(x):
    x.close()

def g(x):
    x.write("Hello filesystem!")

x = open("file.txt", "w")
g(x)  # swapped
f(x)  # places
```

This was an easy example, but the arbitrary invocation of side-effects from virtually anywhere can cause problems even at this scale. This requires us to read ALL the code to find bugs. Now you see why the job of a software developer is so difficult? Because you’re bad at it.

If the functions were pure, the order wouldn’t be so important, or rather the order of the functions would be *explicit*. Take this imaginary example:

```
(f(x) + g(x)) * (k(x) - h(x))
```

All the functions are pure and perform some computations. It is clear to see that it doesn’t matter whether we perform `f(x)` first or `g(x)`, or `h(x)` or `k(x)`. However, it is also clear that we must first do `f(x) + g(x)` and `k(x) - h(x)` before we can multiply them, and that we must perform the computations of the functions before we can do the addition or the subtraction. With pure functions there are only computations, and with computations the order is described by data dependency. Data dependency is when you need to calculate `y`, but that `y` depends on `x`, i.e. `y = f(x)`. And if `x` depends on some other argument `u`, i.e. `x = g(u)`, then you get a clear dependency, or an order:

```
y = f(g(u))
```

Simple? Simple. Hopefully at this point it is clear why we don’t like mutation: because mutation implies state, and state complicates our code in many unpredictable ways. That’s why we as functional programmers tend to use *immutable* variables. In the dictionary, `immutability` is defined as the quality of not changing, of staying the same. If we take a problem of the form:

```
x + 3 = y  =>  x = ?
```

While there are many values that `x` can take, once it is taken, it doesn’t change. That’s the beauty of variables and how they should be used. Of course, the question is: how do you write code with variables that don’t change? Simple, just create a new variable. And if that variable needs to change, create a new variable for that. I know, it will be difficult initially. But once you start doing it, you’ll notice that since your variables are immutable, it becomes easy to *reason* about the program.

So, at this point, we are ready to cut off our first part of the intro into functional programming:

- Use pure functions
- Functions that do side-effects are impure
- An impure function called inside a pure function makes the pure function impure
- Use immutable variables
- Don’t read from global variables
- Pass all variables used through input arguments to make a function pure
- If side-effects are not observable from outside, the function is pure

To see all of those rules working together in one place, there is a small recap sketch below. Even if you don’t progress further than this, if you just do those when you can, you’ll already see the benefit. That’s how I started my journey into functional programming. With a small step. It’s fine if it will be your only step. And for those who are still onboard, let us move on.
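As promised, here is the recap sketch: a minimal, self-contained example (my own illustration, assembled only from functions and ideas already shown in this post) with a pure core that is trivially testable, immutable-style variables that are derived rather than mutated, and the single side-effect pushed to the edge of the program.

```
# Pure core: output depends only on the inputs, no observable side-effects.
def motd_pure(name):
    return "Hello, {}! Have a nice day!".format(name)

def shout(message):
    return message.upper()

# Impure edge: the only side-effect (printing) lives here.
def main():
    name = "KtlTheBest"
    greeting = motd_pure(name)       # a new value, derived from `name`
    loud_greeting = shout(greeting)  # another new value; nothing is mutated
    # Pure functions are trivially testable:
    assert greeting == "Hello, KtlTheBest! Have a nice day!"
    print(loud_greeting)

if __name__ == "__main__":
    main()
```

Notice how every intermediate result gets its own name instead of reusing a mutated variable; that is most of what “immutability” asks of you in day-to-day code.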
true
true
true
Part I Part II Part III Hello there, poor soul! How come you found your way to this post? Are you perhaps unhappy with the way programming is done nowadays? Did you perhaps hear someone say that functional programming has all the benefits like: Simplifying your projects Removing a class of errors and bugs Bringing back joy of programming of early days Well, whether those claims are true or not, you will have to see them for yourself.
2024-10-12 00:00:00
2024-02-22 00:00:00
/
article
null
Terminal
null
null
6,269,888
http://www.forbes.com/sites/erikkain/2013/08/24/xbox-one-vs-ps4-the-console-wars-and-the-attack-of-the-fans/
Xbox One Vs. PS4: The Console Wars And The Attack Of The Fans
Erik Kain
Xbox One is by far the best video game console of all time and everyone should buy it. On the other hand, the PlayStation 4 is the greatest piece of video game machinery ever constructed and everyone should buy it. Don't even get me started on the Wii U. Or my PC. Or the Sega Master System....

Here's the thing: Whenever I write a post pointing out something positive (or negative) about one of the big video game console manufacturers, I'm quickly labeled as that most dreadful of creatures: a fanboy. And not just a fanboy, but an obviously corrupt video game journalist who simply *must* be taking payments on the side from the subjects of my writing. This tiresome, incessant noise and nonsense comes from a subset of gamers so totally lacking in self-awareness that at times it's almost laughable.

It's not unique to games, of course. Spend a day writing about Apple or smartphones or operating systems or really any tech whatsoever and you'll draw fans in like moths to flame, brave knights come to the defense of their preferred product's honor and virtue. I can never quite tell if I'm a

It never occurs to the fanatic that not everyone is slavishly enthralled to winning an illusory and meaningless war. Maybe it's just the new way of belonging. Culture 2.0. The logical conclusion of self-definition through consumerism. (Which is not meant as a critique of consumerism necessarily, but rather of the way it's employed with an almost religious zeal by certain devotees.)

It's funny on a certain level, but it's also deeply irritating to me, and one reason I write articles about why you should buy a PlayStation 3 instead of an Xbox 360 and then write counter-articles arguing the exact opposite. You could call that "trolling" I suppose. I think of it more as a sort of prankster commentary.

"One common thread I've noticed in a lot of fan communities - both online and off - is a lack of anything approaching critical thought," writes critic Stephen Bond. "A lack of criticism towards the object of fandom, and a lack of criticism towards the state of *being a fan*. All I see is blind idolatry, and blind dismissal of anything perceived as a threat or a rival. Sure, you often get fans who dare to break away from the pack, who dislike their idol's latest work, or even start to turn against him; but such disillusionment is part of the inevitable trajectory of the cultist."

It's tough to be critical of one's own "state of being a fan." Bond claims to not be a true fan of anything, a claim I can't make for myself. But I do try to keep my own fanaticism under control. I try to always be a critic, yet never let that undermine my capacity for enjoyment. Being critical and enjoying what you critique are not mutually exclusive.

For instance, I love *Dark Souls*---anyone who reads this blog knows that---but it falls apart in the third act. I can admit that without diminishing the game. I love fantasy, but it's very hard to find good fantasy books, and even some of the better series out there have deep and glaring flaws or, as is the case with Martin's *Song of Ice and Fire* books, a steady and terrible decline. I love video games in general, but they're often plagued by silly tropes, terrible writing, and flimsy design. Yet for all that, I can still enjoy them---even the ones suffering from these flaws.

**Xbox One and PS4 are more alike than they are different.**

One of the more irksome manifestations of gamer fandom these days is the little micro-war over hardware specs in the next-gen consoles.
There are some small differences between the Xbox One and the PS4, but they're not big enough to warrant much discussion, let alone grand claims about the performance of games on either system. It's quite likely that while Microsoft has the better cloud service with Azure, Sony has the stronger machine in the PS4. And none of it will matter. The vast majority of games for both systems will be cross-platform and they will likely perform almost identically.

More than any previous generation of video game hardware, this coming generation is most notable for its sameness. The Xbox One and PS4 resemble mid-range gaming PCs more than anything else, and while they employ slightly different methods, have different RAM, and so forth, the end result will be two machines that are more alike than otherwise. The main differences between the machines will be their controllers, their user interfaces, and the quality of their online services. And the Kinect, of course.

The specs themselves will matter very little once all is said and done, and once developers figure out how best to optimize across platforms. Developers will work toward parity rather than face criticism over subpar performance on one system over the other. It will be easier to achieve parity because the two consoles are so similar this time around. It's also simpler and cheaper to develop essentially the same game across platforms, which is why PC versions of console games often feel constrained by their less powerful counterparts.

**It's all about the games, and your preference is just that: a preference.**

I think it's partly about belonging, but fandom is also about inferiority and bad coping strategies. Whether it's a "core" gamer scoffing at *Call of Duty*, or an Xbox fan thumbing his nose at Sony, or a PC "master race" gamer out reminding everyone that PC is the best and they wouldn't *dare* even come into close proximity with a console lest it unman them somehow....the fact is, if you need to declare your allegiance you're already in trouble.

The way I see it, games are the point, the cause, the goal, the reason. Games are why we're here, not the dishes they're served in. I love the weird Nintendo stuff on the 3DS and Wii U. I love the great JRPGs you can find on Sony's devices. I love playing first-person shooters like *Call of Duty* and *Halo* on the Xbox 360. I love playing glorious-looking games like *The Witcher 2* on my PC, and the great depth and variety of games available on the PC (and, indeed, the mouse and keyboard for those aforementioned first-person shooters). I love that consoles are easy to set up and use, and I love the fact that I can build my own gaming PC and watch it leave those consoles in the proverbial dust. Sometimes, I even love to play games on my iPhone, though these days the word "Gem" makes me want to throw my phone across the room. Nothing kills a game faster than in-your-face microtransactions.

I understand how frustrating it can be to feel like a critic misunderstands something you love. I feel this way all the time reading critics of non-mainstream fantasy films who just *don't get it*, who just don't seem to understand the things I enjoy. I want to tell them how wrong they are, but it all comes down to preference and taste, for which there is no accounting. When film critics lambaste a movie by comparing it to video games, I feel irritated beyond measure. But my irritation is feckless without information to back it up. Knowledge is power. Anger just muddies the water.
There's a line that must be drawn between being informed and being devoted. We can enjoy things while still casting our critical eye at them; the better we understand them, the more cutting and poignant our critique can be, and the better future products will be because of it, hopefully.

The point is, when you sit down at your keyboard and go looking for people to call "fanboys," that says a lot more about you than it does about the targets of your ire. This squabbling and name-calling and self-righteousness gives me a headache. I can't tell whether to laugh or cry. It saps the fun right out of games.

The fact is, these major corporations and their consumer products do not need, or deserve, your unthinking loyalty and devotion. They need competitors and critics so that their consumers get the very best products possible.
true
true
true
As the next-gen console wars heat up, slavish devotion to a console only hurts the video game industry.
2024-10-12 00:00:00
2013-08-24 00:00:00
https://imageio.forbes.c…=1600&fit=bounds
article
forbes.com
Forbes
null
null
11,818,521
http://venturebeat.com/2016/04/06/why-pinterest-forces-you-off-its-mobile-site-and-into-its-app/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,342,684
https://mixpanel.com/blog/2016/03/09/we-re-in-a-bot-gold-rush-kik-tells-you-how-to-strike-it-rich
We're in a Bot Gold Rush. Here's how to strike it rich.
Mixpanel Team
**Blog Post** # We’re in a Bot Gold Rush. Kik tells you how to strike it rich. **Mixpanel Team** Quinn Brenner was living a nightmare. Her mother passed away. She was rejected from drama school. Then, as she was leaving an audition, she was hit by a car. After flat-lining, Quinn managed to survive, waking up in the hospital with two broken legs. Throughout this grueling recovery, her phone was her only connection to the outside world. If you happened to be texting with Quinn Brenner on Kik, the widely used messenger app with over 275 million global users, you would know Quinn wasn’t actually a girl. She was a bot. As the protagonist of the film Insidious: Chapter 3, Quinn Brenner’s character was launched on Kik’s Promoted Chats platform so moviegoers everywhere could interact with the character before they even bought a ticket. In fact, Kik’s users chatted with Quinn 50 to 60 times over the course of two days. This is pretty astonishing, considering millennials send 67 texts total on a daily basis. Sure, Quinn was captivating, but the bot’s success had more to do with the active audience: forty percent of Kik users are teens, the movie’s target demographic. Quinn is just one example of a fast-moving product trend. Messenger apps like Kik are reviving bots for the benefit of both products and users everywhere. As the first platform to launch bots about 18 months ago, Kik has since seen spectacular engagement from users when products build bots right. To Ivar Chan, developer evangelist at Kik, the future of bots is limitless, and messenger apps are the key players: “There’s room for everybody in this bot market. For products, this is the new gold rush.” If you’re a company looking to grow your product and find new distribution paths, bots may be the answer you’re looking for. But first, product managers and developers need to know how to make a bot. To ensure that you don’t pan for gold and come up empty handed, Kik has distilled three rules of thumb for venturing into this unknown territory, so you have the best chance of striking it rich. **THE BOT-ORTUNITY** For heads of product everywhere, bots are getting on their radars as a new way to serve their communities in a personalized, direct and familiar way, especially within messenger apps. “In a world where messenger apps have surpassed social networks, companies need to expand their digital presence to these greenfield pastures,” said Ivar. Still, it’s early days. Creator of the Botwiki and Botmakers community, Stefan Bohacek told me, “While there are a few companies offering bots as a product, I haven’t seen that many success stories — yet. We need to wait for Facebook Messenger to join already open platforms, like Slack, and it may take a few more months to see if the landscape of ‘business bots’ is going to make an impact.” While Kik is currently on a closed platform, the Kik team also realizes that the gold rush is only beginning. “Products have yet to fully realize how messenger bots can rocket their engagement rates, community and build strong distribution channels,” said Ivar. “When done right, bots are wildly successful in entertainment, but bots can be of service for users in any industry.” This means that bots have potential not only for massive movie promotions but also to spur engagement and drive revenue for anyone from e-commerce to SaaS and beyond. And Ivar isn’t the only one. 
Growth and product experts all over the spectrum concur: Andrew Chen, from Uber, said in an interview in February, “I’m hopeful that messaging will create the next generation platform for mobile app distribution.” But how does a product manager or developer even get started? First, you need to know your bot-history.

**BACK TO BOT BASICS**

Bots have existed almost as long as computers. While some bots have earned the rap of being spam, and some are fun, social experiments, like Eliza from 1966, post-Y2K’s SmarterChild, or the parody accounts on Twitter, bots have a rich history. And as of late, a lot of innovation. The rules of how to make a bot have changed, but the magic of bots mainly boils down to reducing friction. Bots cut down the time it takes to get what you want — whether that’s a joke, news, ordering takeout, or getting the latest metrics from that analytics report. Creating a frictionless experience via bots has been vastly overlooked by businesses, and herein lies the gold rush potential.

But don’t think there’s a one-bot-fits-all for brands and businesses in this revolution. “In order to reach this ideal world of conversational commerce, where we buy via bot on messenger apps, developers must figure out the right experience that will bring users back and delight them,” said Ivar. But, he continued, “Messenger apps are ripe platforms for these native experiences especially since these apps are the most used in the world if you look at engagement, sheer usage and download numbers.”

Today, if you want to go look something up or buy something, you’ll probably go to a browser and Google for a website. But that’s not the case everywhere around the world. Communicating with bots for goods and services may sound foreign to most Westerners, but it’s already the norm in China. “So much is done via bots on WeChat, the so-called ‘Everything App’ in China. You can even apply for a mortgage,” Ivar noted. From taking a quick mental break to chat with a movie character to making the most important financial decision of your life, bots are facilitating users’ needs at every level. “Similarly, we see Kik as a portal to connect the world through chat,” said Ivar, “and bots are the next wave for building connections.” Other leading chat apps like Facebook, Slack, and WhatsApp have successfully integrated bots in some form, as well. Even Quartz, a media outlet, has recently launched an app that employs a bot to distribute curated news in a familiar text interface.

#### But if you’re a product manager or developer, there’s a major caveat: bots can’t be mere replicas of a product.

Bots need to improve upon what already exists, creating something that will ultimately feel irreplaceable to a user. In this new gold rush, companies can begin to reimagine what it means to connect with a customer in a very personalized, one-on-one, and yet automated way. A bot’s ability to deliver value, beyond a product’s original intent or promise, will increase brand loyalty, greatly contributing to the company’s engagement and growth goals.

For a product manager or developer, empathy and user experience will be a guiding force when figuring out how to program a bot’s behavior. It may be intuitive, but companies should survey which bots have been successful and which ones were unpopular. For example, do you remember Clippy, the paperclip? The infamous Office Assistant bot for Microsoft Word was well known for being annoying as hell, tapping the desktop screen asking, “Are you writing a letter?” every chance he got.
Well, don’t do that with bots. Successful bots are predicated on past innovations and the audience’s needs and desires. So, here are Kik’s three rules of thumb for building a valuable bot, beginning with the Golden Rule. **NO. 1: Every bot is a creature unlike any other. Treat it as such.** Bots are unique to products and their goals, and they are only as successful as they are built to be. Teams, first and foremost, need to know what bot is right for their product. “Bots live in one of four quadrants,” Ivar explained. “Your X-axis represents Engagement Time and the Y-axis represents Repeat Use. Each quadrant can be optimal for all different types of bots.” “If your product bot is looking for low engagement time and high repeat use, that’s probably going to be a news bot where you get a daily dose of gossip or information.” This is where Funny Or Die thrives on Kik. They’ve dished up hysterical content repeatedly in 3.5-minute snippets and created an evangelist following that also converts at incredibly high rates. (Keep reading to see those crazy results!) “Quinn Brenner was a good example of a bot with high engagement time and low repeat use. Because of its drawn out narrative, the engagement time extended, but once a chatter finished the story, they wouldn’t likely repeat the bot experience. They’d probably go buy a ticket to see the movie,” said Ivar. Then there’re the low engagement time and low repeat use bots, which doesn’t sound all that successful from the description. However, these could be one-off bots created for a concert or event. Ridesharing, e-commerce, delivery or even banking bots could also fall into this quadrant. It just depends on the product and what needs it fulfills for the user. #### Let’s dare to dream and say you were going to a Beyonce concert. As soon as you walked into The Staples Center, you could scan a Kik Code and start engaging virtually with the Queen B(ot). Whether backstage or in the nosebleeds, as a chatter, you could blast out gifs and videos shortly after they happen, slowly raising the FOMO levels to dangerous levels in everyone in your network. “Sure, you probably won’t go to the concert again,” said Ivar, “it will only happen once, and you’d use the bot for only a couple minutes. But, it doesn’t mean it was a bad bot if it’s in this quadrant of low engagement time, low repeat use. It just means that it was a bot designed for that quadrant.” #### What’s most exciting, however, for developers and product managers, Ivar told us, is that top right bot quadrant: “Everyone wants high engagement time and high repeat use. Developers are trying to chase that. But, we are still in the early days of bot-making, particularly for products, so we don’t know exactly what’s going to be in that quadrant.” **NO. 2: Build a bot with just the right touch.** Just like when you first start texting with your crush, bots that play hard to get build stronger relationships over the long run than those bots that come on too strong. When discussing the Quinn Brenner bot for Insidious: Chapter 3, Ivar illustrated how creating anticipation made a big impact on the bot’s engagement rates. In order for product managers and developers to figure out that top right quadrant, they’ll also need to keep the second and third rules in mind. “When you’d text Quinn Brenner, she would give you some hints of what’s happening, but after a while, the conversation would taper off just like a normal text conversation. 
You wouldn’t have the full experience in one sitting, rather over the course of a few days. Teenagers loved it.” With this bot, the goal was to build suspense amongst audience members so they’d go buy a ticket to the movie. In building anticipation, the Quinn bot also had high engagement rates. As mentioned before, the text exchange, compared to typical bot conversations, was 10x higher than usual. In the case of Quinn, a bit of patience paid off big time. #### For product managers and developers, the hurdle will be figuring out the optimal frequency of human-to-bot interaction. While you don’t want to be a Clippy, you also don’t want to play so hard to get that your users forget about you completely. Let’s say a food delivery app builds a bot that allows a Kik user to ping in her takeout order. Tonight, she’s craving Chinese. Keeping the delicate balance of human-bot interaction in mind, our Kik Foodbot would ping the chatter with an ETA on the kung pao chicken delivery, but Foodbot wouldn’t want to ping the chatter every day after that with a reminder of delivery options. Instead, Foodbot lets the user come to it. It holds out on directly reaching out to the user until there’s a reason to celebrate, say, a week later, with a promo deal to IHOP on National Pancake Day. When your bot has just the right touch that works for your product and your audience, your bot will be well on its way to retaining its users. This brings us to the last rule: **NO 3: Bots must prioritize engagement above all else.** Because when you do so, community growth and conversion will follow. When building their bot with Kik, Funny Or Die knew that if they kept their followers laughing, their users would keep coming back. The comedy website decided to concentrate on engagement in order to create a virtuous cycle. Serving laughs over ads proved to be successful for their engagement and community growth rates. Funny Or Die found the typical chatter would engage with about 25 pieces of content in each session, lasting about 3.5 minutes on average. As a result, its memes spread like wildfire across Kik. When Kik’s chatters shared the latest in-app gifs, those who shared not only looked good to their peers, but the crowds also started following Funny Or Die directly. “It’s amazing how quickly we built up a following on Kik,” said Patrick Starzan, the comedy website’s vice president of marketing and distribution. ”It took about three months to get to 1.5 million chatters, compared to the two or three years it took to get the same number of people on social networks.” By building a community that prioritizes user engagement above all else, Funny or Die saw higher conversions, too. “When we send out broadcast messages to our Kik chatters – usually with links to new videos – we see conversion rates as high as 10%, which is pretty substantial,” Starzan said. In comparison, Funny Or Die sends one broadcast message a week on Kik (note, rule No. 2 still applied here). In comparison, the comedy website pumps out content nearly five or six times a day on social networks like Facebook and Twitter and still sees lower conversion rates. To learn more about the essence of a bot and what makes them successful, I sat down with Samuel Woolley, the lead writer of a botifesto in Motherboard, doctoral candidate, bot expert for Political Bots and former provocateur-in-residence with Data & Society. “If I were a product manager, I would make sure to build a bot that was overtly bot-like. Name it Something-Bot,” Sam told me. 
“I’ve seen in my research that people love bots for their bot-ness. It’s when bots try to be overly human that people get frustrated with them.” Similar to social networks, companies shouldn’t spam customers with bots on messenger apps. Rather, messenger bots hold the potential to serve customers more so than brands and products ever thought they could before. “Whether we’re entering a Bot Gold Rush is still unclear,” said Sam. “There’s a lot of enthusiasm around bots, but to what extent it all pans out depends on the tech catching up. It’s still a bit clunky, and the capabilities of bots all depend on the algorithms that are built.” But the fact is, bots aren’t going anywhere. They are going to continue to play a major role in web traffic, marketing, politics, news, and soon, how mobile products are distributed on messenger platforms. Product managers and developers may feel hesitant to experiment with bots, but Kik is leading the charge. And with Kik’s demonstrated success with over 80 partners, the view looks pretty bright. “Our ultimate vision at Kik is to connect the world through chat, and we see bots as being an integral way to do so,” said Ivar. So at this major inflection point, where messenger apps like Kik give companies the platform to deploy bots to reach new customers, bot-makers need to agree on a code of conduct. Because with great bot-making comes great responsibility. It’s time to avoid spam and scams, and invent new scaffolds of connection with bots, bringing the world closer together in useful, fun and interesting ways. Are you in NYC? RSVP for our Mixpanel Office Hours where Kik talks to us about The Year of the Bot.
true
true
true
The future of bots is limitless. But product managers and developers need to school themselves and understand how to make a bot.
2024-10-12 00:00:00
2016-03-09 00:00:00
https://mixpanel.com/wp-…ge-sharing-1.png
article
mixpanel.com
Mixpanel
null
null
34,887,343
https://openandroidinstaller.org/
OpenAndroidInstaller
Tobias Sterbak
Linux is currently the best supported platform (tested with Ubuntu 20.04/22.04 LTS). Windows and macOS are also well supported, but you might experience more issues. So far there is no support for ARM-based systems. Note that Ubuntu 22.04 can be booted from a USB drive without installing it. This might be a simple solution if you face any compatibility issues.

#### How to run the application:

- Download the .exe, flatpak, or appropriate executable file for your OS. You might need to change permissions to run the executable. (On Windows, also install the Universal USB Drivers and any other drivers needed for your device.)
- Start the desktop app and follow the instructions. You might need to allow or enable the execution of the software.
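If you're unsure how to change those permissions on Linux, here's a minimal sketch of marking a downloaded file executable; the filename below is a placeholder, so substitute whatever you actually downloaded:

```python
# Minimal sketch: add the execute bit to a downloaded installer on Linux.
# "OpenAndroidInstaller.bin" is a placeholder name, not the real artifact name.
import os
import stat

path = "OpenAndroidInstaller.bin"  # adjust to your downloaded file
mode = os.stat(path).st_mode
os.chmod(path, mode | stat.S_IXUSR)  # allow the owner to execute it
```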
true
true
true
Makes installing alternative Android distributions like LineageOS nice and easy. Download now!
2024-10-12 00:00:00
2023-01-01 00:00:00
static/preview-header.png
null
null
null
null
null
34,103,929
https://packhacker.com/guide/best-travel-backpack/
Best Travel Backpack: How To Pick In 2024 | Pack Hacker
Tom Wahlin
*How To Choose The* # Best Travel Backpack The minimalist's guide to selecting a carry-on backpack for one bag travel. Fitting your life into one bag is no small task. We’re here to help. ### Best Travel Backpacks Click to learn more about why we love these top picks. - 9.2/10: Aer Travel Pack 3 (Best for one bag travel) - 9.1/10: GORUCK GR2 (40L) (Best for rugged adventures) - 8.9/10: Peak Design Travel Backpack 30L (Best for travel photographers) - 8.8/10: TOM BIHN Synik 30 (Best for built-in organization) - 8.6/10: Tortuga Travel Backpack 40L (Best for suitcase-like organization) - 8.5/10: TOM BIHN Techonaut 30 (Best for multiple carry modes) - 8.3/10: ULA Equipment Dragonfly (Best for lightweight carry) - 8.3/10: Able Carry Max Backpack (Best for daypack-like feel) - 8.2/10: Osprey Farpoint 40 (Best for budget travelers) - 8.2/10: Minaal Carry-On 3.0 Bag (Best for business travelers) - 8.0/10: EVERGOODS Civic Travel Bag 35L (CTB35) (Best for carry comfort) - 7.6/10: Topo Designs Global Travel Bag 40L (Best built-in packing cubes) - 7.5/10: Cotopaxi Allpa 35L Travel Pack (Best for showing a little personality) See all reviews: Travel Backpacks ## How to Select The Best Backpack for One Bag Travel There’s something so freeing about traveling with only one bag. All of your important stuff is within arm’s reach, and it forces you to cut down on many of life’s seemingly necessary consumer goods that you can probably live without. With one bag, you easily glide from location to location, always having just enough but never too much. Choosing the perfect travel backpack for one bag travel can be a challenging endeavor. There are so many brands and models to choose from with varying degrees of durability, price, and try-on-ability (we made this word up for trying something out before buying it online). Add varying views and opinions into the mix from folks with different values, needs, and body types—and you’ve got a veritable clusterf*ck of options to wade through. Whether you’re a new traveler gearing up for your first trip, a digital nomad going through a “sell-all-my-stuff-and-put-it-in-a-backpack” phase, or somewhere in between, it’s essential to have the best travel backpack that works for you. Here’s the bottom line: There is no “best” backpack that is perfect for every traveler in every scenario. However, we believe everyone can find a pack that’s perfect for their unique needs. In this guide, we’ll break down the factors we think are most important when choosing the ideal one-bag travel backpack for you. This guide is written and informed by Pack Hacker staff, many of whom are frequent travelers and digital nomads. That means we’re using and testing these products every day to better understand what’s available out there and how each bag may appeal to different types of travelers. If you’d rather skip all this info and get straight to the backpacks we’ve reviewed, you can take a look at our highest-rated travel backpack list in the next section, or all of our Travel Backpack Reviews. We’re constantly updating this list as we review and rate new bags frequently. ### Is It Better To Travel With a Backpack or Suitcase? We’ve found that backpacks give you much greater mobility. You can breeze through airports. You’ll never stand around a baggage carousel after a long haul again. And as long as your pack is carry-on size compliant, you’ll never lose your luggage, ever. 
Depending on your travel style and what you’re hauling, it comes down to your personal preference—both roller luggage and backpacks can be good options. In this guide, we’ll focus on travel backpacks for a couple of reasons:

### They Feel Freeing

You’ve got both of your hands free, and you’re not constantly dragging something behind you. No matter what terrain you’re walking on, you’ll never have the annoyance of loud or unsteady wheels behind you from standard travel luggage. Sure, roller bags work like a charm on smooth airport and hotel floors, but how about the winding cobblestone roads of Paris or a sandy beach in Ko Pha Ngan? You can traverse almost any terrain when you’re wearing a backpack.

### Travel Backpacks are Versatile & Usually Lightweight

If you pack light enough, you can comfortably have all of your belongings with you at once. Did you arrive earlier than your hotel or Airbnb check-in? No problem, just take your pack around with you for the day—no need to stop by and drop your luggage off. Versatility at its finest. We can’t necessarily guarantee the pack will be lightweight if you fill it up with a bunch of heavy stuff (like camera gear), so we made a Travel Camera Guide too 🙂.

### They Provide Flexibility

You’ll take up less room on the airplane or in public transit. You’ll generally feel more agile versus needing to drag around rolly luggage, with the added benefit of not looking like an out-of-place tourist. It caters to a more adventurous lifestyle by always being ready to go. And, you can easily catch that train that’s about to depart without awkwardly side-running with a roller bag or two.

### Utilizing a Backpack in Travel Contexts

In this guide, we’re going for travel versatility. We want you to look good carrying these bags around in an urban environment and have the flexibility to head out on a hike for a couple of days of camping without having your backpack ruined by the elements. If you’ve got a piece of roller luggage, it’s going to be hard to do that spontaneous half-day trek on the trail to the neighboring city you’ve been wanting to check out. Likewise, if you’re going to post up at a coffee shop for a day of office work, you’re going to look out of place with a bulky, multi-colored hiking bag. The packs mentioned in this article will blend into most city environments and are durable enough to withstand the abuse of longer excursions.

Some of our top-rated bags for your travel backpack consideration.

The Aer Travel Pack 3 takes some of our favorite travel backpack features and puts them into one bag: helpful load lifters, easy-to-engage compression straps, and easy access to different compartments. It has Aer’s sleek signature style and is made with quality materials like CORDURA® ballistic nylon and YKK zippers, which add a ton of durability and make this a reliable bag that can withstand extended travel. In fact, this is one of our highest-rated bags and a Pack Hacker Pick because of how it’s held up on trips across the U.S., Thailand, South Korea, and more. The organization is streamlined for easy packing, and it even includes a hidden pocket where you can tuck a smart tracker—a great feature for keeping track of your bag if it’s stolen or gets lost in transit. The harness system is super comfortable even when the bag is fully loaded and includes wide, cushioned shoulder straps with keepers to cut down on dangling. We also like that there’s an option to add a hip belt because it helps take a ton of weight off your shoulders when the bag is loaded.
If you don’t need as much liter space, we recommend the Aer Travel Pack 3 Small because it takes key features from its bigger sibling and puts them in a smaller package built for shorter trips and smaller frames.

Why We Like It

- It has just-right organization and open space
- Compression straps don’t impede access to the compartments, so it’s easy to grab gear quickly

What You Should Know

- Magnetic compression buckles sometimes come undone on their own
- There isn’t a huge false bottom to the laptop compartment, which impacts tech protection—more of a nitpick, as we’ve found it’s still reliable

If you’re looking for a durable pack that can handle any adventure you throw at it, look no further than the GORUCK GR2. It’s a little on the heavier side (courtesy of the CORDURA® Nylon and beefy YKK zippers), though we think the durability is worth the weight sacrifice. We’ve fit its boxy shape under the seat in front of us on some budget airlines, which is great if you’re trying to avoid fees while you travel the world (who isn’t?). In fact, this is the bag that Pack Hacker’s founder Tom used to travel the world for over 2 years. Though the organization inside is simple, there’s still plenty of room for packing cubes and pouches. It’s covered in PALS webbing, which we use to attach MOLLE accessories like pouches that we fill with items we want quick access to on the plane or while exploring. The customization options mean you can make the pack fit your specific needs, whether it’s Digital Nomad travel or a weekend fishing trip with your family. Plus, GORUCK has one of the best lifetime warranties in the business and a killer repair program, so if you have any issues, contact their customer service.

Why We Like It

- The external fabrics are some of the most durable we’ve seen—it even held up when we dragged it behind a car
- Plenty of PALS webbing, so it’s easy to add modular MOLLE attachments to customize your organization

What You Should Know

- The rugged materials and hardware add a lot of weight to the pack
- It has a tactical look and feel that’s hard to disguise if that’s not your style

Since this is a bag from Peak Design, it has some great camera features. There are plenty of attachment points inside and out for your photography gear. However, it’s an excellent travel backpack even if you don’t take a DSLR on every trip, thanks to its clean lines and clever design. The main compartment has well-structured sides and opens clamshell to make it easy to pack, although we’ve noticed that anything we store on the bottom blocks built-in mesh pockets, so you’ll have to choose between gear storage or smooth access. The mesh pockets are useful for gear you won’t need until you reach your destination, and side pockets help you get to things that you need as you travel, like your passport. There’s a well-padded sleeve for your laptop, and the front pocket has organizational features for tiny gear, which is great for getting to your essentials while sitting in your airplane seat or waiting at the gate. For times when you’re not packing as much, the compression system does an excellent job at holding gear in place. If you find that 30L isn’t enough space or you want to bring more of your photo kit, we recommend the Peak Design Travel Backpack, which has the same great features and added room for your gear. It expands to 45 liters if needed and has compression snaps to lock it down if you want to use it as a daypack.
Why We Like It - It has a comfortable harness system, with a sternum strap that won’t slip out of place - Structured sides and clamshell opening create a bucket shape that’s easy to load with gear What You Should Know - ID pocket on the back panel is easy to overlook, so a stranger may not see it if they find your misplaced bag - Some main compartment pockets aren’t as accessible as we’d like, which slows you down when searching for gear Going with a lower-capacity pack reduces size and weight, meaning you can even use it as a daypack once you arrive at your destination. However, that doesn’t mean it has to be short on features. Enter the TOM BIHN Synik 30. It’s a smaller version of the Synapse and features the same top-notch and customizable organization we’re used to seeing from TOM BIHN. That means it has multiple exterior pockets for storing gear and numerous attachment points on the interior for attaching modular pouches. While we like the ballistic nylon options because they’re sleek and durable, you can opt for a different material if you want (TOM BIHN has a ton to choose from). The style won’t be for everyone, and its round shape can make it more challenging to pack some packing cubes and pouches, causing you to lose out on some storage space in the corners (or lack thereof). However, once you’re used to the internal organization, this is one of the smartest-designed internal layouts we’ve seen in a travel backpack. Why We Like It - The internal organization is great for both travel and daily carry - Plenty of options to add modular pouches to customize gear organization What You Should Know - Has a heritage look that may not be everyone’s taste - Rounded edges can make it harder to pack with some organizers and pouches The Tortuga Travel Backpack 40L has a thickly padded harness system, from the shoulder straps to the hip belt and the back panel, along with vertical height adjustment and load lifters for extra support. All of these features together make for a comfortable carry even when the backpack is completely full. There are plenty of places to pack your gear, including water bottle pockets on each side, a top pocket for small items like keys, a front pocket for wide but flat items, and smaller pockets on the hip belt. You can stash your tech accessories in a well-organized admin panel, and there’s a dedicated laptop compartment as well. It includes a zippered pocket for accessories, which we love for the trips where we don’t need to bring a separate tech pouch. The large bucket space of the main compartment is simple, with no dividers to get in the way. This means you can pack however you please, whether you load up on packing cubes or fold your clothing into neat piles—though we recommend packing cubes so that things don’t get too jostled. If you’re vehemently against cubes (an interesting hill to die on, but we get it), a mesh compartment hinges along the main compartment opening for some built-in segmentation and is deep enough to hold a single layer of thick clothing or a couple layers of thinner items. 
Why We Like It

- Structured material holds its shape regardless of how much gear is inside
- Simple organization in other pockets while the main compartment is open to organize as you see fit

What You Should Know

- Can be slow to access the large mesh pocket in the main compartment because it opens toward the inside of the pack, not the outside
- The harness system can feel a bit overkill for a bag of this size if it’s not full

The Techonaut 30 is a classic example of what makes a TOM BIHN bag great. There are a ton of durable fabric and colorway options—we like the 525D ballistic nylon because of its strength-to-weight ratio, though there are stronger and lighter-weight options available depending on your preferences. Plus, it has clever, functional organization that’s easy to load with all your gear. When we need to keep even more small items in check, we add TOM BIHN pouches to the included O-rings around the bag (we’re partial to the Ghost Whale pouches because of their size, but almost any will work). You can carry the Techonaut 30 like a backpack, briefcase, or messenger bag, although you’ll have to get a separate strap to carry it as a messenger. We prefer backpack mode because the back panel is supportive even when all 30 liters are fully packed. Inside, it has a variety of pockets, including an integrated water bottle pocket and two quick-grab pockets, which work in either horizontal or vertical orientation, meaning you can store gear based on the way you’re carrying the bag. Briefcase mode? Use the top pockets. Backpack? Go for the sides. However, if you need to carry some hydration, we find that the integrated water bottle pocket can cut into the main compartment, so you’ll have to trade some storage space. Though the main and bottom compartments are separated, you can expand the former via a collapsible floor, which is handy if you need a bit of flexibility with the available space. This is great if you like traveling with shoes but don’t want to buy a separate shoe pouch.

Why We Like It

- Bottom pocket unzips to merge with the main compartment for even more storage space
- It can be carried three ways, and all of them are comfortable

What You Should Know

- It’s tricky to see inside the top pocket because of its sideways opening
- The dedicated shoe pocket struggles to fit large shoes, which isn’t ideal for those with large feet

At less than 2 pounds, the Dragonfly is one of the lightest travel backpacks we’ve tested (and we’ve tested hundreds), yet it’s not lacking in features. The reason it’s so light is the Ultra 800™ Fabric. It’s 15 times stronger than steel by weight, twice as abrasion-resistant as nylons of the same denier, and waterproof to 200 psi, so you don’t have to worry about a rainstorm ruining your gear. The bag also has quality YKK AquaGuard zippers and Duraflex hardware. There’s no ULA logo on the front, and we appreciate the minimalist aesthetic. As for gear storage, there’s a built-in carabiner and leash for your keys in the top quick-access pocket, and there are both internal and external UltraStretch™ mesh pockets to organize your gear, including large water bottle pockets. In fact, they’re so large that they can even hold things like a travel tripod. Inside is a sleeve that can hold up to a 15-inch laptop or a hydration bladder, depending on what you plan to do that day. Once you’re all loaded up, internal compression straps help to hold your clothing or packing cubes in place.
However, you sacrifice a little in the harness system in the name of weight. A sturdy back panel has thin padding with aeration, and the shoulder straps have similar aeration but not as much padding. The sternum strap is also thin but helps take a little weight off when the pack is full. Plus, there are a lot of attachment loops all over the pack, which is great for modularity.

Why We Like It

- The oversized bottle pockets fit a variety of bulky gear and up to 64-ounce bottles
- It has quite a spacious main compartment

What You Should Know

- It can be hard to zip when fully packed
- The shoulder straps aren’t overly padded, which may not be suited for all body types

The VX21 X-Pac material on the Able Carry Max Backpack gives it a sporty look that we like, and there’s also 1000D CORDURA® nylon on the underside for durability. You won’t have to worry about the sturdiness of this bag, as it’s well-constructed, with reinforced stitching in key areas. There is plenty of room in the laptop compartment for up to a 17-inch computer and organization for your tech gear. Loops and strips of webbing around the bag give you the flexibility to pack it however you wish, and there are two quick-grab pockets for gear you want to get at as you travel. You can even get a third quick-access spot if you use the internal bottle pocket instead of the one outside the bag for hydration. The Max Backpack is really comfortable to carry and easy to adjust. The shoulder straps have dense padding and breathable mesh undersides, with X-Pac on top for durability and style. While the tablet pocket is a bit shallow, we don’t have too many problems during regular use.

Why We Like It

- It’s easy to customize organization thanks to webbing and loop attachment points
- The durable fabrics are held together with equally-sturdy stitching

What You Should Know

- The X-Pac material may not suit everyone, though you can always opt for CORDURA® nylon
- A rear pocket is a bit narrow and tricky to access

This durable bag is made with recycled and bluesign® approved polyester and a PFAS-free DWR coating, which is great if you’re an eco-friendly traveler. It has a bit of an outdoorsy look, which is to be expected from Osprey. However, the external storage is hard to beat if you’re the adventurous type. A large front stash pocket holds a water bottle or damp gear like a rain jacket or towel, and there’s also a decent-sized top pocket for smaller accessories. We like that it’s big enough to tuck your 3-1-1 bag inside to keep it within reach through the security line at the airport. The main compartment opens fully clamshell and is easy to pack since you can see all the space at once. A couple of mesh pockets inside help organize your gear, and compression straps hold clothing or packing cubes in place as you travel. The large laptop compartment is accessible from the outside of the bag, so you can get some work done as you wait for the plane to board. What’s really great, though, is how comfortable you’ll be while carrying this bag. The breathable mesh back panel keeps things airy, and the harness shifts higher or lower so you can adjust it to your height and torso length. If you have a more petite frame but want to carry the same amount of gear, try the Osprey Fairview 40. Instead of coming straight over your shoulders, its straps curve in and around, making the bag easier to carry for more petite users of any gender.
Why We Like It

- The harness system is comfy even when the pack is fully loaded
- An ample-sized main compartment makes this a great pick for one bag travel

What You Should Know

- You can’t remove the bulky hip belt even if you don’t need it
- There aren’t any dedicated bottle pockets, and the front pocket can be tight for larger bottles

For one bag travel, the 35L Minaal Carry-On 3.0 is aesthetically sleek and has smart features to improve quality of life on your trip. If you carry a lot of tech, you may appreciate that the laptop compartment lies completely flat, making it easy to load and access on the go. It has a suspended laptop sleeve that you can adjust to different sizes, so your 13-inch MacBook Air isn’t drowning in a pocket designed for a big gaming computer. Plus, the shoulder straps hide away behind a zipping panel, which we find makes it easy to slide this backpack into an overhead bin. The main compartment opens clamshell for easy packing and includes some built-in organization. However, unlike most other backpacks, you load the bag into the “scoop” section (the front of the bag) instead of the back. This takes a little getting used to, though it’s easy to use once you do. While we recommend taking advantage of packing cubes for most of your gear, there is a large mesh pocket at the top, along with a nylon pouch below it where you can pack shoes. Two external pockets give you quick access to your wallet, phone, and small accessories, and there’s also a security pocket behind the back panel for your passport. Just be careful when using the water bottle pocket, as bottles can slip out even when the bungee is tight.

Why We Like It

- It’s great to be able to securely carry devices of different sizes in the adjustable sleeve
- Excellent accessibility since both compartments open fully clamshell

What You Should Know

- You have to pack it “scoop side down,” which can get unwieldy without packing cubes
- The bungee designed to hold a bottle in place doesn’t always work as intended, and some bottles slip out

Some packs are designed with a specific use in mind, and others are designed to be as versatile as possible. Every once in a while, you’ll come across a bag that does both (and does it well). The features on EVERGOODS’ Civic Travel Bag 35L, or CTB35, make it one of the most versatile travel backpacks we’ve seen on the market. There’s plenty of organization to choose from without going over the top, meaning there’s a spot for large and small gear alike. The main compartment has ample space, so we’re able to fit everything from a camera cube to bulky shoes inside, and it even has a few zippered pockets for small items like tech. As for external storage, there’s a built-in yoke pocket on the top and a vertical zippered pocket on the front that we like to use as a dump pocket for our phone, wallet, keys, and more while going through airport security. Plus, there’s an easily accessible laptop compartment if you work on the go. The harness system is contoured nicely, which makes this backpack incredibly comfortable to wear even when fully packed, so we have no problem carrying it all day long. We like the 35-liter option because it’s big enough to work for long trips. However, if you’re into the organization but want something smaller, it also comes in a 26-liter size (which we like just as much).
Why We Like It

- The harness is well-padded and comfortable even when the pack is completely full of gear
- It strikes a balance between built-in organization and empty space, so you’re not pigeonholed into packing your gear a specific way

What You Should Know

- Since the organization is so minimal, you’ll need to find a way to manage things like clothing—we recommend utilizing packing cubes
- We find it difficult to stow the hip belt without it twisting a bit, so it takes a bit of finesse to get right

We like the Topo Designs Global Travel Bag so much that we chose it for the first iteration of our Vacation Packing List. The large size makes sense because you can fit more gear; however, there’s a smaller 30-liter size that we find is better for smaller-framed folks and people who want to save space. Why do we like it so much? We’re happy you asked! These packs have built-in organization options inside the main compartment, including a divider with zippered pockets that we use to stow smaller items like socks and underwear, which is also great for tech or miscellaneous gear. There’s also a large second compartment, a dedicated laptop compartment, and a quick-grab pocket on the front that’s handy for gear you’ll need throughout the day. While all of this organization is great, it’s worth mentioning that all of these zippered pockets are pretty shallow, so you’ll have to pack strategically to ensure your bag will zip up when everything is loaded in. On the plus side, the liner is brightly colored, which makes finding your stuff that much easier! If all of that space isn’t enough for you, there are attachment points on the front of the bag where you can attach an additional daypack. The harness system isn’t our favorite because there’s no frame sheet to add structure and it can feel pretty heavy when it’s all packed out, but the hip belt does a good job taking some weight off your shoulders.

Why We Like It

- There’s ample organization to segment your gear, making it easier to find
- The bright liner material adds a ton of visibility when we’re looking for our stuff in the multiple zippered pockets

What You Should Know

- Can be difficult to slide a laptop into the dedicated compartment when the bag is fully packed because of how it starts to bulge
- It’s not the most comfortable bag we’ve worn for extended periods because the back panel lacks significant structure

While some travel backpacks fit best in an urban setting, the Allpa 35L Travel Pack works as a hiking or work bag as well as a travel pack. However, just because it can serve other purposes doesn’t mean it’s lacking in the travel department. It has a refined design and ample space that make it easy to pack for vacation, with mesh dividers and organizers inside to help you keep your gear sorted. While the exterior materials aren’t very structured, you’re unlikely to reach for this large of a bag unless you plan to pack it out, so it’s not always noticeable. The polyester is coated with TPU for water resistance, so your gear is safe as you walk in nearly any weather. If you’re getting started on your journey into one bag travel, you can get the Allpa with an accessory bundle that includes mesh laundry bags, a nylon shoe bag, and a snap-on mesh water bottle sleeve. You also have the option to add on Cotopaxi’s Batac Daypack, so you can have a complete travel system ready with just one click.
And in case you needed another reason to consider Cotopaxi, you should know that their bags are made in the Philippines in a factory committed to fair labor and environmentally-sound practices, so you can feel good about your purchase, too.

Why We Like It

- It’s a ruggedly durable backpack if you’re a more adventurous traveler
- The bag feels roomy and has conveniently placed pockets for small gear storage

What You Should Know

- Hip belt isn’t removable if it doesn’t fit, and the pockets often feel too snug when wearing the bag
- It’s on the heavy side for its size

Decisions, decisions… Navigating the not-so-clear world of travel packs.

### Video Guide Part 2: Form

*Feel free to watch this guide section in video format. We’ll keep the written content on this page up to date.*

*Be sure to subscribe to Pack Hacker on YouTube and never miss a video. We also have these videos in a series playlist format on YouTube so you can watch them more easily.*

## Best Backpack Size & Weight for Carry-On Air Travel

We favor smaller bags that fit in the overhead bin. Yes, it can be a challenge to fit your entire life into a 40L bag, but wow, is it worth it! Trust us—you can fit your entire life into an 18L backpack if you’re disciplined, and we highly recommend staying under 50L for one bag travel. Life is just easier with a smaller & lighter backpack. If you want to cheat a bit and get some extra space, you can also go the sling bag on the front, backpack on the back route. Airlines can get pretty stingy about the amount of weight you can bring on board. It’s essential to make sure your backpack itself isn’t too heavy, or you won’t be able to fit in as much clothing and other travel gear. We’re all for less clothing and gear, but we are not for getting hit with extra fees if your carry-on is overweight. Starting out with a bag that’s already too heavy before you’ve packed it is just setting yourself up for failure! We calculate a carry-on compliance score for every travel backpack reviewed on our site using its dimensions and data we collect from most airlines worldwide. (A simplified sketch of this idea appears a bit further down, just after the Pack’s Exterior Profile section.)

### True Volume

It’s easy to get caught up in all this talk around liters of a backpack. There’s really no “industry standard” around this, and the liter size of a pack can vary from brand to brand. What’s more important is the “True Volume” of a backpack and how usable the space is. Some weird, trapezoid-shaped backpack will certainly be more of a challenge than something with a larger, rectangular compartment. The thickness and flexibility of the material matter as well. A thin, strong material will leave you with more space inside of a backpack than something with thick padding in the liner. However, a rigid material—Dyneema, for instance—doesn’t have much additional flex and isn’t very forgiving when you’re trying to pack your bag to the brim. The efficiency of space can make or break the usefulness of a pack.

### Pack’s Exterior Profile

The slimness of a pack can help out quite a bit. Not only does it seem less heavy because the weight is close to your back, but it has the added benefit of giving you a smaller, slimmer form factor. With this, you won’t be taking up too much room on public transit or smacking people in the face when you’re boarding the airplane—it’ll be a better experience for you and everyone around you.

**PRO TIP:** Backpacks that offer a more “square” shape tend to hold more than bags of other shapes, but sometimes that comes with an aesthetic penalty (unless you’re into a box on your back).
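To make the compliance-score idea from earlier in this section concrete, here’s a toy sketch of how such a score could be computed. The airline limits and the fits-all-dimensions check below are illustrative assumptions, not our actual methodology or data:

```python
# Toy sketch of a carry-on compliance score -- illustrative only, not the
# actual Pack Hacker formula. The size limits below are placeholder values.
AIRLINE_LIMITS_CM = {
    "Airline A": (56, 36, 23),
    "Airline B": (55, 40, 20),
    "Airline C": (56, 45, 25),
}

def compliance_score(bag_cm):
    """Return the fraction of listed airlines whose limits the bag fits within."""
    bag = sorted(bag_cm, reverse=True)  # compare longest side to longest limit, etc.
    fits = sum(
        all(b <= lim for b, lim in zip(bag, sorted(limits, reverse=True)))
        for limits in AIRLINE_LIMITS_CM.values()
    )
    return fits / len(AIRLINE_LIMITS_CM)

print(compliance_score((55, 35, 20)))  # fits all three placeholder limits -> 1.0
```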
### Max Legal Carry-On

Otherwise known as “MLC,” Max Legal Carry-On size covers the largest acceptable backpack size for carrying on most airlines. Make sure to check with your airline before arriving at the airport, though—size limits can vary based on the airline you’re flying with. The Peak Design Travel Backpack is a well-executed travel bag from a company with an excellent track record of bringing innovative and unique designs to the backpack world. This maximum legal carry-on handles one bag travel, photography, or digital nomading with ease—and it will be a joy to use for any of those activities.

### Top-loading vs Panel-loading (Clamshell) Backpacks

There’s a big debate around clamshell and top-loading packs. We’re personally a fan of clamshell for one-bag travel, as it gives you more open space to work with. Clamshell functions more like a suitcase and opens literally like a clam. You can easily open it up flat and see everything inside, so it tends to be easier to organize all your travel gear. The Able Carry Max Backpack is a clamshell-style backpack that opens to give you easy access to a spacious main compartment—this works great for packing cubes or rolled up clothing—whichever you prefer! Also, it’s got a large but low-profile water bottle pocket. Top-loading packs are great if you’re on a long, multi-day trek or participating in other outdoor-focused activities as there’s no main zipper that can fail you (which could be catastrophic if you’re halfway up Mt. Everest). The Thule Subterra 34L is a top-loading backpack with a roll-top opening. A top loader’s usual pitfalls are fixed by an easy-to-access side zip that allows entry to the main compartment. This zip comes in handy when you don’t have time to mess around with the roll top, or you want to grab something located at the bottom of the bag.

### Weather Resistance

Weather resistance is another key component to consider for one bag travel. With all your tech gear and expensive possessions in your pack, you don’t want it to get wet. We look for packs with some great weather resistance that’ll easily get you through light rain and ideally through 20 minutes of a monsoon in Southeast Asia. There’s a big difference between waterproof and water-resistant bags. We’re mainly focused on the latter, as this will be plenty in most situations. Sure, waterproof is more secure, but unless you’re leaving your pack outside in a torrential downpour for hours on end or plan to go snorkeling with your laptop on your back, there’s no need for that extra tech. The Mission Workshop Fitzroy VX utilizes weatherproof materials and weather-resistant zippers. We’ve found it to hold up decently in a downpour. Even if you’re caught in a pretty torrential rainstorm, you should be okay with the PET waterproof membrane. Got something that needs some additional weatherproofing? Consider picking up a DAKA Pouch. It’ll give your valuables that extra layer of protection without requiring you to purchase an entirely waterproof bag—plus, these pouches double as organizers, separating your precious gear from the rest of your loadout with some additional protection to boot. It’s a win-win.

### Durability and Quality

Whether you’re traveling for a week, a month, or a year plus, your backpack is pretty much your home, so you don’t want it to break. Take it from us—the last thing you want is to find out that you lost your phone charger because your zipper broke during the journey to your next accommodation.
Investing in a good backpack will prevent loss and damage to your gear, and higher quality products will last for several years. It can be a challenge to tell if a backpack is durable right out of the box, which is why we test bags as much as possible to notice any faults. Higher durability usually means higher weight, but not always. Here are a couple of key considerations we’ve found. When it comes to durability, the Topo Designs Travel Bag 40L doesn’t mess around. The 1000D nylon, beefy YKK #10 zippers, and simplistic design all come together to create a bag that won’t let you down.

### Zippers

YKK zippers are some of the best around, so naturally, the best travel backpack brands tend to use them. They’re super strong and have different weights depending on the area of the pack they’re used. A YKK #10 will keep a main compartment secure, whereas a YKK #5 may be suited for smaller side pockets that don’t receive as much use or tension. YKK is obsessed with quality, and they do everything in-house. They smelt their own brass, forge their own zipper teeth, and even make the machines that make their zippers and the cardboard boxes they ship in! Needless to say, you probably won’t end up with any broken zippers with YKK on your side. YKK zippers also account for about half of all zippers in the world, so that says something. Although less popular, RiRi zippers are pretty great too. Both RiRi and YKK are superior to any other zipper made in-house by a bag manufacturer, and Zoom Zippers are climbing up on that list as well, though we still find intermittent issues with them.

### Backpack Fabric and Material

There are a ton of fabrics and materials out there, too. When looking at fabrics, you’ll often see a number followed by a D—250D, 950D, 1500D, etc. The D stands for denier, a term that measures the thickness and weight of the yarn a fabric is woven from. The formal definition is the mass (in grams) per 9,000 meters of thread; if 9,000 meters of a yarn weighs 450 grams, for example, that yarn is 450D. So lightweight fabrics (like silk) have a very low denier, while heavier fabrics have a higher denier. (There’s a quick runnable version of this arithmetic just after the Canvas section below.) When it comes to backpacks, a higher denier is not necessarily better. In general, a higher denier will be more durable (depending on the fabric & weave) but also heavier. While the denier can tell you the weight and thickness of a material, the type of material, weave, and manufacturing involved will ultimately tell you more about its strength and durability. Here are some materials you’ll come across when selecting your pack, along with the pros and cons of each one.

#### Ripstop Nylon

Pretty close in property to standard nylon, “ripstop” nylon has a unique square weave that prevents further tearing from happening after a puncture. It has an incredibly high strength-to-weight ratio, and, as the name implies, it is highly resistant to rips and tears. The reason it’s so strong is that additional fibers are sewn into the weave. Ripstop Nylon was developed in World War II as a more robust alternative to silk parachutes and is currently used in ejector seat parachutes for fighter pilots!

#### Ballistic Nylon

Ballistic Nylon refers to any nylon fabric with a “ballistic weave,” a variation on the simple basketweave. This gives it excellent tensile and tear strength—especially when layered—and makes it heavier than a lot of other materials. Keep in mind that ballistic nylon almost exclusively comes in black. Why is it called ballistic?
It was initially used on flak jackets for World War II airmen to protect them from artillery-shell and bullet fragments. PSA: We do not recommend the use of backpacks for protection in war zones.

#### CORDURA® Nylon

CORDURA® is not a fabric in and of itself—it is a brand covering a whole host of different materials, from cotton to nylon to polyester. What they do is take fabric from various mills, inspect it to make sure it’s up to their standards, and then slap that CORDURA® tag on it. Yes, it’s a bit deceiving, but they do put out some high-quality stuff. You’ll almost always see a “®” next to “CORDURA” (in all caps) because #branding and #lawyers.

#### Kodra Nylon

Kodra is virtually synonymous with CORDURA® but made in Korea. Peak Design opted for this in V1 of their Everyday Backpack.

#### Polyester

Polyester is one of the most common fabrics on the planet. It’s made from plastic fibers, and you can find it pretty much everywhere—in clothing, pillows, seat belts, upholstery, rope, the list goes on… Oh, and backpacks. Polyester is not the most durable fabric, so you’ll usually find it on lower-end packs (think of those classic Jansport backpacks everyone had in high school). It’s really not the most suitable choice for a travel pack, as it just won’t hold up through the years. Besides lacking in durability, polyester is also fairly heavy compared to other fabrics like nylon. If you’re looking for a low-budget day pack, polyester is fine. If you’re looking for something more serious, stay away from it.

#### Polypropylene

Polypropylene is a polymer that is used to make fabrics. This stuff is seriously everywhere—it is the world’s second most widely produced synthetic plastic! It’s used to make ropes, carpets, labels, plastic lids on tic-tac containers, plastic chairs, long underwear…basically, if you see something made of plastic, there’s a solid chance there’s some polypropylene in it. You’ll find it mostly in minor backpack components, but it’s also used to make drawstring bags and totes like the ones that are handed out for free at a college fair or festival. Polypropylene fabric has a few things going for it. It’s cheap, it’s a good insulator because it doesn’t transfer heat very well, and it won’t absorb water since it’s hydrophobic. The major problem with polypropylene is that it is not very UV resistant. If it’s repeatedly exposed to sunlight, the fabric will fade and break down over time. This is not great for backpacks. You may, however, see polypropylene used as a liner on the inside of some packs, as it won’t be exposed to UV light there and adds some additional protection.

#### Canvas

You could say that canvas is the OG backpack material. Back in the day, canvas was just about the only thing you would use for a “backpack,” outside of maybe a burlap sack thrown over your shoulder. In World War II, GIs carried all their equipment around in canvas packs and slept in canvas tents. Canvas is very thick and sturdy and was historically made from cotton, linen, or hemp coated in wax for waterproofing. Today, canvas tends to be made from things like nylon and polyester. Most modern backpack companies shy away from canvas because it’s usually heavy, not overly water-resistant, and easily damaged by abrasion. If you’re looking for a canvas one-bag travel pack, you’re not going to find much out there. However, if you want the nostalgia factor, you can still find a bespoke canvas bag to satisfy that.
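As promised back in the fabric intro, here’s the denier arithmetic as a tiny runnable sketch (the example numbers are purely illustrative):

```python
# Denier, per the definition above: the mass in grams of 9,000 meters of yarn.
def denier(grams_per_meter):
    """Convert a yarn's linear density (in g/m) to denier."""
    return grams_per_meter * 9_000

print(denier(0.05))     # a yarn weighing 0.05 g/m works out to 450D
print(denier(0.00011))  # silk, at roughly 0.00011 g/m, is about 1D
```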
#### Leather

We may need to scrap our statement on canvas because leather is arguably as OG as it gets. Its use has been traced back as far as Ancient Greece and Rome! Like canvas, you’re not going to see many travel packs made of leather. While a leather bag can make for an excellent, stylish daypack, it’s not ideal for a long-term travel pack, mainly because of its weight. There is also a lot of potential care involved. Between protective oils and various cleaning techniques, it can be a hassle to deal with if you’re on the move. There are three grades of leather—genuine, top-grain, and full-grain. Contrary to popular belief, genuine is the lowest grade of leather while full-grain is the highest. Full-grain is used for heavy-duty use-cases like weapon holsters and work belts, so if you’re dead set on a leather pack, we’d recommend looking for full-grain leather. Ideally, you’ll want to find something thin enough to not be overly heavy while still thick enough to ensure durability.

**PSA:** Try to avoid bonded leather, patent leather, and corrected grain leather. These are all low-quality but can pass as the good stuff if you’re unaware of what to look for. Bonded leather is probably the worst—it’s a bunch of leather scraps that have been glued together—but they are all far less than ideal.

#### Sailcloth

The actual material is called ultra-high-molecular-weight polyethylene… but most know it as sailcloth (yep, the same material used on a sailboat). A relatively rare material for backpacks, sailcloth has been embraced by brands like Tortuga for its lightweight and water-resistant properties. It is by far the most lightweight and waterproof material on our list (no need for DWR or liners), but it does have some drawbacks. It’s stiff and crinkly with zero stretch, which can cause problems if you’re trying to utilize every nook and cranny of your pack. It isn’t quite as indestructible as some other materials listed, but it’s reasonably durable and can be patched. It also tends to be one of the most expensive backpack materials out there.

#### Dyneema® Composite Fabric

In May 2015, Dyneema purchased Cubic Tech, the creator and manufacturer of Cuben Fiber, and rebranded it as Dyneema® Composite Fabrics. So, Dyneema® Composite Fabrics = Cuben Fiber. This stuff was initially designed for high-tech sails on racing yachts because it is ridiculously light and robust. As such, it has been adopted wholeheartedly by the ultralight backpacking community. It’s sort of like the carbon fiber of the backpack world—high-tech, super strong, super light, and…super expensive. While Dyneema® Composite Fabric is popular within the ultralight backpacking community, it has yet to become commonplace in the one-bag travel scene. That being said, if you do see Dyneema® Composite Fabric, you should know that you’re getting some of the best stuff around.

#### TPU

Thermoplastic polyurethane—TPU for short—is a polymer used to add strength to a material, either through a manufacturing process or coating. You’ll recognize it on products like inflatable rafts, phone cases, wire cables, and footwear. Think stuff that needs to be as durable as possible to avoid things going south for the user. It easily sheds water and oil, resists abrasions, and won’t crack in high or low temps, making it ideal for frequent outdoor use. Unlike polypropylene, TPU is UV-resistant and won’t be subject to the same amount of fading over time.
If you’re the type of traveler who wants extra peace of mind on the go, you’ll want to keep an eye out for TPU because of the extra strength it adds to a pack, but we wouldn’t consider a lack of TPU a deal-breaker.

#### X-Pac

X-Pac is not so much a fabric as it is a bunch of fabrics smooshed together. With the help of lamination technology, it combines complementary materials to make an overall higher-performing product. Though there are variations in denier and waterproofing, it’s most commonly made up of a nylon face for durability, polyester mesh for strength, and waterproof film that won’t disappear over time. Like Dyneema® Composite Fabrics, it was inspired by the efficiency of sailcloth but is a less costly option that provides a similar level of ultralight performance. It holds its shape over time, won’t fade with UV exposure, and easily sheds moisture, making it great for outdoor enthusiasts who hike and bike with their pack on the regular. However, it may be a bit overkill for casual travelers unless you’re looking for a backpack for epic mountain climbing adventures. Ultimately, the production process and design will dictate whether your gear will stay together. If a bag is made with 1000D CORDURA®, but it doesn’t have good zippers to match, it doesn’t matter how good the fabric is. Look for brands that proudly back their product with generous warranties, like GORUCK and their “SCARS warranty” or Patagonia and their “Ironclad Guarantee.” These brands know they make quality products, so they’re happy to back it up. If a brand offers no warranty or a short warranty, there’s probably a financial reason for that, and the quality may not be as high. We’re all about buying quality pieces that last versus something that’s going to need repair or replacement year after year. Whether you’re hopping on a plane or navigating city streets, you need a backpack that can hold up.

### Video Guide Part 3: Function

*Feel free to watch this guide section in video format. We’ll keep the written content on this page up to date.*

*Be sure to subscribe to Pack Hacker on YouTube and never miss a video. We also have these videos in a series playlist format on YouTube so you can watch them more easily.*

## Best Backpack for Comfortable Wear and Extended Travel

Comfort is a big deal when it comes to one-bag travel—especially if you plan to carry the bag around with you for hours on end. You’ll want a high-quality harness that works with the shape of your body. When selecting a bag, it’s crucial to take your height and body type into consideration. Fit matters most for hiking backpacks where you’re carrying a ton of gear and somewhat less for smaller, one-bag travel packs, but that doesn’t mean you shouldn’t think about it before you make a purchase. A backpack suited for someone who’s 6’5″ and 250 pounds probably isn’t the best travel backpack for someone who’s 5’3″ and 140 pounds. Buying something that doesn’t fit your frame correctly will make for some seriously uncomfortable travel.

### Men’s Focused Fit Vs Women’s Focused Fit

Some backpacks are only available with a “one size fits all” harness system, but there is an ever-increasing number of women’s focused fit and men’s focused fit travel backpacks on the market. For example, the Thule Landmark 40L, REI Ruckpack 40, and Deuter AViANT Carry On Pro 36 are all available in two different fits. The differences are subtle but have a big impact on how comfortable the harness system is on your frame.
Compared to a men’s focused fit, a women’s focused fit backpack will typically feature:

- Thinner shoulder straps for a narrower frame
- Shorter shoulder straps that position the bag higher up the back
- Smaller hip belt with a more pronounced curve

### Backpack Straps

You’ll want to look for bags with high-quality straps that work for your body type. A mismatch here could lead to an uncomfortable carry, even with only a little weight inside. Even though the GlobeRider45 has the functionality and looks of a travel backpack, it carries more like a daypack. Its shoulder straps feature dense padding that curves and falls naturally to the body. A relatively high top area does give it a very slight hiking backpack feel, but it’s an overall tameable bag to travel with, considering its 45-liter storage capacity. The thickness of straps doesn’t necessarily matter. Thinner straps that use high-quality foam may be more comfortable than thicker, bulkier straps. If you’re concerned with weight, look for bags that include load lifters – these are the adjusters that appear at the top of the straps. This concept is borrowed from larger hiking backpacks and does wonders for fitting the bag well to your back with different loads. Some straps swivel and pivot to cater to different shoulder widths and make it easier to quickly flip the pack around to access the goods you’ve got inside.

### Hip Belts

We’re middle-of-the-road on hip belts for one-bag travel backpacks. They can help a ton if you’ve got a heavier load or plan to carry your pack for long stretches but aren’t necessary if you pack minimally in a smaller pack. A good hip belt should be comfortable and secure without becoming too cumbersome. There are few things worse than hitting people with your bulky hip belt while walking down the aisle of an airplane. We’d recommend taking a look at travel backpacks that feature a detachable or hideable hip belt, so you don’t have to use it when you don’t need to.

### Sternum Straps

Nearly all travel backpacks include a sternum strap. They’re designed to distribute some weight away from your shoulders and secure the shoulder straps across your chest. While sternum straps are all pretty similar across the board, there are a couple of things we’d recommend looking out for. First, some will feature an elasticated portion that allows the strap to flex with your body as you walk. We’re big fans of these. Second, some sternum straps can be detached, leaving them vulnerable to falling off when not in use. We’re not kidding; this has happened to us on multiple occasions. Not good, especially when you’re traveling halfway around the world in remote locations! A detachable sternum strap is great when you don’t always need to use one, and it makes adjusting the height easy. Just make sure it’s secure and adequately anchored to the shoulder straps.

### Back Panel

A well-designed back panel can make things much more comfortable. Although it’s hard to avoid the old sweaty back during extended periods of wear in hotter climates, well-ventilated mesh and foam can help with this. A curved frame can help with ergonomics and ventilation, but we don’t see this on many travel-focused backpacks. Sometimes, it seems like overkill.

### How Do You Pack the Thing?

With all these fancy features, it’s essential to consider how you should use them and how you pack your bag. Generally speaking, you want to load the heaviest items closest to your back.
This’ll ensure the heaviest bits of your bag are the closest to your center of gravity, pulling you down less from the back of the bag. If you’ve got all the features mentioned above, you want to strap and tighten your hip belt first, then adjust the shoulder straps, then tighten the load lifter straps (the straps on top) to a 45° angle, and finally, adjust and tighten the sternum strap. The Heimplanet Travel Pack 34L (V2) has a horseshoe zipper at the top front of the pack, which opens up to allow you to reach into the main compartment and grab essential items rather than opening up the full clamshell. It also features independent compartments and pockets, which are great for packing to the absolute limits. Check out the smaller 28L version, too.

### Modular Backpack System

If you want more options for customization, check out modular gear. To put it simply, this is gear that brands design to work with their bags. It lets you make a bag suit your preferences, adding and swapping parts as needed instead of trying to fit your gear into the organization already installed in your pack. Anyone who uses a bag with PALS webbing, for example, will tell you how convenient it is to have loops ready where they can stick MOLLE accessories. While PALS webbing and MOLLE attachments are among the better-known standards out there, brand-specific modularity and attachment systems also exist. In fact, some brands, like ALPAKA, TOM BIHN, Boundary Supply, and Roark, are known for it. We like to count how many O-rings we can find on each TOM BIHN bag we buy because that’s where we can clip the brand’s key leashes, admin pouches, packing cubes, and more. These pouches are great for carrying tiny travel accessories wherever we go. Sizes range from Super Mini, which can hold AirPods, chapstick, and similarly sized items, to A5, which is big enough for an A5-size notebook and pens. They’re made from scrap fabric, so you can feel good about saving them from the cutting room floor. They clip to the O-rings in a TOM BIHN bag or a loop on another backpack to save you from digging for small gear. ALPAKA’s HUB Ecosystem lets you swap your keys, sanitizer, card holder, and more between your bags. Pull the Hypalon tab to release the magnetic fastener to swap your gear, then attach it to different points throughout their bags or the HUB ModPanel hanging in your house. Then you’ll always be able to find your keys. Boundary Supply’s Prima System includes a 30L travel backpack, the Fieldspace admin panel, and the Verge Camera Case. The Fieldspace holds a tablet or small laptop, plus small accessories, docking to the laptop compartment with a magnet, so it’s removable if you don’t need it. The camera case is also fully customizable and can sit inside the pack, connect to its exterior, or be carried separately.

### Organization: Multiple Travel-Focused Features or One Big Compartment?

Some backpacks take the approach of having a massive inner compartment with no organization. This is great if you’re planning on using some packing cubes or compression sacks, but not so great if you want a little more internal organization out of the box. More things to consider: is there a dedicated place to put a pen or two for those pesky customs forms? Is it easy to grab? How about a dedicated laptop compartment (or, for that matter, a dedicated laptop bag)? This iteration of Tortuga’s travel backpack design gives more control to the user.
It has fewer organization options than its predecessors, but the extra space and weight savings can be better used for packing cubes and organizers. Those already invested in such accessories will find the wide and spacious main compartment easy to fill and navigate.

### Packing Cubes

Packing cubes can be a great addition to your luggage regardless of whether the bag is one massive compartment or has a couple of smaller pockets inside. Packing cubes allow you to organize clothing by type, outfit, clean versus dirty, and much more.

**PRO TIP:** We’ve found that bright interior liners can be convenient. In low-light scenarios, it can be difficult to find what you’re looking for in a dark backpack. A bright liner can “turn a light on” and help you find what you’re looking for.

The Osprey Transporter Global Carry-On’s size and shape make it easy to pack with cubes. Plus, the light gray interior makes it easy to find your gear.

### Compression and Expandability

If you’re going with one bag, versatility is essential. Ideally, your pack will adapt to however much you’ve packed inside it. Some packs even offer detachable daypacks, but they tend to be slightly larger in liters to justify the additional use of materials (extra zippers and extra straps). If you’re looking for a small travel daypack, consider some highly compressible bags from Matador. There won’t be any padding on these, but you could pair them with a padded field pocket from GORUCK or a padded laptop compartment if you want to cafe-hop and work for the day. If you are looking for a more padded daypack, a Mystery Ranch In and Out Packable Daypack, or something like a Fjallraven Kanken 13″ Laptop Backpack could work. At the end of the day, you’re packing another set of straps, padding, and zippers—all space and weight that’s being subtracted from your main pack. We like sticking to one bag whenever possible, and there are some bags out there with the right size and look that can be used as a daypack and for one bag travel. The Thule Aion 28L Backpack expands to 32L when you need more space for a trip. Use the extra room when you’re traveling, then empty it and compress it back down when you arrive at your destination to have a slimmer bag that can be used as a daypack while walking around. Another great option is the Osprey Farpoint 40, mentioned above. One of our team members has utilized the compression straps to carry his tripod while traveling to numerous countries.

**PRO TIP:** If you pick a travel backpack that has a slimmer profile and doesn’t protrude out as much in the back, the whole bag will feel lighter with the bulk of the weight closer to your center of gravity.

### Security Backpacks

Be on the lookout for packs with great security features. Are the zippers lockable with TSA-approved locks? Are there separate secret security compartments to place your passport and other valuables in hard-to-reach places? Is it made of a solid material to prevent the quick slash-and-grab? Are the outer pockets minimized to make it hard for a thief to unzip and grab what they want quickly? A lot of safety when traveling comes down to common sense and your own self-awareness, but there are a couple of pack features that can make your trips a little bit safer.

### Lockable Zippers & Anti-Theft Backpacks

Some packs offer lockable zippers, or special looped zipper pulls that can be configured to deter thieves.
Locking the zippers on your pack won’t turn it into an anti-theft backpack—someone can still take it or cut through the fabric—but it can help stop wrongdoers from quickly unzipping your bag for a quick grab, or make them move on to the next easily accessible bag on a train or bus. No backpack is impenetrable, though, and some of these features can be gimmicky—included just so the purchaser has some peace of mind—even if the benefit isn’t that great. Peak Design’s security features (example below) and Pacsafe’s Tough Zip put a lot of emphasis on that extra layer of security.

The zippers on the Peak Design Travel Backpack come with multiple locking features. This won’t necessarily deter all theft, but it’ll stop anyone from pulling the old unzip & grab trick, and it won’t be against TSA guidelines.

### Anti-Theft Backpack Materials

Some bags offer more robust fabric that naturally reinforces the bag. As we mentioned before, materials like Ballistic Nylon, CORDURA®, and others are super helpful with this. Some companies even include special mesh wiring, like Pacsafe’s eXomesh®, that can almost theft-proof your backpack, allowing you to lock it to a fixed object for added security. eXomesh® either comes lined inside the fabric or can be purchased separately for use with other backpacks.

For the type of traveling we do, we think this is a little paranoid and adds some weight plus another thing to carry. But depending on your situation, it could be helpful. Strolling through Tokyo? Probably not necessary. Heading to Barcelona for the first time? Yeah, we’ll take that extra layer of security.

### RFID Blockers (Identity Theft-Proof Backpacks)

We feel that having a bunch of RFID-blocking tech covering an entire backpack is overkill. Sure, it’ll stop folks from electronically scanning your passport, but if you’re concerned with this, you could get a special wallet or wrap your passport & cards in aluminum foil. Let’s face it—it’s much less effort for a thief to physically grab what they want from you than dicking around with RFID technology. But again, whatever helps you sleep at night. If it’s a 100% secure backpack you seek, we’re not going to stop you.

If you’re looking for a secure travel pack, the Pacsafe Venturesafe EXP35 offers some great features for exactly that. From the eXomesh® slash-proof material to the secure zippers and RFID-secure pockets, there is some great thinking that went into this pack along with some solid materials.

You know what they say—“It’s not how you feel, it’s how you look.” Or something like that...

### Video Guide Part 4: Aesthetic

*Feel free to watch this guide section in video format. We’ll keep the written content on this page up to date.*

*Be sure to subscribe to Pack Hacker on YouTube and never miss a video. We also have these videos in a series playlist format on YouTube so you can watch them more easily.*

## Finding the Best Travel Backpack Style For You

At the end of the day, the look and feel of a travel backpack should be right for you and your tastes. There are many things to consider as far as aesthetics go, and we’ll pull the main ones in here for consideration. Stylish “urban travel” backpacks became a lot more popular within the last couple of years, and that’s the look we prefer. Gone are the days of international travel with a big blaze-orange hiking backpack. Those certainly have a utility, but that utility is in the wilderness.
Here are a couple of overall style points for your consideration:

### Minimalist Travel Backpacks

When you’re in a new country, think a bit about how you want to be perceived. If you’re heading to a more crowded or dicey area, nothing screams tourist like having a large, colorful backpack while looking up at tall buildings or a landmark in awe. It’s easier to keep a low profile and blend in a little if you’re not carrying around a monstrosity of a bag that acts as an advertisement for thieves and wrongdoers looking to target travelers for their own gain. It’s an added bonus if you can roll into a meeting wearing one of these things. As one-bag travel has become increasingly popular in recent years, we’re seeing many solid urban packs coming out that are built specifically with one-bag travel in mind.

### Tacticool Backpacks

There are a ton of great, high-quality bags out there that are made to military spec. There’s some really great utility to things like MOLLE for customizing your pack and including other accessories on your bag, and the stronger materials make for highly durable bags. Keep in mind that some folks may perceive you as being in the military if your bag has too much digi camo going on. It’s one thing if the pack is all black & subdued, but another if it’s camo and filled with patches. If this is your look, go for it, but this type of pack might also bring about some “unwanted attention” in certain parts of the world.

### Outdoor & Hiking Backpacks

Think sportier packs with lots of pockets, brighter colors, and louder material. For a long time, outdoor backpacks were the only option for long-term one-bag travelers. They tend to be bulky and are built to carry big, heavy loads over long distances. This typically means lots of straps and a tall pack that will peek up over your head. Great for an extended camping excursion, not so great for a trip through the airport or a newly-discovered city square. They also tend to scream “TOURIST.” No one casually walks around with a giant hiking backpack.

### Backpacker Backpacks

If it’s not already obvious, the “Backpacker Backpack” is designed specifically for backpacking around the world. Typically from manufacturers that also make outdoor and hiking backpacks, this is the go-to style for anyone on a gap year looking to tick off as many countries in Southeast Asia as possible. And because of that, they’re some of the most popular bags on the market today. Sure, you’ll still look like a tourist—albeit not as much as you would wearing a hiking backpack—but that’s fine because that’s exactly what you’re doing.

### Heritage Backpacks

These bags are engineered with a classic look in mind. Most will be some variation of the one-compartment style with leather straps, subdued colors, and some type of canvas-y material. These packs look great but can sometimes lack functionality and comfort. Although there are a few bespoke-style travel bags (we like Vinta and Rivendell Mountain Works), most will fall into the daypack category.

Having said all of this, aesthetic is subjective, and beauty remains in the eye of the beholder. This is why we conduct weekly polls over on our Instagram to get our community’s take on the look of bags. Follow us on Instagram to cast your votes! You can find all the results of the polls on our individual review pages too, so you can see how well a bag you’re looking at has performed.

The humble backpack: It’ll get you through anything and everything...
### There Really is No “Best Travel Backpack”

There is, however, a best travel backpack for you. All this boils down to your preferences. When we first started creating this guide, we admittedly thought there would be one best bag for travel, but the deeper we dug, the more we realized it depends on your needs as an individual traveler. Sure, there are general guiding principles to follow, and a bag made out of cardboard objectively won’t last, but there are too many quality backpacks out there to pick just one. If you’re on a short trip, a lighter, less durable pack will suit you well. If you’re headed to Southeast Asia during the monsoon season, you may want some heavy-duty weatherproofing.

We wish you the best of luck moving forward with your selection. Still want more? Be sure to check out our other guides and travel gear reviews too!

*Our team at Pack Hacker developed the “best travel backpack” guide in partnership with our friends (and bag experts) at Carryology. We’re constantly updating this guide as new backpacks are released and the travel landscape changes.*
true
true
true
The best travel backpack is unique for each person. We break down how to choose your perfect one bag carry-on into 5 easy sections and suggest our top picks.
2024-10-12 00:00:00
2017-11-05 00:00:00
https://cdn.packhacker.c…age-update-1.jpg
article
packhacker.com
Pack Hacker
null
null
6,497,443
http://blog.videality.com/2013/10/videalitys-first-virtual-job-fair-nyc.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
37,348,267
https://wffacpa.com/ny-sales-tax-exemption-for-computer-hardware-purchased-for-software-web-design-development/
NY Sales Tax Exemption for Computer Hardware Purchased for Software & Web Design & Development - WFFA CPAs
Jason Ackerman
The New York Department of Taxation and Finance issued a reminder regarding a sales tax exemption available to software and web devs/designers on the purchase of computer hardware. In general, **no state or local sales tax** will be charged on the purchase of **computer hardware** when it is **used more than 50% of the time** to: - design and develop computer software for sale - provide website design and development services for sale - some combination of these two uses The exempt hardware can be anything from laptops and monitors to external hard drives and accessories. Design and development services include system analysis, program design, coding, testing, debugging, and documentation activities. The use of computer hardware for administration, production, or distribution activities is *not* eligible for the exemption. To claim the exemption, you must provide the hardware seller with a completed Form ST-121.3. *Note: Computer systems that are rented or leased may qualify for the exemption as well.* Have questions? Not sure if you are eligible for this exemption? Let us know, we can help!
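As a rough worked illustration of the more-than-50% threshold (our numbers, not the Department's): a laptop used about 60% of the time to design and develop software or websites for sale and 40% for administration would meet the test, while one used only 40% of the time for those exempt activities would not.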
true
true
true
NY offers a sales tax exemption for the purchase of computer hardware by some software and web designers and developers. Find out if you qualify!
2024-10-12 00:00:00
2016-09-28 00:00:00
https://wffacpa.com/wp-c…ter-Hardware.jpg
article
wffacpa.com
WFFA CPAs
null
null
1,396,421
http://paul.querna.org/slides/libcloud-2010-06.pdf
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
9,846,205
https://medium.com/@RecurVoice/how-to-increase-your-hourly-rate-as-a-freelancer-1f2fdf892088
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,336,338
https://www.sciencedirect.com/science/article/pii/S0003491618302586?via%3Dihub
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
19,134,857
https://www.youtube.com/watch?v=87SWZ0Pna8k
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,392,159
https://common-lisp.net/project/mcclim/posts/McCLIM-097-Imbolc-release.html
McCLIM
Daniel Kochmański
# McCLIM 0.9.7 "Imbolc" release

Written by *Daniel Kochmański* on *2018-02-16 16:00*

After 10 years we have decided that it is time to make a new release – the first one since 2008, which was McCLIM 0.9.6, *St. George's Day*. *Imbolc* is a traditional Gaelic festival marking the beginning of spring, held between the winter solstice and the spring equinox.

Because so much time has passed, the number of changes is too big to list in full detail, so we will note only the major changes made during the last eleven iterations (though many important changes were made before that). For more information please check out the previous iteration reports on the McCLIM blog, the git log and the issue tracker. We'd like to thank all present and past contributors for their time, support and testing.

- Bug fix: tab-layout fixes.
- Bug fix: formatting-table fixes.
- Bug fix: scrolling and viewport fixes and refactor.
- Feature: raster image draw backend extension.
- Feature: bezier curves extension.
- Feature: new tests and demos in clim-examples.
- Feature: truetype rendering is now default on clx.
- Feature: additions to region, clipping rectangles and drawing.
- Feature: clim-debugger and clim-listener improvements.
- Feature: mop is now done with CLOSER-MOP.
- Feature: threading is now done with BORDEAUX-THREADS.
- Feature: clx-fb backend (poc of framebuffer-based backend).
- Feature: assumption that all panes must be mirrored has been removed.
- Cleanup: many files cleaned up from style warnings and such.
- Cleanup: removal of PIXIE.
- Cleanup: removal of CLIM-FFI package.
- Cleanup: changes to directory structure and asd definitions.
- Cleanup: numerous manual additions and corrections.
- Cleanup: broken backends have been removed.
- Cleanup: goatee has been removed in favour of Drei.
- Cleanup: all methods now have corresponding generic function declarations.

We also have a bounty program financed with money from the fundraiser. We are grateful for financial contributions which allow us to attract new developers and reward old ones with bounties. Currently active bounties (worth $2650) are available here.

As *Imbolc* marks the beginning of spring, we hope this release will be one of many to come.
true
true
true
null
2024-10-12 00:00:00
2023-12-27 00:00:00
null
null
null
null
null
null
26,026,756
https://github.com/af/envalid
GitHub - af/envalid: Environment variable validation for Node.js
null
Envalid is a small library for validating and accessing environment variables in Node.js programs, aiming to:

- Ensure that your program only runs when all of its environment dependencies are met
- Give you executable documentation about the environment your program expects to run in
- Give you an immutable API for your environment variables, so they don't change from under you while the program is running
- Type-safe: written completely in TypeScript, with great support for inference
- Light: no dependencies besides tslib
- Modular: customize behavior with custom validators, middleware, and reporters

`cleanEnv()` returns a sanitized, immutable environment object, and accepts three positional arguments:

- `environment` - An object containing your env vars (eg. `process.env`)
- `validators` - An object that specifies the format of required vars.
- `options` - An (optional) object, which supports the following key:
  - `reporter` - Pass in a function to override the default error handling and console output. See `src/reporter.ts` for the default implementation.

By default, `cleanEnv()` will log an error message and exit (in Node) or throw (in browser) if any required env vars are missing or invalid. You can override this behavior by writing your own reporter.

```
import { cleanEnv, str, email, json } from 'envalid'

const env = cleanEnv(process.env, {
  API_KEY: str(),
  ADMIN_EMAIL: email({ default: '[email protected]' }),
  EMAIL_CONFIG_JSON: json({ desc: 'Additional email parameters' }),
  NODE_ENV: str({ choices: ['development', 'test', 'production', 'staging'] }),
})

// Read an environment variable, which was validated and cleaned
// (and/or filtered) during cleanEnv().
env.ADMIN_EMAIL // -> '[email protected]'

// Envalid checks for NODE_ENV automatically, and provides the following
// shortcut (boolean) properties for checking its value:
env.isProduction // true if NODE_ENV === 'production'
env.isTest // true if NODE_ENV === 'test'
env.isDev // true if NODE_ENV === 'development'
```

For an example you can play with, clone this repo and see the `example/` directory.

```
git clone https://github.com/af/envalid
cd envalid
yarn prepare
node example/server.js
```

Node's `process.env` only stores strings, but sometimes you want to retrieve other types (booleans, numbers), or validate that an env var is in a specific format (JSON, URL, email address). To these ends, the following validation functions are available:

- `str()` - Passes string values through, will ensure a value is present unless a `default` value is given. Note that an empty string is considered a valid value - if this is undesirable you can easily create your own validator (see below)
- `bool()` - Parses env var strings `"1", "0", "true", "false", "t", "f", "yes", "no", "on", "off"` into booleans
- `num()` - Parses an env var (eg. `"42", "0.23", "1e5"`) into a Number
- `email()` - Ensures an env var is an email address
- `host()` - Ensures an env var is either a domain name or an ip address (v4 or v6)
- `port()` - Ensures an env var is a TCP port (1-65535)
- `url()` - Ensures an env var is a URL with a protocol and hostname
- `json()` - Parses an env var with `JSON.parse`

Each validation function accepts an (optional) object with the following attributes:

- `choices` - An Array that lists the admissible parsed values for the env var.
- `default` - A fallback value, which will be present in the output if the env var wasn't specified.
Providing a default effectively makes the env var optional. Note that `default` values are not passed through validation logic; they are default *output* values.

- `devDefault` - A fallback value to use *only* when `NODE_ENV` is explicitly set and *not* `'production'`. This is handy for env vars that are required for production environments, but optional for development and testing.
- `desc` - A string that describes the env var.
- `example` - An example value for the env var.
- `docs` - A URL that leads to more detailed documentation about the env var.
- `requiredWhen` - A function (env -> boolean) specifying when this env var is required. Use it together with `default: undefined` (an optional value).

You can easily create your own validator functions with `envalid.makeValidator()`. It takes a function as its only parameter, and should either return a cleaned value, or throw if the input is unacceptable:

```
import { makeValidator, cleanEnv } from 'envalid'

const twochars = makeValidator((x) => {
  if (/^[A-Za-z]{2}$/.test(x)) return x.toUpperCase()
  else throw new Error('Expected two letters')
})

const env = cleanEnv(process.env, {
  INITIALS: twochars(),
})
```

You can use either one of `makeValidator`, `makeExactValidator` and `makeStructuredValidator` depending on your use case.

`makeValidator` has the output narrowed down to a subtype of `BaseT` (e.g. `str`). Example of a custom integer validator:

```
const int = makeValidator<number>((input: string) => {
  const coerced = parseInt(input, 10)
  if (Number.isNaN(coerced)) throw new EnvError(`Invalid integer input: "${input}"`)
  return coerced
})

const MAX_RETRIES = int({ choices: [1, 2, 3, 4] }) // Narrows down output type to '1 | 2 | 3 | 4' which is a subtype of 'number'
```

`makeExactValidator` has the output widened to `T` (e.g. `bool`). To understand the difference with `makeValidator`, let's use it in the same scenario:

```
const int = makeExactValidator<number>((input: string) => {
  const coerced = parseInt(input, 10)
  if (Number.isNaN(coerced)) throw new EnvError(`Invalid integer input: "${input}"`)
  return coerced
})

const MAX_RETRIES = int({ choices: [1, 2, 3, 4] }) // Output type is 'number'
```

As you can see in this instance, *the output type is exactly `number`, the parameter type of `makeExactValidator`*. Also note that here, `int` is not parametrizable.

By default, if any required environment variables are missing or have invalid values, Envalid will log a message and call `process.exit(1)`. You can override this behavior by passing in your own function as `options.reporter`. For example:

```
const env = cleanEnv(process.env, myValidators, {
  reporter: ({ errors, env }) => {
    emailSiteAdmins('Invalid env vars: ' + Object.keys(errors))
  },
})
```

Additionally, Envalid exposes `EnvError` and `EnvMissingError`, which can be checked in case specific error handling is desired:

```
const env = cleanEnv(process.env, myValidators, {
  reporter: ({ errors, env }) => {
    for (const [envVar, err] of Object.entries(errors)) {
      if (err instanceof envalid.EnvError) { ... }
      else if (err instanceof envalid.EnvMissingError) { ... }
      else { ... }
    }
  }
})
```

In addition to `cleanEnv()`, as of v7 there is a new `customCleanEnv()` function, which allows you to completely replace the processing that Envalid applies after applying validations. You can use this custom escape hatch to transform the output however you wish.
`customCleanEnv()` uses the same API as `cleanEnv()` , but with an additional `applyMiddleware` argument required in the third position: `applyMiddleware` - A function that can modify the env object after it's validated and cleaned. Envalid ships (and exports) its own default middleware (see src/middleware.ts), which you can mix and match with your own custom logic to get the behavior you desire. The `testOnly` helper function is available for setting a default value for an env var only when `NODE_ENV=test` . It is recommended to use this function along with `devDefault` . For example: ``` const env = cleanEnv(process.env, { SOME_VAR: envalid.str({ devDefault: testOnly('myTestValue') }), }) ``` For more context see this issue. Since by default Envalid's output is wrapped in a Proxy, structuredClone will not work on it. See #177. - dotenv is a very handy tool for loading env vars from `.env` files. It was previously used as a dependency of Envalid. To use them together, simply call`require('dotenv').config()` before you pass`process.env` to your`envalid.cleanEnv()` . - react-native-config can be useful for React Native projects for reading env vars from a `.env` file - fastify-envalid is a wrapper for using Envalid within Fastify - nestjs-envalid is a wrapper for using Envalid with NestJS - nuxt-envalid is a wrapper for using Envalid with NuxtJS
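To show how the dotenv integration above fits together in practice, here is a minimal sketch; the variable names (`API_URL`, `HTTP_PORT`, `LOG_LEVEL`) are made up for illustration and are not part of either library:

```
// Load variables from a local .env file into process.env first...
require('dotenv').config()

const { cleanEnv, str, port, url } = require('envalid')

// ...then validate and clean them with Envalid as usual.
// API_URL, HTTP_PORT, and LOG_LEVEL are hypothetical example vars.
const env = cleanEnv(process.env, {
  API_URL: url(),
  HTTP_PORT: port({ default: 3000 }),
  LOG_LEVEL: str({ choices: ['debug', 'info', 'warn', 'error'], default: 'info' }),
})

console.log(`Starting on port ${env.HTTP_PORT} with log level ${env.LOG_LEVEL}`)
```

Because `require('dotenv').config()` runs before `cleanEnv()`, anything defined in the `.env` file is validated exactly like real environment variables.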
true
true
true
Environment variable validation for Node.js. Contribute to af/envalid development by creating an account on GitHub.
2024-10-12 00:00:00
2012-11-29 00:00:00
https://opengraph.githubassets.com/66885ce5f330e6b5139cf34a8a3c1b7781a4dcad7c34d61ffe4ea495430f1a6c/af/envalid
object
github.com
GitHub
null
null
25,956,630
https://twitter.com/ztownsend/status/1355133498646999044
x.com
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
39,983,412
https://www.onstartups.com/flashtags-a-simple-hack-for-conveying-context-without-confusion
FlashTags: A Simple Hack For Conveying Context Without Confusion
Dharmesh Shah
*The following was originally written as a post on the internal HubSpot wiki a couple of years ago. At the request of several fellow entrepreneurs I've mentioned this to (hi, Wade from Zapier!), I'm sharing it publicly for the first time. Hope you find it useful. -Dharmesh*

*For easy reference, you can access this page using FlashTags.org (it redirects here).*

**From HubSpot Wiki, July 19, 2017**

One of the things I struggle with is clearly conveying to someone how strongly I feel about something. This is sometimes referred to as "Hill Dying Status" (i.e. do I feel so strongly about this that it's a hill I'm willing to die on). By the way, not sure who originally used that phrase but I think it was Brian Halligan or maybe JD Sherman. Doesn't matter.

Situations like the following happen for me multiple times a day (chances are, they happen to you too):

- I come across an interesting article or video (sometimes about a competitor) and send it along to someone at HubSpot. Without context, they *might* think that I'm saying we should be doing that or adding that feature or somehow *reacting* to that news. But, most of the time, it's just something that I thought was "interesting".
- Someone asks me a question or opinion on something. Turns out, I have opinions on *everything*. Sometimes, those opinions are even well informed. So, I share my opinion. Might be a hallway conversation or an email or whatever. Now, based on my history with that person, they may think: "Well, Dharmesh thinks I should do X so I'm going to do that, even though I was going to do Y." This is a problem because I almost always have much less information/data than the person asking the question – and I haven't really dug into the issue like they have. They're overvaluing my opinion.
- Every now and then I feel super strongly about something. (Often, these are "Solve For The Customer" -- SFTC -- related). I "express" my feelings in a response to a long email thread. It gets buried in there, and then "dies". Nobody does anything. Not even a response. That makes me sad – but it's my fault. The person I had expected to at least respond had no idea that I felt strongly or wanted a response.

So, I now share with you my not-so-secret hack to quickly communicate important context (either in a conversation or in an email thread). I've been using this for a while, and thought you might find it useful as well.

**How To Use A #FlashTag To Quickly Communicate Hill Dying Status**

It's even easier than Sunday morning (which I've always found to be a poor benchmark): All you have to do is include one of the flashtags below in an email, Slack or even in a conversation. That's it.

The tags are in ascending order of escalation (starting with the "I don't feel strongly at all" and ending with the "I really, really feel strongly").

**#fyi** -- Had this thought/idea/article/video/whatever pass through my brain. I haven't spent a lot of time thinking about it. You can read it or not. Act on it or not. No response needed or expected.

Hill Dying Status (am I willing to die on this hill): I don't even see a hill.

**#suggestion** -- Here's something I would do if I were you. But, I'm not you -- and you own this, so your call. Just consider it and weigh it against other things you're considering. I won't be offended if you go another way. A quick reaction/response would be appreciated (so I can learn what kinds of suggestions are useful/valuable), but is not necessary.
Hill Dying Status: I saw the hill, but didn't feel strongly enough to commit the calories to climb it.

**#recommendation** (or **#strongrecommendation**) -- I've thought about this a lot. It's kept me up at night. I dug in. I think I understand the tradeoffs. You can choose not to take the recommendation, and go your own way, but please do it for good reasons. Please dig in a bit yourself and have a well-reasoned rationale for why you don't want to take the recommendation. Please don't ignore or dismiss it out of hand. A response (either way) is politely requested. If it's a #strongrecommendation then a response explaining why you're not taking it is probably a good idea.

Hill Dying Status: I climbed the hill. I breathed deeply. I contemplated my life. I walked back down.

**#plea** -- We don't like issuing edicts or directives at HubSpot. But...please, please, please just do this. Trust works both ways, and I need you to trust me on this. If you still feel compelled to resist, something's not right; let's chat. Maybe even in (gasp!) person.

Hill Dying Status: Dying on a hill is not on my bucket list, but if it were, this would be a really good candidate.

---

That's it. With just a few extra characters in that email or Slack, you can quickly convey how strongly you feel about something. Use it if you find it useful. It's just a #suggestion.

Cheers, Dharmesh

p.s. Why did I call it a #FlashTag? Because it's about communicating something in a flash (and flash rhymes with hash). And yes, I asked GrowthBot for "words that rhyme with hash". Oh, and in case you need to find this article again or tell somebody about it, just use FlashTags.org (it redirects to this page).
true
true
true
A simple trick for conveying context quickly to avoid confusion. It's simple, but effective.
2024-10-12 00:00:00
2019-03-18 00:00:00
https://www.onstartups.c…ulb-dharmesh.png
article
onstartups.com
OnStartups
null
null
32,779,896
https://scrib.am/the-articles/causality
Causality
null
Causality is a very underrated concept. In this article, I want to explore various examples of causality in multiple scenarios. Clearly identifying the sequence of events which has led to a negative or a positive outcome is, in my opinion, one of the best ways to learn from the mistakes or the right moves. It enables you to potentially transfer those takeaways to a comparable situation. That's how we should learn the lessons of history. Unfortunately, we rarely do.

As defined by Wikipedia:

Causality is influence by which one event, process, state, or object contributes to the production of another event, process, state, or object where the cause is partly responsible for the effect, and the effect is partly dependent on the cause.

Everything is the consequence of a series of events which can be defined as a pattern of actions in a chain reaction. Chess or checkers players will confirm that once you're on a wrong trajectory (a series of bad moves) it's very difficult to correct your course, especially if the opponent has noticed your blunder. One single mistake, seemingly innocuous, can have devastating impacts further down the road.

### The Butterfly Effect

You've probably heard of the Butterfly Effect. In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.

We've all experienced the Butterfly Effect when stuck in a traffic jam caused by a random car which at some point slightly slowed down on the road ahead of us, causing a chain reaction. This is called a phantom traffic jam. It follows the shape of a long rippling wave, creating a dynamic instability where small disturbances are amplified.

Traffic jams aren't the most problematic consequence of seemingly harmless causes. Your personal life and society as a whole can be disturbed by poorly assessed causality. Since those systems are highly complex, even chaotic, they're very hard to model. Once they're set into motion, it's difficult to shift their course. Hence, it is advised not to understate *the initial conditions*, which can have an unexpectedly dramatic impact on future developments.

Old-age diseases are often the consequence of a set of poor decisions made when you were a young child. Wars are almost always the consequence of a series of wrong strategic moves which might have seemed right in the first place, not accounting for their long-term impact, as demonstrated by the unfortunate evolution of our relationships with Russia since the end of the Cold War. A catastrophic event is always caused by the confluence of multiple parameters. Our investigations should focus on the *unbundling of causal factors* having led to the deplorable outcome, bearing in mind that **correlation does not imply causation**.

### Karma, Gaia & Cooperation

In Indian religion and philosophy, the causal law of karma considers that good or bad actions determine the future of someone's existence. The chemist James Lovelock hypothesized in his Gaia theory that we're all part of a synergistic and self-regulating complex system, where all actions, large and small, have a systemic, distributed impact. When the Chernobyl nuclear plant exploded in 1986, the radioactive clouds didn't stop at the borders.
When some countries keep on burning large quantities of coal, the gas emissions spread all around the globe, just like the ash from Eyjafjallajökull which, admittedly, had a more visible impact on air travel back in 2010. Positive moves also have a widespread impact, as nicely put in this blog excerpt (Huffington Post):

The environmental benefits of any single nation's reductions in greenhouse gas emissions are spread worldwide, unlike the costs. This creates the possibility that some countries will want to "free ride" on the efforts of others. It's for this very reason that international cooperation is required.

Circling back to the Gaia hypothesis, we could say that humans have a disproportionate, leveraged impact on their otherwise self-regulating environment. And if you broaden your perspective into the vast universe, you could argue that our seemingly isolated actions might have a butterfly effect on a cosmic scale. We're all in this together!

### The Causality of Success (& Failure)

Causes don't lead to consequences in a vacuum. They take place in a network of interactions. One of the wisest pieces of advice you can give to young people is to develop a **network of connections** who will influence the life they're pursuing, on both professional and private levels. We are the sum of our relationships.

This stance contrasts with the individualistic approach of self-development, which considers that the answers to all our existential questions should arise from our deep self, irrespective of the world around us. A more holistic approach to causality considers how we perform in a group, both influencing its dynamics and being influenced in return. Ignoring the two-way flow of influences is a recipe for failure.

In terms of leadership, **influence by transformation** appears to work better in the long run than **influence by transaction** (the usual top-down approach).

The first approach — transaction — emphasizes actual, actionable transactions between a leader and their subordinates. It focuses on improving an immediate situation by determining the steps that need to be taken in the short term. In the second approach — transformation — leaders act as role models and motivators who offer vision, excitement, encouragement, morale boost, and satisfaction to the followers. *Source:* *Harvard Business Review*

As far as life events are concerned, opinions differ. Some people consider that you should first secure the grounds of a career path, then found a family (or focus on a fulfilling life as a single person). Others consider that being happy at a private level opens the doors to all forms of satisfaction. Since most of us live in a society where money is critical for survival, I would argue that, in the early adult phase of your life, professional fulfillment is probably the best foundation for self-development. When the basic needs are covered, you're more receptive to all the opportunities around you. But work shouldn't be a burden. Happiness (at work) leads to success, not the other way round.

### Can we reverse causality?

Common sense will tell you that if the arrow of time always goes in the same direction, there's no way to reverse the course of actions, in other words **no way to reverse causality**. But is this an undeniable scientific truth? Is **retrocausality** possible, i.e. having the **effect precede its cause**?

At a quantum level, the standard timeline from left to right doesn't necessarily apply; events can be in a fluid relationship.
Logical objections to macroscopic time travel may not necessarily prevent retrocausality at other scales of interaction. Source: Retrocausality, Wikipedia

And even at a macro level, according to Einstein's Relativity Theory, we live in a "Block Universe" where the past still exists, the future already exists, and the present is just an observation point in the 4-dimensional block, while the passage of time would be an illusion (*if time flows, what are the banks of the river?*).

The possibility that a future event might influence the present motivates some people to try to **"listen to their future"** to inform their present actions and potentially change the course of their destiny (*French physicist and research engineer Philippe Guillemant has developed a body of work around this theory*).

## Sources

https://youtu.be/2JUljTvFbDA (in French)
true
true
true
My Personal Encyclopedia
2024-10-12 00:00:00
2022-09-03 00:00:00
null
website
scrib.am
Scrib.am
null
null
26,463,513
https://github.com/torfsen/python-systemd-tutorial
GitHub - torfsen/python-systemd-tutorial: A tutorial for writing a systemd service in Python
Torfsen
Many Linux distributions use systemd to manage the system's services (or *daemons*), for example to automatically start certain services in the correct order when the system boots. Writing a systemd service in Python turns out to be easy, but the complexity of systemd can be daunting at first. This tutorial is intended to get you started. When you feel lost or need the gritty details, head over to the systemd documentation, which is pretty extensive. However, the docs are distributed over several pages, and finding what you're looking for isn't always easy. A good place to look up a particular systemd detail is systemd.directives, which lists all the configuration options, command line parameters, etc., and links to their documentation. Aside from this `README.md` file, this repository contains a basic implementation of a Python service consisting of a Python script (`python_demo_service.py` ) and a systemd unit file (`python_demo_service.service` ). The systemd version we're going to work with is 229, so if you're using a different version (see `systemctl --version` ) then check the systemd documentation for things that may differ. systemd supports both *system* and *user* services. System services run in the system's own systemd instance and provide functionalities for the whole system and all users. User services, on the other hand, run in a separate systemd instance tied to a specific user. Even if your goal is to develop a system service it is a good idea to start with a user service, because it allows you to focus on getting the service up and running before dealing with the complexities of setting up a system service. Most of this tutorial targets user services, but there's a section at the end on how to go from a user service to a system service once you're ready. To create a systemd service you need to create a corresponding *unit file*, which is a plain-text, ini-style configuration file. For this tutorial we will use a simple self-contained unit file, see systemd.unit for advanced approaches. Unit files for user services can be put in several places. Some of these require root access, but there are multiple possible places in your home directory. As far as I can tell, there is no established default choice for these, so for this tutorial we are going to use `~/.config/systemd/user/` . Therefore, store the following unit description as `~/.config/systemd/user/python_demo_service.service` : ``` [Unit] # Human readable name of the unit Description=Python Demo Service ``` Once you have done this, systemd will find our service: ``` $ systemctl --user list-unit-files | grep python_demo_service python_demo_service.service static ``` The unit options for systemd services are documented in systemd.service. We can now start to write the actual Python code for the service. Let's start small with a script that simply prints a message every 5 seconds. Store the following script as `python_demo_service.py` in a directory of your choice: ``` if __name__ == '__main__': import time while True: print('Hello from the Python Demo Service') time.sleep(5) ``` To link our service to our script, extend the unit file as follows: ``` [Unit] Description=Python Demo Service [Service] # Command to execute when the service is started ExecStart=/usr/bin/python path/to/your/python_demo_service.py ``` Now our service can be started: ``` $ systemctl --user start python_demo_service ``` Depending on your systemd version, you may need to reload the user daemon so that our service can be found and started. 
``` $ systemctl --user daemon-reload ``` Note that this command returns immediately. This is because systemd has created a separate process that runs our script. This means that we don't have to care about the nasty details of correctly forking into a daemon process ourselves, since systemd does all the work for us. Yay! We can check that our service is running: ``` $ systemctl --user status python_demo_service ● python_demo_service.service - Python Demo Service Loaded: loaded (/home/torf/.config/systemd/user/python_demo_service.service; static; vendor preset: enabled) Active: active (running) since So 2018-12-30 17:46:03 CET; 2min 35s ago Main PID: 26218 (python) CGroup: /user.slice/user-1000.slice/[email protected]/python_demo_service.service └─26218 /usr/bin/python /home/torf/projects/python-systemd-tutorial/python_demo_service.py ``` In the first line of the output we can see the `Description` from our unit file. The output also tells us the state of our service and the PID it is running as. Obviously our service can also be stopped: ``` $ systemctl --user stop python_demo_service $ systemctl --user status python_demo_service ● python_demo_service.service - Python Demo Service Loaded: loaded (/home/torf/.config/systemd/user/python_demo_service.service) Active: inactive (dead) ``` You might have noticed that the output of our script's `print` calls did not show up on your terminal. This is because systemd detached the service process from that terminal and also redirected the process's `STDOUT` and `STDERR` streams. One thing to remember is that in Python, STDOUT and STDERR are buffered. When running in a terminal, this means that output will only show up after a newline (`\n` ) has been written. However, our service's STDOUT and STDERR are pipes, and in this case the buffer is only flushed once it is full. Hence the script's messages only turn up in systemd's logs after it has produced even more output. To avoid this effect we need to disable the buffering of STDOUT and STDERR, and one possibility to do so is to set the `PYTHONUNBUFFERED` environment variable. This can be done directly in our unit file by adding the following line to the `[Service]` section: ``` Environment=PYTHONUNBUFFERED=1 ``` As always when you change your unit file you need to tell systemd to reload its configuration, and (if your service is currently running), restart the service: ``` $ systemctl --user daemon-reload $ systemctl --user restart python_demo_service ``` The output from our script should now show up in systemd's logs, which by default are redirected to syslog: ``` $ grep 'Python Demo Service' /var/log/syslog Dec 30 18:05:34 leibniz python[26218]: Hello from the Python Demo Service ``` Another way to display your service's output is via ``` $ journalctl --user-unit python_demo_service ``` There are many more possible configurations for logging. For example, you can redirect STDOUT and STDERR to files instead. See systemd.exec for details. Many services are intended to be started automatically when the system boots. This is easy to achieve using systemd. First we need to attach our service to a suitable *target*: targets are special systemd units that are used for grouping other units and for synchronization during startup. See systemd.target for details about targets in general and systemd.special for a list of built-in targets. For user services, the `default.target` is usually a good choice. 
Add the following to your unit file:

```
[Install]
WantedBy=default.target
```

Our service is now ready to be started automatically, but for that to actually happen we have to *enable* the service first:

```
$ systemctl --user enable python_demo_service
Created symlink from /home/torf/.config/systemd/user/default.target.wants/python_demo_service.service to /home/torf/.config/systemd/user/python_demo_service.service.
```

If you restart your system now then our service will be started automatically once you log in. After your last session is closed, your user's systemd instance (and with it, our service) will shut down. You can make your user's systemd instance independent from your user's sessions (so that our service starts at boot time even if you don't log in and also keeps running until a shutdown/reboot) via

```
$ sudo loginctl enable-linger $USER
```

To disable autostart, simply disable your service:

```
$ systemctl --user disable python_demo_service
Removed symlink /home/torf/.config/systemd/user/default.target.wants/python_demo_service.service.
```

Note that simply enabling a service does not start it, but only activates autostart during boot-up. Similarly, disabling a service doesn't stop it, but only deactivates autostart during boot-up. If you want to start/stop the service immediately then you still need to do that manually as described above, in addition to enabling/disabling the service.

To check whether your service is enabled, use

```
$ systemctl --user list-unit-files | grep python_demo_service
python_demo_service.service enabled
```

As with any other software, your service might crash. In that case, systemd can automatically try to restart it. By default, systemd will not do that, so you have to enable this functionality in your unit file.

systemd has several options to precisely configure under which circumstances your service should be restarted. A good starting point is to set `Restart=on-failure` in the `[Service]` section of your unit file:

```
[Service]
...
Restart=on-failure
```

This tells systemd to restart your daemon when it exits with a non-zero exit code. Other settings for `Restart` and related options are documented in systemd.service. As always you need to run `systemctl --user daemon-reload` for these changes to become effective.

We can simulate a crash by killing our service using the `SIGKILL` signal:

```
$ systemctl --user --signal=SIGKILL kill python_demo_service
```

Afterwards, the logs will show that systemd restarted our service:

```
$ journalctl --user-unit python_demo_service
[...]
Jan 31 12:55:24 leibniz python[3074]: Hello from the Python Demo Service
Jan 31 12:55:29 leibniz python[3074]: Hello from the Python Demo Service
Jan 31 12:55:32 leibniz systemd[1791]: python_demo_service.service: Main process exited, code=killed, status=9/KILL
Jan 31 12:55:32 leibniz systemd[1791]: python_demo_service.service: Unit entered failed state.
Jan 31 12:55:32 leibniz systemd[1791]: python_demo_service.service: Failed with result 'signal'.
Jan 31 12:55:33 leibniz systemd[1791]: python_demo_service.service: Service hold-off time over, scheduling restart.
Jan 31 12:55:33 leibniz systemd[1791]: Stopped Python Demo Service.
Jan 31 12:55:33 leibniz systemd[1791]: Started Python Demo Service.
Jan 31 12:55:33 leibniz python[3089]: Hello from the Python Demo Service
Jan 31 12:55:38 leibniz python[3089]: Hello from the Python Demo Service
[...]
```

Often, a service needs to perform some initialization before it is ready to perform its actual work.
Your service can notify systemd once it has completed its initialization. This is particularly useful when other services depend on your service, since it allows systemd to delay starting these until your service is really ready.

The notification is done using the sd_notify system call. We'll use the python-systemd package to execute it, so make sure it is installed. Then add the following lines to our script:

```
if __name__ == '__main__':
    import time
    import systemd.daemon

    print('Starting up ...')
    time.sleep(10)
    print('Startup complete')
    systemd.daemon.notify('READY=1')

    while True:
        print('Hello from the Python Demo Service')
        time.sleep(5)
```

You will also need to change the type of your service from `simple` (the default we've been previously using) to `notify`. Add the following line to the `[Service]` section of your unit file, and call `systemctl --user daemon-reload` afterwards.

```
Type=notify
```

You can then see the notification in action by (re-)starting the service: `systemctl` will wait for the service's notification before returning.

```
$ systemctl --user restart python_demo_service
```

You can do a lot more via sd_notify, see its documentation for details.

Once you have a working user service you can turn it into a system service. Remember, however, that system services run in the system's central systemd instance and have a greater potential for disturbing your system's stability or security when not implemented correctly. In many cases, this step isn't really necessary and a user service will do just fine.

Before turning our service into a system service, let's make sure that it's stopped and disabled. Otherwise we might end up with both a user service and a system service.

```
$ systemctl --user stop python_demo_service
$ systemctl --user disable python_demo_service
```

Previously, we stored our unit file in a directory appropriate for user services (`~/.config/systemd/user/`). As with user unit files, systemd looks into more than one directory for system unit files. We'll be using `/etc/systemd/system/`, so move your unit file there and make sure that it has the right permissions:

```
$ sudo mv ~/.config/systemd/user/python_demo_service.service /etc/systemd/system/
$ sudo chown root:root /etc/systemd/system/python_demo_service.service
$ sudo chmod 644 /etc/systemd/system/python_demo_service.service
```

Our service is now a system service! This also means that instead of using `systemctl --user ...` we will now use `systemctl ...` (without the `--user` option), or `sudo systemctl ...` if we're modifying something. For example:

```
$ systemctl list-unit-files | grep python_demo_service
python_demo_service.service disabled
```

Similarly, use `journalctl --unit python_demo_service` to display the system service's logs.

Until now you have probably stored the service's Python script somewhere in your home directory. That was fine for a user service, but isn't optimal for a system service.
A separate subdirectory in `/usr/local/lib` is a better choice:

```
$ sudo mkdir /usr/local/lib/python_demo_service
$ sudo mv ~/path/to/your/python_demo_service.py /usr/local/lib/python_demo_service/
$ sudo chown root:root /usr/local/lib/python_demo_service/python_demo_service.py
$ sudo chmod 644 /usr/local/lib/python_demo_service/python_demo_service.py
```

Obviously we also need to change the script's location in our unit file: update the `ExecStart=...` line to

```
ExecStart=/usr/bin/python /usr/local/lib/python_demo_service/python_demo_service.py
```

and reload the changes via `sudo systemctl daemon-reload`.

System services by default run as `root`, which is a security risk. Instead, we will use a user account dedicated to the service, so that we can use the usual security mechanisms (e.g. file permissions) to configure precisely what our service can and cannot access. A good choice for the name of the service user is the name of the service. To create the user we will use the useradd command:

```
$ sudo useradd -r -s /bin/false python_demo_service
```

Once you have created the user, add the following line to the `[Service]` section of your unit file:

```
User=python_demo_service
```

After reloading the systemd configuration and restarting our service, we can check that it runs as the correct user:

```
$ sudo systemctl daemon-reload
$ sudo systemctl restart python_demo_service
$ sudo systemctl --property=MainPID show python_demo_service
MainPID=18570
$ ps -o uname= -p 18570
python_demo_service
```

We now have a basic implementation of a systemd system service in Python. Depending on your goal, there are many ways to go forward. Here are some ideas:

- Add support for reloading the service's configuration without a hard restart. See the `ExecReload` option (a rough sketch follows below).
- Explore the other features of the python-systemd package, for example the `systemd.journal` module for advanced interaction with the systemd journal.

And of course, if you find an error in this tutorial or have an addition, feel free to create an issue or a pull request. Happy coding!
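As a parting example, here is a rough sketch of the first idea above (configuration reload without a hard restart). It is not part of the tutorial's demo service: the config path is hypothetical, and using `SIGHUP` as the reload trigger is a common convention, not something systemd requires. In the unit file, `ExecReload` tells systemd what to run when you call `systemctl reload`; `$MAINPID` is substituted by systemd with the service's main PID:

```
[Service]
ExecReload=/bin/kill -HUP $MAINPID
```

The service then re-reads its configuration whenever it receives `SIGHUP`:

```
import signal
import time

# Hypothetical config file, for illustration only.
CONFIG_PATH = '/etc/python_demo_service.conf'

def load_config():
    # A real service would re-read its settings from disk here.
    print('(Re)loading configuration from ' + CONFIG_PATH)

def handle_sighup(signum, frame):
    # Called when systemd's ExecReload sends us SIGHUP.
    load_config()

if __name__ == '__main__':
    signal.signal(signal.SIGHUP, handle_sighup)
    load_config()
    while True:
        print('Hello from the Python Demo Service')
        time.sleep(5)
```

After a `daemon-reload`, running `systemctl reload python_demo_service` (or `systemctl --user reload ...` for a user service) should trigger the handler without restarting the process.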
true
true
true
A tutorial for writing a systemd service in Python - torfsen/python-systemd-tutorial
2024-10-12 00:00:00
2019-01-31 00:00:00
https://opengraph.githubassets.com/b66130305d7a88ae53290b7219d9b39961880f34310b9eb44c9f03b571967f99/torfsen/python-systemd-tutorial
object
github.com
GitHub
null
null
39,845
http://www.techcrunch.com/2007/08/05/amiestreetcom-closes-series-a-financing-led-by-amazoncom/
Amie Street Closes Series A Financing Led By Amazon.com | TechCrunch
Aria Alamalhodaei
Social music marketplace Amie Street has closed a Series A round of financing led by Amazon.com, along with some new partnerships and a site redesign. The amount of Amazon's investment and the terms are not disclosed. We've been big fans of the model, and the recent investment shows Amazon is too.

On Amie Street, music is not sold at a flat rate; rather, a song's price fluctuates based on demand. Artists upload their music (DRM free), which users can download at a starting price of free. As a song's downloads increase, the price starts to rise, all the way up to $0.99. If a song gets to $0.30 or so, you know it's popular. The artist keeps 70% of revenues after the first $5 in sales. SellABand also has a socially driven music monetization model.

Users are rewarded for recommending hit songs with credit for purchasing additional music on Amie Street. The more popular a song becomes after a member has recommended it, the more credit he or she receives to spend on music.

RoyaltyShare, INgrooves, Daptone Records, and United For Opportunity (UFO) are new labels working with Amie Street. The addition of these partners has expanded Amie Street's music library over 1000%.

The site redesign's major change has been the addition of a personalized music home page that includes a music "news feed" that helps you track your friends' recommended songs and new releases from your favorite bands, and even predicts songs you may like based on previous activity.

The company has now grown to 12 people and out of their Long Island house to office space in Long Island City. No doubt, Amazon's recent payments system seems an ideal fit for the site as well.
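To put the revenue split in concrete terms (our arithmetic, not from the announcement): a song that brings in $105 of sales would net its artist 70% of the $100 beyond the first $5, or $70.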
true
true
true
Social music marketplace Amie Street has closed a Series A round of financing led by Amazon.com, along with some new partnerships and a site redesign. The
2024-10-12 00:00:00
2007-08-05 00:00:00
https://techcrunch.com/w…iestreetlogo.png
article
techcrunch.com
TechCrunch
null
null
4,606,123
http://www.youtube.com/watch?v=hyGJBV1xnJI
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
17,835,156
http://scholar.harvard.edu/files/mickens/files/towashitallaway.pdf
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
23,172,636
https://www.cam.ac.uk/research/news/ai-techniques-in-medical-imaging-may-lead-to-incorrect-diagnoses
AI techniques in medical imaging may lead to incorrect diagnoses
null
Machine learning and AI are highly unstable in medical image reconstruction, and may lead to false positives and false negatives, a new study suggests.

A team of researchers, led by the University of Cambridge and Simon Fraser University, designed a series of tests for medical image reconstruction algorithms based on AI and deep learning, and found that these techniques result in myriad artefacts, or unwanted alterations in the data, among other major errors in the final images. The effects were typically not present in non-AI based imaging techniques.

The phenomenon was widespread across different types of artificial neural networks, suggesting that the problem will not be easily remedied. The researchers caution that relying on AI-based image reconstruction techniques to make diagnoses and determine treatment could ultimately do harm to patients. Their results are reported in the *Proceedings of the National Academy of Sciences*.

"There’s been a lot of enthusiasm about AI in medical imaging, and it may well have the potential to revolutionise modern medicine: however, there are potential pitfalls that must not be ignored," said Dr Anders Hansen from Cambridge’s Department of Applied Mathematics and Theoretical Physics, who led the research with Dr Ben Adcock from Simon Fraser University. "We’ve found that AI techniques are highly unstable in medical imaging, so that small changes in the input may result in big changes in the output."

A typical MRI scan can take anywhere between 15 minutes and two hours, depending on the size of the area being scanned and the number of images being taken. The longer the patient spends inside the machine, the higher resolution the final image will be. However, limiting the amount of time patients spend inside the machine is desired, both to reduce the risk to individual patients and to increase the overall number of scans that can be performed.

Using AI techniques to improve the quality of images from MRI scans or other types of medical imaging is an attractive possibility for solving the problem of getting the highest quality image in the smallest amount of time: in theory, AI could take a low-resolution image and make it into a high-resolution version. AI algorithms ‘learn’ to reconstruct images based on training from previous data, and through this training procedure aim to optimise the quality of the reconstruction. This represents a radical change compared to classical reconstruction techniques that are solely based on mathematical theory without dependency on previous data. In particular, classical techniques do not learn.

Any AI algorithm needs two things to be reliable: accuracy and stability. An AI will usually classify an image of a cat as a cat, but tiny, almost invisible changes in the image might cause the algorithm to instead classify the cat as a truck or a table, for instance. In this example of image classification, the one thing that can go wrong is that the image is incorrectly classified. However, when it comes to image reconstruction, such as that used in medical imaging, there are several things that can go wrong. For example, details like a tumour may get lost or may falsely be added. Details can be obscured and unwanted artefacts may occur in the image.
"When it comes to critical decisions around human health, we can’t afford to have algorithms making mistakes," said Hansen. "We found that the tiniest corruption, such as may be caused by a patient moving, can give a very different result if you’re using AI and deep learning to reconstruct medical images – meaning that these algorithms lack the stability they need." Hansen and his colleagues from Norway, Portugal, Canada and the UK designed a series of tests to find the flaws in AI-based medical imaging systems, including MRI, CT and NMR. They considered three crucial issues: instabilities associated with tiny perturbations, or movements; instabilities with respect to small structural changes, such as a brain image with or without a small tumour; and instabilities with respect to changes in the number of samples. They found that certain tiny movements led to myriad artefacts in the final images, details were blurred or completely removed, and that the quality of image reconstruction would deteriorate with repeated subsampling. These errors were widespread across the different types of neural networks. According to the researchers, the most worrying errors are the ones that radiologists might interpret as medical issues, as opposed to those that can easily be dismissed due to a technical error. "We developed the test to verify our thesis that deep learning techniques would be universally unstable in medical imaging," said Hansen. "The reasoning for our prediction was that there is a limit to how good a reconstruction can be given restricted scan time. In some sense, modern AI techniques break this barrier, and as a result become unstable. We’ve shown mathematically that there is a price to pay for these instabilities, or to put it simply: there is still no such thing as a free lunch." The researchers are now focusing on providing the fundamental limits to what can be done with AI techniques. Only when these limits are known will we be able to understand which problems can be solved. "Trial and error-based research would never discover that the alchemists could not make gold: we are in a similar situation with modern AI," said Hansen. "These techniques will never discover their own limitations. Such limitations can only be shown mathematically." **Reference:** Vegard Antun et al. ‘On instabilities of deep learning in image reconstruction and the potential costs of AI.’ Proceedings of the National Academy of Sciences (2020). DOI: 10.1073/pnas.1907377117 The text in this work is licensed under a Creative Commons Attribution 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – as here, on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.
true
true
true
Machine learning and AI are highly unstable in medical image reconstruction, and may lead to false positives and false negatives, a new study suggests.
2024-10-12 00:00:00
2020-05-12 00:00:00
https://www.cam.ac.uk/si…ac-unsplash1.jpg
article
cam.ac.uk
University of Cambridge
null
null
19,676,778
https://www.sciencedaily.com/releases/2019/04/190412150625.htm
Quantum simulation more stable than expected
null
# Quantum simulation more stable than expected ## Quantum localization bounds Trotter errors in digital quantum simulation - Date: - April 12, 2019 - Source: - University of Innsbruck - Summary: - A localization phenomenon boosts the accuracy of solving quantum many-body problems with quantum computers which are otherwise challenging for conventional computers. This brings such digital quantum simulation within reach on quantum devices available today. Quantum computers promise to solve certain computational problems exponentially faster than any classical machine. "A particularly promising application is the solution of quantum many-body problems utilizing the concept of digital quantum simulation," says Markus Heyl from the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany. "Such simulations could have a major impact on quantum chemistry, materials science and fundamental physics." Within digital quantum simulation the time evolution of the targeted quantum many-body system is realized by a sequence of elementary quantum gates by discretizing time evolution, called Trotterization. "A fundamental challenge, however, is the control of an intrinsic error source, which appears due to this discretization," says Markus Heyl. Together with Peter Zoller from the Department of Experimental Physics at the University of Innsbruck and the Institute of Quantum Optics and Quantum Communication at the Austrian Academy of Sciences and Philipp Hauke from the Kirchhoff Institute for Physics and the Institute for Theoretical Physics at the University of Heidelberg they show in a recent paper in *Science Advances* that quantum localization, by constraining the time evolution through quantum interference, strongly bounds these errors for local observables. **More robust than expected** "Digital quantum simulation is thus intrinsically much more robust than what one might expect from known error bounds on the global many-body wave function," Heyl summarizes. This robustness is characterized by a sharp threshold as a function of the utilized time granularity measured by the so-called Trotter step size. The threshold separates a regular region with controllable Trotter errors, where the system exhibits localization in the space of eigenstates of the time-evolution operator, from a quantum chaotic regime where errors accumulate quickly, rendering the outcome of the quantum simulation unusable. "Our findings show that digital quantum simulation with comparatively large Trotter steps can retain controlled Trotter errors for local observables," says Markus Heyl. "It is thus possible to reduce the number of quantum gate operations required to represent the desired time evolution faithfully, thereby mitigating the effects of imperfect individual gate operations." This brings digital quantum simulation for classically challenging quantum many-body problems within reach for current-day quantum devices. **Story Source:** Materials provided by **University of Innsbruck**. **Journal Reference**: - Markus Heyl, Philipp Hauke, Peter Zoller. **Quantum localization bounds Trotter errors in digital quantum simulation**. *Science Advances*, 2019; 5 (4): eaau8342. DOI: 10.1126/sciadv.aau8342
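The Trotter error at the heart of the result can be seen in a toy example. Below is a minimal sketch (NumPy/SciPy) of first-order Trotterization for a two-qubit Hamiltonian H = A + B; the operators are arbitrary illustrative choices, not the models studied in the paper. The exact evolution exp(-iHt) is approximated by alternating short evolutions under A and B, and the operator-norm error shrinks as the step size t/n shrinks.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and identity
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

A = np.kron(Z, Z)                     # two-spin interaction term
B = np.kron(X, I2) + np.kron(I2, X)   # transverse-field term (does not commute with A)
H = A + B
t = 1.0

exact = expm(-1j * H * t)
for n in (1, 4, 16, 64):
    dt = t / n
    step = expm(-1j * A * dt) @ expm(-1j * B * dt)   # one Trotter step
    trotter = np.linalg.matrix_power(step, n)
    err = np.linalg.norm(trotter - exact, 2)         # spectral-norm deviation
    print(f"n={n:3d}  dt={dt:.3f}  operator error={err:.2e}")
```

The paper's point is that, for local observables, the effective error can stay controlled even at comparatively large step sizes, well before this worst-case operator bound would suggest.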
true
true
true
A localization phenomenon boosts the accuracy of solving quantum many-body problems with quantum computers which are otherwise challenging for conventional computers. This brings such digital quantum simulation within reach on quantum devices available today.
2024-10-12 00:00:00
2024-10-12 00:00:00
https://www.sciencedaily…cidaily-icon.png
article
sciencedaily.com
ScienceDaily
null
null
29,993,149
https://billtcheng2013.medium.com/semantic-search-with-vector-database-4d80398d1e3f
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
40,287,020
https://www.youtube.com/watch?v=ymyIEGRw4-U
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,213,948
http://blog.aha.io/the-17-great-myths-of-product-management/
The 17 Great Myths of Product Management
Brian de Haaff
# The 17 Great Myths of Product Management We issued a warning to product managers last month — beware of bad advice. The Aha! Customer Success team (all former product managers) weighed in. And many of you added your own examples. Reading through your stories, I noticed something curious. Some of this product management advice was not just bad — it was flat-out false. This got me thinking about “workplace myths.” You know the ones I am talking about — the well-meaning pointers that have little basis in reality. For example, “you have to visit customers in person.” I wanted to shine some light on these myths, particularly the ones that product managers hear (and sometimes tell). Where do these tall tales originate? Well, as a product manager you see the big picture. Your job is knowing the “why” behind the work. But not everyone has this perspective. Some of your teammates may suffer from tunnel vision — others are naive or simply misinformed. Even if delivered with the best of intentions, workplace myths are a distraction. Your task is to separate reality from myth. And if something feels off, it likely is. Like any good fable, you will usually find a lesson within the illusion. So I asked the Customer Success team to gather up a few of the great fabrications they have heard. Here are the 17 myths of product management — and the reality behind them: **1. “Our technical architecture should drive our strategy.”** *Reality:* An effective strategy is driven not just by technology, but by considering the complete customer experience. **2. “The scrum master owns the product backlog.”** *Reality:* The product owner owns the backlog. They represent the voice of the customer and use those insights for prioritization. **3. “If the feature is cool enough you can worry about the market later.”** *Reality:* You are wasting everyone’s time creating a feature simply because it is “cool.” If there is no need for it, implementing it will only frustrate your team and your customers. **4. “Do what you have to do (to close the sale, to keep the CEO happy, etc.).”** *Reality:* What this person is actually saying is, “I have lost sight of our purpose.” **5. “If you do not do this, I’ll go to someone else.”** *Reality:* If the work is not a priority, no one else will do it (and they should not). Trust your gut and have the confidence to say no. **6. “Make the date on this one without changes.”** *Reality:* You do not need to push something out the door just for the sake of getting it done. Technology should never come before the customer experience. **7. “Make everyone feel like they are the number-one priority.”** *Reality:* If everyone is number one, then no one is. **8. “Tell the sales team it will be ready on this date.”** *Reality:* Dates matter — but only if they are real. **9. “You have to help me — I already promised it.”** *Reality:* If a “promised” feature does not fulfill the larger strategy, then it should not be prioritized. **10. “You need to have all of the features that the competitors have.”** *Reality:* You need to invest in your customers and what they need — not in what the competition has. **11. “The customer is always right.”** *Reality:* Usually, but not always. It depends if the customer being discussed is the right customer for your strategy and business. **12. “Just discount it and get the deal done.”** *Reality:* Delivering value to your customers is your real top priority. Discounts imply that your product is worth a lot less. **13. “Customers like to be challenged.”** *Reality:* Customers do not want to struggle to figure out your product. **14. “It is really easy to configure that via the command-line.”** *Reality:* Unless you are dealing with the most sophisticated of customers who have deep technical experience, your job is to make it easy for them to achieve what they are trying to do with your product. **15. “The customer will not buy unless we add this capability.”** *Reality:* You will not understand a user’s motives until you talk to them. The best product managers are customer advocates and understand exactly what they need. **16. “We can unlock an entirely new market if we invest in adding this feature.”** *Reality:* You cannot confirm market potential unless you speak to your users and find out what is missing from the market. **17. “We should do this integration for the press.”** *Reality:* Sure, you might get mentioned in the press. But if the integration itself does not serve customers well, then it was not worth the ink used to print that glowing write-up. Call them what you will — fibs, fables, or giant works of fiction — workplace myths are prevalent. And it is up to you as a product manager to recognize and filter them out. Carefully weigh the words of others. Take your time and think about what the person *really* means if you want to find success. When strategy is your north star, no myth can lead you astray. **What product management myths have you heard?**
true
true
true
We issued a warning to product managers last month — beware of bad advice. The Aha! Customer Success team (all former product managers) weighed in. And many of you added your own examples. Reading through your stories, I noticed something curious.
2024-10-12 00:00:00
2016-12-19 00:00:00
https://images.ctfassets…g&bg=transparent
website
aha.io
Aha!
null
null
26,924,014
https://phys.org/news/2021-04-effects-solar-flares-earth-magnetosphere.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
15,098,822
https://corvuscrypto.com/posts/struggle-of-programming-education
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
36,609,169
https://www.visualcapitalist.com/sp/industrial-automation-who-leads-the-robot-race/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
2,336,327
http://www.engadget.com/2011/03/17/byd-motors-sneaks-on-to-american-market-could-make-us-debut-off/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,087,480
https://stripe-ctf.com
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
367,888
http://socialcomputing.ucsb.edu/contest2020/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
19,123,146
https://rahul.tech/reddit-memes-anti-vaxx
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
5,507,359
http://aras-p.info/blog/2013/04/07/mobile-hardware-stats-and-more/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,011,815
http://travel-map.tackers.cloudbees.net/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
17,819,478
https://www.microsoft.com/en-us/research/blog/dowhy-a-library-for-causal-inference/?OCID=msr_blog_dowhy_KDD_tw
DoWhy – A library for causal inference - Microsoft Research
Emily Maryatt
For decades, causal inference methods have found wide applicability in the social and biomedical sciences. As computing systems start intervening in our work and daily lives, questions of cause-and-effect are gaining importance in computer science as well. To enable widespread use of causal inference, we are pleased to announce a new software library, DoWhy. Its name is inspired by Judea Pearl’s do-calculus for causal inference. In addition to providing a programmatic interface for popular causal inference methods, DoWhy is designed to highlight the critical but often neglected assumptions underlying causal inference analyses. DoWhy does this by first making the underlying assumptions explicit, for example, by explicitly representing identified estimands. And secondly by making sensitivity analysis and other robustness checks a first-class element of the causal inference process. Our goal is to enable people to focus their efforts on identifying assumptions for causal inference, rather than on details of estimation. Our motivation for creating DoWhy comes from our experiences in causal inference studies over the past few years, ranging from estimating the impact of a recommender system to predicting likely outcomes given a life event. In each of these studies, we found ourselves repeating the common steps of finding the right identification strategy, devising the most suitable estimator, and conducting robustness checks, all from scratch. While we were impressed—sometimes intimidated—by the amount of knowledge in causal inference literature, we found that doing any empirical causal inference remained a challenging task. Ensuring we understood our assumptions and validated them appropriately was particularly daunting. More generally, we see that a “roll your own” approach to causal inference has resulted in studies with varying (sometimes minimal) approaches to testing of key assumptions. We therefore asked ourselves, what if there existed a software library that provides a simple interface to common causal inference methods that codified best practices for reasoning about and validating key assumptions? Unfortunately, the challenge is that causal inference depends on estimation of unobserved quantities—also known as the “fundamental problem” of causal inference. Unlike in supervised learning, such *counterfactual* quantities imply that we cannot have a purely objective evaluation through a held-out test set, thus precluding a plug-in approach to causal inference. For instance, for any intervention—such as a new algorithm or a medical procedure—one can either observe what happens when people are given the intervention, or when they are not. But never both. Therefore, causal analysis hinges critically on assumptions about the data-generating process. To succeed, it became clear to us that the assumptions need to be first-class citizens in a causal inference library. We designed DoWhy using two guiding principles—making causal assumptions explicit and testing robustness of the estimates to violations of those assumptions. First, DoWhy makes a distinction between identification and estimation. Identification of a causal effect involves making assumptions about the data-generating process and going from the counterfactual expressions to specifying a target estimand, while estimation is a purely statistical problem of estimating the target estimand from data. 
Thus, identification is where the library spends most of its time, just like we commonly do in our projects. To represent assumptions formally, DoWhy uses the Bayesian graphical model framework where users can specify what they know, and more importantly, what they don’t know, about the data-generating process. For estimation, we provide methods based on the potential-outcomes framework such as matching, stratification and instrumental variables. A happy side-effect of using DoWhy is that you will realize the equivalence and interoperability of the seemingly disjoint graphical model and potential outcome frameworks. Second, once assumptions are made, DoWhy provides robustness tests and sensitivity checks to test reliability of an obtained estimate. You can test how the estimate changes as underlying assumptions are varied, for example, by introducing a new confounder or by replacing the intervention with a placebo. Wherever possible, the library also automatically checks the validity of the obtained estimate based on assumptions in the graphical model. Still, we also understand that automated testing cannot be perfect. DoWhy therefore stresses interpretability of its output; at any point in the analysis, you can inspect the untested assumptions, identified estimands (if any) and the estimate (if any). In the future, we look forward to adding more features to the library, including support for more estimation and sensitivity methods and interoperability with available estimation software. We welcome your feedback and contributions as we develop the library. You can check out the DoWhy Python library on Github. We include a couple of examples to get you started through Jupyter notebooks here. If you are interested in learning more about causal inference, do check our tutorial on causal inference and counterfactual reasoning, presented at KDD 2018 on Sunday, August 19th.
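To make the four steps concrete, here is a minimal sketch of the workflow (model, identify, estimate, refute) on synthetic data. The variable names, the toy data-generating process, and the specific method strings are illustrative choices, not the only options the library supports; consult the DoWhy documentation for the graph formats (GML or DOT) your installed version accepts.

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

# Toy data: w confounds both treatment v0 and outcome y; the true effect is 2.
rng = np.random.default_rng(0)
n = 5000
w = rng.normal(size=n)
v0 = (w + rng.normal(size=n) > 0)
y = 2.0 * v0 + w + rng.normal(size=n)
df = pd.DataFrame({"w": w, "v0": v0, "y": y})

# 1. Model: state the causal assumptions explicitly as a graph.
model = CausalModel(
    data=df, treatment="v0", outcome="y",
    graph="digraph { w -> v0; w -> y; v0 -> y; }",
)

# 2. Identify: derive the target estimand from the graph.
estimand = model.identify_effect()

# 3. Estimate: a purely statistical step, here propensity score matching.
estimate = model.estimate_effect(
    estimand, method_name="backdoor.propensity_score_matching")
print(estimate.value)  # should be close to the true effect of 2

# 4. Refute: perturb the assumptions; a reliable estimate should barely move.
refutation = model.refute_estimate(
    estimand, estimate, method_name="random_common_cause")
print(refutation)
```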
true
true
true
For decades, causal inference methods have found wide applicability in the social and biomedical sciences. As computing systems start intervening in our work and daily lives, questions of cause-and-effect are gaining importance in computer science as well. To enable widespread use of causal inference, we are pleased to announce a new software library, DoWhy. Its […]
2024-10-12 00:00:00
2018-08-21 00:00:00
https://www.microsoft.co…b7b4771617f2.png
article
microsoft.com
Microsoft Research
null
null
12,966,520
https://www.linkedin.com/pulse/most-project-managers-ignore-huge-aspect-task-maioli-mackeprang
Most project managers ignore a huge aspect of task estimation
Christian Maioli Mackeprang
# Most project managers ignore a huge aspect of task estimation From the first time I did an estimate, it seemed evident to me that **previous experience with a task is tremendously helpful in estimating it accurately**. For instance, if you’ve installed a particular library a bunch of times, you should at this point have a good idea about how long that usually takes you, right? ## What the current methodologies do Each person on a team has different levels of familiarity with each task, so **it doesn’t matter if one person estimates all tasks, or if you take an average from estimates done by several people. In both cases, there is loss of valuable information.** Strangely, this familiarity factor has been largely ignored by teams I’ve worked in. My current one does “planning poker” for example, which tries to have a few people agree on an estimate. Problem is, what if only one of them has actually implemented such a task before? The weight is on that guy to get everyone else to agree with him. **Averaging estimates can lead to completely useless results.** Waterfall had a similar, if not worse, problem. Usually in that kind of environment, the team leader would do the estimating, or he would defer that to another team member. Again, programmer experience was ignored. **Never have a single person do all the estimating.** ### Peer pressure sweeps the problem under the carpet Another relevant factor that is usually ignored during estimates is: **who is actually going to implement it?** Certainly, the estimate cannot be accurate if you label a task as low complexity and then assign it to someone with zero previous experience with such a task. This problem cannot be avoided by any system that separates estimating from implementation. Some teams try to mitigate this by having the implementer re-estimate it, but then you are wasting time estimating it twice and, in my experience, **if the estimates differ greatly, there is going to be peer pressure on the implementer** to try to get him to agree with the previous estimate. ## The benefits of focusing on task familiarity An interesting thing about paying attention to task familiarity is that it seems like people have no interest at all in gaming this metric. They know that they either do or do not know something, so there is naturally going to be less guesswork. If you love numbers, you can run a survey in your team to measure how familiar each person is with each task. **By starting from more accurate data, your conclusions should be more valuable.** Start by building a table with team members and tasks, and have each cell be a number from 0 to 10 measuring how familiar each member is with that task. Having done that, you can use that table to help you make decisions and to calculate valuable things; a short sketch of one such calculation follows this list. - To **maximize implementation speed**, simply assign each task to the person most familiar with it. - To **maximize the potential of your team**, have people learn more by assigning somewhat unfamiliar tasks to each one. Keep the effort level challenging but not overwhelming. - **Get people up to speed quickly** by teaming up new members with those that are already familiar with the tasks you’ve assigned to the new guys. - Build a general technology familiarity table for your team, measuring each tech (React, Angular, etc.) against how familiar each member is with it, which will let you **know how much of an impact you’ll have by introducing the new tech into your stack**. 
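Here is a minimal sketch (SciPy) of the "maximize implementation speed" policy from the list above; the member names, task names, and scores are made up for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows are team members, columns are tasks; entries are 0-10 familiarity
# scores from the survey described above (all values invented here).
members = ["Ana", "Ben", "Chi"]
tasks = ["auth API", "React UI", "DB migration"]
familiarity = np.array([
    [9, 2, 5],
    [3, 8, 4],
    [1, 6, 7],
])

# linear_sum_assignment minimises total cost, so negate the scores to
# find the one-task-per-person assignment with maximum total familiarity.
rows, cols = linear_sum_assignment(-familiarity)
for r, c in zip(rows, cols):
    print(f"{members[r]} -> {tasks[c]} (familiarity {familiarity[r, c]})")
```

Passing `familiarity` without the minus sign would instead pair people with their least familiar tasks, a crude version of the "maximize the potential of your team" policy.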
## Further reading For more on task estimation, check out my other article "Why Good Software Estimates Are Impossible".
true
true
true
From the first time I did an estimate, it seemed evident to me that previous experience with a task is tremendously helpful in estimating it accurately. For instance, if you’ve installed a particular library a bunch of times, you should at this point have a good idea about how long that usually take
2024-10-12 00:00:00
2016-10-12 00:00:00
https://media.licdn.com/dms/image/v2/C4D12AQGCHcUknWkfTw/article-cover_image-shrink_600_2000/article-cover_image-shrink_600_2000/0/1520130167319?e=2147483647&v=beta&t=ulT6zPlIZ-LcqYNA-ty6pxC5Xlxrs67627kk3IZvD04
article
linkedin.com
LinkedInEditors
null
null
774,462
http://www.wired.com/wiredscience/2009/08/visualizations/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null