7 Waking Actions You Can Do to Stay Productive for the Entire Day

I use a Chrome extension called New Tab Draft, which basically turns your new tabs and windows into a type-able notepad. I use it to jot down random thoughts and quick reminders, but most importantly, my to-do list.
Having your to-do list in a highly-visible and accessible place gives you a constant reminder to stay on track.
It’s been highly effective for me because everything I do is on the internet. Because my to-do list is the first thing I see when I open a new tab, it becomes impossible to even open Facebook or YouTube without being reminded of the things I should be focusing on.
This trick works well for everyone — I’ve seen startup founders use it, kitchen caretakers, mothers, office workers, lawyers, etc. The key is not to overwhelm your to-do list with tasks: focus on solving no more than three (preferably one) major tasks that day. More details below.
3. Brainstorm 10 ideas
I got this idea from a very prolific writer, James Altucher, who says that by brainstorming 10 ideas each day, you force your mind to become more creative.
I practice this exercise religiously as a writer. Most of the topics I write down are scrapped, but there are always one or two out of the 10 that make for a great piece.
I’d suggest that anyone, even non-writers, practice this. If you’re a mom trying to figure out what to cook for dinner, practice writing down 10 things to cook. If you’re a comedian, come up with 10 jokes. If you’re an artist, do 10 quick sketches. The true value of coming up with 10 ideas, besides the improvement in creativity, is that it’s a numbers game: supposedly, the more ideas you come up with, the more good ideas you can generate.
4. Disable all notifications/Turn phone on silent
This is pretty self-explanatory, but it’s something that many people still don’t do at work.
Social media notifications, email alerts, or any kind of notification in general are a major distraction from getting things done. We click on the badge with the red numbers because our minds are built that way — to feel good when we see that we’ve accomplished a task or received a message from someone.
The reason why we feel good clicking these numbers is because our brain releases dopamine, a neurotransmitter that gets released anytime our brain wants us to feel good. It’s the same reason why people drink, gamble and have sex.
Disabling notifications is easy — most phones have settings or apps that let you set notifications to silent or disable them entirely. I use ‘Do Not Disturb’ on both my MacBook and iPhone to achieve this.
As for things I can’t avoid, like Medium notifications (because I write for a living), I use Stylish, a Chrome extension that allows you to alter the appearance of a website. It’s for the slightly more technical people, but it basically lets you hide the notifications entirely using CSS rules.

| https://tiffany-sun.medium.com/7-waking-actions-you-can-do-to-stay-productive-for-the-entire-day-48625d7a0f37 | ['Tiffany Sun'] | 2019-11-12 19:36:01.633000+00:00 | ['Creativity', 'Ideas', 'Productivity', 'Advice', 'Life Lessons'] |
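As an illustration of the Stylish approach just described, a user-style rule could look like the sketch below. The selector here is hypothetical — Medium’s actual class names change over time, so you would inspect the page with your browser’s dev tools to find the real element:

```css
/* Hide the notification bell and its unread-count badge.
   .js-notificationsButton is a placeholder selector — replace it with
   the class name you find via your browser's dev tools. */
.js-notificationsButton {
  display: none !important;
}
```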
What makes you a better developer?

Photo by MORAN on Unsplash
There are many different paths to becoming a developer. But becoming a better developer might be important for many of us: as a personal challenge, out of curiosity about how computers work, or for the pride of creating something.
Some people decide they want to be a developer after being inspired by other cool developers at a very early age. Others might just start wondering how computers work in high school or college and try to implement some applications by themselves. There are many different paths, and many factors have a great impact on becoming a developer.
I remember my first program was a simple hangman game, but at that time there were so many popular video games I wanted to play. Back then I spent more time playing those games than creating them. Now it is the opposite: I spend more time on software development.
I had a chance to meet and work with many successful developers, and I tried to draw inspiration from them to become a better developer. I have read many articles about how people became successful at what they set out to do. Many of those searches gave me another point of view.
Personally, I believe that, as in many other fields, becoming better at what you want to accomplish requires hands-on experience. There are many people who are very successful in their jobs and their lives, and many of them share the same principles.
Practice
If you want to accomplish something (believe me, it can be anything), you have to practice it.
There is a famous quote from Edison, from when he was trying to invent the electric lamp:
“I have not failed 10,000 times. I have not failed once. I have succeeded in proving that those 10,000 ways will not work. When I have eliminated the ways that will not work, I will find the way that will work.”
Thomas Edison
People get frustrated when they cannot accomplish something, or they are afraid to make mistakes. But if you don’t make these mistakes now, you might make them later on. Do not be afraid of being wrong or of making mistakes.
If none of us made mistakes, there would be nothing to learn from them.

| https://medium.com/quick-code/what-makes-you-better-developer-c2d196abc858 | ['Melih Yumak'] | 2020-08-06 11:16:51.156000+00:00 | ['Software Development', 'Development', 'Productivity', 'Developer', 'Software'] |
Clouds on Mars could have been caused by meteors
How did Mars get its clouds? New research suggests the key ingredient could have been meteors.
Clouds in Mars’ middle atmosphere, which begins about 18 miles — 30 kilometres — above the surface, have been known to scientists and astronomers but they have, thus far, struggled to explain how they formed. A new study — published on June 17th in the journal Nature Geoscience — may have solved this martian cloud mystery.
The paper from researchers at CU Boulder examines those wispy accumulations suggesting that they are a result of a phenomenon known as meteoric smoke — the icy dust created by space debris slamming into the planet’s atmosphere.
An important aspect of the paper’s findings is the reminder that the atmospheres of planets and their weather patterns should not be considered isolated from their host solar systems.
Victoria Hartwick is a graduate student in the Department of Atmospheric and Ocean Sciences (ATOC) and lead author of the new study. She elaborates: “We’re used to thinking of Earth, Mars and other bodies as these really self-contained planets that determine their own climates. But climate isn’t independent of the surrounding solar system.”
Discovering the origins of martian cloud seeds
The research — co-authored by Brian Toon at CU Boulder and Nicholas Heavens at Hampton University in Virginia — centres on the fact that these clouds seem to form from nowhere.
Hartwick points out that this doesn’t mean they appear spontaneously. She says: “Clouds don’t just form on their own. They need something that they can condense onto.”
An example is clouds on our own planet — low-lying clouds begin life as tiny grains of sea salt or dust blown high into the air; water molecules then collect around these particles. This gathering of water molecules grows bigger and bigger, eventually forming the accumulations we can see from the ground.
Hartwick says that the problem in cracking the martian cloud mystery is that those sorts of cloud seeds don’t exist in Mars’ middle atmosphere. This led her and her colleagues to suspect that meteors and space debris could act as de facto cloud seeds.
As Hartwick says — approximately two to three tons of space debris crash into Mars every day, ripping apart the planet’s atmosphere and injecting a huge volume of dust into the air.
To discover if those dust clouds would be sufficient to give rise to Mars’ mysterious clouds, Hartwick and her team created massive computer simulations to model the flows and turbulence of the planet’s atmosphere. When they accounted for meteors in their calculations — clouds appeared.
Hartwick says: “Our model couldn’t form clouds at these altitudes before. But now, they’re all there, and they seem to be in all the right places.”
Hartwick and her team’s Mars cloud simulations (Hartwick)
Whilst Hartwick concedes that the model may initially sound unlikely, research has also shown that similar interplanetary debris and dust may help to seed clouds near Earth’s poles.
Despite this, the team warn that we shouldn’t expect to see gigantic thunderheads forming above the surface of Mars anytime soon as the clouds produced in the simulation were more thin and nebulous than those we experience on Earth.
A NASA balloon mission examined the rippling, electric blue clouds found high over Earth’s poles during twilight in summertime. (NASA)
Hartwick adds that just because the clouds are thin and wouldn’t be visible to the naked eye, that doesn’t mean they can’t have an effect on the dynamics of the climate. The simulations show that middle-atmosphere clouds, for instance, could have a large impact on the Martian climate. Depending on where the team looked, those clouds could cause temperatures at high altitudes to swing up or down by as much as 10 °C.
It is this climatic impact which Brian Toon — a professor in ATOC — finds most compelling. He believes that the team’s findings with regard to modern-day Martian clouds may also help to reveal details of the red planet’s evolution, including how it once managed to support liquid water at its surface.
Toon points out: “More and more climate models are finding that the ancient climate of Mars when rivers were flowing across its surface and life might have originated, was warmed by high altitude clouds.
“It is likely that this discovery will become a major part of that idea for warming Mars.”
Original research: http://dx.doi.org/10.1038/s41561-019-0379-6

| https://medium.com/swlh/clouds-on-mars-could-have-been-caused-by-meteors-ee4c74239374 | ['Robert Lea'] | 2019-06-17 15:11:30.784000+00:00 | ['Science', 'Space', 'Space Exploration', 'Mars', 'Weather'] |
Why Is Gold so Valuable?

How something useless becomes valuable
Something useless is valuable if we decide to treat it as valuable.
Seashells were once valued in many parts of the world and used as currency. Cigarettes have long been valued as money among prison inmates regardless of whether they smoked or not. And when smoking was banned, inmates started to value cans of mackerel, even expired ones. And you and I and everyone else value little pieces of paper with portraits and numbers printed on them.
We value things that everyone else values. And for some reason, everyone decided to value gold.

| https://medium.com/i-wanna-know/why-is-gold-so-valuable-221ca9194015 | ['David B. Clear'] | 2020-12-10 06:31:56.671000+00:00 | ['Money', 'Chemistry', 'Society', 'Science', 'Economics'] |
Programming in Your Browser Is (Almost) Here

How It Works
Screenshot by the author.
By going to any GitHub repository and clicking three buttons, you’ll be taken to a working VS Code window in your browser within about 30 seconds. In this one tab, it’s possible to code, build, test, and deploy faster than ever before.
The possibilities are extended even more with a ready-to-go terminal and the ability to install VS Code extensions within your codespace. You can even connect to a codespace in VS Code on your own machine using the Visual Studio Codespaces extension.
Repositories can also have specific settings to further customize Codespaces, including automatically installing extensions, forwarding ports, and more. This is done in a new devcontainer.json file and is documented nicely.
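As a sketch of what that customization could look like, here is a minimal devcontainer.json (the file format allows comments). The field names below match the common shape of the spec at the time, but treat them as illustrative rather than definitive — the supported keys have evolved, so check the current Dev Containers reference:

```json
{
  // Display name for the container configuration
  "name": "My Project",

  // Extensions to install automatically inside the codespace
  "extensions": [
    "dbaeumer.vscode-eslint",
    "esbenp.prettier-vscode"
  ],

  // Ports to forward from the codespace to your browser
  "forwardPorts": [3000],

  // Command to run once the container is created
  "postCreateCommand": "npm install"
}
```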
One benefit of using Codespaces is that new developers can be ready to help out in just a few seconds, with fewer errors along the way. New contributors can instantly have access to all the tools and info needed to make their first commit to a project.
Take a look at this clip from the Satellite 2020 keynote for a live demo of Codespaces:

| https://medium.com/better-programming/programming-in-your-browser-is-almost-here-a6a68ce63b60 | ['Ben Soyka'] | 2020-08-31 14:43:42.280000+00:00 | ['Software Development', 'Github', 'Startup', 'Software Engineering', 'Programming'] |
My Top 10 Favorite Facebook Advertising Features

Video is the future of Facebook.
Someday, Facebook might even be all video, all day.
And there’s good reason for that. People love to watch videos. At last count, Facebook users are watching 100 million hours of video per day on the social network.
Are you using Facebook Ads to grow your business?
If not, you should be. Here are nine reasons why.
Facebook has many great ad formats, targeting options, and campaign types.
Here are my top 10 favorite Facebook advertising features.
1. Lead Ads
In addition to being cheap and insanely effective, Facebook Lead Ads totally eliminate the need for people to visit a landing page on your website.
With Lead Ads you can acquire valuable contact information from potential customers who are using Facebook on a mobile device.
You can use these ads to get people to sign up for your email newsletter, offer deals or discounts, schedule appointments, and more.
2. Video Ads
Video ads are an awesome and cheap Facebook advertising feature — you can pay as little as a penny per video view!
More memorable than the usual text and image combo, Facebook video ads deliver strong brand recall and high engagement — and drive purchase intent.
Simply upload the video to Facebook’s native video player, customize the description, thumbnail, budget, and targeting, and go!
3. Engagement Ads on Wall Posts
Engagement ads can help make your Facebook Page look super popular to anyone who is checking out your business.
Facebook will only show this type of ad to the people who are most likely to engage with your post — reacting, commenting, or sharing.
Sure, getting thousands of comments and reactions is ultimately just vanity — but people want to be part of the in-crowd. Facebook Pages with zero fan interaction always looks a bit suspect. If your business is so great, where are all your customers?
4. Remarketing
Facebook remarketing lets you reach people who have already interacted with or checked out your brand in some way. Maybe they visited your website (or a specific page on it), took some sort of action in your app or game, or gave you their email address or phone number.
Facebook tags these people with a cookie. Your remarketing ads will show to those people as they go through their Facebook News Feed, so they will remember you and perhaps convert on one of your hard offers.
People who are familiar with your brand are 2x more likely to convert and 3x more likely to engage. Ridiculously powerful stuff!
5. Interest Targeting
Facebook’s interest targeting helps you find the people who are likely to be interested in buying your product or service.
You can reach specific audience based on their interests, their activities, and the pages they’ve liked. You can also combine interests to expand the reach of your ad.
Whether you want to target people who are interested in technology, fitness and wellness, entertainment, or a certain business/industry, this Facebook ad feature will help you do it.
6. Demographic Targeting
You can target people based on where they live, their age, their gender, their political leanings, their job title, or by specific life events (e.g., engagement, birthday, anniversary).
Facebook also offers financial targeting. You can specify that you only want to show your ads to people who make more than an income level you choose, whether it’s as low as $30,000 or more than $500,000.
If you sell a pricy product, you want to make sure your ads are shown to people who can afford to buy your stuff!
7. Behavior Targeting
Facebook’s behavior targeting lets you reach people based on purchase history, intent, device usage, and more.
Facebook uses data from third-party partners to figure out what people are purchasing, online and offline. After matching up that data with user IDs, Facebook lets advertisers target audience segments based on thousands of different purchasing behaviors.
For instance, you can target ads to people who have purchased clothing, health and beauty, technology, or pet products. Or if you wanted to target based on travel, you could choose options such as frequent travelers, international travelers, cruises, or whether someone has used a travel app in the past month.
8. The Facebook Pixel
Facebook’s tracking pixel tracks actions that happen on your website as a result of your paid ads (as well as your organic posts). All you have to do is add some code to any pages you want to track.
Actions include things like adding an item to a cart, viewing content, making a purchase, and completing registration.
The tracking pixel will help you measure conversions, optimize your ads and targeting, and gain insights about the Facebook users visiting your website.
9. Website Conversion Campaigns
You want to use conversion campaigns when the objective of your ad is to get people to do something specific on your website or in your mobile app.
You define that action, whether it’s completing a purchase, adding something to a cart, or a page view.
10. Carousel Ads
Carousel Ads let you display multiple images or videos (up to 10) within the same ad unit. Each image or video can link to a different page of your website.
You can use these images to highlight products, features, or a promotion.
When done well, Carousel ads have proven to significantly increase conversions and click-through rates.
Bonus: Facebook Messenger Bots
Businesses can now create bots for Facebook Messenger that will “talk” to your customers anytime, 24/7. How cool is that?
Facebook’s chat bots have a ton of potential in terms of customer service and sales. They can provide automated information, take orders, help you buy products or services, or provide shipping notifications.
And you never have to leave Facebook Messenger to shop or get the information you want.
Those are my favorite Facebook advertising features. What are yours?
Originally posted on Inc.com
About The Author
Larry Kim is the CEO of Mobile Monkey and founder of WordStream. You can connect with him on Twitter, Facebook, LinkedIn and Instagram.

| https://medium.com/marketing-and-entrepreneurship/my-top-10-favorite-facebook-advertising-features-238916a1bd78 | ['Larry Kim'] | 2017-04-28 16:18:06.320000+00:00 | ['Marketing', 'Facebook', 'Social Media', 'Digital Marketing', 'Advertising'] |
yes, this is an apocalypse — but it’s not what you think.

a few weeks ago, a few friends of mine were sitting in our apartment discussing the news around COVID-19 — at the time, in the American mainstream, the issue was still a mere rumbling, a trickle of ominous but still-remote stories about Italian hospitals and Iranian patients and the occasional reminder to “wash your hands and cover your cough.” back then — in those naïve days of early March — none of us present knew anyone who had been personally affected by the disease (that would change less than a week later). our lives still proceeded with minimal disruption.
yet we had noticed that stores were selling out of hand sanitizer, that some offices were encouraging employees to work from home, that healthcare experts were appearing increasingly on national news to issue warnings that jarred our stubborn sense of immunity. murmurs about travel restrictions and school closings were beginning to rise, a subtle unease leaking into the air.
adding up the signs, one of my friends asked, with a furrowed brow, “so, is this like, an apocalypse?”
i smiled, probably chuckled a little, and asked if, rather than “apocalypse,” he had meant to use the word “epidemic.” he nodded, “yeah i guess so,” he said, although he was perhaps just being polite.
i’ve thought about that exchange many times since. in the moment, i took his remark as a sort of verbal typo — of course he didn’t mean “apocalypse” — that was not a word used in literal, adult discussions. for me, the term is inseparable from the Biblical book of Revelation, the “end times” narrative, an account laden with symbolic images of cosmic upheaval, supernatural destruction, and Final Judgement.
so of course it doesn’t apply here, right? if the heavens aren’t being split open, if “angels of woe” aren’t ravaging the earth, if the moon (or is it the sun?) hasn’t turned blood red — then apocalypse is not what we’re experiencing.
so my thinking went.
until i recalled that the root meaning of the word “apocalypse” (ἀποκάλυψις or apokalypsis) is simply: “to uncover, reveal, expose.”
so, antichrists and demon armies aside, a literal apocalypse is merely an event in which a hidden object, reality, or truth is revealed.
suddenly, my friend’s words feel — dare i say it — prophetic.
because, so far, this has all felt like a vast revelation, a global exposure — of truths we have too long, too often, covered over. | https://sarahaziza1.medium.com/yes-this-is-an-apocalypse-but-its-not-what-you-think-d84185193f08 | ['Sarah Aziza'] | 2020-03-26 14:59:16.831000+00:00 | ['Health', 'Poetry', 'Spirituality', 'Covid 19', 'Mental Health'] |
1,107 | How to Thrive as a Freelancer Working From Home | As a freelancer, I have the good fortune of being able to work from home. Lots of people dream about earning a living from the comfort of their own home. You can choose your own hours, you don’t have a long commute to work, no moody colleagues, … the list of advantages is almost endless — especially if you’re an introvert, like me.
But even if it feels like a veritable paradise at first, working from home can quickly degenerate into a nightmare if you don’t approach it with the necessary discipline and self-management. In fact, a freelancer survey has shown that 62% of freelancers feel stressed, and 54% find it hard to stay productive at home.
We all know that time is money, but unfortunately, a lot of it gets lost in disorganization and disruption. What’s more, we’re faced with a constant barrage of technology, people, and tasks that can contribute to this disorganization. Many freelancers find that they rush from one task to the next, trying to get everything done. In the end, it’s not just your productivity that suffers but your mental health too.
So, to all of you out there who are considering earning your living by working from home, here are a few helpful tips to get you started. | https://medium.com/swlh/how-to-thrive-as-a-freelancer-working-from-home-52d73dbd5580 | ['Kahli Bree Adams'] | 2020-06-30 04:17:37.001000+00:00 | ['Business', 'Small Business', 'Entrepreneurship', 'Productivity', 'Freelancing'] |
1,108 | How storage works in Prometheus, and why is this important? | How storage works in Prometheus, and why is this important?
Learn the fundamentals that make Prometheus such a great solution for monitoring your workloads, and use them to your own benefit.
Photo by Vincent Botta on Unsplash
Prometheus is one of the key systems in today's cloud architectures. It was the second project to graduate from the Cloud Native Computing Foundation (CNCF) after Kubernetes itself, and it is the monitoring solution par excellence for most of the workloads running on Kubernetes.
If you have already used Prometheus for some time, you know that it relies on a time-series database (TSDB), which is one of its key elements. In their own words, from the official Prometheus page:
Every time series is uniquely identified by its metric name and optional key-value pairs called labels. A series is similar to a table in a relational model, and inside each of those series we have the samples, which are similar to tuples. Each sample contains a float value and a millisecond-precision timestamp.
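As an illustration, a single series identifier and one of its samples might look like this (the metric, labels, and values here are made up for the example):

```
http_requests_total{job="api-server", method="POST", handler="/messages"}
    value: 1027.0    timestamp: 1605432000123 (ms)
```

The metric name plus the full label set is the series identity; everything appended over time under that identity is just (value, timestamp) samples.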
Default on-disk approach
By default, Prometheus uses a local-storage approach, storing all those samples on disk. This data is distributed across different files and folders that group different chunks of data.
So, we have folders to create those groups. By default, each one covers a two-hour block and can contain one or more files, depending on the amount of data ingested in that period of time, as each folder contains all the samples for that specific timeline.

Additionally, each folder also contains metadata files that help locate the metrics inside each of the data files.

A block is only persisted in a complete manner when its time window is over; before that, the data is kept in memory, and a write-ahead log (WAL) technique is used to recover the data in case of a crash of the Prometheus server.
So, at a high-level view, the directory structure of a Prometheus server’s data directory will look something like this:
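A simplified sketch of that layout (block directory names are randomized IDs, and the exact set of files varies between Prometheus versions):

```
./data
├── 01BKGV7JBM69T2G1BGBGM6KB12    # one (by default two-hour) block
│   ├── chunks
│   │   └── 000001                # the sample data itself
│   ├── index                     # locates series and labels in the chunks
│   ├── meta.json                 # block metadata (time range, stats)
│   └── tombstones                # deletion markers
├── chunks_head                   # chunks of the current in-memory block
└── wal                           # write-ahead log used for crash recovery
    ├── 000000002
    └── checkpoint.000001
```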
Remote Storage Integration
Default on-disk storage is good, but it has some limitations in terms of scalability and durability, even considering the performance improvements of the latest versions of the TSDB. So, if we'd like to explore other options to store this data, Prometheus provides a way to integrate with remote storage locations.
It provides an API that allows writing the samples that are being ingested to a remote URL and, at the same time, reading sample data back from that remote URL, as shown in the picture below:
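In configuration terms, this integration is enabled through the remote_write and remote_read sections of prometheus.yml; a minimal sketch (the URLs are placeholders for your adapter's endpoints):

```yaml
remote_write:
  - url: "https://remote-storage.example.com/api/v1/write"

remote_read:
  - url: "https://remote-storage.example.com/api/v1/read"
    read_recent: true  # also query the remote store for recent samples
```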
As always in anything related to Prometheus, the number of adapters created using this pattern is huge, and it can be seen in the following link in detail:
Summary
Knowing how Prometheus storage works is critical to understanding how we can optimize its usage, improve the performance of our monitoring solution, and provide a cost-efficient deployment.
In the following posts, we're going to cover how to optimize the usage of this storage layer: making sure that only the metrics and samples that are important to us are stored, and analyzing which metrics use most of the time-series database, so we can make good decisions about which metrics should be dropped and which ones should be kept.
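As a small preview of that optimization work, a metric that turns out to be unimportant can be dropped at scrape time with a metric_relabel_configs rule; a minimal sketch (job name, target, and metric pattern are placeholders):

```yaml
scrape_configs:
  - job_name: "my-app"
    static_configs:
      - targets: ["localhost:8080"]
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: "noisy_metric_.*"  # metrics we decided not to keep
        action: drop
```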
So, stay tuned for the next post regarding how we can have a better life with Prometheus and not die in the attempt. | https://medium.com/dev-genius/how-storage-works-in-prometheus-and-why-is-this-important-1882c340fee2 | ['Alex Vazquez'] | 2020-11-15 10:51:42.125000+00:00 | ['Software Development', 'Technology', 'Kubernetes', 'Cloud Computing', 'Programming'] |
1,109 | General Purpose Tensorflow 2.x Script to train any CSV file | With that out of the way, let's get started.
First, you just need to specify a few parameters.
DATASET_PATH — where your dataset is located in the file system
LABEL_NAME — target label column name
TASK — “r” for regression and “c” for classification
DUMMY_BATCH_SIZE — don’t worry about this. Just set it to 5
BATCH_SIZE — batch size used for training
EPOCHS — number of epochs to train for
TRAIN_FRAC — the fraction of data to be used for training. float between 0 and 1.
CHECKPOINT_DIR — folder to store checkpointed models
Imports
from collections import defaultdict
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import kerastuner as kt
import IPython
During the whole process, we will be using tf.data to handle the dataset. By using tf.data, you can load massive datasets one chunk at a time. It also has a lot of useful functionality, like caching and prefetching the next chunk of data while the model is training on the current chunk.
The below function will return a fresh dataset object that you can iterate over one chunk(batch size) at a time.
def get_dataset(batch_size=5):
    return tf.data.experimental.make_csv_dataset(
        DATASET_PATH, batch_size=batch_size,
        label_name=LABEL_NAME, num_epochs=1)
If the task is regression, the number of output nodes is one
if TASK == "r":
    OUTPUT_NODES = 1
If it is a classification task, then we need to find the number of labels (classes) by iterating over the whole dataset, in order to determine the number of output nodes.
elif TASK == "c":
    unique_labels = set()
    for _, label in get_dataset(batch_size=BATCH_SIZE):
        for ele in label.numpy():
            unique_labels.add(ele)

    num_labels = len(unique_labels)
    if num_labels <= 2:
        OUTPUT_NODES = 1
    else:
        OUTPUT_NODES = num_labels
As I mentioned before, this script will do all the preprocessing on its own without the need to have external functions because all the preprocessing logic is embedded as one of the layers in the model. This enables you to just give the raw data to the deployed model.
The first step in achieving that is to define what is known as model inputs.
It's simply a dictionary where the keys are the column names and values are tf.keras.Input objects
model_inputs = {}

for batch, _ in get_dataset(batch_size=DUMMY_BATCH_SIZE).take(1):
    for col_name, col_values in batch.items():
        model_inputs[col_name] = tf.keras.Input(
            shape=(1,), name=col_name, dtype=col_values.dtype)
Sample Output:
{
    "clouds_all": <tf.Tensor "clouds_all:0" shape=(None, 1) dtype=int32>,
    "holiday": <tf.Tensor "holiday:0" shape=(None, 1) dtype=string>,
    "rain_1h": <tf.Tensor "rain_1h:0" shape=(None, 1) dtype=float32>,
    "snow_1h": <tf.Tensor "snow_1h:0" shape=(None, 1) dtype=float32>,
    "temp": <tf.Tensor "temp:0" shape=(None, 1) dtype=float32>,
    "weather_description": <tf.Tensor "weather_description:0" shape=(None, 1) dtype=string>,
    "weather_main": <tf.Tensor "weather_main:0" shape=(None, 1) dtype=string>
}
One feature that tf.data has is that it can automatically detect the data type of each column. Let's use that to our advantage and split the model_input dictionary into multiple dictionaries according to the data type
integer_inputs = {}
float_inputs = {}
string_inputs = {}

for col_name, col_input in model_inputs.items():
    if col_input.dtype == tf.int32:
        integer_inputs[col_name] = col_input
    elif col_input.dtype == tf.float32:
        float_inputs[col_name] = col_input
    elif col_input.dtype == tf.string:
        string_inputs[col_name] = col_input
Sample integer_inputs:
{'clouds_all': <tf.Tensor 'clouds_all:0' shape=(None, 1) dtype=int32>}
Sample float_inputs:
{
    'rain_1h': <tf.Tensor 'rain_1h:0' shape=(None, 1) dtype=float32>,
    'snow_1h': <tf.Tensor 'snow_1h:0' shape=(None, 1) dtype=float32>,
    'temp': <tf.Tensor 'temp:0' shape=(None, 1) dtype=float32>
}
Sample string_inputs:
{
    'holiday': <tf.Tensor 'holiday:0' shape=(None, 1) dtype=string>,
    'weather_description': <tf.Tensor 'weather_description:0' shape=(None, 1) dtype=string>,
    'weather_main': <tf.Tensor 'weather_main:0' shape=(None, 1) dtype=string>
}
Now, for the preprocessing: the general flow for the integer and float columns is to first concatenate the layers by passing them through tf.keras.layers.Concatenate and then normalize them through a normalization layer. But to calculate the mean and std for normalization, we have to iterate through the whole dataset once.

When it comes to preprocessing the string columns, we first get the vocabulary (list of unique words) for each column. Then we pass the input through a string lookup layer tf.keras.layers.experimental.preprocessing.StringLookup and then through a one-hot encoding layer tf.keras.layers.experimental.preprocessing.CategoryEncoding. You can also perform some simple operations like converting everything to lowercase and eliminating leading and trailing white spaces.
StringLookup layer maps strings from a vocabulary to integer indices. CategoryEncoding then takes those integer indices and creates a one-hot vector.
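To make that pipeline concrete, here is the same idea in plain Python — no TensorFlow, purely an illustration (a real StringLookup also reserves slots for mask/OOV tokens, with details that differ between TF versions):

```python
def string_lookup(vocabulary):
    # Map each known word to an integer index; 0 is the
    # out-of-vocabulary slot, mirroring StringLookup's OOV handling.
    table = {word: i + 1 for i, word in enumerate(vocabulary)}
    return lambda word: table.get(word, 0)

def category_encoding(index, depth):
    # One-hot vector of length `depth`, like CategoryEncoding.
    return [1 if i == index else 0 for i in range(depth)]

lookup = string_lookup(["clouds", "mist", "rain"])
print(category_encoding(lookup("rain"), depth=4))  # [0, 0, 0, 1]
print(category_encoding(lookup("hail"), depth=4))  # [1, 0, 0, 0]
```

In the model itself this happens per string column, and the resulting one-hot vectors are concatenated together.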
Pass the inputs through their corresponding functions to get preprocessed layers
integer_layer = numerical_input_processor(integer_inputs)
float_layer = numerical_input_processor(float_inputs)
string_layer = string_input_processor(string_inputs)
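The two helper functions called here are not shown in this excerpt. The sketches below are reconstructions based purely on the description above — the layer arguments (e.g. max_tokens vs. num_tokens on CategoryEncoding, vocab_size() vs. vocabulary_size() on StringLookup) changed across TF 2.x versions, and the string helper relies on get_dataset() and BATCH_SIZE defined earlier:

```python
import tensorflow as tf

def numerical_input_processor(inputs):
    # Hypothetical sketch: concatenate the numeric Input layers, then
    # normalize them. The Normalization layer's mean/std come from
    # adapt(), which needs one full pass over the dataset (omitted).
    if not inputs:
        return None
    values = [tf.cast(v, tf.float32) for v in inputs.values()]
    x = tf.keras.layers.Concatenate()(values) if len(values) > 1 else values[0]
    norm = tf.keras.layers.experimental.preprocessing.Normalization()
    return norm(x)

def string_input_processor(inputs):
    # Hypothetical sketch: StringLookup -> one-hot CategoryEncoding
    # per column, then concatenate all the encoded columns.
    if not inputs:
        return None
    outputs = []
    for col_name, col_input in inputs.items():
        vocab = set()  # collect the column's vocabulary in one pass
        for features, _ in get_dataset(batch_size=BATCH_SIZE):
            for val in features[col_name].numpy():
                vocab.add(val.decode("utf-8"))
        lookup = tf.keras.layers.experimental.preprocessing.StringLookup(
            vocabulary=sorted(vocab))
        one_hot = tf.keras.layers.experimental.preprocessing.CategoryEncoding(
            max_tokens=lookup.vocab_size(), output_mode="binary")
        outputs.append(one_hot(lookup(col_input)))
    if len(outputs) > 1:
        return tf.keras.layers.Concatenate()(outputs)
    return outputs[0]
```

Either way, each helper returns a single tensor (or None when it gets no inputs), which is what the list-building code below expects.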
Add all the inputs to a list
preprocessed_inputs = []

if integer_layer is not None:
    preprocessed_inputs.append(integer_layer)
if float_layer is not None:
    preprocessed_inputs.append(float_layer)
if string_layer is not None:
    preprocessed_inputs.append(string_layer)
preprocessed_inputs might look something like this
[<tf.Tensor 'normalization/truediv:0' shape=(None, 1) dtype=float32>, <tf.Tensor 'normalization_1/truediv:0' shape=(None, 3) dtype=float32>, <tf.Tensor 'concatenate_1/concat:0' shape=(None, 66) dtype=float32>]
Concatenate all the inputs
if len(preprocessed_inputs) > 1:
    preprocessed_inputs_cat = tf.keras.layers.Concatenate()(preprocessed_inputs)
else:
    preprocessed_inputs_cat = preprocessed_inputs[0]
Finally, create a tf.keras.Model that takes the model_inputs and returns the preprocessed outputs
preprocessing_head = tf.keras.Model(model_inputs, preprocessed_inputs_cat)
You can also plot the preprocessing model to see what the individual layers look like. Increase the value of dpi if you want the plot to have higher clarity.
tf.keras.utils.plot_model(model = preprocessing_head, rankdir="LR", dpi=72, show_shapes=True, expand_nested=True, to_file="preprocessing_head.png")
You might get an image something like this | https://zahash.medium.com/general-purpose-tensorflow-2-x-script-to-train-any-csv-file-c5a5abe5c7fd | ['Zahash Z'] | 2020-12-01 14:10:39.137000+00:00 | ['Python', 'TensorFlow', 'Machine Learning', 'Data Science', 'Artificial Intelligence'] |
1,110 | 5 Words We Need to Stop Using in Climate Change Conversations | 5 Words We Need to Stop Using in Climate Change Conversations Tabitha Whiting · Oct 11 · 5 min read
I’m not saying that language is what’s stopping us from adequately addressing the climate emergency. But I am saying that it has a part to play, and that some of the words commonly used around the topic of climate change are problematic. Here are 5 of those words.
1. Change
Photo by Ross Findon on Unsplash
Let’s start with the obvious one: change. ‘Climate change’ is now the most commonly used phrase to describe the long-term shifts in weather conditions and temperature caused by human-generated greenhouse gases.
It used to be described as global warming — but, quite rightly, there was a feeling that ‘warming’ did not truly reflect the reality of the impacts this phenomenon is responsible for, suggesting temperature changes alone.
‘Change’, though, has its own issues. The main issue is that ‘change’ is a neutral term. Change can be good, and change can be bad. There’s no urgency, and nothing that suggests climate change is something that will have vast, negative impacts.
This is dangerous because it could inadvertently play into the main narrative of climate sceptics — that our earth’s climate naturally fluctuates, and we’re simply in a period of increased temperatures currently, with nothing to do with human activity.
Over recent years we’ve seen growth in the use of the terms ‘climate crisis’ and ‘climate emergency’ instead of climate change, which have the benefit of invoking urgency and the need to act now. However, these terms could also end up being damaging — we tend to associate ‘crisis’ and ‘emergency’ with short-term problems which are over relatively quickly. Climate change doesn’t fit that. It’s a long-term emergency, and the danger is that these terms lose their impact with the general public as time goes on.
“There is a limited semantic ‘budget’ for using the language of emergency, and it’s possible you can lose audiences over time, particularly if there are no meaningful policies addressing the fact that there really is an ongoing emergency.” — Dr David Holmes, director of the Climate Change Communication Research Hub
There’s no perfect answer, but it’s important to be aware of the different connotations of these terms when having conversations with others about climate change.
2. Believe
Photo by Ran Berkovich on Unsplash
“Do you believe in climate change?”
It’s a question I see all too often — from Twitter threads to political debates. The problem with this question lies in the use of the word ‘believe’.
Belief has no place in conversations about climate change. 97% of the world’s research scientists agree that climate change exists, is caused by human activity, and will have vast and devastating impacts unless we start making drastic changes right now to the way that we live. It’s fact. Proven, scientific fact. There’s simply no room for belief. When we continue to use this word around the topic of climate change, we allow there to be room for belief.
And when we allow room for belief, we also allow room for disbelief, scepticism, and denial. Either you agree that we need to address climate change now, or you’re wrong (and likely have an ulterior motive to do with allowing capitalism to continue to thrive).
3. Goals
Photo by Markus Winkler on Unsplash
Whether it’s goals for reaching net-zero or targets set by The Paris Agreement, it’s very common to see headlines focused on climate goals and targets — and usually focused on the likelihood of us missing them.
These goals are important if we are to tackle the climate emergency. But when we focus on facts, targets, and statistics, it becomes too easy to dehumanise the climate emergency.
What those goals represent are lives. If Western societies fail to reach net-zero carbon by 2050 then we are threatening the lives of those at the front line of climate impacts, and failing to protect our fellow humans. It’s all too easy to forget that when we talk about climate change in terms of goals and targets.
4. Fight
Climate action and climate policy are often framed as part of the ‘fight against climate change’. We paint the picture that we are battling with this external force of ‘climate change’ which is overpowering us.
The reality is that it is humans who have caused climate change. And it is humans who have the power to halt it. It’s nothing to do with us ‘defeating’ the mighty power of climate change. The power lies in our hands, or rather, in the hands of the oil and gas giants to stop extracting and burning fossil fuels to drive our capitalist consumerist society, and politicians to force them to do so.
The word ‘solve’ can be added to this list for very similar reasons. We don’t need to ‘solve’ climate change. The solution is simple: stop pouring more and more greenhouse gases into the atmosphere. Searching for ‘solutions’ is only a distraction.
5. Neutral
How many adverts have you seen from large brands claiming that they’re going ‘100% carbon neutral’? Well, watch out, because it’s a classic greenwashing term.
The term carbon neutral means that the creation of a product or service does not produce any carbon emissions. Sounds great. But carbon neutrality usually comes from a combination of a brand reducing the carbon emissions of their practice and supply chain (for instance, through installing solar panels on a warehouse roof to provide renewable energy), and from the brand purchasing carbon offsets — paying someone else to capture or avoid emitting enough carbon emissions to ‘neutralise’ the carbon emissions they are causing.
Carbon offsets and carbon neutrality offer brands a way to assuage their climate guilt without actually doing anything to reduce their direct carbon footprint, and the ability to plaster ‘sustainable’ all over their marketing in order to win over ethically-minded customers.
It’s similar to the idea of ‘net zero’ which you’ll often hear politicians talk of, and which represents a combination of reducing emissions as well as relying on carbon capture technology to fix the problem. We want the focus to be on reducing direct carbon emissions, or we’re never going to get ourselves out of this crisis, simply racing against the planet to find more and more ways to capture the growing carbon emissions we continue to rack up.
They’re terms to be very wary of. Make sure you do your research before you buy from that ‘carbon neutral’ brand or trust the claims coming out of a politician’s mouth when they talk about ‘net zero carbon’. | https://medium.com/age-of-awareness/5-words-we-need-to-stop-using-in-climate-change-conversations-3a2d43ca08cc | ['Tabitha Whiting'] | 2020-10-15 12:47:40.602000+00:00 | ['Environment', 'Words', 'Language', 'Sustainability', 'Climate Change'] |
1,111 | 6 Actionable Tips To Achieve Your Fitness Goals In 2021 | I have been practicing yoga for three years now. Last year I started my yoga diploma course. All the advanced poses were part of the syllabus. Including middle split, handstand, standing splits, and some other extreme balancing poses.
I needed to perform all the extreme poses within a span of 6 months. So, I used to practice every day for 90 minutes. My yoga flow was intense and back-breaking. Motivating myself to do the same routine for 6 days a week was a challenge.
If you are new to the fitness world then you know what a struggle it is to follow a consistent workout routine. It takes a great deal of self-discipline to practice the same routine day after day until you get perfect at it.
Here are some techniques to keep you motivated to exercise so that you can achieve all your fitness goals in 2021.
Set S.M.A.R.T Goals
We make big goals on 1st January that we struggle to achieve for a few days before throwing them out in the dumpster. Most people are likely to give up on their resolutions by January 19.
You need to make goals using the S.M.A.R.T method:
S:- Specific
“I want to get healthy” is not a specific goal; you need to break it down into how you are planning to achieve it. Maybe you need to lose weight to get healthy. But how much weight do you need to lose to get healthy?
It may be 10 pounds or 15 pounds. Set a specific goal to have a clear sight of what you want to do.
M:- Measurable
You can measure the amount of weight lost on a weighing scale, so losing a certain amount of weight is a measurable goal. Working out five times a week and walking 6000 steps daily are also measurable goals.
No matter how you measure your goal, it should accurately reflect success.
A:- Attainable
You should be mindful of setting attainable and realistic goals. If you set goals that you have no power over, you will be left disappointed. You cannot get shredded six-pack abs in a week; that is an unattainable goal that will crush your fitness routine.
R:- Relevant
Your goals should be aligned with your values. They should be able to transform your life for the better. They should be relevant to your journey.
Set goals that help you achieve your expected result.
T:- Time-Bound
All goals should have a window of execution. Giving yourself deadlines will help you in the timely execution of your goal. It will make you more serious about your goal.
It is easy to track a goal when you set up a time limitation on it. Time limits motivate you to push forward.
Follow the 2-Minute Rule
You need to build a habit of working out regularly to achieve all your health and fitness goals. Instilling a habit of being active every day is not as hard as everyone makes it out to be.
You need to start small to become consistent with your practice. Start with a 30-minute walk rather than aiming for 10,000 steps.
Start any exercise and do it just for 2 minutes, you will not achieve your desired result in those 2 minutes but you will get motivated to follow through with your real workout. It is all about getting up from the couch to exercise.
Start small to build a foundation for bigger and better habits.
Make It A Habit To Be Active In The Morning
Morning exercise is the miracle cure you need to start your day in the right way. It energizes you for the day ahead.
Doing a 15–20 minute workout in the morning sets you up for the day, as it helps you release endorphins, which result in a natural high.
Completing a portion of your daily workout earlier in the day will provide you motivation later in the day to smash your goal.
Embrace The Internet
There is a host of free workouts for you to try on the internet. You can find yoga, HIIT, Pilates, and many more types of workouts on YouTube.
There is a channel for every kind of workout, so you can easily get started in fitness with the help of YouTube. All you need is your phone, an internet connection, and the will to become better.
Here is a list of top fitness influencers that you can follow in various sports.
Yoga:- Yoga with Adriene
Pilates:- blogilates
HIIT:- Pamela Reif
At-Home Workout:- Chloe Ting
Jumping Rope:- Jump Rope Dudes
Set A Specific Time To Workout
Exercising daily at the same time helps you build a routine. This routine is what you need to make working out a top-priority habit for 2021. Working out at the same time regularly helps you build an automated habit.
Working out becomes easy when you have assigned a particular time for it. You automatically close that window for any other activity. It helps you stay focused on your goal.
Maintain a Fitness Journal
Maintaining a fitness journal is one of the best tools that can help you achieve all your fitness goals. It helps you track your progress over time. You can track your progress daily, weekly, or every 15 days. It is up to you.
You can also track your rest days. Tracking your progress helps you set a realistic timeline for achieving your goals. It encourages you to push forward. | https://medium.com/in-fitness-and-in-health/6-actionable-tips-to-achieve-your-fitness-goals-in-2021-9d6b414407f7 | ['Khyati Jain'] | 2020-12-28 22:00:56.249000+00:00 | ['Fitness', 'Fitness Tips', 'Health', 'Productivity', 'Sports']
1,112 | Vitamin D for Covid-19: New Research Shows Promise | Vitamin D for Covid-19: New Research Shows Promise
Studies highlight potential life-saving benefits. But some experts aren’t convinced.
The study’s findings were significant — “spectacular” even, in the words of at least one expert commenter.
A team of doctors at Reina Sofía University Hospital in Córdoba, Spain, split 76 newly admitted Covid-19 patients into two groups. One group got the standard treatment at the time, which included a cocktail of antibiotics and immunosuppressant drugs. The second group got the same standard treatment — plus a drug designed to raise vitamin D levels in the blood.
Among the 26 hospitalized people who received standard care alone, fully half went on to the intensive care unit (ICU) because their disease had worsened. Two of them died. But among the 50 people who received the vitamin D treatment on top of standard care, only one person ended up in the ICU. None died.
In their study write-up, published in October in the Journal of Steroid Biochemistry and Molecular Biology, the Spanish researchers explained that their experiment was a “pilot” study that requires follow-up work. But they also pointed out that theirs is not the first piece of evidence linking vitamin D to a reduced risk for severe respiratory infection. Far from it.
“Vitamin D supports a range of innate antiviral immune responses while simultaneously dampening down potentially harmful inflammatory responses,” says Adrian Martineau, PhD, a clinical professor of respiratory infection and immunity at Queen Mary University of London.
“The evidence that low vitamin D levels are a risk factor for severe [Covid-19] disease is not definitive, but many lines of research suggest that this is likely.”
Martineau was not involved with the Spanish study, but he has published several papers on vitamin D for the treatment and prevention of viral infections. In a 2017 research review, which appeared in the journal BMJ, he and his co-authors concluded that taking a daily or weekly vitamin D supplement is associated with a reduced risk for respiratory infection — especially among those who have low levels of the vitamin in their blood.
Martineau and others say it’s very possible — though not yet proven — that a vitamin D supplement could provide a measure of protection against SARS-CoV-2 and Covid-19.
How vitamin D may combat the coronavirus
Thanks in part to Martineau’s work on vitamin D and respiratory infections, the “sunshine vitamin” — so called because the human body requires UV light to make it — has been the focus of Covid-19 research almost since the start of the pandemic.
During the spring, several groups identified apparent associations between low levels of vitamin D and increased Covid-19 risks. Since that time, others have replicated their work. For a study published September 17 in PLOS One, researchers found that a person’s risk for a positive SARS-CoV-2 infection is “strongly and inversely” associated with blood levels of vitamin D. Taken together, these findings suggest that adequate vitamin D levels may help prevent a SARS-CoV-2 infection and also keep infections that do occur from growing worse.
The mechanisms that may explain vitamin D’s benefits are numerous. For example, macrophages are helpful white blood cells that play a number of virus-clearing roles. “Vitamin D deficiency impairs the ability of macrophages to mature,” says Petre Cristian Ilie, PhD, a Covid-19 investigator and research director at The Queen Elizabeth Hospital in the U.K. Moreover, Ilie says that vitamin D may increase levels of certain cell enzymes that help repel the coronavirus. There’s also evidence that the presence of vitamin D may dampen elements of the immune system that are involved in the so-called cytokine storm that is associated with severe Covid-19. These are just a sampling of the many ways in which vitamin D may protect against SARS-CoV-2.
“Even before the coronavirus pandemic, I think there was good evidence to support taking vitamin D supplements,” says Walter Willett, MD, a professor of epidemiology and nutrition at Harvard T.H. Chan School of Public Health. “This pandemic adds another reason.”
Since the early days of the pandemic, Willett has been investigating the relationship between vitamin D and Covid-19. “The evidence that low vitamin D levels are a risk factor for severe [Covid-19] disease is not definitive, but many lines of research suggest that this is likely,” he says.
“The healthier you are, the more your vitamin D levels will rise naturally.”
He points out that African Americans and other people of color, due to elevated levels of melanin in the skin, require more sun exposure than lighter-skinned individuals to produce like amounts of vitamin D. “We know from national surveys that Black people living in the U.S. have about 17 times higher rates of severe vitamin D deficiency than White people,” he says. Black Americans have also experienced disproportionately high rates of severe Covid-19. While many other factors contribute to these inequalities — including differences in income, work, and health care access — Willett says that it is “highly possible that low vitamin D levels can explain part of the huge disparities in severe Covid-19 infection.”
Research from Europe has found that countries hit hardest by Covid-19 — such as Spain and Italy — have a higher prevalence of vitamin-D deficiency than countries with populations that tend to be sufficient in D. It’s also worth noting that the burden of deadly Covid-19 appeared to dip during the sunnier summer months in the U.S., U.K., and elsewhere. There are many plausible explanations for this drop that have nothing to do with vitamin D or sun exposure — including improved clinical management of the disease. But some researchers have speculated that populationwide increases in sun exposure during the summer, and consequently improved vitamin D status, may have contributed to the disease’s apparent softening.
Not everyone is convinced
For all vitamin D’s promise, some experts say that the sunshine vitamin may turn out to be fool’s gold.
“I’m as excited as anyone about vitamin D, but I’m not ready to jump on the bandwagon,” says Mark Moyad, MD, the Jenkins/Pokempner director of preventive and alternative medicine at the University of Michigan Medical Center.
Moyad is among the world’s leading authorities on the risks and benefits of supplements. He reels off a number of potential confounders or complicating factors that could eventually squelch the current enthusiasm for vitamin D. “As you gain weight, vitamin D goes down because it’s sequestered in adipose tissue,” he explains. In fact, almost any disease that is associated with metabolic dysfunction or inflammation tends to drive down the blood’s quantity of vitamin D. And so it’s possible that low vitamin D is simply a marker of health issues — such as obesity and Type 2 diabetes — that are known to make Covid-19 worse, he says. (Similarly, some researchers have posited that vitamin D is an indicator of adequate sun exposure, which they say may provide a number of health benefits that are frequently misattributed to vitamin D.)
For those who want to raise their vitamin D levels safely, Moyad says that the best way to do so doesn’t involve a pill.
Furthermore, Moyad points out that some researchers have failed to find correlations between low levels of vitamin D and an increased risk for Covid-19 — including among people of color. Even if it turns out that vitamin D plays a role in moderating some aspect of SARS-CoV-2 infection or disease, it’s not a certainty that swallowing the vitamin as a supplement will do any good. “We’ve been trolled and teased before by these sorts of correlations, and we’ve paid the price,” he says.
To illustrate his point, he describes the decades of promising research that linked low vitamin D levels to bone weakness. But when, for a 2019 JAMA study, people took high daily doses of vitamin D for three years, their bones actually got weaker, not stronger. While the known risks of taking moderate amounts of vitamin D as a supplement are minimal, the JAMA study’s findings — as well as the findings of many other past vitamin studies — show that there can be unexpected and often unwanted consequences associated with supplement use.
For those who want to raise their vitamin D levels safely, Moyad says that the best way to do so doesn’t involve a pill. “Eat right, stop smoking, get some exercise, get outside in the sun, lose weight,” he says. “The healthier you are, the more your vitamin D levels will rise naturally.” For those dead set on taking a vitamin D supplement, he recommends taking no more than 600 to 800 IU per day, which is the National Institutes of Health (NIH) recommended daily amount for kids and adults.
Martineau — the London-based respiratory disease expert — offers similar advice. Based on some of his recent work, he says that daily doses of vitamin D in the 400 to 1,000 IU range appear to be safe and effective for the prevention of infections. Harvard’s Willett is a bit more bullish. He says that 2,000 IU per day is “a reasonable dose” for adults. But he also says more research is needed to identify optimal intakes.
“I love vitamin D, and I’m excited to see if it works,” Moyad adds. “But we may end up disappointed.” | https://elemental.medium.com/vitamin-d-for-covid-19-new-research-shows-promise-b2593e782933 | ['Markham Heid'] | 2020-10-08 05:32:47.127000+00:00 | ['The Nuance', 'Health', 'Covid 19', 'Science', 'Nutrition']
1,113 | The most impressive Youtube Channels for you to Learn AI, Machine Learning, and Data Science. | This channel publishes interviews with data scientists from big companies like Google, Uber, Airbnb, etc. From these videos, you can get an idea of what it is like to be a data scientist and acquire valuable advice to apply in your life.
Xander Steenbrugge is a machine learning researcher at ML6. His YouTube channel summarizes the critical points about machine learning, reinforcement learning, and AI in general from a technical perspective while making them accessible for a bigger audience.
A new ML Youtube channel that everyone should check out, Machine Learning 101 posts explainer videos on beginner AI concepts. The channel also posts podcasts with expert data scientists and professionals working on AI in commercial industries.
FreeCodeCamp is an incredible non-profit organization: an open-source community that offers a collection of resources to help people learn to code for free and create their own projects. They also run their own news platform that shares articles on programming and projects.
Kevin Markham creates in-depth YouTube tutorials to understand AI and machine learning. Data School focuses on the topics you need to master first and offers in-depth tutorials that you can understand regardless of your educational background.
Machine Learning TV has resources for computer science students and enthusiasts to understand machine learning better.
This YouTube channel aims to make machine learning and reinforcement learning more approachable for everyone. There is a 12-video playlist offering a full introduction to neural networks for beginners, and it seems a subsequent intermediate neural network series is currently in production.
Andreas Kretz is a data engineer and founder of Plumbers of Data Science. He broadcasts live tutorials on his channel on how to get hands-on experience in data engineering and videos with questions and answers about data engineering with Hadoop, Kafka, Spark, and so on.
Edureka is an e-learning platform with several tutorials and guidelines on trending topics in the areas of Big Data & Hadoop, DevOps, Blockchain, Artificial Intelligence, Angular, Data Science, Apache Spark, Python, Selenium, Tableau, Android, PMP certification, AWS Architect, Digital Marketing and many more.
Andrew Ng was named one of Time’s 100 Most Influential People in 2012 and one of Fast Company’s Most Creative People in Business. He co-founded Coursera and deeplearning.ai and is a former vice president and chief scientist at Baidu. He is an adjunct professor at Stanford University.
The official Deep Learning AI YouTube channel has video tutorials from the deep learning specialization on Coursera. Founded by Andrew Ng, DeepLearning.AI is an education technology company that develops a global community of AI talent.
DeepLearning.AI’s expert-led educational experiences provide AI practitioners and non-technical professionals with the necessary tools to go all the way from foundational basics to advanced application, empowering them to build an AI-powered future.
Tech With Tim is run by a brilliant programmer who teaches Python, game development with Pygame, Java, and machine learning. He creates high-quality coding tutorials in Python.
Created in 2016, Machine Learning University (MLU) is an initiative by Amazon with a direct objective: to train as many employees as possible to master machine learning, which is essential for the company to achieve the “magic” of offering products with this technology integrated.
This YouTube channel has tutorial videos related to science, technology, and artificial intelligence.
Sentdex creates some of the best Python programming tutorials on YouTube. His tutorials range from beginner to advanced. With more than 1,000 videos of Python programming tutorials, the channel goes further than just the basics: you can learn about machine learning, finance, data analysis, robotics, web development, game development, and more.
Joma Tech is a YouTuber who makes videos to help people get into the technology industry. He worked for large technology companies as a data scientist and software engineer. Based on his experience, he makes videos of interviews with experts and lifestyle in Silicon Valley and makes data science more accessible.
Python Programmer content includes tutorials on Python, Data Science, Machine Learning, book recommendations, and more.
DeepLearning.TV is all about deep learning, the field of study that teaches machines to perceive the world. Starting with a series that simplifies deep learning, the channel features topics such as how-to’s, reviews of software libraries and applications, and interviews with key individuals in the field. Through a series of concept videos showcasing the intuition behind every deep learning method, it shows that deep learning is actually simpler than you think.
This channel posts YouTube videos to help you build what’s next with secure infrastructure, developer tools, APIs, data analytics, and machine learning.
Keith Galli is a recent graduate from MIT. He makes educational videos about computer science, programming, board games, and more.
Data Science Dojo is a channel that promises to teach data science to everyone in an easy to understand way. You will find a multitude of tutorials, lectures, and courses on data engineering and science. | https://medium.com/swlh/21-amazing-youtube-channels-for-you-to-learn-ai-machine-learning-and-data-science-for-free-486c1b41b92a | ['Jair Ribeiro'] | 2020-12-11 11:52:36.350000+00:00 | ['Data Science', 'Online Learning', 'Artificial Intelligence', 'AI', 'Learning']
53 Python Interview Questions and Answers

1. What is the difference between a list and a tuple?
I’ve been asked this question in every python / data science interview I’ve ever had. Know the answer like the back of your hand.
Lists are mutable. They can be modified after creation.
Tuples are immutable. Once a tuple is created it cannot be changed.
Lists have order. They are ordered sequences, typically of the same type of object. Ie: all user names ordered by creation date, ["Seth", "Ema", "Eli"]
Tuples have structure. Different data types may exist at each index. Ie: a database record in memory, (2, "Ema", "2020–04–16") # id, name, created_at
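A quick demonstration of the difference — mutating a list works in place, while item assignment on a tuple raises a TypeError (the names below are illustrative):

```python
# Lists are mutable: modifying in place works
user_names = ["Seth", "Ema", "Eli"]
user_names[0] = "Sam"

# Tuples are immutable: item assignment raises TypeError
record = (2, "Ema", "2020-04-16")
try:
    record[0] = 3
except TypeError as e:
    print(f'Cannot modify a tuple: {e}')
```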
2. How is string interpolation performed?
Without importing the Template class, there are 3 ways to interpolate strings.
name = 'Chris'

# 1. f strings
print(f'Hello {name}')

# 2. % operator
print('Hey %s %s' % (name, name))

# 3. format
print("My name is {}".format(name))
3. What is the difference between “is” and “==”?
Early in my python career I assumed these were the same… hello bugs. So for the record, is checks identity and == checks equality.
We’ll walk through an example. Create some lists and assign them to names. Note that b points to the same object as a below.
a = [1,2,3]
b = a
c = [1,2,3]
Check equality and note they are all equal.
print(a == b)
print(a == c)
#=> True
#=> True
But do they have the same identity? Only a and b do.
print(a is b)
print(a is c)
#=> True
#=> False
We can verify this by printing their object id’s.
print(id(a))
print(id(b))
print(id(c))
#=> 4369567560
#=> 4369567560
#=> 4369567624
c has a different id than a and b .
4. What is a decorator?
Another question I’ve been asked in every interview. It deserves a post of its own, but you’re prepared if you can walk through writing your own example.
A decorator allows adding functionality to an existing function by passing that existing function to a decorator, which executes the existing function as well as additional code.
We’ll write a decorator that logs when another function is called.
Write the decorator function. This takes a function, func , as an argument. It also defines a function, log_function_called , which calls func() and executes some code, print(f'{func} called.') . Then it returns the function it defined.
def logging(func):
    def log_function_called():
        print(f'{func} called.')
        func()
    return log_function_called
Let’s write other functions that we’ll eventually add the decorator to (but not yet).
def my_name():
    print('chris')

def friends_name():
    print('naruto')

my_name()
friends_name()
#=> chris
#=> naruto
Now add the decorator to both.
@logging
def my_name():
    print('chris')

@logging
def friends_name():
    print('naruto')

my_name()
friends_name()
#=> <function my_name at 0x10fca5a60> called.
#=> chris
#=> <function friends_name at 0x10fca5f28> called.
#=> naruto
See how we can now easily add logging to any function we write just by adding @logging above it.
5. Explain the range function
range generates a sequence of integers (in Python 3 it returns a lazy range object, not a list) and there are 3 ways to use it.
The function takes 1 to 3 arguments. Note I’ve wrapped each usage in list comprehension so we can see the values generated.
range(stop) : generate integers from 0 up to, but not including, the “stop” integer.
[i for i in range(10)]
#=> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
range(start, stop) : generate integers from the “start” to the “stop” integer.
[i for i in range(2,10)]
#=> [2, 3, 4, 5, 6, 7, 8, 9]
range(start, stop, step) : generate integers from “start” to “stop” at intervals of “step”.
[i for i in range(2,10,2)]
#=> [2, 4, 6, 8]
Thanks Searge Boremchuq for suggesting a more pythonic way to do this!
list(range(2,10,2))
#=> [2, 4, 6, 8]
6. Define a class named car with 2 attributes, “color” and “speed”. Then create an instance and return speed.
class Car:
    def __init__(self, color, speed):
        self.color = color
        self.speed = speed

car = Car('red', '100mph')
car.speed
#=> '100mph'
7. What is the difference between instance, static and class methods in python?
Instance methods : accept self parameter and relate to a specific instance of the class.
Static methods : use @staticmethod decorator, are not related to a specific instance, and are self-contained (don’t modify class or instance attributes)
Class methods : accept cls parameter and can modify the class itself
We’re going to illustrate the difference around a fictional CoffeeShop class.
class CoffeeShop:
    specialty = 'espresso'

    def __init__(self, coffee_price):
        self.coffee_price = coffee_price

    # instance method
    def make_coffee(self):
        print(f'Making {self.specialty} for ${self.coffee_price}')

    # static method
    @staticmethod
    def check_weather():
        print('Its sunny')

    # class method
    @classmethod
    def change_specialty(cls, specialty):
        cls.specialty = specialty
        print(f'Specialty changed to {specialty}')
CoffeeShop class has an attribute, specialty , set to 'espresso' by default. Each instance of CoffeeShop is initialized with an attribute coffee_price . It also has 3 methods, an instance method, a static method and a class method.
Let’s initialize an instance of the coffee shop with a coffee_price of 5 . Then call the instance method make_coffee .
coffee_shop = CoffeeShop('5')
coffee_shop.make_coffee()
#=> Making espresso for $5
Now call the static method. Static methods can’t modify class or instance state so they’re normally used for utility functions, for example, adding 2 numbers. We used ours to check the weather. Its sunny . Great!
coffee_shop.check_weather()
#=> Its sunny
Now let’s use the class method to modify the coffee shop’s specialty and then make_coffee .
coffee_shop.change_specialty('drip coffee')
#=> Specialty changed to drip coffee

coffee_shop.make_coffee()
#=> Making drip coffee for $5
Note how make_coffee used to make espresso but now makes drip coffee !
8. What is the difference between “func” and “func()”?
The purpose of this question is to see if you understand that all functions are also objects in python.
def func():
print('Im a function')
func
#=> <function __main__.func>

func()
#=> Im a function
func is the object representing the function which can be assigned to a variable or passed to another function. func() with parentheses calls the function and returns what it outputs.
9. Explain how the map function works
map returns a map object (an iterator) which can iterate over returned values from applying a function to every element in a sequence. The map object can also be converted to a list if required.
def add_three(x):
    return x + 3

li = [1,2,3]

[i for i in map(add_three, li)]
#=> [4, 5, 6]
Above, I added 3 to every element in the list.
A reader suggested a more pythonic implementation. Thanks Chrisjan Wust !
def add_three(x):
    return x + 3

li = [1,2,3]

list(map(add_three, li))
#=> [4, 5, 6]
Also, thanks Michael Graeme Short for the corrections!
10. Explain how the reduce function works
This can be tricky to wrap your head around until you use it a few times.
reduce takes a function and a sequence and iterates over that sequence. On each iteration, both the current element and output from the previous element are passed to the function. In the end, a single value is returned.
from functools import reduce

def add_three(x,y):
    return x + y

li = [1,2,3,5]

reduce(add_three, li)
#=> 11
11 is returned which is the sum of 1+2+3+5 .
11. Explain how the filter function works
Filter literally does what the name says. It filters elements in a sequence.
Each element is passed to a function; the element is kept in the output sequence if the function returns True and discarded if it returns False .
def is_even(x):
    if x % 2 == 0:
        return True
    else:
        return False

li = [1,2,3,4,5,6,7,8]

[i for i in filter(is_even, li)]
#=> [2, 4, 6, 8]
Note how all elements not divisible by 2 have been removed.
12. Does python call by reference or call by value?
Be prepared to go down a rabbit hole of semantics if you google this question and read the top few pages.
In a nutshell, Python is “call by object reference”: names are references to objects, and assignment binds a name to an object rather than copying it.
name = 'object'
Let’s see how this works with strings. We’ll instantiate a name and object, point other names to it. Then delete the first name.
x = 'some text'
y = x
x is y
#=> True

del x # this deletes the name 'x' but does nothing to the object in memory

z = y
y is z
#=> True
What we see is that all these names point to the same object in memory, which wasn’t affected by del x .
Here’s another interesting example with a function.
name = 'text'

def add_chars(str1):
    print( id(str1) ) #=> 4353702856
    print( id(name) ) #=> 4353702856

    # new name, same object
    str2 = str1

    # creates a new name (with same name as the first) AND object
    str1 += 's'
    print( id(str1) ) #=> 4387143328

    # still the original object
    print( id(str2) ) #=> 4353702856

add_chars(name)
print(name) #=> text
Notice how adding an s to the string inside the function created a new name AND a new object. Even though the new name has the same “name” as the existing name.
Thanks Michael P. Reilly for the corrections!
13. How to reverse a list?
Note how reverse() is called on the list and mutates it. It doesn’t return the mutated list itself.
li = ['a','b','c']

print(li)
li.reverse()
print(li)
#=> ['a', 'b', 'c']
#=> ['c', 'b', 'a']
14. How does string multiplication work?
Let’s see the results of multiplying the string ‘cat’ by 3.
'cat' * 3
#=> 'catcatcat'
The string is concatenated to itself 3 times.
15. How does list multiplication work?
Let’s see the result of multiplying a list, [1,2,3] by 2.
[1,2,3] * 2
#=> [1, 2, 3, 1, 2, 3]
A list is outputted containing the contents of [1,2,3] repeated twice.
16. What does “self” refer to in a class?
Self refers to the instance of the class itself. It’s how we give methods access to and the ability to update the object they belong to.
Below, passing self to __init__() gives us the ability to set the color of an instance on initialization.
class Shirt:
    def __init__(self, color):
        self.color = color

s = Shirt('yellow')
s.color
#=> 'yellow'
17. How can you concatenate lists in python?
Adding 2 lists together concatenates them. Note that arrays do not function the same way.
a = [1,2]
b = [3,4,5]

a + b
#=> [1, 2, 3, 4, 5]
18. What is the difference between a shallow and a deep copy?
We’ll discuss this in the context of a mutable object, a list. For immutable objects, shallow vs deep isn’t as relevant.
We’ll walk through 3 scenarios.
i) Reference the original object. This points a new name, li2 , to the same place in memory to which li1 points. So any change we make to li1 also occurs to li2 .
li1 = [['a'],['b'],['c']]
li2 = li1

li1.append(['d'])
print(li2)
#=> [['a'], ['b'], ['c'], ['d']]
ii) Create a shallow copy of the original. We can do this with the list() constructor, or the more pythonic mylist.copy() (thanks Chrisjan Wust !).
A shallow copy creates a new object, but fills it with references to the original. So adding a new object to the original collection, li3 , doesn’t propagate to li4 , but modifying one of the objects in li3 will propagate to li4 .
li3 = [['a'],['b'],['c']]
li4 = list(li3)

li3.append([4])
print(li4)
#=> [['a'], ['b'], ['c']]

li3[0][0] = ['X']
print(li4)
#=> [[['X']], ['b'], ['c']]
iii) Create a deep copy. This is done with copy.deepcopy() . The 2 objects are now completely independent and changes to either have no effect on the other.
import copy

li5 = [['a'],['b'],['c']]
li6 = copy.deepcopy(li5)

li5.append([4])
li5[0][0] = ['X']
print(li6)
#=> [['a'], ['b'], ['c']]
19. What is the difference between lists and arrays?
Note: Python’s standard library has an array object but here I’m specifically referring to the commonly used Numpy array.
Lists exist in python’s standard library. Arrays are defined by Numpy.
Lists can be populated with different types of data at each index. Arrays require homogeneous elements.
Arithmetic on lists adds or removes elements from the list. Arithmetic on arrays operates element-wise, as in linear algebra.
Arrays also use less memory and come with significantly more functionality.
I wrote another comprehensive post on arrays.
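The arithmetic contrast above can be seen directly (this sketch assumes numpy is installed):

```python
import numpy as np

li = [1, 2, 3]
arr = np.array([1, 2, 3])

# On lists, + and * operate on the container itself
print(li + li)   #=> [1, 2, 3, 1, 2, 3]  (concatenation)
print(li * 2)    #=> [1, 2, 3, 1, 2, 3]  (repetition)

# On arrays, the same operators work element-wise
print(arr + arr) #=> [2 4 6]
print(arr * 2)   #=> [2 4 6]
```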
20. How to concatenate two arrays?
Remember, arrays are not lists. Arrays are from Numpy and arithmetic on them works element-wise, as in linear algebra.
We need to use Numpy’s concatenate function to do it.
import numpy as np

a = np.array([1,2,3])
b = np.array([4,5,6])

np.concatenate((a,b))
#=> array([1, 2, 3, 4, 5, 6])
21. What do you like about Python?
Note this is a very subjective question and you’ll want to modify your response based on what the role is looking for.
Python is very readable and there is a pythonic way to do just about everything, meaning a preferred way which is clear and concise.
I’d contrast this to Ruby where there are often many ways to do something without a guideline for which is preferred.
22. What is your favorite library in Python?
Also subjective, see question 21.
When working with a lot of data, nothing is quite as helpful as pandas, which makes manipulating and visualizing data a breeze.
23. Name mutable and immutable objects
Immutable means the state cannot be modified after creation. Examples are: int, float, bool, string and tuple.
Mutable means the state can be modified after creation. Examples are list, dict and set.
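One practical consequence worth mentioning in an interview: immutable objects are hashable, so they can serve as dictionary keys, while mutable ones cannot. A small sketch:

```python
d = {}
d[(1, 2)] = 'tuple keys work'      # tuple: immutable, hashable

try:
    d[[1, 2]] = 'list keys fail'   # list: mutable, unhashable
except TypeError as e:
    print(e)                       #=> unhashable type: 'list'
```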
24. How would you round a number to 3 decimal places?
Use the round(value, decimal_places) function.
a = 5.12345
round(a,3)
#=> 5.123
25. How do you slice a list?
Slicing notation takes 3 arguments, list[start:stop:step] , where step is the interval at which elements are returned.
a = [0,1,2,3,4,5,6,7,8,9]

print(a[:2])
#=> [0, 1]

print(a[8:])
#=> [8, 9]

print(a[2:8])
#=> [2, 3, 4, 5, 6, 7]

print(a[2:8:2])
#=> [2, 4, 6]
26. What is pickling?
Pickling is the go-to method for serializing and deserializing objects in Python.
In the example below, we serialize and deserialize a list of dictionaries.
import pickle

obj = [
    {'id':1, 'name':'Stuffy'},
    {'id':2, 'name': 'Fluffy'}
]

with open('file.p', 'wb') as f:
    pickle.dump(obj, f)

with open('file.p', 'rb') as f:
    loaded_obj = pickle.load(f)

print(loaded_obj)
#=> [{'id': 1, 'name': 'Stuffy'}, {'id': 2, 'name': 'Fluffy'}]
27. What is the difference between dictionaries and JSON?
A dict is a Python datatype, a collection of keys mapped to values (insertion-ordered since Python 3.7).
JSON is just a string which follows a specified format and is intended for transferring data.
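The standard json module converts between the two; a minimal sketch:

```python
import json

d = {'id': 7, 'name': 'Shiba'}

# dict -> JSON string
s = json.dumps(d)
print(s)          #=> {"id": 7, "name": "Shiba"}
print(type(s))    #=> <class 'str'>

# JSON string -> dict
d2 = json.loads(s)
print(d2 == d)    #=> True
```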
28. What ORMs have you used in Python?
ORMs (object relational mapping) map data models (usually in an app) to database tables and simplifies database transactions.
SQLAlchemy is typically used in the context of Flask, and Django has its own ORM.
29. How do any() and all() work?
Any takes a sequence and returns true if any element in the sequence is true.
All returns true only if all elements in the sequence are true.
a = [False, False, False]
b = [True, False, False]
c = [True, True, True]

print( any(a) )
print( any(b) )
print( any(c) )
#=> False
#=> True
#=> True

print( all(a) )
print( all(b) )
print( all(c) )
#=> False
#=> False
#=> True
30. Are dictionaries or lists faster for lookups?
Looking up a value in a list takes O(n) time because the whole list needs to be iterated through until the value is found.
Looking up a key in a dictionary takes O(1) time because it’s a hash table.
This can make a huge time difference if there are a lot of values so dictionaries are generally recommended for speed. But they do have other limitations like needing unique keys.
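As a rough illustration of that time difference (timings are machine-dependent; this sketch uses the standard timeit module):

```python
import timeit

n = 100_000
li = list(range(n))
d = dict.fromkeys(li)

# membership test for a value near the end of the collection
list_time = timeit.timeit(lambda: n - 1 in li, number=100)
dict_time = timeit.timeit(lambda: n - 1 in d, number=100)

# the dict lookup is orders of magnitude faster
print(f'list: {list_time:.4f}s, dict: {dict_time:.6f}s')
```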
31. What is the difference between a module and a package?
A module is a single Python file that can be imported.
import sklearn
A package is a directory of modules.
from sklearn import cross_validation
So packages are modules, but not all modules are packages.
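To make the layout concrete, this sketch builds a throwaway package at runtime — the names mypackage and metrics are made up for illustration:

```python
import os
import sys
import tempfile

# mypackage/            <- package (a directory with __init__.py)
#     __init__.py
#     metrics.py        <- module (a single file)
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'mypackage')
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'metrics.py'), 'w') as f:
    f.write('def double(x):\n    return x * 2\n')

# make the package importable, then import module from package
sys.path.insert(0, root)
from mypackage import metrics

print(metrics.double(21))
```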
32. How to increment and decrement an integer in Python?
Increments and decrements can be done with += and -= .
value = 5

value += 1
print(value)
#=> 6

value -= 1
value -= 1
print(value)
#=> 4
33. How to return the binary of an integer?
Use the bin() function.
bin(5)
#=> '0b101'
34. How to remove duplicate elements from a list?
This can be done by converting the list to a set then back to a list.
a = [1,1,1,2,3]
a = list(set(a))
print(a)
#=> [1, 2, 3]
Note that sets will not necessarily maintain the order of a list.
35. How to check if a value exists in a list?
Use in .
'a' in ['a','b','c']
#=> True

'a' in [1,2,3]
#=> False
36. What is the difference between append and extend?
append adds a value to a list while extend adds values in another list to a list.
a = [1,2,3]
b = [1,2,3]

a.append(6)
print(a)
#=> [1, 2, 3, 6]

b.extend([4,5])
print(b)
#=> [1, 2, 3, 4, 5]
37. How to take the absolute value of an integer?
This can be done with the abs() function.
abs(2)
#=> 2

abs(-2)
#=> 2
38. How to combine two lists into a list of tuples?
You can use the zip function to combine lists into a list of tuples. This isn’t restricted to only using 2 lists. It can also be done with 3 or more.
a = ['a','b','c']
b = [1,2,3]

[(k,v) for k,v in zip(a,b)]
#=> [('a', 1), ('b', 2), ('c', 3)]
39. How can you sort a dictionary by key, alphabetically?
Dictionaries preserve insertion order (since Python 3.7) but have no notion of being sorted, so instead you can return a sorted list of tuples containing the keys and values in the dictionary.
d = {'c':3, 'd':4, 'b':2, 'a':1}

sorted(d.items())
#=> [('a', 1), ('b', 2), ('c', 3), ('d', 4)]
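A common follow-up is sorting by value instead, which the key argument of sorted handles (fresh example dict below):

```python
d = {'c': 1, 'a': 3, 'b': 2}

# ascending by value
print(sorted(d.items(), key=lambda kv: kv[1]))
#=> [('c', 1), ('b', 2), ('a', 3)]

# descending by value
print(sorted(d.items(), key=lambda kv: kv[1], reverse=True))
#=> [('a', 3), ('b', 2), ('c', 1)]
```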
40. How does a class inherit from another class in Python?
In the below example, Audi , inherits from Car . And with that inheritance comes the instance methods of the parent class.
class Car():
    def drive(self):
        print('vroom')

class Audi(Car):
    pass

audi = Audi()
audi.drive()
#=> vroom
41. How can you remove all whitespace from a string?
The easiest way is to split the string on whitespace and then rejoin without spaces.
s = 'A string with white space'

''.join(s.split())
#=> 'Astringwithwhitespace'
2 readers recommended a more pythonic way to handle this following the Python ethos that Explicit is better than Implicit . It’s also faster because python doesn’t create a new list object. Thanks Евгений Крамаров and Chrisjan Wust !
s = 'A string with white space'
s.replace(' ', '')
#=> 'Astringwithwhitespace'
42. Why would you use enumerate() when iterating on a sequence?
enumerate() allows tracking index when iterating over a sequence. It’s more pythonic than defining and incrementing an integer representing the index.
li = ['a','b','c','d','e']

for idx,val in enumerate(li):
    print(idx, val)
#=> 0 a
#=> 1 b
#=> 2 c
#=> 3 d
#=> 4 e
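enumerate also accepts a start argument, which is handy for 1-based numbering:

```python
li = ['a', 'b', 'c']

for idx, val in enumerate(li, start=1):
    print(idx, val)
#=> 1 a
#=> 2 b
#=> 3 c
```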
43. What is the difference between pass, continue and break?
pass means do nothing. We typically use it because Python doesn’t allow creating a class, function or if-statement without code inside it.
In the example below, an error would be thrown without code inside the i > 3 block, so we use pass .
a = [1,2,3,4,5]

for i in a:
    if i > 3:
        pass
    print(i)
#=> 1
#=> 2
#=> 3
#=> 4
#=> 5
continue continues to the next element and halts execution for the current element. So print(i) is never reached for values where i < 3 .
for i in a:
    if i < 3:
        continue
    print(i)
#=> 3
#=> 4
#=> 5
break breaks out of the loop and the sequence is no longer iterated over. So elements from 3 onward are not printed.
for i in a:
    if i == 3:
        break
    print(i)
#=> 1
#=> 2
44. Convert the following for loop into a list comprehension.
This for loop.
a = [1,2,3,4,5]
a2 = []

for i in a:
    a2.append(i + 1)

print(a2)
#=> [2, 3, 4, 5, 6]
Becomes.
a3 = [i+1 for i in a]

print(a3)
#=> [2, 3, 4, 5, 6]
List comprehension is generally accepted as more pythonic where it’s still readable.
45. Give an example of the ternary operator.
The ternary operator is a one-line if/else statement.
The syntax looks like a if condition else b .
x = 5
y = 10

'greater' if x > 6 else 'less'
#=> 'less'

'greater' if y > 6 else 'less'
#=> 'greater'
46. Check if a string only contains numbers.
You can use isnumeric() .
'123a'.isnumeric()
#=> False

'123'.isnumeric()
#=> True
47. Check if a string only contains letters.
You can use isalpha() .
'123a'.isalpha()
#=> False

'a'.isalpha()
#=> True
48. Check if a string only contains numbers and letters.
You can use isalnum() .
'123abc...'.isalnum()
#=> False

'123abc'.isalnum()
#=> True
49. Return a list of keys from a dictionary.
This can be done by passing the dictionary to python’s list() constructor.
d = {'id':7, 'name':'Shiba', 'color':'brown', 'speed':'very slow'}

list(d)
#=> ['id', 'name', 'color', 'speed']
50. How do you upper and lowercase a string?
You can use the upper() and lower() string methods.
small_word = 'potatocake'
big_word = 'FISHCAKE'

small_word.upper()
#=> 'POTATOCAKE'

big_word.lower()
#=> 'fishcake'
51. What is the difference between remove, del and pop?
remove() removes the first matching value.
li = ['a','b','c','d']

li.remove('b')
li
#=> ['a', 'c', 'd']
del removes an element by index.
li = ['a','b','c','d']

del li[0]
li
#=> ['b', 'c', 'd']
pop() removes an element by index and returns that element.
li = ['a','b','c','d']

li.pop(2)
#=> 'c'

li
#=> ['a', 'b', 'd']
52. Give an example of dictionary comprehension.
Below we’ll create a dictionary with letters of the alphabet as keys, and each letter’s index in the alphabet as its value.
# creating a list of letters
import string
alphabet = list(string.ascii_lowercase)

# dictionary comprehension
d = {val:idx for idx,val in enumerate(alphabet)}

d
#=> {'a': 0,
#=> 'b': 1,
#=> 'c': 2,
#=> ...
#=> 'x': 23,
#=> 'y': 24,
#=> 'z': 25}
53. How is exception handling performed in Python?
Python provides 3 keywords to handle exceptions, try , except and finally .
The syntax looks like this.
try:
    # try to do this
except:
    # if try block fails then do this
finally:
    # always do this
In the simplistic example below, the try block fails because we cannot add integers with strings. The except block sets val = 10 and then the finally block prints complete .
a1 sortedditems 1 b 2 c 3 4 40 class inherit another class Python example Audi inherits Car inheritance come instance method parent class class Car def driveself printvroom class AudiCar pas audi Audi audidrive 41 remove whitespace string easiest way split string whitespace rejoin without space string white space joinssplit Astringwithwhitespace 2 reader recommended pythonic way handle following Python ethos Explicit better Implicit It’s also faster python doesn’t create new list object Thanks Евгений Крамаров Chrisjan Wust string white space sreplace Astringwithwhitespace 42 would use enumerate iterating sequence enumerate allows tracking index iterating sequence It’s pythonic defining incrementing integer representing index li abcde idxval enumerateli printidx val 0 1 b 2 c 3 4 e 43 difference pas continue break pas mean nothing typically use Python doesn’t allow creating class function ifstatement without code inside example error would thrown without code inside 3 use pas 12345 3 pas printi 1 2 3 4 5 continue continues next element halt execution current element printi never reached value 3 3 continue printi 3 4 5 break break loop sequence longer iterated element 3 onward printed 3 break printi 1 2 44 Convert following loop list comprehension loop 12345 a2 a2appendi 1 printa2 2 3 4 5 6 Becomes a3 i1 printa3 2 3 4 5 6 List comprehension generally accepted pythonic it’s still readable 45 Give example ternary operator ternary operator oneline ifelse statement syntax look like condition else b x 5 10 greater x 6 else le le greater 6 else le greater 46 Check string contains number use isnumeric 123aisnumeric False 123isnumeric True 47 Check string contains letter use isalpha 123aisalpha False aisalpha True 48 Check string contains number letter use isalnum 123abcisalnum False 123abcisalnum True 49 Return list key dictionary done passing dictionary python’s list constructor list id7 nameShiba colorbrown speedvery slow listd id name color speed 50 upper lowercase 
string use upper lower string method smallword potatocake bigword FISHCAKE smallwordupper POTATOCAKE bigwordlower fishcake 51 difference remove del pop remove remove first matching value li abcd liremoveb li c del remove element index li abcd del li0 li b c pop remove element index return element li abcd lipop2 c li b 52 Give example dictionary comprehension we’ll create dictionary letter alphabet key index alphabet value creating list letter import string liststringasciilowercase alphabet liststringasciilowercase list comprehension validx idxval enumeratealphabet 0 b 1 c 2 x 23 24 z 25 53 exception handling performed Python Python provides 3 word handle exception try except finally syntax look like try try except try block fails finally always simplistic example try block fails cannot add integer string except block set val 10 finally block print complete Tags Python Coding Software Engineering Data Science Programming |
1,115 | Can we assume that storytelling is a piece of art? | Can we assume that storytelling is a piece of art?
Photo by OVAN from Pexels
As long as I remember myself, I have been creating art with my hands, including jewellery, sculpture, and mosaic.
It was my godmother's idea and further encouragement in 2008… So, I started my journey to art by sculpting with plastilina, and six months later I blended art with holidays by attending two courses, mosaic on Mykonos island and marble sculpture on Paros island, Greece.
From then on, my art journey was contagious, and I wanted more and more… so I attended additional jewellery classes in silversmithing and a dozen others, which were never enough for me… and I hungrily added one more course to my to-do list.
But what is Art for me?
A
Ritual
Treatment of my soul…
Art is my way to give colours to my feelings, create a shape to my chaos, and give a picture to my inner self … At the same time, art gives me the opportunity to express my emotions in red or blue, unfold my deepest fears in yellow and share my story with others in rose.
So, I was wondering: are my stories and this enthusiasm to write them a new type of art for me? Yes, they are, indeed.
It’s been a lot of years since I watched the movie “Eat Pray Love.” Since then, I had been dreaming of writing a book about my life, including the parts I wanted to change and could not accept as problematic, and, in the end, my unique way to improve my life and keep on walking. Every time I thought of writing down my story, I always said, 'this is not possible, you are not a writer, you haven’t studied literature… Natalia, you think too highly of yourself.’
Until one July Sunday night, after a simple conversation with my boyfriend, sitting on a bench by the Thames and drinking homemade Pimm’s in coffee mugs, when I realised how much I wanted to write stories about my life and connect with others who have similar beliefs or perspectives about our interactive trip to Cosmos…
Honestly, I cannot write about formulas on how to live longer, make your skin fresher or change your job… The only thing I can tell you is how important it is to do things that give joy to my life… and how mistakes are part of this, and how strong denial is on this trip… And then, out of the blue, how my smile arrives back and I move on, make my change, and all obstacles belong to the past.
Furthermore, I can agree with you: my life might be irrelevant to you, and my approach can work for you fully, slightly or not at all.
Nevertheless, we all have to accept the body's shaking tremors when we read a story that is familiar to our subconscious inner self… We tend to say ‘we do not know what to do next’ or ‘how to do this,’ but deep inside we all have the answer.
I believe that conditions in our lives do not miraculously change if we do not act in advance… I strongly believe that once we start changing our destiny, the universe does its best to make things fit us. Also, the universe’s best move might not match our expectations.
Photo by Magda Ehlers from Pexels
When I create art by writing my story and sharing it in the community, I have first experienced those feelings or moments by the time I talk to you about. So, I basically draw the honest version of my “present” by realising my wise reactions, wrong acts and moreover my stupidity. This is not meant to deteriorate my audience or to compare myself to others. It is just an attempt to interact with myself and you, and in general to find fellow travellers on this ride. | https://natkokla.medium.com/can-we-assume-that-storytelling-is-a-piece-of-art-ad2e7b1f9858 | ['Natalia Kokla'] | 2018-09-05 19:53:59.433000+00:00 | ['Loving Myself', 'Art', 'Life', 'Psychology', 'Writing'] |
1,116 | Engineering Management like a Pro — 101😮 | In this era of technology everything from ordering foods to transferring money, everything is available in just single tap, of course thanks to tech.
But building a technology solution is not a piece of cake; it requires tons of effort to bring that food ordering or banking app to your smartphone.
Now, straight to the point: this article talks about good practices and hacks that can help you build a great software product with maximum productivity and efficiency, in less time and at lower cost.
Let’s start from step zero of product development:-
It is very important for any product team to understand the idea behind the product, its impact and its goal. Analyse all the consumer needs and discuss the feasibility and cost of development with the engineering and development team; this will make your engineering team aware of what they have to develop, the user segment and other constraints.
Now, focus on the engineering and development segment:-
1. Higher level application architecture:- Decide a highly scalable and easy-to-maintain application architecture for the whole application, which includes Backend, Frontend, DevOps and other segments if any.
2. Technology selection:- Decide the technology and tools we are going to use to develop our application, and make sure that the technology we select satisfies the needs and goals of our application.
3. Analyse resources:- Check whether your human resources are sufficiently skilled in your selected technology stack; if not, first get your workforce skilled in those technologies, so that each team member can tell good practices from bad ones. Don't start development just by relying on Stack Overflow🚀
4. Segment level architecture:- Now define the micro-level architecture of every segment of your application, i.e. Frontend, Backend, DevOps etc. Make sure to discuss it with the engineering team and give all your members the opportunity to weigh the pros and cons of the chosen architecture.
5. Application performance from day 1:- Nobody wants to use a slow or buggy application; that's why performance considerations really matter, and each developer should implement the best-optimised feature (wherever possible), because rework can be more costly.
6. Define a code standard:- Define a code standard and design patterns and ask every developer to follow them; doing this can reduce development time and maintenance effort to a large extent.
7. Decide success metrics:- Analysing the results is most important, so define a minimum threshold for every feature of your application; it will help you deliver more polished features and fewer bugs.
8. Set up a proper test environment:- Analyse all the edge cases and provide proper testing environments so that developers can test each and every feature while developing; this will help in decreasing the number of iterations.
9. Test everything from min scale to max scale:- Let's understand this with the help of an example: suppose we have a table in our application; test it with zero records and then with thousands of records. If the table lags, you need to change the strategy. This will not only help performance and stability, but also improve the UX of the application.
10. Decide and test the deployment architecture:- Deployment is a really crucial part of any software product; don't try to cut corners here, because if your deployment infrastructure is not good, even a well-developed application cannot give good performance and may leave your users frustrated.
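The min-to-max-scale advice above can be turned into a plain test harness. A minimal sketch, assuming a hypothetical `render_table` feature and an arbitrary 0.5-second threshold (neither comes from the article):

```python
import time

# Hypothetical feature under test: renders a list of records as rows of text.
def render_table(records):
    header = "id | name"
    lines = [f"{r['id']} | {r['name']}" for r in records]
    return "\n".join([header] + lines)

def check_at_scale(n_records, max_seconds=0.5):
    """Exercise the feature at a given scale and enforce a success metric."""
    records = [{"id": i, "name": f"user-{i}"} for i in range(n_records)]
    start = time.perf_counter()
    output = render_table(records)
    elapsed = time.perf_counter() - start
    assert output.count("\n") == n_records, "row count mismatch"
    assert elapsed < max_seconds, f"too slow at n={n_records}: {elapsed:.3f}s"
    return elapsed

# Run the same feature at minimum and maximum scale.
for n in (0, 1, 10_000):
    check_at_scale(n)
```

The same pattern generalises to database queries or API endpoints: pick a scale ladder, a latency threshold, and fail the build when the feature regresses.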
That is all about Engineering Management for this article; in my upcoming articles I will talk about each of the above points in depth. | https://medium.com/codingurukul/engineering-management-like-a-pro-101-3e89706a16da | ['Suraj Kumar'] | 2020-04-12 04:01:12.012000+00:00 | ['User Experience', 'Product Development', 'Engineering Mangement', 'Startup', 'Engineering'] |
1,117 | Is Artificial Intelligence Possible | “ Artificial Intelligence has been brain-dead since the 1970s.” This rather ostentatious remark made by Marvin Minsky co-founder of the world-famous MIT Artificial Intelligence Laboratory was referring to the fact that researchers have been primarily concerned on small facets of machine intelligence as opposed to looking at the problem as a whole. This article examines the contemporary issues of artificial intelligence (AI) looking at the current status of the AI field together with potent arguments provided by leading experts to illustrate whether AI is an impossible concept to obtain.
Because of the scope and ambition, artificial intelligence defies simple definition. Initially, AI was defined as “the science of making machines do things that would require intelligence if done by men”. This somewhat meaningless definition shows how AI is still a young discipline and similar early definitions have been shaped by technological and theoretical progress made in the subject. So for the time being, a good general definition that illustrates the future challenges in the AI field was made by the American Association for Artificial Intelligence (AAAI) clarifying that AI is the “scientific understanding of the mechanisms underlying thought and intelligent behaviour and their embodiment in machines”.
The term “artificial intelligence” was first coined by John McCarthy at a conference at Dartmouth College, New Hampshire, in 1956, but the concept of machine intelligence is in fact much older. In ancient Greek mythology the smith-god, Hephaestus, is credited with making Talos, a “bull-headed” bronze man who guarded Crete for King Minos by patrolling the island and frightening off intruders. Similarly, in the 13th century, mechanical talking heads were said to have been created to scare intruders, with Albert the Great and Roger Bacon reputedly among the owners. However, it is only in the last 50 years that AI has really begun to pervade popular culture. Our fascination with “thinking machines” is obvious, but has been wrongfully distorted by the science-fiction connotations seen in literature, film and television.
In reality, the AI field is far from creating the sentient beings seen in the media, yet this does not imply that successful progress has not been made. AI has been a rich branch of research for 50 years and many famed theorists have contributed to the field, but one computer pioneer who shared his thoughts at the beginning, and whose assessment and arguments remain timely, is the British mathematician Alan Turing. In 1950 Turing published a paper called Computing Machinery and Intelligence in which he proposed an empirical test that identifies intelligent behaviour “when there is no discernible difference between the conversation generated by the machine and that of an intelligent person.” The Turing test measures the performance of an allegedly intelligent machine against that of a human being and is arguably one of the best evaluation experiments at the present time. The Turing test, also referred to as the “imitation game”, is carried out by having a knowledgeable human interrogator engage in a natural-language conversation with two other participants, one a human and the other the “intelligent” machine, communicating entirely through textual messages. If the judge cannot reliably identify which is which, the machine is said to have passed and is therefore intelligent. Although the test has a number of justifiable criticisms, such as not being able to test perceptual skills or manual dexterity, it is a great accomplishment that a machine can converse like a human and can cause a human to subjectively evaluate it as humanly intelligent by conversation alone.
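The imitation game's bookkeeping, hiding the two participants behind labels and checking whether the judge beats chance, can be sketched in a few lines. The toy players, the single question and the 10-point pass band below are illustrative assumptions, not part of Turing's proposal:

```python
import random

# Toy participants: each maps the judge's question to a textual reply.
def human_player(question):
    return "I think " + question.lower().rstrip("?") + " is hard to say."

def machine_player(question):
    # This machine simply mimics the human's style of answer.
    return "I think " + question.lower().rstrip("?") + " is hard to say."

def imitation_game(judge, questions, trials=100, seed=0):
    """Score a judge over repeated rounds: the machine 'passes' if the judge
    cannot identify it more reliably than chance (accuracy near 50%)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        players = [("human", human_player), ("machine", machine_player)]
        rng.shuffle(players)               # hide who is behind label A and label B
        transcripts = {label: [play(q) for q in questions]
                       for label, (_, play) in zip("AB", players)}
        guess = judge(transcripts)         # the judge names the label it thinks is the machine
        actual = "A" if players[0][0] == "machine" else "B"
        correct += (guess == actual)
    accuracy = correct / trials
    return accuracy, abs(accuracy - 0.5) < 0.1

# With identical transcripts there is no signal, so any judge hovers near 50%.
accuracy, passed = imitation_game(lambda transcripts: "A",
                                  ["Can machines think?"], trials=1000)
print(f"judge accuracy: {accuracy:.2f}, machine passes: {passed}")
```

A real administration of the test replaces the toy players with free conversation and a knowledgeable human judge; only the blind labelling and chance-level scoring carry over from this sketch.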
Many theorists have disputed the Turing Test as an acceptable means of proving artificial intelligence. One argument, posed by Professor Geoffrey Jefferson, states: “not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain”. Turing replied by saying “that we have no way of knowing that any individual other than ourselves experiences emotions and that therefore we should accept the test.” However, Jefferson did have a valid point to make about developing an artificial consciousness. Intelligent machines already exist that are autonomous; they can learn, communicate and teach each other, but creating an artificial intuition, a consciousness, “is the holy grail of artificial intelligence.” When modelling AI on the human mind, many illogical paradoxes surface and you begin to see how the complexity of the brain has been underestimated and why simulating it has not been as straightforward as experts believed in the 1950s. The problem with human beings is that they are not algorithmic creatures; they prefer to use heuristic shortcuts and analogies to well-known situations. However, this is a psychological implication: “it is not that people are smarter than explicit algorithms, but that they are sloppy and yet do well in most cases.”
The phenomenon of consciousness has caught the attention of many Philosophers and Scientists throughout history and innumerable papers and books have been published devoted to the subject. However, no other biological singularity has remained so resistant to scientific evidence and “persistently ensnarled in fundamental philosophical and semantic tangles.” Under ordinary circumstances, we have little difficulty in determining when other people lose or regain consciousness and as long as we avoid describing it, the phenomenon remains intuitively clear. Most Computer Scientists believe that consciousness was an evolutionary “add-on” and can, therefore, be algorithmically modelled. Yet many recent claims oppose this theory. Sir Roger Penrose, an English mathematical physicist, argues that the rational processes of the human mind are not completely algorithmic and thus transcend computation, while Professor Stuart Hameroff proposes that consciousness emerges as a macroscopic quantum state from a critical level of coherence of quantum-level events in and around cytoskeletal microtubules within neurons. Although these are all theories with little or no empirical evidence, it is still important to consider each of them because it is vital that we understand the human mind before we can duplicate it.
Another key problem with duplicating the human mind is how to incorporate the various transitional states of consciousness such as REM sleep, hypnosis, drug influence and some psychopathological states within a new paradigm. If these states are removed from the design due to their complexity or irrelevancy in a computer then it should be pointed out that perhaps consciousness cannot be artificially imitated because these altered states have a biophysical significance for the functionality of the mind.
If consciousness is not algorithmic, then how is it created? Obviously, we do not know. Scientists who are interested in subjective awareness study the objective facts of neurology and behaviour and have shed new light on how our nervous system processes and discriminates among stimuli. But although such sensory mechanisms are necessary for consciousness, they do not help to unlock the secrets of the cognitive mind, as we can perceive things and respond to them without being aware of them. A prime example of this is sleepwalking. When sleepwalking occurs (sleepwalking affects approximately 25 per cent of all children and 7 per cent of adults) many of the victims carry out dangerous or stupid tasks, yet some individuals carry out complicated, distinctively human-like tasks, such as driving a car. One may dispute whether sleepwalkers are really unconscious or not, but if it is, in fact, true that the individuals have no awareness or recollection of what happened during their sleepwalking episode, then perhaps here is the key to the cognitive mind. Sleepwalking suggests at least two general behavioural deficiencies associated with the absence of consciousness in humans. The first is a deficiency in social skills. Sleepwalkers typically ignore the people they encounter, and the “rare interactions that occur are perfunctory and clumsy, or even violent.” The other major deficit in sleepwalking behaviour is linguistic. Most sleepwalkers respond to verbal stimuli with only grunts or monosyllables or make no response at all. These two apparent deficiencies may be significant. Sleepwalkers' use of protolanguage (short, grammar-free utterances with referential meaning but no syntax) may illustrate that consciousness is a social adaptation and that other animals do not lack understanding or sensation, but that they lack language skills and therefore cannot reflect on their sensations and become self-aware.
In principle, Francis Crick, co-discoverer of the double-helix DNA structure, believed this hypothesis. After he and James Watson solved the mechanism of inheritance, Crick moved to neuroscience and spent the rest of his life trying to answer the biggest biological question: what is consciousness? Working closely with Christof Koch, he published his final paper in the Philosophical Transactions of the Royal Society of London, in which he proposed that an obscure part of the brain, the claustrum, acts like the conductor of an orchestra, “binding” vision, olfaction and somatic sensation together with the amygdala and other neuronal processing for the unification of thought and emotion. And the fact that all mammals have a claustrum means that it is possible that other animals have high intelligence.
So how different are the minds of animals in comparison to our own? Can their minds be algorithmically simulated? Many Scientists are reluctant to discuss animal intelligence as it is not an observable property and nothing can be perceived without reason, and therefore there is not much published research on the matter. But, by avoiding the comparison of some human mental states to other animals, we are impeding the use of a comparative method that may unravel the secrets of the cognitive mind. However, primates and cetaceans have been considered by some to be extremely intelligent creatures, second only to humans. Their exalted status in the animal kingdom has led to their involvement in almost all of the published experiments related to animal intelligence. These experiments, coupled with analysis of primate and cetacean brain structure, have led to many theories as to the development of higher intelligence as a trait. Although these theories seem to be plausible, there is some controversy over the degree to which non-human studies can be used to infer the structure of human intelligence.
By many of the physical methods of comparing intelligence, such as measuring the brain size to body size ratio, cetaceans surpass non-human primates and even rival human beings. For example, “dolphins have a cerebral cortex which is about 40% larger than that of a human being. Their cortex is also stratified in much the same way as humans. The frontal lobe of dolphins is also developed to a level comparable to humans. In addition, the parietal lobe of dolphins which “makes sense of the senses” is larger than the human parietal and frontal lobes combined. The similarities do not end there; most cetaceans have large and well-developed temporal lobes which contain sections equivalent to Broca’s and Wernicke’s areas in humans.”
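The brain-to-body comparison above is often formalised as Jerison's encephalization quotient, EQ = brain mass / (0.12 × body mass^(2/3)), with masses in grams. A minimal sketch; the species figures below are rough textbook values and do not come from this article:

```python
# Jerison's encephalization quotient: observed brain mass over the brain mass
# "expected" for a typical mammal of that body size (masses in grams).
def encephalization_quotient(brain_g, body_g):
    expected = 0.12 * body_g ** (2.0 / 3.0)
    return brain_g / expected

# Approximate, illustrative masses only.
species = {
    "human":              (1350, 65_000),
    "bottlenose dolphin": (1600, 150_000),
    "chimpanzee":         (400, 50_000),
}

for name, (brain_g, body_g) in species.items():
    print(f"{name:18s} EQ = {encephalization_quotient(brain_g, body_g):.1f}")
```

On these figures the ordering matches the paragraph's claim: the dolphin's quotient sits well above the chimpanzee's and second only to the human's.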
Dolphins exhibit complex behaviours: they have a social hierarchy; they demonstrate the ability to learn complex tricks; when scavenging for food on the sea floor, some dolphins have been seen tearing off pieces of sponge and wrapping them around their “bottlenose” to prevent abrasions, illustrating yet another complex cognitive process thought to be limited to the great apes; they apparently communicate by emitting two very distinct kinds of acoustic signals, which we call whistles and clicks; and lastly, dolphins do not use sex purely for procreative purposes. Some dolphins have been recorded having homosexual sex, which demonstrates that they must have some consciousness. Dolphins have a different brain structure from humans that could perhaps be algorithmically simulated. One example of their dissimilar brain structure and intelligence is their sleep technique. While most mammals and birds show signs of rapid eye movement (REM) sleep, reptiles and cold-blooded animals do not. REM sleep stimulates the brain regions used in learning and is often associated with dreaming. The fact that cold-blooded animals do not have REM sleep could be evidence to suggest that they are not conscious and that their brains can therefore be emulated. Furthermore, warm-blooded creatures display signs of REM sleep, and thus dream, and therefore must have some environmental awareness. However, dolphins sleep unihemispherically: they are “conscious” breathers, and if they fell fully asleep they could drown. Evolution has solved this problem by letting one half of the brain sleep at a time. As dolphins utilise this technique, they lack REM sleep, so perhaps a high intelligence, and even a consciousness, is possible that does not incorporate the transitional states mentioned earlier.
The evidence for animal consciousness is indirect. But so is the evidence for the big bang, neutrinos, or human evolution. As with any such unusual assertion, it must be subjected to rigorous scientific procedure before it can be accepted as even a vague possibility: intriguing, but more proof is required. However, merely because we do not understand something does not mean that it is false. Studying other animal minds is a useful comparative method and could even lead to the creation of artificial intelligence (one that omits transitional states irrelevant to an artificial entity), based on a model not as complex as our own. Still, the central point being illustrated is how limited our understanding of the human brain, or any other brain, is, and how one day a concrete theory can change thanks to enlightening findings.
Furthermore, an analogous incident that exemplifies this argument happened in 1848, when an American railroad foreman, Phineas Gage, shed new light on the field of neuroscience after a rock-blasting accident sent an iron rod through the frontal region of his brain. Miraculously enough, he survived the incident, but even more astonishing to the scientific community at the time were the marked changes in Gage’s personality after the rod punctured his brain. Where before Gage was characterized by his mild-mannered nature, he had now become aggressive and rude, “indulging in the grossest profanity, which was not previously his custom, manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires”, according to the physician Harlow in 1868. However, Gage sustained no impairment with regard to his intelligence or memory.
The serendipity of the Phineas Gage incident demonstrates how architecturally robust the structure of the brain is, and by comparison how rigid a computer is. Almost any mechanical system or algorithm would stop functioning correctly, or stop functioning at all, if an iron rod punctured it; the exception is artificial neural systems, with their distributed parallel structure. In the last decade, AI has begun to resurge thanks to the promising approach of artificial neural systems.
Artificial neural systems, or simply neural networks, are modelled on the logical associations made by the human brain. They are based on mathematical models that accumulate data, or “knowledge,” based on parameters set by administrators. Once the network is “trained” to recognize these parameters, it can make an evaluation, reach a conclusion and take action. In the 1980s, neural networks became widely used thanks to the backpropagation algorithm, first described by Paul John Werbos in 1974. The 1990s marked major achievements in many areas of AI and demonstrations of various applications. Most notably, in 1997 IBM’s Deep Blue supercomputer defeated the world chess champion, Garry Kasparov. After the match, Kasparov was quoted as saying the computer played “like a god.”
That chess match and all its implications raised profound questions about machine intelligence. Many saw it as evidence that true artificial intelligence had finally been achieved. After all, “a man was beaten by a computer in a game of wits.” But it is one thing to program a computer to solve the kind of complex mathematical problems found in chess. It is quite another for a computer to make logical deductions and decisions on its own.
Using neural networks to emulate brain function provides many positive properties, including parallel functioning, relatively quick realisation of complicated tasks, distributed information storage, graceful degradation when the network is damaged (recall Phineas Gage), as well as learning abilities, i.e. adaptation upon changes in the environment and improvement based on experience. These beneficial properties have inspired many scientists to propose neural networks as a solution for most problems: with a sufficiently large network and adequate training, the networks could accomplish many arbitrary tasks without a detailed mathematical algorithm for the problem being known. Currently, the remarkable ability of neural networks is best demonstrated by Honda’s Asimo humanoid robot, which can not just walk and dance but even ride a bicycle. Asimo, an acronym for Advanced Step in Innovative Mobility, has 16 flexible joints, requiring a four-processor computer to control its movement and balance. Its exceptional human-like mobility is only possible because the neural networks that are connected to the robot’s motion and positional sensors and control its ‘muscle’ actuators are capable of being ‘taught’ to do a particular activity.
The significance of this sort of robot motion control is the virtual impossibility of a programmer being able to actually create a set of detailed instructions for walking or riding a bicycle, instructions which could then be built into a control program. The learning ability of the neural network overcomes the need to precisely define these instructions. However, despite the impressive performance of the neural networks, Asimo still cannot think for itself and its behaviour is still firmly anchored on the lower-end of the intelligent spectrum, such as reaction and regulation.
Neural networks are slowly finding their way into the commercial world. Recently, Siemens launched a new fire detector that uses a number of different sensors and a neural network to determine whether the combination of sensor readings comes from a fire or is just part of the normal room environment, such as dust. Over fifty per cent of fire call-outs are false, and of these well over half are due to fire detectors being triggered by everyday activities as opposed to actual fires, so this is clearly a beneficial use of the paradigm.
But are there limitations to the capabilities of neural networks, or will they be the solution to creating strong AI? Artificial neural networks are biologically inspired, but that does not mean that they are necessarily biologically plausible. Many scientists have published their thoughts on the intrinsic limitations of neural networks; one book that received high exposure within the computer science community in 1969 was Perceptrons by Minsky and Papert. Perceptrons brought clarity to the limitations of neural networks: although many scientists were aware of the limited ability of a simple perceptron to classify patterns, Minsky and Papert’s approach of asking “what are neural networks good for?” illustrated what was impeding further development. For its time, Perceptrons was exceptionally constructive, and its content gave the impetus for later research that conquered some of the computational problems restricting the model. An example is the exclusive-or problem. The exclusive-or problem contains four patterns of two inputs each; a pattern is a positive member of the set if either one of the input bits is on, but not both. Thus, changing the input pattern by one bit changes the classification of the pattern. This is the simplest example of a linearly inseparable problem. A perceptron using linear threshold functions requires a layer of internal units to solve this problem, and since the connections between the input and internal units could not be trained, a perceptron could not learn this classification. Eventually, this restriction was overcome by incorporating extra “hidden” layers.
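The exclusive-or limitation can be made concrete. A single linear threshold unit cannot compute XOR, but adding one hidden layer of threshold units solves it. The hand-picked weights below are an illustrative sketch, not taken from Minsky and Papert:

```python
def step(z):
    """Linear threshold unit: fires if and only if its weighted input exceeds 0."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    """A two-layer network of threshold units computing XOR.

    No single threshold unit can do this, because XOR is linearly
    inseparable; the hidden layer makes it possible.
    """
    h_or = step(x1 + x2 - 0.5)        # hidden unit: fires if either input is on
    h_and = step(x1 + x2 - 1.5)       # hidden unit: fires only if both are on
    return step(h_or - h_and - 0.5)   # output: "or, but not and"

# Truth table: (0,0) -> 0, (0,1) -> 1, (1,0) -> 1, (1,1) -> 0
table = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

Because the connections in the original perceptron model could not be trained through a hidden layer, these weights had to wait for backpropagation to be learnable rather than hand-set.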
Although advances in neural network research have solved many of the limitations identified by Minsky and Papert, numerous ones still remain: networks using linear threshold units still violate the limited-order constraint when faced with linearly inseparable problems. Additionally, the scaling of weights as the size of the problem space increases remains an issue.
It is clear that the dismissive views about neural networks disseminated by Minsky, Papert and many other computer scientists have some evidential support, but still, many researchers have ignored their claims and refused to abandon this biologically inspired system.
There have been several recent advances in artificial neural networks, made by integrating other specialised theories into the multi-layered structure in an attempt to improve the methodology and move one step closer to creating strong AI. One promising area is the integration of fuzzy logic, invented by Professor Lotfi Zadeh. Other admirable algorithmic ideas include quantum-inspired neural networks (QUINNs) and the “network cavitations” proposed by S. L. Thaler.
The history of artificial intelligence is replete with theories and failed attempts. It is inevitable that the discipline will progress with technological and scientific discoveries, but will it ever reach the final hurdle? | https://medium.com/quick-code/is-artificial-intelligence-possible-4ec99a140294 | [] | 2019-04-23 18:49:49.766000+00:00 | ['Artificial Intelligence', 'AI', 'Machine', 'Programmer', 'Technology'] |
1,118 | Word Analysis using Word Cloud in Python | Word Cloud
In this article we will try to understand the usage of a very handy and useful tool known as a word cloud. We will try to implement the word cloud using Python code. This will be a very short article.
A word cloud can be used in the analysis of the words present in a corpus. Suppose you have 2,000–3,000 words and you want to analyse which are the most common or most repeated words in the document. In the scenario just described, a word cloud is a very handy tool to use. People generally use it for a quick understanding of a document, to grasp what the document is about. Likewise, suppose you have 2,000–3,000 tweets and you quickly want to understand the nature of the tweets, whether positive or negative; in this scenario a word cloud is again a very handy tool. Below are the steps we will follow to implement a word cloud from scratch, right from the installation.
Step 1: First we will install the word cloud package by executing the below pip command from the terminal | https://medium.com/analytics-vidhya/word-analysis-using-word-cloud-in-python-b273ded249dc | ['Akash Deep'] | 2020-08-31 04:58:39.642000+00:00 | ['NLP', 'Word Cloud', 'Matplotlib', 'Python', 'Sentiment Analysis'] |
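The pip command itself was cut off in the source; the package is conventionally installed with `pip install wordcloud`. Before any visualisation, the raw material of a word cloud is just a word-frequency table, which can be built with the standard library alone. The function name and sample sentence below are illustrative:

```python
from collections import Counter
import re

def word_frequencies(text):
    """Count word occurrences: the raw data a word cloud visualises."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

sample = "data science is fun and data is everywhere"
freqs = word_frequencies(sample)  # e.g. freqs['data'] == 2

# With the third-party package installed (pip install wordcloud), such a
# table can feed the cloud directly, e.g.:
#   from wordcloud import WordCloud
#   cloud = WordCloud().generate_from_frequencies(freqs)
```

`generate_from_frequencies` accepts a mapping of word to count; in practice one would also strip stop words such as "is" and "and" before plotting, so the cloud highlights meaningful terms.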
1,119 | Setting Up Your Home Office as a New Freelancer | Assuming you do have the option to set up your own home office in a separate room, this is what you’ll need to bear in mind:
Furniture and set-up:
You’re going to spend a lot of time in your home office, so it pays to make it as pleasant and comfortable as possible. You will need a large enough desk that is the right height for you to work in comfort and a proper office chair with arm and back support that allows you to sit comfortably. Don’t skimp on the chair — your body will thank you!
I also highly recommend a footstool under the desk, so you can put your feet up. I have found this to be very comfortable for long stints at the PC, and it has definitely become indispensable for me.
Finally, don’t forget to get some storage options for your home office. The last thing you’ll want is paperwork piling up on your desk.
Technical equipment:
No home office is complete without a few technical essentials. Let’s take a look at what these are.
Lighting
First of all, it is essential to have proper lighting in your home office. Ideally, you should have lighting installed over your reading area, over the computer and behind you so that there’s no reflection off the computer monitor.
Computer
It goes without saying that you will need a PC or Mac. As a freelance writer or creative, you don’t usually need an enormous hard drive, but your computer should have plenty of RAM so you can research on the internet and run various applications concurrently. If your budget allows, I would also purchase a laptop in addition to a desktop PC. Technology has a tendency to fail just when you most need it (e.g. just as you’re about to finish that important 10,000-word project for a new client), so it is definitely worth having a second machine to work on in an emergency, or if you ever feel like working on the go.
Monitor
As freelancers, we typically stare at the computer screen all day long, so I’d recommend investing in a separate large monitor (at least 17'’). This will make working a lot easier and more comfortable.
Internet connection
A broadband or ADSL connection has become commonplace for most of us, and as a freelancer, you will definitely need one in order to upload and download large files and surf at faster speeds so as to not slow you down while researching a term or delivering a file.
Printer
A printer is another indispensable item for your home office. Clients will likely send you NDAs to sign at some point, you may need to print out vendor agreements or contracts, or you may want to print out a text and proofread the hard copy rather than checking it on screen.
External hard disk drive
You may have already experienced a computer crashing without you having backed up your data in a while. It has certainly happened to me before! It is therefore imperative to purchase an external hard disk drive as soon as possible and run daily backups.
Surge protector
A surge protector basically works as a shield, as it blocks excessive voltage spikes and surges. This will protect your electronics from damage, which is obviously a good idea.
Smartphone
A smartphone is important to have if you want the flexibility of being able to step out of your office during office hours and still being available in the sense that you can respond to email enquiries promptly without making your clients wait until you return. The client will feel reassured even if you just briefly acknowledge receipt of their email, e.g. “I’ll be back in the office in two hours and will respond to your email then.” Having a smartphone means there is no pressure to rush back to the office to check if you’ve missed any emails, as you can stay on top of things even while you are out and about. You can even open documents and see what files clients have sent you. | https://medium.com/the-lucky-freelancer/setting-up-your-home-office-as-a-new-freelancer-fcd22b3561b2 | ['Kahli Bree Adams'] | 2020-07-15 22:22:53.669000+00:00 | ['Entrepreneurship', 'Business', 'Small Business', 'Productivity', 'Freelancing'] |
1,120 | Quantum Computing And The Meaning Of Life—Not Just ‘42’ | But what exactly is quantum computing?
To understand why it’s so incredible, one must look at the difference between a quantum computer and a regular computer. A regular computer works by switching millions of tiny transistors between 1 and 0, or “on” and “off”.
The computer can only tell each transistor to either let an electric current pass or not. There’s no other way and no in-between. So a computer has to switch through the different combinations, one by one.
For example, the state is first 1000101, then 0101101, and then 1100100. These three random numbers already represent 3 different setups and have to occur in order; the computer cannot produce all 3 of them simultaneously. And though coming up with these 3 will only take the computer a few nanoseconds, having to go through billions of combinations with many more numbers (transistors) involved can quickly become a time-consuming effort.
A quantum computer makes use of a physical phenomenon that takes place in the still quite mysterious quantum world. A so-called “qubit”, which replaces the traditional transistor and consists of a molecule that’s deliberately spun at incredible speeds by shooting it with lasers at pinpoint accuracy while keeping it suspended in a near-absolute-zero environment, will fall into a so-called superposition.
Remember the transistor? It’s either 1 or 0. The qubit, however, can be either 0, or 1, or anything in between (meaning a little of both at the same time). It uses a quantum state, which basically means it’s everything and nothing at the same time.
To describe it really simply: Instead of having to go through the three binary number examples one after the other, a quantum computer can calculate and display all three at the same time.
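To put rough numbers on that contrast, here is a toy sketch in plain Python (purely illustrative, not a real quantum program): the classical search inspects one state at a time, while a superposition is modeled as a list of equal amplitudes whose squares give the probability of each state.

```python
import math

# Classical search: check states (cups) one by one until the ball is found.
def classical_search(cups):
    checks = 0
    for i, cup in enumerate(cups):
        checks += 1
        if cup == "ball":
            return i, checks
    return -1, checks

# Toy model of a superposition: all basis states held at once,
# each with amplitude 1/sqrt(n), so the probabilities sum to 1.
def uniform_superposition(n):
    amp = 1 / math.sqrt(n)
    return [amp] * n

cups = ["empty", "empty", "ball"]
index, checks = classical_search(cups)
print(index, checks)                 # ball at position 2, found after 3 checks

amps = uniform_superposition(3)
probs = [a * a for a in amps]
print([round(p, 3) for p in probs])  # each state carries probability 1/3
print(round(sum(probs), 3))          # amplitudes are normalized: total 1.0
```

The point of the sketch is only the bookkeeping: the classical function pays one check per state, while the superposition carries an amplitude for every state at once.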
Imagine the game where you put a little ping pong ball under one of three plastic cups and start switching the cups around. If you were to work like a regular computer, you’d lift them up one by one to find the ball. A quantum computer simply lifts up all three at the same time, finds the ball, and then acts as if it never lifted the two empty cups in the first place. | https://medium.com/illumination/quantum-computing-and-the-meaning-of-life-not-just-42-b1d638c6cdd0 | ['Kevin Buddaeus'] | 2020-09-06 02:46:13.153000+00:00 | ['Technology', 'Data Science', 'Future', 'Science', 'Life'] |
1,121 | How to facilitate understanding of data? Data visualization [Examples and tools] | Data is everywhere.
We live in a world where there is a lot of data, and we may even think that there is too much data.
It’s true.
There are a few signs that there is indeed too much of it: the amount keeps growing, and it is difficult to analyze, process, or draw conclusions from it, which is why it is so important to be able to present data in a proper way.
In 2000, when the Sloan Digital Sky Survey telescope in New Mexico began operating, it collected more data in its first weeks than had been gathered in the entire history of astronomy before it, and today more or less the same amount of data is collected every few weeks.
Walmart has over 11,000 stores in 27 countries and processes 2.5 petabytes of customer data every hour. For comparison, 1.5 PB is about 10 billion Facebook photos.
This is an unprecedented amount of data in the past.
It seems that we live in a world ruled by the need to collect data; this is what the modern economy is based on. Each of us “produces” data: each of us carries a phone with installed applications that keep sending information about the user. Data has, in fact, become a new currency. How many times have you encountered a situation where you got something “for free” just for providing your data? Promotions for setting up a bank account, marketing materials such as e-books in exchange for your consent, applications like Facebook, search engines, and Google tools.
Data is a new business.
The art is to select the most important data out of such a huge amount of information, to invest time and human resources, and to do it in a way that interests the audience.
What’s the purpose of data visualization?
If you want your Facebook post to have record-breaking results — what do you do? You add catchy, attractive graphics. This works the same way with reports. Good visualization attracts attention, is easier to understand, and helps you reach your audience quickly. With dashboards and graphics adjusted to the target group, even huge datasets can be clear and understandable. Why?
Because most people are visual. So if you want your meetings with colleagues to be effective and your customers to understand your data better and faster, you should turn boring charts into eye-catching graphics. Here are some interesting numbers that confirm the importance of visualization:
People receive 90% of all information from their eyes,
Photos increase the readability of the text by 80%,
People remember 10% of what they hear, 20% of what they read and 80% of what they see,
If the leaflet contains no illustrations, people will remember 70% of it. Adding graphics can increase the number to 95%.
Proper visualization of data also provides many benefits for your company:
Quick decision making. Summarizing data is easy and fast thanks to graphics that allow you to quickly see if a specific column is higher than others or if a given indicator exceeds a predetermined threshold, etc., all without the need to browse several pages of statistics in Google or Excel Sheets.
More engagement. Most people are better at seeing and remembering information presented in graphics with clear messages.
A better understanding of data. Well done reports are transparent not only to technical specialists, analysts and scientists dealing with data but also to non-technical managers such as a CMO or CEO, and they help each employee make decisions in their area of responsibility.
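As a minimal sketch of the "quick decision making" point (the region names and sales figures below are invented for illustration), the question "which bars exceed the threshold?" can be precomputed and mapped to colors before the data ever reaches a charting tool:

```python
# Hypothetical monthly sales per region (invented numbers).
sales = {"North": 120, "South": 95, "East": 140, "West": 88}
threshold = 100  # the predetermined threshold the chart should make obvious

# Precompute the answer the chart is supposed to give at a glance:
# which categories exceed the threshold, and what color each bar gets.
above = [region for region, value in sales.items() if value > threshold]
colors = {region: ("green" if value > threshold else "grey")
          for region, value in sales.items()}

print(above)   # ['North', 'East']
print(colors)  # {'North': 'green', 'South': 'grey', 'East': 'green', 'West': 'grey'}
```

A charting library such as matplotlib or Plotly would then simply draw the bars using this color mapping, so the over-threshold categories stand out without anyone reading the raw numbers.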
With this influx of data, visual communication can become a key aspect of attracting and retaining users for longer, and of helping stakeholders understand and learn from the data presented; this is the problem discussed in this article.
So, how do you do it?
Surprisingly, the biggest problem in data visualization is not choosing the wrong tool or lacking skills, but something more prosaic and strategic in nature: a lack of orientation toward the end-user. As a result, visualizations are often produced automatically, without considering whether the recipient for whom the graphic representation is made will understand the presented results and, above all, whether reading it will take more time than working with the raw data would.
This often results in a “mess” of data, when we want to present more data than the human brain can quickly and effectively analyze. Examples of how NOT to do visualization:
Completely incorrect proportions of data
Obvious manipulation of scale/proportion
3D charts work well very rarely…
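The scale manipulation shown above can even be quantified. A quick illustrative calculation (values invented): truncating the y-axis turns a small real difference into a large visual one.

```python
# Two hypothetical values that differ by only 4%.
a, b = 100, 104

true_ratio = b / a  # 1.04: b is really just 4% larger than a

# If the y-axis is truncated to start at 90, the drawn bar heights
# are the distances above the axis baseline, not the actual values.
baseline = 90
visual_ratio = (b - baseline) / (a - baseline)  # 14 / 10 = 1.4

print(round(true_ratio, 2))    # 1.04
print(round(visual_ratio, 2))  # 1.4
```

A 4% real difference is drawn as a 40% difference in bar height, which is exactly why the value axis of a bar chart should normally start at zero.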
In order not to make such mistakes, I suggest you stick to the following (often forgotten) basic questions to make a proper mindset for your next data visualization challenge.
For whom do you direct this visualization? RECIPIENT
Identify the highest-priority people (e.g. teacher, classroom/management board, end-users). What are the current problems of the company, what are the expectations of the management, and what difficulties prevent the problem from being solved? Resist the temptation to create a visualization that meets the needs of each individual.
Why do you do this visualization? CONTEXT
Specify what issues you would like to discuss in the presentation, e.g. a business presentation. It is worth considering the type of decision: strategic (e.g. a single answer, such as whether to buy a given property), operational (issues requiring a response many times a day), or more tactical (issues requiring a regular weekly or monthly review at a meeting).
What do you want to achieve with the presentation?
With a small selection of relevant information, you can significantly influence the stakeholders’ next decisions, e.g. by contrasting sales results that exceed the statistically significant norm in a specific period of time.
How will you create this visualization? TYPE
Standard charts, or artistic visualizations that require more work but bring better results: properly made, they subconsciously affect emotions and, going further, decisions.
How to tell if a visualization is “good”?
Three simple criteria. Good visualization should have:
Story (functional and at the same time engaging [storytelling] content)
Adjusted form (design adapted to the target audience, e.g. simple charts for a weekly report, or artistic charts for a presentation aimed at evoking emotions)
Values (consistency of information showing solution)
And what is “bad” visualization?
Bad visualization means:
Lack of functionality (effect — uselessness)
Lack of appropriate form (effect — misunderstanding)
Inconsistency (possible consequence — potential manipulation)
Examples of well-made visualization (subjective)
Interactivity
A perfect example of a data visualization that combines all the necessary ingredients of an effective and engaging piece: it uses colour to easily distinguish trends; it allows the viewer to get a global sense of the data; it engages users by allowing them to interact with the piece, and it is surprisingly simple to understand in a single glance.
Story
This example tells the story of every known drone attack (obviously not controlled by AI…) and the victims in Pakistan. By sorting the information, the dramatic facts were presented in an easy to understand visual format.
Time-saving
This visualization shows 100 years of the evolution of rock in a single page. Not only does it simplify information for you, but also provides actual audio samples for each genre, from electronic blues to dark metal.
The context
The goal of this insightful interactive piece by Nikon is to give users a sense of the size of objects, both big and small, by using comparisons. Next to the Milky Way, for example, a common object such as a ball or a car seems smaller than we ever imagined.
An ideal example for presentations, e.g. to a board of directors. The aim is to create a chart that shows the average price per carat of a diamond over five years. What should be done to attract attention, increase the memorization of the information, and perhaps even affect further decisions such as entering a specific market? Add the context — “Diamonds WERE a girl’s best friend”.
Which visualization tools should I use?
It depends. Data visualization is a form of communication used in various fields, e.g. science, business, journalism etc.
Therefore, everything depends on the purpose of presenting data, the level of advancement of visualization and your experience with a particular program.
If you are just beginning with data visualization or simply lack an idea for a proper graph, then visit the website — https://datavizproject.com
To select the right tool, you need to specify an (often contradictory) objective:
- Analysis or presentation? — Do you want to research data (R, Python) or build visualizations for e.g. a client (D3.js, Illustrator) or maybe something in between (Tableau, Ggvis, Plotly)?
- Changes — Will you change your data while doing the visualization? In Illustrator you have to start building your chart from the beginning when you change/add value. In D3.js you can change the data in an external location and update the database by re-importing. In Plotly and Lyra — just import the database once and you can freely change it in the tool without losing a lot of valuable time.
- Basic or unusual chart types? You need standard “bar” or “line” charts (Excel, Highcharts); or maybe more original? (D3.js). If you don’t know how to code, then the solution to the second situation may be the Lyra application, where you can change any element without entering even a single line of code.
- Interactivity vs. static: You need to create interactive graphics e.g. for a website (D3.js, Highcharts) or maybe you just need static graphics in PDF/SVG/PNG format (R, Illustrator).
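To make the static-vs-interactive distinction concrete, here is a toy sketch in plain Python (illustrative only, no charting library): a "static" chart is ultimately just markup, such as an SVG file with one rectangle per bar. Interactive tools like D3.js or Highcharts emit similar markup but attach data-driven behavior to it.

```python
# Build a minimal static bar chart as SVG markup (illustrative only).
def bars_to_svg(values, bar_width=40, gap=10, height=150):
    rects = []
    for i, v in enumerate(values):
        x = i * (bar_width + gap)
        # Each value becomes one rectangle anchored to the chart baseline.
        rects.append(
            f'<rect x="{x}" y="{height - v}" '
            f'width="{bar_width}" height="{v}" fill="steelblue"/>'
        )
    width = len(values) * (bar_width + gap)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">' + "".join(rects) + "</svg>")

svg = bars_to_svg([120, 60, 90])
print(svg.count("<rect"))  # 3 bars -> 3 rectangles
```

Saving the returned string to a .svg file gives exactly the kind of static SVG/PDF/PNG-style output mentioned above; interactivity is what a tool layers on top of such markup.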
“There are no perfect tools for everything, there are only good tools for people with specific goals.”
The tools should be tailored to a specific need (blue — software libraries, red — programs).
What tools do we use in Wrocode to visualize data?
Free
Google Fusion Tables (a great tool for presenting geographical data; unfortunately, Google is ending support for it on December 3, 2019), Tableau Public (easy to use — often used by us to present e.g. sales data)
Paid
Sisense (simple interface — huge possibilities → definitely worth recommending)
…and many others depending on need. | https://medium.com/wrocode/how-to-facilitate-understanding-of-data-data-visualization-examples-and-tools-c07405bef679 | ['Łukasz Busz'] | 2019-08-26 12:58:30.284000+00:00 | ['Data Visualization', 'Big Data', 'Data Science', 'Graphic Design', 'Dashboard'] |
1,122 | Like a Stone: | May 2020 marks the 3rd Anniversary of Cornell’s death. This was, is and will always be my love-letter to his family and musicians/music lovers everywhere.
My wheels crawled along the asphalt and I breathed in the afternoon sky. Brushstrokes of cotton candy melting with fireside abstracts served my daily commute home from work well. How could I possibly mind rush hour when my drive literally reminds me to appreciate the view? Time pushed as traffic crept along the California coastline and so did I. My thoughts swirled around nothing and everything as my eyes took deliberate turns between sky and road. The volume on my radio was low enough to dim the car commercials but still present enough to tap my ear when I landed on a tune I liked.
Enter Chris Cornell. And just like that. My sunset had a soundtrack.
As soon as I heard those pipes the fact that I couldn’t peg which song he was singing (a rarity) didn’t matter. His voice is undeniable; uncompromised passion with a bellowing tone that weaved through the speakers straight into your blood. I was only a few seconds in when it hit me — I had no idea what song this was.
As a retired party girl who made her living on the stripper stage in the 90s, I take great pride in being ‘in the know’ with musicians from back in my day. I worked the clubs in Waikiki from Milli Vanilli and Terence Trent D’arby to Mötley Crüe and Fatboy Slim. Now decades later I can still tell you, with each song I hear — where I was working, what beaded leather or fluorescent lace costume I wore and which drug dealer had the best coke. Knowing his voice but not recognizing which song was more than annoying — it was a treat. I turned up the volume and without warning his lyrics pulled me inside a part of myself I was not expecting to revisit on a random afternoon drive home from the office.
“And I sat in regret
Of all the things I’ve done
For all that I’ve blessed
And all that I’ve wronged
In dreams until my death
I will wander on”
I could blame the sudden tickle in my nose and watery eyes on PMS or low blood sugar, but the fact is — Chris Cornell was more than his voice — he was a rock and roll poet.
My introduction to Audioslave and their new tune Like a Stone reignited something in me that’s difficult to describe. My pole dancing days far behind me, I wrestled with feelings of anxiety remembering who I was then compared to the woman I’ve become. There I was, driving home from my corporate job in an upper class, conservative town, and a song I fell in love with became a magic carpet ride to my past. With each note and lyric, I danced through a wormhole to a time when I was old enough to be on my own and make reckless choices, yet young enough to find my way out.
It seemed Chris Cornell and I both found our way out. He too was a survivor of the party scene in the 90s. I didn’t follow his personal life, and to be honest wasn’t aware that he was back on the charts fronting his new band at the time, Audioslave. But like a long lost sister who powered through Aquanet hairspray and faded jeans torn at the knee, I was proud we were both doing well in our new lives. | https://medium.com/narrative/like-a-stone-e5738bbe43 | ['Christine Macdonald'] | 2020-08-29 16:08:16.096000+00:00 | ['Depression', 'Death', 'Mental Health', 'Suicide', 'Music'] |
1,123 | Why Going Vegan Might Not Be a Solution at All | Why More Cattle?
Well, what does this all have to do with more cattle?
We live in a broken world. And we want to come to a sustainable, desirable world like I just described. We want to be wise humans living within the ecosystems.
And don’t think that means we will be going back in time, living like cavemen. I’m convinced we will be living in harmony with nature in cities too.
But for that to happen, we need a transition period.
And the most important part of the transition is the restoration of the soil of our planet. We have been degrading the soil for ages and ages by maximizing our food production without care for the land.
We need to increase biodiversity and scientists estimate that at least one-quarter of our species on planet Earth live in the soil.
Soil Food Web for biodiversity. Source: commons.wikimedia
And the damage is so bad by now that farmland and nature reserve land both are very degraded. All over the world. Soil creatures cannot breathe anymore because the layers have compacted. Due to this compaction, water cannot be stored in the soil and plants grow with very superficial roots.
I work with farmers in my home country the Netherlands and the first thing we teach in nature-inclusive farming is that we have to prevent more compaction from happening. We have to stop compacting the soil with heavy tractors. If we don’t do that, all other efforts for creating biodiversity will fail.
The next steps are natural manure to give enough nitrogen and phosphorous to our soils combined with lots and lots of decomposing organic matter to build up humus. The healthy topsoil that our planet needs so badly.
Our agriculture has played a large role in degrading the land. Mindful agriculture can play a big part in restoring our soil to become healthy again. And that’s where cattle and chickens and pigs come in.
Animals are an intrinsic part of the web of life.
Cattle have hooves and they trample the soil in such a way that compaction is solved in a natural way. Chickens have claws and they love to eat the maggots out of the cattle manure. And pigs just love to turn soil with their snouts.
Allan Savory makes the case for more cattle with what he calls Holistic Management. Yes, overgrazing has been a problem degrading our soil. A solution, however, is not to go for less grazing.
Allan says we need more cattle and chickens and pigs doing what they do best: grazing our grasslands, pooping, and trampling the soil. We need to make sure the cattle are in big herds, grazing very quickly, pooping an immense amount of manure, and then moving on, leaving the land alone for one or two complete years (natural cycles of plant growth).
I do agree with Allan very much when he says to environmentalist organizations that they need to work with cattle in Nature Reserves to restore degraded soils this way. Yes, we should.
And in this short film, permaculture expert Geoff Lawton gives his perspective on the matter. He gives nuances to Allan’s story that I relate to very much. We need to look at every situation locally and decide what tools of people-and-animal-collaboration works best.
Yes, we need animals.
There’s not a doubt in my mind that we need animals on a healthy planet. So what about the ammonia farts of cows, you ask? Healthy soil will absorb all. The problem is we don’t have healthy soil anymore…
Sorry, dear vegans, I will personally not go vegan. And I do not think that going vegan is a solution to all of our problems.
During the transition period, we need to work together. People in teams with animals to make our planet healthy again. And make sure we have enough food for all. And earn a living in our economies.
We have to find new ways of being together, producing our human food as well as restoring ecosystems. | https://medium.com/climate-conscious/why-going-vegan-might-not-be-a-solution-at-all-e85c28c4bcf | ['Desiree Driesenaar'] | 2020-10-07 11:02:41.757000+00:00 | ['Climate Change', 'Vision', 'Sustainability', 'Vegan', 'Climate Action'] |
1,124 | How to Unlock the 5-Paragraph Essay to Improve Your Writing | How to Unlock the 5-Paragraph Essay to Improve Your Writing
Why Beginnings, Middles, and Ends Still Matter in Writing
Photo by Magda Ehlers from Pexels
I taught the dreaded 5-paragraph thesis essay for 25 years in college. The students hated it. I hated reading their dull, lifeless essays. And yet, we persisted. There is a lesson in the structure of the 5-paragraph essay: a human need for a beginning, middle, and end.
Success in school is often measured by how well a student writes a 5-paragraph thesis essay. College entrance exams, AP tests, the SAT, ACT, and exit exams in freshman writing classes often use the 5-paragraph thesis essay as the standard-bearer of good writing. Most of you reading this today probably had courses that emphasized the 5-paragraph essay, with its formulaic beginning with thesis statement as last sentence, its three body paragraphs to express three supporting points, and its conclusion focused on restatement of main points.
The 5-paragraph thesis essay is so formulaic that it doesn’t even have to be 5 paragraphs long.
The 5-paragraph thesis essay is so formulaic that it doesn’t even have to be 5 paragraphs long. It could be 20 paragraphs, with a single paragraph introduction, 18 paragraphs for each point to be made (1 point per paragraph), and 1 concluding paragraph. The format is extendable, like a ladder, but not flexible.
For all its faults, the 5-paragraph thesis essay is a perfect fundamental starting point to structuring writing. It is not, however, a model to hold up and worship. On the contrary, many writing theorists will attest to its limitations. But it does emphasize the absolute basics about writing that most of us practice today — a beginning, a middle, and an end.
The limitations of the 5-paragraph essay
I once taught a freshman college student, a young man from western Kansas, who wrote page after page of dull 5-paragraph essays. He was a hard worker, but he was not improving. After 3 essays, we sat down together to analyze his efforts.
Upon closer inspection and in talking with him, I discovered that upon the advice of his high school English teacher, he had employed the most severe 5-paragraph formula I had ever seen. Each essay was no more and no less than 20 sentences long. Each paragraph had four sentences — a topic sentence, a specific discussion point, an example, and a transitional sentence. In the introduction, the transitional sentence was the thesis statement, the overall point of the essay. In the conclusion, the transitional sentence was a concluding statement broadening the point of the whole essay or rounding it out with a clever statement.
In short, my student’s essays lacked development, specific details, life’s blood. It was all skeleton and no flesh. With the 20-sentence formula, there was no room for elaboration, a second (or third or fourth example), a personal anecdote, an insightful quotation.
My student’s essays lacked development, specific details, life’s blood. It was all skeleton and no flesh.
I don’t fault my student for trying what he was taught, nor do I fault his teacher for presenting a formula to prepare students for standardized college entrance exams. But the rigors of college thinking and learning were not suited for such a formulaic 5-paragraph thesis essay. Thought and ideas do not come neatly packaged with only three points. What if an essay idea needed a fourth point? What if a paragraph point needed a second example to help support and illustrate an idea? In the student’s formula, he would not have the opportunity to help his own case.
The origins of the 5-paragraph thesis essay
This story may be apocryphal, but it makes a good point. During graduate school, in my teacher training program, our director explained the origins of the 5-paragraph thesis essay thus: During WWII, with the enormous response of men of all stripes volunteering to fight fascism, the drill sergeants needed a method to teach the recruits to get them through basic training and to the front lines as quickly as possible. Recruits came from a great variety of socioeconomic and educational backgrounds, from the very sharp to the very dull-witted. No one who was physically able, however, was turned away.
To teach to the common denominator, the drill sergeants hammered home a three-part structure.
1. Tell them what you are going to tell them.
2. Tell them.
3. Tell them what you told them.
For instance:
1. Today’s lesson is about keeping your helmet on your head, the most important of all your safety gear.
2. Keep your helmet on your head, for it is the most important of all your safety gear.
3. That’s it for your helmet. Remember to keep your helmet on your head to keep your head safe.
In this way, recruits heard the advice three times. In the rush to have recruits learn so much material in so little time, repetition was seen as a way to make the most important points stick. Say something often enough and people will remember it. If even one recruit remembers to keep his helmet on to prevent a fatal injury, the drill sergeant can sleep soundly knowing he did the job as best he could.
Many of these drill sergeants turned into the college professors that filled America’s universities following WWII. They took the theories and ideas they learned from the service and applied them to their own work as writing theorists. And thus, the 5-paragraph thesis essay was born, with its 3-part structure:
Introduction — Tell your readers what you are going to tell them. Body — Tell your readers your main points. Conclusion — Tell your readers what you told them.
Here’s the rub. For all the millions of 5-paragraph thesis essays written by students over the years, this is not a form that occurs in nature. Browse through dozens of Medium articles today and look for a 5-paragraph thesis essay. I defy you to find one. Browse through magazines, such as The New Yorker, National Geographic, Esquire, Vogue, even Time and Newsweek, and you won’t find a 5-paragraph thesis essay there either.
For all the millions of 5-paragraph thesis essays written by students over the years, this is not a form that occurs in nature.
English teachers worth their salt know that the 5-paragraph thesis essay is a starting point for most students, not the goal. With declining literacy rates and skills today, you have to start somewhere. Most teachers want their students to excel beyond the 5-paragraph thesis essay, to learn the fundamental form so that they can escape from that limiting box into something else like a flowering vine that will meander and wind where it will.
The 5-paragraph thesis essay: beginning, middle, and end
Writing needs beginnings, middles, and ends. Human life is organized around this 3-part structure. Childhood / prime of life (adolescence and adulthood) / old age. Or non-working life / working life / retirement — whatever structure you put to it — we are biologically predisposed to see life in this 3-part structure.
This doesn’t mean that every article, every story, every narrative has an overt beginning, middle, and end. Not every story starts with “Once upon a time” or “A long time ago, in a galaxy far far away . . .”
Some of the greatest literary works start famously in medias res, in the middle of things, such as Hamlet, in which Hamlet’s father has already been killed and the plot set firmly in motion. Homer’s Iliad opens with a quarrel between Achilles and Agamemnon in the ninth year of the war, long after the beautiful Greek Helen was kidnapped by Prince Paris of Troy, the action that prompted the war.
Some stories start far before their stated offerings. The great comic novel, The Life and Opinions of Tristram Shandy, Gentleman, is presumably about Tristram’s life, yet the first 150 pages are devoted to philosophical arguments between his father and uncle and various personages and getting Tristram himself born. Thus the “beginning” of this tale is way before the beginning of the main character’s life.
Or to take a more contemporary example, consider the enthralling Christopher Nolan movie Memento, which starts at the end and unfolds backward toward the beginning through the lens of Lenny, who suffers from anterograde amnesia, the inability to form new memories.
So much writing about writing discusses the opening or “the hook,” such as a personal anecdote to lead a reader into the essay. That is the beginning. And then the middle of an essay is the development of the main points, the details, perhaps an enumerated list (usually more than 3 things) with detailed examples of the main ideas. And then the end of the essay is included as a takeaway point, the one thing that you want your readers to remember. Even today, we are bound in so many ways to this 3-part “beginning, middle, and end” structure.
The takeaway
Writing is as much art as craft. The art of writing is hiding the craft from the reader, to hide the scaffolding, the hammer and nails and wooden planks that build the structure, while still offering a beginning, middle, and end. Or to use another metaphor, we clothe our writing so the skeleton doesn’t show. At first, our essays are skin and bones, undernourished adolescent structures. Once we flesh them out, they grow and flourish, and they need new clothes for their new shapes.
So after all, we are all still writing 5-paragraph thesis essays — essays composed of beginnings, middles, and ends, with a point to make, ideas for development, and a take-away point. We just clothe our essays in finer silks than their cousin, the 5-paragraph thesis essay. | https://medium.com/swlh/how-to-unlock-the-5-paragraph-essay-to-improve-your-writing-3f53e5eb96dd | ['Lee G. Hornbrook'] | 2020-04-16 15:00:17.451000+00:00 | ['Writing Tips', 'Writing', 'Essay', 'Higher Education', 'Creativity']
1,125 | Will Tech’s Monopolies Survive 2020? | Will Tech’s Monopolies Survive 2020?
How the triple turmoil of a pandemic, protests, and a presidential election threatens Silicon Valley’s status quo.
Photo: Wang Ying/Xinhua via Getty
Welcome back to Pattern Matching, OneZero’s weekly newsletter that puts the week’s most compelling tech stories in context.
There was a brief moment, at the peak of the Covid-19 pandemic’s first wave in the United States, when it looked like Big Tech might be back in the public’s good graces. With stay-at-home orders across the country, screens were no longer an addictive distraction from real life, but the locus of real life itself. Zoom was powering business meetings; Houseparty, happy hours. Facebook was once again a dominant force in news; Apple and Google were partnering on a privacy-conscious contact tracing app. Politicians in the United States and Europe who had been laying the groundwork for new regulations suddenly had more urgent things to worry about.
That moment has passed. The lifting of lockdowns has been greeted not with sighs of relief at a return to the status quo, but with rallying cries to change it. There are protests in the streets. A presidential election looms. While criminal justice reform tops the domestic agenda, the appetite for tech reform appears to have returned as well.
The Pattern
Big Tech is back in the hot seat.
💬 The European Commission this week opened two antitrust probes against Apple, focusing on how its App Store rules and Apple Pay system, respectively, hamstring competitors. The App Store investigation was sparked by a 2019 complaint from Spotify about Apple’s practice of taking a 30 percent cut of subscription revenue when users sign up for third-party services through their iOS apps. That puts Spotify at a disadvantage in competing with Apple’s own Apple Music service, from which Apple keeps 100 percent of revenues. (In 2018, Spotify stopped allowing users to pay via iOS.) The Apple Pay investigation, meanwhile, will examine how Apple limits the use of its devices’ “tap and go” payment functionality to Apple Pay alone, once again giving its own service a big edge over competitors.
💬 Spotify is hardly the only company affected. I wrote in depth in February about the brewing antitrust case against Apple, and the developers lining up to testify against it. In a twist of timing, one of those developers, Basecamp, launched a new paid email app, called Hey, on the same day the EU investigation was announced. Apple rejected it, on the grounds that it doesn’t allow the in-app subscription options that would give Apple its 30-percent cut. Protocol’s David Pierce recounted how that decision went down, while The Verge’s Dieter Bohn blasted Apple for inconsistencies in how it enforces its rules. Even Apple blogger John Gruber, who often defends the company, agreed that the company’s rent-seeking has gone too far. (Meanwhile, if you’re interested in Hey, read developer Kaya Thomas’ OneZero review of the buzzy, pricey new email platform.)
💬 Even mighty Facebook can’t get its apps onto Apple devices when they compete directly with Apple’s own offerings. The New York Times reported Thursday that Apple has rejected Facebook Gaming, the social network’s new casual gaming app, at least five times in the past four months, citing policies against apps that function primarily as game stores. Google, for its part, quickly approved Facebook Gaming on the Google Play store in April. Illustrating the user-unfriendly effects of Apple’s restrictions, the Times article explains that Facebook’s approach to getting its app approved has involved continually making the interface less intuitive, on the theory that this would make it less store-like.
💬 That Apple governs its App Store with impunity, and often to its own advantage, is not new. Neither is it new that Apple’s own apps sometimes compete with, copy, and crowd out those made for its platforms by independent developers. What is different now are the scope of Apple’s first-party app ambitions, the number of developers willing to risk the giant’s ire by speaking out, and the willingness of people in power to listen. In OneZero this week, Owen Williams argues that the EU case could be “a defining moment for the technology industry, as companies like Google and Facebook may find themselves scrutinized in a similar way.” And speaking of Google…
💬 Google made a similar power play this week by integrating its Meet videoconferencing software into the Gmail app. It’s a transparent attempt to leverage Google’s dominance in one market — email, in this case — against a rival (Zoom) that was outcompeting it in another market.
💬 The regulatory fervor is not confined to Europe. Back in the United States, the right broadened its assault on Section 230 this week, as Sen. Josh Hawley introduced a bill that would make it easier to sue tech companies for inconsistencies in how they moderate content. The move comes two weeks after Donald Trump signed an executive order challenging the legal protections that online platforms enjoy under Section 230 of the Communications Decency Act. Gizmodo’s Dell Cameron argues that the bill, like Trump’s order, is mostly toothless: It still allows companies to set their own rules of moderation, as long as they stick to them and apply them equally to all parties “in good faith.”
💬 And yet even that mushy qualifier could open the door to enough lawsuits that some companies may simply decide a more hands-off approach is safest. Which is, of course, what Trump and Hawley want: for social platforms to keep their paws off of racist or false content from right-wing sources, including the president himself. This week conveniently brought us an illustration of the kind of dustup that could turn into a lawsuit, when NBC News reported that Google had banned the financial site ZeroHedge and conservative political site The Federalist from its ad network for spreading racist conspiracy theories about the anti-police brutality protests. As an uproar spread — both sites have large, vocal followings — Google disputed NBC News’ story. Google said The Federalist was never demonetized, but that it had reached an agreement with the publisher that involved removing racist comments from The Federalist’s comment section.
💬 These are the types of interventions that liberals and civil rights activists, along with some of tech companies’ own employees, have been calling for. (Some civil rights groups are now calling on companies to boycott Facebook’s ad platform.) They’re also the type that get the right riled up and build momentum for bills like Hawley’s, as Ben Shapiro and Ted Cruz were quick to rail against Google’s moves this week. As I wrote in a previous newsletter, we appear to have at last reached the point where the big platforms have to pick a side. Twitter was the first to do so, when it started flagging some of Trump’s tweets as misleading, and the company kept up its enforcement against him this week by putting a warning label on a video he tweeted. The video fabricated fake CNN footage of a “terrified” Black toddler running away from a “racist baby,” then implied that the network was spreading divisive fake news (a classic example of Trumpian projection).
💬 Facebook has opted for the laissez-faire approach to Trump’s posts, but even it felt compelled to take action this week when the liberal blog Media Matters for America reported that the president was running Facebook ads with Nazi iconography. (Trump’s campaign then claimed the inverted red triangle was an antifa symbol — which is a lie, according to historians who study the group.) Facebook removed the 88 offending ads.
💬 Antitrust enforcement and Section 230 reform are separate issues. But the growing momentum behind both is indicative of a larger trend: Big Tech has lost the benefit of the doubt. That happened long ago in Europe, but it is finally happening in the United States as well, from both major parties. And any notion that the pandemic or a Republican presidency would ease the regulatory pressure on Silicon Valley has now been put to rest. Americans of all ideologies are fed up with business as usual, their polarization arguably stoked by the tech platforms themselves, and their economic stability undermined by the rise of the gig economy. In other words, we’re living in a mess that is partly of the tech industry’s making. And now that mess is coming back to haunt it.
Undercurrents
Under-the-radar trends, stories, and random anecdotes worth your time
🗨️ Two Black leaders at Pinterest left the company over racial discrimination, saying they were subjected to offensive comments, unfair pay, and retaliation. CEO Ben Silbermann subsequently issued a public apology and admitted “parts of our culture are broken,” Bloomberg’s Sarah Frier reported. But the ex-employees, Ifeoma Ozoma and Aerica Shimizu Banks, who made up two-thirds of the platform’s public policy and social impact team, said on Twitter that they heard the apology only through the media. Their allegations are part of a wider reckoning over tech companies’ treatment of Black employees, and they dent the reputation of a platform that had previously earned praise for some progressive policies — which, it turns out, Ozoma and Banks had been criticized by their managers for championing. Read Ozoma’s full thread here.
🗨️ Lesser-known face recognition companies are eagerly courting law enforcement, looking to fill the vacuum after IBM, Microsoft, and Amazon stepped back. Clearview AI, NEC, and Ayonix are among those poised to capitalize by ignoring the anger over surveillance technology’s discriminatory effects on Black communities, the Wall Street Journal reported. My OneZero colleague Dave Gershgorn has written about an even longer list of companies, including many that you might not expect, that have been trying to cash in on a face recognition gold rush. In Bloomberg Opinion, Cathy O’Neil makes the case that face recognition by law enforcement will continue until or unless Congress repeals post-9/11 legislation, such as the Real ID Act, that prioritized antiterrorism efforts over civil liberties.
🗨️ Instagram’s algorithm systematically incentivizes its users to show skin in their photos, according to a report from the nonprofit AlgorithmWatch.
🗨️ The great scourge of bots on social media may be overstated, bot expert Darius Kazemi argued in a New York Times article by Siobhan Roberts.
Headlines of the Week
Facebook Groups Are Destroying America
— Nina Jankowicz and Cindy Otis, Wired
Devin Nunes’ Attorney Says He’s at ‘Dead End’ in Quest to Reveal Identity of Twitter Cow
— Kate Irby, Fresno Bee
Thanks for reading. Reach me with tips and feedback by responding to this post on the web, via Twitter direct message at @WillOremus, or by email at [email protected]. | https://onezero.medium.com/will-techs-monopolies-survive-2020-90a8ea05b6c3 | ['Will Oremus'] | 2020-06-20 13:57:29.136000+00:00 | ['Technology', 'Apple', 'Facebook', 'Pattern Matching']
Identity Twitter Cow — Kate Irby Fresno Bee Thanks reading Reach tip feedback responding post web via Twitter direct message WillOremus email oremusmediumcomTags Technology Apple Facebook Pattern Matching |
1,126 | Easy Peasy Stores With Public and Private Actions | Easy Peasy Stores With Public and Private Actions
Easy Peasy provides a better API and experience on top of Redux
Bengtskar Lighthouse, Finland, 2020. Photo by the author.
Since the end of 2019, I have been using Easy Peasy to manage the state of my applications both professionally and personally. The library has a familiar API and logic with a lightweight feel and good flexibility. If you're using or have used Redux and aren't fully sold on it, take a look at Easy Peasy. It may be worth it.
“Easy Peasy is an abstraction of Redux, providing a reimagined API that focuses on developer experience.” — Easy Peasy’s official website
Each time I've set up an Easy Peasy store, I've experimented more with its implementation. I've asked myself more questions about not only what the library can do but what can be done with it. During my latest project, I asked myself, "What if I wanted my app to have access to only a specific subset of actions? Can Easy Peasy create private and public actions for its stores?"
I looked through the docs but didn't find an answer to this question. I'd hoped for something like JavaScript classes, where I could preface any action or state value with private. While that wasn't the case, it doesn't mean there wasn't an answer.
I mentioned that with each project, I explored more of not only what Easy Peasy could do but what I could do with it — and this will be an example of the latter.
Note: This article will assume a base understanding of creating a store and Hooks using Easy Peasy. Some code samples will roughly include these concepts but will not explain them or show them in full. Please visit the Easy Peasy docs for better information on getting started. | https://medium.com/better-programming/easy-peasy-stores-with-public-and-private-actions-5cc1682765da | ['Daniel Yuschick'] | 2020-10-29 14:38:41.187000+00:00 | ['Programming', 'JavaScript', 'Redux', 'Typescript', 'React'] |
1,127 | Would You Buy A Book For $117? Here’s How I’ll Sell It | Who would spend that much on a book? Nobody. But I’m not positioning this as a book. I’m selling hard to find information. Some of it cannot be found elsewhere. The book acts as the vehicle for delivering the info.
I’m selling it as a physical book only. $117 sounds like a lot of money for a book. I should also mention you can’t buy it on Amazon or at any bookstore. Plus, it appeals to a narrow audience. Only a few can benefit from the information.
None of this seems to make sense, right? It's hard to find the product. It's expensive. It appeals to a narrow audience. The format may not be convenient. What am I thinking? It all seems counter-intuitive. The truth is, to sell a book for that much money you need to do things differently.
Some of it seems puzzling at first glance. Here’s why it works.
Narrow The Audience
Niche down to a smaller audience with specific needs and desires. You’ll likely find an under-served audience. This audience must feel passion or have a desperate need for what you are selling.
For example, let's suppose you sell a course on dog training. You'll compete with hordes of others. Most folks seeking out this information will find tons of options. It's unlikely that you'll offer something so earth-shattering that you can charge a premium price for a book. You would need a different approach.
Now, let’s niche down and limit the audience to Catahoula Leopard dogs. You’re now facing a tiny audience. Though small, their passion for these dogs radiates. Only a few providers serve them. If you can fill that gap you hold more pricing power.
Narrow The Focus
Now we’re facing a handful of competitors. How could we narrow the focus and dive deep into a really specific part of Catahoula Leopard Dogs? What about training Catahoula Leopard dogs for competitive dog shows? Now we’re drilling down into a small but focused area of dog training. A small but fanatical crew of prospects exists. Our audience shrinks but the remaining prospects spend big dollars. That’s our opening.
Limit Availability
Scarcity drives up demand. Sure, that’s the first lesson in copywriting. If we offer our information in digital format it’s unlimited. A physical book you buy directly from the source screams scarcity. I can state that I’m printing only five hundred copies. Once I sell out they’re not coming back. It’s tough to sell that story on Amazon, even if it’s true.
The limited printing gives it a feeling of exclusivity. You’ll be one of only five hundred to possess this information. Your buyer feels a sense of prestige.
Oversize The Offer
The buyer gets more than just a book. The added bonuses increase the perceived value. You see, the book is just one piece of the overall offer. The onslaught of extras makes the price a no-brainer. I may even include some limited premium bonuses for the first twenty buyers.
Imagine crafting a sales campaign to owners of Catahoula competition dogs. See how specific we can get in our marketing message? You can drill down into the exact needs and challenges they face. Compare that to targeting all dog owners. There’s no way you can reach them on the same personal level.
There's one more advantage of going higher priced with a small audience. I can send a handwritten thank you note to all buyers. Try doing that with a seven dollar ebook. | https://medium.com/writtenpersuasion/would-you-buy-a-book-for-117-heres-how-i-ll-do-it-68e7ee8eeb8d | ['Barry Davret'] | 2017-03-26 21:43:08.187000+00:00 | ['Marketing', 'Business', 'Persuasive Writing', 'Psychology', 'Digital Marketing'] |
1,128 | An Ultimate Cheat Sheet for Data Visualization in Pandas | An Ultimate Cheat Sheet for Data Visualization in Pandas
All the Basic Types of Visualization Available in Pandas, and Some Advanced Visualizations That Are Extremely Useful Time-Savers
We use Python's pandas library primarily for data manipulation in data analysis. But we can use pandas for data visualization as well. You do not even need to import the Matplotlib library for that. Pandas itself can use Matplotlib in the backend and render the visualization for you. That makes it really easy to make a plot from a DataFrame or a Series. Pandas uses a higher-level API than Matplotlib, so it can make plots with fewer lines of code.
I will start with the very basic plots using random data and then move to the more advanced one with a real dataset.
I will use a Jupyter notebook environment for this tutorial. If you do not have that installed, you can simply use a Google Colab notebook. You won't even have to install pandas on it; it comes preinstalled.
If you want a Jupyter notebook installed that’s also a great idea. Please go ahead and install the anaconda package.
It’s a great package for data scientists and it’s free.
Then install pandas using:
pip install pandas
or in your anaconda prompt
conda install pandas
You are ready to rock n roll!
Pandas Visualization
We will start with the most basic one.
Line Plot
First import pandas. Then, let’s just make a basic Series in pandas and make a line plot.
import pandas as pd
a = pd.Series([40, 34, 30, 22, 28, 17, 19, 20, 13, 9, 15, 10, 7, 3])
a.plot()
The most basic and simple plot is ready! See, how easy it is. We can improve it a bit.
I will add:
a figure size to make the size of the plot bigger,
color to change the default blue color,
title on top that shows what this plot is about
and font size to change the default font size of those numbers on the axis
a.plot(figsize=(8, 6), color='green', title = 'Line Plot', fontsize=12)
There are a lot more styling techniques we will learn throughout this tutorial.
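One detail worth knowing before going further: every pandas plot call returns a regular Matplotlib Axes object, so you can keep styling with any Axes method even when pandas has no keyword for it. A minimal sketch (the Agg backend is used here only so the code runs headless; you don't need it in a notebook):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; not needed in a notebook
import pandas as pd

a = pd.Series([40, 34, 30, 22, 28, 17, 19, 20, 13, 9, 15, 10, 7, 3])
ax = a.plot(color="green", title="Line Plot")  # pandas returns a Matplotlib Axes
ax.set_ylabel("value")                         # so any Axes method still works
ax.set_xlabel("index")
```

This is a handy fallback whenever a styling option has no pandas keyword.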
Area Plot
I will use the same series ‘a’ and make an area plot here,
I can use the .plot() method and pass a parameter kind to specify the kind of plot I want like:
a.plot(kind='area')
or I can write like this
a.plot.area()
Both of the methods I mentioned above will create this plot:
The area plot makes more sense, and also looks nicer, when there are several variables in it. So, I will make a couple more Series, make a DataFrame, and make an area plot from it.
b = pd.Series([45, 22, 12, 9, 20, 34, 28, 19, 26, 38, 41, 24, 14, 32])
c = pd.Series([25, 38, 33, 38, 23, 12, 30, 37, 34, 22, 16, 24, 12, 9])
d = pd.DataFrame({'a':a, 'b': b, 'c': c})
Let’s plot this DataFrame ‘d’ as an area plot now,
d.plot.area(figsize=(8, 6), title='Area Plot')
You do not have to accept those default colors. Let’s change those colors and add some more style to it.
d.plot.area(alpha=0.4, color=['coral', 'purple', 'lightgreen'],figsize=(8, 6), title='Area Plot', fontsize=12)
Probably the parameter alpha is new to you.
The 'alpha' parameter adds some translucency to the plot.
It appears to be very useful at times when we have overlapping area plots or histograms or dense scatter plots.
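To see what alpha buys you, here is a small sketch (with made-up, deliberately overlapping data) of two histograms in one plot; without the translucency one series would simply hide the other:

```python
import matplotlib
matplotlib.use("Agg")  # off-screen rendering
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"a": rng.normal(0, 1, 500),
                   "b": rng.normal(1, 1, 500)})  # overlapping distributions
ax = df.plot.hist(alpha=0.4, bins=20)            # each bar keeps alpha=0.4
```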
This .plot() function can make eleven types of plots:
line, area, bar, barh, pie, box, hexbin, hist, kde, density, and scatter
I would like to show the use of all of those different plots. For that, I will use the NHANES dataset by the Centers for Disease Control and Prevention. I downloaded this dataset and kept it in the same folder as this Jupyter notebook. Please feel free to download the dataset and follow along:
Here I import the dataset:
df = pd.read_csv('nhanes_2015_2016.csv')
df.head()
This dataset has 30 columns and 5735 rows.
Before starting to make plots, it is important to check the columns of the dataset:
df.columns
output:
Index(['SEQN', 'ALQ101', 'ALQ110', 'ALQ130', 'SMQ020', 'RIAGENDR', 'RIDAGEYR', 'RIDRETH1', 'DMDCITZN', 'DMDEDUC2', 'DMDMARTL', 'DMDHHSIZ', 'WTINT2YR', 'SDMVPSU', 'SDMVSTRA', 'INDFMPIR', 'BPXSY1', 'BPXDI1', 'BPXSY2', 'BPXDI2', 'BMXWT', 'BMXHT', 'BMXBMI', 'BMXLEG', 'BMXARML', 'BMXARMC', 'BMXWAIST', 'HIQ210', 'DMDEDUC2x', 'DMDMARTLx'], dtype='object')
The names of the columns might look strange. But do not worry about that. I will keep explaining the meaning of the columns as we go. And we will not use all the columns. We will use some of them to practice these plots.
Histogram
I will use the weight of the population to make a basic histogram.
df['BMXWT'].hist()
As a reminder, a histogram shows the frequency distribution. The picture above shows that about 1825 people have a weight of around 75, and most people weigh somewhere between 49 and 99.
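If you want the numbers behind the bars rather than eyeballing them, np.histogram computes the same bin counts. A sketch with synthetic weights (the real column would be df['BMXWT'].dropna()):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
weights = pd.Series(rng.normal(80, 20, 5000))       # stand-in for df['BMXWT']
counts, edges = np.histogram(weights.dropna(), bins=10)
peak = counts.argmax()                              # index of the tallest bar
print(f"{counts[peak]} people in [{edges[peak]:.0f}, {edges[peak + 1]:.0f})")
```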
What if I want to put several histograms in one plot?
I will make three histograms in one plot using weight, height, and body mass index (BMI).
df[['BMXWT', 'BMXHT', 'BMXBMI']].plot.hist(stacked=True, bins=20, fontsize=12, figsize=(10, 8))
But if you want three different histograms that are also possible using just one line of code like this:
df[['BMXWT', 'BMXHT', 'BMXBMI']].hist(bins=20,figsize=(10, 8))
It can be even more dynamic!
We have systolic blood pressure data in the 'BPXSY1' column and the level of education in the 'DMDEDUC2' column. If we want to examine the distribution of systolic blood pressure for each education level, that can also be done in just one line of code.
But before doing that I want to replace the numeric value of the ‘DMDEDUC2’ column with more meaningful string values:
df["DMDEDUC2x"] = df.DMDEDUC2.replace({1: "less than 9", 2: "9-11", 3: "HS/GED", 4: "Some college/AA", 5: "College", 7: "Refused", 9: "Don't know"})
Make the histograms now,
df[['DMDEDUC2x', 'BPXSY1']].hist(by='DMDEDUC2x', figsize=(18, 12))
Look! We have the distribution of systolic blood pressure levels for each education level in just one line of code!
Bar Plot
Now let’s see how the systolic blood pressure changes with marital status. This time I will make a bar plot. Like before I will replace the numeric values of the ‘DMDMARTL’ column with more meaningful strings.
df["DMDMARTLx"] = df.DMDMARTL.replace({1: "Married", 2: "Widowed", 3: "Divorced", 4: "Separated", 5: "Never married", 6: "Living w/partner", 77: "Refused"})
To make the bar plot we need to preprocess the data. That is to group the data by different marital statuses and take the mean of each group. Here I do the processing of the data and plot in the same line of code.
df.groupby('DMDMARTLx')['BPXSY1'].mean().plot(kind='bar', rot=45, fontsize=10, figsize=(8, 6))
Here we used the ‘rot’ parameter to rotate the x ticks 45 degrees. Otherwise, they will be too cluttered.
If you like, you can make it horizontal as well,
df.groupby('DMDEDUC2x')['BPXSY1'].mean().plot(kind='barh', rot=45, fontsize=10, figsize=(8, 6))
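The bars really are just the group means. You can verify that on a toy frame (column and group names here are invented) by reading the heights back off the Axes patches:

```python
import matplotlib
matplotlib.use("Agg")  # off-screen rendering
import pandas as pd

df = pd.DataFrame({"status": ["Married", "Widowed", "Married", "Widowed"],
                   "bp": [120.0, 130.0, 124.0, 126.0]})
means = df.groupby("status")["bp"].mean()        # Married: 122.0, Widowed: 128.0
ax = means.plot(kind="bar", rot=45)
heights = [p.get_height() for p in ax.patches]   # bar heights match the means
print(heights)
```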
I want to make a bar plot with multiple variables. We have a column that contains the ethnic origin of the population. It will be interesting to see if people’s weight, height, and body mass index change with ethnic origin.
To plot that, we need to group those three columns (weight, height, and body mass index) by ethnic origin and take the mean.
df_bmx = df.groupby('RIDRETH1')[['BMXWT', 'BMXHT', 'BMXBMI']].mean().reset_index()
This time I did not change the ethnic origin data. I kept the numeric values as it is. Let’s make our bar plot now,
df_bmx.plot(x = 'RIDRETH1',
y=['BMXWT', 'BMXHT', 'BMXBMI'],
kind = 'bar',
color = ['lightblue', 'red', 'yellow'],
fontsize=10)
Looks like ethnic group 4 is a little higher than the rest of them. But they are all very close. No significant difference.
We can stack different parameters (weight, height, and body mass index) on top of each other as well.
df_bmx.plot(x = 'RIDRETH1',
y=['BMXWT', 'BMXHT', 'BMXBMI'],
kind = 'bar', stacked=True,
color = ['lightblue', 'red', 'yellow'],
fontsize=10)
Pie Plot
Here I want to check if marital status and education have any relation.
I need to group the marital statuses by education level and count the population of each marital status group within each education level. Sounds too wordy, right? Let's see it:
df_edu_marit = df.groupby('DMDEDUC2x')['DMDMARTL'].count()
pd.Series(df_edu_marit)
Using this Series make a pie plot very easily:
ax = pd.Series(df_edu_marit).plot.pie(subplots=True, label='',
labels = ['College Education', 'high school',
'less than high school', 'Some college',
'HS/GED', 'Unknown'],
figsize = (8, 6),
colors = ['lightgreen', 'violet', 'coral', 'skyblue', 'yellow', 'purple'], autopct = '%.2f')
Here I added a few style parameters. Please feel free to try with more style parameters.
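One subtlety in this data prep is worth knowing: .count() counts only non-null rows per group, so rows with a missing marital status silently drop out of the pie. A tiny sketch with invented data:

```python
import pandas as pd

df = pd.DataFrame({"edu": ["College", "HS", "College", "HS", "HS"],
                   "marital": ["Married", "Single", "Single", None, "Married"]})
by_group = df.groupby("edu")["marital"].count()   # counts non-null rows only
print(by_group.to_dict())                         # {'College': 2, 'HS': 2}
print(df["edu"].value_counts()["HS"])             # 3 -- value_counts keeps the NaN row
```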
Boxplot
For example, I will make a box plot using body mass index, leg, and arm length data.
color = {'boxes': 'DarkBlue', 'whiskers': 'coral',
'medians': 'Black', 'caps': 'Green'}
df[['BMXBMI', 'BMXLEG', 'BMXARML']].plot.box(figsize=(8, 6),color=color)
Scatter Plot
For a simple scatter plot I want to see if there is any relationship between body mass index(‘BMXBMI’) and systolic blood pressure(‘BPXSY1’).
df.head(300).plot(x='BMXBMI', y= 'BPXSY1', kind = 'scatter')
This was so simple! I used only 300 rows of data because if I use all the data, the scatter plot becomes too dense to understand. Though you could use the alpha parameter to make it translucent, I preferred to keep it light for this tutorial.
Now, let’s check a little advanced scatter plot with the same one line of code.
This time I will add some color shades. I will make a scatter plot, putting weight on the x-axis and height on the y-axis. There is a little twist!
I will also add the length of the leg. But the length of the legs will show in shades. If the length of the leg is longer the shade will be darker else the shade will be lighter.
df.head(500).plot.scatter(x= 'BMXWT', y = 'BMXHT', c ='BMXLEG', s=50, figsize=(8, 6))
It shows the relationship between weight and height. You can see if there is any relationship between the length of the legs with height and weight as well.
Another way of adding a third parameter like that is to add size in the particles. Here, I am putting the height in the x-axis, weight in the y-axis, and body mass index as an indicator of the bubble size.
df.head(200).plot.scatter(x= 'BMXHT', y = 'BMXWT',
s =df['BMXBMI'][:200] * 7,
alpha=0.5, color='purple',
figsize=(8, 6))
Here the smaller dots means lower BMI and the bigger dots mean the higher BMI.
Hexbin
Another beautiful type of visualization where the dots are hexagonal. When the data is too dense, it is useful to put it in bins. As you can see, in the previous two plots I used only 500 and 200 rows of data, because plotting the whole dataset would make the chart too dense to understand or draw any information from.
In this case, using spatial distribution can be very useful. I am using hexbin where data will be represented in hexagons. Each hexagon is a bin representing the density of that bin. Here is an example of the most basic hexbin.
df.plot.hexbin(x='BMXARMC', y='BMXLEG', gridsize= 20)
Here the darker color represents the higher density of data and the lighter color represents the lower density of data.
Does it sound like a histogram? Yes, right? Instead of bars, it is represented by colors.
If we add an extra parameter ‘C’, the distribution changes. It will not be like a histogram anymore.
The parameter 'C' specifies the value at each (x, y) coordinate; those values are accumulated for each hexagonal bin and then reduced using reduce_C_function. If reduce_C_function is not specified, it uses np.mean by default. You can specify it the way you want: np.mean, np.max, np.sum, np.std, etc.
See the pandas documentation on hexbin for more information.
Here is an example:
df.plot.hexbin(x='BMXARMC', y='BMXLEG', C = 'BMXHT',
reduce_C_function=np.max,
gridsize=15,
figsize=(8,6))
Here the darker color of a hexagon means, np.max has a higher value for the population height(‘BMXHT’) for that bin as you can see that I used np.max as a reduce_C_function. You can use a colormap instead of shades of color:
df.plot.hexbin(x='BMXARMC', y='BMXLEG', C = 'BMXHT',
reduce_C_function=np.max,
gridsize=15,
figsize=(8,6),
cmap = 'viridis')
Looks pretty, right? And also very informative.
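You can also peek under the hood to confirm what the reducer does. plt.hexbin, which pandas calls behind the scenes, returns the PolyCollection whose array holds one reduced value per non-empty bin, so with np.max the largest bin value must equal the overall maximum of C. The data here is synthetic:

```python
import matplotlib
matplotlib.use("Agg")  # off-screen rendering
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=300), rng.normal(size=300)
c = rng.uniform(100, 200, 300)
pc = plt.hexbin(x, y, C=c, reduce_C_function=np.max, gridsize=5)
bin_values = pc.get_array()   # one np.max per non-empty hexagon
# the point with the global max of C lands in some bin, so the maxima agree
print(float(bin_values.max()) == float(c.max()))
```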
Some Advanced Visualization
I explained some basic plotting above that people use in everyday life when they deal with data. But data scientists need some more. Pandas library has some more advanced visualization as well. That can provide a lot more information in one line of code.
Scatter_matrix
Scatter_matrix is very useful. It provides a huge amount of information packed in one plot. It can be used for general data analysis or feature engineering on machine learning. Let’s see an example first. I will explain after that.
from pandas.plotting import scatter_matrix scatter_matrix(df[['BMXWT', 'BMXHT', 'BMXBMI', 'BMXLEG', 'BMXARML']], alpha = 0.2, figsize=(10, 8), diagonal = 'kde')
Look at that! I used five features here and got the relationship of all five variables with each other. On the diagonal, it gives you the density plot of each individual feature. I discuss density plots more in the next example.
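Under the hood, scatter_matrix returns an n-by-n grid of Axes, one panel per variable pair. A quick sketch with random data (using 'hist' on the diagonal here so the example doesn't need SciPy):

```python
import matplotlib
matplotlib.use("Agg")  # off-screen rendering
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix

rng = np.random.default_rng(3)
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=["wt", "ht", "bmi"])
axes = scatter_matrix(df, alpha=0.2, diagonal="hist", figsize=(6, 6))
print(axes.shape)  # 3 columns give a 3x3 grid; 5 features would give 5x5
```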
KDE or density plots
KDE plots or Kernel Density Plots are built to provide the probability distribution of a series or a column in a DataFrame. Let’s see the probability distribution of the weight variable (‘BMXWT’).
df['BMXWT'].plot.kde()
You can visualize several probability-distribution in one plot. Here I am making a probability distribution of height, weight, and BMI in the same plot:
df[['BMXWT', 'BMXHT', 'BMXBMI']].plot.kde(figsize = (8, 6))
You can use other style parameters we described before as well. I like to keep it simple.
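If you are curious what .plot.kde() actually computes: it averages a Gaussian bump centred on every observation. A dependency-free sketch follows; note that the bandwidth rule here is a rough Silverman-style guess, not necessarily the exact one pandas uses:

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(80, 15, 500)                   # stand-in for a weight column
h = 1.06 * data.std() * len(data) ** (-1 / 5)    # rough Silverman bandwidth
grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, 400)
# one Gaussian kernel per observation, averaged into a single density curve
kernels = np.exp(-0.5 * ((grid[:, None] - data[None, :]) / h) ** 2)
density = kernels.sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))
area = np.trapz(density, grid)
print(round(area, 2))  # ~1.0: it really is a probability density
```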
Parallel_coordinates
This is a good way of showing multi-dimensional data. It clearly shows the clusters if there is any. For example, I want to see if there is any difference in height, weight, and BMI between men and women. Let's check.
from pandas.plotting import parallel_coordinates parallel_coordinates(df[['BMXWT', 'BMXHT', 'BMXBMI', 'RIAGENDR']].dropna().head(200), 'RIAGENDR', color=['blue', 'violet'])
You can see the clear difference in body weight, height, and BMI between men and women. Here, 1 is men and 2 is women.
Bootstrap_plot
This is a very important plot for research and statistical analysis, and it can save a lot of time. The bootstrap plot is used to assess the uncertainty of a given dataset.
This function takes a random sample of a specified size. Then the mean, median, and midrange are calculated for that sample. The process is repeated a specified number of times.
Here I am creating a bootstrap plot using the BMI data:
from pandas.plotting import bootstrap_plot bootstrap_plot(df['BMXBMI'], size=100, samples=1000, color='skyblue')
Here, the sample size is 100 and the number of samples is 1000. So, it took a random sample of 100 data points to calculate the mean, median, and midrange, and repeated the process 1000 times.
This is an extremely important process and a time saver for statisticians and researchers.
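The same procedure is easy to reproduce by hand, which also makes clear what the plot summarizes. A sketch with synthetic BMI-like data standing in for df['BMXBMI']:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(27, 5, 5000)          # stand-in for df['BMXBMI'].dropna()
size, samples = 100, 1000
means, medians, midranges = [], [], []
for _ in range(samples):
    s = rng.choice(data, size=size, replace=True)   # resample with replacement
    means.append(s.mean())
    medians.append(np.median(s))
    midranges.append((s.min() + s.max()) / 2)
# the spread of these 1000 statistics is the uncertainty the plot visualizes
print(f"bootstrap mean ~ {np.mean(means):.1f}, its spread ~ {np.std(means):.2f}")
```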
Conclusion
I wanted to make a cheat sheet for data visualization in pandas. If you use Matplotlib and Seaborn directly, there are many more options and types of visualization. But these are the basic visualizations we reach for every day when dealing with data, and using pandas for them makes your code much simpler and saves a lot of lines of code.
Feel free to follow me on Twitter and like my Facebook page.
More Reading: | https://towardsdatascience.com/an-ultimate-cheat-sheet-for-data-visualization-in-pandas-4010e1b16b5c | ['Rashida Nasrin Sucky'] | 2020-11-13 14:49:54.337000+00:00 | ['Artificial Intelligence', 'Data Science', 'Python', 'Data Visualization', 'Programming'] |
colorskyblue sample size 100 number sample 1000 took random sample 100 data calculate mean median midrange process repeated 1000 time extremely important process time saver statistician researcher Conclusion wanted make cheat sheet data visualization Pandas Though use matplotlib seaborn lot option type visualization use basic type visualization everyday life deal data Using panda visualization make code much simpler save lot line code Feel free follow Twitter like Facebook page ReadingTags Artificial Intelligence Data Science Python Data Visualization Programming |
1,129 | Why React Hooks Are the Wrong Abstraction | Hooks Problem #1: Attached During Render
As a general rule of design, I’ve found that we should always first try to disallow our users from making mistakes. Only if we’re unable to prevent the user from making a mistake should we then inform them of the mistake after they’ve made it.
For example, when allowing a user to enter a quantity in an input field, we could allow them to enter alphanumeric characters and then show them an error message if we find an alphabetic character in their input. However, we could provide better UX if we only allowed them to enter numeric characters in the field, which would eliminate the need to check whether they have included alphabetic characters.
React behaves quite similarly. If we think about Hooks conceptually, they are static through the lifetime of a component. By this, I mean that once declared, we cannot remove them from a component or change their position in relation to other Hooks. React uses lint rules and will throw errors to try to prevent developers from violating this detail of Hooks.
In this sense, React allows the developer to make mistakes and then tries to warn them of those mistakes afterward. To see what I mean, consider the following example:
This produces an error on the second render when the counter is incremented because the component will remove the second useState hook:
Error: Rendered fewer hooks than expected. This may be caused by an accidental early return statement.
The placement of our Hooks during a component’s first render determines where the Hooks must be found by React on every subsequent render.
Given that Hooks are static through the lifetime of a component, wouldn’t it make more sense for us to declare them on component construction as opposed to during the render phase? If we attach Hooks during the construction of a component, we no longer need to worry about enforcing the rules of Hooks because a hook would never be given another chance to change positions or be removed during the lifetime of a component.
Unfortunately, function components were given no concept of a constructor, but let’s pretend that they were. I imagine that it would look something like the following:
By attaching our Hooks to the component in a constructor, we wouldn’t have to worry about them shifting during re-renders.
If you’re thinking, “You can’t just move Hooks to a constructor. They need to run on every render to grab the latest value” at this point, then you’re totally correct!
We can’t just move Hooks out of the render function because we will break them. That’s why we’ll have to replace them with something else. But first, the second major problem of Hooks. | https://medium.com/better-programming/why-react-hooks-are-the-wrong-abstraction-8a44437747c1 | ['Austin Malerba'] | 2020-12-14 18:42:43.966000+00:00 | ['Programming', 'React Hook', 'JavaScript', 'React', 'Software Engineering'] |
1,130 | Why We Find Patterns in Randomness | Why We Find Patterns in Randomness
It helps us survive, and we naturally evolved as humans to do so
From Free Nature Stock on Pexels
I see patterns in randomness all the time, even if sometimes, a cigar is just a cigar, not a phallic symbol. For me, everything going wrong on a given morning is a sign of God wanting me to be challenged that morning. The run that went poorly was a sign of God’s plan that running wasn’t for me that day. The fact that I’m not motivated to do my work means God wants me to do something else at a given moment.
The gambler’s paradox is the phenomenon of someone who places bets and looks for patterns to take advantage of. However, those patterns tend not to actually be there, so according to American physicist Richard A. Muller, “the roulette wheel really is random — at least at an honest casino.” Each spin of the roulette wheel is inherently random, and in the world of gambling, streaks don’t empirically exist. The gambler’s paradox is also called the Monte Carlo fallacy and the fallacy of the maturity of chances, and is widely cited as a reason why people gamble as much as they do.
There’s actually a psychological term for the tendency to ascribe patterns to randomness — apophenia. Psychiatrist Klaus Conrad defines apophenia as the “unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness.” Within the sphere of apophenia is pareidolia, which is a perception of images and sounds in random stimuli — like seeing a face in an inanimate object, or seeing the face of Jesus on tree stumps.
Dr. Po Chi Wu in Psychology Today asks, then, whether life is just a series of random events. He talks about various levels of consciousness. Our tendency towards apophenia doesn't mean everything is random, as much as I might believe. But does a lack of randomness mean there are patterns? And who created those patterns if they exist? Are those patterns comprehensible for humans?
For me, my faith dictates that I will never know God’s plan or ways. Job acknowledges in Job 42:2 the comprehensibility of God:
“I know that you can do all things
and that no purpose of yours can be thwarted.”
Vox made a YouTube video asking why we so often find faces in purses, or why we experience pareidolia. We are fascinated by faces of Jesus in tree trunks. Pareidolia, in Greek, means “beyond the image.” It became considered a form of psychosis after psychologists started testing pareidolia in the Rorschach Test.
But why are our brains constantly trying to make patterns in things where there intrinsically aren’t patterns? For example, why do we constantly see patterns in the stars? The stars did not align to look like a dipper or a bear, so why did we give them those names and try to make sense of the universe’s nonsense?
A 2009 study from Hadjikhani et al. in NeuroReport found that the fusiform face area became activated when people saw faces in non-face objects. Using magnetoencephalography (MEG), the researchers found that people who perceived faces in objects had early activation in the fusiform face area, faster than perceptions of common objects, and only slower than the perception of actual faces:
“Our findings suggest that face perception evoked by face-like objects is a relatively early process, and not a late re-interpretation cognitive phenomenon,” Hadjikhani et al. said.
Wu asks the grand question at the end of the day: how much of our future can we actually influence? I believe that my success, when I have it, is a series of lucky coincidences. I work hard. I’m passionate. But a million things had to break my way, like growing up with both parents, growing up in America, a roof above my head, and food on my plate.
He talks a lot about entrepreneurs, who often talk about success as being in the right place at the right time. He says that they tend to have a “single-minded focus” for good timing, and as a result, interpret random events as meaningful. In the state of single-mindedness, is there a liberation in not overthinking things and just doing — is there a benefit to ascribing so much meaning to randomness. | https://medium.com/publishous/why-we-find-patterns-in-randomness-d4913e85814d | ['Ryan Fan'] | 2020-09-22 15:18:53.871000+00:00 | ['Neuroscience', 'Life Lessons', 'Psychology', 'Spirituality', 'Philosophy'] |
1,131 | Video Streaming Using Flask and OpenCV | Stream video using OpenCV and Flask. (Image Source)
My name is Anmol Behl and I am a member of team Bits-N-Bytes. We are a group of three team members pursuing B.Tech in Computer Science and Engineering from KIET Group of Institutions, Ghaziabad. This article explains how to stream video using Flask and OpenCV, taking face detection as an example.
Machine learning is becoming a highly popular field. With rapid development and growing everyday demand, there is an increasing need to deploy machine learning models, but deploying them directly on mobile devices is difficult or impossible. One option is to use mobile machine learning frameworks such as TensorFlow Lite to call pre-trained models.
Are there any easier options? Yes! With 5G approaching, it will take only about 0.01 seconds to upload a 100KB image at a speed of about 100Mbps, so we can deploy almost everything, including face recognition, as a service on the server side. Taking face detection as an example, this article will demonstrate how to stream video from a Linux server using Python Flask and OpenCV.
Let’s Begin…
STEP 1: Creating and activating Environment
We will create a virtual environment for the project.
The virtualenv package is required to create virtual environments. You can install it with pip:
$ pip install virtualenv
To create a virtual environment, you must specify a path. We are creating our environment’s local directory called ‘Videorecognition’ in the home folder; to create the folder, type the following:
$ virtualenv Videorecognition
To activate the environment execute the following command:
$ source Videorecognition/bin/activate
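If the virtualenv package is unavailable, the same environment can be created with Python’s built-in venv module. A minimal sketch (directory name taken from above; stdlib only):

```python
# Create the same virtual environment with the stdlib venv module
# instead of the third-party virtualenv package.
import venv

# with_pip=False keeps creation quick; set with_pip=True to also bootstrap pip
venv.create("Videorecognition", with_pip=False)
```

Activation afterwards works exactly as shown above with `source Videorecognition/bin/activate`.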
STEP 2: Installing Flask and OpenCV
First, we need to refresh/upgrade the pre-installed packages/libraries with the apt-get package manager:
$ sudo apt-get update
$ sudo apt-get upgrade
Now we will execute the following commands to install OpenCV and Flask:
$ sudo apt install python3-opencv
$ pip install Flask
STEP 3: Creating Project Structure
Now that all the pre-requisites are installed let’s set up our project:
├── VideoStreaming/
│ ├── camera.py
│ ├── main.py
│ ├── haarcascade_frontalface_alt2.xml
│ ├── templates/
│ │ ├── index.html
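The layout above can also be scaffolded programmatically. A small sketch (file names taken from the tree; the Haar cascade XML itself still has to be downloaded separately):

```python
import os

# Recreate the project tree shown above with empty placeholder files
base = "VideoStreaming"
os.makedirs(os.path.join(base, "templates"), exist_ok=True)
for name in ("camera.py", "main.py", os.path.join("templates", "index.html")):
    # create the file if it does not already exist
    open(os.path.join(base, name), "a").close()
```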
STEP 4: Detecting faces using OpenCV
Now that we have created our project structure, we will detect faces using OpenCV. We will use the most basic and easiest way to detect faces, i.e., Haar cascades.
# camera.py
# import the necessary packages
import cv2

# defining face detector
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")
ds_factor = 0.6

class VideoCamera(object):
    def __init__(self):
        # capturing video
        self.video = cv2.VideoCapture(0)

    def __del__(self):
        # releasing camera
        self.video.release()

    def get_frame(self):
        # extracting frames
        ret, frame = self.video.read()
        frame = cv2.resize(frame, None, fx=ds_factor, fy=ds_factor,
                           interpolation=cv2.INTER_AREA)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        face_rects = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in face_rects:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            break  # draw only the first detected face
        # encode the OpenCV raw frame to jpg and return it
        ret, jpeg = cv2.imencode('.jpg', frame)
        return jpeg.tobytes()
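In get_frame, cv2.resize is called with fx=fy=ds_factor, so each frame is scaled to 60% in both dimensions before detection, which speeds up detectMultiScale at some cost in accuracy. The arithmetic, for an assumed 640×480 webcam frame:

```python
# Effect of ds_factor on frame dimensions (640x480 is an assumed webcam size)
ds_factor = 0.6
width, height = 640, 480
scaled = (int(width * ds_factor), int(height * ds_factor))
print(scaled)  # (384, 288)
```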
STEP 5: Creating a webpage for displaying video
Now as we will create a webpage to display our video.
<!-- index.html -->
<html>
<head>
<title>Video Streaming Demonstration</title>
</head>
<body>
<h1>Video Streaming Demonstration</h1>
<img id="bg" src="{{ url_for('video_feed') }}">
</body>
</html>
STEP 6: Creating Streaming Server
Now as we have detected faces using haarcascade and created a webpage to display video, we will integrate these two modules with our server.
# main.py
# import the necessary packages
from flask import Flask, render_template, Response
from camera import VideoCamera

app = Flask(__name__)

@app.route('/')
def index():
    # rendering webpage
    return render_template('index.html')

def gen(camera):
    while True:
        # get camera frame
        frame = camera.get_frame()
        yield (b'--frame\r
'
               b'Content-Type: image/jpeg\r
\r
' + frame + b'\r
\r
')

@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    # defining the server ip address and port
    app.run(host='0.0.0.0', port=5000, debug=True)
STEP 7: Starting and Accessing the Server
Open a terminal window and change into the project directory by executing the following commands:
$ cd Videorecognition
$ cd VideoStreaming
To start the server execute the following command:
python main.py
In order to access the server, open a browser and navigate to the server URL (port 5000, as configured above):
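With the app.run settings shown earlier (host 0.0.0.0, port 5000), the page is reachable on port 5000 of the machine running Flask. A small sketch building the URLs (localhost is an assumption for a local run; substitute the server's IP otherwise):

```python
from urllib.parse import urlunsplit

host, port = 'localhost', 5000  # assumed local run; port matches app.run above
index_url = urlunsplit(('http', f'{host}:{port}', '/', '', ''))
stream_url = urlunsplit(('http', f'{host}:{port}', '/video_feed', '', ''))
print(index_url)   # http://localhost:5000/
print(stream_url)  # http://localhost:5000/video_feed
```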
Conclusion
I hope this tutorial can help others find their way of beginning with Flask and OpenCV!
For details and the final code, please visit my GitHub repository: Video-Streaming-with-Flask
See you in my next tutorial!
Thank you,
Anmol | https://medium.com/datadriveninvestor/video-streaming-using-flask-and-opencv-c464bf8473d6 | ['Anmol Behl'] | 2020-08-31 16:42:14.333000+00:00 | ['Machine Learning', 'Flask', 'Computer Vision', 'Opencv', 'Python'] |
1,132 | From Utopia to Reality: Marketing and the Big Data Revolution | Perfect information. First Degree Price Discrimination. You might know these terms from high school economics defining utopian conditions in the marketplace. One involves having instantaneous knowledge of all market prices, utilities, and cost functions. The other involves selling each customer a good as per his individual willingness to pay.
Read More | https://medium.com/data-analytics-and-ai/from-utopia-to-reality-marketing-and-the-big-data-revolution-7e1b6514a060 | ['Ella William'] | 2019-06-15 13:20:59.219000+00:00 | ['Big Data', 'Marketing', 'Analytics', 'Data Visualization', 'Data Science'] |
1,133 | The Mind-Body Connection Is Stronger Than You Think | ‘Biofeedback’ Could Ease Headaches, Anxiety — and Maybe a Lot Else
By providing a window onto the body’s inner workings, biofeedback could help people control what was once thought to be ungovernable
The man’s boasts were outlandish. Wim Hof, a 51-year-old Dutch endurance athlete, claimed that he could voluntarily control his own immune system — ramping its activity up or down at will. Moreover, he said that he could teach this skill to others.
Hof’s assertions might never have been put to the test but for the fact that he held several remarkable world records — including one for the longest time spent submerged neck-deep in an ice bath, and another for the fastest half marathon run barefoot on snow. His accomplishments earned him the nickname “the Iceman” and generated enough public interest — at least in the Netherlands — that scientists decided to investigate his claims.
For a 2012 study that appeared in the journal Psychosomatic Medicine, a team of Dutch researchers injected Hof with a toxin from E. coli bacteria. In past experiments, the toxin had reliably caused nausea and other symptoms and had also produced steep elevations in blood biomarkers of inflammation. After receiving the injection, Hof reported almost no symptoms, and his blood samples revealed “a remarkably mild inflammatory response,” the study authors reported.
In 2014, a follow-up study repeated the experiment using a group of 12 people who had trained using Hof’s methods. Blood samples indicated that, like Hof, they were able to exert some control over their body’s inflammatory response, which is a component of the immune system usually thought to be wholly involuntary and that, when overactive, can cause or contribute to a wide range of health problems.
“Hof had effectively found the off-switch for his immune system,” says Scott Carney, an investigative journalist who details his own experience with Hof’s techniques in the 2017 bestseller What Doesn’t Kill Us. “The [Hof] studies showed that something that should be impossible to do was possible.”
What does Hof’s method entail? Carney says it combines meditation and breathing exercises with cold exposures, such as a frigid shower. During these cold-exposure intervals, which Carney duly undertook, he would concentrate on speeding up his own metabolism in order to generate body heat and stop himself from shaking or shivering. Through this and related exercises, “you start to sort of gain control of internal bodily processes that we don’t generally try to access,” he says. “I’ve seen people use this for remission of Crohn’s, arthritis — just crazy things.”
It’s possible that biofeedback, by giving people a real-time look at the internal workings of their nervous system, can facilitate these self-calming practices.
While all of this may sound wild and far-fetched, some of Hof’s practices dovetail with a long-studied form of therapy known as biofeedback. “As the word suggests, biofeedback involves feeding back to the patient their own bio-signals, such as blood pressure or heart rate variability or muscle tension — anything that the body is displaying as a result of activity of the autonomic nervous system,” says Stefan Hofmann, PhD, a professor of psychology at Boston University. (Hofmann is not related to Hof.)
The autonomic nervous system (ANS) is so named because its processes are largely automatic — meaning involuntary and unconscious. The ANS plays a role in breathing, heart rate, digestion, thermoregulation, and a lot else. While Hof’s unconventional methods use the body’s response to cold as their source of biofeedback, this therapy has traditionally used specialized sensors to produce real-time measures of a person’s heart rate variability or other ANS-generated internal signals.
“By feeding these signals back to people, they can, to some extent, gain control over them,” Hofmann explains.
The research on biofeedback
Researchers have been exploring biofeedback since the 1960s. Some of the strongest work in support of its therapeutic power has examined its effect among people with headaches.
A 2019 review from the U.S. Department of Veterans Affairs found “high-confidence evidence that biofeedback is effective for reducing the frequency, duration, and intensity of migraine and tension-type headaches.”
While biofeedback techniques vary, many headache studies have involved hooking people up to electromyogram (EMG) sensors that measure electrical activity in the skin and muscles, which ebbs and flows in response to headache pain. This EMG data is “fed back” to a person as sounds, images, or both. For example, as a person’s headache worsens, a computer connected to the EMG may display a colored circle that contracts or grows red. “You might focus on widening the circle or changing its color,” Hofmann says. By doing this, people often find that they’re able to turn down their headache’s intensity.
Apart from its role in headache management, EMG biofeedback has helped people recover muscle function and mobility following a stroke. The technique can also treat incontinence, blood flow problems, and — as the Hof research suggests — maybe even inflammation and symptoms of some autoimmune disorders. A 2019 study from researchers at UCLA found that virtual reality–based biofeedback — using VR headsets to show people visual representations of their own breathing patterns — helped to reduce pain among people with rheumatoid arthritis and lupus. Researchers have also found that biofeedback, perhaps by increasing activity in the vagus nerve, may have some anti-inflammatory effects.
More recently, scientists have started looking at biofeedback as a treatment for anxiety disorders. A 2018 study in the journal Stress found that biofeedback lowered symptoms of burnout and improved math performance among a group of college students. Boston University’s Hofmann co-authored a 2017 research review that found biofeedback based on heart rate variability — a measure of the time between heart beats — could produce a large drop in self-reported stress and anxiety.
“There has been a long-standing view among behavioral scientists that if you can feel and perceive something, then you may be able to control it.”
How does biofeedback do all this? “That’s still a bit of a mystery,” Hofmann says. But there are theories. He explains that with some concentration and practice, people have the ability to regulate their own breathing or muscle tension, which can have a calming effect on heart rate, blood pressure, and other elements of the autonomic nervous system that are associated with unhealthy states of arousal. It’s possible that biofeedback, by giving people a real-time look at the internal workings of their nervous system, can facilitate these self-calming practices.
“There has been a long-standing view among behavioral scientists that if you can feel and perceive something, then you may be able to control it,” adds Hugo Critchley, MD, PhD, chair of psychiatry at the University of Sussex in the U.K.
One of Critchley’s primary areas of research concerns the ways in which the mind, brain, and body interact during states of arousal. He points out that, somewhat surprisingly, research on experienced Buddhist meditators has found that they are no better than non-meditators when it comes to detecting their own heartbeat.
By giving people a sharper view of their heartbeat and other internal states, biofeedback may help people take the wheel of processes or functions that were long presumed to be ungovernable, Critchley says.
Bringing biofeedback out of the lab
In the past, technologies capable of providing people with accurate, real-time biofeedback tended to be expensive and cumbersome. But that’s changing. “There’s a revolution going on with high-tech companies measuring bio-signals, and biofeedback will have a role in that,” Hofmann says.
Some companies are planning to roll out VR headset programs that work with the Apple Watch and other wearable heart rate monitors to provide helpful biofeedback measures. And already there are a number of commercial wearables that are capable of measuring heart rate variability. (This is different from a simple heart rate monitor, which Hofmann says is not a helpful biofeedback output.)
A lot more research is needed to clarify biofeedback’s therapeutic uses and mechanisms of action. It could turn out that coupling biofeedback with meditation or muscle-relaxation techniques could enhance these practices’ well-studied health benefits. It’s also possible — though far from certain — that biofeedback could help people turn down harmful inflammation or other arousal-linked internal states.
“If you’re able to be more aware of what’s going on in your body, that can give you better control of arousal,” Critchley says. “And a lot of arousal is inappropriate.”

Source: https://elemental.medium.com/biofeedback-could-ease-headaches-anxiety-and-maybe-a-lot-else-630f3c51f99f (Markham Heid, 2020-09-17). Tags: Health, Biology, Anxiety, The Nuance, Science
Python and Bokeh: Part II

Photo by Ronit Shaked on Unsplash
The beginner’s guide to creating interactive dashboards: Bokeh server and applications.
This is the second part of our tutorial series on the Bokeh visualization library. In this part of the series, we will explore Bokeh applications and how to serve them with the Bokeh server.
Please refer to the first part of this series to cover the basics of building scatter, line, and bar plots, and to learn how to apply styling and embed plots in web pages.
All the examples in Part I of the series displayed simple embedded visualizations with only the basic interactivity. However, we often need to create full-fledged applications to properly visualize the data, as real-world visualizations usually require not only interactions with a user via buttons, dropdown menus, and other elements, but also dynamic data updates.
Bokeh supports this: with the built-in Bokeh server, plots can be not only embedded but also combined into large and elaborate web applications. The server handles data and plot updates between the backend server in Python and the client-side BokehJS library, which is responsible for the actual drawing in a browser.
Anatomy of a Bokeh application
Bokeh defines a set of abstractions for storing and transporting objects between backend and frontend. The main and most important one is the document. A Bokeh document is a container, which incorporates all the elements, including plots, widgets and interactions.
Each document contains one or more models. In the context of Bokeh, models are plots, axes, tools, or any other visual or non-visual elements that are drawn, or used to draw something else, on the user’s screen. The Bokeh backend serializes documents to JSON, which is sent to and displayed by the BokehJS client.
In turn, a Bokeh application is an entity that creates and populates documents on the backend, with all the configuration necessary to properly transfer data between the server and the client. Although Bokeh applications are written in Python and handled by the server, it is also possible to add custom JavaScript functionality on the client side, with JavaScript callbacks or otherwise. This is a very powerful tool, but we will use it only occasionally, and mostly for styling. The BokehJS client can also handle user actions.
Without further ado let’s proceed to actual coding. We will start by exploring the main building blocks of Bokeh apps and leverage that knowledge to create a truly interactive dashboard for real-world data visualizations with all the interactivity and data facilities we may need.
Running Bokeh server
Let’s create a simple application, which plots some random data. The most basic way to do so is to use a single Python script, which takes an empty document and fills it with models:
Bokeh exposes the curdoc() function, which provides a handle to the current default document. Although you can create and populate documents manually, with all the flexibility possible, curdoc() is the most straightforward way to get a document and start working with it, without compromising on flexibility too much.
After that, we can use figure() to create a model, which is a Figure instance in this case:
Although we already specified figure size, this model is still empty and is not attached to the document. Let’s plot some random values on a newly created figure and attach it to the document we have:
Each document has one or more root elements. Root elements are the direct children of a document and, as we will see later, they can be directly referenced elsewhere.
As simple as it looks, we have our first Bokeh application. Just a final touch: we will define a page title to be displayed in the browser tab with bokeh_doc.title = "Sample Bokeh App" and we're ready to pass it to Bokeh server:
As you might have already figured out, the bokeh command will run a server and manage backend-related tasks, like downloading data sets. The serve subcommand launches the server, while the --show option indicates which app should be opened in a browser window.
The complete code for this application:
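The original gist is not reproduced here, so below is a minimal sketch of the script assembled from the steps above (names follow the text; the figure title and point count are arbitrary choices, and plot sizing is omitted for brevity). Save it as, e.g., main.py and launch it with bokeh serve --show main.py:

```python
import numpy as np

from bokeh.io import curdoc
from bokeh.plotting import figure

# Handle to the current default document.
bokeh_doc = curdoc()

# An empty figure model, not yet attached to the document.
sample_plot = figure(title="Random data")

# Plot some random values on the figure...
x = np.random.random(50)
y = np.random.random(50)
sample_plot.circle(x, y)

# ...and attach the figure to the document as a root element.
bokeh_doc.add_root(sample_plot)

# Page title shown in the browser tab.
bokeh_doc.title = "Sample Bokeh App"
```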
Basic Bokeh application
Under the hood, Bokeh server performed a lot of tasks: it added a figure model to the document, serialized the model and sent it to the client browser session. All of that happened without our intervention at all.
Widgets and callbacks
Now we have a working app which displays a static scatter plot. Apparently, we have covered static plots in Part I, so what’s the difference? It’s simple: Bokeh server provides a rich functionality to make plots and other document elements actually dynamic and interactive. This is achieved by a system of callbacks and attribute updates, which are handled by Bokeh server.
For example, to update a plot with new data, we only need to add the data to the corresponding data source. There is no need to send updates to the client session: Bokeh will perform this for us.
Callbacks and various data source update mechanisms are the main building blocks of Bokeh interactivity, so let’s explore them.
Periodic callbacks
The main and probably most common type of callback used in dynamic dashboards is the periodic callback. After being registered with a document, it is fired by the Bokeh server at a specified time interval.
Periodic callbacks are most commonly used to fetch new data and update plots. Although it is better to perform long-running I/O operations outside the main thread, we will not bother with that in this tutorial, as it is a general Python topic, not specific to Bokeh.
To illustrate, how periodic callbacks are used, let’s create a simple application with an empty figure in it:
To add data to the figure, we use a periodic callback, which draws random numbers at each run:
The method bokeh_doc.add_periodic_callback notifies the Bokeh server that the add_circles function must be fired every second, i.e., once per 1000 milliseconds. Note that each callback run will add a new renderer to the plot, so after a while we will have a bunch of independent glyphs in our app. While this is fine for the sake of this demonstration, more efficient scenarios will require ColumnDataSource functionality (to be introduced later in this tutorial).
As previously, we launch the app:
If everything works as expected, the app will add a circle to the plot every second. The complete code for this app is as follows:
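A sketch of the full script, reconstructed from the steps above (the figure title is an assumption; everything else follows the text):

```python
import numpy as np

from bokeh.io import curdoc
from bokeh.plotting import figure

bokeh_doc = curdoc()
sample_plot = figure(title="Random circles")

def add_circles():
    # Draw one random point per invocation. Note that every call adds a
    # new renderer to the figure (fine for a demo, wasteful in general).
    x, y = np.random.random(2)
    sample_plot.circle([x], [y])

bokeh_doc.add_root(sample_plot)
# Fire add_circles once per 1000 milliseconds.
bokeh_doc.add_periodic_callback(add_circles, 1000)
bokeh_doc.title = "Sample Bokeh App"
```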
Application with a periodic callback
Widgets and attributes callbacks
Periodic callbacks help to make Bokeh applications dynamic, but we also want to make our plots responsive to user interactions, like clicks and selections. The typical mechanism of interaction is when the user engages with some element or visual component, like a Button, Dropdown menu, Slider, etc., to change some aspect of plotting.
For example, a user may select a category from a dropdown menu to filter the data for display, based on the selected category. Or, we may want to turn on or off periodic callback on a button click (yes, these callbacks may be added or removed dynamically). Both types of interactions basically work the same.
User interaction typically causes changes in the attributes of a model or models. For example, a slider’s value is bound to its value attribute, and as the user interacts with the slider, the attribute changes accordingly.
Bokeh allows us to bind a callback to these attribute changes; the way to register such a callback is through the on_change method, which is exposed by various models. The callback function must have a specific signature, as you will see in a moment. This mechanism is extremely powerful and enables custom handling of virtually any change in a Bokeh application.
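To make the required signature concrete, here is a tiny illustration (the Slider and its values are just an example): the callback receives the attribute name, the old value, and the new value, and a property change made from Python triggers it immediately.

```python
from bokeh.models import Slider

slider = Slider(start=0, end=10, value=5, step=1, title="Demo")

changes = []

def value_changed(attr, old, new):
    # Generic signature: attribute name, previous value, new value.
    changes.append((attr, old, new))

slider.on_change("value", value_changed)

# A programmatic change fires the callback just like a user interaction would.
slider.value = 7
```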
We will extend our app from previous section:
Bokeh provides predefined widgets, like the Button, which can be used immediately in an application:
We use a predefined style called success for a green color button (very similar to the Bootstrap CSS framework) with Generate label on it.
So far the app does nothing: no data is plotted to the sample_plot , the button is not responsive to a click, and it is not even added to the document.
Buttons in Bokeh expose a simpler callback mechanism to handle clicks: the callback function should take no arguments and is registered with the on_click method. You can, of course, use any Python functional tool (such as functools.partial) to create such a callback from a generic function with an arbitrary signature.
Now we need to add the plot and the button to the document. We will use a basic column layout for this, with only a minor change: we will wrap the button in an element called widgetbox, which is responsible for the proper placement and padding of widgets (check how it looks without it). Note that widgetbox was removed in later Bokeh releases, where a plain column or row does the job.
So far we have not created any attribute callbacks, but we will shortly.
When any of the high-level plotting methods like circle , vbar are applied on a Figure instance, those methods add an additional renderer to that figure (look into the renderers attribute of a Figure instance). As you already probably figured out, we can attach a callback to any such change. To illustrate this, let's create a very basic callback with the correct signature:
Note the signature: it’s generic, so that the same callback function may be bound to different attributes: the attribute name itself is the input parameter to the function.
Now, on each button click, we will see "attribute 'renderers' changed" in the terminal. As simple as that. Each time we click the button, we add a new renderer to the plot, and Bokeh fires the renderer_added callback. The power of this mechanism is that we do not connect the renderer_added callback to the button at all; we just properly handle the sequence of events launched by the button click. This allows us to wire callbacks in a Bokeh application in complex and flexible ways.
The complete code for this app is as follows:
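A sketch of the complete button app described in this section; it uses a plain column in place of widgetbox for portability with newer Bokeh releases, and the batch size of 10 points per click is an arbitrary choice:

```python
import numpy as np

from bokeh.io import curdoc
from bokeh.layouts import column
from bokeh.models import Button
from bokeh.plotting import figure

bokeh_doc = curdoc()
sample_plot = figure(title="Random circles")

# A green, "success"-styled button with a "Generate" label.
generate_button = Button(label="Generate", button_type="success")

def generate():
    # Each click plots a fresh batch of random points, adding a renderer.
    x, y = np.random.random(size=(2, 10))
    sample_plot.circle(x, y)

generate_button.on_click(generate)

def renderer_added(attr, old, new):
    # Generic attribute callback: fired whenever 'renderers' changes.
    print("attribute '{}' changed".format(attr))

sample_plot.on_change("renderers", renderer_added)

bokeh_doc.add_root(column(sample_plot, generate_button))
bokeh_doc.title = "Sample Bokeh App"
```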
Providing data
In all the examples above, we plotted data in a straightforward way: by providing numpy arrays to a corresponding glyph method.
In larger applications, this simple implementation has a couple of important drawbacks. First, each call to a circle or any other glyph function adds a new renderer to the plot which makes them hard to track. Second, this approach couples the data layer to the view layer, which makes the code entangled and harder to maintain as the app grows larger.
The right way to handle data in Bokeh applications is via ColumnDataSource and CDSView . ColumnDataSource is a data container, which was introduced in Part I of this tutorial. CDSView is a filtering mechanism for column data sources, which allows us to visualize only certain data elements with a single data source under the hood. We will use CDSView later in our examples and in our dashboard application.
Streaming
Let’s rewrite our button application in a more efficient way. First, we create a data source and use it for plotting:
For now, sample_plot depends on data_scr as the source for its data to be presented as circles. But this data source is still empty. How should we populate it with data?
Instead of plotting data directly on a button click, we will now stream data into the data source:
Let’s break this code down: the data_scr.stream method of ColumnDataSource appends new data to the data source. Remember, Bokeh keeps track of two versions of our data: one on the server side and the other on the client side.
Using data_scr.stream ensures that only the diff is sent to the client, while the data source itself is not recreated or re-sent from scratch. This is important, as sometimes you may need to stream large amounts of data, and creating new data sources on each update is costly both in terms of resources and performance.
Note also the rollover argument: it caps the maximum number of most recent data points that the client will keep. Again, this is more suitable for applications, which update frequently and with large amounts of data.
The complete code for this app is as follows:
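A sketch of the complete streaming version (the rollover value of 100 and the batch size of 10 are arbitrary choices for illustration):

```python
import numpy as np

from bokeh.io import curdoc
from bokeh.layouts import column
from bokeh.models import Button, ColumnDataSource
from bokeh.plotting import figure

bokeh_doc = curdoc()

# A single, initially empty data source...
data_scr = ColumnDataSource(data=dict(x=[], y=[]))

# ...backing a single circle renderer: the plot is decoupled from the data.
sample_plot = figure(title="Random circles")
sample_plot.circle(x="x", y="y", source=data_scr)

generate_button = Button(label="Generate", button_type="success")

def generate():
    # Append a batch of points; only the diff travels to the client, and
    # rollover caps how many of the most recent points the client keeps.
    new_data = dict(x=np.random.random(10), y=np.random.random(10))
    data_scr.stream(new_data, rollover=100)

generate_button.on_click(generate)
bokeh_doc.add_root(column(sample_plot, generate_button))
```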
Patching
While streaming is used to provide new data, sometimes you need to change the data, which is already in the data source. For this use case ColumnDataSource exposes a patch method. Again, only diff updates will be sent to the client with a minimal network overhead.
Let’s rewrite our callback function — we will randomly select 3 circles and change their x field:
The patch method requires a dictionary, with keys being the fields of data source to be changed. You do not need to change all the fields at once. Values are sequences of pairs, where the 0-th element is the index to patch at, and the 1-st element is the new value to patch with.
Patching in Bokeh has two drawbacks to be mentioned, though. One is the very strict type checking in data_scr.patch : int will pass, while np.int64 or np.int32 won't. That's why we need to do patch_idx = [int(ix) for ix in patch_idx] .
Also, patching won’t handle generators, so zip won’t work without converting it to an actual sequence (e.g., by wrapping it in list).
The complete code for this app is as follows:
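A sketch of the patching version; the source is pre-filled with ten points so there is something to patch (the button label and initial data are assumptions), and the explicit int/list conversions work around the caveats just mentioned:

```python
import numpy as np

from bokeh.io import curdoc
from bokeh.layouts import column
from bokeh.models import Button, ColumnDataSource
from bokeh.plotting import figure

bokeh_doc = curdoc()

# Pre-fill the source so there are points to patch.
data_scr = ColumnDataSource(data=dict(x=list(np.random.random(10)),
                                      y=list(np.random.random(10))))

sample_plot = figure(title="Random circles")
sample_plot.circle(x="x", y="y", source=data_scr)

patch_button = Button(label="Patch", button_type="success")

def patch_points():
    # Randomly select 3 circles and change their x field.
    patch_idx = np.random.choice(10, size=3, replace=False)
    # patch() is strict about types: plain int passes, np.int64 does not.
    patch_idx = [int(ix) for ix in patch_idx]
    new_xs = [float(v) for v in np.random.random(3)]
    # patch() will not consume a generator, so materialize the pairs.
    data_scr.patch({"x": list(zip(patch_idx, new_xs))})

patch_button.on_click(patch_points)
bokeh_doc.add_root(column(sample_plot, patch_button))
```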
Streaming and patching methods enable us to decouple the data layer from the view layer. Now our data updates are efficiently handled by Bokeh, so we do not have to bother with how data changes will find their way to the client. We can now design Bokeh applications around data, and instruct plots to use specific data sources for drawing.
Using tables
One important widget to explore before creating a real dashboard is a table. A common use case is to display actual data values for inspection.
Let’s create a simple application with tables. Along the way, we will also explore filtering with CDSView .
First, we need to add some imports:
In Bokeh, a table is a collection of columns, linked to some data source (what a surprise!). To create a table, we need to construct table columns first:
Note, that we provide not only field names, which correspond to columns in a data source, but also titles, which will be displayed in the header row. We need also the data source and then we’re ready to create the table itself:
Now, to illustrate how Bokeh handles filtering and selections, we will add the second table, with the same columns, but with CDSView :
This code looks foreign, so let’s break it down a bit. In this table, we want to display only filtered rows, and we start by creating a mask. The mask filters nothing for now; we only need a placeholder to create a CDSView, and we will update it later.
To create a boolean mask in CDSView , we use a BooleanFilter . Another option may be an IndexFilter , which allows selecting which indices should be displayed in a view. Finally, we create the table itself and instruct it to show data from the data_scr according to columns and subject to any filtering, as defined in the data_view .
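As a quick sketch of the IndexFilter alternative (the values here are arbitrary): instead of a boolean mask, it takes the explicit row indices to keep.

```python
from bokeh.models import IndexFilter

# Keep only rows 0, 2 and 4 of whatever data source the view is applied to.
idx_filter = IndexFilter(indices=[0, 2, 4])
```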
Now, how do we change which rows are displayed in our filtered table? Remember, Bokeh tracks all the changes in the document and its children. All we need to do is change the view, and Bokeh will transfer the changes to the client.
Let’s first arrange our application together:
If you launch it now, nothing interesting will happen: tables will be absolutely identical. To make them look different, let’s add a periodic callback:
In this callback, we randomly select a subset of rows to recreate the view. No additional actions are needed: Bokeh will notify every model, which needs to re-render on this change (in this case, only filtered_table ).
Note several useful things about tables and data in Bokeh in general:
you can sort table rows by some column value by clicking on the column header,
to get back to the original order, Ctrl+click on the column header.
Moreover, if you select one or more rows in one table, you will see that the same rows are selected in the other table (except those that are filtered out at the moment). This is an important observation: in reality, you selected not rows in the table, but rows in the underlying ColumnDataSource.
Bokeh notices the client-side change and notifies every model (tables, plots, and others) that uses the same data source, causing all of them to respond to the change. In fact, you can even attach a callback to the selection and implement even more elaborate behavior.
The complete code for this app is as follows:
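A sketch of the complete tables app, with two liberties taken for portability: the CDSView constructor changed in Bokeh 3.0, hence the try/except, and the periodic callback mutates the filter's booleans in place rather than recreating the whole view, which has the same visible effect as described in the text:

```python
import numpy as np

from bokeh.io import curdoc
from bokeh.layouts import row
from bokeh.models import (BooleanFilter, CDSView, ColumnDataSource,
                          DataTable, TableColumn)

bokeh_doc = curdoc()

n = 10
data_scr = ColumnDataSource(data=dict(x=list(np.random.random(n)),
                                      y=list(np.random.random(n))))

# Field names map to data source columns; titles go into the header row.
columns = [TableColumn(field="x", title="X values"),
           TableColumn(field="y", title="Y values")]

full_table = DataTable(source=data_scr, columns=columns)

# A placeholder mask that filters nothing yet.
bool_filter = BooleanFilter(booleans=[True] * n)
try:
    # Bokeh < 3.0 signature.
    data_view = CDSView(source=data_scr, filters=[bool_filter])
except Exception:
    # Bokeh >= 3.0 signature.
    data_view = CDSView(filter=bool_filter)

filtered_table = DataTable(source=data_scr, columns=columns, view=data_view)

def update_view():
    # Randomly choose which rows the filtered table displays; every model
    # that uses the view re-renders on this change.
    bool_filter.booleans = [bool(np.random.random() > 0.5) for _ in range(n)]

bokeh_doc.add_periodic_callback(update_view, 2000)
bokeh_doc.add_root(row(full_table, filtered_table))
```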
Next steps
Now that we know how to create glyphs, provide them with data (filtered or now), and create dynamic and interactive Bokeh applications, we can proceed to the final part of the series: dynamic dashboards. In the next part, we will create a real interactive dashboard, using the real data, coming from an external data source. We will handle all the aspects of the dynamic dashboard: data management, plotting, interactivity, and styling. Stay tuned! | https://medium.com/y-data-stories/python-and-bokeh-part-ii-d81024c9578f | ['Gleb Ivashkevich'] | 2019-07-20 21:41:11.524000+00:00 | ['Bokeh', 'Y Data', 'Data Visualization', 'Python', 'Visualization'] | Title Python Bokeh Part II beginner’s guide creating…Content Photo Ronit Shaked Unsplash beginner’s guide creating interactive dashboard Bokeh server application second part tutorial series Bokeh visualization library part series explore Bokeh application serve using Bokeh server Please refer first part series cover basic building scatter line bar plot learn apply styling embed plot web page example Part series displayed simple embedded visualization basic interactivity However often need create fullfledged application properly visualize data realworld visualization usually require interaction user via button dropdown menu element also dynamic data update Bokeh allows plot embedded also combined large elaborate web application builtin Bokeh server handle data plot update backend server Python clientside Bokeh JavaScript library responsible actual drawing browser Anatomy Bokeh application Bokeh defines set abstraction storing transporting object backend frontend main important one document Bokeh document container incorporates element including plot widget interaction document contains one model context Bokeh model plot ax tool visual nonvisual element drawn used draw something else user screen Bokeh backend serializes document JSON sent displayed BokehJS client turn Bokeh application entity creates populates document update 
backend necessary configuration properly transfer data server client Although Bokeh application written Python handled server also possible add custom JavaScript functionality client side JavaScript callback otherwise powerful tool use occasionally mostly styling Bokeh JS client also handle user action Without ado let’s proceed actual coding start exploring main building block Bokeh apps leverage knowledge create truly interactive dashboard realworld data visualization interactivity data facility may need Running Bokeh server Let’s create simple application plot random data basic way use single Python script take empty document fill model Bokeh expose curdoc function provides handle current default document Although create populate document manually flexibility possible curdoc straightforward way get document start working compromising flexibility much use figure create model Figure instance case Although already specified figure size model still empty attached document Let’s plot random value newly created figure attach document document one root element Root element direct child document see later directly referenced elsewhere simple look first Bokeh application final touch define page title displayed browser tab bokehdoctitle Sample Bokeh App ready pas Bokeh server might already figured command bokeh run server manage backendrelated task like downloading data set run subcommand launch server show option indicates app open browser window complete code application Basic Bokeh application hood Bokeh server performed lot task added figure model document serialized model sent client browser session happened without intervention Widgets callback working app display static scatter plot Apparently covered static plot Part what’s difference It’s simple Bokeh server provides rich functionality make plot document element actually dynamic interactive achieved system callback attribute update handled Bokeh server example update plot new data need add corresponding data 
source need send update client session Bokeh perform u Callbacks various data source update mechanism main building block Bokeh interactivity let’s explore Periodic callback main probably common type callback used dynamic dashboard periodic callback registered document fired Bokeh server specified time interval Periodic callback commonly used fetch new data update plot Although it’s better perform longrunning IO operation outside main thread bother tutorial general Python topic specific Bokeh illustrate periodic callback used let’s create simple application empty figure add data figure use periodic callback draw random number run method bokehdocaddperiodiccallback notifies Bokeh server addcircles function must fired every second per 1000 millisecond Note callback add new renderer plot bunch independent glyph app fine sake presentation efficient scenario require ColumnDataSource functionality introduced later tutorial previously launch app everything work expected app add circle plot every second complete code app follows Application periodic callback Widgets attribute callback Periodic callback help make Bokeh application dynamic also want make plot responsive user interaction like click selection typical mechanism interaction user engages element visual component like Button Dropdown menu Slider etc change aspect plotting example user may select category dropdown menu filter data display based selected category may want turn periodic callback button click yes callback may added removed dynamically type interaction basically work User interaction typically cause change model model attribute example slider value bound value attribute user interacts slider attribute change correspondingly Bokeh allows bind callback attribute change way register callback onchange method exposed various model callback function must specific signature see moment mechanism extremely powerful enables custom handling virtually change Bokeh application extend app previous section Bokeh 
Bokeh provides predefined widgets like Button that can be used immediately in an application. We use a predefined style called "success" (a green-colored button, similar to the Bootstrap CSS framework) and the label "Generate". So far our app does nothing with it: no data is plotted on sample_plot, and the button is not responsive to clicks; it is not even added to the document. For Buttons, Bokeh exposes a simpler callback mechanism: to handle clicks, a callback function with no arguments is registered with the on_click method (you can certainly use Python's functional tools to create such callback functions from generic functions with an arbitrary signature). We now need to add the plot and the button to the document. We use the basic column layout with a minor change: we wrap the button into an element called widgetbox, which is responsible for the proper placement and padding of widgets (check how it looks without it). So far we haven't created any attribute callbacks; we will do that shortly. High-level plotting methods like circle and vbar are applied to a Figure instance, and each such method adds an additional renderer to the figure; have a look at the renderers attribute of the Figure instance. As you have probably already figured out, we can attach a callback to its change. To illustrate, let's create a basic callback with the correct signature. Note the signature: it's generic, because a callback function may be bound to different attributes, so the attribute name is an input parameter of the function. On a button click you will see in the terminal that the attribute renderers has changed. It's that simple: every time we click the button, we add a new renderer to the plot and Bokeh fires the renderer-added callback. The power of this mechanism is that we connected the renderer-added callback to the button and can properly handle the sequence of events launched by a button click; it allows us to wire callbacks through a Bokeh application in complex and flexible ways. The complete code of the app follows.

Providing data

In the examples so far we plotted data in the most straightforward way, by providing numpy arrays to the corresponding glyph methods. In a larger application this simple implementation has a couple of important drawbacks. First, each call of the circle glyph function adds a new renderer to the plot, which makes them hard to track. Second, this approach couples the data layer to the view layer, making the code entangled and harder to maintain as the app grows larger. The right way to handle data in a Bokeh application is via ColumnDataSource and CDSView. ColumnDataSource is the data container introduced in Part 1 of the tutorial; CDSView is a filtering mechanism for column data sources that allows us to visualize only certain data elements of a single data source without touching what's under the hood. We will use CDSView later in the example dashboard application.
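A hedged sketch of the button app this section walks through (sample_plot and the "Generate" button follow the text; note that widgetbox was removed in newer Bokeh releases, so a plain column is used here, and the callback names are assumptions):

```python
# Button click adds a renderer; an attribute callback observes the change.
import random

from bokeh.io import curdoc
from bokeh.layouts import column
from bokeh.models import Button
from bokeh.plotting import figure

sample_plot = figure(title="Click Generate")
button = Button(label="Generate", button_type="success")  # green, Bootstrap-like

def generate():
    # Button clicks use the simpler zero-argument callback.
    sample_plot.circle(
        [random.random() for _ in range(10)],
        [random.random() for _ in range(10)],
        size=8,
    )

def on_renderers_change(attr, old, new):
    # Generic attribute-callback signature: (attribute name, old value, new value).
    print(f"{attr} changed: {len(old)} -> {len(new)} renderers")

button.on_click(generate)
sample_plot.on_change("renderers", on_renderers_change)

curdoc().add_root(column(sample_plot, button))
```

Every click fires generate, which grows sample_plot.renderers, which in turn fires on_renderers_change; that chaining is the wiring the section describes.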
Streaming

Let's rewrite the button application in a more efficient way. First we create a data source and use it for plotting: sample_plot now depends on data_scr as the source of the data presented by the circles. The data source is still empty, so we populate it with data: instead of plotting data directly on a button click, we stream data into the data source. Let's break down the code. The stream method of ColumnDataSource appends new data to the data source. Remember that Bokeh keeps track of two versions of the data, one on the server side and one on the client side. Using data_scr.stream ensures that only a diff of the changes is sent to the client; the data source is not recreated and re-sent from scratch. This is important because sometimes you may need to stream large amounts of data, and creating a new data source on every update is costly in terms of resources and performance. Note also the rollover argument: it caps the maximum number of recent data points the client keeps, which is suitable for applications that update frequently with large amounts of data. The complete code of the app follows.

Patching

While streaming is used to provide new data, sometimes we need to change data that is already in the data source. For this use case ColumnDataSource exposes the patch method; again, only a diff update is sent to the client, with minimal network overhead. Let's rewrite the callback function: it now randomly selects 3 circles and changes their x field. The patch method requires a dictionary whose keys are the fields of the data source to be changed (we only need to change the x field). The values are sequences of pairs, where the 0th element is the index to patch and the 1st element is the new value of the patch. Patching in Bokeh has two drawbacks worth mentioning, though. One is strict type checking: data_scr.patch wants int, and passing np.int64 or np.int32 won't work. That's why we need to convert the indices, e.g. patch_idx = [int(ix) for ix in patch_idx]. Also, patching won't handle generators, so zip won't work without transforming it into an actual data sequence. The complete code of the app follows.

The streaming and patching methods enable us to decouple the data layer from the view layer. The data updates are handled efficiently by Bokeh, and we don't have to bother with how data changes find their way to the client. We design a Bokeh application around the data and instruct the plots to use a specific data source for drawing.

Using tables

One more important widget to explore before creating a real dashboard is the table. A common use case is to display the actual data values for inspection. Let's create a simple application with tables; along the way we will also explore filtering with CDSView. First we need to add the imports. A Bokeh table is a collection of columns linked to a data source, so, no surprise, to create a table we first need to construct the table columns.
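The stream and patch mechanics can be sketched like this (data_scr follows the text; the callback bodies and the driver loop at the end are illustrative, not the author's listing):

```python
# ColumnDataSource updates: stream() appends, patch() rewrites in place.
import random

from bokeh.io import curdoc
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure

data_scr = ColumnDataSource(data=dict(x=[], y=[]))
sample_plot = figure(title="Streaming and patching")
sample_plot.circle("x", "y", source=data_scr, size=8)

def add_point():
    # stream() appends; only the diff goes to the client, and rollover
    # caps how many recent points the client keeps.
    data_scr.stream({"x": [random.random()], "y": [random.random()]},
                    rollover=100)

def move_points():
    # patch() expects {field: [(index, new_value), ...]}.
    # Indices must be plain ints; np.int64/np.int32 are rejected, hence int(ix).
    n = len(data_scr.data["x"])
    idx = random.sample(range(n), min(3, n))
    data_scr.patch({"x": [(int(ix), random.random()) for ix in idx]})

for _ in range(5):   # simulate five button clicks
    add_point()
move_points()        # then jitter three of the streamed points

curdoc().add_root(sample_plot)
```

In the real app these functions would be wired to the button and a callback rather than called in a loop.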
Note that we provide field names that correspond to the columns of the data source, as well as the titles displayed in the header row. We also need a data source, and then we're ready to create the table. To illustrate how Bokeh handles filtering and selection, we add a second table with the same columns but with a CDSView. The code may look foreign, so let's break it down a bit. In this table we want to display only the filtered rows. We start by creating a mask; this mask filters nothing yet, and we only need it as a placeholder to create the CDSView, which we will update later. Since we create a boolean mask, the CDSView uses a BooleanFilter. Another option may be an IndexFilter, which allows selecting the indices to be displayed in the view. Finally, we create the table and instruct it to show the data of data_scr according to the columns, subject to the filtering defined by data_view. If we change it, the rows displayed in the filtered table change as well. Remember that Bokeh tracks changes to the document's children, so we only need to change the view and Bokeh transfers the change to the client. Let's first arrange the application together and launch it. Nothing interesting happens: the tables are absolutely identical. To make them look different, let's add a periodic callback that randomly selects a subset of rows and recreates the view. No additional actions are needed: Bokeh notifies every model that needs to re-render on the change, in this case filtered_table. Note several useful things about tables, and about data in Bokeh in general. You can sort the table rows by column values by clicking a column header; to get the original order back you need to Ctrl-click the column header. Moreover, if you select one or more rows in one table, you will see the same rows selected in the other table (except those filtered out at the moment). The important observation is that in reality the selected rows are not rows of a table but rows of the underlying ColumnDataSource. Bokeh notices the client change and notifies every model (tables, plots, and others) that uses the data source, causing them to respond to the change. You can actually even attach a callback to the selection to handle even more elaborate behavior. The complete code of the app follows.

Next steps

Now you know how to create glyphs, provide them with data (filtered if need be), and create dynamic and interactive Bokeh applications, so we can proceed to the final part of the series: the dynamic dashboard. In the next part we will create a real interactive dashboard using real data coming from an external data source and handle every aspect of a dynamic dashboard: data management, plotting, interactivity, and styling. Stay tuned!

Tags: Bokeh, Data, Data Visualization, Python, Visualization |
1,135 | We Achieved More This Year Than We Give Ourselves Credit For | We Achieved More This Year Than We Give Ourselves Credit For
When merely surviving is an accomplishment
Photo by Michelangelo Buonarroti from Pexels
For many of us it’s been the most difficult year of our lives. Even if worse things have happened to us in the years before, most of us haven’t spent a whole year in confusion and paranoia.
Even the most powerful governments and renowned experts were not prepared for this, so we can’t blame ourselves for waking up into chaos one morning and not knowing what to do next.
At first, it may have felt like we’re handling this pretty well, but the cost of living in lockdown kept piling up week after week.
Maybe that’s why showing some kindness to ourselves is overdue.
Despite the worsening woes of lockdown life, we have made it this far. The fact that we’re alive and breathing right now is indicative of the right steps we have taken and the luck we had on our side.
That’s a great place to grow gratefulness.
We’ve had to accept life within four walls as the new normal
As much as we love coming home, being home all the time and bringing home everything we did elsewhere — like work and exercise — are challenges none of us signed up for.
We gave up our temples and turned them into trains in a loop, living in the irony of constant restlessness in a place meant for resting.
We accepted looking at faces we love only through a flat-screen, and showing our battlegrounds-of-residence to strangers on camera.
We shared our losses and griefs from a distance and came up with creative ideas to celebrate birthdays, anniversaries, and graduations in front of blue light emitters we were already overexposed to.
We accepted Zoom as a household name, something most of us would probably never know of in different circumstances.
We cheered and protested, supported and voted, taught and learned, loved and protected, all within an ever-shrinking walled perimeter.
And we still continue to hold our own.
The last thing we need is undeserved guilt
We need to rid ourselves of wild expectations, of perfection and invincibility, on top of everything we’ve been carrying already.
Yes, we lost jobs, but we also spent months brainstorming to fortify our fragile livelihoods going forward. We learned new skills, found new opportunities, came across ideas that we probably wouldn’t have in another reality.
Yes, we lost loved ones and opened our hearts to others who suffered the same, but we learned to embrace uncertainty as an inevitable component of life, vowing to express love and thankfulness more often, more sincerely.
All of that means we owe ourselves a pat on the back, a hug for the soul, and a high-five to our nerves.
It wasn’t a year of setbacks and slowdowns
It’s rather been a year of finding better solutions for survival. We’re ending a year knowing we’re united not only in our battles but in our triumphs too, as long as we count ourselves as a part of a larger entity: Humanity.
Even when some of us find these simple truths to be obvious, it’s not that hard to forget them when we measure ourselves against astronomical expectations in circumstances where merely surviving is an accomplishment.
So let’s take a kind look at the mirror and say, “You did great!”
And remember that each of us, and by aggregation all of us, reached a milestone worthy of celebration. | https://medium.com/live-your-life-on-purpose/we-achieved-more-this-year-than-we-give-ourselves-credit-for-d390edec40b4 | ['Mutasim Billah'] | 2020-12-24 12:25:15.631000+00:00 | ['Society', 'Pandemic', '2020', 'Motivation', 'Self Love'] |
1,136 | New Survey Identifies 98 Long-Lasting Covid Symptoms | New Survey Identifies 98 Long-Lasting Covid Symptoms
Early research helps quantify coronavirus long-haulers’ experiences
Photo: RuslanDashinsky/Getty Images
There’s a growing number of people around the globe who have survived Covid-19, only to find persistent symptoms lasting weeks or months, and even new effects like hair loss that don’t show up until weeks after they’ve been declared Covid-free.
They’re called long-haulers. Their experiences are poorly understood by the medical community and often dismissed by doctors as psychological issues, writes epidemiologist and Covid survivor Margot Gage Witvliet, PhD, in an article on The Conversation. But the aches, pains, and inconveniences are real, according to Witvliet, who had Covid-19 four months ago and is now suffering from tinnitus, chest pain, and heart-racing.
98 long-haul effects
A new survey of 1,567 long-haulers now shows just how wide-ranging these long-term symptoms are, stretching from sadness and blurry vision to diarrhea and joint pain. Here are the top 10 complaints and the percentage of people reporting each one (many long-haulers report several effects):
100% Fatigue
66.8% Muscle or body aches
65.1% Shortness of breath or difficulty breathing
59.0% Difficulty concentrating or focusing
58.5% Inability to exercise or be active
57.6% Headache
49.9% Difficulty sleeping
47.6% Anxiety
45.6% Memory problems
41.9% Dizziness
The survey, which includes self-reported post-Covid symptoms, grew out of a Facebook page called Survivor Corps, a grassroots group devoted to educating Covid-19 long-haulers and connecting them to the medical and research communities.
The findings were analyzed and presented by Natalie Lambert, PhD, an associate research professor in medicine at Indiana University and Wendy Chung, MD, a neurodevelopmental specialist at Columbia University Irving Medical Center.
Their paper has not been formally peer-reviewed nor published in a journal, but the findings echo other research and the growing number of documented if anecdotal cases.
In all, survey respondents noted 98 different effects, far more than the 11 common symptoms that the U.S. Centers for Disease Control and Prevention lists as possible signs that a person has the disease.
Several of the ills are far from benign: tinnitus; cramps; flashes or floaters in vision; night sweats; pain in the hands and feet. A quarter of the effects involved pain.
“The results of this survey suggest that the brain, whole body, eye and skin symptoms are also frequent-occurring health problems for people recovering from Covid-19,” the researchers state.
‘We expected to see a lot of long-term damage’
Studies have shown that Covid-19 infects much more than the respiratory system. By late spring, we knew the disease was affecting the body from head to toe, swelling the brain and compromising many of the body’s organs, and that it could be a blood vessel disease. A new study suggests Covid-19 can infect the thyroid gland, causing excess hormone release.
And as time goes on, studies have begun looking at potential long-term effects.
Heart images taken 10 weeks after people contracted Covid-19 found 78 of 100 had some sort of inflammation or other abnormalities, even if the people had few or no preexisting cardiovascular issues, researchers reported July 27 in the journal JAMA Cardiology.
“We expected to see a lot of long-term damage from Covid-19: scarring, decreased lung function, decreased exercise capacity,” Ali Gholamrezanezhad, a radiologist at the Keck School of Medicine at the University of Southern California, tells Science Magazine.
“It’s going to take months to a year or more to determine if there are any long-lasting, deleterious consequences of the infection.”
There are ongoing odd effects, too. Beyond enduring pain, hundreds of Covid-19 survivors are experiencing hair loss. “We are seeing patients who had Covid-19 two to three months ago and are now experiencing hair loss,” says Shilpi Khetarpal, MD, a dermatologist at the Cleveland Clinic. The demoralizing effect was made vivid recently in a Twitter video posted by the celebrity Alyssa Milano, herself recovering — in some ways — from the disease. Khetarpal says, however, that the effects should be temporary.
Answers coming, in months or years
Covid-19 would not be the only virus to cause chronic symptoms. Polio usually causes mild cold or flu-like symptoms. But in about 1% of cases, it damages the neurological system and can leave a person partially paralyzed. Epstein-Barr virus and the herpes virus are both suspected of causing chronic fatigue syndrome, but scientists aren’t sure.
Given the current pandemic is only months old, nobody can say for sure if any of the post-Covid complications will become life-long problems. “It’s going to take months to a year or more to determine if there are any long-lasting, deleterious consequences of the infection,” Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Disease, said last month in an interview on Facebook. “We just don’t know that now. We haven’t had enough time.”
That leaves sufferers like Witvliet, the epidemiologist, in limbo, mostly just resting while wondering when her headaches, brain fog, and extreme fatigue might clear up.
“It’s too soon to say we’re disabled,” she writes. “It’s also too soon to know how long the damage will last.” | https://elemental.medium.com/new-survey-identifies-98-long-lasting-covid-symptoms-87935b258a3e | ['Robert Roy Britt'] | 2020-08-14 05:31:01.020000+00:00 | ['Health', 'Pandemic', 'Covid 19', 'Symptoms', 'Coronavirus'] |
1,137 | What is a Dashboard Style Guide and Why Should You Use One? | What is a Dashboard Style Guide and Why Should You Use One?
Dashboard style guide (part 1)
Many organizations employ BI developers and data analysts to create data visualizations and dashboards. Many of them, do not come from a design background and they focus their efforts on the mere creation of dashboards. Often, the design is pushed out and falls into random visual choices which leave a result that looks messy and unclear.
A well-designed dashboard is not just “more beautiful”, but also easier to understand. When UX aspects are not taken into account, you may end up with a dashboard that does not serve the purpose it was made for in the first place and is inaccessible to end users.
What is a style guide?
A style guide is a document which provides guidelines for planning and designing components or pages. The objective is to create uniformity and consistency while reducing design efforts by reusing components.
A style guide is especially crucial for large companies, as it helps them maintain a consistent brand and design language (internally and externally), even when many designers are working on the company’s products.
A UX style guide focuses mainly on functionality, while a UI style guide emphasizes graphic design: colors, fonts, layouts, typography, etc. | https://medium.com/tint-studio/what-is-a-dashboard-style-guide-and-why-should-you-use-one-fb84ce8ffbb0 | ['Anat Sifri'] | 2019-04-30 09:05:10.840000+00:00 | ['Design', 'Dashboard', 'Data Visualization', 'Uxui Design'] |
1,138 | The Jackpot of the Availability Cascade | You’re a soldier in the information war, but do you know what the endgame feels like?
Photo by Chansereypich Seng on Unsplash
Imagine two combatants of relatively equal strength. They’re both very good, very well practiced. They know all the moves.
The only way one of them can win is when the other one slips up and lets their guard down. Maybe it’s exhaustion that causes the slip. Maybe it’s the surprise of a secret move they’ve never trained against, a move their opponent was holding onto for just the right moment.
Whatever the reason, the match ends and a champion is declared.
Some in the crowd erupt in joy while others fall into a shocked silence.
It was a long fight. A tense fight.
But in the end, the winner was decided.
Those who cheered for the victor go home energized and self-satisfied; those who were rooting for the loser go home disappointed and turn their attention to other battles.
Some people were neutral the whole time, just spectators to the fight, but not many.
It’s almost impossible to watch a contest like this — a brawl between two powerful duelists at the top of their game — and not give in to the very human urge to pick a side.
I’m Not Writing about a Real Fight, of Course
I’m describing the intellectual clash between millions of Americans on issues in the public sphere — abortion, universal healthcare, illegal immigration, welfare, climate change.
Like the two combatants I described above, the opposing sides in these battles have become entrenched.
They can’t convince their opponents to give up. Nor have they been able to persuade enough neutral fighters to rally to their cause and overwhelm their adversary.
Many of these battles span years, decades, and even generations.
In some cases — like the conflict over what power states should have versus what power the federal government should have — the battle has been raging since the founding of our nation.
And the battlefield is massive.
It’s more of a battle-space, actually, because it spans both the real and conceptual realms.
For the largest and longest-running battles, those who lead the charge against the opposing side have managed to accumulate a large supply of rhetorical munitions, a “pro” for every “con.”
For the newest battles, the attackers wonder whether they can use the element of surprise to achieve victory quickly.
Can the defenders be defeated swiftly before they have a chance to muster?
Or will the attackers charge forward valiantly, winning a few quick victories only to become bogged down in a massive war of attrition they’re not yet prepared to fight?
The Path to Victory
If you’re still new to the information wars — or maybe you’ve been around awhile and just can’t quite grasp the metaphor I’m using — it might seem that I’m describing more of an ongoing process, something like history rather than a battle that actually can be won.
But there’s a reason lobbyists and dark-money groups and nonprofits throw billions of dollars every year into the prolonged battles they’re fighting.
It’s because their ideological crusaders are holding the line and hoping for a Big Event — bad luck, an error by their opponent, or something totally unexpected — that will break the deadlock and pave the way to victory.
But before I provide you with a technical definition for what’s known as an “availability cascade,” consider the following:
On September 10, 2001, many Americans were aware that global terrorism was a threat but were disinterested in committing U.S. forces overseas. Four days later — on September 14, 2001 — the American attitude had changed so rapidly and decisively that the U.S. Congress authorized the President to wage a global war on terrorism: only one member of the U.S. House of Representatives opposed the resolution, and the U.S. Senate agreed unanimously with only two Senators abstaining.
You know what happened between September 10th and September 14th, 2001, don’t you?
Suffice to say, even though skepticism about deploying American forces overseas was quite high throughout the 1990s and early 2000s, the 9/11 attack was such a spectacular event that a long, long time had to pass before anyone could publicly criticize American aggression overseas without being accused of treachery.
Vice President Dick Cheney, along with his buddies in the military-industrial complex, had warned for years about the threat of global terrorism only to be rebuffed by bureaucrats and politicians who said things like “the American public doesn’t like military adventurism” and “the American public can’t stand to see Americans dying on foreign soil.”
Then, all of a sudden — voila! In the span of just a few days, the Vice President won his battle over U.S. intervention abroad as the arguments of his opposition collapsed just as quickly and suddenly as the buildings of the World Trade Center on that September morning.
Technical Definition
If you’re itching for a technical definition for this phenomenon as opposed to just a single illustrative case study, here it is.
The term “availability cascade” was first used by economist Timur Kuran and legal scholar Cass Sunstein in a 1999 paper published in the Stanford Law Review.
The term was not used in the context of convincing Americans to go to war, however, but in the context of risk regulation.
Kuran and Sunstein noticed the way environmental and public-health activists — who they gave the fancy title of “availability entrepreneurs”—used the mass media to persuade the public on issues related to health and the environment.
The two academics were concerned about how activists could whip up public sentiment over a particular issue and force the government to take unwarranted or even destructive actions based on populist fear and anger rather than sound economics or science.
Kuran and Sunstein defined the availability cascade as follows:
A self-reinforcing process of collective belief formation by which an expressed perception triggers a chain reaction that gives the perception of increasing plausibility through its rising availability in public discourse.
Said another way, the availability cascade is basically a meme that becomes legitimized and popularized — whether rightly or wrongly, accurately or not — by “going viral.”
And I don’t mean the sort of one-off visual/textual meme we’re exposed to on Facebook or Twitter, although these are certainly innovative weapons in the information wars.
Rather, the availability cascade is a process of repetition that frequently follows from a Big Event and leads to widespread belief adoption, suddenly and decisively shifting public perception and creating an overwhelming call to action.
Other Examples
Whether you want to think about the availability cascade as a meme, “going viral,” or a come-to-Jesus moment, it’s absolutely a real phenomenon.
Like a river that suddenly bursts free of its banks and etches a new course into the landscape, the availability cascade can lead to a decisive victory in the information wars when little resistance is encountered.
While we’re speaking of rivers, one of the most famous availability cascades in American history occurred in the summer of 1969 when the Cuyahoga River caught fire in downtown Cleveland.
The river, which was one of the most chemically polluted in the United States, had actually caught fire something like 13 times before, including a massive 1952 fire that resulted in hundreds of thousands of dollars in damage to boats, a bridge, and a riverfront office building.
Although the 1969 fire wasn’t that bad and no photographs of it were available, this didn’t stop Time magazine from publishing a dramatic photo from the far more destructive 1952 fire, shocking enough people that the public outcry eventually led to various environmental improvements such as the Clean Water Act and the creation of the Environmental Protection Agency.
You could convincingly argue that the entire cascade surrounding the Cuyahoga River fire was manufactured given that the photographic evidence had been manipulated.
And indeed, that’s exactly what happens when smart and savvy availability entrepreneurs craft a compelling narrative out of small events and/or less-than-complete information.
But lest we worry that all of history has been manipulated, it’s helpful to also consider examples of purely organic availability cascades.
While approaching its mooring in Lakehurst, New Jersey, from Germany in 1937, the hydrogen-filled airship Hindenburg burst into flames and crashed. The disaster was recorded on film and quickly rushed out to the theaters via newsreel.
Although the airship industry had been sputtering for years, the Hindenburg crash touched off an availability cascade that airships were not safe and permanently ended their use for commercial passengers practically overnight.
If you haven’t seen (or don’t recall) the Hindenburg newsreel footage, it’s worth discovering (or rediscovering).
I can only imagine what it would have been like in 1937, as an average person who had up until that point seen only primitive special effects out of Hollywood, to sit down in the darkness of a theater and watch real footage of tiny human beings scattering like ants beneath the hulk of a flaming dirigible as it fell to Earth.
Hitting the Jackpot
But while the Hindenburg newsreel is dramatic, it plays only a soft prelude to some of the more intense and opinion-swaying imagery we’ve seen in the years since.
Although availability cascades are not always touched off by fires, explosions, and violence, it certainly seems to help — among the first things that come to mind are atomic bombs detonating, U.S. television coverage of the 1968 Tet Offensive in Vietnam, the World Trade Center falling down, and the chilling gun-sight camera video from 2007 that shows jovial U.S. soldiers gunning down two Reuters journalists and other civilians in the streets of Iraq.
More recently, the widely watched video that showed the brutal and unnecessary killing of George Floyd by police officers in Minneapolis, Minnesota, touched off a public conversation that had all the hallmarks of an availability cascade in the making.
Although it remains to be seen whether more significant changes are ahead for police departments across the country, it’s possible that in retrospect the video of the killing will be viewed as the “tipping point” that forced America to finally confront the disproportionate violence police officers commit against people of color.
A strong visual component of the availability cascade is clearly important, and herein lies the challenge for today’s information warriors who hope to hit the jackpot and win a decisive victory.
For one thing, many issues do not yield well to dramatic visual imagery. This is, thankfully, one of the many challenges faced by tin-foilers in the anti-vaxx space, who lack a compelling image that “proves” vaccines cause autism. (They don’t.)
Meanwhile, even if a strong visual component exists, information warriors still face an uphill battle given how thoroughly Americans have already been exposed to dramatic imagery — including fictional explosions of spaceships, planets, Death Stars, and the like. Though obviously much of the violence we have seen isn’t real, our subconscious minds don’t necessarily know this. Imagery is imagery.
As people’s sensitivity thresholds rise, so too does the threshold needed to ignite the availability cascade and give it a life of its own.
This means that information warriors will search out ever-more-stupefying ways to get people’s attention and win a decisive battle on issues in the public sphere.
So consider yourself warned.
Or — if you happen to be a foot-soldier in the information wars — consider yourself informed. | https://medium.com/swlh/the-jackpot-of-the-availability-cascade-72adbec8da7e | ['Jack Luna'] | 2020-10-13 22:00:04.310000+00:00 | ['Politics', 'Society', 'Psychology', 'Media', 'Social Media Marketing'] |
1,139 | 3 Beliefs I Abandoned After 3 Years of Professional Coding | 2. Programmers Shouldn’t Be Insecure About Their Work
Now, I’ve got some really skilled senior developers working with me. They are competent individuals who require high standards from my pull requests. Otherwise, they will easily reject them without remorse.
I used to think of these people as developers not capable of doing any wrong. I would look at them in awe and wonder if one day I would reach their level of skills and never code anything incorrectly.
But this belief was wrong.
It didn’t take much more experience, or many conversations with my colleagues, for me to realize that they have insecurities too. They don’t always know everything and are not always sure about the best possible solution for a given problem. They have to constantly renew their knowledge, just like I do. They know that sometimes they just have to go with the flow and trust their intuition about what is best for the project.
I’ve freed myself from this belief, and now I no longer feel stuck in my own world thinking that I will never be a senior developer, because I know that seniors share my feelings of insecurity too. | https://medium.com/better-programming/3-beliefs-i-abandoned-after-3-years-of-professional-coding-d4a71b588100 | ['Piero Borrelli'] | 2020-11-18 15:31:48.843000+00:00 | ['Python', 'JavaScript', 'Technology', 'Productivity', 'Programming'] |
1,140 | I Wish I Knew Then What I Know Now | I Wish I Knew Then What I Know Now
How I got my compassion to include myself
Photo by Giulia Bertelli on Unsplash
All my friends say I am a kind person. I call when you are sick. I bring food over when you can’t cook. I give to those on the street who ask for spare change. Everyone knows I bend over backwards to help my kids. I am just not very kind to myself. In fact, I am darn hard on myself.
Looking back on my life, if I knew then what I know now, I would have cut myself some slack. Let me tell you the story of four words that changed my life. And no, the four words were not, “will you marry me?”
It began at the end of an aerobics class when we were gathering up our belongings. I overheard someone say, “I am more self-compassionate.” A bright, beam-like light turned on in my brain—just four words.
I had never considered self-compassion — there was no connection in my mind between the words “self” and “compassion.” It sounded so new-age-granola-woo woo or something. At best, it felt self-indulgent like a day at the spa, something I never had time for.
I had a million things to get done — who the f*ck could think about self-compassion?
But it sounded soothing. Who knows why those words resonated with me so I made a mental note, filing it away for future reference.
We all struggle
For most, life is a struggle. It’s not perfect, and some of us have a more challenging road than others. Illness. Anxiety. Grief. Loneliness. Financial issues. Relationship problems. Depression. Guilt. Failure. Trauma. Addiction. Toxic work environments. Maybe we love and care for someone who suffers. If you don’t identify with any of these descriptors, just living through these uncertain pandemic times is a struggle.
Just do it
I was raised with the expectation to work hard, help others and “just do it.” Like many, my life was a whirlwind of constant pressure that I thought I thrived on. I could monitor my success by the items crossed off on my to-do list. Many spiral notebooks filled with lists documented the progress of my life.
I was an entrepreneur working 80+ hours a week in a start-up that eventually grew. I had two children to mother. Being a parent is a transformative experience, but it also brings a lifetime of thinking about their health, behaviour, grades, friends, etc.
Nothing prepared me for my son’s depression as a teenager and now as an adult, a crushing and unrelenting illness. Parents with kids struggling with a mental illness know the concern, worry, and stress, not to mention the guilt and self-blame. Did I miss a sign? Was it because I had a demanding job? Was what I did somehow not enough? I was heartbroken because no love or healthy meal could fix or help or alleviate his struggles, leaving me emotionally emptied.
After three years in a long-term residence, my father died. He was the person I spoke to every day and whose care was always on my mind. It is estimated that approximately 30% of us provide help to a chronically ill, disabled, or aged family member or friend during any given year and spend an average of 20 hours per week providing care for someone we love.
All this left no time for me.
How self-compassion helped me
My inner voice was a harsh taskmaster and kept screaming, “just do it” and don’t think about anything else. I confronted life’s challenges by being in a perpetual problem-solving mode.
“I can do it all,” was my mantra.
That was great until I crashed and burned.
After reflection and a little help, I realized that the tough “just do it” voice that screamed in my head was not sustainable. It was a long journey to replace that harsh taskmaster’s voice.
Self-compassion is an easy concept to explain but challenging to implement. It involves treating yourself with the same caring and kindness you would treat a family member, a friend or even a stranger who is having a hard time. Our culture emphasizes being kind to others struggling, but it does not emphasize being kind to ourselves.
I had to undo a lifelong pattern of thinking to be convinced that having compassion for myself was the same as having compassion for a friend who is suffering. I had to sit down and have a little talk with myself to understand why being kind to myself did not come naturally. But eventually, I got there.
For me, being self-compassionate gave me the space to step back and observe my life without judgement: this is the way it is for now. It connected me to the notion that many suffer or go through rough times: I am not alone. It allowed me to speak kindly to myself: the way I would speak to a friend going through a hard time. It takes practice — being kind to yourself is not easy — but like any skill, the more you practice, the better you get.
How self-compassion can help you
While I had a slave-driver inner voice, for others the inner voice can be a harsh critic, driving you to be tough on yourself, beat yourself up, think you are the only one to have failed, or believe you are not good enough. The voice is a merciless judge of your inadequacies and shortcomings.
Still others live with the crushing fist of depression and anxiety. Or we care for someone who is sick. Perhaps we have a demanding ageing parent or a terrible boss. Our reality feels like being shrink-wrapped, unable to punch our way out.
I am not an expert or a therapist, and there isn’t one formula for developing self-compassion. But I know it works. Here is what you can try:
sit down somewhere quiet and identify what your situation is, accepting it without judgement or criticism or trying to figure out what to do
accept that the rough time you are going through is painful
realize struggles are a reality shared by all of us
talk to yourself as if speaking to your best friend with kindness, care and reassurance
allow yourself to be more healthy and happy because you care about yourself and not because you have to fix something or you are worthless or unacceptable the way you are
repeat often and notice the shift in your thinking.
Image created by author with photo licensed from Adobe Stock
Research and resources
Whatever your struggle is, research-based evidence confirms that the benevolent inner voice of self-compassion shifts your mindset to a more positive place. It provides you with confidence, resilience and a belief in your abilities. It lowers levels of depression and anxiety.
Resources for further reading: If you are interested in learning more about self-compassion, I recommend the books written by Dr. Kristin Neff, a pioneer and leader in the field of self-compassion research for over 15 years. She has an excellent website.
I know all this now and wonder why it took me so long to figure out. But better late than never — so the saying goes. I hope self-compassion can be a soothing force in your life too. | https://medium.com/crows-feet/i-wish-i-knew-then-what-i-know-now-adb8df39a088 | ['Alice Goldbloom'] | 2020-11-21 01:22:26.928000+00:00 | ['Health', 'It Happened To Me', 'Self Compassion', 'Life', 'Mental Health'] |
1,141 | taylor swift’s “evermore”: track-by-track review | an overview of evermore
On July 24, 2020, Taylor Swift sent shock waves through the music industry when she unexpectedly dropped her 8th studio album, folklore. She had always spaced out her album eras by more than two years and this album came less than a year after her 7th studio album, Lover. In addition to being a surprise (announced to the world less than 24 hours before its release), it was also notable for three other reasons. The first is that it was the first high-profile music release produced during the COVID-19 pandemic. The second is that it represented a marked departure from Taylor Swift’s increasingly pop music-oriented sound. The third is that it was good; like really, really good.
Within a few weeks, folklore had become the best-selling album of 2020. The first single, “cardigan,” became Taylor Swift’s sixth #1 hit on the Billboard Hot 100 and gave her the distinction of being the first artist to debut atop the album and single charts in the same week. The album received the best reviews of her career to date, obtaining an astonishing average score of 88/100 on Metacritic. It was recently nominated for 5 Grammys, including Album of the Year (her fourth nomination in that category; she’s one of only two female artists who have ever won it twice). And it is currently racking up other end-of-the-year accolades, including being in the top spot on many high profile publications’ lists of the year’s best albums.
Perhaps the only thing more shocking than the surprise release and subsequent success of folklore was the surprise drop of her 9th studio album, evermore, only 20 weeks later.
Taylor followed the same strategy with evermore that she did with folklore. She announced it via social media less than 24 hours before it debuted with cover art, links to her website for pre-orders and merchandise, and a statement about the origin of the album. The statement read:
“To put it plainly, we just couldn’t stop writing songs. To try and put it more poetically, it feels like we were standing on the edge of the folklorian woods and had a choice: to turn and go back or to travel further into the forest of this music. We chose to wander deeper in … I’ve never done this before. In the past I’ve always treated albums as one-off eras and moved onto planning the next one after an album was released. There was something different with Folklore. In making it, I felt less like I was departing and more like I was returning. I loved the escapism I found in these imaginary/not imaginary tales. I loved the ways you welcomed the dreamscapes and tragedies and epic tales of love lost and found into your lives. So I just kept writing them.” — Taylor Swift
Only a few days after its December 11 release, the album has already amassed enough sales to guarantee a #1 debut on next week’s Billboard 200 chart, spawned a notable hit in the form of lead single “willow,” received similar critical acclaim to its predecessor (its current Metacritic score is 85/100), and led numerous critics to retool their “best of” end-of-year lists.
Rather than reflect a distinct new “era,” evermore represents a “sister album” to folklore. It could also be described as a companion album or sequel. What it is not is an album of outtakes from the folklore recording sessions that were assembled together to prolong the buzz around that album. It is a fully realized, cohesive artistic vision and, in fact, is even longer than folklore. It clocks in at over an hour without even counting the two forthcoming bonus tracks available on the yet-to-be shipped physical release of the album.
Image copyright: Republic Records/Taylor Swift
Taylor Swift continues to work with the same creative team on evermore, but with a different emphasis. Whereas folklore’s tracks were split pretty evenly between those produced by Jack Antonoff and Aaron Dessner, this one much more heavily features Dessner’s work and also collaborations with Bon Iver, HAIM, and Marcus Mumford (of Mumford & Sons). The result is that there is even more of an indie rock sound to many of the songs. But it also takes interesting detours into chamber rock, grunge, folk music, and her country roots.
Even more than was the case with folklore, there is little sense of tempo or urgency on evermore. The songs take their time to breathe and build and the tempo shifts not only between songs but very often within them. In terms of production, the songs frequently go to genuinely unexpected places. And while some of these experiments work better than others, they are all fascinating.
As is usually the case with any Taylor Swift album, the highlight is the song-writing. She continues her time-honored tradition of delivering heartfelt confessionals, albeit with more psychological complexity and maturity than ever before. She also further experiments with third-person storytelling and interwoven narratives. We may not get something as clever and audacious as the teenage love story trilogy from folklore, but there are fascinating recurring characters and themes here. The themes of devastating heartbreak, begrudging forgiveness, romantic neglect, forbidden love, human evil, nostalgia, and grief are all prominent.
Like folklore, evermore cannot be appreciated after a single listen. It took me multiple listens to fully be cast under its spell and appreciate the intricacies of its writing and production. It’s far from Taylor Swift’s most accessible album, but it’s one of her very best.
Without further ado, here is my track-by-track review of evermore.
Image copyright: Republic Records/Taylor Swift
evermore: track-by-track review
“willow”
This folksy, yearning love song was an interesting choice for the album’s lead single. It is hardly the album’s catchiest, most provocative, or most powerful song, but it is a strong and fitting start to the album. The lyrics depict the complexities of wanting someone to love you back and Swift’s vocals effectively oscillate between deep and plaintive and heightened and breathless. As is virtually always the case, Swift said it best when she described the glockenspiel-driven song by saying, “I think it sounds like casting a spell to make somebody fall in love with you.”
Favorite lyrics: “Wait for the signal, and I’ll meet you after dark/ Show me the places where the others gave you scars/ Now this is an open-shut case/
I guess I should’ve known from the look on your face/ Every bait-and-switch was a work of art”
“champagne problems”
Swift describes the storyline of this song as involving “longtime college sweethearts [who] had very different plans for the same night, one to end it and one who brought a ring.” It is a wrenching piano-driven ballad co-written with her romantic partner Joe Alwyn (under the pseudonym William Bowery). It is filled with stunning details that evoke vivid imagery and also delves into the female protagonist’s mental health struggles. One of the most interesting aspects of the song to me is its title. “Champagne problems” is typically a phrase that is used to describe problems that may seem very real and painful to an individual but are truly insignificant when compared to the suffering of others. But a broken engagement and mental illness are undeniably painful topics. Is she acknowledging that in the epic tragedy of 2020, singing about a breakup feels trivial? Or is she sharing the protagonist’s perspective of the situation? Questions like this are part of what drive the brilliance of evermore.
Favorite lyrics: “Sometimes you just don’t know the answer/ ’Til someone’s on their knees and asks you/ ‘She would have made such a lovely bride/ What a shame she’s fucked in the head,’ they said/ But you’ll find the real thing instead/ She’ll patch up your tapestry that I shred”
“gold rush”
The only song on the album produced by Jack Antonoff, who produced about half the songs on each of her last three albums, this one opens with an ethereal chant and then unexpectedly shifts into a quicker pace and a poppier sound. The lyrics find the protagonist falling for someone who is universally adored and being pursued with the fervor of settlers looking for gold in California (hence the title). She is filled with longing and jealousy and ultimately realizes that the chase and the fight aren’t worth it.
Favorite lyrics: “At dinner parties, I call you out on your contrarian shit/
And the coastal town we wandered ‘round had never seen a love as pure as it/ And then it fades into the gray of my day-old tea/ ’Cause it could never be”
“‘tis the damn season”
One of my favorite songs on the album, “‘tis the damn season” is propelled by an electric guitar strum and tells the story of a woman who left her small town of Tupelo, Mississippi to make it in Hollywood. She is back for the holidays and reunites with an old flame. Her ambivalence about both leaving and returning home is palpable and Taylor Swift’s aching vocal performance is one of her best. The song is a winner on its own, but is made even richer when you reach the track “Dorothea” and realize that this track is also about her, albeit from a different perspective. It’s the kind of world building and intertwining narratives that were a big part of what made folklore so ambitious and breathtaking.
Favorite lyrics: “Sleep in half the day just for old times’ sake/ I won’t ask you to wait if you don’t ask me to stay/ So I’ll go back to L.A. and the so-called friends/ Who’ll write books about me if I ever make it/ And wonder about the only soul/ Who can tell which smiles I’m fakin’/ And the heart I know I’m breakin’ is my own/ To leave the warmest bed I’ve ever known”
Image copyright: Republic Records/Taylor Swift
“tolerate it”
Many albums ago, Taylor Swift’s fans noticed that her fifth tracks tend to be particularly wrenching and confessional ballads (e.g., “All Too Well,” “The Archer,” “My Tears Ricochet”). The fifth track of evermore is no exception. The refrain “I know my love should be celebrated/ But you tolerate it” is a punch to the gut and it perfectly sums up the song, which tells of a woman who has been faithfully devoted to her partner for years but is becoming increasingly resentful that he fails to demonstrate fidelity and passion in return…or even interest. The song evokes the heartbreaking dynamic of being in love with someone who appears largely indifferent toward you.
Favorite lyrics: “While you were out building other worlds, where was I? Where’s the man who’d throw blankets over my barbed wire?/ I made you my temple, my mural, my sky/ Now I’m begging for footnotes in the story of your life/ Drawing hearts in the byline/ Always taking up too much space or time”
“no body, no crime (feat. HAIM)”
My favorite song on the album — and one of my favorite songs Taylor Swift has ever made — is this ambitious collaboration with the rock band HAIM. In the grand tradition of Cher’s “Dark Lady” and Carrie Underwood’s “Two Black Cadillacs,” the song tells the story of a murder (a double murder, no less!) from various perspectives. I fell in love with it instantly, but it took me numerous listens to fully comprehend the shifting narrative. The first verse involves the narrator’s friend telling her that she thinks her husband is cheating. The second verse tells of the narrator’s friend’s disappearance and the narrator’s suspicion that it was her husband who killed her. The third verse tells of the narrator murdering her late friend’s husband and subsequently covering it up. There is a chilling refrain and the song’s production and vocals perfectly fit the cinematic and macabre content. It is also one of the most authentically and unapologetically country songs she has produced in ages.
Favorite lyrics: “Good thing my daddy made me get a boating license when I was fifteen/ And I’ve cleaned enough houses to know how to cover up a scene/ Good thing Este’s sister’s gonna swear she was with me/ Good thing his mistress took out a big life insurance policy”
“happiness”
Taylor Swift reportedly wrote this song only a week before the album’s release, but it hardly feels like a rush job. It tells the story of someone in the aftermath of a breakup realizing that although they are devastated, they know there will be happiness again. It is fittingly slow, deliberate, and contemplative in its production. It also finds Swift engaging in a remarkably mature and complex approach to a breakup that contrasts markedly with the music she wrote as a teenager. Just look at lines like “No one teaches you what to do/ When a good man hurts you/ And you know you hurt him, too.”
Favorite lyrics: “Honey, when I’m above the trees/ I see it for what it is/ But now my eyes leak acid rain on the pillow where you used to lay your head/ After giving you the best I had/ Tell me what to give after that/ All you want from me now is the green light of forgiveness/ You haven’t met the new me yet/ And I think she’ll give you that”
“dorothea”
The dyad from “‘tis the damn season” is revisited here, albeit from the perspective of the male this time. He talks about his high school girlfriend (the titular Dorothea) who went off to become a star in Hollywood. He pines for her and wonders if she ever thinks about him now that she has moved on. Although it’s more uptempo and upbeat than many of the other songs, it has a heartbreaking innocence and earnestness.
Favorite lyrics: “It’s never too late to come back to my side/ The stars in your eyes shined brighter in Tupelo/ And if you’re ever tired of being known for who you know/ You know that you’ll always know me, Dorothea”
“coney island (feat. The National)”
This indie rock duet with Matt Berninger of the rock band the National (whose bandmate Aaron Dessner produced much of evermore) is intensely nostalgic. It depicts with vivid detail and heartbreaking longing the story of a couple looking back on a relationship that fell apart largely because of unequal levels of commitment. For me, it never reaches the dramatic heights I was hoping it would, but the lyrics are gorgeous and the mix of Swift’s mellifluous vocals with Berninger’s raspy baritone is truly special.
Favorite lyrics: “Break my soul in two looking for you/ But you’re right here/ If I can’t relate to you anymore/ Then who am I related to?/ And if this is the long haul/ How’d we get here so soon?/ Did I close my fist around something delicate?/ Did I shatter you?”
“ivy”
Swift delves back into the lyrical theme of infidelity on this banjo-driven track, which features harmonizing from Justin Vernon of Bon Iver. It tells the story of a woman who has reluctantly fallen in love with a man who is not her husband. She longs to be faithful, but cannot stop herself even though she is perfectly aware of the havoc it will wreak. The track is similar thematically to “illicit affairs,” a standout from folklore that, like “ivy,” was the tenth track on its album.
Favorite lyrics: “Clover blooms in the fields/ Spring breaks loose, the time is near/ What would he do if he found us out?/ Crescent moon, coast is clear/ Spring breaks loose, but so does fear/ He’s gonna burn this house to the ground”
Image copyright: Republic Records/Taylor Swift
“cowboy like me”
Of all the songs on evermore, this song snuck up on me the most. I was somewhat indifferent to it upon first listen and yet I kept returning to it over and over due to its hypnotic production and exceedingly interesting lyrics. Like “no body, no crime” this song goes back to her country roots and also involves vivid storytelling. But the similarities pretty much end there. Rather than a double murder, this song tells the story of two grifters who are constantly looking for wealthy people to romantically pursue but unexpectedly break all their own rules by falling in love with each other. The lyrics are beautifully complemented by backup vocals from Marcus Mumford (of the popular folk rock band Mumford & Sons) and an elegant orchestration involving guitars, mandolins, and harmonicas.
Favorite lyrics: “And the skeletons in both our closets/ Plotted hard to f*** this up/ And the old men that I’ve swindled/ Really did believe I was the one/ And the ladies lunching have their stories about/ When you passed through town/ But that was all before I locked it down”
“long story short”
This tale of rising from the ashes is one of the few songs on her one-two punch of folklore and evermore that, with slightly different production, could have been right at home on one of her previous, pop-oriented albums. It’s an indie-rock track that heavily features drums and guitars and has a jaunty, catchy chorus. It is another meditation on the cruel treatment Taylor Swift received from the media a few years earlier due to various controversies and (mostly) some old-fashioned misogyny. But the way she cheerfully dismisses it and refocuses on her current happiness with her romantic partner suggests that she has fully moved on, making it one of the album’s truly heartwarming songs.
Favorite lyrics: “Past me/ I wanna tell you not to get lost in these petty things/ Your nemeses/ Will defeat themselves before you get the chance to swing/ And he’s passing by/ Rare as the glimmer of a comet in the sky/ And he feels like home/ If the shoe fits, walk in it wherever you go”
“marjorie”
Yet another example of how folklore and evermore are “sister albums” is the fact that the thirteenth track of each is an ode to one of her grandparents. While folklore’s “epiphany” focuses on her grandfather’s time in WWII, evermore’s 13th track “marjorie” tells the story of her grandmother. Identified by many critics and fans as a highlight of evermore, it is a gorgeously written tale of guilt, grief, and regret. The subject, Marjorie Finlay, was an opera singer who died when Taylor was just 13 years old. As such, the song takes an alternating perspective of young Taylor not appreciating her grandmother enough while she had her and grown Taylor wishing she could do it all over. To make it all the more poignant, the song samples her grandmother’s actual vocals for the backing track.
Favorite lyrics: “I should’ve asked you questions/ I should’ve asked you how to be/ Asked you to write it down for me/ Should’ve kept every grocery store receipt/ ’Cause every scrap of you would be taken from me/ Watched as you signed your name ‘Marjorie’/ All your closets of backlogged dreams/ And how you left them all to me”
“closure”
Aptly described by genius.com as a “wild industrial folk number,” “closure” tells the story of a woman who keeps getting offered the opportunity for closure with an ex-partner, but repeatedly rejects it. She does so not out of a desire to hurt him, but because she recognizes that he is doing it for selfish reasons and she simply doesn’t need it. She has moved on. Its unique, complex, and unpredictable orchestration and production make it one of the album’s most sonically bold tracks.
Favorite lyrics: “I know I’m just a wrinkle in your new life/ Staying friends would iron it out so nice/ Guilty, guilty, reaching out across the sea/ That you put between you and me/ But it’s fake and it’s oh so unnecessary”
“evermore (feat. Bon Iver)”
Although it never reaches the power of “exile,” the collaboration with Bon Iver that appeared on folklore and is one of Swift’s best songs, there is a lot to admire here. Particularly, there is the thrilling bridge that occurs mid-song and features Taylor and Justin Vernon of Bon Iver trading lines and harmonizing. The majority of the song surrounding that bridge is a somber piano ballad that delves into the narrator’s depression. This one doesn’t work quite as well for me lyrically as most of the rest of the album, but it is beautifully performed, fascinating in its production, and caps the album on an appropriately somber but hopeful note.
Favorite lyrics: “Hey December/ Guess I’m feeling unmoored/ Can’t remember/ What I used to fight for/ I rewind the tape/ But all it does is pause/ On the very moment all was lost/ Sending signals/ To be double-crossed”
Image copyright: Republic Records/Taylor Swift
In Sum
Admittedly, I was a bit worried on December 10th when Taylor Swift announced evermore. Her previous surprise release was such a career-redefining masterwork and such an innovative and powerful artistic creation that I was worried that if it wasn’t as good as folklore or came off as a cash grab, it could have cheapened folklore’s legacy. But it doesn’t cheapen it. It deepens it.
Four full listens in and I’m still finding new things to appreciate lyrically and sonically. The very act of poring over the lyrics in order to select my favorites for this article somehow increased my already substantial appreciation for the scope and nuance of Swift’s songwriting. Standout tracks for me are “no body, no crime,” “‘tis the damn season,” “champagne problems,” and “cowboy like me.” But, as was the case with folklore, there isn’t a single bad track on the album.
The question most Swifties and music critics will be dogged by is: “Sure it’s good, but is it as good as folklore?” My answer would be a tentative “no.” For me, it ever-so-slightly lacks the cohesion and emotional punch of folklore. However, that judgment is likely heavily influenced by the facts that folklore was a marked departure from her prior album (and thus totally unexpected) and that I have now had nearly five months to savor and explore it. In my opinion, though, this isn’t quite the right question to ask.
I think the question that should be asked is: “Is evermore also a masterpiece like folklore is?” And my answer to that question would be an unequivocal “Yes.” Choosing between folklore and evermore is like deciding which movie is better — The Godfather or The Godfather Part II? Before Sunrise, Before Sunset, or Before Midnight? Alien or Aliens? Toy Story, Toy Story 2, or Toy Story 3? I may have my personal preferences, but comparing them is ultimately splitting hairs. They are all masterpieces that stand head and shoulders above virtually everything else being produced in the medium, and they complement and deepen each other beautifully.
Rating for “evermore”: 5/5 stars

Source: Richard Lebeau, “Taylor Swift’s ‘evermore’: Track-by-Track Review,” Medium (Rants and Raves), December 16, 2020. Tags: Music, Writing, Culture, Entertainment, Feminism. https://medium.com/rants-and-raves/taylor-swifts-evermore-track-by-track-review-17a14557dda9
How Does Spotify Know You So Well?
First, some background: When people hear the words “collaborative filtering,” they generally think of Netflix, as it was one of the first companies to use this method to power a recommendation model, taking users’ star-based movie ratings to inform its understanding of which movies to recommend to other similar users.
After Netflix was successful, the use of collaborative filtering spread quickly, and is now often the starting point for anyone trying to make a recommendation model.
Unlike Netflix, Spotify doesn’t have a star-based system with which users rate their music. Instead, Spotify’s data is implicit feedback — specifically, the stream counts of the tracks and additional streaming data, such as whether a user saved the track to their own playlist, or visited the artist’s page after listening to a song.
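To make the idea of implicit feedback concrete, here is a hypothetical sketch of collapsing streaming events into a single preference score per user-track pair. The event fields and weights are invented for illustration; Spotify’s actual signals and weighting scheme are not public.

```python
# Hypothetical implicit-feedback scoring. Field names, weights, and IDs
# are made-up illustrations, not Spotify's actual scheme.
events = [
    {"user": "u1", "track": "t9", "streams": 14, "saved": True,  "visited_artist": False},
    {"user": "u1", "track": "t3", "streams": 2,  "saved": False, "visited_artist": True},
]

def implicit_score(e):
    # More streams -> stronger signal; saving a track or visiting the
    # artist's page adds extra weight.
    return e["streams"] + (5 if e["saved"] else 0) + (2 if e["visited_artist"] else 0)

scores = {(e["user"], e["track"]): implicit_score(e) for e in events}
print(scores)  # {('u1', 't9'): 19, ('u1', 't3'): 4}
```

The point is simply that even without star ratings, behavior can be folded into one number per user-track pair, which is what feeds the matrix described below.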
But what is collaborative filtering, truly, and how does it work? Here’s a high-level rundown, explained in a quick conversation:
Image source: Collaborative Filtering at Spotify, by Erik Bernhardsson, ex-Spotify.
What’s going on here? Each of these individuals has track preferences: the one on the left likes tracks P, Q, R, and S, while the one on the right likes tracks Q, R, S, and T.
Collaborative filtering then uses that data to say:
“Hmmm… You both like three of the same tracks — Q, R, and S — so you are probably similar users. Therefore, you’re each likely to enjoy other tracks that the other person has listened to, that you haven’t heard yet.”
Therefore, it suggests that the one on the right check out track P — the only track not mentioned, but that his “similar” counterpart enjoyed — and the one on the left check out track T, for the same reasoning. Simple, right?
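As a quick illustration, the overlap logic just described can be sketched in a few lines of Python. The track sets and the `recommend` helper are toy stand-ins for this example, not Spotify's actual code:

```python
# Toy sketch of user-to-user collaborative filtering on the example above.
def recommend(user_tracks: set, similar_user_tracks: set) -> set:
    """Suggest tracks a similar user liked that this user hasn't heard yet."""
    return similar_user_tracks - user_tracks

left = {"P", "Q", "R", "S"}   # tracks the user on the left likes
right = {"Q", "R", "S", "T"}  # tracks the user on the right likes

shared = left & right          # {"Q", "R", "S"}: evidence of similar taste
print(recommend(right, left))  # {'P'} -> suggested to the user on the right
print(recommend(left, right))  # {'T'} -> suggested to the user on the left
```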
But how does Spotify actually use that concept in practice to calculate millions of users’ suggested tracks based on millions of other users’ preferences?
With matrix math, done with Python libraries!
In actuality, this matrix you see here is gigantic. Each row represents one of Spotify’s 140 million users — if you use Spotify, you yourself are a row in this matrix — and each column represents one of the 30 million songs in Spotify’s database.
Then, the Python library runs this long, complicated matrix factorization formula:
Some complicated math…
When it finishes, we end up with two types of vectors, represented here by X and Y. X is a user vector, representing one single user’s taste, and Y is a song vector, representing one single song’s profile.
The User/Song matrix produces two types of vectors: user vectors and song vectors. Image source: From Idea to Execution: Spotify’s Discover Weekly, by Chris Johnson, ex-Spotify.
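To make the factorization step concrete, here is a minimal sketch on a toy play-count matrix. It uses a plain truncated SVD from NumPy rather than the implicit-feedback model Spotify actually runs, so treat the matrix and the numbers as illustrative only:

```python
import numpy as np

# Toy play-count matrix: rows = users, columns = songs (implicit feedback).
plays = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

k = 2  # number of latent dimensions to keep
U, s, Vt = np.linalg.svd(plays, full_matrices=False)
user_vectors = U[:, :k] * s[:k]  # one k-dim taste vector per user (the X vectors)
song_vectors = Vt[:k, :].T       # one k-dim profile vector per song (the Y vectors)

# Multiplying the two factors back together approximates the original matrix.
approx = user_vectors @ song_vectors.T
print(np.round(approx, 1))
```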
Now we have 140 million user vectors and 30 million song vectors. The actual content of these vectors is just a bunch of numbers that are essentially meaningless on their own, but are hugely useful when compared.
To find out which users’ musical tastes are most similar to mine, collaborative filtering compares my vector with all of the other users’ vectors, ultimately spitting out which users are the closest matches. The same goes for the Y vector, songs: you can compare a single song’s vector with all the others, and find out which songs are most similar to the one in question.
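The "compare my vector with all the other vectors" step is typically a nearest-neighbor search. A hedged sketch using cosine similarity, with made-up 2-D vectors standing in for the real high-dimensional ones, looks like this:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: closer to 1.0 means more similar taste."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical taste vectors; real ones have many more dimensions.
me = np.array([0.9, 0.1])
others = {"user_a": np.array([0.8, 0.2]),
          "user_b": np.array([0.1, 0.9])}

# The closest match is the candidate whose vector points in the most similar direction.
closest = max(others, key=lambda name: cosine(me, others[name]))
print(closest)  # user_a
```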
Collaborative filtering does a pretty good job, but Spotify knew they could do even better by adding another engine. Enter NLP. | https://medium.com/s/story/spotifys-discover-weekly-how-machine-learning-finds-your-new-music-19a41ab76efe | ['Sophia Ciocca'] | 2020-04-08 22:49:04.455000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Spotify', 'Tech', 'Music'] |
1,143 | A Whole Life Approach to Writing | A Whole Life Approach to Writing
How to contain, capture and corral creative inspiration
That loving feeling
One of my personal mantras as it relates to writing is I write because I love it. It is on my profile, it shows up in a poem here and an essay there, and it is totally un-unique. Yep, you read that right — un-unique. Even though a deep passion for writing pulses through my veins, I know I am not the special bearer of creative desires. A love for communicating through the written word is the universal soul beat of wordsmiths across the globe. It is a creative energy all true writers feel.
Sometimes.
Oh no, you may be thinking about now, not another article on how to keep pressing on when feelings of inspiration are nowhere to be found! Fear not, I will not be giving any tips on how to discipline yourself through the dry spells and persevere through the grueling hours of tedium. Although, if you need that kind of thing, there are many good resources out there.
The bookends of the whole life approach
I would like to take a different approach — a whole life approach to keeping that love of writing alive. First, a little personal background may be in order, to frame what I will be addressing and set the stage for what I will not be.
I remember the early days of my own writing passion when, as far back as first grade, I discovered that creative writing assignments were not work, but fun. It is the memory of times alone in my room, however, when an overwhelming desire to write a story would fall upon me, that I remember most. I would sit down, with my wide-ruled notebook paper and #2 pencil, and start to sketch out the characters and conversations that were a part of my make-believe world. It was exhilarating! Until it wasn’t. I would get about 2 or 3 chapters in, the enthusiasm would wane, and the story would end up somewhere in my room, probably in the triangular space behind my catty-corner dresser that served as my fort. I kept waiting for that feeling to come back so I could finish my creation, but alas.
I also recall the first time I discovered that I could produce a finished product by sheer discipline with no feeling of inspiration whatsoever. It was my senior year of high school, when I had to write a literary analysis paper for my English class. It was one of those papers where you have to back up your thesis with sub-points supported by quotations from the book, and I had likely not even read the whole book. Somehow, I miraculously produced a worthy paper. But what was eye-opening to me at the time was that I could write something I was proud of without feeling like writing or even caring about the topic I was writing about. In the end, however, the exercise did produce several feelings: a sense of accomplishment, a boost in confidence and the satisfaction of self-discipline.
These little snapshots represent what I like to think of as two very important bookends of a writer’s life: we need times of both deep inspirational creativity and structured discipline if we are going to bring our ideas to fruition. We have only to look at nature for an example of how beauty, spontaneity, order, and structure are intermingled throughout the created world.
Those ordinary moments
But what about all of the other moments of everyday life? The times when seedling thoughts lie dormant, when our little ant army lives march along as we do our duties, or when inner and outer storms rage leaving no time for structured planning, let alone space for contemplation and inspiration? The truth is that, for most of us, even those who write for a living, these non-writing times represent the majority of our existence. Eating and sleeping alone can take up close to half of every 24-hour period. If we want to stay healthy and maintain relationships with other human beings, that adds in a few more hours a day, and I could go on and on. Trust me, I am a scheduler, and I have parsed my life carefully over the years in an effort to understand where all the time goes.
As a writing lover, I need to remember that writing is a whole life experience and includes much more than the moments I sit down to write, whether from major inspiration or necessary discipline. Between these times are the ordinary moments, the everyday occurrences that are easily lost, but worthy of guarding and gleaning. I find that most of my inspiration comes either directly or indirectly through normal life experiences—when I am brushing my teeth, shuttling kids around, sweeping the floor or having a conversation with family and friends. It is in these very times that the little fireflies of creativity float across the landscape of my mind. It is also at these very times that those same fireflies will flutter and flicker right back out into the void of night if I don’t (humanely, of course) contain, capture and corral them. If I don’t want them to disappear, I need to safeguard them until I can release their illuminating sparks into my writing.
Containing, capturing and corralling creativity. These alliterative categories are not perfectly clear-cut, but instead of trying to squeeze creativity into a mold, I am satisfied that the overlap is further proof of the whole life nature of the writing process. Following is how I seek to hold on to the brief flashes of my fireflies.
Contain the emotions
Ah, those sometimes pleasant, sometimes painful, often unwieldy emotions. How to contain them when their definition is unclear, their nature so fleeting and life moves on leaving us with only an undefined memory? We know what words to use to tell — confusion, anger, love — but how do we hold on to the moment — contain it — so that we show the emotion in our writing?
I recently found myself in a situation where I felt extremely agitated. At the time, I probably felt any number of emotions — sadness, anger, confusion and eventually peace. See, I just told you how I felt, but, while I’m sure you can relate, it probably didn’t stir up any emotion in you.
As I was processing my emotions and trying to calm myself, I felt the urge to write, but I wasn’t at a place of clarity to do so and it wasn’t really the right time. Instead, I decided to contain the emotions within the framework of my experience. I wrote stream of consciousness notes — with no care for spelling, structure, flow — into my phone, intermingling sights, sounds, feelings and thoughts into one long rambling string. Afterwards, every time I read those notes, I recalled the incident vividly and eventually shaped the experience into a poem.
Capture the senses
We are constantly using our senses. They are the physical intake valves through which we relate to the world around us. They not only enable us to experience pleasure and pain, but they can stir up emotions and trigger deep thinking. Hence, the difficulty in placing them in a completely separate category. For my current purposes, I am making a distinction that I find helpful. When I think of capturing the senses, I am less concerned with trying to identify what I am feeling and more concerned with making sure that the fleeting revelations my senses bring to mind do not escape.
Here is an example. One of the things I love to do is go for country drives. They expand my view, help me take deep breaths, pull me out of my stress pits. But why? It is because, as I am driving along, I see broad open spaces, beautiful contrasting colors, surprising vistas around the bend. I need to capture those.
Or what about the other senses? I find that smells and sounds often bring with them wafts and echoes of nostalgia. The smell of my grandparents’ home or the distant sound of a train whistle remind me of what it felt like to be a child — not only remind me, but carry me back in time. The taste of macadamia nut crusted mahi mahi takes me to a romantic open-air dinner on the edge of the Pacific Ocean. The touch of my grandson’s little lips on my cheek brings with it both today’s new joys and yesterday’s memories of young motherhood.
Whenever I am in a situation where my senses bring refreshment to my soul, clarity to my thinking or reminders of past situations, I try to capture them by recording them in short phrases that will jog my sensory memories when I decide to use them in a story or poem or perhaps even a non-fiction piece.
Corral the concepts
Whether out of the clear blue sky or triggered by one of the above two items, I have days where a concept races through my mind, and I think, “I need to write about that!” It could be something based on a very simple observation of a daily frustration like, “Why is there always a mismatch between plastic storage containers and lids?” Or it could be a quick glance in the mirror that triggers thoughts on the mysteries of aging, “how can I still feel so young on the inside while the outside insists on growing old?”
Recently, not on a country drive, but on a suburban drive along a winding back road, I was provided a real-life illustration of a concept I think of often — perspective. After having driven this 3-mile stretch several times from one direction — on my way home from somewhere — I now approached it from the opposite direction — on my way to that somewhere — and was struck by the unfamiliarity of it all. I was aware of new scenery I had never noticed, and instead of being confident about the speed I should take as I rounded a curve, I was cautious and apprehensive, not knowing what would be around the bend and unsure of how much further I needed to drive to get to the end. It felt like a completely different road. Perspective.
Knowing there was a life lesson in there, I corralled the concept by recording it with a sentence or two and stored it away in my computer Incubator folder. That folder contains documents entitled Miscellaneous Ideas (so original!) and Quirky and Fleeting Thoughts (that’s better!) as well as individual documents for ideas I have spent time expounding on a bit more extensively.
Concluding encouragement
My goal in sharing these things is not to provide methods, but to encourage all writers, including myself, to keep our love of writing alive even in times when we are not experiencing light bulb illuminations or when we are unable to carve out intentional time for writing. By recognizing that our lives are full of inspirational moments just waiting to be contained, captured and corralled, we have the opportunity to dot the landscape of our everyday moments with fireflies, keeping our own creative lights burning, while waiting for the perfect time to release our new and glowing creations into the world.
So what fireflies have flown across your creative landscape today? I know you’ve seen some. Now just take a few moments to contain, capture or corral them. Meanwhile, I need to go take care of a few of my own. | https://medium.com/literally-literary/a-whole-life-approach-to-writing-a082ff5abcbf | ['Valori Maresco'] | 2020-04-12 03:03:42.493000+00:00 | ['Creativity', 'Writing Tips', 'Writing Life', 'Essay', 'Writing'] |
1,144 | How to Set Up a Publication in Medium | This post will be a quickie primer in how to set up a Medium publication to house your Blog-Your-Own-Book Challenge posts. We’ll talk about how to set up the publication in general, and then how to set up navigation in the publication so that your posts are all housed together nice and neat.
I hear you thinking: Do I really need to do this?
The answer is: Yes.
You should have a publication on Medium if your plan is to be a writer on Medium. My friend Ashley Shannon always says “every post needs a home.” She’s right. Sometimes that home is in someone else’s publication.
But sometimes it should be in your own. In your own publication, you can start gathering up an audience that’s yours. How cool is that?
Ready?
Create Your Publication
First steps first. You need to actually build a publication. This is way, way easier than you might think. You can do it in ten or fifteen minutes. Maybe a little longer if you want to spend time creating graphics for the header, avatar, and logo.
Step One: Go to the Publication Page
Click on your little round photo in the upper right corner of your screen, then click on ‘publications.’
Screenshot: Author
Step Two: Open a New Publication Page
Click on the ‘new publication’ button.
Screenshot: Author
Step Three: Create Your New Publication
Here are the questions you’ll be asked:
Name. This is where you’ll name your publication. It will become part of your URL: medium.com/YOURPUBNAME, so think carefully and choose wisely!
Tagline. This is a sentence about your publication.
Description. This is a longer description than your tagline. Maybe two or three sentences. It will go in the footer of all your posts and show up in searches on Medium.
Publication Avatar. This is the picture that will show up in the little round circle at the top of the page for your publication, similar to your personal avatar picture. It also shows up in previews of your content. It should be square and at least 60 X 60 pixels.
Publication Logo. This one is optional. It’s something you can create that will be added to the top of every post. It should be 600 X 72 pixels.
Add your contact info, including your email address and social media links. These will be public, so consider creating a dedicated email address.
Add up to five tags to your publication, similar to when you write a post.
Add other writers or editors if you have them.
Click ‘next.’
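The image-size requirements above are easy to get wrong, so a tiny pre-upload check can save a round trip; the helper names here are mine, not Medium's:

```python
# Hypothetical pre-upload checks for the sizes quoted in the steps above.
def valid_avatar(width: int, height: int) -> bool:
    """Avatar: square and at least 60 x 60 pixels."""
    return width == height and width >= 60

def valid_logo(width: int, height: int) -> bool:
    """Logo: 600 x 72 pixels."""
    return (width, height) == (600, 72)

print(valid_avatar(60, 60))   # True
print(valid_avatar(600, 72))  # False (not square)
print(valid_logo(600, 72))    # True
```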
Step Four: Set Up Your Publications Header
On the next page, you’ll be able to set up the header for your publication — what it will look like for readers.
We’re going to look at this tool bar:
Screenshot: Author | https://medium.com/the-write-brain/how-to-set-up-a-publication-in-medium-97c64243d8b2 | ['Shaunta Grimes'] | 2020-07-24 03:40:23.888000+00:00 | ['Byob', 'Writing', 'Blogging', 'Medium', 'Creativity'] |
1,145 | COVID-19 Lays Bare Inequities In Our Health Care System | The coronavirus is disproportionately affecting Black Americans and intensifying social determinants of health.
Social determinants of health are conditions in which people live, work, and learn that impact health risks and health outcomes. COVID-19 is showing the world how social determinants of health, matter, especially to Black Americans.
According to The Henry J. Kaiser Family Foundation, social determinants of health include factors like socioeconomic status, education, neighborhood, and physical environment, employment, and social support networks, as well as access to health care. Addressing social determinants of health is vital for improving health and reducing health disparities. There are a growing number of initiatives to shape policies and practices in non-health sectors in ways that promote health and health equity like Medicaid-specific initiatives. However, many challenges remain and are visible in the COVID-19 data.
There are over 1.4 million cases of the coronavirus worldwide, according to Johns Hopkins University, and more than 80,000 deaths. Today, as stated by Worldometer, there are 400,412 coronavirus cases and 12,854 deaths in the US. The majority of fatalities are disproportionately Black. Nikole Hannah-Jones has done a great job aggregating the data in a Twitter thread:
Now, even as Black Americans risk higher exposure, they are already disproportionately suffering the comorbidities that make COVID-19 so deadly. Nikole Hannah-Jones explains, “Black Americans are 40 % more likely to have hypertension than white, twice as likely to have diabetes, up to 3x asthma hospitalization.” Additionally, poor Black people are more vulnerable to COVID-19. Blacks are more likely to work service sector jobs where they can’t practice social distancing and work from home.
So what do we do? Below are just a few ways to address the social determinants of health and help those most at risk: | https://medium.com/humble-ventures/covid-19-lays-bare-inequities-in-our-health-care-system-1bf65a1aaf6c | ['Harry Alford'] | 2020-04-08 02:44:19.887000+00:00 | ['Covid 19', 'Social Determinants', 'Black People', 'Coronavirus', 'Health'] |
1,146 | History Warns of the Deadly Threat to Humanity from Artificial Intelligence | An article published in Nature magazine in autumn 2017 makes for interesting reading. It reports on research carried out by Washington State University and Arizona State University, which shows that the wealth disparity in human societies was insignificant until the development of agriculture. That occurred in different parts of the world around 13,000 years ago. What happened next should be a warning to humans in the age of artificial intelligence (AI).
Land cultivation started when groups of nomads stayed in one location, probably due to illness, injury, bad weather, or fear of other tribes. A few individuals experimented with seeds and plants and discovered that they could grow edible crops in dedicated plots and repeat the process each year. That reduced their need to constantly hunt, fish, and search for wild fruit and vegetables. Some grabbed more land than others and became the wealthiest of the group. The wealth gap increased even more when some people learned how to tame large animals like oxen and horses and used them to till larger areas and, in the case of horses, more effectively fight adversaries and so acquire more land.
Having the latest and most powerful technology — in the broadest sense of the word — has always meant riches and power. The industrial revolution, which replaced much animal and human sweat with steam power, made the owners of the steam engines and factories very wealthy. Today’s technological equivalent of oxen, horses, and steam engines are computer systems and, just as in the days of the early humans, those who control that new technology are among the richest. Technology itself, however, may soon upend that age-old equivalence.
Many respected experts predict that the processing power of computers will surpass that of humans within the next few decades. Some of those experts, including Tesla and SpaceX CEO Elon Musk and the late theoretical physicist Stephen Hawking, worry that artificial intelligence machines will eventually become conscious, i.e., gain self-awareness, will be smarter than humans, and will continue to get smarter quickly. These ultra-smart machines, the experts warn, will pose an existential threat to humanity because humans will not know what they’re thinking and so won’t be able to control them. Of course, nobody knows for sure that this will happen and, if it does, precisely when, but the expert warnings are credible enough to be taken seriously.
Scientists call the hypothetical moment in time when machines become conscious the “singularity.” If that moment arrives, the experts suggest a number of possible scenarios. The most benign is that the machines will work for the benefit of their human creators and that there would be no reason for them to harm humans. Yet how could anyone be sure that that would be the case, since humans would not know what the machines are thinking? Even today, computer scientists don’t fully understand why complex computers make some of the decisions they make. Autonomous machines that can harm humans already exist. Drones without a human controller can be programmed to locate and attack targets. Some scientists argue that if machines become conscious, they are likely to regard humans as unnecessary and inefficient and eliminate them.
Elon Musk, among others, suggests that the only way to match these super-intelligent machines is to augment human intelligence by joining human brains to the machines. In that science-fiction scenario, the human race would become a race of cyborgs. Since cyborgs’ machine elements will doubtlessly evolve quicker than the biological elements, humans will gradually, but effectively turn into machines. That suggests that Elon Musk’s proposal is not really a solution at all and that the machines will eventually take over one way or another.
Thanks to AI, the rich will get richer until, ironically, humans lose control of the technology they invented. That’s unlikely to happen anytime soon, but it could happen sooner than most people think. When it does, for the first time in human history, being very wealthy will count for very little. | https://medium.com/digital-diplomacy/history-warns-of-the-deadly-threat-to-humanity-from-artificial-intelligence-e08eccfc9a5f | ['George J. Ziogas'] | 2020-12-16 12:25:12.629000+00:00 | ['Artificial Intelligence', 'Society', 'Technology', 'Data Science', 'History'] |
1,147 | Software Roles and Titles | I’ve noticed a lot of confusion in the industry about various software roles and titles, even among founders, hiring managers, and team builders. What are the various roles and responsibilities on a software team, and which job titles tend to cover which roles?
Before I dig into this too much, I’d like to emphasize that every team is unique, and responsibilities tend to float or be shared between different members of the team. Anybody at any time can delegate responsibilities to somebody else for various reasons.
If your team isn’t exactly what I describe here, welcome to the club. I suspect very few teams and particular software roles will match perfectly with what we’re about to explore. This is just a general framework that describes averages more than any particular role or team.
I’ll start with management titles and work my way through various roles roughly by seniority.
I’d also like to emphasize that you should not feel constrained by your job title. I like to build an engineering culture which favors:
Skills over titles
Continuous delivery over deadlines
Support over blame
Collaboration over competition
I like to reward initiative with increased responsibility, and if somebody has the skills and initiative to take on and outgrow the title they’re hired for, I like to promote rather than risk losing a rising star to another company or team.
Software Development Roles
Engineering Fellow
CEO
CTO
CIO/Chief Digital Officer/Chief Innovation Officer
VP of Engineering/Director of Engineering
Chief Architect
Software Architect
Engineering Project Manager/Engineering Manager
Technical Lead/Engineering Lead/Team Lead
Principal Software Engineer
Senior Software Engineer/Senior Software Developer
Software Engineer
Software Developer
Junior Software Developer
Intern Software Developer
We’ll also talk a little about how these roles relate to other roles including:
VP of Product Management/Head of Product
Product Manager
VP of Marketing
Note: Sometimes “director”, or “head” titles indicate middle managers between tech managers and the C-Suite. Often, “Chief” titles indicate a C-suite title. C-suite employees typically report directly to the CEO, and have potentially many reports in the organizations they lead. At very large companies, those alternate titles often fill similar roles to C-suite executives, but report to somebody who is effectively the CEO of a smaller business unit within the larger organization. Different business units sometimes operate as if they are separate companies, complete with their own isolated accounting, financial officers, etc. Different business units can also have VPs, e.g., “Vice President of Engineering, Merchant Operations”.
Engineering Fellow
The title “fellow” is the pinnacle of achievement for software engineers. It is typically granted in recognition of people who have made outstanding contributions to the field of computing, usually after an engineer has written a number of top-selling books, won prizes like the Turing Award or the Nobel Prize, etc. In other words, fellows are usually already famous outside the organization, and the company is trying to strengthen its brand by more strongly associating itself with admired and influential people.
In my opinion, organizations should not try to hire for “fellow” roles. Instead, find the best and brightest, hire them, and then grant the title (and benefits) if the engineer is deserving of it.
A fellow typically also holds another title at the company. Often a CTO, Architect, VP of Engineering, or principal role, where they are in a position to lead, mentor, or serve as an example and inspiration to other members of the organization.
CEO
The CEO is the position of most authority in an organization. Typically, they set the vision and north star for the company. They rally everybody around a common understanding of why the company exists, what the mission is, and what the company’s values are. Frequently, CEOs are also the public face of the company, and in some cases, become synonymous with the brand (e.g., Steve Jobs with Apple, Elon Musk with Tesla/SpaceX, etc.).
In some cases, CEOs are also the technical founder of a software organization, in which case, they also often fill the CTO role, and may have VPs of Operations, Sales, Strategy, and Marketing helping with some of the other common CEO responsibilities.
The CEO of a small company frequently wears a lot of hats, as you may have picked up from all the other roles that fell out of the CEO title when I mentioned that some CEOs lead the technology team.
In any case, if there are important organizational decisions to be made, you can’t run it up the chain of responsibility any higher than the CEO.
If you are a CEO, remember that you’re ultimately responsible, and you should trust your instincts, but don’t forget that even most famous CEOs have mentors and advisors they consult with on a regular basis. Trust your gut, but seek out smart, insightful people to challenge you to improve, as well.
CTO
Like the CEO role, the CTO role shape-shifts over time. At young startups, the CTO is often a technical cofounder to a visionary or domain-driven CEO. Frequently they are not qualified to take the title at a larger company, and hopefully grow into it as the company grows. Frequently, a startup CTO finds that they prefer more technical engineering roles, and settle back into other roles, like Principal Engineer, VP of Engineering, or Chief Architect.
In many organizations, the mature CTO role is outward facing. They participate in business development meetings, frequently helping to land large partnerships or sales. Many of them hit the conference circuit and spend a lot of time evangelizing the development activities of the organization to the wider world: sharing the company’s innovations and discovering opportunities in the market which match up well with the company’s core competencies. CTOs frequently work closely with the product team on product strategy, and often have an internal-facing counterpart in engineering, such as the VP of Engineering.
CTOs also frequently set the vision and north star of the engineering team: the goals for the team to work toward.
CIO/Chief Digital Officer/Chief Innovation Officer
The Chief Innovation Officer (CIO) is like a CTO, but typically employed by a company that would not normally be considered a “tech company”. The goal of the CIO is to reshape the company into one that consumers perceive as tech-savvy and innovative: To show the world what the future of the industry looks like, no matter what that industry is. For example, a home remodeling superstore chain might have a CIO responsible for partnering with tech companies to build a mixed reality app to show shoppers what a specific couch or wall color would look like in their living room, or using blockchains and cryptocurrencies to enhance the security and efficiency of supply chain logistics.
Not to be confused with a Chief Information Officer (CIO), a title typically used in companies that are even more detached from technology, interested in it only insofar as it aids their core operations. Unlike a Chief Innovation Officer, a Chief Information Officer is more likely to be leading tech integration and data migration projects than building new apps and trying to figure out how a company can disrupt itself from the inside. There are Chief Information Officers who act more like Chief Innovation Officers, but in my opinion, they should use the appropriate title.
Most tech-native companies (app developers, etc) don’t have either kind of CIO. Instead, those responsibilities fall to the CTO and VP of Engineering.
VP of Engineering/Director of Engineering
While CTOs often face outward, the VP of Engineering often faces inward. A VP of Engineering is frequently responsible for building the engineering team and establishing the engineering culture and operations. The CTO might tell the engineering team what needs to get done on the grand scale, e.g., “be the leading innovator in human/computer interaction”. The VP of Engineering helps foster a culture that manages the “how”. The best VPs of Engineering at first come across as somebody who’s there to help the team work efficiently, and then they almost disappear. Developers on the team collaborate well, mentor each other, communicate effectively, and they think, “Hey, we’re a great team. We work really well together!” and maybe they think that’s all a lucky accident.
The truth is that almost never happens by accident. It happens because there’s a VP of Engineering constantly monitoring the team’s progress, process, culture, and tone of communications. They’re encouraging developers to use certain tools, hold specific kinds of meetings at specific times in order to foster better collaboration with fewer interruptions. The best VPs of Engineering have been engineers, both on dysfunctional teams, and on highly functional teams. They know the patterns and anti-patterns for effective software development workflows.
They work with the heads of product and product managers to ensure that there’s a good product discovery process (they don’t lead it or take charge of it, just make sure that somebody is on it and doing it well), and that product and design deliverables are adequately reviewed by engineers prior to implementation hand-offs. I’m going to stop there before I write a book on all the work that goes into leading effective development operations. For more of my thoughts on this topic, check out How to Build a High Velocity Development Team.
Many startups are too small to hire a full time VP of Engineering, but it’s still very important to get engineering culture right as early as possible. If you need help with this, reach out.
Chief Architect
At small organizations, the chief architect could be a technical co-founder with the self-awareness to realize that they won’t want the responsibilities of a CTO as the company grows. Maybe they don’t like to travel, or are simply more interested in software design than conference talks, business development, and sales calls that infiltrate the lives of many CTOs. The chief architect may be responsible for selecting technology stacks, designing collaborations and interfaces between computing systems, assessing compute services offerings (AWS, Azure, ZEIT Now, etc.), and so on. A chief architect may evaluate a wide range of industry offerings and make pre-approved or favored recommendations to work with particular vendors.
As the company matures, the chief architect may also need to work closely with the CTO, and sometimes partner organizations to develop integrations between services. At many companies, the CTO also serves as the chief architect.
Software Architect
A software architect serves many of the purposes of a chief architect, but is generally responsible for smaller cross-sections of functionality. Architects will often work with the chief architect to implement their slice of the larger architectural vision. Software architects often make tech stack choices for particular applications or features, rather than company-wide decisions.
Engineering Project Manager/Engineering Manager/Project Manager
An Engineering Project Manager (also called “Engineering Manager” or simply “Project Manager”) is in charge of managing the workflow of an engineering team. Some larger companies have both Engineering Managers and Project Managers. In that case, the Engineering Manager typically acts like the VP of Engineering at the local team scope, while the Project Manager takes on the responsibilities described here.
Project Managers typically interface with both product leaders and an engineering leader such as the VP of Engineering, CTO, or a middle manager to cultivate and prune the work backlogs, track the progress of work tickets, and produce detailed progress reports (milestone burn-down charts, completed vs. open tickets, month-over-month progress reports, etc.). You can think of them as the analog of a shop manager for a manufacturing assembly line. They watch the work floor and make sure that the assembly line runs smoothly, and work product isn’t piling up on the floor in front of a bottleneck.
The best Project Managers also spend a lot of time classifying issues and bugs in order to analyze metrics like bug density per feature point, what caused the most bugs (design error, spec error, logic error, syntax error, type error, etc.) and so on. Those kinds of metrics can be used to measure the effectiveness of various initiatives, and point out where improvements can be made to the engineering process.
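As a concrete illustration of that kind of analysis, here's a minimal sketch in Python. The ticket fields (`type`, `points`, `cause`) are hypothetical names for illustration, not any particular issue tracker's schema; a real Project Manager would pull this data from their tracking tool's export or API:

```python
from collections import Counter

def bug_metrics(tickets):
    """Summarize tickets: bug density per feature point and bug counts by root cause.

    Each ticket is a dict like {"type": "bug" | "feature", "points": int, "cause": str}.
    """
    feature_points = sum(t["points"] for t in tickets if t["type"] == "feature")
    bugs = [t for t in tickets if t["type"] == "bug"]
    density = len(bugs) / feature_points if feature_points else 0.0
    causes = Counter(t.get("cause", "unknown") for t in bugs)
    return {"bugs_per_feature_point": density, "bugs_by_cause": dict(causes)}

tickets = [
    {"type": "feature", "points": 5},
    {"type": "feature", "points": 3},
    {"type": "bug", "points": 1, "cause": "spec error"},
    {"type": "bug", "points": 1, "cause": "logic error"},
    {"type": "bug", "points": 2, "cause": "spec error"},
]
metrics = bug_metrics(tickets)
print(metrics)  # 3 bugs / 8 feature points = 0.375; spec errors dominate
```

Run over a quarter's worth of tickets, a rising `bugs_per_feature_point`, or one cause category dominating the counts, points directly at the part of the process that needs attention.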
Engineering Managers tend to develop a good understanding of the strengths of various team members, and get good at assigning work tickets to the appropriate responsible parties, although, this should be a collaborative effort, seeking feedback from individual developers on what their career goals are and what they want to focus on, within the bounds of the project scope available.
If there is time pressure or work backlogs piling up, the Project Manager should collaborate with the engineering and product leaders to figure out the root cause and correct the dysfunction as soon as possible.
Wherever possible, the Project Managers should be the only ones directly delegating tasks to individual engineers in order to avoid the multiple bosses problem. Engineers should have a clear idea of who they report directly to, and who’s in charge of delegating to them. If you’re a different kind of engineering leader, and you’re guilty of delegating directly to engineers, it’s probably a good idea to coordinate with the Engineering Manager in charge of the report you’re delegating to and delegate through them so that the work receives correct, coordinated prioritization, and the Engineering Manager is aware of what each engineer is actively working on at any given moment.
At very small organizations, the Engineering Manager is often also the CTO and VP of Engineering (with or without the corresponding titles). If that’s you, don’t worry about the previous paragraph.
A common dysfunction is that the Engineering Manager can begin to think that because product hands off work for engineering to implement, and Engineering Managers work closely with product teams, that the Engineering Manager reports to a Product Manager. In every case I’ve seen that happen, it was a mistake. See “Avoiding Dysfunctions…” below.
Tech Lead/Team Lead
The Tech Lead or Team Lead is usually the leader of a small number of developers. They are usually senior engineers who act like mentors, examples, and guides for the rest of the team. Usually, engineers report to the project manager or engineering manager, but a tech lead may be responsible for the team’s code quality measures, such as ensuring that adequate code reviews are being conducted, and that the team’s technical standards (such as TDD) are being upheld.
Engineer Career Progressions
Generally, engineers can take one of two career paths: move into management, or keep coding. Management positions aren’t for everyone. Lots of engineers prefer to stay on the technical path. That progression can take many directions, twists, and turns, but could look something like this:
Intern -> Junior Software Developer -> Software Developer/Engineer -> Senior Software Engineer -> Principal Software Engineer -> Software Architect -> Senior Software Architect -> Chief Architect -> CTO -> Engineering Fellow
Alternatively, for those engineers interested in a people leadership role, a progression might look something like this:
Intern -> Junior Software Developer -> Software Developer/Engineer -> Team Lead/Tech Lead -> Engineering Manager/Project Manager -> Senior Engineering Manager -> Director of Engineering -> VP of Engineering
Avoiding Dysfunctions in Engineering Leadership
IMO, VP of Engineering, CTO, VP of Product, and VP of Marketing should all report directly to the CEO. Each of them needs to be in charge of their own process. External-facing CTOs should not have direct reports (if they do, it usually means they are filling both the CTO and VP of Engineering roles). Instead, the engineering leaders report to the VP of Engineering. This is to avoid the two bosses dysfunction, but also because these roles are fundamentally different: one focused on the customer and how the organization fits into the wider world, and the other focused on internal, day-to-day operations. They’re two wildly different skill sets, with sometimes competing priorities.
I’ve seen a lot of dysfunction in engineering leadership because of confusion about which engineering leaders are responsible for what, and it tends to be a recipe for disaster. Whatever is right for your organization, make sure that responsibilities and chain of authority are clear, in order to avoid engineers feeling torn between two or three different “bosses”.
Likewise, in an organization of sufficient size, product and engineering need to be two separately led teams. What I mean by that is that the product managers should own the product roadmap. They should be evangelists for the users, and they should be really plugged into the users, often engaging with them 1:1 and learning about their workflows and pain-points in great depth. They should be experts on what the market needs, and they should be very familiar with the company’s strengths and capabilities to fill those needs.
That said, the VP of Engineering (or whoever is filling that role) needs to be in charge of delivery and production pace. While the product managers should own the roadmap, the engineering managers need to be responsible for taking those roadmap priorities, matching them to the engineering capacity, and reporting on the timing. Product and marketing teams will have strong opinions about when something should ship, but only the engineering management has a good gauge of whether or not those delivery timelines are possible given the roadmap requirements. The engineering team needs the authority not simply to push back on timing, but in most cases, to completely own timing, working with the CEO, product, and marketing teams to figure out priorities, understand strategic needs of the company, and then help shape a development cadence that can meet those needs without imposing drop-dead deadlines that ultimately hurt the company’s ability to deliver quality products at a reliable pace.
The best performing teams I’ve ever been involved with subscribed to the no deadlines approach. We build great products without announcing them in advance, and then let the marketing teams promote work that is already done. Alternatively, when you’re working in the public view, transparency is a great solution. Instead of cramming to meet an arbitrary deadline, actively share your progress, with ticket burn-down charts, a clear view of remaining work, progress, pace, and remaining scope, and change over time that can indicate scope creep. When you share detailed information about the progress being made, and share the philosophy that we can’t promise a delivery date, but we can share everything we know about our progress with you, people can see for themselves the work and the pace.
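A burn-down series like the one described above is trivial to compute. This sketch assumes work is measured in story points, which is an illustrative choice, not a requirement; hours or ticket counts work the same way:

```python
def burn_down(total_points, completed_per_day):
    """Remaining story points after each day, for a simple burn-down report."""
    remaining = [total_points]
    for done in completed_per_day:
        remaining.append(max(remaining[-1] - done, 0))
    return remaining

# 40 points of scoped work, with points completed each day of the week.
history = burn_down(40, [5, 3, 0, 8, 6])
print(history)  # [40, 35, 32, 32, 24, 18]
```

If scope grows mid-project, re-baselining `total_points` and regenerating the series makes the scope creep visible in the chart instead of hiding it inside a slipped deadline.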
Because of differing, often competing goals, product, marketing and engineering need to be separate roles reporting directly to the CEO where none of them can dictate to each other. If your team feels time pressure to work overtime, or crunch to get some key deliverable out before some drop-dead deadline, it points to a dysfunction here. Either the engineering managers are reporting to the wrong people, or the team lacks a strong engineering leader who understands the futility of software estimates and the need for a collaborative give-and-take between engineering and product in order to ensure the flexibility of shipping scaled-back MVPs to hit delivery targets.
Product should own the continuous discovery process. Engineering should own the continuous delivery process. Marketing should work hand-in-hand with the product team to ensure that product messaging to the wider world is on-point. The whole thing should fit together like a pipeline, creating a smoothly flowing, positive feedback cycle. Like an assembly line, the slowest bottleneck in the process must set the pace for the rest of the process, otherwise, it will lead to an ever-growing backlog that piles up so much that backlog items become obsolete, and backlog management becomes a full-time job.
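The assembly-line arithmetic above is worth making explicit. In this sketch (using illustrative units of work items per week), any inflow above the slowest stage's capacity accumulates as backlog, no matter how fast the other stages are:

```python
def backlog_growth(inflow_per_week, stage_capacities):
    """Items added to the backlog each week: work piles up in front of the slowest stage."""
    bottleneck = min(stage_capacities)
    return max(inflow_per_week - bottleneck, 0)

# Discovery hands off 10 items/week; design, engineering, and QA can absorb 12, 6, and 9.
print(backlog_growth(10, [12, 6, 9]))  # 4 items pile up every week
```

The only fixes are throttling the inflow to the bottleneck's pace or adding capacity at the bottleneck itself; speeding up the already-fast stages changes nothing.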
Product teams who feel like engineering is not keeping pace should focus first on quality of engineering hand-off deliverables. Have we done adequate design review? Has an engineer had a chance to provide constructive feedback before handoff? 80% of software bugs are caused by specification or UX design errors, and many of those can be caught before work ever gets handed off to an engineering team. Once you have that process finely tuned, ask yourself if you’ve really explored the product design space thoroughly enough. Did you build one UX and call it done, or did you try multiple variations? Building and testing variations on user workflows is one of the most valuable contributions a product team can make. Do you have a group of trusted users or customers you can run A/B prototype tests with?
One of the biggest dysfunctions of software teams is that the product team is producing sub-par deliverables (sometimes little more than a few rushed, buggy mock-ups), and failing to run any of them by customers or engineers prior to handing them off. That dysfunction causes a pileup of re-work and engineering backlog that often gets blamed on engineering teams.
Make sure that the delegation of responsibilities makes sense, that you’re not putting undue time pressure on engineering, and that you have a great product team engaged in a collaborative product discovery process, working with real users to build the best product.
Engineering managers, I’m not letting you off the hook. If these dysfunctions exist on your team, it’s your responsibility to address them with product, marketing, and business leadership, and to spearhead requirements for engineering hand-offs. It’s also your responsibility to protect the productive pace of your team, to go to bat for additional resources if your team is being pressured to produce more than your current capacity can handle, to report clearly on work pacing and backlog, and to demo completed work and ensure that your team is getting due credit for the fine work that is being done.
Don’t place blame, but do demonstrate that your team is doing their very best work. | https://medium.com/javascript-scene/software-roles-and-titles-e3f0b69c410c | ['Eric Elliott'] | 2020-07-17 00:54:42.751000+00:00 | ['Engineering Management', 'Software Engineering', 'Software Development', 'Technology', 'Startup'] |
CTO might tell engineering team need get done grand scale eg “be leading innovator humancomputer interaction” VP Engineering help foster culture manages “how” best VPs Engineering first come across somebody who’s help team work efficiently almost disappear Developers team collaborate well mentor communicate effectively think “Hey we’re great team work really well together” maybe think that’s lucky accident truth almost never happens accident happens there’s VP Engineering constantly monitoring team’s progress process culture tone communication They’re encouraging developer use certain tool hold specific kind meeting specific time order foster better collaboration fewer interruption best VPs Engineering engineer dysfunctional team highly functional team know pattern antipatterns effective software development workflow work head product product manager ensure there’s good product discovery process don’t lead take charge make sure somebody well product design deliverable adequately reviewed engineer prior implementation hand offs I’m going stop write book work go leading effective development operation thought topic check Build High Velocity Development Team Many startup small hire full time VP Engineering it’s still important get engineering culture right early possible need help reach Chief Architect small organization chief architect could technical cofounder selfawareness realize won’t want responsibility CTO company grows Maybe don’t like travel simply interested software design conference talk business development sale call infiltrate life many CTOs chief architect may responsible selecting technology stack designing collaboration interface computing system assessing compute service offering AWS Azure ZEIT etc chief architect may evaluate wide range industry offering make preapproved favored recommendation work particular vendor company matures chief architect may also need work closely CTO sometimes partner organization develop integration service many company 
CTO also serf chief architect Software Architect software architect serf many purpose chief architect generally responsible smaller crosssections functionality Architects often work chief architect implement slice larger architectural vision Software architect often make tech stack choice particular application feature rather companywide decision Engineering Project ManagerEngineering ManagerProject Manager Engineering Project Manager also called “Engineering Manager” simply “Project Manager” charge managing workflow engineering team larger company Engineering Managers Project Managers case Engineering Manager typically act like VP Engineering local team scope Project Manager take responsibility described Project Managers typically interface product leader engineering leader VP Engineering CTO middle manager cultivate prune work backlog track progress work ticket detailed progress report milestone burn chart completed v open ticket monthmonth progress report etc think analog shop manager manufacturing assembly line watch work floor make sure assembly line run smoothly work product isn’t piling floor front bottleneck best Project Managers also spend lot time classifying issue bug order analyze metric like bug density per feature point caused bug design error spec error logic error syntax error type error etc kind metric used measure effectiveness various initiative point improvement made engineering process Engineering Managers tend develop good understanding strength various team member get good assigning work ticket appropriate responsible party although collaborative effort seeking feedback individual developer career goal want focus within bound project scope available time pressure work backlog piling Project Manager collaborate engineering product leader figure root cause correct dysfunction soon possible Wherever possible Project Managers one directly delegating task individual engineer order avoid multiple boss problem Engineers clear idea report directly 
who’s charge delegating you’re different kind engineering leader you’re guilty delegating directly engineer it’s probably good idea coordinate Engineering Manager charge report you’re delegating delegate work receives correct coordinated prioritization Engineering Manager aware engineer actively working given moment small organization Engineering Manager often also CTO VP Engineering without corresponding title that’s don’t worry previous paragraph common dysfunction Engineering Manager begin think product hand work engineering implement Engineering Managers work closely product team Engineering Manager report Product Manager every case I’ve seen happen mistake See “Avoiding Dysfunctions…” Tech LeadTeam Lead Tech Lead Team Lead usually leader small number developer usually senior engineer act like mentor example guide rest team Usually engineer report project manager engineering manager tech lead may responsible team’s code quality measure ensuring adequate code review conducted team’s technical standard TDD upheld Engineer Career Progressions Generally engineer take one two career path move management keep coding Management position aren’t everyone Lots engineer prefer stay technical path progression take many direction twist turn could look something like Intern Junior Software Developer Software DeveloperEngineer Senior Software Engineer Principal Software Engineer Software Architect Senior Software Architect Chief Architect CTO Engineering Fellow Alternatively engineer interested people leadership role progression might look something like Intern Junior Software Developer Software DeveloperEngineer Team LeadTech Lead Engineering ManagerProject Manager Senior Engineering Manager Director Engineering VP Engineering Avoiding Dysfunctions Engineering Leadership IMO VP Engineering CTO VP Product VP Marketing report directly CEO need charge process External facing CTOs direct report usually mean filling CTO VP Engineering Roles Instead Engineering leader report VP 
Engineering avoid two boss dysfunction also role fundamentally different one focused customer organization fit wider world focused internal daytoday operation They’re two wildly different skill set sometimes competing priority I’ve seen lot dysfunction engineering leadership confusion engineering leader responsible tends recipe disaster Whatever right organization make sure responsibility chain authority clear order avoid engineer feeling torn two three different “bosses” Likewise organization sufficient size product engineering need two separately led team mean product manager product roadmap evangelist user really plugged user often engaging 11 learning workflow painpoints great depth expert market need familiar company’s strength capability fill need said VP Engineering whomever filling role need charge delivery production pace product manager roadmap engineering manager need responsible taking roadmap priority matching engineering capacity reporting timing Product marketing team strong opinion something ship engineering management good gauge whether delivery timeline possible given roadmap requirement engineering team need authority simply push back timing case completely timing working CEO product marketing team figure priority understand strategic need company help shape development cadence meet need without imposing dropdead deadline ultimately hurt company’s ability deliver quality product reliable pace best performing team I’ve ever involved subscribed deadline approach build great product without announcing advance let marketing team promote work already done Alternatively you’re working public view transparency great solution Instead cramming meet arbitrary deadline actively share progress ticket burndown chart clear view remaining work progress pace remaining scope change time indicate scope creep share detailed information progress made share philosophy can’t promise delivery date share everything know progress people see work pace differing often 
competing goal product marketing engineering need separate role reporting directly CEO none dictate team feel time pressure work overtime crunch get key deliverable dropdead deadline point dysfunction Either engineering manager reporting wrong people team lack strong engineering leader understands futility software estimate need collaborative giveandtake engineering product order ensure flexibility shipping scaledback MVPs hit delivery target Product continuous discovery process Engineering continuous delivery process Marketing work handinhand product team ensure product messaging wider world onpoint whole thing fit together like pipeline creating smoothly flowing positive feedback cycle Like assembly line slowest bottleneck process must set pace rest process otherwise lead evergrowing backlog pile much backlog item become obsolete backlog management becomes fulltime job Product team feel like engineering keeping pace focus first quality engineering handoff deliverable done adequate design review engineer chance provide constructive feedback handoff 80 software bug caused specification UX design error many caught work ever get handed engineering team process finely tuned ask you’ve really explored product design space thoroughly enough build one UX call done try multiple variation Building testing variation user workflow one valuable contribution product team make group trusted user customer run AB prototype test One biggest dysfunction software team product team producing subpar deliverable sometimes little rushed buggy mockups failing run customer engineer prior handing dysfunction cause pileup rework engineering backlog often get blamed engineering team Make sure delegation responsibility make sense you’re putting undue time pressure engineering great product team engaged collaborative product discovery process working real user build best product Engineering manager I’m letting hook dysfunction exist team it’s responsibility address product marketing business 
leadership spearhead requirement engineering handoff It’s also responsibility protect productive pace team go bat additional resource team pressured produce current capacity handle report clearly work pacing backlog demo completed work ensure team getting due credit fine work done Don’t place blame demonstrate team best workTags Engineering Management Software Engineering Software Development Technology Startup |
1,148 | We All Have a Story to Tell | I believe writers are open books. Our readers are privy to our life experiences, our families, our personality traits and our thoughts and feelings. They share our secrets and our memories through our written words, but what happens when we have a story we yearn to tell but it can’t be told?
What happens when we can’t share the truth and there is no outlet? There are circumstances when it’s just not enough to write it for yourself, when you feel the need to be heard but you vow yourself to silence. Perhaps it’s too private or we fear consequences, or it’s a part of us we aren’t willing to share with the world.
It could be the story is too painful to share. I recently wrote a story that will never be published. No one other than me will ever read it.
My story is too painful. It keeps me awake at night, crushing me. Allowing others in is my only hope for escape. Therefore, I will never be free.
“It must be a divine power or non-human force because every day I tell myself I’m done, yet I’m still here. I want to surrender but there’s this unexplainable will to keep going that I never knew I had.”
At the same time, my story will eat me alive. It weighs heavy on me. Each word is carefully thought out. Emotions are adjectives. Actions are verbs. Specifics are nouns, yet no one will ever be given the opportunity to connect with each carefully chosen word.
There are some secrets I cannot share. There are certain truths that cannot be told. There’s a blank page in every book.
“I have no resolution. I’m incapable of freeing you from your pain. Each new day picks up where the last left off and I feel as desperate as you do. While you feel undeserving I feel as if I’m not doing enough.”
I am failing myself and my readers. I’m stifling characters, burying details and barricading scenes. There isn’t always a hero in the hero’s journey.
I am no hero.
I’m incapable of navigating the unknown world in real life, and when we put things in writing, they become a part of us that will always be there.
I refer to this as the curse of the creative. We become dependent on processing life through the process of writing. We become human when we are recognized as human. We become connected and bonded to those whom we hook with our opening paragraph and who cheer for us when we triumph in the end.
1,149 | My Favorite Pro Creative Apps for 2021 | Bear: The one and only home for all my ideas. The tagging system sold me on this. It’s not the only app with this feature, but it does do it rather well. Instead of categories, Bear allows you to add multiple sorting tags to each note. With categories, one note has one category. With tags, one note can have multiple “categories.”
As a result, the organization is more complete. Which matters to me because connections between thoughts are important. For instance, let’s say I create a note about a new strategy for my trading algorithm. That would obviously belong in the “Code” category. But what if I also want to add it to my post queue, the “Posts” category? I can’t categorize it as both “Code” and “Posts.” But with Bear, I can.
I simply add both “#code” and “#posts” to that one note. Done. Now I can find it under both “#code” and “#posts” in the sidebar. In other apps, I’d have to duplicate the information. Maybe I’d add “post about new trading strategy” to a different note under the Posts category. Obviously, Bear’s tag system is superior.
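To make the difference concrete, here is a small illustrative sketch (plain Python, nothing to do with Bear's internals; the note titles and tags are made up). A category maps each note to exactly one bucket, while a tag index lets one note appear under several at once, which is the structure behind a sidebar like Bear's:

```python
from collections import defaultdict

# Hypothetical notes, each carrying any number of tags, the way a
# tag-based app lets one note live under several "categories" at once.
notes = {
    "New trading strategy": ["#code", "#posts"],
    "Grocery list": ["#home"],
    "Blog draft: APIs": ["#posts"],
}

# Build a tag -> note-titles index; one note can show up under many tags.
index = defaultdict(list)
for title, tags in notes.items():
    for tag in tags:
        index[tag].append(title)

print(index["#posts"])  # -> ['New trading strategy', 'Blog draft: APIs']
print(index["#code"])   # -> ['New trading strategy']
```

With categories you would have to pick one bucket per note; with the index above, "New trading strategy" is findable under both.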
Bear’s note editor in fullscreen mode. Source: author.
Moving on, the Bear editor is one of the best I’ve used for on-the-fly ideas. Fast. Clean. Effective. Plus, Markdown support is excellent — I can format quickly even on my phone. This was a problem with my ex-idea jotter, Agenda. Editing was too slow. In contrast, there’s less friction with Bear. A big deal when I’ve got three things to write down at the same time.
Now, I’m punctilious when it comes to UI. I probably apply more selection criteria to user interfaces than I’ll apply to my wife. Corner radius of a single button looks off? Junk it. There’s no compromise when it comes to design. I look at these tools for hours a day. They must be pretty. And I’m happy to report Bear passes all the tests. In fact, I might even call it impressive. Dark. Clear. Intuitive. I can get behind the aesthetic.
The Dieci theme suits my fancy on my iPhone and iPad. On Mac, Solarized Dark suits the bigger screen. You’ve got plenty of skin options if you go pro — more on that later.
Extras
But things get fun when you leave the app, too. The developers thought of everything. It’s pretty much as integrated as a first-party app. For instance, I set up a Siri Shortcut to dictate a new Bear note using Siri. When I suddenly come up with a bug fix while cooking, this is a lifesaver. In addition, I have a shortcut to record a post idea from my clipboard — useful when something I’m reading inspires me. Just copy some text, run the shortcut, and the idea is stored in Bear under “#posts”. The whole process takes three seconds. Not all note apps can do this.
And the Bear widgets on my iPad Pro’s Today view? Remarkably useful. I set it up to show my most recent notes. So with one tap, I can pick up where I left off. Right from the home screen. No digging through categories. No filtering. This is what notes should be: Natural. Accessible. An extension of your own mind. Bear gets as close to that as I’ve ever experienced.
I mean, you can even access your local Bear Notes database on macOS. For the non-programmers out there, this means you have infinite automation options for manipulating your notes. Granted, most of you won’t need this. But hey, it’s an option.
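To give a flavor of what that kind of automation could look like, here is a minimal, hedged sketch. It assumes the store is an ordinary SQLite file; the table and column names used below (`ZSFNOTE`, `ZTITLE`, `ZTEXT`) are assumptions, not a documented Bear API, and may differ between versions, so inspect your own database before pointing anything at the real file. To stay safe and runnable, the example queries an in-memory stand-in with the same shape rather than the actual database:

```python
import sqlite3

# Stand-in for the on-disk notes database. The schema here (ZSFNOTE with
# ZTITLE/ZTEXT columns) is an assumption for illustration only; check the
# real file's schema before adapting this.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ZSFNOTE (ZTITLE TEXT, ZTEXT TEXT)")
conn.executemany(
    "INSERT INTO ZSFNOTE VALUES (?, ?)",
    [
        ("Trading strategy", "#code #posts idea for mean reversion"),
        ("Groceries", "#home eggs, coffee"),
    ],
)

def notes_tagged(conn, tag):
    """Return titles of notes whose body mentions the given tag."""
    rows = conn.execute(
        "SELECT ZTITLE FROM ZSFNOTE WHERE ZTEXT LIKE ?", (f"%{tag}%",)
    )
    return [title for (title,) in rows]

print(notes_tagged(conn, "#posts"))  # -> ['Trading strategy']
```

Swap the in-memory connection for the real file path on your Mac (read-only, ideally on a copy) and the same query pattern applies.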
Pricing
Finally, let’s talk cost. Bear Pro will cost you $1.49 per month. If you don’t have commitment issues, unlike me, you could pay roughly $1.25 per month, billed annually. Whichever option you prefer, just buy it. This pricing is absurd. I’m not sure how the good folks who make this are turning a profit. A dollar and a half per month? You can find that walking down the street.
The free version doesn’t have sync. And themes are rather limited. If you’re a light user, I guess you could make it work. But it’s a deal-breaker for me. I use this every day. All the time. For hours. To do everything. They could charge ten times their current price, and it’d still be a bargain. It helps that I’m sold on the brand, too. I mean, what developer offers free themed wallpapers? Well-played, Bear. You got me there.
In summary, I can’t recommend Bear enough. It’s simple if that’s all you need. But when the rubber meets the road, it keeps up. Value for money, capable, and pretty. What else do you need? Migrating from your existing notes app is trivial, too. So there’s not much stopping you.
Alternative: Drafts
One alternative I’ve heard about is Drafts: Similar editing, similar organization, just five times uglier. Can’t stand it. Not enough negative space in the layout of the editor. The menus look messy. Too distracting. Overall, not as consistent as Bear. I need serenity in my idea sanctuary. But if you’re less pedantic, why not give it a shot. | https://medium.com/swlh/best-pro-productivity-creative-apps-crypto-trader-blogger-2021-1aafbdefa919 | ['Mika Y.'] | 2020-12-22 08:21:01.713000+00:00 | ['Business', 'Work', 'Technology', 'Productivity', 'Startup'] | Title Favorite Pro Creative Apps 2021Content Bear one home idea tagging system sold It’s app feature rather well Instead category Bear allows add multiple sorting tag note category one note one category tag one note multiple “categories” result organization complete matter connection thought important instance let’s say create note new strategy trading algorithm would obviously belong “Code” category also want add post queue “Posts” category can’t categorize “Code” “Posts” Bear simply add “code” “posts” one note Done find “code” “posts” sidebar apps I’d duplicate information Maybe I’d add “post new trading strategy” different note Posts category Obviously Bear’s tag system superior Bear’s note editor fullscreen mode Source author Moving Bear editor one best I’ve used onthefly idea Fast Clean Effective Plus Markdown support excellent — format quickly even phone problem exidea jotter Agenda Editing slow contrast there’s le friction Bear big deal I’ve got three thing write time I’m punctilious come UI probably apply selection criterion user interface I’ll apply wife Corner radius single button look Junk There’s compromise come design look tool hour day must pretty I’m happy report Bear pass test fact might even call impressive Dark Clear Intuitive get behind aesthetic Dieci theme suit fancy iPhone iPad Mac Solarized Dark suit bigger screen You’ve got plenty skin option go pro — later Extras thing get fun leave app 
developer thought everything It’s pretty much integrated 1st party app instance set Siri Shortcut dictate new Bear note using Siri suddenly come bug fix cooking lifesaver addition shortcut record post idea clipboard — useful something I’m reading inspires copy text run shortcut idea stored Bear “posts” whole process take three second note apps Bear widget iPad Pro’s Today view Remarkably useful set show recent note one tap pick left Right home screen digging category filtering note Natural Accessible extension mind Bear get close I’ve ever experienced mean even access local Bear Notes database macOS nonprogrammers mean infinite automation option manipulating note Granted won’t need hey it’s option Pricing Finally let’s talk cost Bear Pro cost 149 per month don’t commitment issue unlike could pay roughly 125 per month billed annually Whichever option prefer buy pricing absurd I’m sure good folk make turning profit dollar half per month find walking street free version doesn’t sync theme rather limited you’re light user guess could make work it’s dealbreaker use every day time hour everything could charge ten time current price it’d still bargain help I’m sold brand mean developer offer free themed wallpaper Wellplayed Bear got summary can’t recommend Bear enough It’s simple that’s need rubber meet road keep Value money capable pretty else need Migrating existing note app trivial there’s much stopping Alternative Drafts One alternative I’ve heard Drafts Similar editing similar organization five time uglier Can’t stand enough negative space layout editor menu look messy distracting Overall consistent Bear need serenity idea sanctuary you’re le pedantic give shotTags Business Work Technology Productivity Startup |
1,150 | How to Start a New Brand | Starting a new brand is no easy feat. By “brand” we mean essentially starting a new business.
To do this effectively, you first need to make sure you are solving a problem people actually need help solving.
Unfortunately, many businesses get too ahead of themselves and think they are solving a problem but come to find out that no one really needs their solution. This, in essence, is why so many businesses and startups fail.
If you are indeed solving a problem AND solving it well, you can then proceed to identify what we like to refer to as the first step in branding.
Ask yourself:
“What is it that you want to be known for?”
Allow your customers to identify with you
Answering this question is a safe way to slowly grow a brand that encapsulates what it is you stand for and do. From there, your target audience will do their part in identifying with your business. This part comes naturally. The hope is that the “identification” period is a pleasant one. If it is, consumers will continue to come to you for whatever it is you offer.
Our video answers the question of “How to Start a New Brand” by taking our own brand (Couple of Creatives) and explaining the building blocks behind why we started it.
Follow along and try to find what it is about your own business that is unique. If you can identify something, focus on ways to make it valuable to your customers. If they can identify with you over your competition then your branding efforts are working. | https://medium.com/couple-of-creatives/how-to-start-a-new-brand-25fe0089ad57 | ['Andy Leverenz'] | 2017-02-22 02:45:14.813000+00:00 | ['Business', 'Branding', 'Entrepreneurship', 'Design', 'Freelancing'] | Title Start New BrandContent Starting new brand easy feat “brand” mean essentially starting new business effectively first need make sure solving problem people actually need help solving Unfortunately many business get ahead think solving problem come find one really need solution essence many business startup fail indeed solving problem solving well proceed identify like refer first step branding Ask “What want known for” Allow customer identify Answering question safe way slowly grow brand encapsulates stand target audience part identifying business part come naturally hope “identification” period pleasant one consumer continue come whatever offer video answer question “How Start New Brand” taking brand Couple Creatives explaining building block behind started Follow along try find business unique identify something focus way make valuable customer identify competition branding effort workingTags Business Branding Entrepreneurship Design Freelancing |
1,151 | The 12 Days of Christmas if It Were Written by Jane Austen | Image by author
The 12 Days of Christmas if It Were Written by Jane Austen
After you watch Bridgerton, sing this Regency-themed carol
While watching the new show Bridgerton on Netflix and admiring all the historical drama, don’t forget the lady who inspired much of it. Jane Austen not only wrote novels, she also wrote parodies of popular tunes, including The 12 Days of Christmas. Though not as well-known as Pride and Prejudice, it’s worth a listen or a sing-along.
On the first day of Christmas Jane Austen gave to me;
A proposal to a lady.
Image public domain
On the second day of Christmas Jane Austen gave to me;
Two pompous men,
and a proposal to a lady.
On the third day of Christmas Jane Austen gave to me;
Three French gowns,
two pompous men,
and a proposal to a lady.
On the fourth day of Christmas Jane Austen gave to me;
Four carriages,
three French gowns,
two pompous men,
and a proposal to a lady.
Image public domain
On the fifth day of Christmas Jane Austen gave to me;
Five ruined girls!
Four carriages,
three French gowns,
two pompous men,
and a proposal to a lady.
Image public domain
On the sixth day of Christmas Jane Austen gave to me;
Six ladies-a-walking,
five ruined girls!
Four carriages,
three French gowns,
two pompous men,
and a proposal to a lady.
On the seventh day of Christmas Jane Austen gave to me;
Seven schemers-a-scheming,
six ladies-a-walking,
five ruined girls!
Four carriages,
three French gowns,
two pompous men,
and a proposal to a lady.
On the eighth day of Christmas Jane Austen gave to me;
Eight mothers-a-worrying,
seven schemers-a-scheming,
six ladies-a-walking,
five ruined girls!
Four carriages,
three French gowns,
two pompous men,
and a proposal to a lady.
Image public domain
On the ninth day of Christmas Jane Austen gave to me;
Nine soldiers-a-flirting,
eight mothers-a-worrying,
seven schemers-a-scheming,
six ladies-a-walking,
five ruined girls!
Four carriages,
three French gowns,
two pompous men,
and a proposal to a lady.
Image public domain
On the tenth day of Christmas Jane Austen gave to me;
Ten characters misunderstanding,
nine soldiers-a-flirting,
eight mothers-a-worrying,
seven schemers-a-scheming,
six ladies-a-walking,
five ruined girls!
Four carriages,
three French gowns,
two pompous men,
and a proposal to a lady.
On the eleventh day of Christmas Jane Austen gave to me;
Eleven meaningful glances,
ten characters misunderstanding,
nine soldiers-a-flirting,
eight mothers-a-worrying,
seven schemers-a-scheming,
six ladies-a-walking,
five ruined girls!
Four carriages,
three French gowns,
two pompous men,
and a proposal to a lady.
Image public domain
On the twelfth day of Christmas Jane Austen gave to me;
Twelve maids actually working,
eleven meaningful glances,
ten characters misunderstanding,
nine soldiers-a-flirting,
eight mothers-a-worrying,
seven schemers-a-scheming,
six ladies-a-walking,
five ruined girls!
Four carriages,
three French gowns,
two pompous men,
and a proposal to a lady. | https://medium.com/jane-austens-wastebasket/the-12-days-of-christmas-if-it-were-written-by-jane-austen-d61b0ac0492d | ['Kyrie Gray'] | 2020-12-26 18:30:40.435000+00:00 | ['Books', 'Literature', 'Satire', 'Music', 'Humor'] | Title 12 Days Christmas Written Jane AustenContent Image author 12 Days Christmas Written Jane Austen watch Bridgerton sing Regencythemed carol watching new show Bridgerton Netflix admiring historical drama don’t forget lady inspired much Jane Austen wrote novel wrote parody popular tune including 12 Days Christmas Though wellknown Pride Prejudice it’s worth listen singalong first day Christmas Jane Austen gave proposal lady Image public domain second day Christmas Jane Austen gave Two pompous men proposal lady third day Christmas Jane Austen gave Three French gown two pompous men proposal lady fourth day Christmas Jane Austen gave Four carriage three French gown two pompous men proposal lady Image public domain fifth day Christmas Jane Austen gave Five ruined girl Four carriage three French gown two pompous men proposal lady image pubic domain sixth day Christmas Jane Austen gave Six ladiesawalking five ruined girl Four carriage three French gown two pompous men proposal lady seventh day Christmas Jane Austen gave Seven schemersascheming six ladiesawalking five ruined girl Four carriage three French gown two pompous men proposal lady eighth day Christmas Jane Austen gave Eight mothersaworrying seven schemersascheming six ladiesawalking five ruined girl Four carriage three French gown two pompous men proposal lady Image pubic domain ninth day Christmas Jane Austen gave Nine soldiersaflirting eight mothersaworrying seven schemersascheming six ladiesawalking Five ruined girl Four carriage three french gown two pompous men proposal lady Image public domain tenth day Christmas Jane Austen gave Ten character misunderstanding nine soldiersaflirting eight mothersaworrying seven schemersascheming six ladiesawalking five 
This is What Could Make or Break the U.S. This Fall
If the United States doesn’t get its act together, it’s going to be a tough autumn
As May gave way to June, rates of Covid-19 cases and deaths were falling across much of the United States, especially in New York, New Jersey, Michigan, and many of the virus’s springtime hot spots. Some epidemiological models even predicted that warm and sunny weather, coupled with more open windows and outdoor-centric lifestyles, would push infection rates so low that much of America could return to a state of relative normalcy.
Of course, things haven’t played out that way. Falling case and death counts helped lull many parts of the country into a false sense of security. In many states, imprudent reopenings, coupled with poor adherence to commonsense safety measures, gave a dwindling virus a big boost. “We declared victory at a plateau a couple months ago, and now we have a brand-new peak that has broken the previous record by twice the magnitude,” says Mark Cameron, PhD, an associate professor in the Department of Population and Quantitative Health Sciences at Case Western Reserve University School of Medicine in Cleveland.
Cameron and other experts say that if the United States makes the right moves now — starting today — there may still be time to right the ship before the fall. But if we don’t, prognostications for the coming months are almost uniformly dire.
“Whether you’re looking at individual states or the whole country, the outlook right now is grim,” he says.
Creating fresh reservoirs
Over and over again, virus experts highlight two foreseeable events as likely to cause major trouble this fall. Those events are school reopenings and the advent of the cold and flu season.
“I understand the need for parents to send their kids back to school, but I think school reopenings are going to create huge reservoirs of infected people,” says Lee Riley, MD, a professor and division head of infectious diseases and vaccinology at UC-Berkeley.
Riley says that if students and teachers rigorously adhere to mask guidelines and other virus-blocking protocols, and if people who develop symptoms or who are exposed to an infected individual immediately self-quarantine, it’s possible that schools could safely reopen. But he says that the politicization of these measures — and masks in particular — will make this level of adherence unlikely. “Kids will bring the infection home, and many people will develop severe infections, especially parents and grandparents who have preexisting conditions,” he says. “I don’t think there’s any question that will happen.”
Experts say the arrival of the cold and flu season will worsen this already difficult situation — and in more ways than one. For starters, Riley says that people who catch one of the seasonal bugs may be more likely to develop a serious infection if also exposed to SARS-CoV-2. “I think it will be very important for people to get the flu vaccine, and I think demand for that vaccine will create shortages,” he says.
Clinics overwhelmed by non-Covid-19 patients
Common cold and flu cases could also put a strain on the country’s still-feeble Covid-19 testing capabilities. “When we start seeing the normal circulation of influenza and this whole huge family of viruses that cause respiratory illness at this time every year, we’re going to have a lot of people developing symptoms that are consistent with Covid-19,” says Julie Fischer, PhD, a microbiologist at the Georgetown University Center for Global Health Science and Security.
Fischer says that easy access to quick and accurate Covid-19 testing will be absolutely vital when it comes to identifying the true coronavirus cases and ensuring that those people receive prompt and appropriate treatment. Testing will also ensure that hospitals and clinics aren’t overwhelmed with non-Covid-19 patients. Unfortunately, it seems unlikely that these testing capabilities will be in place by fall. “We are roughly seven months into this pandemic in the U.S., which is the most technically sophisticated country on the planet, and we’re still struggling to get aggressive and widespread testing in every community,” Fischer says. “We’re still rationing.”
She says that America’s testing deficiencies do not stem from a lack of technical know-how or capability. “The problem is that [testing] requires a lot of planning and communication and supporting policies, all of which are leadership issues,” Fischer says. In states or cities that have strong leaders who have “been guided by science,” she says that testing capabilities are good and getting better. “But in places where leadership or communication or coordination have broken down, we’re still struggling with testing,” she says. “The fact that we’re still dealing with these basic testing questions is incredibly frustrating.”
Time is running out
There’s still time — roughly two months — to ramp up testing capabilities across the country. “But this will take real leadership, including at the national level,” Fischer says.
Other experts agree that the United States still has a chance to get its act together in time to prevent calamity this fall. “If we are diligent and do everything we can to break the chains of transmission, we could get down the other side of this current wave by October,” says Case Western Reserve’s Cameron. “But we can’t make the same mistakes we made after the spring.”
What needs to happen? The answers aren’t surprising. “Other countries have been able to lower their curves by having good mask compliance, by controlling the opening of risky businesses like bars, and by avoiding large groups of people getting together without adequate social distancing,” Cameron says.
If everyone got on board with those measures, and if the country’s leaders ensured that testing access is appropriately ramped up, the fall could be a time of minor and controlled outbreaks. Throw in the emergence of a new and highly effective form of Covid-19 treatment — and several are in the works — and there’s a slim chance that autumn could turn out to be a happier and safer time in the United States than summer has proven to be. But it’s not looking good.
“I wish I had a more hopeful view for the fall, but in the U.S. there are a lot of people who don’t like to follow rules,” says Efraín Rivera-Serrano, PhD, a molecular virologist at the University of North Carolina at Chapel Hill.
Rivera-Serrano says that all the models that predicted lower transmission rates this summer were based primarily on environmental variables — such as more UV light and greater humidity levels. Those optimistic models didn’t account for huge chunks of the country ignoring social distancing directives and other safety measures.
“I don’t have high hopes, because I don’t trust people’s behavior,” he adds. | https://elemental.medium.com/theres-still-some-hope-for-the-u-s-this-fall-9b9527e9cb97 | ['Markham Heid'] | 2020-08-07 19:29:42.614000+00:00 | ['The Nuance', 'Covid 19', 'Coronavirus', 'Public Health', 'Health'] | Title Could Make Break US FallContent Could Make Break US Fall United States doesn’t get act together it’s going tough autumn May gave way June rate Covid19 case death falling across much United States especially New York New Jersey Michigan many virus’s springtime hot spot epidemiological model even predicted warm sunny weather coupled open window outdoorcentric lifestyle would push infection rate low much America could return state relative normalcy course thing haven’t played way Falling case death count helped lull many part country false sense security many state imprudent reopenings coupled poor adherence commonsense safety measure gave dwindling virus big boost “We declared victory plateau couple month ago brandnew peak broken previous record twice magnitude” say Mark Cameron PhD associate professor Department Population Quantitative Health Sciences Case Western Reserve University School Medicine Cleveland Cameron expert say United States make right move — starting today — may still time right ship fall don’t prognostication coming month almost uniformly dire “Whether you’re looking individual state whole country outlook right grim” say Creating fresh reservoir virus expert highlight two foreseeable event likely cause major trouble fall event school reopenings advent cold flu season “I understand need parent send kid back school think school reopenings going create huge reservoir infected people” say Lee Riley MD professor division head infectious disease vaccinology UCBerkeley Riley say student teacher rigorously adhere mask guideline virusblocking protocol people develop symptom exposed infected individual immediately selfquarantine it’s possible school could safely 
Blood pressure data analysis from NHANES dataset.

Speaking about sex differences in BP, I highly recommend the following article for further understanding of the data:
https://academic.oup.com/ajh/article/31/12/1247/5123934, and further explanation on the effects of aging in BP in both sexes, I recommend this article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4768730/.
Overall, the analysis of the NHANES dataset showed that the difference between diastolic (t = -2.47) and systolic BP (t = 3.1) in the age band [0–10 years] is small compared to other age bands. This is explained, among other reasons, by the low sexual differentiation in this pre-puberty period. The difference between males and females in systolic BP reaches its maximum value in the age band of [20–30 years] (t = 37.31), and in diastolic BP in the age band of [30–40 years] (t = 19.68), both BPs being higher in males than in females (this is why the t value is positive, since group 1 is always the male group and group 2 the female group).
After the greatest difference in systolic BP in the [20–30 years] band, the difference drops over the years, reaching a negative t value of -1.9 in the age band of [60–70 years]. The negative t value indicates that, in our sample, the mean systolic BP of females is higher than that of males. The diastolic BP difference between males and females also shrinks over the years. Many factors contribute to this observed result; one of them may be the drop in sexual hormones in females as they get older.
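As a rough, hedged sketch of how a t value like these could be computed (this is not the author's actual code, and the numbers below are simulated stand-ins, not NHANES values), Welch's t statistic between two groups can be written with the standard library alone:

```python
# Hedged sketch (not the author's actual code): Welch's t statistic
# comparing two groups, as used in the text to compare male vs female BP
# within an age band. The data below are simulated stand-ins, NOT NHANES values.
import math
import random
import statistics

random.seed(0)
male = [random.gauss(120, 12) for _ in range(500)]    # simulated systolic BP, males
female = [random.gauss(112, 11) for _ in range(500)]  # simulated systolic BP, females

def welch_t(a, b):
    """t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.fmean(a) - statistics.fmean(b)) / math.sqrt(va / len(a) + vb / len(b))

# A positive t means the first group (males) has the higher sample mean,
# matching the sign convention described in the text.
print(round(welch_t(male, female), 2))
```

The sign convention is the point: passing the male group first makes positive t values mean "males higher", which is how the t values above should be read.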
Using the “t age” values observed for systolic BP, we can see that males show a sharp increase in BP, especially between the age bands of [10–20 years] (t = 33.16) and [20–30 years] (t = 34.71), which slows down in older groups, for example [70–80 years] (t = 4.17). Females, on the other hand, start with a slower increase in systolic BP in the age bands [10–20 years] (t = 23.85) and [20–30 years] (t = 17.47). Their t values then rise in the age band of [40–50 years] (t = 20.9), reaching the point of a higher mean systolic BP in the age band of [60–70 years] (speaking about the sample only).
Using the “t age” values for diastolic BP, I would highlight a few brief observations. First of all, it’s interesting to note the “U” shape of the data over the years for both sexes, increasing until the age band of [40–50 years] and decreasing after this age band.
The purpose of calculating “t male” and “t female” was to observe variations between diastolic and systolic BP in males and females, respectively, across age bands. In the first age band [0–10 years] we observe a “t female” of 104.14 and a “t male” of 103.36, which increase in the age band of [10–20 years] and then fall as age increases. This was not an expected result, since the difference between means increases after 40 years of age. However, we also see a continuous increase in the standard deviation of the variables as the groups get older, especially for systolic BP.
It is important to highlight that this data analysis may contain many biases. For example, smoking, HDL cholesterol, obesity, nutrition behaviour, alcohol consumption and diabetes were not explored across age bands and genders, and all of them might influence BP.
In summary, the differences in BP between males and females are minimal between 0–10 years old. The difference becomes clear between 10–40 years old, with males having greater systolic and diastolic BP. Those differences shrink between 40–70 years old; between 70–80 years old they reverse for systolic BP and remain reduced for diastolic BP. Both genders show an increase in systolic BP as they get older. Both genders also show an increase in diastolic BP until 40–50 years; after this point, diastolic BP decreases.
Machine Learning — How it works. Definitions of AI and ML. What their…

Photo by Alina Grubnyak on Unsplash
You hear the words “Machine Learning” and “Artificial Intelligence” tossed around all the time nowadays. News articles pop up about how Artificial Intelligence (AI) is being used for predictive policing. You hear how companies are rushing to implement Machine Learning (ML) into their products. But what are AI and ML? How does it work? Read on to find out.
What’s the Difference? What are They?
In everyday life, we use ML interchangeably with AI, but they’re different! AI is the whole domain whereas ML is a specific field in AI. In other words, AI is to math as ML is to geometry.
So what are they? AI is the act of getting a machine to do actions that typically require human intelligence. This is a broad definition because it’s the definition for the whole field. AI applications range from chatbots, to composing music, to designing airplane parts!
How does AI do this? One approach is ML. Machine Learning processes give machines data, then have them learn automatically from that data to produce actions. This may seem simple at first, but explicitly programming a machine to classify cats and dogs seems like an impossible task! But don’t worry, it’s possible, and you’ll learn how it works in the next section.
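As a toy illustration of "learning from data" (the numbers here are entirely made up, not from the article), a one-nearest-neighbour rule picks a label from labelled examples instead of hand-written if/else logic:

```python
# Toy illustration (made-up numbers): a 1-nearest-neighbour "model" learns
# the cat/dog rule from labelled examples instead of hand-written if/else logic.
import math

# Each example: ([weight_kg, ear_length_cm], label)
train = [([4, 3], "cat"), ([5, 4], "cat"), ([25, 10], "dog"), ([30, 12], "dog")]

def predict(point):
    # Return the label of the closest training example (Euclidean distance)
    return min(train, key=lambda ex: math.dist(point, ex[0]))[1]

print(predict([4.5, 3.5]), predict([28, 11]))  # -> cat dog
```

Add more labelled examples and the same ten lines get better without anyone editing the rule — that automatic improvement from data is the core idea of ML.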
Top 10 Benefits Of Artificial Intelligence

Did you know that Artificial Intelligence will contribute a whopping $15.7 trillion to the global economy by 2030!? In addition to economic benefits, AI is also responsible for making our lives simpler. This article on the Benefits Of Artificial Intelligence will help you understand how Artificial Intelligence is impacting all domains of our life and ultimately benefiting humankind.
I’ll be discussing the benefits of Artificial Intelligence in the following domains:
Automation Productivity Decision Making Solving Complex Problems Economy Managing Repetitive Tasks Personalization Global Defense Disaster Management Lifestyle
Increased Automation
Artificial Intelligence can be used to automate anything ranging from tasks that involve extreme labor to the process of recruitment. That’s right!
There are any number of AI-based applications that can be used to automate the recruitment process. Such tools help free employees from tedious manual tasks and allow them to focus on complex work like strategizing and decision making.
Increased Automation — Benefits Of Artificial Intelligence — Edureka
An example of this is the conversational AI recruiter MYA. This application focuses on automating tedious parts of the recruitment process such as scheduling screening and sourcing.
Mya is trained using advanced Machine Learning algorithms, and it also uses Natural Language Processing (NLP) to pick up on details that come up in a conversation. Mya is also responsible for creating candidate profiles, performing analytics, and finally shortlisting applicants.
Increased Productivity
Artificial Intelligence has become a necessity in the business world. It is being used to manage highly computational tasks that require maximum effort and time.
Did you know that 64% of businesses depend on AI-based applications for their increased productivity and growth?
increased Productivity — Benefits Of Artificial Intelligence — Edureka
An example of such an application is the Legal Robot. I call it the Harvey Spectre of the virtual world.
This bot uses Machine Learning techniques like Deep Learning and Natural Language Processing to understand and analyze legal documents, find and fix costly legal errors, collaborate with experienced legal professionals, clarify legal terms by implementing an AI-based scoring system and so on. It also allows you to compare your contract with those in the same industry to ensure yours is standard.
Smart Decision Making
One of the most important goals of Artificial Intelligence is to help in making smarter business decisions. Salesforce Einstein which is a comprehensive AI for CRM (Customer Relationship Management), has managed to do that quite effectively.
As Albert Einstein quoted:
“The definition of genius is taking the complex and making it simple.”
Smart Decision Making — Benefits Of Artificial Intelligence — Edureka
Salesforce Einstein is removing the complexity of Artificial Intelligence and enabling organizations to deliver smarter, and more personalized customer experiences. Driven by advanced Machine Learning, Deep Learning, Natural Language Processing, and predictive modeling, Einstein is implemented in large scale businesses for discovering useful insights, forecasting market behavior and making better decisions.
Solve Complex Problems
Throughout the years, AI has progressed from simple Machine Learning algorithms to advanced machine learning concepts such as Deep Learning. This growth in AI has helped companies solve complex issues such as fraud detection, medical diagnosis, weather forecasting and so on.
Solve Complex Problems — Benefits Of Artificial Intelligence — Edureka
Consider the use case of how PayPal uses Artificial Intelligence for fraud detection. Thanks to deep learning, PayPal is now able to identify possible fraudulent activities very precisely.
PayPal processed over $235 billion in payments from four billion transactions by its more than 170 million customers.
Machine learning and deep learning algorithms mine data from the customer’s purchasing history in addition to reviewing patterns of likely fraud stored in its databases and can tell whether a particular transaction is fraudulent or not.
Strengthens Economy
Regardless of whether AI is considered a threat to the world, it is estimated to contribute over $15 trillion to the world economy by the year 2030.
According to a recent report by PwC, the progressive advances in AI will increase the global GDP by up to 14% between now and 2030, the equivalent of an additional $15.7 trillion contribution to the world’s economy.
Strengthens Economy — Benefits Of Artificial Intelligence — Edureka
It is also said that the most significant economic gains from AI will be in China and North America. These two countries will account for almost 70% of the global economic impact. The same report also reveals that the greatest impact of Artificial Intelligence will be in the field of healthcare and robotics.
The report also states that approximately $6.6 trillion of the expected GDP growth will come from productivity gains, especially in the coming years. Major contributors to this growth include the automation of routine tasks and the development of intelligent bots and tools that can perform all human-level tasks.
Presently, most of the tech giants are already in the process of using AI as a solution to laborious tasks. However, companies that are slow to adopt these AI-based solutions will find themselves at a serious competitive disadvantage.
Managing Repetitive Tasks
Performing repetitive tasks can become very monotonous and time-consuming. Using AI for tiresome and routine tasks can help us focus on the most important tasks in our to-do list.
An example of such an AI is the Virtual Financial assistant used by the Bank Of America, called Erica.
Erica implements AI and ML techniques to cater to the bank’s customer service requirements. It does this by creating credit report updates, facilitating bill payments and helping customers with simple transactions.
Erica’s capabilities have recently been expanded to help clients make smarter financial decisions, by providing them with personalized insights.
As of 2019, Erica has surpassed 6 million users and has serviced over 35 million customer service requests.
Personalization
Research from McKinsey found that brands that excel at personalization deliver five to eight times the marketing ROI and boost their sales by more than 10% over companies that don’t personalize. Personalization can be an overwhelming and time-consuming task, but it can be simplified through artificial intelligence. In fact, it’s never been easier to target customers with the right product.
An example of this is the UK based fashion company ‘Thread’ that uses AI to provide personalized clothing recommendations for each customer.
Personalization — Benefits Of Artificial Intelligence — Edureka
Most customers would love a personal stylist, especially one that comes at no charge. But staffing enough stylists for 650,000 customers would be expensive. Instead, UK-based fashion company Thread uses AI to provide personalized clothing recommendations for each of its customers. Customers take style quizzes to provide data about their personal style.
Each week, customers receive personalized recommendations that they can vote up or down. Thread uses a Machine Learning algorithm called Thimble that draws on customer data to find patterns and understand each buyer's likes. It then suggests clothes based on the customer's taste.
Global Defense
The most advanced robots in the world are being built with global defense applications in mind. This is no surprise, since cutting-edge technology often gets implemented in military applications first. Though most of these applications don't see the light of day, one example that we know of is the AnBot.
AnBot, an AI-based armed police robot, was designed by China's National Defence University. Capable of reaching a top speed of 11 mph, the machine is intended to patrol areas and, in the case of danger, can deploy an "electrically charged riot control tool."
The intelligent machine stands 1.6m tall and can spot individuals with criminal records. The AnBot has contributed to enhancing security by keeping track of suspicious activity happening in its vicinity.
Disaster Management
For most of us, precise weather forecasting just makes vacation planning easier, but even the smallest advancement in predicting the weather has a major impact on markets.
Accurate weather forecasting allows farmers to make critical decisions about planting and harvesting. It makes shipping easier and safer. And most importantly it can be used to predict natural disasters that impact the lives of millions.
After years of research, IBM partnered with the Weather Company, gaining access to the company's predictive models and enormous stores of weather data that it could feed into its AI platform Watson to improve predictions.
In 2016 the Weather Company claimed their models used more than 100 terabytes of third-party data daily.
The product of this partnership is the AI-based IBM Deep Thunder. The system provides highly customized information for business clients by using hyper-local forecasts — at a 0.2 to 1.2-mile resolution. This information is useful for transportation companies, utility companies, and even retailers.
Enhances Lifestyle
In the recent past, Artificial Intelligence has evolved from a science-fiction movie plot into an essential part of our everyday lives. Since the emergence of AI in the 1950s, we have seen exponential growth in its potential. We use AI-based virtual assistants such as Siri, Cortana, and Alexa to interact with our phones and other devices, and AI is used to predict deadly diseases such as ALS and leukemia.
Amazon monitors our browsing habits and then serves up products it thinks we’d like to buy, and even Google decides what results to give us based on our search activity.
Despite being considered a threat, AI continues to help us in many ways. As Eliezer Yudkowsky, co-founder and research fellow at the Machine Intelligence Research Institute, put it:
"By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."
With this note, I’d like to conclude by asking you, how do you think Artificial Intelligence will help us create a better world?
So with this, we come to the end of this Benefits Of Artificial Intelligence blog. Stay tuned for more blogs on the most trending technologies.
If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, DevOps, Ethical Hacking, then you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of Deep Learning.

[Source: https://medium.com/edureka/benefits-of-artificial-intelligence-dc2d3e64ba80 — Sahiti Kappagantula, October 6, 2020]
The things that could seriously get in the way of your well-being, according to the latest statistics from the National Safety Council.
When lightning struck during a monsoon storm near our home on July 28, 2016. We went out to chase it. My son, Marius Britt, got this photo. Notice the total absence of sharks.
Standing in a thunderstorm with utter glee, I love being pummeled by raindrops the size of crickets and daring God to strike me. I live among rattlesnakes in the Arizona desert, and respect them greatly, but I don’t really fear them so long as I’ve got a good body’s length between us (except that one time when one trapped me in the garden at the side of the house and I had to use my cell phone to call my wife inside the house so she could call the fire department, but that’s another story). Flying doesn’t scare me (as long as I’m not the pilot, alone and lost over the Northern California wilderness in a developing thunderstorm with a faulty compass, but that’s also another story).
Anyway, here’s what really scares me: I’m terrified being eaten by a shark. So much so that dangling my feet even in a man-made lake gives me the heebie-jeebies. I can muster no happiness at all swimming in the ocean. The fear is totally irrational, I know.
OK, Finally, the News …
Blue sharks are supposedly not very aggressive. I get zero comfort in that scientific viewpoint. Photo: NOAA/NEFSC
In the pursuit of happiness, fear and worry can be real downers (death even more so). So to put your mind at ease about everything from bee stings to asteroid strikes, here are the leading causes of death, announced yesterday by the National Safety Council. These are the things that could seriously get in the way of your well-being.
Top 10 Lifetime Odds of Dying By …
Heart Disease: 1 in 6
Cancer: 1 in 7
Chronic lower respiratory disease: 1 in 27
Suicide: 1 in 88
Opioid overdose: 1 in 96
Motor vehicle crash: 1 in 103
Fall: 1 in 114
Gun assault: 1 in 285
Pedestrian incident: 1 in 556
Choices, choices. Photo: Pixabay/SplitShire
The data are for 2017, the most recent year compiled and analyzed. They cover “selected categories,” not all possible ways to croak. Here is the rest of the published list — more stuff to worry about, though maybe a little less so (with a few comments by yours truly):
Motorcyclist: 1 in 858 (Note #1: these are the odds if you are one, not the odds of being killed by one)
Drowning: 1 in 1,117 (this is, like, right behind sharks on my fear scale; hmm…)
Fire or smoke: 1 in 1,474
Choking on food: 1 in 2,696 (who knew? practice your Heimlich!)
Bicyclist: 1 in 4,047 (see Note #1 above)
Accidental gun discharge: 1 in 8,527
Sunstroke: 1 in 8,912
Electrocution, radiation, extreme temperatures and pressure: 1 in 15,638 (one has to use one's imagination on this one)
Sharp objects: 1 in 28,000 (self-inflicted? probably)
Cataclysmic storm: 1 in 31,394 (if you're the only person killed by a given tornado, are you part of this stat?)
Hot surfaces and substances: 1 in 46,045
Hornet, wasp and bee stings: 1 in 46,562
Dog attack: 1 in 115,111
Plane crash: 1 in 188,364
Lightning: 1 in 218,106
Railway passenger: 1 in 243,765
Lowering the Odds
Of course, if you never ride a motorcycle, bicycle or train, you can scratch those off your kick-the-bucket list.
And to your friends who are terrified to fly, you can offer a different angle on that tired but true automobile axiom. Tell them: “You’re more likely to die in a dog attack!”
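The dog-vs.-plane quip checks out if you turn the "1 in N" lifetime odds into probabilities. Here's a quick back-of-the-envelope sketch — my own illustration, not part of the NSC report, using only the figures quoted above (plus the shark estimate cited at the end of this article):

```python
# Lifetime "1 in N" odds quoted in this article (NSC, 2017 data),
# plus the shark figure from the expert estimate cited at the end.
odds = {
    "Dog attack": 115_111,
    "Plane crash": 188_364,
    "Lightning": 218_106,
    "Shark attack": 3_700_000,  # not an NSC figure
}

for cause, n in sorted(odds.items(), key=lambda kv: kv[1]):
    # Smaller N means higher risk; express each as a lifetime probability.
    print(f"{cause}: 1 in {n:,} (~{100 / n:.5f}%)")
```

By these numbers, dying in a dog attack is roughly 1.6 times as likely as dying in a plane crash over a lifetime — and a shark attack is rarer than either by more than an order of magnitude.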
Since this article is part of a science-reporting project, I have to add: Eat well, don't drink too much, sleep better, chill out and get lots of exercise. There, you just lowered your odds of dying from some of the Top 10 causes above.
Fear is Healthy, But …
Meanwhile, the NSC has some advice around knowing the numbers and using them wisely:
“Fear is natural and healthy,” the agency says. “It can help us respond to danger more quickly or avoid a dangerous situation altogether.
“It can also cause us to worry about the wrong things, especially when it comes to estimating our level of risk. If we overestimate our risk in one area, it can lead to anxiety and interfere with carrying out our normal daily routine. Ironically, it also leads us to underestimate real risks that can injure or kill us.”
Image: Pixabay/Bibbi228
Sharks & Asteroids
Odds of death by asteroid, by the way, weren't on the NSC list. But two scientists, working separately, have put that figure at 1 in 700,000 or 1 in 1.6 million. Take your pick, then worry about something else.
Finally, the odds of dying in a shark attack are 1 in 3.7 million, according to one expert estimate. I find zero comfort in that, but I’m happy I live in the desert.
You can see the full NSC report here.

[Source: https://medium.com/luminate/odds-of-dying-what-you-should-really-worry-about-cc761901565b — Robert Roy Britt, January 23, 2019]
It’s media. Start treating it that way.
Social Media is no longer just Social, it’s the most important and underutilized Media Channel available to marketers.
It’s 2017, and if you haven’t been playing really close attention over the last two years, digital and more importantly social media have dramatically shifted. You’ve likely read some headlines, you seen digital spending reports rising, but most marketers have largely ignored them. As a marketer, you can no longer look at Social Media as this siloed off marketing step-child that you pay little or no attention to, (except on Wednesdays and every other weekend) each month. You are likely more involved and infatuated with your other favorite marketing channels. You’ve had them longer and you are more invested in them. They’re like family. Entrepreneur and tech-soothsayer, Gary Vaynerchuk articulated it in a powerful way late last year:
“Social Media is just a slang term for the current state of the internet.”
This is 100% spot on. In it’s earliest beginnings, Social was a handful of networks, each with their own audiences and capabilities — and mostly was considered “for the kids”. That may have been true 5 or 10 years ago. But in 2017, the networks of Facebook (including Instagram, Facebook Messenger and WhatsApp), Snapchat, Pinterest and YouTube, are THE NETWORKS where (billions) of consumers are spending their time. Furthermore, they are quickly evolving out of being merely social channels, but rather full-fledged media companies.
If you don’t buy into Gary V’s above thesis on how to look at social in today’s digital landscape, then consider the following: In March 2017, Snapchat went public and in its first week — it’s currently valued higher than Delta, Target, Hershey, Viacom, CBS and Hilton (just to name a few).
Chart from Statista
Not too bullish on Snapchat? Ok, then consider these stats from comScore, eMarketer & Nielsen.
66% of all digital time is spent on a mobile device.
89% of all mobile device time is spent on apps.
49% of mobile app time is on Social Media and Messaging Apps.
“Mobile first” doesn’t just pertain to your website anymore, it’s about evolved consumer behavior patterns towards mobile and social.
Do these consumer usage stats have your attention? They should rock you to your marketing core. Social Media has gone beyond being a handful of channels that you can activate your marketing on, they are now the channels you must be activating on.
Still, so many seasoned marketers continue to look at it this way — yeah that’s cute, no thanks. It’s in the same vein of how they laughed off the importance of Facebook a decade ago and the same way channels like Snapchat are getting laughed off today.
Over 150M user per day are spending minutes and hours on Snapchat alone. That’s called scale and that’s why it matters. You can’t disregard or disrespect a platform like Snapchat just because you don’t understand it or because it’s not the same feed format that you’ve grown accustomed to with Facebook, Twitter, LinkedIn, etc.
Mega hotel chains laughed off AirBnB.
Major metropolitan taxi medallion holders laughed off UBER.
Barnes & Noble and Wal-Mart laughed off Amazon.
P&G-Gillette laughed off Dollar Shave Club.
…Billions of dollars of lost revenue later, here we are. Digital, social, peer-to-peer messaging apps, and the gig-economy continue to win and they will take down more formerly assumed un-killable giants as they mature.
You have to pay the piper.
In 2014, Marketers freaked out when Facebook changed their algorithm and they lost their page’s organic reach. FREAKED. THE. F. OUT. What brands and marketers had been getting for free, they now needed to pay for — and still not achieve the same results. It sucked, sure. But, what marketers took for granted, was that this change gave birth to the single most powerful branding and advertising vehicle in history.
Print ads were king for over 150 years until Radio came along in 1920 and disrupted a two-century old medium . Radio ads were king until TV advertising hit scale in the 1950’s. TV was king, and will be dethroned in 2017 by digital ad spending for the first time in history. Digital moves fast.
The consumer internet is really only about 20 years old. Smartphones are only 10 years old. Let that sink in, and consider the dramatic shifts in the marketing/advertising landscape over the last 20 years compared to the previous 300.
It took 38-years before 50 million people gained access to radios
It took television 13-years to get to 50 million
It took Instagram a year and a half
It took Flickr two years to reach 100 million uploaded pictures
It took Instagram eight months
Again, Digital isn’t just fast it’s lightning fast, and the combination of smartphones and social have taken over how we use the internet itself.
You likely (or better) understand the relevance of digital marketing in today’s landscape. Now you need to wrap your head around the idea that social can, and will overtake Paid Search, Display, Adwords, etc. in the very near future. The internet itself, as we know it, will give way to the IoT, Voice-Search, and AR/VR in the next 15 years…that’s a seperate post and rabbit hole to go down all together.
Here’s the rub… You cannot be nostalgic or romanticize the advertising and marketing methods that worked for you in the past — 50 years, 15 years or 5 years ago. It is irrelevant in today’s society and consumer behaviors.
Social media is already and has already become the next thing. Marketers as a whole just haven’t been capitalizing on it and giving it the respect it deserves. The cost to entry is as low as it will ever be right now because the mega-brands haven’t completely priced out the competition yet. But the clock is ticking, and if you have been reading the tea leaves it’s changing. As an example:
In the U.S., the Super Bowl is the most valuable TV ad buy, year-in and year-out based on exposure and viewership. Yet in 2017, decade long major brands players like Kraft-Heinz, Frito-Lay Doritos decided to drop out. So, why do you expect they bailed— and where do you expect those tens of millions of dollars for production and ad-buys are going this year? Here’s a hint, it’s not direct mail.
Once the big boy brands decide to go all-in on social advertising because of the micro-targeting and delivery vehicles that it can provide compared to traditional channels, the cost of entry will skyrocket. SMB’s are going to feel the strain of those tens of millions of dollars and the CPM to reach your target is going to go up by 3–5x of what it is today by 2020. The supply and demand curve will shift as more businesses continue investing larger spends into digital.
“Anti Social” social (or Peer-to-Peer Messaging) is the next iteration of digital communication.
Once marketers start positioning themselves around the current state of what consumers are actually doing, they will start making an impact where it matters. Instead of scrambling to figure it out after something they initially dismissed actually hit scale and started to matter. Case in point, Messaging Apps.
The next big thing that marketers need to pay attention to is the mass usage of Messaging Apps. “Kids aren’t on Facebook anymore”, said the out of touch digital marketer. Yes, they are. They’re just using different iterations of the platform in the form of peer-to-peer messaging apps like Facebook Messenger, What’sApp, and a slew of others globally.
A projected 1.2 Billion users will be on Facebook owned What’sApp in 2017.
Facebook Messenger has another 1B+ Active Users. This is “where the kids are”.
Most popular mobile messaging apps worldwide as of January 2017, monthly active users (in millions) from Statista
For marketers, Social Media should no longer be a “Should we or shouldn’t we” conversation. If you’re even having that conversation, “What the ROI of shifting dollars from your PPC spend to social is”, or even the “We want to know how to do social” — then you are still a million miles away from the bulls-eye.
What you need to do, is reconfigure your entire understanding of digital marketing.
Stop focusing on immediate ROI and 12-month marketing calendars that tether you to traditional media buying cycles.
Start thinking about the long game: Brand Value and Customer Retention. | https://medium.com/on-advertising/social-media-is-not-social-its-media-start-treating-it-that-way-6cda5c39881a | ['Chad Anderson'] | 2017-03-13 16:03:06.719000+00:00 | ['Social Media Marketing', 'Facebook', 'Social Media', 'Digital Marketing', 'Marketing'] | Title Social Media “Social”Content Social Media “Social” It’s medium Start treating way Social Media longer Social it’s important underutilized Media Channel available marketer It’s 2017 haven’t playing really close attention last two year digital importantly social medium dramatically shifted You’ve likely read headline seen digital spending report rising marketer largely ignored marketer longer look Social Media siloed marketing stepchild pay little attention except Wednesdays every weekend month likely involved infatuated favorite marketing channel You’ve longer invested They’re like family Entrepreneur techsoothsayer Gary Vaynerchuk articulated powerful way late last year “Social Media slang term current state internet” 100 spot it’s earliest beginning Social handful network audience capability — mostly considered “for kids” may true 5 10 year ago 2017 network Facebook including Instagram Facebook Messenger WhatsApp Snapchat Pinterest YouTube NETWORKS billion consumer spending time Furthermore quickly evolving merely social channel rather fullfledged medium company don’t buy Gary V’s thesis look social today’s digital landscape consider following March 2017 Snapchat went public first week — it’s currently valued higher Delta Target Hershey Viacom CBS Hilton name Chart Statista bullish Snapchat Ok consider stats comScore eMarketer Nielsen 66 digital time spent mobile device 89 mobile device time spent apps 49 mobile app time Social Media Messaging Apps “Mobile first” doesn’t pertain website anymore it’s evolved consumer behavior pattern towards mobile social consumer usage stats attention rock marketing core Social Media gone beyond 
handful channel activate marketing channel must activating Still many seasoned marketer continue look way — yeah that’s cute thanks It’s vein laughed importance Facebook decade ago way channel like Snapchat getting laughed today 150M user per day spending minute hour Snapchat alone That’s called scale that’s matter can’t disregard disrespect platform like Snapchat don’t understand it’s feed format you’ve grown accustomed Facebook Twitter LinkedIn etc Mega hotel chain laughed AirBnB Major metropolitan taxi medallion holder laughed UBER Barnes Noble WalMart laughed Amazon PGGillette laughed Dollar Shave Club …Billions dollar lost revenue later Digital social peertopeer messaging apps gigeconomy continue win take formerly assumed unkillable giant mature pay piper 2014 Marketers freaked Facebook changed algorithm lost page’s organic reach FREAKED F brand marketer getting free needed pay — still achieve result sucked sure marketer took granted change gave birth single powerful branding advertising vehicle history Print ad king 150 year Radio came along 1920 disrupted twocentury old medium Radio ad king TV advertising hit scale 1950’s TV king dethroned 2017 digital ad spending first time history Digital move fast consumer internet really 20 year old Smartphones 10 year old Let sink consider dramatic shift marketingadvertising landscape last 20 year compared previous 300 took 38years 50 million people gained access radio took television 13years get 50 million took Instagram year half took Flickr two year reach 100 million uploaded picture took Instagram eight month Digital isn’t fast it’s lightning fast combination smartphones social taken use internet likely better understand relevance digital marketing today’s landscape need wrap head around idea social overtake Paid Search Display Adwords etc near future internet know give way IoT VoiceSearch ARVR next 15 years…that’s seperate post rabbit hole go together Here’s rub… cannot nostalgic romanticize advertising marketing 
method worked past — 50 year 15 year 5 year ago irrelevant today’s society consumer behavior Social medium already already become next thing Marketers whole haven’t capitalizing giving respect deserves cost entry low ever right megabrands haven’t completely priced competition yet clock ticking reading tea leaf it’s changing example US Super Bowl valuable TV ad buy yearin yearout based exposure viewership Yet 2017 decade long major brand player like KraftHeinz FritoLay Doritos decided drop expect bailed— expect ten million dollar production adbuys going year Here’s hint it’s direct mail big boy brand decide go allin social advertising microtargeting delivery vehicle provide compared traditional channel cost entry skyrocket SMB’s going feel strain ten million dollar CPM reach target going go 3–5x today 2020 supply demand curve shift business continue investing larger spends digital “Anti Social” social PeertoPeer Messaging next iteration digital communication marketer start positioning around current state consumer actually start making impact matter Instead scrambling figure something initially dismissed actually hit scale started matter Case point Messaging Apps next big thing marketer need pay attention mass usage Messaging Apps “Kids aren’t Facebook anymore” said touch digital marketer Yes They’re using different iteration platform form peertopeer messaging apps like Facebook Messenger What’sApp slew others globally projected 12 Billion user Facebook owned What’sApp 2017 Facebook Messenger another 1B Active Users “where kid are” popular mobile messaging apps worldwide January 2017 monthly active user million Statista marketer Social Media longer “Should shouldn’t we” conversation you’re even conversation “What ROI shifting dollar PPC spend social is” even “We want know social” — still million mile away bullseye need reconfigure entire understanding digital marketing Stop focusing immediate ROI 12month marketing calendar tether traditional medium buying cycle 
Start thinking long game Brand Value Customer RetentionTags Social Media Marketing Facebook Social Media Digital Marketing Marketing |
1,158 | Google Analytics is the Only SEO Analytics Tool You Need: Here’s How to Use It (Part 1 of 2)
Also, How to Prove the True Value of SEO for Your Boss, Client, or Business
SEO is often a Catch-22 for small and medium-sized businesses.
One common question we get from our clients perfectly illustrates why:
“We hear buzzwords like SEO a lot, but how do we know if investing time and money into SEO is worth the investment? How do I know how much SEO adds real value and profit to my business?”
Interest over time on Google Trends for “SEO” in the United States, 2004 — present. Image via Google Trends.
One option to test the value of SEO for your business is to hire a full-time digital marketer, digital analyst, or SEO consultant. Of course, they will have to prove their business value to your company using analytics.
Or maybe you want to test the value of SEO yourself using the many SEO tools out there on the market. Backlinko recently created a comprehensive review of 189 SEO tools that are currently on the market. But most of them are costly or at best freemium tools that lock away most of the valuable features for the paid tier.
And herein lies the chicken-and-egg problem facing many small and medium-sized businesses:
Businesses want to investigate if SEO will really have positive return on their investment of time and money. But they often must pay a big sum upfront for a digital marketer or consultant to analyze whether SEO is the right marketing channel for them in the first place.
But it doesn’t have to be that way — this is where Google Analytics comes in!
As you probably know, Google Analytics is a free digital analytics tool, and by some measures, it is already used by more than half of all websites on the internet. And for most SMBs, it’s actually the only SEO analytics tool you need to evaluate the value of SEO for your business.
So in this week’s post (Part 1), we’ll first show you how to find your organic search traffic metrics in Google Analytics. Then we’ll walk you through how to use Google Analytics to measure the value of SEO for your manager or client.
Part 1 will cover:
How to find your organic search traffic in Google Analytics reports
How to measure the value of your organic search traffic with Google Analytics
Which SEO-related metrics and reports to track on Google Analytics
How to build an SEO dashboard to see your key SEO metrics at a glance
Let’s begin!
How to find your organic search traffic in Google Analytics reports
Just so we’re on the same page, let’s start with some basic definitions.
SEO, or search engine optimization, is the technique of growing the amount of high-quality organic search traffic to a website via search engines like Google.
By “organic search traffic,” we mean website visitors who search a keyword and click on a search engine result, rather than a pay-per-click (PPC) ad. In other words, this is free traffic from search engines, rather than paid traffic from digital ads.
Now that we know what SEO is, let’s start with how to find and isolate your organic traffic on Google Analytics. There are at least two main ways to look at how your organic traffic is performing.
Option 1: Drill Down in the Channels Report
The first way is to simply go to your Channels report (Acquisition >> All Traffic >> Channels). This will show you how your different channel groupings are performing in terms of traffic, engagement and conversions. By channels, we mean different ways that visitors are getting to your website (e.g. traffic from Paid Search, Referral, Social, etc).
Click on Organic Search to drill down on your organic search traffic (i.e. see metrics for only your visitors from organic search).
This will then display your Organic Keywords report, which shows your top-performing keywords sorted by the metrics of your choosing (Acquisition >> All Traffic >> Channels >> Keyword).
For example, you may want to sort your organic keywords by bounce rate (the percent of users who get to your site and immediately leave without further actions). This will let you see which keywords drive the most highly engaged or high-quality traffic.
Segment by Search Engine: You can also segment organic search traffic by source if you want to look at specific search engines (i.e. how many visitors are coming from Google, Yahoo, Bing, etc). Go to Acquisition >> All Traffic >> Channels >> Source (tab).
Segment by Landing Page: Lastly, you may want to identify the landing pages that are driving the most organic search traffic to your site. By landing page, we mean the first web page that a visitor sees when they visit your website.
To see the highest-traffic landing pages for your organic search traffic, click the Landing Page primary dimension in the organic keywords report (Acquisition >> All Traffic >> Channels >> Landing Page).
Option 2: Add “Organic Traffic” as a Segment in Any Report
However, perhaps you want to further analyze your organic traffic in a different report. If that’s the case, you can add the “Organic Traffic” default segment at the top of the report. This will allow you to dive deeper on your Audience, Behavior, and Conversion reports for your Organic Traffic segment.
Now that you know how to find SEO metrics in Google Analytics, let’s talk about which specific metrics to analyze to quantify the value of your organic traffic.
How to measure the value of your organic traffic with Google Analytics
So why do you want to measure the impact of your SEO?
Because it’s typically undervalued by CEOs and SMB owners. After all, quantifying the value of SEO and organic traffic for your business is a unique challenge.
Because organic search traffic (like social traffic) often sits at the top of the marketing funnel, many if not most consumers these days do not convert to paid customers the first time they encounter your website.
As I mentioned in last week’s post, you may need an email marketing or remarketing campaign to make the conversion. As such, organic search traffic often doesn’t get the credit it deserves for bringing in revenue to the business, and is chronically undervalued by management.
That’s why I use the “Multi-Channel Funnels” reports in Google Analytics to measure the value of SEO. Specifically, I recommend using the “Assisted Conversions” report (Conversions >> Multi-Channel Funnels >> Assisted Conversions).
This is possibly the best report for investigating whether Google Analytics is underestimating the value of organic search traffic (or any channel) under last click attribution (a digital analytics model that gives credit for a conversion to whichever channel the “last click” came from).
The report focuses on the “Assisted Conversions” metric, which represents conversions in which a channel appeared on the conversion path but was not the final conversion interaction.
As with players in basketball, the value of a channel in digital marketing lies not just in the points scored directly, but also in the number of assists.
Think about an assist in basketball, where a player may not be the person who actually puts the ball through the hoop, but may assist their teammate. Because assists provide important value to the basketball team and make “scoring” (or “conversions”) possible, the number of assists is an important metric to track to understand the value of a player for a team in both basketball and digital marketing.
Therefore, the Assisted Conversion report is your best bet for understanding the true impact of your SEO. You might be surprised to find that the measurable value of your SEO efforts is double what you thought it was.
Step 1: Go to your Assisted Conversions report
Click Conversions >> Multi-Channel Funnels >> Assisted Conversions. There you’ll see the number of assisted conversions and the value of these conversions for all your channel groupings.
As you can see in this screenshot, Display has the highest Assisted / Last Click Conversions ratio for the Google Merchandise Store, meaning Display’s impact on the company’s number of conversions is the most understated.
Step 2: Look at your Assisted Conversions for organic search traffic
Click “Organic Search” under “MCF channel grouping.” Here you’ll see the assisted conversions and assisted conversion value for your organic search traffic, broken down by source.
Step 3: Compare your Assisted Conversion Value with Direct Conversion Value for Each Source (Search Engine)
For the business in this screenshot, most of the assisted conversions for organic search are coming from Google. If you compare this company’s Assisted Conversion Value for Google organic search ($2912.08) with its Direct Conversion Value ($2717.08), you’ll see that assisted conversions are worth even more than direct conversions, so the real value of Google organic search for this business (assisted plus direct) is more than double its Direct Conversion Value alone.
In other words, an SEO analyst can show their manager or client that the value of SEO is double what they originally thought it was based on direct conversions alone!
Now that you know how to demonstrate the value of SEO with Google Analytics, let’s talk about which metrics to monitor to understand how organic search is performing for your business.
Next Steps
Today, we covered: (1) how to isolate your organic search traffic in Google Analytics reports, and (2) how to measure the value of your organic traffic with Google Analytics.
As you can tell, learning how to analyze SEO with Google Analytics is not an easy task. It takes a serious amount of investment in time and learning.
That’s why at Humanlytics, we’ve been helping a few dozen businesses optimize their digital channels, including their SEO and organic search traffic. Many of these businesses are led by very smart and technical cofounders. But even these entrepreneurs who are trained in digital marketing and data analytics often don’t have the bandwidth or resources to distill actionable insights from their SEO data.
This is the reason the next feature we’re building in our digital analytics platform is an AI-based tool to recommend the right digital channels to focus on. This AI tool will tell you whether SEO is the right channel for your business based on your Google Analytics data, so you won’t have to waste any money on the wrong marketing activities.
Our AI-based marketing analytics tool will tell you whether SEO — or any channel — is right for your business. PC: The Daily Dot
In other words, the tool automates everything we’ve explained in this tutorial so you can spend less time learning this stuff through trial-and-error, and more time doing what you do best — running your business.
If you’re interested in beta testing this feature for free (or need help setting up your conversion goals), sign up with the form at the end of the post, or shoot me an email at [email protected].
Tune in next week for Part 2 of tracking SEO with Google Analytics, where we’ll walk you through how to choose the right SEO metrics for your SEO dashboard with Google Analytics.
Specifically we’ll cover: | https://medium.com/analytics-for-humans/google-analytics-is-the-only-seo-analytics-tool-you-need-heres-how-to-use-it-part-1-of-2-451e66ff102 | ['Patrick Han'] | 2018-06-08 19:41:55.873000+00:00 | ['SEO', 'Google Analytics', 'Digital Marketing', 'Startup', 'AI']
1,159 | Java Concurrency: Locks | A lock is a thread synchronization mechanism like synchronized blocks, except locks can be more sophisticated than Java’s synchronized blocks. Locks are created using synchronized blocks, so we cannot get rid of the synchronized keyword entirely.
From Java 5 the package java.util.concurrent.locks contains several lock implementations, so you may not have to implement your own locks. But you will still need to know how to use them, and it can still be useful to know the theory behind their implementation.
Lock vs Synchronized Block
There are a few differences between using a synchronized block and using the Lock API:

A synchronized block is fully contained within a method — with the Lock API we can have the lock() and unlock() operations in separate methods.

A synchronized block doesn’t support fairness; any thread can acquire the lock once it is released, and no preference can be specified. We can achieve fairness within the Lock APIs by specifying the fairness property, which makes sure that the longest-waiting thread is given access to the lock.

A thread gets blocked if it can’t get access to the synchronized block. The Lock API provides the tryLock() method: the thread acquires the lock only if it’s available and not held by any other thread, which reduces the blocking time of threads waiting for the lock.

A thread that is in the “waiting” state to acquire access to a synchronized block can’t be interrupted. The Lock API provides the lockInterruptibly() method, which can be used to interrupt the thread while it’s waiting for the lock.
Lock Reentrance
All implicit monitors implement reentrant characteristics. Reentrant means that locks are bound to the current thread. A thread can safely acquire the same lock multiple times without running into deadlocks (e.g. a synchronized method calls another synchronized method on the same object).
When the thread first enters the lock, the hold count is set to one. Before unlocking, the thread can re-enter the lock, and each time the hold count is incremented by one. For every unlock request, the hold count is decremented by one, and when the hold count reaches 0, the resource is unlocked.
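The hold-count behavior described above can be observed directly on a ReentrantLock, which exposes getHoldCount(). A minimal single-threaded sketch (the class and method names are ours, not from the original article):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentranceDemo {
    // Returns the hold count observed after the same thread locks twice.
    static int lockTwice() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                      // hold count: 1
        try {
            lock.lock();                  // same thread re-enters; hold count: 2
            try {
                return lock.getHoldCount();
            } finally {
                lock.unlock();            // hold count back to 1
            }
        } finally {
            lock.unlock();                // hold count 0, lock released
        }
    }

    public static void main(String[] args) {
        System.out.println("Hold count while doubly locked: " + lockTwice());
    }
}
```

Note that each lock() call must be balanced by an unlock(); the lock is only released to other threads once the hold count reaches zero.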
Lock Fairness
Java’s synchronized blocks make no guarantees about the sequence in which threads trying to enter them are granted access. Therefore, if many threads are constantly competing for access to the same synchronized block, there is a risk that one or more of the threads are never granted access — that access is always granted to other threads. This is called starvation. To avoid this a Lock should be fair.
Reentrant Locks also offer a fairness parameter, by which the lock would abide by the order of the lock request i.e. after a thread unlocks the resource, the lock would go to the thread which has been waiting for the longest time. This fairness mode is set up by passing true to the constructor of the lock.
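As a quick sketch of the constructor parameter mentioned above, isFair() reports the mode a ReentrantLock was created with (the class name here is ours):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // Returns { isFair of a default lock, isFair of a fair lock }.
    static boolean[] fairnessFlags() {
        ReentrantLock unfair = new ReentrantLock();      // default: non-fair (higher throughput)
        ReentrantLock fair = new ReentrantLock(true);    // fair: longest-waiting thread wins
        return new boolean[]{unfair.isFair(), fair.isFair()};
    }

    public static void main(String[] args) {
        boolean[] flags = fairnessFlags();
        System.out.println("default isFair = " + flags[0] + ", fair isFair = " + flags[1]);
    }
}
```

Fair mode trades some throughput for the guarantee that no waiting thread starves.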
Lock Methods
Let’s take a look at the methods in the Lock interface:
lock() — call to the lock() method increments the hold count by 1 and gives the lock to the thread if the shared resource is initially free.

unlock() — call to the unlock() method decrements the hold count by 1. When this count reaches zero, the resource is released.

tryLock() — if the resource is not held by any other thread, the call to tryLock() returns true and the hold count is incremented by one. If the resource is not free, the method returns false and the thread exits without being blocked.

tryLock(long timeout, TimeUnit unit) — the thread waits for the time period defined by the method’s arguments to acquire the lock on the resource before exiting.

lockInterruptibly() — acquires the lock if the resource is free, while allowing the thread to be interrupted while it waits. If the current thread is waiting for the lock and another thread interrupts it, the wait ends and the method throws InterruptedException instead of acquiring the lock.
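The non-blocking behavior of tryLock() can be demonstrated by holding the lock on one thread while a second thread attempts to acquire it. A sketch of that setup (the class name is ours):

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    // Holds the lock on the calling thread, then checks that tryLock()
    // returns false from a second thread without blocking it.
    static boolean tryFromOtherThread() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();
        try {
            final boolean[] acquired = {true};
            Thread t = new Thread(() -> {
                acquired[0] = lock.tryLock();   // lock is busy -> false, no waiting
                if (acquired[0]) {
                    lock.unlock();
                }
            });
            t.start();
            t.join();
            return acquired[0];
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("tryLock() while busy returned: " + tryFromOtherThread());
    }
}
```

The second thread returns immediately instead of parking, which is exactly the reduced blocking time the list above describes.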
Lock Implementations
Multiple lock implementations are available in the standard JDK:
ReentrantLock
ReentrantReadWriteLock
StampedLock
ReentrantLock
The class ReentrantLock is a mutual exclusion lock with the same basic behavior as the implicit monitors accessed via the synchronized keyword but with extended capabilities. As the name suggests this lock implements reentrant characteristics just as implicit monitors.
A lock is acquired via lock() and released via unlock() . It's important to wrap your code into a try/finally block to ensure unlocking in case of exceptions. This method is thread-safe just like the synchronized counterpart. If another thread has already acquired the lock subsequent calls to lock() pause the current thread until the lock has been unlocked. Only one thread can hold the lock at any given time.
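The lock()/unlock() pattern with try/finally can be sketched with a shared counter incremented from several threads (the class and method names are ours, not from the original article):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class CounterDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int count = 0;

    static int incrementConcurrently(int threads, int perThread) throws InterruptedException {
        count = 0;
        ExecutorService executor = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            executor.submit(() -> {
                for (int i = 0; i < perThread; i++) {
                    lock.lock();              // blocks until the lock is free
                    try {
                        count++;              // critical section
                    } finally {
                        lock.unlock();        // always release, even on exceptions
                    }
                }
            });
        }
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("count = " + incrementConcurrently(4, 1000));
    }
}
```

Without the lock, the unsynchronized count++ would lose updates; with it, the final count is always threads × perThread.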
ReentrantReadWriteLock
The ReentrantReadWriteLock class implements the ReadWriteLock interface. The ReadWriteLock interface specifies another type of lock, maintaining a pair of locks for read and write access.
The idea behind read-write locks is that it's usually safe to read mutable variables concurrently as long as nobody is writing to this variable. So the read-lock can be held simultaneously by multiple threads as long as no threads hold the write-lock. This can improve performance and throughput in the case reads are more frequent than writes.
Let’s see rules for acquiring the ReadLock or WriteLock by a thread:
Read Lock — if no thread has acquired or requested the write lock, multiple threads can acquire the read lock.

Write Lock — if no threads are reading or writing, only one thread can acquire the write lock.
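The original code sample is not included in this extract, so here is a reconstruction of the behavior described below: a writer holds the write-lock for one second before storing a value in a shared map, and two readers then share the read-lock (class and key names are ours):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockDemo {
    static String runDemo() throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        Map<String, String> map = new HashMap<>();
        ReadWriteLock lock = new ReentrantReadWriteLock();

        // Writer: holds the write-lock for one second before storing a value.
        executor.submit(() -> {
            lock.writeLock().lock();
            try {
                TimeUnit.SECONDS.sleep(1);
                map.put("foo", "bar");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.writeLock().unlock();
            }
        });

        // Two readers: they block until the write-lock is released, then
        // hold the read-lock simultaneously while printing the entry.
        Runnable readTask = () -> {
            lock.readLock().lock();
            try {
                System.out.println(map.get("foo"));
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.readLock().unlock();
            }
        };
        executor.submit(readTask);
        executor.submit(readTask);

        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
        return map.get("foo");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("final value: " + runDemo());
    }
}
```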
The above example first acquires a write-lock in order to put a new value to the map after sleeping for one second. Before this task has finished two other tasks are being submitted trying to read the entry from the map and sleep for one second:
When you execute this code sample you’ll notice that both read tasks have to wait the whole second until the writing task has finished. After the write lock has been released both read tasks are executed in parallel and print the result simultaneously to the console. They don’t have to wait for each other to finish because read-locks can safely be acquired concurrently as long as no write-lock is held by another thread.
StampedLock
Java 8 ships with a new kind of lock called StampedLock which also supports read and write locks just like in the example above. In contrast to ReadWriteLock the locking methods of a StampedLock return a stamp represented by a long value. You can use these stamps to either release a lock or to check if the lock is still valid.
Another feature provided by StampedLock is optimistic locking. Most of the time read operations don’t need to wait for write operation completion and as a result of this, the full-fledged read lock isn’t required.
Instead, we can upgrade to read lock:
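The original snippet is missing from this extract; the following is a sketch of an optimistic read that validates its stamp and falls back to a full read lock if a writer intervened (class and field names are ours):

```java
import java.util.concurrent.locks.StampedLock;

public class StampedLockDemo {
    private static final StampedLock lock = new StampedLock();
    private static int value = 42;

    // Optimistic read: no real lock is taken; we validate afterwards that
    // no write happened in between, and fall back to a full read lock if it did.
    static int optimisticRead() {
        long stamp = lock.tryOptimisticRead();
        int current = value;
        if (!lock.validate(stamp)) {          // a writer intervened: upgrade
            stamp = lock.readLock();          // full (pessimistic) read lock
            try {
                current = value;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return current;
    }

    public static void main(String[] args) {
        System.out.println("read value: " + optimisticRead());
    }
}
```

In the common case with no concurrent writer, the validate() check succeeds and the read completes without any locking overhead at all.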
Working With Conditions
The Condition class provides the ability for a thread to wait for some condition to occur while executing the critical section.
This can occur when a thread acquires access to the critical section but doesn’t have the necessary condition to perform its operation. For example, a reader thread can get access to the lock of a shared queue, which still doesn’t have any data to consume.
Traditionally Java provides wait(), notify() and notifyAll() methods for thread intercommunication. Conditions have similar mechanisms, but in addition, we can specify multiple conditions:
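The original code sample is not included in this extract; here is a minimal producer/consumer sketch with a single condition (the names are ours; a real bounded queue would typically use two conditions such as notEmpty and notFull):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static final Condition notEmpty = lock.newCondition();
    private static String data = null;

    // Consumer: waits on the "notEmpty" condition until data is available.
    static String consume() throws InterruptedException {
        lock.lock();
        try {
            while (data == null) {       // guard against spurious wakeups
                notEmpty.await();
            }
            return data;
        } finally {
            lock.unlock();
        }
    }

    // Producer: publishes data and signals one waiting consumer.
    static void produce(String value) {
        lock.lock();
        try {
            data = value;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    static String runDemo() throws InterruptedException {
        Thread producer = new Thread(() -> produce("hello"));
        producer.start();
        String result = consume();
        producer.join();
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("consumed: " + runDemo());
    }
}
```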
Semaphores
In addition to locks, the Concurrency API also supports counting semaphores. Whereas locks usually grant exclusive access to variables or resources, a semaphore is capable of maintaining whole sets of permits. This is useful in different scenarios where you have to limit the amount of concurrent access to certain parts of your application.
Here’s an example of how to limit access to a long-running task simulated by sleep(5) :
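The referenced example is not included in this extract; the following sketch reproduces the described setup: 10 tasks on 10 threads, a semaphore of size 5, and a one-second tryAcquire timeout (the class name is ours, and a counter is returned for convenience):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

public class SemaphoreDemo {
    // Runs 10 tasks on 10 threads but lets only 5 hold the semaphore at once.
    // Returns how many tasks actually acquired a permit.
    static int runDemo() throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(10);
        Semaphore semaphore = new Semaphore(5);
        AtomicInteger acquired = new AtomicInteger();

        Runnable longRunningTask = () -> {
            boolean permit = false;
            try {
                // Wait at most one second for a permit instead of blocking forever.
                permit = semaphore.tryAcquire(1, TimeUnit.SECONDS);
                if (permit) {
                    acquired.incrementAndGet();
                    System.out.println("Semaphore acquired");
                    TimeUnit.SECONDS.sleep(5);   // the simulated long-running work
                } else {
                    System.out.println("Could not acquire semaphore");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                if (permit) {
                    semaphore.release();          // always return the permit
                }
            }
        };

        IntStream.range(0, 10).forEach(i -> executor.submit(longRunningTask));
        executor.shutdown();
        executor.awaitTermination(20, TimeUnit.SECONDS);
        return acquired.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("permits acquired: " + runDemo());
    }
}
```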
The executor can potentially run 10 tasks concurrently but we use a semaphore of size 5, thus limiting concurrent access to 5. It’s important to use a try/finally block to properly release the semaphore even in case of exceptions.
Executing the above code results in the following output:
Semaphore acquired
Semaphore acquired
Semaphore acquired
Semaphore acquired
Semaphore acquired
Could not acquire semaphore
Could not acquire semaphore
Could not acquire semaphore
Could not acquire semaphore
Could not acquire semaphore
The semaphore permits access to the actual long-running operation simulated by sleep(5) up to a maximum of 5 concurrent tasks. Every subsequent call to tryAcquire() waits out the maximum timeout of one second and then gives up, resulting in the console output that the semaphore could not be acquired.
Conclusion
In this article, we have seen different implementations of the Lock interface and learned how to use them in multithreaded applications.
In the next article, we will check the Executors. Stay tuned. | https://medium.com/javarevisited/java-concurrency-locks-9d161e1d1847 | ['Dmytro Timchenko'] | 2020-11-14 04:47:00.100000+00:00 | ['Software Development', 'Technology', 'Software Engineering', 'Java', 'Programming'] | Title Java Concurrency LocksContent lock thread synchronization mechanism like synchronized block except lock sophisticated Java’s synchronized block Locks created using synchronized block like get totally rid synchronized keyword Java 5 package javautilconcurrentlocks contains several lock implementation may implement lock still need know use still useful know theory behind implementation Lock v Synchronized Block difference use synchronized block using Lock API synchronized block fully contained within method — Lock API’s lock unlock operation separate method Lock API’s lock unlock operation separate method synchronized block doesn’t support fairness thread acquire lock released preference specified achieve fairness within Lock APIs specifying fairness property make sure longest waiting thread given access lock make sure longest waiting thread given access lock thread get blocked can’t get access synchronized block Lock API provides tryLock method thread acquires lock it’s available held thread reduces blocking time thread waiting lock reduces blocking time thread waiting lock thread “waiting” state acquire access synchronized block can’t interrupted Lock API provides method lockInterruptibly used interrupt thread it’s waiting lock Lock Reentrance implicit monitor implement reentrant characteristic Reentrant mean lock bound current thread thread safely acquire lock multiple time without running deadlock eg synchronized method call another synchronized method object thread first enters lock hold count set one unlocking thread reenter lock every time hold count incremented one every unlocks request hold count decremented one hold count 0 resource unlocked Lock Fairness Java’s synchronized 
1,160 | Another story about microservices: Hexagonal Architecture | When you hear stories about the most gigantic projects having a microservice architecture, you are tempted to introduce dozens of tiny applications that would work for you, like house elves, invisible and undemanding. However, system architectures lie on a spectrum.
What we imagine is one extreme end of that spectrum: tiny applications exchanging many messages. At the other end of the spectrum sits a giant monolith that does too many things on its own. In reality, there are many service-oriented architectures lying somewhere between those two extremes.
In a nutshell, a microservice architecture means that each application's (or microservice's) code and resources are its very own and will not be shared with any other app. When two applications need to communicate, they use an application programming interface (API) — a controlled set of rules that both programs can handle. Developers can make many changes to each application as long as it plays well with the API.
This idea comes in many flavors, with different shares of the monolith architecture. In this post, we are going to discuss one of such variations of microservice architecture, known as Hexagonal Architecture.
The first key concept of this architecture is to keep all the business models and logic in a single place; the second is that each hexagon should be independent.
What is Hexagonal Architecture?
Invented by Alistair Cockburn in 2005, Hexagonal Architecture (properly called Ports and Adapters) is driven by the idea that the application is central to your system. All inputs and outputs reach or leave the core of the application through a port that isolates the application from external technologies, tools and delivery mechanics.
Allow an application to equally be driven by users, programs, automated test or batch scripts, and to be developed and tested in isolation from its eventual run-time devices and databases. Alistair Cockburn
Hexagonal Architecture draws a thick line between the software’s inside and outside parts, decoupling the business logic from the persistence and the service layer. The inside part makes up the use cases and the domain model it’s built upon. The outside part includes UI, database, etc. The connection between them is realized via ports and their implementation counterparts are called adapters. In this way, Hexagonal Architecture ensures encapsulation of logic in different layers, which ensures higher testability and control over the code.
Architecture components
Each side of the hexagon represents a port that uses an adapter for a specific type of input or output. Hence, ports and adapters form the two major components of Hexagonal Architecture:
Ports
A port is a gateway provided by the core logic. It allows data to enter or leave the application. The simplest implementation of a port is an API layer. Ports come in two types: inbound and outbound.
An inbound port is the only part of the core exposed to the world that defines how the Core Business Logic can be used.
An outbound port is an interface the core uses to communicate with the outside world.
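As a minimal sketch (plain Java, reusing the port names from the example application described later in this article; no framework specifics), the two kinds of ports are just interfaces:

```java
// Inbound port: the only part of the core exposed to the outside world;
// it defines how the core business logic can be used.
interface ConversionPort {
    String toLowercase(String text);
}

// Outbound port: an interface the core declares in order to talk to the
// outside world (persistence, in this case); an adapter supplies the
// implementation.
interface PersistencePort {
    void save(String text);
}
```

Note that both interfaces belong to the core: the outside world depends on the core's contracts, never the other way around.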
Adapters
An adapter transforms one interface into another, creating a bridge between the application and the service that it needs. In hexagonal architecture, all communication between the application ports and the primary actors (which use the system to achieve a particular goal) and secondary actors (which the system uses to achieve the primary actors' goals) is done with the help of adapters. Therefore, adapters can also be of two types:
Primary Adapters
The primary or Driving Adapters represent the UI. They are pieces of code between the user and the core logic. They are called driving adapters because they drive the application and start actions in the core application. Examples of primary adapters are API controllers, web controllers or views.
Secondary Adapters
The secondary or Driven Adapters represent the connections to back-end databases, external libraries, mail APIs, etc. Each is an implementation of a secondary port, which is an interface. These adapters react to actions initiated by the primary adapters.
Domain Model
The third component of the architecture is the domain model, a conceptual model that represents the meaningful concepts of the domain that need to be modelled in software. These concepts include the data involved in the business and the rules the business applies to that data.
Benefits of Hexagonal Architecture
High maintainability, since changes in one area of an application don't affect others.
Ports and Adapters are replaceable with different implementations that conform to the same interface.
The application is agnostic to the outside world, so it can be driven by any number of different controls.
The application is independent from external services, so you can develop the inner core before building external services, such as databases.
Easier to test in isolation, since the code is decoupled from the implementation details of the outside world.
Our implementation of Hexagonal Architecture
One of Sciforce's projects required separating frequently changing external elements from internal ones, to lessen the impact of change and simplify the testing process.
The standard architectural template is based on the Spring set of frameworks:
The Core contains objects that represent the business logic of the service. Typically for Hexagonal Architecture, the core knows nothing about the outside world, including the network and the file system. All communication with the outside world is handled by Inbound and Outbound gateway layers.
In our application, a Port is a Groovy interface used to access either a Core or an Outbound object and an Adapter is an implementation of a Port interface that understands how to transform to and from external representations into the Core’s internal data model.
You can swap out an Adapter in the gateway layer: for example, you can enable accepting messages from RabbitMQ instead of HTTP clients with no impact on the Core. We can also substitute Service Stubs for outbound gateways, increasing the speed and reliability of integration tests.
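A minimal sketch of that substitution (hypothetical names, not the project's actual code): an in-memory stub implements the outbound port, so the core can be exercised with no real gateway behind it:

```java
import java.util.ArrayList;
import java.util.List;

// Outbound port declared by the core.
interface PersistencePort {
    void save(String text);
}

// Core logic: pure in-process work; storage is delegated through the port.
class ConversionService {
    private final PersistencePort persistence;
    ConversionService(PersistencePort persistence) { this.persistence = persistence; }
    String convert(String text) {
        String result = text.toLowerCase();
        persistence.save(result);
        return result;
    }
}

// Service stub swapped in for the real outbound gateway during tests:
// it records what the core tried to persist, with no database involved.
class PersistenceStub implements PersistencePort {
    final List<String> saved = new ArrayList<>();
    public void save(String text) { saved.add(text); }
}
```

A test wires `PersistenceStub` in place of the real adapter and asserts on `saved`, which is exactly what makes such tests fast and deterministic.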
Example: Simple application
In the example, we show a scheme of a simple application that uses the Spring Framework to accept a REST request with some text, convert the text to lowercase and save it to MongoDB.
In the first step, when the client sends an HTTP request, the RestController is responsible for handling it. In terms of Hexagonal Architecture, this controller serves as an Adapter that translates the request from the HTTP protocol into the internal domain model (a simple string in this case). It is also responsible for calling into the Core via the Port and sending the results back to the client.
The ConversionService is the object that the inbound Adapter (RestController) invokes via the ConversionPort interface. The service itself doesn’t access anything outside of the process: all it needs to do its job is to access the in-memory objects. After performing all the necessary processing (converting the text to lowercase), it delegates the task of storing the results to the outbound PersistencePort.
The MongoDBGateway is the implementation of the PersistencePort. It knows how to adapt the Core's internal model, which is plain text, into a form that MongoDB can handle. Though the example is basic, in more sophisticated systems it might implement exception handling, logging and retry logic.
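The whole flow can be sketched as follows (an illustrative approximation: the Spring annotations and the real MongoDB driver are left out, and an in-memory list stands in for the collection):

```java
import java.util.ArrayList;
import java.util.List;

// Inbound port exposed by the core.
interface ConversionPort {
    String toLowercase(String text);
}

// Outbound port the core needs for storage.
interface PersistencePort {
    void save(String text);
}

// Core service: converts text and delegates persistence;
// knows nothing about HTTP or MongoDB.
class ConversionService implements ConversionPort {
    private final PersistencePort persistence;
    ConversionService(PersistencePort persistence) { this.persistence = persistence; }
    public String toLowercase(String text) {
        String result = text.toLowerCase();
        persistence.save(result);
        return result;
    }
}

// Primary adapter: in the real application this would be a Spring RestController;
// here it is a plain class so the sketch runs without the framework.
class RestAdapter {
    private final ConversionPort port;
    RestAdapter(ConversionPort port) { this.port = port; }
    String handle(String requestBody) { return port.toLowercase(requestBody); }
}

// Secondary adapter: stands in for the MongoDBGateway (a real one would wrap
// the Mongo driver and could add exception handling, logging and retries).
class MongoDbGateway implements PersistencePort {
    final List<String> collection = new ArrayList<>(); // placeholder for the Mongo collection
    public void save(String text) { collection.add(text); }
}
```

The wiring mirrors the description above: the adapter drives the core through the inbound port, and the core reaches storage only through the outbound port.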
This example shows only 3 objects but in an actual application, you would have multiple objects in play. For example, a single REST controller would respond to different URLs by calling different services which then call different outbound gateways.
Conclusion
To sum things up, the main idea of Hexagonal Architecture is decoupling the application logic from the inputs and outputs. It helps to free your most important code of unnecessary technical details and to achieve flexibility, testability and other important advantages that make your working process more efficient.
However, like other architectures, hexagonal architecture has its limitations and downsides. For instance, it will effectively double the number of classes on your boundary.
When is it the best choice? As it facilitates the detachment of your external dependencies, it will help you with the classes that you anticipate will be swapped out in production in the future and the classes that you intend to fake in tests. | https://medium.com/sciforce/another-story-about-microservices-hexagonal-architecture-23db93fa52a2 | [] | 2020-01-10 14:40:55.620000+00:00 | ['Programming', 'Microservices', 'Software Engineering', 'Software Development', 'Software Architecture'] |
1,161 | Good Collaborations Are Art, Great Ones Are Kitsch | Heron Preston’s collaboration with oral care brand MOON sounds like something out of MSCHF factory. Known for its purposefully absurd and random viral stunts, MSCHF is the creator of Nike sneakers filled with Holy Water, toaster-shaped bath bombs, and an app making stock investments based on astrological signs.
While it certainly wouldn’t look out of place next to the squeaky chicken bong popularized by the “factory”, the limited edition stain removal whitening toothpaste in fact dropped on StockX on October 27th. Asking price climbed from $15 to $27 via DropX and the toothpaste came in a limited batch of 350 items.
Collaborations like MOON x Heron Preston, Colgate x Supreme, Aimé Leon Dore x Porsche 964, McDonalds x Travis Scott or White Castle x Telfar are often dismissed as stunt-y, tongue-in-cheek, garish and lowbrow. Be that as it may, most of them aim to be appreciated in an ironic and knowing way.
Good collaborations are art, great collaborations are kitsch. They fit into the definition of kitsch perfectly: a replica that’s purposefully fake, and that’s where the joke is. Take it seriously, and you are a goon.
There are already obvious parallels between collaborations and the world of art (and kitsch): there are auctions, collectors, dealers, critics, resale marketplaces, monographs. Just like art, collaborations aim to shock and surprise. They can’t be criticized, and they strive to reach high prices and cultural immortality.
Power to the Mundane
“You know it’s art when the check clears,” said Andy Warhol. With Roy Lichtenstein and Robert Indiana, Warhol made his way into museums by turning the mundane world into works of art by enriching it with pop references, connotations and associations. Warhol’s art is commercial and his commercials are art (a Warhol ad launched Absolut vodka in 1986).
At the same time, fine art went from museums into fashion, design and pop culture. Elsa Schiaparelli — the original creator of the newspaper print dress — was probably the proto fashion collaborator who featured her Surrealist friends like Salvador Dali on her designs. In the ’80s, New York designer Willi Smith invited artists, performers and graphic designers to join his project of making art part of daily life. In the early 00s, Jeff Koons, Damien Hirst, Takashi Murakami and Stephen Spouse joined forces with Louis Vuitton where then creative director Marc Jacobs turned the fashion-art collaboration into global cash cows. Recently, Cindy Sherman collaborated with Undercover and Yoyoi Kusama has just released her new Veuve Clicquot La Grande Dame limited-edition bottle and gift box. It retails for $30,000 and comes with a poem.
When someone buys a Cindy Sherman x Undercover, they aren’t actually buying a bag or a t-shirt; they’re buying a legit work of art. When they wear it, a person shows off their knowledge and cultural awareness. They also see themselves through a new lens: not as mere consumers, but as collectors. Done right, collaborations generate collectibles, justify high prices, create cult objects, and initiate brands in the domain of intangibles. Thanks to this newly-acquired timelessness, symbolic authority and post-materialistic form, Undercover isn’t a mere commercial entity, but a shrine of culture and human creativity. Through collaborations, brands ingrain themselves in culture, not in a market segment.
Culture x Commerce
Collaborations transform non-culture into culture. It’s a great business model: collaborations don’t need financial capital, only a strong brand capital. Supreme can put its logo on a brick and collaborate with Colgate as long as its brand equity is attractive.
Moncler, Mini and Aimé Leon Dore made collaborations integral parts of their DNA. With good reason: compressed trend cycles force brands to constantly come up with the new stuff. Consumers today expect physical products at the unattainable speed of Instagram. A quick solve is to riff off already popular and familiar stuff. Cue in the endless Air Jordan and Supreme collaborations. A brand uses Air Jordan or Supreme’s aesthetic just enough to become kitsch, which gives it a new context and an ironic read and turns it into an insider joke.
Collaborations work well in mature markets, where consumers are bored and products are commodified. There are only so many Uniqlo items that a person can own, but not if those items were made by Jun Takahashi, Jil Sander or Pharrell. Having the fashion link allows Uniqlo to cultivate "elitism to all:" it can sell a lot of Pharrell t-shirts to a lot of people without diluting its symbolic value. This symbolic value makes a commodity incomparable: very few people will pick MOON over Crest or Colgate in a pharmacy. But many will select it to add some flex to their bathrooms. Limited editions keep the cultural pioneers interested in the brand, and MOON can enjoy a temporary monopoly by rendering its competition irrelevant: collaborations are hard to replicate.
Hardest to replicate are inconsistent and random collaborations, like Heron Preston x MOON or Steven Alan x Mucinex. Their genius is in that they shun any coherence. Coherence is for suckers, because collaborations aren’t brand extensions. They’re a creative expression of a brand that let it flex its zeitgeist muscles, promote it as a trendsetter and turn its products into brand communication. A Chanel snowboard or IKEA x Craig Green make Chanel and IKEA modern and culturally present and curious. An unexpected collaboration attracts collectors, cultural pioneers and hypebeasts. It becomes the source of a brand’s aspirational power.
Collaborations Trade in Aspiration
For a brand, having aspirational power is everything. In the modern economy, the growth motor isn't price. It's taste, aesthetics, identity and thrill. Economic growth doesn't come from products, but from the intangible social and cultural capital that a brand creates. Products are just a vehicle for beauty, thrill, identity, transformational experiences and a life aesthetically worth living. Moon toothpaste, in its own words, "is destined to elevate your everyday, oral care routine into a true oral beauty experience." By collaborating with Heron Preston, MOON puts this mission on steroids. It makes brushing teeth more culturally and socially relevant. Having an orange toothpaste turns everyday hygiene into a creative and inspiring ritual. Every time we brush our teeth with Heron Preston x Moon, we create a social distance between ourselves and those unenlightened enough to use Crest. We also create a link between us and all other cultural pioneers of oral care.
Collaborations aren’t a brand gloss. They’re a strategic transformation of a brand’s operating system. In the aspirational economy, this transformation is a matter of a brand’s long-term renewal and cultural relevance. Strategic collaborations across a brand’s entire value chain are akin to making a safe bet on a brand’s cultural and business future.
At the level of marketing and sales, collaborations protect pricing power, ensure high margins and reframe consumers' perception of the brand. At the level of product concept and production, collaborators provide value innovation. At the level of distribution, collaborations expand a brand's market and renew its customer base. A collaboration between luxury brands and Chinese KOLs gives these brands an in with the Chinese customer. A collaboration between Rimowa and streetwear pioneers like Supreme, Bape and Anti Anti Social Club renews brand associations. At the level of merchandising, collaborations give a halo to the core collection, re-evaluate brand perception and increase brand consideration. Before Nike launches a new model, it seeds it on the runways of its fashion collaborators like Undercover or Sacai. The collaborators add their imprint, making it culturally noteworthy and spurring interest in the model's later commercial release by Nike.
Collaborations are basically a constant brand re-contextualization: they take it from one context and put it into another one. In that sense, there isn’t a “bad” collaboration: collaborations are calculated cultural and business tests. Some contexts are more fertile than others, but just as evolution constantly mixes stuff up to see what sticks (theropods didn’t), a brand stays alive through remixes. Collaborations are the strategy of brand awareness, market expansion and its fountain of youth.
Through re-contextualization, collaborations:
Allow brands to start trading in exchange value, not in use value. Use value is defined by a product’s functionality. Exchange value is defined by a product’s social appeal. A social hit becomes a market hit. Brands that insert their products in the cultural exchange system and not in a market segment, win.
Give everyday products identity. In a crowded competitive landscape, a brand is the key product differentiator. A brand makes products stand for something more than their function and separates them from commodities. A collaboration enforces brand identity, ensures its continuity, and connects products into a narrative.
Infuse taste and meaning into ordinary consumption. Today, a brand’s products and services do not only fulfill their basic functions. Their job is to aesthetically enrich their buyers’ lives and become social links that signal status, social distinction and belonging.
Collaborations are easier to understand once they’re taken out the domain of brand stunts and into the domain of art. Art is a big business. Art is also a big social and cultural commentator, critic and cynic. It tells us what we need to know about the world we live in and about where the future is going. Collaborations do the same.
This article was originally published on Highsnobiety. | https://medium.com/swlh/good-collaborations-are-art-great-ones-are-kitsch-e583fb2374fb | ['Ana Andjelic'] | 2020-11-23 08:30:18.256000+00:00 | ['Entrepreneurship', 'Art', 'Collaboration', 'Culture', 'Marketing'] |
1,162 | Best practices for combining data science and design | By Ricky Hennessy, Sheetal D Raina & Amanda Ward
At Fjord, we’ve been combining data science and design on integrated teams for the past three years. By bringing together these two disparate disciplines, we’ve been able to deliver tremendous value to our clients through data-driven products and services that address user needs and solve critical business problems. Despite the enormous potential, most organizations struggle to enable effective collaboration between data science and design.
In this article, we’ll share best practices and lessons learned on how data scientists and designers can work together to deliver transformative products and services.
Create a Shared Understanding
With different backgrounds and ways of seeing the world, it’s important to create a shared understanding between data scientists and designers so we can get the most out of this powerful collaboration. Instead of parallel tracks working in isolation, take advantage of opportunities for knowledge sharing and question asking. By remaining flexible and open to different approaches, data scientists can benefit from a deep understanding of user needs and designers can unlock possibilities presented by large sets of data.
When working with one of our aerospace clients, our data science team was performing exploratory data analysis while our design team was interviewing users. By co-locating the team and providing opportunities for spontaneous check-ins, the design team was able to validate research findings using real data, and the data science team was able to gain insight into the peculiarities contained in the data. This led to a much deeper understanding of both the end users and the data.
Don’t Lose Sight of Business Goals
Data science is a process for solving business problems using data. Before we can begin the process, it’s critical that we have a clear understanding of the business. Too often, the design process leads to an overly user-centric view that ignores real-world realities and constraints. On the other hand, data scientists can become so focused on solving technical problems that they lose sight of core issues that need to be addressed. For any data science and design collaboration to be successful, it’s critical to align user needs with business goals. Ensuring that the team is equipped with business understanding will lead to solutions that deliver value while also addressing user needs.
While working with a Fortune 100 manufacturing company, it became apparent that the initial problem at hand was too broad. If we had focused exclusively on what users were looking for or what project stakeholders wanted, we would have developed solutions that delivered little to no value for the organization. Instead, we viewed user needs and stakeholder input through a “business goals” lens, allowing us to narrow our focus to high value problems.
Execute Through Seamless Collaboration
Defining the right problem starts with the utilization of a “business goals” mindset. However, defining the problem is just the beginning. Individual understanding leads nowhere unless there is effective collaboration between team members. At every stage, integrating various viewpoints with different skills and expertise helps the team solve the right business problem in unison.
At Fjord, we have developed a framework that translates business problems into solvable data science problems using a human-centered design approach. The key lies in integrating strategy, design, and data science in a way that they are more potent than when they work separately. | https://medium.com/design-voices/best-practices-for-combining-data-science-and-design-3aebdb4e076e | ['Ricky Hennessy'] | 2019-07-08 11:35:46.491000+00:00 | ['Design', 'Data Science', 'Design Thinking', 'Collaboration', 'AI'] | Title Best practice combining data science designContent Ricky Hennessy Sheetal Raina Amanda Ward Fjord we’ve combining data science design integrated team past three year bringing together two disparate discipline we’ve able deliver tremendous value client datadriven product service address user need solve critical business problem Despite enormous potential organization struggle enable effective collaboration data science design article we’ll share best practice lesson learned data scientist designer work together deliver transformative product service Create Shared Understanding different background way seeing world it’s important create shared understanding data scientist designer get powerful collaboration Instead parallel track working isolation take advantage opportunity knowledge sharing question asking remaining flexible open different approach data scientist benefit deep understanding user need designer unlock possibility presented large set data working one aerospace client data science team performing exploratory data analysis design team interviewing user colocating team providing opportunity spontaneous checkins design team able validate research finding using real data data science team able gain insight peculiarity contained data led much deeper understanding end user data Don’t Lose Sight Business Goals Data science process solving business problem using data begin process it’s critical clear understanding business often design process lead overly 
usercentric view ignores realworld reality constraint hand data scientist become focused solving technical problem lose sight core issue need addressed data science design collaboration successful it’s critical align user need business goal Ensuring team equipped business understanding lead solution deliver value also addressing user need working Fortune 100 manufacturing company became apparent initial problem hand broad focused exclusively user looking project stakeholder wanted would developed solution delivered little value organization Instead viewed user need stakeholder input “business goals” lens allowing u narrow focus high value problem Execute Seamless Collaboration Defining right problem start utilization “business goals” mindset However defining problem beginning Individual understanding lead nowhere unless effective collaboration team member every stage integrating various viewpoint different skill expertise help team solve right business problem unison Fjord developed framework translates business problem solvable data science problem using humancentered design approach key lie integrating strategy design data science way potent work separatelyTags Design Data Science Design Thinking Collaboration AI |
1,163 | 💭DRAWING: Jung💭 | Artist’s Note №1
For synchronicity’s sake, here’s my drawing of legendary psychoanalyst Carl Jung. When I think of Switzerland, I first think of chocolate — then I think of Carl Jung. It’s probably pathological.
Artist’s Note №2
This drawing will earn no money (Medium recently all-but-demonetized poetry, cartoons, flash fiction and other short articles). Please consider buying me a coffee. More coffee=more drawings for you to enjoy.
Artist’s Note №3
This drawing is brought to you by the letter “D.” “D” is for Dr. Franklin’s Staticy Cat and Other Outrageous Tales, my collection of humorous stories and drawings for children.
Artist’s Note №4
My new one-man Medium magazine is called — Rolli. Subscribe today.
Artist’s Note №5
From now on, I’m letting my readers determine how often I post new material. When this post reaches 1000 claps — but not before — I’ll post something new. | https://medium.com/pillowmint/drawing-jung-3beee4e6bb98 | ['Rolli', 'Https', 'Ko-Fi.Com Rolliwrites'] | 2020-01-27 19:26:16.207000+00:00 | ['Art', 'Drawing', 'Mental Health', 'Psychology'] | Title 💭DRAWING Jung💭Content Artist’s Note №1 synchronicity’s sake here’s drawing legendary psychoanalyst Carl Jung think Switzerland first think chocolate — think Carl Jung It’s probably pathological Artist’s Note №2 drawing earn money Medium recently allbutdemonetized poetry cartoon flash fiction short article Please consider buying coffee coffeemore drawing enjoy Artist’s Note №3 drawing brought letter “D” “D” Dr Franklin’s Staticy Cat Outrageous Tales collection humorous story drawing child Artist’s Note №4 new oneman Medium magazine called — Rolli Subscribe today Artist’s Note №5 I’m letting reader determine often post new material post reach 1000 clap — — I’ll post something newTags Art Drawing Mental Health Psychology |
1,164 | One vs One & One vs All | One vs One & One vs All
Binary Class Classification Approach to Solve Multi Class Classification Problem
What is Multi Class Classification problem?
When we predict one class out of multiple classes, it is known as multi-class classification. Suppose your mother has given you a task to bring a mango from a basket containing a variety of fruits; indirectly, your mother has told you to solve a multi-class classification problem.
But our main aim is to apply the binary classification approach to predict the result from multiple classes.
Why we need One vs Rest and One vs One?
There are some classification algorithms which were not made to solve multi-class classification problems directly; these algorithms include LogisticRegression and SupportVectorClassifier. By applying a heuristic approach to these algorithms, we can solve multi-class classification problems. Let's get started…
One Vs Rest
Suppose we have three classes: [Machine Learning, Deep Learning, NLP]. We have to find the final prediction result out of it.
Step 1: Take the multi-class data and split it into multiple binary problems. Let's say we have 100 classes. Then, using One vs Rest, 1 class will belong to (class 0) and the remaining 99 classes will belong to (class 1). This process repeats until each and every class has formed its own individual model.
number of models created = number of classes
Figure 1
Step 2: Each model predicts a probability; whichever model has the highest probability will be considered for the final binary classification result.
Say the probabilities for Model 1, Model 2, and Model 3 are 0.3, 0.5, and 0.2 respectively. Since Model 2 has the highest probability, we will use Model 2 for the final prediction of class 0 or class 1, i.e.:
Model 2: — Deep Learning (Class 0) & [Machine Learning, NLP] (Class 1)
If we predict class 0, our predicted result will be Deep Learning; otherwise, the result will be from class 1, i.e. [Machine Learning, NLP].
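To make the selection step concrete, here is a minimal illustrative sketch using the toy probabilities from the example; `ovr_predict` is a hypothetical helper written for this walkthrough, not part of scikit-learn:

```python
# Minimal sketch of the One-vs-Rest decision rule described above.
# Each entry maps a binary model's target class to the probability that
# the input belongs to that class (class 0 of that binary model).

def ovr_predict(model_probs):
    """Return the class whose binary model scored the highest probability."""
    # The winning model's own class is the final multi-class prediction.
    return max(model_probs, key=model_probs.get)

probs = {
    "Machine Learning": 0.3,  # Model 1
    "Deep Learning": 0.5,     # Model 2
    "NLP": 0.2,               # Model 3
}

print(ovr_predict(probs))  # Deep Learning
```

In the standard formulation, the class whose model scores highest is returned directly, which matches the "Model 2 wins, so its class 0 (Deep Learning) is the answer" reasoning above.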
# logistic regression for multi-class classification using a one-vs-rest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, n_redundant=5, n_classes=3, random_state=1)
# define model
model = LogisticRegression()
# define the ovr strategy
ovr = OneVsRestClassifier(model)
# fit model
ovr.fit(X, y)
# make predictions
yhat = ovr.predict(X)
Disadvantage
Since it creates as many models as there are classes, prediction of the output is slow; that is, it has high time complexity.
If we have hundreds of classes, the task becomes very arduous.
One Vs One
Suppose you have n classes; this approach builds a model for each pair of one class vs another class. Let's say you have four classes in a dataset: A, B, C, D.
Step 1: Convert the multi-class dataset into binary problems. Here, the number of models will be:
Figure 2
A total of 6 binary classifiers will be formed, as shown below:
Figure 3
Step 2: Find the probability from each model; whichever has the highest probability will give the final predicted output.
Figure 4
Since Model 3 has the highest probability, if it predicts class 0 the result will be A, and a prediction of class 1 will give the result D.
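The pair count above can be checked with a few lines of Python; the classes A through D are the toy labels from the example, and `itertools.combinations` enumerates exactly the one-vs-one pairs:

```python
# Sketch of how One-vs-One splits a multi-class problem into pairwise
# binary problems, using the four classes A, B, C, D from the example.
from itertools import combinations

classes = ["A", "B", "C", "D"]
pairs = list(combinations(classes, 2))  # one binary model per pair of classes

n = len(classes)
assert len(pairs) == n * (n - 1) // 2  # 4 * 3 / 2 = 6 models

for class0, class1 in pairs:
    print(f"{class0} vs {class1}")
# A vs B, A vs C, A vs D, B vs C, B vs D, C vs D
```

This quadratic growth in the number of models is why One vs One is usually paired with fast binary learners such as SVMs, as in the sklearn example below.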
# SVM for multi-class classification using one-vs-one
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, n_redundant=5, n_classes=3, random_state=1)
# define model
model = SVC()
# define ovo strategy
ovo = OneVsOneClassifier(model)
# fit model
ovo.fit(X, y)
# make predictions
yhat = ovo.predict(X)
Conclusion
Hope you liked this blog. If you have any suggestions for further improvement, please comment below.
If you want to learn more about this topic please click this link | https://medium.com/ai-in-plain-english/one-vs-one-one-vs-all-binary-class-classification-approach-to-solve-multi-class-classification-f285b72dc12c | ['Akhil Anand'] | 2020-12-25 10:55:01.866000+00:00 | ['Logistic Regression', 'Svm', 'Machine Learning', 'AI', 'Artificial Intelligence'] | Title One v One One v AllContent One v One One v Binary Class Classification Approach Solve Multi Class Classification Problem Multi Class Classification problem predict one class multi class known multi class classification Suppose mother given task bring mango basket variety fruit indirectly mother told solve multi class classification problem main apply binary classification approach predict result multi class need One v Rest One v One classification algorithm made solve multi class classification problem directly algorithm LogisticRegression SupportVectorClassifier applying heuristic approach algorithm solve multi class classification problem let’s get started……… One Vs Rest Suppose three class Machine Learning Deep Learning NLP find final prediction result Step 1 Take multi class data split multiple binary class let 100 class using One Vs Rest 1 class belong class 0 remaining 999 class belong class 1 process happen till every class form individual Model number model created number class Figure 1 Step 2 model predicts probability whichever model highest probability considered binary class classification get final result Let probability Model 1 Model 2 Model 3 030502 respectively observe Model 2 highest probability hence use model 2 final prediction class class 1 ie Model 2 — Deep Learning Class 0 Machine Learning NLP Class 1 predict class 0 mean predicted result Deep Learning else Result form class 1 ie Machine Learning NLP logistic regression multiclass classification using onevsrest sklearndatasets import makeclassification sklearnlinearmodel import LogisticRegression sklearnmulticlass import 
OneVsRestClassifier define dataset X makeclassificationnsamples1000 nfeatures10 ninformative5 nredundant5 nclasses3 randomstate1 define model model LogisticRegression define ovr strategy ovr OneVsRestClassifiermodel fit model ovrfitX make prediction yhat ovrpredictX Disadvantage make number model equal number class hence slow prediction output Means high time complexity 100 class task much arduous One Vs One Suppose n number class give model one class v another classlet four class dataset ABC Step 1 Convert multi class dataset binary class number model Figure 2 Total 6 binary class formed shown Figure 3 Step 2 Find probability Model whichever highest probability model give final predicted output output Figure 4 Model 3 highest probability hence predict class 0 result prediction class 1 give result SVM multiclass classification using onevsone sklearndatasets import makeclassification sklearnsvm import SVC sklearnmulticlass import OneVsOneClassifier define dataset X makeclassificationnsamples1000 nfeatures10 ninformative5 nredundant5 nclasses3 randomstate1 define model model SVC define ovo strategy ovo OneVsOneClassifiermodel fit model ovofitX make prediction yhat ovopredictX Conclusion Hope liked blog suggestion improvement please comment want learn topic please click linkTags Logistic Regression Svm Machine Learning AI Artificial Intelligence |
1,165 | Your Punctuality is Reflecting More Than You Know | Life is anything but predictable. People will argue the validity of free will until the end of time. One thing remains to be true though. So much of life boils down to how you react to it.
I like to think that being proactive is always the best approach. To most things, at least. Yet, we don’t always have a choice in the matter if unpredictable circumstances arise. We are thrown into reactive mode. How you respond is everything, and it speaks volumes to the kind of person you are deep down.
One of the foundations of social and professional life is punctuality. Unfortunately, this is often at odds with the curveballs life throws at you. There can be a plethora of natural or human causes that will put a hard stop on your punctuality.
How you respond matters. How you communicate matters. Ask yourself: Do I value punctuality and understand how it reflects upon me?
Why does punctuality matter?
Trust & Dependability
So much of our world comes down to this one word. We place an underappreciated amount of trust in absolute strangers. While many say that we earn trust, for many it is pre-loaded when meeting people.
In the workplace, hiring is a vetting process in itself. There’s a baseline of trust that many have for new coworkers, managers or direct reports. It makes us comfortable to think we can safely hand off responsibility to someone.
If you are the person who is late to your interview, that IS your first impression. Follow that up with habitual tardiness to meetings or deadlines, and you’re now that person. While arguably better than lacking interpersonal skills, it still breeds distrust.
The act of being punctual will strengthen or confirm the trust others have in you. We all know who the most dependable people are in our lives. Also in our workplaces. When they say they will be somewhere to help you or make a meeting, you have no doubt it will happen. Be that person and make it habitual.
Respect
What is one of the first feelings you have when someone is late for a meeting? Or lunch? If they’re normally punctual and reliable, it’s surprising. Likely the assumption that something came up and it’s out of their control. But they’ll arrive.
Now, what if that person is always late? I’m willing to guess one of the thoughts that circulate is that they don’t respect your time. Or the plans. Or the meeting. While this may not be true, it can be hard to avoid this internal accusation.
The presumption that someone’s actions are indicative of who they are at the core. Less about uncontrollable circumstances. It comes down to one thing: repetition.
Being punctual shows you have respect for yourself, others and the subject at hand. People want to believe you have that respect and enjoy returning the favor. Give the sign that doing so is worth their time.
Integrity
We all know that phrase that ‘your word is your bond.’ Like it or not, this holds true throughout all aspects of life.
When you give your word, it immediately stands to reflect upon you as a person. As a coworker. A friend. And while other judgments are reversible, damage to your integrity isn’t the same. Losing the value of your word can be irreversible, depending on the circumstances.
It stands to reason that your stamp, your seal of your personal brand, gets its value from your word. Make it unwavering. | https://medium.com/imposters-inc/your-punctuality-is-reflecting-more-than-you-know-31d6500e7c14 | ['Michael Lanasa'] | 2020-07-22 00:01:09.623000+00:00 | ['Business', 'Startup', 'Leadership', 'Life Lessons', 'Entrepreneurship'] | Title Punctuality Reflecting KnowContent Life anything predictable People argue validity free end time One thing remains true though much life boil react like think proactive always best approach many thing least Yet don’t always choice matter unpredictable circumstance arise thrown reactive mode respond everything speaks volume kind person deep One foundation social professional life punctuality Unfortunately often odds curveballs life throw plethora natural human cause put hard stop punctuality respond matter communicate matter Ask value punctuality understand reflects upon punctuality matter Trust Dependability much world come one word place underappreciated amount trust absolute stranger many say earn trust many preloaded meeting people workplace hiring vetting process There’s baseline trust many new coworkers manager direct report make u comfortable think safely hand responsibility someone person late interview first impression Follow habitual tardiness meeting deadline you’re person arguable better lacking interpersonal skill still breed distrust act punctual strengthen confirm trust others know dependable people life Also workplace say somewhere help make meeting doubt happen person make habitual Respect one first feeling someone late meeting lunch they’re normally punctual reliable it’s surprising Likely assumption something came it’s control they’ll arrive person always late I’m willing guess one thought circulate don’t respect time plan meeting may true hard avoid internal accusation presumption someone’s action indicative core Less uncontrollable circumstance come one thing repetition punctual show respect 
others subject hand People want believe respect enjoy returning favor Give sign worth time Integrity know phrase ‘your word bond’ Like hold true throughout aspect life give word immediately stand reflect upon person coworker friend judgment reversible damage integrity isn’t Losing value word irreversible depending circumstance stand reason stamp seal personal brand get value word Make unwaveringTags Business Startup Leadership Life Lessons Entrepreneurship |
1,166 | Beautiful Intelligence | Mark Rothko used color to paint abysses deep enough to lose yourself in. Prince used his voice to conjure images of raspberry berets and oceans of violets in bloom. Break dancers and ballerinas use the body in motion to express everything from love to fury.
We instinctively engage with color, sound, and motion from the time we’re born. These universal forms of expression take us places the written word sometimes can’t go, viscerally animating ideas and providing context.
But people often associate these communicative tools more with the arts than with corporate spaces, which is perhaps why we haven’t traditionally seen these elements in digital productivity ecosystems. Color, sound, and motion fell by the wayside in a world of suits and ties and black and white documents.
That world is increasingly antiquated.
Even in offices where you may still have to cover tattoos and remove piercings, people are starting to welcome gifs and emojis in professional content. And why not? Whether your desired aesthetic feels more Bauhaus or more pop, moving beyond letters and numbers makes productivity more authentic, beautiful, and resonant.
In Microsoft 365, we’re blending art and science to create a library of high-quality content that leverages our powerful AI. From creating original work in-house to commissioning designs from artists worldwide, we’ll be rolling out a curated array of new illustrations, looping videos, photography, fonts, animations, stickers, and much more, beginning primarily within Office. | https://medium.com/microsoft-design/beautiful-intelligence-6e03cdfe8ab0 | ['Rachel Romano'] | 2020-09-17 17:05:22.448000+00:00 | ['Microsoft', 'Design', 'AI'] | Title Beautiful IntelligenceContent Mark Rothko used color paint abyss deep enough lose Prince used voice conjure image raspberry beret ocean violet bloom Break dancer ballerina use body motion express everything love fury instinctively engage color sound motion time we’re born universal form expression take u place written word sometimes can’t go viscerally animating idea providing context people often associate communicative tool art corporate space perhaps haven’t traditionally seen element digital productivity ecosystem Color sound motion fell wayside world suit tie black white document world increasingly antiquated Even office may still cover tattoo remove piercings people starting welcome gifs emojis professional content Whether desired aesthetic feel Bauhaus pop moving beyond letter number make productivity authentic beautiful resonant Microsoft 365 we’re blending art science create library highquality content leverage powerful AI creating original work inhouse commissioning design artist worldwide we’ll rolling curated array new illustration looping video photography font animation sticker much beginning primarily within OfficeTags Microsoft Design AI |
1,167 | How CRY makes money | I realized that we’ve never taken the time to fully explain all that we do at CRY, including how we earn revenue. It’s fairly simple but does warrant some discussion.
First, we are a three-person team. It’s hard for me to give you titles because, for the most part, all of us do a little of everything. At our core, we are content creators. Whether it’s here on CRY Mag or our We CRY Together celebration, our goal is to create or curate stories that are in line with our vision of elevating emerging artists, building a connected creative community or navigating the emotional aspects of what it means to be a writer or artist.
How we make money
As much as we love what we do, we need money for our business to work. And yes, we are a business. As much fun as we have doing this, we’re actually very focused on driving revenue because we know that’s what it takes to give us the freedom to keep creating.
Ghostwriting
Our main source of revenue comes from ghostwriting. We help people write stories of accomplishment, overcoming some kind of obstacle, memoirs, or family histories that people want to capture. This means someone contracts us to write an entire book. The cost for this is tens of thousands of dollars so it’s reserved for those who are committed to telling their story and have the finances to do so. Not everyone who chooses this option is “rich.” You’d be surprised how resourceful people can be when they’re motivated.
Proposals (story structures)
We definitely understand that not everyone can afford for us to write their book in full. As a separate offering, we write proposals. A proposal can have different meanings, but for CRY, it’s more like an outline for your book. If someone who is not a writer has an idea for a book, it’s actually really difficult for them to understand how to get the idea out of their head on onto the page. That’s where we help. We provide structure, a succinct synopsis, chapter by chapter summary, along with other outputs so by the time we’re finished, our clients know exactly what to write and how to format their story. We do this for anywhere between $2,000-$3,000, depending on a few variables.
Editing
Sometimes, clients come to us with hundreds of pages of blog posts, journal entries and loose notes. We take those pages and turn it into a real book. The cost for this varies depending on the amount of work we have to do, but typically comes in around $6,000 — $8,000.
Why are we sharing this?
Because it’s important to be transparent about money. There shouldn’t be this secret around how revenue is earned, especially if it can help inform or educate someone who is trying to do something similar. We love what we do at CRY, and earning revenue through these streams allows us to create and curate content. We definitely have plans to expand our offerings, and when we do, we’ll be sure to share another update! | https://medium.com/cry-mag/how-cry-makes-money-ca79e7eeb24a | ['Kern Carter'] | 2020-10-28 13:14:10.215000+00:00 | ['Creativity', 'Ghostwriting', 'Education', 'Money', 'Writing'] |
1,168 | Exploring Google Cloud Vision API and Feature Demonstration With Python | Features of Google Cloud Vision API
1. Face detection
The Vision API can perform feature detection on local image files and remote image URLs. The DETECT_FACES and DETECT_FACES_URI functions detect multiple faces within an image and return the associated key facial attributes for each face, such as emotional state and headwear.
2. Image attributes
Detects general attributes of the image, such as dominant colors and appropriate crop hints.
3. Detect labels
The Vision API can detect and extract information about entities in an image, across a broad group of categories. Labels can identify general objects, locations, activities, animal species, products, and more.
4. Optical Character Recognition (OCR)
Detects and extracts text from images. TEXT_DETECTION and DOCUMENT_TEXT_DETECTION annotations support OCR.
5. Web Detection
Web Detection detects web references to an image. It searches the web for a best-guess label and for pages with fully or partially matching images.
6. Detect multiple objects
The Cloud Vision API can detect and extract multiple objects in an image with Object Localization — a module that identifies information about the object, the position of the object, and rectangular bounds for the region of the image that contains the object.
7. Detect explicit content (SafeSearch) | https://medium.com/better-programming/exploring-google-cloud-vision-api-and-feature-demonstration-with-python-1f02e1dbdfd3 | ['Ulku Guneysu'] | 2020-12-15 02:24:31.137000+00:00 | ['Google Cloud Platform', 'Python', 'Machine Learning', 'Programming', 'Computer Vision'] |
1,169 | Q&A: Sam Felix, Director of Audience + Platforms @ The New York Times | Q&A: Sam Felix, Director of Audience + Platforms @ The New York Times
This week, The Idea caught up with Sam Felix to discuss how The Times thinks about reaching new audiences, how it evaluates new and different platforms, and how it adjusts to changes in the tech-universe.
Tell us about your role and what you do.
I focus on the company-wide strategy, partnerships and relationships with key tech partners — focusing a majority of my time on Google, Apple, Facebook, Snapchat, etc. — and figuring out how and to what extent we should be working with these platforms.
My job is to think deeply about how The Times’ journalism is reflected off-platform and monetized and to make sure there’s a clear value to us participating in the platform, whether that be direct revenue or a strong audience value.
We spend most of our time focusing on projects, big and small, that help connect The Times to these sometimes hard to understand, hard to reach audiences off-platform and look for ways to drive them back to our O&O (owned and operated). So the types of projects we tend to work on and lead are things like Subscribe with Google, AMP, Apple News, Snapchat Discover.
How does your team fit within the larger organization?
We’re in a pretty unique position within the company — we work across most of the departments to help spread a consistent strategy for the platforms. We’re technically part of the marketing organization, but physically sit in the newsroom.
We of course have no influence over the coverage and really maintain an appropriate separation from the journalists, but it’s very clear and central to our work that we partner closely with the newsroom audience team and the newsroom thinkers to ensure that we’re building audiences off-platform that reflect our overall strategy.
Can you talk more about The Times’ overall strategy? How do you evaluate new or emerging platforms?
When we’re thinking about new platforms, we are vigilant about our strategy. We look at these new opportunities and these new platforms through different lenses: Can the platform help us demonstrate the breadth of our report to a new audience that we don’t otherwise reach? Does the platform help drive this audience back to our O&O?
We believe that the best way to experience The Times’ journalism is on our properties, and so our goal is to bring users back to our properties, fully immersing them in the journalism, with the goal of developing a relationship and a daily habit with our readers.
We also ask ourselves: does this platform give us a new way of thinking about storytelling — so things like Snapchat Discover — in a way that we might not otherwise do on our properties.
One example is Reddit. Obviously Reddit is not new, it’s actually been around for a very long time, but they are sort of emerging for us, in a way. When we look at how we engage with Reddit and what sort of investment we put into it, one of the things that was most attractive to us was leveraging AMAs as a way to expose this very engaged Times audience to our journalists and answer questions about how the report is made, pulling back the curtain on The New York Times.
Again, something we can’t always do as easily, especially for a new audience on our platforms, and so we can leverage these partners to help us really tell more of The New York Times’ story.
Thinking about some of the big changes that have happened within the last year regarding Google and Facebook, have they had any impact on how you approach or think about these platforms?
These changes have definitely impacted how we optimize on a daily basis, but I wouldn’t say that they’ve dramatically, or very much at all, changed our strategy for how we approach Google and Facebook holistically. Facebook’s been making these types of changes, big and small, to the algorithm for at least a year, if not more.
We’ve certainly tracked the changes and we monitor how they impact our distribution mix and try to respond in our programming strategy, but we are not reliant on Facebook traffic to generate audience, so it doesn’t impact our business in a material way or our ability to reach our readers.
At the same time, we’ve definitely seen the shift towards search and Google, or the shift back to Google. We have a very strong partnership with Google and make it a priority to work very closely with them on things like Subscribe with Google — to not only take advantage of the resurgence of search, but also to ensure that we are providing the best possible subscriber experience for our readers, no matter where they choose to meet us.
We also want to do what we can to help Google create products and experiences within its platform that support the news business so that we don’t find ourselves in the land of too many pivots.
We’re constantly keeping an eye on what’s happening on each of those platforms, but trying to be thoughtful and cautious in our approach overall.
You mentioned part of your strategy is to ultimately drive users back to your O&O. How does Google AMP fit into that strategy?
We were an early partner with Google on AMP and took a very measured approach the last couple years to understanding the format, the experience, and working with Google to enhance the story page.
We did a lot of testing around the impacts on audience as well as advertising, and in that time, worked with Google, watched how the industry had shifted, and eventually became more comfortable with the format. And so we recently made the decision to convert most of our articles, save for some that the AMP format doesn’t quite support yet, into AMP pages.
So far it’s continued to enhance performance and have a positive impact on our audience within the Google ecosystem. We took our time and we weren’t one of the first ones to jump all in on AMP, because it is important that the AMP page that is presented to the reader is a positive experience that we believe is equal to, or at least nearly as great as, what we hope we’re able to provide on our web pages.
We also wanted to make sure, and this is why Subscribe with Google has been great, that we’re able to align our business model consistently across AMP and our website and various other touch points.
So technically it is served from a Google server, but Google’s done a lot of work to make it come as close to being a part of our own universe as possible.
How are you thinking about news aggregation apps like Apple News and Flipboard? How do aggregator apps like that fit into the NYT’s strategy?
Well, we are cautious. We think about these platforms as an opportunity to reach readers who are more interested in a browsing experience, specifically browsing a wide universe of publishers in a short amount of time — so these are the skimmers who want their top news fast from many outlets.
By having a presence on these platforms, we’re able to expose our journalism to these readers and try to look for and cultivate opportunities to bring them back to our platform and pull them more deeply into the report.
For example, we’ve had a lot of success driving newsletter signups on Apple News. So even though that audience lives exclusively off-platform, it does drive a lot of reach, though it doesn’t come into our analytics in the same way as, say, Google or direct traffic. So traffic, in this case, has a slightly different meaning.
But because we are finding that that user base is coming back to us within the Apple News ecosystem and is then taking the next steps to sign up for a newsletter, that demonstrates some real value to us and tells us that it does play a role in the subscriber journey.
Is there a cool project you’ve been working on recently? Is there anything you’ve learned or hope to learn from it?
At the beginning of each year, we reevaluate what our position is with each of the major platforms and how we want to situate The Times within this larger tech-universe. So we’re in the midst of all of that 2019 planning and reviewing the landscape of all of the major tech platforms.
It’s a project in flight, but it’s been a really fascinating body of work this year given everything that’s happened in 2018 — with all of the platforms and the big questions we have to think about when it comes to privacy, trust, aggregation, bundling subscriptions, and what it means when a platform might be doing something that feels directly competitive to our business model, or on the surface feels really helpful to getting journalism to big audiences, but might not ultimately help us actually achieve that goal in the long run.
It’s a big thought experiment, but it’s something I think all publishers should be spending a big chunk of time doing often as these platforms update and change and regulation starts to get into the conversation.
What’s the most interesting thing you’ve seen recently from a media outlet other than your own?
I’m still a big fan of Axios. I love everything they’re doing. I’ve been continually impressed with their ability to leverage their newsletter strategy into a segmented audience strategy where they can build clear, almost communities around each of these subject lines.
They’re developing a direct relationship with their readers. Being in a reader’s inbox is probably one of the closest relationships you can have, so I thought it was a great move when they acquired Sports Internet to flesh out their sports vertical.
I’m really excited to see what they do with that and what else they expand into and hope they can continue to be as engaging in their approach for these new verticals. | https://medium.com/the-idea/q-a-sam-felix-director-of-audience-platforms-the-new-york-times-609885934524 | ['Lizzy Raben'] | 2019-05-06 14:37:39.734000+00:00 | ['Journalism', 'Atlantic Media', 'Facebook', 'Subscriber Spotlight', 'The New York Times'] |
1,170 | Going Full Greta | As with many others, Greta Thunberg is my current number one hero. She makes me think. And, importantly, she walks her talk. Compared to most Americans I have a teeny tiny carbon footprint but Greta has got me thinking about how I might make it even smaller. I’ve realized there is more I can do but I’ve also realized that I am not sure I can go ‘Full Greta’ and I’ll explain why in a minute.
Greta does not use fossil fuel powered vehicles for transportation. For many Americans that would be unthinkable. At my last job I had a co-worker who lived just two blocks from the office yet she drove to work each day. When her car broke down she was beside herself. She didn’t know what to do. She was calling all her friends trying to get rides to work. IT NEVER OCCURRED TO HER to simply go out her front door and walk two blocks. IT NEVER OCCURRED TO HER! Sadly, this is the mindset of so many Americans. They simply will not go anywhere without driving.
I have not owned a fossil fuel powered car — or any vehicle — in six years. I’ve gone for long stretches of time without a car several times in the past as well. I walk! And my legs do not pollute. And they don’t require license plates, a license, insurance, gas, or constant repair bills. They have not only saved the planet from some pollution but they have saved me countless tens of thousands of dollars. And walking has also greatly improved my health.
Of course I must admit that going car-less has not always been about environmental activism. Usually it was brought about by debilitating poverty. It was not until I was forced to go car-less that I began understanding the environmental impact of doing so. Now I’m quite happy and healthy being car-less. As poverty continues to skyrocket in America many others will be forced to give up their expensive car addictions. This will probably help the environment a little but I don’t think this is the best way to address the climate crisis. The results are not as powerful when we are forced into action as they are when we consciously and purposefully take action through choice.
Greta also does not utilize air travel, which is by far one of the most polluting forms of travel carbon-per-person-wise. The last time I was on a plane was in the early 1990s. Back during the first 30 years of my life I was an ardent lover of air travel. I flew a lot and enjoyed the heck out of it. But then the joy seemed to drain away and I quit flying. So there’s not much I can do to shrink my carbon footprint by cutting back on flying since I don’t fly anyway.
Greta tries to never buy new clothes. She wears hand-me-downs from her older sister and also gets clothes from thrift stores. When I was a kid I absolutely hated being given hand-me-down clothes from my older brother. I simply refused to wear them. I didn’t want his vibes on me. And for most of my life I would not even consider buying clothes at a thrift store. Buying used clothes? Ewe gross! Right?
But over this last decade I have been slowly softening my hard-headed stance about this. It all started about 8 years ago when I needed a blazer for a certain event. I had burned all my ties and business suits way back when I quit the corporate world and I had zero money to buy a new blazer. So I bit the bullet and went to the local thrift store. To my surprise I found a blazer that was in mint condition that fit me perfectly. I bought it for $2 and when I got home I found a $5 bill in one of the pockets. I dare anyone to find a deal like that at any popular clothing retail fashion store.
I have since shopped at the thrift store on a somewhat regular basis, mostly for household items and books and potential birthday presents for people under the age of 10. But I have also bought some shirts and trousers and jackets. I even bought the first umbrella I’ve ever owned at that thrift store. But to be honest I have to say that I vehemently draw the line at socks and underwear. I buy those at the nearby evil Wal-Mart.
There is one part of Greta’s environmentally friendly lifestyle that I personally have trouble with, though, and that is the fact that she is a vegan. I just don’t think I can take that step. Over recent decades I have radically decreased my intake of meat — around 90%. I don’t eat pork and I rarely eat beef or chicken but one of my favorite foods is organic, grass-fed bison meat. It’s expensive, though, so I can rarely afford to buy it. But I still manage to have around 5 to 7 bison burgers a year — usually on special occasions or holidays. I feel good knowing the local rancher and his family who raise the organic, grass-fed bison and I know the spiritual ways in which they handle the entire harvesting process. And their ranch is only about 95 miles away so not much gas is used in transport.
I know that I could take that step and give up meat entirely but there are two animal products that I simply cannot imagine giving up and those are organic, cage-free chicken eggs and organic real butter made from grass-fed cows. I consume those products almost every single day.
Egg yolks are probably my very favorite food in the world. I only eat the yolks, never the whites. 97.41869% (approximately) of all the nutrition found in an egg is in the yolk. And 99.03574% (approximately) of all the flavor of an egg is found in the yolk. The whites are useless and go in the compost bucket.
There is no more orgasmic culinary experience than plopping a hot, yet still liquid, egg yolk into one’s mouth and letting its yumminess explode with flavor throughout the mouth.
And real, natural, organic butter made from grass-fed cows is perhaps the most crucially important food we can eat to maintain cardiovascular health.
I like the fact that we don’t have to kill the chicken to get the egg and we don’t have to kill the cow to make the butter. The problem, of course, is that we think we must have huge factory farms to produce animal products to scale for an exploding population. (‘Scale’ is my least favorite word in the American Business Lexicon.) So I always try to ‘source’ organic, cage-free chicken eggs from local farmers, several of which I know personally who live just outside of town.
I just don’t think I can go 100% ‘full Greta’ but I can get close. I can try. The important thing that I am grateful for about Greta is that she has prompted me to make closer observations of my own actions in order to find new ways to help be a part of mitigating the effects of the climate crisis. Very importantly, she is helping to expand mass awareness of how everyone can help. I give her my own personal Nobel Prize.
But there is something that I’ve never heard Greta talking about…
For years, many woo-woo masters have talked about how our outer environment reflects our inner environment. As the saying goes, ‘As within, so without,’ or something like that. The pollution in our outer environment is a reflection of the pollution within us so we cannot fix the outer pollution until we fix the inner pollution; our psychological pollution, our emotional pollution, our collective attitudes and beliefs and self-loathing, our fear, our guilt, our hate, our prejudices, our greed…
I’ve been working on cleaning out my inner pollution for decades. Every time I clean out some inner pollution I find more inner pollution that needs to be cleared out. I clear it out then find another layer of inner pollution that needs to be worked on. There are so many layers of inner pollution. Sometimes I feel like a big old fat onion. But I keep working on it.
So as we begin to observe our outer behavior in order to help heal the planet’s environment we must also be sure to more closely observe our inner workings to see what can be healed within. I feel that we will achieve much greater and faster success by working both without and within simultaneously. One thing is for certain and that is that we simply cannot continue to go on in an unobserved path of somnambulist denial. | https://whitefeather9.medium.com/going-full-greta-a532991ec2d7 | ['White Feather'] | 2019-10-20 16:30:51.142000+00:00 | ['Vegan', 'Environment', 'Self', 'Climate Change', 'Food'] |
1,171 | How Kanye West Built A Multi-Billion Dollar Sneaker Brand | How Kanye West Built A Multi-Billion Dollar Sneaker Brand
What made the brand successful, and what can potentially kill it
An Adidas Yeezy. By andreimihaiducu from Pixabay
The chances are pretty high that you’ve already heard of The Yeezy. It’s the sneaker brand of Kanye West in collaboration with Adidas, and now Gap.
To grab these babies, people across the U.S. would camp outside shoe stores in the dead cold of winter, hire taskers to wait in line on their behalf, and even deploy sneaker-buying bots to nab a pair online before they sell out.
Those who are unable to purchase these within the few precious seconds before they sell out end up spending thousands of dollars over the market value to purchase them on the resale market.
The man behind all this craze is Kanye West.
He has successfully created one of the most hyped sneakers of all time. However, there are some potential roadblocks in the way ahead, that he must navigate through to help the company stand the test of time. | https://medium.com/better-marketing/how-kanye-west-built-a-multi-billion-dollar-sneaker-brand-78eed4acfcaa | ['Kiran Jain'] | 2020-07-06 18:58:02.122000+00:00 | ['Marketing', 'Fashion', 'Celebrity', 'Business', 'Startup'] | Title Kanye West Built MultiBillion Dollar Sneaker BrandContent Kanye West Built MultiBillion Dollar Sneaker Brand made brand successful potentially kill Adidas Yeezy andreimihaiducu Pixabay chance pretty high you’ve already heard Yeezy It’s sneaker brand Kanye West collaboration Adidas Gap grab baby people across US would camp outside shoe store dead cold winter hire taskers wait line behalf even deploy sneakerbuying bot nab pair online sell unable purchase within precious second sell end spending thousand dollar market value purchase resale market man behind craze Kanye West successfully created one hyped sneaker time However potential roadblock way ahead must navigate help company stand test timeTags Marketing Fashion Celebrity Business Startup |
1,172 | There Are Only 3 Things A Writer Must Do Every Morning | There’s no shortage of people telling you what you “must” do in the morning if you want to change your life.
You know exactly what I mean, too. Wake up early. Have a cold shower. Meditate. Exercise. Plan your day. Set your priorities.
Blah, blah, blah. I call bull.
Those tips have been around for decades. Centuries, probably. Despite all the well-meaning advice, most of us are still tired, overworked, underpaid, disorganized and wishing we knew how to get it together.
It’s an easy slide into self-blame.
But it’s not your fault. It’s faulty information.
Know why all that stuff doesn’t work? Because it’s like trying to eat the proverbial elephant. Too much. That is not how the human brain works.
The human brain is infinitely malleable. Change is possible. For all of us and any of us. But mass rebuild works better for cars than people.
We have to change in small increments. Tiny steps. Even more important? Our brain needs to buy into what we’re doing. If your brain doesn’t buy it, your body isn’t going to follow through. Simple as that.
You can say affirmations about how “rich” you are until the cows come home, and your brain is saying yeah? Well then why is my bank account dry?
Know what I mean?
If you are a writer, or want to be a writer, there are only 3 things you need to do every morning. Not big, cataclysmic changes. Just 3 tiny things.
1. Take a few minutes for you
In his book The Miracle Morning, Hal Elrod says that how we wake up each day dramatically affects us all through the day. Even more so, it affects our level of success at everything we do all day.
Ever have one of those mornings where the alarm doesn’t ring, or you turn it off, and when you wake up you’re already late? You’re rushing around trying to get out the door, you forget stuff, and traffic sucks.
And somehow, the crazy morning just affects the whole day. Know what I mean? We’ve all been there.
The opposite is true, too.
When I was caring for Dad at the end of his life I learned that the hard way. It was hard work, and that’s the understatement of the year. Juggling work and caring for a dying parent would test the patience of a saint. I am not a saint.
If I started the day bolting out of bed to him calling me? By the end of the day I felt like something the cat dragged in.
I learned to wake up early enough to take a few lazy minutes for myself and it made all the difference. Doesn’t even matter how you spend it. Watching the sun rise or feeding the birds. Reading a book, playing solitaire.
You get to pick how much time to take and how to spend it. But do that for yourself. So every day starts with your needs, not someone else’s.
Advice Habits Writing |
1,173 | How to Decide Between Algorithm Outputs Using the Validation Error Rate | How to Decide Between Algorithm Outputs Using the Validation Error Rate Monument Follow Aug 12 · 3 min read
Monument (www.monument.ai) enables you to quickly apply algorithms to data in a no-code interface. But, after you drag the algorithms onto data to generate predictions, you need to decide which algorithm or combination of algorithms is most reliable for your task.
In the ocean temperature tutorial, we cleaned open remote sensing data and fed the data into Monument in order to forecast future ocean temperatures. In that case, we used visual inspection to evaluate the accuracy of different algorithms, which was possible because the historical data roughly formed a sine curve. Visual inspection is one tool in the data science toolbox, but there are other tools as well.
The Validation Error Rate is another useful tool in cases where you want a more fine-grained comparison or where visual inspection does not yield obvious insights. There are other error functions that can be used, but the Validation Error Rate is the default error function in Monument.
What Is The Validation Error Rate And Why Is It Important?
The Validation Error Rate measures the distance between “out of sample” values and estimates produced by the algorithm. You can find this metric in the INFO box in the lower-left corner of the MODEL workspace.
As a general rule of thumb, the “more negative” your Validation Error Rate is, the more accurate the model is. Negative infinity would be a perfect model. In the real world, as we will see with our ocean temperature data, sometimes the best you can do is a small, but nevertheless positive number.
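Monument does not publish the exact formula behind this metric, so the sketch below is only an illustrative stand-in, not Monument's actual implementation: a log-scaled RMSE on held-out values behaves the same way, since a perfect fit scores negative infinity and "more negative" means a closer fit. The function name and the sample numbers here are invented for illustration.

```python
import numpy as np

def validation_error_rate(actual, predicted):
    """Hypothetical stand-in error function: log of the RMSE between
    held-out ("out of sample") values and the model's estimates.
    A perfect model (RMSE = 0) scores negative infinity."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return np.log(rmse) if rmse > 0 else -np.inf

holdout = [20.1, 20.4, 20.9, 21.3]   # held-out actuals
model_a = [20.0, 20.5, 21.0, 21.2]   # closer fit
model_b = [19.0, 21.5, 22.0, 20.0]   # worse fit

err_a = validation_error_rate(holdout, model_a)
err_b = validation_error_rate(holdout, model_b)
assert err_a < err_b   # the more negative score wins
```

Any monotone transform of a hold-out error would rank models the same way; the log scale just makes "negative infinity is perfect" literal.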
Currently, Monument only displays one Validation Error Rate at a time. To view the Validation Error Rate for other algorithms that you have trained, click the drop-down arrow on the right side of the algorithm pill and select SHOW ERROR RATE.
To compare the performance of the models, I have pasted below a table of all the Validation Error Rates applied to the ocean temperatures data, sorted from lowest to highest.
Algorithm Performance On The HABSOS Data
As we discovered in the tutorial, with default parameters, AR and G-DyBM perform the best on the cleaned and transformed data.
How To Improve Algorithm Performance
Typically, we can improve the Validation Error Rate — i.e. make it “more negative” — by adjusting the algorithms’ parameters. You can access an algorithm’s parameters by selecting PARAMETERS in the algorithm pill drop-down.
Choosing which parameters to edit to improve performance depends heavily on your business objectives and the nature of the data you’re looking at. We will cover common cases in future tutorials, but the best approach is to experiment yourself to develop an intuition around which parameters most improve results for different kinds of data.
Certain algorithms allow for automated parameter adjustment. In Monument, the LSTM and LightGBM algorithms also have “AutoML,” which is short for Automated Machine Learning. AutoML automatically adjusts an algorithm’s parameters to optimize performance. You can select AUTOML from the algorithm drop-down to access these capabilities.
For example, when we run AutoML on the HABSOS data, we can lower the Validation Error Rate by 0.04 from 3.273 to 3.233. Not a huge improvement on this particular data, but an improvement nonetheless. Often, the gains are much greater.
There are other reports within Monument that we can use to improve algorithm performance, including, dependent variables, forecast training convergence, and feature importance. We’ll explore these topics in future tutorials. | https://medium.com/swlh/how-to-decide-between-algorithm-outputs-using-the-validation-error-rate-c288a358ca9b | [] | 2020-08-15 23:34:51.982000+00:00 | ['Machine Learning', 'Automl', 'Algorithms', 'Artificial Intelligence', 'Big Data'] | Title Decide Algorithm Outputs Using Validation Error RateContent Decide Algorithm Outputs Using Validation Error Rate Monument Follow Aug 12 · 3 min read Monument wwwmonumentai enables quickly apply algorithm data nocode interface drag algorithm onto data generate prediction need decide algorithm combination algorithm reliable task ocean temperature tutorial cleaned open remote sensing data fed data Monument order forecast future ocean temperature case used visual inspection evaluate accuracy different algorithm possible historical data roughly formed sine curve Visual inspection one tool data science toolbox tool well Validation Error Rate another useful tool case want get finegrained visual inspection yield obvious insight error function used Validation Error Rate default error function Monument Validation Error Rate Important Validation Error Rate measure distance “out sample” value estimate produced algorithm find metric INFO box lowerleft corner MODEL workspace general rule thumb “more negative” Validation Error Rate accurate model Negative infinity would perfect model real world see ocean temperature data sometimes best small nevertheless positive number Currently Monument display one Validation Error Rate time view Validation Error Rate algorithm trained click dropdown arrow right side algorithm pill select SHOW ERROR RATE compare performance model pasted table Validation Error Rates applied ocean temperature data sorted lowest highest Algorithm Performance HABSOS Data discovered tutorial default 
parameter AR GDyBM perform best cleaned transformed data Improve Algorithm Performance Typically improve Validation Error Rate — ie make “more negative” — adjusting algorithms’ parameter access algorithm’s parameter selecting PARAMETERS algorithm pill dropdown Choosing parameter edit improve performance depends heavily business objective nature data you’re looking cover common case future tutorial best approach experiment develop intuition around parameter improve result different kind data Certain algorithm allow automated parameter adjustment Monument LSTM LightGBM algorithm also “AutoML” short Automated Machine Learning AutoML automatically adjusts algorithm’s parameter optimize performance select AUTOML algorithm dropdown access capability example run AutoML HABSOS data lower Validation Error Rate 004 3273 3233 huge improvement particular data improvement nonetheless Often gain much greater report within Monument use improve algorithm performance including dependent variable forecast training convergence feature importance We’ll explore topic future tutorialsTags Machine Learning Automl Algorithms Artificial Intelligence Big Data |
1,174 | A More Strategic Way to Delete Your Emails | Photo: JohnnyGreigg / Getty Images
For all the ways that G Drive has made our lives run better, it makes getting rid of those memories kind of complicated. The most intuitive way to delete messages in Gmail is scrolling and clicking. Forever.
So technology columnist Angela Lashbrook tried to find another way, a journey she writes about in Debugger, Medium’s new publication about consumer technology.
Turns out you need a two-part system: Let an app clear out most of the junk. Then employ search-and-destroy strategies to find groups of emails to delete all at once. Her tips are surprising but easy and effective.
The best approach, however, is the simplest: daily maintenance. “That means deleting all the emails you don’t need as you receive them so you don’t get caught up in the mess I spent the last several days cleaning up,” Lashbrook writes.
And in that way, your inbox is like any important relationship: Pay a little attention every day so you can focus on the present, not thousands of pieces of baggage from the past. | https://forge.medium.com/how-your-inbox-is-like-any-other-relationship-in-your-life-df4f9d77f2b | ['Ross Mccammon'] | 2020-10-21 14:49:38.703000+00:00 | ['Inbox', 'Google', 'Email', 'Productivity'] | Title Strategic Way Delete EmailsContent Photo JohnnyGreigg Getty Images way G Drive made life run better make getting rid memory kind complicated intuitive way delete message Gmail scrolling clicking Forever technology columnist Angela Lashbrook tried find another way journey writes Debugger Medium’s new publication consumer technology Turns need twopart system Let app clear junk employ searchanddestroy strategy find group email delete tip surprising easy effective best approach however simplest daily maintenance “That mean deleting email don’t need receive don’t get caught mess spent last several day cleaning up” Lashbrook writes way inbox like important relationship Pay little attention every day focus present thousand piece baggage pastTags Inbox Google Email Productivity |
1,175 | QC — Quantum programming: implementation issues | Photo by NeONBRAND
We are still in the early stages of quantum computing. Expect surprises! A quantum algorithm depends on the available qubits and also on how those qubits are connected. Different models of quantum computers may do it differently. This is like a program that runs on an Intel i7 processor but fails on the next-generation processor. Decoherence is another major issue in quantum computing. Don’t expect to pause a running program and grab a coffee. Once you hit the run button, you are under the clock. The program must finish quickly before the quantum information decays. In this article, we will cover some implementation issues as well as the decoherence problem.
Swap gate
The physical implementation of qubits may not be symmetric. A controlled-not gate may take qubit-0 as the control and qubit-1 as the target but not the opposite.
Strongly depend on implementations
The figure here shows how 2 qubits may be connected by a CNOT gate on IBM Q5.
As shown below, if we connect q1 and q2 with a controlled-not gate, we will get an error.
Having q1 as the control qubit with q2 as the target is not allowed on this machine.
By reshuffling the gates, the error is gone.
Here is another example for IBM Q20.
IBM Q 20 Tokyo (20-qubits) — Modified from source
To overcome the limitation, we can use swapping to swap two qubits.
Here are other possibilities to overcome the issue:
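One such possibility is a standard textbook identity: sandwiching a CNOT between Hadamard gates on both qubits flips which qubit is the control and which is the target, so a connectivity constraint in one direction can be worked around without a physical swap. The sketch below checks this identity, plus the decomposition of a SWAP gate into three alternating CNOTs, in plain NumPy rather than any particular quantum SDK:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# CNOT with qubit 0 as control, qubit 1 as target (basis order |00>,|01>,|10>,|11>)
CNOT_01 = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])
# CNOT with the roles reversed: qubit 1 controls, qubit 0 is the target
CNOT_10 = np.array([[1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])

HH = np.kron(H, H)
# Hadamards on both qubits before and after a CNOT exchange control and target
assert np.allclose(HH @ CNOT_01 @ HH, CNOT_10)

# A SWAP gate decomposes into three alternating CNOTs
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
assert np.allclose(CNOT_01 @ CNOT_10 @ CNOT_01, SWAP)
```

The first identity costs four extra one-qubit gates; the swap costs two extra CNOTs, which is why a compiler prefers the Hadamard trick when only the gate direction is wrong.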
Universal Gate
In classical computing, the NAND gate is a universal gate from which all remaining operations can be built. We may implement a few more universal gates for performance reasons. Complex gates are built on top of these physical gates. For quantum gates, the universal set is
This can be further reduced and approximated to any precision by the CNOT, H, S, and T gates, with some overhead. In practice, IBM Q (ibmqx4) implements the following physical gates:
where
For the programming interface, all the following operations are provided in IBM Q and build on top of the gates above.
This actually leads us to one important topic.
Not all qubits are equal
Precision and errors remain an issue for quantum computing. As indicated above, different qubits have different gate errors and readout errors. Gate error is about the precision in applying a quantum gate, i.e., how accurately can we control the superposition? Readout error is the error in measuring qubits. Multi-qubit gate error is the error in operating a 2-qubit gate. This information may be taken into consideration when implementing an algorithm. Or at least it should be documented with your execution run for future comparison or reference.
Decoherence & errors
As mentioned before, every quantum program runs under the clock. Once you hit the run button, quantum information starts degrading because of the interaction with the environment (any electric and magnetic fields in the vicinity). Your program must be completed before the quantum state becomes garbage. So you should be aware of the duration of the physical quantum gates used in your program. Since the program is written in logical quantum gates, knowledge of how logical gates are translated into physical gates is helpful.
The quality of the quantum computers can be measured by the relaxation time (T1), coherence time (T2), readout errors, and gate errors.
Source IBM
The decoherence process is measured by T1 and T2 above.
T1 — Energy relaxation: the time taken for the excited |1⟩ state to decay toward the ground state |0⟩.
T2 — Dephasing, which affects the superposition phase. T2 includes the effect of dephasing as well as energy relaxation.
That is why this information is always published for your reference.
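As a toy illustration of why these two numbers matter for program length, the sketch below uses the simplest exponential-decay model: the chance that an excited qubit is still in |1⟩ after time t falls as exp(-t/T1), and phase coherence falls as exp(-t/T2). The T1, T2, and gate-time figures are assumed round numbers for illustration, not values from any real device:

```python
import numpy as np

T1 = 100e-6        # assumed energy-relaxation time: 100 microseconds
T2 = 70e-6         # assumed coherence time (physically T2 <= 2*T1)
GATE_TIME = 50e-9  # assumed 50 ns per physical gate

def survival(n_gates):
    """Return (excited-state survival, coherence envelope) after n gates."""
    t = n_gates * GATE_TIME
    return np.exp(-t / T1), np.exp(-t / T2)

for depth in (10, 100, 1000):
    p1, coh = survival(depth)
    print(f"{depth:>5} gates: P(|1> stays excited) = {p1:.4f}, coherence = {coh:.4f}")
```

Under these assumptions a 1000-gate circuit already loses a substantial fraction of both population and phase, which is exactly why circuit depth is budgeted against T1 and T2.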
Fault tolerance
Fault-tolerant computing has not been taught in engineering for a very long time. Quantum computing brings the subject back. Quantum calculations are vulnerable to errors. To counterbalance the problem, we can add additional quantum gates for error detection or error correction. This is one major reason that we always need far more qubits than we think. The following sample is an encoder and a decoder that tolerates a one-qubit error.
Source: IBM
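The exact circuit in the figure is not reproduced here, but the classic 3-qubit bit-flip code illustrates the same idea. The NumPy sketch below (all names are mine, not IBM's) encodes one logical qubit into three physical qubits with two CNOTs, injects a single X (bit-flip) error on any one qubit, then un-computes the encoding and applies a Toffoli-based majority vote so the original state reappears on qubit 0:

```python
import numpy as np

I2 = np.eye(2)
X  = np.array([[0., 1.], [1., 0.]])
P0 = np.diag([1., 0.])   # projector onto |0>
P1 = np.diag([0., 1.])   # projector onto |1>

def kron_all(ops):
    out = np.array([[1.]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(n, control, target):
    """CNOT on an n-qubit register (qubit 0 is the most significant)."""
    idle, flip = [I2] * n, [I2] * n
    idle[control] = P0
    flip[control], flip[target] = P1, X
    return kron_all(idle) + kron_all(flip)

def toffoli(n, c1, c2, target):
    """Flip `target` only when both control qubits are |1>."""
    both, both_x = [I2] * n, [I2] * n
    both[c1] = both[c2] = P1
    both_x[c1] = both_x[c2] = P1
    both_x[target] = X
    return np.eye(2 ** n) - kron_all(both) + kron_all(both_x)

def recover(psi, error_qubit):
    """Encode psi, hit one qubit with an X error, decode and correct."""
    zero = np.array([1., 0.])
    state = np.kron(np.kron(psi, zero), zero)   # |psi>|0>|0>
    encode = cnot(3, 0, 2) @ cnot(3, 0, 1)
    state = encode @ state                      # a|000> + b|111>
    err = [I2] * 3
    err[error_qubit] = X
    state = kron_all(err) @ state               # single bit-flip error
    state = encode @ state                      # un-compute the encoding
    state = toffoli(3, 1, 2, 0) @ state         # majority-vote correction
    # Qubits 1 and 2 now hold the error syndrome; qubit 0 holds psi again.
    amps = state.reshape(2, 4)
    syndrome = np.argmax(np.abs(amps).sum(axis=0))
    return amps[:, syndrome]

psi = np.array([0.6, 0.8])                      # arbitrary normalized state
for q in range(3):
    assert np.allclose(recover(psi, q), psi)
print("single bit-flip on any of the 3 qubits is corrected")
```

This toy code only protects against bit-flips; real fault-tolerant schemes also handle phase errors, which is why qubit overheads grow so quickly.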
Next
Now we have finished with the quantum gates. Next, we will start learning to program a quantum algorithm.
Here is the link for the whole series: | https://jonathan-hui.medium.com/qc-quantum-programming-implementation-issues-51e3a146645e | ['Jonathan Hui'] | 2019-01-17 00:43:52.232000+00:00 | ['Programming', 'Data Science', 'Science', 'Software Development', 'Artificial Intelligence'] | Title QC — Quantum programming implementation issuesContent Photo NeONBRAND still early stage Quantum computing Expect surprise quantum algorithm depends available qubits also qubits connected Different model quantum computer may differently like program run Intel i7 processor fails next generation processor Decoherence another major issue quantum computing Don’t expect pause running program grab coffee hit run button clock program must finish quickly quantum information decayed article cover implementation issue well decoherence problem Swap gate physical implementation qubits may symmetric controllednot gate may take qubit0 control qubit1 target opposite Strongly depend implementation show 2qubit may connected CNOTgate IBM Q5 shown connect q1 q2 controllednot gate get error q1 control qubit q2 target allowed machine reshuffle gate error gone another example IBM Q20 IBM Q 20 Tokyo 20qubits — Modified source overcome limitation use swapping swap two qubits possibility overcome issue Universal Gate classical computing NAND gate universal gate build remaining operation may implement universal gate performance reason Complex gate built top physical gate quantum gate universal set reduce approximated precision CNOT H gate overhead practice IBM Q ibmqx4 implement following physical gate programming interface following operation provided IBM Q build top gate actually lead u one important topic qubits equal Precision error remain issue quantum computing indicated different qubits different gate error readout error Gate error precision applying quantum gate ie accurate control superposition Readout error error measuring qubits MultiQubit gate error error operating 2qubit gate information may 
take consideration implementing algorithm least documented execution run future comparison reference Decoherence error mentioned every quantum program running clock start hitting run button quantum information start degrading interaction environment electric magnetic field vicinity program must completed quantum state becomes garbage aware length physical quantum gate used program Since program written logical quantum gate knowledge physical gate translated physical gate helpful quality quantum computer measured relaxation time T1 coherence time T2 readout error gate error Source IBM decoherence process measured T1 T2 T1 — Energy relaxation time taken excited 1⟩ state decay toward ground state 0⟩ T2 — Dephasing affect superposition phase T2 includes effect dephasing well energy relaxation information always published reference Fault tolerance Fault tolerance computing longer taught engineering long time Quantum computing brings subject back quantum calculation vulnerable error counterbalance problem add additional quantum gate error detection error correction one major reason always need far qubits think following sample encoder decoder allows one qubit error Source IBM Next finish quantum gate Next start learning programming quantum algorithm link whole seriesTags Programming Data Science Science Software Development Artificial Intelligence |
1,176 | My Doctor Is a Gardener | Want more? No? Okay. Nevermind. But in case you change your mind, subscribe to my newsletter to get notified whenever I publish something new. | https://medium.com/the-haven/my-doctor-is-a-gardener-18ad2a810fac | ['David B. Clear'] | 2020-12-10 21:06:52.212000+00:00 | ['Humor', 'Health', 'Covid 19', 'Comics', 'Coronavirus'] | Title Doctor GardenerContent Want Okay Nevermind case change mind subscribe newsletter get notified whenever publish something newTags Humor Health Covid 19 Comics Coronavirus |
1,177 | You are Music | Image courtesy by John Mosca
Music awakens, inspires and elevates humanity
Everyone should play his own music — chat, write and communicate through art, music and images. Words are not anymore sufficient to transfer one’s own thoughts and feelings.
Words are obsolete, poor and a very slow way to convey one’s own invisibility to others.
That’s why music is becoming the new language and a new way of communication for the coming post-human civilisation.
Let’s make this world beautiful. ‘You are Gods’ who have forgotten to be such. Music is the forgotten language through which the Gods used to talk to each other, and transfer their will to men.
Music is the way that allows every man to return to his own peaceful creative nature, and be able so, to communicate in all directions and to all human beings, the original, powerful language of democracy, joy and beauty.
Music is a declaration of life that gives equal opportunities to all of its members. Within music, races, nations, politics and religion dissolve. Music is the only law, and creativity, its order.
It doesn’t matter whether your Music is happy or sad, loud or soft, weak or strong, harmonious or not, it comes from You; and whatever it is, it’s your Music, it’s always projecting your aliveness, your uniqueness, your Dream.
The origin of your Music is more important than the Music itself,
because YOU ARE MUSIC.
Music is the most simple, powerful way to spread out all over the world, democracy and love, is the manifest of a new humanity, free from divisions and conditionings . Without nation, without religion, without politics, a humanity endowed with infinite creativity, free from discriminations and conflicts, tending towards peace, unity, love. | https://medium.com/age-of-awareness/you-are-music-e8ed4364f62 | ["Elio D'Anna"] | 2020-12-11 15:22:00.167000+00:00 | ['Beyourself', 'Musicians', 'Communication', 'Inspiration', 'Music'] | Title MusicContent Image courtesy John Mosca Music awakens inspires elevates humanity Everyone play music — chat write communicate art music image Words anymore sufficient transfer one’s thought feeling Words obsolete poor slow way convey one’s invisibility others That’s music becoming new language new way communication new comingposthuman civilisation Let’s make world beautiful ‘You Gods’ forgotten Music forgotten language Gods used talk transfer men Music way allows every man return peaceful creative nature able communicate direction human being original powerful language democracy joy beauty Music declaration life give equal opportunity member Within music race nation politics religion dissolve Music law creativity order doesn’t matter wheather Music happy sad loud soft weak strong harmonious come whatever it’s Music it’s always projecting aliveness uniqueness Dream origin Music important Music MUSIC Music simple powerful way spread world democracy love manifest new humanity free division conditioning Without nation without religion without politics humanity endowed infinite creativity free discrimination conflict tending towards peace unity loveTags Beyourself Musicians Communication Inspiration Music |
1,178 | R2D2 as a model for AI collaboration | This essay is about the way we design relationships between humans and machines — and not just how we design interactions with them (though that is a part of it), but more broadly, what are the postures we have towards machine intelligences in our lives and what are their postures toward us? What do we want those relationships to look and feel like?
As I’ve been thinking through these ideas, science fiction characters are one of the lenses I’ve been finding helpful as a way of thinking about different models for our relationships with machines. Specifically, I’ve been using these three — C3PO, Iron Man, & R2D2 — as notable archetypes that describe different approaches to how we might design machine intelligences to engage with humans.
C3PO
C3PO is a protocol droid; he’s supposedly designed to not only emulate human social interaction, but to do so in a highly skilled way such that he can negotiate and communicate across many languages and cultures. But he’s really bad at it. He’s annoying to most of the characters around him, he’s argumentative, he’s a know-it-all, he doesn’t understand basic human motivations, and often misinterprets social cues. In many ways, C3PO is the perfect encapsulation of the popular fantasy of what a robot should be and the common failures inherent in that model.
Now this idea didn’t emerge in 1977. The idea of a mechanical person, a humanoid robot, has been around since at least the 1920s. Eric the Robot, or “Eric Robot” as he was more commonly known, was one of the earliest humanoid robots. He was pretty rudimentary, as you can see from the kind of critique that was leveled at Eric by The New York Times in the above quotes — it critiques his social demeanor and also questions his utility — why is a robot that moves and converses better than the one that can tell when the water levels in Washington are too high? Is this the best form or use for a robot?
But despite these criticisms, people were fascinated by Eric, as well as by the other early robots of his ilk, because it was the first moment that there was this idea, this promise, that machines might be able to walk among us and be animate. We’ve had this idea in our social imagination for nearly a century now—the assumption that, given sophisticated enough technology, we would eventually create robotic beings that would converse and interact with us just like human beings.
Our technology today is many orders of magnitude more sophisticated than Eric’s creators could have imagined. And we are still trying to fulfill this promise of a humanoid robot. We see the state of the art today with robots like Sophia.
This is so much closer to that humanoid ideal than Eric was, but it falls so short and just feels awkward and creepy.
On the software side (“bots” vs. “robots”), we have examples of the failure of the C3PO model in the voice assistants we use every day. They’ve gotten pretty “good”, but still can’t understand context well enough to respond appropriately in a consistent way, and the interactions are far from satisfying. It turns out that even with incredibly rich computational and machine learning resources, interacting like a human is really hard. It’s hard to program a computer to do that in any satisfying way because in many ways it’s hard for humans to do successfully either. We get it wrong so much of the time.
Interacting like a human is hard (even for humans).
We don’t all talk to each other the same way. We don’t all have the same set of cultural backgrounds or conversational expectations. Below are charts created by British linguist Richard Lewis to show the conversational process of negotiating a deal in different cultures.
These challenges have come to the fore lately with everyone from search engines to banks trying to create convincing conversational interfaces. We see companies struggling with the limitations of this approach. In some cases they have addressed those challenges by hiring humans to either support the bots or in some cases actually pose as chatbots.
Welcome to 2020, where humans pretend to be bots who are in turn pretending to be human.
Stop trying to make machines be like people
Obviously, the ideal of creating machines that interact with us just like people do is incredibly complicated and may actually be unattainable. But more importantly, this isn’t actually a compelling goal for how we should be implementing computational intelligence in our lives. We need to stop trying to make machines be like people and find some more interesting constructs for how to think about these entities.
The reason people started with the C3PO model is because human conversation is the dominant metaphor we have for interacting and engaging with others. My hypothesis is that this whole anthropomorphic model for robots is fundamentally just a skeuomorph because we haven’t developed new constructs for machine intelligence yet, in the same way that desks and file drawer metaphors were the first attempts at digital file systems, and when TV first emerged, people read radio scripts in front of cameras.
Instead, a more compelling approach would be to exploit the unique affordances of machines. We already have people. Humans are already quite good at doing human things. Machines are good at doing different things. So why don’t we design for what they’re good at, what their unique abilities are and how those abilities can enhance our lives? | https://alexis.medium.com/r2d2-as-a-model-for-ai-collaboration-9a2638bfbd09 | ['Alexis Lloyd'] | 2020-11-21 16:39:03.281000+00:00 | ['AI', 'Robots', 'Design', 'UX'] |
1,179 | Round in Circles | Photo Credit — Me! Greenwich, London
The anticipation has my heart racing
All I want to do is pick up the pace and move forward
keep on going keep on growing
in this life plant the seeds I've been sewing
reap the rewards of the knowing and the learnings from past endeavours
Cross a million Bridges over rivers
This spoken word s*** has me thinking keeps me writing stops me sinking
this spoken flow is how I show the world
just who I am
Not just who I am but
what I can
do
can you
imagine where this life will take us
Just what this life will make us
ain't no one gonna forsake us
because it's up to us how we move forward
This better life we're moving towards
and our Horizons are filled with bright light
our Horizons eradicate trite spite
get rid of all the hate that sits within us the debates they try to spin us
round in circles instead of straight lines
We'll create our fate and we'll be just fine. | https://medium.com/poets-unlimited/round-in-circles-dd469019f210 | ['Aarish Shah'] | 2017-09-16 03:21:31.685000+00:00 | ['Writing', 'Photography', 'Inspiration', 'Poetry', 'Creativity'] |
1,180 | Top 10 Safest Jobs from AI | This is one of a series of 4 articles I am sharing here for people who are concerned and eager to understand more about the job displacement potentially caused by artificial intelligence technology. You will read about “safe” versus “endangered” jobs in this series. The jobs listed in each article are illustrative, drawn from my research and technological knowledge, and may or may not fit your personal scenario. I highly encourage readers to take them as references and inspiration, and to start re-imagining and re-strategizing their careers today for our shared future powered by AI.
How to determine what jobs are safe/unsafe?
White collar:
Repetition vs. strategic:
Does your job have minimal repetition of tasks? Do you regularly come up with insights that are important to your company? Do you make key decisions that cross functions for your company?
Simplicity vs. complexity:
Do most decisions in your job require complexity or deliberation? In your job, do you need to regularly learn and understand a lot of complex information?
Blue collar
Dexterity vs. repetition:
Does it require at least a year of training to be qualified for your job? Does your job involve very little repetition of the same task(s)?
Fixed vs. unstructured environment
Is your job usually performed in different environments each time? (Unlike, e.g., a taxi driver, who always works in the same taxi.) Is your work environment unstructured?
For all jobs: human-contact / empathy / compassion
Is communication and persuasion one of the most important parts of your job?
Do you spend >30% of your work time with people who are not employed by your company (e.g., customers, potential customers, partners)?
Is a key part of your job performance measured by how well you interact with people?
Does your job result in the happiness, safety, or health of those you directly serve?
Do you lead or manage people in your job?
Top 10 Safest Jobs from AI
Psychiatrists
Psychiatrists, social workers, and marriage counselors are all professions that require strong communication skills, empathy, and the ability to win clients’ trust. These are the weakest areas for AI. Also, with the changing times, growing inequality, and job displacement, the need for these services is likely to increase.
Therapists (occupational, physical, massage)
Dexterity is one of the challenges for AI. Physical therapy (such as chiropractics or massage therapy) involves applying very delicate pressure and sensing equally minute responses from the client’s body. In addition, there are the added challenges of customizing care for each client, the consequences of hurting a client, and the need for person-to-person interaction. The human interaction includes the ongoing therapy, professional advice, and small talk, as well as encouragement and empathy. These aspects make the job impregnable to AI in the short term.
Medical caregivers (nurses, elderly care)
The overall healthcare industry is expected to grow substantially, due to increased income, greater benefits, AI lowering the cost of care, and an aging population (which requires much more care). Many of these factors will foster a symbiotic environment where AI handles the analytical and repetitive aspects of healthcare, while more of the healthcare profession shifts to attentiveness, compassion, support, and encouragement.
AI researchers and engineers
As AI grows, there will be a jump in the market demand for AI professionals. Gartner estimates that in the next few years, these increases will outnumber the jobs replaced. However, one needs to keep in mind that AI tools are getting better, so some of the entry-level positions in AI will be automated over time as well. AI professionals will need to keep up with the changes, just as over the years software engineers had to learn assembly language, high-level languages, object-oriented programming, mobile programming, and now AI programming.
Fiction writers
Story-telling is one of the highest levels of creativity, and one at which AI will be weak. Writers have to ideate, create, engage, and write with style and beauty. In particular, a great fictional book has original ideas, interesting characters, an engaging plot, surprising emotions, and poetic language. All of these are hard to replicate. While AI will be able to write social media messages, title suggestions, or even imitate writing styles, the best books, movies, and plays will be written by humans for the foreseeable future. Entertainment will be a hot area in the era of AI, because we will have more wealth overall, and more free time.
Teachers
AI will become a great tool for teachers and education, capable of helping personalize the curriculum based on each student’s competence, progress, aptitude, and temperament. However, education will be more about helping each student find what he or she wants, honing each student’s ability to learn independently, and being the friend and mentor who helps each student learn to interact with others and gain trust. These are jobs that can only be done by teachers, and they require a low student-to-teacher ratio (like 5:1 or fewer). So there will be many more such humanistic teacher positions created in the future. In fact, a parent may be the best such teacher, if future governments are wise enough to compensate home-schooling parent-teachers. If you are or want to be a teacher, learn more about how to connect with your students, one student at a time or in small groups, and less about how to lecture to 50 students.
Criminal defense attorney
Top lawyers will have nothing to worry about: reasoning across domains, winning the trust of clients, years of working with many judges, and persuading juries are the perfect combination of complexity, strategy, and human interaction that is so hard for AI. However, a lot of paralegal and preparatory work in document review, analysis, and recommendations can be done much better by AI. Many tasks performed by paralegals, such as legal discovery, creating contracts, and handling small claims and parking cases, will increasingly be handled by AI. The cost of legal services makes it worthwhile for AI companies to go after AI-paralegals and AI-junior lawyers, but not the top lawyers themselves.
Computer Scientists & Engineers
As the information age advances, a McKinsey report shows that engineering jobs (computer scientists, engineers, IT administrators, IT workers, tech consultants) will keep growing. This high-wage category is expected to increase by 20 million to 50 million jobs globally by 2030. But these jobs require staying up to date with technology, and moving into areas that are not automated by it.
Scientists
Scientists practice the ultimate profession of human creativity. AI can only optimize based on goals set by human creativity. While AI is not likely to replace scientists, it will make a great tool for them. For example, in drug discovery, AI can be used to hypothesize and test possible uses of known drugs for diseases, or to filter possible new drugs for scientists to consider. AI will amplify the human scientists.
Manager/Leaders
Good managers have human interaction skills, including motivation, negotiation, and persuasion. Good managers effectively connect the company to its employees, and vice versa. More importantly, the best managers are leaders, who establish a strong culture and values and, through their actions and words, inspire employees to follow with their hearts. While AI can be used to manage performance, managers will continue to be humans. That said, if a manager is merely a bureaucrat sitting behind a desk and giving employees orders, he or she will be replaced by other humans. | https://kaifulee.medium.com/top-10-safest-jobs-from-ai-1824cacd1954 | ['Kai-Fu Lee'] | 2020-12-03 09:04:59.386000+00:00 | ['Artificial Intelligence', 'Technology', 'Tech', 'Jobs', 'AI'] |
1,181 | Keeping a Distance — From Friend and Foe Alike | Written by
Almost famous cartoonist who laughs at her own jokes and hopes you will, too. | https://marcialiss17.medium.com/keeping-a-distance-from-friend-and-foe-alike-28f1bcad6f70 | [] | 2020-04-01 14:14:51.249000+00:00 | ['Mental Health', 'Health', 'Comics', 'Trump', 'Covid 19'] |
1,182 | Artistic Voronoi Diagrams in Python | Making a Voronoi Diagram
Let’s start by importing all the libraries we need.
Script 1 — Importing libraries.
If you get any ModuleNotFoundError let’s take care of them using Anaconda to install the missing packages. I will only go over the less common packages, since it’s likely you already have Pandas, Matplotlib, and other widely used packages installed.
Open Anaconda Prompt and navigate to the desired conda environment.
To install PIL a Python package used for image processing:
conda install -c anaconda pillow
Now let’s define the helper function that will do the bulk of the work:
Script 2 — Helper function that makes the Voronio diagram. You can find the original Python script here: https://rosettacode.org/wiki/Voronoi_diagram#Python.
You can read the doc string to understand the parameters the function takes. Click here to see some of the color palettes you can use.
To create the Voronoi diagram, sites are randomly drawn from a 2D Gaussian distribution. Then, each site is assigned a random color from one of the two palettes. Finally, the diagram is created pixel by pixel and then saved.
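As a rough sketch of that process (my own minimal reconstruction, not the author's Script 2; the function name and defaults are assumptions, and the PIL rendering step is left out so the sketch stays dependency-free):

```python
import random

def voronoi_cells(width, height, num_sites, seed=0):
    """Assign each pixel of a width x height canvas to its nearest site.

    Sites are drawn from a 2D Gaussian centered on the canvas and clamped
    to its bounds; grid[y][x] holds the index of the nearest site, which
    you would then map to a palette color when rendering with PIL.
    """
    rng = random.Random(seed)

    def clamp(v, hi):
        return min(max(int(v), 0), hi - 1)

    # Gaussian-distributed sites, clamped to the canvas
    sites = [
        (clamp(rng.gauss(width / 2, width / 6), width),
         clamp(rng.gauss(height / 2, height / 6), height))
        for _ in range(num_sites)
    ]
    # brute-force nearest-site search per pixel (squared Euclidean distance)
    grid = [
        [min(range(num_sites),
             key=lambda i: (sites[i][0] - x) ** 2 + (sites[i][1] - y) ** 2)
         for x in range(width)]
        for y in range(height)
    ]
    return sites, grid
```

Rendering is then just an `Image.new("RGB", (width, height))` plus a per-pixel write of the color assigned to `grid[y][x]`'s site. The brute-force search is O(pixels × sites), which is fine for small canvases.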
Now we are ready to make a cool looking diagram. | https://medium.com/i-want-to-be-the-very-best/artistic-voronoi-diagrams-in-python-928bdae85dd8 | ['Frank Ceballos'] | 2019-12-24 13:08:41.324000+00:00 | ['Python', 'Math', 'Art', 'Design', 'Voronoi'] |
1,183 | Bottoms-Up: How the Pinterest growth team decentralizes its structure | Neeraj Chandra | Growth engineering
On the growth team at Pinterest, we describe our structure as “bottoms-up”, meaning ideas and responsibilities flow throughout the team in a way that provides a lot of autonomy to work on the projects each person wants to tackle, across a variety of roles. The result is a team that is very scalable and flexible. And it means that each of us on the team has a lot of liberty to choose how we spend our time.
How this approach works
Ultimately, my main priority is to build experiences that help contribute to the acquisition, activation, and retention of users, which is also the mechanism by which our team is evaluated.
Specifically, my team focuses on user conversion, and in the context of Pinterest, that means showing the value of Pinterest to prospective users and converting them into active users. To do so, we come up with ideas for improvements, build and launch experiments to measure the change, and analyze results. If the experiment contributed a valuable increase, we ship the experiment to users. Conversely, if the experiment had no impact on the growth funnel or negatively contributed to user growth, we shut it down, remove the experiment, and analyze the results to learn and improve. Our team runs hundreds of experiments per quarter, so in order to accurately measure the impact of a specific experiment, we randomly bucket the population into control and experiment groups.
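To make the bucketing concrete, here is one common way such assignment is implemented (an illustrative sketch, not Pinterest's actual framework; the function name and 50/50 split are assumptions): hashing a stable user id together with the experiment name gives a deterministic assignment that is stable across sessions and independent across experiments.

```python
import hashlib

def bucket(user_id: str, experiment: str, treatment_pct: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'treatment'.

    Hashing user_id together with the experiment name keeps each user's
    assignment stable and uncorrelated between different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # map the first 8 hex digits onto [0, 1)
    ratio = int(digest[:8], 16) / 0x100000000
    return "treatment" if ratio < treatment_pct else "control"
```

Because the hash is uniform, the population splits close to the requested percentage, and re-running the assignment never moves a user between groups mid-experiment.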
To run so many experiments, our team is constantly searching for new ideas. We don’t believe that idea generation should belong only to one person; instead the entire team is responsible. We regularly spend time brainstorming and evaluating new ideas. Anyone can pitch an idea to the team to receive feedback, and if the idea holds up it gets moved into the backlog. Part of demonstrating that an idea is worthy is showing there is enough of an opportunity, and so we must also explain why the idea is important and large enough to work on.
In a similar fashion, the responsibility to analyze results also falls to each team member. We’ve built an experimentation framework that calculates and aggregates key metrics, but often times we need to dig further to understand user behavior. And so, a key part of my job is to utilize our data tools to construct queries and analyze the results of an experiment. While I can certainly receive help if needed, it’s expected that I drive the analysis and help complete our understanding of Pinners.
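For the analysis step, a typical back-of-the-envelope check on whether an experiment moved a conversion metric is a two-proportion z-test (an illustrative sketch with made-up numbers, not Pinterest's internal tooling):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates.

    conv_* are converted-user counts, n_* are users per bucket; a pooled
    standard error is used under the null hypothesis of equal rates.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: control converts 1,200/10,000, experiment 1,320/10,000.
z = two_proportion_ztest(1200, 10_000, 1320, 10_000)
# |z| > 1.96 would be significant at the 5% level.
```

A framework would aggregate this per metric automatically, but being able to run the same check by hand is what lets anyone on the team dig into an experiment's results.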
A side benefit of everyone being engaged in the analysis is we’re always thinking about the type of data and logs needed in advance of any new experiment. Because we all have experience using logs to answer questions, additional input throughout the development process can only make us better.
Why we do it
At this point, you might be wondering why we’ve organized ourselves in such a way. We still have some specialization, to be sure, but on the whole it’s a much looser structure. Why have everyone spend time on generating ideas and analyzing results?
This particular structure is ideal for us given the nature of our work, as it allows us to focus on the hard problems, remain scalable and flexible, and encourage our own growth.
Some additional key reasons:
Make the right decisions quickly. Team input enables us to move faster in the development of high quality ideas so that we can focus on solving the right problems. If we pick the wrong project to work on, it doesn’t matter how efficient we are at development and analysis, for the entire idea is likely to have poor results and be shut down. By including everyone in the process, we can generate more solutions to our problems and incorporate a wider diversity of perspectives. In addition to coming up with ideas that solve a user problem, it’s also important to estimate the potential impact of any opportunity, so that we can prioritize accordingly. Predicting the expected impact from a new project can be difficult for just one person to do, but by having the entire team involved we can hold each other accountable and arrive at a more objective and rigorous result. In practice, this means that we have a weekly meeting where individuals pitch the team on any ideas they have. As a team, we discuss each idea for its merits and its potential opportunity, and then use that to shape our prioritization.
Adjust to focus on the current problems. By involving everyone in each part of the process, we remain flexible and scalable. If our team is low on ideas, we can all shift our focus for the week to generating new ones. If we have a healthy backlog of ideas, we prioritize developing those ideas and launching experiments. If we need to analyze ongoing experiments, we can each perform the necessary analysis to arrive at a conclusion. The result is we can adjust to solve the current problems, regardless of where they are in our process; we can also easily onboard new members without changing our process.
New ideas lead to professional development. Finally, each of us can learn and grow by being exposed to the different parts of the process. I personally find this to be really exciting, because I get to be involved in generating ideas, prioritizing work, developing experiments, and analyzing results. Each of these requires a different set of skills, and as the nature of my work shifts or my interests change, I can adjust accordingly. Since joining the team, I’ve gotten better at each of these things. I’ve improved my ability to come up with ideas for consideration, and I’ve improved my ability to dig deep into data analysis. Getting better at one part of the process helps me with the other parts too. For example, by becoming more familiar with analyzing experiments, I can incorporate better logs during development, and by becoming better at generating ideas, I can understand what questions to answer during analysis. It also means that, in addition to building new experiences, I get to think about the larger problem of user acquisition and activation. This is great for me and great for the business; with each of us learning more about user growth, we can in turn contribute better and higher-impact ideas, while ensuring our teams move at a fast pace.
What the approach requires
This bottoms-up approach may not work for every team. Because we tend to build smaller initiatives to test our hypotheses, before committing to larger efforts, we’re constantly having to ideate, adapt and learn. Consequently, our team benefits tremendously from our flexible nature and focus on ideation. The entire team, to its credit, has also really embraced this mentality, and it’s so exciting to see everyone adopt a curious mindset and consider new opportunities.
Through refinement, we’ve found success making the bottoms-up approach work for us. But how does it work for management?
I asked Ludo Antonov, the head of growth engineering at Pinterest, about his thoughts on this approach and why he helped structure the team this way. This is what he told me:
“I love ‘bottoms-up’ and decentralized teams because it allows everyone on the team to feel ownership over the product and the company. It encourages various functions such as design, marketing, engineering, and product to come together and learn from each other’s perspective and build a world class product. I always say that we should strive for the people that spend the most of their personal time on turning an idea into reality to have the most say in its direction. This leads to a much higher quality of execution and happiness of the team, while fostering learning and excellence.
I get ecstatic when I see engineers, designers or marketers propose radically different ideas than what we have in the product today and channel that passion into making these ideas successful. In that sense, I think of anyone in a leadership position on the team (myself included) as being an advisor to them, guiding and enabling them in that process of creating value. My favorite part is that when I have ideas that I think we should build, I know that the team is empowered to say ‘no’ to me. Thus, I know that when we actually get aligned on building something, it is because they believe it’s worth their time and will make the team and the product more successful, versus building things because of a leader’s opinion.”
Tying it all together
In the end, our team has found success with a bottoms-up culture. It’s encouraged each of us to take ownership over our domain while generating a healthy diversity of ideas. Being able to test my own ideas is incredibly empowering too — there’s no substitute for trying out an idea and seeing how it actually performs. What I enjoy most is being able to spend time throughout my day thinking about our strategy and coming up with new ideas, and then turning those ideas into reality. I get to be a part of the bigger conversations, and it’s an incredible opportunity to have influence on a product that over 250 million people use each month.
Thanks to Brian Lee, Ludo Antonov, Hayder Casey, and Jeff Chang for their help with this post! | https://medium.com/pinterest-engineering/bottoms-up-how-the-pinterest-growth-team-decentralizes-team-structure-d9f890fa8869 | ['Pinterest Engineering'] | 2019-04-05 19:13:29.008000+00:00 | ['Growth', 'Engineering', 'Startup', 'Team Collaboration'] |
1,184 | Joyful Exercises for Contributing to Low-Fat, Lean Muscles, and Dense Bones | Joyful Exercises for Contributing to Low-Fat, Lean Muscles, and Dense Bones
Fitness is a passion for me. I do it as a ritual at home nowadays.
I used to go to the gym and love the rituals. However, nowadays, it is difficult for me to go to the gym. Not going to the gym does not mean giving up fitness goals. I created a customised gym at home for my specific needs.
I perform a wide variety of workout regimes. As I get older, the type of exercises changed a lot. I used to do a lot of cardio when I was younger. My main focus is weight training, resistance training, callisthenics, high-intensity interval training (HIIT), and mild cardio.
The purpose of this post is to share with you three joyful exercises I perform on a daily basis to keep my lean muscles, bone density, and low-fat percentage.
Even though I passionately work out, my approach to exercise is gentle. For example, instead of using too heavy dumbbells, I prefer using my body weight. It is natural and produces the required outcomes for me.
Let me introduce you to the three simple exercises I do almost every day to maintain my low body-fat percentage, lean muscles, and dense bones.
Apart from a few hundred push-ups, I use the pull-up machine every day.
Here are the three joyful exercises; enjoy. | https://medium.com/illumination-curated/joyful-exercises-for-contributing-to-low-fat-lean-muscles-and-dense-bones-18a9d7614afc | ['Dr Mehmet Yildiz'] | 2020-12-28 16:52:20.670000+00:00 | ['Self Improvement', 'Fitness', 'Writing', 'Health', 'Technology'] |
1,185 | The Forgotten Key to Great Software: Human Touch (and Special Goggles) | Cast your mind back to 2006. RHCP’s Stadium Arcadium, Yeah Yeah Yeahs’ Show Your Bones, and The Killers’ Sam’s Town were grooving through your FM stereo. The second Pirates of the Caribbean movie was netting blockbuster sales (sails?), as the book Eat, Pray, Love was encouraging us all to take chances, live a little louder, and drink white wine at 10am (in a fun way, not in an alcoholic sort of way).
The iPod had been dominating the music world for five full years, and had a near-monopoly on the listening-device industry. But a new foe approached. One that promised to take the MP3 fight straight to those turtleneck-wearing hipsters in Cupertino. In November of 2006 the Microsoft Zune, a.k.a. the iPod killer, was unveiled.
On paper the two devices looked incredibly similar. In fact the Zune had a leg up in some areas, with its larger screen and built-in FM tuner. With two tech giants battling to get their device into your pocket, you’d think sales figures would be neck & neck. Yet after ~2 years of production the Zune finally reached 2 million in total sales, while 3.53 million iPods were being sold every single month.
Why the substantial mismatch in sales? Sure the iPod was introduced 5 years earlier and had momentum, but that doesn’t fully explain the wildly different results.
I’d argue it’s because the iPod was built around its users: from its conception, to every design choice made, to every line of code written and hardware component created.
The iPod seemed intuitive, cool, and easy to use. It made sense. It felt good. That scroll wheel was smooth yet tactile as your finger raced around the rim. The menus & UX let you jump quickly to the perfect song, while also allowing you to meander through your catalog, looking for inspiration. Simple games like Texas Hold ’Em and that brick breaking game, though not cutting edge in their gameplay or graphics, gave the user something to concentrate on as they grooved to their favorite songs.
Even though the Zune stood toe-to-toe technically with the iPod, it lacked that human-centered touch. Instead of the easy-to-master scroll wheel, the Zune stuck with traditional directional buttons, causing you to “click, click, click, click…” through selection after selection until you found the right jam. The device was larger, heavier, and generally felt clunkier than the sleek iPod. The respective stores (iTunes vs. Zune Store) offered very different user experiences and ease of integration with their products.
On paper the two devices were incredibly similar, and if feature lists were the only criteria of a well-received piece of tech then Apple and Microsoft would’ve posted similar sales. But Apple built a device entirely around its users, and that made all the difference.
Software Can Fulfill All Requirements, and Still Be a Disaster
We’ve all had those sprint reviews, where devs unveil the feature they worked an unholy number of hours on, putting their blood, sweat, and tears into making sure reality perfectly matches each requirement in the specification, only to be bombarded with questions from other stakeholders during the demo.
“Why do I have to click two buttons to reach this page?”
“I can’t find the Submit button, where is it?”
“Why do I have to toggle back and forth between pages? Can’t I just see all this info on one page?”
Worse still is when the whiz-bang feature that‘s been clamored for finally gets deployed to production, and no one touches it for months on end. When asked, users respond with statements like:
“Oh that? Yeah I found it easier to just put all my comments in Notepad”
“I tried that approach, but it just seemed so complicated. Whereas the old way I’m familiar with.”
“Wait, you can do that? I never knew!”
As engineers, we like building cool things. Therefore we tend to focus exclusively on the engineering & technical aspects of our software. How robust is the design? How clean is the code? How scalable and maintainable is the system? Engineers should care about these dimensions, and fight for them if need be, but they’re only half the battle when creating a great system.
When using an iPod, 12-year old me wasn’t appreciating the masterful architecture that allowed each piece of the system to interface flawlessly with the greater whole, or the brilliant algorithms that saved heaps of space and time during repetitive calculations. I liked the fact that I could buy my favorite Rise Against album (remember them?) on iTunes, transfer it onto the iPod with ease, and within a few clicks listen to my new tunes wherever I wanted to. As an end user I didn’t care about the technical aspects of the system, I cared about the experience I had when using the system.
UX, not UI
So how do you build a great user experience? The old-school approach was to hire a couple designers, send them a list of fields & elements to be displayed on each screen, and have interns write CSS/GUI code to look like the resulting mockups. UI done, developers could now design the rest of the system around what they deemed important/interesting.
While good interface design is critical, it only scratches the surface of the entire experience. Most software goes beyond a series of screens. It’s dynamic; calculating, reacting, and sometimes being shaped by users’ input, in an intricate dance of man & machine. Good user interfaces don’t mean much if the user has to wait 45 seconds for a central page to load, or is redirected to unexpected, disjointed places as they flow through the system.
The responsibility of creating a great user experience can’t be entirely shouldered by designers. Every member of the team must keep this goal in mind, from the product owner who selects the right features to prioritize (not necessarily the most popular), to testers who, on top of teasing out crashes, bring up feelings of ambiguity or annoyance felt when using the system, to developers, who write every line of code with the intention of helping users achieve their goals or enjoy the experience.
User Goggles
Jay Leno, famed software visionary
Stacks of books, papers, and articles have been written about the art of making a great user experience, and I recommend that all software professionals take at least a cursory dive into this field. But you don’t need a degree in Human Computer Interaction to create software that users love. Instead you just need the human touch, a.k.a. empathy, a.k.a. walking in another’s shoes, a.k.a user goggles.
The D.A.R.E., S.A.A.D., and P.A.D.B.T.O.B.A.T.G.W.J.D.T.W.O.K (People Against Drinking Before Twenty-One But After That Go Wild Just Don’t Touch Weed O.K.) programs in high schools relish the chance to break out their Drunk Goggles at school assemblies. These goggles simulate the visual, balance, & coordination impairment caused by a few too many White Russians, leading their wearer to stumble around the room like a drunkard on a Friday night. The person isn’t actually drunk, they’re probably stone sober, but they interface with the world as if they were a heavy drinker.
When giving fresh code a first run, I like to imagine putting on my “user goggles”. Although I’ve spent anywhere from hours to months in the intricate details of this system, I forget all that and pretend that I’m a first time user, trying to accomplish a specific task. It’s tough, and sometimes it feels like another distraction from writing useful code…
But as I’m stumbling through my task, I notice things:
The Submit button’s on the left. That’s weird. I’d expect it to be on the right.
I clicked Continue, is it doing anything? Hello?
I feel like all my time is spent toggling between these tabs. Back and forth. Back and forth. Ugh.
I keep submitting this form, and each time the next page takes at least 10 seconds to load. I’ve actually started checking my phone each time I click Submit, that’s how bad it is.
I write down all these little thoughts. These criticisms, annoyances, and tics. My users would think the exact same thoughts if they loaded my system for the first time, current state. I use these thoughts to fuel a second pass at the code, to get in front of this criticism. It’s much easier to change the system now than 4 months later, when most of the dev team’s moved on and the code base is 5,000 lines thicker.
Every team member should practice some form of this exercise.
While writing user stories, Product Owners should close their eyes and say “As a user, what do I actually want to do? What do I need to do? What’s the purpose of me loading this system? Does this user story help me execute my purpose for being here?”
While trying new features, Testers should load the home page and say “I’m Jane Soandso. I loaded somerandomsystem.com because I want to do ___. Let’s see if I can figure out how to get this done quickly, easily, and painlessly”
Write down every hiccup, concern, or negative thought, and fix the root cause if possible. Remember: if you notice something, then you can almost guarantee that your users will notice the same thing, and their experience will be hampered as a result.
Conclusion
What’s presented here isn’t new (walk a mile in someone else’s shoes, anyone?), but user empathy’s sorely underutilized in software development. Development team members often see software in a very specific light. We’re creating a piece of art. A massive collection of code, interfaces, ideas, and test plans assembled by highly-skilled professionals over the course of months (or years). But the people who use this software (and directly/indirectly give us money for rent & Tostitos) see our creation as a tool, to be used for certain tasks or experiences, and nothing more. By simulating the mindset of our users mid-work, we can constantly readjust our creation to better fit their needs, lives, wants, hates, and secret, innermost desires (that last one’s tricky, and may require some imagination). By wearing our user goggles mid-development we can later release our software with confidence, knowing it’ll achieve its ultimate purpose: to be used & loved by actual people in the real-world. | https://medium.com/walk-before-you-sprint/the-forgotten-key-to-great-software-the-human-touch-297ca8300301 | ['Grant Gadomski'] | 2019-05-04 02:00:50.856000+00:00 | ['UX', 'Software Development', 'Software', 'Design', 'Software Engineering'] |
1,186 | Unveiling the Algorithm behind “Picks For You” | The Picks For You module on 1688.com is now much more than just a product recommendation channel. It has led to the development of some major online promotions and good marketing scenarios including Top-ranking Products, Must-buy List, Theme Marketplace, and Discover Quality Goods. Inserting marketing scenarios into the Picks For You display in the form of cards helps to distribute traffic and improve overall position exposure gain. This article illustrates how to insert marketing scenario cards into the Picks For You display.
Background
Currently, the marketing scenarios inserted into Picks For You on the 1688.com app are mostly product collections. With IPV-related metrics as the current focus, we use exposure gain to measure model results, calculated as follows:
Exposure gain = [formula shown as an image in the original post]
In the case of only product recommendations, this metric is equal to PV_CTR.
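As a rough illustration, the metric can be computed from exposure and click counts. Since the exact formula appeared as an image in the original, the numerator here (product clicks plus clicks drawn by inserted cards) is an assumption; what the text does confirm is that with no cards the metric reduces to PV_CTR:

```python
def exposure_gain(product_clicks, card_clicks, exposures):
    """Hypothetical exposure-gain metric: effective clicks per exposure.

    With no cards inserted (card_clicks == 0) this reduces to plain
    PV_CTR = clicks / page views, matching the article's statement.
    """
    if exposures == 0:
        return 0.0
    return (product_clicks + card_clicks) / exposures
```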
Figure 1 The Picks For You display on the homepage of 1688.com app
Challenges and Solutions
At present, sellers decide and provide the mapping between marketing scenario cards and the products to display in the Picks For You column.
How do we insert certain marketing scenario cards into the product recommendations?
Status Before Iteration
It is very straightforward to randomly attach a card to recommended products with a certain probability. This method lowers exposure gain as it ignores the capacity of cards and user preference for different cards. While causing metric values to drop significantly, this method helps to accumulate initial data in a short time.
Weak Personalization
To transform a product into a card, define a card quality score and a user preference score, then use the formula shown in Figure 2 to determine which type of card to use.
Figure 2 Card selection formula
At the same time, instead of simply applying the product-card relationships provided by sellers, filter the provided collection of product-card pairs. There are multiple cards under each card type for every product, therefore filtering by card quality scores is critical. User preference scores for different card types are calculated offline. iGraph synchronizes these two types of data daily. In online scheduling, cards are inserted once they are attached to the corresponding products based on the formula shown in Figure 2. For better results, carefully arrange the display frequency of cards, ensuring that there is a certain number of products in between to avoid the situation wherein too many cards appear on one screen. The exposure gain grows by 3.23%, in comparison to the results before iteration. However, it is still lower than the base value without inserting any card. It is significant to note that there is a limited improvement even while experimenting with the multiple variations of the formula in Figure 2.
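A minimal sketch of this weak-personalization step follows. The actual Figure 2 formula is not recoverable from the text, so scoring a card type by its quality score times the user preference score, and the interval value of 4 products, are assumptions for illustration:

```python
# Sketch of the weak-personalization step. The quality * preference
# scoring and MIN_GAP value are assumptions, not the post's Figure 2.
MIN_GAP = 4  # assumed minimum number of products between inserted cards

def pick_card_type(card_quality, user_preference):
    """Pick the card type maximizing quality score * user preference."""
    scores = {t: card_quality[t] * user_preference.get(t, 0.0)
              for t in card_quality}
    return max(scores, key=scores.get)

def insert_cards(products, product_cards, card_quality, user_preference):
    """Attach at most one card per product, keeping MIN_GAP card-free
    products between consecutive cards so one screen isn't all cards."""
    feed, since_last_card = [], MIN_GAP
    for p in products:
        feed.append(p)
        since_last_card += 1
        types = product_cards.get(p)
        if types and since_last_card > MIN_GAP:
            quality = {t: card_quality[t] for t in types}
            feed.append(("card", pick_card_type(quality, user_preference)))
            since_last_card = 0
    return feed
```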
Machine Learning Model
Now, with so many cards available to attach to a recommended product, the question arises: which one is most likely to draw clicks? The model attempts to answer this question by transforming it into a Click-Through-Rate (CTR) estimation problem. Sort the estimated CTR values and pick the highest. The final display results are subject to several rules.
Examples and Features
From the Picks For You data, select the exposure and click data of recommended products that are eligible for card attachment as training examples. Features fall into three parts: user features, (trigger) item features, and card features. The plain product form is treated as a special card form. We select 85 features as the model input, including 62 real-number features, 19 categorical features, and 4 cross features. Real-number features are statistical features in the user, item, and card dimensions, for example the CTR of a product (item) on the Picks For You platform, or its CTR in different forms. Categorical features are embedded before being fed into the model.
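As an illustration of how the categorical features might be combined with the real-number ones, here is a hypothetical embedding-lookup step. The dimensions and table sizes below are made up; the real model uses 62 real-number and 19 categorical features:

```python
import numpy as np

def build_input(real_feats, cat_ids, embed_tables):
    """Concatenate real-number features with embedding vectors looked up
    for each categorical feature (illustrative dimensions only).

    cat_ids: one integer id per categorical feature.
    embed_tables: one (vocab_size, dim) embedding matrix per feature.
    """
    embedded = [table[i] for i, table in zip(cat_ids, embed_tables)]
    return np.concatenate([np.asarray(real_feats)] + embedded)
```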
Recall
Based on the final product recommendations of Picks For You, recall the candidate collection, item2item2card, from the selected product-card collection. Currently, the mapping between products and cards only uses the above-mentioned card quality score and does not consider the relationship between products and cards. The overall capacity of a card is not necessarily equal to its capacity when attached to a specific item. Therefore, we add an item2theme mapping, where a theme represents an item-card pair. The SWING algorithm constructs this mapping, using several days of card exposure and click data and treating each item-card pair as a single item entity. Online A/B testing shows that after adding this recall, exposure gain increases by 0.79%.
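The post does not spell out which SWING formulation it uses; a common variant (assumed here) scores a pair of item entities by summing, over every pair of users who clicked both, the inverse of alpha plus the overlap of the two users' click sets:

```python
from collections import defaultdict
from itertools import combinations

def swing_similarity(clicks, alpha=1.0):
    """clicks: list of (user, item) pairs; returns {(i, j): score}.

    sim(i, j) = sum over user pairs (u, v) that both clicked i and j of
                1 / (alpha + |I_u & I_v|), a common SWING formulation.
    """
    item_users = defaultdict(set)
    user_items = defaultdict(set)
    for u, i in clicks:
        item_users[i].add(u)
        user_items[u].add(i)
    sims = defaultdict(float)
    for i, j in combinations(sorted(item_users), 2):
        for u, v in combinations(sorted(item_users[i] & item_users[j]), 2):
            overlap = len(user_items[u] & user_items[v])
            sims[(i, j)] += 1.0 / (alpha + overlap)
    return dict(sims)
```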
Sorting Model
The Wide & Deep model (WDL) serves as the sorting model, although the Deep & Cross Network (DCN) was also tried during iterations. A/B testing shows little difference in exposure gain between the two models, with DCN only 0.03% higher than WDL. XTensorFlow trains the models every day and pushes them to RTP.
Figure 3 WDL (left) and DCN (right)
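To make the wide-plus-deep scoring idea concrete, here is a minimal NumPy forward-pass sketch. The real model is trained in XTensorFlow with 85 features; the layer shapes, ReLU activations, and single-logit output below are illustrative assumptions, not the production architecture:

```python
import numpy as np

def wdl_score(x_wide, x_deep, w_wide, deep_layers, b=0.0):
    """Wide & Deep CTR score: sigmoid(wide linear term + deep MLP term).

    x_wide: (d_wide,) raw/cross features for the wide (memorization) part.
    x_deep: (d_deep,) dense + embedded features for the deep part.
    deep_layers: list of (W, bias) pairs; ReLU between hidden layers,
    last layer outputs a single logit.
    """
    h = x_deep
    for i, (W, bias) in enumerate(deep_layers):
        h = h @ W + bias
        if i < len(deep_layers) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers only
    logit = x_wide @ w_wide + np.squeeze(h) + b
    return float(1.0 / (1.0 + np.exp(-logit)))
```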
Effectiveness
The effectiveness is driven by two factors: card content and precise card distribution within product recommendations. If the card content is poor, it reduces users’ interest in clicking cards again and also hurts upstream distribution. Compared to the weak-personalization approach, the current strategy increases exposure gain by 6.77% and the number of products clicked per user by 18.60%. Compared to the product-only recommendation method, it boosts exposure gain by 1.58% and the number of products clicked per user by 0.01%.
System Process
Online Scheduling
The product recommendations from Picks For You determine the overall product sequence, whereas the card sorting model decides which cards are attached to which products and eventually where to display based on rules (the card interval strategy used in the weak personalization phase still applies). Figure 4 shows a flowchart of online scheduling, which includes the weak personalization and machine learning module.
Figure 4 Scheduling flow diagram, containing weak personalization and machine learning model
Card Fallback and Cold Start
If certain card types are missing in the final result of a single request, then, according to a pre-set probability, up to one card is inserted for each missing card type. The interval policy also constrains the fallback result: a card can’t be inserted if there is no suitable position. This supports card fallback and cold start, and increases the diversity of cards by letting users see other types of cards.
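A sketch of this fallback logic follows, assuming cards are represented as ("card", type) tuples; the probability and gap values are illustrative, since the post's pre-set values are not given:

```python
import random

def fallback_insert(feed, all_types, fallback_prob=0.3, min_gap=4,
                    rand=random.random):
    """For each card type missing from `feed`, insert at most one card of
    that type with probability `fallback_prob`, but only if the interval
    policy leaves a suitable slot (min_gap card-free products at the tail).
    """
    present = {c[1] for c in feed if isinstance(c, tuple)}
    out = list(feed)
    for t in sorted(all_types - present):
        if rand() >= fallback_prob:
            continue  # probabilistic fallback: skip this type
        tail = out[-min_gap:]
        if len(tail) < min_gap or any(isinstance(x, tuple) for x in tail):
            continue  # no suitable position; the card can't be inserted
        out.append(("card", t))
    return out
```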
Road Ahead
The current model is merely a card selector; it does not consider the overall sequence of products and cards. Moreover, the card selector only models exposure and click data at the Picks For You level, and since we also aim to account for clicks inside cards, there is room for optimization. In the future, we plan to use the existing product recommendation results as one recall source and the card selector as another recall method for cards, and to train a mixed sorting model downstream for overall sorting.
Original Source: | https://medium.com/dataseries/unveiling-the-algorithm-behind-picks-for-you-cc5ed8a82772 | ['Alibaba Cloud'] | 2020-02-06 06:35:03.431000+00:00 | ['Alibaba', 'Machine Learning', 'Algorithms', 'Artificial Intelligence', 'Big Data'] |
1,187 | The Perfect Micro-Workout For Writers That Won’t Kill Your Flow | Recently we bought a treadmill, but sadly I’m not using it as much as I’d hoped I would. I enjoy it but finding a time in my day to devote to exercise has been problematic.
Our house is small and the basement is my daycare so we don’t have the kind of extra space that most people have in their homes.
The treadmill lives in our bedroom, which is right beside our daughter’s room and it’s also over the daycare.
That means that I can’t use it in the mornings when my daughter is sleeping (I probably could, but I’m not a jerk) or when the daycare kids are sleeping. So that rules out the two best times for me to exercise.
Also, I write in the morning, so exercising at that time would waste my best, most focused energy. When the daycare kids are sleeping, I’m eating my lunch and editing usually.
After work, I’m getting dinner ready, and I’m not an evening person.
Image by author via Canva.
I know people who go to the gym or a yoga class after dinner, but we don’t eat until around 7:30 so that’s not going to work. By 8 pm, I’m in my jammies, on my computer again doing courses, writing more, or getting caught up with my favorite Medium writers.
Even though I want to exercise, part of the problem is finding the right opportunity during the day.
Finding a time to exercise can be the trickiest part of getting your workout done, even if you don’t dread it.
I think this is why I’m always most successful when getting my exercise incidentally. | https://medium.com/illumination-curated/the-perfect-micro-workout-for-writers-that-wont-kill-your-flow-5f0bdb9826c7 | ['Erin King'] | 2020-11-03 02:40:15.927000+00:00 | ['Health', 'Lifestyle', 'Writers Life', 'Self', 'Writing'] |
1,188 | What I Learned After 360 Days of Journaling | Journaling Helps Build Positive Habits With Ease
“Depending on what they are, our habits will either make us or break us. We become what we repeatedly do.” ―Sean Covey
Journaling every day is a habit that you build. It does not come as easy as you may think. It is time-consuming, and some days, it feels harder to pick up that pen and paper.
Like building any habit, it takes effort, patience, and consistency.
Once I started to build journaling as a habit, it actually helped me develop other positive habits with much more ease.
I became used to being patient with myself. I was putting the effort and mindfulness towards working at building that habit every day.
I’ve found myself working out more consistently. I’ve begun to see patterns in my behaviors over the last few months and making a conscious change from the parts of me that were toxic.
I managed to write each day — how much I wrote and finding what time works best.
Set Yourself Up for Success
I started small — writing one sentence a day for a week. My initial goal was only to sum up my day in one sentence.
Starting small and working your way up sets you up for success. When starting anything new, it’ll become overwhelming if you dive right into the deep end. You don’t want to let yourself drown.
As time went on, I started to have the urge to write more. So, I took the training wheels off, stopped limiting myself to only one sentence, and took off writing as much as I felt like. (Although, to keep building the muscle, I made sure to write a minimum of one to three sentences.)
It was hard finding the time of day to journal at first. At first, I typically wrote as soon as I woke up, as they resembled Morning Pages. However, some mornings I would wake up later and not have time for work, so I’d write before bed.
For a few months, I even bought a second journal to test out writing in one in the mornings and in the other before bed. However, that did not work for me; it took up too much time, and I had to label the journals to make sure I wrote in each at the right time.
I then tried to write my entries in many different ways. At first, I treated it like my therapist. Letting everything thought out — the good and the bad. It felt like a long-overdue release.
Then, for a few weeks, I focused on gratitude. I began to write more about what I was grateful for in my life. I would write it with as much clear visualization as possible. Like being grateful for the sun and ocean, I would write how I felt the breeze brush my face with kisses, reminding me that I was alive. Or how the sounds of the ocean were reminding me that I am strong, that everything in life reflects the rise and fall of the waves.
Building this habit, I didn’t set too many limits and boundaries to ease into journaling consistently. The first three months were trial and error. Eventually, you’ll start to gain momentum and find ways that work.
Results
After four months, I began to find a consistent time that worked best with my schedule. It was better for me to write as soon as I woke up. It gave me time to process the events and thoughts of the previous day, while also letting me incorporate what I’d dreamt that night.
It’s helped me keep one journal so that my thoughts are consecutive and together in one place. It makes it a lot easier when I want to go back and read the past months to reflect further.
Sometimes, when I think I don’t have much to say, it has helped to create themes or focuses for my entries. There are weeks when I want to feel more grateful, so I write reminders of what I’m grateful for. There are weeks when I begin to feel stagnant or really struggle, so I started writing letters to the universe asking for strength, or visualizing letters written by my future self, thanking the universe for bringing me what I’ve always dreamed of. I write in specific detail, tapping into all of my senses, realizing that I could get there — I will get there.
As I kept writing, I kept building the amount that I wrote each week. By the 3rd month, I was able to write one full page without stopping.
Now, I can write more than one page, as if I’ve just written one sentence.
What You Can Do to Start Your Journaling Practice | https://medium.com/age-of-awareness/what-i-learned-after-360-days-of-journaling-843914e7bd78 | ['Jess Tam'] | 2020-12-15 18:02:58.508000+00:00 | ['Creativity', 'Journaling', 'Self Love', 'Self Improvement', 'Writing'] |
1,189 | Direct from the investor: 5 lessons for entrepreneurs in 2021 | Direct from the investor: 5 lessons for entrepreneurs in 2021
The end of the year is good for balance. And it’s also good for, who knows, tearing up some manuals.
Photo by Andreas Klassen on Unsplash
I consider myself a start-up adviser and investor for beginners. Great apprentice, modesty aside, but with a lot to learn.
Having been an administrator for over 4 years, I reinvented myself in the business world by doing M&As. And, six months ago, I immersed myself in this world that excites and fascinates me, as a startup advisor and investor.
At 24, all right. It’s not too bad to feel like a boy. It tastes good. And so, I follow and I will continue to learn.
That said, here are my five boy cents for you, the entrepreneur, to start 2021: | https://medium.com/datadriveninvestor/direct-from-the-investor-5-lessons-for-entrepreneurs-in-2021-f0c67fc502c6 | ['Marco Antonio'] | 2020-12-27 17:32:29.075000+00:00 | ['Investing', 'Entrepreneurship', 'Business', 'Technology', 'Productivity'] |
1,190 | Is the New Instagram Update a New Form of Dark Pattern? | Image by natanaelginting on freepik
In the last days, the most recent Instagram update has been in the news for the worst reasons. Many users and influencers have publicly spoken out their dissatisfaction, namely James Charles, who, in a rant video, advised his followers not to update their apps.
At the center of these negative opinions is the new layout, where the basic Instagram options — camera and notifications — have been removed from the bottom menu bar and instead are displayed in the upper-right corner of the home page, next to the direct message button.
Author/Copyright holder: Instagram. Copyright terms and license: Fair Use.
The new bottom menu features the “Reels” (mimicking Instagram’s competitor TikTok) and “Shop” button. This means that these two options are now easily accessible on every screen, while the camera and the notifications button are inaccessible in some tabs.
Instagram is very explicit with its intentions on its blog: to prioritize Reels and its e-commerce platform over traditional posts.
The company states that this change was taken in order to keep up with the fast-paced evolution of technology use patterns. However, this approach seems to be forcing the users to have new intentions when using the app and not otherwise. It even makes me wonder if we are facing a new type of dark pattern.
In my recent post about Dark Patterns, I showed how Instagram uses the Roach Motel pattern. Indeed, we already know this company has no problem in resorting to dark patterns to reach their goals. However, if Instagram is being honest about its intentions with this update, we cannot consider it a dark pattern, right?
The issue is the new bottom menu layout takes advantage of years of muscle memory developed by the users, who now have a high probability of accidentally clicking on the Reels or Shop button when looking for the app’s traditional features.
Certainly, Instagram has a competent UX Team who is able to understand the real users’ needs and motivations. It’s very possible they knew users are not interested in the same features the company is aiming to promote. Maybe this is the reason why Instagram’s official blog post seems to be anticipating users’ disappointment:
Author/Copyright holder: Instagram. Copyright terms and license: Fair Use.
What about you, what do you think about Instagram’s new update? Do you also consider it unfair for users or do you find it a good business decision? | https://uxplanet.org/is-the-new-instagram-update-a-new-form-of-dark-pattern-1776697cffd8 | ['Mariana Vargas'] | 2020-11-19 11:12:20.132000+00:00 | ['Visual Design', 'UX', 'Creativity', 'Design', 'Technology'] |
1,191 | Three Stories Every Entrepreneur Should Know | Storyboard for Breaking Bad, and we all know how that ended. Image from Uproxx.
What would you say your company does? It seems like a simple question that should have a simple answer, but that’s rarely the case.
Here’s something I’ve done over my 20+ years as an entrepreneur to keep everyone focused on the tasks at hand while also keeping an eye on the future.
Entrepreneurs usually blur the lines of what their startup is, what it will be, and what it should be. This is fine until you try to start planning around those stories. At that point, you need to be asking: What are the priorities today and how do we execute on those priorities without mortgaging the future? The reverse question is just as important: How much time do we spend working on those new things that aren’t generating revenue yet?
The Three Story Rule
Every startup should have three stories, loosely related to the three arcs most storytellers use in episodic storytelling. An easy way to think about it is a television series. When you watch an episode of a TV show, the writers are usually working on three storylines:
Story A: Story with an arc that begins and ends in this episode (or maybe a two-parter).
Story B: Story with a longer arc that lasts a few episodes or more. This current episode will advance the plot of Story B in smaller increments, and maybe drop a twist in here or there.
Story C: Story with a much longer arc, maybe out to the end of the season or the end of the series itself. This current episode might not advance Story C at all, or it may just drop a few hints. At the end of the season or the series, you’ll be able to look back and piece Story C together, but that won’t be easy or even possible in real time.
Now let’s take that story strategy and apply it to your startup, and I’ll use my most recent startup as an example.
Story A: Right Now
Story A is the what your company is doing today that is generating revenue, building market share, and adding value to the company. Story A is about this fiscal quarter, this fiscal year, and next fiscal year.
At Automated Insights, Story A was our story for the first few years while we were known as Statsheet, a company that aggregated sports statistics and turned them into visualizations and automated content. This is how we made our money — either using our own data to generate content or using data like Yahoo Fantasy Football to generate Fantasy Matchup Recaps.
While we were breaking new ground in the arena of sports stats, we were one player in a sea of players, and while automating content from sports stats gave us a competitive advantage, sports was still a highly commoditized and difficult marketplace.
Story B: What’s Next
Story B is what’s going to open up new markets using new technologies or new products. Story B is about what you could do if the stars aligned properly or if you raised enough money for a longer runway, because Story B usually comes with a lot more risk for a lot more reward.
A few years into Statsheet, when we went to raise our Series A round, we pitched using our proprietary automated content engine on all kinds of data, generating machine-written stories in finance, marketing, weather, fitness, you name it. We changed our name to Automated Insights and pivoted completely with a $5 million raise.
That pivot came with a ton of risk. We had friends (and potential acquirers) in sports and we would now be making sports just a part of our story. In return, we would be one of the first players in the nascent Natural Language Generation (NLG) market, a pre-cursor to the “AI” market.
It was not a coincidence that the acronym for our new company name was also AI.
Story C: The Billion-Dollar Story
Story C usually involves a seismic change that disrupts existing markets, and as you can imagine, it’s a million times more difficult to pull off.
Uber and Lyft are on Story C. They’re no longer known as a better taxi or for solving a specific problem. They’re about creating a market in which a large portion of people can no longer live without them. In most urban areas, ride hailing services are now a necessity, as the ability they offer to do more things cheaply has made a major impact on lifestyle. There’s just no going back.
Story C was actually where my vision split from my former startup. I was focused more on real-time, plain-word insights generated from a mesh of private and public data, i.e. Alexa, Google Assistant, and Siri. The company was turning towards more of a B2B approach, first as a SaaS NLG tool, and then as a business intelligence tool.
No one was wrong here, but the latter was the direction the company took. So now I’m working on a new Story A at a new startup. And I’ve got Stories B and C in my purview.
So which story do you tell? Well, it depends on who you’re talking to.
For the press, for customers, and for potential employees, stick to Story A — if these folks aren’t jazzed about Story A, then you’re not spending enough time on Story A.
In fact, you should consider Story B and Story C to be intellectual property. It’s not the kind of thing you want to go too deeply into without an NDA or some protection in place.
For your board, your investors, and your employees, focus on Story A, of course, but also keep them aware of Story B and drop hints about Story C. Story B is where you’re headed next. It might be what you raise your next round on, or it may be your next big pivot. Story C is best kept in the distance until you’ve crushed Story A and made significant progress on Story B. It’s a goal, mainly, and you should just be making sure you’re not closing doors to it as you move forward.
Once you get your stories straight, then it’s just about execution. But come back to them often, every quarter or even every sprint, and make sure everyone is still on the same page. | https://jproco.medium.com/three-stories-every-entrepreneur-should-know-3476909629bb | ['Joe Procopio'] | 2019-01-04 19:56:32.109000+00:00 | ['Product Management', 'Business', 'Entrepreneurship', 'Startup', 'Technology'] |
1,192 | My Phobia: (British) Baked Beans. | My Phobia: (British) Baked Beans.
The most sickening article I’ve written.
Photo by Deepansh Khurana on Unsplash
I feel sick to my stomach. My face is tight with anxiety and it’s causing crows’ feet on either side of my eyes. They age me beyond my years. They make me shiver. My shoulders are tense. Why can’t I stop staring at them? They say it’s tough to stop looking at unsettling things, and this is beyond repulsive. I’m captivated by them- in the worst possible way.
I have a phobia of baked beans.
Most people have an aversion to snakes, spiders and Tom Cruise movies. Society accepts them as ‘normal’.
Leguminophobia is the irrational fear of baked beans. And it turns out, I am not alone in my struggle. I read an article about a cook who had to quit his job because he couldn’t cook a Full English Breakfast, in which they are a staple. Questions have been posted on Reddit and Quora by those wondering if this affects others. There is even a hashtag on Twitter #Leguminophobia. This is the solidarity I needed.
You can stop laughing now. This is serious(ish).
If I was a Goliath, baked beans would be my David.
The truth is, laughter is a standard response to those who learn about my phobia. I have had to come to terms with this reaction over the years. Most people have an aversion to snakes, spiders and Tom Cruise movies. Society accepts them as ‘normal’. I am well aware that my detestation of a low-cost popular foodstuff is plain weird.
Acknowledging the fact they exist in the same world as I, is traumatic. Never mind the fact people choose to eat them as food- shudder. If I was a Goliath, Baked Beans would be my David.
British baked beans- most famously made by Heinz- are premade and come in a tin. It’s shiny silver, rudimentary and sturdy. It would break a toe (or brick for that matter) if dropped on one, and remain unblemished. The opening reveals a cold, shiny tomato-based red gloop that glistens in even the dimmest of light. How can food make my eyes squint? They taste overly sweet and artificial despite being made with no additives. The beans are Haricot Beans apparently-whatever they are. They have a dry centre and mushy texture meaning they stick to your teeth when chewed.
The flatulence they produce is almost unique in its smell. “Did you eat baked beans today? It smells like you did”.
It all started in my family home as a child, where they were deemed an acceptable dish at any time. For breakfast, lunch, or a side dish at dinner. Baked beans were everywhere. I recall my brother warming some in the microwave for lunch. He nuked them for too long, meaning he could hardly hold the bowl without burning his hands. I was just about coping with them in my vicinity until I saw a thin layer of red skin on the top, which was caused by excessive heating. He proceeded to scrape it off with a spoon and flick it into the bin before taking a mouthful. For me, this episode was trauma in its purest form.
Photo by Cottonbro on Pexels
“They keep you regular” my mom used to say. Which, to me, was just another reason, on the already exhaustive list, to hate them. When I spy someone chowing down on beans without a care in the world, I feel desperately sorry for their partner, friends, family, or colleagues who will have this human airbag in their company later on. The flatulence they produce is almost unique in its smell. “Did you eat baked beans today? It smells like you did”. The scent is as sinister as it is distinctive.
Watching them being ingested is all too engaging for me, like falling into anaphylaxis when stung by a bee. Why do baked beans seem to infect every other item on the plate? The bean ‘jus’ is unable to be hoarded into one part of the plate and kept there willingly. I once saw someone build a dam of sausages and toast to save their mushrooms from drowning on their plate. Their attempt was futile. I can only assume the only way to segregate your food from baked beans and their fluid is to keep them in a bowl independent from your meal. That would, however, require one to eat a spoonful of pure baked bean, with nothing to distract from the noxious taste or texture.
After they have been eaten, my torture isn’t over. When the bean-tainted plate is removed there is always, ALWAYS a bean remaining on the table. It’s sitting there, thinking it’s got away from its bio bin grave. Surrounded by a thin layer of sauce smeared on the counter. Cold and still gleaming in the light. How can just one bean make me squint? When it’s wiped up, the weak bean structure squashes into the cloth, causing further insult.
God forbid I have to wash up the dishes. The dishwater will be polluted with a reddish tint and the sink strainer will expose a naked pale bean after having its coating washed off.
A world without baked beans would be a better one. Can we agree to get fiber and protein from other foodstuffs? It would make my life indefinitely happier. | https://medium.com/writers-blokke/my-phobia-british-baked-beans-7050173c32f9 | ['Nathan Foolchand'] | 2020-12-22 21:34:29.384000+00:00 | ['Health', 'Short Story', 'Life', 'Writing', 'Food'] |
1,193 | How To Get Hired At Your Dream Job In Just 11 Lines Of Python Code | How To Get Hired At Your Dream Job In Just 11 Lines Of Python Code
A complete newbie-friendly guide on how to use Machine Learning to be hired at your dream job
“There has literally never been a time in the history of humankind when this short article could have been more relevant.”
In the U.S. alone an estimated 47 million jobs are predicted to disappear in the post-COVID19 world. To put that in perspective, a staggering 32% of the population might become unemployed. Similar statistics can be found for almost all countries across the globe. What does this mean?
The majority of these people are under no circumstance going to simply accept such a fate. Governments around the world have pledged relief-funds to their citizens. Although such funds have been quite generous in many countries, under no possible point of view are they enough to feed the millions of people who are soon going to be left without a fixed source of income.
It thus becomes obvious that the majority of the ones affected are going to try to find a new place of employment.
In other words, if you considered the job market prior to COVID-19 to be extremely competitive, finding an occupation in 2020 will prove to be exponentially more difficult as the competition is going to be significantly higher than what it was in the pre-COVID-19 world.
Taking into consideration the above-mentioned, the aim of this article is to walk you through the complete process of creating a beginner-friendly machine learning model which is going to almost guarantee that you are accepted at the job of your dreams.
If you like this article and are interested in receiving exclusive monthly content for free, subscribe to my exclusive mailing list at the end of this article (it can also be accessed directly from here). | https://medium.com/datadriveninvestor/get-hired-at-your-dream-job-in-just-11-lines-of-python-code-7a8d41bbb335 | ['Filippos Dounis'] | 2020-06-05 00:21:48.339000+00:00 | ['Business', 'Entrepreneurship', 'Programming', 'Data Science', 'Artificial Intelligence'] |
1,194 | Visualising high-dimensional datasets using PCA and t-SNE in Python | Update: April 29, 2019. Updated some of the code to not use ggplot but instead use seaborn and matplotlib. I also added an example for a 3d-plot. I also changed the syntax to work with Python3.
The first step in any data-related challenge is to explore the data itself. This could mean looking at, for example, the distributions of certain variables or at potential correlations between variables.
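As a minimal, hedged illustration of such a first pass (on made-up synthetic data, since nothing is loaded yet; the column names are arbitrary), `describe()` and `corr()` cover distributions and pairwise correlations:

```python
# Quick first-pass exploration of a small tabular dataset (illustrative only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({'a': rng.normal(size=200)})
df['b'] = 2 * df['a'] + rng.normal(scale=0.1, size=200)  # strongly correlated with 'a'

print(df.describe())  # per-variable summary of the distributions
print(df.corr())      # pairwise (Pearson) correlations
```

With hundreds of columns this tabular view stops being digestible, which is exactly the problem the rest of this post addresses.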
The problem nowadays is that most datasets have a large number of variables. In other words, they have a high number of dimensions along which the data is distributed. Visually exploring the data can then become challenging and most of the time even practically impossible to do manually. However, such visual exploration is incredibly important in any data-related problem. Therefore it is key to understand how to visualise high-dimensional datasets. This can be achieved using techniques known as dimensionality reduction. This post will focus on two techniques that will allow us to do this: PCA and t-SNE.
More about that later. Let's first get some (high-dimensional) data to work with.
MNIST dataset
We will use the MNIST dataset in this write-up. There is no need to download the dataset manually as we can grab it using Scikit-Learn.
First let’s get all libraries in place.
from __future__ import print_function

import time
import numpy as np
import pandas as pd

# fetch_mldata was removed from scikit-learn; fetch_openml replaces it
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
and let’s then start by loading in the data
from sklearn.datasets import fetch_openml  # fetch_mldata was removed from scikit-learn

mnist = fetch_openml("mnist_784", version=1, as_frame=False)
X = mnist.data / 255.0
y = mnist.target.astype(int)  # fetch_openml returns the labels as strings

print(X.shape, y.shape)

[out] (70000, 784) (70000,)
We are going to convert the matrix and vector to a Pandas DataFrame. This is very similar to the DataFrames used in R and will make it easier for us to plot it later on.
feat_cols = ['pixel'+str(i) for i in range(X.shape[1])]

df = pd.DataFrame(X, columns=feat_cols)
df['y'] = y
df['label'] = df['y'].apply(lambda i: str(i))

X, y = None, None

print('Size of the dataframe: {}'.format(df.shape))

[out] Size of the dataframe: (70000, 785)
Because we don't want to be using 70,000 digits in some calculations we'll take a random subset of the digits. The randomisation is important as the dataset is sorted by its label (i.e., the first seven thousand or so are zeros, etc.). To ensure randomisation we'll create a random permutation of the numbers 0 to 69,999, which allows us later to select the first five or ten thousand for our calculations and visualisations.
# For reproducibility of the results
np.random.seed(42)
rndperm = np.random.permutation(df.shape[0])
We now have our dataframe and our randomisation vector. Let's first check what these numbers actually look like. To do this we'll generate 15 plots of randomly selected images.
plt.gray()
fig = plt.figure( figsize=(16,7) )
for i in range(0,15):
ax = fig.add_subplot(3,5,i+1, title="Digit: {}".format(str(df.loc[rndperm[i],'label'])) )
ax.matshow(df.loc[rndperm[i],feat_cols].values.reshape((28,28)).astype(float))
plt.show()
Now we can start thinking about how we can actually distinguish the zeros from the ones and twos and so on. If you were, for example, a post office, such an algorithm could help you read and sort the handwritten envelopes using a machine instead of having humans do that. Obviously nowadays we have very advanced methods to do this, but this dataset still provides a very good testing ground for seeing how specific methods for dimensionality reduction work and how well they work.
The images are all essentially 28-by-28 pixel images and therefore have a total of 784 ‘dimensions’, each holding the value of one specific pixel.
What we can do is reduce the number of dimensions drastically whilst trying to retain as much of the 'variation' in the information as possible. This is where dimensionality reduction comes in. Let's first take a look at a technique known as Principal Component Analysis.
Dimensionality reduction using PCA
PCA is a technique for reducing the number of dimensions in a dataset whilst retaining most information. It uses the correlations between dimensions to provide a minimum number of variables that keeps the maximum amount of variation, or information, about how the original data is distributed. It does not do this by guesswork but with hard mathematics, using the eigenvalues and eigenvectors of the data matrix. The eigenvectors of the covariance matrix have the property that they point along the major directions of variation in the data: the directions of maximum variation in the dataset.
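To make the eigenvector story concrete, here is a small hand-rolled sketch on synthetic data (scikit-learn's `PCA` actually uses an SVD internally, but the result is equivalent; the data and sizes here are arbitrary):

```python
# Principal components via eigendecomposition of the covariance matrix.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 5)) @ rng.normal(size=(5, 5))  # correlated 5-D data

Xc = X - X.mean(axis=0)                  # centre each dimension
cov = np.cov(Xc, rowvar=False)           # 5x5 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues

order = np.argsort(eigvals)[::-1]        # biggest variance first
components = eigvecs[:, order]           # directions of maximum variation
explained_ratio = eigvals[order] / eigvals.sum()

projected = Xc @ components[:, :2]       # data expressed in the top-2 components
```

Each column of `components` is an eigenvector of the covariance matrix, and `explained_ratio` is exactly the `explained_variance_ratio_` that scikit-learn reports.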
I am not going to get into the actual derivation and calculation of the principal components — if you want to get into the mathematics see this great page — instead we’ll use the Scikit-Learn implementation of PCA.
Since we as humans like our two- and three-dimensional plots, let's start with that and generate, from the original 784 dimensions, the first three principal components. We'll also see how much of the variation in the total dataset they actually account for.
pca = PCA(n_components=3)
pca_result = pca.fit_transform(df[feat_cols].values)

df['pca-one'] = pca_result[:,0]
df['pca-two'] = pca_result[:,1]
df['pca-three'] = pca_result[:,2]

print('Explained variation per principal component: {}'.format(pca.explained_variance_ratio_))

[out] Explained variation per principal component: [0.09746116 0.07155445 0.06149531]
Now, given that the first two components account for only about 17% of the variation in the entire dataset, let's see if that is enough to visually set the different digits apart. What we can do is create a scatterplot of the first and second principal component and color each of the different types of digits with a different color. If we are lucky the same type of digits will be positioned (i.e., clustered) together in groups, which would mean that the first two principal components actually tell us a great deal about the specific types of digits.
plt.figure(figsize=(16,10))
sns.scatterplot(
x="pca-one", y="pca-two",
hue="y",
palette=sns.color_palette("hls", 10),
data=df.loc[rndperm,:],
legend="full",
alpha=0.3
)
From the graph we can see the two components definitely hold some information, especially for specific digits, but clearly not enough to set all of them apart. Luckily there is another technique that we can use to reduce the number of dimensions that may prove more helpful. In the next few paragraphs we are going to take a look at that technique and explore whether it gives us a better way of reducing the dimensions for visualisation. The method we will be exploring is known as t-SNE (t-Distributed Stochastic Neighbor Embedding).
For a 3d-version of the same plot
ax = plt.figure(figsize=(16,10)).gca(projection='3d')
ax.scatter(
xs=df.loc[rndperm,:]["pca-one"],
ys=df.loc[rndperm,:]["pca-two"],
zs=df.loc[rndperm,:]["pca-three"],
c=df.loc[rndperm,:]["y"],
cmap='tab10'
)
ax.set_xlabel('pca-one')
ax.set_ylabel('pca-two')
ax.set_zlabel('pca-three')
plt.show()
t-Distributed Stochastic Neighbor Embedding (t-SNE)
t-Distributed Stochastic Neighbor Embedding (t-SNE) is another technique for dimensionality reduction and is particularly well suited for the visualization of high-dimensional datasets. Contrary to PCA it is not a mathematical technique but a probabilistic one. The original paper describes the working of t-SNE as:
“t-Distributed stochastic neighbor embedding (t-SNE) minimizes the divergence between two distributions: a distribution that measures pairwise similarities of the input objects and a distribution that measures pairwise similarities of the corresponding low-dimensional points in the embedding”.
Essentially what this means is that it looks at the original data that is entered into the algorithm and looks at how to best represent this data using less dimensions by matching both distributions. The way it does this is computationally quite heavy and therefore there are some (serious) limitations to the use of this technique. For example one of the recommendations is that, in case of very high dimensional data, you may need to apply another dimensionality reduction technique before using t-SNE:
| It is highly recommended to use another dimensionality reduction
| method (e.g. PCA for dense data or TruncatedSVD for sparse data)
| to reduce the number of dimensions to a reasonable amount (e.g. 50)
| if the number of features is very high.
The other key drawback is that it:
“Since t-SNE scales quadratically in the number of objects N, its applicability is limited to data sets with only a few thousand input objects; beyond that, learning becomes too slow to be practical (and the memory requirements become too large)”.
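The objective itself is easy to write down. The toy sketch below is not scikit-learn's implementation (which adds perplexity calibration, early exaggeration and gradient descent); it just computes the two similarity distributions and the KL divergence between them, with a fixed Gaussian bandwidth for simplicity:

```python
# Toy version of the t-SNE objective: KL divergence between pairwise
# similarities in the original space (Gaussian kernel) and in the
# embedding (Student-t kernel). Illustrative only.
import numpy as np

def pairwise_sq_dists(X):
    # squared Euclidean distances between all pairs of rows
    s = np.sum(X**2, axis=1)
    return s[:, None] + s[None, :] - 2 * X @ X.T

def tsne_kl(X_high, X_low, sigma=1.0):
    P = np.exp(-pairwise_sq_dists(X_high) / (2 * sigma**2))
    np.fill_diagonal(P, 0.0)
    P /= P.sum()                          # high-dimensional affinities
    Q = 1.0 / (1.0 + pairwise_sq_dists(X_low))
    np.fill_diagonal(Q, 0.0)
    Q /= Q.sum()                          # low-dimensional affinities
    mask = P > 0
    return float(np.sum(P[mask] * np.log(P[mask] / Q[mask])))  # KL(P || Q)
```

A good embedding keeps high-dimensional neighbours close in the low-dimensional map, which drives this divergence down; scrambling the embedding drives it up. t-SNE's optimiser moves the low-dimensional points to minimise exactly this quantity.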
We will use the Scikit-Learn implementation of the algorithm in the remainder of this write-up.
Contrary to the recommendation above we will first try to run the algorithm on the actual dimensions of the data (784) and see how it does. To make sure we don’t burden our machine in terms of memory and power/time we will only use the first 10,000 samples to run the algorithm on. To compare later on I’ll also run the PCA again on the subset.
N = 10000

df_subset = df.loc[rndperm[:N],:].copy()
data_subset = df_subset[feat_cols].values

pca = PCA(n_components=3)
pca_result = pca.fit_transform(data_subset)

df_subset['pca-one'] = pca_result[:,0]
df_subset['pca-two'] = pca_result[:,1]
df_subset['pca-three'] = pca_result[:,2]

print('Explained variation per principal component: {}'.format(pca.explained_variance_ratio_))

[out] Explained variation per principal component: [0.09730166 0.07135901 0.06183721]
time_start = time.time()

tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300)
tsne_results = tsne.fit_transform(data_subset)

print('t-SNE done! Time elapsed: {} seconds'.format(time.time()-time_start))

[out] [t-SNE] Computing 121 nearest neighbors...
[t-SNE] Indexed 10000 samples in 0.564s...
[t-SNE] Computed neighbors for 10000 samples in 121.191s...
[t-SNE] Computed conditional probabilities for sample 1000 / 10000
[t-SNE] Computed conditional probabilities for sample 2000 / 10000
[t-SNE] Computed conditional probabilities for sample 3000 / 10000
[t-SNE] Computed conditional probabilities for sample 4000 / 10000
[t-SNE] Computed conditional probabilities for sample 5000 / 10000
[t-SNE] Computed conditional probabilities for sample 6000 / 10000
[t-SNE] Computed conditional probabilities for sample 7000 / 10000
[t-SNE] Computed conditional probabilities for sample 8000 / 10000
[t-SNE] Computed conditional probabilities for sample 9000 / 10000
[t-SNE] Computed conditional probabilities for sample 10000 / 10000
[t-SNE] Mean sigma: 2.129023
[t-SNE] KL divergence after 250 iterations with early exaggeration: 85.957787
[t-SNE] KL divergence after 300 iterations: 2.823509
t-SNE done! Time elapsed: 157.3975932598114 seconds
Now that we have the two resulting dimensions we can again visualise them by creating a scatter plot of the two dimensions and coloring each sample by its respective label.
df_subset['tsne-2d-one'] = tsne_results[:,0]
df_subset['tsne-2d-two'] = tsne_results[:,1]

plt.figure(figsize=(16,10))
sns.scatterplot(
x="tsne-2d-one", y="tsne-2d-two",
hue="y",
palette=sns.color_palette("hls", 10),
data=df_subset,
legend="full",
alpha=0.3
)
This is already a significant improvement over the PCA visualisation we used earlier. We can see that the digits are very clearly clustered in their own sub-groups. If we now used a clustering algorithm to pick out the separate clusters we could probably quite accurately assign new points to a label. Just to compare PCA & t-SNE:
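As a hedged sketch of that idea, KMeans on a well-separated 2-D embedding recovers label-like clusters. Synthetic blobs stand in for the t-SNE output here, and the cluster count, spread and random seed are arbitrary choices:

```python
# Clustering a 2-D embedding and scoring the match against the true labels.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

emb, labels = make_blobs(n_samples=500, centers=10, cluster_std=0.6,
                         random_state=42)  # stand-in for a t-SNE embedding
pred = KMeans(n_clusters=10, n_init=10, random_state=42).fit_predict(emb)

ari = adjusted_rand_score(labels, pred)  # 1.0 would mean a perfect match
print(ari)
```

The adjusted Rand index compares the clustering against the known labels while correcting for chance, so it gives a quick sanity check of how label-like the discovered clusters are.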
plt.figure(figsize=(16,7))

ax1 = plt.subplot(1, 2, 1)
sns.scatterplot(
x="pca-one", y="pca-two",
hue="y",
palette=sns.color_palette("hls", 10),
data=df_subset,
legend="full",
alpha=0.3,
ax=ax1
)

ax2 = plt.subplot(1, 2, 2)
sns.scatterplot(
x="tsne-2d-one", y="tsne-2d-two",
hue="y",
palette=sns.color_palette("hls", 10),
data=df_subset,
legend="full",
alpha=0.3,
ax=ax2
)
PCA (left) vs T-SNE (right)
We’ll now take the recommendations to heart and actually reduce the number of dimensions before feeding the data into the t-SNE algorithm. For this we’ll use PCA again. We will first create a new dataset containing the fifty dimensions generated by the PCA reduction algorithm. We can then use this dataset to run the t-SNE on.
pca_50 = PCA(n_components=50)
pca_result_50 = pca_50.fit_transform(data_subset)

print('Cumulative explained variation for 50 principal components: {}'.format(np.sum(pca_50.explained_variance_ratio_)))

[out] Cumulative explained variation for 50 principal components: 0.8267618822147329
Amazingly, the first 50 components hold roughly 83% of the total variation in the data.
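A hedged sketch of how one might pick such a cut-off automatically, i.e. the smallest number of components reaching a target variance fraction (the 85% threshold and the synthetic data below are arbitrary illustrations):

```python
# Choose n_components from the cumulative explained-variance curve.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 100)) @ rng.normal(size=(100, 100))  # correlated features

cum = np.cumsum(PCA().fit(X).explained_variance_ratio_)
n_keep = int(np.searchsorted(cum, 0.85) + 1)  # smallest n with >= 85% variance
print(n_keep)
```

Fitting `PCA()` without `n_components` keeps all components, so the cumulative curve runs all the way to 1.0 and `searchsorted` finds the first point where the target is met.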
Now let's try to feed this data into the t-SNE algorithm. This time we'll use 10,000 samples out of the 70,000 to make sure the algorithm does not take up too much memory and CPU. Since the code used for this is very similar to the previous t-SNE code I have moved it to the Appendix: Code section at the bottom of this post. The plot it produced is the following one:
PCA (left) vs T-SNE (middle) vs T-SNE on PCA50 (right)
From this plot we can clearly see how all the samples are nicely spaced apart and grouped together with their respective digits. This could be an amazing starting point to then use a clustering algorithm and try to identify the clusters or to actually use these two dimensions as input to another algorithm (e.g., something like a Neural Network).
So we have explored using various dimensionality reduction techniques to visualise high-dimensional data using a two-dimensional scatter plot. We have not gone into the actual mathematics involved but instead relied on the Scikit-Learn implementations of all algorithms.
Roundup Report
Before closing off with the appendix…
Together with some likeminded friends we are sending out weekly newsletters with some links and notes that we want to share amongst ourselves (why not allow others to read them as well?).
Appendix: Code
Code: t-SNE on PCA-reduced data
time_start = time.time()

tsne = TSNE(n_components=2, verbose=0, perplexity=40, n_iter=300)
tsne_pca_results = tsne.fit_transform(pca_result_50)

print('t-SNE done! Time elapsed: {} seconds'.format(time.time()-time_start))

[out] t-SNE done! Time elapsed: 42.01495909690857 seconds
And for the visualisation | https://towardsdatascience.com/visualising-high-dimensional-datasets-using-pca-and-t-sne-in-python-8ef87e7915b | ['Luuk Derksen'] | 2019-04-29 21:25:39.322000+00:00 | ['Data Science', 'Machine Learning', 'Data Visualization', 'Visualization', 'Python'] |
iteration 2823509 tSNE done Time elapsed 1573975932598114 second two resulting dimension visualise creating scatter plot two dimension coloring sample respective label dfsubsettsne2done tsneresults0 dfsubsettsne2dtwo tsneresults1 pltfigurefigsize1610 snsscatterplot xtsne2done ytsne2dtwo huey palettesnscolorpalettehls 10 datadfsubset legendfull alpha03 already significant improvement PCA visualisation used earlier see digit clearly clustered sub group would use clustering algorithm pick seperate cluster could probably quite accurately assign new point label compare PCA TSNE pltfigurefigsize167 ax1 pltsubplot1 2 1 snsscatterplot xpcaone ypcatwo huey palettesnscolorpalettehls 10 datadfsubset legendfull alpha03 axax1 ax2 pltsubplot1 2 2 snsscatterplot xtsne2done ytsne2dtwo huey palettesnscolorpalettehls 10 datadfsubset legendfull alpha03 axax2 PCA left v TSNE right We’ll take recommendation heart actually reduce number dimension feeding data tSNE algorithm we’ll use PCA first create new dataset containing fifty dimension generated PCA reduction algorithm use dataset perform tSNE pca50 PCAncomponents50 pcaresult50 pca50fittransformdatasubset printCumulative explained variation 50 principal component formatnpsumpca50explainedvarianceratio Cumulative explained variation 50 principal component 08267618822147329 Amazingly first 50 component roughly hold around 85 total variation data let try feed data tSNE algorithm time we’ll use 10000 sample 70000 make sure algorithm take much memory CPU Since code used similar previous tSNE code moved Appendix Code section bottom post plot produced following one PCA left v TSNE middle v TSNE PCA50 right plot clearly see sample nicely spaced apart grouped together respective digit could amazing starting point use clustering algorithm try identify cluster actually use two dimension input another algorithm eg something like Neural Network explored using various dimensionality reduction technique visualise highdimensional data using 
twodimensional scatter plot gone actual mathematics involved instead relied ScikitLearn implementation algorithm Roundup Report closing appendix… Together likeminded friend sending weekly newsletter link note want share amongst allow others read well Appendix Code Code tSNE PCAreduced data timestart timetime tsne TSNEncomponents2 verbose0 perplexity40 niter300 tsnepcaresults tsnefittransformpcaresult50 printtSNE done Time elapsed secondsformattimetimetimestart tSNE done Time elapsed 4201495909690857 second visualisationTags Data Science Machine Learning Data Visualization Visualization Python |
1,195 | The NLP Cypher | 12.20.20 | NeurIPS & Knowledge Graphs
Michael Galkin returns with his round-up of graph news, this time out of NeurIPS 🔥. Around five percent of papers at the conference were on graphs, so there's lots to discuss.
His TOC:
Blog:
For an enterprise take w/r/t the power of knowledge graphs:
Training Data Extraction Attack 👀
A new paper (with authors from every major big tech company) was recently published showing how one can attack language models like GPT-2 and extract information verbatim, including personally identifiable information, just by querying the model. 🥶! The extracted information derived from the models’ training data, which was based on scraped internet info. This is a big problem, especially when you train a language model on a private custom dataset. The paper discusses causes and possible workarounds.
Paper
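To get a feel for the attack's flavor (this is not the paper's actual pipeline), one of the filters the authors describe ranks generated samples by comparing the model's perplexity against the sample's zlib entropy: text the model predicts unusually easily, relative to how compressible it is, is a memorization suspect. The sketch below fakes the model side with made-up log-perplexities, since running GPT-2 here is out of scope; all strings and numbers are invented.

```python
import zlib

def zlib_entropy(text: str) -> float:
    """Compressed size in bits: a cheap, model-free proxy for how 'surprising' text is."""
    return 8 * len(zlib.compress(text.encode("utf-8")))

def membership_score(text: str, model_log_ppl: float) -> float:
    """In the spirit of the paper's zlib filter: low model perplexity relative to
    zlib entropy suggests memorization rather than generic fluent text."""
    return zlib_entropy(text) / model_log_ppl  # higher => more suspicious

# Hypothetical generations paired with made-up log-perplexities from some LM:
samples = [
    ("the quick brown fox jumps over the lazy dog", 45.0),  # generic text
    ("John Doe, 555-0147, 12 Elm St.", 9.0),                # unusually easy for the model
]
ranked = sorted(samples, key=lambda s: membership_score(*s), reverse=True)
print(ranked[0][0])  # the candidate most likely to be memorized
```

With these invented numbers, the fake "PII" string ranks first: it compresses poorly (high zlib entropy) yet the model supposedly finds it easy to predict.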
Book Me Some Data
Looks like Booking.com wants a new recommendation engine, and they are offering up their dataset of over 1 million anonymized hotel reservations to get you in the game. Pretty cool if you want a chance to work with real-world data.
Here’s the training dataset schema:
user_id — User ID
check-in — Reservation check-in date
checkout — Reservation check-out date
affiliate_id — An anonymized ID of affiliate channels where the booker came from (e.g. direct, some third party referrals, paid search engine, etc.)
device_class — desktop/mobile
booker_country — Country from which the reservation was made (anonymized)
hotel_country — Country of the hotel (anonymized)
city_id — city_id of the hotel’s city (anonymized)
utrip_id — Unique identification of user’s trip (a group of multi-destinations bookings within the same trip)
“The eval dataset is similar to the train set except that the city_id of the final reservation of each trip is concealed and requires a prediction.”
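A quick way to get a feel for the schema is to parse a fake row shaped like it. Only the column names below come from the schema above (hyphens dropped to make valid identifiers); the sample values, and the grouping of bookings into trips by utrip_id, are illustrative.

```python
import csv
import io
from datetime import date

# One made-up reservation row following the published column names.
raw = io.StringIO(
    "user_id,checkin,checkout,affiliate_id,device_class,"
    "booker_country,hotel_country,city_id,utrip_id\n"
    "1000027,2016-08-13,2016-08-14,359,desktop,Elbonia,Gondal,8183,1000027_1\n"
)

trips = {}
for row in csv.DictReader(raw):
    # Group bookings by trip: the task is to predict the final city_id of each trip.
    trips.setdefault(row["utrip_id"], []).append(row["city_id"])
    nights = (date.fromisoformat(row["checkout"])
              - date.fromisoformat(row["checkin"])).days

print(trips)   # {'1000027_1': ['8183']}
print(nights)  # 1
```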
UFO Files Dumped on Archive.org
There’s a nice dump of UFO files spanning several decades and countries if you want to get your alien research on. Apparently there was some copyright beef between content owners and media publishers which led ultimately to a third party obtaining a copy in the wild and uploading the files to archive.org 😭. Anyway, it’s a good data source to try out your latest OCR algo or if you are interested in searching for anti-gravity propulsion tech.
👽:
The Air Force Ported µZero
Apparently the US Air Force decided to port DeepMind’s µZero to the navigation/sensor system of a U-2 “dragon lady” spy plane. And they have called it ARTUµ, inspired by R2-D2 from Star Wars 😭. Recently, they ran the first ever simulated flight to show off the AI’s capabilities. The mission was for ARTUµ to conduct reconnaissance of enemy missile launchers on the ground while the pilot looked for aerial threats.
DARPA be like:
Article:
GitHub Search Index Update
GitHub will get rid of your repo from its code search index if it’s been inactive for more than a year. So how do you stay ‘active’?
“Recent activity for a repository means that it has had a commit or has shown up in a search result.”
Getting Rid of Intents
Alan Nichol opines on the latest state of conversational AI and his RASA platform, arguing that getting rid of intents is paramount for conversational AIs to achieve Kurzweil levels of robustness. They are currently experimenting with end-to-end learning as an alternative to intents.
In RASA 2.2 and beyond, intents will be optional.
Blog:
Speech Transformers & Datasets & WMT20 Model Checkpoints from FAIR
XLSR-53: Multilingual Self-Supervised Speech Transformer
Multilingual pre-trained wav2vec 2.0 models
Multi-Lingual LibriSpeech Dataset
WMT Models Out
Facebook FAIR’s WMT’20 news translation task submission models
Repo Cypher 👨💻
A collection of recently released repos that caught our 👁 | https://medium.com/towards-artificial-intelligence/the-nlp-cypher-12-20-20-cfc4b197517c | ['Quantum Stat'] | 2020-12-20 23:24:27.464000+00:00 | ['Machine Learning', 'Deep Learning', 'Data Science', 'AI', 'Artificial Intelligence'] |
1,196 | Create Highly Available Websites using AWS Infrastructure | In this age, the transition to e-commerce is happening at a very fast pace. These e-commerce companies, both small and big would like to have to their website up and running 24 hours and 365 days a year not to lose customers and business. For instance, think about amazon.com going down for a couple of hours.
This is where Cloud Computing comes into the picture. Cloud provides agility to build Highly Available (HA) and fault-tolerant applications which was not possible with the on-premise data centers. The Cloud vendors have Data Centers across multiple geographical locations with redundancy to help us build an HA website. The same applies to any of the business-critical applications that need to run all the time. Currently, AWS Global Infrastructure spans 21 geographical Regions with 66 Availability Zones (AZ) with more Regions and AZs to come in the near future. In this article, we will focus on building HA applications in the Cloud.
AWS Global Infrastructure
AWS Global Infrastructure has:
Regions and Availability Zones (AZ)
Data Centers in AZs
Edge Locations
Let’s discuss each of these in detail.
Regions & Availability Zones
Each Region is a geographical area with more than one isolated location called an Availability Zone (AZ). Each AZ can be a combination of one or more Data Centers. For example, North Virginia is a Region with the code us-east-1 and has 6 AZs. You can learn more about Regions in the AWS documentation.
As shown in the above diagram, there are multiple AWS Regions. And each Region has a minimum of two AZs. Each AZ has multiple Data Centers.
Note: Some of the resources in AWS are global, while some are region-specific. When we create an IAM User it is global, and when we create an EC2 Instance it is regional. While creating an EC2 Instance, we get an option to pick the Region and the AZ; there is no such option for the IAM User, as it is global.
Data Centers in Availability Zones
Amazon is very secretive about the DC locations, but that is not the case with AZs.
For example, the Mumbai AZ ap-south-1a might be in the east of Mumbai and ap-south-1c in the west. This way, if there is any natural catastrophe like a flood or fire around one of the AZs, it doesn’t affect the other AZs. Also, each Data Center (DC) has its own redundant Power Supply, UPS, Routers, Internet Connectivity, etc. to avoid any Single-Point-Of-Failure. There is no common infrastructure between two DCs, and the DCs are connected by high-speed internet connectivity for low latency.
While creating a resource, AWS gives us the option to pick the Region and the AZ, but not the DC within it. The following factors are used for picking an AWS Region.
Pricing (North Virginia Region is the oldest and the cheapest Region)
Security and compliance requirement
User/customer location
Service availability
Latency
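One way to make such a choice concrete is to score each candidate Region against weighted criteria. The criteria below are the ones listed above; the weights, the candidate Regions and all the scores are invented purely for illustration.

```python
# Invented weights: how much each criterion matters to this hypothetical team.
WEIGHTS = {"pricing": 0.3, "compliance": 0.3, "proximity": 0.2,
           "services": 0.1, "latency": 0.1}

# Invented per-Region scores in [0, 1] (higher = better fit).
regions = {
    "us-east-1":  {"pricing": 0.9, "compliance": 1.0, "proximity": 0.3,
                   "services": 1.0, "latency": 0.4},
    "ap-south-1": {"pricing": 0.7, "compliance": 1.0, "proximity": 0.9,
                   "services": 0.8, "latency": 0.9},
}

def score(region: str) -> float:
    """Weighted sum of the criteria for one Region."""
    return sum(WEIGHTS[k] * regions[region][k] for k in WEIGHTS)

best = max(regions, key=score)
print(best)  # ap-south-1, with these made-up numbers
```

For a team near Mumbai with latency-sensitive users, proximity and latency outweigh us-east-1's pricing edge in this toy example; change the weights and the answer changes.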
Note: For the sake of learning AWS, North Virginia is best, as it is one of the cheapest AWS Regions and is usually the first one to support any new AWS feature for us to try out.
Edge Locations
In the AWS Global Infrastructure, the Edge Locations are used for caching static and streaming data. Currently, there is a global network of 187 Points of Presence (176 Edge Locations and 11 Regional Edge Caches) in 69 cities across 30 countries. Compared to the AZs, the number of Edge Locations is almost triple, and they sit much closer to the end-user. This makes the Edge Locations a ripe candidate for caching data, while the Regions are for hosting web servers, databases and so on.
These Edge Locations provide lower latency when compared to the Regions. When a request is made by the user, the Edge Location checks if the data is there locally and if not then gets the data from the appropriate Region, stores it locally and then passes it on to the user.
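That read-through behavior can be sketched in a few lines. The names and data here are invented, and a real Edge Location also handles TTLs, cache eviction and origin failover; this shows only the hit/miss logic described above.

```python
# The origin content served out of a Region (invented path and bytes).
ORIGIN = {"/logo.png": b"bytes stored at the us-east-1 origin"}

edge_cache = {}  # one PoP's local store, empty at startup


def serve(path: str) -> bytes:
    if path in edge_cache:       # cache hit: answer from the edge, low latency
        return edge_cache[path]
    data = ORIGIN[path]          # cache miss: fetch from the appropriate Region
    edge_cache[path] = data      # store it locally for subsequent users
    return data                  # then pass it on to the user


serve("/logo.png")  # first request: miss, goes back to the Region
serve("/logo.png")  # second request: hit, served from the Edge Location
```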
“Edge Location” is an AWS term that comes from the AWS CloudFront service. An AWS Edge Location is called a PoP (Point of Presence) in general terms. A similar service is provided by Content Distribution Network (CDN) providers like Akamai, Cloudflare, etc. These CDN providers cache data, like streaming video during a live match, to provide a better experience to the end-user. Below is the table comparing the AWS and the general terminology.
Creating a Highly Available Application using the AWS Global Infrastructure
AWS Global Infrastructure provides a set of Regions, AZs and Edge Locations to create a Highly Available and Fault Tolerant application.
Example
If a web server is hosted in a single AZ, any problem with that AZ will make the website unavailable. To get around this, the web server can be deployed in multiple AZs within the same Region, as shown below. Similarly, any problem with a Region will also make the website unavailable. To make the website even more available, we can have the web server across multiple Regions; this way, the availability of the website is not dependent on the availability of a single Region.
HA websites and applications are created by using redundancy, but the problem with redundancy is that it comes at a cost: we need to run the same website at multiple locations. Also, the same web server can be deployed in some other Cloud like GCP or Azure. Here we are running the same web server in different Clouds, and so the configuration is called a Multi-Cloud Configuration. Google Anthos helps in building hybrid and Multi-Cloud applications using Kubernetes.
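The payoff of that redundancy can be put in rough numbers. If each deployment is up with probability a and failures are independent (an idealization: real AZ or Region failures are correlated to some degree), then at least one of n copies is up with probability 1 - (1 - a)^n:

```python
def availability(a: float, n: int) -> float:
    """P(at least one of n independent replicas is up), each up with probability a."""
    return 1 - (1 - a) ** n

print(round(availability(0.99, 1), 6))  # 0.99     -> ~3.65 days of downtime/year
print(round(availability(0.99, 2), 6))  # 0.9999   -> ~53 minutes/year
print(round(availability(0.99, 3), 6))  # 0.999999 -> ~32 seconds/year
```

Each extra replica multiplies the unavailable fraction by (1 - a), which is why even modestly reliable components, duplicated across AZs and Regions, yield very high availability.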
When we have the web server across multiple locations, how does the traffic get distributed among them? We can’t simply keep some of them sitting idle. This is where the AWS Elastic Load Balancers come into play. The AWS ELB takes requests from end-users and distributes them across multiple web servers.
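Conceptually, the simplest thing such a component can do is rotate through its targets. The sketch below shows only that round-robin idea, with invented server names; a real ELB also health-checks targets, removes unhealthy ones from rotation, and supports other routing algorithms.

```python
import itertools


class RoundRobinBalancer:
    """Toy load balancer: each request goes to the next target in rotation."""

    def __init__(self, targets):
        self._cycle = itertools.cycle(targets)

    def route(self, request):
        return next(self._cycle)  # pick the next web server for this request


# One hypothetical server per AZ in a single Region.
lb = RoundRobinBalancer(["web-1a", "web-1b", "web-1c"])
print([lb.route(r) for r in range(5)])
# ['web-1a', 'web-1b', 'web-1c', 'web-1a', 'web-1b']
```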
Conclusion
While setting up our own DC gives us the flexibility to design our hardware and software, it comes at the cost of time and money. By leveraging AWS, with a few clicks or API calls we can create a highly available, secure, fault-tolerant, reliable, performant application. Cloud vendors like Google, Amazon and Microsoft have been spending billions to set up new DCs in different geographical locations.
If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, DevOps, Ethical Hacking, then you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of AWS. | https://medium.com/edureka/create-websites-using-aws-1577a255ea36 | ['Vishal Padghan'] | 2020-09-11 10:04:57.347000+00:00 | ['Aws Global Infrastructure', 'Web Development', 'AWS', 'Amazon Web Services', 'Cloud Computing'] |
1,197 | “Stimulus”, or Survival? | “Stimulus”, or Survival?
The longer Washington plays games with our lives, the more needless suffering real people are forced to experience.
Photo by Maria Oswalt on Unsplash
The dead of winter is coming, and with it comes the culmination of the housing and economic crisis that has only continued to build since the start of the coronavirus pandemic that’s ripping through the United States even worse than before. As I have said countless times before, whether it be the total lack of support for parents and teachers struggling to help children get through this unprecedented school year, the loss of over 300,000 lives, or the fact that that tens of millions of people are about to lose their housing through no fault of their own, it’s irrefutable that the government has failed us in every conceivable way virtually from the moment we were made aware of how significant the effect of this pandemic would be. The idea that another “stimulus” check for the American people is even controversial is testament alone to how little our lawmakers tasked with representing us actually care.
One need only look at the unemployment sub-reddit on Reddit to get a glimpse of just how bad things have been, and continue to get worse. Jeff Stein with The Washington Post shared an image of one of the posts concerning unemployment, where the user writes:
“I hope we all get some good news today. I’m on the verge of being homeless as well since I haven’t paid anything to my roommate and their girlfriend (ruthless) wants to kick me out into the cold. (Not her problem, as she says. Lol) I applied for SNAP yesterday and I have been dumpster diving for food the last few days outside of restaurants… I don’t have a car so every time I walk to food banks and stuff like that they’re either closed or need proof of residency and I don’t have that as I’m just couch surfing…”
Under the ‘unemployed’ sub-reddit, another user wrote:
“I’m losing it. I’ve been unemployed for 10 months now because of Covid. I’ve applied to over 600 jobs and only one interview. I have BS in Business Admin and MS in International Relations. I’ve had my resume redone, done interview prep, updated LinkedIn, and reached out to contacts/ connections. I’m coming off a terrible experience at last job with the US government that left me diagnosed with PTSD, major depressive disorder, and severe anxiety. Im really really not well enough mentally to work full-time and my depression has been more than overwhelming the last few months. I’m struggling getting out of bed, taking my dog out, and I am having an extremely difficult time. I can’t sleep through the night and have difficulty eating. Therapy isn’t working anymore. I live alone (immune compromised). No family support (financially or emotionally). Unemployment ran out two weeks ago. I’ve lived in 7 different places in the last year. Lost my apartment in June, managed to find new place in October, and I am hanging on by threads. Rent due in two weeks. Little savings left, high credit card debt and all bills sitting on credit card. I’m not sure how much longer I can keep doing this. Do I check into mental health facility? What do I do please help. Please any advice would be appreciated.”
In a consumer-driven economy like that of the United States, it should go without saying that the stability of the economy is dependent upon the ability of average Americans to actually be able to spend. It should come as no surprise, then, that the direct relief payments the American people desperately need have been dubbed “stimulus checks”. That said, at this rate, the situation has been so dire that the need has nothing to do with economic stimulus, but rather with the ability just to survive.
It’s near impossible not to read stories like the one above, and not sink into feelings of total, utter hopelessness and raw anger. For months I have been wondering what it would take for the American people to rise, whether it be mass demonstrations in the streets or a general strike, and demand that basic needs be met in the richest nation on earth. But when reflecting on what millions of people are currently going through, can we really be surprised that people don’t have the time or the energy to organize? Can we really be surprised that people have become so conditioned to the cruelty that our lawmakers feel entirely comfortable subjecting their constituents to unnecessary trauma while they hold our stability hostage? Are they capable of realizing that the longer they allow this to fester, the less likely it is that any amount they send us will be enough to help people catch up on their bills and overdraft fees, let alone stimulate their beloved economy?
The wealth of the richest people in the country has grown by such an unprecedented amount as a direct result of this pandemic, a study found that a one time wealth tax could bring in enough money to send every single American a check of $3000, and still leave the elites with more money than they had at the beginning of the pandemic. Meanwhile, single mothers who lost their jobs as a direct result of the same circumstances that saw a surge in their fortunes are putting their bank accounts into overdraft just to buy diapers for their babies.
I can’t help wondering what it will take for our lawmakers to actually care. What are the American people going to have to resort to in order to get through to people who have been so insulated from the pain and the rage, that they have no ability to understand how their failures are directly impacting tens of millions of people? How much more are we going to be expected to put up with before they realize these checks aren’t some frivolous luxury they don’t feel we’re worthy of receiving, but an absolute necessity? | https://medium.com/discourse/stimulus-or-survival-f63c68d16310 | ['Lauren Elizabeth'] | 2020-12-20 03:58:59.724000+00:00 | ['Economy', 'Politics', 'Society', 'Coronavirus', 'Government'] |
1,198 | Learnin’ Good All This AI Stuff for Product Management | How product managers can learn what they need to know about AI to be at peak effectiveness
The most frequent question I get about AI from colleagues, product managers and others, is,
“What do I need to know about AI and what’s the best way to learn it?”
I’ve invested a considerable amount of time taking numerous courses, so I dug into my emails to collect some of the suggestions I’ve doled out.
Is this necessary?
First, it’s worth addressing the extent to which a product manager even needs to understand how AI works in order to be effective. There is an endless stream of business articles about what AI is, what it does and how it is going to disrupt this and that, all of which is great, but I am talking about understanding how it works (e.g. stochastic gradient descent, convolutional neural networks, tree search, supervised vs unsupervised learning, and all the other mumbo jumbo that sounds complicated until you know what it is).
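To make that concrete (this sketch is mine, not taken from any of the courses discussed below): stochastic gradient descent, for instance, reduces to a few lines of Python once you see it. Here it fits a single slope parameter to noisy data drawn from y = 2x:

```python
# Minimal stochastic gradient descent: fit y = w*x to noisy samples of y = 2x.
import random

random.seed(0)
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in range(1, 21)]

w = 0.0      # the single parameter we are learning
lr = 0.001   # learning rate (step size)

for epoch in range(50):
    random.shuffle(data)      # "stochastic": visit one random example at a time
    for x, y in data:
        error = w * x - y     # prediction minus target
        grad = 2 * error * x  # derivative of the squared error w.r.t. w
        w -= lr * grad        # step downhill

print(round(w, 2))  # close to the true slope, 2.0
```

That is the entire idea behind the scary name; the deep learning courses below mostly stack variations of this loop.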
As Marty Cagan pointed out in Inspired (a must-read), product managers can come from a variety of different vertical disciplines, including those that are not necessarily technical, such as marketing or sales. Can these individuals, or even product managers who come from engineering but don’t necessarily have a background in AI, be successful managing AI products?
In his article Behind Every Great Product, Marty says,
While you don’t need to be able to invent or implement the new technology yourself in order to be a strong product manager, you do need to be comfortable enough with the technology that you can understand it and see its potential applications.
With this, I could not agree more. As mentioned in my last post, if a product manager’s two key responsibilities are assessing opportunities and defining products to be built, these become particularly challenging without an understanding of the underlying technology.
AI, however, is raising the bar. Almost 3 years ago Jon Evans welcomed everyone to the era of “hardtech” by comparing beginner tutorials for Android and TensorFlow. (I program in both and the latter is much more difficult.) Things have not gotten easier in the intervening years. For those who want to product manage in AI (which I highly recommend), at this very early stage of AI commercialization, it is critical to understand how it works in order to grasp what it cannot yet do and envision what is possible. You might be able to bumble around in PowerPoint for a while, but whether you are technical or not, it is important to learn the technology in order to be able to understand it and see its potential, not to mention communicate credibly with engineering.
If you are still skeptical, allow me to also offer a comprehensive list of all the costs associated with learning AI:
If you don’t think you have what it takes to learn AI, whatever you imagine that to be, you are probably mistaken. Read on or jump to the bottom for some inspiration (then come back).
How to get there
First, if you don’t know how already, learn how to code (a little).
There has always been a lot of discussion about whether or not founders need to know how to code. TechCrunch had an article many years ago about "The Trouble with Non-Tech Co-founders" and then Steve Blank (required reading) opined on "Why Founders Should Know How to Code." More recently TechCrunch implored people to "Please Don't Learn to Code" before posting an article on how the CEO of Auth0 still codes. Recognizing that product managers and founders are different things, although running product is almost always one of the responsibilities of a founder, the relationship here is apt. Regardless, the discussion around whether or not these people need to know how to code is complicated.
The good news is that the amount of coding skill you need to get what you want out of your AI studies is relatively easy to acquire. You are not going to write production code, or any code that will go into the products you manage; you are just going to write enough to get through the classes. Take an online introduction to Python class (I have never taken one so cannot make a recommendation), get yourself an account on Stack Overflow (check out that juicy rep) and you should be good to go.
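For calibration — this is my own rough illustration of the level, not material from any particular class — the Python you will write in these courses is mostly small functions, loops, and list comprehensions, nothing exotic:

```python
# The flavor of Python these classes expect (a rough illustration):
# small helper functions, loops, and list comprehensions.

def normalize(values):
    """Scale a list of numbers to the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def accuracy(predictions, labels):
    """Fraction of predictions that match the labels."""
    hits = sum(1 for p, y in zip(predictions, labels) if p == y)
    return hits / len(labels)

scores = normalize([3, 7, 5])
acc = accuracy([1, 0, 1, 1], [1, 0, 0, 1])
```

If you can read and write code like that, you can survive the programming side of the MOOCs below.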
Once you have the programming basics, then what? While no one could ever explore every AI class under the sun, here is a list of some MOOCs that I recommend, with a little commentary:
Udacity Deep Learning Nanodegree: This is the first one I took, without ever having written a line of Python. (Not recommended, but I was already proficient in Java and R.) I was in the first group that ever went through this course, so it was more than a little rough, but it was fun and amazing. I was instantly hooked. Udacity now also has an AI Nanodegree, but I’m unfamiliar with that.
Coursera Machine Learning offered by Stanford: I took this one, too, and it’s fantastic. Prof. Andrew Ng is a wonderful instructor and the way he organizes the concepts to cascade one into the next is beautiful. It uses Matlab (you can use Octave for free) which puts some people off, but it is worth it.
Coursera Deep Learning Specialization offered by Stanford: I took this one, although as part of Stanford’s CS230 course (see below), again with Prof. Andrew Ng. Like his ML course, it is marvelous.
Coursera Machine Learning Foundations offered by University of Washington: I have not taken this one, but it comes highly recommended by my good friend Anthony Stevens, Global Enterprise AI Architect at IBM.
fast.ai: I have not taken this, but I’ve heard it is outstanding, and it is free. They tout not requiring any advanced math prerequisites. Leave a comment if you have any opinions.
EdX Artificial Intelligence offered by Columbia University: I haven’t taken this one either, but I’ve heard great things.
Udacity AI Product Manager Nanodegree: This one certainly sounds like it is in the bullseye, and I’m a fan of the Udacity Deep Learning Nanodegree, but this one is non-technical (no programming required), so I’m skeptical for the reasons described above. I haven’t taken it, and nobody else has yet either, because it was only announced yesterday. If you want to do it for real, which I am sure you do, then I’d recommend one of the courses above. (They are hard, but not that hard.) As soon as I learn something about this one I’ll report back here.
Going all the way
I am not recommending this for product managers because it is overkill, a ton of work with deadlines, and expensive, but I am in the middle of Stanford’s 4-course, 16-unit Graduate Certificate in Artificial Intelligence. These are not MOOCs or continuing education (I’ve done those, too), but rather 200-level, master’s degree classes, with serious prerequisites and full-time students, at Stanford, in the lecture hall, with TAs and exams and team projects and the rest of the ball of wax. Having completed CS230 and CS221, I don’t know which two courses I’ll take next (I want to take all of them), but I’m aiming to finish in March 2020.
Pacman competition assignment from Stanford’s CS221 — AI Principles & Techniques
Many friends have asked me, “Why?” It was a bit impulsive, but upon reflection I have a few answers:
It makes me better at my job. In my product management role at PARC, helping Xerox launch AI-powered applications, I want to explore every possibility to bring value to the team.
I can see Hoover Tower from my office window. Proximity to campus isn’t a reason for anything, but it is motivational.
The classes are excellent. AI is amazing. The assignments are fun. In CS221 we developed AI to play Pacman. The class had a competition to see whose AI could generate the highest average score and I placed 11th out of ~350 students, so not too bad.
As an electrical engineer before getting an MBA, this feels a little like I am getting “back to my roots.”
You can potentially go on to get a Stanford master’s degree in CS! If accepted into the program, you can roll over the units from the Graduate Certificate, which means you are already a third of the way there. Apparently there are department emails that refer to these classes as “tryouts,” so if I do well enough (I have heard it is very challenging to get in even with straight As) I’ll apply.
Learning is awesome.
Bringing it home
As Rachel Thomas, deep learning researcher and co-founder of fast.ai, said recently during a talk about AI,
“It’s natural to feel like if you’re not a math genius or you don’t have a PhD from Stanford that you couldn’t possibly hope to understand what’s going on, much less to get involved. I’m here to tell you that is false.”
Agreed! Sure, there is a bit of math, but this isn’t string theory. Rachel goes on to talk about Melissa Fabros, an English literature Ph.D. (almost literally a poet) who took the fast.ai course and went on to win a grant to work in computer vision. With some determination and perseverance, almost everyone can learn the basics of how AI works, and then some.
If you are a product manager, or would like to become one, and you want to work in AI (or are perhaps there already), then there is no reason not to learn a little Python and get going with the MOOCs. If you are passionate about the technology, which frankly you had better be if this is what you want to do, then you will get there.
So get pumped, sign yourself up for a class, start cranking and let me know how it goes in the comments below. Good luck! | https://medium.com/swlh/learnin-good-all-this-ai-stuff-for-product-management-1133b69aee | ['Mark Cramer'] | 2019-08-09 15:50:42.157000+00:00 | ['Machine Learning', 'Product Management', 'Artificial Intelligence', 'AI', 'Learning'] |
1,199 | 4 Uncommon Python Tricks You Should Learn | 1. Multiple Assignment
When you want to give several variables the same values, you will often see code that repeats the assignment statement for each variable.
It may look something like the following code, which uses one separate assignment per variable:
a = 1
b = 1
c = 1
In Python, we can simplify this and assign to every variable at once.
a = b = c = 1
# a == 1
# b == 1
# c == 1
After doing this, every variable was assigned to the right-most value in the chain. Since it just takes the right-most value, we can also replace 1 with a variable. | https://medium.com/better-programming/4-uncommon-python-tricks-you-should-learn-2d3a156c10f2 | ['Devin Soni'] | 2020-02-12 17:24:54.423000+00:00 | ['Programming', 'Coding', 'Software Engineering', 'Python', 'Technology'] |
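To illustrate that last point (my own example, not from the article), the right-most value can be a variable or any expression — with one caveat worth knowing: if that value is a mutable object, every name in the chain ends up bound to the *same* object.

```python
x = 5
a = b = c = x    # every name is assigned the right-most value, here the value of x

# Caveat: with a mutable right-most value, all the names alias ONE object.
xs = ys = []     # xs and ys refer to the same list
xs.append(1)     # ys also sees the change, because it IS the same list
```

After this runs, `a`, `b`, and `c` are all `5`, while `ys` is `[1]` even though it was never appended to directly — chained assignment copies references, not objects.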