Dataset schema (192k rows):

Column      Type    Value / length range
Unnamed: 0  int64   0–192k
title       string  1–200 chars
text        string  10–100k chars
url         string  32–885 chars
authors     string  2–392 chars
timestamp   string  19–32 chars
tags        string  6–263 chars
info        string  45–90.4k chars
4,200
20 Free iPhone Mockups [PSD, Sketch] - December 2020
Device mockups are getting more and more popular these days. Apple started this trend years ago by sharing frontal PSDs of its recent iPhones on its Guidelines portal. These mockups were just renders of the devices, without any artistic, branded touch. Many designers in the industry felt the need to create custom frames to present their products in a unique way. And over the years, the visual style of mockups has gone from photorealistic to simplified versions like some of the ones above. It is now a huge trend: designers wait each fall for the upcoming Apple iPhone event and start drawing as Tim Cook speaks on stage, then post their work on Dribbble or Bē. Q: Where can you use these mockups? A: Many products these days find these templates/mockups useful for a wide range of marketing needs: App Store screenshots, app landing pages, or simply presenting UI/UX design work using these iPhone X mockups.
https://uxplanet.org/free-iphone-x-mockups-psd-sketch-4c455d74b2c3
['They Make Design']
2020-12-21 07:41:37.838000+00:00
['Mockup', 'Design', 'Sketch', 'Psd', 'Iphone X']
4,201
A Look at the Algorithms Behind Natural Language Processing (NLP)
Natural language processing (NLP) describes the interaction between human language and computers. Human language is different from what computers understand: computers understand machine language, that is, binary code. Computers don’t speak or understand human language unless they are programmed to do so, and that is where NLP comes into the picture. How does natural language processing work? There are two main techniques used in NLP: the first is syntax analysis and the second is semantic analysis.

Syntax is the structure or form of expressions, statements, and program units. Syntax can be used to assess meaning from a language based on its grammatical rules. The techniques used in syntax analysis include:

I.) Parsing: a grammatical analysis of a sentence.
II.) Word segmentation: dividing a large piece of text into units.
III.) Sentence breaking: placing sentence boundaries in large texts.
IV.) Morphological segmentation: dividing words into groups.
V.) Stemming: reducing inflected words to their root forms.

Semantics is the meaning of those expressions, statements, and program units. NLP applies algorithms to understand the meaning and structure of sentences. The techniques used in semantic analysis include:

I.) Word sense disambiguation: deriving the meaning of a word based on its context.
II.) Named entity recognition: determining which words can be categorized into groups.
III.) Natural language generation: using a database to work out the semantics behind words.

We can also divide the NLP field into two camps: the linguistics camp and the statistics camp. The idea of NLP started in the early era of AI. In fact, it came into existence in the time of Alan Turing, who is considered the founder of both AI and computing in general. The challenge was to create a machine that could converse in a way indistinguishable from a human, a challenge also known as the Turing test. “ELIZA” was one of the earliest famous AI programs and can be considered an attempt to beat the Turing test. There were no algorithms at the time that could really understand human language, so ELIZA and the other chatbot programs of that era were programmed by manually crafting lots and lots of rules to respond to human conversation. Those programs never had the capacity to actually understand natural language; they were, rather, a result of psychology, built to fool humans.

So the concept of linguistics arose, which can be viewed as the science of how language is created. Linguists search for patterns in a language and formulate rules for constructing and interpreting all natural language utterances, and models or grammars are generalized on the basis of those rules. (Linguistic rules are also used to parse and recognize artificial languages when building a compiler.) Parsing natural language is very similar, except that context-free grammars are too limited, so context-sensitive grammars are used instead.

Then, in the ’90s, statisticians approached the NLP problem from a different perspective, and essentially all the linguistic theories were thrown out. A simple model of language was introduced, called the “Bag of Words” model. This model is very simple: it assumes that a sentence is nothing but a bag of words.
This model does not care about the order of words. For example, “I go for walk” and “walk I go for” are identical under this model, even though one of the two sentences has a much higher probability of occurring. When using this model there is no need for meanings: it assumes that whenever it sees these four words, the text likely carries a similar meaning. Why would anyone want to use the “Bag of Words” model when there is a sophisticated linguistic model? What advantages does the statistics camp provide? The statistics camp wants to avoid the manual programming of rules and instead interpret language automatically, in a supervised fashion, by feeding in large amounts of labelled data and learning patterns. (A minimal sketch of the model follows below.)
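To make this concrete, here is a minimal Bag of Words sketch in Java. The class name and the two sample sentences are just for illustration, not from the original article:

```java
import java.util.HashMap;
import java.util.Map;

public class BagOfWords {
    // Turn a sentence into an unordered word-count map, discarding word order.
    static Map<String, Integer> toBag(String sentence) {
        Map<String, Integer> bag = new HashMap<>();
        for (String token : sentence.toLowerCase().split("\\s+")) {
            bag.merge(token, 1, Integer::sum); // count each word occurrence
        }
        return bag;
    }

    public static void main(String[] args) {
        // The two sentences from the text are indistinguishable under this model.
        System.out.println(toBag("I go for walk"));
        System.out.println(toBag("walk I go for"));
        System.out.println(toBag("I go for walk").equals(toBag("walk I go for"))); // true
    }
}
```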
Let’s talk about some of the existing algorithms. Algorithms can be as simple as the Vector Space Model, where text is represented as vectors and information is obtained through vector operations; embeddings are one such use case. There are inference-driven algorithms, such as frequent item sets, where you look into a text corpus and try to make inferences about what would come next. There are relevance-ranking algorithms used in search engines, such as TF-IDF, BM25, and PageRank. There are algorithms used to extract meaning from texts, like Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (pLSA), and Latent Dirichlet Allocation (LDA). There are algorithms that try to derive the sentiment, context, and subject of written text; sentiment analysis is very popular, as it tries to associate a sentiment value with unknown words. And in recent times there are deep learning models that use statistical methods to process tokens with multilayer ANNs. As we can see, there is no single type of algorithm for NLP. [Figure: various approaches to NLP information retrieval.]

Coreference resolution: “Adam stabbed Bob, and he bled to death!” It is a huge problem in NLP to determine whether “he” in the above sentence refers to Adam or Bob. It is a very well-studied problem and even has a fancy name: “coreference resolution”. In linguistics, coreference (sometimes written co-reference) occurs when two or more expressions in a text refer to the same person or thing; they have the same referent, as in the sentence above. Back in 2001, the problem was approached with machine learning (paper). The proposed classifier was a decision tree, which classifies a given candidate pair of words as either “coreferential” (referring to the same thing) or “not coreferential”. The following features were used for each candidate pair (a toy feature extractor is sketched after this list):

Distance: the number of sentences between the two words (the greater the distance, the less likely the pair is coreferential).
Pronoun: whether both candidate words are pronouns, one of them is, or neither.
String Match: the overlap between the two words (“Prime Minister XXX” and “The Prime Minister” can be considered coreferential).
Number Agreement: whether the candidate pair of words are both singular, both plural, or neither.
Semantic Class Agreement: whether the candidate pair of words are of the same semantic class, if any (“Person”, “Organization”, etc.).
Gender Agreement: whether the candidate pair of words are of the same gender, if any (“Male”, “Female”, “Neither”).
Appositive: whether the candidate pair of words are appositives (if a sentence starts with “The Nepali President, XXX, said…”, then “President” and “XXX” are appositives and are probably coreferential).

…and a few more similar features.
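As an illustration of these candidate-pair features, here is a toy Java sketch. The Mention and PairFeatures types, the attribute set, and the crude string-match heuristic are hypothetical simplifications, not the feature extractor from the 2001 paper:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CoreferenceFeatures {

    // A candidate mention: its text, sentence index, and a few simple attributes.
    record Mention(String text, int sentenceIndex, boolean pronoun, boolean plural, String gender) {}

    // One feature vector per candidate pair, mirroring the features listed above.
    record PairFeatures(int distance, boolean bothPronouns, boolean stringMatch,
                        boolean numberAgreement, boolean genderAgreement) {}

    // Crude string-match feature: do the two mentions share any non-article token?
    static boolean stringMatch(String a, String b) {
        Set<String> shared = new HashSet<>(List.of(a.toLowerCase().split("\\s+")));
        shared.retainAll(List.of(b.toLowerCase().split("\\s+")));
        shared.remove("the");
        return !shared.isEmpty();
    }

    static PairFeatures extract(Mention a, Mention b) {
        return new PairFeatures(
            Math.abs(a.sentenceIndex() - b.sentenceIndex()), // Distance
            a.pronoun() && b.pronoun(),                      // Pronoun
            stringMatch(a.text(), b.text()),                 // String Match
            a.plural() == b.plural(),                        // Number Agreement
            a.gender().equals(b.gender())                    // Gender Agreement
        );
    }

    public static void main(String[] args) {
        Mention m1 = new Mention("The Prime Minister", 0, false, false, "male");
        Mention m2 = new Mention("Prime Minister XXX", 2, false, false, "male");
        // These features would feed a decision-tree classifier that labels the
        // pair "coreferential" or "not coreferential", as in the 2001 approach.
        System.out.println(extract(m1, m2));
    }
}
```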
https://tmilan0604.medium.com/look-on-algorithms-behind-natural-language-processing-nlp-e06f18b6c31d
['Milan Thapa']
2020-10-30 06:36:09.990000+00:00
['Machine Learning', 'Artificial Intelligence', 'Algorithms', 'Naturallanguageprocessing', 'Turing Test']
4,202
About Written Tales
pixabay.com Who is behind Written Tales? I would like to introduce myself. My name is Kevin, a writer just like you, trying to build a reader base. Why? I was tired of how the publishing business works. How some charge writers a fee to submit their work. How others make the writer wait for months without hearing a word. How the entire process can be discouraging for new authors. Then a chord struck within. I had a desire to create a publishing platform. A program to help writers grow their talent and promote the work they write. And from this, “Written Tales” was born.

The Goal The goal of Written Tales is to give new and seasoned writers a platform where they have an uncensored voice. A stage where their work can reach maximum exposure through multiple social platforms. Without the creative arts, innovation will die. Society will tumble into the abyss of ignorance. And critical thinking will become a lost art. Writers need an uncensored platform for their voices, and a community to help them grow. Due to this need, I decided to fund the project myself, because I believe in the cause.

Uncensored? We are not reckless in what we publish, but we are open-minded. We believe in free speech and will protect it even if we do not agree with the author’s position. Some creative works may offend; others will bring happiness. And this is the beauty of a platform that does not restrict a person’s view. Again, we will not publish reckless writing. But writing that leads to lively debate, we will.

Final Comments We are here to help bring literature back to the forefront of society through short stories, flash fiction, and poetry. If you would like to be a part of this cause, please join as an author, or support us by signing up for the Written Tales newsletter.
https://medium.com/written-tales/about-written-tales-d64a809d2cee
['Written Tales']
2020-11-20 11:26:29.287000+00:00
['Poetry', 'Publishing', 'Fiction', 'Writing', 'Written Tales']
4,203
An Oral History of ‘Coffee News’
Introduction

You may not notice it, sitting in the background. Next to the lost-pet notices and bassist want-ads. Above the sugar. Its tan visage inviting you for a five-second perusal. Just enough color to camouflage a weak coffee stain. Coffee News is everywhere and nowhere. Widely read, but never truly understood. The anodyne accompaniment to many a Starbucks study session. The anesthetic accomplice to many a caffeinated evening’s eavesdropping. On the scale of stimulating reading material, today’s Coffee News lies somewhere between Highlights magazine, a Lutheran church bulletin, and a Carl’s Jr. place mat. But behind that drab page lies a story of bacchanalia, murder, betrayal, greed, and scandal that has long been known only to a select few. Scattered until now in family legends, depositions, indictments, and unsold vanity autobiographies, the history of Coffee News is presented here for the first time, told in the words of those who lived the dream…or the nightmare.

On the scale of stimulating reading material, today’s ‘Coffee News’ lies somewhere between ‘Highlights’ magazine, a Lutheran church bulletin, and a Carl’s Jr. place mat.

PART ONE: The Indianapolis Imbroglio

Walter Fine, Managing Editor, Coffee News, 1978–1993: I suppose you’re asking me because I’m the oldest one left, everyone I know is dead, and I have no one else to talk to, so you think I’ll agree to your interview. Well, you’re right. So here goes. I’ll tell it to you the way I heard it. Linus Anacletus Clement Coffee made his fortune as a slave trader in Vicksburg, Mississippi. His son, Clement Coffee, grew that fortune as a Mississippi River barge pilot and later as a steamboat captain who specialized in returning runaway slaves. His son, Clement Coffee II, Chip, was a cattle trader and meatpacking magnate whose abattoirs were the basis for The Jungle. Clement III, Trip, was a renowned lawyer in St. Louis. He cornered the market in refrigerated rail cars and physically held them ransom at a rail yard in Kansas City using a private army of Pinkerton men. In that way, he amassed a still greater fortune. Clement IV, Skip, was sort of a reclusive philanthropist. He financed Birth of a Nation, has a dorm named after him at Dartmouth, and his charitable gifts endowed work-orphanages and union-busting-private-detective schools around the country. His first son, Clement V, Quint, became a priest and died of dysentery while aiding Colombian children freed from slavery on coffee plantations. Quint’s younger brother, Vance Coffee, was a rampaging drunk and a womanizer. He invested the whole family fortune into casinos in Warm Springs, Nevada. He thought the name was better than the other options, Reno and Las Vegas. Well, he didn’t think about where the interstate was going to go, and that was that. Lost the whole fortune. He went to Colombia to borrow money from Quint. Discovered powdered cocaine there. Started smuggling it in. He thought he had snorted it all on the plane ride to Miami, but he forgot the pinch in his snuff box. So he got busted at customs. Went to prison in Terre Haute, Indiana for a few years. When he got out, he broke into an elementary school in Indianapolis and made off with five mimeograph machines. He stashed them under a nearby bridge, where he lived at the time. He published the first edition of Vance Coffee’s News of the Day in 1951. It started as a really virulent right-wing rag. Truman was a commie, Ike was a commie, Nixon’s a commie, there’s fluoride in the toothpaste. All that stuff.
He’d pass it out at VFW halls, tattoo parlors, and biker bars. Old Bob Welch was one of the earliest readers, and I’ve heard it said it inspired him to found the John Birch Society in ’58. Never had anything to do with coffee. Unless you count Vance going to Colombia. And even that had more to do with cocaine, as it turned out.

Vivian Martz, acquaintance of Vance Coffee: There was a joke in those days: “What do you call ten copies of Coffee’s News? A blanket.”

Felicia Wittingdon, Vice President of Franchising & Distribution, Grupo CN Media, S.A., owner of Coffee News, 2008– : Yeah, I’ve heard that one. I think today I hear it more as a motto, “Coffee News: The blanket you can read.” Things like that. Irony, you know. We take pride in it today, our service to the homeless. We’ve switched to warmer paper. It is a special paper, too, made so that if you scrunch it up a bunch of times, it gets soft enough to use as toilet paper if you’re in a pinch…so to speak. We thought of putting adhesive on the bottom and right margins to make it possible to actually attach them together to form a blanket. But it’s a cost thing. It’s print media and it’s free, so, as you can imagine, our budget is pretty constrained.

Coffee News: The blanket you can read.

Ian Hogg, creator of “Slag Off, You Posh Twats!,” the logo of Coffee News since 1970: The logo began as my proposal for the cover of Sgt. Pepper’s Lonely Hearts Club Band. I still think Pete Blake nicked the idea, the bastard. Instead of cutouts of all these pop and political figures, I had had a collage of all these miserable people from Liverpool from all walks of life. Drunk pipe fitter. Smoking chimney sweep. Bitter cab driver. Newsboy on diet pills. Mum pushing a pram with her fifth baby, taking a nip. All glaring at the Beatles like, “You fink you’re better’n, you cunts? Fook right off!” And the Beatles sitting there, in all that ridiculous regalia like, “Yeah, you Scouser twats, we’re rich innat ’n’ yer bollocks!” So it was this indictment of the nouveau riche and tax-dodging cunts like the Beatles. Lennon got it. I think Paul thought it hit a little too close to home. Posh twat. Anyways, Pete Blake takes that and replaces these Liverpudlians with famous people and makes a queen’s tit. Goes down in history. So he’s a gobshite. But I had done these cartoony sketches of the idea before I’d made the photo collage. I had one in a drawer somewhere after I moved to New York in early 1970. I’d just finished doing Today’s Now, Currently, a pop-art exhibit at the ICA in ’69. Stan Mason met me at this bar in Greenwich Village one day. He’d just gotten to New York and asked if I had anything he could use as a logo for this new paper he was peddling. Offered to pay. So I dug up one of those drawings, turned those jealous frowns upside down, tacked in some newspapers, and there you have it. Two hundred dollars. Never thought about it again until you asked.

Erin Stolhanske, granddaughter of William Stolhanske: My grandpa, William [Stolhanske], had a little coffee shop in the front of his grocery store. He ran the store, grandma ran the coffee shop. As I understand it, she let Viv Martz put the paper next to the apartment listings, classifieds, and garage-sale notices. By the cream. Viv was a waitress there. Gramps didn’t know who Vance Coffee was, let alone what was in the papers. My grandpa was not a political guy. He voted for Stevenson twice.
[Vance Coffee was found murdered in 1960 outside an apartment in Indianapolis after what police determined was an amphetamine-fueled, Nazi-themed sex orgy. Motive was never determined, but Vance’s gambling debts to local mobster “Stoney” De Luca were strongly suspected. — ed.]

The attorney general came by after that bastard [Vance Coffee] was killed, asking why gramps was distributing anti-Semitic literature promoting the overthrow of the American government. They never charged him, but he found [Coffee’s] estate sale and overpaid for the mimeograph machines so they wouldn’t become a Bircher pilgrimage destination or be put to the same use again. The only person he knew who could write was his son, my uncle Dave.

[David Stolhanske died in 2004. His quotes herein are from the transcript of his deposition in Stolhanske v. Mason, CA-98–00784, S.D. Ind. (LEXIS 98–082889712) — ed.]

Dave Stolhanske, owner of Coffee News, 1960–69: I had been a journalism major at Ball State and had just come home looking for a job. I was pouring coffee at mom’s coffee shop. They didn’t call it being a barista then; it was Maxwell House. I changed the name of the paper to Coffee News only so I could use most of the original typography and layout. I wasn’t good at typesetting. It was that simple. The fact that it was put out at a coffee shop was a coincidence. I put my poetry in there. Ads for the local floral shop. Some jokes. Garage-sale notices. Quotes from my old copy of Bartlett’s from school. Recipes. ‘This Day in History’-type stuff. Pretty wholesome. Other coffee shops around town began carrying the paper, so I made some side money on the advertising. … In the early ’60s we published a few stories from Kurt Vonnegut under a pseudonym, Norma van Haayden. Kurt and I had been in Sunday school together and he’d send me whatever had gotten rejected from the big magazines. Those stories later served as the basis for Cat’s Cradle. … I’m surprised I kept it going as long as I did. I finally quit the coffee shop when I got a job at Honeywell writing their style guide for technical manuals. … I played a lot of bridge back then. Stan [Mason] was in my bridge club. I guess I never saw the potential [of Coffee News] beyond a few coffee places in Indy and Carmel. But the original idea and format were mine. Not the militant fascism. The wholesome part, after we got it from Vance Coffee. That stuff. … On that night, Stan and I had been drinking a lot of beer. I remember Stan [Mason] saying he really liked the idea of Coffee News and had big ideas for it. I humored him, but I wasn’t interested. I don’t remember signing anything, and I would never have signed anything. But if I did, I was incapacitated. And as far as the Vonnegut stuff, I guess that’s why we’re here today.

Walter Fine: Stan Mason was a son of a bitch and an asshole. But I loved the man. A true visionary.

[Stan Mason died in 2012. His quotes herein are from his autobiography “The Best Things in Life are Free — The Life and Times of Stan Mason, Sole & Exclusive Creator and Publisher of Coffee News,” © 1997, Simon & Schuster, as well as his testimony in SEC v. Mason/CNG Publishing, Inc., 87:808991, S.D.N.Y. (LEXIS 90–109283577, June 4, 1990) — ed.]

Stan Mason, Owner & Editor-In-Chief, Coffee News, 1969–2006; President of Mason Publishing, L.P., 1978–84; Chairman & CEO of Mason/CNG Publishing, Inc., 1984–2006: I don’t like to talk about other people, but I will say this. Davey Stolhanske was a degenerate gambler and a drunk. We had the same bookie.
I knew he was in to him for about two thousand. Davey hated the [Indianapolis] Pacers [professional basketball team] because his girlfriend had cheated on him with Chick Rollins, who wrote for the [Indianapolis] Star [the city’s major newspaper] and owned part of the team. He knew better, but he couldn’t help but bet against them. They kept winning. He kept losing. … Davey drank Yuengling like water. I’d known this guy forever. We played cards. He was bitching about how much he owed his bookie, so yeah, I knew about the debt. We’re playing bridge and we start betting. I’d lived in Chicago for a few years and worked at the Tribune. I knew what kind of money was in advertising, and I’d seen this Coffee News rag all over town since I’d been back. So I just had an idea. Do the same thing in a bigger town. Do it in every town. And boom. Rich. So I says to him, Davey, I got a bet for you. You win, I pay off your debt to [bookie] Stoney [De Luca]. I win, you give me your coffee newspaper. I won. … A week later, Davey calls me up. He’s bitching about the bet. Doesn’t wanna give up the paper. I take pity. I say, you know what, I’ll buy it off you. He says, How much? I say, How much do you owe Stoney? So we met at The Indianapolitan [night club] and we drew up a contract, and that was that. So, yeah, it was a bridge bet that led me to get the paper, but I bought it fair and square for two thousand dollars. I did not win the paper in a card bet, because betting on cards is illegal in the great state of Indiana, and such a gambling winning would be an illegal, and thus unenforceable, contract. … At the time, I was unaware of the Kurt Vonnegut stories that had appeared in Coffee News in, I guess, ’61 or ’62, but as a matter of course, whenever I purchased any publication, I made sure to include all copyrights and other intellectual property, known or unknown [emphasis in original], held by that publication. That’s just my due diligence. That’s business.

Anthony “Flat Tire” Medrano, interviewed at Federal Corrections Complex, Terre Haute, Indiana, 2016: The way I heard it, Davey Stolhanske signed that contract with a tire iron held against his head. Actually, that’s the way I saw it. I was holding the tire iron. Stoney De Luca was there. What do I give a shit? Stoney’s dead and the statute of limitations on that expired in ’75. … Why now? Well, nobody ever asked me before.

[Stanislaus “Stoney” De Luca died at his home in Coral Gables, Florida in 1988 of natural causes and complications from acute syphilitic necropathy — ed.]

The way I heard it, Davey Stolhanske signed that contract with a tire iron held against his head. Actually, that’s the way I saw it. I was holding the tire iron. — Anthony Medrano

Walter Fine: I’d worked at the New York Sun and then the Daily News. I was out of a job for personal reasons. When I was released, I met Stan Mason at Delmonico’s. My friend Billy “Batts” Battaliano had introduced us. I knew him from working the blotter at the Daily News. Stan knew him through some guy in Indianapolis, Stoney something. Anyway, he was hustling this paper and needed somebody to run the print side. That was right when he got to town. It was 1970 or so. He was involved a lot on the editorial side at first, but needed help. So I was Assistant to the Editor, then Assistant Editor through most of the ’70s. Finally, he got more into the higher-level publishing aspect and I basically took over running the paper in ’78.
… A couple of months after I started, Stan came into my office holding some back issues he’d dug out of a box he’d brought with him from Indiana. He asked if I knew who Norma Van Haayden was. I asked if she’d been one of the girls who’d come back with us from P.J. Clarke’s [the famous New York bar] earlier that week. He said no. He asked if I knew a lawyer. My wife at the time was from old New York money. She gave me a name.

Piers van Valkenberg, former partner, Debevoise, Wardwell, & Van Dyck, LLP: All I can say about that is that in 1971, Coffee News reached a settlement with Mr. Vonnegut and his publishers on terms satisfactory to all parties.

It was New York in the 1970s and I owned the highest circulation paper in town, and we were expanding across the country. We were making so much money I said, ‘We can’t charge for this.’ It was a beautiful thing. — Stan Mason

Walter Fine: The advertising paid the bills. The Vonnegut royalties paid for the drugs. Our offices were across the alley from The National Lampoon and on the same floor. There was a zip line at one point. It was anarchy.

Erin Stolhanske: I didn’t know anything about the Vonnegut stories then, but I was just a kid. Later, I remember Uncle Dave talking about it, showing us the stories. He didn’t know anything about the law. He ended up teaching English in Castleton [Indiana]. It wasn’t until Stan Mason’s book came out that the light bulb went off.

Dave Stolhanske: He knew. I know he knew because I told him. People say I didn’t know, but I knew. I’m not stupid. Not like they say. I’m smart. I was an English major. I knew about copyright. There was nothing in there about copyrights when I signed it. If I did. Which I didn’t. … If I did, it was under duress. I told you. They had a tire iron to my head! … It was Stoney De Luca and another guy. No, I don’t know his name.

Stan Mason: It was New York in the 1970s and I owned the highest circulation paper in town, and we were about to expand across the country. I said, ‘We can’t charge for this.’ It was a beautiful thing.
https://medium.com/the-clap/an-oral-history-of-coffee-news-3a57a3ca7f9e
['J.P. Melkus']
2018-08-25 21:41:52.945000+00:00
['Satire', 'Parody', 'Journalism', 'Oral History', 'Humor']
4,204
Personal Finance Classes Offered By Making Of A Millionaire
Our top personal finance classes available on Skillshare. If you don’t have a Skillshare subscription, that is okay: if you use this link, you get a free trial on Skillshare, which is more than enough time to take all of our courses. In the interest of full disclosure, if you do sign up for the free trial, we get a referral fee from Skillshare. It costs you nothing (in fact, it gives you 2 free months) and helps us keep this publication alive, but we want to be fully transparent at all times. 1. Personal Finance Masterclass: 6 Steps To Lock In Your Financial Goals. If you take one class on personal finance this year, make it this one. This class is about much more than budgeting. By the end of this class, you will know exactly how much you need to save for an emergency fund, for retirement, and to pay off all your debts. To do that, you will use the custom-built Excel workbook that I have made available to everyone taking this class. It will crunch all of the numbers for you and create a budget that locks in these goals. In this class, you will learn how to use the Excel workbook, as well as the six steps to creating a goals-based budget.
https://medium.com/makingofamillionaire/resources-for-making-of-a-millionaire-readers-f2438dec0993
['Ben Le Fort']
2020-12-21 17:07:00.356000+00:00
['Money', 'Personal Finance', 'Education', 'Community', 'Productivity']
4,205
Event Sourcing From Static Data Using Kafka
A different distributed scheduler approach.

In DDD platforms, events are usually raised by interaction with external sources, and those events are usually generated from commands (updates, creations, deletions, or pure business actions). Distributed computing platforms receive messages from other systems, and there is usually a gateway where those messages become events with a generic, standard format. Users can also interact with APIs and raise other events that must be propagated across the platform in order to save information or notify other services that affect other domain entities. The event life cycle is not a long-term process; basically, we could summarize it as: “something has changed, and maybe someone is interested in this change”. Our event may notify a service, and that service may be forced to raise another event, but the life of this “consequence” should be similar to that of the event that triggered it. On the other hand, it is easy to find business information tied to dates, or temporal information, that should drive transformations in our data. In this situation we face the problem that motivates this post: events cannot wake themselves up.

A typical problem: expiration dates. Let’s imagine we are working on an e-commerce platform, and we have thought about creating object models called… I don’t know… prices? (You might think this section is plagiarism of the Walmart Labs post (1), but I swear I had to deal with exactly the same problem before reading their solution.) Prices can work like promotions in a certain way, but if prices are to be dynamic, they need to work (activate, deactivate) within a temporal window. We can think of large promotion days like Black Friday as time windows for promotions, or even as activation periods for different prices. Let’s suppose a typical situation in an event-streaming system: Price is a model entity with an attribute called “expiration_date” holding a date value, and another called “status” holding an active/inactive value. An external system begins to load a bunch of active prices through a similar bunch of price domain events. Our asynchronous CQRS-based persistence system is listening to our messaging middleware and quickly saves all prices in the persistence engine. Another service is also listening and refreshes all prices in our cache system. Users can see the new prices, data is consistent, and everything is running as it’s supposed to. Let’s have a beer; this streaming platform has been successfully designed.

[Figure: a typical price event lifecycle.]

When space-time in our dimension reaches the date marked as the expiration date of one of our little prices, what should happen? The price should change its status and users should notice the change… but what really happens? Absolutely nothing. Our events cannot work with time attributes unless those attributes have purely informative purposes. We can’t change entities and notify other services; our entire system depends on external systems sending time information, and events of that kind, whenever some information must change. This is a problem, or at least a great limitation, when designing event-based platforms. So how can we know that some promotion or some price has expired?

Solutions based on a distributed scheduler

Basically, all solutions to this problem are based on schedulers or distributed schedulers, which means many jobs searching over trillions of elements.
If we are lucky, we have our entities distributed and well balanced over persistence systems, with some entity-based design that lets us look for changes in small triggers. Couchbase has recently proposed an eventing framework running on one of its services, which could be a great solution to this problem (2). Document insertions in the database are linked to small functions, and these functions can be scheduled to run when the time in our “expiration_date” attribute comes. Through Kafka connectors, each document can be transformed into a domain event and released into the middleware. Walmart has also released Big Ben, a system that a service can use to schedule a request that needs to be processed in the future. The service registers an event with the scheduler and suspends processing of the current request; when the stipulated time arrives, the requesting service is notified by the scheduler and can resume processing of the suspended request. Both are good solutions to this problem, but we had an idea that could be simpler (and therefore smarter) and help with all of our cases.

Kafka to the rescue

Stream processing is perhaps the greatest strength of Kafka. New features related to KStreams and KTables are opening a new world of possibilities for software engineers and architects. A KTable is an abstraction of a changelog stream from a primary-keyed table: each record in the changelog stream is an update on the table, with the record key as the primary key. A KTable is either defined from a single Kafka topic that is consumed message by message or produced as the result of a KTable transformation; an aggregation of a KStream also yields a KTable. Since Kafka 2.4, KTable joins work like SQL joins: foreign-key, many-to-one joins were added to Kafka in KIP-213 (3). This basically means we can join events not only by primary key; we can also join events in different topics by matching any of their attributes.

[Figure: join by foreign key between two KTables.]

Our solution

What do foreign keys in KTables have to do with our static events? Let’s think about our original problem with expiration dates. In a pure event-sourcing system we would have a topic dedicated to price events; creation, update, and deletion events are all allocated on the same price topic. On one hand, we can develop a really simple service based on a plain scheduler, whose responsibility is to send time events every minute, or every second if we need more accuracy. On the other hand, we have to deploy a joiner service, the “Updater”. This service listens to both the time-event topic and the price (or any other domain) event topic. Its entry points are two KTables, and those KTables are allowed to store a very big set of data. When a timed event arrives in the time topic (and time KTable), our update service scans the domain KTable for records whose designated field matches that date. If there are one or many matches, we can send a new update event with our price, or we can even put some logic into the update service to change the price entity’s status.

[Figure: price lifecycle with the event-update process based on time events.]

Show me the code!

OK, this could be a good solution, but how many lines of code do you need for a joiner? Fewer than ten (a sketch of such a joiner follows below).

[Figure: joiner by foreign key with KTables.]
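The code screenshot from the original post did not survive extraction, so here is a minimal sketch of what a foreign-key joiner of this kind can look like with the Kafka Streams 2.4+ API. The topic names, the string-encoded price format, and the status-flipping join logic are assumptions for illustration, not the article’s actual code:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class ExpirationJoiner {

    // Values in the "prices" topic are assumed to be "priceId|expirationDate|status".
    static String expirationDate(String price) {
        return price.split("\\|")[1]; // the foreign key: this price's expiration date
    }

    public static StreamsBuilder buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();

        KTable<String, String> prices = builder.table("prices",
                Consumed.with(Serdes.String(), Serdes.String()));
        // Keys of "time-events" are the timestamps emitted by the scheduler service.
        KTable<String, String> ticks = builder.table("time-events",
                Consumed.with(Serdes.String(), Serdes.String()));

        // Foreign-key join (KIP-213): match each price's expiration date
        // against the key of an arriving time event.
        KTable<String, String> expired = prices.join(
                ticks,
                ExpirationJoiner::expirationDate,                    // FK extractor
                (price, tick) -> price.replace("|active", "|inactive"));

        expired.toStream().to("price-updates",
                Produced.with(Serdes.String(), Serdes.String()));
        return builder;
    }
}
```

The foreign-key extractor tells Kafka Streams which record of the time KTable each price should be matched against; when a tick whose key equals a price’s expiration date arrives, the ValueJoiner fires and an updated price lands on the output topic.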
Performance

We can think of many scenarios for event expiration or release. We have tested scenarios where 0.5–1%, 5–10%, and 50% of business events are affected by time events. Let’s imagine the worst situation, one in which the clock passes midnight and a very special date begins, where almost half of our entities have to change their status. As you can see, we filled our topics with 4 and 8 million messages in order to stress the KTable join processors.

[Figure: performance tests.]

In average cases our system updates elements (releases events) every millisecond, working with one replica. Worst cases can make the join over our KTables in 2 milliseconds. We have checked that this system scales horizontally with close to linear progression in performance metrics. We could say this solution can release as many events as you want, with really low effort in development and infrastructure.

Generalization

What do we need to use this solution across all our domains? Not much work, really. We just need to configure our time-scheduler service (it can be made fault-tolerant through replication, because we can filter replicated messages with the same temporal key in the destination topic) and one “joiner” service for each entity topic. In each domain there can be many domain entities “allocated” in a Kafka topic; each of these topics receives events related to those entities, and those events can be resent or reloaded into our event pipeline when their temporal field matches a timed event. By deploying a few dedicated services, our platform can “reload” events by itself, leaving that responsibility to Kafka, which also guarantees consistency and really good fault-tolerance levels.

Acknowledgments

I would like to thank Rafael Serrano and Jose Luis Noheda for the support received, Soufian Belahrache (black belt in KTables) and Francisco Javier Salas for their work on this POC, and Juan López for the peer review.
https://medium.com/swlh/event-sourcing-from-static-data-using-kafka-d00069332802
['Javier Martinez Valbuena']
2020-08-13 14:21:20.699000+00:00
['Streaming', 'Kafka', 'Microservices', 'Event Sourcing', 'Software Architecture']
Title Event Sourcing Static Data Using KafkaContent Event Sourcing Static Data Using Kafka different distributed scheduler approach Events DDD platform use raised interaction external source event use generated command update creation deletion pure business action Distributed computing platform receive message system usually gateway message become event generic standard format Users also interact APIs raise event must propagated platform order save information notify service affect domain entity Events lifecycle long term process basically could summarize like “something changed maybe someone interested change” maybe event notify service service forced raise another event life “consequence” similar one triggered hand it’s easy find business information related date temporal information perform transformation data situation face problem motivates post Events cannot wake typical problem expiration date Let’s imagine working ecommerce platform maybe thought creating object model called… don’t know… price Maybe could think section plagiarism Walmart lab post 1 swear deal exactly problem reading solution price work like promotion certain way price want dynamic need work activate deactivate temporal window think large promotion day like Black Friday window time promotion even activation period different price Let’s suppose typical situation related eventstreaming system Price model entity attribute called “expirationdate” date value another called “status” activeinactive value external system begin load bunch active price similar bunch pricedomainevents asynchronous CQRSbased persist system listening message’s middleware quickly save price persistence engine Another service also listening refresh price cache system Users see new price data consistent everything running like it’s supposed Let’s beer streaming platform successfully designed typical price evet lifecycle spacetime dimension reach date marked expiration date one little price happen price change status user notice change… really happens Absolutely nothing event cannot work time attribute unless attribute informative purpose can’t change entity make notification service entire system depends external system send every time information kind event information must changed could problem least great limitation design eventbased platform know promotion price expired Solutions based distributed scheduler Basically solution problem based scheduler distributed scheduler mean many job searching trillion element lucky entity distributed well balanced persistence system entitybased design look change small trigger Couchbase proposed recently eventing framework working one service could great solution problem 2 Document insertion database linked small function function scheduled run attribute “expirationdate” time come Kafka connector document transformed domain event released middleware Wallmart also released Big Ben system used service schedule request need processed future service register event scheduler suspends processing current request stipulated time arrives requesting service notified scheduler former resume processing suspended request good solution solve problem idea could simple therefore smart help case Kafka rescue Stream processing maybe greatest strength Kafka New feature related Kstreams KTables showing new world possibility software engineer architect KTable abstraction changelog stream primarykeyed table record changelog stream update primarykeyed table record key primary key KTable either defined single Kafka topic consumed 
message message result KTable transformation aggregation KStream also yield KTable Since Kafka 24 KTable join work SQL join Foreignkey many one join added Kafka KIP213 3 basically mean join event using primary key also join event different topic matching attribute Join foreign key two KTables solution foreign key KTables static event Let’s think original problem expiration date pure event sourcing system would topic dedicated price event Creation update deletion event allocated price topic one hand develop really easy service based simple scheduler responsibility sending time event minute second need accuracy one deploy joiner service “Updater” service listening time event topic price domain event topic entry point two KTables KTables allowed store big set data timed event arrives time topic time KTable update service seek domain KTable one specified field match date one many match send new update event price even put logic updateservice order change price entity status Prices lifecycle even update process based time event Show code Ok could good solution many line code need joiner Less ten line Joiner Fk KTables Performance think many scenario event expiration release tested scenario 05–1 5–10 50 business event affected time event Let’s imagine worst situation one time midnight begin special date almost half entity change status see filled topic 4 8 million message order stress Ktable join processor Performance test average case system updating element releasing event millisecond working one replica Worst case make join KTables 2 millisecond checked system scaling horizontally close linear progression performance metric could say solution release many event want really low effort development infrastructure Generalization need use solution across domain much work really need configure time scheduler service faulttolerant replication filter replicated message temporal key destination topic one “joiner” service entity topic domain found many domain entity “allocated” Kafka topic one topic receives event related entity event resent reloaded event pipeline temporal field match timed event Placing dedicated service platform “reload” event leaving responsibility Kafka also guarantee consistency really good fault tolerance level Acknowledgments would like thank Rafael Serrano Jose Luis Noheda support received Soufian Belahrache Black belt KTables Francisco Javier Salas work POC Juan López peer reviewTags Streaming Kafka Microservices Event Sourcing Software Architecture
4,206
Why Shopify Hires For Potential Not Talent And How You Can Too
Why Shopify Hires For Potential Not Talent And How You Can Too Potential can beat talent. Photo by Tim Marshall on Unsplash While watching a podcast on YouTube last week, I had one of those aha moments. The podcast was a random YouTube suggestion and featured Tobi Lutke (Founder of Shopify). He was talking about how to build a team without access to a ‘primary’ talent market. Silicon Valley is a well-known primary talent market. The Bay Area offers a high concentration of well-qualified and talented people with specific skill sets. Drawn by the success of other startups, many people move there in the hopes of tasting their version of startup success. Ottawa, where Tobi founded Shopify, is by contrast closer to a political hub for Canada. Ottawa is a center for the arts and cultural institutions, national museums, etc. Most of the “talent” not interested in Arts or Politics moved out of the area. Tobi mentioned that many best-selling business books are written about building unicorn companies in primary talent markets. He thought many of the ‘best practices’ for building a team encouraged in business books are not relevant to the average business. One of the common maxims you’ll come across is ‘hire people who are better than you at what you don’t like to do.’ When all the books, articles, blog posts, podcasts, etc., that we consume tell us this, it’s easy to believe that this is the only way to build a company. The problem is these people are often too expensive or not available in a given talent pool. Most of us don’t have the luxury of hiring from a ready-made workforce. We have to hire from talent pools with weaker skill sets but, importantly, find people with the same amount of potential. Shopify realized this difference very early in their journey. Instead of focusing on hiring the best talent available, they built their business around hiring for potential and then developing that potential. Fixed vs. Growth Mindsets In secondary talent pools, Tobi explains, we need to create learning organizations. As much as a company aims to produce a product or service that people want and need, it also needs to build a culture that encourages learning and development. Shopify has created a hiring process that focuses on people’s potential rather than skill. They look for people that will far exceed the role they are currently hiring them for; they are looking for tomorrow’s company leaders. Because their focus is to hire based on potential, they need to hire people who have the capacity, and the desire, to reach their potential. Shopify differentiates between two types of people when it comes to potential. People with a fixed mindset People with a growth mindset. People with a fixed mindset “believe their qualities are fixed traits and, therefore, cannot change. These people document their intelligence and talents rather than working to develop and improve them. They also believe that talent alone leads to success, and effort is not required.” — Unknown. While people with a growth mindset “have an underlying belief that their learning and intelligence can grow with time and experience. When people believe they can become smarter, they realize that their effort has an effect on their success, so they put in extra time, leading to higher achievement.” — Unknown. We can see this in many areas of life. My housemate is a personal trainer, and time and again, he faces this difference in mindset. Some of his clients believe they are the way they are, and they can’t be helped.
In comparison, others look forward to improving and watching their growth. I’ve noticed people with a fixed mindset are on the defensive. “I can’t,” or “I don’t have time,” or “I’m not XYZ.” There’s always a reason not to. In contrast, people with a growth mindset are open to developing themselves and exploring new opportunities. Fixed-mindset people make excuses and pass on responsibility. Growth-mindset people find a way and take responsibility. Shopify internalized this distinction between people’s mindsets and used it to create ‘The Shopify Way,’ a system they have developed to hire for potential and develop that potential into world-class talent. THE SHOPIFY WAY This is how it works. They hire for potential. They look for in others what others don’t see in themselves. They help people develop a growth mindset. They give them a Shopify education: company history, previous mistakes, and stories of employees who previously held a fixed mindset. They share the reasons for doing what the company does, not just saying this is the way it is and you need to accept it. Instead, they provide context and explanations so people can find the reason by themselves. They develop their skill sets. Only then do they focus on building a person’s necessary skills. “Shopify aims to help people fulfill their potential 10–20 years earlier than they otherwise would have”. They support them with mentors. One skilled person is paired with five unskilled workers. Mentorship is essential in helping inexperienced employees navigate the nuances of personal growth. We don’t all develop the same way, and we come unstuck at different points. We need people to support us through these moments. They give them challenges designed to push their staff past what they previously thought possible. “Hey, we have this problem, and it’s vital for our company’s continued success, and we think you’re the right person for the job.” They remove self-imposed boundaries that people put on themselves. Hire for potential, unlock a growth mindset, and support the journey. It works for Shopify and could work for you.
https://medium.com/the-innovation/why-shopify-hires-for-potential-not-talent-and-how-you-can-too-231f2fab2f37
['Rhys Jeffery']
2020-12-17 15:03:37.899000+00:00
['Hiring', 'Mindset', 'Talent', 'Human Resources', 'Entrepreneurship']
Title Shopify Hires Potential Talent TooContent Shopify Hires Potential Talent Potential beat talent Photo Tim Marshall Unsplash watching podcast Youtube last week one aha moment podcast one random suggestion Youtube featured Tobi Lutke Founder Shopify talking build team without access ‘primary’ talent market Silicon Valley well known primary talent market Bay area offer high concentration well qualified talented people specific skill set Drawn success startup many people move hope tasting version startup success Whereas Ottawa Tobi founded Shopify closer political hub Canada Ottawa center art cultural institution national museum etc “talent” interested Arts Politics moved area Tobi mentioned many best selling business book written building unicorn company primary talent market thought many ‘best practices’ building team encouraged business book relevant average business One common maxim you’ll come across ‘hire people better don’t like do’ book article blog post podcasts etc consume tell u it’s easy believe way build company problem people often expensive available given talentpool u don’t luxury hiring readymade workforce hire talent pool weaker skill set importantly find people amount potential Shopify realized difference early journey Instead focusing hiring best talent available built business around hiring potential developing potential Fixed v Growth Mindsets secondary Talent pool Tobi explains need create learner’s organization much company aim produce product service people want need company also need build culture encourages learning development Shopify created hiring process focus people’s potential rather skill look people far exceed role currently hiring looking tomorrow’s company leader focus hire based potential need hire people capacity want reach potential Shopify differentiates two type people come potential People fixed mindset People growth mindset People fixed mindset “believe quality fixed trait therefore cannot change people document intelligence talent rather working develop improve also believe talent alone lead success effort required” — Unknown people growth mindset “have underlying belief learning intelligence grow time experience people believe become smarter realize effort effect success put extra time leading higher achievement” — Unknown see many area life housemate personal trainer time face difference mindset client believe way can’t helped comparison others look forward improving watching growth I’ve noticed people fixed mindset defense “I can’t” “I don’t time” “I’m XYZ” There’s always reason contrast people growth mindset open developing exploring new opportunity Fixed Mindset people make excuse pas responsibility Growth mindset people find way take responsibility Shopify internalised distinction people’s mindset used create ‘The Shopify Way’ system developed hire potential develop potential worldclass talent SHOPIFY WAY work hire potential look others others don’t see help people develop growth mindset give Shopify education Company history previous mistake employee previously held fixed mindset Reasons company saying way need accept Instead provide context explanation people find reason develop skill set focus building person’s necessary skill “Shopify aim help people fulfill potential 10–20 year earlier otherwise would have” support mentor One skilled person paired five unskilled worker Mentorship essential helping inexperienced employee navigate nuance personal growth don’t develop way come unstuck different point need people support u moment give 
challenge designed push staff past thought previously possible “Hey problem it’s vital company’s continued success think you’re right person job” remove selfimposed boundary people put Hire potential unlock growth mindset support journey work Shopify could work youTags Hiring Mindset Talent Human Resources Entrepreneurship
4,207
The World Won’t Cry With You
Leaving my dreams behind, I walked a thousand miles. To see if the world would buy my smiles. I lost my passion; I lost my will. But nothing moved, and watching my pain; the world stood still.
https://medium.com/afwp/the-world-wont-cry-with-you-65ccda88daad
['Darshak Rana']
2020-11-29 15:31:55.089000+00:00
['Life Lessons', 'Motivation', 'Poetry', 'Life', 'Philosophy']
Title World Won’t Cry YouContent Leaving dream behind walked thousand mile see world would buy smile lost passion lost nothing moved watching pain world stood stillTags Life Lessons Motivation Poetry Life Philosophy
4,208
Project Journal, Week 5. Welcome to our DATA360 team blog! This…
Welcome to our DATA360 team blog! This blog will be the journey of our investigation into interesting aspects of crime in Chicago. The city of Chicago. Credits: Fox News We started our project by brainstorming the topic we would like to dig into. KD was interested in crime in general, so we decided to investigate crime. Aside from that, Chicago is known by many people as a city of crime and violence. With that in mind, we agreed to make ‘Crime in Chicago’ our topic. Data Mining For the first week, we found some relevant and interesting datasets on Kaggle and from other sources. Kaggle, a data-sharing community. Source: Kaggle Crimes in Chicago The first dataset we found interesting is Crimes in Chicago, an enormous BigQuery dataset which consists of crime data from 2001 to 2017. This dataset contains more than 6,000,000 rows of incident data (yes, 6 million). We are not yet sure what we can use this dataset for, but we are sure that we can do many great things with it. We figured we can merge other datasets with this one to tell interesting stories. Other Datasets Other datasets we found include the temperature at Chicago’s Midway Airport from 2000–2019. We came across an interesting finding when combining Crimes in Chicago with the Midway Airport temperature, which we will share in our next blog post. Additionally, we gathered Chicago’s Gasoline Price from 2000 to 2019 and Chicago’s Unemployment Rate from 1990 to 2019. We have not done any analysis with those two datasets yet, but we expect to find some interesting correlations once we do. That’s what we’ve got for this week’s blog! There will be more, but we will save the fun for next time. I hope you enjoy this blog post. Please comment below with what you found interesting or what you want to suggest about our project!
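P.S. For anyone who wants to poke at the data before our next post, here is a small sketch of how one could query it. It assumes the public BigQuery mirror bigquery-public-data.chicago_crime.crime and its primary_type column, queried through the google-cloud-bigquery Java client; we may well end up using other tools, so treat this purely as an illustration.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;

public class ChicagoCrimeCounts {
    public static void main(String[] args) throws Exception {
        // Client built from application-default credentials.
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        // Count incidents per crime type across the whole table
        // (assumed public mirror of the Kaggle dataset).
        String sql = "SELECT primary_type, COUNT(*) AS n "
                   + "FROM `bigquery-public-data.chicago_crime.crime` "
                   + "GROUP BY primary_type ORDER BY n DESC LIMIT 10";

        QueryJobConfiguration config = QueryJobConfiguration.newBuilder(sql).build();
        for (FieldValueList row : bigquery.query(config).iterateAll()) {
            System.out.printf("%s: %d%n",
                    row.get("primary_type").getStringValue(),
                    row.get("n").getLongValue());
        }
    }
}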
https://medium.com/augie-data360-chicago-crime-analysis/project-journey-week-5-74c3d49ebce3
['Minh Ta']
2019-05-02 04:07:13.405000+00:00
['Chicago Crime', 'Chicago', 'Data Science', 'Kaggle', 'Bigquery']
Title Project Journal Week 5 Welcome DATA360 team blog This…Content Welcome DATA360 team blog blog journey investigation interesting aspect crime Chicago city Chiacgo Credits Fox News started project brainstorming topic would like dig KD interested crime general decided investigate crime Aside Chicago known many people city crime violence mind agreed make ‘Crime Chicago’ topic Data Mining first week found relating interesting datasets Kaggle source Kaggle datasharing community Source Kaggle Crimes Chicago first dataset found interesting Crimes Chicago ernomous BigQuery dataset consists crime data 2001 2017 dataset contains 6000000 row incident data Yes 6 million sure extend use dataset sure many great thing figured merge datasets one tell interesting story Datasets datasets found include temperature Chicago’s Midway Airport 2000–2019 found interesting finding Crimes Chicago Midway Airport Temperature share next blog post Additionally gathered Chicago’s Gasoline Price 2000 2019 Chicago’s Unemployment Rate 1990 2019 done analysis two datasets yet expect find interesting correlation That’s got week blog save fun next time hope enjoy blog post Please comment found interesting want suggest projectTags Chicago Crime Chicago Data Science Kaggle Bigquery
4,209
The Top Online Data Science Courses for 2019
After 80+ hours of watching course videos, doing quizzes and assignments, and reading reviews on various aggregators and forums, I’ve narrowed down the best data science courses available to the list below. TL;DR The best data science courses: Criteria The selections here are geared more towards individuals getting started in data science, so I’ve filtered courses based on the following criteria: The course goes over the entire data science process The course uses popular open-source programming tools and libraries The instructors cover the basic, most popular machine learning algorithms The course has a good combination of theory and application The course needs to either be on-demand or available every month or so There are hands-on assignments and projects The instructors are engaging and personable The course has excellent ratings — generally, greater than or equal to 4.5/5 There are a lot more data science courses than when I first started this page four years ago, so there now needs to be a substantial filter to determine which courses are the best. I hope you feel confident that the courses below are truly worth your time and effort, because it will take several months (or more) of learning and practice to become a data science practitioner. In addition to the top general data science course picks, I have included a separate section for more specific data science interests, like Deep Learning, SQL, and other relevant topics. These are courses with a more specialized approach, and don’t cover the whole data science process, but they are still the top choices for that topic. These extra picks are good for supplementing before, after, and during the main courses. Resources you should use when learning When learning data science online it’s important to not only get an intuitive understanding of what you’re actually doing, but also to get sufficient practice using data science on unique problems. In addition to the courses listed below, I would suggest reading two books: Introduction to Statistical Learning — available for free — one of the most widely recommended books for beginners in data science. Explains the fundamentals of machine learning and how everything works behind the scenes Applied Predictive Modeling — a breakdown of the entire modeling process on real-world datasets with incredibly useful tips each step of the way These two textbooks are incredibly valuable and provide a much better foundation than just taking courses alone. The first book is especially effective at teaching the intuition behind much of the data science process, and if you are able to understand almost everything in there, then you’re better off than most entry-level data scientists. QUICK TIP Use Video Speed Controller for Chrome to speed up any video. I usually choose between 1.5x — 2.5x speed depending on the content, and use the “s” (slow down) and “d” (speed up) key shortcuts that come with the extension. Now to an overview and review of each course. 1. Data Science Specialization — JHU @ Coursera This course series is one of the most enrolled-in and highly rated course collections in this list. JHU did an incredible job with the balance of breadth and depth in the curriculum. One thing included in this series that’s usually missing from many data science courses is a complete section on statistics, which is the backbone of data science. Overall, the Data Science specialization is an ideal mix of theory and application using the R programming language.
As far as prerequisites go, you should have some programming experience (it doesn’t have to be R) and a good understanding of algebra. Previous knowledge of linear algebra and/or calculus isn’t necessary, but it is helpful. Price — Free or $49/month for certificate and graded materials Provider — Johns Hopkins University Curriculum: The Data Scientist’s Toolbox R Programming Getting and Cleaning Data Exploratory Data Analysis Reproducible Research Statistical Inference Regression Models Practical Machine Learning Developing Data Products Data Science Capstone If you’re rusty with statistics and/or want to learn more R first, check out the Statistics with R Specialization as well. 2. Introduction to Data Science — Metis An extremely highly rated course — 4.9/5 on SwitchUp and 4.8/5 on CourseReport — which is taught live by a data scientist from a top company. This is a six-week-long data science course that covers everything in the entire data science process, and it’s the only live online course in this list. Furthermore, not only will you get a certificate upon completion, but since this course is also accredited, you’ll also receive continuing education units. Two nights per week, you’ll join the instructor with other students to learn data science as if it were an online college course. Not only are you able to ask questions, but the instructor also holds extra office hours to further help those students who might be struggling. Price — $750 The curriculum: Computer Science, Statistics, Linear Algebra Short Course Exploratory Data Analysis and Visualization Data Modeling: Supervised/Unsupervised Learning and Model Evaluation Data Modeling: Feature Selection, Engineering, and Data Pipelines Data Modeling: Advanced Supervised/Unsupervised Learning Data Modeling: Advanced Model Evaluation and Data Pipelines | Presentations For prerequisites, you’ll need to know Python, some linear algebra, and some basic statistics. If you need to work on any of these areas, Metis also has Beginner Python and Math for Data Science, a separate live online course just for learning the Python, stats, probability, linear algebra, and calculus for data science. 3. Applied Data Science with Python Specialization — UMich @ Coursera The University of Michigan, which also launched an online data science Master’s degree, produces this fantastic specialization focused on the applied side of data science. This means you’ll get a strong introduction to commonly used data science Python libraries, like matplotlib, pandas, nltk, scikit-learn, and networkx, and learn how to use them on real data. This series doesn’t include the statistics needed for data science or the derivations of various machine learning algorithms, but it does provide a comprehensive breakdown of how to use and evaluate those algorithms in Python. Because of this, I think this would be more appropriate for someone who already knows R and/or is learning the statistical concepts elsewhere. If you’re rusty with statistics, consider the Statistics with Python Specialization first. You’ll learn many of the most important statistical skills needed for data science.
Price — Free or $49/month for certificate and graded materials Provider — University of Michigan Courses: Introduction to Data Science in Python Applied Plotting, Charting & Data Representation in Python Applied Machine Learning in Python Applied Text Mining in Python Applied Social Network Analysis in Python To take these courses, you’ll need to know some Python or programming in general, and there are actually a couple of great lectures in the first course dealing with some of the more advanced Python features you’ll need to process data effectively. 4. Dataquest Dataquest is a fantastic resource on its own, but even if you take other courses on this list, Dataquest serves as a superb complement to your online learning. Dataquest foregoes video lessons and instead teaches through an interactive textbook of sorts. Every topic in the data science track is accompanied by several in-browser, interactive coding steps that guide you through applying the exact topic you’re learning. Video-based learning is more “passive” — it’s very easy to think you understand a concept after watching a 2-hour long video, only to freeze up when you actually have to put what you’ve learned in action. — Dataquest FAQ To me, Dataquest stands out from the rest of the interactive platforms because the curriculum is very well organized, you get to learn by working on full-fledged data science projects, and there’s a super active and helpful Slack community where you can ask questions. The platform has one main data science learning curriculum for Python: Data Scientist In Python Path This track currently contains 31 courses, which cover everything from the very basics of Python, to Statistics, to the math for Machine Learning, to Deep Learning, and more. The curriculum is constantly being improved and updated for a better learning experience. Price — 1/3 of content is Free, $29/month for Basic, $49/month for Premium Here’s a condensed version of the curriculum: Python — Basic to Advanced Python data science libraries — Pandas, NumPy, Matplotlib, and more Visualization and Storytelling Effective data cleaning and exploratory data analysis Command line and Git for data science SQL — Basic to Advanced APIs and Web Scraping Probability and Statistics — Basic to Intermediate Math for Machine Learning — Linear Algebra and Calculus Machine Learning with Python — Regression, K-Means, Decision Trees, Deep Learning and more Natural Language Processing Spark and Map-Reduce Additionally, there are also entire data science projects scattered throughout the curriculum. Each project’s goal is to get you to apply everything you’ve learned up to that point and to get you familiar with what it’s like to work on an end-to-end data science project. Lastly, if you’re more interested in learning data science with R, then definitely check out Dataquest’s new Data Analyst in R path. The Dataquest subscription gives you access to all paths on their platform, so you can learn R or Python (or both!).
There isn’t any introduction to Python or R like in some of the other courses in this list, so before starting the ML portion, they recommend taking Introduction to Computer Science and Programming Using Python to get familiar with Python. Price — Free or $1,350 for credential and graded materials Provider — Massachusetts Institute of Technology Courses: Probability — The Science of Uncertainty and Data Data Analysis in Social Science — Assessing Your Knowledge Fundamentals of Statistics Machine Learning with Python: from Linear Models to Deep Learning Capstone Exam in Statistics and Data Science The ML course has several interesting projects you’ll work on, and at the end of the whole series you’ll focus on one exam to wrap everything up. 6. CS109 Data Science — Harvard Screenshot from lecture: https://matterhorn.dce.harvard.edu/engage/player/watch.html?id=e15f221c-5275-4f7f-b486-759a7d483bc8 With a great mix of theory and application, this course from Harvard is one of the best for getting started as a beginner. It’s not on an interactive platform, like Coursera or edX, and doesn’t offer any sort of certification, but it’s definitely worth your time and it’s totally free. Curriculum: Web Scraping, Regular Expressions, Data Reshaping, Data Cleanup, Pandas Exploratory Data Analysis Pandas, SQL and the Grammar of Data Statistical Models Storytelling and Effective Communication Bias and Regression Classification, kNN, Cross Validation, Dimensionality Reduction, PCA, MDS SVM, Evaluation, Decision Trees and Random Forests, Ensemble Methods, Best Practices Recommendations, MapReduce, Spark Bayes Theorem, Bayesian Methods, Text Data Clustering Effective Presentations Experimental Design Deep Networks Building Data Science Python is used in this course, and there are many lectures going through the intricacies of the various data science libraries to work through real-world, interesting problems. This is one of the only data science courses around that actually touches on every part of the data science process. 7. Python for Data Science and Machine Learning Bootcamp — Udemy Also available using R. A very reasonably priced course for the value. The instructor does an outstanding job explaining the Python, visualization, and statistical learning concepts needed for all data science projects. A huge benefit to this course over other Udemy courses is the assignments. Throughout the course you’ll break away and work on Jupyter notebook workbooks to solidify your understanding, then the instructor follows up with a solutions video to thoroughly explain each part. Curriculum: Python Crash Course Python for Data Analysis — Numpy, Pandas Python for Data Visualization — Matplotlib, Seaborn, Plotly, Cufflinks, Geographic plotting Data Capstone Project Machine learning — Regression, kNN, Trees and Forests, SVM, K-Means, PCA Recommender Systems Natural Language Processing Big Data and Spark Neural Nets and Deep Learning This course focuses more on the applied side, and one thing missing is a section on statistics. If you plan on taking this course it would be a good idea to pair it with a separate statistics and probability course as well. An honorary mention goes out to another Udemy course: Data Science A-Z. I do like Data Science A-Z quite a bit due to its complete coverage, but since it uses other tools outside of the Python/R ecosystem, I don’t think it fits the criteria as well as Python for Data Science and Machine Learning Bootcamp.
Other top data science courses for specific skills Deep Learning Specialization — Coursera Created by Andrew Ng, maker of the famous Stanford Machine Learning course, this is one of the highest-rated data science courses on the internet. This course series is for those interested in understanding and working with neural networks in Python. SQL for Data Science — Coursera Pair this with Mode Analytics SQL Tutorial for a very well-rounded introduction to SQL, an important and necessary skill for data science. Mathematics for Machine Learning — Coursera This is one of the most highly rated courses dedicated to the specific mathematics used in ML. Take this course if you’re uncomfortable with the linear algebra and calculus required for machine learning, and you’ll save some time over other, more generic math courses. How to Win a Data Science Competition — Coursera One of the courses in the Advanced Machine Learning Specialization. Even if you’re not looking to participate in data science competitions, this is still an excellent course for bringing together everything you’ve learned up to this point. This is more of an advanced course that teaches you the intuition behind why you should pick certain ML algorithms, and even goes over many of the algorithms that have been winning competitions lately. Bayesian Statistics: From Concept to Data Analysis — Coursera Bayesian, as opposed to Frequentist, statistics is an important subject to learn for data science. Many of us learned Frequentist statistics in college without even knowing it, and this course does a great job comparing and contrasting the two to make it easier to understand the Bayesian approach to data analysis. Spark and Python for Big Data with PySpark — Udemy From the same instructor as the Python for Data Science and Machine Learning Bootcamp in the list above, this course teaches you how to leverage Spark and Python to perform data analysis and machine learning on an AWS cluster. The instructor makes this course really fun and engaging by giving you mock consulting projects to work on, then going through a complete walkthrough of the solution. Learning Guide How to actually learn data science When joining any of these courses you should make the same commitment to learning as you would towards a college course. One goal for learning data science online is to maximize mental discomfort. It’s easy to get caught in the habit of signing in to watch a few videos and feel like you’re learning, but you’re not really learning much unless it hurts your brain. Vik Paruchuri (from Dataquest) produced this helpful video on how to approach learning data science effectively: Essentially, it comes down to doing what you’re learning, i.e. when you take a course and learn a skill, apply it to a real project immediately. Working through real-world projects that you are genuinely interested in helps solidify your understanding and provides you with proof that you know what you’re doing. One of the most uncomfortable things about learning data science online is that you never really know when you’ve learned enough. Unlike in a formal school environment, when learning online you don’t have many good barometers for success, like passing or failing tests or entire courses. Projects help remediate this by first showing you what you don’t know, and then serving as a record of knowledge when they’re done. All in all, the project should be the main focus, and courses and books should supplement that.
When I first started learning data science and machine learning, I began (as a lot of people do) by trying to predict stocks. I found courses, books, and papers that taught the things I wanted to know, and then I applied them to my project as I was learning. I learned so much in such a short period of time that it seems like an improbable feat if laid out as a curriculum. It turned out to be extremely powerful working on something I was passionate about. It was easy to work hard and learn nonstop because predicting the market was something I really wanted to accomplish. Essential knowledge and skills Source: Udacity There’s a base skill set and level of knowledge that all data scientists must possess, regardless of what industry they’re in. For hard skills, you not only need to be proficient with the mathematics of data science, but you also need the skills and intuition to understand data. The mathematics you should be comfortable with: Algebra Statistics (Frequentist and Bayesian) Probability Linear Algebra Basic calculus Optimization Furthermore, these are the basic programming skills you should be comfortable with: Python or R, SQL Extracting data from various sources, like SQL databases, JSON, CSV, XML, and text files Cleaning and transforming unstructured, messy data Effective data visualization Machine learning — Regression, Clustering, kNN, SVM, Trees and Forests, Ensembles, Naive Bayes Lastly, it’s not all about the hard skills; there are also many soft skills that are extremely important, and many of them aren’t taught in courses. These are: Curiosity and creativity Communication skills — speaking and presenting in front of groups, and being able to explain complex topics to non-technical team members Problem solving — coming up with analytical solutions for business problems Python vs. R After going through the list you might have noticed that each course is dedicated to one language: Python or R. So which one should you learn? Short answer: just learn Python, or learn both. Python is an incredibly versatile language, and it has a huge amount of support in data science, machine learning, and statistics. Not only that, but you can also do things like build web apps, automate tasks, scrape the web, create GUIs, build a blockchain, and create games. Because Python can do so many things, I think it should be the language you choose. Ultimately, it doesn’t matter that much which language you choose for data science since you’ll find many jobs looking for either. So why not pick the language that can do almost anything? In the long run, though, I think learning R is also very useful since many statistics/ML textbooks use R for examples and exercises. In fact, both books I mentioned at the beginning use R, and unless someone translates everything to Python and posts it to GitHub, you won’t get the full benefit of the book. Once you learn Python, you’ll be able to learn R pretty easily. Check out this StackExchange answer for a great breakdown of how the two languages differ in machine learning. Are certificates worth it? One big difference between Udemy and other platforms, like edX, Coursera, and Metis, is that the latter offer certificates upon completion and are usually taught by instructors from universities. Some certificates, like those from edX and Metis, even carry continuing education credits. Other than that, many of the real benefits, like accessing graded homework and tests, are only accessible if you upgrade.
If you need to stay motivated to complete the entire course, committing to a certificate also puts money on the line so you’ll be less likely to quit. I think there’s definitely personal value in certificates, but, unfortunately, not many employers value them that much. Coursera and edX vs. Udemy Udemy does not currently have a way to offer certificates, so I generally find Udemy courses to be good for more applied learning material, whereas Coursera and edX are usually better for theory and foundational material. Whenever I’m looking for a course about a specific tool, whether it be Spark, Hadoop, Postgres, or Flask web apps, I tend to search Udemy first since the courses favor an actionable, applied approach. Conversely, when I need an intuitive understanding of a subject, like NLP, Deep Learning, or Bayesian Statistics, I’ll search edX and Coursera first. Wrapping Up Data science is a vast, interesting, and rewarding field to study and be a part of. You’ll need many skills, a wide range of knowledge, and a passion for data to become an effective data scientist that companies want to hire, and it’ll take longer than the hyped-up YouTube videos claim. If you’re more interested in the machine learning side of data science, check out the Top 5 Machine Learning Courses for 2019 as a supplement to this article. If you have any questions or suggestions, feel free to leave them in the comments below. Thanks for reading and have fun learning! Originally published at learndatasci.com.
https://medium.com/free-code-camp/top-7-online-data-science-courses-for-2019-e4afdc4693e7
[]
2019-05-02 20:18:44.339000+00:00
['Artificial Intelligence', 'Machine Learning', 'Technology', 'Data Science', 'Programming']
Title Top Online Data Science Courses 2019Content 80 hour watching course video quiz assignment reading review various aggregator forum I’ve narrowed best data science course available list TLDR best data science course Criteria selection geared towards individual getting started data science I’ve filtered course based following criterion course go entire data science process course us popular opensource programming tool library instructor cover basic popular machine learning algorithm course good combination theory application course need either ondemand available every month There’s handson assignment project instructor engaging personable course excellent rating — generally greater equal 455 There’s lot data science course first started page four year ago need substantial filter determine course best hope feel confident course truly worth time effort take several month learning practice data science practitioner addition top general data science course pick included separate section specific data science interest like Deep Learning SQL relevant topic course specialized approach don’t cover whole data science process still top choice topic extra pick good supplementing main course Resources use learning learning data science online it’s important get intuitive understanding you’re actually also get sufficient practice using data science unique problem addition course listed would suggest reading two book Introduction Statistical Learning — available Free — one widely recommended book beginner data science Explains fundamental machine learning everything work behind scene Applied Predictive Modeling — breakdown entire modeling process realworld datasets incredibly useful tip step way two textbook incredibly valuable provide much better foundation taking course alone first book incredibly effective teaching intuition behind much data science process able understand almost everything you’re well entrylevel data scientist QUICK TIP Use Video Speed Controller Chrome speed video usually choose 15x — 25x speed depending content use “s” slow “d” speed key shortcut come extension overview review course 1 Data Science Specialization — JHU Coursera course series one enrolled highly rated course collection list JHU incredible job balance breadth depth curriculum One thing that’s included series that’s usually missing many data science course complete section statistic backbone data science Overall Data Science specialization ideal mix theory application using R programming language far prerequisite go programming experience doesn’t R good understanding Algebra Previous knowledge Linear Algebra andor Calculus isn’t necessary helpful Price — Free 49month certificate graded material Provider — Johns Hopkins University Curriculum Data Scientist’s Toolbox R Programming Getting Cleaning Data Exploratory Data Analysis Reproducible Research Statistical Inference Regression Models Practical Machine Learning Developing Data Products Data Science Capstone you’re rusty statistic andor want learn R first check Statistics R Specialization well 2 Introduction Data Science — Metis extremely highly rated course — 495 SwichUp 485 CourseReport — taught live data scientist top company six week long data science course cover everything entire data science process it’s live online course list Furthermore get certificate upon completion since course also accredited you’ll also receive continuing education unit Two night per week you’ll join instructor student learn data science online college course able ask question 
instructor also spends extra time office hour help student might struggling Price — 750 curriculum Computer Science Statistics Linear Algebra Short Course Exploratory Data Analysis Visualization Data Modeling SupervisedUnsupervised Learning Model Evaluation Data Modeling Feature Selection Engineering Data Pipelines Data Modeling Advanced SupervisedUnsupervised Learning Data Modeling Advanced Model Evaluation Data Pipelines Presentations prerequisite you’ll need know Python linear algebra basic statistic need work area Metis also Beginner Python Math Data Science separate live online course learning Python Stats Probability Linear Algebra Calculus data science 3 Applied Data Science Python Specialization — UMich Coursera University Michigan also launched online data science Master’s degree produce fantastic specialization focused applied side data science mean you’ll get strong introduction commonly used data science Python library like matplotlib panda nltk scikitlearn networkx learn use real data series doesn’t include statistic needed data science derivation various machine learning algorithm provide comprehensive breakdown use evaluate algorithm Python think would appropriate someone already know R andor learning statistical concept elsewhere you’re rusty statistic consider Statistics Python Specialization first You’ll learn many important statistical skill needed data science Price — Free 49month certificate graded material Provider — University Michigan Courses Introduction Data Science Python Applied Plotting Charting Data Representation Python Applied Machine Learning Python Applied Text Mining Python Applied Social Network Analysis Python take course you’ll need know Python programming general actually couple great lecture first course dealing advanced Python feature you’ll need process data effectively Dataquest fantastic resource even take course list Dataquest serf superb complement online learning Dataquest foregoes video lesson instead teach interactive textbook sort Every topic data science track accompanied several inbrowser interactive coding step guide applying exact topic you’re learning Videobased learning “passive” — it’s easy think understand concept watching 2hour long video freeze actually put you’ve learned action — Dataquest FAQ Dataquest stand rest interactive platform curriculum well organized get learn working fullfledged data science project there’s super active helpful Slack community ask question platform one main data science learning curriculum Python Data Scientist Python Path track currently contains 31 course cover everything basic Python Statistics math Machine Learning Deep Learning curriculum constantly improved updated better learning experience Price — 13 content Free 29month Basic 49month Premium Here’s condensed version curriculum Python — Basic Advanced Python data science library — Pandas NumPy Matplotlib Visualization Storytelling Effective data cleaning exploratory data analysis Command line Git data science SQL — Basic Advanced APIs Web Scraping Probability Statistics — Basic Intermediate Math Machine Learning — Linear Algebra Calculus Machine Learning Python — Regression KMeans Decision Trees Deep Learning Natural Language Processing Spark MapReduce Additionally there’s also entire data science project scattered throughout curriculum project’s goal get apply everything you’ve learned point get familiar it’s like work endtoend data science strategy Lastly you’re interested learning data science R definitely check Dataquest’s new Data Analyst 
R path Dataquest subscription give access path platform learn R Python 5 Statistics Data Science MicroMasters — MIT edX MicroMasters edX advanced graduatelevel course carry real credit apply select number graduate degree inclusion probability statistic course make series MIT wellrounded curriculum able understand data intuitively Due advanced nature experience single multivariate calculus well Python programming isn’t introduction Python R like course list starting ML portion recommend taking Introduction Computer Science Programming Using Python get familiar Python Price — Free 1350 credential graded material Provider — University Michigan Courses Probability — Science Uncertainty Data Data Analysis Social Science — Assessing Knowledge Fundamentals Statistics Machine Learning Python Linear Models Deep Learning Capstone Exam Statistics Data Science ML course several interesting project you’ll work end whole series you’ll focus one exam wrap everything 6 CS109 Data Science — Harvard Screenshot lecture httpsmatterhorndceharvardeduengageplayerwatchhtmlide15f221c52754f7fb486759a7d483bc8 great mix theory application course Harvard one best getting started beginner It’s interactive platform like Coursera edX doesn’t offer sort certification it’s definitely worth time it’s totally free Curriculum Web Scraping Regular Expressions Data Reshaping Data Cleanup Pandas Exploratory Data Analysis Pandas SQL Grammar Data Statistical Models Storytelling Effective Communication Bias Regression Classification kNN Cross Validation Dimensionality Reduction PCA MDS SVM Evaluation Decision Trees Random Forests Ensemble Methods Best Practices Recommendations MapReduce Spark Bayes Theorem Bayesian Methods Text Data Clustering Effective Presentations Experimental Design Deep Networks Building Data Science Python used course there’s many lecture going intricacy various data science library work realworld interesting problem one data science course around actually touch every part data science process 7 Python Data Science Machine Learning Bootcamp — Udemy Also available using R reasonably priced course value instructor outstanding job explaining Python visualization statistical learning concept needed data science project huge benefit course Udemy course assignment Throughout course you’ll break away work Jupyter notebook workbook solidify understanding instructor follows solution video thoroughly explain part Curriculum Python Crash Course Python Data Analysis — Numpy Pandas Python Data Visualization — Matplotlib Seaborn Plotly Cufflinks Geographic plotting Data Capstone Project Machine learning — Regression kNN Trees Forests SVM KMeans PCA Recommender Systems Natural Language Processing Big Data Spark Neural Nets Deep Learning course focus applied side one thing missing section statistic plan taking course would good idea pair separate statistic probability course well honorary mention go another Udemy course Data Science AZ like Data Science AZ quite bit due complete coverage since us tool outside PythonR ecosystem don’t think fit criterion well Python Data Science Machine Learning Bootcamp top data science course specific skill Deep Learning Specialization — Coursera Created Andrew Ng maker famous Stanford Machine Learning course one highest rated data science course internet course series interested understanding working neural network Python SQL Data Science — Coursera Pair Mode Analytics SQL Tutorial wellrounded introduction SQL important necessary skill data science Mathematics Machine Learning — Coursera 
one highly rated course dedicated specific mathematics used ML Take course you’re uncomfortable linear algebra calculus required machine learning you’ll save time generic math course Win Data Science Competition — Coursera One course Advanced Machine Learning Specialization Even you’re looking participate data science competition still excellent course bringing together everything you’ve learned point advanced course teach intuition behind pick certain ML algorithm even go many algorithm winning competition lately Bayesian Statistics Concept Data Analysis — Coursera Bayesian opposed Frequentist statistic important subject learn data science Many u learned Frequentist statistic college without even knowing course great job comparing contrasting two make easier understand Bayesian approach data analysis Spark Python Big Data PySpark — Udemy instructor Python Data Science Machine Learning Bootcamp list course teach leverage Spark Python perform data analysis machine learning AWS cluster instructor make course really fun engaging giving mock consulting project work going complete walkthrough solution Learning Guide actually learn data science joining course make commitment learning would towards college course One goal learning data science online maximize mental discomfort It’s easy get caught habit signing watch video feel like you’re learning you’re really learning much unless hurt brain Vik Paruchuri Dataquest produced helpful video approach learning data science effectively Essentially come you’re learning ie take course learn skill apply real project immediately Working realworld project genuinely interested help solidify understanding provides proof know you’re One uncomfortable thing learning data science online never really know you’ve learned enough Unlike formal school environment learning online don’t many good barometer success like passing failing test entire course Projects help remediate first showing don’t know serving record knowledge it’s done project main focus course book supplement first started learning data science machine learning began lot trying predict stock found course book paper taught thing wanted know applied project learning learned much short period time seems like improbable feat laid curriculum turned extremely powerful working something passionate easy work hard learn nonstop predicting market something really wanted accomplish Essential knowledge skill Source Udacity There’s base skill set level knowledge data scientist must posse regardless industry they’re hard skill need proficient mathematics data science also need skill intuition understand data Mathematics comfortable Algebra Statistics Frequentist Bayesian Probability Linear Algebra Basic calculus Optimization Furthermore basic programming skill comfortable Python R SQL Extracting data various source like SQL database JSON CSV XML text file Cleaning transforming unstructured messy data Effective Data visualization Machine learning — Regression Clustering kNN SVM Trees Forests Ensembles Naive Bayes Lastly it’s hard skill there’s also many soft skill extremely important many aren’t taught course Curiosity creativity Communication skill — speaking presenting front group able explain complex topic nontechnical team member Problem solving — coming analytical solution business problem Python v R going list might noticed course dedicated one language Python R one learn Short answer learn Python learn Python incredibly versatile language huge amount support data science machine learning statistic also 
thing like build web apps automate task scrape web create GUIs build blockchain create game Python many thing think language choose Ultimately doesn’t matter much language choose data science since you’ll find many job looking either pick language almost anything long run though think learning R also useful since many statisticsML textbook use R example exercise fact book mentioned beginning use R unless someone translates everything Python post Github won’t get full benefit book learn Python you’ll able learn R pretty easily Check StackExchange answer great breakdown two language differ machine learning certificate worth One big difference Udemy platform like edX Coursera Metis latter offer certificate upon completion usually taught instructor university certificate like edX Metis even carry continuing education credit many real benefit like accessing graded homework test accessible upgrade need stay motivated complete entire course committing certificate also put money line you’ll le likely quit think there’s definitely personal value certificate unfortunately many employer value much Coursera edX v Udemy Udemy currently way offer certificate generally find Udemy course good applied learning material whereas Coursera edX usually better theory foundational material Whenever I’m looking course specific tool whether Spark Hadoop Postgres Flask web apps tend search Udemy first since course favor actionable applied approach Conversely need intuitive understanding subject like NLP Deep Learning Bayesian Statistics I’ll search edX Coursera first Wrapping Data science vast interesting rewarding field study part You’ll need many skill wide range knowledge passion data become effective data scientist company want hire it’ll take longer hyped YouTube video claim you’re interested machine learning side data science check Top 5 Machine Learning Courses 2019 supplement article question suggestion feel free leave comment Thanks reading fun learning Originally published learndatascicomTags Artificial Intelligence Machine Learning Technology Data Science Programming
4,210
Make Passive Income Programming — 5 Incomes for Software Developers
Wouldn’t it be beautiful to get paid to do something that you love? Better yet, what if that thing could passively generate you a hefty chunk of change every year? Well, if you’re one of the lucky souls who found a passion for programming, then I have good news for you. There are a ton of ways for software developers to make passive income programming, while reaping many other benefits for their careers as well. As a self-taught software developer who has a Bachelor of Commerce degree, I felt obligated to share the knowledge I have with the community. So without further ado, here are five ways you can turn your coding abilities into another passive income stream. 1. Build Software Hopefully, it doesn’t come as a surprise that building software is the first method on this list. I mean, this is what we do! The great thing about creating software is that once it’s built (and relatively bug-free), there isn’t much more work you need to put into it. Especially if that software only has one purpose and doesn’t require additional features to be implemented. So how can we turn software development into a passive income stream? Well, there are a few approaches we could follow. Personal Projects The first way to make money building software is by creating your own software. Something that people will actually find useful. Then selling that invention either as a SaaS or through advertising within the platform. This can literally be anything. Is there something you wish existed that made your life easier? Does something exist but could be done better? As long as you can solve some specific pain points and there is a demand for the software, there’s a chance you can monetize it! For an example of this approach, check out Glide.js. The developers at Glide.js realized there was a lot of demand for a JavaScript slider library with a very small codebase (~23kb). So they decided to build a library that makes slider development trivial without bloating your codebase. Since they were first to market a product like this, they were able to build a network of developers who use and recommend their software. Now anyone who views the documentation page gets greeted with a non-obtrusive carbon ad that earns money passively. In addition, they also have a donation page if you feel inclined to support them. The great thing about building software as passive income is that you can use this as an opportunity to learn that new language or framework you’ve been putting off. Which would make a great addition to your portfolio and expand your knowledge. Not only that, whatever you build could make somebody’s life easier. Allowing you to contribute back to the community that gives you so much. Doesn’t that sound amazing? I think it does. If you are a new developer or always get stuck on the process of building software, check out my guide on How To Plan a Coding Project — A Programming Outline. It’s a way to approach software development by breaking it down into steps (much like the concept of programming). Partnering Up The second way to make money building software would be to partner up with a business owner or entrepreneur who has a great idea for an application. Preferably a simple one that doesn’t require a significant amount of development time. Now here is where this becomes a passive stream and not just another freelance client. Agree on a contract that gives you a percentage of the incoming revenue or profit for the product. 
If additions need to be made for the software, you can either include a fixed amount of working hours a month, work at a reduced hourly rate, or outsource a developer. Whatever you decide, once the product is built, the business aspect is now out of your hands. No need to worry about marketing or sales. Just wait for your monthly royalty checks! Do this for a few pieces of software and pretty soon you will have a great passive income stream. You might be wondering how you can find a business partner like this. Well, there is no shortage of great business ideas from entrepreneurial-minded individuals. A great place to start for this could be r/Entrepreneur or any forum board or group where business professionals might hang out. In my personal opinion, finding a business partner online can be a little... sketchy. Personally, I believe working with local businesses can be a much safer route to go down. Most of them might not have a great app idea but they do have products to sell. Which brings me to our next great passive income stream. 2. eCommerce & Shopify If you have ever thought about diving into the world of eCommerce, now is the time. There are many businesses, both local and abroad, that could benefit from providing an online outlet for their storefront. Following the methodology above, you can very easily make passive income by building eCommerce stores with Shopify. Offer to build the stores for free, walk them through importing products, and in return, receive a small percentage of the revenue. This approach is easy to sell businesses on because it has a very low risk factor. If the store makes less than expected, the business owner is no worse off than before. Making it a much easier sell. So, how does eCommerce with Shopify differ from building any other type of software? Great question. I am going to answer that. 
Here is a screenshot of the $66 USD I made this year with Shopify, along with one of my clients’ stores that made ~$75K this year. Even by making 5% of revenue with this store, you would receive ~$3,500 in completely passive income. Obviously, don’t expect every store to make this kind of revenue, but if you’re smart about it, and pick your businesses right, you could make it a full-time job! 3. Start A Development Blog Looking for a long-term strategy? Starting a development blog can be a great way to earn passive income programming. It is also a great way to stay up to date on current technologies, help beginners with the knowledge you have and improve your writing skills. I mean, who doesn’t love a developer who can actually write a decent README file? I know I do. You can check out my blog here: thecodebytes.com The truth about earning revenue from a blog is that it can take a lot of time and effort to build a following and reap any sort of benefits. However, it can definitely be done. A close friend of mine actually earned +$7K from blogging in 2019 and has been growing ever since. There are essentially three ways to make money passively writing about code. I’ll walk through them for you. Advertising I know this has already been mentioned, but it is probably the easiest of the monetization methods. Advertising partners such as Adsense or Monumetric allow you to display advertisements on your blog and get paid passively! It really doesn’t get much easier than that. The only challenge from there is making sure your content is of high quality and building an audience. Affiliates Another popular way for bloggers to make money is with affiliate programs. Affiliates are essentially links pointing to products or services that you partner with. If someone signs up for an affiliate from your unique URL, the partner will give you some form of compensation. Amazon has a popular affiliate program, but if you’re looking for something more closely related to the development sphere, Shopify, Codecademy and probably any other platform that has a large following would be a great place to start. 3rd Party Sites In addition to your personal blog, I also wanted to note that there are third-party writing platforms that can help you earn money by writing about code. I personally use Medium, but there are a lot of sites out there. Dev.to and Hackernoon are two cool platforms that allow you to cross-post from your own blog. Allowing you to link back to your original content while still helping the community. A big win/win if you ask me! How much money can you make with Medium? Well, it depends on how much you write and whether or not the post goes viral. I haven’t written much on Medium but I wanted to include a shameless screenshot for full transparency. As you can see, in total I earned around $22.29 from my three articles. This number isn’t great, but if you spent some serious time writing articles, this number would add up. My articles are continuing to make money as well. You can check out what I write about here. Important Note: If you are interested in making money with Medium, make sure you sign up for the Medium Partnership Program or you won’t get paid. 4. Online Tutoring Videos The fourth way to make passive income programming is through online tutoring. If you are more of a visual and outgoing person, video content is the way to go (aren’t all programmers outgoing?). The best part about video content is that it is re-usable. You record it once and then it is easily distributable forever. 
There are two main forms of online tutoring videos. YouTube YouTube follows a revenue model similar to blogging’s, making most of your money off of advertising or affiliate sales. It also works by building a consistent following and growing your account. For that reason, I won’t talk too much about it. If you are really looking to grow your passive income programming streams, building both a blog and a YouTube channel while cross-posting wouldn’t be a bad idea! Allowing you to grow two revenue streams at once. Massive Open Online Courses A second way to make money with online tutoring is with MOOCs (Massive Open Online Courses). These platforms allow you, as a developer, to make a course and share it online for anyone to view for a set price. If you are a good developer and have gained a decent following online (through YouTube or blogging), selling a MOOC is a very realistic way to make passive income. Figuring out what to make a course about is a balancing act between what is in demand and what has low competition. If there is something you are highly skilled in that many developers want to know more about, this could be a great idea for a course. How much money can you make with MOOCs? Honestly, the sky’s the limit. Take Brad Traversy’s MERN Stack Front To Back. With 42,945 students × ~$20 per student, he has made on the order of a million dollars from one course. Obviously, he has spent a lot of time building up his audience by consistently providing quality content. However, you can see the height of the pay ceiling. It’s never been easier to make online courses about code. Sites like Udemy don’t even require a degree to become a teacher. Just sign up and upload your content. The student reviews will be the deciding factor in whether or not your content is worth paying for. *Your students analyzing your content quality* 5. Outsourcing Freelancing Clients This brings us to the final passive income idea for developers. Finding and outsourcing freelance clients. When I first started out as a self-taught developer, the only work I could find was as a freelancer. It was actually very difficult to find my first client that paid well. But after that, it became easier and easier to find new clients. Mainly due to my growing portfolio and word of mouth. So much so that I had to start turning down offers because I could not work fast enough to take on additional clients. The solution? Start outsourcing freelance clients to other developers. As a programmer, you have two key characteristics for this to work. First, you know what tools are needed and approximately how long it would take to get something done. Second, you also have the skillset to find other developers and vet them for their abilities. By outsourcing to developers who are willing to work for a reduced rate, you can essentially be a middleman for clients. Taking freelance offers, sending them to your developers and sending the finished work back to your clients. This is a mutually beneficial scenario. Clients like someone in their time zone who is available, fluent in English and gets the job done on time, on budget and with good coding standards. You can be the one to bridge the gap for developers overseas. Allowing them to make a liveable wage and yourself to make passive income. Now, I know that this last one isn’t technically passive. However, if you can scale it, eventually you could also hire someone to take over the management aspect. At this point, you are essentially just running a business. However, it would be a passive business. 
Just something to keep in mind. Closing Remarks So there you have it. Five ways to make passive income with programming that will actually make you money. As a developer, you are blessed with a high barrier to entry that makes it very difficult for a non-technical person to make money within this niche. Giving you much better odds of making an income without worrying about steep competition. Now, I am not saying these methods will be easy. People often confuse passive income with easy income. However, is anything worth doing ever easy? We all know programming isn’t. I hope I have proven that with enough hard work upfront, you can reap the benefits of passive income for years to come. Finally allowing you to quit your day job, save for that vacation or simply invest some extra money. I really don’t care what you do. I just wanted to let you know that these options are always available to you. Because I love you. So you’re welcome. If you are a beginner coder, check out my article on Become a Professional Full Stack Web Developer in 2020. It should give you a good starting point if you want to delve into the wide world of Web Development. Happy coding!
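To make the revenue-share arithmetic from the Shopify section concrete, here is a minimal Python sketch. The 5% share and ~$75K store revenue echo the figures quoted above; everything else (the variable names, the monthly breakdown) is an illustrative assumption, not part of the original article.

```python
# Hypothetical revenue-share arithmetic (figures echo the article's example).
annual_revenue = 75_000  # client store's yearly revenue, in dollars
share = 0.05             # your negotiated cut of store revenue

annual_royalty = annual_revenue * share
monthly_royalty = annual_royalty / 12

print(f"Yearly royalty:  ${annual_royalty:,.0f}")   # ~$3,750/year, in the
print(f"Monthly royalty: ${monthly_royalty:,.0f}")  # ballpark of the ~$3,500 quoted above
```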
https://medium.com/swlh/make-passive-income-programming-5-incomes-for-software-developers-fd605395db71
['Grant Darling']
2020-12-26 17:23:09.570000+00:00
['Passive Income', 'Programming', 'Web Development', 'Make Money', 'Entrepreneurship']
Title Make Passive Income Programming — 5 Incomes Software DevelopersContent Wouldn’t beautiful get paid something love Better yet thing could passively generate hefty chunk change every year Well you’re one lucky soul found passion programming good news ton way software developer make passive income programming additionally reaping many benefit career well selftaught software developer Bachelor Commerce degree felt obligated share knowledge community without ado five way turn coding ability another passive income stream 1 Build Software Hopefully doesn’t come surprise building software first method list mean great thing creating software it’s built relatively bugfree isn’t much work need put Especially software one purpose doesn’t require additional feature implemented turn software development passive income stream Well approach could follow Personal Projects first way make money building software creating software Something people actually find useful selling invention either SaaS advertising within platform literally anything something wish existed made life easier something exist could done better long solve specific pain point demand software there’s chance monetize example approach check Glidejs developer Glidejs realized lot demand javascript slider library small codebase 23kb decided build library make slider development trivial without bloating codebase Since first market product like able build network developer use recommend software anyone view documentation page get greeted nonobtrusive carbon ad earns money passively addition also donation page feel inclined support great thing building software passive income use opportunity learn new language framework you’ve putting would make great addition portfolio expand knowledge whatever build could make somebody’s life easier Allowing contribute back community give much Doesn’t sound amazing think new developer always get stuck process building software check guide Plan Coding Project — Programming Outline It’s way approach software development breaking step much like concept programming Partnering second way make money building software would partner business owner entrepreneur great idea application Preferably simple one doesn’t require significant amount development time becomes passive stream another freelance client Agree contract give percentage incoming revenue profit product addition need made software either include fixed amount working hour month work reduced hourly rate outsource developer Whatever decide product built business aspect hand need worry marketing sale monthly royalty check piece software pretty soon pretty great passive income stream might wondering find business partner like Well shortage great business idea entrepreneurialminded individual great place start could rEntrepreneur forum board group business professional might hang personal opinion finding business partner online little sketchy Personally believe working local businesses’ much safer route go might great app idea product sell brings next great passive income stream 2 eCommerce Shopify ever thought diving world eCommerce time many business local abroad could benefit providing online outlet storefront Following methodology easily make passive income building eCommerce store Shopify Offer build store free walk importing product return receive small percentage revenue approach easy sell business lowrisk factor store make le expected business owner worse Making much easier sell eCommerce Shopify differ building type software Great question going 
answer Benefits Shopify first reason dead simple build eCommerce store Shopify built first Shopify store within week beginning career developer store generated 75K first year addition liquid templating language used Shopify intuitive pick make easy build frontend display product data also plethora tool available make development easier financial data handled Shopify Making process smooth possible afraid admit early day assumed Shopify joke platform Meant solely ambitious delusional dropshippers didn’t know first thing programming business experience platform confidently say love Shopify done eCommerce developer honestly say couldn’t imagine building another Frankenstein WooCommerce site Shopify Partner Program Shopify also partnership program revolves around idea passive income plenty way make passive income Shopify Whether it’s building tool referring store owner developing customer store That’s right addition working revenue model client Shopify also pay recurring revenue based client’s Shopify plan Better yet access store’s dashboard work much client owe every month Seriously Shopify may best bet making passive income Especially excel frontend development screenshot 66 USD made year Shopify along one client’s store made 75K year Even making 5 revenue store would receive 3500 completely passive income Obviously don’t expect every store make kind revenue you’re smart pick business right could make fulltime job 3 Start Development Blog Looking long term strategy Starting development blog great way earn passive income programming also great way stay date current technology help beginner knowledge improve writing skill mean doesn’t love developer actually write decent README file know check blog thecodebytescom truth earning revenue blog take lot time effort build following reap sort benefit However definitely done close friend mine actually earned 7K blogging 2019 growing ever since essentially three way make money passively writing code I’ll walk Advertising know mentioned probably easiest monetization method mentioned Advertising partner Adsense Monumetric allow display advertisement blog get paid passively really doesn’t get much easier challenge making sure content high quality building audience Affiliates Another popular way blogger make money affiliate program Affiliates essentially link pointing product service partner someone sign affiliate unique URL partner give form compensation Amazon popular affiliate program you’re looking something closer related developmentsphere Shopify Codeacademy probably platform large following would great place start 3rd Party Sites addition personal blog also wanted state third party writing platform help earn money writing code personally use Medium lot site Devto Hackernoon two cool platform allow crosspost blog Allowing link back original content still helping community big winwin ask much money make Medium Well depends much write whether post go viral haven’t written much medium wanted include shameless screenshot full transparency see total earned around 2229 three article number isn’t great spent serious time writing article number would add article continuing make money well check write Important Note interested making money Medium Make sure sign Medium Partnership Program won’t get paid 4 Online Tutoring Videos fourth way make passive income programming online tutoring visual outgoing person video content way go aren’t programmer outgoing best part video content reusable record easily distributable forever two main form online tutoring video 
Youtube Youtube follows similar revenue model blogging Making money advertising affiliate sale also work building consistent following growing account reason won’t talk much really looking grow passive income programming stream building blog youtube crossposting wouldn’t bad idea Allowing grow two revenue stream Massive Open Online Courses second way make money online tutoring MOOCs Massive Open Online Courses course allow developer make course share online anyone view set price good developer gained decent following online youtube blogging selling MOOC realistic way make passive income Figuring make course balancing act demand low competition something highly skilled many developer want know could great idea course much money make MOOCs Honestly sky’s limit Take Brad Traversy’s MERN Stack Front Back 42945 student x 20 per student made 1 million dollar one course Obviously spent lot time building audience consistently providing quality content However see height pay ceiling It’s never easier make online course code Sites like udemy don’t even require degree become teacher signup upload content student review deciding factor whether content worth paying student analyzing content quality 5 Outsourcing Freelancing Clients brings u final passive income idea developer Finding outsourcing freelance client first started selftaught developer work could find freelancer actually difficult find first client paid well became significantly easier easier find new client Mainly due growing portfolio word mouth much start turning offer could work fast enough take additional client solution Start outsourcing freelance client developer programmer two key characteristic work First know tool approximate time frame would take get something done Second also skillset find developer vet ability outsourcing developer willing work reduced rate essentially middle man client Taking freelance offer sending developer sending back client mutually beneficial scenario Clients like someone time zone available fluent English get job done time budget good coding standard one bridge gap developer overseas Allowing make liveable wage make passive income know last one isn’t technically passive However scale eventually could also hire someone take management aspect point essentially running business However would passive business something keep mind Closing Remarks Five way make passive income programming actually make money developer blessed high barrier entry make difficult nontechnical person make money within niche Giving much better odds making income without worrying steep competition saying method easy People often confuse passive income easy income However anything worth ever easy know programming isn’t hope proven enough hard work upfront reap benefit passive income year come Finally allowing quit day job save vacation simply invest extra money really don’t care wanted let know option always available love you’re welcome beginner coder check article Become Professional Full Stack Web Developer 2020 give good starting point want delve wide world Web Development Happy codingTags Passive Income Programming Web Development Make Money Entrepreneurship
4,211
YouTube Gave Me an Award and I Hated It
When I started at university, I thought I would be incredibly proud if I got a master’s degree. But as I slowly got closer, it seemed less and less impressive. Similarly, before I was accepted into the conservatory, it seemed inconceivable to me that I could ever do something so fantastic as getting a degree there. But I’m about to start my third year and not only am I doing it — I’m surrounded by people who are also doing it. It suddenly doesn’t feel so special anymore. Imagine a mountain climber who is about to reach the peak of a mountain and goes: ‘But… this isn’t impressive at all. It’s just three lousy little steps.’ We forget where we have come from. We forget to look behind us and see the distance we’ve already crossed. We forget to be grateful to our former selves — for all the times we stuck with it, when it was difficult, but especially when it was easy. “The journey of a thousand miles begins with a single step. And then… like a million more steps.” – Felicity Ward This is really funny, but it’s actually kind of profound. It’s as if we think that in order to be truly fulfilled, something needs to feel like a Heroic Effort, when in reality it’s the thousands of tiny steps (some of which we took because we had no choice) that got us to where we are. If you study for every test because there is a professor forcing you, that doesn’t mean you’re not actually passing the tests. It means you set yourself up in a way that didn’t allow you to slack off. Just because I have a natural aptitude for languages, that doesn’t mean that A for French at the conservatory is meaningless. And just because I enjoyed making mermaid videos and it never really felt like work, that doesn’t mean I’m not allowed to be proud of reaching a milestone. Me with my Silver Play Button. Yay. I’m not an unhappy achiever. I’m an ungrateful one. I think I’ll maybe just hang that silver play button somewhere and practise gratefulness. Not gratefulness for all the effort and blood/sweat/tears it took, but gratefulness for the fact that sometimes, things appear to have come easy — and that’s okay.
https://medium.com/the-ascent/youtube-gave-me-an-award-and-i-hated-it-36f36c752a93
['Stella Brüggen']
2020-09-06 17:01:01.892000+00:00
['Psychology', 'Happiness', 'Careers', 'Self Improvement', 'Self']
Title YouTube Gave Award Hated ItContent started university thought would incredibly proud got master’s degree slowly got closer seemed le le impressive Similarly accepted conservatory seemed inconceivable could ever something fantastic getting degree I’m start third year — I’m surrounded people also suddenly doesn’t feel special anymore Imagine mountain climber reach peak mountain go ‘But… isn’t impressive It’s three lousy little steps’ forget come forget look behind u see distance we’ve already crossed forget grateful former self — time stuck difficult especially easy “The journey thousand mile begin single step then… like million steps” – Felicity Ward really funny it’s actually kind profound It’s think order truly fulfilled something need feel like Heroic Effort reality it’s thousand tiny step took choice got u study every test professor forcing doesn’t mean you’re actually passing test mean set way didn’t allow slack natural aptitude language doesn’t mean French conservatory meaningless enjoyed making mermaid video never really felt like work doesn’t mean I’m allowed proud reaching milestone Silver Play Button Yay I’m unhappy achiever I’m ungrateful one think I’ll maybe hang silver play button somewhere practise gratefulness gratefulness effort bloodsweattears took gratefulness fact sometimes thing appear come easy — that’s okayTags Psychology Happiness Careers Self Improvement Self
4,212
A Mid-Autumn Day’s Matinee
My play Translation by was just published here on Medium — you could say it’s “in previews” if you want to seem like a real theatre geek. In the traditions of Shakespeare in the Park and midweek midday matinee performances, I am unlocking it Wednesday (+ 3 others) for all to read, enjoy and share. Translation by — Doing my best not to spoil it (yet still sell it), I can say it is a comedy of errors centering on a group of diverse players trying to ready a translation of a work… and not fully succeeding.
https://medium.com/the-coffeelicious/a-mid-autumn-days-matinee-8c8b6a864b13
['Ernio Hernandez']
2017-11-22 12:30:39.225000+00:00
['Reading', 'Fiction', 'Writing', 'Culture', 'Play']
Title MidAutumn Day’s MatineeContent play Translation published Medium — could say it’s “in previews” want seem like real theatre geek tradition Shakespeare Park midweek midday matinee performance unlocking Wednesday 3 others read enjoy share Translation — best spoil yet still sell say comedy error centering group diverse player trying ready translation work… fully succeedingTags Reading Fiction Writing Culture Play
4,213
Visualization With Seaborn
Seaborn is a Python data visualization library based on Matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics. It provides choices for plot style and color defaults, defines simple high-level functions for common statistical plot types, and integrates with the functionality provided by Pandas DataFrames. The main idea of Seaborn is that it provides high-level commands to create a variety of plot types useful for statistical data exploration, and even some statistical model fitting. 1.0.1 Table of Contents Creating basic plots Advanced categorical plots in Seaborn Density plots Pair plots # importing required libraries import seaborn as sns sns.set() sns.set(style='darkgrid') import numpy as np import pandas as pd # importing matplotlib import matplotlib.pyplot as plt %matplotlib inline import warnings warnings.filterwarnings("ignore") plt.rcParams['figure.figsize']=(10,10) In this notebook, we will use the Big Mart Sales Data. You can download the data from Github: https://github.com/Yuke217 # read the dataset df = pd.read_csv("dataset/bigmart_data.csv") # drop the null values df = df.dropna(how="any") # view the top results df.head() 1.0.2 Creating basic plots Let’s have a look at how you can create some basic plots in seaborn in a single line, for which multiple lines were required in Matplotlib. 1.0.2.1 Line Chart With some datasets, you may want to understand changes in one variable as a function of time or a similarly continuous variable. In seaborn, this can be accomplished by the lineplot() function, either directly or with relplot() by setting kind=”line”: # line plot using lineplot() sns.lineplot(x="Item_Weight", y="Item_MRP", data=df[:50]) 1.0.2.2 Bar chart In seaborn, you can create a bar chart by simply using the barplot() function. Notice that to achieve the same thing in Matplotlib, we had to write extra code just to group the data category-wise. And then we had to write much more code to make sure that the plot comes out correct. sns.barplot(x="Item_Type", y="Item_MRP", data=df[:5]) 1.0.2.3 Histogram You can create a histogram in seaborn by using distplot(). sns.distplot(df['Item_MRP']) 1.0.2.4 Box plots You can use boxplot() for creating boxplots in seaborn. Let’s try to visualize the distribution of Item_Outlet_Sales of items. sns.boxplot(df['Item_Outlet_Sales'], orient='vertical') 1.0.2.5 Violin plot A violin plot plays a similar role as a box and whisker plot. It shows the distribution of quantitative data across several levels of one (or more) categorical variables such that those distributions can be compared. Unlike a box plot, in which all of the plot components correspond to actual data points, the violin plot features a kernel density estimation of the underlying distribution. You can create a violin plot using violinplot() in seaborn. sns.violinplot(df['Item_Outlet_Sales'], orient='vertical') 1.0.2.6 Scatter plot It depicts the relationship between two variables using a cloud of points, where each point represents an observation in the dataset. You can use relplot() with the option kind="scatter" to plot a scatter plot in seaborn. Notice that the default option is scatter. # scatter plot sns.relplot(x="Item_MRP", y="Item_Outlet_Sales", data=df[:200], kind="scatter") 1.0.2.7 Hue semantic We can also add another dimension to the plot by coloring the points according to a third variable. 
In seaborn, this is referred to as using a “hue semantic”. sns.relplot(x="Item_MRP", y="Item_Outlet_Sales", hue="Item_Type", data=df[:200]) Remember the line chart that we created earlier? When we have a hue semantic, we can create more complex line plots in seaborn. In the following example, different line plots for different categories of Outlet_Size are made. # different line plots for different categories of Outlet_Size sns.lineplot(x="Item_Weight", y="Item_MRP", hue="Outlet_Size", data=df[:100]) 1.0.2.8 Bubble plot We utilize the hue semantic to color bubbles by their Item_Visibility and at the same time use it as the size of individual bubbles. # bubble plot sns.relplot(x="Item_MRP", y="Item_Outlet_Sales", data=df[:200], kind="scatter", size="Item_Visibility", hue="Item_Visibility") 1.0.2.9 Category-wise subplots You can also create plots based on category in seaborn. We have created scatter plots for each Outlet_Size. Now we create three plots, one for each Outlet_Size category, using col. # subplots for each category of Outlet_Size sns.relplot(x="Item_Weight", y="Item_Visibility", hue='Outlet_Size', col="Outlet_Size", data=df[:100]) 1.1 2. Advanced categorical plots in seaborn For categorical variables we have three different families in seaborn. Categorical scatterplots: stripplot() (with kind=”strip”; the default) swarmplot() (with kind=”swarm”) Categorical distribution plots: boxplot() (with kind=”box”) violinplot() (with kind=”violin”) boxenplot() (with kind=”boxen”) Categorical estimate plots: pointplot() (with kind=”point”) barplot() (with kind=”bar”) The default representation of the data in catplot() uses a scatterplot. 1.1.1 a. Categorical scatterplots 1.1.1.1 Strip plot Draws a scatterplot where one variable is categorical. You can create this by passing kind="strip" to catplot(). sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales", kind="strip", data=df[:250]) 1.1.1.2 Swarm plot This function is similar to stripplot(), but the points are adjusted so that they don't overlap. This gives a better representation of the distribution of values, but it does not scale well to large numbers of observations. This style of plot is sometimes called a “beeswarm”. You can create this by passing kind="swarm" to catplot(). sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales", kind='swarm', data=df[:250]) 1.1.2 b. Categorical distribution plots 1.1.2.1 Box Plots A box plot shows the three quartile values of the distribution along with the extreme values. The “whiskers” extend to points that lie within 1.5 IQRs of the lower and upper quartile, and observations that fall outside this range are displayed independently. This means that each value in the boxplot corresponds to an actual observation in the data. sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales", kind="box", data=df) 1.1.2.2 Violin Plots sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales", kind="violin", data=df) 1.1.2.3 Boxen plots This style of plot was originally named a “letter value” plot because it shows a large number of quantiles that are defined as “letter values”. It is similar to a box plot in plotting a nonparametric representation of a distribution in which all features correspond to actual observations. By plotting more quantiles, it provides more information about the shape of the distribution, particularly in the tails. 
sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales", kind="boxen", data=df) 1.1.2.4 Point plot sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales", kind="point", data=df) 1.1.2.5 Bar plots sns.catplot(x="Outlet_Size", y="Item_Outlet_Sales", kind="bar", data=df) 1.2 3. Density Plots Rather than a histogram, we can get a smooth estimate of the distribution using a kernel density estimation, which Seaborn does with sns.kdeplot: A density plot visualises the distribution of data over a continuous interval or time period. A density plot gives a smoother view of the distribution by smoothing out noise. The peaks of a density plot help display where values are concentrated over the interval. An advantage density plots have over histograms is that they’re better at determining the distribution shape because they’re not affected by the number of bins used (each bar used in a typical histogram). # distribution of Item_Visibility plt.figure(figsize=(10,10)) sns.kdeplot(df["Item_Visibility"], shade=True) # distribution of Item_MRP plt.figure(figsize=(10,10)) sns.kdeplot(df["Item_MRP"], shade=True) 1.2.1 Histogram and Density Plot Histograms and KDE can be combined using distplot: plt.figure(figsize=(10,10)) sns.distplot(df['Item_Outlet_Sales']) 1.3 4. Pair plots When you generalize joint plots to datasets of larger dimensions, you end up with pair plots. This is very useful for exploring correlations between multidimensional data, when you’d like to plot all pairs of values against each other. We’ll demo this with the well-known Iris dataset, which lists measurements of petals and sepals of three iris species: iris = sns.load_dataset("iris") iris.head()
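Since the Big Mart CSV used above may not be on your machine, here is a self-contained sketch of the main plot types built on seaborn's bundled tips dataset. The column names (total_bill, tip, day, smoker) belong to that dataset, not to the article's data, and the snippet targets the same 2020-era seaborn API used above.

```python
import seaborn as sns
import matplotlib.pyplot as plt

sns.set(style="darkgrid")

# Bundled dataset, so the example runs without any local CSV.
tips = sns.load_dataset("tips")

# Scatter plot with a hue semantic (a third variable encoded as color).
sns.relplot(x="total_bill", y="tip", hue="smoker", data=tips, kind="scatter")

# Categorical distribution plots: box, violin, and boxen.
for kind in ("box", "violin", "boxen"):
    sns.catplot(x="day", y="total_bill", kind=kind, data=tips)

# Kernel density estimate of a single variable.
plt.figure(figsize=(10, 10))
sns.kdeplot(tips["total_bill"], shade=True)

# Pair plot over all numeric columns.
sns.pairplot(tips)

plt.show()
```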
https://medium.com/analytics-vidhya/visualization-with-seaborn-e2d9cacd932b
['Yuke Liu']
2020-07-02 14:51:37.106000+00:00
['Data Science', 'Seaborn', 'Matplotlib', 'Data Visualization']
Title Visualization SeabornContent Seaborn Python data visualization library based Matplotlib provides highlevel interface drawing attractive informative statistical graphic provides choice plot style color default defines simple highlevel function common statistical plot type integrates functionality provided Pandas DataFrames main idea Seaborn provides highlevel command create variety plot type useful statistical data exploration even statistical model fitting 101 Table Contents Creating basic plot Advance Categorical plot Seaborn Density plot Pair plot importing required library import seaborn sn snsset snssetstyledarkgrid import numpy np import panda pd importing matplotlib import matplotlibpyplot plt matplotlib inline import warning warningsfilterwarningsignore pltrcParamsfigurefigsize1010 notebook use Big Mart Sales Data download data Github httpsgithubcomYuke217 read dataset df pdreadcsvdatasetbigmartdatacsv drop null value df dfdropnahowany View top result dfhead 102 Creating basic plot Let’s look create basic plot seaborn single line multiple line required Matplotlib 1021 Line Chart datasets may want understand change one variable function time similarly continuous variable seaborn accomplished lineplot function either directly relplot setting kind”line” line plot using lineplot snslineplotxItemWeight yItemMRPdatadf50 1022 Bar chart seaborn create bar chart simply using barplot function function Notice achieve thing Matplotlib write extra code group data category wise write much code make sure plot come correct snsbarplotxItemType yItemMRP datadf5 1023 Histogram create histogram seaborn using distplot snsdistplotdfItemMRP 1024 Box plot use Boxplot creating boxplots seaborn creating boxplots seaborn Let’s try visualize distribution ItemOutletSales item snsboxplotdfItemOutletSales orientvertical 1025 Violin plot violin plot play similar role box whisker plot show distribution quantitative data across several level oneor categorical variable distribution compared Unlike box plot plot component correspond actual data point violin plot feature kernel density estimation underlying distribution create violin plot using violinplot seaborn snsviolinplotdfItemOutletSales orientvertical 1026 Scatter plot depicts relationship two variable using cloud point point represents observation dataset use relplot option kindscatter plot scatter plot seaborn option kindscatter plot scatter plot seaborn Notice default option scatter scatter plot snsrelplotxItemMRP yItemOutletSales data df200 kindscatter 1027 Hue semantic also add another dimension plot coloring point according third variable seaborn referred using “Hue semantic” snsrelplotxItemMRP yItemOutletSales hueItemType datadf200 Remember line chart created earlier hue semantic create complex line plot seaborn semantic create complex line plot seaborn following example different line plot different category OutletSize made different line plot different category OutletSize snslineplotxItemWeight yItemMRP hueOutletSize datadf100 1028 Bubble plot utilize hue semantic color bubble ItemVisibility time use size individual bubble bubble plot snsrelplotxItemMRP yItemOutletSales datadf200kindscatter sizeItemVisibility hueItemVisibility 1029 Category wise sub plot also create plot based category seaborn seaborn created scatter plot OutletSize create three plot based different OutletSize using col subplots category OutletSize snsrelplotxItemWeight yItemVisibility hue OutletSizecol OutletSizedatadf100 11 2 Advance categorical plot seaborn categorical variable 
three different family seaborn Categorical scatterplots stripplot kind”strip” default swarmplot kind”swarm” Categorical distribution plot boxplot kind”box” violinplot kind”violin” Boxenplot kind”bowen” Categorical estimate plot pointplot kind”point” barplot kind”bar” default representation data catplot us scatterplot 111 Categorical scatterplots 1111 Strip plot Draws scatterplot one variable categorical create passing kindstrip catplot snscatplotxOutletSize yItemOutletSales kindstrip datadf250 1112 Swarm plot function similar stripplot point adjusted dont overlap point adjusted dont overlap give better representation distribution value scale well large number observation style plot sometimes called “beeswarm” created passing kindswarm catplot snscatplotxOutletSize yItemOutletSales kindswarmdatadf250 112 b Categorical distribution plot 1121 Box Plots Box plot show three quartile value distribution along extreme value “whiskers” extend point lie within 15 IQRs lower upper quartile observation fall ourside range displayed independently mean value boxplot corresponds actual observation data snscatplotxOutletSize yItemOutletSales kindbox datadf 1122 Violin Plots snscatplotxOutletSize yItemOutletSales kindviolindatadf 1123 Boxen plot style plot originally named “letter value” plot show large number quantiles defined “letter values” similarto box plot plotting nonparametric representation distribution feature correspond actual observation plotting quantiles provides information shape distribution particularly tail snscatplotxOutletSize yItemOutletSales kindboxendatadf 1124 Point plot snscatplotxOutletSize yItemOutletSaleskindpointdatadf 1125 Bar plot snscatplotxOutletSize yItemOutletSales kindbardatadf 12 3 Density Plots Rather histogram get smooth estimate distribution using kernel density estimation Seaborn snsdkeplot Density Plot visualises distribution data continuous interval time period Density plot allows smoother distribution smoothing noise peak Density Plot help display value concentrated interval advantage Density Plots Histograms they’re better determining distribution shape they’re affected number bin used bar used typical histogram distribution Item Visibility pltfigurefigsize1010 snskdeplotdfItemVisibilityshadeTrue distribution Item MRP pltfigurefigsize1010 snskdeplotdfItemMRPshadeTrue 121 Histogram Density Plot Histograms KDE combined using distplot pltfigurefigsize1010 snsdistplotdfItemOutletSales 13 4 Pair plot generalize joint plot datasets larger dimentions end pair plot useful exploring correlation multidimensional data you’d like plot pair value We’ll demo wellknown Iris dataset list measurement petal sepal three iris specie iris snsloaddatasetiris irisheadTags Data Science Seaborn Matplotlib Data Visualization
4,214
The Big Six Framework: How We Lowered The Cost Per Lead By 80%
The Two Most Common Problems In Advertising The two most common problems in advertising are found on opposite sides of the spectrum: ignorance and overwhelm. Ignorance It’s not uncommon for advertisers to be unaware of all of the factors necessary for successful advertising. This tends to lead to overoptimization, where you double down on one or a few areas of the ad campaigns while neglecting other, perhaps more important, aspects. Overwhelm On the other hand, advertisers who are aware of all the data, options, tools, strategies and tactics that exist tend to become overwhelmed. This leads to paralysis, where you stop executing and start spending too much time analyzing. The Solution How do you solve these problems? First, you need to get a clear picture of the entire advertising landscape and all the factors involved. Second, you need a process for identifying the single largest bottleneck, so you can focus your attention where it’s needed most. And we’re going to show you how we can achieve both of these objectives. Introducing The Big Six Framework Based on years of failing, succeeding and learning, we’ve created a powerful framework that allows us to cut through complexity and consistently produce results for our clients. We call it the Big Six: The Simple Framework That Produces Staggering Results Variables Variables are basically all the components of effective advertising. All of them can be manipulated to improve performance. Depending on the audience, product, etc., some variables may be more important than others. Mechanism The mechanisms are the ways in which the variables are manipulated. Metrics Metrics are used for diagnosis and tell us whenever there’s a problem with a variable. With relevant benchmarks and targets, metrics allow us to easily identify the largest bottleneck(s). Note: Like all frameworks, this one is designed to simplify a complex reality. For best results, use with judgment!😊 All right, that’s enough theory! Let’s see how we used this framework for one of our clients to systematically lower the CPL by over 80%. Applying The Framework Planning The Campaign📊 We were approached by a company in the health and fitness space looking to acquire new members. Before working with us, they had tried to generate leads sporadically and with mixed results. Now they wanted to take a more structured approach. First, we ran the numbers. Using previous experience and client data, we created benchmarks and targets for the metrics (CPM, Click-Through Rate, Conversion Rate). This is a crucial step, as otherwise you won’t be able to use the metrics as a diagnostic tool. Hypothetical Target CPA Analysis Second, we planned and prioritized our actions. We created a backlog of all the things we wanted to test and implement. This prioritization allows us to make sure we’re always using our time efficiently and not just doing busywork. Knowing the numbers makes it easier to prioritize. For instance, we knew that it would be more challenging to improve the Cost Per Click (CPC) than the Conversion Rate. So we focused on the variable we thought was going to have the biggest impact: the Offer. We created two identical campaigns and landing pages with two different offers, and launched the first ads. Going Live🚀 After running the ads for a few days, things were looking... not great! In fact, we had a CPL of just over $40, far above our target. 
Instead of freaking out and rethinking the entire campaign, we analyzed the data and found the reason behind the high CPL: a lower-than-expected Conversion Rate. And since neither Offer was converting well, we knew there was a problem with the Landing Page. By analyzing screen recordings from Hotjar, we were able to further pinpoint the problem: visitors were reading most of the content but leaving the page once they came to the contact form at the bottom. We redesigned the form and made it shorter and easier to fill out. Ready For Round Two🥊 With a new form, things were moving in the right direction and we were looking at a CPL of $26. We could now see a clear difference in performance between the two offers. We still weren’t quite satisfied with the conversion rate, so we decided to go back and test a third offer. New Targeting🎯 The new offer turned out to be the highest converting of the three and brought down the CPL to $12. Both the landing page and the offer were now working, and the conversion rate was where we wanted it to be. We decided to shift our attention to the traffic side. While we were focusing on improving the conversion rate, we also kept an eye on the advertising performance. By looking at a breakdown of the demographics and geography of our audience, we’d noticed that some of the segments were underperforming, and we made a few changes to the targeting. Note: While you should ideally only change one variable at a time, you want to constantly look at data and potentially revise your priorities. The Final Touch🎨 The new targeting had a positive (but not huge) impact, and the CPL dropped to just below $10. We were confident that the ads were showing to the right people, and we could now focus on further increasing the CTR (Click-Through Rate). The easiest way to do this was by capturing more attention through new and better ad creatives. Note: The creative is often the most important variable and the thing we focus on first. Here it was last since we were building out the advertising funnel from scratch. In Summary With the new ad creatives, we finally managed to decrease the CPL to less than $7, which represents an 80% improvement from where we started. This meant that the client was now getting 5x the number of leads at the original cost! 80% Reduction In CPL The fastest and straightest path to great results comes from understanding all the factors without letting them overwhelm you. Be systematic and patient, and focus on what matters. Good luck!
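As a rough illustration of the "run the numbers" planning step described above, the sketch below shows how a target CPL decomposes into CPM, click-through rate, and conversion rate. All figures are hypothetical stand-ins, not the client's actual benchmarks.

```python
# Hypothetical funnel benchmarks (illustrative only, not the client's numbers).
cpm = 10.0        # cost per 1,000 impressions, in dollars
ctr = 0.015       # click-through rate: clicks / impressions
conv_rate = 0.08  # conversion rate: leads / clicks

# Derive cost per click, then cost per lead, step by step.
cpc = cpm / (1000 * ctr)  # dollars per click
cpl = cpc / conv_rate     # dollars per lead

print(f"CPC: ${cpc:.2f}")  # -> CPC: $0.67
print(f"CPL: ${cpl:.2f}")  # -> CPL: $8.33

# The decomposition makes the diagnosis mechanical: doubling the conversion
# rate halves the CPL without touching the ad side at all.
```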
https://medium.com/rho-1/the-big-six-framework-how-we-lowered-the-cost-per-lead-by-80-99ea93696518
['Josua Fagerholm']
2020-03-06 01:02:00.635000+00:00
['Digital Advertising', 'Advertising', 'Marketing', 'Digital Marketing', 'Facebook Marketing']
Title Big Six Framework Lowered Cost Per Lead 80Content Two Common Problems Advertising two common problem advertising found opposite side spectrum ignorance overwhelm Ignorance It’s uncommon advertiser unaware factor necessary successful advertising tends lead overoptimization double one area ad campaign neglecting perhaps important aspect Overwhelm hand advertiser aware data option tool strategy tactic exist tend become overwhelmed lead paralysis stop executing start spending much time analyzing Solution solve problem First need get clear picture entire advertising landscape factor involved Second need process identifying single largest bottleneck focus attention it’s needed we’re going show achieve objective Introducing Big Six Framework Based year failing succeeding learning we’ve created powerful framework allows u cut complexity consistently produce result client call Big Six Simple Framework Produces Staggering Results Variables Variables basically component effective advertising manipulated improve performance Depending audience product etc variable may important others Mechanism mechanism way variable manipulated Metrics Metrics used diagnosing tell u whenever there’s problem variable relevant benchmark target metric allow u easily identify largest bottleneck Note Like framework one designed simplify complex reality best result use judgment😊 right that’s enough theory Let’s see used framework one client systematically lower CPL 80 Applying Framework Planning Campaign📊 approached company health fitness space looking acquire new member working u tried generate lead sporadically mixed result wanted take structured approach First ran number Using previous experience client data created benchmark target metric CPM Click Rate Conversion Rate crucial step otherwise won’t able use metric diagnosing tool Hypothetical Target CPA Analysis Second planned prioritized action created backlog thing wanted test implement prioritization allows u make sure we’re always using time efficiently busywork Knowing number make easier prioritize instance knew would challenging improve Cost Per Click CPC Conversion Rate focused variable thought going biggest impact Offer created two identical campaign landing page two different offer launched first ad Going Live🚀 running ad day thing looking great fact CPL 40 far target Instead freaking rethinking entire campaign analyzed data found reason behind high CPL lowerthanexpected Conversion Rate since neither Offer converting well knew problem Landing Page analyzing screen recording Hotjar able pinpoint problem visitor reading content leaving page came contact form bottom redesigned form made shorter easier fill Ready Round Two🥊 new form thing moving right direction looking CPL 26 could see clear difference performance two offer still weren’t quite satisfied conversion rate decided go back test third offer New Targeting🎯 new offer turned highest converting three brought CPL 12 landing page offer working conversion rate wanted decided shift attention traffic side focusing improving conversion rate also paid attention advertising performance well looking breakdown demographic geography audience we’d noticed segment underperforming made change targeting Note ideally change one variable time want constantly look data potentially revise priority Final Touch🎨 new targeting positive huge impact CPL dropped 10 confident ad showing right people could focus increasing CTR ClickThroughRate easiest way capturing attention new better ad creatives Note creative often important 
variable thing focus first last since building advertising funnel scratch Summary new ad creatives finally managed decrease CPL le 7 represents 80 improvement started meant client getting 5x number lead original cost 80 Reduction CPL fastest straightest path great result come understanding factor without letting overwhelm systematic patient focus matter Good luckTags Digital Advertising Advertising Marketing Digital Marketing Facebook Marketing
4,215
beginnings
I can is shedding its onerous mass to become the starting point of a high spirit drop the ‘t erase for heart’s sake can’t is a nefarious tightrope masquerading as path meant for falling soles it has no business in the hallways of beginning decorated with hope.
https://medium.com/meri-shayari/beginnings-9f22c9ce04
['Rebeca Ansar']
2020-12-19 18:31:34.829000+00:00
['Life Lessons', 'Motivation', 'Poetry', 'Poet', 'Poem']
Title beginningsContent shedding onerous mass become starting point high spirit drop ‘t erase heart’s sake can’t nefarious tightrope masquerading path meant falling sol business hallway beginning decorated hopeTags Life Lessons Motivation Poetry Poet Poem
4,216
Build React Tabs Using Recoil, Styled Components, and Storybook.js
Build React Tabs Using Recoil, Styled Components, and Storybook.js A development guide to building React components with the latest technologies Image credit: Author In a previous article, we introduced Recoil, the state management library that’s been available since May 2020. For managing state, Recoil is simpler and more effective than Context API and Redux. We have been using it for our projects ever since. In another article, we introduced styled components, a JavaScript library that allows us to write CSS inside a JavaScript file. As a result, components can run independently, without relying on any external CSS files. Storybook is a tool for UI development. It makes development faster and easier by isolating components. This allows us to work on one component at a time. We use tabs as an example to illustrate the power of Recoil, styled components, and Storybook. As we are writing an interview series, it is worth noting that building a tabs component is also a frequently asked interview question. This article prepares you for both development work and interview challenges.
https://medium.com/better-programming/build-react-tabs-using-recoil-styled-components-and-storybook-js-4ad534cef007
['Jennifer Fu']
2020-12-30 00:17:18.166000+00:00
['Nodejs', 'Recoil', 'JavaScript', 'React', 'Programming']
Title Build React Tabs Using Recoil Styled Components StorybookjsContent Build React Tabs Using Recoil Styled Components Storybookjs development guide building React component latest technology Image credit Author previous article introduced Recoil state management library that’s available since May 2020 managing state Recoil simpler effective Context API Redux using project ever since another article introduced styled component JavaScript library allows u write CSS inside JavaScript file result component run independently without relying external CSS file Storybook tool UI development make development faster easier isolating component allows u work one component time use tab example illustrate power Recoil styled component storybook writing interview series creating tab component also frequently asked interview question article prepares development work interview challengesTags Nodejs Recoil JavaScript React Programming
4,217
Beyond DQN/A3C: A Survey in Advanced Reinforcement Learning
One of my favorite things about deep reinforcement learning is that, unlike supervised learning, it really, really doesn’t want to work. Throwing a neural net at a computer vision problem might get you 80% of the way there. Throwing a neural net at an RL problem will probably blow something up in front of your face — and it will blow up in a different way each time you try. A lot of the biggest challenges in RL revolve around two questions: how we interact with the environment effectively (e.g. exploration vs. exploitation, sample efficiency), and how we learn from experience effectively (e.g. long-term credit assignment, sparse reward signals). In this post, I want to explore a few recent directions in deep RL research that attempt to address these challenges, and do so with particularly elegant parallels to human cognition. In particular, I want to talk about: hierarchical RL, memory and predictive modeling, and combined model-free and model-based approaches. This post will begin with a quick review of two canonical deep RL algorithms — DQN and A3C — to provide us some intuitions to refer back to, and then jump into a deep dive on a few recent papers and breakthroughs in the categories described above. Review: DQN and A3C/A2C Disclaimer: I am assuming some basic familiarity with RL (and thus will not provide an in-depth tutorial on either of these algorithms), but even if you’re not 100% solid on how they work, the rest of the post should still be accessible. DeepMind’s DQN (deep Q-network) was one of the first breakthrough successes in applying deep learning to RL. It used a neural net to learn Q-functions for classic Atari games such as Pong and Breakout, allowing the model to go straight from raw pixel input to an action. Algorithmically, the DQN draws directly on classic Q-learning techniques. In Q-learning, the Q-value, or “quality”, of a state-action pair is estimated through iterative updates based on experience. In essence, with every action we take in a state, we can use the immediate reward we receive and a value estimate of our new state to update the value estimate of our original state-action pair:

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left( r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right)$$

In DQN, the corresponding objective (with the target network $\hat{Q}$ explained below) is:

$$L(\theta) = \mathbb{E}\left[ \left( r + \gamma \max_{a'} \hat{Q}(s', a') - Q(s, a; \theta) \right)^2 \right]$$

Training DQN consists of minimizing the MSE (mean squared error) of the Temporal Difference error, or TD-error, which is shown above. The two key strategies employed by DQN to adapt Q-learning for deep neural nets, which have since been successfully adopted by many subsequent deep RL efforts, were: experience replay, in which each state/action transition tuple (s, a, r, s’) is stored in a memory “replay” buffer and randomly sampled to train the network, allowing for re-use of training data and de-correlation of consecutive trajectory samples; and use of a separate target network — the Q_hat part of the above equation — to stabilize training, so the TD error isn’t being calculated from a constantly changing target from the training network, but rather from a stable target generated by a mostly fixed network. Subsequently, DeepMind’s A3C (Asynchronous Advantage Actor Critic) and OpenAI’s synchronous variant A2C popularized a very successful deep learning-based approach to actor-critic methods. Actor-critic methods combine policy gradient methods with a learned value function. With DQN, we only had the learned value function — the Q-function — and the “policy” we followed was simply taking the action that maximized the Q-value at each step. With A3C, as with the rest of actor-critic methods, we learn two different functions: the policy (or “actor”), and the value (the “critic”).
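To make the tabular Q-learning step concrete before moving on to how these two functions are trained, here is a minimal sketch of the update on a toy Q-table. All states, actions, and constants are invented for illustration; DQN replaces the table with a network, draws (s, a, r, s') tuples from the replay buffer, and computes the same target from the mostly fixed target network.

```python
import numpy as np

# Toy Q-table over 4 states and 2 actions; learning rate and discount
# are invented values, not taken from any paper.
Q = np.zeros((4, 2))
alpha, gamma = 0.1, 0.99

def q_learning_step(s, a, r, s_next, done):
    """One tabular Q-learning update: move Q(s, a) toward the TD target."""
    target = r + (0.0 if done else gamma * Q[s_next].max())
    td_error = target - Q[s, a]
    Q[s, a] += alpha * td_error
    return td_error

# A made-up transition (s=0, a=1, r=1.0, s'=2), just to show the call.
print(q_learning_step(0, 1, 1.0, 2, done=False))  # prints the TD-error
```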
The policy adjusts action probabilities based on the current estimated advantage of taking that action, and the value function updates that advantage based on the experience and rewards collected by following the policy:

$$d\theta \leftarrow d\theta + \nabla_{\theta'} \log \pi(a_i \mid s_i; \theta') \left( R - V(s_i; \theta_v) \right)$$

$$d\theta_v \leftarrow d\theta_v + \partial \left( R - V(s_i; \theta_v) \right)^2 / \partial \theta_v$$

As we can see from the updates above, the value network learns a baseline state value V(s_i;θ_v) with which we can compare our current reward estimate, R, to obtain the “advantage,” and the policy network adjusts the log probabilities of actions based on that advantage via the classic REINFORCE algorithm. The real contribution of A3C comes from its parallelized and asynchronous architecture: multiple actor-learners are dispatched to separate instantiations of the environment; they all interact with the environment and collect experience, and asynchronously push their gradient updates to a central “target network” (an idea borrowed from DQN). Later, OpenAI showed with A2C that asynchronicity does not actually contribute to performance, and in fact reduces sample efficiency. Unfortunately, details of these architectures are beyond the scope of this post, but if distributed agents excite you like they excite me, make sure you check out DeepMind’s IMPALA — a very useful design paradigm for scaling up learning. Both DQN and A3C/A2C can be powerful baseline agents, but they tend to suffer when faced with more complex tasks, severe partial observability, and/or long delays between actions and relevant reward signals. As a result, entire subfields of RL research have emerged to address these issues. Let’s get into some of the good stuff :). Hierarchical Reinforcement Learning Hierarchical RL is a class of reinforcement learning methods that learns from multiple layers of policy, each of which is responsible for control at a different level of temporal and behavioral abstraction. The lowest level of policy is responsible for outputting environment actions, leaving higher levels of policy free to operate over more abstract goals and longer timescales. Why is this so appealing? First and foremost, on the cognitive front, research has long suggested that human and animal behavior is underpinned by hierarchical structure. This is intuitive in everyday life: when I decide to cook a meal (which is basically never, by the way, but for the sake of argument let us assume I am a responsible human being), I am able to divide this task into simpler sub-tasks: chopping vegetables, boiling pasta, etc. without losing sight of my overarching goal of cooking a meal; I am even able to swap out sub-tasks, e.g. cooking rice instead of making pasta, to complete the same goal. This suggests an inherent hierarchy and compositionality in real-world tasks, in which simple, atomic actions can be strung together, repeated, and composed to complete complicated jobs. In recent years, research has even uncovered direct parallels between HRL components and specific neural structures within the prefrontal cortex. On the technical RL front, HRL is especially appealing because it helps address two of the biggest challenges I mentioned under our second question, i.e. how to learn from experience effectively: long-term credit assignment and sparse reward signals. In HRL, because low-level policies learn from intrinsic rewards based on tasks assigned by high-level policies, atomic tasks can still be learned in spite of sparse rewards. Furthermore, the temporal abstraction developed by high-level policies enables our model to handle credit assignment over temporally extended experiences. So how does it work?
There are a number of different ways to implement HRL. One recent paper from Google Brain takes a particularly clean and simple approach, and introduces some nice off-policy corrections for data-efficient training. Their model is called HIRO. μ_hi is the high-level policy, which outputs “goal states” for the low-level policy to reach. μ_lo, the low-level policy, outputs environment actions in an attempt to reach that goal state observation. Here’s the idea: we have 2 layers of policy. The high-level policy is trained to maximize the environment reward R. Every c timesteps, the high-level policy samples a new action, which is a “goal state” for the low-level policy to reach. The low-level policy is trained to take environment actions that would produce a state observation similar to the given goal state. Consider a simple example: say we are training a robot to stack colored cubes in a certain order. We only get a single reward of +1 in the end if the task is completed successfully, and a reward of 0 at all other time-steps. Intuitively, the high-level policy is responsible for coming up with the necessary sub-goals to complete: perhaps the first goal state it outputs would be “observe a red cube in front of you;” the next might be “observe a blue cube next to a red cube;” and then “observe a blue cube on top of a red cube.” The low-level policy bumbles around the environment until it comes up with the sequence of actions necessary to produce these observations, e.g. picking up the blue cube and moving it on top of the red one. HIRO uses a variant of the DDPG (Deep Deterministic Policy Gradient) training objective to train the low-level policy, whose intrinsic reward is parameterized as the distance between the current observation and the goal observation:

$$r(s_t, g_t, a_t, s_{t+1}) = -\lVert s_t + g_t - s_{t+1} \rVert_2$$

DDPG is another seminal deep RL algorithm that extended ideas from DQN to a continuous action space. It is another actor-critic method that uses policy gradients to optimize the policy, but instead of optimizing it with respect to the advantage as in A3C, it optimizes it with respect to the Q-values. Thus in HIRO, the DDPG-adjacent error to minimize becomes:

$$\left( r(s_t, g_t, a_t, s_{t+1}) + \gamma\, Q\big(s_{t+1}, g_t, \mu_{lo}(s_{t+1}, g_t)\big) - Q(s_t, g_t, a_t) \right)^2$$

Meanwhile, in order to use off-policy experience, the high-level policy is trained with off-policy corrections. Here’s the idea: to be sample efficient, we want to use some form of replay buffer, like DQN. However, old experience cannot be used directly to train the high-level policy. This is because the low-level policy is constantly learning and changing, so even if we condition on the same goals as our old experience, our low-level policy may now exhibit different actions/transitions. The off-policy correction proposed in HIRO is to retroactively change the goal seen in off-policy experience to maximize the likelihood of the observed action sequence. In other words, if the replay experience says the old agent took actions (x,y,z) to reach goal g, we find a goal g̃ that would make the current agent most likely to take those same actions (x,y,z), i.e. one that would maximize this log probability of the action sequence:

$$\log \mu_{lo}\left( a_{t:t+c-1} \mid s_{t:t+c-1}, \tilde{g}_{t:t+c-1} \right)$$

The high-level policy is then trained with a DDPG variant on those actions, the new goal, and the environment reward R. HIRO is certainly not the only approach to HRL. FeUdal networks were an earlier, related work that used a learned “goal” representation instead of the raw state observation.
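To make the off-policy correction concrete, here is a minimal sketch of HIRO-style goal relabeling: score a handful of candidate goals by how likely they make the stored low-level actions under the current policy, and keep the best one. The unit-variance Gaussian policy stand-in and all shapes are invented; a real implementation would query the trained low-level network, but the candidate set (the original goal, the observed state difference, and a few noisy perturbations of it) mirrors the spirit of the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim, c = 3, 2, 5  # invented sizes and high-level horizon

def low_level_mean(states, goals):
    """Stand-in for the current low-level policy's mean action; a real
    implementation would run the trained network here."""
    return np.tanh(states + goals)[:, :action_dim]

def log_prob_actions(states, actions, goal):
    """Log-likelihood of stored actions under a unit-variance Gaussian
    centered on the current policy's mean, conditioned on one fixed goal."""
    goals = np.tile(goal, (len(states), 1))
    mu = low_level_mean(states, goals)
    return -0.5 * np.sum((actions - mu) ** 2)

def relabel_goal(states, actions, original_goal):
    """Pick the candidate goal maximizing the action sequence's log-prob."""
    base = states[-1] - states[0]          # the observed state change
    candidates = [original_goal, base]
    candidates += [base + rng.normal(scale=0.5, size=state_dim) for _ in range(8)]
    scores = [log_prob_actions(states, actions, g) for g in candidates]
    return candidates[int(np.argmax(scores))]

# A made-up replay slice of c transitions, just to show the call.
states = rng.normal(size=(c, state_dim))
actions = rng.normal(size=(c, action_dim))
print(relabel_goal(states, actions, original_goal=np.zeros(state_dim)))
```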
Indeed, a lot of variation in research stems from different ways to learn useful low-level sub-policies; many papers have used auxiliary or “proxy” rewards, and others have experimented with pre-training or multi-task training. Unlike HIRO, many of these approaches require some degree of hand engineering or domain knowledge, which inherently limits generalizability. Another recently-explored option is to use population-based training (PBT), another algorithm I am a personal fan of. In essence, internal rewards are treated as additional hyperparameters, and PBT learns the optimal evolution of these hyperparameters across “evolving” populations during training. HRL is a very popular area of research right now, and is very easily combined with other techniques (check out this paper combining HRL with imitation learning). At its core, however, it’s just a really intuitive idea. It’s extensible, has neuroanatomical parallels, and addresses a bunch of fundamental problems in RL. Like the rest of good RL, though, it can be quite tricky to train. Memory and Attention Now let’s talk about some other ways to address the problems of long-term credit assignment and sparse reward signals. Specifically, let’s talk about the most obvious way: make the agent really good at remembering things. Memory in deep learning is always fun, because try as researchers might (and really, they do try), few architectures beat out a well-tuned LSTM. Human memory, however, does not work anything like an LSTM; when we go about tasks in daily life, we recall and attend to specific, context-dependent memories, and little else. When I go back home and drive to the local grocery store, I’m using memories from the last hundred times I’ve driven this route, not memories of how to get from Camden Town to Piccadilly Circus in London — even if those memories are fresh in recent experience. In this sense, our memory almost seems queryable by context: depending on where I am and what I’m doing, my brain knows which memories will be useful to me. In deep learning, this is the driving thesis behind external, key-value-based memory stores. This idea is not new; Neural Turing Machines, one of the first papers I ever read and still a favorite, augmented neural nets with a differentiable, external memory store accessible via vector-valued “read” and “write” heads to specific locations. We can easily imagine this being extended into RL, where at any given time-step, an agent is given both its environment observation and memories relevant to its current state. That’s exactly what the recent MERLIN architecture builds upon. MERLIN has 2 components: a memory-based predictor (MBP), and a policy network. The MBP is responsible for compressing observations into useful, low-dimensional “state variables” to store directly into a key-value memory matrix. It is also responsible for passing relevant memories to the policy, which uses those memories and the current state to output actions. This architecture may look a little complicated, but remember, the policy is just a recurrent net outputting actions, and the MBP is only really doing 3 things: compressing the observation into a useful state variable z_t to pass on to the policy, writing z_t into a memory matrix, and fetching other useful memories to pass on to the policy. The pipeline looks something like this: the input observation is first encoded and then fed through an MLP, the output of which is added to the prior distribution over the next state variable to produce the posterior distribution.
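As a quick aside before finishing the pipeline walkthrough: the vector-valued read and write heads mentioned above amount to content-based attention over a memory matrix, and the same mechanism underlies MERLIN's key-value store. A minimal sketch of that addressing scheme follows; the slot count, key size, and temperature are invented, and the real readers have considerably more machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
num_slots, key_dim = 16, 8        # invented memory size
memory = np.zeros((num_slots, key_dim))
write_ptr = 0

def write(z):
    """Append a state variable into the next memory slot (row)."""
    global write_ptr
    memory[write_ptr % num_slots] = z
    write_ptr += 1

def read(read_key, temperature=1.0):
    """Content-based read: softmax over slot/key similarities, then a
    weighted sum of slots. The read key would come from the LSTM state."""
    scores = memory @ read_key / temperature
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory

for _ in range(5):
    write(rng.normal(size=key_dim))
print(read(rng.normal(size=key_dim)).shape)  # -> (8,)
```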
This posterior distribution, which is conditioned on all the previous actions/observations as well as this new observation, is then sampled to produce a state variable z_t. Next, z_t gets fed into the MBP’s LSTM, whose output is used to update the prior and to read from/write to memory via vector-valued “read keys” and “write keys” — both of which are produced as a linear function of the LSTM’s hidden state. Finally, downstream, the policy net leverages both z_t and read outputs from memory to produce an action. A key detail is that in order to ensure the state representations are useful, the MBP is also trained to predict the reward from the current state z_t, so learned representations are relevant to the task at hand. Training of MERLIN is a bit complicated; since the MBP is intended to serve as a useful “world model,” an intractable objective, it is trained to optimize the variational lower bound (VLB) loss instead. (If you are unfamiliar with VLB, I found this post quite useful, but you really don’t need it to understand MERLIN). There are two components to this VLB loss: The KL-divergence between the prior and posterior probability distributions over this next state variable, where the posterior is additionally conditioned on the new observation. Minimizing this KL ensures that this new state variable is consistent with previous observations/actions. The reconstruction loss of the state variable, in which we attempt to reproduce the input observation (e.g. the image, previous action, etc.) and predict the reward based on the state variable. If this loss is small, we have found a state variable that is an accurate representation of the observation, and useful for producing actions that give a high reward. Here is our final VLB loss, with the first term being reconstruction and the second being the KL divergence:

$$\mathcal{L}_{VLB} = \mathbb{E}_{q}\left[ \log p(x_t, r_t \mid z_t) \right] - D_{KL}\left( q(z_t \mid x_t, \cdot) \,\Vert\, p(z_t \mid \cdot) \right)$$

The policy network’s loss is a slightly fancier version of the policy gradient loss we discussed above with A3C; it uses an algorithm called Generalized Advantage Estimation (GAE), the details of which are beyond the scope of this post (but can be found in section 4.4 of the MERLIN paper’s appendix), but it looks similar to the standard policy gradient update shown below:

$$\Delta\theta \propto \nabla_\theta \log \pi(a_t \mid s_t; \theta)\, \hat{A}_t$$

Once trained, MERLIN should be able to predictively model the world through state representations and memory, and its policy should be able to leverage those predictions to take useful actions. MERLIN is not the only deep RL work to use external memory stores — all the way back in 2016, researchers were already applying this idea in an MQN, or Memory Q-Network, to solve mazes in Minecraft — but this concept of using memory as a predictive model of the world has some unique neuroscientific traction. Another Medium post has done a great job of exploring this idea, so I won’t repeat it all here, but the key argument is that our brain likely does not function as an “input-output” machine, like most neural nets are interpreted. Instead, it functions as a prediction engine, and our perception of the world is actually just the brain’s best guesses about the causes of our sensory inputs. Neuroscientist Anil Seth sums up this 19th century theory by Hermann von Helmholtz nicely: The brain is locked inside a bony skull. All it receives are ambiguous and noisy sensory signals that are only indirectly related to objects in the world.
Perception must therefore be a process of inference, in which indeterminate sensory signals are combined with prior expectations or ‘beliefs’ about the way the world is, to form the brain’s optimal hypotheses of the causes of these sensory signals. MERLIN’s memory-based predictor aims to fulfill this very purpose of predictive inference. It encodes observations and combines them with internal priors to generate a “state variable” that captures some representation — or cause — of the input, and stores these states in long-term memory so the agent can act upon them later. Agents, World Models, and Imagination Interestingly, the concept of the brain as a predictive engine actually leads us back to the first RL question we want to explore: how do we learn from the environment effectively? After all, if we’re not going straight from observations to actions, how should we best interact with and learn from the world around us? Traditionally in RL, we can either do model-free learning or model-based learning. In model-free RL, we learn to map raw environment observations directly to values or actions. In model-based RL, we first learn a transition model of the environment based on raw observations, and then use that model to choose actions. The outside circle depicts model-based RL; the “direct RL” loop depicts model-free RL. Being able to plan based on a model is much more sample-efficient than having to work from pure trial-and-error as in model-free learning. However, learning a good model is often very difficult, and compounding errors from model imperfections generally leads to poor agent performance. For this reason, a lot of early successes in deep RL (e.g. DQN and A3C) were model-free. That said, the lines between model-free and model-based RL have been blurred as early as the Dyna algorithm in 1990, in which a learned model is used to generate simulated experience to help train the model-free policy. Now in 2018, a new “Imagination-augmented Agents” algorithm has been introduced that directly combines the two approaches. In Imagination-Augmented Agents (I2A), the final policy is a function of both a model-free component and a model-based component. The model-based component is referred to as the agent’s “imagination” of the world, and consists of imagined trajectories rolled out by the agent’s internal, learned model. The key, however, is that the model-based component also has an encoder at the end that aggregates the imagined trajectories and interprets them, enabling the agent to learn to ignore its imagination when necessary. In this sense, if the agent discovers its internal model is projecting useless and inaccurate trajectories, it can learn to ignore the model and proceed with its model-free arm. The figure above describes how I2As work. An observation is first passed to both the model-free and model-based components. In the model-based component, n different trajectories are “imagined” based on the n possible actions that could be taken in the current state. These trajectories are obtained by feeding the action and state into the internal environment model, transitioning to a new imagined state, taking the next action in that imagined state, and so on. A distilled imagination policy (which is kept similar to the final policy via cross-entropy loss) chooses the next actions. After some fixed k steps, these trajectories are encoded and aggregated together, and fed into the policy network along with the output of the model-free component.
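As a rough sketch of the rollout-and-aggregate step just described: one short trajectory per initial action is unrolled with a learned model, each rollout is encoded, and the aggregate is concatenated with the model-free features. Every function below is an invented stand-in (the real components are learned networks), so treat this as shape bookkeeping rather than an implementation of I2A.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, state_dim, k = 3, 4, 2  # invented: action count, state size, depth

def env_model(state, action):
    """Stand-in for the learned transition model: next state and reward."""
    nxt = np.roll(state, action) + 0.1    # arbitrary toy dynamics
    return nxt, float(nxt.sum())

def distilled_policy(state):
    """Stand-in for the distilled imagination policy."""
    return int(np.argmax(state)) % n_actions

def imagine(state):
    """One k-step rollout per initial action; each rollout is 'encoded'
    (here: flattened) and the encodings are aggregated by concatenation."""
    encodings = []
    for a0 in range(n_actions):
        s, a, traj = state, a0, []
        for _ in range(k):
            s, r = env_model(s, a)
            traj.extend([*s, r])
            a = distilled_policy(s)
        encodings.append(np.array(traj))
    return np.concatenate(encodings)

state = rng.normal(size=state_dim)
model_free_features = np.tanh(state)   # stand-in for the model-free arm
policy_input = np.concatenate([imagine(state), model_free_features])
print(policy_input.shape)              # -> (34,)
```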
Critically, the encoding allows the policy to interpret the imagined trajectories in whatever way is most useful — ignoring them if appropriate, extracting non-reward-related information when available, and so on. The policy is trained via a standard policy gradient loss with advantage, similar to A3C and MERLIN, so this should look familiar by now. Additionally, a policy distillation loss is added between the actual policy and the internal model’s imagined policy, to ensure that the imagined policy chooses actions similar to what the current agent would. I2A outperforms a number of baselines, including the MCTS (Monte Carlo Tree Search) planning algorithm. It is also able to perform well in experiments where its model-based component is intentionally restricted to make poor predictions, demonstrating that it is able to trade off use of the model in favor of model-free methods when necessary. Interestingly, the I2A with a poor internal model actually slightly outperformed the I2A with a good model in the end — the authors chalked this up to either random initialization or the noisy internal model providing some form of regularization, but this is definitely an area for further investigation. Regardless, the I2A is fascinating because it is, in some ways, also exactly how we go about acting in the world. We’re always planning and projecting into the future based on some mental model of the environment that we’re in, but we also tend to be aware that our mental models could be entirely inaccurate — especially when we’re in new environments or situations we’ve never seen. In that case, we proceed by trial-and-error, just like model-free methods, but we also use this new experience to update our internal mental model. There’s a lot of work going on right now in combining model-based and model-free methods. Berkeley AI came out with a Temporal Difference Model (TDM) which also has a very interesting premise. The idea is to let an agent set more temporally abstracted goals, i.e. “be in X state in k time steps,” and learn those long-term model transitions while maximizing the reward collected within each k steps. This gives us a smooth transition between model-free exploration on actions and model-based planning over high-level goals — which, if you think about it, sort of brings us all the way back to the intuitions in hierarchical RL. All these research papers focus on the same goal: achieving the same (or superior) performance as model-free methods, with the same sample efficiency that model-based methods can provide. Conclusion Deep RL models are really hard to train, period. But thanks to that difficulty, we have been forced to come up with an incredible range of strategies, approaches, and algorithms to harness the power of deep learning for classical (and some non-classical) control problems. This post has been a very, very incomplete survey of deep RL — there is a lot of research out there that I haven’t covered, and more yet that I’m not even aware of. However, hopefully this sprinkling of research directions in memory, hierarchy, and imagination offers a glimpse into how we can begin addressing some of the recurring challenges and bottlenecks in the field. If you think I’m missing something big, I probably am — let me know what it is in the comments. :) Meanwhile, happy RL hacking!
https://towardsdatascience.com/advanced-reinforcement-learning-6d769f529eb3
['Joyce Xu']
2018-10-01 21:13:53.411000+00:00
['Robotics', 'Artificial Intelligence', 'Machine Learning', 'Reinforcement Learning', 'Deep Learning']
Title Beyond DQNA3C Survey Advanced Reinforcement LearningContent One favorite thing deep reinforcement learning unlike supervised learning really really doesn’t want work Throwing neural net computer vision problem might get 80 way Throwing neural net RL problem probably blow something front face — blow different way time try lot biggest challenge RL revolve around two question interact environment effectively eg exploration v exploitation sample efficiency learn experience effectively eg longterm credit assignment sparse reward signal post want explore recent direction deep RL research attempt address challenge particularly elegant parallel human cognition particular want talk hierarchical RL memory predictive modeling combined modelfree modelbased approach post begin quick review two canonical deep RL algorithm — DQN A3C — provide u intuition refer back jump deep dive recent paper breakthrough category described Review DQN A3CA2C Disclaimer assuming basic familiarity RL thus provide indepth tutorial either algorithm even you’re 100 solid work rest post still accessible DeepMind’s DQN deep Qnetwork one first breakthrough success applying deep learning RL used neural net learn Qfunctions classic Atari game Pong Breakout allowing model go straight raw pixel input action Algorithmically DQN draw directly classic Qlearning technique Qlearning Qvalue “quality” stateaction pair estimated iterative update based experience essence every action take state use immediate reward receive value estimate new state update value estimate original stateaction pair Training DQN consists minimizing MSE mean squared error Temporal Difference error TDerror shown two key strategy employed DQN adapt Qlearning deep neural net since successfully adopted many subsequent deep RL effort experience replay stateaction transition tuple r s’ stored memory “replay” buffer randomly sampled train network allowing reuse training data decorrelation consecutive trajectory sample use separate target network — Qhat part equation — stabilize training TD error isn’t calculated constantly changing target training network rather stable target generated mostly fixed network Subsequently DeepMind’s A3C Asynchronous Advantage Actor Critic OpenAI’s synchronous variant A2C popularized successful deep learningbased approach actorcritic method Actorcritic method combine policy gradient method learned value function DQN learned value function — Qfunction — “policy” followed simply taking action maximized Qvalue step A3C rest actorcritic method learn two different function policy “actor” value “critic” policy adjusts action probability based current estimated advantage taking action value function update advantage based experience reward collected following policy see update value network learns baseline state value Vsiθv compare current reward estimate R obtain “advantage” policy network adjusts log probability action based advantage via classic REINFORCE algorithm real contribution A3C come parallelized asynchronous architecture multiple actorlearners dispatched separate instantiation environment interact environment collect experience asynchronously push gradient update central “target network” idea borrowed DQN Later OpenAI showed A2C asynchronicity actually contribute performance fact reduces sample efficiency Unfortunately detail architecture beyond scope post distributed agent excite like excite make sure check DeepMind’s IMPALA — useful design paradigm scaling learning DQN A3CA2C powerful baseline agent tend suffer faced complex 
task severe partial observability andor long delay action relevant reward signal result entire subfields RL research emerged address issue Let’s get good stuff Hierarchical Reinforcement Learning Hierarchical RL class reinforcement learning method learns multiple layer policy responsible control different level temporal behavioral abstraction lowest level policy responsible outputting environment action leaving higher level policy free operate abstract goal longer timescales appealing First foremost cognitive front research long suggested human animal behavior underpinned hierarchical structure intuitive everyday life decide cook meal basically never way sake argument let u assume responsible human able divide task simpler subtasks chopping vegetable boiling pasta etc without losing sight overarching goal cooking meal even able swap subtasks eg cooking rice instead making pasta complete goal suggests inherent hierarchy compositionality realworld task simple atomic action strung together repeated composed complete complicated job recent year research even uncovered direct parallel HRL component specific neural structure within prefrontal cortex technical RL front HRL especially appealing help address two biggest challenge mentioned second question ie learn experience effectively longterm credit assignment sparse reward signal HRL lowlevel policy learn intrinsic reward based task assigned highlevel policy atomic task still learned spite sparse reward Furthermore temporal abstraction developed highlevel policy enables model handle credit assignment temporally extended experience work number different way implement HRL One recent paper Google Brain take particularly clean simple approach introduces nice offpolicy correction dataefficient training model called HIRO μhi highlevel policy output “goal states” lowlevel policy reach μlo lowlevel policy output environment action attempt reach goal state observation Here’s idea 2 layer policy highlevel policy trained maximize environment reward R Every c timesteps highlevel policy sample new action “goal state” lowlevel policy reach lowlevel policy trained take environment action would produce state observation similar given goal state Consider simple example say training robot stack colored cube certain order get single reward 1 end task completed successfully reward 0 timesteps Intuitively highlevel policy responsible coming necessary subgoals complete perhaps first goal state output would “observe red cube front you” next might “observe blue cube next red cube” “observe blue cube top red cube” lowlevel policy bumbles around environment come sequence action necessary produce observation eg picking blue cube moving top red one HIRO us variant DDPG Deep Deterministic Policy Gradient training objective train lowlevel policy whose intrinsic reward parameterized distance current observation goal observation DDPG another seminal deep RL algorithm extended idea DQN continuous action space another actorcritic method us policy gradient optimize policy instead optimizing respect advantage A3C optimizes respect Qvalues Thus HIRO DDPGadjacent error minimize becomes Meanwhile order use offpolicy experience highlevel policy trained offpolicy correction Here’s idea sample efficient want use form replay buffer like DQN However old experience cannot used directly train highlevel policy lowlevel policy constantly learning changing even condition goal old experience lowlevel policy may exhibit different actionstransitions offpolicy correction proposed HIRO 
retroactively change goal seen offpolicy experience maximize likelihood observed action sequence word replay experience say old agent took action xyz reach goal g find goal g̃ would make current agent likely take action xyz ie one would maximize log probability action sequence highlevel policy trained DDPG variant action new goal environment reward R HIRO certainly approach HRL FeUdal network earlier related work used learned “goal” representation instead raw state observation Indeed lot variation research stem different way learn useful lowlevel subpolicies many paper used auxiliary “proxy” reward others experimented pretraining multitask training Unlike HIRO many approach require degree hand engineering domain knowledge inherently limit generalizability Another recentlyexplored option use populationbased training PBT another algorithm personal fan essence internal reward treated additional hyperparameters PBT learns optimal evolution hyperparameters across “evolving” population training HRL popular area research right easily interpolatable technique check paper combining HRL imitation learning core however it’s really intuitive idea It’s extensible neuroanatomical parallel address bunch fundamental problem RL Like rest good RL though quite tricky train Memory Attention let’s talk way address problem longterm credit assignment sparse reward signal Specifically let’s talk obvious way make agent really good remembering thing Memory deep learning always fun try researcher might really try architecture beat welltuned LSTM Human memory however work anything like LSTM go task daily life recall attend specific contextdependent memory little else go back home drive local grocery store I’m using memory last hundred time I’ve driven route memory get Camden Town Piccadilly Circus London — even memory fresh recent experience sense memory almost seems queryable context depending I’m brain know memory useful deep learning driving thesis behind external keyvaluebased memory store idea new Neural Turing Machines one first favorite paper ever read augmented neural net differentiable external memory store accessible via vectorvalued “read” “write” head specific location easily imagine extended RL given timestep agent given environment observation memory relevant current state That’s exactly recent MERLIN architecture extends upon MERLIN 2 component memorybased predictor MBP policy network MBP responsible compressing observation useful lowdimensional “state variables” store directly keyvalue memory matrix also responsible passing relevant memory policy us memory current state output action architecture may look little complicated remember policy recurrent net outputting action MBP really 3 thing compressing observation useful state variable zt pas policy writing zt memory matrix fetching useful memory pas policy pipeline look something like input observation first encoded fed MLP output added prior distribution next state variable produce posterior distribution posterior distribution conditioned previous actionsobservations well new observation sampled produce state variable zt Next zt get fed MBP’s LSTM whose output used update prior read towrite memory via vectorvalued “read keys” “write keys” — produced linear function LSTM’s hidden state Finally downstream policy net leverage zt read output memory produce action key detail order ensure state representation useful MBP also trained predict reward current state zt learned representation relevant task hand Training MERLIN bit complicated since MBP intended 
serve useful “world model” intractable objective trained optimize variational lower bound VLB loss instead unfamiliar VLB found post quite useful really don’t need understand MERLIN two component VLB loss KLdivergence prior posterior probability distribution next state variable posterior additionally conditioned new observation Minimizing KL ensures new state variable consistent previous observationsactions reconstruction loss state variable attempt reproduce input observation eg image previous action etc predict reward based state variable loss small found state variable accurate representation observation useful producing action give high reward final VLB loss first term reconstruction second KL divergence policy network’s loss slightly fancier version policy gradient loss discussed A3C us algorithm called Generalized Advantage Estimation Algorithm detail beyond scope post found section 44 MERLIN paper’s appendix look similar standard policy gradient update shown trained MERLIN able predictively model world state representation memory policy able leverage prediction take useful action MERLIN deep RL work use external memory store — way back 2016 researcher already applying idea MQN Memory QNetwork solve maze Minecraft — concept using memory predictive model world unique neuroscientific traction Another Medium post done great job exploring idea won’t repeat key argument brain likely function “inputoutput” machine like neural net interpreted Instead function prediction engine perception world actually brain’s best guess cause sensory input Neuroscientist Amil Seth sum 19th century theory Hermann von Helmholtz nicely brain locked inside bony skull receives ambiguous noisy sensory signal indirectly related object world Perception must therefore process inference indeterminate sensory signal combined prior expectation ‘beliefs’ way world form brain’s optimal hypothesis cause sensory signal MERLIN’s memorybased predictor aim fulfill purpose predictive inference encodes observation combine internal prior generate “state variable” capture representation — cause — input store state longterm memory agent act upon later Agents World Models Imagination Interestingly concept brain predictive engine actually lead u back first RL question want explore learn environment effectively we’re going straight observation action best interact learn world around u Traditionally RL either modelfree learning modelbased learning modelfree RL learn map raw environment observation directly value action modelbased RL first learn transition model environment based raw observation use model choose action outside circle depicts modelbased RL “direct RL” loop depicts modelfree RL able plan based model much sampleefficient work pure trialanderror modelfree learning However learning good model often difficult compounding error model imperfection generally lead poor agent performance reason lot early success deep RL eg DQN A3C modelfree said line modelfree modelbased RL blurred early Dyna algorithm 1990 learned model used generate simulated experience help train modelfree policy 2018 new “Imaginationaugmented Agents” algorithm introduced directly combine two approach ImaginationAugmented Agents I2A final policy function modelfree component modelbased component modelbased component referred agent’s “imagination” world consists imagined trajectory rolled agent’s internal learned model key however modelbased component also encoder end aggregate imagined trajectory interprets enabling agent learn ignore imagination necessary 
sense agent discovers internal model projecting useless inaccurate trajectory learn ignore model proceed modelfree arm figure describes I2A’s work observation first passed modelfree modelbased component modelbased component n different trajectory “imagined” based n possible action could taken current state trajectory obtained feeding action state internal environment model transitioning new imagined state taking maximum next action distilled imagination policy kept similar final policy via crossentropy loss chooses next action fixed k step trajectory encoded aggregated together fed policy network along output modelfree component Critically encoding allows policy interpret imagined trajectory whatever way useful — ignoring appropriate extracting nonrewardrelated information available policy trained via standard policy gradient loss advantage similar A3C MERLIN look familiar Additionally policy distillation loss added actual policy internal model’s imagined policy ensure imagined policy chooses action similar current agent would I2A outperforms number baseline including MCTS Monte Carlo Tree Search planning algorithm also able perform well experiment modelbased component intentionally restricted make poor prediction demonstrating able tradeoff use model favor modelfree method necessary Interestingly I2A poor internal model actually slightly outperformed I2A good model end — author chalked either random initialization noisy internal model providing form regularization end definitely area investigation Regardless I2A fascinating way also exactly go acting world We’re always planning projecting future based mental model environment we’re also tend aware mental model could entirely inaccurate — especially we’re new environment situation we’ve never seen case proceed trialanderror like modelfree method also use new experience update internal mental model There’s lot work going right combining modelbased modelfree method Berkeley AI came Temporal Difference Model TDM also interesting premise idea let agent set temporally abstracted goal ie “be X state k time steps” learn longterm model transition maximizing reward collected within k step give u smooth transition modelfree exploration action modelbased planning highlevel goal — think sort brings u way back intuition hierarchical RL research paper focus goal achieving superior performance modelfree method sample efficiency modelbased method provide Conclusion Deep RL model really hard train period thanks difficulty forced come incredible range strategy approach algorithm harness power deep learning classical nonclassical control problem post incomplete survey deep RL — lot research haven’t covered yet I’m even aware However hopefully sprinkling research direction memory hierarchy imagination offer glimpse begin addressing recurring challenge bottleneck field think I’m missing something big probably — let know comment Meanwhile happy RL hackingTags Robotics Artificial Intelligence Machine Learning Reinforcement Learning Deep Learning
4,218
All I Need to Know About User-Centered Design I Learned One Summer at Apple
All I Need to Know About User-Centered Design I Learned One Summer at Apple I loved the old rainbow Apple. Looked great on a t-shirt! In 1991, I was in grad school at Carnegie Mellon and I got a summer internship at Apple Computer. Little did I know this would be a life-changing summer. My internship was with Apple’s fledgling User-Aided Design team, a team started by a small segment of people who worked in the Instructional Products group who were responsible for writing Apple’s user manuals. Back then Apple shipped products with beautiful 4-color manuals. I wish I had a few of those today — they would be on my coffee table. Personal computing was an emerging technology in 1991 and most people were pretty intimidated by the thought of having a computer on their desk. So the manuals had to explain everything from unpacking the box (which is how the Set Up Poster originated) to how to turn on the computer. Even the desktop metaphor was new — so usability testing was critical to product success. Did the illustrations make sense? Was the packaging organized so the user could figure out what to do first, what to do next? Did the first steps in an online tutorial (Macromedia!) make sense to even the most novice user? Only the users could really tell us how the hardware, software, packaging and documentation came together to form an integrated product. That summer I learned the think-aloud protocol. I learned how to give users instructions about usability research and how to have them sign an NDA. I learned to run elaborate soundboards and recording equipment in a test lab. I learned to interact with users through one-way glass and find ways to make them comfortable even when the test protocols could be pretty intimidating. I learned how to write tasks from a user’s point of view and watch users without saying much at all. I learned to ask “what would you expect” and “what do you think” more than I could ever imagine! I learned to analyze users’ comments, questions and actions. I learned to edit video on very difficult video editing equipment to create those few critical highlights for the stakeholders and management team. I was with the group who were organizing the Usability Professionals Association’s (UPA) first annual conference in Orem, Utah so I learned the value of meeting with other people who were dedicated to improving usability. I also was fortunate to learn what went into organizing a professional conference. Those were exciting times. I was hired full-time at Apple in January 1992 and worked there throughout the 90s. Those were challenging years at Apple but the User-Aided Design group grew and usability was a main focus of product design. I saw some products succeed and many more fail. But the “failures” always led to better products, experiences and designs. So Apple’s bad years in the 90s led to some very (very!) good years down the line. And a lot of the user research informed product designs. I have spent almost 30 years in the User-Centered Design field. Somewhere along the way, we became UX Researchers, UX Strategists, UXers. The UPA became the UXPA. I have worked in many different industries, on marketing teams, engineering teams, product design teams and in innovation and ideation labs. It has been a great career that I continue to love. And some things have changed. There are no labs with fancy equipment and one-way mirrors anymore. The video editing takes hours and not weeks.
Recruiting users is much easier and incentives cost much less. But most of what I learned that summer in 1991 at Apple I still use every day. Or at least every week. The basics of User-Centered Design have stood the test of time — watch users, don’t just ask; define primary users and tasks and support them in design; iterative testing is the most effective, so test early and test often; and make key product design decisions in the lab (virtual now) and not the conference room. The summer of 1991 changed my life. And a few products changed too thanks to the skills I learned.
https://medium.com/marketade/all-i-need-to-know-about-user-centered-design-i-learned-one-summer-at-apple-89df72bc7ae
['Kristy Knabe']
2020-05-14 16:29:24.196000+00:00
['User Experience', 'UX', 'Apple', 'User Research']
Title Need Know UserCentered Design Learned One Summer AppleContent Need Know UserCentered Design Learned One Summer Apple Kristy Knabe Follow May 14 · 3 min read loved old rainbow Apple Looked great tshirt 1991 grad school Carnegie Mellon got summer internship Apple Computer Little know would lifechanging summer internship Apple’s fledgling UserAided Design team team started small segment people worked Instructional Products group responsible writing Apple’s user manual Back Apple shipped product beautiful 4color manual wish today — would coffee table Personal computing emerging technology 1991 people pretty intimidated thought computer desk manual explain everything unpacking box Set Poster originated turn computer Even desktop metaphor new — usability testing critical product success illustration make sense packaging organized user could figure first next first step online tutorial Macromedia make sense even novice user user could really tell u hardware software packaging documentation came together form integrated product summer learned thinkaloud protocol learned give user instruction usability research sign NDA learned run elaborate soundboard recording equipment test lab learned interact user oneway glass find way make comfortable even test protocol could pretty intimidating learned write task user’s point view watch user without saying much learned ask “what would expect” “what think” could ever imagine learned analyze users’ comment question action learned edit video difficult video editing equipment create critical highlight stakeholder management team group organizing Usability Professionals Association’s UPA first annual conference Orem Utah learned value meeting people dedicated improving usability also fortunate learn went organizing professional conference exciting time hired fulltime Apple January 1992 worked throughout 90 challenging year Apple UserAided Design group grew usability main focus product design saw product succeed many fail “failures” always led better product experience design Apple’s bad year 90 led good year line lot user research informed product design spent almost 30 year UserCentered Design field Somewhere along way became UX Researchers UX Strategists UXers UPA became UXPA worked many different industry marketing team engineering team product design team innovation ideation lab great career continue love thing changed lab fancy equipment oneway mirror anymore video editing take hour week Recruiting user much easier incentive cost much le learned summer 1991 Apple still use every day least every week basic UserCentered Design stood test time — watch user don’t ask define primary user task support design iterative testing effective test early test often decide key product design decision lab virtual conference room summer 1991 changed life product changed thanks skill learnedTags User Experience UX Apple User Research
4,219
How to Avoid Losing Your Foreign Language Skills
Speak to yourself Let’s start with the most cringy tip of mine: Have (overly) dramatic monologues with yourself, if your surroundings allow it. Living alone obviously makes it easier, but you could still do it when your flatmates are out or when you take a walk through a nearby forest or park. I am absolutely convinced that speaking a foreign language with yourself makes you more self-confident and improves your pronunciation. Let it all out — the long Italian vowels, the French filler words, the super guttural English sounds, the hard German consonants… Find those binge-worthy podcasts On my daily one-hour walk that I’ve introduced due to the lack of corona-caused alternative activities, I usually listen to podcasts in foreign languages — mostly in French. I’ve discovered that it’s quite hard to find podcasts in foreign languages on Spotify (the service I use) because it will mostly suggest German podcasts. A good idea I’ve discovered is to search for interviews with personalities I’m interested in (politicians, singers, authors). If I like the podcast host who has published that interview, I might like the other episodes of that podcast. You can also search for topics like “racism”, “feminism”, “climate change” in your target languages and hope to find corresponding productions. Watch series, movies, and YouTube videos This is probably the most obvious one and one that most of us already do with utmost pleasure. One tip: Using a VPN allows you to watch e.g. Netflix content that is normally not available in your country. Why not make it a habit to watch at least one movie per week in your target language? Speak to friends and participate in language gatherings If you have friends who speak your target languages, feel blessed. If not, there are still plenty of ways to get that language practice going. In pre-corona times, I’d have suggested that you look for events like “polyglot meetups” or “language cafes” on Facebook or simply via Google. In our current times where meeting strangers is not exactly recommended, you could download the app Tandem or find a virtual tandem partner online. Language conferences (such as “Women in Language” which I wrote about here) usually also offer language exchange sessions. Change the language of your operating system My tablet is in French, my phone used to be in Italian, my laptop is in English. Why not add those little technical words and phrases to your subconscious, so that you’ll sometimes catch yourself thinking in that language? “Ah, on est déjà lundi !” — Ah, it’s already Monday! You can of course do the same for apps like Facebook or Instagram. Photo by Kari Shea on Unsplash Watch fitness videos in another language Every morning after leaving the cozy comfort of my warm bed, I make myself do yoga. I used to only watch videos from “Yoga with Adriene” (gotta love her), but I started thinking: “Why not find some French yoga videos?” There are obviously fewer French-speaking yoga teachers with YouTube channels out there, but I really enjoy having my morning routine in French. To my surprise, there were some words like “tailleur” (cross-legged seat) I had never used in my life. A few YouTube yoga recommendations: Mady Morrison (German), ELLE (French), Yoga Fire by Jo (French) and MichaelaYoga (Italian, German and English). Teach others (for free) Do you have friends interested in learning the language you already speak? That may be an extremely useful opportunity for you to practice the language.
You probably also know the saying: You only know something when you’re able to teach it. This might not be entirely accurate for languages because most people have difficulty explaining grammar rules or think ‘What the heck is a possessive pronoun?!’, but are still perfectly apt users of that language. Nevertheless, explaining something to others is a win-win situation: Another person is happy they received help and you feel good for putting your skills to use. Obviously, depending on the level at which you speak the language, you can even monetize your language teaching and e.g. become a community teacher on the website “italki” where you can register without an official diploma. That’s what a Ukrainian study friend of mine is doing. Write your diary in that language Producing journal entries makes you actively use that language. I invite you to read this article of mine about journaling in your target language(s). Cook and bake in your target language Be it a recipe video or a written recipe: Those will fulfill the purpose of filling your stomach (always nice) and feeding you that kitchen vocab’. I always get slightly nostalgic when looking at French recipes: The units of measurement and some ingredients remind me of the time when I lived in France and they were the most normal things in the world. For instance, “Maïzena” is the word the French use for cornstarch — even though it’s a brand name. Devour books and news articles You might be willing to read books, but have a hard time finding them. Check out local libraries and, in the worst case, buy ebooks that you can download wherever you live. If you have local friends that speak your target language, asking them if they own books in that language might also be worth a try. News articles, on the other hand, can also teach you about current issues in a country where your target language is spoken so that you’d stay up to date about local developments. For French, that would obviously mean that apart from a French newspaper, I could read Belgian, Congolese or Martinican news (to name just very few). Read aloud Just like speaking to yourself, reading texts in another language out loud makes you practice the pronunciation in safe surroundings at home (and what do we love more than staying at home, right? #covid19). What I like to do when I come across a word I’m unsure how to pronounce: I google “[word] pronunciation [language]”, e.g. “näringsfång pronunciation Swedish”. There are numerous websites where people have recorded words in their mother tongue so you can hear an authentic version — and otherwise, most dictionaries will have audio versions of their available words. Listen to songs, sing along & dance (!) I recommend that you create playlists for each of the languages you learn or have learned. Whenever you feel like listening to one of them, you can directly select the French or the Portuguese playlist and dance around while washing your dishes, cooking, or rhythmically swinging your tea towel to the beat (the perks of having no flatmates who could eye you with great amusement).
https://medium.com/language-lab/how-to-avoid-losing-your-foreign-language-skills-64df9c6c6155
['Annika Wappelhorst']
2020-11-28 10:05:22.128000+00:00
['Motivation', 'Self Learning', 'Language', 'Language Learning', 'How To']
Title Avoid Losing Foreign Language SkillsContent Speak Let’s start cringy tip mine overly dramatic monologue surrounding allows Living alone obviously make easier could still flatmate take walk nearby forest park absolutely convinced speaking foreign language make selfconfident improves pronunciation Let —the long Italian vowel French filler word super guttural English sound hard German consonants… Find bingeworthy podcasts daily onehour walk I’ve introduced due lack coronacaused alternative activity usually listen podcasts foreign language — mostly French I’ve discovered it’s quite hard find podcasts foreign language Spotify service use mostly suggest German podcasts good idea I’ve discovered search interview personality I’m interested politician singer author like podcast host published interview might like episode podcast also search topic like “racism” “feminism” “climate change” target language hope find corresponding production Watch series movie Youtube video probably obvious one one u already utmost pleasure One tip Using VPN allows watch eg Netflix content normally available country make habit watch least one movie per week target language Speak friend participate language gathering friend speak target language feel blessed still plenty way get language practice going precorona time I’d suggested look event like “polyglot meetups” “language cafes” Facebook simply via Google current time meeting stranger exactly recommended could download app Tandem find virtual tandem partner online Language conference “Women Language” wrote usually also offer language exchange session Change language operating system tablet French phone used Italian laptop English add little technical word phrase subconsciousness you’ll sometimes catch thinking language “Ah est déjà lundi ” — Ah it’s already Monday course apps like Facebook Instagram Photo Kari Shea Unsplash Watch fitness video another language Every morning leaving cozy comfort warm bed make yoga used watch video “Yoga Adriene” gotta love started thinking “Why find French yoga videos” obviously fewer Frenchspeaking yoga teacher Youtube channel really enjoy morning routine French surprise word like “tailleur” crosslegged seat never used life YouTube yoga recommendation Mady Morrison German ELLE French Yoga Fire Jo French MichaelaYoga Italian German English Teach others free friend interested learning language already speak may extremely useful opportunity practice language probably also know saying know something you’re able teach might entirely accurate language people difficulty explain grammar rule think ‘What heck possessive pronoun’ still perfectly apt user language Nevertheless explaining something others winwin situation Another person happy received help feel good putting skill use Obviously according level speak language even monetize language teaching eg become community teacher website “italki” register without official diploma That’s Ukrainian study friend mine Write diary language Producing journal entry make actively use language invite read article mine journaling target language Cook bake target language recipe video written recipe fulfill purpose filling stomach always nice feeding kitchen vocab’ always get slightly nostalgic looking French recipe unit measurement ingredient remind time lived France normal thing world instance “Maïzena” word French use cornstarch — even though it’s brand name Devour book news article might willing read book hard time finding Check local library worst case buy ebooks download wherever live local 
friend speak target language asking book language might also worth try News article hand also teach current issue country target language spoken you’d stay date local development French would obviously mean apart French newspaper could read Belgian Congolese Martinicain news name Read aloud like speaking reading text another language loud make practice pronunciation safe surrounding home love staying home right covid19 like come across word I’m unsure pronounce google “word pronunciation language” eg “näringsfång pronunciation Swedish” numerous website people recorded word mother tongue hear authentic version — otherwise dictionary audio version available word Listen song sing along dance recommend create playlist language learn learned Whenever feel like listening one directly select French Portuguese playlist dance around washing dish cooking rhythmically swinging tea towel beat perk flatmate could eye great amusementTags Motivation Self Learning Language Language Learning
4,220
How to write articles that people want to read
How to write articles that people want to read

Here are a bunch of recommendations that the In Plain English team considers to be best practices when writing articles that your readers will find engaging and easy to read.

Take a moment to give your article a good title and subtitle. Try to make them concise, yet compelling. If in doubt, ask yourself: “Would I find this title interesting enough that I would want to continue to read the article?”

Don’t create weird formats for your headings and subheadings. Just keep them simple and make sure that the formatting for each heading/subheading in your article is consistent with one another. If you are planning on numbering your headings, here are some examples for you to refer to:

1. This is a good heading
2. This is another good heading

1 : > This is a bad heading
2 #: This is another bad heading
https://medium.com/javascript-in-plain-english/how-to-write-articles-that-people-want-to-read-6e661edb6d06
['Sunil Sandhu']
2020-07-17 22:06:35.637000+00:00
['Writing', 'Programming', 'Articles', 'Guides And Tutorials', 'Tutorial']
Title write article people want readContent write article people want read bunch recommendation Plain English team consider best practice writing article reader find engaging easytoread Take moment give article good title subtitle Try make concise yet compelling doubt ask “Would find title interesting enough would want continue read article” Try make concise yet compelling doubt ask “Would find title interesting enough would want continue read article” Don’t create weird format heading subheading keep simple make sure formatting headingsubheading article consistent one another planning numbering heading example refer 1 good heading 2 another good heading 1 bad heading 2 another bad headingTags Writing Programming Articles Guides Tutorials Tutorial
4,221
What Does the New Twitter Character Limit Mean for Marketers?
The 140-character limit on Twitter may soon be just a memory: Twitter announced on its blog that it is rolling out a character limit of 280 to a small group of users, and if it is successful, it will be launched to everyone. Twitter, anticipating potential backlash at the new development, said “We understand since many of you have been Tweeting for years, there may be an emotional attachment to 140 characters — we felt it, too. But we tried this, saw the power of what it will do, and fell in love with this new, still brief, constraint.”

There is, in fact, a fair amount of backlash to this announcement. Soon after the announcement was made, #Twitter280 was trending on Twitter, and was filled with people who had access to the 280-character limit using it purely to show their disdain for it. A large portion of these tweets are calling out Twitter for implementing this new character limit that nobody really seemed to be asking for, while failing to address other improvements to Twitter that have been highly requested, like the opportunity to edit tweets, better harassment reporting tools, and a zero-tolerance attitude towards hate speech.

While the general public seems to be reaching the consensus that the longer character limit is not a good thing, many marketers are wondering how this will impact their strategy and ability to engage with their audience. Right after Twitter announced the change, many brands didn’t think too much but instead jumped right into testing out/tweeting about the new character limit. After the dust had settled, however, AdWeek reached out to several marketing agencies to get their take.

Rachel Spiegelman, CEO of Pitch, said “The 280-character tweets will likely dilute Twitter as a receptive marketing platform for consumers engaging with brands. Some of the most successful brands on Twitter, including Wendy’s, JetBlue and DiGiorno Pizza, have gotten to the peaks of brand engagement because of the discipline and rigor it takes to fit a message into 140 characters.”

Science seems to agree that shorter tweets perform better. Twitter’s best practices reference research by Buddy Media that found 100 characters to be the ideal tweet length: “Creativity loves constraints and simplicity is at our core. Tweets are limited to 140 characters so they can be consumed easily anywhere, even via mobile text messages. There’s no magical length for a Tweet, but a recent report by Buddy Media revealed that Tweets shorter than 100 characters get a 17% higher engagement rate.”

Social media scientist Dan Zarrella performed research to find out which tweet lengths resulted in the highest click-through rates (CTRs). He found that tweets between 120 and 130 characters long had the highest CTRs.

Others expressed that they didn’t feel that the change would allow them to better deliver what audiences were actually looking for. Jennifer Ruggle, SVP of digital solutions at The Sandbox Agency, said, “Users don’t go to Twitter to read long text blocks.”

In terms of changes in strategy, some marketing professionals are expressing fear that brands will jump into usage of the 280-character limit without really considering the effects that it will have. John Sampogna, co-CEO and founding partner at Wondersauce, said “I’m sure the brands and users who truly ‘get’ the platform will find new creative ways of using it. My concern is most will not.” One potential upside is that the increased character limit may allow brands to deliver better customer service and better address complaints.
The increased character limit allows for better explanations and more in-depth responses. It will also help brands more clearly list legal terms and conditions in Tweets. This is particularly relevant for brand influencers. “This also gives no excuse for brand influencers not to disclose transparency or sponsorship language when applicable as well, which is better overall for consumers,” said Hannah Redmond, group director of strategy and innovation at The Marketing Arm. It may take some time to fully understand how the new 280-character limit will shape marketing strategies, for better or worse. For now, brands should tread carefully and not lose sight of what draws people to Twitter and what type of content they are looking for. That means not posting longer tweets simply because it is now an option. Brands and marketers should still try to use the character limit as a driver of creativity, by attempting to deliver messages that resonate in a short amount of characters. That being said, brands and marketers should not ignore the opportunities to better engage with consumers that the 280-character limit may create. A big component of this will be improved, or more in-depth customer service.
https://medium.com/fanzee/what-does-the-new-twitter-character-limit-mean-for-marketers-49bca104868e
['Leah Bury']
2017-10-04 15:11:59.276000+00:00
['Social Media', 'Social Media Marketing', 'Marketing', 'Digital Marketing', 'Twitter']
Title New Twitter Character Limit Mean MarketersContent 140character limit Twitter may soon memory Twitter announced blog rolling character limit 280 small group user successful launched everyone Twitter anticipating potential backlash new development said “We understand since many Tweeting year may emotional attachment 140 character — felt tried saw power fell love new still brief constraint” fact fair amount backlash announcement Soon announcement made Twitter280 trending Twitter filled people access 280 character limit using purely show disdain large portion tweet calling Twitter implementing new character limit nobody really seemed asking failing address improvement Twitter highly requested like opportunity edit tweet better harrassment reporting tool zero tolerance attitude towards hate speech general public seems reaching consensus longer character limit good thing many marketer wondering impact strategy ability engage audience Right Twitter announced change many brand didn’t think much instead jumped right testing outtweeting new character limit dust settled however AdWeek reached several marketing agency get take Rachel Spiegelman CEO Pitch said “The 280character tweet likely dilute Twitter receptive marketing platform consumer engaging brand successful brand Twitter including Wendy’s JetBlue DiGiorno Pizza gotten peak brand engagement discipline rigor take fit message 140 characters” Science seems agree shorter tweet perform better Twitter’s best practice reference research Buddy Media found 100 character ideal tweet length “Creativity love constraint simplicity core Tweets limited 140 character consumed easily anywhere even via mobile text message There’s magical length Tweet recent report Buddy Media revealed Tweets shorter 100 character get 17 higher engagement rate” Social medium scientist Dan Zarrella performed research find tweet length resulted highest clickthrough rate CTRs found tweet 120 130 character long highest CTRs Others expressed didn’t feel change would allow better deliver audience actually looking Jennifer Ruggle SVP digital solution Sandbox Agency said “Users don’t go Twitter read long text blocks” term change strategy marketing professional expressing fear brand jump usage 280character limit without really considering effect John Sampogna coCEO founding partner Wondersauce said “I’m sure brand user truly ‘get’ platform find new creative way using concern not” One potential upside increased character limit may allow brand deliver better customer service better address complaint increased character limit allows better explanation indepth response also help brand clearly list legal term condition Tweets particularly relevant brand influencers “This also give excuse brand influencers disclose transparency sponsorship language applicable well better overall consumers” said Hannah Redmond group director strategy innovation Marketing Arm may take time fully understand new 280character limit shape marketing strategy better worse brand tread carefully lose sight draw people Twitter type content looking mean posting longer tweet simply option Brands marketer still try use character limit driver creativity attempting deliver message resonate short amount character said brand marketer ignore opportunity better engage consumer 280character limit may create big component improved indepth customer serviceTags Social Media Social Media Marketing Marketing Digital Marketing Twitter
4,222
Building microservices using IBM CloudPaks as amateur developer 2/5
Building microservices using IBM CloudPaks as amateur developer 2/5

Chechu · Sep 12

Microservices Logging in Openshift

This is the second article of a set of 5 about how to code microservices using IBM CloudPaks:

1. Leveraging Kabanero
2. Microservices logging in OpenShift
3. Working with ServiceMesh and Microservices
4. Async Communication for Microservices with IBM Event Streams and IBM MQ (Kafka and MQ)
5. Microservices reliability across Kubernetes clusters with IBM MultiCloud Manager

In the first article, I built an app following a microservice architecture, but it can be a nightmare to debug that application in a multi-user scenario by relying on the native logging system. As we introduced a more “dispersed” architecture, tracing a request through the different microservices can get complicated, especially with a multi-user deployment. Since we need to follow each “thread’s” progression through the microservices, we need a method to identify each “thread”. Let’s review the app and my approach to solving this problem by looking at the logs it produces.

Microservices Application Topology

Microservice App Architecture

Looking at one “thread’s” progression, for example on the management page load request:

Looking at even this simple app running in a multi-user scenario, it is apparent that a lot of log entries will be generated for the same transaction without any context to identify the user who generated the log entry, making it impossible to quickly diagnose an issue. In order to address this challenge, I created my own solution based on the Mapped Diagnostic Context instead of the logging framework. Here is an outline of my solution:

In a nutshell, the UI (React microservice) creates a unique ID that is attached to each request that a transaction generates, together with the respective username. In my local development with Appsody, we can see the following:

The schema I used for the logs is the following:

reqID: “16de6c53-7279-467f-b689-dd9c03ca8d6b”
user: “[email protected]”
status: “successful”
service: “auth”
message: “(/api/users/login) — User Login: [email protected] SUCCESS”

Using reqID and user I can trace the progress of a transaction across all microservices. The other keys identify the respective microservice, the status, and a detailed message. The code added is in these repos:

- UI: React portal using IBM Carbon Design
- APIGW: API gateway to expose the backend microservices to the React portal
- Auth: Authentication microservice to validate that users accessing backend microservices are logged in.
- Management: Microservice that manages the creation of the courses.
- K8sManager: Microservice that manages the creation of the workspaces on Openshift and the Linux VM (to run ‘oc’ commands)

*** Each article’s code is hosted in a branch on this repo, with the Master branch hosting all the modifications across the 5 articles.

Once we push the new code to our repos, the pipelines described in the previous article will start to deploy the new pods. If the Openshift cluster has the logging operator installed, we can use Kibana to see the logs and filter them. The example below shows a flow of ‘user login’ and selecting a ‘course’, which retrieves or creates a workspace. And if we filter by the reqID “bb696da4-e3e7-4dd5-9860-578a80ea2bcd”, we can trace the progress of the request through the microservices.

Wrap Up

Following the Mapped Diagnostic Context approach presented above, we can implement a “distributed tracing” microservice pattern, explained here.
This facilitates debugging a transaction that spans different microservices, which becomes critical as the topology gets more dispersed. If you find this helpful, you might want to take a look at the next article of the series — Working with ServiceMesh and Microservices — (to be published on September 21st).
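To make the correlated-logging schema above concrete, here is a minimal sketch of how a gateway could reuse (or mint) a reqID and emit log entries in that shape. The article does not show its actual implementation, so treat this as an illustration only: the Express framework, the X-Request-ID header name, and the /api/users/login handler are assumptions, not the author's code.

// Minimal sketch, assuming a Node.js/Express gateway (hypothetical).
const express = require('express');
const crypto = require('crypto');

const app = express();

// Reuse the unique ID the UI attached to the request, or mint a new one.
// 'X-Request-ID' is a header name chosen for this sketch.
app.use((req, res, next) => {
  req.reqID = req.get('X-Request-ID') || crypto.randomUUID();
  next();
});

// Emit one JSON log line per event, following the schema from the article.
function logEvent(req, user, status, service, message) {
  console.log(JSON.stringify({ reqID: req.reqID, user, status, service, message }));
}

app.post('/api/users/login', (req, res) => {
  // ...authentication logic would run here...
  logEvent(req, '[email protected]', 'successful', 'auth',
    '(/api/users/login) — User Login: [email protected] SUCCESS');
  res.json({ status: 'ok' });
});

app.listen(8080);

As long as every downstream call forwards the same ID header, filtering Kibana by reqID returns the full trace of one transaction, exactly as in the screenshots described above.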
https://medium.com/ibm-garage/building-microservices-using-ibm-cloudpaks-as-amateur-developer-2-5-bf06cdebabbc
[]
2020-09-17 13:57:52.657000+00:00
['Openshift 4', 'Microservices', 'Logging', 'Microservice Architecture']
Title Building microservices using IBM CloudPaks amateur developer 25Content Building microservices using IBM CloudPaks amateur developer 25 Chechu Follow Sep 12 · 4 min read Microservices Logging Openshift second article set 5 code microservices using IBM CloudPaks 1Leveraging Kabanero 2 MicroServices logging OpenShift 3 Working ServiceMesh Microservices 4 Async Communication Microservices IBM Event Streams IBM MQ Kafka MQ 5 Microservices reliability across Kubernetes cluster IBM MultiCloud Manager first article built app following microservice architecture nightmare debug application multiuser scenario leveraging native logging system introduced “dispersed” architecture tracing request different microservices get complicated especially multiuser deployment need identify “thread’s” progression microservices need method identify “thread” Lets review app see approach solve problem looking log dropped Microservices Application Topology Microservice App Architecture Looking one “thread” progression example management page load request Looking even simple app running multiuser scenario apparent lot log entry generated transaction without context identify user generated log entry making impossible quickly diagnose issue order address challenge created solution based Mapped Diagnostic Context instead logging framework outline solution nutshell UI React microservice creates unique ID attached request transaction generates together respective username local development Appsody see following schema used log following reqID “16de6c53–7279–467fb689dd9c03ca8d6b” user “adminchechucom” status “successful” service “auth” message “apiuserslogin — User Login adminchechucom SUCCESS” Using ReqID user trace progress transaction across microservices key identify respective microservice status detailed message code added repos UI React portal using IBM Carbon Design APIGW API gateway expose backend microservices React portal Auth Authentication microservice validate user accessing backend microservices logged Management Microservice manages creation course K8sManager Microservice manage creation workspace Openshift Linux VM run ‘oc’ command article’s code hosted branch repo Master branch hosting modification across 5 article push new code repos pipeline described previous article start deploy new pod Openshift cluster logging operator installed use Kibana see log filter example show flow ‘user login’ selects ‘course’ retrieves creates workspace filter “reqID” “bb696da4e3e7–4dd5–9860–578a80ea2bcd” trace progress request microservices Wrap Following Mapped Diagnostic Context presented implement “distributed tracing” microservice pattern explained facilitates debugging transaction span across different microservices becomes critical topology get dispersed find helpful might want take look next article series — Working ServiceMesh Microservices— published September 21stTags Openshift 4 Microservices Logging Microservice Architecture
4,223
9 Popular Cross-Platform Tools for App Development in 2019
9 Popular Cross-Platform Tools for App Development in 2019

Picking the right app development tools is important for building a good app. To help get you started, I’ve already conducted the research to give you the top options available for cross-platform app development tools. Read on to learn about these multi-platform tools!

Popular Cross-Platform Tools for App Development in 2019

When business firms think about building a mobile app, their minds go straight to cross-platform app development. Today startups and SMEs find cross-platform an excellent form of technology to develop an app on multiple platforms like Android, iOS, and Windows simultaneously. This means that by building a single app you can target both Android and iOS, thus maximizing your reach to the target audience. In fact, the cross-platform application development market surpassed the figure of $7.9 billion in 2019.

Ideally, cross-platform technology delivers native-like apps because of the advent of advanced tools and technologies that allow developers to build apps that may appear similar to native apps. Also, at a time when the number of apps in the Google Play Store was most recently placed at around 2.6 million in March 2019, businesses wouldn’t want to risk missing their presence on the Google Play Store or any other platform. Budgeting is always an issue for businesses if they go for native apps, which is where cross-platform technology has emerged as the premium choice for businesses that aim to build their app for multiple platforms. So, move on to the list of the best cross-platform app development tools to go for in 2019.

1. Adobe PhoneGap

PhoneGap — Best Mobile app development tool (Source: Google Images)

PhoneGap is owned by Adobe and is one of the best cross-platform development tools to use in 2019. It’s based on the open source framework Apache Cordova, which gives you access to the complete PhoneGap toolset and helps streamline the app development process. That includes debugging tools that allow you to inspect HTML and CSS and debug your JavaScript code. All I would suggest is that you take the help of a dedicated cross-platform developer.

Here is the list of tools:

For iOS App Development: Safari Web Inspector Tool

Steps to use:
- Take your iOS device and connect it to your computer.
- Now, install and launch Safari on your system.
- Launch your PhoneGap application on the iOS device.
- Open Safari’s Develop menu and look for your iOS device in the list.
- Select “PhoneGap Webview” listed under your iOS device.

For Android App Development: Chrome Developer Tool

Steps to use:
- Make sure your Android test device supports all the developer options.
- Now launch your Google Chrome web browser.
- Look for chrome://inspect in Chrome.
- Select the PhoneGap application on your device.
- Developer tools will launch.

For Windows, visit the Microsoft Visual Studio page.

One of the reasons why I am suggesting PhoneGap is that anyone can learn how to use its tools, even without prior experience. PhoneGap takes care of the development process by compiling all your work in the Cloud, so you don’t need to maintain native SDKs.

2. Appcelerator

Appcelerator — Most popular mobile app development tools (Source: Google Images)

Appcelerator is a cross-platform mobile app development platform that helps get your app ready faster by simplifying the whole process. By using a single JavaScript codebase you can build native-like mobile apps with cloud-like performance.
Another top benefit of Appcelerator is its quality, as it can be used for building apps for any device or operating system. The tool also makes it easy for you to use and test your apps using automated mobile tests that allow you to measure your app usage and the results of your app project. You can detect bugs and crashes, and also make adjustments to improve the overall performance of your app. With Appcelerator, you will be provided with access to Hyperloop, one of the best cross-platform APIs for multi-platform application development.

3. Corona

Corona — Apps development Tool (Source: Google Images)

Corona is a cross-platform tool ideal for creating games and apps for mobile devices, desktops, and TV devices using just one code base. This tool speeds up your coding process: you can easily update your code, save the changes, and instantly see the results on real devices. With Corona, your apps are optimized for performance because of the lightweight scripting power of Lua, which enhances your app’s performance at every level. Corona is a free-to-use cross-platform app development tool primarily used for 2D games, as it’s great for high-quality graphics and high-speed development of games.

4. React Native

React Native — Best app development software (Source: Google Images)

React Native allows you to create native applications and uses JavaScript as the programming language to build apps. The strong side of React Native is that you can write modules in languages such as C, Swift, and Java. The best part is that you can work on image editing and video processing, which aren’t possible with the other API frameworks. React Native is unquestionably one of the best platforms to use for cross-platform app development because it interprets your source code and converts it to native elements in less time. Both Facebook and Instagram have used React Native to build their native apps, which are among the most used applications in the world. So, you can trust React Native.

5. Xamarin

Xamarin — Best cross platform mobile app development tools (Source: Google Images)

Microsoft Visual Studio’s Xamarin allows you to build apps for different platforms such as Windows, iOS, and Android using a single .NET codebase. The best part of the Xamarin cross-platform tool is that all the apps built on it look and feel like native apps, because it uses the native interfaces that work the same way a user wants to use them. With Xamarin, you can give your app platform-specific hardware boosts to achieve performance similar to native apps. Also, most of your coding, approximately 75%, will be the same regardless of the platform you’re building your mobile application for. Xamarin works on a single codebase and accelerates the process of cross-platform mobile app development. Xamarin works on both Mac and PC systems and offers you tools such as debugging, a UI design tool, and code editing.

6. Qt

QT: Cross Platform Mobile App Development Kit (Source: Google Images)

Qt is one of the best cross-platform development tools for mobile app development. Why I’m counting this tool among the best cross-platform tools is because of its quality features, which allow creating fluid UIs, applications, and embedded devices with the same code for Android, iOS, and Windows. If your app is not performing well and you want to rework it, you can easily make changes using Qt, and they will automatically be applied to your app. This software tool also allows you to see how your app is performing on different platforms.
Moreover, it’s easy to use and doesn’t have a complex interface like some other cross-platform development tools I’ve seen.

7. Sencha

Sencha: Easy Mobile App Development Tool (Source: Google Images)

With Sencha you get all the modern Java and JavaScript frameworks that help you build your web apps easily for any device. It provides you with 115+ fully supported and tested UI components that you can easily integrate into your apps. It is one of the most comprehensive tools for performing end-to-end testing of apps on all platforms. In addition, Sencha provides you with “Themer” to create reusable themes by customizing themes built on iOS, Ext JS, ExtAngular, and ExtReact. Sencha offers a data visualization table that makes it easier for you to track your app information. This also makes it possible for you to organize your app content and control how your content is displayed across browsers, devices, and screen sizes.

8. Unity3D

Unity3D: Open source Web App Development Tool (Source: Google image)

This cross-platform app development tool is popular because of its graphics quality, which is absolutely incredible. It’s easy to use this tool and you can use it for more than just a mobile app. With the Unity3D tool you can export your app or games to 17 platforms, including iOS, Android, Windows, Xbox, PlayStation, Linux, Web, and Wii. Unity3D can also be used to track user analytics and share your app on social networks. You can also connect with the network of Unity3D developers, called Unity Connect, to find help and get your questions answered if you’re having tech issues with coding or something else.

9. 5App

5App: iOS and Android App Development Tool (Source: Google image)

5App is a unique tool designed specifically for businesses in learning and HR consulting, and for firms that want to organize and deliver resources to their employees or to the right people at the right time. 5App uses HTML5 and JavaScript for the coding of apps and emphasizes the security of app data. The tool allows you to quickly create relevant content to support your employees’ learning and performance. The finished app is compatible with both Android and iOS devices, so you can choose accordingly as per your company’s needs.

Final Thoughts

Today, businesses face tough competition and their main focus is on the target audience. That’s why businesses need to take as much advantage of cross-platform app development tools as possible. In my list of top 9 cross-platform mobile development tools, you can find a tool that can manage all of your mobile app development needs. It isn’t always easy to choose the best development tool because there are so many options available on the market, so refer to my list of top cross-platform app development tools to build your mobile app.
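To illustrate the "single JavaScript codebase" claim made for React Native above, here is a minimal sketch of one component that renders natively on both iOS and Android. It is a generic hello-world written for this article, not taken from any vendor's documentation; the component names come from the public react-native package.

// Minimal React Native sketch: one JavaScript file, two platforms.
import React from 'react';
import { SafeAreaView, StyleSheet, Text } from 'react-native';

// StyleSheet keeps styles in plain JS; no platform-specific layout files.
const styles = StyleSheet.create({
  container: { flex: 1, alignItems: 'center', justifyContent: 'center' },
  greeting: { fontSize: 24 },
});

export default function App() {
  return (
    <SafeAreaView style={styles.container}>
      <Text style={styles.greeting}>One codebase, iOS and Android</Text>
    </SafeAreaView>
  );
}

Dropped into a freshly generated React Native project, this single component is compiled to native widgets on each platform, which is the core selling point the listicle attributes to React Native (and, with .NET instead of JavaScript, to Xamarin).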
https://medium.com/hackernoon/9-popular-cross-platform-tools-for-app-development-in-2019-53765004761b
['Amyra Sheldon']
2019-07-09 09:53:35.048000+00:00
['Crossplatform Application', 'Open Source App Dev Tools', 'Best And Popular', 'Mobile App Development', 'Web App Development Tools']
Title 9 Popular CrossPlatform Tools App Development 2019Content 9 Popular CrossPlatform Tools App Development 2019 Picking right app development tool important building good app help get started I’ve already conducted research give top option available crossplatform app development tool Read know multiple platform tool Popular CrossPlatform Tools App Development 2019 business firm think building mobile app mind go straight crossplatform app development Today startup SMEs find crossplatform excellent form technology develop app multiple platform like Android iOS Windows simultaneously mean building single app target Android iOS thus maximizing reach target audience fact crossplatform application development market surpassed figure 79 2019 Ideally crossplatform technology delivers nativelike apps advent advanced tool technology allow developer build apps may appear similar native apps Also scenario number apps Google Play Store recently placed around 26 million apps March 2019 Businesses wouldn’t want risk missing presence Google play store platform Budgeting always issue business go native apps crossplatform technology emerged premium choice business aim build app multiple platform move onto list best crossplatform app development tool go 2019 1 Adobe PhoneGap PhoneGap — Best Mobile app development tool Source Google Images PhoneGap owned Adobe one best crossplatform development tool use 2019 It’s based open source framework Apache Cordova give access complete set PhoneGap toolset help streamline app development process include option Debugging tool allow inspect HTML CSS debug code JavaScript would suggest must take help dedicated cross platform developer list tool iOS App Development Safari Web Inspector Tool Steps Use Take iOS device connect computer install launch Safari system Make PhoneGap application launched iOS Device Open menu Safari Develop look iOS Device list Select “PhoneGap Webview” listed iOS device Android App Development Chrome Developer Tool Steps Use Make sure Android test device support developer option launch Google Chrome web browser Look chromeinspect Chrome Select PhoneGap Application device Developer tool launch Windows visit page Microsoft Visual Studio One reason suggesting PhoneGap anyone learn use tool even don’t experience using PhoneGap take care development process compiling work Cloud don’t need maintain native SDKs 2 Appcelerator Appcelerator — popular mobile app development tool Source Google Images Appcelerator crossplatform mobile app development platform help get app ready faster way simplifying whole process using single JavaScript code build nativelike apps mobile apps cloudlike performance Another top benefit Appcelerator quality used building apps device operating system tool also make easy use test apps using automated mobile test allow measure app usage result app project detect bug crash also make adjustment improve overall performance app Appcelerator provided access Hyperloop one best crossplatform APIs multiplatform application development 3 Corona Corona Apps development Tool Source Google Images Corona crossplatform ideal creating game apps mobile device desktop tv device using one code base tool speed coding process easily update code save change instantly see result real device Corona apps optimized performance lightweight scripting power Lua enhances app performance every level Corona free use crossplatform app development tool primarily used 2d game it’s great use highquality graphic highspeed development game 4 React Native React 
Native — Best app development software Source Google Images React Native allow create native application us JavaScript programming language build apps strong side React Native write module language C Swift Java best part work image editing video processing aren’t possible API framework React Native unquestionably best platform use crossplatform app development interprets source code convert native element le time Facebook Instagram used React Native build native apps used application world trust React Native 5 Xamarin Xamarin — Best cross platform mobile app development tool Source Google Images Microsoft Visual Studio Xamarin’s allows build apps different platform Windows iOS Android using single net code best part Xamarin crossplatform tool apps built look feel like native apps us native interface work way user want use Xamarin give app platformspecific hardware boost achieve performance similar native apps Also coding approx 75 regardless platform you’re building mobile application Xamarin work single code identifying accelerates process crossplatform mobile app development Xamarin work Mac PC system offer tool debugging UI design tool code editing 6 Qt QT Cross Platform Mobile App Development Kit Source Google Images Qt best crossplatform development tool mobile app development I’m counting tool best crossplatform tool quality feature allow creating fluid UIs application embedded device code Android iOS Windows app performing well want rework easily make change app using Qt automatically make change applied app software tool also allows see app performing different platform Moreover it’s easy use don’t complex interface like crossplatform development tool I’ve seen 7 Sencha Sencha Easy Mobile App Development Tool Source Google Images Sencha get modern Java JavaScript framework help build web apps easily device provides 115 fully supported test UI component easily integrate apps one comprehensive tool perform endtoend testing apps platform addition Sencha provides “Themer” create reusable theme customizing theme built iOS Ext JS ExtAngular ExtReact Sencha offer data visualization table make easier track app information also make possible organize app content content displayed browser device screen size 8 Unity3D Unity3D Open source Web App Development Tool Source Google image crossplatform app development tool popular graphic quality absolutely incredible It’s easy use tool use mobile app Unity3D tool export app game 17 platform include — iOS Android Windows Xbox PlayStation Linux Web Wii Unity3d also used track user analytics share app social network also connect network Unity3D developer called Unity Connect find help get question answered you’re tech issue coding something else 9 5App 5App iOS Android App Development Tool Source Google image 5App unique tool designed specifically business learning HR consulting firm want organize deliver resource employee right people right time 5Apps us HTML5 JavaScript coding apps emphasis security app data tool allows quickly create relevant content support employees’ learning performance finished app compatible Android iOS device choose accordingly per company’s need Final Thoughts Today business face tough competition main focus target audience That’s business need take advantage crossplatform app development tool possible list top 7 crossplatform mobile development tool find tool manage mobile app development need isn’t always easy choose best development tool many option available market refer list top crossplatform app development tool build 
mobile appTags Crossplatform Application Open Source App Dev Tools Best Popular Mobile App Development Web App Development Tools
4,224
“Full-Time Crypto”
For nearly all of us in the cryptocurrency space, uttering the words “full-time crypto” means something special. For you, that phrase may mean early retirement, a change in your career path, financial freedom or even cutting ties (somewhat) with fiat monies forever. In my case, all those reasons have made me ponder that phrase multiple times a week over the past 6+ years. Reflections on a hobby that began in 2011 and includes hundreds of deaths of my passion project solidify the fact that I’m “oldskool” in the cryptocoin world. Fittingly, it was one of those articles proclaiming Bitcoin’s death that drew me into the rabbit hole to start with.

Illustration: Martin Venezky

The idea of digital money — convenient and untraceable, liberated from the oversight of governments and banks — had been a hot topic since the birth of the Internet. Cypherpunks, the 1990s movement of libertarian cryptographers, dedicated themselves to the project. Yet every effort to create virtual cash had foundered. — “The Rise and Fall of Bitcoin” — Benjamin Wallace, WIRED

“Hooked on a Feeling”

Like many of you internet nomads, this sounded amazing. Individual Sovereignty; are you kidding me? Sign me up! There were limited, believable use cases at this point. Only BTC moving between exchanges and sites like Silk Road or LocalBitcoins, but the promise of a future, free from even these centralized 3rd Party Bitcoin services, was more than enough to spark hope in a decentralized economy. Things changed about 48 hours later with my first 3X bump…

“I’m in this for the Coin” 👈

For most of you in the cryptocurrency space, this is your morning, afternoon, evening and, most likely, dream life. People in this state of euphoria contemplate “full-time crypto” life pretty regularly — retiring early, financial freedom and even changing your career path to “day trader”. Don’t let anyone in this space squelch your newfound love for chart reading or magnetism towards FOMO. It’s part of the initiation that every oldskooler worth listening to will admit to participating in when cornered. I think something else they’ll tell you is, “it’s more than just making money.” Now, whether you’re listening or not is another story.

Most of the price action leading into 2013/2014 was surrounding Satoshi’s “greatest invention of all time”:

A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. — “Bitcoin: A Peer-to-Peer Electronic Cash System” — Satoshi Nakamoto

Little did most of us know that the “coin” was only 1/2 of the equation. Funny how viewing things through money-tinted glasses leaves out details… like, how does this new cash system actually work? I think this is where journalists fail to see Satoshi’s vision in their death bed write-ups, and where many of us found time to learn after experiencing the gut-check run-up to $1200 → $160 → $600 → $300.

“Semi-Retirement”

That’s what I called it. We had our first girl in 2012, and we were blessed enough financially for me to leave my architecture firm, focus on our family and maintain our lifestyle via day trading cryptocurrencies. I think my quota was to make 1 BTC/day trading — do that and we were good to go! Still in my late thirties at the time, calling it a retirement sounded great (to me) but I didn’t want the raised eyebrows all my old college roommates would surely give at our family parties. So I improvised. I mean, who was I to spoil their joy of being neck deep in parenthood?
Raising children, going to ball practices every night, eating McDonald’s in the car every meal. Spending 40+ hours a week away from the ones you loved shouldn’t be interrupted by such nonsense as retiring 15 years after just getting started. Essentially it was a sabbatical from my “real job” as a healthcare design architect. The career path I unwittingly signed on for as a toddler building with LEGOs, refined in high school designing cars and lost days of sleep over in college. “Living the American Dream” of paying into 5–10 years of debt and then trudging through nearly 50 years of service for someone not named “Mr. ME”. Oh glory days!

“I’m in this for the Blockchain”

I was still playing family man and day trading cryptocurrencies in 2014 when I decided to return to my architecture firm. There really isn’t anything better than having the freedom to spend the majority of your day with the people most important in your life. My decision to return to a real job wasn’t as much a necessity as it was a strategic move. Financially we were good, but I had 2 reasons to jump back in.

First, frankly, I was bored. I missed doing design and I wasn’t really participating in the communities of any of the coins I was holding. When you aren’t involved in a hobby you say you’re passionate about, you quickly lose interest. It’s the Curse of the Entrepreneur; lacking focus and chasing ideas. There weren’t hours of crypto webcasts that popped up daily in my feed to keep me entertained, only Bitcointalk.org, which was good for a couple train wreck sightings a week. Looking back at this time, I can see the tight bond of community, value and technology. We like to shill our favorite coin’s “first to” and “best at” technology, but most users still don’t understand it all, and deep down they only want more people to talk about their project so the price goes up.

Second, Bitcoin was in a remarkably stable price state and it was telling me it was time to accumulate (hence we needed capital), as news was beginning to surround the distributed ledger 1/2 of Satoshi’s invention: the “blockchain”. I had already begun focusing on privacy coins before returning to the 9–5, with new projects like Darkcoin and Monero beginning development. But I really gravitated to projects that were bringing utility into the crypto space. I didn’t sign up to Bitcoin for merely another way to move money on the internet from one exchange to another, I signed up for a better this 👇

Commerce on the Internet has come to rely almost exclusively on financial institutions serving as trusted third parties to process electronic payments. While the system works well enough for most transactions, it still suffers from the inherent weaknesses of the trust based model. — “Bitcoin: A Peer-to-Peer Electronic Cash System” — Satoshi Nakamoto

To me this speaks to the power of the blockchain: the ability to create trustless models for social platforms, exchanges, marketplaces, storage and even oracles. These projects were moving past the coin and building in utility, just like the quote above, taken from the first line of the Bitcoin Whitepaper introduction. This ultimately led me to the Shadow Project, which had already developed a privacy currency, ShadowCash SDC, but was now building in some utility with a decentralized marketplace. To me, this was the next step for crypto: creating a decentralized, anonymous economy! Around this time I finally began volunteering for the first time, wherever I could help within the SDC community.
It may have taken me a while to figure this out, but I had skills from real life that could actually help a team of volunteers working on an open-source project that I also believed in and invested in. “Get off your ass and do something!”
https://medium.com/decentralize-today/full-time-crypto-9b389c6db099
['Paul Schmitzer']
2018-01-16 21:31:46.165000+00:00
['Cryptocurrency', 'Retirement', 'Freedom', 'Entrepreneurship', 'Bitcoin']
Title “FullTime Crypto”Content nearly u cryptocurrency space uttering word “fulltime crypto” mean something special phrase may mean early retirement change career path financial freedom even cutting tie somewhat fiat monies forever case reason made ponder phrase multiple time week past 6 year Reflections hobby began 2011 includes hundred death passion project solidifies fact I’m “oldskool” cryptocoin world Fittingly one article proclaiming Bitcoin’s death drew rabbit hole start Illustration Martin Venezky idea digital money — convenient untraceable liberated oversight government bank — hot topic since birth Internet Cypherpunks 1990s movement libertarian cryptographer dedicated project Yet every effort create virtual cash foundered — “The Rise Fall Bitcoin” — Benjamin Wallace WIRED “Hooked Feeling” Like many internet nomad sounded amazing Individual Sovereignty kidding Sign limited believable use case point BTC moving exchange site like Silk Road LocalBitcoins promise future free even centralized 3rd Party Bitcoin service enough spark hope decentralized economy Things changed 48 hour later first 3X bump… “I’m Coin” 👈 cryptocurrency space morning afternoon evening likely dream life People state euphoria contemplate “fulltime crypto” life pretty regularly — retiring early financial freedom even changing career path “day trader” Don’t let anyone space squelch new found love chart reading magnetism towards FOMO It’s part initiation every oldskooler worth listening admit participating cornered think something else they’ll tell “it’s making money” you’re listening another story price action leading 20132014 surrounding Satoshi’s “greatest invention time” purely peertopeer version electronic cash would allow online payment sent directly one party another without going financial institution “Bitcoin PeertoPeer Electronic Cash System” — Satoshi Nakamoto Little u know “coin” 12 equation Funny viewing thing moneytinted glass leaf details… like new cash system actually work think journalist fail see Satoshi’s vision death bed writeups many u found time learn experiencing gut check runup 1200 → 160 → 600 → 300 “SemiRetirement” That’s called first girl 2012 blessed enough financially leave architecture firm focus family maintain lifestyle via day trading cryptocurrencies think quota make 1 BTCday trading — good go Still latethirties time calling retirement sounded great didn’t want raised eyebrow old college roommate would surely give family party improvised mean spoil joy neck deep parenthood Raising child going ball practice every night eating McDonald’s car every meal Spending 40 hour week away one loved shouldn’t interrupted nonsense retiring 15 year getting started Essentially sabbatical “real job” healthcare design architect career path unwittingly signed toddler building LEGOs refined highschool designing car lost day sleep college “Living American Dream” paying 5–10 year debt trudging nearly 50 year service someone named “Mr ME” Oh glory day ”I’m Blockchain” still playing family man day trading cryptocurrencies 2014 decided return architecture firm really isn’t anything better freedom spend majority day people important life decision return real job wasn’t much necessity strategic move Financially good 2 reason jump back First frankly bored missed design wasn’t really participating community coin holding aren’t involved hobby say you’re passionate quickly lose interest Curse Entrepreneur lacking focus chasing idea weren’t hour crypto webcasts popped daily feed keep entertained Bitcointalkorg 
good couple train wreck sighting week Looking back time see tight bond community value technology like shill favorite coin’s “first to” “best at” technology user still don’t understand deep want people talk project price go Second Bitcoin remarkably stable price state telling time accumulate hence needed capital news beginning surround distributed ledger 12 Satoshi’s invention “blockchain” already begun focusing privacy coin returning 9–5 new project like Darkcoin Monero beginning development really gravitated project bringing utility crypto space didn’t sign Bitcoin merely another way move money internet one exchange another signed better 👇 Commerce Internet come rely almost exclusively financial institution serving trusted third party process electronic payment system work well enough transaction still suffers inherent weakness trust based model — “Bitcoin PeertoPeer Electronic Cash System” — Satoshi Nakamoto speaks power blockchain ability create trustless model social platform exchange marketplace storage even oracle project moving past coin building utility like quote taken first line Bitcoin Whitepaper introduction ultimately led Shadow Project already developed privacy currency ShadowCash SDC building utility decentralized marketplace next step crypto creating decentralized anonymous economy Around time finally began volunteering first time wherever could help within SDC community may taken figure skill real life could actually help team volunteer working opensource project also believed invested “Get as something”Tags Cryptocurrency Retirement Freedom Entrepreneurship Bitcoin
4,225
Even a pandemic can’t stop the desperate flow of refugees to Europe
Even a pandemic can’t stop the desperate flow of refugees to Europe

In the circumstances, an EU humanitarian package might serve as a band aid but not much more.

Influx. Members of a German rescue NGO on a rubber boat during an operation off the Libyan coast. (AFP)

In the weeks since the World Health Organisation declared a pandemic, it’s become clear that the outbreak of disease can paralyse national economies but not the flight of desperate people across the Mediterranean to apparent safety. How else to explain the fact that migrants are still travelling from Libya towards Europe? In the last week or so, more than 500 migrants left Libya for Europe, according to the International Organisation for Migration (IOM). On April 12, the Italian government had, perforce, to quarantine a ship-load of migrants at sea. The good thing is it didn’t try to send them back.

The way things are going right now, “no state wants to rescue” migrants, according to the German non-profit Sea Watch. Libya, Italy and Malta have all shut their borders citing the pandemic. Last week, Libya refused entry to about 280 returning migrants. IOM initially said Libyan ports appeared to have closed altogether. But later, the UN Refugee Agency’s special envoy for the central Mediterranean Vincent Cochetel clarified that “Libya’s Directorate for Combating Illegal Migration does not seem able or prepared to take more detainees.” Under the terms of a deal between Italy and Libya’s UN-backed government, signed in 2017 and renewed last November, the Libyan coastguard is meant to stop migrant boats heading for Europe and return their passengers to Libya. But the pandemic seems to have thrown all of that into doubt.

So what rights, if any, do refugees have during a once-in-a-century pandemic? The first point to note is that refugees and asylum-seekers are recognised under international law. Although unprecedented times require unprecedented measures, it’s reasonable to say that migrants of all sorts should at least be entitled to just and humane treatment. In this context, there is no more shining example than Portugal. Earlier this month, Portugal granted full citizenship rights, through June 30, to all refugees, asylum-seekers and migrants with pending applications for residency certificates. This will allow them to access healthcare, a government spokesperson explained. The decision stands as one of the more heartwarming instances of pragmatic humanism in the age of the coronavirus.

Elsewhere, not so much. The exceptional circumstances of a pandemic have justifiably prompted border closures and travel restrictions, but it’s all too clear that several countries are simply using the coronavirus outbreak to push the same restrictionist policies they pursued before. It was on March 1, before a single coronavirus case was recorded in Hungary, that it suspended the right to claim asylum in the country, claiming there was a connection between the disease and illegal migration. Landlocked Hungary has the luxury of self-isolation afforded by its geography, but not island nations like Malta.

On April 13, Malta’s foreign minister and home minister jointly wrote to the European Union’s (EU) High Representative for Foreign Affairs and Security Policy Josep Borrell to demand “imminent and substantial” humanitarian assistance for Libya to deal with “the rapidly deteriorating migration situation in the Mediterranean during this testing hour.” Malta’s argument was stark.
Unless the EU launches a humanitarian mission for Libya with at least 100 million euros “today and not tomorrow,” there may be little or no “incentive” for migrants to stay put in Libya rather than making for European soil. Accordingly, the Maltese ministers wrote, the EU should “boost the empowerment of the Libyan Coast Guard in enhancing the control of its borders, as well as concretely ensuring that Libya represents a safe port for the disembarkation of migrants.” The issue will be discussed at an emergency EU meeting.

But a second tangential point may be harder to confront. With the pandemic triggering the worst economic downturn since the 1930s’ Great Depression, poor countries face the prospect of debt crises and political turmoil. This, in turn, could prompt massive outflows of migrants towards the rich world, especially Europe. As Kristalina Georgieva, managing director of the International Monetary Fund, recently noted: “Trouble travels. It doesn’t stay in one place.”

The implications are dire for conflict-scarred countries like Libya. In the Maltese letter to EU High Representative Borrell, the ministers described Libya as “a complex landscape plagued with difficulties across conflict, health, humanitarian and migration dimensions, all of which are snowballing at this very moment.” The COVID-19 crisis, they added, is “leaving its mark in Libya and is weakening an already fragile health system.” More than 650,000 people wait to “leave Libyan shores for Europe,” they warned. In the circumstances, an EU humanitarian package might serve as a band aid but not much more.

Originally published in The Arab Weekly
https://rashmee.medium.com/even-a-pandemic-cant-stop-the-desperate-flow-of-refugees-to-europe-22c51b1a7c27
['Rashmee Roshan Lall']
2020-04-19 11:33:44.332000+00:00
['Europe', 'Restrictionist', 'Refugees', 'Asylum Seekers', 'Coronavirus']
Title Even pandemic can’t stop desperate flow refugee EuropeContent Even pandemic can’t stop desperate flow refugee Europe circumstance EU humanitarian package might serve band aid much Influx Members German rescue NGO rubber boat operation Libyan coast AFP week since World Health Organisation declared pandemic it’s become clear outbreak disease paralyse national economy flight desperate people across Mediterranean apparent safety else explain fact migrant still travelling Libya towards Europe last week 500 migrant left Libya Europe according International Organisation Migration IOM April 12 Italian government perforce quarantine shipload migrant sea good thing didn’t try send back way thing going right “no state want rescue” migrant according German nonprofit Sea Watch Libya Italy Malta shut border citing pandemic Last week Libya refused entry 280 returning migrant IOM initially said Libyan port appeared closed altogether later UN Refugee Agency’s special envoy central Mediterranean Vincent Cochetel clarified “Libya’s Directorate Combating Illegal Migration seem able prepared take detainees” term deal Italy Libya’s UNbacked government signed 2017 renewed last November Libyan coastguard meant stop migrant boat heading Europe return passenger Libya pandemic seems thrown doubt right refugee onceinacentury pandemic first point note refugee asylumseekers recognised international law Although unprecedented time require unprecedented measure it’s reasonable say migrant sort least entitled humane treatment context shining example Portugal Earlier month Portugal granted full citizenship right June 30 refugee asylumseekers migrant pending application residency certificate allow access healthcare government spokesperson explained decision stand one heartwarming instance pragmatic humanism age coronavirus Elsewhere much exceptional circumstance pandemic justifiably prompted border closure travel restriction it’s clear several country simply using coronavirus outbreak push restrictionist policy pursued March 1 single coronavirus case recorded Hungary suspended right claim asylum country claiming connection disease illegal migration Landlocked Hungary luxury selfisolation afforded geography island nation like Malta April 13 Malta’s foreign minister home minister jointly wrote European Union’s EU High Representative Foreign Affairs Security Policy Josep Borrell demand “imminent substantial” humanitarian assistance Libya deal “the rapidly deteriorating migration situation Mediterranean testing hour” Malta’s argument stark Unless EU launch humanitarian mission Libya least 100 million euro “today tomorrow” may little “incentive” migrant stay put Libya rather making European soil Accordingly Maltese minister wrote EU “boost empowerment Libyan Coast Guard enhancing control border well concretely ensuring Libya represents safe port disembarkation migrants” issue discussed emergency EU meeting second tangential point may harder confront pandemic triggering worst economic downturn since 1930s’ Great Depression poor country face prospect debt crisis political turmoil turn could prompt massive outflow migrant towards rich world especially Europe Kristalina Georgieva managing director International Monetary Fund recently noted “Trouble travel doesn’t stay one place” implication dire conflictscarred country like Libya Maltese letter EU High Representative Borrell minister described Libya “a complex landscape plagued difficulty across conflict health humanitarian migration dimension snowballing moment” COVID19 
crisis added “leaving mark Libya weakening already fragile health system” 650000 people wait “leave Libyan shore Europe” warned circumstance EU humanitarian package might serve band aid much Originally published Arab WeeklyTags Europe Restrictionist Refugees Asylum Seekers Coronavirus
4,226
Problems Deep Learning will probably solve by 2019
It is hyperbole to say deep learning is achieving state-of-the-art results across a range of difficult problem domains. A fact, but also hyperbole. In this post, you will discover recent applications of deep learning. Deep Learning for Forecasting Nuclear Accidents Forecasting is one of the many applications where machine learning techniques have established a firm footing. With deep learning networks getting better with each passing day, the move to entrust these networks with something as sophisticated and incredibly powerful as nuclear plants is in progress. External instabilities like tsunamis and extremist activities like terrorism cannot be forecast with certainty. But what happens within a nuclear plant can be controlled, and should be. Deep learning for diagnosis and prognosis A piece of news published two days ago claims that deep learning can analyze lung cancer histopathology slides in less than 30 seconds. Deep learning to eradicate suicide In a recent NYU study, scientists built a natural language processing AI (basically the same technology that runs Alexa, Assistant, and Siri) that can detect PTSD in veterans with 89 percent accuracy just by listening to audio recordings of the person’s speech. Deep Learning to Save Lives The rapid advances in computer vision, driven by the application of AI starting in 2012, have led to predictions of the imminent demise of radiologists, to be replaced by better diagnosticians — Deep Learning algorithms. These algorithms will help “automate every visual aspect of medicine,” going beyond radiology to pathology, dermatology, dentistry, and to all situations where “a doctor or a nurse are staring at an image and need to make a quick decision.” This “automation” does not mean replacing doctors. Rather, it means the augmentation of their work, providing consistent, accurate, and timely assistance. We need all the doctors we have in the world, and we will need 10X more because of the aging population. AI-Based System to cut process time for abnormal X-Rays Deep Learning can help a Sales Team Thrive Machine Learning, specifically Deep Learning, fills in gaps that human intuition never could. Put to use across a team of eager sales pros, its innate advantages add a layer of intelligence to any crew’s knowledge base. As a tool, deep learning provides insights by spotting and naming patterns in millions of unstructured data points. Deep Learning bridges gaps in the sales pipeline by determining who is most likely to convert to the next stage in the sales funnel. Using Deep Learning, sales leaders can not only identify a good-fit potential customer but also predict the possible deal size, deal cycle, and more. Conclusion There are many cases where AI and deep learning can revolutionize a particular field. The list goes on and on; there is a plethora of applications. By the end of 2019, we will witness a wide variety of problems solved by AI.
https://medium.com/dataseries/problems-deep-learning-will-probably-solve-by-2019-b02233ed9aad
['Surya Remanan']
2019-04-29 19:25:23.217000+00:00
['Deep Learning', 'Artificial Intelligence', 'Data Science']
Title Problems Deep Learning probably solve 2019Content hyperbole say deep learning achieving stateoftheart result across range difficult problem domain fact also hyperbole post discover recent application deep learning Deep Learning Forecasting Nuclear Accidents Forecasting one many application machine learning technique established firm footing deep learning network getting better passing day move entrust network something sophisticated incredibly powerful nuclear plant progress external instability like Tsunamis extremist activity like terrorism cannot forecasted certainty happens within nuclear plant controlled Deep learning diagnosis prognosis piece news published two day ago claim deep learning analyze lung cancer histopathology slide le 30 second Deep learning eradicate suicide recent NYU study wherein scientist built natural language processing AI basically technology run Alexa Assistant Siri detect PTSD veteran 89 percent accuracy listening audio recording person’s speech Deep Learning Save Lives rapid advance computer vision due application AI starting 2012 led prediction imminent demise radiologist replaced better diagnostician — Deep Learning algorithm algorithm help “automate every visual aspect medicine” going beyond radiology pathology dermatology dentistry situation “a doctor nurse staring image need make quick decision” “automation” mean replacing doctor Rather mean augmentation work providing consistent accurate timely assistance need doctor world need 10X aging population AIBased System cut process time abnormal XRays Deep Learning help Sales Team Thrive Machine Learning specifically Deep Learning fill gap human intuition never could Put use across team eager sale pro innate advantage add layer intelligence crew’s knowledge base tool deep learning provides insight spotting naming pattern million unstructured data point Deep Learning bridge gap sale pipeline determining likely convert next stage sale funnel Using Deep Learning sale leader identify goodfit potential customer also predict possible deal size deal cycle insight Conclusion many case AI Deep learning revolutionize particular field list would go plethora application end 2019 witness wide variety problem solved AITags Deep Learning Artificial Intelligence Data Science
4,227
Top 9 Jupyter Notebook extensions
Introduction Jupyter Notebook is probably the most popular tool used by Data Scientists. It allows you to mix code and text and to inspect the output in one document. This is something that is not possible with some other programming IDEs. However, the vanilla version of Jupyter notebooks is not perfect. In this article, we will show you how to make it slightly better by installing some useful extensions.
https://towardsdatascience.com/top-9-jupyter-notebook-extensions-7a5d30269bc8
['Magdalena Konkiewicz']
2020-06-24 19:03:40.847000+00:00
['Artificial Intelligence', 'Machine Learning', 'Technology', 'Data Science', 'Programming']
Title Top 9 Jupyter Notebook extensionsContent Introduction Jupyter Notebook probably popular tool used Data Scientists allows mixing code text inspecting output one document something possible programming IDEs However vanilla version Jupyter notebook perfect article show make slightly better installing useful extensionsTags Artificial Intelligence Machine Learning Technology Data Science Programming
4,228
8 Life Lessons I’ve Learned at 40-Something That I Wish I’d Known at 20-Something
8 Life Lessons I’ve Learned at 40-Something That I Wish I’d Known at 20-Something Some of the things that come with age are great. Awareness is one of them. Photo: Anna Pritchard/Unsplash My 40s are a lot different than I thought they’d be when I was still in my 20s. On the one hand, I have a much deeper understanding of why my dad liked naps so much when I was a kid. I’ve learned not to ever fall asleep in an awkward position if I want to be able to walk the next day. I can’t just eat whatever I want anymore if I don’t want to suffer the horrible consequences either. However, I’m also a lot more aware and secure in myself than I thought I’d be at this age. I’m calmer. I don’t sweat the small stuff nearly as much. And I’ve learned a thing or three about life that I wish I’d understood a lot earlier on. Here are some of the more important ones. Do yourself a favor and get this stuff straight now so you don’t have to do what I did and learn the hard way. 1. There’s no such thing as too late or too old. When I was younger, I was super concerned about whether or not I was keeping up with other people my age when it came to the big milestones in life. I was never what you’d call an overachiever, so I didn’t care whether I was the first of my friends to get married or land my dream job. I just knew I wasn’t cool with being the last. That meant I jumped headfirst into things that deserved a lot more thought and consideration. I rushed into marriage in my mid-20s and wound up divorced by 29. I pushed myself to take on huge responsibilities I wasn’t ready for way too soon in life and I wound up with bad credit it took me my entire 30s to fix. Now I couldn’t even tell you why I did those things or what the big rush even was. Don’t waste your 20s rushing to become your parents. You’ll look back one day and regret simply being young when you had the chance to be. There’s no set age by which you have to find your ultimate bliss in life, own a home, choose a life partner, or anything else major. For some people — myself included — that ideal time is a little later in life. For others, it’s never, because they get older, gain some perspective, and realize they don’t even want those things. So don’t waste your 20s rushing to become your parents. You’ll look back one day and regret simply being young when you had the chance to be. 2. Who you were as a child is more important than you think. One of the dumbest things I’ve ever been led to believe was that children don’t know themselves — that I didn’t know myself. It eventually turned out that I knew myself better as a child than I have at any other point in my life. It’s just that it’s so darned easy to lose sight of yourself once society starts telling you how wrong you are for liking what you like and being whoever it is that you are. For instance, I knew I wanted to make my life about creating things when I was a kid, as well as that a typical 9 to 5 job probably wasn’t for me. My parents, on the other hand, had their heart set on my working in animal care for some reason and eventually managed to convince me that’s what I wanted too. They did such a good job of it that when I eventually found myself working ridiculous hours as a vet tech at a local animal clinic, I couldn’t understand why I hated it so much. These days, I’m a full-time writer who works out of her home according to a flexible schedule of my choosing — a much better fit. 
The thing is it’s fine to want to make your family proud, but if their dreams for you differ from your dreams for yourself, you’ll be a lot happier if you listen to yourself. No one knows you as well as you know yourself and you knew yourself without limits or shame when you were a kid. Hold onto the things you loved and longed for then. They turn out to be pretty important, especially when you inevitably find yourself wondering what to do with your life next. Chances are the answer is connected to something that made you come alive as a child. Photo by Yarden on Unsplash 3. It’s better to make memories than collect things. My mother has this huge beef with people who spend money on stuff like concert tickets, vacations, or special dinners at restaurants. She reasoned that once you’ve gone to that concert, it’s over and you have nothing tangible to show for it, meaning the tickets were a huge waste of money. If you had to spend money on fun, you bought things instead… objects. Unlike the concert tickets, you’ll have the things you buy potentially forever, especially if you take care of them. That’s the approach to disposable income and leisure time that I grew up with and lived by for years. And as with that vet tech job I never truly wanted, I couldn’t figure out why all this crap I was buying wasn’t making me as happy as it was supposed to. Part of it had to do with the hard truth that most “stuff” becomes pretty useless sooner or later. If it doesn’t break or wear out, it becomes obsolete — like the massive cassette collection that was my world when I was in my teens. Same for all the knickknacks I spent my 20s collecting. “Stuff” becomes pretty useless sooner or later. If it doesn’t break or wear out, it becomes obsolete. Memories are a different story though. Most of the physical objects I spent so much money on when I was younger hit a landfill years ago. But I still remember the concerts I went to, the vacations I took, and the festivals I attended like they were yesterday. Those memories and the way I felt when I was creating them are as shiny and precious to me today as they were back then. So are the ways some of those experiences changed me as a person. These days, I never think back on the past and regret not buying some trendy piece of clothing that I probably wouldn’t even have worn or yet another statue to sit on my bookshelf collecting dust. I think about that trip to Romania I had the opportunity to take in college, but ultimately passed on. I think about the time I went to Mexico on a cruise and let my stick-in-the-mud ex talk me out of riding a burro up a dirt trail while I was there. It makes me sad that I don’t have those memories to look back on, especially since I may never have those same opportunities again. But the good news is I learned to just go ahead and do the things I want to do in life, even if it means doing them alone. The memories and cool stories last a lifetime. 4. The little things are the big things. Speaking of memories, I’ve learned that it’s not always obvious when you’re creating one that’s going to mean a lot to you one day. Everyone knows their wedding day or the day their child is born is a big deal and that they’ll remember that for the rest of their life. Some of my favorite memories are the ones that kind of snuck up on me at the time though. 
I’m talking about the time my husband and I drove out to our favorite barbecue spot on Memorial Day one year and spent the whole day there, even though it got super cold and started to snow unexpectedly. I mean the day I was walking by the beach with my friends as a teenager in the fog, saw a seal, and thought for a split second that it was a mermaid. There’s the time I signed up for an online film appreciation class on a whim and realized I still love learning as an adult. And the week a random frog lived underneath my bedroom window and made me happy every night with all his little frog noises. Those are some of the moments and occurrences that turned out to mean the most to me over the years. I couldn’t even tell you why, but there’s something magical about them — something that suggests they’re what life is truly all about. They were little things that became big because they had meaning, especially if they were also shared with someone I loved. Photo by Bruno Nascimento on Unsplash 5. Taking care of yourself physically is every bit as important as people tell you it is. Ignore that piece of advice and you’ll eventually wish you hadn’t, I assure you. I’m not sure how things are for young people these days, but I wasn’t taught about fitness in much detail when I was young. Sure, I was taught it was important, but I was never properly schooled on why or told what exactly would happen to you if you chose not to bother. I certainly wasn’t given any practical advice on how to turn fitness and proper self-care into permanent habits. Luckily for me, years of working on my feet and having friends who preferred physical pastimes to simply sitting around all the time meant I spent most of my life “accidentally fit”. The problem came when I got older, had more choices, and started making a bunch that meant I wasn’t very active anymore. That quickly led to the swift and blinding development of numerous health problems and this horrible feeling that I had no control over my life anymore. Get so used to taking care of yourself that doing otherwise feels unbearably weird. These days, I’m doing much better in that department. I’ve gone out of my way to educate myself on how to take care of my body, as well as to establish a healthy routine that’s realistic for me. The “realistic for you” part is critical because, at the end of the day, it doesn’t matter how effective a given fitness regimen is. If you hate it with the fire of a thousand suns, you’re not going to stick with it and you can’t benefit from exercise you’re not doing. Don’t do what I did and wait until you’re 40 and your metabolism is slowing down to get your act together. Do it while you’re still young and stick with it. Find a way to love being active and to make it a daily part of your routine. Get so used to taking care of yourself that doing otherwise feels unbearably weird. You’ll be glad you did one day, because seriously. If I could change just one thing about how I ran my life when I was younger, this would be the thing. (Here’s a piece I wrote all about that in particular, should you be interested.) 6. The best time to make your dreams come true is now. Not in 10 years when you’ve figured out what your one true career path is. Not in a few months when you’ve finally lost that stubborn 20 pounds. Not tomorrow when the weather’s better and not “someday” when your life’s finally the big, perfect bowl of peach cobbler you hope it eventually will be. It’s now… today! 
The unshakeable optimism that comes with being young is amazing and I remember it fondly. I figured my whole life was still ahead of me and took it for granted that everything would simply work out in my favor one day all by itself, so why force things? I wanted to travel, but I thought the experience would be better “someday” when I had tons of money and a perfect job that didn’t feel as soul-sucking as my current one did. I wanted to speak multiple languages, but I wanted to learn in the perfect house I thought I’d own someday while sitting in the perfect combination office-study I also planned on having. I wanted to teach myself how to do genuinely awesome makeup, but I wanted a flawless life and a circle of brag-worthy friends to show it off to first. Well, guess what. That perfect life never materializes because it doesn’t exist. Even if you’re crazy successful one day, you’ll forever have constraints on your time or your resources. There will always be something going on that stops circumstances from being ideal, so start working on the things you want to do, be, and experience now. Then you can spend middle age building on what you’ve already learned, not starting from scratch. Photo by Henry Hustava on Unsplash 7. Nobody’s coming to save you from yourself or your life. Like a lot of very shy young girls, I spent a lot more time reading books and watching movies than I did having real-life experiences and meaningful interactions with other people. That gave me the impression that my life was eventually going to play out like the stories I loved so much and that I wouldn’t have to do anything special to help it happen. My life was legitimately hard for me when I was young for lots of reasons, but it never occurred to me to try to rise above it so I’d be able to build myself a better one eventually. Instead, I fantasized about the day someone else would love me enough to do it for me. I thought one day my emotionally unavailable parents would suddenly become different people and want to help me out in life the way my friends’ parents helped them. Or that whenever that perfect partner finally materialized he’d take care of me and provide for me. That way I’d never have to step out of my comfort zone, try anything scary or new, and figure out life for myself. If you do luck out one day and meet someone who’d love to give you an awesome life just because you’re you, trust that they’re going to expect you to pitch in in one way or another. People get tired of being the only horse on the team who’s actively working to pull the wagon. Well, life doesn’t work like that, so if you think this way, it’s to your benefit to get it sorted now while you’re still young. “Princess-in-a-tower disease” isn’t a good look on someone who’s in their 30s and it’s an even worse one on someone middle-aged or older. Don’t be fooled either. You don’t have to have been a young girl who enjoyed Disney princess movies a little bit too much to have this issue, so it’s worth asking yourself some questions. Are you an aspiring creative who’s banking so hard on “being discovered” one day that you’re not actively seeking out and seizing opportunities? Are you coasting through life because you assume you’ll eventually inherit money or property when your parents croak? Are you a parent who thinks your kids are going to grow up one day and undo all your mistakes for you? If so, it’s time to grow up. No one is out there chomping at the bit to save you from your apathy and lack of gumption. 
And if you do luck out one day and meet someone who’d love to give you an awesome life just because you’re you, trust that they’re going to expect you to pitch in and help on one level or another. People get tired of being the only horse on the team who’s actively working to pull the wagon. Always do your share and pull your weight, even if no one asked you to. 8. No one is entitled to a relationship with you (and vice versa). I’ve touched here and there on the fact that my home life was pretty dysfunctional when I was growing up. It was that low-key type of dysfunctional that sneaks up on you though. No one hit me or put lit cigarettes out on my arms, but there was a lot of emotional abuse and gaslighting going on. There still is. Eventually, I concluded that it was better to end my relationships with some of the most toxic people in my family and put up extremely strict boundaries with others. I’ve made similar decisions with other people in the past, especially ex-partners and false friends who took so much more than they gave. Learning to say no to harmful relationships with toxic people changed my life overnight. Healthy relationships that are two-way streets are much too good to miss out on, but you need to make room for them in your life. No one is entitled to a relationship with you for any reason, especially if they’re unwilling to treat you with basic human decency — not even family. People who care about you don’t kick you while you’re down or try to destroy your joy in the things you love. They don’t tell you you’re worthless, mock your appearance, and delight in being cruel to you. If you have people like this in your life, you are absolutely within your rights to cut them off, protect yourself, and move on. Even if they’re family. People also have the right to decide the same when it comes to you, so learning how to gracefully let others exit your life is also worthwhile. Healthy relationships that are two-way streets are much too good to miss out on, but you need to make room for them in your life. There won’t be any if you’re clinging to people who don’t value their relationships with you to the extent that they should. I’m not a huge believer in regret as far as life goes. I do believe strongly in learning as much as you can from your experiences. That’s a process that won’t ever stop for me, as I’ve learned to enjoy the challenge of growing and evolving over the years. Whatever age you are now, please do the same. It keeps life meaningful, colorful, and worthwhile. Shannon Hilson is a full-time professional writer from Monterey, California. She lives a quiet, creative life with her husband who is a movie producer and composer. When she’s not either writing or reading, she loves cooking and studying foreign languages.
https://medium.com/the-post-grad-survival-guide/8-life-lessons-ive-learned-at-40-something-that-i-wish-i-d-known-at-20-something-d7d1b0617eff
['Shannon Hilson']
2020-08-27 00:30:59.701000+00:00
['Life Lessons', 'Self Love', 'Aging', 'Self Improvement', 'Self-awareness']
Title 8 Life Lessons I’ve Learned 40Something Wish I’d Known 20SomethingContent 8 Life Lessons I’ve Learned 40Something Wish I’d Known 20Something thing come age great Awareness one Photo Anna PritchardUnsplash 40 lot different thought they’d still 20 one hand much deeper understanding dad liked nap much kid I’ve learned ever fall asleep awkward position want able walk next day can’t eat whatever want anymore don’t want suffer horrible consequence either However I’m also lot aware secure thought I’d age I’m calmer don’t sweat small stuff nearly much I’ve learned thing three life wish I’d understood lot earlier important one favor get stuff straight don’t learn hard way 1 There’s thing late old younger super concerned whether keeping people age came big milestone life never you’d call overachiever didn’t care whether first friend get married land dream job knew wasn’t cool last meant jumped headfirst thing deserved lot thought consideration rushed marriage mid20s wound divorced 29 pushed take huge responsibility wasn’t ready way soon life wound bad credit took entire 30 fix couldn’t even tell thing big rush even Don’t waste 20 rushing become parent You’ll look back one day regret simply young chance There’s set age find ultimate bliss life home choose life partner anything else major people — included — ideal time little later life others it’s never get older gain perspective realize don’t even want thing don’t waste 20 rushing become parent You’ll look back one day regret simply young chance 2 child important think One dumbest thing I’ve ever led believe child don’t know — didn’t know eventually turned knew better child point life It’s it’s darned easy lose sight society start telling wrong liking like whoever instance knew wanted make life creating thing kid well typical 9 5 job probably wasn’t parent hand heart set working animal care reason eventually managed convince that’s wanted good job eventually found working ridiculous hour vet tech local animal clinic couldn’t understand hated much day I’m fulltime writer work home according flexible schedule choosing — much better fit thing it’s fine want make family proud dream differ dream you’ll lot happier listen one know well know knew without limit shame kid Hold onto thing loved longed turn pretty important especially inevitably find wondering life next Chances answer connected something made come alive child Photo Yarden Unsplash 3 It’s better make memory collect thing mother huge beef people spend money stuff like concert ticket vacation special dinner restaurant reasoned you’ve gone concert it’s nothing tangible show meaning ticket huge waste money spend money fun bought thing instead… object Unlike concert ticket you’ll thing buy potentially forever especially take care That’s approach disposable income leisure time grew lived year vet tech job never truly wanted couldn’t figure crap buying wasn’t making happy supposed Part hard truth “stuff” becomes pretty useless sooner later doesn’t break wear becomes obsolete — like massive cassette collection world teen knickknack spent 20 collecting “Stuff” becomes pretty useless sooner later doesn’t break wear becomes obsolete Memories different story though physical object spent much money younger hit landfill year ago still remember concert went vacation took festival attended like yesterday memory way felt creating shiny precious today back way experience changed person day never think back past regret buying trendy piece clothing probably wouldn’t even worn yet another statue sit bookshelf 
collecting dust think trip Romania opportunity take college ultimately passed think time went Mexico cruise let stickinthemud ex talk riding burro dirt trail make sad don’t memory look back especially since may never opportunity good news learned go ahead thing want life even mean alone memory cool story last lifetime 4 little thing big thing Speaking memory I’ve learned it’s always obvious you’re creating one that’s going mean lot one day Everyone know wedding day day child born big deal they’ll remember rest life favorite memory one kind snuck time though I’m talking time husband drove favorite barbecue spot Memorial Day one year spent whole day even though got super cold started snow unexpectedly mean day walking beach friend teenager fog saw seal thought split second mermaid There’s time signed online film appreciation class whim realized still love learning adult week random frog lived underneath bedroom window made happy every night little frog noise moment occurrence turned mean year couldn’t even tell there’s something magical — something suggests they’re life truly little thing became big meaning especially also shared someone loved Photo Bruno Nascimento Unsplash 5 Taking care physically every bit important people tell Ignore piece advice you’ll eventually wish hadn’t assure I’m sure thing young people day wasn’t taught fitness much detail young Sure taught important never properly schooled told exactly would happen chose bother certainly wasn’t given practical advice turn fitness proper selfcare permanent habit Luckily year working foot friend preferred physical pastime simply sitting around time meant spent life “accidentally fit” problem came got older choice started making bunch meant wasn’t active anymore quickly led swift blinding development numerous health problem horrible feeling control life anymore Get used taking care otherwise feel unbearably weird day I’m much better department I’ve gone way educate take care body well establish healthy routine that’s realistic “realistic you” part critical end day doesn’t matter effective given fitness regimen hate fire thousand sun you’re going stick can’t benefit exercise you’re Don’t wait you’re 40 metabolism slowing get act together you’re still young stick Find way love active make daily part routine Get used taking care otherwise feel unbearably weird You’ll glad one day seriously could change one thing ran life younger would thing Here’s piece wrote particular interested 6 best time make dream come true 10 year you’ve figured one true career path month you’ve finally lost stubborn 20 pound tomorrow weather’s better “someday” life’s finally big perfect bowl peach cobbler hope eventually It’s now… today unshakeable optimism come young amazing remember fondly figured whole life still ahead took granted everything would simply work favor one day force thing wanted travel thought experience would better “someday” ton money perfect job didn’t feel soulsucking current one wanted speak multiple language wanted learn perfect house thought I’d someday sitting perfect combination officestudy also planned wanted teach genuinely awesome makeup wanted flawless life circle bragworthy friend show first Well guess perfect life never materializes doesn’t exist Even you’re crazy successful one day you’ll forever constraint time resource always something going stop circumstance ideal start working thing want experience spend middle age building you’ve already learned starting scratch Photo Henry Hustava Unsplash 7 Nobody’s coming save life Like 
lot shy young girl spent lot time reading book watching movie reallife experience meaningful interaction people gave impression life eventually going play like story loved much wouldn’t anything special help happen life legitimately hard young lot reason never occurred try rise I’d able build better one eventually Instead fantasized day someone else would love enough thought one day emotionally unavailable parent would suddenly become different people want help life way friends’ parent helped whenever perfect partner finally materialized he’d take care provide way I’d never step comfort zone try anything scary new figure life luck one day meet someone who’d love give awesome life you’re trust they’re going expect pitch one way another People get tired horse team who’s actively working pull wagon Well life doesn’t work like think way it’s benefit get sorted you’re still young “Princessinatower disease” isn’t good look someone who’s 30 it’s even worse one someone middleaged older Don’t fooled either don’t young girl enjoyed Disney princess movie little bit much issue it’s worth asking question aspiring creative who’s banking hard “being discovered” one day you’re actively seeking seizing opportunity coasting life assume you’ll eventually inherit money property parent croak parent think kid going grow one day undo mistake it’s time grow one chomping bit save apathy lack gumption luck one day meet someone who’d love give awesome life you’re trust they’re going expect pitch help one level another People get tired horse team who’s actively working pull wagon Always share pull weight even one asked 8 one entitled relationship vice versa I’ve touched fact home life pretty dysfunctional growing lowkey type dysfunctional sneak though one hit put lit cigarette arm lot emotional abuse gaslighting going still Eventually concluded better end relationship toxic people family put extremely strict boundary others I’ve made similar decision people past especially expartners false friend took much gave Learning say harmful relationship toxic people changed life overnight Healthy relationship twoway street much good miss need make room life one entitled relationship reason especially they’re unwilling treat basic human decency — even family People care don’t kick you’re try destroy joy thing love don’t tell you’re worthless mock appearance delight cruel people like life absolutely within right cut protect move Even they’re family People also right decide come learning gracefully let others exit life also worthwhile Healthy relationship twoway street much good miss need make room life won’t you’re clinging people don’t value relationship extent I’m huge believer regret far life go believe strongly learning much experience That’s process won’t ever stop I’ve learned enjoy challenge growing evolving year Whatever age please keep life meaningful colorful worthwhile Shannon Hilson fulltime professional writer Monterey California life quiet creative life husband movie producer composer she’s either writing reading love cooking studying foreign languagesTags Life Lessons Self Love Aging Self Improvement Selfawareness
4,229
NASA discovers water on surface of Earth’s moon
NASA discovers water on surface of Earth’s moon Scientia News | Alliah Antig Photo courtesy of NASA/Daniel Rutter The National Aeronautics and Space Administration (NASA) confirmed in a press release the presence of water on the southern hemisphere of the moon. The findings by NASA’s Stratospheric Observatory for Infrared Astronomy (SOFIA) were published in the latest issue of Nature Astronomy. In previous explorations, hydrogen was the only element detected in the Clavius crater, one of the largest craters visible from Earth. While they detected hydration on the lunar surface, the researchers could not distinguish at that time whether it was water or hydroxyl compounds. SOFIA, with the help of its Faint Object Infrared Camera for the SOFIA Telescope, was able to detect a concentration of 100 to 412 parts per million of molecular water in the Clavius crater, an indication of the presence of water on the sunlit surface of the moon. The researchers also concluded that the discovery of water in a small lunar soil region “is a result of local geology and is probably not a global phenomenon.” “Now we know it is there. This discovery challenges our understanding of the lunar surface and raises intriguing questions about resources relevant for deep space exploration,” Paul Hertz, director of the Astrophysics Division in the Science Mission Directorate at NASA, said. The discovery raises questions about how water persists in such a situation, especially on a harsh, airless lunar surface. “Without a thick atmosphere, water on the sunlit lunar surface should just be lost to space. Yet somehow we’re seeing it. Something is generating the water, and something must be trapping it there,” Casey Honniball, one of the authors of the study, stated. NASA theorized that micrometeorites raining down on the lunar surface brought small amounts of water to the moon’s surface upon impact, with the impacts possibly transforming hydroxyl into water. The observations are now being used to formulate a systematic approach to learning more about the production, storage, and transportation of water across the moon. Creating resource maps of the moon will be added to the tasks of NASA’s Volatiles Investigating Polar Exploration Rover, which will support future human explorations in space. Jacob Bleacher, chief exploration scientist for NASA’s Human Exploration and Operations Mission Directorate, sees this opening more opportunities for new scientific discoveries. “If we can use the resources at the moon, then we can carry less water and more equipment to help enable new scientific discoveries,” Bleacher added, pointing to the possibility of using lunar resources to lighten the equipment load that explorers must carry. NASA aims to learn more about the causes and effects of the presence of water through the Artemis program. Its goal of establishing a sustainable human presence by the end of the decade can now be advanced by gathering relevant information before sending the first woman and the next man to the lunar surface in 2024.
https://medium.com/up-scientia/nasa-discovers-water-on-surface-of-earths-moon-ab1155bfc83a
['Alliah Antig']
2020-10-28 10:27:46.218000+00:00
['News', 'Astronomy', 'Science', 'Moon', 'NASA']
Title NASA discovers water surface Earth’s moonContent NASA discovers water surface Earth’s moon Scientia Follow Oct 27 · 3 min read News Alliah Antig Photo courtesy NASADaniel Rutter National Aeronautics Space Administration NASA confirmed press release presence water southern hemisphere moon finding NASA’s Stratospheric Observatory Infrared Astronomy SOFIA published latest issue Nature Astronomy previous exploration hydrogen element present Clavius crater one largest crater visible Earth detected hydration lunar surface moon researcher could distinguish time water hydroxyl compound SOFIA help Faint Object Infrared Camera SOFIA Telescope able discover concentration 100 412 part per million molecular water Clavius crater indication presence water sunlit surface moon researcher also concluded discovery water small lunar soil region “is result local geology probably global phenomenon” “Now know discovery challenge understanding lunar surface raise intriguing question resource relevant deep space exploration” Paul Hertz director Astrophysics Division Science Mission Directorate NASA said discovery gave rise water persists different situation especially harsh airless lunar surface “Without thick atmosphere water sunlit lunar surface lost space Yet somehow we’re seeing Something generating water something must trapping there” Casey Honniball one author study stated NASA theorized raining micrometeorite lunar surface brought small amount water moon’s surface upon impact resulting transformation hydroxyl water observation made used formulate systemic approach learn production storage transportation water across moon Resource map moon added task NASA’s Volatiles Investigating Polar Exploration Rover used future human exploration space Jacob Bleacher chief exploration scientist NASA’s Human Exploration Operations Mission Directorate see open opportunity new scientific discovery “If use resource moon carry le water equipment help enable new scientific discoveries” Bleacher added looking likelihood utilizing resource found moon minimize load equipment needed carry exploration NASA aim learn cause effect presence water Artemis program purpose establishing sustainable human presence end decade achieved gathering relevant information advance sending first woman next man lunar surface 2024 Tags News Astronomy Science Moon NASA
4,230
Why Remote Learning Was a Big Hot Mess
This September, we started the 2020–21 school year fully remote and it only took three weeks for our family to break down and send the kids back to school on a hybrid schedule. I served on the re-opening committees at school, made suggestions, asked questions, and offered to help — but here we are — worse off. Being an involved parent is an understatement in 2020. Even after helping plan for the re-opening, from brainstorming social-emotional learning ideas to inclusion, keeping on top of the kids each day while working remotely, and explaining to our teenager how to log into Zoom securely, nothing could have prepared us for this year. And I’m left wondering — Why are our kids held to impossible standards during a worldwide pandemic? It started on the first day of school when our son’s teacher posted a Zoom link to Google Classroom. He introduced himself while looking at the kids who attended in person, read them the book, and wrote on the board while we watched, like peeping toms through a keyhole of sorts (Zoom). He muted his mic and then walked away and didn’t return. The next day, the Google Classroom remained unchanged, with no new links, no classwork, nothing. The same happened on Monday. Photo Credit: Canva.com In another class, his teacher hit the mute button while talking and continued to talk for over twenty-five minutes while a dozen students sat there staring patiently at the screen. That same teacher has given him zeros for not completing assignments with vague instructions, even though he has no access to the textbook online. And I’m not pointing the finger — I’m saying that we are all stuck in miscommunication limbo waiting for this cycle to end. But it isn’t going to end soon. Now that the flood gates of remote possibilities have opened, remote learning in some capacity is a permanent part of our lives. A few mismarked absences and zeros later, our son fell apart. He fell apart before me. He said he felt invisible after more e-mails about missed Zoom calls and classwork came. Eager to help him, I logged into his classes and saw for myself — vague assignment instructions and Zoom links buried under the announcements stream in unmanaged Google Classrooms. Teachers responded to his private comments with e-mails that went unread. It was like going on a scavenger hunt for information, and none of us wanted to play anymore. Remote-only learning is a big hot mess because it requires parents and caregivers to be more present than any of us know how to be, or have time to be. We’re also asking teachers to deliver instruction to separate populations of students with completely different needs at the same time. On top of everything, we are asking kids to act like adults. Our teenager has been silently struggling, not asking for help, upset about miscommunications with teachers he has never met in a school he has never actually stepped foot in, all while accumulating absences in the empty shells of his virtual classrooms. And it is all his fault, allegedly. Why is it assumed that teenagers are mature enough to manage their own remote learning? Why are they left to fend for themselves in this sink-or-swim environment? This isn’t a job and they aren’t at work. But their parents probably are. Not every student has someone at home to help them during the day. What they need are clear instructions for everything from logging into meetings to assignment prompts. They need broken-down grading rubrics and reminders. They need grace periods and brain breaks. They need compassion.
https://medium.com/age-of-awareness/why-remote-learning-was-a-big-hot-mess-12fd89f6b161
['Laura J. Murphy']
2020-10-27 00:44:32.223000+00:00
['Education', 'Remote Learning', 'Mental Health', 'Parenting', 'Education Reform']
Title Remote Learning Big Hot MessContent September started 2020–21 school year fully remote took three week family breakdown send kid back school hybrid schedule served reopening committee school made suggestion asked question offered help — — worse involved parent understatement 2020 Aside helping plan reopening brainstorming socialemotional learning idea inclusion keeping top kid day working remotely explaining teenager log Zoom securely nothing could prepared u year I’m left wondering — kid held impossible standard worldwide pandemic started first day school son’s teacher posted Zoom link google classroom introduced looking kid attended person read book wrote board watched like peeping tom keyhole sort Zoom muted mic walked away didn’t return next day google classroom remained unchanged new link classwork nothing happened Monday Photo Credit Canvacom another class teacher hit mute button talking continued talk twentyfive minute dozen student sat staring patiently screen teacher given zero completing work vague instruction access textbook online I’m pointing finger — I’m saying stuck miscommunication limbo waiting cycle end isn’t going end soon flood gate remote possibility opened remote learning capacity permanent part life mismarked absence zero later son fell apart fell apart said felt invisible email missed Zoom call classwork came Eager help logged class saw — vague assignment instruction Zoom link buried announcement stream unmanaged google classroom Teachers responded private comment email went unread like going scavenger hunt information none u wanted play anymore Remote learning big hot mess requires parent caregiver present u know time We’re also asking teacher deliver instruction separate population student completely different need time top everything asking kid act like adult teenager silently struggling asking help upset miscommunications teacher never met school never actually stepped foot accumulating absence empty shell virtual classroom fault allegedly assumed teenager mature enough manage remote learning left fend sink swim environment isn’t job aren’t work parent probably every student someone home help day need clear instruction everything logging meeting assignment prompt need brokendown grading rubric reminder need grace period brain break need compassionTags Education Remote Learning Mental Health Parenting Education Reform
4,231
5 Tips for Composing Event Handler Functions in React
5. Avoid Referencing and Depending on the State Inside Event Handlers (Closures) This is a really dangerous thing to do. If done right, you should have no problems dealing with state in callback handlers. But if you slip at one point, it introduces silent bugs that are hard to debug, and that’s when the consequences begin to eat up extra time out of your day. If you’re doing something like this … … you should probably revisit these handlers and check if you’re actually getting the right results. If our input has a value of 23 and we type another 3 on the keyboard, here’s what the results say: If you understand the execution context in JavaScript, this makes no sense because the call to setValue has already finished executing before moving on to the next line. Well, that’s actually still right. There’s nothing JavaScript is doing that’s wrong right now. It’s actually React doing its thing. For a full explanation of the rendering process, you can head over to their documentation. But, in short, whenever React enters a new render phase, it takes a snapshot of everything that’s present specific to that render phase. It’s a phase in which React essentially creates a tree of React elements, which represents the UI at that point in time. By definition, the call to setValue does cause a rerender, but that render phase is at a future point in time. This is why the state value is still 23 after setValue has finished executing: the execution at that point in time is specific to that render, sort of like each render having its own little world to live in. This is what the concept of execution context looks like in JavaScript: This is React’s render phase in our examples (you can think of this as React having its own execution context): With that said, let’s take a look at our call to setCollapsed again:
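To make the setValue scenario described above concrete, here is a minimal sketch; the component shape and the handler name handleChange are illustrative assumptions rather than the original snippet:

```jsx
import React, { useState } from 'react'

function ValueInput() {
  // Assume the input currently holds 23
  const [value, setValue] = useState('23')

  const handleChange = (event) => {
    setValue(event.target.value)
    // Typing another 3 makes event.target.value '233',
    // yet this logs '23': `value` is the snapshot captured
    // by this render's closure, and setValue only schedules
    // a rerender for a future point in time.
    console.log(value)
  }

  return <input value={value} onChange={handleChange} />
}

export default ValueInput
```

As a general React note, when the next state genuinely depends on the current one, the functional form setValue((previous) => …) reads the latest value rather than the stale snapshot, which sidesteps this class of bug.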
https://medium.com/better-programming/5-tips-for-composing-event-handler-functions-in-react-479553968585
[]
2020-05-20 14:22:43.794000+00:00
['JavaScript', 'React', 'Reactjs', 'Nodejs', 'Programming']
Title 5 Tips Composing Event Handler Functions ReactContent 5 Avoid Referencing Depending State Inside Event Handlers Closures really dangerous thing done right problem dealing state callback handler slip one point introduces silent bug hard debug that’s consequence begin engulf extra time day you’re something like … … probably revisit handler check you’re actually getting right result input value 23 type another 3 keyboard here’s result say understand execution context JavaScript make sense call setValue already finished executing moving onto next line Well that’s actually still right There’s nothing JavaScript that’s wrong right It’s actually React thing full explanation rendering process head documentation short whenever React enters new render phase take snapshot everything that’s present specific render phase It’s phase React essentially creates tree React element represents tree point time definition call setValue cause rerender render phase future point time state value still 23 setValue finished executing execution point time specific render sort like little world live concept execution context look like JavaScript React’s render phase example think React execution context said let’s take look call setCollapsed againTags JavaScript React Reactjs Nodejs Programming
4,232
I spent most of the last three decades scribbling about traveling, business and dining.
I spent most of the last three decades scribbling about traveling, business and dining. Reporting on the tastes, flavors, ideas and sights that I deemed worthy of being documented in ink. In that journey, I have become an expert at expressing other people’s passion. Not my own. When I decided my voice was worthy and ready, I wrote a sad book about my sister dying. Then I wrote an equally dreary book about my son’s drug addiction. Again, I found myself the presenter of other people’s stories. I kept repeating the mantra: “My life has been unusual and full of adventure! It is time to tell my story.” In my novel queue I have my circus story (I left law school and became a trapeze artist for 5 years), a couple of screenplays about raising 6 foster kids, a teleplay based on a weekly murder and an HOA, and a grand adventure after my uncle died and left me 5,000 animals. Many words tapped, thesaurus consulted and wine gulped, but I never sent my babies into the wild. I kept them in folders and desk drawers waiting for that magical moment when I felt worthy/ready/brave/done. For my maiden voyage, it made sense to combine sex and food for a summer read, and Consumed was born. In this building of prose, I accidentally acquired a muse. In searching sexual and dining matters, our profiles found each other and became tangled like a gold chain left in the jewelry box. The muse bounced and enhanced my words. I did the same for them. I loved it because the act of writing can get quite lonely and I’m a social beast. Also there was something in the anonymity that brought out an honest bravery I didn’t know I possessed. In this unusual pairing of voices, we volley sexual and life situations back and forth. The result is rough, funny and sometimes extremely sexy. This has become a highly addictive practice. It’s a new relationship bound only by prose and honesty. There are no limits or judgments. Writing is a lonely road, and the opportunity to travel it with a compadre made it palatable. Through practice, it provided the courage I lacked. We have never met. I don’t know the muse’s name. They do not know mine. I have been given permission to share using a pen name. This strange liaison is one of the finest I’ve ever experienced. We chat about everything and nothing. It’s honest and profound, and most importantly, it facilitated my voice. I write for them. I believe everyone should have a muse in their life. Rarely does this happen. I thought it right to share. On this blog/space/journal I will share our daily contemplations. I have promised to do this daily; the muse has not. You may receive my strange brain fevers, or a tapestry of elegance, a dance of two writers. See if you can tell where the muse’s voice ends and mine begins. I barely can anymore.
https://medium.com/muse-writtings/i-spent-most-of-the-last-three-decades-scribbling-about-traveling-business-and-dining-5a696d8a2838
['Teri Bayus']
2016-07-11 04:13:17.852000+00:00
['Muse', 'Sex', 'Writing', 'Affairs', 'Writer']
Title spent last three decade scribbling traveling business diningContent spent last three decade scribbling traveling business dining Reporting taste flavor idea sight deemed worthy documented ink journey become expert expressing people’s passion decided voice worthy ready wrote sad book sister dying wrote equally dreary book son’s drug addition found presenter people’s story would mantra — “My life unusual full adventure time tell story” novel queue circus story left law school became trapeze artist 5 year couple screenplay raising 6 foster kid teleplay based weekly murder HOA grand adventure uncle died left 5000 animal Many word tapped thesaurus consulted wine gulped never sent baby wild kept folder desk drawer waiting magical moment felt worthyreadybravedone maiden voyage made sense combine sex food summer read Consumed born building prose accidentally acquired muse searching sexual dinning matter profile found became tangled like gold chain left jewelry box muse bounced enhanced word loved act writing get quite lonely I’m social beast Also something anonymity brought honest bravery didn’t know possessed unusual pairing voice volley sexual life situation back fourth result rough funny sometimes extremely sexy become highly addictive practice It’s new relationship bound prose honesty limit judgment Writing lonely road opportunity travel compadre made palpable Thru practice provided courage lacked never met don’t know mus name know mine given permission share using pen name strange liaison one finest I’ve ever experienced chat everything nothing It’s honest profound importantly facilitated voice write believe everyone muse life Rarely happen thought right share blogspacejournal share daily contemplation promised daily muse may recieve strange brain fever tapestry elegance dance two writer See tell muse voice end mine begin barely anymoreTags Muse Sex Writing Affairs Writer
4,233
Wintertime
Wintertime
A poem on the nature of time in the cold season
Photo by Kristen Munk on Unsplash

I hear the mountain of this magical landscape
enchanted by souls who were wronged in the past
The land is still loyal and it sings to the spirits
Shuddering peaks groan and creak as they exhale fall’s last gasp
They contract and they cool like the days of the season
They shrink like the years before me on my path
I’ll turn thirty-one with the rise of the sun
and I tense like the rocks at the prospect of that
The moments are cold and they turn to ice crystals
frigid seconds set in stones that skip forward through time
But in the heat of the fire they’re melting and malleable
and for once in my life the manipulation is not mine
I finally let go and I let the heat warm me
I unclench my hands turned to fists in the cold
The minutes and hours thaw into each other
Time turns to liquid that my fingers can’t hold
Coyotes call at the base of the mountain
Their voice brings me back to the tempo of time
It echoes the rhythm of winter come running
and the moments freeze back into the forward design
https://medium.com/for-the-sake-of-the-song/wintertime-c1f8551324fe
['Sydney J. Shipp']
2020-11-30 22:31:56.911000+00:00
['Poetry', 'Self-awareness', 'Winter', 'Time', 'Nature']
Title WintertimeContent Wintertime poem nature time cold season Kristen Munk Unsplash hear mountain magical landscape enchanted soul wronged past land still loyal sings spirit Shuddering peak groan creak exhale fall’s last gasp contract cool like day season shrink like year path I’ll turn thirty one rise sun tense like rock prospect moment cold turn ice crystal frigid second set stone skip forward time heat fire they’re melting malleable life manipulation mine finally let go let heat warm unclench hand turned fist cold minute hour thaw Time turn liquid finger can’t hold Coyotes call base mountain voice brings back tempo time echo rhythm winter come running moment freeze back forward designTags Poetry Selfawareness Winter Time Nature
4,234
Five Important Facts You Should Know about Digital Marketing
According to a ‘Managing Digital Marketing’ study by Smart Insights, 46% of brands don’t have a defined digital marketing strategy, while 16% do have a strategy but haven’t yet integrated it into their marketing activity. The right digital marketing strategy can give companies a competitive edge over their rivals and help them maximize growth, profit, and value. Here are five important facts that you should know about digital marketing. They’ll ensure you get the most out of your people and digital investments by aligning them with the critical moves that drive competitive advantage and superior results.
https://medium.com/marketing-in-the-age-of-digital/five-important-facts-you-should-know-about-digital-marketing-41e53aecba3d
['Wenting Xu', 'Tina']
2020-08-10 00:02:08.872000+00:00
['Digital Transformation', 'Marketing', 'Strategy', 'Marketing Strategies', 'Digital Marketing']
Title Five Important Facts Know Digital MarketingContent According ‘Managing Digital Marketing’ study Smart Insights 46 brand don’t defined digital marketing strategy 16 strategy haven’t yet integrated marketing activity right digital marketing strategy give company competitive edge rival maximize growth profit value five important fact know digital marketing They’ll ensure get people digital investment aligning critical move drive competitive advantage superior resultsTags Digital Transformation Marketing Strategy Marketing Strategies Digital Marketing
4,235
Styled Components: A CSS-in-JS Approach
Build More Styled Components

We continue building styled components for div and a tags: AppDiv is created at line five, with styles at lines 5-7. AppDiv replaces div with className at line 26. AppLink is created at line 20, with styles at lines 20-22. AppLink replaces a with className at line 32. More things are styled: although the text alignment for AppDiv isn’t obvious, the blue link puts us a step closer to the original Create React App.

We’ve used styled.tagname helper methods. Can the tagname be a component name? No. If we want to build upon an existing component, styled should be used as a constructor, like styled(Component). The new component inherits the styling of Component.

In the following code, Button1 is styled with red text on a white background. Button2 inherits the red text and has a yellow background. Button3 inherits the yellow background and is styled with green text.

const Button1 = styled.button`
  color: red;
  background: white;
`;

const Button2 = styled(Button1)`
  background: yellow;
`;

const Button3 = styled(Button2)`
  color: green;
`;

Put them together:

<Button1>Button1</Button1>
<Button2>Button2</Button2>
<Button3>Button3</Button3>

It looks like this:

If the styled target is a simple element (styled.tagName), styled-components passes through any known HTML attribute to the DOM. If it’s a custom React component (styled(Component)), styled-components passes through all props. The above button example can be accomplished with props used in interpolations:

const Button = styled.button`
  color: ${(props) => props.clr || "red"};
  background: ${(props) => props.bg || "white"};
`;

Here’s the usage:

<Button>Button1</Button>
<Button bg="yellow">Button2</Button>
<Button clr="green" bg="yellow">Button3</Button>

It’s important to define styled components outside the render method; otherwise, they are recreated on every single render pass.

For the three generated buttons, each has two classes connected to it: the first is the static class, which does not have any style attached to it. It’s used to quickly identify which styled component a DOM object belongs to. The second one is the dynamic class, which is different for every element. It’s used to style the component.
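To make the define-outside-render advice concrete, here is a minimal sketch (the Card and Profile names are illustrative, not from the article’s gists):

import React from "react";
import styled from "styled-components";

// Defined once at module scope: the generated class stays stable across renders.
const Card = styled.div`
  padding: 1rem;
  border: 1px solid ${(props) => (props.highlighted ? "orange" : "#ccc")};
`;

function Profile({ name, highlighted }) {
  // Avoid declaring styled components in here: a new component (and a new
  // dynamic class) would be generated on every single render pass.
  return <Card highlighted={highlighted}>{name}</Card>;
}

Because Card lives at module scope, re-rendering Profile reuses the same static class instead of generating a fresh styled component each time.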
https://medium.com/better-programming/styled-components-a-css-in-js-approach-755f6a196c42
['Jennifer Fu']
2020-07-21 15:21:51.823000+00:00
['JavaScript', 'React', 'Reactjs', 'Nodejs', 'Programming']
Title Styled Components CSSinJS ApproachContent Build Styled Components continue building styled component div tag AppDiv created line five style line 57 AppDiv replaces div className line 26 AppLink created line 20 style line 2022 AppLink replaces className line 32 thing styled Although text alignment AppDiv isn’t obvious blue link put u step closer original Create React App We’ve used styledtagname helper method tagname component name want build upon tagged template literal styled used constructor like styledComponent new component inherits styling Component following code Button1 styled red text white background Button2 inherits red text yellow background Button3 inherits yellow background styled green text const Button1 styledbutton color red background white const Button2 styledButton1 background yellow const Button3 styledButton2 color green Put together Button1Button1Button1 Button2Button2Button2 Button3Button3Button3 look like styled target simple element styledtagName styled component pass known HTML attribute DOM it’s custom React component styledComponent styled component pas prop button example accomplished passed prop used interpolation const Button styledbutton color prop propsclr red background prop propsbg white Here’s usage ButtonButton1Button Button bgyellowButton2Button Button clrgreen bgyellowButton3Button It’s important define styled component outside render method otherwise recreated every single render pas three generated button two class connected first static class style attached It’s used quickly identify styled component DOM object belongs second one dynamic class different every element It’s used style componentTags JavaScript React Reactjs Nodejs Programming
4,236
Mark Zuckerberg Shares The Jewish Prayer He Says to His Daughters Every Night
Mark Zuckerberg is among the busiest CEOs around the globe. The 33-year-old runs Facebook, the social-media giant with a market cap of $547 billion. As CEO, Zuckerberg spends a lot of time directing and managing his company; however, he still makes the time to exercise, travel and, most importantly, spend time with his family. His philosophy is to stay productive and balanced by eliminating nonessential choices from his life and by setting ambitious goals for himself.

His typical routine includes an 8 am wake-up call, a morning workout session and a lot of time at Facebook. He doesn’t waste time dealing with any of the little choices we make every day, such as picking an outfit. When asked about his wardrobe in 2014, he told an audience: “I really want to clear my life to make it so that I have to make as few decisions as possible about anything except how to best serve this community.”

Despite his busy life, he always manages to spend some time with his wife Priscilla Chan and his daughters Max and August. He also doesn’t give up on his Jewish identity and cares to pass it on to his daughters. Every night before going to bed, the Facebook CEO tucks his children in with a traditional Jewish prayer, the “Mi Shebeirach.” He mentioned the same prayer when he gave the commencement address at Harvard University. The Facebook founder said: “It goes, ‘May the source of strength who blessed the ones before us help us find the courage to make our lives a blessing.’ I hope you find the courage to make your life a blessing.”

Zuckerberg quoted the “Mi Shebeirach,” a prayer for healing written by Debbie Friedman, one of the most significant Jewish musicians of the past 50 years.
https://medium.com/jewish-economic-forum/mark-zuckerberg-shares-the-jewish-prayer-he-says-to-his-daughters-every-night-1852318bf1ae
[]
2018-02-12 14:33:33.558000+00:00
['Mark Zuckerberg', 'Facebook', 'Ethics', 'Jewish', 'Jef']
Title Mark Zuckerberg Shares Jewish Prayer Says Daughters Every NightContent Mark Zuckerberg among busiest CEOs around globe 33yearold run Facebook socialmedia giant market cap 547 billion CEO Zuckerberg spends lot time directing managing company however still make time exercise travel importantly spend time family philosophy stay productive balanced eliminating nonessential choice life setting ambitious goal typical routine includes 8 wake call morning work session lot time Facebook doesn’t waste time dealing little choice make everyday picking outfit asked wardrobe 2014 told audience “I really want clear life make make decision possible anything except best serve community” Despite busy life always manages spend time wife Priscilla Chan daughter Max August also doesn’t give Jewish identity care pas daughter Every night going bed Facebook CEO tuck child traditional Jewish prayer “Mi Shebeirach” mentioned prayer gave commencement address Harvard University Facebook founder said “It go ‘May source strength blessed one u help u find courage make life blessing’ ” said “I hope find courage make life blessing” Zuckerberg quoted “Mi Shebeirach” prayer healing written Debbie Friedman one significant Jewish musician past 50 yearsTags Mark Zuckerberg Facebook Ethics Jewish Jef
4,237
Functional Programming in Java
LAMBDA EXPRESSIONS

Lambda expressions, or lambda functions, are blocks of code that can be assigned to a variable, passed around as an argument or even returned from functions. They are anonymous functions and contain parameters, the lambda operator (->) and a function body.

Lambda expression syntax

Lambda expressions were introduced in Java as a means of supporting functional programming. As lambda expressions are anonymous functions, passed around as arguments, we need a way to execute these functions on demand. This is where functional interfaces come into play. A functional interface, having only a single abstract method, accepts a lambda function or a method reference as the implementation for that particular abstract method. To understand this better, let’s see how streams and optionals use lambda expressions as implementations for the abstract method in functional interfaces.

FUNCTIONAL INTERFACES USED IN JAVA STREAMS

Streams in Java provide a functional approach to processing a collection of objects. Stream.java provides different methods to process list elements, such as map(), flatMap(), filter() and sorted(), each of which takes a functional interface type as an argument. Let’s consider an example of a List of names and a Stream on the list to filter out names that contain the letter ‘a’. The .filter() here is a function to filter out elements from the list that satisfy the specified criteria, and the .collect() returns another list with the filtered elements. Notice that the input passed to the filter function is a lambda expression. The filter() method in Stream.java has the following structure.

filter function in Stream.java from java.util.stream

As you can see, filter accepts a Predicate as an argument. So what is a predicate?

Predicate.java

A predicate is a functional interface provided in the java.util.function package and contains one abstract method: boolean test(T t). But how do we get the implementation for test(T t)? What gets executed when predicate.test(t) is called?
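As a sketch of the answer (the names and values here are illustrative, and List.of assumes Java 9+): the lambda you pass to filter() becomes the implementation of test(T t), so every call to predicate.test(t) executes the lambda body.

import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateDemo {
    public static void main(String[] args) {
        List<String> names = List.of("Maria", "John", "Sarah", "Bob");

        // This lambda is the implementation of Predicate.test(T t).
        Predicate<String> containsA = name -> name.contains("a");

        // filter() calls containsA.test(element) for every element.
        List<String> filtered = names.stream()
                .filter(containsA)
                .collect(Collectors.toList());

        System.out.println(filtered); // prints [Maria, Sarah]
    }
}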
https://medium.com/swlh/functional-programming-in-java-c6d03c93392a
['Thameena S']
2020-10-14 16:27:12.581000+00:00
['Lambda', 'Java', 'Lambda Expressions', 'Functional Programming', 'Functionalinterface']
Title Functional Programming JavaContent LAMBDA EXPRESSIONS Lambda expression function block code assigned variable passed around argument even returned function anonymous function contains parameter lambda operator function body Lambda expression syntax Lambda expression introduced Java mean supporting functional programming Lambda expression anonymous function passed around argument need way execute function demand functional interface come play Functional interface single abstract method accepts lambda function method reference implementation particular abstract method understand better let’s see stream optionals us lambda expression implementation abstract method functional interface FUNCTIONAL INTERFACES USED JAVA STREAMS Streams Java provide functional approach process collection object Streamjava provides different method process list element map flatMap filter sorted etc take functional interface type argument Let’s consider example LIST name Stream list filter name contains letter ‘a’ filter function filter element list satisfies specified criterion collect return back another list filtered element Notice input passed onto filter function lambda expression filter method Streamjava following structure filter function Streamjava javautilstream see filter accepts Predicate argument predicate Predicatejava predicate functional interface provided javautilfunction package contains one abstract method boolean testT get implementation testT get executed predicatetestt calledTags Lambda Java Lambda Expressions Functional Programming Functionalinterface
4,238
Why is no one talking about depression after university?
Every year, thousands of students’ lives change dramatically, often leaving them isolated, anxious, and even depressed. It’s time we started talking about it. “Anxiety about Monday would start on Saturday night.” Post-university depression is not only real, but also rarely talked about. Photo: Flickr/pigeonpie “Imagine sitting on a limb for a long time and, when you try to stand on it, you buckle under. You can’t get up. Everyone around you is standing up and telling you to do the same, but you just can’t. You dare not.” Robyn Hall* graduated from university last summer. Despite being one of the lucky few to quickly find a job in her chosen field, she still struggled with the transition into her new life. She described the difficulty of coming to terms with her feelings of depression. “‘But you’re a graduate!’ my brain yelled at me. ‘Grow up!’ But the self-loathing continued. You leave a place you’ve been in for three or four years, where you developed so much, leaving behind the closest friends you’ve possibly ever had. Even if you do get a job, nobody tells you that once you ‘hit the jackpot’, you’ll struggle to make new friends; that 9–5 will leave you exhausted. You’re scared of not being good enough, that you won’t live up to expectations. It’s the ultimate disparity between representation and reality.” Robyn is not the only one to struggle with depression after leaving university. When I graduated, I went from feeling the happiest I’ve been in my adult life, to the worst. By October I was jumping at sudden noises and afraid to leave my bedroom. When a year-long relationship suddenly ended, I didn’t know how to see past the black clouds pressing in on me. I sought help from my GP, who referred me to a local mental health outreach programme. But in the end, it was time, a relocation, and support from friends that began to stabilise the feelings of anxiety and depression. I can count graduates with similar stories on two hands — and those are just the ones close enough to confide in me. Every year, thousands of people’s lives are turned upside down when they jubilantly throw a hat into the air, then watch it come crashing down into reality. So why does no one talk about the feelings of hopelessness that so many are left with? After all, with over 900,000 young people currently unemployed and benefits for under-25s constantly under threat, is it any wonder that mental health issues in young people are rising across the board? I spoke to Matt Tidby, who stayed in his university town of Norwich following graduation, supporting himself with temp jobs. “The majority of the work itself was doable, if monotonous — but things like the telephone, where I was expected to advise on mortgages after about half-a-day’s training, left me hugely anxious and very unhappy. I suffered on a personal level, and lost a lot of confidence in my ability to do both that job, and any of the jobs I actually craved. “Quite ridiculously, I lived in fear of being ‘put on the phones’ — I built that minor stress into a mountain of worry that blotted out everything. After about a month, the job applications stopped. I got into quite a destructive system of trying to make it to each weekend without things getting too shit to handle. Anxiety about Monday would start on Saturday night.” Matt eventually left the job, recognising the damage it was doing, and said that things were beginning to get better. “It’s a daily, rapidly changing situation, really — a positive email or a phone call can reverse many days of feeling low. 
It’s a strange inversion of my time temping; whereas once I lived in terror of the phone ringing, now I urge it to. I’m more hopeful that it will.” While researching this piece, I found very little information targeted specifically at graduates suffering from mental health problems, despite an article in the Independent last year that found that of 40 students and recent graduates surveyed, “95% believed that post-university depression was very much a real thing”. With so little information available, I contacted the mental health charity Mind directly. Head of information, Beth Murphy, had this to say: “Moving on from university is often the biggest change a person has experienced up to that point in their lifetime. Added to this, today’s graduates are facing the double-whammy of the debt associated with paying for university and a tough job market that can seem impenetrable. “Financial stress and uncertainty around employment are major contributors to mental health problems like anxiety and depression. Mind has seen a surge in calls to our Infoline from people struggling with financial difficulties, many of them post-graduates. Our In the Red report actually found that 85% of respondents said their financial difficulties had made their mental health problems worse.” So if post-university depression is “a real thing”, why does no one talk about it? Is this the same stigma surrounding mental health that affects all sufferers, or is there something else going on? Robyn believes that there is a pressure on graduates to feel grateful for their position. “Once you get a 9–5 job, coping with depression can be worse. People are all over to congratulate you, help you in any way they can; you’re so afraid of disappointing everyone that you just let the guilt fester away. I think even in the media it’s not represented enough that you can do your ‘dream job’ and not feel right.” So what can be done? Beth recommends communication above all else. “If you are worried about your mental health, confide in a friend or family member or speak to your GP. There are also lots of small things you can do to make yourself feel better — exercise can be hugely beneficial, releasing chemicals which help increase wellbeing and mood. Keeping in touch with friends is also important, as withdrawing from social contact can make things worse.” Whether you attended university or not, being young and uncertain about your future is the perfect opportunity for feelings of anxiety to take hold. I’m constantly struggling with my own mental health, but I’m one of the lucky ones; I have a job to focus me, friends to listen when things get dark, and access to medical help. But the same can’t be said for everyone, and with mental health trusts asked to shave almost 20% from their budgets next year, that last, vital support system is more at risk than ever. It’s time to stop suffering in silence and acknowledge depression after graduation as a real risk to young adults. And it’s time to stop cutting the very services that may well save their lives. For information, support and advice please visit mind.org.uk or call Mind’s confidential mental health information service on 0300 123 3393. To find out more about starting conversations and tackling mental health stigma, visit time-to-change.org.uk *Names have been changed.
https://medium.com/abstract-magazine/why-is-no-one-talking-about-depression-after-university-94d3e09ca1d2
['Amy Fox']
2015-12-11 11:00:42.662000+00:00
['Life', 'Depression', 'Mental Health']
Title one talking depression universityContent Every year thousand students’ life change dramatically often leaving isolated anxious even depressed It’s time started talking “Anxiety Monday would start Saturday night” Postuniversity depression real also rarely talked Photo Flickrpigeonpie “Imagine sitting limb long time try stand buckle can’t get Everyone around standing telling can’t dare not” Robyn Hall graduated university last summer Despite one lucky quickly find job chosen field still struggled transition new life described difficulty coming term feeling depression “‘But you’re graduate’ brain yelled ‘Grow up’ selfloathing continued leave place you’ve three four year developed much leaving behind closest friend you’ve possibly ever Even get job nobody tell ‘hit jackpot’ you’ll struggle make new friend 9–5 leave exhausted You’re scared good enough won’t live expectation It’s ultimate disparity representation reality” Robyn one struggle depression leaving university graduated went feeling happiest I’ve adult life worst October jumping sudden noise afraid leave bedroom yearlong relationship suddenly ended didn’t know see past black cloud pressing sought help GP referred local mental health outreach programme end time relocation support friend began stabilise feeling anxiety depression count graduate similar story two hand — one close enough confide Every year thousand people’s life turned upside jubilantly throw hat air watch come crashing reality one talk feeling hopelessness many left 900000 young people currently unemployed benefit under25s constantly threat wonder mental health issue young people rising across board spoke Matt Tidby stayed university town Norwich following graduation supporting temp job “The majority work doable monotonous — thing like telephone expected advise mortgage halfaday’s training left hugely anxious unhappy suffered personal level lost lot confidence ability job job actually craved “Quite ridiculously lived fear ‘put phones’ — built minor stress mountain worry blotted everything month job application stopped got quite destructive system trying make weekend without thing getting shit handle Anxiety Monday would start Saturday night” Matt eventually left job recognising damage said thing beginning get better “It’s daily rapidly changing situation really — positive email phone call reverse many day feeling low It’s strange inversion time temping whereas lived terror phone ringing urge I’m hopeful will” researching piece found little information targeted specifically graduate suffering mental health problem despite article Independent last year found 40 student recent graduate surveyed “95 believed postuniversity depression much real thing” little information available contacted mental health charity Mind directly Head information Beth Murphy say “Moving university often biggest change person experienced point lifetime Added today’s graduate facing doublewhammy debt associated paying university tough job market seem impenetrable “Financial stress uncertainty around employment major contributor mental health problem like anxiety depression Mind seen surge call Infoline people struggling financial difficulty many postgraduate Red report actually found 85 respondent said financial difficulty made mental health problem worse” postuniversity depression “a real thing” one talk stigma surrounding mental health affect sufferer something else going Robyn belief pressure graduate feel grateful position “Once get 9–5 job coping depression worse People congratulate help 
way you’re afraid disappointing everyone let guilt fester away think even medium it’s represented enough ‘dream job’ feel right” done Beth recommends communication else “If worried mental health confide friend family member speak GP also lot small thing make feel better — exercise hugely beneficial releasing chemical help increase wellbeing mood Keeping touch friend also important withdrawing social contact make thing worse” Whether attended university young uncertain future perfect opportunity feeling anxiety take hold I’m constantly struggling mental health I’m one lucky one job focus friend listen thing get dark access medical help can’t said everyone mental health trust asked shave almost 20 budget next year last vital support system risk ever It’s time stop suffering silence acknowledge depression graduation real risk young adult it’s time stop cutting service may well save life information support advice please visit mindorguk call Mind’s confidential mental health information service 0300 123 3393 find starting conversation tackling mental health stigma visit timetochangeorguk Names changedTags Life Depression Mental Health
4,239
3 Ways to Implement the Singleton Pattern in TypeScript With Node.js
The Problem: A Logging Example

Here’s an example problem: I have a Node.js app for payment processing that uses a Logger class. We want to keep a single logger instance in this example and ensure the Logger state is shared across the payment app. To keep things simple, let’s say the logger needs to keep track of the total number of logged messages within the app. Ensuring that the counter is tracked globally within the app means that we will need a singleton class to achieve this.

A high-level diagram of the sample app by the author.

Let’s go through each of the classes that we will be using.

Logger class: Logger.ts
A basic logger class that allows its clients to log a message with a timestamp. It also allows the client to retrieve the total number of logged messages.

Payment class: Payment.ts
The payment processing class processes the payment. It logs the payment instantiation and payment processing:

The entry point of the app: index.ts
The entry point creates an instance of the Logger class and processes the payment through the Payment class:

If we run the code above, we will get the following output:

# Run the app
tsc && node dist/creational/singleton/problem/index.js

Output screenshot by the author.

Notice that the log count stays at 1 despite showing 3 logged messages. The count remains at 1 because a new instance of Logger is created in index.ts and in Payment.ts separately. The log count here only represents what’s logged in index.ts. However, we also want to include the number of messages logged in the Payment class.

Here are different ways to solve this problem by using a singleton design pattern.
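Before walking through the article’s own variants, here is one classic shape the fix can take: a minimal sketch (not the author’s exact code) that uses a private constructor plus a static getInstance() accessor so index.ts and Payment.ts share one Logger:

class Logger {
  private static instance: Logger;
  private count = 0;

  // A private constructor prevents `new Logger()` outside this class.
  private constructor() {}

  static getInstance(): Logger {
    if (!Logger.instance) {
      Logger.instance = new Logger();
    }
    return Logger.instance;
  }

  log(message: string): void {
    this.count += 1;
    console.log(`${new Date().toISOString()} - ${message}`);
  }

  getCount(): number {
    return this.count;
  }
}

// Both call sites now share one instance, so the count is tracked globally:
Logger.getInstance().log("Payment instantiated");
Logger.getInstance().log("Payment processed");
console.log(Logger.getInstance().getCount()); // 2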
https://medium.com/better-programming/3-ways-to-implement-the-singleton-pattern-in-typescript-with-node-js-75129f391c9b
['Ardy Dedase']
2020-11-02 16:29:10.810000+00:00
['Technology', 'Startup', 'Software Development', 'JavaScript', 'Programming']
Title 3 Ways Implement Singleton Pattern TypeScript NodejsContent Problem — Logging Example Here’s example problem Nodejs app payment processing us Logger class want keep single logger instance example ensure Logger state shared across Payment app keep thing simple let’s say need ensure logger need keep track total number logged message within app Ensuring counter tracked globally within app mean need singleton class achieve highlevel diagram sample app author Let’s go class using Logger class Loggerts basic logger class allows client log message timestamp also allows client retrieve total number logged message Payment class Paymentts Payment processing class process payment log payment instantiation payment processing entry point app indexts entry point creates instance Logger class process payment also process payment Payment class run code get following output Run app tsc node distcreationalsingletonproblemindexjs Output screenshot author Notice log count stay 1 despite showing 3 logged message count remains 1 new instance Logger created indexts Paymentts separately log count represents what’s logged indexts However also want include number logged message Payment class different way solve problem using singleton design patternTags Technology Startup Software Development JavaScript Programming
4,240
Growing Older, But Not Growing Old
Attitude, Attitude, Attitude

Photo by Alex Wilken from Pixabay

An Irish study a few years ago found that one of the most important elements in maintaining physical and cognitive health as we age is attitude. Of course, that cuts both ways. “Everyone will grow older,” says the study’s lead researcher Deirdre Robertson, “and if negative attitudes towards aging are carried throughout life, they can have a detrimental, measurable effect on mental, physical, and cognitive health.”

If that’s not good enough, here’s what George Burns said about aging: “You can’t help getting older, but you don’t have to get old.” Remember, that comes from a man who once played God!

I’ve never understood the aversion to getting older. I remember as a preschooler being in awe of a little restaurant on the corner across from the junior high school in my hometown. The restaurant was torn down decades ago and a new junior high was built across town. Finley’s corner opened an hour before school started and closed an hour after school ended. My older brothers hung out there with other junior and high school kids.

One day my mother took me into this exotic place and we sat at the counter and drank Coca-Cola served by the man himself, Mr. Finley. Though my feet dangled from the stool about two feet from the floor, I felt I had hit the big time. If I had known the quote from jazz great Fats Waller, I probably would have invoked it: “Somebody shoot me while I’m happy.”

It was intoxicating having the legendary Mr. Finley engage me in typical adult-to-child banter: “How old are you?” “Do you go to school yet?” And me responding in typical child-to-adult diatribes: “I have a dog.” “His name is Laddie.” “I named him after Lassie, but he’s a boy, so I couldn’t name him Lassie.” (This was the 1950s. Naming was more tied to gender then.)

Maybe it was the familiarity we were building; maybe it was the carbonation or sugar from all the soda. Whatever it was, I dropped all sense of decorum and referred to Mr. Finley by the name all the older kids used: “Old Man Finley.” My embarrassed mother admonished me not to speak to Mr. Finley “that way”. “That’s alright,” Mr. Finley said. “That’s what all the kids call me.”

My mother and I remembered and laughed about the incident from then on. I also remembered Mr. Finley’s attitude. He was not only amused, but he also seemed sort of proud.

That’s the way I feel. I’ve worked all my life to get older. I’m proud of it and I’m determined to enjoy it. Not only do I feel intuitively that an optimistic approach to aging will help me live a better life, there is also more science to back that up. In 2018, a Yale School of Public Health study found that a positive attitude about getting older significantly reduced the likelihood of developing dementia.

To me, having a good attitude about growing older has always been linked to humor. “He’s so old that when he orders a three-minute egg, they ask for the money upfront.” That’s from that Burns fellow again. He not only lived to be 100. He started his solo career as a comedian when he was 80.

Sure, with age you may have to moderate your lifestyle a little. You may not be able to say, eat, drink, and be merry. You can say, eat (wisely), drink (moderately), walk (once a day), take a nap (when you don’t get eight hours of sleep), and be happy you’ve made it this far.
https://medium.com/crows-feet/growing-older-but-not-growing-old-c9f05f61f6f
['Max K. Erkiletian']
2020-10-23 21:38:14.275000+00:00
['Aging', 'Humor', 'Mental Health', 'Lessons Learned', 'Positive Thinking']
Title Growing Older Growing OldContent Attitude Attitude Attitude Photo Alex Wilken Pixabay Irish study year ago found one important element maintaining physical cognitive health age attitude course cut way “Everyone grow older” say study’s lead researcher Deirdre Robertson “and negative attitude towards aging carried throughout life detrimental measurable effect mental physical cognitive health” that’s good enough here’s George Burns said aging “You can’t help getting older don’t get old” Remember come man played God I’ve never understood aversion getting older remember preschooler awe little restaurant corner across junior high school hometown restaurant torn decade ago new junior high built across town Finley’s corner opened hour school started closed hour school ended older brother hung junior high school kid One day mother took exotic place sat counter drank cocacola served man Mr Finley Though foot dangled stool two foot floor felt hit big time known quote jazz great Fats Waller probably would invoked “Somebody shoot I’m happy” intoxicating legendary Mr Finley engage typical adult child banter “How old you” “Do go school yet” responding typical child adult diatribe “I dog” “His name Laddie” “I named Lassie he’s boy couldn’t name Lassie” 1950s Naming tied gender Maybe familiarity building maybe carbonation sugar soda Whatever dropped sense decorum referred Mr Finley name older kid used — “Old Man Finley” embarrassed Mother admonished speak Mr Finley “that way” “That’s alright” Mr Finley said “that’s kid call me” mother remembered laughed incident also remembered Mr Finley’s attitude amused also seemed sort proud That’s way feel I’ve worked life get older I’m proud I’m determined enjoy feel intuitively optimistic approach aging help live better life also science back 2018 Yale School Public Health study found positive attitude getting older significantly reduced likelihood developing dementia good attitude growing older always linked humor “He’s old order threeminute egg ask money upfront” That’s Burns fellow lived 100 started solo career comedian 80 Sure age may moderate lifestyle little may able say eat drink merry say eat wisely drink moderately walk day take nap don’t get eight hour sleep happy you’ve made farTags Aging Humor Mental Health Lessons Learned Positive Thinking
4,241
How to Focus: Back to Basics as a form of Meditation
How to Focus: Back to Basics as a form of Meditation

There’s no chanting, a fair amount of swearing, it’s a pain in the ass — but it delivers perspective, appreciation, and focus.

Wood by Robert Ruggiero

It’s a brand new year, we’ve created our New Year’s Resolutions and we’ve even looked at how to make sure we deliver on our resolutions, but we still need help in knuckling down and focusing on the tasks and the year ahead. What’s a guy to do?

The first thing I always do is procrastinate. I will write an article at some point about Positive Procrastination (I appreciate the irony of putting that off for now) but I genuinely believe in living by a fully comprehensive to-do list and being able to procrastinate by picking up another task that needs doing, so that time is never wasted; tasks just aren’t necessarily prioritised in the best way.

I reach for tools that will help me succeed. Anyone who has ever bought more than one self-help book will recognise the pattern:

I need help with task X; e.g. writing articles
I will spend time researching what other people have done to write articles
I will spend my money buying books/subscribing to resources that other people sell about writing
I will realise that all of the people selling these resources didn’t make any money from writing and instead make their money from talking about making money from writing

Lifering by Frederick Tubiermont

The ridiculous thing is that I know what I need to succeed and how to focus. I’ve done it before, I’ve learned it before, I’ve achieved it before. Not everyone works on a computer 80% of the time but for me, the winning pattern is:
https://medium.com/copse-magazine/how-to-focus-back-to-basics-as-a-form-of-meditation-21996623ba48
['Adam Colthorpe']
2020-01-02 11:23:54.445000+00:00
['Productivity', 'Self Improvement', 'Life', 'Meditation', 'Work']
Title Focus Back Basics form MeditationContent Focus Back Basics form Meditation There’s chanting fair amount swearing it’s pain as — delivers perspective appreciation focus Wood Robert Ruggiero It’s brand new year we’ve created New Year’s Resolutions we’ve even looked make sure deliver resolution still need help knuckling focusing task year ahead What’s guy first thing always procrastinate write article point Positive Procrastination appreciate irony putting genuinely believe living fully comprehensive todo list able procrastinate picking another task need time never wasted task necessarily prioritised best way reach tool help succeed Anyone ever bought one selfhelp book recognise pattern need help task X eg writing article spend time researching people done write article spend money buying bokossubscribing resource people sell writing realise people selling resource didn’t make money writing instead make money talking making money writing Lifering Frederick Tubiermont ridiculous thing know need succeed focus I’ve done I’ve learned I’ve achieved everyone work computer 80 time winning pattern isTags Productivity Self Improvement Life Meditation Work
4,242
Drafted
Drafted

Good ideas at the time? Probably not.

I once read Stephen King penned a story so horrifying, so ghastly, so macabre, he suffered terrible nightmares while writing it. Yup. It disturbed him to such a degree, he could never bring himself to send it to his publisher. The manuscript remained ever locked in his desk drawer. He eventually tossed the key into the deep waters of a harbor close to his home. True story. Pretty sure. May have been Dave Barry.

Regardless, it goes without saying even though I’m saying it anyway that every author maintains a dark file of stories that will never see the light of day. Perhaps they’re stories so freakishly scary, their release would risk sending readers into cardiac arrest. Perhaps they’re stories so sad, the authors can’t complete them past flowing tears. Perhaps they’re stories so beautiful yet so personal, the authors can’t bear to share them with the public for fear they won’t receive the love they deserve. Or perhaps they’re stories that just plain suck. Mine apply to that last one.

Here are titles of story ideas I simply had to flush and the reasons why:

My Last Physical Exam
My general physician strongly recommended I lose an amount of weight proportionate to that of a Northern Pacific baby sea lion. I decided I would post this story only after I’d lost the weight.

How to Lose Weight When You’re Over 50
This can’t be done.

Cobra Kai (The Karate Kid Sequel): A Movie Review
This ended up sounding like a thousand word rant about how good Ralph Macchio looks at 56. And it contained way too much profanity. My review, not the movie.

An Easy Way to Clean Your Barbecue with Safe Chemicals
I don’t want to relive this.

In the Girls’ Room with Pink Curtains Near Asphyxiation
What I thought was the making of an excellent mind-blower of a science-fiction tale was just bits and pieces of a hallucination I experienced while repainting our poorly ventilated daughters’ bedroom. My transcription was twenty-two paragraphs comprised of one word: mot. I don’t know what that means.

Anime is Awesome!
This story took root when I made an earnest attempt to embrace anime films. My daughters are enamored with them, so I’d rented three movies from the library, made popcorn and settled in for a Miyazaki marathon one Sunday afternoon intent on gaining a powerful appreciation. I had pen and paper ready as the magic began to unfold. I gave up thirty minutes into the first one. I stumbled from the room certain I was suffering a paint fume relapse.

World War Walmart
I began this story following a shouting match with a Russian couple after they cut me off in a discount store parking lot. Things escalated quickly then stopped abruptly when we weren’t able to understand each other’s insults. We ended up shaking hands. I thought this encounter, if worded properly, could be shaped into an intelligent allegory shedding light on important topics such as global relations, diversity, acceptance, the human condition and rolled back prices as they all relate in today’s political arena. Nope.

Fond Memories as a College Freshman
I have no fond memories as a college freshman. My roommate was a stoner, my professors were assholes and Sammy Hagar joined Van Halen.

My Daughters Hate Me
I started this post during a bad week when my three teenage daughters all became angry with me for some reason. I don’t know what I’d done, but this was a major turning point in my life as a parent. The story was to be a deep dive into my shortcomings as a father and how the relationships I shared with my daughters had changed forever.

My Daughters Love Me
I stopped working on the previous story when my daughters suddenly returned to being nice to me again.

The Wisdom of a Wife
Apparently those last two stories shared a logical explanation which was revealed to me in a private conversation with my wife who was careful to use small, slowly spoken words. A terrible misunderstanding. She also suggested I kibosh the whole subject. Yes. That was for the best.

Medium: The Board Game
Actually I’m not done with this.

So now that I’ve dredged up the embarrassing rejects of my otherwise masterful body of work, I hope you feel encouraged to share an idea you intend to keep hidden under lock and key forever. I don’t want to feel alone on this. Thanks so much for reading. And not judging. I shall now resume publishing the quality subject matter you so richly deserve.
https://thehappysidestep.medium.com/drafted-beb55ba5f368
[]
2019-03-26 14:13:43.210000+00:00
['Satire', 'Parenting', 'Writing', 'Huffington Paint', 'Humor']
Title DraftedContent Drafted Good idea time Probably read Stephen King penned story horrifying ghastly macabre suffered terrible nightmare writing Yup disturbed degree could never bring send publisher manuscript remained ever locked desk drawer eventually tossed key deep water harbor close home True story Pretty sure May Dave Barry Regardless go without saying even though I’m saying anyway every author maintains dark file story never see light day Perhaps they’re story freakishly scary release would risk sending reader cardiac arrest Perhaps they’re story sad author can’t complete past flowing tear Perhaps they’re story beautiful yet personal author can’t bear share public fear won’t receive love deserve perhaps they’re story plain suck Mine apply last one title story idea simply flush reason Last Physical Exam general physician strongly recommended lose amount weight proportionate Northern Pacific baby sea lion decided would post story I’d lost weight Lose Weight You’re 50 can’t done Cobra Kai Karate Kid Sequel Movie Review ended sounding like thousand word rant good Ralph Macchio look 56 contained way much profanity review movie Easy Way Clean Barbecue Safe Chemicals don’t want relive Girls’ Room Pink Curtains Near Asphyxiation thought making excellent mindblower sciencefiction tale bit piece hallucination experienced repainting poorly ventilated daughters’ bedroom transcription twentytwo paragraph comprised one word mot don’t know mean Anime Awesome story took root made earnest attempt embrace anime film daughter enamored I’d rented three movie library made popcorn settled Miyazaki marathon one Sunday afternoon intent gaining powerful appreciation pen paper ready magic began unfold gave thirty minute first one stumbled room certain suffering paint fume relapse World War Walmart began story following shouting match Russian couple cut discount store parking lot Things escalated quickly stopped abruptly weren’t able understand other’s insult ended shaking hand thought encounter worded properly could shaped intelligent allegory shedding light important topic global relation diversity acceptance human condition rolled back price relate today’s political arena Nope Fond Memories College Freshman fond memory college freshman roommate stoner professor asshole Sammy Hagar joined Van Halen Daughters Hate started post bad week three teenage daughter became angry reason don’t know I’d done major turning point life parent story deep dive shortcoming father relationship shared daughter changed forever Daughters Love stopped working previous story daughter suddenly returned nice Wisdom Wife Apparently last two story shared logical explanation revealed private conversation wife careful use small slowly spoken word terrible misunderstanding also suggested kibosh whole subject Yes best Medium Board Game Actually I’m done I’ve dredged embarrassing reject otherwise masterful body work hope feel encouraged share idea intend keep hidden lock key forever don’t want feel alone Thanks much reading judging shall resume publishing quality subject matter richly deserveTags Satire Parenting Writing Huffington Paint Humor
4,243
Git and Github: A Love Story or Something Like That.
Github Repo

As I continue my journey to becoming a software engineer, I’m trying to identify gaps in my knowledge. Basic things that I should probably know. As I research them I plan to write a post, which I feel is a good way of retaining the knowledge that I have acquired. Towards the end of my time at the Flatiron School, it occurred to me that this thing that I had basically been using the whole time was still a mystery. I knew how to fork and clone something, I knew how to initialize (or init) a repo, and add files and save them. I just didn’t know the whys or the hows. I didn’t really know what Git even was, and I barely knew about GitHub.

So that’s what I’m going to do here. I will discuss what Git is, and why we use it. Then I will give some info about GitHub: its history, who owns it, and why people seem to prefer it to other options. Then I plan to dive into some popular Git commands. I will link all the resources I pulled from in case you want more info. So sit back and relax, it’s going to be a bumpy road. Actually, it’s going to be fine, I’m not sure why I said that.

GIT

To talk about Git I first have to discuss version control. What is version control, you may ask? My response is that it is exactly what it sounds like. It’s a system that manages changes to the files you are working on so you can recall a specific version later. There are three different styles of version control: local, centralized, and distributed. This link will tell you more about all three.

Now, where does Git fit into all of this? Git is a distributed version control system (DVCS). It was created in 2005 by Linus Torvalds for development of the Linux kernel. The kernel developers had been using BitKeeper, but a breakdown with the commercial company that developed BitKeeper ended in them losing its free-of-charge status. So Linus, who also created Linux, decided to create their own DVCS. They would use what they learned from BitKeeper and improve upon it. Thus Git was born. If you want a quick giggle, read about the naming of Git here; I found it humorous.

One of the main differences between Git and other version control systems is the way Git thinks about data. Most systems use delta-based version control, where they store the information as a list of file-based changes. Git, on the other hand, takes a snapshot of the information, compares it to the existing file, and then only saves the differences. If a file hasn’t changed, then Git just links to the already existing file.

One of the key features of Git, and integral to its speed, is that everything is local. Since everything has been cloned onto your computer, it makes searching a project almost instantaneous. Another benefit of this is the ability to work offline, and then push commits once you are on a network again. Git also uses checksums to store and then refer back to that data, which makes it impossible to change anything without Git knowing. This helps with not losing data and with file corruption.

Git has three stages: modified, staged, and committed.

Modified: File has been changed, but not saved to the directory.
Staged: File has been marked to be saved.
Committed: File has been saved.

These three stages correlate to the three sections of a Git project: the working tree, the staging area, and the Git directory.

GITHUB

GitHub was launched in April 2008 by Tom Preston-Werner, Chris Wanstrath, P. J. Hyett, and Scott Chacon. It reported fairly early success with 46,000 public repositories in the first year. The numbers grew from there, with 90,000 repositories and 100,000 users the following year. The numbers just continued to grow, until they caught the attention of Microsoft, which had been using GitHub since 2012. Microsoft acquired GitHub in 2018 for $7.5 billion. Sidenote for my JavaScript users out there: Microsoft also acquired npm this year.

So GitHub is popular, but why? This one is hard to answer for me. I’ve only ever used it, so I have no reference for how it compares to other version control sites. I can say, as someone just getting started, that it is pretty user friendly, and it has a plethora of features. The community is a huge part of what makes it great. Being able to check out other people’s code, and in return have them look at yours and give feedback, is pretty great. It also has a bunch of integrations and features which just add to its usability. The one feature I currently use is GitHub Pages, which I use to host my portfolio page.

GIT COMMANDS

I’m going to go over some basic commands for Git, with links to more in-depth explanations. I will be assuming you are using GitHub to do this. The first things you will need to do are make sure Git is installed and that you have an account set up with GitHub. Also, most of what I’ll be talking about will be for use with macOS, as that’s what I use, but most will be universal. To check for Git from the terminal you can use the command:

$ git --version

If you don’t have Git, it will ask you to install it. Instead of walking you through your first repo, I will just link you to GitHub Guides’ walkthrough, which is very detailed, with images and everything. I probably couldn’t explain it better than they can.

Here are some basic Git commands with links to their documentation:

Here are some commands, also with links to documentation, that may prove useful but you might not need right out of the gate:

git rm — remove files from the working tree.
git mv — move or rename a file.
git checkout — switch branches.
git diff — show changes between commits.

The last thing I wanted to explain is the GitHub flow, just so you have an idea of how it works. It has six steps:

Create a branch.
Add commits.
Open a pull request.
Discuss and review code.
Merge.
Deploy.

CONCLUSION

I learned so much by researching Git and GitHub. Not all of it is going to make me a more productive software engineer, but knowing it definitely makes me feel more like one. Having some background knowledge about things more experienced engineers know can really help with imposter syndrome. It makes you feel like you have a little insight about this world you are trying to become part of. I know I will use all of this knowledge and put it to use; my command line abilities get better every day, to the point where I don’t have to use the mouse for as much, which makes me that much faster.

I hope you found this helpful. I will split the links to all the resources I used into primary and secondary. That way you will know where the main source of information came from. I would encourage you to check them even if you are only cherry picking them and not reading the whole thing. They provided me with so much knowledge, I can’t even really begin to explain.

PRIMARY RESOURCES

SECONDARY RESOURCES
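To tie the basic commands together, here is a minimal sketch of a typical first-repo session (the file, branch, and remote URL are placeholders, not taken from the walkthrough):

$ git init                          # create a new local repository
$ git add README.md                 # stage a file
$ git commit -m "Initial commit"    # save the staged changes
$ git checkout -b my-feature        # create and switch to a branch
$ git status                        # inspect the working tree
$ git remote add origin https://github.com/user/repo.git
$ git push -u origin my-feature     # publish the branch to GitHub

From there, opening a pull request, discussing, merging, and deploying happen on GitHub itself, following the six-step flow above.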
https://medium.com/swlh/git-and-github-a-love-story-or-something-like-that-f18f789a7144
['Robert M Ricci']
2020-12-14 01:38:24.328000+00:00
['Programming', 'Informational', 'Git', 'Software Engineering', 'Github']
Title Git Github Love Story Something Like ThatContent Github Repo continue journey becoming software engineer I’m trying identify gap knowledge Basic thing probably know research plan write post feel good way retaining knowledge acquired Towards end time Flatiron School occurred thing basically using whole time still mystery knew fork clone something knew initialize init repo add file save didnt know why’s how’s didnt really know Git even barely knew Github that’s I’m going discus git use give info Github history owns people seem prefer option plan dive popular git command link resource pulled case want info sit back relax going bumpy road Actually going fine I’m sure said GIT talk git first discus version control version control may ask response exactly sound like It’s system manages change file working recall specific version later three different style version control local centralized distributed link tell info three Git fit Git distributed version contol system created 2005 Linus Torvalds development linex kernal using Bitkeeper breakdown commercial company developed Bitkeeper ended loosing freeofcharge status Linus also created linex decided create DVCS would use learned fom Bitkeeper improve upon Thus Git born want quick giggle read naming Git found humorous One main difference Git version control system way Git think data system use delta based version control store information list filebased change Git hand take snapshot information compare existing file save difference file hasnt changed Git link already existing file One key feature Git intergreal speed everything local Since everything cloned computer make searching project almost instintanous Another benefit ability work offline pushing commits network Git also us checksum store refer back data make impossible change anything without Git knowing help losing data file corruption Git three stage modified staged committed Modified File changed saved directory Staged File marked saved Commited File saved three stage correlate three section Git project working tree staging area Git directory GITHUB Github launched April 2008 Tom PrestonWerner Chris Wanstrath P J Hyett Scott Chacon reported fairly early success 46000 public repository first year number grew 90000 repository 100000 user following year number continued grow caught attention Microsoft using Github since 2012 Micorsoft acquired Github 2018 75 billion Sidenote Javascript user Microsoft also acquired npm year Github popular one hard anwser I’ve ever used reference compare version control site say someone getting started pretty user friendly plethera feature community huge part make great able check people code return look give feed back pretty great also bunch integration feature add usability one feature currently use Github Pages use host portfolio page GIT COMMANDS I’m going go basic command Git link indepth explanation assuming using Github Things need first make sure Git installed account setup Github Also I’ll talking use MacOs thats use universal install git terminal u use command git version dont Git ask install Instead walking first repo link Github guide walkthrough detail image everything probably couldn’t explain better basic Git command link documentation command also link documentation also may prove useful might need right gate git rm — remove file working tree git mv — move rename file git checkout — switch branch git diff — show change commits last thing wanted explain Github flow idea work six step Create branch Add commits Open pull request Discuss 
review code Merge Deploy CONCLUSSION learned much researching Git Github going make productive software engineer knowing definitely make feel like one background knowledge thing experienced engineeres know really help imposter syndrome Make feel like little insight world trying become part know use knowledge put use command line ability get better everyday point don’t use mouse much make much faster hope found helpful split link resource used primary secondary way know main source information came would encourage check even cherry picking reading whole thing provided much knowledge can’t even really begin explain PRIMARY RESOURCES SECONDARY RESOURCESTags Programming Informational Git Software Engineering Github
4,244
What Is The Intel Student Ambassador Program?
In November of 2016 we announced the Intel® AI Academy for Students, created to work collaboratively with students at innovative schools and universities doing great work in the Deep Learning and Artificial Intelligence space. As part of this program we also announced the Intel® Student Ambassador Program for AI, an exciting new program for university students to engage with Intel around their work in Machine Learning, Deep Learning and Artificial Intelligence. What is the Student Ambassador program? The Student Ambassador Program is a developer affinity program, designed to assist student experts in telling their story and sharing their expertise with other student data scientists and developers. Intel is working with universities across the globe to introduce this program. Those students invited into the program as Student Ambassadors are provided technical support, resources, and marketing to advance their own work through Intel software, tools, and hardware. This program is primarily targeted toward graduate students; however, undergrads and PhD students can apply should they have the combined education, skill and time to fulfill program requirements (note: this program does not provide a college internship with Intel, nor does it provide placement for employment with Intel). What are the benefits of the Student Ambassador program? The Student Ambassador Program offers many benefits for the select students who are invited into the program. These benefits include: Formal association with Intel® Corporation via Student Ambassador title, swag, and affiliation Free software, tools and libraries from Intel Direct access to their own instance on the Intel® AI DevCloud, Intel's AI cluster, to power the development and training of deep learning models Access to early disclosure information (under NDA) during monthly meetings with Intel Direct access to Intel engineers and resources to support their work and adoption and integration of Intel® architecture Sponsored travel to support speakerships and/or training by or for the Student Ambassador Sponsored funds to assist in hosting, training, and speaking sessions at their campus to promote their work Numerous speakership and collaboration opportunities coordinated by the Intel® AI Academy for Students, exclusively for Intel® Student Ambassadors Opportunities to apply for Early Innovation micro-funding opportunities, solely for Student Ambassadors What are the expectations for Student Ambassadors? Student Ambassadors will continue in their role for as long as they are able and willing to serve, or until their graduation, whichever comes first. During their time as a Student Ambassador, each is expected to complete the following: Create an online profile and post at least one (1) project to Intel's® Developer Mesh website Deliver three (3) pieces of technical content to be shared on Intel's Developer Website discussing your own research, projects, and interests in the space of Deep Learning and Artificial Intelligence Host or speak at one (1) or more Ambassador Labs on campus, connecting with your peers and local community, providing training and insight into your work to a total of 125 students or more over the course of a calendar year I'm interested. How do I get involved?
For students or faculty interested in the Student Ambassador Program, there are multiple ways to engage with Intel and get involved: Universities can invite Intel to come on campus for a half-day workshop to discuss the program and provide initial training on deep learning and artificial intelligence technologies supporting Intel architecture. Visit this site for more information on setting up a workshop. For students directly interested in the Student Ambassador program: Post information about your research and/or student projects to Intel's® Developer Mesh website. This is a key step in our evaluation of students for the Ambassador Program. Add these projects to the Student Group on the Intel Developer Mesh website. Posting to this site helps Intel get a glimpse into students' work and helps demonstrate the student's willingness and aptitude for sharing their experience with the community. After posting a project to Developer Mesh, students can complete and submit an online candidate form. How will Intel support other students who are not eligible or able to be a Student Ambassador? Intel is also able to support and sponsor student clubs at universities. With this program Intel is able to provide sponsorship funds to select university clubs. Sponsorship funds help support a club's cost for meetings and gatherings, in exchange for the club discussing and sharing information about Intel's support of Artificial Intelligence. Select clubs will be provided with an AI training kit, including content and documentation to share and discuss during their meetings and gatherings. Select university clubs will be prioritized for guest speakerships by Intel or associated partners as resources are available. Those interested in being evaluated as an Intel Student Program University Club can submit information for candidacy here. Intel is excited about the opportunity to work and engage directly with students who are shaping and advancing new work and use cases for Artificial Intelligence via campus workshops, Student Ambassadors, and University Clubs. Our aim is to provide students and developers the resources and opportunity to have a voice and influence in driving AI forward. Learn more on the Intel Student Ambassador site, check out the AI projects on Mesh, or contact Niven Singh, Intel's Student Community Manager, directly for more information.
https://medium.com/intel-student-ambassadors/what-is-the-intel-student-ambassador-program-2cac2c855ada
['Niven Singh']
2018-10-30 18:05:37.704000+00:00
['Artificial Intelligence']
Title Intel Student Ambassador ProgramContent November 2016 announced Intel® AI Academy Students created work collaboratively student innovative school university great work Deep Learning Artificial Intelligence space part program also announced Intel® Student Ambassador Program AI exciting new program university student engage Intel around work Machine Learning Deep Learning Artificial Intelligence Student Ambassador program Student Ambassador Program developer affinity program designed assist student expert telling story share expertise student data scientist developer Intel working university across globe introduce program student invited program Student Ambassadors provided technical support resource marketing advance work Intel software tool hardware program primarily targeted toward graduate student however undergrad PhD student apply combined education skill time fulfill program requirement note program provide college internship Intel provide placement employment Intel benefit Student Ambassador program Student Ambassador Program offer many benefit select student invited program benefit include Formal association Intel® Corporation via Student Ambassador title swag affiliation Free software tool library Intel Direct access instance Intel® AI DevCloud Intel’s AI cluster power development training deep learning model Access early disclosure information NDA monthly meeting Intel Direct access Intel engineer resource support work adoption integration Intel® architecture Sponsored travel support speakership andor training Student Ambassador Sponsored fund assist hosting training speaking session campus promote work Numerous speakership collaboration opportunity coordinated Intel® AI Academy Students exclusively Intel® Student Ambassadors Opportunities apply Early Innovation microfunding opportunity solely Student Ambassadors expectation Student Ambassadors Student Ambassadors continue role long student able desire continue Student Ambassador upon graduation whichever come first time Student Ambassador expected complete following Create online profile posting least one 1 project Intel’s® Developer Mesh website Deliver three 3 piece technical content shared Intel’s Developer Website discussing research project interest space Deep Learning Artificial Intelligence Host speaker one 1 Ambassador Labs campus connecting peer local community providing training insight work total 125 student course calendar year I’m interested get involved student faculty interested Student Ambassador Program multiple way engage Intel get involved Universities invite Intel come campus halfday workshop discus program provide initial training deep learning artificial intelligence technology supporting Intel architecture Visit site information setting workshop student directly interested Student Ambassador program Post information research student project Intel’s® Developer Mesh website key step u evaluating student Ambassador Program Add project Student Group Intel Developer Mesh website Posting site help Intel get glimpse student work help demonstrate student’s willingness aptitude sharing experience community posting project Developer Mesh student complete submit online candidate form Intel support student eligible able Student Ambassador Intel also able support sponsor student club university program Intel able provide sponsorship fund select university club Sponsorship fund help support club’s cost meeting gathering exchange club discussing sharing information Intel’s support Artificial Intelligence Select club 
provided AI training kit including content documentation share discus meeting gathering Select university club prioritized guest speakership Intel associated partner resource available interested evaluated Intel Student Program University Club submit information candidacy Intel excited opportunity work engage directly student shaping advancing new work use case Artificial Intelligence via campus workshop Student Ambassadors University Clubs aim provide student developer resource opportunity voice influence driving AI forward Learn Intel Student Ambassador site check AI project Mesh contact Niven Singh Intel’s Student Community Manager directly informationTags Artificial Intelligence
4,245
Welcome To October, Ghouls
Haunt yourself a little happy. Photo by Cederic X on Unsplash Welcome, ye who enter here, to October. Macabre Month. Scary Christmas. The month where those of us who live in the dark at last have our time in the spotlight. It's pumpkins and chilling movies and all manner of treats dressed as body parts. It's falling leaves and cool air and plastic spider rings just kind of everywhere. It's a general layer of sinister fog enrobing the entire month and I don't mind telling you I'm excited. Naturally, my decor went up September first, but I don't expect anyone to get it. September is a garbage month full of hot weather when we don't want it and no discernible enjoyable traits. Additionally, how can I be expected to bask in the orange and purple glow of my twinkle lights for just 31 days per year? That hardly seems like enough. I buy skulls in bulk for heaven's sake. But today is October 1st, officially the start of Halloween season, a spooky and sinister time that brews great personal joy within my cold, black heart. My black faux candelabra is on as I write this. Dance, little orange flames, dance. Photo by Shani Silver. Why do we like this crap? Why are the shelves of Target bedecked with increasing quantities of battery-operated novelties and home exterior decor to rival the Griswolds? I have a theory that everyone loves a little creepy, it's just that October is the only time they feel confident saying so. Nobody wants to be a weirdo, but in October, we all are. There's a safety to the scariness of October. Even those who don't leave the faux raven skeletons out all year have a laugh as they dabble in the darkness. We want to be scared, but safely. Like we need the confidence that the chainsaw is fake to have a good time, you know? I have fake tombstones and a skull in a snow globe in my living room and this month is the only time I can have people over and not have to explain them. This is maybe the only month of the year I can thoroughly relax. October is a month that, when you think about it, uses death as decoration and yet somehow we're all remarkably upbeat. It's indulgence in weirdness of the highest order and it's a beautiful thing to see and offer candy to out of an automated bowl. October is the only month human bones are funny. I'm not asking for an explanation, I'm just stating the facts. So go forth, gremlins. Haunt on, you spirits. Jump into this cobweb-covered month with both feet—better still if they're wearing witch shoes at the time. I celebrate October with reckless frivolity, coating my home in black and white striped accents and draping creepy cloth over anything that will keep still. I've been running makeup trials of my costume for two weeks and I've owned its major components since summer. But for a horrible error on my part, the cat would be resting in a tiny haunted house of her own right now. Dammit for selling out. Anyway—what I'm saying is, embrace your inner creep, celebrate your sinister side, and have a very, very happy Halloween, starting right now.
https://shanisilver.medium.com/welcome-to-october-ghouls-e3e7d7cbf3a5
['Shani Silver']
2019-10-01 10:55:23.776000+00:00
['Halloween', 'Writing', 'October', 'Weird', 'Humor']
Title Welcome October GhoulsContent Haunt little happy Photo Cederic X Unsplash Welcome ye enter October Macabre Month Scary Christmas month u live dark last time spotlight It’s pumpkin chilling movie manner treat dressed body part It’s falling leaf cool air plastic spider ring kind everywhere It’s general layer sinister fog enrobing entire month don’t mind telling I’m excited Naturally decor went September first don’t expect anyone get September garbage month full hot weather don’t want discernible enjoyable trait Additionally expected bask orange purple glow twinkle light 31 day per year hardly seems like enough buy skull bulk heaven’s sake today October 1st officially start Halloween season spooky sinister time brew great personal joy within cold black heart black faux candelabra write Dance little orange flame dance Photo Shani Silver like crap shelf Target bedecked increasing quantity battery operated novelty home exterior decor rival Griswolds theory everyone love little creepy it’s October time feel confident saying Nobody want weirdo October There’s safety scariness October Even don’t leave faux raven skeleton year laugh dabble darkness want scared safely Like need confidence chainsaw fake good time know fake tombstone skull snow globe living room month time people explain maybe month year thoroughly relax October month think us death decoration yet somehow we’re remarkably upbeat It’s indulgence weirdness highest order it’s beautiful thing see offer candy automated bowl October month human bone funny I’m asking explanation I’m stating fact go forth gremlin Haunt spirit Jump cobwebcovered month feet—better still they’re wearing witch shoe time celebrate October reckless frivolity coating home black white striped accent draping creepy cloth anything keep still I’ve running makeup trial costume two week I’ve owned it’s major component since summer horrible error part cat would resting tiny haunted house right Dammit selling Anyway—what I’m saying embrace inner creep celebrate sinister side happy Halloween starting right nowTags Halloween Writing October Weird Humor
4,246
The Binary Search Algorithm
A number of months ago, with a budding interest in data science and machine learning, I decided to take MIT's 6.0001: Introduction to Computer Science and Programming in Python from their course archive. Having had only a basic understanding of Python at the time, I was caught off guard when topics such as recursion and the bisection method were introduced very early on during the first few lectures. By lecture 5, I was already working at the edge of my ability; and then I realized. This course is not meant to teach you the Python language — Python is merely the tool used to help students without prior experience in programming or computer science to develop the skill of computational thinking. So after I was floored just reading the final question of the first problem set, this is how I went about implementing my first binary search algorithm.

The method

A dramatic improvement in computational cost over exhaustive search algorithms such as guess-and-check and approximation, binary search is a more efficient algorithm that finds and returns a target value from a sorted array. It can be plainly illustrated using the following steps:

Establish the search space – or size of the array – with a low boundary and high boundary.
Divide the sum of the low and high boundaries by two to find the middle of the search space.
If the target value is equal to the value in the middle of the search space, the target value has been found. Return the target value.
Else, eliminate the half of the search space in which the target cannot possibly be found.
Repeat the steps in the new search space until the target value is found.
The target is not present in the search space if the algorithm concludes by emptying the array.

Application

Now, let's write a simple root-finding program applying this search algorithm.

"""Find the square root of x"""
x = 16
epsilon = 0.01  # This is the acceptable margin of error for this algorithm
no_of_guesses = 0
low = 1.0
high = x  # The root of x cannot be > x
target = (high + low) / 2.0  # Watch out for integer division

while abs(target**2 - x) >= epsilon:
    no_of_guesses += 1
    if target**2 < x:
        low = target
    else:
        high = target
    target = (high + low) / 2.0

print("No. of guesses =", no_of_guesses)
print(target, "is approximately the square root of", x)

The output of this code is as follows.

No. of guesses = 11
3.999267578125 is approximately the square root of 16

Per the epsilon we defined, our program has come up with a reasonably accurate answer! However, it is important to note these sticking points:

Bisection search only works when the value of the function varies monotonically with input.
When working with an array, always ensure that it is sorted.
The exit condition is crucial. Stop the search when the low boundary is no longer less than or equal to the high boundary. The relevant search space would have already been looped through by the time this condition is met.

Now with the basics under our belt, we should be able to start incorporating the bisection method into our own search algorithms. I recommend first getting started with the 'easy' category binary search questions on LeetCode. Here are a few for reference. As you progress, you will be able to move up in the categories towards the harder questions and eventually feel comfortable using this search algorithm in your projects.

Final Remarks

The binary search is a fairly efficient search algorithm and will serve you well in a variety of use cases.
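As a quick recap of the method steps listed above, here is a minimal sketch of binary search over a sorted array (a standard textbook implementation; the function name and the -1 return convention are mine, not from the original post):

def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    low, high = 0, len(arr) - 1
    while low <= high:  # stop once the search space is empty
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1  # target can only be in the right half
        else:
            high = mid - 1  # target can only be in the left half
    return -1

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))  # prints 5

Each comparison halves the remaining search space, which is where the speedup over a linear scan comes from.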
However, it is only faster than linear search algorithms, and even then only with sizeable datasets. In essence, comprehending this thoroughly will give you a great foundation in data structures and algorithms and prepare you to explore further and learn more efficient structures. Additionally, Python provides the bisect module, which contains many functions that will be useful to you while working with bisection search algorithms. Until next time, happy coding!
https://thakshilarajakaruna.medium.com/the-binary-search-algorithm-7b37eb8bb445
['Thakshila Rajakaruna']
2020-11-29 12:51:31.604000+00:00
['Binary Search', 'Beginners Guide', 'Python', 'Programming']
Title Binary Search AlgorithmContent number month ago budding interest data science machine learning decided take MIT’s 60001 Introduction Computer Science Programming Python course archive basic understanding Python time caught guard topic recursion bisection method introduced early first lecture lecture 5 already working edge ability realized course meant teach Python language — Python merely tool used help student without prior experience programming computer science develop skill computational thinking floored reading final question first problem set went implementing first binary search algorithm method dramatic step forward computational cost exhaustive search algorithm guess check approximation binary search efficient algorithm find return target value sorted array plainly illustrated using following step Establish search space – size array – low boundary high boundary Divide sum low high boundary two find middle search space target value equal value middle search space target value found Return target value Else eliminate half search space target impossible found Repeat step new search space target value found target present search space algorithm concludes emptying array Application let’s write simple rootfinding program applying search algorithm Find square root x x 16 epsilon 001 acceptable margin error algorithm noofguesses 0 low 10 high x root x cannot x target high low 20 Watch integar division abstarget2 x epsilon noofguesses 1 target2 x low target else high target target high low 20 printNo guess noofguesses printtarget approximately square root x output code follows guess 11 3999267578125 approximately square root 16 Per epsilon defined program come reasonably accurate answer However important note sticking point Bisection search work value function varies monotonically input working array always ensure sorted exit condition crucial Stop search low boundary longer le equal high boundary relevant search space would already looped time condition met basic belt able start incorporating bisection method search algorithm recommend first getting started ‘easy’ category binary search question LeetCode reference progress able move category towards harder question eventually feel comfortable using search algorithm project Final Remarks binary search fairly efficient search algorithm serve well variety use case However faster linear search algorithm sizeable datasets essence comprehending thoroughly give great foundation data structure algorithm prepare explore learn efficient structure Additionally Python provides Bisect module contains many function would useful working bisection search algorithm next time happy codingTags Binary Search Beginners Guide Python Programming
4,247
Banana Pudding and the Hegelian Dialectic
Banana Pudding and the Hegelian Dialectic Having a thesis and an antithesis requires synthesis. Having an Id and a Superego requires an Ego that can satisfy both's needs. Eating banana pudding works too. Photo by Maxim Potkin on Unsplash When I was looking for a topic for my senior thesis in college I stumbled on the work of the 19th Century German philosopher, G.W.F. Hegel in the library of the Jesuit University I attended. I didn't remember reading about the Hegelian Dialectic in philosophy classes or discussing him in any of my ethics seminars. The simple premise that we conjure up a thesis in life and are then challenged on that thesis with antithetical data was intriguing to me. That the resultant synthesis was a mere starting point for reconciliation of the next thesis was right up my alley. The understanding of metaphysical conflicts was exactly the thing I had come to College to study. I was addicted to understanding the process of a mental process. That there was a systematic way we could explain this thing I was going through called growth, which was transforming me from a little ghetto brat into a soon-to-be graduate student blew my mind. That I was simultaneously reading Carl Jung was just icing on my intellectual cake and a new theory of the self was beginning to blossom in my peanut of a brain. Pretty heady stuff for a 25-year-old high school dropout using her brain for the first time for things other than getting it stoned. I came up with a theory of my own from this journey down philosophy lane and that goes something like this: We all have a set of beliefs, norms, customs, etc. that we use to measure our existence against. Our "Thesis" of our life, in Hegelian terms. Maybe something called an Ego if we are Carl Jung. As we experience life our "Thesis" is often challenged by nonconforming experiences, urges, or ideas which we need to fit into our world view. Hegel's "Antithesis", or a nudge from our "Id" in Jung's vernacular. We then seek to assimilate these contrary experiences, whether mental or metaphysical, with our existing reality. Hegel would call this the "Synthesis", Jung might say our "Superego" police have come to restore law and order to our selves. The new Thesis we form through this Synthesis would then become our new normal. Our Jungian Ego would be adjusted and the process would begin again. I wrote that paper, which I cannot find in any of the cardboard boxes I've been lugging around for the last 35 years, for my senior project. It garnered me an "A" and has left me with a haunting feeling that the concept of synthesizing data to adjust one's inner view of the world and see that world through a new lens, repeatedly, is all the human condition really is. I think that the Hegelian Dialectic can describe most of what drives human behavior. We like being comfortable in our thoughts and feelings. Something comes along that disrupts that comfort so we have to build it in to our reality in such a way that we can continue to function. We move forward with a slightly skewed belief, a new perspective or a heightened awareness. End of story. So what does that have to do with banana pudding? Nothing. I was just eating a bowl of delicious, homemade vanilla pudding with bananas and vanilla wafers as I began writing this morning. At 6 a.m. I was wide awake with an empty day ahead of me.
Now typically I would roll over (after getting up to pee) and let myself sleep for another 2 hours, but there were these three bananas on my kitchen counter and I wanted to use them in some way that didn’t involve turning on my oven. God forbid I let three perfectly ripe bananas go to waste! That was when the banana pudding idea came to me. I had never made banana pudding, at least not from scratch, but I believed I had all of the ingredients, and Google happened to be available, so I looked up a recipe. The resulting loveliness, a smooth creamy vanilla pudding with both bananas and vanilla wafers floating in it was exactly what I needed after a tough week. Comfort food in a new form, my Thesis for using ripe bananas had been shifted by the Antithesis of pudding which meant not having to turn on my oven on a hot, humid day to bake banana bread or muffins. A small psychic shift, but a shift none the less. The resultant Synthesis, light and cool, velvety in my mouth, is just what I needed. It was what I was savoring as I began to write this, after taking a mid-morning nap and giving the mixture enough time to chill in the fridge. I had a new normal, a go to for ripe bananas, a new weapon in my comfort food arsenal. Banana pudding is my new thesis. I can really get behind this new version of myself on a humid, cloudy Saturday morning when I have nowhere to go, no one to entertain and no children to feed. My life is forever changed for the better. Hegel would be proud.
https://medium.com/illumination/banana-pudding-and-the-hegelian-dialectic-e2853b4c2e68
['Janice Maves']
2020-07-13 23:13:06.876000+00:00
['Cooking', 'Psychology', 'Self Improvement', 'Philosophy', 'Humor']
Title Banana Pudding Hegelian DialecticContent Banana Pudding Hegelian Dialectic thesis antithesis requires synthesis Id Superego requires Ego satisfy both’s need Eating banana pudding work Photo Maxim Potkin Unsplash looking topic senior thesis college stumbled work 19th Century German philosopher GWF Hegel library Jesuit University attended didn’t remember reading Hegelian Dialectic philosophy class discussing ethic seminar simple premise conjure thesis life challenged thesis antithetical data intriguing resultant synthesis mere starting point reconciliation next thesis right alley understanding metaphysical conflict exactly thing come College study addicted understanding process mental process systematic way could explain thing going called growth transforming little ghetto brat soon graduate student blew mind simultaneously reading Carl Jung icing intellectual cake new theory self beginning blossom peanut brain Pretty heady stuff 25 year old high school drop using brain first time thing getting stoned came theory journey philosophy lane go something like set belief norm custom etc use measure existence “Thesis” life Hegelian term Maybe something called Ego Carl Jung experience life “Thesis” often challenged nonconforming experience urge idea need fit world view Hegel’s “Antithesis” nudge “Id” Jung’s vernacular seek assimilate contrary experience whether mental metaphysical existing reality Hegel would call “Synthesis” Jung might say “Superego” police come restore law order self new Thesis form Synthesis would become new normal Jungian Ego would adjusted process would begin wrote paper cannot find cardboard box I’ve lugging around last 35 year senior project garnered “A” left haunting feeling concept synthesizing data adjust one’s inner view world see world though new lens repeatedly human condition really think Hegelian Dialectic describe drive human behavior like comfortable thought feeling Something come along disrupts comfort build reality way continue function move forward slightly skewed belief new perspective heightened awareness End story banana pudding Nothing eating bowl delicious homemade vanilla pudding banana vanilla wafer’s began writing morning 6 wide awake empty day ahead typically would roll getting pee let sleep another 2 hour three banana kitchen counter wanted use way didn’t involve turning oven God forbid let three perfectly ripe banana go waste banana pudding idea came never made banana pudding least scratch believed ingredient Google happened available looked recipe resulting loveliness smooth creamy vanilla pudding banana vanilla wafer floating exactly needed tough week Comfort food new form Thesis using ripe banana shifted Antithesis pudding meant turn oven hot humid day bake banana bread muffin small psychic shift shift none le resultant Synthesis light cool velvety mouth needed savoring began write taking midmorning nap giving mixture enough time chill fridge new normal go ripe banana new weapon comfort food arsenal Banana pudding new thesis really get behind new version humid cloudy Saturday morning nowhere go one entertain child feed life forever changed better Hegel would proudTags Cooking Psychology Self Improvement Philosophy Humor
4,248
Allow the Books Speak to You With Python
Step #1. Import the Python library

The library I was talking about is pyttsx3 (Python text-to-speech, version 3). It is a text-to-speech conversion library in Python. Unlike alternative libraries, it works offline and is compatible with both Python 2 and 3. You can use any editor for creating this project. I prefer using PyCharm due to its user-friendly interface and other important features. Pick your editor and then install this library by executing the below-mentioned command:

pip install pyttsx3

pip is a package manager for Python. That means it's a tool that allows you to install and manage additional libraries and dependencies that are not distributed as part of the standard library. Here, we are telling the package manager to install the specific library into our project.

Step #2. Make the Code Talk

Once the library is installed, we can import it into our project.

import pyttsx3

Then it takes three lines of code to make the code speak.

speak = pyttsx3.init()
speak.say('A.I. is going to take over the world')
speak.runAndWait()

Here, we have initialized an instance of our imported library. We used the in-built method say, in which we wrote the text that we want to convert into speech. Lastly, we call the runAndWait method for the execution. Run the above code to turn your text into speech.

Step #3. Create an Audiobook

There is a prerequisite before we move on to create our own audiobook. We will need a pdf that can be converted into an audiobook. You can choose any pdf file. If you have a pdf for any book or novel, then you can use that one. Once we have a pdf, the next thing we need is a package to read pdf files. We will again go to our editor and install the package —

pip install PyPDF2

After the package installation, we can import the package in our code to read the pdf file.

import PyPDF2

book = open('filename.pdf', 'rb')
pdfReader = PyPDF2.PdfFileReader(book)
page = pdfReader.getPage(1)  # pages are 0-indexed, so this is the second page
text = page.extractText()

Here, we have imported the package to read pdf files in our code. Then we have created an object called book to open the given pdf file. The second argument 'rb' stands for read-as-binary. After that, we call the PdfFileReader method of the imported package and pass our pdf file to it. Then we call the getPage method to extract a specific page and pull out its text by calling the method extractText in the next step. Once we get the text extracted, we can pass it to the method say that we used in step #2.

Final Code for audiobook

import pyttsx3
import PyPDF2

book = open('filename.pdf', 'rb')
pdfReader = PyPDF2.PdfFileReader(book)
pages = pdfReader.numPages  # numPages is a property, not a method
speak = pyttsx3.init()
for num in range(pages):  # read every page, from the first to the last
    page = pdfReader.getPage(num)
    text = page.extractText()
    speak.say(text)
speak.runAndWait()

That's it. In eleven lines of code, we created our own custom audiobook. Next time, instead of sitting in front of a computer going through some daunting pdf, just convert it into an audiobook and lie down while listening to it. There are several other methods present in the pyttsx3 library for customizations, like changing the voice of the reader, controlling volume, and even saving the audiobook as a .mp3 file on our system. I'll leave those things up to you for exploring the library further.
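For reference, here is a hedged sketch of those customizations. The property and method names (rate, volume, voices, voice, save_to_file) come from pyttsx3's documented API; the specific values and the output filename are just example choices:

import pyttsx3

speak = pyttsx3.init()
speak.setProperty('rate', 150)    # speaking speed in words per minute
speak.setProperty('volume', 0.8)  # volume from 0.0 to 1.0
voices = speak.getProperty('voices')
speak.setProperty('voice', voices[0].id)  # pick any installed voice
speak.save_to_file('A.I. is going to take over the world', 'audiobook.mp3')
speak.runAndWait()  # queued save_to_file work happens here

Whether .mp3 output works directly can depend on the speech engine installed on your system, so treat the file extension as an assumption to verify.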
https://towardsdatascience.com/allow-the-books-speak-to-you-with-python-e95c65030c7a
['Shubham Pathania']
2020-12-17 13:51:51.009000+00:00
['Coding', 'Software Development', 'Python', 'Data Science', 'Programming']
Title Allow Books Speak PythonContent Step 1 Import Python library library talking pyttsx3 python text speech version 3 texttospeech conversion library Python Unlike alternative library work offline compatible Python 2 3 use editor creating project prefer using Pycharm due userfriendly important feature use editor import library executing belowmentioned command pip install pyttsx3 pip package manager Python mean it’s tool allows install manage additional library dependency distributed part standard library telling package manager install specific library project Step 2 Make Code Talk library installed import project import pyttsx3 take three line code make code speak speak pyttsx3init speaksayAI going take world speakrunAndWait initialized instance imported library used inbuilt method say wrote text want convert speech Lastly call runAndWait method execution Run code turning text speech Step 3 Create Audiobook prerequisite move create audiobook need pdf converted audiobook choose pdf file pdf book novel use one pdf next thing need package read pdf file go editor install package — pip install PyPDF2 package installation import package code read pdf file import PyPDF2 book openfilenamepdfrb pdfReader PyPDF2PdfFileReaderbook page pdfReadergetPage1 text pageextractText imported package read pdf file code created object call book open given pdf file second argument ‘rb’ stand — read binary call PdfFileReader method imported package pas pdf file information calling getPage method extract specific page information extract text calling method extractText next step get text extracted pas method say created step 1 Final Code audiobook import pyttsx3 import PyPDF2 book openfilenamepdfrb pdfReader PyPDF2PdfFileReaderbook page pdfReadernumPages speak pyttsx3init num range0 pages1 page pdfReadergetPagenum text pageextractText speaksaytext speakrunAndWait That’s eleven line code created custom audiobook Next time instead sitting front computer going daunting pdf convert audiobook lie listening several method present library pyttsx3 customizations like changing voice reader controlling volume even save audiobook mp3 file system I’ll leave thing exploring library furtherTags Coding Software Development Python Data Science Programming
4,249
Dodgers and MLB Equally to Blame in Justin Turner’s COVID-19 Protocol Breach
As the Los Angeles Dodgers poured out of the dugout and bullpen in the wake of Julio Urias’ final called strike against the Tampa Bay Rays, granting them their first World Series win since 1988, nothing seemed out of place. That is, until cameras focused in on an unkempt, crimson beard celebrating amongst the throngs of players, coaches, and executives. It belonged to Dodgers third baseman Justin Turner, who had been mysteriously absent for the last few innings of the game. Turner was ordered to exit the game in the 7th when his COVID-19 test came back positive. He complied and isolated himself in a nearby doctor’s office until the Dodgers claimed victory, when he raced back out onto the field and could be seen hugging his teammates and clutching the World Series trophy. At one point, the organization gathered together for a group photo, and Turner removed his mask not six inches away from the nearest Dodger. ESPN reporter Stephen A. Smith was quick to point out that Turner had already spent the past several hours with his teammates. While that may be true, scientists have long reiterated that the more time someone spends in the presence of an infected individual, the more opportunity they have to catch the virus. Plus, the field was also filled with reporters, photographers, and the families of the players, who were likely being exposed to Turner that night for the first time. Turner’s re-entry may seem like little more than an ill-advised, individual decision he made out of pure desperation to celebrate the victory with his teammates. However, the forces that allowed him to return extend much further outward. Reports say that the league was aware that Turner’s test had come back “inconclusive” in the 2nd inning and immediately relayed the information to the Dodgers’ management. However, Turner didn’t exit until five innings later, after the test was expedited and came back positive. Several reporters and fans pointed out the futility of receiving results after the game had already started, wondering if it were simply an empty gesture to fulfill the protocol rather than protect players from harm. Had Turner been forced to leave the premises entirely, he wouldn’t have even faced the temptation to storm back onto the field and fraternize with his teammates. Authorities of both the team and the league were well within their right to take further action. However, they chose to leave the decision up to Turner himself. Some reports even claim that Dodgers higher-ups permitted Turner to be on the field for the team photo, assuring one another they would insist he leave afterward. Inaction surrounding positive COVID-19 cases in MLB teams is far from a problem specific to the Dodgers. In late July, the Miami Marlins reported that three members of their squad had tested positive. The team’s management assured the general public that they had quarantined infected individuals, implemented daily testing, and were generally taking the situation very seriously. However, Commissioner Rob Manfred still allowed the team to go forward and play their scheduled game against the Philadelphia Phillies. He cited “temperature checks” as the reason that they decided to proceed, as if fevers were the strongest indication of someone’s ability to transmit the virus. Within days, the number of positive cases on the Marlins had risen to 20. 
When Manfred was asked on The Daily podcast about Nationals star outfielder Juan Soto testing positive ahead of the season opener, he responded that “we knew we were going to have positives . . . The whole point is you have a system that’s flexible enough to deal with what’s coming. We knew it was coming.” Many are convinced that professional athletes’ superior fitness levels make them less prone to a serious bout of COVID-19, and therefore seem to advocate for more lenient protocols. While several of the MLB players who were infected seem to have recovered without significant inconvenience, not all have been so fortunate. Red Sox pitcher Eduardo Rodriguez was unable to play this year after developing myocarditis, or inflammation of the heart muscles, from the virus. Braves first baseman Freddie Freeman suffered a 104-degree fever at the height of his illness, reporting that he prayed for God not to take his life. Multiple journalists have shared stories of professional athletes who now question their future in their sport due to COVID-19 complications. If the MLB wants to retain any sense of credibility going forward, they should refrain from pretending to care about the health and safety of the players and instead be transparent about what they are: a profit-driven organization that has operated entirely out of their fear of sacrificing ratings to the virus by forgoing the 2020 season. Perhaps the most common argument that people have used in defense of Turner’s actions is the fact that he had contributed so much to the team that it just wasn’t fair for him to miss the season’s culmination. However, it also wasn’t fair for millions of people to miss the plethora of events that were inaccessible throughout 2020, including weddings, graduations, births, and, most significantly, the deaths of loved ones who passed alone in hospitals. In fact, one of the reasons that the pandemic has persisted into the fall is because of people who simply can’t stand to miss things, and therefore crowd into bars, restaurants, and houses to retain some sense of normalcy. Watching the Dodgers mob one another on the field, it’s easy to forget, for a second, about the massive amount of death that has occurred in the last several months outside of the stadium walls. While baseball has served as a welcome haven for many during an otherwise devastating year, it’s the behavior of people like Turner, the Dodgers, and Manfred that reminds us we’re not even close to getting out.
https://medium.com/top-level-sports/dodgers-and-mlb-equally-to-blame-in-justin-turners-covid-19-protocol-breach-cf093a67ed9b
['Lily Seibert']
2020-10-31 17:38:56.059000+00:00
['World Series', 'Justin Turner', 'Coronavirus', 'Dodgers']
Title Dodgers MLB Equally Blame Justin Turner’s COVID19 Protocol BreachContent Los Angeles Dodgers poured dugout bullpen wake Julio Urias’ final called strike Tampa Bay Rays granting first World Series win since 1988 nothing seemed place camera focused unkempt crimson beard celebrating amongst throng player coach executive belonged Dodgers third baseman Justin Turner mysteriously absent last inning game Turner ordered exit game 7th COVID19 test came back positive complied isolated nearby doctor’s office Dodgers claimed victory raced back onto field could seen hugging teammate clutching World Series trophy one point organization gathered together group photo Turner removed mask six inch away nearest Dodger ESPN reporter Stephen Smith quick point Turner already spent past several hour teammate may true scientist long reiterated time someone spends presence infected individual opportunity catch virus Plus field also filled reporter photographer family player likely exposed Turner night first time Turner’s reentry may seem like little illadvised individual decision made pure desperation celebrate victory teammate However force allowed return extend much outward Reports say league aware Turner’s test come back “inconclusive” 2nd inning immediately relayed information Dodgers’ management However Turner didn’t exit five inning later test expedited came back positive Several reporter fan pointed futility receiving result game already started wondering simply empty gesture fulfill protocol rather protect player harm Turner forced leave premise entirely wouldn’t even faced temptation storm back onto field fraternize teammate Authorities team league well within right take action However chose leave decision Turner report even claim Dodgers higherups permitted Turner field team photo assuring one another would insist leave afterward Inaction surrounding positive COVID19 case MLB team far problem specific Dodgers late July Miami Marlins reported three member squad tested positive team’s management assured general public quarantined infected individual implemented daily testing generally taking situation seriously However Commissioner Rob Manfred still allowed team go forward play scheduled game Philadelphia Phillies cited “temperature checks” reason decided proceed fever strongest indication someone’s ability transmit virus Within day number positive case Marlins risen 20 Manfred asked Daily podcast Nationals star outfielder Juan Soto testing positive ahead season opener responded “we knew going positive whole point system that’s flexible enough deal what’s coming knew coming” Many convinced professional athletes’ superior fitness level make le prone serious bout COVID19 therefore seem advocate lenient protocol several MLB player infected seem recovered without significant inconvenience fortunate Red Sox pitcher Eduardo Rodriguez unable play year developing myocarditis inflammation heart muscle virus Braves first baseman Freddie Freeman suffered 104degree fever height illness reporting prayed God take life Multiple journalist shared story professional athlete question future sport due COVID19 complication MLB want retain sense credibility going forward refrain pretending care health safety player instead transparent profitdriven organization operated entirely fear sacrificing rating virus forgoing 2020 season Perhaps common argument people used defense Turner’s action fact contributed much team wasn’t fair miss season’s culmination However also wasn’t fair million people miss plethora event 
inaccessible throughout 2020 including wedding graduation birth significantly death loved one passed alone hospital fact one reason pandemic persisted fall people simply can’t stand miss thing therefore crowd bar restaurant house retain sense normalcy Watching Dodgers mob one another field it’s easy forget second massive amount death occurred last several month outside stadium wall baseball served welcome many otherwise devastating year it’s behavior people like Turner Dodgers Manfred reminds u we’re even close getting outTags World Series Justin Turner Coronavirus Dodgers
4,250
What’s Going On With Those Swift Substrings?
The Root of the Problem — UTF-8

To understand how Strings work, we need to go back to the basics — Unicode and UTF-8. When we work with Strings, we have the feeling we are dealing with plain text, just an array of symbols and numbers, but this is a lie. It used to be the case back when computers worked with something called ASCII. ASCII was a way to represent all the important characters (letters, digits, symbols) as a number between 32 and 127, so every character took one byte of memory. And what about 128 to 255? Every developer could use that range for whatever they wanted, so you can imagine the mess we had when computers spread out of the US to non-English-speaking countries. That's where Unicode comes in — Unicode is a way of representing every letter and digit you can think of, in almost every language in the world, and not only that — Unicode is great for representing emojis as well. So, the Unicode character map is a four-byte map, and since most of the characters we are typing are English letters and digits, it is very inefficient to allocate four bytes for each character when, in most cases, one byte is enough. That's the final piece of the matrix -> encoding, and in this case — UTF-8. UTF-8 is a way to encode a Unicode string into smaller chunks of data so it can be more efficient.
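To make the variable-width idea concrete, here is a quick demonstration, shown in Python simply because byte counts are easy to inspect there; the same principle drives Swift's String internals:

print(len('a'.encode('utf-8')))   # 1 byte: plain ASCII letter
print(len('é'.encode('utf-8')))   # 2 bytes
print(len('€'.encode('utf-8')))   # 3 bytes
print(len('🐶'.encode('utf-8')))  # 4 bytes: an emoji

Since a character can occupy anywhere from one to four bytes, you cannot jump straight to the n-th character of a string; the bytes have to be walked from the start, and that is exactly the constraint Swift's String and Substring types are built around.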
https://medium.com/better-programming/whats-going-on-with-those-swift-substrings-83c58cedf596
['Avi Tsadok']
2020-01-20 00:28:44.973000+00:00
['Development', 'Mobile', 'Swift', 'iOS', 'Programming']
Title What’s Going Swift SubstringsContent Root Problem — UTF8 understand Strings work need go back basic — Unicode UTF8 work Strings feeling dealing plain text array symbol number lie used case back computer worked something called ASCII ASCII way represent important character letter digit symbol number 32 127 every character took one byte memory 127 255 Every developer could use whatever wanted imagine mess computer got US nonEnglish country That’s Unicode take part — Unicode way representing every letter digit think almost every language world — Unicode great representing emojis well Unicode character map fourbytes map since character typing English letter digit inefficient allocate four byte character case one byte enough That’s final piece matrix encoding case — UTF8 UTF8 way encode Unicode string smaller chunk data efficientTags Development Mobile Swift iOS Programming
4,251
20 Terminal Commands That You Must Know
----------------Manipulation With Files and Folders-----------------
1. Encrypting Files
I know Windows is not exactly famous for the security it offers, but there are still some methods that can give you a guarded feel. Encrypting files is one of them. Many Windows users rely on third-party apps to encrypt their data, but Windows also offers a built-in encryption system for securing files. Open your terminal (Win+R, type CMD, and press Enter), and point your terminal to the folder containing the files you want to secure. Then simply use the command below.
Cipher /E
Now no one logged in under a different account can access your files. If you want to decrypt the files, you can use Cipher /D .
2. File Compare
We all store our important data in files, and over time, as the data in a file changes and gets updated, it becomes very tough to find the difference between the previous and latest versions of the file. You can also relate it to two versions of a coding project. We usually create multiple versions of our project file and, in the end, forget what changes we have made. Using the file compare command of the terminal, we can find the difference between two files with just a single line of command.
fc /a File1.txt File2.txt ##Simple compare
fc /b File1.txt File2.txt ##Binary compare (Best For Images)
3. Hiding Folders
You might be thinking that this one you already know, but wait: the one you are thinking of is not good enough. We all know there is an easy way of hiding folders using right-click and then checking the "Hidden" checkbox in Properties. If you know that, then you also know the folders can still be seen if you go to View and check the "Hidden Files" checkbox in the top bar. Anyone who is using your computer can do that and easily access your hidden files. There is a much better and safer way: use the terminal. In the terminal, target the location of the parent of your desired folder and then type the below command.
Attrib +h +s +r FOLDER_NAME ## Attrib +h +s +r studymaterial
Now your folder is hidden completely, and you can't even see it by checking the Hidden Files checkbox in the top bar. To unhide the folder, you can use the command
Attrib -h -s -r FOLDER_NAME ## Attrib -h -s -r studymaterial
4. Showing File Structure
This one I found useful because most of the time, when you are working in a team on a big project, the most important thing is the file structure. One mistake in the file structure and all your efforts are wasted. You don't want to make a bigger mistake like this, and that's why CMD comes with a command that helps you display the file structure, shown below.
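The command in question here is presumably tree, the standard CMD utility for drawing a directory hierarchy (/F is its documented switch for listing files as well as folders):
tree ##Folder structure only
tree /F ##Folder structure including the files inside each folder
Run it from the root folder of your project and the whole hierarchy gets drawn right in the terminal, which makes it easy to share the structure with your team.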
https://medium.com/pythoneers/20-terminal-commands-that-you-must-know-f24ebb54c638
['Abhay Parashar']
2020-12-23 14:35:17.737000+00:00
['Tech', 'Technology', 'Productivity', 'Windows 10', 'Education']
Title 20 Terminal Commands Must KnowContent Manipulation Files Folders 1 Encrypting Files Know window much famous security offer still method give guarded feel Encrypting Files one Many window user use thirdparty apps encrypt data window also offer inbuild encryption system securing file Open Terminal WinR Type CMD Press Enter target terminal folder file want secure simply use command Cipher E one without password access file Want Decrypt Files use Cipher 2 File Compare store important data file overtime data file change get update becomes tough find difference previous latest version file also relate two version coding project usually create multiple version project file end forgot change done Using file compare command terminal find difference two file simple line command fc File1txt File2txt Simple compare fc b File1txt File2txt Binary compare Best Images 3 Hiding Folders Might thinking one already know wait one thinking good enough know easy way hiding folder using rightclick property checking checkbox “Hidden” know also know folder seen go view check “Hidden Files” Check box top bar Anyone using computer easily access hidden file much better safe way use terminal Terminal Target location parent desired folder type command Attrib h r FOLDERNAME Attrib h r studymaterial Folder hidden completely can’t even see checking Hidden Files checkbox top bar unhide folder use command Attrib h r FOLDERNAME Attrib h r studymaterial 4 Showing File Structure one found useful time working team big project important thing file structure One Mistake file structure effort wasted don’t bigger mistake like thats CMD come command help show file structureTags Tech Technology Productivity Windows 10 Education
4,252
Never Give Up — Is One Of The Most Cliché Advice To Discover Our Passion
Never Give Up — Is One Of The Most Cliché Advice To Discover Our Passion 2 reasons why "Never Give Up" is the worst advice to follow while discovering your passion. Photo by JESHOOTS.COM on Unsplash How many times have you been told, "Never give up!" or "No one likes a quitter!"? How many times have you heard inspirational stories — (These stories are all over the damn place on Facebook or LinkedIn…) — that go something like this: "So-and-So faced countless setbacks, but you know what, he kept fighting all along" or "Mrs. A had failed 100 times, but she never gave up on her career and look where she is now"? I assume your answer would be along the lines of "Infinite times or More times than I can remember." NEVER GIVE UP!! It's probably one of the most cliché phrases you'll hear as you're building your career. I've heard this phrase more than 5000 times — or maybe more than that — in my life up till now. And there is this another one — "Winners never quit and quitters never win" — Vince Lombardi Excuse my brain while I vomit. What Nonsense!!! — Are you freaking kidding me? From childhood, we're taught to persevere, to be patient, no matter what, but sometimes that patience — that unwillingness or inability to let go — prevents us from moving forward, finding happiness, adapting to every challenge that life throws our way. Giving up does not always make you a bad person, or a failure, or whatever evil thing you have been telling yourself. Sometimes giving up means that you are mature enough to know when to cut your own losses and move on, that you have the bravery to protect your own mental health, that you're willing to take the risk of changing course. Last year in 2019, I started blogging, yep I started that. I started my blog named Lifestyle on blogger.com. I was very excited about that and I thought maybe this is what I really wanted to do. But the problem was I didn't know what I wanted to write about. So, I started writing about anything I felt like — blogs on skincare, life routine, life …not writing much by myself but copying-pasting another author's materials (Hey, I was a beginner back then and you can't blame me for plagiarizing). Yeah so, I kept writing — but something was wrong, something just didn't feel right. I started having conflicting thoughts — Why am I even doing this? What the hell is the point? Will it even be worth it or not? But completely ignoring the one question that mattered — Is this what I really want to do? — I kept trying, repeating to myself "Don't give up. Don't give up…" again and again, and after some time, when things didn't work out despite so much effort, I freaking felt like a complete mess — a f*cked up mess. So, I quit then and there. I didn't quit because I can't do it. I quit because I felt f*cked up — because I didn't know what exactly I was doing or why I was doing it. It is the same with all of us: we keep doing what we're doing without ever stopping to ask whether this is what we want to do. The more times we read the "Never Give Up" phrase, the more the thought of — "We'll not give up" — gets embedded deep inside our brain. "Keep trying. Keep going." "Don't stop. You can do this. Just try once more." "If you give up, then you're a loser." We keep ignoring the need to ask ourselves — Why am I doing what I am doing? Is this what I want to do? — and just keep trying again and again, no matter how many times we fail.
Obviously, there is nothing wrong with trying again and again; it’s the mantra we all need to follow to reach success. But it’s only effective when we are pursuing what we really want to do. Are you really doing what you want to do? If yes, then there’s no problem for you; if no, then there is a problem — a problem that can ruin your career. Here are 2 reasons why “Never Give Up” is cliché advice to follow while discovering your passion
https://medium.com/live-your-life-on-purpose/never-give-up-is-one-of-the-most-clich%C3%A9-advice-to-discover-our-passion-b836b234e602
[]
2020-12-24 14:01:03.145000+00:00
['Life Lessons', 'Inspiration', 'Productivity', 'Self Improvement', 'Life']
Title Never Give — One Cliché Advice Discover PassionContent Never Give — One Cliché Advice Discover Passion 2 reason “Never Give Up” worst advice follow discovering passion Photo JESHOOTSCOM Unsplash many time told “Never give up” “No one like quitter” many time heard inspirational story — story damn place Facebook Linkedin… — go something like “SoandSo faced countless setback know kept fighting along” “Mrs failed 100 time never gave career look now” assume answer would along line “Infinite time time remember” NEVER GIVE It’s probably one cliché phrase you’ll hear you’re building career I’ve heard phrase 5000 time — maybe — life till another one — “Winners never quit quitter never win” — Vince Lombardi Excuse brain vomit Nonsense — freaking kidding childhood we’re taught persevere patient matter sometimes patience— unwillingness inability let go — prevent u moving forward finding happiness adapting every challenge life throw way Giving always make bad person failure whatever evil thing telling Sometimes giving mean mature enough know cut loss move bravery protect mental health you’re willing take risk changing course Last year 2019 started blogging yep started started blog named Lifestyle bloggercom excited thought maybe really wanted problem don’t know want write started writing anything feel like — blog skincare life routine life …not writing much copyingpasting another author’s materialsHey beginner back can’t blame plagiarizing Yeah keep writing — something wrong something didn’t feel right started conflicting thought — even hell point even worth completing ignoring asking answer question — really want — keep trying repeating “Don’t give Don’t give up…” time thing didn’t work putting much effort freaking felt like complete mess — fcked mess quit didn’t quit can’t quit feel fcked — don’t know exactly u keep we’re without even acknowledging fact want time read “Never Give Up” phrase thought — “We’ll give up” — get embedded deep inside brain “Keep trying Keep going” “Don’t stop try more” “If give you’re loser” keep ignoring need ask — want — keep trying despite many time fail kept going Obviously nothing wrong trying it’s mantra need follow reach level success it’s efficient trying really want really really want yes problem problem — problem ruin career 2 reason “Never Give Up” cliché advice — follow — discovering passionTags Life Lessons Inspiration Productivity Self Improvement Life
4,253
Kubernetes Just Deprecated Docker Support. What Now?
Kubernetes Just Deprecated Docker Support. What Now? Kat Cosgrove tweeted this on December 2: Let me transcribe the whole thread for you here if you’re not a Twitter user: “So, Kubernetes is deprecating Docker support and you’re either nervous or confused. That’s okay! I would like to help you understand what’s happening. A thread! 1/10 From Kubernetes v1.20, you will receive a deprecation warning for Docker. After that, you will need to use a different container runtime. Yes, this will break your clusters. You might think that Docker == Kubernetes. Not so! 2/10 The thing we call Docker is actually an entire tech stack, which includes a thing called containerd as well as some other stuff, like some fancy UX changes that make it easier for humans to interact with. Containerd is a high-level container runtime by itself. 3/10 Kubernetes doesn’t need all of that fancy UX stuff, though. It just needs the container runtime. Using Docker, the whole stack, as your container runtime means Kubernetes has to use something called dockershim to interact with the parts it actually needs. 4/10 This is because Docker isn’t CRI (Container Runtime Interface) compliant. Dockershim allows us to get around that, but it also means we have an entirely separate thing to maintain just so we can use Docker as our runtime. 5/10 This kind of sucks. It’s inconvenient. The solution is to cut out the abstraction and just use containerd as our container runtime in Kubernetes. Because, again, Kubernetes isn’t a human — it doesn’t need the UX enhancements. 6/10 So, you don’t need to panic. Docker isn’t dead (yet), and it still has its uses. You just can’t use it as your container runtime in Kubernetes anymore. After the next version, you need to switch to containerd. 7/10 Yes, you COULD just stay on an old version of Kubernetes. No, you absolutely should not, or else @IanColdwater will haunt your clusters. 8/10 The Kubernetes docs for container runtimes are here, with info about using containerd or CRI-O: https://kubernetes.io/docs/setup/production-environment/container-runtimes/… 9/10 Anyway, I hope this helped allay some anxiety or misunderstandings. If you’re still confused, that’s okay! Ask questions! This is REALLY complicated. Your questions aren’t stupid, even if they’re simple! 10/10 BONUS TWEET: Yes, Kubernetes will still run images built by Docker! TL;DR not a whole lot will change for devs, those images are still compliant with OCI (Open Container Initiative) and containerd knows what to do with them.”
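If you want to check what your own nodes are running today, kubectl can already tell you. The wide output for nodes includes a container runtime column, and the runtime version is also exposed on each Node object. These are standard kubectl commands, though the exact column layout may vary slightly between versions:

# List nodes with their container runtime (look for docker:// vs containerd://)
kubectl get nodes -o wide

# Or print just each node's name and runtime version
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'

If the runtime shows up as docker://, that node relies on dockershim and will need to move to a CRI-compliant runtime such as containerd or CRI-O before support is removed.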
https://medium.com/better-programming/kubernetes-just-deprecated-docker-support-e86d2327afad
['Edgar Rodríguez']
2020-12-07 18:29:27.416000+00:00
['Kubernetes', 'Docker', 'Container Orchestration', 'Programming', 'Containerd']
Title Kubernetes Deprecated Docker Support NowContent Kubernetes Deprecated Docker Support Kat Cosgrove tweeted December 2 Let transcribe whole thread you’re Twitter user “So Kubernetes deprecating Docker support you’re either nervous confused That’s okay would like help understand what’s happening thread 110 Kubernetes v120 receive deprecation warning Docker need use different container runtime Yes break cluster might think Docker Kubernetes 210 thing call Docker actually entire tech stack includes thing called containerd well stuff like fancy UX change make easier human interact Containerd highlevel container runtime 310 Kubernetes doesn’t need fancy UX stuff though need container runtime Using Docker whole stack container runtime mean Kubernetes use something called dockershim interact part actually need 410 Docker isn’t CRI Container Runtime Interface compliant Dockershim allows u get around also mean entirely separate thing maintain use Docker runtime 510 kind suck It’s inconvenient solution cut abstraction use containerd container runtime Kubernetes Kubernetes isn’t human — doesn’t need UX enhancement 610 don’t need panic Docker isn’t dead yet still us can’t use container runtime Kubernetes anymore next version need switch containerd 710 Yes COULD stay old version Kubernetes absolutely else IanColdwater haunt cluster Ghost 810 Kubernetes doc container runtimes info using containerd CRIO httpskubernetesiodocssetupproductionenvironmentcontainerruntimes… 910 Anyway hope helped allay anxiety misunderstanding you’re still confused that’s okay Ask question REALLY complicated question aren’t stupid even they’re simple 1010 BONUS TWEET Yes Kubernetes still run image built Docker TLDR whole lot change devs image still compliant OCI Open Container Initiative containerd know them”Tags Kubernetes Docker Container Orchestration Programming Containerd
4,254
React UseState Explained With Examples
React UseState Explained With Examples Learn about React UseState with practical examples Photo by Ferenc Almasi on Unsplash Introduction React provides a bunch of hooks that allow you to add features to your components. These hooks are JavaScript functions that you can import from the React package. However, hooks are available only for function-based components, so they can’t be used inside a class component. In this article, we will learn about the React UseState hook with practical examples. Let’s get right into it. What is UseState and when do we use it? As I said, React provides you with a bunch of hooks that you can use in your application. However, useState and useEffect are the two important hooks that you will be using a lot. The hook useState is a function that takes one argument, which is the initial state, and it returns two values: the current state and a function that can be used to update the state. If you log the return value of useState inside a component ( console.log(useState('Mehdi')) ), you will notice that it is an array: the first element is the argument you passed to useState (the current state), and the second element is the function you will call to update the state. The hook useState can be used when you want to change some text after clicking a button, create a counter, and so on. Simple UseState examples In order to use the hook useState , you will have to import it from the React package first. Here is an example: import React, { useState } from 'react' Now you can start using the hook in your code without any problems. Have a look at the example below: import React, { useState } from 'react' function Component() { const [name, setName] = useState('Mehdi') } Notice that we are using ES6 array destructuring inside the component. So the variable name inside the array refers to the argument of the function useState (the current state). On the other hand, the variable setName refers to the function that you will call to update the state. So this means we have a state named name and we can update it by calling the setName() function. Let’s use it in the return statement: import React, { useState } from 'react' function Component() { const [name, setName] = useState('Brad') return <h1> My name is {name} </h1> } //Returns: My name is Brad Since function components don’t have the setState() function, you need to use the setName() function to update the state. Here’s how you change the name from “Brad” to “John”: import React, { useState } from 'react' function Component() { const [name, setName] = useState('Brad') if(name === "Brad"){ setName("John") } return <h1> My name is {name} </h1> } //Returns: My name is John (Note that calling a state setter directly during render, as in this simplified example, is something to avoid in real code; you would normally call setName from an event handler.) Multiple useState When you have multiple states, you can call the useState hook as many times as you need. Here is an example: import React, { useState } from 'react' function Component() { const [name, setName] = useState('Alex') const [age, setAge] = useState(15) const [friends, setFriends] = useState(["Brad", "Mehdi"]) return <h1> My name is {name} and I'm {age} </h1> //My name is Alex and I'm 15 } Notice that the hook accepts any valid JavaScript data type, such as a string, number, boolean, array, or object. Conclusion The hook useState is one of the most important and useful React hooks to know. It basically enables function components to have their own internal state. Thank you for reading this article, I hope you found it useful. More Reading
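As a small extra sketch of the counter use case mentioned above (my own example, not from the original article), here is a component using the functional-update form of the setter, so that updates based on the previous value stay correct:

import React, { useState } from 'react'

function Counter() {
  // 0 is the initial state; setCount is the updater returned by useState
  const [count, setCount] = useState(0)

  // Passing a function to setCount reads the latest state value,
  // which is safer than setCount(count + 1) when updates are batched
  return (
    <button onClick={() => setCount(prev => prev + 1)}>
      Clicked {count} times
    </button>
  )
}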
https://medium.com/javascript-in-plain-english/react-usestate-explained-with-examples-13d6c17b4b61
['Mehdi Aoussiad']
2020-12-21 17:43:24.180000+00:00
['Programming', 'Web Development', 'React', 'JavaScript', 'Coding']
Title React UseState Explained ExamplesContent React UseState Explained Examples Learn React UseState practical example Photo Ferenc Almasi Unsplash Introduction React provides bunch hook allow add feature component hook JavaScript function import React package However hook available functionbased component can’t used inside class component article learn React UseState hook practical example Let’s get right UseState use said React provides bunch hook use application However useState useEffect two important hook using lot hook useState function take one argument initial state return two value current state function used update state tried print function useState React dev tool consoleloguseState notice return array contains argument put function useState undefined add function update state hook useState used want change text clicking button example creating counter etc Simple UseState example order use hook useState import React package first example import React useState react start using hook code without problem look example import React useState react function Component const name setName useStateMehdi Notice using ES6 array destructuring inside component variable name inside array refers argument function useState current state hand variable setName refers function add update state mean state named name update calling setName function Let’s use return statement import React useState react function Component const name setName useStateBrad return h1 name name h1 Returns name Brad Since function component don’t setState function need use setName function update Here’s change name “Brad” “John” import React useState react function Component const name setName useStateBrad ifname Brad setNameJohn return h1 name name h1 Returns name John Multiple useState multiple state call useState hook many time need example import React useState react function Component const name setName useStateAlex const age setAge useState15 const friend setFriends useStateBrad Mehdi return h1 name name Im age h1 name Alex Im 15 Notice hook receives valid JavaScript data type string number boolean array object Conclusion hook useState one important useful React hook must know Moreover hook basically enables function component internal state add feature Thank reading article hope found useful ReadingTags Programming Web Development React JavaScript Coding
4,255
Loading Data from OpenStreetMap with Python and the Overpass API
There are a number of ways to download map data from OpenStreetMap (OSM) as shown in their wiki. Of course you could download the whole Planet.osm, but you would need to free up over 800 GB as of the date of this article to have the whole data set sitting on your computer waiting to be analyzed. If you just need to work with a certain region you can use extracts in various formats such as the native .OSM (stored as XML), .PBF (a compressed version of .OSM ), Shapefile or GeoJSON. There are also different APIs available, such as the native OSM API or the Nominatim API. In this article we will only focus on the Overpass API which allows us to query specific data from the OSM data set. Quick Look at the OSM Data Model Before we start, we have to take a look at how OSM is structured. We have three basic components in the OSM data model, which are nodes, ways and relations, which all come with an id. Many of the elements come with tags which describe specific features represented as key-value pairs. In simple terms, nodes are points on the map (in latitude and longitude) as in the next image of a well documented bench in London. A way, on the other hand, is an ordered list of nodes, which could correspond to a street or the outline of a house. Here is an example of McSorley’s Old Ale House in New York which can be found as a way in OSM. The final data element is a relation, which is also an ordered list containing either nodes, ways or even other relations. It is used to model logical or geographic relationships between objects. This can be used for example for large structures as in the Palace of Versailles, which contains multiple polygons to describe the building. Using the Overpass API Now we’ll take a look at how to load data from OSM. The Overpass API uses a custom query language to define the queries. It takes some time getting used to, but luckily there is Overpass Turbo by Martin Raifer which comes in handy to interactively evaluate our queries directly in the browser. Let’s say you want to query nodes for cafes, then your query looks like this node["amenity"="cafe"]({{bbox}}); out; where each statement in the query source code ends with a semicolon. This query starts by specifying the component we want to query, which is in this case a node. We are applying a filter by tag on our query which looks for all the nodes where the key-value pair is "amenity"="cafe" . There are different options to filter by tag which can be found in the documentation. There is a variety of tags to choose from, one common key is amenity which covers various community facilities like cafe, restaurant or just a bench. To have an overview of most of the other possible tags in OSM take a look at the OSM Map Features or taginfo. Another filter is the bounding box filter, where {{bbox}} corresponds to the bounding box in which we want to search and works only in Overpass Turbo. Otherwise you can specify a bounding box by (south, west, north, east) in latitude and longitude, which can look like node["amenity"="pub"] (53.2987342,-6.3870259,53.4105416,-6.1148829); out; which you can try in Overpass Turbo. As we saw before in the OSM data model, there are also ways and relations which might also hold the same attribute. We can get those as well by using a union block statement, which collects all outputs from the sequence of statements inside a pair of parentheses as in ( node["amenity"="cafe"]({{bbox}}); way["amenity"="cafe"]({{bbox}}); relation["amenity"="cafe"]({{bbox}}); ); out; The next way to filter our queries is by element id.
Here is the example for the query node(1); out; which gives us the Prime Meridian of the World with longitude close to zero. Another way to filter queries is by area, which can be specified like area["ISO3166-1"="GB"][admin_level=2]; which gives us the area for Great Britain. We can use this now as a filter for the query by adding (area) to our statement as in area["ISO3166-1"="GB"][admin_level=2]; node["place"="city"](area); out; This query returns all cities in Great Britain. It is also possible to use a relation or a way as an area. In this case area ids need to be derived from an existing OSM way by adding 2400000000 to its OSM id, or in the case of a relation by adding 3600000000 . Note that not all ways/relations have an area counterpart (i.e. those that are tagged with area=no , and most multipolygons that don’t have a defined name=* , will not be part of areas). If we apply the relation of Great Britain to the previous example we’ll then get area(3600062149); node["place"="city"](area); out; Finally we can specify the output of the queried data, which is configured by the out action. Until now we specified the output as out; , but there are various additional values which can be appended. The first set of values can control the verbosity or the detail of information of the output, such as ids , skel , body (default value), tags , meta and count as described in the documentation. Additionally we can add modifications for the geocoded information. geom adds the full geometry to each object. This is important when returning relations or ways that have no coordinates associated and you want to get the coordinates of their nodes and ways. For example the query rel["ISO3166-1"="GB"][admin_level=2]; out geom; would otherwise not return any coordinates. The value bb adds only the bounding box to each way and relation and center adds only the center of the same bounding box (not the center of the geometry). The sort order can be configured by asc and qt , sorting by object id or by quadtile index respectively, where the latter is significantly faster. Lastly, by adding an integer value, you can set the maximum number of elements to return. After combining what we have learnt so far we can finally query the location of all Biergarten in Germany area["ISO3166-1"="DE"][admin_level=2]; ( node["amenity"="biergarten"](area); way["amenity"="biergarten"](area); rel["amenity"="biergarten"](area); ); out center; Python and the Overpass API Now we should have a pretty good grasp of how to query OSM data with the Overpass API, but how can we use this data now? One way to download the data is by using the command line tools curl or wget. In order to do this we need to access one of the Overpass API endpoints, where the one we will use goes by the format http://overpass-api.de/api/interpreter?data=query . When using curl we can download the OSM XML of our query by running the command curl --globoff "http://overpass-api.de/api/interpreter?data=node(1);out;" where the previously crafted query comes after data= and the query needs to be urlencoded. The --globoff is important in order to use square and curly brackets without being interpreted by curl. This query returns the following XML result <?xml version="1.0" encoding="UTF-8"?> <osm version="0.6" generator="Overpass API 0.7.54.13 ff15392f"> <note>The data included in this document is from www.openstreetmap.org.
The data is made available under ODbL.</note> <meta osm_base="2018-02-24T21:09:02Z"/> <node id="1" lat="51.4779481" lon="-0.0014863"> <tag k="historic" v="memorial"/> <tag k="memorial" v="stone"/> <tag k="name" v="Prime Meridian of the World"/> </node> </osm> There are various output formats to choose from in the documentation. In order to download the query result as JSON we need to add [out:json]; to the beginning of our query as in curl --globoff "http://overpass-api.de/api/interpreter?data=[out:json];node(1);out;" giving us the previous XML result in JSON format. You can test the query also in the browser by accessing http://overpass-api.de/api/interpreter?data=[out:json];node(1);out;. But I have promised to use Python to get the query results. We can run our well known Biergarten query now with Python by using the requests package in order to access the Overpass API and the json package to read the resulting JSON from the query. import requests import json overpass_url = "http://overpass-api.de/api/interpreter" overpass_query = """ [out:json]; area["ISO3166-1"="DE"][admin_level=2]; (node["amenity"="biergarten"](area); way["amenity"="biergarten"](area); rel["amenity"="biergarten"](area); ); out center; """ response = requests.get(overpass_url, params={'data': overpass_query}) data = response.json() In this case we do not have to use urlencoding for our query since this is taken care of by requests.get and now we can store the data or directly use the data further. The data we care about is stored under the elements key. Each element there contains a type key specifying if it is a node, way or relation and an id key. Since we used the out center; statement in our query, we get for each way and relation a center coordinate stored under the center key. In the case of node elements, the coordinates are simply under the lat, lon keys. import numpy as np import matplotlib.pyplot as plt # Collect coords into list coords = [] for element in data['elements']: if element['type'] == 'node': lon = element['lon'] lat = element['lat'] coords.append((lon, lat)) elif 'center' in element: lon = element['center']['lon'] lat = element['center']['lat'] coords.append((lon, lat)) # Convert coordinates into numpy array X = np.array(coords) plt.plot(X[:, 0], X[:, 1], 'o') plt.title('Biergarten in Germany') plt.xlabel('Longitude') plt.ylabel('Latitude') plt.axis('equal') plt.show() Another way to access the Overpass API with Python is by using the overpy package as a wrapper. Here you can see how we can translate the previous example with the overpy package import overpy api = overpy.Overpass() r = api.query(""" area["ISO3166-1"="DE"][admin_level=2]; (node["amenity"="biergarten"](area); way["amenity"="biergarten"](area); rel["amenity"="biergarten"](area); ); out center; """) coords = [] coords += [(float(node.lon), float(node.lat)) for node in r.nodes] coords += [(float(way.center_lon), float(way.center_lat)) for way in r.ways] coords += [(float(rel.center_lon), float(rel.center_lat)) for rel in r.relations] One nice thing about overpy is that it detects the content type (i.e. XML, JSON) from the response. For further information take a look at their documentation. You can use this collected data then for other purposes or just visualize it with Blender as in the openstreetmap-heatmap project.
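One earlier detail that is handy to script is the derivation of Overpass area ids from OSM ids. A minimal sketch (the helper name is my own, not part of any library):

def to_area_id(osm_id, element_type="relation"):
    # Overpass derives area ids by adding a fixed offset to the OSM id:
    # 2400000000 for ways, 3600000000 for relations
    offset = 3600000000 if element_type == "relation" else 2400000000
    return osm_id + offset

print(to_area_id(62149))  # 3600062149, the Great Britain relation as an area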
This brings us back to the title image, which shows, as you might have guessed, the distribution of Biergarten in Germany. Image from openstreetmap-heatmap Conclusion Starting from the need to get buildings within certain regions, I discovered how much there is to explore in OSM, and I got lost in the geospatial rabbit hole. It is exciting to see how much interesting data in OSM is left to explore, including even the possibility of finding 3D data of buildings in OSM. Since OSM is based on contributions, you could also explore how OSM has been growing over time and how many users have been joining, as in this article which uses pyosmium to retrieve OSM user statistics for certain regions. I hope I inspired you to go forth and discover curiosities and interesting findings in the depths of OSM with your newly equipped tools. Thanks for reading! If you enjoyed the post, go ahead and show the clap button some love and follow me for more upcoming articles. Also, feel free to connect with me on LinkedIn or Twitter. This article was originally published on janakiev.com.
https://towardsdatascience.com/loading-data-from-openstreetmap-with-python-and-the-overpass-api-513882a27fd0
['Nikolai Janakiev']
2018-08-17 13:14:48.564000+00:00
['Python', 'Data Science', 'Towards Data Science', 'GIS', 'Openstreetmap']
Title Loading Data OpenStreetMap Python Overpass APIContent number way download map data OpenStreetMap OSM shown wiki course could download whole Planetosm would need free 800 GB date article whole data set sitting computer waiting analyzed need work certain region use extract various format native OSM stored XML PBF compressed version OSM Shapefile GeoJSON also different API possible native OSM API Nominatim API article focus Overpass API allows u query specific data OSM data set Quick Look OSM Data Model start take look OSM structured three basic component OSM data model node way relation come id Many element come tag describe specific feature represented keyvalue pair simple term node point map latitude longitude next image well documented bench London way hand ordered list node could correspond street outline house example McSorley’s Old Ale House New York found way OSM final data element relation also ordered list containing either node way even relation used model logical geographic relationship object used example large structure Palace Versailles contains multiple polygon describe building Using Overpass API we’ll take look load data OSM Overpass API us custom query language define query take time getting used luckily Overpass Turbo Martin Raifer come handy interactively evaluate query directly browser Let’s say want query node cafe query look like nodeamenitycafebbox statement query source code end semicolon query start specifying component want query case node applying filter tag query look node keyvalue pair amenitycafe different option filter tag found documentation variety tag choose one common key amenity cover various community facility like cafe restaurant bench overview possible tag OSM take look OSM Map Features taginfo Another filter bounding box filter bbox corresponds bounding box want search work Overpass Turbo Otherwise specify bounding box south west north east latitude longitude look like nodeamenitypub 5329873426387025953410541661148829 try Overpass Turbo saw OSM data model also way relation might also hold attribute get well using union block statement collect output sequence statement inside pair parenthesis nodeamenitycafebbox wayamenitycafebbox relationamenitycafebbox next way filter query element id example query node1 give u Prime Meridian World longitude close zero Another way filter query area specified like areaISO31661GBadminlevel2 give u area Great Britain use filter query adding area statement areaISO31661GBadminlevel2 nodeplacecityarea query return city Great Britain also possible use relation way area case area id need derived existing OSM way adding 2400000000 OSM id case relation adding 3600000000 Note waysrelations area counterpart ie tagged areano multipolygons don’t defined name part area apply relation Great Britain previous example we’ll get area3600062149 nodeplacecityarea Finally specify output queried data configured action specified output various additional value appended first set value control verbosity detail information output id skel body default value tag meta count described documentation Additionally add modification geocoded information geom add full geometry object important returning relation way coordinate associated want get coordinate node way example query relISO31661GBadminlevel2 geom would otherwise return coordinate value bb add bounding box way relation center add center bounding box center geometry sort order configured asc qt sorting object id quadtile index respectively latter significantly faster Lastly adding 
integer value set maximum number element return combining learnt far finally query location Biergarten Germany areaISO31661DEadminlevel2 nodeamenitybiergartenarea wayamenitybiergartenarea relamenitybiergartenarea center Python Overpass API pretty good grasp query OSM data Overpass API use data One way download data using command line tool curl wget order need access one Overpass API endpoint one look go format httpoverpassapideapiinterpreterdataquery using curl download OSM XML query running command previously crafted query come data query need urlencoded globoff important order use square curly bracket without interpreted curl query return following XML result xml version10 encodingUTF8 osm version06 generatorOverpass API 075413 ff15392f noteThe data included document wwwopenstreetmaporg data made available ODbLnote meta osmbase20180224T210902Z node id1 lat514779481 lon00014863 tag khistoric vmemorial tag kmemorial vstone tag kname vPrime Meridian World node osm various output format choose documentation order download query result JSON need add outjson beginning query giving u previous XML result JSON format test query also browser accessing httpoverpassapideapiinterpreterdataoutjsonnode1out promised use Python get resulting query run well known Biergarten query Python using request package order access Overpass API json package read resulting JSON query import request import json overpassurl httpoverpassapideapiinterpreter overpassquery outjson areaISO31661DEadminlevel2 nodeamenitybiergartenarea wayamenitybiergartenarea relamenitybiergartenarea center response requestsgetoverpassurl paramsdata overpassquery data responsejson case use urlencoding query since taken care requestsget store data directly use data data care stored element key element contains type key specifying node way relation id key Since used center statement query get way relation center coordinate stored center key case node element coordinate simply lat lon key import numpy np import matplotlibpyplot plt Collect coords list coords element dataelements elementtype node lon elementlon lat elementlat coordsappendlon lat elif center element lon elementcenterlon lat elementcenterlat coordsappendlon lat Convert coordinate numpy array X nparraycoords pltplotX 0 X 1 plttitleBiergarten Germany pltxlabelLongitude pltylabelLatitude pltaxisequal pltshow Another way access Overpass API Python using overpy package wrapper see translate previous example overpy package import overpy api overpyOverpass r apiquery areaISO31661DEadminlevel2 nodeamenitybiergartenarea wayamenitybiergartenarea relamenitybiergartenarea center coords coords floatnodelon floatnodelat node rnodes coords floatwaycenterlon floatwaycenterlat way rways coords floatrelcenterlon floatrelcenterlat rel rrelations One nice thing overpy detects content type ie XML JSON response information take look documentation use collected data purpose visualize Blender openstreetmapheatmap project brings u back title image show might guessed distribution Biergarten Germany Image openstreetmapheatmap Conclusion Starting need get building within certain region discovered many different thing possible discover OSM got lost geospatial rabbit hole exciting see much interesting data OSM left explore including even possibility find 3D data building OSM Since OSM based contribution could also explore OSM growing time many user joining article us pyosmium retrieve OSM user statistic certain region hope inspired go forth discover curiosity interesting finding depth OSM newly equipped tool 
Thanks reading enjoyed post go ahead show clap button love follow upcoming article Also feel free connect LinkedIn Twitter article originally published janakievcomTags Python Data Science Towards Data Science GIS Openstreetmap
4,256
Design Thinking at Cisco
I spend a lot of time explaining the value of design — the fact that design is not about pixels or mockups or wireframes. Design is about finding the right problem to solve, and then solving it in the best way possible. Good designers are problem solvers. Great designers are problem finders! This process of finding and solving problems is referred to as Design Thinking, and as part of the design transformation at Cisco we set out to create a design thinking framework that would not only be used to up the game of our already amazing designers, but would also be something to enable the thousands of engineers who might not always have the benefit of working with a design partner. The framework explained The Cisco Design Thinking framework consists of three phases: Discover, Define, and Explore, and two guard-rails: making things and validating with users. Let’s take a closer look at the three phases. Discover In the first phase of Cisco Design Thinking, your priority is getting to know your users — with empathy. By empathizing with users and truly understanding their core needs, current frustrations, and related pain points, you can uncover the valuable opportunities that drive true innovation. We do this by immersing ourselves in the world of our user through research techniques like interviews and contextual inquiries. We interpret the information we are capturing through artifacts like journey maps, empathy maps, and storyboards. We aim to capture the current state of the world for our user, and then reframe the information in order to draw insight from it. We document any opportunities using this standard format: Define Once you have documented your opportunity, you and your team will likely identify many problems that will need to be solved. But which ones matter most to your users? Your goal in this phase is to prioritize three — or fewer — clearly articulated problems that your solution will address on behalf of your users. We use a template to capture these problem statements, and they get appended to the opportunity statement so that it looks like this: Once the team has settled on their opportunity and problem statements, it’s then time to start creating solutions, and for this we get into the Explore phase. Explore You have a clear sense of who you’re building for, the opportunity at hand, and the key problems to be solved. Now it’s time for the team to start identifying creative solutions. The key to the Explore phase is to ensure that the solutions developed explicitly solve the prioritized problems documented in the Define phase. I think of the Explore phase as a continual loop of learning. We take a problem and begin by exploring as many solutions as possible. Pick the most desirable solution, and figure out how to quickly build an experiment that tests it. Run the experiment. Did it pass or fail? Continue to iterate on this one until you are happy, at which point you can move on to another problem. This constant looping of “build, measure, learn” is exactly what the Lean Startup methodology is all about. There is a lot of overlap between Design Thinking and Lean Startup and even Agile. In fact the way I think about it is this: Pulling it all together All of these pieces of the framework come together and are applied first as a way to develop a high level direction for what we are doing, and then as a way to accelerate learning through the delivery of the proposed solutions.
Don’t look at the framework as a progressive linear process, but look at it as a set of tools that you use depending on your current challenge. On any given project you will flip around between the different phases in order to achieve the outcome. The main things to remember are: Always make sure you are focussed on the right problem, before trying to solve it. Design thinking is a team sport. Collaboration with cross-functional teams is critical to its success. When you are creating solutions, always focus on running fast experiments that answer what you need to learn, or validate assumptions you are making. Supporting the framework We have created a few supporting artifacts that enable the teams to practice this new framework. The most impactful artifact has been the field guide. This beautifully designed practical guide contains an explanation of each of the phases, along with lots of examples. The second half of the book is filled with tools and exercises that can be used along the way. What’s next? The Cisco Design Thinking framework is already being used by teams around the globe and is not just focussed on product development. We have executives, sales, HR, design, and engineering all using it to great effect. Our next steps involve developing a learning framework around CDT that will allow us to train four different levels of Design Thinkers: Enthusiasts, Practitioners, Facilitators, and Coaches. If you are interested in learning more about what we learned along the way, please don’t hesitate to reach out or leave a comment on this post. Also, be sure to check out my previous article about why I chose Design Thinking for my team.
https://medium.com/cisco-design-community/the-cisco-design-thinking-framework-1263c3ce2e7c
['Jason Cyr']
2018-01-15 23:26:34.179000+00:00
['Design Thinking', 'Design Process', 'Innovation', 'Design', 'User Experience']
Title Design Thinking CiscoContent spend lot time explaining value design — fact design pixel mockups wireframes Design finding right problem solve solving best way possible Good designer problem solver Great designer problem finder process finding solving problem referred Design Thinking part design transformation Cisco set create design thinking framework would used game already amazing designer would also something enable thousand engineer might always benefit working design partner framework explained Cisco Design Thinking framework consists three phase Discover Define Explore two guardrail Making thing validating user Lets take closer look three phase Discover first phase Cisco Design Thinking priority getting know user — empathy empathizing user truly understanding core need current frustration related pain point uncover valuable opportunity drive true innovation immersing world user research technique like interview contextual inquiry interpret information capturing artifact like Journey map empathy map story board aim capture current state world user reframe information order draw insight document opportunity using standard format Define documented opportunity team likely identify many problem need solved one matter user goal phase prioritize three — fewer — clearly articulated problem solution address behalf user use template capture problem statement get amended opportunity statement look like team settled opportunity problem statement time start creating solution get explore phase Explore clear sense you’re building opportunity hand key problem solved it’s time team start identifying creative solution key Explore phase ensure solution developed explicitly solve prioritized problem documented Define phase think explore phase continual loop learning take problem begin exploring many solution possible Pick desirable solution figure quickly build experiment test Run experiment pas fail Continue iterate one till happy point move onto another problem constant looping “build measure learn” exactly Lean Startup methodology lot overlap Design Thinking Lean Startup even Agile fact way think Pulling together piece framework come together applied first way develop high level direction way accelerate learning delivery proposed solution Don’t look framework progressive linear process look set tool use depending current challenge given project flip around different phase order achieve outcome main thing remember Always make sure focussed right problem trying solve Design thinking team sport Collaboration cross functional team critical success creating solution always focus running fast experiment answer need learn validate assumption making Supporting framework created supporting artifact enable team practice new framework impactful artifact field guide beautifully designed practical guide contains explanation phase along lot example second half book filled tool exercise used along way Whats next Cisco Design Thinking framework already used team around globe focussed product development executive sale HR design engineering using great effect next step involve developing learning framework around CDT allow u train four different level Design Thinkers Enthusiasts Practitioners Facilitators Coaches interested learning learned along way please don’t hesitate reach leave comment post Also sure check previous article chose focus chose Design Thinking teamTags Design Thinking Design Process Innovation Design User Experience
4,257
The practical benefits of augmented analytics
The practical benefits of augmented analytics How does augmented analytics really benefit your organisation? We break down its many practical advantages in usability, time savings and the value it unlocks across the entire analytics life cycle. Augmented analytics uses emerging technologies like automation, artificial intelligence (AI), machine learning (ML) and natural language generation (NLG) to automate data manipulation, monitoring and analysis tasks and enhance data literacy. In our previous blog, we covered what augmented analytics actually is and what it really means for modern business intelligence. In this article, we focus on helping you learn the many practical benefits that augmented analytics can bring to your business, across the three core pillars of the analytics lifecycle: preparation, analysis and insight delivery. #1 — Augmented data preparation Traditionally, database administrators bring critical data together from multiple sources and carefully prepare it for integration with downstream systems and analytics tools. Augmented data preparation is a component of augmented analytics that makes this procedure less reliant on manual processes and ensures best practices are followed at each step of the preparation phase. How augmented analytics enhances data preparation Automatic data profiling: Automatic profiling can recommend the best approaches to cleaning, enriching, manipulating and modelling data; combined with the team’s existing knowledge, this can fundamentally improve data management (cataloguing, metadata, data quality) and help teams continuously refine their data preparation processes. As an example, Yellowfin Data Prep provides automated recommendations to best fix or curate data as part of its Suggest Actions feature. Auto-detection for less repetition: Augmented data preparation can use automation to handle mundane, routine but necessary data transformation steps, like physically joining schemas together or comparison calculations. Algorithms auto-detect schemas and join data from different sources together, without the need for manual intervention. Streamlined data harmonisation: Augmented data preparation enables admins to integrate more data sources, from ad-hoc, external or trusted sources, using automation and machine-led algorithms, faster than traditional harmonisation approaches. An example of augmented data preparation from Gartner details a business in the Consumer Packaged Goods (CPG) sector that originally required five people to access, clean, blend, model and integrate data across various data systems (point-of-sale, pricing, Nielsen), which took five weeks. Augmented data preparation reduced this process to one person and one hour, with one-click updates. #2 — Augmented analysis Sifting through and analysing prepared data before it is deployed to the wider business is traditionally handled by data analysts. But the vast volumes of data organisations accumulate today mean it’s just not possible to look at every relevant data point from every angle in a timely manner. How augmented analysis enhances data analysis Image via Yellowfin Automated visualisations: Analysts can leverage automated visualisations for their analysis efforts; these contextualised visualisations are automatically generated using the machine learning capabilities of augmented analytics, based on the best-fit options for representing said metrics. This saves analysts a lot of time and also helps them understand the data more deeply.
Continuous ‘always on’ analysis: Machines now make it possible to keep monitoring and analysis of data running continuously. If the system finds a type of change (a spike, volatility), it will auto-analyse the anomaly and bring it to the surface, as opposed to relying on analysts to spend considerable time looking for every single instance of relevant change and potentially missing it due to lack of time or data fatigue. Reduced analytical bias: Analysts always make assumptions when trying to find answers (we have to start somewhere). Augmented analytics can help reduce bias by running automated analysis across a bigger range of data, with a focus on factors of statistical importance, broadening future search efforts. By applying automated analysis in parallel with the analyst’s manual oversight, the risk of missing important insights is reduced. Time-to-insight: Rather than having to manually test all possible combinations of data, analysts can apply sophisticated ML algorithms to auto-detect hidden correlations, clusters, outliers, relationships and segments. The most statistically significant findings are presented via smart visualisations, optimised for the analyst’s further interpretation and action. This helps specialists examine highly pertinent insights in more depth and provide more detail when they pass data to users — it’s also a lot faster. As a real-world example of the practical impact of using augmented analysis, Yellowfin’s augmented analytics features (Signals, Assisted Insights) enabled aviation manufacturer AeroEdge’s analysts to identify hidden patterns that lead to manufacturing issues and address them 80% faster. This also increased cost awareness for the operators in charge of analysing data, which surfaced further opportunities to improve business profitability.
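To make the ‘always on’ idea concrete, here is a toy sketch of automated spike detection using a rolling z-score. It is a deliberately simplified illustration of the general technique, not how Yellowfin or any particular product implements it:

import statistics

def find_spikes(series, window=7, threshold=3.0):
    # Flag points lying more than `threshold` standard deviations
    # away from the mean of the preceding window of values
    spikes = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent)
        if stdev > 0 and abs(series[i] - mean) / stdev > threshold:
            spikes.append(i)
    return spikes

daily_page_views = [120, 118, 125, 130, 122, 119, 127, 124, 410, 126]
print(find_spikes(daily_page_views))  # [8], the unexpected spike, surfaced automatically

A monitoring job running this kind of check on every tracked metric is, in effect, analysis that never sleeps; production systems add seasonality handling, smarter baselines and alert routing on top.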
Signals drove the immediate conversation with our ad agency in which we discovered our agency made a mistake and did not limit our paid ads to the locations we had chosen. Without Signals and Yellowfin’s augmented insight delivery capabilities, it would be like trying to find that data ‘needle’ beneath a highly aggregated dashboard. Augmented analytics help to ensure these types of hidden insights are consistently brought to the surface. Instant, explained answers: Augmented insight delivery features like Assisted Insights leverage NLG to dynamically generate high-level explanations and comparisons that break down findings in a way that users of varying knowledge can understand. Coupled with autogenerated visualisations, variance analysis and calculation creation, this gives an instant, visual understanding when they query their data. Most importantly, it helps improve the user’s data literacy, encouraging further data-driven decision-making and data-led cultures as a whole. Personalised insights: Ranking algorithms learn the more users interact with their analytics tools, and rank what’s most relevant over time. With this sort of augmented capability in place, users can better understand exactly what critical areas of their business metrics they should be looking at, and be assured that their automated BI is delivering them more pertinent information to analyse over time, which also gradually opens up otherwise unseen avenues of insight. Augmented analytics: Why it’s becoming essential for enterprise in 2021 Next year, global advisory firm Gartner predict augmented analytics to be a dominant driver of new purchases of data analytics and BI platforms, making it clear it’s an area of analytical capability that is no longer seen by industry leaders as a far-flung future. By familiarising yourself with the many practical ways augmented analytics has been benefiting organisations today, you can better prepare for a future implementation and ensure you retain a competitive edge as your analytics needs continue to evolve.
https://medium.com/dataseries/the-practical-benefits-of-augmented-analytics-5a6fa4031c0b
['Daniel Shaw-Dennis']
2020-12-11 10:12:07.760000+00:00
['AI', 'Augmented Analytics', 'Innovation', 'Analytics', 'Data']
4,258
Learn to code smarter: How to become a senior software engineer quickly
Learn to code smarter: How to become a senior software engineer quickly Shanea · Nov 13 · 6 min read

When I first taught myself to code, I noticed a gap. Even though I'd been teaching myself to code for five years, I didn't have the skills necessary to reach the next level. I was technical… but not technical enough. It wasn't just me who noticed this skill gap either. After years working to become a Product Manager at Google, I finally had the opportunity to interview for the role. However, after passing five internal interviews, I was told by the hiring manager that I would never pass the technical ladder transfer interviews. The job was given to someone else. After all my hard work, I felt defeated. My insecurity, that I wasn't technical enough, was staring me in the face. Despite all the hours I'd spent building mobile apps and learning how to develop in Java, JavaScript, and Python, I wasn't skilled enough to snag my dream job. I wanted to be a better software engineer and product manager, so I got a (second) bachelor's degree, this time in Computer Science. Because I'd been in the job market before this degree, I gained unique insight into how Computer Science is taught, as well as how those lessons directly translate into our roles as engineers. Now that I have a Computer Science degree, have put in the hours as a technical product management executive, and have founded my own tool for developers, I understand what was preventing me from excelling. Although I'm happy about where I've landed, this knowledge shouldn't be locked in a Computer Science degree. Today, I'm sharing how you can learn to code smarter so that you can become a senior software engineer quickly. Even if you have a ways to go, this knowledge will help you become better than you were yesterday.

Why become a senior software engineer? First off, what's so great about becoming a senior software engineer? Why go through the trouble? In my experience, senior software engineers are trusted to solve harder problems and handle more complexity. Although this can be challenging, it also gives you the opportunity to build something that's rewarding and impactful. It gives you a seat at the table. Not only that, but being a senior software engineer gives you the chance to mentor and provide insights to others. Often, it may lead to managing your own group. And let's not forget about the senior software engineer salary. On average, senior software engineers make 92% more than junior ones, according to PayScale. For me, becoming a senior leader changed the trajectory of my career. While I was completing my degree, I landed a senior role at eBay. Not only did I get the role I wanted, but I was also able to skip the junior level. In doing so, I instantly tripled my salary. In that first year, I took seven separate products from ideation to launch, giving me enough experience to get into even higher level roles.

How to become a senior software engineer quickly If you want to advance in your engineering career, you shouldn't have to go get a second degree. That's a big (and expensive) commitment that requires years of your time. Becoming a senior software engineer quickly requires you to read, understand, and have a big picture understanding of programming languages. How can you ensure you have an in-depth understanding of code? You'll need to read a lot of code, get a lot of code reviews, and give a lot of code reviews.
Spending time with code and gaining feedback from others will help you gain the depth of knowledge you need to move forward. But giving and receiving code reviews isn't enough to put you on the right track. Ultimately, you need to gain the ability to build large mental models. This all boils down to loading up more complex systems in your head. Engineering requires us to hold abstract systems and concepts in our heads via a skill called spatial reasoning. Spatial reasoning is the ability to "generate, retain, retrieve, and transform well-structured visual images" (Lohman 1996). It's what we do when we visualize shapes in our "mind's eye." In engineering, we use spatial reasoning to create a mental picture or a mental model of how our systems should look. We hold it in our heads. You follow a function call from one file to another. You imagine how data at runtime flows through that picture you created. You transform that picture by flipping it and manipulating it daily. To get to senior engineer, you need to hold larger and larger systems in your head. You need to add more and more to your mental model. You need to build up a database of things you have seen before. This is what takes so much time, and it's what you need to conquer to go from junior to senior engineer.

A few tips for building these models

Turns out, I have terrible spatial reasoning skills. This challenge is so visceral to me that I've built an entire company around it. CodeSee's mission is to help developers and development teams master and maintain their understanding of large scale codebases. Codesee.io helps dev teams all speak the same language. It takes these large scale systems that engineers have traditionally held in their heads, and it creates a visual map along with all of the data that PMs can understand and that shows how all the pieces fit together. This map shows everything from the line of code that gets run to the higher level system architecture. Here are a few recommended tips for building mental models, all of which are built into the CodeSee platform.

- Write things down. Some say that good writing is good thinking, and I agree. Being able to write down what's going on in your code will clarify your thoughts, help you see the big picture, and ensure that you're able to communicate your ideas to others. Make sure you write things down in a scalable way that you can search from, build onto, and is available when you need it.
- Practice spatial reasoning skills. Spatial reasoning does not come naturally to me, but I've practiced these skills to become an expert. Every time I write a bit of code, I work to build a mental model in my head.
- Draw a picture instead of holding something in your head. Drawing a simple picture or diagram can help you plot out your ideas and situate them contextually. Similar to writing things down, drawing a picture helps you solidify your thoughts and share them with others.
- Reason about the data. Every system is made up of code and data. If you are only looking at the code, you're missing half of the picture. Ask yourself: Where is the data stored? What does it look like? Where does the data start, go and end up? How is the data transformed along the way? (A sketch of this appears just after this list.)
- Read a lot of code. This is what most people advise, but it's really important. I put reading of code into my calendar, and I go to Stack Overflow and other open source codebases. The best advice I've heard here was from a language teacher: Read it once, ignoring the things you don't know. Read it again, noting the things you don't know. Then, look up everything you don't know. Finally, read it again.
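To make the "reason about the data" tip concrete, here is a minimal JavaScript sketch (my own illustration, not from CodeSee; the function and field names are hypothetical) of the kind of annotations you might write while tracing where data starts, how it is transformed, and where it ends up:

// Data starts here: a raw order as it might arrive from an API.
const rawOrder = {
  items: [{ sku: "A1", qty: 2, unitPrice: 9.99 }],
  currency: "USD",
};

// Transformation 1: derive a line total for each item.
function withLineTotals(order) {
  return {
    ...order,
    items: order.items.map((item) => ({
      ...item,
      lineTotal: item.qty * item.unitPrice,
    })),
  };
}

// Transformation 2: aggregate the line totals into an order total.
function withOrderTotal(order) {
  return {
    ...order,
    total: order.items.reduce((sum, item) => sum + item.lineTotal, 0),
  };
}

// Data ends here: the shape the rest of the system consumes.
const order = withOrderTotal(withLineTotals(rawOrder));
console.log(order.total); // 19.98

Writing those annotations down (where the data starts, each transformation, and where it ends up) is exactly the mental model the tip asks you to build.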
No matter your background or experience, it's possible to go from a junior to senior software engineer, just as long as you have a solid, big picture understanding of programming languages. Shanea Leven is the Founder and CEO of a developer platform called CodeSee. CodeSee helps developers master understanding of codebases. We visualize in real-time how a software system works and fits together, so developers — and anyone else — can onboard more easily, plan more reliably, and ship features faster and better. Shanea has spent many years as a technical product leader building platforms for developers at Google, Docker, eBay, Cloudflare and various startups. She is also the chair of Executive Women In Product.
https://medium.com/codesee-io/learn-to-code-smarter-how-to-become-a-senior-software-engineer-quickly-8f19903f419d
[]
2020-11-13 23:48:36.071000+00:00
['Engineering', 'Coding', 'Software Development', 'JavaScript', 'Programming']
4,259
8 UX Design tips for “Not always” cases when designing for iOS and Android
Photo by Halacious on Unsplash

Before going on, I would like to say that everything you read is only based on my UI/UX design knowledge, experience, and conducted user tests. Some things might not work for you, but in my case, they turned out well. Many of the examples I will bring up are taken from an enormous medical app that stakeholders wanted to fit into a small screen (I felt like an IMF agent considering how big the application was). I decided to go with that exact application because it had many problems and challenges, as stakeholders wanted almost all the functionalities of a hospital plus the features of a doctor's kitchen.

1. Not always fewer clicks are better. When making a page for certain functionality that has too many exits, sometimes it is better to add more clicks (in my case, taps) and make additional pages rather than fit everything on one small screen.

Photo by Kelly Sikkema on Unsplash

On a small screen, everything stacked together can mislead the user. With additional pages, key functions become more noticeable and easier to access, and the result is visually more appealing. When we check the case with the add workday section, the first time I added two more buttons on the top, as stakeholders and POs didn't even want to listen to any argument. They wanted quick access from the first page. But as potential users pointed out, they often tapped on the user profile menu instead of the edit button underneath it.

2. Not always native solutions are the best. When I was designing the native iOS 'cancel' and 'done' actions on the Modals, I decided to improvise and put the same section on the bottom.

Image of a project I worked on. In the left picture on top, we see the native solution for modal Cancel and Done actions. In the right picture, I brought them to the bottom.

It's easier to access with the thumb. The native version is too high, and on newer devices it is often impossible to reach without rearranging the phone in hand or using a second one. I did the native version too, and after A/B testing, the results were on my side. I can't even tell you how happy I was with that. iOS, do you read this? 1 point for me :)

3. Not always stick to the same solution for the same OS. I'm not talking about Gmail that uses Material Design for iOS and Android or Instagram that used iOS style on both platforms. I am talking about using a feature of one in the other, while most components stay native to each platform. Before going on with this one, I must say that I absolutely adore both solutions for text fields: iOS (Human Interface) and Android (Material Design).

Image by me. I used Material Design and Human Interface add contact forms.

Forms on iOS are just so simple yet elegant, and they have great UX. Material Design's popping titles are so mind-blowing and again have great UX. You always know what fields you are filling. In the case of the app, I used popping Material Design forms for both platforms. All three groups (stakeholders, users, and our team) unanimously preferred the popping animation fields.

4. Not always perfect means good. This one goes for the stakeholders. I respect them a lot, and working with them was so much fun. The thing is that they aimed at perfect. And the project lasted for a very long time because every week they came up with new ideas that would "work better". And we constantly came back to do changes, and the project lasted even longer.

Photo by Brett Jordan on Unsplash

The first time I saw the phrase "Done is better than perfect!" on a MacBook sticker, I fell in love with that phrase.
At the same time, I was designing my personal website, and I fell into the same perfection trap every day. New ideas, new changes, new inspirations: all of that led to a new schedule, which delayed success for me. When you aim for perfect, each imperfection keeps delaying the project. Our minds can't project the perfect; thus, we will never be able to form it as a goal. On the other hand, good is formed, it's stable, and it has "measurements." By no means am I saying that you have to rush your project for sooner results. I am just saying that there is no absolute perfect, and by chasing it, you may never finish the project. Remember this, and I am saying it as a huge perfectionist — there is never a perfect, just like in physics. Repeat that to yourself! It's like trying to catch the junkie dragon in South Park episode 11.14. You can never catch it. For me, the closest to perfect is Apple's website design, especially the iPhone 11 Pro page. And they keep updating. Because there is always better. My advice is to look at your design after some time away from the screen. If you think it is good, then it's already better than perfect. Trust me. If stakeholders say it's good, and, sugar on top, users approve it, there is no better perfect than that.

5. Not always what is simpler for you is simpler for others. I really love iOS date pickers, and I use them as often as I get the chance. But user testing showed that most users navigated more easily in Android date pickers.

Image made by me. iOS and Android date pickers.

Everyone agreed that iOS looked cooler, but it's the famous problem where UI and UX are going against each other. In some cases, there is no sweet middle spot. And you ALWAYS want to go with the users.

6. Not always going with the users will have the best outcome for YOU. We know the phrase 'Client is Always Right!' Thankfully, in web design, thanks to many kinds of research, including user testing, they often admit that it's not the case. But in some cases, stakeholders want to stick with something they love, and whatever you do, no matter how much UX research proves them wrong — they won't agree to take it away. In some rare cases, you might even lose the project if you keep insisting on taking away the thing they love so much. Here comes a delicate spot where you have to choose for yourself what's more important: do the wrong UX and, in some severe cases, lose your rating, or lose the project. I'm afraid it's all up to you. It depends on how important you think that feature is, how much it will impact others' opinion of your professionalism. And how much you need that project. I'm afraid each case is unique, and you have to decide for yourself. In my case, it was a user icon that worked by hamburger menu logic. It opened when tapping on the user icon. I didn't like that the profile pic button opened the hamburger menu. But the Client is Always Right.

7. Not always User Experience is better than Marketing. To my shame, I have to admit that my marketing skills are worse than they should be. Marketing is a big part of UX and vice versa. On the app, there were pages where the Marketing department, along with stakeholders, insisted on using their big brand symbol on a small screen (not a logo) on pages where the right thing would be to use more functional assets. And the thing is that we both were right. From the UX perspective, it would be friendlier for users to have some functions there. And I was forced to fit them on other pages.
But from a Marketing perspective, the brand was new, and they needed to make people see the symbol in as many places as possible. The Marketing department won, and the outcome was good. Though I don't know what would happen if we did it my way. The goal of the marketing department was achieved, and the sacrificed pages didn't even impact users.

8. Not always — I have 8 tips. I'm sorry, just kidding :)

8.1. Not always people know what they are used to. I know, it's a bit odd. I'll explain it. In one test, I crossed users and platforms. I gave the iOS version to Android users and vice versa. Many Android users navigated more easily in some iOS features even though they never held an iPhone in their hands. It was most noticeable when they navigated through modals (popups). And vice versa; the iOS user group navigated more easily in the Android calendar.

In conclusion: Not always — you have to stick to the right UX, native solutions, etc. Use your skills, experience, and most of all, common sense. It's a big UX design world out there, and everything is constantly changing. What was right before is wrong now, and what is wrong now might someday be considered the best UX. Trust your gut and keep going. I'm sure that if you made it this far with this boring article :), you'll make it in the big league and you'll do it really well. Which is better than perfect! (See what I did there?!) Thank you for reading. If you have any questions or thoughts about the article, feel free to leave a comment. Wish you the Best User Experience in real life!
https://uxplanet.org/8-ux-design-tips-for-not-always-cases-when-designing-on-ios-and-android-ae45bb6d575d
['Daniel Danielyan']
2020-08-18 09:33:59.998000+00:00
['UX', 'UI', 'Design', 'iOS', 'Android']
4,260
8 Classic JavaScript-Coding Mistakes You Should Avoid
Handling the `this` Reference

The this keyword confuses every JavaScript developer. Perhaps that's because it's a lot different from what other programming languages like Java offer. The this keyword refers to an object, and the reference can change depending on how the function is called and not where it's defined. For methods inside an object, this would refer to the invoker object itself, whereas for independent functions, this refers to a global object.

let user = {
  name: "your name",
  getName() {
    console.log(this.name);
  }
};

user.getName();

Now, if we extract the function, the this reference would change:

let user = {
  name: "your name",
  getName: function () {
    console.log(this.name);
  }
};

user.getName();  // your name

var getUsername = user.getName;
getUsername();   // undefined

Even though getUsername holds a reference to getName, the invocation site has changed, thereby making this the global object. Hence, getUsername() logs undefined. Another trickier case of the this keyword is when it's used inside an anonymous function. The this context within anonymous functions doesn't have access to the outer object and, hence, points to the global scope. To access object properties inside anonymous functions, we need to pass the object's instance, as shown below:

let user = {
  name: "your name",
  getName: function () {
    var self = this;
    (function () {
      console.log(self.name);
    }());
  }
};

Instead of storing the this reference in a variable like we did above, we could have also invoked call(this) on the anonymous function, as shown below:
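The original snippet was cut off in this copy; what follows is my own minimal sketch of the call(this) approach just described, assuming the same user object as above:

let user = {
  name: "your name",
  getName: function () {
    (function () {
      // `this` inside the anonymous function is now the user object,
      // because we passed it in explicitly via call().
      console.log(this.name); // your name
    }).call(this);
  }
};

user.getName();

Both versions print the same thing; call(this) simply sets the anonymous function's this directly instead of smuggling it in through a self variable.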
https://medium.com/better-programming/8-classic-javascript-coding-mistakes-you-should-avoid-14f198ea9e36
['Anupam Chugh']
2020-06-25 05:46:42.750000+00:00
['Programming', 'Software Development', 'Web Development', 'JavaScript', 'Startup']
4,261
7 tips against your smartphone addiction
1. AVOID IDLE MOMENTS

It's usually during transitional breaks or moments of boredom that we tend to reach for our phones as an automatic reaction to fill the void in the waiting time. According to Jamison Monroe, CEO of Newport Academy, the best thing you can do to avoid scrolling is to create a list of things you could do during your idle moments. The key is to come up with options that appeal to you. Here are some for you:

- You could be taking a walk
- You could be writing on paper with a pen
- You could be singing or dancing to your own favourite song
- You could drop down and do 10 push-ups (or stretch… actually maybe stretch)
- You could be closing your eyes and meditating for up to 10 minutes.

2. USE TECH TO ELIMINATE TECH

Download apps that can show you how many times you've checked your phone that day and trigger warnings when you break self-imposed limits. These apps can lock your apps for you for a specified amount of time to help you place your attention away from your smartphone. Facebook released a feature called "Quiet Mode" earlier in 2020, allowing users to minimize distractions by muting the app's push notifications for a pre-specified time frame. What I particularly like about it is that you could set Quiet Mode to automatically run during your workday to reduce your temptation to waste time in the app. Furthermore, if you try to launch Facebook during Quiet Mode, the app will remind you that you've set this time aside with the goal of limiting your time in the app. I honestly can't wait for this to also be rolled out for Instagram & WhatsApp. Other apps that I would recommend are AppDetox or AppBlock. Otherwise, many others exist in the App Store / Google Play.

3. MAKE USE OF YOUR OWN NOTIFICATIONS

Leverage your calendar to set daily reminders (via e-mail & push notification). This will find its way to becoming the most useful and healthy push notification on your phone.

4. UNPLUG BEFORE BED

This is the toughest one to adopt in the short term, but the healthiest in the long term. An hour before you go to sleep, avoid any tech or electronic device. The blue-wavelength light emitted from digital screens interrupts the production of melatonin, which gives our brain the signal to sleep & rest. Melatonin is known as the darkness hormone. Leave your device on a desk far from your bed, or in another room, to avoid temptation.

5. START SMALL

We tend to go big or go home. Pride aside, start small to test the waters and slowly reduce your screen time. For example:

- Turn off your phone during dinner, or leave it away from the table.
- Leave it at home when going for walks.
- Define a daily limit — e.g. 3 hours per day of tracked no-phone time.
- Trust a partner or friend to hold it for you during work time. Allow yourself to grab it when taking mandatory breaks.

6. TURN OFF NOTIFICATIONS

Think of a smartphone as the world's smallest slot machine — it elevates your dopamine receptors and reinforces that behavior over and over again, as it offers an unpredictable reward just like in gambling. These rewards are triggered by notifications (whether useful or useless, good or bad). Silence notifications for all social channels to make yourself less tempted to look at your phone every few seconds. In case of work dependency, make sure to connect the relevant apps on your work laptop instead. Think of accessing social media updates as your reward for your hard work, making your time more enjoyable. Watch The Social Dilemma on Netflix if you need more convincing on silencing some apps.
7. PLAN BREAKS

Commit to taking daily breaks, during which time you turn off your phone and put it out of sight and out of reach. To make the most out of your break, plan a specific activity to fill in the gap. I usually prep myself a nice Italian-made coffee, head out with my coffee, and go for a short [mindful] walk around the block without my phone. However, do try to do the same during meals or other specific daily events — pay attention to what is going on around you.

Recommendation: let your friends or colleagues know, so that they give you some space and you can fully detach and maximise your mindfulness during your break.
https://medium.com/design-bootcamp/7-tips-against-your-smartphone-addiction-8e37ff8e5f9
['Claudio Corti']
2020-12-18 00:54:23.599000+00:00
['Addiction', 'Mobile', 'Mindfulness', 'Mental Health', 'UX']
4,262
The Late Night Dream
The Late Night Dream

A short horror poem.

Once upon a midsummer night
There came a roaring from the wind.
There I stood.
Wandering and coupled with fear.
Ravaging thoughts and goose bumps.
Filled my flesh.
Then suddenly,
A shadow.
A very big one.
With long nails and fangs.
Like that of a beast.
Ready to pounce on me.
Ready to devour me.
There I stood.
Unable to move.
Unable to run.
Unable to scream.
Then I knew.
The end is near.
The end is now.
With nobody to save me.
But a voice.
A still small voice.
A melodious one.
Like that of an angel.
Wake up! wake up!!
Little monster.

© Evince pen
https://medium.com/illumination/the-late-night-dream-c5baf106b6e
['Evince Uhurebor']
2020-11-24 19:33:20.434000+00:00
['Poetry', 'Poetry On Medium', 'Writing', 'Horror Fiction', 'Self']
4,263
Five app prototyping tools compared
There are 🇺🇦 Ukrainian, 🇷🇺 Russian (+ another one) and 🇨🇳 Chinese (+ another one) translations of this article, and there's a follow-up with Principle, Flinto for Mac & Tumult Hype. I recreated the IF by IFTTT user onboarding in five different high-fidelity prototyping tools to get an idea of the differences between them: Proto.io, Pixate, Framer, Facebook's Origami and RelativeWave's Form. See how these five recreations behave compared to the real thing:

Pages versus Layers

Why did I select these five? I discovered that recreating something that is this animation-heavy (icons moving around in different directions and at different speeds) is not even possible in most prototyping packages. The majority of tools only let you connect static pages, while only the more complex ones let you animate different objects or layers within a given page. I'll explain it a bit more.

Page-based tools

In a page-based tool, you lay out different screens, and then you make hotspots or buttons to connect them together. You tap a button somewhere on one screen to go to another screen. Page-based tools generally also have a choice of different transitions between screens, like fade in, slide in from the right, slide up from below, etc. It's a bit clunky, but it's a good way to make quick mockups when you're still figuring out the flow of an app (which and how many screens are needed, how they would appear, where buttons should go, etc.). Examples of page-based tools are: Briefs, InVision, Notism, Flinto, Fluid, Mockup.io, Prott, POP, Marvel, Balsamiq, Red Pen and Keynotopia. Granted, in some of these tools you can have animations or scrollable areas within a page, but you cannot use them to emulate every interaction possible in real native apps.

Layer-based tools

Every asset, interface element, or in other words, layer can be made tappable, swipe-able, draggable… but also animated (see the sketch at the end of this section for the basic idea). Prototyping a complete app in a tool like this would be crazy, though; it would be too much work (you might as well build the real app). But they're great for trying out new interactions, or for tweaking the timing of an animation. Proto.io, Pixate, Framer, Facebook's Origami and RelativeWave's Form are the tools I tried. To be honest, there are a few others — Axure and Indigo Studio — but they seem to be more enterprisey (read: rather expensive). I might try them out some other time. So, onwards with the chosen ones.
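To make the page- versus layer-based distinction concrete, here is a minimal sketch of the layer-based idea in plain browser JavaScript (none of the five tools expose this exact API, and the element id is made up): an individual element, or "layer", animates on its own instead of the whole screen transitioning at once.

// Animate a single "layer" independently of the rest of the page,
// using the standard Web Animations API.
const icon = document.querySelector("#icon"); // hypothetical element

icon.addEventListener("click", () => {
  icon.animate(
    [
      { transform: "translateX(0)", opacity: 1 },
      { transform: "translateX(120px)", opacity: 0.5 },
    ],
    { duration: 300, easing: "ease-in-out", fill: "forwards" }
  );
});

A page-based tool, by contrast, would only let you pick a canned transition (fade, slide, etc.) between two whole static screens.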
https://medium.com/sketch-app-sources/five-app-prototyping-tools-compared-form-framer-origami-pixate-proto-io-c2acc9062c61
['Tes Mat']
2017-11-24 15:52:55.331000+00:00
['Design', 'UX', 'Tech', 'Prototyping', 'UI']
4,264
Mistakes to Avoid in Affiliate Marketing
Mistakes to Avoid in Affiliate Marketing Visualmodo · Mar 3 · 4 min read

Affiliate marketing programs are known to pay a good amount of commissions on a regular basis, which makes it a lucrative industry for passionate marketers or worldwide web lovers. As lucrative as this industry is, beginners find it hard to figure out the right formula to get them started and to maintain the right path. Most affiliate marketing mistakes are not detrimental in the beginning, but in the long run, they affect the returns on our affiliate marketing efforts more than we can afford. See how to avoid them below.

Affiliate Marketing Mistakes And How To Avoid Them

How The Sales Happen

Most bloggers fail in affiliate marketing because of not understanding how the sales will be generated. I've seen bloggers with great blog posts fail miserably because of this. Having quality blog posts is not enough to generate affiliate sales. You need to understand how these posts can help you to make affiliate sales. For example, let's say you have written a great post on Topic X, but you haven't put any affiliate links in the post. Or even if you place affiliate links, there is no clear call to action. Do you think this post is going to generate sales for you? No, the chance is meager. You need to optimize your posts for affiliate marketing. Another important thing is understanding which types of blog posts drive more sales. Not all types of blog posts get sales. Here are some of the best types of blog posts that are proven to drive more sales:

- "List Of Alternatives" Post.
- "Coupon Codes & Sales" Post.
- "List of Best X or Best X for Y" Post.
- "Comparison" Post.
- "Product/Service Review" Post.
- "How To" Post.
- "Ultimate Resources" Post.

Try these types of posts. And I am pretty sure you will see good sales.

Share the Wrong Product

Getting affiliate sales is hard. Choosing the wrong products/services makes it harder. So how do you know if a product/service is right or wrong for you to promote? It depends on several factors. Let's see some of them: Firstly, the product/service is not relevant to your niche. For example, it's irrelevant to promote an SEO tool on a recipe blog. Secondly, the product/service quality is not good, but it offers high commission rates. Avoid these types of products. Even if you get some sales, in the long term, it will decrease your credibility. Finally, the product/service is new on the market. Always remember, popular products convert better. So always try to promote the products/services that are relevant and popular in your niche.

Do Not Understand The Product

It's not like you have to use and test all the products/services that you are promoting on your blog. But it's essential to have proper knowledge of the affiliate products/services. This way, you will be able to solve your readers' problems in better ways. For example, if you are promoting HostGator on your blog, you need to know how HostGator can help in starting a blog, as well as HostGator's other services. And if possible, it's better to test the products/services before promoting them on your blog. It will increase your trustworthiness.

Ignore Link Management

It's a common mistake that almost every affiliate marketer makes. But this mistake can turn out to be a big one if you are not using an affiliate link management system for a while. Imagine, someday, your highest revenue-generating affiliate program decides to change its affiliate platform, and they want you to use a new affiliate link. How would you change all the affiliate links that you've inserted in your blog posts? You'd have to find and change the links manually. But if you are using a link management plugin, you can do it within one minute. Here's how it works. An affiliate link management plugin allows you to cloak your affiliate links. The idea is to hide your affiliate link with URL redirection. Most of the time, affiliate links are ugly like this one — https://partners.hostgator.com/c/214426/177309/3094
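Cloaking means visitors click a short, readable URL on your own domain that simply redirects to the long affiliate URL. The plugins below handle this for you, but as a rough illustration, here is a minimal Node.js sketch of the underlying mechanism (the /go/hostgator path is made up; the destination is the example link above):

// Minimal affiliate link cloaking via URL redirection (no framework).
const http = require("http");

// One place to maintain the mapping from pretty paths to affiliate URLs.
const cloakedLinks = {
  "/go/hostgator": "https://partners.hostgator.com/c/214426/177309/3094",
};

http.createServer((req, res) => {
  const destination = cloakedLinks[req.url];
  if (destination) {
    res.writeHead(302, { Location: destination }); // redirect the visitor
    res.end();
  } else {
    res.writeHead(404);
    res.end("Not found");
  }
}).listen(3000);

If the program ever changes its link, you update one entry in the mapping and every cloaked link on the site follows, which is exactly the benefit described next.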
Now, whenever a company changes its affiliate link structure, all you need to do is change the affiliate link from the plugin dashboard. And the change will be applied to all links. Cloaking is not the only benefit of using an affiliate link management plugin. Here are some other benefits:

- Firstly, you can add affiliate links automatically to specified keywords.
- Secondly, you can check your affiliate links' click statistics.
- You can change affiliate links based on geo locations and more.
- Finally, and most importantly, it makes your affiliate links SEO-friendly.

So the question is, which plugin should you use to manage your affiliate links? There are several plugins out there. ThirstyAffiliates and Pretty Links Pro are the most popular. I use and recommend ThirstyAffiliates.

Final Thoughts

Creating a successful affiliate marketing business requires passion, commitment, knowledge of products, updated knowledge about market trends, relationship building with clients, quality traffic, and helpful product recommendations. Many times, affiliate marketers with great products fail to make their business profitable just because they deploy detrimental strategies, and the information above is a small step to help them avoid these mistakes. In a world of noise where everyone is selling something, it is very important not to indulge in the race of making money for the sake of earning money, but to help consumers/customers buy what they actually need.
https://medium.com/visualmodo/mistakes-to-avoid-in-affiliate-marketing-881e66f8878f
[]
2020-03-03 02:40:12.343000+00:00
['Avoid', 'Affiliate', 'Marketing', 'Mistakes', 'Error']
4,265
You Have More in Common With Voldemort Than You Think
You Have More in Common With Voldemort Than You Think We all leave little pieces of our soul in the world around us Courtesy of Warner Bros. Entertainment Inc. Photo: Eric Charbonneau/WireImage/Getty Images What could the average person possibly have in common with the fictional mass-murdering noseless wizard whose name ought not to be spoken? It's a bold claim, I know. Well, each of us constructs our identity in the same way Voldemort does — or, Voldemort provides an illuminating allegory for how we build our identities. Voldemort, that dastardly villain, created these things called Horcruxes. The concept of the Horcrux, for those of you who don't already know, was that ol' Voldy split his soul and stored the shards in different objects or beings. This ensured his life — nobody could kill him unless they first destroyed all the Horcruxes. We do have to put one element of this analogy aside: the fact that he created these Horcruxes by murdering people. Let's just ignore that bit for the sake of this analogy, as I don't think most people are secret murderers. When you take that element out, though, this premise has some surprising backbone to it. A person latches pieces of their soul onto objects and people with sentimental significance to them, and they cannot truly die until these Horcruxes are all destroyed. The destruction of one brings them pain, and they will stop at nothing to protect their soul shards. When put in these terms, could we not accurately say this about ourselves? We all ground our identity in ideas, memories, and experiences. But we don't stop there. We humans like to create physical symbols of abstract concepts. So, we use objects and people as symbols of the ideas upon which we base our identities. We keep sentimental jewelry and knickknacks; we cling to family and friends; we buy clothes with icons and slogans on them. And we each have a few key things, from necklaces to cars to plots of land, that make up the core of who we are. Moreover, because we have based our identities on the ideas these objects represent, we will protect them with our lives. An attack upon our identity is akin to an attack on our person — it's a threat to a foundational aspect of our being. We will voraciously defend these ideas or the items that represent them. Who among us has not been thrown into a momentary panic when a friend or family member almost threw away something that was meaningless to them but sentimental to us? And lastly, so long as these ideas remain in the world and are built upon, we never disappear. Even the objects that represent these ideas can live on and grow past our lives. The farmer identifies with his farm — and his children will continue to cultivate it. So, I think it could be said that we all have Horcruxes, though we don't need to kill people to make them. And we don't die if all of our Horcruxes get destroyed. Or do we? Well, no. But our Horcruxes do have a relationship to the quality of our lives. Life, fundamentally, is a delicate balance of order and chaos. Life requires unpredictability to work. We can see this truth in something as fundamental as evolution — the random (chaotic) mutation of genes that ultimately allows us to adapt and survive. But it also requires order. If our DNA was too unpredictable, nothing resembling a species could form. The necessity of this balance is present at all levels of our lives.
Thus, life exists and thrives in this precise balance of the two elements. Order and chaos. Change to something new and maintenance of what is. Our identities are integral to our existence as individuals, and the same rule applies. Our identities must gradually grow and change if we are to survive, and they must also maintain core elements to achieve any level of stability. If every core piece of your identity is destroyed, you would be left feeling like nothing — like no one. As though the person you were had died, and whoever you became next would be totally new. Many people have experienced this kind of rebirth through extreme loss. On the other side, an unchanging identity is effectively dead. It is stagnant. There is nothing left of its story to tell. It is merely waiting for the literal death to catch up to the spiritual end. Similarly, when death strikes any person, their identity freezes. Having an unchanging identity has many of the same consequences as death. Death is to know, completely and absolutely, who you are — never to have the will to become a different, better person, nor to have any questions remaining about your true nature. That said, it is, in a way, possible to survive without an identity. Victims of severe childhood trauma often achieve this, because any identity one develops in a situation of abuse becomes the target of attack. An identity is a weakness to be exploited by the enemy, so the individual creates a protective barrier of nonassociation and nonattachment. One can never achieve complete nonidentity — we will always grasp something to help keep us alive and moving. But in these situations of constant attack, the identity is minimized and hidden. It becomes almost undetectable, even by the individual that holds it. So, one might ask, if it is possible to survive and even become immune to emotional attack by minimizing one's identity, is this not the ideal way to live? You can probably guess my answer. No, because identity is necessary to create anything. Or, to create something is to give form to a piece of your identity. Without an identity, you have nothing to give shape. Moreover, it is necessary to identify with something before you can improve or build upon it. This is similar to the fact that before you have a right to change something, you must first have at least partial ownership of it. But what do ownership and identification have to do with each other? Well, they are parallel processes. To own something is to, at least partially, identify with it. If you didn't identify with it, you would cast it aside. Similarly, when you strongly identify with something, even if you are not the sole owner of it, you protect it and tend to it as though you were. You need to identify with things to own them. If you want anything, if you have even a single solitary desire in life, you must open your identity to it before you can acquire it. Identifying with things is necessary for achieving goals. In a state of nonidentity, you may be immune to emotional attack, but you are also incapable of reaching any joy in life. It's an emotional scorched-earth policy — you can't steal what is burned to ash, and you can't attack something that doesn't exist. All this is to say that, ultimately, we must construct an identity for ourselves. We cannot go about our entire lives without connecting to anything in the world around us.
Or rather, if we did, it wouldn’t be a life much worth living. But because identity is both our strength and our weakness, we must be particularly careful about what we identify with. We cannot hurl our soul around haphazardly and make Horcruxes of everything we touch. If we did, we would disappear for lack of distinction and become susceptible not to attack, but to the random nature of life. Catastrophes would consistently strike one or more of the things we are identified with. We must identify with that which is most core to our desires and leave all else behind. The rest will only slow us down.
https://humanparts.medium.com/you-have-more-in-common-with-voldemort-than-you-think-51ed18951fb9
['Atheno Boldly Fearless']
2020-04-29 17:15:06.690000+00:00
['Life Lessons', 'Psychology', 'Self Improvement', 'Life', 'Self']
Title Common Voldemort ThinkContent Common Voldemort Think leave little piece soul world around u Courtesy Warner Bros Entertainment Inc Photo Eric CharbonneauWireImageGetty Images could average person possibly common fictional massmurdering noseless wizard whose name ought spoken It’s bold claim know Well u construct identity way Voldemort — Voldemort provides illuminating allegory build identity Voldemort dastardly villain created thing called Horcruxes concept Horcrux don’t already know ol’ Voldy split soul stored shard different object being ensured life — nobody could kill unless first destroyed Horcruxes put one element analogy aside fact created Horcruxes murdering people Let’s ignore bit sake analogy don’t think people secret murderer take element though premise surprising backbone person latch piece soul onto object people sentimental significance cannot truly die Horcruxes destroyed destruction one brings pain stop nothing protect soul shard put term could accurately say ground identity idea memory experience don’t stop human like create physical symbol abstract concept use object people symbol idea upon base identity keep sentimental jewelry knickknack cling family friend buy clothes icon slogan key thing necklace car plot land make core Moreover based identity idea object represent protect life attack upon identity akin attack person — it’s threat foundational aspect voraciously defend idea item represent among u thrown momentary panic friend family member almost threw away something meaningless sentimental u lastly long idea remain world built upon never disappear Even object represent idea live grow past life farmer identifies farm — child continue cultivate think could said Horcruxes though don’t need kill people make don’t die Horcruxes get destroyed Well Horcruxes relationship quality life Life fundamentally delicate balance order chaos Life requires unpredictability work see truth something fundamental evolution — random chaotic mutation gene ultimately allows u adapt survive also requires order DNA unpredictable nothing resembling specie could form every core piece identity destroyed would left feeling like nothing necessity balance present level life Thus life exists thrives precise balance two element Order chaos Change something new maintenance identity integral existence individual rule applies identity must gradually grow change survive must also maintain core element achieve level stability every core piece identity destroyed would left feeling like nothing — like one though person died whoever became next would totally new Many people experienced kind rebirth extreme loss side unchanging identity effectively dead stagnant nothing left story tell merely waiting literal death catch spiritual end Similarly death strike person identity freeze unchanging identity many consequence death Death know completely absolutely — never become different better person question remaining true nature said way possible survive without identity Victims severe childhood trauma often achieve identity one develops situation abuse becomes target attack identity weakness exploited enemy individual creates protective barrier nonassociation nonattachment One never achieve complete nonidentity — always grasp something help keep u alive moving situation constant attack identity minimized hidden becomes almost undetectable even individual hold one might ask possible survive even become immune emotional attack minimizing one’s identity ideal way live probably guess answer identity necessary 
create anything create something give form piece identity Without identity nothing give shape Moreover necessary identify something improve build upon similar fact right change something must first least partial ownership ownership identification Well parallel process something least partially identify didn’t identify would cast aside Similarly strongly identify something even sole owner protect tend though identity strength weakness must particularly careful identify need identify thing want anything even single solitary desire life must open identity acquire Identifying thing necessary achieving goal state nonidentity may immune emotional attack also incapable reaching joy life It’s emotional scorchedearth policy — can’t steal burned ash can’t attack something doesn’t exist say ultimately must construct identity cannot go entire life without connecting anything world around u rather wouldn’t life much worth living identity strength weakness must particularly careful identify cannot hurl soul around haphazardly make Horcruxes everything touch would disappear lack distinction become susceptible attack random nature life Catastrophes would consistently strike one thing identified must identify core desire leave else behind slow u downTags Life Lessons Psychology Self Improvement Life Self
4,266
Covid-19 in the Middle East: situation report for week ending 22 August
ALGERIA Algeria’s outbreak peaked towards the end of July when more than 600 new cases were being recorded each day. Since then the trend has been downwards, with new cases averaging 429 a day during the past week according to official figures. Restrictions imposed in 29 of Algeria’s 58 wilayas (administrative districts) were eased last week but there is still a night curfew (11pm to 6am) and face masks must be worn outdoors. Large mosques (1,000-plus capacity) have been allowed to open throughout the country but worshippers must bring their own prayer mats and wear face masks. Congregational prayers on Fridays are still banned. About 4,025 medical staff have been infected with Covid-19 in Algeria and 69 of them have died, according to the government’s scientific committee. These figures are a lot higher than those previously given by the health minister. For more information see: Covid-19 in Algeria Confirmed cases: 40,667 New cases in past week: 3,003 Active cases: 10,662 Deaths: 1,418 Tests carried out: (unknown) BAHRAIN Bahrain has more than 26,000 known cases per million inhabitants. This makes it the world’s third most infected country after Qatar and French Guiana. However, Bahrain is also one of the world leaders in Covid-19 testing. So far, almost 60% of its 1.7 million population have been tested. The daily total of new cases fluctuates but Bahrain’s epidemic appears to be subsiding gradually. The number of people reported to be currently infected is around 3,300 compared with 5,700 at the peak in mid-June. Cafes and restaurants remain closed but the authorities have announced plans for a phased reopening in September. Bahrain no longer requires people arriving in the country to isolate themselves. In recent tests only 0.2% of new arrivals were found to carry the virus. For more information see: Covid-19 in Bahrain Confirmed cases: 48,661 New cases in past week: 2,609 Active cases: 3,314 Deaths: 181 Tests carried out: 1 million EGYPT New Covid-19 cases in Egypt over the last three months. Seven-day rolling average, day by day. New cases peaked in June and have been falling sharply during the past few weeks, according to official figures. This week’s average was 123 cases a day compared with almost 1,600 at the peak. Although Egypt’s official figures have often been viewed with suspicion there is other evidence that its outbreak is subsiding. For example, the health ministry has been closing down some of its temporary isolation facilities. Egypt has been anxious to revive its economically important tourism sector and in July it began reopening its seaside resorts for foreign visitors. These resorts — in South Sinai, the Red Sea and Marsa Matrouh on the Mediterranean coast — have been isolated from the rest of the country to reduce the risk of infections spreading. Foreigners flying directly to the resorts don’t need to be tested for Covid-19 but they will need a test if they wish to leave the resort and visit other parts of the country. Foreigners arriving in other parts of the country must have tested negative during the 72 hours before travelling and will not be allowed to visit the resorts. For more information see: Covid-19 in Egypt Confirmed cases: 97,148 New cases in past week: 928 Active cases: 27,599 Deaths: 5,231 Tests carried out: 135,000 IRAN Iran was the first country in the region to be seriously affected by the virus and its epidemic shows no sign of abating. Government figures show an initial wave of infections which peaked at the end of March.
It subsided during April, briefly dipping below 1,000 new cases per day but then rose to a new peak in the first week of June. New cases this week averaged 2,206 a day — virtually unchanged from the previous week. Iran continues to report more coronavirus-related deaths than any other country in the region. A further 1,045 deaths have been recorded during the past week. Confirmed cases: 354,764 New cases in past week: 15,939 Active cases: 28,522 Deaths: 20,376 Tests carried out: 3 million IRAQ Iraq is currently recording more new infections than any other country in the region. New cases this week averaged more than 4,000 a day and Wednesday’s total of 4,576 cases was the highest since the outbreak began. Worse still, Iraq’s official figures are widely believed to understate the scale of the epidemic. Many cases go unreported because of social stigma. Compliance with preventive measures appears to be low and health services are inadequate. For more information see: Covid-19 in Iraq Confirmed cases: 197,085 New cases in past week: 28,795 Active cases: 50,356 Deaths: 6,283 Tests carried out: 1.4 million ISRAEL After coming close to bringing the epidemic under control, Israel has been hit by a second wave much larger than the first. The first wave peaked at around 600 new cases a day in early April. Efforts to control it were initially successful and by the second half of May new cases had dropped to about 15 a day. However, the virus surged back when lockdown restrictions were lifted and by the end of July new cases were averaging almost 1,800 a day. The second wave now appears to have peaked but the number of new cases remains high, averaging 1,377 a day this week. For more information see: Covid-19 in Israel Confirmed cases: 100,716 New cases in past week: 9,636 Active cases: 22,122 Deaths: 809 Tests carried out: 2.2 million JORDAN Until a couple of weeks ago Jordan appeared to be the most successful Arab country in controlling the virus. Although it continued to intercept new cases among people arriving from abroad, transmission within the country had virtually ceased. Since then, however, there has been a spate of locally-occurring cases and they now account for most of the newly-detected infections. The recent problems began with an outbreak at the Jaber-Nasib crossing point on the border with Syria where at least nine employees were diagnosed with the virus (see news report). This led to further infections among their contacts in various other places. Buildings in several cities have been sealed off, but tracing contacts and ensuring compliance with quarantine is proving a formidable task. One of the people who tested positive for the virus this week is said to have come into contact with 170 people and visited 35 different places all over the country. At a news conference on Friday health minister Saad Jaber said the main reason for the increase in infections is non-compliance with preventive measures at border crossings. Employees had broken the rules to meet for tea, coffee and tomato stir-fry, he added. Even people who had tested positive were shaking hands and hosting large gatherings. New measures may be imposed in the light of developments over the next few days. These could include a one-day lockdown on Fridays, extending curfew hours and temporarily closing schools, mosques, churches, parks and gathering places.
For more information see: Covid-19 in Jordan Confirmed cases: 1,532 New cases in past week: 203 Active cases: 259 Deaths: 11 Tests carried out: 728,000 KUWAIT New infections peaked in late May at just over 1,000 cases a day. The numbers have dropped back substantially since then and this week’s average was 583 a day. The government has announced that the night curfew will be lifted on August 30. Restrictions on large gatherings such as weddings and funerals will continue. As a result of the economic downturn caused by the pandemic and low oil prices Kuwait is planning to expel 360,000 foreigners, though as yet there is no timetable for their departure. For more information see: Covid-19 in Kuwait Confirmed cases: 79,269 New cases in past week: 4,084 Active cases: 7,494 Deaths: 511 Tests carried out: 581,000 LEBANON Political and economic turmoil, plus the devastating explosion in Beirut on August 4, have diverted attention from the coronavirus. Although Lebanon’s outbreak is still relatively small, infections have surged during the past month. New cases this week averaged 505 a day — about three times as many as at the end of July. A new partial lockdown began on Friday and is due to last two weeks. However, there are doubts about how well it will be observed or enforced (see news report). For more information see: Covid-19 in Lebanon Confirmed cases: 11,580 New cases in past week: 3,535 Active cases: 8,260 Deaths: 116 Tests carried out: 451,000 LIBYA Libya is in its ninth year of internal conflict. The UN-backed Government of National Accord in Tripoli is challenged by Field Marshal Haftar’s forces based in the east of the country. There are also numerous militias. This leaves the country ill-equipped to cope with a major epidemic. Growing levels of insecurity, political fragmentation and weak governance have led to a deterioration of basic services, particularly in the health system. At least 27 health facilities have been damaged or closed by fighting and some have been attacked directly. There are 870,000 people — refugees, asylum seekers and displaced persons — whom the UN regards as especially vulnerable. The World Health Organisation (WHO) describes the coronavirus situation in Libya as “clusters of cases” — in other words, a series of local outbreaks rather than a generalised epidemic. Sebha, Tripoli, Zliten, Misrata, Ashshatti, Ubari, Traghen, Janzour and Khoms are said to be particular hotspots. Testing is very limited and the number of confirmed infections is still relatively small but growing fast. Half of the known cases were recorded this month. Investigations by the National Centre for Disease Control (NCDC) have concluded that most infections are the result of people not practising social distancing. The Libya Herald reports that most people do not wear masks in public. It adds: “Many still spend the weekend with their parents/relatives, attend funerals, baby-parties, and weddings. And although function halls have been forced to shut down, many are holding events for hundreds in open locations such as farms.” The authorities in Tripoli have responded by announcing a series of penalty charges, including fines of 250 dinars ($180) for not wearing a face mask on public transport and 500 dinars (plus temporary closure) for businesses that fail to enforce mask-wearing.
For more information see: Covid-19 in Libya Confirmed cases: 10,121 New cases in past week: 2,794 Active cases: 8,888 Deaths: 180 Tests carried out: 91,000 MOROCCO New Covid-19 cases in Morocco over the last three months. Seven-day rolling average, day by day. Coronavirus infections have been rising sharply in Morocco over the last few weeks, with a record 1,776 new cases reported on Sunday. This is a major setback since early June when a strict lockdown had reduced new cases to around 40 a day. Local health experts attribute the reversal mainly to the “rushed” way restrictions were lifted (see news report). There are scattered outbreaks around the country which could grow rapidly if not monitored closely. Containing these depends heavily on the effectiveness of contact-tracing — which is being hampered by delays in testing. Once someone has been diagnosed with Covid-19, it usually takes a week or more to trace all their contacts — by which time the virus may have spread further. Delays in testing are also being blamed for causing avoidable deaths, because people with serious symptoms are often not receiving treatment until it is too late. For more information see: Covid-19 in Morocco Confirmed cases: 49,247 New cases in past week: 10,006 Active cases: 14,239 Deaths: 817 Tests carried out: 1.7 million OMAN Infections peaked in mid-July with just under 1,600 cases a day and are now on a downward path. New cases averaged 147 a day this week — a substantial drop to levels not seen since early May. This week the authorities issued a long list of rules for people visiting restaurants and cafes. More than 600 medical staff in Oman have been infected with Covid-19 since the outbreak began, according to the government. For more information see: Covid-19 in Oman Confirmed cases: 83,769 New cases in past week: 1,026 Active cases: 4,774 Deaths: 609 Tests carried out: 309,000 PALESTINE Palestine, like Israel, is in the midst of a wave of new infections. Hebron is the most seriously affected area, with 10,832 confirmed cases — almost half the total. New cases this week averaged 477 a day — a small increase on the previous week. Many of the infections are attributed to people ignoring the rules for social distancing, which the authorities have difficulty enforcing. The health ministry says more than 30% of cases are the result of Palestinians travelling to and from work in Israel which is in the second wave of its epidemic. Fears of a major outbreak in Gaza have not materialised. Most of the known cases there appear to have been due to contacts with Egypt. For more information see: Covid-19 in Palestine Confirmed cases: 24,398 (West Bank 16,293, Gaza 117, East Jerusalem 7,988) New cases in past week: 3,342 Active cases: 8,993 Deaths: 135 Tests carried out: 214,000 QATAR In population terms Qatar has more known cases than any other country — 41,000 per million inhabitants. Migrant workers have been disproportionately affected. Qatar’s epidemic reached a peak in the first week of June but infections have fallen since then. New cases this week averaged 278 a day — well below the peak of more than 1,800 a day. For more information see: Covid-19 in Qatar Confirmed cases: 116,481 New cases in past week: 1,949 Active cases: 3,072 Deaths: 193 Tests carried out: 577,000 SAUDI ARABIA Saudi Arabia has the largest number of recorded cases among the Arab countries. New infections reached an initial peak in the fourth week of May, then dropped back slightly before rising to a higher peak in the third week of June. 
Since then, though, there has been a substantial improvement. Numbers of new cases are still large. This week they averaged 1,326 a day — a small drop since the previous week and about 3,000 a day below the June peak. The kingdom currently has fewer than 25,000 active cases compared with more than 63,000 at the peak. Migrant workers have been disproportionately affected but the authorities have also complained about non-compliance with precautionary measures by Saudi citizens. For more information see: Covid-19 in Saudi Arabia Confirmed cases: 305,186 New cases in past week: 9,284 Active cases: 24,539 Deaths: 3,580 Tests carried out: 4.5 million SUDAN The coronavirus struck Sudan in the midst of a political transition following a popular uprising against the regime of President Bashir and the country is ill-equipped to cope with a major epidemic. Testing is very limited and official figures don’t reflect the full scale of the outbreak. For more information see: Covid-19 in Sudan Confirmed cases: 12,623 New cases in past week: 461 Active cases: 5,335 Deaths: 812 Tests carried out: (unknown) SYRIA According to official figures Syria’s outbreak is still small, with just over 2,000 cases reported in areas controlled by the Assad regime. Even so, that is twice as many as two weeks ago. Official announcements rarely give any details and this lack of transparency fuels suspicions that many cases are being concealed. There is also some evidence that people with Covid-19 symptoms are reluctant to contact the authorities. Anecdotal evidence suggests community transmission of the virus is now widespread and one study indicates there may be tens of thousands of unreported cases. Fears have been raised about north-western and north-eastern parts of the country which are outside the regime’s control. Millions of displaced people are living in those areas and health services are often rudimentary. So far, 225 cases have been confirmed in the north-east and 54 in the north-west according to Syria in Context, a subscription website. For more information see: Covid-19 in Syria The following figures relate to regime-controlled areas only: Confirmed cases: 2,073 New cases in past week: 558 Active cases: 1,515 Deaths: 83 Tests carried out: (unknown) TUNISIA New Covid-19 cases in Tunisia over the last three months. Seven-day rolling average, day by day. Tunisia’s outbreak remains small, with fewer than 3,000 infections recorded so far. New cases are growing rapidly though. This week they averaged 101 a day, compared with only 35 in the previous week. In June, Tunisia appeared to be almost free of the virus and began promoting itself as a safe holiday destination. Tourists were to be allowed in with just a simple temperature check. On Wednesday, however, the authorities announced that people arriving in the country must present evidence of a negative RT-PCR test result. This applies to everyone, including those arriving from low-risk countries. A controversial LGBT+ film festival, originally due to have been held in Tunis last March, has now been postponed for a second time because of the Covid-19 outbreak. The first such festival was held in secret in 2018 because of local opposition. For more information see: Covid-19 in Tunisia Confirmed cases: 2,607 New cases in past week: 704 Active cases: 1,123 Deaths: 64 Tests carried out: 119,000 UNITED ARAB EMIRATES The UAE’s epidemic peaked in the last week of May when new infections were running at more than 900 a day. 
Numbers of new cases are now considerably lower, though this week’s average of 339 a day is the highest for a month. The UAE has carried out more tests per head of population than any other Arab country and ranks tenth worldwide in terms of levels of testing. For more information see: Covid-19 in the UAE Confirmed cases: 66,193 New cases in past week: 2,374 Active cases: 7,527 Deaths: 370 Tests carried out: 6.3 million YEMEN Because of the ongoing war, Yemen already faced a humanitarian crisis before the coronavirus arrived. Millions are malnourished and vulnerable to disease, and health services are inadequate. Official figures grossly understate the severity of the epidemic. Cholera is also prevalent. For more information see: Covid-19 in Yemen Confirmed cases: 1,910 New cases in past week: 48 Active cases: 306 Deaths: 543 Tests carried out: (unknown)
https://brian-whit.medium.com/covid-19-in-the-middle-east-situation-report-for-week-ending-22-august-3551aa6af577
['Brian Whitaker']
2020-08-23 07:15:25.167000+00:00
['Coronavirus', 'Middle East', 'Covid 19']
Title Covid19 Middle East situation report week ending 22 AugustContent ALGERIA Algeria’s outbreak peaked towards end July 600 new case recorded day Since trend downwards new case averaging 429 day past week according official figure Restrictions imposed 29 Algeria’s 58 wilayas administrative district eased last week still night curfew 11pm 6am face mask must worn outdoors Large mosque 1000plus capacity alllowed open throughout country worshipper must bring prayer mat wear face mask Congregational prayer Fridays still banned 4025 medical staff infected Covid19 Algeria 69 died according government’s scientific committee figure lot higher previously given health minister information see Covid19 Algeria Confirmed case 40667 New case past week 3003 Active case 10662 Deaths 1418 Tests carried unknown BAHRAIN Bahrain 26000 known case per million inhabitant make world’s third infected country Qatar French Guiana However Bahrain also one world leader Covid19 testing far almost 60 17 million population tested daily total new case fluctuates Bahrain’s epidemic appears subsiding gradually number people reported currently infected around 3300 compared 5700 peak midJune Cafes restaurant remain closed authority announced plan phased reopening September Bahrain longer requires people arriving country isolate recent test 02 new arrival found carry virus information see Covid19 Bahrain Confirmed case 48661 New case past week 2609 Active case 3314 Deaths 181 Tests carried 1 million EGYPT New Covid19 case Egypt last three month Sevenday rolling average day day New case peaked June falling sharply past week according official figure week’s average 123 case day compared almost 1600 peak Although Egypt’s official figure often viewed suspicion evidence outbreak subsiding example health ministry closing temporary isolation facility Egypt anxious revive economically important tourism sector July began reopening seaside resort foreign visitor resort — South Sinai Red Sea Marsa Matrouh Mediterranean coast — isolated rest country reduce risk infection spreading Foreigners flying directly resort don’t need tested Covid19 need test wish leave resort visit part country Foreigners arriving part country must tested negative 72 hour travelling allowed visit resort information see Covid19 Egypt Confirmed case 97148 New case past week 928 Active case 27599 Deaths 5231 Tests carried 135000 IRAN Iran first country region seriously affected virus epidemic show sign abating Government figure show initial wave infection peaked end March subsided April briefly dipping 1000 new case per day rose new peak first week June New case week averaged 2206 day — virtually unchanged previous week Iran continues report coronavirusrelated death country region 1045 death recorded past week Confirmed case 3547645 New case past week 15939 Active case 28522 Deaths 20376 Tests carried 3 million IRAQ Iraq currently recording new infection country region New case week averaged 4000 day Wednesday’s total 4576 case highest since outbreak began Worse still Iraq’s official figure widely believed understate scale epidemic Many case go unreported social stigma Compliance preventive measure appears low health service inadequate information see Covid19 Iraq Confirmed case 197085 New case past week 28795 Active case 50356 Deaths 6283 Tests carried 14 million ISRAEL coming close bringing epidemic control Israel hit second wave much larger first first wave peaked around 600 new case day early April Efforts control intially successful second half May new case 
dropped 15 day However virus surged back lockdown restriction lifted end July new case averaging almost 1800 day second wave appears peaked number new case remains high averaging 1377 day week information see Covid19 Israel Confirmed case 100716 New case past week 9636 Active case 22122 Deaths 809 Tests carried 22 million JORDAN couple week ago Jordan appeared successful Arab country controlling virus Although continued intercept new case among people arriving abroad transmission within country virtually ceased Since however spate locallyoccurring case account newlydetected infection recent problem began outbreak JaberNasib crossing point border Syria least nine employee diagnosed virus see news report led infection among contact various place Buildings several city sealed tracing contact ensuring compliance quarantine proving formidable task One people tested positive virus week said come contact 170 people visited 35 different place country news conference Friday health minister Saad Jaber said main reason increase infection noncompliance preventive measure border crossing Employees broken rule meet tea coffee tomato stirfry added Even people tested positive shaking hand hosting large gathering New measure may imposed light development next day could include oneday lockdown Fridays extending curfew hour temporarily closing school mosque church park gathering place information see Covid19 Jordan Confirmed case 1532 New case past week 203 Active case 259 Deaths 11 Tests carried 728000 KUWAIT New infection peaked late May 1000 case day number dropped back substantially since week’s average 583 day government announced night curfew lifted August 30 Restrictions large gathering wedding funeral continue result economic downturn caused pandemic low oil price Kuwait planning expel 360000 foreigner though yet timetable departure information see Covid19 Kuwait Confirmed case 79269 New case past week 4084 Active case 7494 Deaths 511 Tests carried 581000 LEBANON Political economic turmoil plus devastating explosion Beirut August 4 diverted attention coronavirus Although Lebanon’s outbreak still relatively small infection surged past month New case week averaged 505 day — three time many end July new partial lockdown began Friday due last two week However doubt well observed enforced see news report information see Covid19 Lebanon Confirmed case 11580 New case past week 3535 Active case 8260 Deaths 116 Tests carried 451000 LIBYA Libya ninth year internal conflict UNbacked Government National Unity Tripoli challenged Field Marshall Haftar’s force based east country also numerous militia leaf country illequipped cope major epidemic Growing level insecurity political fragmentation weak governance led deterioration basic service particularly health system least 27 health facility damaged closed fighting attacked directly 870000 people — refugee asylum seeker displaced person — UN regard especially vulnerable World Health Organisation describes coronavirus situation Libya “clusters cases” — word series local outbreak rather generalised epidemic Sebha Tripoli Zliten Misrata Ashshatti Ubari Traghen Janzour Khoms said particular hotspot Testing limited number confirmed infection still relatively small growing fast Half known case recorded month Investigations National Centre Disease Control NCDC concluded infection result people practising social distancing Libya Herald report people wear mask public add “Many still spend weekend parentsrelatives attend funeral babyparties wedding although function hall 
forced shut many holding event hundred open location farms” authority Tripoli responded announcing series penalty charge including fine 250 dinar 180 wearing face mask public transport 500 dinar plus temporary closure business fail enforce maskwearing information see Covid19 Libya Confirmed case 10121 New case past week 2794 Active case 8888 Deaths 180 Tests carried 91000 MOROCCO New Covid19 case Morocco last three month Sevenday rolling average day day Coronavirus infection rising sharply Morocco last week record 1776 new case reported Sunday major setback since early June strict lockdown reduced new case around 40 day Local health expert attribute reversal mainly “rushed” way restriction lifted see news report scattered outbreak around country could grow rapidly monitored closely Containing depends heavily effectiveness contacttracing — hampered delay testing someone diagnosed Covid19 usually take week trace contact — time virus may spread Delays testing also blamed causing avoidable death people serious symptom often receiving treatment late information see Covid19 Morocco Confirmed case 49247 New case past week 10006 Active case 14239 Deaths 817 Tests carried 17 million OMAN Infections peaked midJuly 1600 case day downward path New case averaged 147 day week — substantial drop level seen since early May week authority issued long list rule people visiting restaurant cafe 600 medical staff Oman infected Covid19 since outbreak began according government information see Covid19 Oman Confirmed case 83769 New case past week 1026 Active case 4774 Deaths 609 Tests carried 309000 PALESTINE Palestine like Israel midst wave new infection Hebron seriously affected area 10832 confirmed case — almost half total New case week averaged 477 day — small increase previous week Many infection attributed people ignoring rule social distancing authority difficulty enforcing health ministry say 30 case result Palestinians travelling work Israel second wave epidemic Fears major outbreak Gaza materialised known case appear due contact Egypt information see Covid19 Palestine Confirmed case 24398 West Bank 16293 Gaza 117 East Jerusalem 7988 New case past week 3342 Active case 8993 Deaths 135 Tests carried 214000 QATAR population term Qatar known case country — 41000 per million inhabitant Migrant worker disproportionately affected Qatar’s epidemic reached peak first week June infection fallen since New case week averaged 278 day — well peak 1800 day information see Covid19 Qatar Confirmed case 116481 New case past week 1949 Active case 3072 Deaths 193 Tests carried 577000 SAUDI ARABIA Saudi Arabia largest number recorded case among Arab country New infection reached initial peak fourth week May dropped back slightly rising higher peak third week June Since though substantial improvement Numbers new case still large week averaged 1326 day — small drop since previous week 3000 day June peak kingdom currently fewer 25000 active case compared 63000 peak Migrant worker disproportionately affected authority also complained noncompliance precautionary measure Saudi citizen information see Covid19 Saudi Arabia Confirmed case 305186 New case past week 9284 Active case 24539 Deaths 3580 Tests carried 45 million SUDAN coronavirus struck Sudan midst political transition following popular uprising regime President Bashir country illequipped cope major epidemic Testing limited official figure don’t reflect full scale outbreak information see Covid19 Sudan Confirmed case 12623 New case past week 461 Active case 5335 Deaths 
812 Tests carried unknown SYRIA According official figure Syria’s outbreak still small 2000 case reported area controlled Assad regime Even twice many two week ago Official announcement rarely give detail lack transparency fuel suspicion many case concealed also evidence people Covid19 symptom reluctant contact authority Anecdotal evidence suggests community transmission virus widespread one study indicates may ten thousand unreported case Fears raised northwestern northeastern part country outside regime’s control Millions displaced people living area health service often rudimentary far 225 case confirmed northeast 54 northwest according Syria Context subscription website information see Covid19 Syria following figure relate regimecontrolled area Confirmed case 2073 New case past week 558 Active case 1515 Deaths 83 Tests carried unknown TUNISIA New Covid19 case Tunisia last three month Sevenday rolling average day day Tunisia’s outbreak remains small fewer 3000 infection recorded far New case growing rapidly though week averaged 101 day compared 35 previous week June Tunisia appeared almost free virus began promoting safe holiday destination Tourists allowed simple temperature check Wednesday however authority announced people arriving country must present evidence negative RTPCR test result applies everyone including arriving lowrisk country controversial LGBT film festival originally due held Tunis last March postponed second time Covid19 outbreak first festival held secret 2018 local opposition information see Covid19 Tunisia Confirmed case 2607 New case past week 704 Active case 1123 Deaths 64 Tests carried 119000 UNITED ARAB EMIRATES UAE’s epidemic peaked last week May new infection running 900 day Numbers new case considerably lower though week’s average 339 day highest month UAE carried test per head population Arab country rank tenth worldwide term level testing information see Covid19 UAE Confirmed case 66193 New case past week 2374 Active case 7527 Deaths 370 Tests carried 63 million YEMEN ongoing war Yemen already faced humanitarian crisis coronavirus arrived Millions malnourished vulnerable disease health service inadequate Official figure grossly understate severity epidemic Cholera also prevalent information see Covid19 Yemen Confirmed case 1910 New case past week 48 Active case 306 Deaths 543 Tests carried unknownTags Coronavirus Middle East Covid 19
4,267
Towards an Economic Theory of Everything in under 5 minutes
Economics is traditionally defined as the study of the allocation of scarce resources which have alternative uses. Microeconomics is further defined as the study of individual decision making, or of supply and demand, price theory, market design, and so on. Macroeconomics is defined as the study of aggregative economic phenomena, national accounts, GDP, fiscal and monetary policy. In some sense, economics can be called the science of scarcity. Nevertheless, most economists are not scientists in the traditional meaning; some are engaged in empirical research that is profound, but as a field, economics, especially the foundations, is closer to mathematics than to a science. Only loosely can the traditional definition account for most of what economists do. Decision making under uncertainty can be considered the science of individuals allocating resources of time under scarcity of information. Macroeconomics can be the study of governments making decisions under political constraints. And so on. But a more general definition is needed. Here’s one: economics is the study of agents, their evolution, and their interactions in complex environments. This definition includes both microeconomics and macroeconomics; it includes the “science” of economics, as well as the math, and it includes the study of humans, animals, aliens, and artificial agents (parts of computer science). One economic theory of everything would be to actually create micro-foundations for the macroeconomy. This has historically been done through rational expectations and representative agent models. In the future, this can be done through behavioral and even neuroeconomic models. Anwar Shaikh actually believes that microeconomics needs better macro-foundations, not the other way around. This would involve class, gender, social relations, political factors, culture, etc. Perhaps it will prove too difficult to model individual decision making with realistic accuracy (I doubt this, given enough time); nevertheless, at least theoretically, increasingly accurate simulations of economic behavior seem possible. Unifying neuroeconomics with macroeconomics seems to be one way of creating an economic theory of everything. Shaikh also shows that you don’t necessarily need micro-economic foundations to do macroeconomics, since aggregates are robustly insensitive to their individual parts: the whole is greater than the sum of the parts. In this sense, Shaikh believes the proper focus for economists is studying the economy, not necessarily individual units. Finally, economics needs to integrate with computer science, especially artificial intelligence. This is for several reasons. Humans are a form of artificial intelligence. Reinforcement agents can model human behavior. Eventually, accurate simulations of economies will be created with the aid of AI. So if there is going to be an E=mc² in economics, it will likely come by 1. expanding the definition of economics to account for the mathematical study of agents, not just scarcity of resources, 2. creating macro-foundations for microeconomics, perhaps through sociology, anthropology, political science, etc., 3. creating micro-foundations for macroeconomics, perhaps through behavioral economics and neuroeconomics, 4. incorporating reflexivity, interactions, and complexity into models, and 5. integrating with AI/computer science.
https://medium.com/datadriveninvestor/towards-an-economic-theory-of-everything-in-under-5-minutes-12f25b0038ee
['The Moral Economist']
2020-11-25 15:17:11.151000+00:00
['Economics', 'Investing', 'Finance', 'Psychology', 'Philosophy']
Title Towards Economic Theory Everything 5 minutesContent Economics traditionally defined study allocation scarce resource alternative us Microeconomics defined study individual decision making supply demand price theory market design Macroeconomics defined study aggregative economic phenomenon national account GDP fiscal monetary policy sense economics called science scarcity Nevertheless economist scientist traditional meaning engaged empirical research profound field economics especially foundation closer mathematics science loosely traditional definition account economist Decision making uncertainty considered science individual allocating resource time scarcity information Macroeconomics study government making decision political constraint general definition needed Here’s one economics study agent evolution interaction complex environment definition includes microeconomics macroeconomics includes “science” economics well math includes study human animal alien artificial agent part computer science One economic theory everything would actually create microfoundations macroeconomy historically done rational expectation representative agent model future behavioralistic even neuroeconomic model Anwar Shaikh actually belief microeconomics need better macrofoundations way around would involve class gender social relation political factor culture etc Perhaps prove difficult model individual decision making realistic accuracy doubt given enough time nevertheless least theoretically seems possible increasingly accurate simulation economic behavior possible Unifying neuroeconomics macroeconomics seems one way creating economic theory everything Shaikh also show don’t necessarily need microeconomic foundation macroeconomics since aggregate robustly insensitive individual part whole greater sum part sense Shaikh belief proper focus economist studying economy necessarily individual unit Finally economics need integrate computer science especially artificial intelligence several reason Humans form artificial intelligence Reinforcement agent model human behavior Eventually accurate simulation economy created aid AI going EMC² economics likely come expanding definition economics account mathematical study agent scarcity resource 2 creating macrofoundations microeconomics perhaps sociology anthropology political science etc 3 creating microfoundations macroeconomics perhaps behavioral economics neuroeconomics 4 incorporating reflexivity interaction complexity model 5 integrating AI computer science Gain Access Expert View — Subscribe DDI IntelTags Economics Investing Finance Psychology Philosophy
4,268
A New Global Mobility Hierarchy Emerges as International Travel Resumes
Coronavirus-related travel restrictions are beginning to lift in some countries after more than six months of panic and uncertainty. The resumption of international cross-border travel may appear to be a signal that things are slowly returning to normal, but as the latest research from the Henley Passport Index — based on exclusive data from the International Air Transport Association (IATA) — shows, the pandemic has completely upended the seemingly unshakeable hierarchy of global mobility that has dominated the last few decades, with more change still to come. At the beginning of the year, for instance, the US passport was ranked in 6th position on the Henley Passport Index — the original ranking of all the world’s passports according to the number of destinations their holders can access without a prior visa — and Americans could travel hassle-free to 185 destinations around the world. Since then, that number has dropped dramatically by over 100, with US passport holders currently able to access fewer than 75 destinations, with the most popular tourist and business centers notably excluded. As criticism of the country’s pandemic response continues to mount, and with the US presidential election just weeks away, the precipitous decline of US passport power and American travel freedom is seen as a clear indication of its altered status in the eyes of the international community. Other significant changes in the once-solid global mobility hierarchy paint an equally vivid picture of the chaos caused by the Covid-19 pandemic. At the beginning of 2020, the Singapore passport was ranked 2nd globally, with passport holders able to access an unprecedented 190 destinations. However, under the current travel restrictions, Singaporeans can travel to fewer than 80 destinations around the world. Unsurprisingly, those countries whose coronavirus responses have been criticized for being inadequate have taken the greatest knock when it comes to the travel freedom of their citizens. Brazilian passport holders were able to access 170 destinations without acquiring a visa in advance in January. Currently, approximately only 70 destinations are accessible. The decline in mobility and passport power for countries such as India and Russia have been less dramatic, but nevertheless indicative of an overall shift. Russian citizens had access to 119 destinations prior to the Covid-19 outbreak but can currently travel to fewer than 50. At the beginning of the year, Indian passport holders could travel to 61 destinations without a visa but due to virus-related restrictions, they currently have access to fewer than 30. Without taking the various pandemic-related travel bans and restrictions into account, Japan continues to hold the number one spot on the Henley Passport Index, with a visa-free/visa-on-arrival score of 191. Singapore remains in 2nd place, with a score of 190, while Germany and South Korea are tied 3rd, each with a score of 189. EU member states continue to perform best overall, with countries from the bloc taking up most of the spots in the index’s top 10.
https://medium.com/curious/a-new-global-mobility-hierarchy-emerges-as-international-travel-resumes-72a39e741ca5
['Henley']
2020-10-14 10:32:29.783000+00:00
['Travel Freedom', 'International Travel', 'Global Mobility', 'Coronavirus', 'Henley Passport Index']
Title New Global Mobility Hierarchy Emerges International Travel ResumesContent Coronavirusrelated travel restriction beginning lift country six month panic uncertainty resumption international crossborder travel may appear signal thing slowly returning normal latest research Henley Passport Index — based exclusive data International Air Transport Association IATA — show pandemic completely upended seemingly unshakeable hierarchy global mobility dominated last decade change still come beginning year instance US passport ranked 6th position Henley Passport Index — original ranking world’s passport according number destination holder access without prior visa — Americans could travel hasslefree 185 destination around world Since number dropped dramatically 100 US passport holder currently able access fewer 75 destination popular tourist business center notably excluded criticism country’s pandemic response continues mount US presidential election week away precipitous decline US passport power American travel freedom seen clear indication altered status eye international community significant change oncesolid global mobility hierarchy paint equally vivid picture chaos caused Covid19 pandemic beginning 2020 Singapore passport ranked 2nd globally passport holder able access unprecedented 190 destination However current travel restriction Singaporeans travel fewer 80 destination around world Unsurprisingly country whose coronavirus response criticized inadequate taken greatest knock come travel freedom citizen Brazilian passport holder able access 170 destination without acquiring visa advance January Currently approximately 70 destination accessible decline mobility passport power country India Russia le dramatic nevertheless indicative overall shift Russian citizen access 119 destination prior Covid19 outbreak currently travel fewer 50 beginning year Indian passport holder could travel 61 destination without visa due virusrelated restriction currently access fewer 30 Without taking various pandemicrelated travel ban restriction account Japan continues hold number one spot Henley Passport Index visafreevisaonarrival score 191 Singapore remains 2nd place score 190 Germany South Korea tied 3rd score 189 EU member state continue perform best overall country bloc taking spot index’s top 10Tags Travel Freedom International Travel Global Mobility Coronavirus Henley Passport Index
4,269
An RDF crawler
I wrote an RDF crawler (aka scutter) using Java and the Jena RDF toolkit that spiders the web gathering up semantic web data and storing it in any of Jena’s backend stores (in-memory, Berkeley DB, mysql, etc). Download it here. The system is multithreaded and so can simultaneously download from many sources while the aggregation thread does the processing. It builds a model that remembers the provenance of the RDF and takes care to delete and replace triples if it hits the same URL twice, so you can run it as often as you like to keep the data fresh without bloating the store with out-of-date information. As yet it doesn’t do anything with what it gathers; the information’s just sitting there waiting for interesting applications to be built on top of it. To use it as distributed, set up a mysql database called “scutter” and set the username and password in the DBConnection setup in Scutter.java then recompile using ‘ant compile’ (sorry, no handy config files in this 0.1 release). Run the script scutter.sh passing in as many starting-point URLs as you like. These will be added to the queue, and any rdfs:seeAlso pointers in the downloaded RDF will be recursively followed until no more unique URLs can be found. The biggest known issue at the moment is that it doesn’t do proper management to work out when it’s run out of URLs — it just stops. The standard log4j.properties file can be edited to change what gets logged — with full debugging information turned on, you get quite a lot of output. Plans for the future include tying FOAF-related processing into the aggregation such as smushing and mbox_sha1sum normalising, and making a publish/subscribe-based system so that people who can’t run their own aggregators can subscribe to the RDF that’s gathered.
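To make the crawl loop concrete, here is a minimal, single-threaded sketch of the same idea: seed a queue with starting-point URLs, fetch and parse each document, store the triples, and follow any rdfs:seeAlso pointers to new documents. This is my own illustration written against the modern Apache Jena API; the MiniScutter class and its structure are assumptions, not the actual Scutter code, which is multithreaded and also tracks provenance and replaces stale triples.

import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.NodeIterator;
import org.apache.jena.rdf.model.RDFNode;
import org.apache.jena.vocabulary.RDFS;

public class MiniScutter {
    public static void main(String[] args) {
        Deque<String> queue = new ArrayDeque<>(Arrays.asList(args)); // starting-point URLs
        Set<String> seen = new HashSet<>();
        Model store = ModelFactory.createDefaultModel(); // in-memory; Jena's persistent stores also work

        while (!queue.isEmpty()) {
            String url = queue.poll();
            if (!seen.add(url)) {
                continue; // each unique URL is fetched only once
            }
            try {
                Model page = ModelFactory.createDefaultModel();
                page.read(url); // fetch and parse the RDF at this URL
                store.add(page); // the real Scutter first deletes old triples from this source
                NodeIterator pointers = page.listObjectsOfProperty(RDFS.seeAlso);
                while (pointers.hasNext()) { // recursively follow rdfs:seeAlso links
                    RDFNode node = pointers.next();
                    if (node.isURIResource()) {
                        queue.add(node.asResource().getURI());
                    }
                }
            } catch (Exception e) {
                System.err.println("skipping " + url + ": " + e.getMessage());
            }
        }
        System.out.println("gathered " + store.size() + " triples from " + seen.size() + " documents");
    }
}

Because a single thread drains the queue, this toy version terminates cleanly when no unique URLs remain, sidestepping the termination issue mentioned above at the cost of download parallelism.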
https://medium.com/hackdiary/an-rdf-crawler-f747a5493a4c
['Matt Biddulph']
2018-01-12 02:53:45.222000+00:00
['Java', 'Rdf']
Title RDF crawlerContent wrote RDF crawler aka scutter using Java Jena RDF toolkit spider web gathering semantic web data storing Jena’s backend store inmemory Berkeley DB mysql etc Download system multithreaded simultaneously download many source aggregation thread processing build model remembers provenance RDF take care delete replace triple hit URL twice run often like keep data fresh without bloating store outofdate information yet doesn’t anything gather information’s sitting waiting interesting application built top use distributed set mysql database called “scutter” set username password DBConnection setup Scutterjava recompile using ‘ant compile’ sorry handy config file 01 release Run script scuttersh passing many startingpoint URLs like added queue rdfsseeAlso pointer downloaded RDF recursively followed unique URLs found biggest known issue moment doesn’t proper management work it’s run URLs — stop standard log4jproperties file edited change get logged — full debugging information turned get quite lot output Plans future include tying FOAFrelated processing aggregation smushing mboxsha1sum normalising making publishsubscribebased system people can’t run aggregator subscribe RDF that’s gatheredTags Java Rdf
4,270
Top 7 Practice Tests and Mock Exams to Prepare for Oracle’s Java Certifications — OCAJP and OCPJP
Top 7 Practice Tests and Mock Exams to Prepare for Oracle’s Java Certifications — OCAJP and OCPJP Hello guys, there is no doubt that exam simulators play an essential role in preparing for any Java certification, like the OCAJP, OCPJP, OCEJWCD, and OCMJEA exams. In fact, they are one of the most crucial pillars because choosing a good exam simulator with a good book is generally the success mantra of many Java certification aspirants. The exam simulators prepare you well for exams by presenting the level of questions you can expect in the real exams. They provide much-needed practice in an exam-like environment to gauge your speed and accuracy. I have personally seen a difference of 30% in scores between people who do a lot of mock exams and those who go in without practicing. Candidates make more mistakes when they first take exams, and by participating in mock tests, you train your mind to make fewer mistakes. They also help you to handle the time pressure of the real exam better. Though it’s not necessary to buy a commercial exam simulator, it is probably money well spent, as you get a lot of value for it. You not only learn from your mistakes, but the comprehensive explanations given by these simulators also help you to correct them. Since many of my readers have asked which is the best exam simulator to buy for OCAJP 11 or OCAJP 8, or which one is the cheapest exam simulator of good quality, I decided to jot down some of the excellent quality exam simulators for Oracle’s Java certification. Top 7 Practice Tests and Mock Exams to Crack Oracle’s Java Certification Here is my list of some of the best Java exam simulators currently available in the market. The list is solely based on what I have read and heard from people who have used them, but I have not taken all the exam simulators personally. My personal experience is only with Whizlabs, which I think is more than sufficient for any candidate who wants to achieve more than 80% in OCAJP or OCPJP, but I have listed down other commercial mock exam providers to provide a comprehensive list of exam simulators. Most of the exam providers not only provide simulators for OCAJP and OCPJP but also for more advanced Java exams like OCEJWCD (Oracle Certified Expert Java Web Component Developer) and OCMJEA (Oracle Certified Master Java Enterprise Architect). So, no matter which exam you are preparing for, you will find some good exam simulators with these providers. This is the best exam simulator for Java certification. I have used it personally so I can vouch for the quality of Whizlabs. It has separate practice tests for OCAJP and OCPJP, both Java 11 and 8, depending on which version you are preparing for. The OCPJP 11 exam simulator contains over 400 questions and five full-length mock exams, which costs around $20; you might get some discount as well. You can take the test online from any device, and it also provides detailed reports on your strong and weak areas. You can also buy Whizlabs practice questions on Udemy. Here is the link to buy the Whizlabs simulator on Udemy: I didn’t know that you could also buy practice questions on Udemy, but you can, and they have some of the best practice questions for Java certifications like Java SE 8 and Java SE 11. Here are some of the notable Java practice questions you can buy on Udemy; they are also very affordable, and you can get most of them for just $10 in the Udemy sales that happen every now and then.
This is another great Java exam simulator but only available for Java 8, i.e. only for the 1Z0–808 and 1Z0–809 exams. They also have a Java 1Z0–808 and 1Z0–809 Free Test, which is created to demonstrate all the features of their Java 8 Associate Web Simulator. You will be able to access 25 complete questions and will have 53 minutes to finish the test. If you want, you can also download their free 1Z0–808 and 1Z0–809 dumps in PDF format for reference. If you want to go for cheap and best, then nothing beats Enthuware. It costs around $9.95 for a question bank with approximately 500 questions. Surely, you can’t get less expensive than this. The questions are also top quality, pretty much the same level as Whizlabs, and the detailed answers are also of good quality, explaining why correct answers are correct and why wrong answers are incorrect. Kaplan SelfTest is authorized by Oracle, so you can be sure that it covers the exam objectives well. The Kaplan SelfTest contains over 170 questions, and the price starts from $69 for 30 days of online access. The CD costs you around $99. The Kaplan 1Z0–804 Practice Test for Java SE 7 Programmer II (OCPJP7) also includes 275 complimentary flashcards, and a comprehensive score report helps you focus your study efforts. Transcender is similar to Kaplan, and also an Oracle authorized practice exam provider. They have different packs for different durations, each with around 190 questions; the price starts from $109. You should only buy either Kaplan or Transcender because they actually contain the same questions; the only differences are the number of topics covered and the number of questions provided. They are actually now merged together and known as Transcender, powered by Kaplan IT Training. This is another good Oracle and Java exam simulator provider that offers training courses and exam simulators for almost all Java certifications. You can buy OCPJP 11 online training and an OCPJP 11 study guide + mock exam questions from this provider for your practice. They also have free tests on their website so that you can evaluate their content before you buy; they are worth trying to check your knowledge as well. 7. Mock Exams from Java Certification Guides You can also find a couple of mock exams when you buy a Java certification study guide. Study guides are an excellent resource to prepare for the exam because they provide full coverage of the syllabus and prepare you for the exam by presenting concepts that are more valuable from the exam point of view. Here are a couple of excellent Java study guides for both OCAJP and OCPJP, for both Java SE 11 and Java SE 8. Apart from these, there are a couple of other books and study guides, depending upon whether you are preparing for OCAJP 11, OCPJP 11, OCAJP 8, or OCPJP 8. You can check out my recommended books for these exams in this blog, here. Other Certification Resources for Java Programmers and IT Professionals That’s all about the list of some of the best commercial Java exam simulators for the OCAJP and OCPJP exams. Most of these Java exam simulator providers also provide mock exams for other Java certifications like OCEJWCD or OCMJEA and other exams. There are also a lot of free mock exams available for both OCAJP 11 and OCPJP 8, which you can take a look at before buying any Java exam simulators. You can use them to judge the quality of the full exam simulators before you buy. P. S.
— If you are new to the Java development world and want to learn Java in depth before going for certification, then I also suggest you go through The Complete Java Masterclass course by Tim Buchalka and his team on Udemy. It is also one of the most up-to-date courses for learning Java, covering new features from recent Java releases.
https://medium.com/javarevisited/top-7-practice-tests-and-mock-exams-to-prepare-for-oracles-java-certifications-ocajp-and-ocpjp-36502d4ca061
[]
2020-12-11 08:58:50.638000+00:00
['Certification', 'Programming', 'Software Development', 'Java', 'Coding']
Title Top 7 Practice Tests Mock Exams Prepare Oracle’s Java Certifications — OCAJP OCPJPContent Top 7 Practice Tests Mock Exams Prepare Oracle’s Java Certifications — OCAJP OCPJP javinpaul Follow Jun 11 · 7 min read imagecredit — Udemy Hello guy doubt exam simulator play essential role preparing Java certification like OCAJP OCPJP OCEJWCD OCMJEA exam fact one crucial pillar choosing good exam simulator good book generally success mantra many Java certification aspirant exam simulator prepare well exam presenting level question expect real review provide muchneeded practice review like environment gauge speed accuracy personally seen difference 30 score people lot mock exam go without practicing mock exam Candidates make mistake first took exam participating mock test train mind make fewer mistake also help handle time pressure real exam better Though it’s necessary buy commercial exam simulator probably bestspent money get lot value money learn mistake comprehensive explanation given simulator also help correct Since many reader requested best exam simulator buy OCAJP 11 OCAJP 8 one cheapest exam simulator good quality decided jot excellent quality exam simulator Oracle’s Java certification Top 7 Practice test Mock Exam Crack Oracle’s Java Certification list best Java Exam simulator currently available market list solely based whatever read known people used taken exam simulator personally personal experience Whizlabs think sufficient candidate want achieve 80 OCAJP OCPJP listed commercial mock exam provider provide comprehensive list exam simulator exam provider provide simulator OCAJP OCPJP also advanced Java exam like OCEJWCD Oracle Certified Expert Java Web Component Developer OCMJEA Oracle Certified Master Java Enterprise Architect matter exam preparing find good exam simulator provider best exam simulator Java certification used personally vouch quality Whizlabs separate practice test OCAJP OCPJP Java 11 8 depending version preparing OCPJP 11 exam simulator contains 400 question five fulllength mock exam cost around 20 might get discount well take test online device also provides detailed report strong weak area also buy Whizlabs Practice question Udemy link buy Whizlabs simulator Udemy didn’t know also buy practice question Udemy also best practice question Java certification like Java SE 8 Java SE 11 notable Java Practice question buy Udemy also affordable get 10 several Udemy sale happens every another great Java exam simulator available Java 8 ie 1Z0–808 1z0–809 exam also Java 1Z0–808 1Z0–809 Free Test created demonstrate feature Java8 Associate Web Simulator able access 25 complete question 53 minute finish test want also download free 1Z0–808 1Z0–809 dump PDF format reference want go cheap best nothing beat Entuware contains around 995 question bank approximately 500 question Surely can’t get le expensive item also top quality pretty much level Whizlabs detailed answer also good quality explains correct answer correct wrong answer incorrect Kaplan SelfTest authorized Oracle sure cover exam objective well Kaplan SelfTest contains 170 question price start 69 30 day online access CD cost around 99 Kaplan 1X0–804 Practice Test Java SE 7 Programmer II OCPJP7 also includes 275 complimentary flashcard comprehensive score report help focus study effort Transcender similar Kaplan also Oracle authorized practice exam provider different pack different time duration like 190 question price start 109 buy either Kaplan Transcender actually contain problem thing different number topic covered 
number question provided actually merged together known Transcender powered Kaplan Training another good Oracle Java Exam simulator provider offer training course exam simulator almost Java certification buy OCPJP 11 online training OCPJP 11 study guide mock exam question provider practice also free test website evaluate content buy worth trying check knowledge well 7 Mock Exams Java Certification Guides also find couple mock exam buy Java Certification Study guide Study guide excellent resource prepare exam provide full coverage syllabus prepare exam presenting concept valuable exam point view couple excellent Java study guide OCAJP OCPJP Java SE 11 Java SE 8 Apart couple book study guide depending upon whether preparing OCAJP 11 OCPJP11 OCAPJP 8 OCPJP 8 check recommended book exam blog Certification Resources Java Programmers Professionals That’s list best Java commercial exam simulator OCAJP OCPJP exam Java exam simulator provider also provides mock exam Java certification like OCPJWCD OCMJCEA review also lot free mock exam available OCAJP 11 OCPJP8 take look buying Java exam simulator use judge quality full exam P — new Java development world want learn Java depth going certification also suggest go Complete Java Masterclass course Tim Buchalaka team Udemy also one uptodate course learn Java covering new feature recent Java releasesTags Certification Programming Software Development Java Coding
4,271
Osho’s Views on J Krishnamurthy
When Jiddu Krishnamurti died, Osho expressed his thoughts on him as a being and on his work: its relevance, its longevity, and its usefulness. It is worth reading. The discussion has been called “Death of the mystic, J. Krishnamurti”. J. Krishnamurti died last Monday, in Ojai, California. In the past you have spoken of him as another enlightened being. Would you please comment on his death? The death of an enlightened being like J. Krishnamurti is nothing to be sad about, it is something to be celebrated with songs and dances. It is a moment of rejoicing. His death is not a death. He knows his immortality. His death is only the death of the body. But J. Krishnamurti will go on living in the universal consciousness, forever and forever. Just three days before J. Krishnamurti died, one of my friends was with him; and he reported to me that his words to him were very strange. Krishnamurti was very sad and he simply said one thing: “I have wasted my life. People were listening to me as if I am an entertainment.” The mystic is a revolution; he is not entertainment. If you hear him, if you allow him, if you open your doors to him, he is pure fire. He will burn all that is rubbish in you, all that is old in you, and he will purify you into a new human being. It is risky to allow fire into your being — rather than opening the doors, you immediately close all the doors. But entertainment is another thing. It does not change you. It does not make you more conscious; on the contrary, it helps you to remain unconscious for two, three hours, so that you can forget all your worries, concerns, anxieties — so that you can get lost in the entertainment. You can note it: as man has passed through the centuries, he has managed to create more and more entertainments, because he needs more and more to be unconscious. He is afraid of being conscious, because being conscious means to go through a metamorphosis. I was more shocked by the news than by the death. A man like J. Krishnamurti dies, and the papers don’t have space to devote to that man who for ninety years continuously has been helping humanity to be more intelligent, to be more mature. Nobody has worked so hard and so long. Just a small news article, unnoticeable — and if a politician sneezes it makes headlines. What is your connection with Krishnamurti? It is a real mystery. I have loved him since I have known him, and he has been very loving towards me. But we have never met; hence the relationship, the connection is something beyond words. We have not seen each other ever, but yet…perhaps we have been the two persons closest to each other in the whole world. We had a tremendous communion that needs no language, that need not be of physical presence…. You are asking me about my connection with him. It was the deepest possible connection — which needs no physical contact, which needs no linguistic communication. Not only that, once in a while I used to criticize him, he used to criticize me, and we enjoyed each other’s criticism — knowing perfectly well that the other does not mean it. Now that he is dead, I will miss him because I will not be able to criticize him; it won’t be right. It was such a joy to criticize him. He was the most intelligent man of this century, but he was not understood by people. He has died, and it seems the world goes on its way without even looking back for a single moment that the most intelligent man is no longer there. It will be difficult to find that sharpness and that intelligence again in centuries.
But people are such sleep walkers, they have not taken much note. In newspapers, just in small corners where nobody reads, his death is declared. And it seems that a ninety-year-old man who has been continuously speaking for almost seventy years, moving around the world, trying to help people to get unconditioned, trying to help people to become free — nobody seems even to pay a tribute to the man who has worked the hardest in the whole of history for man’s freedom, for man’s dignity. I don’t feel sorry for his death. His death is beautiful; he has attained all that life is capable to give. But I certainly feel sorry for the whole world. It goes on missing its greatest flights of consciousnesses, its highest peaks, its brightest stars. It is too much concerned with trivia. I feel such a deep affinity with Krishnamurti that even to talk of connection is not right; connection is possible only between two things which are separate. I feel almost a oneness with him. In spite of all his criticisms, in spite of all my criticisms — which were just joking with the old man, provoking the old man…and he was very easily provoked…. Krishnamurti’s teaching is beautiful, but too serious. And my experience and feeling is that his seventy years went to waste because he was serious. So only people who were long-faced and miserable and serious types collected around him; he was a collector of corpses, and as he became older, those corpses also became older. I know people who have been listening to him for almost their whole lives; they are as old as he himself was. They are still alive. I know one woman who is ninety-five, and I know many other people. One thing I have seen in all of them, which is common, is that they are too serious. Life needs a little playfulness, a little humor, a little laughter. Only on that point am I in absolute disagreement with him; otherwise, he was a genius. He has penetrated as deeply as possible into every dimension of man’s spirituality, but it is all like a desert, tiring. I would like you back in the garden of Eden, innocent, not serious, but like small children playing. This whole existence is playful. This whole existence is full of humor; you just need the sense of humor and you will be surprised…. Existence is hilarious. Everything is in a dancing mood, you just have to be in the same mood to understand it. I am not sorry that J. Krishnamurti is dead; there was nothing more for him to attain. I am sorry that his teaching did not reach the human heart because it was too dry, juiceless, with no humor, no laughter. But you will be surprised to know — whatever he was saying was against religions, was against politics, was against the status quo, was against the whole past, yet nobody was condemning him for the simple reason that he was ineffective. There was no reason to take note of him…. Krishnamurti failed because he could not touch the human heart; he could only reach the human head. The heart needs some different approaches. This is where I have differed with him all my life: unless the human heart is reached, you can go on repeating parrot-like, beautiful words — they don’t mean anything. Whatever Krishnamurti was saying is true, but he could not manage to relate it to your heart. In other words, what I am saying is that J. Krishnamurti was a great philosopher but he could not become a master. He could not help people, prepare people for a new life, a new orientation. 
But still I love him, because amongst the philosophers he comes the closest to the mystic way of life. He himself avoided the mystic way, bypassed it, and that is the reason for his failure. But he is the only one amongst the modern contemporary thinkers who comes very close, almost on the boundary line of mysticism, and stops there. Perhaps he’s afraid that if he talks about mysticism people will start falling into old patterns, old traditions, old philosophies of mysticism. That fear prevents him from entering. But that fear also prevents other people from entering into the mysteries of life…. I have met thousands of Krishnamurti people — because anybody who has been interested in Krishnamurti sooner or later is bound to find his way towards me, because where Krishnamurti leaves them, I can take their hand and lead them into the innermost shrine of truth. You can say my connection with Krishnamurti is that Krishnamurti has prepared the ground for me. He has prepared people intellectually for me; now it is my work to take those people deeper than intellect, to the heart; and deeper than the heart, to the being. Our work is one. Krishnamurti is dead, but his work will not be dead until I am dead. His work will continue. References: What Osho said about J Krishnamurti and his work on his death.
https://medium.com/devansh-mittal/oshos-views-on-j-krishnamurthy-895a742e2eac
['Devansh Mittal']
2019-10-07 14:17:02.487000+00:00
['Spirituality', 'J Krishnamurthy', 'Osho', 'Psychology', 'Philosophy']
Title Osho’s Views J KrishnamurthyContent Jiddu Krishnamurti died Osho expressed thought work relevance longevity usefulness worth reading discussion called “Death mystic J Krishnamurti” J Krishnamurti died last Monday Ojai California past spoken another enlightened Would please comment death death enlightened like J Krishnamurti nothing sad something celebrated song dance moment rejoicing death death know immortality death death body J Krishnamurti go living universal consciousness forever forever three day J Krishnamurti died one friend reported word strange Krishnamurti sad simply said one thing “I wasted life People listening entertainment” mystic revolution entertainment hear allow open door pure fire burn rubbish old purify new human risky allow fire — rather opening door immediately close door entertainment another thing change make conscious contrary help remain unconscious two three hour forget worry concern anxiety — get lost entertainment note man passed century managed create entertainment need unconscious afraid conscious conscious mean go metamorphosis shocked news death man like J Krishnamurti dy paper don’t space devote man ninety year continuously helping humanity intelligent mature Nobody worked hard long small news article unnoticeable — politician sneeze make headline connection Krishnamurti real mystery loved since known loving towards never met hence relationship connection something beyond word seen ever yet…perhaps two person closest whole world tremendous communion need language need physical presence… asking connection deepest possible connection — need physical contact need linguistic communication used criticize used criticize enjoyed other’s criticism — knowing perfectly well mean dead miss able criticize won’t right joy criticize intelligent man century understood people died seems world go way without even looking back single moment intelligent man longer difficult find sharpness intelligence century people sleep walker taken much note newspaper small corner nobody read death declared seems ninetyyearold man continuously speaking almost seventy year moving around world trying help people get unconditioned trying help people become free — nobody seems even pay tribute man worked hardest whole history man’s freedom man’s dignity don’t feel sorry death death beautiful attained life capable give certainly feel sorry whole world go missing greatest flight consciousness highest peak brightest star much concerned trivia feel deep affinity Krishnamurti even talk connection right connection possible two thing separate feel almost oneness spite criticism spite criticism — joking old man provoking old man…and easily provoked… Krishnamurti’s teaching beautiful serious experience feeling seventy year went waste serious people longfaced miserable serious type collected around collector corps became older corps also became older know people listening almost whole life old still alive know one woman ninetyfive know many people One thing seen common serious Life need little playfulness little humor little laughter point absolute disagreement otherwise genius penetrated deeply possible every dimension man’s spirituality like desert tiring would like back garden Eden innocent serious like small child playing whole existence playful whole existence full humor need sense humor surprised… Existence hilarious Everything dancing mood mood understand sorry J Krishnamurti dead nothing attain sorry teaching reach human heart dry juiceless humor laughter surprised know — whatever saying 
religion politics status quo whole past yet nobody condemning simple reason ineffective reason take note him… Krishnamurti failed could touch human heart could reach human head heart need different approach differed life unless human heart reached go repeating parrotlike beautiful word — don’t mean anything Whatever Krishnamurti saying true could manage relate heart word saying J Krishnamurti great philosopher could become master could help people prepare people new life new orientation still love amongst philosopher come closest mystic way life avoided mystic way bypassed reason failure one amongst modern contemporary thinker come close almost boundary line mysticism stop Perhaps he’s afraid talk mysticism people start falling old pattern old tradition old philosophy mysticism fear prevents entering fear also prevents people entering mystery life… met thousand Krishnamurti people — anybody interested Krishnamurti sooner later bound find way towards Krishnamurti leaf take hand lead innermost shrine truth say connection Krishnamurti Krishnamurti prepared ground prepared people intellectually work take people deeper intellect heart deeper heart work one Krishnamurti dead work dead dead work continue References Osho said J Krishnamurti work deathTags Spirituality J Krishnamurthy Osho Psychology Philosophy
4,272
Clinic Scheduling: 3 Key Points to Help You Quickly Build the Right Online Appointment and Registration Website!
A free, cross-platform online booking and scheduling system that supports both the web version and mobile devices.
https://medium.com/simplybooktw/%E8%A8%BA%E6%89%80%E6%8E%92%E7%A8%8B%E8%A6%8F%E5%8A%83-3-%E5%A4%A7%E9%87%8D%E9%BB%9E-%E5%B9%AB%E6%82%A8%E5%BF%AB%E9%80%9F%E6%89%93%E9%80%A0%E9%81%A9%E5%90%88%E7%9A%84%E7%B7%9A%E4%B8%8A%E9%A0%90%E7%B4%84%E6%8E%9B%E8%99%9F%E7%B6%B2%E7%AB%99-bbcb8652c482
['Simplybook.Me']
2020-12-08 09:10:29.392000+00:00
['Simplybookrecommend', 'Simplybook', '五分鐘打造專屬預約系統', 'Medical', 'Productivity']
Title 診所排程規劃:3 大重點,幫您快速打造適合的線上預約掛號網站!Content 免費的跨平台線上預約排程系統,同時支援網頁版以及行動裝置 Oh Instagram source inspiration countless millennials visual gem perfect “look cool I…Tags Simplybookrecommend Simplybook 五分鐘打造專屬預約系統 Medical Productivity
4,273
Cypress and Mobile Apps?. Cypress.io + React Native Web + Pareto…
What mobile app testing feels like. Ow. (Pen and paper? Seriously? :) ) Photo by freestocks on Unsplash It is tricky to set up automated testing of mobile apps. Maybe you’re on a small project and your Jest + Enzyme unit tests aren’t giving you the ROI you want. Maybe you want to test network error scenarios or time-sensitive logic, but don’t have the means to do so in your current app framework. Your friends in the web-development world tell you about Cypress, but they don’t have a React-Native mobile app to test. There’s a way to bridge the gap that doesn’t require rearchitecting your app. In this article, I lay out how to apply the best of web app testing to your React-Native mobile app, with a few tips and tricks along the way. Cypress (Cypress is a development-oriented web-app end-to-end and integration testing tool. Read more in the Cypress.io docs.) When selecting an e2e testing solution for web apps, we face the question “should I choose a Selenium-based tool, or should I choose Cypress?” This is a false dichotomy — though they both test web apps, they solve different problems. Listen to the answer the Cypress.io docs give in the FAQ section: … Cypress may not be able to give you 100% coverage without you changing anything, but that’s okay. Use different tools to test the less accessible parts of your application, and let Cypress test the other 99%. (From the Cypress.io FAQ) If you’re a fan of the Pareto Principle (“20% effort, 80% results”, more or less), you’ll start to see the appeal of Cypress. If your requirements ask for that last 20% of the result (cross-platform/cross-browser, cross-origin, multi-tab, etc.), no one is stopping you from picking up Selenium to cover the cases Cypress can’t address. (I’ve found that software testing has more to do with economics and ROI than software, but that’s a separate article.) In short: Cypress is about getting more ROI from your tests. (Not to mention the powerful mocking abilities Cypress unlocks — I personally love the network request-response mocking features.) Wouldn’t it be nice if you could get the same philosophy and powers when testing your mobile apps?
https://medium.com/javascript-in-plain-english/easy-mobile-app-automated-tests-509e9cde311f
['James Fulford']
2020-04-13 02:13:13.223000+00:00
['Mobile App Development', 'JavaScript', 'Software Development', 'Expo', 'React Native']
Title Cypress Mobile Apps Cypressio React Native Web Pareto…Content mobile app testing feel like Ow Pen paper Seriously Photo freestocks Unsplash tricky set automated testing mobile apps Maybe you’re small project Jest Enzyme unit test aren’t giving ROI want Maybe want test network error scenario timesensitive logic don’t mean current app framework friend webdevelopment world tell Cypress don’t ReactNative mobile app test There’s way bridge gap doesn’t require rearchitecting app article lay apply best web app testing ReactNative mobile app tip trick along way Cypress Cypress developmentoriented webapp endtoend integration testing tool Read Cypressio doc selecting e2e testing solution web apps face question “should choose Seleniumbased tool choose Cypress” falsedichotomy — though test web apps solve different problem Listen answer Cypressio doc give FAQ section … Cypress may able give 100 coverage without changing anything that’s okay Use different tool test le accessible part application let Cypress test 99 Cypressio FAQ you’re fan Pareto Principle “20 effort 80 results” le you’ll start see appeal Cypress requirement ask last 20 result crossplatformcrossbrowser crossorigin multitab etc one stopping picking Selenium cover case Cypress can’t address I’ve found software testing economics ROI software that’s separate article short Cypress getting ROI test mention powerful mocking ability Cypress unlocks — personally love network requestresponse mocking feature Wouldn’t nice could get philosophy power testing mobile appsTags Mobile App Development JavaScript Software Development Expo React Native
4,274
In Defense of Very Long Novels
Photo by Ryan Graybill on Unsplash This past week I’ve read two seemingly polar-opposite LitHub articles. The first was “In Praise of Difficult Novels” by Will Self, which argues for a return of the High Modernist movement in current literary fiction. The second was “On the Very Contemporary Art of Flash Fiction” by John Dufresne, which explains the opportunities of flash fiction, especially with respect to writing on the Internet. I don’t judge a piece of writing based on its length. What’s important is what the author is able to convey to readers within the limitations of their form. This is why flash fiction can be so brilliant, not only for its accessibility and its reflection of the Twitter age, as Dufresne points out, but also for great flash fiction’s profound ability to capture a moment in all its strange singularity. Medium has been a good platform for finding flash fiction, and I assure you a quick search will not leave you disappointed. That being said, I think this standard should apply to all forms of fiction. What I want to argue is that long novels have been unfairly rejected or made taboo by readers. However, I also want to push back against Will Self’s intellectual-nostalgia conception of complexity in literature. I think the High Modernist writers (e.g. Joyce) made enormous progress in developing the form of long novels, but I think the focus of the discussion should be on which of their techniques were good for readers and communication, rather than on what only and sometimes exclusively makes sense for writers, like obscure allusions and stream-of-consciousness writing. I once heard Junot Díaz argue that one quality that separates short stories from novels is that novels can make a lot more mistakes. That is, a great short story needs to be perfectly clean, whereas readers will overlook many of the weaknesses in a novel because readers are generally nicer to novelists they like. It seems then that reader satisfaction follows a fluctuating scale, where they are more judgemental of short stories, less so of novels, and then become increasingly impatient as the form gets lengthier. I’m like this too. I think this impatience stems partially from the fact that we read novels for clear stories or for well-thought-out, well-crafted themes and characters. Like Díaz said, we have a lot of trust in novelists. So, when a book starts stretching past four or five hundred pages, we start to wonder if the novelist really knows what they’re doing, or whether there is actually anything new or worthwhile left to read. My concern is that this worry causes readers to be wary of long novels in general, and to assume that length necessarily means excess. It is clear, though, that many long novels just need to be edited; Murakami’s 1Q84 and Yanagihara’s A Little Life are fair examples of this, where length takes away from their ideas. What then is a standard for a great long novel? I think the answer comes from thinking about what makes great pieces of writing in any form. For example, some of what makes great short or flash fiction is its ability to say so much in such little space. Essentially, an act of compression or refinement. I would argue that one central quality of long novels is their ability to create chaos out of order. The extreme example is Finnegans Wake, but this is also apparent in novels like War and Peace, Gravity’s Rainbow, and even Harry Potter or The Lord of the Rings. What’s interesting to me is that no one would criticize The Lord of the Rings for being too long.
The reason why is that readers recognize that it takes a lot of space to build an entire world, often with a huge cast of characters and subplots. I think the same standard should apply to long pieces of literary fiction, where the author is trying to craft a whole new world to reflect all the complexities of the real world. The fact is, yes, short fiction can encapsulate complexity, but can it immerse you in it? Can it make you feel that complexity physically? An obvious objection to literature’s history of long novels is its very male pretentiousness and hyper-intellectualism. This is completely fair, and I’ve been a contributor to this issue with the pieces I’ve written on my page. I think it’s really unfortunate that some of the prominent writers that have taken on the challenge of long novels are very pretentious and inaccessible, but I don’t agree that that’s the fault of the form. I don’t believe in Will Self’s argument of “long novels are important because the Modernist project was so beautiful because they used X and Y techniques…” Yes, maybe long novels are not for you, or maybe it’s just that the long novels we hype up as amazing #1s are necessary stepping stones, full of the mistakes that are required to fully actualize the form. (May I suggest Middlemarch?) Long novels have the potential to hold our entire world, and I hope that this potential is not lost on readers.
https://medium.com/literally-literary/in-defense-of-very-long-novels-3b9df2fc3e9c
['Xi Chen']
2018-09-28 13:10:28.308000+00:00
['Reading', 'Books', 'Essay', 'Culture', 'Literally Literary']
Title Defense Long NovelsContent Photo Ryan Graybill Unsplash past week I’ve read two seemingly polar opposite LitHub article first “In Praise Difficult Novels” Self argues return High Modernist movement current literary fiction second “On Contemporary Art Flash Fiction” John Dufresne explains opportunity especially respect writing Internet flash fiction don’t judge piece writing based length What’s important author able convey reader within limitation form flash fiction brilliant accessibility reflection Twitter age Dufresne point also great flash fiction’s profound ability capture moment strange singularity Medium good platform finding flash fiction assure quick search leave disappointed said think standard apply form fiction want argue long novel unfairly rejected made taboo reader However also want push back Self’s intellectualnostalgia conception complexity literature think High Modernist writer eg Joyce made enormous progress developing form long novel think focus discussion technique good reader communication rather sometimes exclusively make sense writer like obscure allusion stream consciousness writing heard Junot Díaz argue one quality separate short story novel novel make lot mistake great short story need perfectly clean whereas reader look many weakness novel reader generally nicer novelist like seems reader satisfaction follows fluctuating scale judgemental short story le novel become increasingly impatient form get lengthier I’m like think impatience stem partially fact read novel clear story well thought well crafted theme character Like Díaz said lot trust novelist book start stretching past four five hundred page start wonder novelist really know they’re whether actually anything new worthwhile left read concern worry cause reader wary long novel general assume length necessarily mean excess Although it’s clear many long novel need edited — Murakami’s 1Q84 Yanagihara’s Little Life fair example length take away idea standard great long novel think answer come thinking make great piece writing form example make great short flash fiction ability say much little space Essentially act compression refinement would argue one central quality long novel ability create chaos order extreme example Finnegan’s Wake also apparent novel like War Peace Gravity’s Rainbow even Harry Potter Lord Rings What’s interesting one would criticize Lord Rings long reason reader recognize take lot space build entire world often huge cast character subplots think standard apply long piece literary fiction author trying craft whole new world reflect complexity real world fact yes short fiction encapsulate complexity immerse make feel complexity physically obvious objection literature’s history long novel male pretentiousness hyperintellectualism completely fair I’ve contributor issue piece I‘ve written page think it’s really unfortunate prominent writer taken challenge long novel pretentious inaccessible don’t agree that’s fault form don’t believe Self’s argument “long novel important Modernist project beautiful used X techniques…” Yes maybe long novel maybe it’s long novel hype amazing 1 necessary stepping stone full mistake required fully actualize form May suggest Middlemarch Long novel potential hold entire world hope potential lost readersTags Reading Books Essay Culture Literally Literary
4,275
Get with the algorithm: Facebook’s News Feed Changes
Get with the algorithm: Facebook’s News Feed Changes We Are Social hosted a talk on Facebook’s recent News Feed change announcement. Here’s a quick summary of some of the key themes that came out of the discussion. The end of organic reach? Facebook’s News Feed announcement may have come as a surprise to many, but organic reach has been dropping off in recent years. We Are Social’s Chief Strategy Officer, Mobbie Nazir, says on average their clients are seeing an organic reach level of around 4%. Lauren Davey, Head of Social Media & Display at Barclaycard Business, said that the company doesn’t post any organic content on Facebook — only paid posts. She said that marketers need to stop seeing social media as a free commodity and see it as another paid marketing channel. I agree with this somewhat — but you can still get great results on other channels such as Twitter and Instagram without putting a budget behind your posts — creativity is key. However, you do still need to invest in a great social media manager to make this work. No more ‘Tag a mate’ content Facebook pages like LADbible have traditionally used ‘Tag a mate’ posts to quickly gain high reach and engagement levels. As part of the News Feed changes, Facebook’s algorithms will no longer favour these types of posts. LADbible have always had a Facebook-first approach — it all started as a Facebook page, even before they had a website, and they now have around 150 employees and billions of views per week. Peter Heneghan, Head of Communications at LADbible, says they have diversified the types of content they share — moving towards more ‘meaningful’ content. They recently polled their followers on what topics are the most important to them — mental health came out on top — so they’ve created content sparking the debate around mental health — crucially, targeting young men. Meaningful Content ‘Meaningful Content’ was the buzz phrase of the morning. Many organisations pump out crap branded content for content’s sake, said Leo Ryan, Vice President of Customer Success (EMEA) at Spredfast. Brands must carefully consider the content they create and ask themselves if it’s actually interesting to the people it’s aimed at. The ultimate goal of creating meaningful content is creating meaningful conversations; it’s all about quality over quantity. Conversations & customer care Yes — organic reach is declining — but we need to put less emphasis on reach. Direct conversations with customers are by far the most engaging form of social media. Make sure your brand is ready to chat to its followers: providing them with a great customer experience will seriously build a brand’s reputation. Nobody really knows what impact these changes will have on social media marketing; all we can do is predict. Personally, I think it’s important (and brave) that Facebook wants people to spend less time on the platform. Social media has a powerful grip on many of us — having both a positive and negative effect on our lives. Recent research reveals the negative impact it is having on young people’s mental health — it’s important that the platforms act responsibly knowing this. Facebook’s most important asset is its users — keep them happy or risk losing them.
https://medium.com/confab-social/get-with-the-algorithm-facebooks-news-feed-changes-36f8e022b23f
['Joanna Ayre']
2018-02-06 12:33:21.759000+00:00
['Algorithms', 'Facebook', 'Content Strategy', 'Social Strategy', 'Social Media']
Title Get algorithm Facebook’s News Feed ChangesContent Get algorithm Facebook’s News Feed Changes Social hosted talk Facebook’s recent News Feed change announcement Here’s quick summary key theme came discussion end organic reach Facebook’s News Feed announcement may come surprise many organic reach dropping recent year Social’s Chief Strategy Officer Mobbie Nazir say average client seeing organic reach level around 4 Lauren Davey Head Social Media Display Barclaycard Business said company doesn’t post organic content Facebook — paid post said marketer need stop seeing social medium free commodity see another paid marketing channel agree somewhat — still get great result channel Twitter Instagram without putting budget behind post — creativity key However still need invest great social medium manager make work ‘Tag mate’ content Facebook page like LADbible traditionally used ‘Tag mate’ post quickly gain high reach engagement level part newsfeed change — Facebook’s algorithm longer favour type post LADbible always Facebookfirst approach — started Facebook page even website around 150 employee billion view per week Peter Heneghan Head Communications LADbible say diversified type content share — moving towards ‘meaningful’ content recently polled follower topic important — mental health came top — they’ve created content sparking debate around mental health — crucially targeting young men Meaningful Content ‘Meaningful Content’ buzz phrase morning Many organisation pump crap branded content content’s sake said Leo Ryan Vice President Customer Success EMEA Spredfast Brands must carefully consider content create ask it’s actually interesting people it’s aimed ultimate goal creating meaningful content creating meaningful conversation it’s quality quantity Conversations customer care Yes — organic reach declining — need put le emphasis reach Direct conversation customer far engaging form social medium Make sure brand ready chat follower providing great customer experience seriously build brand reputation Nobody really know impact change social medium marketing predict Personally think it’s important brave Facebook want people spend le time platform Social medium powerful grip many u — positive negative effect life Recent research reveals negative impact young people’s mental health — important platform act responsibly knowing Facebook’s important asset user —keep happy risk losing themTags Algorithms Facebook Content Strategy Social Strategy Social Media
4,276
Predicting StockX Sneaker Prices With Machine Learning
The Footwear industry consists of companies engaged in the manufacturing of footwear such as dress shoes, slippers, boots, galoshes, sandals, and athletic and trade-related footwear; however, the most lucrative sector of this industry is collectible sneakers. The rise of marketplace apps like StockX and GOAT, alongside the proliferation of social media sites where you’re just one message away from turning a rare pair of trainers into cash, means that more people are selling their shoes than ever before. The global sneaker resale market has been valued at over $2 billion, while the right pair of kicks can go for over $10,000 💸. Moreover, the massive margin of profit on each shoe makes the resale market attractive to those who would like to make some extra cash, given that in the past year, the average profit margin in the sneaker industry was 42.5%. While there is plenty of money to be made, it can be risky to buy a shoe due to the volatile nature of its price. Sneakers are like stocks, with their resale price constantly changing from day to day. Thus, I developed this web application to predict the price of a given shoe based on factors such as date, shoe size, buyer region, and more. This tool resolves the issue of knowing which sneaker is worthwhile and when to buy it. As a “sneakerhead” and reseller myself, I know that this program will have lots of value in the community. For in-depth details on this project, check out my GitHub Repo.

Getting Started

Installation

Clone this repo, create a blank Anaconda environment, and install the requirements file.

# Clone the repo
$ git clone https://github.com/lognorman20/stockx_competiton

# Create new environment called ‘stockx-env’
conda create -n stockx-env python=3.8

# Activate the environment we just made
conda activate stockx-env

# Install the requirements
pip install -r requirements.txt

Usage

In your terminal, cd to the repository, then to the application folder. Run this program using the commands below. Make sure to run the app from the `application/` directory. After running it, click on the link provided in the terminal.

cd application
python app.py

Understanding the Data

The data I used is from StockX’s data competition in 2019. Here’s a description of the data from StockX: “The data we’re giving you consists of a random sample of all Off-White x Nike and Yeezy 350 sales from between 9/1/2017 (the month that Off-White first debuted “The Ten” collection) and the present. There are 99,956 total sales in the data set; 27,794 Off-White sales, and 72,162 Yeezy sales. The sample consists of U.S. sales only. To create this sample, we took a random, fixed percentage of StockX sales (X%) for each colorway, on each day, since September 2017. So, for each day the Off-White Jordan 1 was on the market, we randomly selected X% of its sales from each day. (It’s not important to know what X is; all that matters is that it’s a random sample, and that the same fixed X% of sales was selected from every day, for every sneaker). Every row in the spreadsheet represents an individual StockX sale. There are no averages or order counts; this is just a random sample of daily sales data.” I did some exploratory data analysis and made some visuals. You can check out my EDA notebook on the GitHub repo:

Fig. 1: The Average Daily Sale Price from 2017 to 2019
Fig. 2: The Average Sale Price by State
Fig. 3: The Average Sale Price by Sneaker Name
Fig. 4: Correlations between each feature
Fig. 5: Sale Price Distribution of Off-White Sneakers
Fig. 6: Sale Distribution of Yeezy Sneakers
Fig. 7: The Most Popular Shoe Sizes
Fig. 8: The Most Popular Sneakers
Fig. 9: Best Selling Sneaker Retail Prices

Development

Data Cleaning

The data that StockX gave me was not very messy. Here’s what I did:

Changed ‘order date’ dtype
Changed ‘release date’ dtype
Removed ‘-’ from sneaker name
Removed ‘$’ and comma from sale price
Removed ‘$’ from retail price
Renamed columns to get rid of spaces
Converted dates into numerical values
Converted categorical data to numerical using OneHotEncoding

Model Building

To begin, I split the data into train and test sets with an 80/20 split. I selected three models:

Random Forest Regressor, because it has the power to handle a large data set with higher dimensionality, provides higher accuracy through cross validation, is commonly used when analyzing the stock market due to its random nature, and each of its trees draws a random sample from the original data set when generating its splits, adding a further element of randomness that prevents overfitting.

XGBoost, because I have a large number of training examples, given that this dataset has about 100,000 rows. Therefore, it should have plenty of data to learn from and apply gradient boosting to. This dataset also has a mix of categorical and numerical features, which XGBoost tends to do well with.

Decision Tree Regressor, as a baseline model to compare the others to.

Model performance

Since I am trying to predict an exact value, I decided to use mean squared error to measure the accuracy of each model. I was expecting XGBoost to perform the best due to its gradient boosting methods; however, the random forest regressor was able to outperform it.

Decision Tree Accuracy (Baseline): 0.97284
XGBoost Test Accuracy: 0.98225
RandomForest Test Accuracy: 0.98452
Model with best accuracy: RandomForest

The highest performing model was the RandomForestRegressor with an accuracy of 98.5%. Not bad.

Productionization

In this step, I pickled my model and saved it into a callable object to be used to create a basic Flask application. After that, I struggled to summon my knowledge of HTML and CSS from my 6th grade tech class to create a simple front-end web site for my model to be hosted on. I inserted my model into the web application and the rest is history! (Check out the demo on the GitHub page)
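To make the modeling steps above concrete, here is a minimal sketch of the kind of pipeline the article describes: converting dates to numbers, one-hot encoding the categorical columns, an 80/20 split, and a RandomForestRegressor. This is an illustration under assumptions, not the repo’s actual code; the column names (order_date, sneaker_name, buyer_region, shoe_size, retail_price, sale_price) are hypothetical and may not match the real StockX dump.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical file and column names; the real dataset may differ
df = pd.read_csv('stockx_data.csv')

# Convert the order date into a numerical value (days since the first Off-White drop)
df['order_date'] = pd.to_datetime(df['order_date'])
df['days_since_start'] = (df['order_date'] - pd.Timestamp('2017-09-01')).dt.days

# One-hot encode the categorical features (pd.get_dummies is one way to do this)
features = pd.get_dummies(
    df[['sneaker_name', 'buyer_region', 'shoe_size', 'retail_price', 'days_since_start']],
    columns=['sneaker_name', 'buyer_region']
)
target = df['sale_price']

# 80/20 train/test split, as in the article
X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=42
)

# Fit the random forest and print its R^2-style score on the test set
model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))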
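Similarly, for the productionization step, a minimal sketch of pickling a trained model and serving predictions from a basic Flask app might look like the following. The file name, route, and JSON shape are assumptions for illustration, not necessarily what the project’s app.py does.

import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)

# Load the pickled model into a callable object at startup
# (the model would have been saved earlier with pickle.dump(model, f))
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)

@app.route('/predict', methods=['POST'])
def predict():
    # Expect a JSON body holding one already-encoded feature row
    payload = request.get_json()
    prediction = model.predict([payload['features']])[0]
    return jsonify({'predicted_sale_price': float(prediction)})

if __name__ == '__main__':
    app.run(debug=True)

Running python app.py would then expose the prediction endpoint locally, mirroring the usage instructions above.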
Reflection

Real World Application

This project can be applied in several ways:

1. Helping to decide when to buy a sneaker by predicting its price at any given time 📈
2. Knowing which factors influence the sale price of each sneaker can help businesses optimize their shoe-buying process toward those that have the most potential 👍
3. Sneaker businesses can see a timeline of when sneaker prices are high or low to know when to buy/sell 📆
4. Know if your friend got ripped off for buying their shoes too early or too late! 🤣

What I learned

All in all, this project gave me better insight into the worlds of machine learning and sneakers. If I were to do this project again, I would choose a different way to handle categorical variables other than OneHotEncoding, such as `pd.get_dummies`, to reduce the number of features. When I was creating the Flask application, it was difficult to recreate the large number of features that I had in my training data in a real-world application, and using a different method would resolve this issue. I was surprised that Off-White sneakers typically sold for much more than Yeezy sneakers. From my experience as a sneaker reseller, this threw me off guard. Moreover, I was surprised to see that certain retail prices typically sold better than others. Visualizing the data helped me notice these trends, and I now know how I can apply them.

Contact

Feel free to reach out to me on LinkedIn and follow my work on Github! LinkedIn GitHub
https://medium.com/swlh/predicting-stockx-sneaker-prices-with-machine-learning-ec9cb625bec0
['Logan Norman']
2020-10-05 03:23:55.296000+00:00
['Machine Learning', 'Programming', 'Sneakers', 'Predictions', 'Python']
Title Predicting StockX Sneaker Prices Machine LearningContent Footwear industry consists company engaged manufacturing footwear dress shoe slipper boot galosh sandal athletic trade related footwear however lucrative sector industry collectible sneaker rise marketplace apps like StockX GOAT alongside proliferation social medium site you’re one message away turning rare pair trainer cash mean people selling shoe ever global sneaker resale market valued 2 billion right pair kick go 10000 💸 Moreover massive margin profit shoe make resale market attractive would like make extra cash given past year average profit margin sneaker industry 425 plenty money made risky buy shoe due volatile nature shoe Sneakers like stock resale price constantly changing day day Thus developed web application predict price given shoe based factor date shoe size buyer region tool resolve issue knowing sneaker worthwhile buy “sneakerhead” reseller know program lot value community indepth detail project check GitHub Repo Getting Started Installation Clone repo create blank Anaconda environment install requirement file git clone Clone repo git clone httpsgithubcomlognorman20stockxcompetiton Create new environment called ‘stockxenv’ conda create n stockxenv python38 Activate environment made conda activate stockxenv Install requirement pip install r requirementstxt Usage terminal Cd repository application folder Run program using command Make sure run app application directory running click link provided terminal cd application python apppy Understanding Data data used StockX’s data competition 2019 Here’s description data StockX “The data we’re giving consists random sample OffWhite x Nike Yeezy 350 sale 912017 month OffWhite first debuted “The Ten” collection present 99956 total sale data set 27794 OffWhite sale 72162 Yeezy sale sample consists US sale create sample took random fixed percentage StockX sale X colorway day since September 2017 day OffWhite Jordan 1 market randomly selected X sale day It’s important know X matter it’s random sample fixed X sale selected every day every sneaker Every row spreadsheet represents individual StockX sale average order count random sample daily sale data” exploratory data analysis made visuals check EDA notebook GitHub repo Fig 1 Average Daily Sale Price 2017 2019 Fig 2 Average Sale Price State Fig 3 Average Sale Price Sneaker Name Fig 4 Coorleations feature Fig 5 Sale Price Distribution OffWhite Sneakers Fig 6 Sale Distribution Yeezy Sneakers Fig 7 Popular Shoe Sizes Fig 8 Popular Sneakers Fig 9 Best Selling Sneaker Retail Prices Development Data Cleaning data StockX gave messy Here’s Changed ‘order date’ dtype Changed ‘release date’ dtype Removed ‘’ sneaker name Removed ‘’ comma sale price Removed ‘’ retail price Renamed column get rid space Converted date numerical value Converted categorical data numerical using OneHotEncoding Model Building begin split data train test set 8020 split selected three model Random Forest Regressor power handle large data set higher dimensionality provides higher accuracy cross validation commonly used analyzing stock market due random nature tree draw random sample original data set generating split adding element randomness prevents overfitting XGBoost large number training example given dataset 100000 row Therefore plenty data learn apply gradient boosting dataset also mix categorical numerical feature XGBoost tends well Decision Tree Regressor baseline model compare others Model performance Since trying predict exact value decided use mean 
squared error measure accuracy model expecting XGBoost perform best due gradient boosting method however random forest regressors able perform Decision Tree Accuracy Baseline 097284 XGBoost Test Accuracy 098225 RandomForest Test Accuracy 098452 Model best accuracy RandomForest highest performing model RandomForestRegressor accuracy 985 bad Productionization step pickled model saved callable object used create basic Flask application struggled summon knowledge HTML CSS 6th grade tech class create simple frontend web site model hosted inserted model web application rest history Check demo GitHub page Reflection Real World Application project applied several way 1 Helping decide buy sneaker predicting price given time 📈 2 Knowing factor influence sale price sneaker help business optimize shoe buying process potential 👍 3 Sneaker business see timeline sneaker price high low know buysell 📆 4 Know friend got ripped buying shoe early late 🤣 learned project gave better insight world machine learning sneaker project would choose different way handle categorical variable OneHotEncoding pdgetdummies reduce amount feature creating Flask application difficult recreate lucrative amount feature training data real world application using different method would absolve issue surprised OffWhite sneaker typically sold much Yeezy sneaker experience sneaker reseller threw guard Moreover surprised see certain retail price typically sold better others Visualizing data helped notice trend know apply Contact Feel free reach LinkedIn follow work Github LinkedIn GitHubTags Machine Learning Programming Sneakers Predictions Python
4,277
For Love
For Love I Hope You Can Feel It… Photo by Adrian Swancar on Unsplash It’s amazing how I saw you that night. And it was like the whole world stopped when your eyes said “Hello.” Your gaze said “Remember me.” And let’s dance on life through eternity. I could cry a tear for the love in your eyes. As my once heart ache turned into loving sighs. I remember you, a love I do not know. For without you, I feel my heart in a chokehold. And the world thought we were crazy. I don’t know. Maybe a little… maybe. With a cup of tea and some laughs on a high. I’ll remember you when I saw you the first time. Many lifetimes ago…
https://medium.com/scribe/for-love-3c7638b49d8a
['Q. Imagine']
2020-12-16 09:42:17.311000+00:00
['Poetry', 'Poems On Medium', 'Writing', 'Love', 'Poem']
Title LoveContent Love Hope Feel It… Photo Adrian Swancar Unsplash It’s amazing saw night like whole world stopped eye said “Hello” gaze said “Remember me” let’s dance life eternity could cry tear love eye heart ache turned loving sigh remember love know without feel heart chokehold world thought crazy don’t know Maybe little… maybe cup tea laugh high I’ll remember saw first time Many lifetime ago…Tags Poetry Poems Medium Writing Love Poem
4,278
Helping Those Who Help Others — How We Updated This Nonprofit’s Site for Easier Use and Clearer Messaging
Helping Those Who Help Others — How We Updated This Nonprofit’s Site for Easier Use and Clearer Messaging Ideometry · Oct 20, 2017 · 3 min read Simply put, CDA wants to make the world a better place, and they want to help others make the world a better place by ensuring their relief efforts do not have unintended negative consequences. Through their website, CDA offers publications, case studies, toolkits and guides relating to areas such as Responsible Business and Conflict Sensitivity, but they also provide in-person advisory services and trainings to nonprofits, NGOs and corporations in these same areas of expertise. The Problem Though clients give glowing reviews of CDA’s services, people were less than enthusiastic about their website. Along with its outdated appearance and wordy subpages, the site was not user-friendly. For example, the search function for CDA’s publications, the main driver of traffic to the site, was very difficult to find and navigate. Furthermore, CDA’s website did not reflect the recent restructuring of CDA’s organization, namely their Collaborative Learning branch and their Advisory Services. The Solution Ideometry conducted extensive interviews with current CDA employees as well as internal and external stakeholders to get an accurate understanding of what the exact needs were for the new website. We compiled this information to create a series of user journeys, and these user journeys guided the restructuring of CDA’s website. The new website highlights those aspects of CDA users most want to see — the upcoming events, the recent publications, the blog posts — while educating them about new CDA project and service areas. It’s also extremely user-friendly, with mobile-compatibility, modern design and clear calls to action. Most importantly, the back end of the website is user-friendly for CDA staff, so they can quickly update the content as needed. Ideometry even designed a new logo that CDA can use not only on its website, but on mailers, email headings and business cards. *** If you liked what you saw here, check out some of the other branding and creative campaigns we’ve done for a major credit union and a BBQ catering startup. Need help creating an amazing brand? Get in touch with us today.
https://medium.com/ideometry/helping-those-who-help-others-how-we-updated-this-nonprofits-site-for-easier-use-and-clearer-3b3cceeb09ea
[]
2017-10-24 14:43:01.694000+00:00
['Web Design', 'Web Development', 'Marketing', 'Nonprofit', 'Digital Marketing']
Title Helping Help Others — Updated Nonprofit’s Site Easier Use Clearer MessagingContent Helping Help Others — Updated Nonprofit’s Site Easier Use Clearer Messaging Ideometry Follow Oct 20 2017 · 3 min read Simply put CDA want make world better place want help others make world better place ensuring relief effort unintended negative consequence website CDA offer publication case study toolkits guide relating area Responsible Business Conflict Sensitivity also provide inperson advisory service training nonprofit NGOs corporation area expertise Problem Though client give glowing review CDA’s service people le enthusiastic website Along outdated appearance wordy subpages site userfriendly example search function CDA’s publication main driver traffic site difficult find navigate Furthermore CDA’s website reflect recent restructuring CDA’s organization namely Collaborative Learning branch Advisory Services Solution Ideometry conducted extensive interview current CDA employee well internal external stakeholder get accurate understanding exact need new website compiled information create series user journey user journey guided restructuring CDA’s website new website highlight aspect CDA user want see — upcoming event recent publication blog post — educating new CDA project service area It’s also extremely userfriendly mobilecompatibility modern design clear call action importantly back end website userfriendly CDA staff quickly update content needed Ideometry even designed new logo CDA use website mailer email heading business card liked saw check branding creative campaign we’ve done major credit union BBQ catering startup Need help creating amazing brand Get touch u todayTags Web Design Web Development Marketing Nonprofit Digital Marketing
4,279
5 habits for coping with stress that are actually making your anxiety worse
By Amy Morin From a racing heartbeat to excessive worrying, anxiety feels awful. It affects you physically, cognitively, and emotionally. The symptoms can make it difficult to function. Sometimes you can pinpoint where the anxiety is coming from, like when you’re anxious about an upcoming root canal. At other times, you might feel anxious about everything — debt, relationships, work, and your health. Amy Morin. Courtesy of Amy Morin When your anxiety levels are high, you might feel desperate to do whatever it takes to feel better fast. But the things you reach for to get instant relief might actually be making your anxiety worse. As a therapist, I see it happen all the time. People work really hard to help themselves feel better. But much of the time, their efforts aren’t just counterproductive — they’re downright harmful. Here are five common mistakes that will make your anxiety worse, even though you may think they’re making you feel better: 1. Avoiding the things that make you feel anxious On the surface, avoidance seems like a helpful response to anxiety. If you feel anxious about your financial situation, you might ignore your bills and avoid looking at your bank account. Avoiding the reality of your mounting debt and dwindling bank account will keep your anxiety at bay — at least temporarily. As your financial problems mount, however, your anxiety will grow. Research backs up the fact that the more you avoid anxiety-provoking situations, the more anxiety-provoking they become. And avoidance causes you to lose confidence in your ability to face these fears. So while avoidance might give you a quick moment of relief, the act of dodging problems worsens anxiety over time. 2. Scrolling through your phone before you go to sleep Clients who come into my therapy office often say things like, “My mind just won’t shut off at night” or, “As soon as I try to go to sleep, my brain just reminds me of all the things I need to start worrying about.” In an effort to drown out the noise in their heads, many of them scroll through their phones before they fall asleep. And while looking at social media for a few minutes might feel like it quiets their brain for a minute, staring at a screen actually interferes with sleep and leads to more anxiety. In fact, just having a smartphone in the same room while you’re sleeping can increase your anxiety. A 2018 study published in “Computers in Human Behavior” found that after just one week of not sleeping with a smartphone in the bedroom, individuals reported less anxiety, better quality sleep, and improved well-being. So you might want to try it as an experiment of your own. For one week, leave your smartphone in the kitchen when you go to sleep. See if you feel better. A whopping 94% of participants in the study decided to continue leaving their phones in another room when they slept because they felt so much better. 3. Venting to your friends and family When you’ve had a rough day, you might think you need to “get your feelings out.” So you may be eager to share with your family and friends all the things that went wrong. After all, you might erupt like a pressure cooker if you stuff your feelings, right? Well, that’s actually a misconception. The more you talk about things that cause you distress, the more you keep yourself in a heightened state of arousal. 
A 2013 study published in the journal “Cyberpsychology, Behavior, and Social Networking” found that venting backfires — especially in people with perfectionist tendencies (which is common in individuals with anxiety disorders). The authors of the study say people are better off focusing on the positive aspects of their day. Recounting what went right, rather than dwelling on what went wrong, can boost mood and decrease anxiety. 4. Thinking about your problems There’s a common misconception that the more you think about a problem, the more likely you are to develop a solution. So many anxious people sit around running zillions of “what if…” scenarios through their heads just to make sure they’re prepared. But thinking longer and harder isn’t necessarily the best way to solve a problem. In fact, letting your brain work through a problem in the background could be a better option. Researchers have found an “incubation period” might be the key to solving problems and making your best decisions. Studies show people make better decisions after they give their brains a break from dwelling on a problem. So whether you’re worried about a specific issue or dwelling on an anxiety-provoking problem, distract yourself for a bit. Give the unconscious part of your brain an opportunity to work through the issue in the background. 5. Self-medicating with drugs or alcohol Reaching for drugs or alcohol at the end of a long day might seem like a helpful way to relax your anxious brain. But self-medicating usually backfires. Despite the repercussions, self-medicating is a popular coping strategy. Studies suggest that almost 25% of individuals with anxiety disorders try to mask their symptoms with substances. Using drugs and alcohol to cope with anxiety has been linked to a variety of adverse outcomes, ranging from higher levels of stress and dysfunction to lower quality of life and increased physical health problems. So while substances might take the edge off for a minute, they contribute to longer-term problems. And these problems fuel anxiety, making it a cycle that can be difficult to break. How to get help for anxiety If you struggle with anxiety and have gotten caught up in habits that are making you feel worse, get professional help. Anxiety is one of the most treatable yet under-treated conditions out there. Cognitive behavioral therapy is an effective therapeutic strategy that could reduce your symptoms and help you break free from the unhelpful habits that are keeping you stuck. Medication may be an option as well. Talk to your physician or reach out to a mental health professional so you can break free from the habits that are keeping you stuck in a cycle of anxiety. This article was originally published on Business Insider July 14, 2020. For more great stories, visit Business Insider’s homepage.
https://medium.com/business-insider/5-habits-for-coping-with-stress-that-are-actually-making-your-anxiety-worse-162c5f33cc9b
['Business Insider']
2020-12-25 17:03:29.408000+00:00
['Anxiety', 'Stress', 'Mental Health', 'Coping Strategies', 'Screentime']
4,280
【Summary】Progress Made in Dialog Management Model Research
This article is the result of the collaborative efforts of the following experts and researchers in the Intelligent Robot Conversational AI Team: Yu Huihua and Jiang Yixuan from Cornell University as well as Dai Yinpei (nicknamed Yanfeng), Tang Chengguang (Enzhu), Li Yongbin (Shuide), and Sunjian (Sunjian) from Alibaba DAMO Academy. Many efforts have been made to develop highly intelligent human-machine dialog systems since research began on artificial intelligence (AI). Alan Turing proposed the Turing test in 1950[1]. He believed that machines could be considered highly intelligent if they passed the Turing test. To pass this test, the machine had to communicate with a real person so that this person believed they were talking to another person. The first-generation dialog systems were mainly rule-based. For example, the ELIZA system[2] developed by MIT in 1966 was a psychological medical chatbot that matched methods using templates. The flowchart-based dialog system popular in the 1970s simulates state transition in the dialog flow based on the finite state automaton (FSA) model. These machines have transparent internal logic and are easy to analyze and debug. However, they are less flexible and scalable due to their high dependency on expert intervention. Second-generation dialog systems driven by statistical data (hereinafter referred to as the statistical dialog systems) emerged with the rise of big data technology. At that time, reinforcement learning was widely studied and applied in dialog systems. A representative example is the statistical dialog system based on the Partially Observable Markov Decision Process (POMDP) proposed by Professor Steve Young of Cambridge University in 2005[3]. This system is significantly superior to rule-based dialog systems in terms of robustness. It maintains the state of each round of dialog through Bayesian inference based on speech recognition results and then selects a dialog policy based on the dialog state to generate a natural language response. With a reinforcement learning framework, the POMDP-based dialog system constantly interacts with user simulators or real users to detect errors and optimize the dialog policy accordingly. A statistical dialog system is a modular system not highly dependent on expert intervention. However, it is less scalable, and the model is difficult to maintain. In recent years, with breakthroughs in deep learning in the image, voice, and text fields, third-generation dialog systems built around deep learning have emerged. These systems still adopt the framework of the statistical dialog systems, but apply a neural network model in each module. Neural network models have powerful representation and language classification and generation capabilities. Therefore, models based on natural language are transformed from generative models, such as Bayesian networks, into deep discriminative models, such as Convolutional Neural Networks (CNNs), Deep Neural Networks (DNNs), and Recurrent Neural Networks (RNNs)[5]. The dialog state is obtained by directly calculating the maximum conditional probability instead of the Bayesian a posteriori probability. The deep reinforcement learning model is also used to optimize the dialog policy[6]. In addition, the success of end-to-end sequence-to-sequence technology in machine translation makes end-to-end dialog systems possible. 
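To make the second-generation approach concrete before moving on: a POMDP-style tracker never commits to a single dialog state, but keeps a belief distribution over hidden states and updates it each turn by Bayesian inference. Below is a minimal sketch of that update; the two-state space, transition matrix, and observation likelihoods are toy assumptions for illustration, not values from any cited system.

```python
import numpy as np

def belief_update(belief, transition, obs_likelihood):
    """One turn of Bayesian belief tracking: b'(s') ∝ P(o|s') * sum_s P(s'|s,a) b(s)."""
    predicted = transition.T @ belief        # predict step over hidden states
    unnormalized = obs_likelihood * predicted
    return unnormalized / unnormalized.sum()

# Toy example: two hidden user goals ("wants flight" vs. "wants hotel").
belief = np.array([0.5, 0.5])                # uniform prior
transition = np.array([[0.9, 0.1],           # P(s'|s, a): goals mostly persist
                       [0.2, 0.8]])
obs_likelihood = np.array([0.7, 0.2])        # P(o|s'): noisy ASR evidence
print(belief_update(belief, transition, obs_likelihood))  # -> approx [0.81 0.19]
```

A third-generation tracker replaces this generative update with a discriminative network that maps the dialog history directly to a distribution over states.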
Facebook researchers proposed a task-oriented dialog system based on memory networks[4], presenting a new way forward for end-to-end task-oriented dialog systems within third-generation dialog systems. In general, third-generation dialog systems are better than second-generation dialog systems, but a large amount of tagged data is required for effective training. Therefore, improving the cross-domain migration and scalability of the model has become an important area of research. Common dialog systems are divided into the following three types: chat-, task-, and Q&A-oriented. In a chat-oriented dialog, the system generates interesting and informative natural responses to allow human-machine dialog to proceed[7]. In a Q&A-oriented dialog, the system analyzes each question and finds a correct answer from its libraries[8]. A task-oriented dialog (hereinafter referred to as a task dialog) is a task-driven multi-round dialog. The machine determines the user’s requirements through understanding, active inquiry, and clarification, makes queries by calling an Application Programming Interface (API), and returns the correct results. Generally, a task dialog is a sequential decision-making process. During the dialog, the machine updates and maintains the internal dialog state by understanding user statements and then selects the optimal action based on the current dialog state, such as determining the requirement, querying restrictions, and providing results. Task-oriented dialog systems are divided by architecture into two categories. One type is a pipeline system that has a modular structure[5], as shown in Figure 1. It consists of four key modules: Natural Language Understanding (NLU): identifies and parses a user’s text input to obtain semantic tags that can be understood by computers, such as slot-values and intentions. Dialog State Tracking (DST): maintains the current dialog state based on the dialog history; the dialog state is the cumulative meaning of the dialog history, generally expressed as slot-value pairs. Dialog Policy: outputs the next system action based on the current dialog state; the DST module and the dialog policy module are collectively referred to as the dialog manager (DM). Natural Language Generation (NLG): converts system actions to natural language output. This modular system structure is highly interpretable, easy to implement, and applied in most practical task-oriented dialog systems in the industry. However, this structure is not flexible enough: the modules are independent of each other and difficult to optimize jointly, which makes it difficult to adapt to changing application scenarios. Additionally, due to the accumulation of errors between modules, the upgrade of a single module may require the adjustment of the whole system. Figure 1. Modular structure of a task-oriented dialog system[41] Another implementation of a task-oriented dialog system is an end-to-end system, which has been a popular field of academic research in recent years [9][10][11].
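The division of labor among the four pipeline modules can be illustrated with a deliberately tiny, rule-based sketch. The keyword NLU, slot names, and response templates below are illustrative stand-ins for trained models, not code from any system cited here:

```python
def nlu(utterance):
    """Toy NLU: keyword spotting stands in for a trained semantic parser."""
    slots = {}
    if "cheap" in utterance:
        slots["price"] = "cheap"
    if "center" in utterance:
        slots["area"] = "center"
    return {"intent": "inform", "slots": slots}

def dst(state, semantics):
    """Toy DST: the state is the accumulated slot-value pairs of the history."""
    state.update(semantics["slots"])
    return state

def policy(state):
    """Toy dialog policy: request the first missing slot, else make an offer."""
    for slot in ("price", "area"):
        if slot not in state:
            return "request_" + slot
    return "offer_restaurant"

def nlg(action):
    """Toy NLG: template lookup converts the system action to a sentence."""
    templates = {"request_price": "What price range do you prefer?",
                 "request_area": "Which part of town?",
                 "offer_restaurant": "I found a matching restaurant for you."}
    return templates[action]

state = {}
for user_turn in ["I want a cheap restaurant", "somewhere in the center please"]:
    state = dst(state, nlu(user_turn))
    print(nlg(policy(state)))
# Turn 1 -> "Which part of town?"; turn 2 -> "I found a matching restaurant for you."
```

In a real pipeline each of these functions is a separate learned model, which is precisely why joint optimization across module boundaries is difficult.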
This end-to-end structure trains an overall mapping from the natural language input on the user side to the natural language output on the machine side. It is highly flexible and scalable, reducing design labor costs and removing the isolation between modules. However, the end-to-end model places high requirements on the quantity and quality of data and does not provide clear modeling for processes such as slot filling and API calling. This model is still being explored and is as yet rarely applied in the industry. Figure 2. End-to-end structure of a task-oriented dialog system[41] With higher requirements on product experience, actual dialog scenarios become more complex, and DM needs to be further improved. Traditional DM is usually built as a clear dialog script system (searching for matching answers, querying the user intent, and then ending the dialog) with a pre-defined system action space, user intent space, and dialog body. However, due to unpredictable user behaviors, traditional dialog systems are less responsive and have greater difficulty dealing with undefined situations. In addition, many actual scenarios require cold start without sufficient tagged dialog data, resulting in high data cleansing and tagging costs. DM based on deep reinforcement learning requires a large amount of data for model training: according to the experiments in many academic papers, hundreds of complete sessions are required to train a dialog model, which hinders the rapid development and iteration of dialog systems. To solve the limitations of traditional DM, researchers in academic and industry circles have begun to focus on how to strengthen the usability of DM. Specifically, they are working to address the following shortcomings in DM: poor scalability, insufficient tagged data, and low training efficiency. I will introduce the latest research results in terms of the preceding aspects. Cutting-Edge Research on Dialog Manager Shortcoming 1: Poor Scalability As mentioned above, DM consists of the DST and dialog policy modules. The most representative traditional DST is the neural belief tracker (NBT) proposed by scholars from Cambridge University in 2017[12]. NBT uses neural networks to track the state of complex dialogs in a single domain. By using representation learning, NBT encodes the system actions in the previous round, the user statements in the current round, and candidate slot-value pairs to calculate semantic similarity in a high-dimensional space and detect the slot value expressed by the user in the current round. Therefore, NBT can identify slot values that are not in the training set but are semantically similar to those in the set by using the word vector representation of the slot-value pair. This avoids the need to create a semantic dictionary and allows the slot values to be extended. Later, Cambridge scholars further improved NBT[13] by changing the input slot-value pair to the domain-slot-value triple. The recognition results of each round are accumulated using model learning instead of manual rules, and all data is trained by the same model, so knowledge is shared among different domains and the total number of parameters remains unchanged as the number of domains increases. Among traditional dialog policy research, the most representative is the ACER-based policy optimization proposed by Cambridge scholars[6]. By applying the experience replay technique, the authors tried both the trust region actor-critic model and the episodic natural actor-critic model.
The results proved that the deep AC-based reinforcement learning algorithms were the best in sample utilization efficiency, algorithm convergence, and dialog success rate. However, traditional DM still needs to be improved in terms of scalability, specifically in the following three respects: How to deal with changing user intents. How to deal with changing slots and slot values. How to deal with changing system actions. Changing User Intents If a system does not take the user intent into account, it will often provide nonsensical answers. As shown in Figure 3, the user’s “confirm” intent is not considered. A new dialog script must be added to help the system deal with this problem. Figure 3. Example of a dialog with new intent[15] The traditional model outputs a fixed one-hot vector of the old intent category. Once a new user intent not in the training set appears, vectors need to be changed to include the new intent category, and the new model needs to be retrained. This makes the model less maintainable and scalable. One paper[15] proposes a teacher-student learning framework to solve this problem. In the teacher-student training architecture, the old model and logical rules for new user intents are used as the teacher, and the new model as a student. This architecture uses knowledge distillation technology. Specifically, for the old intent set, the probability output of the old model directly guides the training of the new model. For the new intent, the logical rules are used as new tagged data to train the new model. In this way, the new model no longer needs to interact with the environment for re-training. The paper presented the results of an experiment performed on the DSTC2 dataset. The confirm intent is deliberately removed and then added as a new intent to the dialog body to verify whether the new model is adaptable. Figure 4 shows the experiment result. The new model (Extended System), the model containing all intents (Contrast System), and the old model are compared. The result shows that the new model achieves satisfactory success rates in extended new intent identification at different noise levels. Figure 4. Comparison of various models at different noise levels Of course, systems with this architecture need to be further trained. CDSSM[16], a proposed semantic similarity matching model, can identify extended user intents without tagged data and model re-training. Based on the natural description of user intents in the training set, CDSSM directly learns an intent embedding encoder and embeds the description of any intent into a high dimensional semantic space. In this way, the model directly generates corresponding intent embedding based on the natural description of the new intent and then identifies the intent. Many models that improve scalability mentioned below are designed with similar ideas. Tags are moved from the output end of the model to the input end, and neural networks are used to perform semantic encoding on tags (tag names or natural descriptions of the tags) to obtain certain semantic vectors and then match their semantic similarity. A separate paper[43] provides another idea. Through man-machine collaboration, manual customer services are used to deal with user intents not in the training set after the system is launched. This model uses an additional neural parser to determine whether manual customer service is required based on the dialog state vector extracted from the current model. 
If it is, the model distributes the current dialog to online customer service. If not, the model makes a prediction. The parser obtained through data learning can determine whether the current dialog contains a new intent, and responses from customer service are regarded as correct by default. This man-machine collaboration mechanism effectively deals with user intents not found in the training set during online testing and significantly improves the accuracy of the dialog. Changing Slots and Slot Values In dialog state tracking involving multiple or complex domains, dealing with changing slots and slot values has always been a challenge. Some slots have non-enumerative slot values, for example, the time, location, and user name. Their slot value sets, such as flights or movie theater schedules, change dynamically. In traditional DST, the slot and slot value set remain unchanged by default, which greatly reduces the system scalability. Google researchers[17] proposed a candidate set for slots with non-enumerative slot values. A candidate set is maintained for each slot. The candidate set contains a maximum of k possible slot values in the dialog and assigns a score to each slot value to indicate the user’s preference for the slot value in the current dialog. The system uses a two-way RNN model to find the value of a slot in the current user statement and then score and re-rank it with existing slot values in the candidate set. In this way, the DST of each round only needs to make a judgment on a limited slot value set, allowing us to track non-enumerative slot values. To track slot values not in the set, we can use a sequence tagging model[18] or a semantic similarity matching model such as the neural belief tracker[12]. The preceding are solutions for non-fixed slot values, but what about changing slots in the dialog body? In one paper[19], a slot description encoder is used to encode the natural language description of existing and new slots. The obtained semantic vectors representing the slot are sent with user statements as inputs to the Bi-LSTM model, and the identified slot values are output as sequence tags, as shown in Figure 5. The paper makes an acceptable assumption that the natural language description of any slot is easy to obtain. Therefore, a concept tagger applicable to multiple domains is designed, and the slot description encoder is simply implemented by the sum of simple word vectors. Experiments show that this model can quickly adapt to new slots. Compared with the traditional method, this method greatly improves scalability. Figure 5. Concept tagger structure With the development of sequence-to-sequence technology in recent years, many researchers are looking at ways to use the end-to-end neural network model to generate the DST results as a sequence. Common techniques such as attention mechanisms and copy mechanisms are used to improve the generation effect. In the famous MultiWOZ dataset for multi-domain dialogs, the team led by Professor Pascale Fung from Hong Kong University of Science and Technology used the copy network to significantly improve the recognition accuracy of non-enumerative slot values[20]. Figure 6 shows the TRADE model proposed by the team. Each time the slot value is detected, the model performs semantic encoding for different combinations of domains and slots and uses the result as the initial position input of the RNN decoder. The decoder directly generates the slot value through the copy network. 
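The core arithmetic of such a copy network is compact: the decoder’s output distribution is a learned mixture of a generation distribution over the vocabulary and a copy distribution given by the attention over the dialog-history tokens. The sketch below (assuming PyTorch; the shapes and the fixed mixing gate are illustrative, not TRADE’s actual implementation) shows this mixing:

```python
import torch

def copy_augmented_distribution(vocab_logits, attn_weights, history_ids, p_gen):
    """Mix generating from the vocabulary with copying from the dialog history.

    vocab_logits: (V,) decoder scores over the output vocabulary
    attn_weights: (T,) attention over the T history tokens (sums to 1)
    history_ids:  (T,) vocabulary id of each history token
    p_gen:        scalar gate in (0, 1): probability of generating vs. copying
    """
    vocab_dist = torch.softmax(vocab_logits, dim=-1)
    copy_dist = torch.zeros_like(vocab_dist)
    copy_dist.index_add_(0, history_ids, attn_weights)  # scatter attention mass
    return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist

vocab_size, history_len = 10, 4
dist = copy_augmented_distribution(
    vocab_logits=torch.randn(vocab_size),
    attn_weights=torch.softmax(torch.randn(history_len), dim=-1),
    history_ids=torch.tensor([2, 5, 5, 7]),   # ids of tokens seen in the dialog
    p_gen=torch.tensor(0.6),
)
print(dist.sum())  # ~1.0: the mixture is still a valid distribution
```

Because the copy term can place probability mass on any token that appeared in the dialog history, slot values never seen during training remain reachable at decoding time.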
In this way, both non-enumerative slot values and changing slot values can be generated by the same model, so slot values can be shared between domains and the model can be widely applied. Figure 6. TRADE model framework Recent research tends to view multi-domain DST as a machine reading comprehension task and transform generative models such as TRADE into discriminative models [45][47]. Non-enumerative slot values are tracked by a span-extraction reading comprehension task like SQuAD[46], in which a text span in the dialog history and questions is used as the slot value. Enumerative slot values are tracked by a multiple-choice reading comprehension task, in which the correct value is selected from the candidate values as the predicted slot value. By incorporating deep contextual word representations such as ELMo and BERT, these new models obtain the best results on the MultiWOZ dataset. Changing System Actions The last factor affecting scalability is the difficulty of pre-defining the system action space. As shown in Figure 7, when designing an electronic product recommendation system, you may ignore questions like how to upgrade the product operating system, but you cannot stop users from asking questions the system cannot answer. If the system action space is pre-defined, irrelevant answers may be provided to questions that have not been defined, greatly compromising the user experience. Figure 7. Example of a dialog where the dialog system encounters an undefined system action[22] In this case, we need to design a dialog policy network that helps the system quickly expand its actions. The first attempt to do this was made by Microsoft[21], which modified the classic DQN structure to enable reinforcement learning in an unrestricted action space. The dialog task in this paper is a text-game task: each round’s action is a single sentence, the number of candidate actions is not fixed, and the story varies with the action. The authors proposed a new model, the Deep Reinforcement Relevance Network (DRRN), which matches the current dialog state with the optional system actions by semantic similarity matching to obtain the Q function. Specifically, in a round of dialog, each action text of arbitrary length is encoded by a neural network to obtain a fixed-length system action vector, and the story background text is encoded by another neural network to obtain a fixed-length dialog state vector. The two vectors are used to generate the final Q value through an interaction function, such as a dot product. Figure 8 shows the structure of the model designed in the paper. Experiments show that DRRN outperforms traditional DQN (using the padding technique) in the text games “Saving John” and “Machine of Death”. Figure 8. DRRN model, in which round t has two candidate actions, and round t+1 has three candidate actions In another paper[22], the authors approached this problem from the perspective of the entire dialog system and proposed the Incremental Dialogue System (IDS), as shown in Figure 9. IDS first encodes the dialog history to obtain the context vector through the Dialog Embedding module and then uses a VAE-based Uncertainty Estimation module to evaluate, based on the context vector, a confidence level indicating whether the current system can give a correct answer. Similar to active learning, if the confidence level is higher than the threshold, DM scores all available actions and then predicts the probability distribution based on the softmax function.
If the confidence level is lower than the threshold, the tagger is requested to tag the response of the current round (select the correct response or create a new response). The new data obtained in this way is added to the data pool to update the model online. With this human-teaching method, IDS not only supports learning in an unrestricted action space, but also quickly collects high-quality data, which is quite suitable for actual production. Figure 9. The Overall framework of IDS Shortcoming 2: Insufficient Tagged Data The extensive application of dialog systems results in diversified data requirements. To train a task-oriented dialog system, as much domain-specific data as possible is needed, but quality tagged data is costly. Scholars have tried to solve this problem in three ways: (1) using machines to tag data to reduce the tagging costs; (2) mining the dialog structure to use non-tagged data efficiently; and (3) optimizing the data collection policy to efficiently obtain high-quality data. Automatic Tagging To address the cost and inefficiency of manual tagging, scholars hope to use supervised learning and unsupervised learning to allow machines to assist in manual tagging. One paper[23] proposed the auto-dielabel architecture, which automatically groups intents and slots in the dialog data by using the unsupervised learning method of hierarchical clustering to automatically tag the dialog data (the specific tag of the category needs to be manually determined). This method is based on the assumption that expressions of the same intent may share similar background features. Initial features extracted by the model include word vectors, part-of-speech (POS) tags, noun word clusters, and Latent Dirichlet allocation (LDA). All features are encoded by the auto-encoder into vectors of the same dimension and spliced. Then, the inter-class distance calculated by the radial bias function (RBF) is used for dynamic hierarchical clustering. Classes that are closest to each other are merged automatically until the inter-class distance between the classes is greater than the threshold. Figure 10 shows the model framework. Figure 10. Auto-dialabel model In another paper[24], supervised clustering is used to implement machine tagging. The author views each dialog data record as a graph node and sees the clustering process as the process of identifying the minimum spanning forest. The model uses a support vector machine (SVM) to train the distance scoring model between nodes in the Q&A dataset through supervised learning. It then uses the structured model and the minimum subtree spanning algorithm to derive the class information corresponding to the dialog data as the hidden variable. It generates the best cluster structure to represent the user intent type. Dialog Structure Mining Due to the lack of high-quality tagged data for training dialog systems, finding ways to fully mine implicit dialog structures or information in the untagged dialog data has become a popular area of research. Implicit dialog structures or information contribute to the design of dialog policies and the training of dialog models to some extent. One paper[25] proposed to use unsupervised learning in a variational RNN (VRNN) to automatically learn hidden structures in dialog data. The author provides two models that can obtain the dynamic information in a dialog: Discrete-VRNN (D-VRNN) and Direct-Discrete-VRNN (DD-VRNN). 
As shown in Figure 11, x_t indicates the t-th round of dialog, h_t indicates the hidden variable of the dialog history, and z_t indicates the hidden variable (one-dimensional one-hot discrete variable) of the dialog structure. The difference between the two models is that for D-VRNN, the hidden variable z_t depends on h_(t-1) , while for DD-VRNN, the hidden variable z_t depends on z_(t-1) . Based on the maximum likelihood of the entire dialog, VRNN uses some common methods of VAE to estimate the distribution of a posteriori probabilities of the hidden variable z_t . Figure 11. D-VRNN and DD-VRNN The experiments in the paper show that VRNN is superior to the traditional HMM method. VRNN also adds the dialog structure information to the reward function, supporting faster convergence of the reinforcement learning model. Figure 12 shows the transition probability of the hidden variable z_t in restaurants mined by D-VRNN. Figure 12. Dialog stream structure mined by D-VRNN from the dialog data related to restaurants CMU scholars[26] also tried to use the VAE method to deduce system actions as hidden variables and directly use them for dialog policy selection. This can alleviate the problems caused by insufficient predefined system actions. As shown in Figure 13, for simplicity, an end-to-end dialog system framework is used in the paper. The baseline model is an RL model at the word level (that is, a dialog action is a word in the vocabulary). The model uses an encoder to encode the dialog history and then uses a decoder to decode it and generate a response. The reward function directly compares the generated response statement with the real response statement. Compared with the baseline model, the latent action model adds a posterior probability inference between the encoder and the decoder and uses discrete hidden variables to represent the dialog actions without any manual intervention. The experiment shows that the end-to-end RL model based on latent actions is superior to the baseline model in terms of statement generation diversity and task completion rate. Figure 13. Baseline model and latent action model Data Collection Policy Recently, Google researchers proposed a method to quickly collect dialog data27: First, use two rule-based simulators to interact to generate a dialog outline, which is a dialog flow framework represented by semantic tags. Then, convert the semantic tags into natural language dialogs based on templates. Finally, rewrite the natural statements by crowdsourcing to enrich the language expressions of dialog data. This reverse data collection method features high collection efficiency and complete and highly available data tags, reducing the cost and workload of data collection and processing. Figure 14. Examples of dialog outline, template-based dialog generation, and crowdsourcing-based dialog rewrite This method is a machine-to-machine (M2M) data collection policy, in which a wide range of semantic tags for dialog data are generated, and then crowdsourced to generate a large number of dialog utterances. However, the generated dialogs cannot cover all the possibilities in real scenarios. In addition, the effect depends on the simulator. In relevant academic circles, two other methods are commonly used to collect data from dialog systems: human-to-machine (H2M) and human-to-human (H2H). The H2H method requires a multi-round dialog between the user, played by a crowdsourced staff member, and the customer service personnel, played by another crowdsourced staff member. 
The user proposes requirements based on specified dialog targets, such as buying an airplane ticket, and the customer service staff annotates the dialog tags and makes responses. This mode is called the Wizard-of-Oz framework. Many dialog datasets, such as WOZ[5] and MultiWOZ[28], are collected in this mode. The H2H method helps us get dialog data that is the most similar to that of actual service scenarios. However, it is costly to design different interactive interfaces for different tasks and to clean up incorrect annotations. The H2M data collection policy allows users and trained machines to interact with each other. This way, we can directly collect data online and continuously improve the DM model through RL. The famous DSTC2&3 dataset was collected in this way. The performance of the H2M method depends largely on the initial performance of the DM model. In addition, the data collected online has a great deal of noise, which results in high clean-up costs and affects the model optimization efficiency. Shortcoming 3: Low Training Efficiency With the successful application of deep RL to the game of Go, this method is also widely used in task dialog systems. For example, the ACER dialog management method in one paper[6] combines model-free deep RL with other techniques such as experience replay, belief domain constraints, and pre-training, which greatly improves the training efficiency and stability of RL algorithms in task dialog systems. However, simply applying the RL algorithm cannot meet the actual requirements of dialog systems. One reason is that dialogs lack clear rules, reward functions, simple and clear action spaces, and perfect environment simulators that can generate hundreds of millions of quality interactive data records. Dialog tasks include changing slot values, actions, and intents, which significantly increases the action space of the dialog system and makes it difficult to define. When traditional flat RL methods are used, the curse of dimensionality may occur due to one-hot encoding of all system actions, so these methods are no longer suitable for handling complex dialogs with large action spaces. For this reason, scholars have tried many other methods, including model-free RL, model-based RL, and human-in-the-loop. Model-Free RL — HRL Hierarchical Reinforcement Learning (HRL) divides a complex task into multiple sub-tasks to avoid the curse of dimensionality in traditional flat RL methods. In one paper[29], HRL was applied to task dialog systems for the first time. The authors divided a complex dialog task into multiple sub-tasks by time. For example, a complex travel task can be divided into sub-tasks such as booking tickets, booking hotels, and renting cars. Accordingly, they designed a dialog policy network with two layers: one layer selects and arranges all sub-tasks, and the other layer executes specific sub-tasks. The DM model they proposed consists of two parts, as shown in Figure 15: the top-level policy, which selects a sub-task based on the dialog state, and the low-level policy, which completes a specific dialog action within a sub-task. The global dialog state tracker records the overall dialog state. After the entire dialog task is completed, the top-level policy receives an external reward. The model also has an internal critic module to estimate the possibility of completing the sub-tasks (the degree of slot filling for sub-tasks) based on the dialog state.
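A skeleton of this two-level decision loop is sketched below. The sub-task names, the rule-based stand-ins for the two learned policies, and the slot-filling critic are all illustrative assumptions, not the actual model of the cited paper:

```python
SUBTASKS = {"book_flight": ["request_date", "request_destination", "confirm_flight"],
            "book_hotel": ["request_checkin", "request_nights", "confirm_hotel"]}

def top_level_policy(state):
    """Pick the next unfinished sub-task (a learned policy would score them)."""
    for subtask, done in state["subtask_done"].items():
        if not done:
            return subtask
    return None

def low_level_policy(state, subtask):
    """Pick a primitive action from the chosen sub-task's restricted set."""
    remaining = [a for a in SUBTASKS[subtask] if a not in state["executed"]]
    return remaining[0]

def internal_critic(state, subtask):
    """Intrinsic reward: the degree of slot filling within the sub-task."""
    actions = SUBTASKS[subtask]
    return sum(a in state["executed"] for a in actions) / len(actions)

state = {"subtask_done": {"book_flight": False, "book_hotel": False},
         "executed": set()}
while (subtask := top_level_policy(state)) is not None:
    action = low_level_policy(state, subtask)
    state["executed"].add(action)
    intrinsic = internal_critic(state, subtask)
    print(f"{subtask}: {action} (intrinsic reward {intrinsic:.2f})")
    if intrinsic == 1.0:
        state["subtask_done"][subtask] = True
# The external reward would arrive only once the whole composite task succeeds.
```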
The low-level policy receives an intrinsic reward from the internal critic module based on the degree of completion of the sub-task. Figure 15. The HRL framework of a task-oriented dialog system For complex dialogs, traditional RL methods select a basic system action at each step, such as querying a slot value or confirming constraints. In the HRL mode, a set of basic actions is first selected based on the top-level policy, and then a basic action is selected from the current set based on the low-level policy, as shown in Figure 16. This hierarchical division of the action space captures the temporal constraints between different sub-tasks, which facilitates the completion of composite tasks. In addition, the intrinsic reward effectively relieves the problem of sparse rewards, accelerating RL training, preventing frequent switching of the dialog between different sub-tasks, and improving the accuracy of action prediction. Of course, the hierarchical design of actions requires expert knowledge, and the types of sub-tasks need to be determined by experts. Recently, tools that can automatically discover dialog sub-tasks have appeared [30][31]. By using unsupervised learning methods, these tools automatically segment the dialog state sequence of the whole dialog history, without the need to manually build a dialog sub-task structure. Figure 16. Policy selection process of HRL Model-Free RL — FRL Feudal Reinforcement Learning (FRL) is another way to deal with large action space dimensions. HRL divides a dialog policy into sub-policies based on different task stages in the time dimension, which reduces the complexity of policy learning. FRL instead divides a policy in the space dimension to restrict the action range of each sub-policy, which reduces the complexity of the sub-policies. FRL does not divide a task into sub-tasks; instead, it uses abstraction functions over the state space to extract useful features from dialog states. Such abstraction allows FRL to be applied and migrated between different domains, achieving high scalability. Cambridge scholars applied FRL[32] to task dialog systems for the first time, dividing the action space by its relevance to the slots. With this done, only the natural structure of the action space is used, and additional expert knowledge is not required. They put forward the feudal policy structure shown in Figure 17, whose decision-making process is divided into two steps: first, determine whether the next action requires a slot as a parameter; second, select the low-level policy and the next action for the corresponding slot based on the decision of the first step. Figure 17. Application of FRL in a task-oriented dialog system In general, both HRL and FRL divide the high-dimensional complex action space in different ways to address the low training efficiency that traditional RL methods suffer from with large action spaces. HRL divides tasks in line with human understanding, but expert knowledge is required to divide a task into sub-tasks. FRL divides complex tasks based on the logical structure of the actions and does not consider mutual constraints between sub-tasks. Model-Based RL The preceding RL methods are model-free: a large amount of weakly supervised data is obtained through trial-and-error interactions with the environment, and then a value network or policy network is trained accordingly; the environment itself is never explicitly modeled. There is also model-based RL, as shown in Figure 18.
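As a complement to the figures above, the two-step feudal decision of Figure 17 can be sketched in a few lines. The hand-written master and slot-level rules below only mark where the learned policies of the cited work would sit; the slot names and thresholds are toy assumptions:

```python
def master_policy(state):
    """Step 1: decide whether the next action takes a slot as its parameter."""
    return "slot_dependent" if state["missing_slots"] else "slot_independent"

def slot_level_policy(state):
    """Step 2: pick a slot, then an action from that slot's restricted set."""
    slot = state["missing_slots"][0]          # toy relevance ranking
    action = "request" if state["confidence"].get(slot, 0.0) < 0.8 else "confirm"
    return f"{action}_{slot}"

def slot_independent_policy(state):
    """Step 2 (other branch): actions that take no slot parameter."""
    return "offer" if not state["missing_slots"] else "greet"

state = {"missing_slots": ["area"], "confidence": {"area": 0.4}}
if master_policy(state) == "slot_dependent":
    print(slot_level_policy(state))           # -> request_area
else:
    print(slot_independent_policy(state))
```

Because every slot shares the same restricted action set in this scheme, adding a slot grows the decision problem only linearly instead of blowing up a flat one-hot action space.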
Model-based RL directly models the environment: by interacting with it, the system learns a probability transition function over states and rewards, namely an environment model. Then, the system interacts with the environment model to generate more training data. Therefore, model-based RL is more efficient than model-free RL, especially when it is costly to interact with the real environment. However, the resulting performance depends on the quality of environment modeling. Figure 18. Model-based RL process Using model-based RL to improve training efficiency is currently an active field of research. Microsoft first applied the classic Deep Dyna-Q (DDQ) algorithm to dialogs[33], as shown in panel (c) of Figure 19. Before DDQ training starts, a small amount of existing dialog data is used to pre-train the policy model and the world model. Then, DDQ is trained by repeating the following steps: direct RL (interact with real users online, update the policy model, and store the dialog data), world model training (update the world model based on the collected real dialog data), and planning (use the dialog data obtained from interaction with the world model to train the policy model). The world model (as shown in Figure 20) is a neural network that models the probability of environment state transitions and rewards. The inputs are the current dialog state and system action; the outputs are the next user action, the environment reward, and a dialog termination variable. The world model reduces the human-machine interaction data required by DDQ for online RL (as shown in panel (a) of Figure 19) and avoids ineffective interactions with user simulators (as shown in panel (b) of Figure 19). Figure 19. Three RL architectures Figure 20. Structure of the world model Similar to the user simulator in the dialog field, the world model can simulate real user actions and interact with the system’s DM. However, the user simulator is essentially an external environment used to simulate real users, while the world model is an internal model of the system. Microsoft researchers have made improvements based on DDQ. To improve the authenticity of the dialog data generated by the world model, they proposed[34] improving the quality of the generated dialog data through adversarial training. Considering when to use data generated through interaction with the real environment and when to use data generated through interaction with the world model, they discussed feasible solutions in a paper[35]. They also discussed a unified dialog framework to include interaction with real users in another paper[36]. This human-teaching concept has attracted attention in the industry as it can help in the building of DMs, and it will be further explained in the following sections. Figure 21.
Composite learning combining supervised pre-training, imitation learning, and online RL Google researchers proposed a composite learning method combining human teaching and RL37, which adds a human teaching stage between supervised pre-training and online RL, allowing humans to tag data to avoid the covariate shift caused by supervised pre-training[42]. Amazon researchers also proposed a similar human teaching framework[37]: In each round of dialog, the system recommends four responses to the customer service expert. The customer service expert determines whether to select one of these responses or create a new response. Finally, the customer service expert sends the selected or created response to the user. With this method, developers can quickly update the capabilities of the dialog system. In the preceding method, the system passively receives the data tagged by humans. However, a good system should actively ask questions and seek help from humans. One paper[40] introduced the companion learning architecture (as shown in Figure 22), which adds the role of a teacher (human) to the traditional RL framework. The teacher can correct the responses of the dialog system (the student, represented by the switch on the left side of the figure) and evaluate the student’s response in the form of intrinsic reward (the switch on the right side of the figure). For the implementation of active learning, the authors put forward the concept of dialog decision certainty. The student policy network is sampled multiple times through dropout to obtain the estimated approximate maximum probability of the desired action. Then the moving average of several dialog rounds is calculated through the maximum probability and used as the decision certainty of the student policy network. If the calculated certainty is lower than the target value, the system determines whether a teacher is required to correct errors and provide reward functions based on the difference between the calculated decision certainty and the target value. If the calculated certainty is higher than the target value, the system stops learning from the teacher and makes judgments on its own. Figure 22. The teacher corrects the student’s response (on the left) or evaluates the student’s response (on the right). The key to active learning is to estimate the certainty of the dialog system regarding its own decisions. In addition to dropping out policy networks, other methods include using hidden variables as condition variables to calculate the Jensen-Shannon divergence of policy networks[22] and making judgments based on the dialog success rate of the current system[36]. Dialog Management Framework of the Intelligent Robot Conversational AI Team To ensure stability and interpretability, the industry primarily uses rule-based DM models. The Intelligent Robot Conversational AI Team at Alibaba’s DAMO Academy began to explore DM models last year. When building a real dialog system, we need to solve two problems: (1) how to obtain a large amount of dialog data in a specific scenario and (2) how to use algorithms to maximize the value of data. Currently, we plan to complete the model framework design in four steps, as shown in Figure 23. Figure 23. Four steps of DM model design Step 1: First, use the dialog studio independently developed by the Intelligent Robot Conversational AI team to quickly build a dialog engine called TaskFlow based on rule-based dialog flows and build a user simulator with similar dialog flows. 
Then, have the user simulator and TaskFlow continuously interact with each other to generate a large amount of dialog data. Step 2: Train a neural network through supervised learning to build a preliminary DM model with capabilities basically equivalent to the rule-based dialog engine. The model can be expanded by combining semantic similarity matching and end-to-end generation, and dialog tasks with a large action space are divided using the HRL method. Step 3: In the development phase, make the system interact with an improved user simulator or with AI trainers and continuously enhance the system’s dialog capability based on off-policy ACER RL algorithms. Step 4: After the human-machine interaction experience is verified, launch the system and introduce human roles to collect real user interaction data. In addition, use some UI designs to easily incorporate user feedback to continuously update and enhance the model. The obtained human-machine dialog data will be further analyzed and mined for customer insights. At present, the RL-based DM model we developed can complete 80% of the dialog with the user simulator for moderately complex dialog tasks, such as booking a meeting room, as shown in Figure 24. Figure 24. Framework and evaluation indicators of the DM model developed by the Intelligent Robot Conversational AI team Summary This article provides a detailed introduction to the latest research on DM models, focusing on three shortcomings of traditional DM models: poor scalability, insufficient tagged data, and low training efficiency. To address poor scalability, common methods for handling changes in user intents, dialog bodies, and the system action space include semantic similarity matching, knowledge distillation, and sequence generation. To address insufficient tagged data, methods include automatic machine tagging, effective dialog structure mining, and efficient data collection policies. To address low training efficiency, methods such as HRL and FRL are used to divide action spaces into different layers, model-based RL methods are used to model the environment, and introducing human-in-the-loop into the dialog system training framework is also a current focus of research. Finally, I discussed the current progress of the DM model developed by the Intelligent Robot Conversational AI team of Alibaba’s DAMO Academy. I hope this summary can provide some new insights to support your own research on DM. References [1]. Turing A M. Computing machinery and intelligence[J]. Mind, 1950, 59(236): 433–460. [2]. Weizenbaum J.
ELIZA — a computer program for the study of natural language communication between man and machine[J]. Communications of the ACM, 1966, 9(1): 36–45. [3]. Young S, Gašić M, Thomson B, et al. POMDP-based statistical spoken dialog systems: A review[J]. Proceedings of the IEEE, 2013, 101(5): 1160–1179. [4]. Bordes A, Boureau Y L, Weston J. Learning end-to-end goal-oriented dialog[J]. arXiv preprint arXiv:1605.07683, 2016. [5]. Wen T H, Vandyke D, Mrksic N, et al. A network-based end-to-end trainable task-oriented dialogue system[J]. arXiv preprint arXiv:1604.04562, 2016. [6]. Su P H, Budzianowski P, Ultes S, et al. Sample-efficient actor-critic reinforcement learning with supervised data for dialogue management[J]. arXiv preprint arXiv:1707.00130, 2017. [7]. Serban I V, Sordoni A, Lowe R, et al. A hierarchical latent variable encoder-decoder model for generating dialogues[C]//Thirty-First AAAI Conference on Artificial Intelligence. 2017. [8]. Berant J, Chou A, Frostig R, et al. Semantic parsing on freebase from question-answer pairs[C]//Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. 2013: 1533–1544. [9]. Dhingra B, Li L, Li X, et al. Towards end-to-end reinforcement learning of dialogue agents for information access[J]. arXiv preprint arXiv:1609.00777, 2016. [10]. Lei W, Jin X, Kan M Y, et al. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2018: 1437–1447. [11]. Madotto A, Wu C S, Fung P. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems[J]. arXiv preprint arXiv:1804.08217, 2018. [12]. Mrkšić N, Séaghdha D O, Wen T H, et al. Neural belief tracker: Data-driven dialogue state tracking[J]. arXiv preprint arXiv:1606.03777, 2016. [13]. Ramadan O, Budzianowski P, Gašić M. Large-scale multi-domain belief tracking with knowledge sharing[J]. arXiv preprint arXiv:1807.06517, 2018. [14]. Weisz G, Budzianowski P, Su P H, et al. Sample efficient deep reinforcement learning for dialogue systems with large action spaces[J]. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 2018, 26(11): 2083–2097. [15]. Wang W, Zhang J, Zhang H, et al. A Teacher-Student Framework for Maintainable Dialog Manager[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018: 3803–3812. [16]. Chen Y N, Hakkani-Tur D, He X. Zero-Shot Learning of Intent Embeddings for Expansion by Convolutional Deep Structured Semantic Models[C]//Proceedings of the 41st IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2016). IEEE, 2016. [17]. Rastogi A, Hakkani-Tür D, Heck L. Scalable multi-domain dialogue state tracking[C]//2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2017: 561–568. [18]. Mesnil G, He X, Deng L, et al. Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding[C]//Interspeech. 2013: 3771–3775. [19]. Bapna A, Tur G, Hakkani-Tur D, et al. Towards zero-shot frame semantic parsing for domain scaling[J]. arXiv preprint arXiv:1707.02363, 2017. [20]. Wu C S, Madotto A, Hosseini-Asl E, et al. Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems[J]. arXiv preprint arXiv:1905.08743, 2019. [21]. He J, Chen J, He X, et al.
Deep reinforcement learning with a natural language action space[J]. arXiv preprint arXiv:1511.04636, 2015. [22]. Wang W, Zhang J, Li Q, et al. Incremental Learning from Scratch for Task-Oriented Dialogue Systems[J]. arXiv preprint arXiv:1906.04991, 2019. [23]. Shi C, Chen Q, Sha L, et al. Auto-Dialabel: Labeling Dialogue Data with Unsupervised Learning[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018: 684–689. [24]. Haponchyk I, Uva A, Yu S, et al. Supervised clustering of questions into intents for dialog system applications[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018: 2310–2321. [25]. Shi W, Zhao T, Yu Z. Unsupervised Dialog Structure Learning[J]. arXiv preprint arXiv:1904.03736, 2019. [26]. Zhao T, Xie K, Eskenazi M. Rethinking action spaces for reinforcement learning in end-to-end dialog agents with latent variable models[J]. arXiv preprint arXiv:1902.08858, 2019. [27]. Shah P, Hakkani-Tur D, Liu B, et al. Bootstrapping a neural conversational agent with dialogue self-play, crowdsourcing and on-line reinforcement learning[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers). 2018: 41–51. [28]. Budzianowski P, Wen T H, Tseng B H, et al. Multiwoz: A large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling[J]. arXiv preprint arXiv:1810.00278, 2018. [29]. Peng B, Li X, Li L, et al. Composite task-completion dialogue policy learning via hierarchical deep reinforcement learning[J]. arXiv preprint arXiv:1704.03084, 2017. [30]. Kristianto G Y, Zhang H, Tong B, et al. Autonomous Sub-domain Modeling for Dialogue Policy with Hierarchical Deep Reinforcement Learning[C]//Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI. 2018: 9–16. [31]. Tang D, Li X, Gao J, et al. Subgoal discovery for hierarchical dialogue policy learning[J]. arXiv preprint arXiv:1804.07855, 2018. [32]. Casanueva I, Budzianowski P, Su P H, et al. Feudal reinforcement learning for dialogue management in large domains[J]. arXiv preprint arXiv:1803.03232, 2018. [33]. Peng B, Li X, Gao J, et al. Deep dyna-q: Integrating planning for task-completion dialogue policy learning[J]. ACL, 2018. [34]. Su S Y, Li X, Gao J, et al. Discriminative deep dyna-q: Robust planning for dialogue policy learning. EMNLP, 2018. [35]. Wu Y, Li X, Liu J, et al. Switch-based active deep dyna-q: Efficient adaptive planning for task-completion dialogue policy learning. AAAI, 2019. [36]. Zhang Z, Li X, Gao J, et al. Budgeted Policy Learning for Task-Oriented Dialogue Systems. ACL, 2019. [37]. Abel D, Salvatier J, Stuhlmüller A, et al. Agent-agnostic human-in-the-loop reinforcement learning[J]. arXiv preprint arXiv:1701.04079, 2017. [38]. Liu B, Tur G, Hakkani-Tur D, et al. Dialogue learning with human teaching and feedback in end-to-end trainable task-oriented dialogue systems[J]. arXiv preprint arXiv:1804.06512, 2018. [39]. Lu Y, Srivastava M, Kramer J, et al. Goal-Oriented End-to-End Conversational Models with Profile Features in a Real-World Setting[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers). 2019: 48–55. [40]. Chen L, Zhou X, Chang C, et al.
Agent-aware dropout dqn for safe and efficient on-line dialogue policy learning[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2017: 2454–2464. [41]. Gao J, Galley M, Li L. Neural approaches to conversational AI[J]. Foundations and Trends® in Information Retrieval, 2019, 13(2–3): 127–298. [42]. Ross S, Gordon G, Bagnell D. A reduction of imitation learning and structured prediction to no-regret online learning[C]//Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. 2011: 627–635. [43]. Rajendran J, Ganhotra J, Polymenakos L C. Learning End-to-End Goal-Oriented Dialog with Maximal User Task Success and Minimal Human Agent Use[J]. Transactions of the Association for Computational Linguistics, 2019, 7: 375–386. [44]. Mrkšić N, Vulić I. Fully Statistical Neural Belief Tracking[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2018: 108–113. [45]. Zhou L, Small K. Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced Question Answering[J]. arXiv preprint arXiv:1911.06192, 2019. [46]. Rajpurkar P, Jia R, Liang P. Know What You Don't Know: Unanswerable Questions for SQuAD[J]. arXiv preprint arXiv:1806.03822, 2018. [47]. Zhang J G, Hashimoto K, Wu C S, et al. Find or Classify? Dual Strategy for Slot-Value Predictions on Multi-Domain Dialog State Tracking[J]. arXiv preprint arXiv:1910.03544, 2019. Original Source:
https://medium.com/datadriveninvestor/progress-in-dialog-management-model-research-444c52f4bc1a
['Alibaba Cloud']
2020-06-22 10:41:51.468000+00:00
['Machine Learning', 'AI', 'API', 'Alibabacloud', 'Algorithms']
Title 【Summary】Progress Made Dialog Management Model ResearchContent article result collaborative effort following expert researcher Intelligent Robot Conversational AI Team Yu Huihua Jiang Yixuan Cornell University well Dai Yinpei nicknamed Yanfeng Tang Chengguang Enzhu Li Yongbin Shuide Sunjian Sunjian Alibaba DAMO Academy Many effort made develop highly intelligent humanmachine dialog system since research began artificial intelligence AI Alan Turing proposed Turing test 19501 believed machine could considered highly intelligent passed Turing test pas test machine communicate real person person believed talking another person firstgeneration dialog system mainly rulebased example ELIZA system2 developed MIT 1966 psychological medical chatbot matched method using template flowchartbased dialog system popular 1970s simulates state transition dialog flow based finite state automaton FSA model machine transparent internal logic easy analyze debug However le flexible scalable due high dependency expert intervention Secondgeneration dialog system driven statistical data hereinafter referred statistical dialog system emerged rise big data technology time reinforcement learning widely studied applied dialog system representative example statistical dialog system based Partially Observable Markov Decision Process POMDP proposed Professor Steve Young Cambridge University 20053 system significantly superior rulebased dialog system term robustness maintains state round dialog Bayesian inference based speech recognition result selects dialog policy based dialog state generate natural language response reinforcement learning framework POMDPbased dialog system constantly interacts user simulator real user detect error optimize dialog policy accordingly statistical dialog system modular system highly dependent expert intervention However le scalable model difficult maintain recent year breakthrough deep learning image voice text field thirdgeneration dialog system built around deep learning emerged system still adopt framework statistical dialog system apply neural network model module Neural network model powerful representation language classification generation capability Therefore model based natural language transformed generative model Bayesian network deep discriminative model Convolutional Neural Networks CNNs Deep Neural Networks DNNs Recurrent Neural Networks RNNs5 dialog state obtained directly calculating maximum conditional probability instead Bayesian posteriori probability deep reinforcement learning model also used optimize dialog policy6 addition success endtoend sequencetosequence technology machine translation make endtoend dialog system possible Facebook researcher proposed taskoriented dialog system based memory networks4 presenting new way forward research endtoend taskoriented dialog system thirdgeneration dialog system general thirdgeneration dialog system better secondgeneration dialog system large amount tagged data required effective training Therefore improving crossdomain migration scalability model become important area research Common dialog system divided following three type Chat task QAoriented chatoriented dialog system generates interesting informative natural response allow humanmachine dialog proceed7 QAoriented dialog system analyzes question find correct answer libraries8 taskoriented dialog hereinafter referred task dialog taskdriven multiround dialog machine determines user’s requirement understanding active inquiry clarification make query calling Application 
Programming Interface API return correct result Generally task dialog sequence decisionmaking process dialog machine update maintains internal dialog state understanding user statement selects optimal action based current dialog state determining requirement querying restriction providing result Taskoriented dialog system divided architecture two category One type pipeline system modular structure5 shown Figure 1 consists four key module Natural Language Understanding NLU Identifies par user’s text input obtain semantic tag understood computer slotvalues intention Identifies par user’s text input obtain semantic tag understood computer slotvalues intention Dialog State Tracking DST Maintains current dialog state based dialog history dialog state cumulative meaning dialog history generally expressed slotvalue pair Maintains current dialog state based dialog history dialog state cumulative meaning dialog history generally expressed slotvalue pair Dialog Policy Outputs next system action based current dialog state DST module dialog policy module collectively referred dialog manager DM Outputs next system action based current dialog state DST module dialog policy module collectively referred dialog manager DM Natural Language Generation NLG Converts system action natural language output modular system structure highly interpretable easy implement applied practical taskoriented dialog system industry However structure flexible enough module independent difficult optimize together make difficult adapt changing application scenario Additionally due accumulation error module upgrade single module may require adjustment whole system Figure 1 Modular structure taskoriented dialog system41 Another implementation taskoriented dialog system endtoend system popular field academic research recent years911 type structure train overall mapping relationship natural language input user side natural language output machine side highly flexible scalable reducing labor cost design removing isolation module However endtoend model place high requirement quantity quality data provide clear modeling process slot filling API calling model still explored yet rarely applied industry Figure 2 Endtoend structure taskoriented dialog system41 higher requirement product experience actual dialog scenario become complex DM need improved Traditional DM usually built clear dialog script system searching matching answer querying user intent ending dialog predefined system action space user intent space dialog body However due unpredictable user behavior traditional dialog system le responsive greater difficulty dealing undefined situation addition many actual scenario require cold start without sufficient tagged dialog data resulting high data cleansing tagging cost DM based deep reinforcement learning requires large amount data model training According experiment many academic paper hundred complete session required train dialog model hinders rapid development iteration dialog system solve limitation traditional DM researcher academic industry circle begun focus strengthen usability DM Specifically working address following shortcoming DM Poor scalability Insufficient tagged data Low training efficiency introduce latest research result term preceding aspect CuttingEdge Research Dialog Manager Shortcoming 1 Poor Scalability mentioned DM consists DST dialog policy module representative traditional DST neural belief tracker NBT proposed scholar Cambridge University 201712 NBT us neural network track state complex dialog single 
domain using representation learning NBT encodes system action previous round user statement current round candidate slotvalue pair calculate semantic similarity high dimensional space detect slot value output user current round Therefore NBT identify slot value training set semantically similar set using word vector expression slotvalue pair avoids need create semantic dictionary slot value extended Later Cambridge scholar improved NBT13 changing input slotvalue pair domainslotvalue triple recognition result round accumulated using model learning instead manual rule data trained model Knowledge shared among different domain leaving total number parameter unchanged number domain increase Among traditional dialog policy research representative ACERbased policy optimization proposed Cambridge scholars6 applying experience replay technique author tried trust region actorcritic model episodic natural actorcritic model result proved deep ACbased reinforcement learning algorithm best sample utilization efficiency algorithm convergence dialog success rate However traditional DM still need improved term scalability specifically following three respect deal changing user intent deal changing slot slot value deal changing system action Changing User Intents system take user intent account often provide nonsensical answer shown Figure 3 user’s “confirm” intent considered new dialog script must added help system deal problem Figure 3 Example dialog new intent15 traditional model output fixed onehot vector old intent category new user intent training set appears vector need changed include new intent category new model need retrained make model le maintainable scalable One paper15 proposes teacherstudent learning framework solve problem teacherstudent training architecture old model logical rule new user intent used teacher new model student architecture us knowledge distillation technology Specifically old intent set probability output old model directly guide training new model new intent logical rule used new tagged data train new model way new model longer need interact environment retraining paper presented result experiment performed DSTC2 dataset confirm intent deliberately removed added new intent dialog body verify whether new model adaptable Figure 4 show experiment result new model Extended System model containing intent Contrast System old model compared result show new model achieves satisfactory success rate extended new intent identification different noise level Figure 4 Comparison various model different noise level course system architecture need trained CDSSM16 proposed semantic similarity matching model identify extended user intent without tagged data model retraining Based natural description user intent training set CDSSM directly learns intent embedding encoder embeds description intent high dimensional semantic space way model directly generates corresponding intent embedding based natural description new intent identifies intent Many model improve scalability mentioned designed similar idea Tags moved output end model input end neural network used perform semantic encoding tag tag name natural description tag obtain certain semantic vector match semantic similarity separate paper43 provides another idea manmachine collaboration manual customer service used deal user intent training set system launched model us additional neural parser determine whether manual customer service required based dialog state vector extracted current model model distributes current dialog online 
customer service model make prediction parser obtained data learning determine whether current dialog contains new intent response customer service regarded correct default manmachine collaboration mechanism effectively deal user intent found training set online testing significantly improves accuracy dialog Changing Slots Slot Values dialog state tracking involving multiple complex domain dealing changing slot slot value always challenge slot nonenumerative slot value example time location user name slot value set flight movie theater schedule change dynamically traditional DST slot slot value set remain unchanged default greatly reduces system scalability Google researchers17 proposed candidate set slot nonenumerative slot value candidate set maintained slot candidate set contains maximum k possible slot value dialog assigns score slot value indicate user’s preference slot value current dialog system us twoway RNN model find value slot current user statement score rerank existing slot value candidate set way DST round need make judgment limited slot value set allowing u track nonenumerative slot value track slot value set use sequence tagging model18 semantic similarity matching model neural belief tracker12 preceding solution nonfixed slot value changing slot dialog body one paper19 slot description encoder used encode natural language description existing new slot obtained semantic vector representing slot sent user statement input BiLSTM model identified slot value output sequence tag shown Figure 5 paper make acceptable assumption natural language description slot easy obtain Therefore concept tagger applicable multiple domain designed slot description encoder simply implemented sum simple word vector Experiments show model quickly adapt new slot Compared traditional method method greatly improves scalability Figure 5 Concept tagger structure development sequencetosequence technology recent year many researcher looking way use endtoend neural network model generate DST result sequence Common technique attention mechanism copy mechanism used improve generation effect famous MultiWOZ dataset multidomain dialog team led Professor Pascale Fung Hong Kong University Science Technology used copy network significantly improve recognition accuracy nonenumerative slot values20 Figure 6 show TRADE model proposed team time slot value detected model performs semantic encoding different combination domain slot us result initial position input RNN decoder decoder directly generates slot value copy network way nonenumerative slot value changing slot value generated model Therefore slot value shared domain allowing model widely used Figure 6 TRADE model framework Recent research tends view multidomain DST machine reading understanding task transform generative model TRADE discriminative models45 Nonenumerative slot value tracked machine reading understanding task like SQuAD46 text span dialog history question used slot value Enumerative slot value tracked multichoice machine reading understanding task correct value selected candidate value predicted slot value combining deep context word ELMO BERT new model obtain optimal result MultiWOZ dataset Changing System Actions last factor affecting scalability difficulty predefining system action space shown Figure 7 designing electronic product recommendation system may ignore question like upgrade product operating system cannot stop user asking question system cannot answer system action space predefined irrelevant answer may provided question defined 
greatly compromising user experience Figure 7 Example dialog dialog system encounter undefined system action22 case need design dialog policy network help system quickly expand action first attempt made Microsoft21 modifies classic DQN structure enable reinforcement learning unrestricted action space dialog task paper text game mission task round action single sentence uncertain number action story varies action author proposed new model Deep Reinforcement Relevance Network DRRN match current dialog state optional system action semantic similarity matching obtain Q function Specifically round dialog action text uncertain length encoded neural network obtain system action vector fixed length story background text encoded another neural network obtain dialog state vector fixed length two vector used generate final Q value interactive function dot product Figure 8 show structure model designed paper Experiments show DRRN outperforms traditional DQN using padding technique text game “Saving John” “Machine Death” Figure 8 DRRN model round two candidate action round t1 three candidate action another paper22 author wanted solve problem perspective entire dialogue system proposed Incremental Dialogue System IDS shown Figure 9 IDS first encodes dialog history obtain context vector Dialog Embedding module us VAEbased Uncertainty Estimation module evaluate based context vector confidence level used indicate whether current system give correct answer Similar active learning confidence level higher threshold DM score available action predicts probability distribution based softmax function confidence level lower threshold tagger requested tag response current round select correct response create new response new data obtained way added data pool update model online humanteaching method IDS support learning unrestricted action space also quickly collect highquality data quite suitable actual production Figure 9 Overall framework IDS Shortcoming 2 Insufficient Tagged Data extensive application dialog system result diversified data requirement train taskoriented dialog system much domainspecific data possible needed quality tagged data costly Scholars tried solve problem three way 1 using machine tag data reduce tagging cost 2 mining dialog structure use nontagged data efficiently 3 optimizing data collection policy efficiently obtain highquality data Automatic Tagging address cost inefficiency manual tagging scholar hope use supervised learning unsupervised learning allow machine assist manual tagging One paper23 proposed autodielabel architecture automatically group intent slot dialog data using unsupervised learning method hierarchical clustering automatically tag dialog data specific tag category need manually determined method based assumption expression intent may share similar background feature Initial feature extracted model include word vector partofspeech POS tag noun word cluster Latent Dirichlet allocation LDA feature encoded autoencoder vector dimension spliced interclass distance calculated radial bias function RBF used dynamic hierarchical clustering Classes closest merged automatically interclass distance class greater threshold Figure 10 show model framework Figure 10 Autodialabel model another paper24 supervised clustering used implement machine tagging author view dialog data record graph node see clustering process process identifying minimum spanning forest model us support vector machine SVM train distance scoring model node QA dataset supervised learning us structured model minimum 
subtree spanning algorithm derive class information corresponding dialog data hidden variable generates best cluster structure represent user intent type Dialog Structure Mining Due lack highquality tagged data training dialog system finding way fully mine implicit dialog structure information untagged dialog data become popular area research Implicit dialog structure information contribute design dialog policy training dialog model extent One paper25 proposed use unsupervised learning variational RNN VRNN automatically learn hidden structure dialog data author provides two model obtain dynamic information dialog DiscreteVRNN DVRNN DirectDiscreteVRNN DDVRNN shown Figure 11 xt indicates tth round dialog ht indicates hidden variable dialog history zt indicates hidden variable onedimensional onehot discrete variable dialog structure difference two model DVRNN hidden variable zt depends ht1 DDVRNN hidden variable zt depends zt1 Based maximum likelihood entire dialog VRNN us common method VAE estimate distribution posteriori probability hidden variable zt Figure 11 DVRNN DDVRNN experiment paper show VRNN superior traditional HMM method VRNN also add dialog structure information reward function supporting faster convergence reinforcement learning model Figure 12 show transition probability hidden variable zt restaurant mined DVRNN Figure 12 Dialog stream structure mined DVRNN dialog data related restaurant CMU scholars26 also tried use VAE method deduce system action hidden variable directly use dialog policy selection alleviate problem caused insufficient predefined system action shown Figure 13 simplicity endtoend dialog system framework used paper baseline model RL model word level dialog action word vocabulary model us encoder encode dialog history us decoder decode generate response reward function directly compare generated response statement real response statement Compared baseline model latent action model add posterior probability inference encoder decoder us discrete hidden variable represent dialog action without manual intervention experiment show endtoend RL model based latent action superior baseline model term statement generation diversity task completion rate Figure 13 Baseline model latent action model Data Collection Policy Recently Google researcher proposed method quickly collect dialog data27 First use two rulebased simulator interact generate dialog outline dialog flow framework represented semantic tag convert semantic tag natural language dialog based template Finally rewrite natural statement crowdsourcing enrich language expression dialog data reverse data collection method feature high collection efficiency complete highly available data tag reducing cost workload data collection processing Figure 14 Examples dialog outline templatebased dialog generation crowdsourcingbased dialog rewrite method machinetomachine M2M data collection policy wide range semantic tag dialog data generated crowdsourced generate large number dialog utterance However generated dialog cannot cover possibility real scenario addition effect depends simulator relevant academic circle two method commonly used collect data dialog system humantomachine H2M humantohuman H2H H2H method requires multiround dialog user played crowdsourced staff member customer service personnel played another crowdsourced staff member user proposes requirement based specified dialog target buying airplane ticket customer service staff annotates dialog tag make response mode called WizardofOz framework Many dialog 
datasets WOZ5 MultiWOZ 28 collected mode H2H method help u get dialog data similar actual service scenario However costly design different interactive interface different task clean incorrect annotation H2M data collection policy allows user trained machine interact way directly collect data online continuously improve DM model RL famous DSTC23 dataset collected way performance H2M method depends largely initial performance DM model addition data collected online great deal noise result high cleanup cost affect model optimization efficiency Shortcoming 3 Low Training Efficiency successful application deep RL Go game method also widely used task dialog system example ACER dialog management method one paper6 combine modelfree deep RL technique Experience Replay belief domain constraint pretraining greatly improves training efficiency stability RL algorithm task dialog system However simply applying RL algorithm cannot meet actual requirement dialog system One reason dialog lack clear rule reward function simple clear action space perfect environment simulator generate hundred million quality interactive data record Dialog task include changing slot value action intent significantly increase action space dialog system make difficult define traditional flat RL method used curse dimensionality may occur due onehot encoding system action Therefore method longer suitable handling complex dialog large action space reason scholar tried many method including modelfree RL modelbased RL humanintheloop ModelFree RL — HRL Hierarchical Reinforcement Learning HRL divide complex task multiple subtasks avoid curse dimensionality traditional flat RL method one paper29 HRL applied task dialog system first time author divided complex dialog task multiple subtasks time example complex travel task divided subtasks booking ticket booking hotel renting car Accordingly designed dialog policy network two layer One layer selects arranges subtasks layer executes specific subtasks DM model proposed consists two part shown Figure 15 Toplevel policy Selects subtask based dialog state Selects subtask based dialog state Lowlevel policy Completes specific dialog action subtask Completes specific dialog action subtask global dialog state tracker record overall dialog state entire dialog task completed toplevel policy receives external reward model also internal critic module estimate possibility completing subtasks degree slot filling subtasks based dialog state lowlevel policy receives intrinsic reward internal critic module based degree completion subtask Figure 15 HRL framework taskoriented dialog system complex dialog basic system action selected step traditional RL method querying slot value confirming constraint HRL mode set basic action selected based toplevel policy basic action selected current set based lowlevel policy shown Figure 16 hierarchical division action space cover time sequence constraint different subtasks facilitates completion composite task addition intrinsic reward effectively relief problem sparse reward accelerating RL training preventing frequent switching dialog different subtasks improving accuracy action prediction course hierarchical design action requires expert knowledge type subtasks need determined expert Recently tool automatically discover dialog subtasks appeared30 using unsupervised learning method tool automatically split dialog state sequence whole dialog history without need manually build dialog subtask structure Figure 16 Policy selection process HRL Modelfree RL — FRL Feudal 
Reinforcement Learning FRL suitable solution large dimension issue HRL divide dialog policy subpolicies based different task stage time dimension reduces complexity policy learning FRL divide policy space dimension restrict action range subpolicy reduces complexity subpolicies FRL divide task subtasks Instead us abstract function state space extract useful feature dialog state abstraction allows FRL applied migrated different domain achieving high scalability Cambridge scholar applied FRL32 task dialog system first time divide action space relevance slot done natural structure action space used additional expert knowledge required put forward feudal policy structure shown Figure 17 decisionmaking process structure divided two step Determine whether next action requires slot parameter Select lowlevel policy next action corresponding slot based decision first step Figure 17 Application FRL taskoriented dialog system general HRL FRL divide highdimensional complex action space different way address low training efficiency traditional RL method due large action space dimension HRL divide task properly line human understanding However expert knowledge required divide task subtasks FRL divide complex task based logical structure action consider mutual constraint subtasks ModelBased RL preceding RL method modelfree method large amount weakly supervised data obtained trial error interaction environment value network policy network trained accordingly process independent environment also modelbased RL shown Figure 18 Modelbased RL directly model interacts environment learn probability transition function state reward namely environment model system interacts environment model generate training data Therefore modelbased RL efficient modelfree RL especially costly interact environment However resulting performance depends quality environment modeling Figure 18 Modelbased RL process Using modelbased RL improve training efficiency currently active field research Microsoft first applied classic Deep DynaQ DDQ algorithm dialogs33 shown figure © Figure 19 DDQ training start use small amount existing dialog data pretrain policy model world model train DDQ repeating following step Direct RL Interact real user online update policy model store dialog data Interact real user online update policy model store dialog data World model training Update world model based collected real dialog data Update world model based collected real dialog data Planning Use dialog data obtained interaction world model train policy model world model shown Figure 20 neural network model probability environment state transition reward input current dialog state system action output next user action environment reward dialog termination variable world model reduces humanmachine interaction data required DDQ online RL shown figure Figure 19 avoids ineffective interaction user simulator shown figure b Figure 19 Figure 19 Three RL architecture Figure 20 Structure world model Similar user simulator dialog field world model simulate real user action interact system’s DM However user simulator essentially external environment used simulate real user world model internal model system Microsoft researcher made improvement based DDQ improve authenticity dialog data generated world model proposed34 improve quality generated dialog data adversarial training Considering use data generated interaction real environment use data generated interaction world model discussed feasible solution paper35 also discussed unified dialog framework include 
interaction real user another paper36 humanteaching concept attracted attention industry help building DMs explained following section HumanintheLoop hope make full use human knowledge experience generate highquality data improve efficiency model training Humanintheloop RL37 method introduce human being robot training designed humanmachine interaction method human efficiently guide training RL model improve training efficiency task dialog system researcher working design effective humanintheloop method based dialog feature Figure 21 Composite learning combining supervised pretraining imitation learning online RL Google researcher proposed composite learning method combining human teaching RL37 add human teaching stage supervised pretraining online RL allowing human tag data avoid covariate shift caused supervised pretraining42 Amazon researcher also proposed similar human teaching framework37 round dialog system recommends four response customer service expert customer service expert determines whether select one response create new response Finally customer service expert sends selected created response user method developer quickly update capability dialog system preceding method system passively receives data tagged human However good system actively ask question seek help human One paper40 introduced companion learning architecture shown Figure 22 add role teacher human traditional RL framework teacher correct response dialog system student represented switch left side figure evaluate student’s response form intrinsic reward switch right side figure implementation active learning author put forward concept dialog decision certainty student policy network sampled multiple time dropout obtain estimated approximate maximum probability desired action moving average several dialog round calculated maximum probability used decision certainty student policy network calculated certainty lower target value system determines whether teacher required correct error provide reward function based difference calculated decision certainty target value calculated certainty higher target value system stop learning teacher make judgment Figure 22 teacher corrects student’s response left evaluates student’s response right key active learning estimate certainty dialog system regarding decision addition dropping policy network method include using hidden variable condition variable calculate JensenShannon divergence policy networks22 making judgment based dialog success rate current system36 Dialog Management Framework Intelligent Robot Conversational AI Team ensure stability interpretability industry primarily us rulebased DM model Intelligent Robot Conversational AI Team Alibaba’s DAMO Academy began explore DM model last year building real dialog system need solve two problem 1 obtain large amount dialog data specific scenario 2 use algorithm maximize value data Currently plan complete model framework design four step shown Figure 23 Figure 23 Four step DM model design Step 1 First use dialog studio independently developed Intelligent Robot Conversational AI team quickly build dialog engine called TaskFlow based rulebased dialog flow build user simulator similar dialog flow user simulator TaskFlow continuously interact generate large amount dialog data First use dialog studio independently developed Intelligent Robot Conversational AI team quickly build dialog engine called TaskFlow based rulebased dialog flow build user simulator similar dialog flow user simulator TaskFlow continuously interact generate 
large amount dialog data Step 2 Train neural network supervised learning build preliminary DM model capability basically equivalent rulebased dialog engine model expanded combining semantic similarity matching endtoend generation Dialog task large action space divided using HRL method Train neural network supervised learning build preliminary DM model capability basically equivalent rulebased dialog engine model expanded combining semantic similarity matching endtoend generation Dialog task large action space divided using HRL method Step 3 development phase make system interact improved user simulator AI trainer continuously enhance system dialog capability based offpolicy ACER RL algorithm development phase make system interact improved user simulator AI trainer continuously enhance system dialog capability based offpolicy ACER RL algorithm Step 4 humanmachine interaction experience verified launch system introduce human role collect real user interaction data addition use UI design easily introduce user feedback continuously update enhance model obtained humanmachine dialog data analyzed mined customer insight present RLbased DM model developed complete 80 dialog user simulator moderately complex dialog task booking meeting room shown Figure 24 Figure 24 Framework evaluation indicator DM model developed Intelligent Robot Conversational AI team Summary article provides detailed introduction latest research DM model focusing three shortcoming traditional DM model Poor scalability Insufficient tagged data Low training efficiency address scalability common method processing change user intent dialog body system action space include semantic similarity matching knowledge distillation sequence generation address insufficient tagged data method include automatic machine tagging effective dialog structure mining efficient data collection policy address low training efficiency traditional DM model method HRL FRL used divide action space different layer Modelbased RL method also used model environment improve training efficiency Introducing humanintheloop dialog system training framework also current focus research Finally discussed current progress DM model developed Intelligent Robot Conversational AI team Alibaba’s DAMO Academy hope summary provide new insight support research DM References 1TURING — COMPUTING MACHINERY INTELLIGENCEJ Mind 1950 59236 433–460 2Weizenbaum J ELIZA — computer program study natural language communication man machineJ Communications ACM 1966 91 36–45 3Young Gašić Thomson B et al Pomdpbased statistical spoken dialog system reviewJ Proceedings IEEE 2013 1015 1160–1179 4Bordes Boureau L Weston J Learning endtoend goaloriented dialogJ arXiv preprint arXiv160507683 2016 5Wen H Vandyke Mrksic N et al networkbased endtoend trainable taskoriented dialogue systemJ arXiv preprint arXiv160404562 2016 6Su P H Budzianowski P Ultes et al Sampleefficient actorcritic reinforcement learning supervised data dialogue managementJ arXiv preprint arXiv170700130 2017 7 Serban V Sordoni Lowe R et al hierarchical latent variable encoderdecoder model generating dialoguesCThirtyFirst AAAI Conference Artificial Intelligence 2017 8 Berant J Chou Frostig R et al Semantic parsing freebase questionanswer pairsCProceedings 2013 Conference Empirical Methods Natural Language Processing 2013 1533–1544 9 Dhingra B Li L Li X et al Towards endtoend reinforcement learning dialogue agent information accessJ arXiv preprint arXiv160900777 2016 10 Lei W Jin X Kan et al Sequicity Simplifying taskoriented 
dialogue system single sequencetosequence architecturesCProceedings 56th Annual Meeting Association Computational Linguistics Volume 1 Long Papers 2018 1437–1447 11 Madotto Wu C Fung P Mem2seq Effectively incorporating knowledge base endtoend taskoriented dialog systemsJ arXiv preprint arXiv180408217 2018 12 Mrkšić N Séaghdha Wen H et al Neural belief tracker Datadriven dialogue state trackingJ arXiv preprint arXiv160603777 2016 13 ¬Ramadan Budzianowski P Gašić Largescale multidomain belief tracking knowledge sharingJ arXiv preprint arXiv180706517 2018 14 Weisz G Budzianowski P Su P H et al Sample efficient deep reinforcement learning dialogue system large action spacesJ IEEEACM Transactions Audio Speech Language Processing TASLP 2018 2611 2083–2097 15 Wang W Zhang J Zhang H et al TeacherStudent Framework Maintainable Dialog ManagerCProceedings 2018 Conference Empirical Methods Natural Language Processing 2018 3803–3812 16 YunNung Chen Dilek HakkaniTur Xiaodong “ZeroShot Learning Intent Embeddings Expansion Convolutional Deep Structured Semantic Models” Proceedings 41st IEEE International Conference Acoustics Speech Signal Processing ICASSP 2016 Shanghai China March 20–25 2016 IEEE 17 Rastogi HakkaniTür Heck L Scalable multidomain dialogue state trackingC2017 IEEE Automatic Speech Recognition Understanding Workshop ASRU IEEE 2017 561–568 18 Mesnil G X Deng L et al Investigation recurrentneuralnetwork architecture learning method spoken language understandingCInterspeech 2013 3771–3775 19 Bapna Tur G HakkaniTur et al Towards zeroshot frame semantic parsing domain scalingJ arXiv preprint arXiv170702363 2017 20 Wu C Madotto HosseiniAsl E et al Transferable MultiDomain State Generator TaskOriented Dialogue SystemsJ arXiv preprint arXiv190508743 2019 21 J Chen J X et al Deep reinforcement learning natural language action spaceJ arXiv preprint arXiv151104636 2015 22 Wang W Zhang J Li Q et al Incremental Learning Scratch TaskOriented Dialogue SystemsJarXiv preprint arXiv190604991 2019 23 Shi C Chen Q Sha L et alAutoDialabel Labeling Dialogue Data Unsupervised LearningCProceedings 2018 Conference Empirical Methods Natural Language Processing 2018 684–689 24 Haponchyk Uva Yu et al Supervised clustering question intent dialog system applicationsCProceedings 2018 Conference Empirical Methods Natural Language Processing 2018 2310–2321 25 Shi W Zhao Yu Z Unsupervised Dialog Structure LearningJ arXiv preprint arXiv190403736 2019 26 Zhao Xie K Eskenazi Rethinking action space reinforcement learning endtoend dialog agent latent variable modelsJ arXiv preprint arXiv190208858 2019 27 Shah P HakkaniTur Liu B et al Bootstrapping neural conversational agent dialogue selfplay crowdsourcing online reinforcement learningCProceedings 2018 Conference North American Chapter Association Computational Linguistics Human Language Technologies Volume 3 Industry Papers 2018 41–51 28 Budzianowski P Wen H Tseng B H et al Multiwoza largescale multidomain wizardofoz dataset taskoriented dialogue modellingJ arXiv preprint arXiv181000278 2018 29 Peng B Li X Li L et al Composite taskcompletion dialogue policy learning via hierarchical deep reinforcement learningJ arXiv preprint arXiv170403084 2017 30 Kristianto G Zhang H Tong B et al Autonomous Subdomain Modeling Dialogue Policy Hierarchical Deep Reinforcement LearningCProceedings 2018 EMNLP Workshop SCAI 2nd International Workshop SearchOriented Conversational AI 2018 9–16 31 Tang Li X Gao J et al Subgoal discovery hierarchical dialogue policy learningJ arXiv preprint 
arXiv180407855 2018 32 Casanueva Budzianowski P Su P H et al Feudal reinforcement learning dialogue management large domainsJ arXiv preprint arXiv180303232 2018 33 Peng B Li X Gao J et al Deep dynaq Integrating planning taskcompletion dialogue policy learningJ ACL 2018 34 Su Li X Gao J et al Discriminative deep dynaq Robust planning dialogue policy learningEMNLP 2018 35 Wu Li X Liu J et al Switchbased active deep dynaq Efficient adaptive planning taskcompletion dialogue policy learningAAAI 2019 36 Zhang Z Li X Gao J et al Budgeted Policy Learning TaskOriented Dialogue Systems ACL 201937 Abel Salvatier J Stuhlmüller et al Agentagnostic humanintheloop reinforcement learningJ arXiv preprint arXiv170104079 2017 38 Liu B Tur G HakkaniTur et al Dialogue learning human teaching feedback endtoend trainable taskoriented dialogue systemsJ arXiv preprint arXiv180406512 2018 39 Lu Srivastava Kramer J et al GoalOriented EndtoEnd Conversational Models Profile Features RealWorld SettingCProceedings 2019 Conference North American Chapter Association Computational Linguistics Human Language Technologies Volume 2 Industry Papers 2019 48–55 40 Chen L Zhou X Chang C et al Agentaware dropout dqn safe efficient online dialogue policy learningCProceedings 2017 Conference Empirical Methods Natural Language Processing 2017 2454–2464 41 Gao J Galley Li L Neural approach conversational AIJ Foundations Trends® Information Retrieval 2019 132–3 127–298 42 Ross Gordon G Bagnell reduction imitation learning structured prediction noregret online learningCProceedings fourteenth international conference artificial intelligence statistic 2011 627–635 43 Rajendran J Ganhotra J Polymenakos L C Learning EndtoEnd GoalOriented Dialog Maximal User Task Success Minimal Human Agent UseJ Transactions Association Computational Linguistics 2019 7 375–386 44 Mrkšić N Vulić Fully Statistical Neural Belief TrackingCProceedings 56th Annual Meeting Association Computational Linguistics Volume 2 Short Papers 2018 108–113 45 Zhou L Small K Multidomain Dialogue State Tracking Dynamic Knowledge Graph Enhanced Question AnsweringJ arXiv preprint arXiv191106192 2019 46 Rajpurkar P Jia R Liang P Know Don’t Know Unanswerable Questions SQuADJ arXiv preprint arXiv180603822 2018 47 Zhang J G Hashimoto K Wu C et al Find Classify Dual Strategy SlotValue Predictions MultiDomain Dialog State TrackingJ arXiv preprint arXiv191003544 2019 eager know latest tech trend Alibaba Cloud Hear top expert newly launched series Tech Show Original SourceTags Machine Learning AI API Alibabacloud Algorithms
4,281
Trade Biotech Stocks Like a Hedge Fund With These Hacks
The secrets of the market are out there, waiting to be unearthed. Few people have the curiosity or grit to dig for them. Sometimes, those secrets are right in front of our eyes. Few people have the boldness or presence of mind to simply look. In the past, I’ve presented investment ideas that have been based, in large part, on discerning work to determine with near certainty whether a biotech asset is under- or overvalued (e.g. here, here, here, and here). This work requires technical proficiency, a critical eye, and the stamina for deep analysis. I have also discussed ways in which professional investors can acquire an edge through the widespread practice of both legal and illegal insider trading. Although ethically dubious, these schemes require the cultivation of expert networks, intimate knowledge of the markets, and substantial legal wherewithal. Most importantly, the approaches above entail a professional commitment, with a concomitant investment in time and resources, and are not accessible to the layman investor. The following are shortcuts. The Clinicaltrials.gov Hack Clinicaltrials.gov is a website that provides the public with information on clinical studies. The information is provided and updated by the sponsor or principal investigator of the clinical study, and the website is maintained by the National Institutes of Health. Registration is required for any Phase 2, Phase 3, or post-marketing trial of a drug, biologic, or medical device that meets one of the following conditions: The trial has one or more sites in the United States The trial is conducted under an FDA investigational new drug application or investigational device exemption The trial involves a drug, biologic, or device that is manufactured in the United States or its territories and is exported for research These criteria yield essentially any trial that would materially affect the value of a publicly traded biotech company. While the clinical trial descriptors don’t provide granular data on the status of trials (e.g. number of patients currently enrolled or proportion that have completed the protocol), they do provide an overall classification of trial status (recruiting, completed, suspended, terminated, etc.). The FDA now also requires that trials initiated from 2017 onward report results once they are available. All told, clinicaltrials.gov is a public source of information on events that would affect most, if not all, biotech stocks. So what is the likelihood that information would be posted on clinicaltrials.gov before it is formally announced to the public in a press release? Not high, and such an occurrence would almost certainly be a blunder. But it does happen. On February 25, 2016, clinicaltrials.gov logged a change in the study record for Vitae Pharmaceuticals’ ($VTAE) psoriasis trial of its drug, VTP-43742. The change indicated that enrollment for the trial was closed at 74 patients instead of the anticipated 108. Halting a trial’s enrollment prematurely could have a variety of causes, but very few of them would be considered auspicious. The most likely explanation, especially in an ascending dose trial, is toxicity. On March 3, 2016, the company issued a press release noting that enrollment was closed to additional psoriatic patients, adding that data from the enrolled cohort would be “sufficient to determine next steps in the program.” This revelation was viewed negatively by the market, and the company’s stock plunged 52% the following day. 
Vitae later reported that the drug demonstrated positive efficacy in the trial, causing its stock to regain much of its lost ground. However, recruitment had in fact been halted due to toxicity concerns, as investigators in the trial observed transaminase elevations in four patients in the 700-mg group, which swayed Vitae to forgo the highest dose cohort of 1,050 mg. A similar case manifested on March 23, 2016, when clinicaltrials.gov registered a change in the study record for Ionis Pharmaceuticals' ($IONS) trial of drug IONIS-TTR(Rx) in familial amyloid polyneuropathy (FAP). The change signaled that enrollment of the Phase 3 trial was halted at 172 patients, instead of the planned 195. On April 7, 2016, Ionis issued a press release stating that the FDA placed its planned trial of IONIS-TTR(Rx) in transthyretin amyloid cardiomyopathy on clinical hold, due to an undisclosed issue with its ongoing trial in FAP. Ionis promptly shed 11% of its value. It was later revealed that the clinical hold was triggered by a negative safety signal from the FAP trial, in which some patients experienced a severe decline in platelet count. To be fair, changes to clinical trials don't always foreshadow bad news. Trials are sometimes stopped early due to efficacy (which would trigger unblinding of the trial in order to treat all patients with the efficacious drug). This was famously the case for Intercept Pharmaceuticals' ($ICPT) trial of obeticholic acid in nonalcoholic steatohepatitis, where the announcement sent the company's stock soaring over 500%. But trials can also be stopped due to lack of efficacy, toxicity issues, or simply poor enrollment. In an analysis of terminated studies on clinicaltrials.gov, 68% of trials were terminated due to reasons other than scientific data from the trial (e.g. insufficient rate of enrollment, issues with study conduct), and only 21% of trials were terminated due to findings related to the overall benefit-risk profile of the intervention. Only a subset of this 21% would be trials that are stopped due to positive efficacy. Reasons for clinical trial termination based on an analysis of a clinicaltrials.gov dataset Thus, a potentially lucrative trading strategy would be to (1) troll clinicaltrials.gov for recent updates to clinical trial records where the sponsor is a publicly traded biotech company, (2) determine whether the update is material to the company's stock price, (3) verify whether a press release has already been issued and, if not, (4) trade in the company's stock. The most straightforward embodiment of this strategy is a short of the stock of a company whose trial is terminated, suspended, or for which recruitment is halted without a relevant disclosure by the company. The risk is that the change in the trial is due to a positive development (which we've determined is unlikely) or that the change is actually immaterial and some other, positive catalyst emerges in the meantime. If you're convinced the change is material but could be positive, consider hedging your position with a call option to the upside. You may be thinking that, with over 250,000 trial records on clinicaltrials.gov, monitoring each trial in real-time would be a futile effort. Fortunately, the website recently implemented an RSS feature which, with some customization, allows you to automate this process. The RSS feed can automatically alert you to recently added or modified study records of interest.
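If you would rather script the monitoring than rely on a feed reader, the same RSS link can be polled programmatically. Below is a minimal sketch in Python using the feedparser library; the feed URL is a placeholder for the link you will generate in the steps that follow, and the alert action is just a print stub, so adapt both to your own setup.

```python
# Minimal sketch: poll a clinicaltrials.gov saved-search RSS feed and flag
# new or modified study records. The URL below is a placeholder -- paste in
# the RSS link that clinicaltrials.gov generates for your own search.
import time
import feedparser

FEED_URL = "https://clinicaltrials.gov/ct2/results/rss.xml?..."  # your saved-search RSS link
seen = set()

while True:
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        # Each entry corresponds to a study record recently added or modified.
        key = (entry.get("id") or entry.get("link"), entry.get("updated", ""))
        if key not in seen:
            seen.add(key)
            print(f"ALERT: {entry.get('title', 'unknown study')} -> {entry.get('link')}")
            # From here you could e-mail yourself, look up the sponsor's
            # ticker, or check whether a press release has been issued.
    time.sleep(15 * 60)  # poll every 15 minutes
```

If you prefer the point-and-click route, setting up the feed in the browser is just as easy: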
For instance, a search for all interventional studies with the status Active, not recruiting, Suspended, Terminated, or Withdrawn, yields 31,010 study records. 1. Click on Subscribe to RSS in the upper-right corner of the search results box. 2. A pop-up box containing RSS feed options will appear. Choose the option for Show studies that were added or modified in the last 14 days, and click on the Create RSS Feed button to open the feed and display a list of any new updates to your search results. You can subscribe to the RSS feed using your browser or a feed reader (e.g. Feedly). Once you set up an RSS feed on your browser or feed reader, you can integrate with IFTTT to set up e-mail or push notifications and receive any relevant update in real-time. Now, I'm notified immediately of any clinical trial that is terminated, suspended, or that stops recruiting. The FOIA Hack Sometimes, when you don't have the answer to a question, the government will give it to you. The Freedom of Information Act (FOIA), signed into law in 1966, gives any person the right to access public records, such as FDA facility inspections, drug adverse event reports, and internal newsletters. We fund the government, and it collects a lot of data on people, corporations, and their products. The FOIA allows the average taxpayer to access that data. Trading on material obtained through FOIA is not illegal because the government has no duty to keep the information private — in fact, officials are required to disclose the information, except when its release poses a threat to national security. Some federal agencies are bound to protect certain trade secrets, such as the proprietary manufacturing protocol for a drug, in which case the agency will withhold or redact such information. Hedge funds already make liberal use of FOIA to perform due diligence, with several examples of such funds profiting or stemming losses based on the information they obtained. In March of 2009, Genzyme announced that the FDA had issued a warning letter identifying manufacturing deficiencies at a plant where it produced the enzyme replacement therapies Cerezyme and Fabrazyme. SAC Capital sent a FOIA request to the FDA for the Form 483 facility inspection report, which it received on March 30. The report led SAC to believe that the issues were more dire than the company let on: over the next few months, SAC reduced its stake in the company from 221,000 shares to 127,000. On June 16, the company disclosed a viral contamination at the plant, leading to a manufacturing shutdown of the two drugs. SAC was able to avert major losses, as the company's stock declined 15% in the two weeks ending June 16. Another FOIA exploit enabled hedge funds to predict the acquisition of Actelion by Johnson & Johnson earlier this year. Although Actelion had been a rumored takeover target for some time, a group of hedge funds became increasingly convinced when they found that J&J's corporate jet had been parked in Basel, Switzerland — near Actelion headquarters — for over a week. When the $30 billion deal was announced on January 26, 2017, Actelion's stock soared 20%, earning the funds hundreds of millions in profit. The story echoes a scene straight out of the movie Wall Street — but these funds didn't need to rely on corporate espionage à la Bud Fox for this intel. The movements of almost any private jet can be tracked using publicly available tools, thanks to FOIA.
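If you want to try this yourself, the registry data is straightforward to work with. The sketch below is a hypothetical illustration, not the funds' actual tooling: it assumes you have downloaded the FAA's public aircraft registration database (distributed as a zip whose MASTER.txt CSV lists current registrations; the "N-NUMBER" and "NAME" column names reflect that file layout as I understand it, so verify them against the current release) and scans it for tail numbers registered to a given owner.

```python
# Minimal sketch: find the tail numbers registered to a company by scanning
# the FAA's downloadable aircraft registry. Assumes MASTER.txt from the
# FAA's Releasable Aircraft Database is in the working directory; verify
# the column names against the file's documented layout.
import csv

def tail_numbers_for(owner_keyword, master_path="MASTER.txt"):
    """Return (tail_number, registrant_name) pairs whose registrant matches."""
    matches = []
    with open(master_path, newline="", encoding="latin-1") as f:
        reader = csv.DictReader(f)
        # Header fields in the FAA file may carry trailing spaces; normalize them.
        reader.fieldnames = [h.strip() for h in reader.fieldnames]
        for row in reader:
            name = (row.get("NAME") or "").strip()
            if owner_keyword.upper() in name.upper():
                # The registry stores tail numbers without the leading "N".
                matches.append(("N" + row["N-NUMBER"].strip(), name))
    return matches

# e.g. tail_numbers_for("JOHNSON") -- then watch the resulting N-numbers
# on a public tracking service.
```

The resulting N-numbers can then be plugged into a public flight tracker.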
The FAA keeps track of all aircraft, and because of FOIA, the FAA has agreed to provide the data in real-time to services such as FlightAware. The only information needed to track a plane is the tail number for the specific jet, which can be searched on the FAA registry using the owner’s name. These feats are not merely anomalies. A recent analysis found that FOIA requests are incredibly common among hedge funds. (Incidentally, the study’s authors used none other than a FOIA request in order to acquire the data on FDA-bound FOIA requests). A separate analysis broke down the 1,899 FOIA requests of FDA records by hedge funds from 1999 to 2013, and found that the most frequent kinds of requests were for Form 483s and consumer complaints. In addition to being frequently invoked, FOIA enables hedge funds to generate significant trading returns. In particular, when funds increase their holdings of a stock in connection with a FOIA request, the stock’s abnormal returns (a measure that adjusts for market trends) average 5.26%, and when funds reduce their holdings, abnormal returns average -3.09%. In other words, the trades associated with FOIA requests are, on average, profitable, underscoring the value of the information. Abnormal cumulative returns densities for stocks that were the subject of FOIA requests, illustrating how FOIA data confers an advantage. Results are computed for stocks for which holdings were increased by hedge funds making the FOIA request (blue dashed line), stocks for which holdings were decreased by hedge funds making the FOIA request (red dashed line), and stocks for which holdings were unchanged by hedge funds making the FOIA request (black solid line). FOIA requests give rise to information asymmetries. Even though the information is accessible to anyone, it is not publicly disseminated, and only those who request it will benefit from it. Although there has been an effort to make a searchable, online database of the over 600,000 yearly FOIA requests and responses (i.e. FOIA Online), the Department of Health and Human Services, which oversees the FDA, does not participate in the program. Moreover, the FOIA information comes in the form of unfiltered technical reports, and only those that can understand and process the information can effectively exploit it. Currently, I’m working to create a database of material obtained through FOIA requests to the FDA. The purpose is to give biotech investors access to public information that is, ironically, inaccessible to the independent investor. I’m aiming to crowdsource the database by, at least initially, requiring users to submit FOIA information in order to gain access. If readers are interesting in learning more about this project, please provide your contact information here. Submitting a FOIA request is quite straightforward. The FDA has an online request form through which you can submit your request. The form will ask you the maximum dollar amount you are willing to pay for processing. For consumer use, there is no charge for the first two hours of search and the first 100 pages of information, which should be sufficient for most requests. Beyond that, modest search and copying fees apply. There will be a field where you can enter your request or upload it as a document — be as specific as possible. Remember, you may ask for anything within reason (e.g. adverse event reports, warning letters, facility inspection reports). 
You may also want to include with your request a note asking the agency to contact you by e-mail or phone in case of any questions, as requests can be denied for being unclear. Finally, you should ask to have the information sent in PDF format by e-mail so that the agency doesn’t default to snail mail. All agencies are required to respond to your request within 20 business days, although the information may take an additional 10 days in exceptional circumstances. It’s as simple as that!
https://medium.com/the-mission/trade-biotech-stocks-like-a-hedge-fund-with-these-hacks-ff153c907b0b
['Samy Hamdouche']
2017-09-11 17:07:06.517000+00:00
['Investing', 'FOIA', 'Stock Market', 'Tech', 'Science']
Title Trade Biotech Stocks Like Hedge Fund HacksContent secret market waiting unearthed people curiosity grit dig Sometimes secret right front eye people boldness presence mind simply look past I’ve presented investment idea based large part discerning work determine near certainty whether biotech asset overvalued eg work requires technical proficiency critical eye stamen deep analysis also discussed way professional investor acquire edge widespread practice legal illegal insider trading Although ethically dubious scheme require cultivation expert network intimate knowledge market substantial legal wherewithal importantly approach entail professional commitment concomitant investment time resource accessible layman investor following shortcut Clinicaltrialsgov Hack Clinicaltrialsgov website provides public information clinical study information provided updated sponsor principal investigator clinical study website maintained National Institutes Health Registration required Phase 2 Phase 3 postmarketing trial drug biologic medical device meet one following condition trial one site United States trial conducted FDA investigational new drug application investigational device exemption trial involves drug biologic device manufactured United States territory exported research criterion yield essentially trial would materially affect value publicly traded biotech company clinical trial descriptor don’t provide granular data status trial eg number patient currently enrolled proportion completed protocol provide overall classification trial status recruiting completed suspended terminated etc FDA also requires trial initiated 2017 onward report result available told clinicaltrialsgov public source information event would affect biotech stock likelihood information would posted clinicaltrialsgov formally announced public press release high occurrence would almost certainly blunder happen February 25 2016 clinicaltrialsgov logged change study record Vitae Pharmaceuticals’ VTAE psoriasis trial drug VTP43742 change indicated enrollment trial closed 74 patient instead anticipated 108 Halting trial’s enrollment prematurely could variety cause would considered auspicious likely explanation especially ascending dose trial toxicity March 3 2016 company issued press release noting enrollment closed additional psoriatic patient adding data enrolled cohort would “sufficient determine next step program” revelation viewed negatively market company’s stock plunged 52 following day Vitae later reported drug demonstrated positive efficacy trial causing stock regain much lost ground However recruitment fact halted due toxicity concern investigator trial observed transaminase elevation four patient 700mg group swayed Vitae forgo highest dose cohort 1050 mg similar case manifested March 23 2016 clinicaltrialsgov registered change study record Ionis Pharmaceuticals’ IONS trial drug IONISTTRRx familial amyloid polyneuropathy FAP change signaled enrollment Phase 3 trial halted 172 patient instead planned 195 April 7 2016 Ionis issued press release stating FDA placed planned trial IONISTTRRx transthyretin amyloid cardiomyopathy clinical hold due undisclosed issue ongoing trial FAP Ionis promptly shed 11 value later revealed clinical hold triggered negative safety signal FAP trial patient experienced severe decline platelet count fair change clinical trial don’t always foreshadow bad news Trials sometimes stopped early due efficacy would trigger unblinding trial order treat patient efficacious drug famously case Intercept 
Pharmaceuticals’ ICPT trial obeticholic acid nonalcoholic steatohepatitis announcement sent company’s stock soaring 500 trial also stopped due lack efficacy toxicity issue simply poor enrollment analysis terminated study clinicaltrialsgov 68 trial terminated due reason scientific data trial eg insufficient rate enrollment issue study conduct 21 trial terminated due finding related overall benefitrisk profile intervention subset 21 would trial stopped due positive efficacy Reasons clinical trial termination based analysis clinicaltrialsgov dataset Thus potentially lucrative trading strategy would 1 troll clinicaltrialsgov recent update clinical trial record sponsor publicly traded biotech company 2 determine whether update material company’s stock price 3 verify whether press release already issued 4 trade company’s stock straightforward embodiment strategy short stock company whose trial terminated suspended recruitment halted without relevant disclosure company risk change trial due positive development we’ve determined unlikely change actually immaterial positive catalyst emerges meantime you’re convinced change material could positive consider hedging position call option upside may thinking 250000 trial record clinicaltrialsgov monitoring trial realtime would futile effort Fortunately web site recently implemented RSS feature customization allows automate process RSS feed automatically update recently added modified study record interest instance search interventional study status Active recruiting Suspended Terminated Withdrawn yield 31010 study record Click Subscribe RSS upperright corner search result box 2 popup box containing RSS feed option appear Choose option Show study added modified last 14 day click Create RSS Feed button open feed display list new update search result subscribe RSS feed using browser feed reader eg Feedly set RSS feed browser feed reader integrate IFTTT set email push notification receive relevant update realtime I’m notified immediately clinical trial terminated suspended stop recruiting FOIA Hack Sometimes don’t answer question government give Freedom Information Act FOIA signed law 1966 give person right access public record FDA facility inspection drug adverse event report internal newsletter fund government collect lot data people corporation product FOIA allows average taxpayer access data Trading material obtained FOIA illegal government duty keep information private — fact official required disclose information except release pose threat national security federal agency bound protect certain trade secret proprietary manufacturing protocol drug case agency withhold redact information Hedge fund already make liberal use FOIA perform due diligence several example fund profiting stemming loss based information obtained March 2009 Genzyme announced FDA issued warning letter identifying manufacturing deficiency plant produced enzyme replacement therapy Cerezyme Fabrazyme SAC Capital sent FOIA request FDA Form 483 facility inspection report received March 30 report led SAC believe issue dire company let next month SAC reduced stake company 221000 share 127000 June 16 company disclosed viral contamination plant leading manufacturing shutdown two drug SAC able avert major loss company’s stock declined 15 two week ending June 16 Another FOIA exploit enabled hedge fund predict acquisition Actelion Johnson Johnson earlier year Although Actelion rumored takeover target time group hedge fund became increasingly convinced found JJ’s corporate jet parked Basel 
Switzerland — near Actelion headquarters — week 30 billion deal announced January 26 2017 Actelion’s stock soared 20 earning fund hundred million profit story echo scene straight movie Wall St — fund didn’t need rely corporate espionage à la Bud Fox intel movement almost private jet tracked using publicly available tool thanks FOIA FAA keep track aircraft FOIA FAA agreed provide data realtime service FlightAware information needed track plane tail number specific jet searched FAA registry using owner’s name feat merely anomaly recent analysis found FOIA request incredibly common among hedge fund Incidentally study’s author used none FOIA request order acquire data FDAbound FOIA request separate analysis broke 1899 FOIA request FDA record hedge fund 1999 2013 found frequent kind request Form 483s consumer complaint addition frequently invoked FOIA enables hedge fund generate significant trading return particular fund increase holding stock connection FOIA request stock’s abnormal return measure adjusts market trend average 526 fund reduce holding abnormal return average 309 word trade associated FOIA request average profitable underscoring value information Abnormal cumulative return density stock subject FOIA request illustrating FOIA data confers advantage Results computed stock holding increased hedge fund making FOIA request blue dashed line stock holding decreased hedge fund making FOIA request red dashed line stock holding unchanged hedge fund making FOIA request black solid line FOIA request give rise information asymmetry Even though information accessible anyone publicly disseminated request benefit Although effort make searchable online database 600000 yearly FOIA request response ie FOIA Online Department Health Human Services oversees FDA participate program Moreover FOIA information come form unfiltered technical report understand process information effectively exploit Currently I’m working create database material obtained FOIA request FDA purpose give biotech investor access public information ironically inaccessible independent investor I’m aiming crowdsource database least initially requiring user submit FOIA information order gain access reader interesting learning project please provide contact information Submitting FOIA request quite straightforward FDA online request form submit request form ask maximum dollar amount willing pay processing consumer use charge first two hour search first 100 page information sufficient request Beyond modest search copying fee apply field enter request upload document — specific possible Remember may ask anything within reason eg adverse event report warning letter facility inspection report may also want include request note asking agency contact email phone case question request denied unclear Finally ask information sent PDF format email agency doesn’t default snail mail agency required respond request within 20 business day although information may take additional 10 day exceptional circumstance It’s simple thatTags Investing FOIA Stock Market Tech Science
4,282
The Ultimate List of the Best Productivity Resources
The Ultimate List of the Best Productivity Resources Where the most productive people go to get the latest tips What’s your go-to resource for all things productivity? We asked, you answered. And the best tips and tricks are now rounded up here, in one handy list. With blogs and podcasts to check out, people to follow, and apps to try, we’ve got the ultimate list of where to look when you’re in need of some solid productivity advice. Blogs + News Podcasts Evernote Podcast — iTunes, SoundCloud, Overcast — Dive into the realms of achievement, entrepreneurship, and creative thinking Cortex Podcast — Each episode, they get together to discuss their working lives People Apps + Tools + Approaches
https://medium.com/taking-note/the-ultimate-list-of-the-best-productivity-resources-5ad2f648875b
[]
2017-11-15 22:03:33.186000+00:00
['Apps', 'Productivity', 'Self Improvement', 'Advice', 'Personal Development']
Title Ultimate List Best Productivity ResourcesContent Ultimate List Best Productivity Resources productive people go get latest tip What’s goto resource thing productivity asked answered best tip trick rounded one handy list blog podcasts check people follow apps try we’ve got ultimate list look you’re need solid productivity advice Blogs News Podcasts Evernote Podcast — iTunes SoundCloud Overcast — Dive realm achievement entrepreneurship creative thinking — iTunes SoundCloud Overcast — Dive realm achievement entrepreneurship creative thinking Cortex Podcast — episode get together discus working life People Apps Tools ApproachesTags Apps Productivity Self Improvement Advice Personal Development
4,283
What Product Teams say and What They Really Mean — 10 Tips for Diagnosing Team Issues
Originally published on Mind The Product October 2018 Team issues can have a negative impact on a project and your people long term. There are a bunch of ways they might manifest themselves — and I’ve written them down as I’ve heard them over a decade of building digital products in cross-functional teams. I’m not touching on the upfront issues like bad sales process, junk briefs, confused business requirements; that’s for another day. This list is most useful for in-flight project teams, off and sprinting. Reading these unfiltered issues will surface the symptoms. And in turn, help with diagnosis. All teams and situations are unique, but some pains are universal and understanding the issue is halfway to a solution. 1 — “Our Client is a ☠️ They Don’t Understand What we are Trying to do” This is bad mojo for a team. In the same way that losing empathy for the customer can easily happen in long projects (good read here about this), it’s easy for a team to start classing the client as a hindrance to getting a project out. This can creep in from the smallest negative comments. If the team doesn’t take the time to understand who they are working with, an “us and them” mentality can develop. The client is taking great risks, personally and as a business. Building client empathy is important. They might be frustrated or confused, which can result in curt communication… Tip: Get to know the client, learn to ask the right questions and be patient. But most of all don’t be a promoter of negative views in the team. 2 — “Let’s Push Back This Next Check Till we Have More to Show” This means the team isn’t confident in the direction they are going, and probably doesn’t have the right information. They’ll push the meeting back but go nowhere in the meantime, while the expectation gap between client and team grows and it gets harder to ask the simple questions they didn’t have the answers to in the beginning. Tip: When you or the team are nervous about meeting with the client or major stakeholder, ask why and then go talk to the client about that thing. 3 — “Wow, I’d Never Seen That Document Before” Projects will produce a heap of documentation, and that’s normal. This is a challenge worth understanding from day one. Light documentation in favour of delivering is (in my view) always preferable. One consistent issue I see is the grouping of deliverables by phase or sprint. This starts out looking like a good idea, but soon makes it extremely hard to view a continuous thread across the project. Tip: By taking time to discuss where specific groupings will live, how insights will be surfaced, and an agreement on nomenclature, you will save time and pain later. 4 — “Our Meetings are Long and Have no Outcome” It’s all too easy to get into a bad meeting etiquette routine. If meetings feel long, then they are, regardless of their actual duration. Judging the correct length can be hard. The way the working day is broken into hours tends to mean a meeting will fill an hour (at least), irrespective of its content. Setting a meeting goal or outcome is imperative. That could be to generate ideas, agree on a deadline or assign work. Whether the goal is hard or loose doesn’t matter, but having one is key. Tip: The simple rules: set a goal, take notes, assign tasks, agree on next steps AND leave the tech out of the room. 5 — “What did They go Into a Meeting Room for?” When things get a bit “interesting” on a project, there is a tendency to get secretive and have small groups heading to a meeting room.
It could be a bit of client drama, or maybe a team member issue. But quite often it’s just everyday tasks masquerading as an issue. The point here is that the rest of the team wonders what is going on. It creates team drama, and ripples from it are disruptive. Tip: Try to be absurdly transparent. Spell it out. Tell the team at standup what’s going on and then say it again later. And where possible don’t hide in a meeting room. 6 — “I Just Don’t get Enough Time at my Desk” All the meetings, planning, and alignment are hugely valuable activities. But a balance needs to be struck. If your week is peppered with team meetings and check-ins, how can you find time to get deep into work? This crushes flow time, that special mode that gets the best work and helps team members to feel job satisfaction. Tip: It’s worth evaluating the need for a meeting. If you are a manager, is this meeting more about your peace of mind than anything else? Could that be achieved in another way? Another issue to watch for is the double workload a team can feel when working on-site with a client. Close collaboration is hugely valuable and something I would always promote. But it’s worth recognising that it comes at a cost to the team. They are always on, staying professional, interpreting comments and filtering needs. Once you have been doing this for a few years you find tactics to manage the load and it can be very enjoyable for most. But for members of the team more used to crafting at a desk with headphones on most of the day, it can be a great deal of effort to manage and not feel the most productive. Tip: Could you mark out safe spots in the week for the work to get done? I have gone as far as a traffic-light system in the past — I even had a traffic light on display. Parts of the week are green, free for chat and collaboration. Parts are red, please don’t disrupt, it’s deep working time. If this is planned in advance, it gives the team a firm grounding to build out a week of work and know when they will be able to focus on the deeper thinking. 7 — “We Have a Presentation Today!?” When people in the team seem confused about where to be and what’s happening, this can be an indication of some poor calendar etiquette — things like moving meetings around without updating verbally, dropping them into calendars on the day or, even worse, five minutes before they start. This creates uncertainty, causes confusion, and quickly leads to a behaviour where you don’t start any major task because you have no understanding of how long you will have to work at it — why bother getting into it just to be pulled straight out. Tip: Make time at the end of the day to plan your following day, confirm the meetings, and make adjustments. On the day, use a short team alignment like a standup meeting to get calendars aligned, and reconfirm all the key activities. Things change, people’s life commitments pop up. That’s all fine so long as the team are aware of where they are supposed to be ahead of time. 8 — “The Sprints Just Feel Relentless” Sprints can feel quite intense and exhausting — whether it’s because there’s a deadline in mind, or no end in sight. This can be made worse when a team doesn’t have a grasp of the roadmap, or when you haven’t paused long enough to recognise success. One thing I’ve heard in the past which rings true is ‘sprinting a marathon’. Tip: One tactic is to have a break — a sprint every X sprints to focus on the little bits that have been sidelined, like process and documentation.
This is especially useful for developers to jump on any technical debt. 9 — “Did you Take Notes? No, but it’s Cool, [Insert Firefighter] has it” A team that seems not to be taking responsibility is a really common and bad sign. Most likely a key person is taking the heat. Firefighter is a great term for the people who parachute into the troubled projects and save the day. They have a job to do and little time to do it, so their style is to dictate action. It works in the short term. Clients tend to love them. But remember, firefighters love to fight fires. It’s not necessarily on their to-do list to build a strong team. This leads to disengagement — why bother when the firefighter has it covered? Tip: How do you know if you have one person taking all the weight? Maybe the client said: “Where would we be without [Insert firefighter]. Don’t ever let them leave”. But what if they leave? Use the firefighter to set process, but then plan the day they move off the project with them. Let the team and client know. 10 — “Best not Disturb the Team, They Have a big Mountain to Climb” I have often heard this said by well-meaning managers. It comes from a good place. The team may have started strongly with retrospectives, but that can drift if not carefully guarded and valued. Not allowing the team the space to address problems weakens its ability to self-fix. Resilience becomes low and the general mood can stagnate. Tip: It’s time to get back to building the space to reflect. Gather input from the team on issues. You’ll probably realise they have a deep understanding of what is going on and that they have some ideas to fix it. Find a forum for discussion as a group. Empower team members to take action from those discussions, and always allow time for them to succeed at the tasks by building time into the plan. …no time like now If you have an issue in your team, and maybe one of these sparked that realisation, well good news! — one of the biggest lessons I’ve learned is that it’s never too late to take a moment, reflect, and start the conversation that could fix things. As you’ve probably guessed, I don’t have any silver bullets for you — if I did I would have a book out 😀 Good luck 🙏
https://medium.com/ideas-by-idean/what-product-teams-say-and-what-they-really-mean-10-tips-for-diagnosing-team-issues-f77625fa72e8
['Rob Boyett']
2019-03-20 16:24:45.157000+00:00
['Product Design', 'Mobile', 'Design', 'Team Management', 'Agile']
Title Product Teams say Really Mean — 10 Tips Diagnosing Team IssuesContent Originally published Mind Product October 2018 Team issue negative impact project people long term bunch way might manifest — I’ve written I’ve heard decade building digital product crossfunctional team I’m touching upfront issue like bad sale process junk brief confused business requirement that’s another day list useful inflight project team sprinting Reading unfiltered issue surface symptom turn help diagnosis team situation unique pain universal understanding issue halfway solution 1 — “Our Client ☠️ Don’t Understand Trying do” bad mojo team way losing empathy customer easily happen long project good read it’s easy team start classing client hindrance getting project creep smallest negative comment team doesn’t take time understand working “us them” mentality develop client taking great risk personally business Building client empathy important might frustrated confused result curt communication… Tip Get know client learn ask right question patient don’t promoter negative view team 2 — “Let’s Push Back Next Check Till Show” mean team isn’t confident direction going probably doesn’t right information They’ll push meeting back go nowhere meantime expectation gap client team grows get harder ask simple question didn’t answer beginning Tip team nervous meeting client major stakeholder ask go talk client thing 3 — “Wow I’d Never Seen Document Before” Projects produce heap documentation that’s normal challenge worth understanding day one Light documentation favour delivering view always preferable One consistent issue see grouping deliverable phase sprint start looking like good idea soon make extremely hard view continuous thread across project Tip taking time discus specific grouping live insight surfaced agreement nomenclature save time pain later 4 — “Our Meetings Long Outcome” It’s alltoo easy get bad meeting etiquette routine meeting feel long regardless actual duration Judging correct length hard way working day broken hour tends mean meeting fill hour least irrespective content Setting meeting goal outcome imperative could generate idea agree deadline assign work Whether goal hard loose doesn’t matter one key Tip simple rule set goal take note assign task agree next step leave tech room 5 — “What go Meeting Room for” thing get bit “interesting” project tendency get secretive small group heading meeting room could bit client drama maybe team member issue quite often it’s everyday task masquerading issue point rest team wonder going creates team drama ripple disruptive Tip Try absurdly transparent Spell Tell team standup what’s going say later possible don’t hide meeting room 6 — “I Don’t get Enough Time Desk” meeting planning alignment hugely valuable activity balance need struck week peppered team meeting checkins find time get deep work crush flow time special mode get best work help team member feel job satisfaction Tip It’s worth evaluating need meeting manager meeting peace mind anything else Could achieved another way Another issue watch double workload team feel working onsite client Close collaboration hugely valuable something would always promote it’s worth recognising come cost team always staying professional interpreting comment filtering need year find tactic manage load enjoyable member team used crafting desk headphone day great deal effort manage feel productive Tip Could mark safe spot week work get done gone far traffic light system past — even traffic light display Parts week green free 
chat collaboration Parts red please don’t disrupt it’s deep working time planned advance give team firm grounding build week work know able focus deeper thinking 7 — “We Presentation Today” people team seem confused what’s happening indication poor calendar etiquette — thing like moving meeting around without updating verbally dropping calendar day even worse five minute start creates uncertainty cause confusion quickly lead behaviour don’t start major task understanding long work — bother getting pulled straight Tip Make time end day plan following day confirm meeting make adjustment day use short team alignment like standup meeting get calendar aligned reconfirm key activity Things change people’s life commitment pop That’s fine long team aware supposed ahead time 8 — “The Sprints Feel Relentless” Sprints feel quite intense exhausting — whether it’s there’s deadline mind end sight made worse team doesn’t grasp roadmap haven’t paused long enough recognise success One thing I’ve heard past ring true ‘sprinting marathon’ Tip One tactic break — sprint every X sprint focus little bit sidelined like process documentation especially useful developer jump technical debt 9 — “Did Take Notes it’s Cool Insert Firefighter it” Teams seem taking responsibility really common bad sign likely key person taking heat Firefighter great term people parachute troubled project save day job little time style dictate action work short term Clients tend love remember firefighter love fight fire It’s necessarily todo list build strong team lead disengagement — bother firefighter covered Tip know one one person taking weight Maybe client said “Where would without Insert firefighter Don’t ever let leave” leave Use firefighter set process plan day move project Let team client know 10 — “Best Disturb Team big Mountain Climb” often heard said wellmeaning manager come good place team may started strongly retrospective drift carefully guarded valued allowing team space address problem weakens ability selffix Resilience becomes low general mood stagnate Tip It’s time get back building space reflect Gather input team issue You’ll probably realise deep understanding going idea fix Find forum discussion group Empower team member take action discussion always allow time succeed task building time plan …no time like issue team maybe one sparked realisation well good news — one biggest lesson I’ve learned it’s never late take moment reflect start conversation could fix thing you’ve probably guessed don’t silver bullet — would book 😀 Good luck 🙏Tags Product Design Mobile Design Team Management Agile
4,284
How to Be Productive and Achieve If You Have a Tender Soul
Photo by Fabrizio Verrecchia on Unsplash Work with your soul, not against it. If you have a tender soul, you respond to everything that happens like a feather caught in the wind. Successes put you over the moon, but the slightest discouragement can knock you flat. If your self-esteem isn’t that great, criticism feels like stabbing knives. Just taking a step that might bring on disapproval can feel like a herculean task. Maybe you worry about making a mistake that would hurt someone, giving bad advice, getting something wrong, or offending someone. And whenever you try to do something that’s not right for you, your conscience screams until you stop. Even when it is right, a welter of emotions can get between you and what you’re trying to accomplish. Sometimes you might envy the people with steelier souls. People who can work like a machine without getting tripped up seventeen times a day by their feelings. I’m here to tell you, there’s nothing to envy about people who’ve shut down their emotional life. And there’s no reason you can’t create and achieve magnificent things — without putting a gag on your soul. I’ve tried the way that doesn’t work — for way too many years — trying to slog through a work life and then an academic program that didn’t chime with my soul. Trying to ignore the pain of the misalignment, but finding myself at the end of the day curled up on the sofa in a fetal position, drinking wine every night, or contracting mysterious illnesses that wouldn’t go away. I’m 52 now, and I think I’m finally figuring it out. Two attitudes, and one major strategy, have been helping me stay productive and move toward exciting goals, without feeling like I have to stifle my soul. Photo by Wolfgang Hasselmann on Unsplash Knowing that I truly don’t have to choose one or the other. The world seems to be structured to work for and reward people who’ve discarded their emotions. That’s probably true about large swathes of modern life: it encourages focus on financial bottom lines, mechanistic production, and feeding people’s addictions, for the sake of easy sales and immense profits, rather than nourishing their souls with integrity and imagination. But that’s not the whole world. There are still millions of people out there who value — crave, long for — beauty, truth, authenticity, vision, playfulness, delight, inspiration — all those things that only a person with a tender soul can offer. This is my world, and your world. It might not be quite as profitable as the other one, but it can definitely be enough. Nurturing and sheltering myself. This world can be pretty dark and dreary, and even sharp-edged for someone who’s sensitive. I’m learning to take care of myself. That means making sure I get the emotional and sensory nourishment I need: taking breaks to listen to my favorite music, filling my space with light and color and beautiful scents, and ultimately finding a place to live where I feel free, safe, and inspired. I’ve discovered I have to be extra-careful about my boundaries. The acid rain of this world can eat away at our joy. I’m doing everything I can think of to protect myself from that, and to maintain my sense of wonder and delight. This doesn’t mean withdrawal or isolation. There’s a difference between taking a positive interest in the world and people around you — engaging with them lovingly — and allowing yourself to be harmed and brought down. 
I’m learning to always remember who I am, and that my energy and accomplishments will be grounded in my sensitivity, compassion, vision, and joy. I need to nurture and shelter those qualities in myself. Photo by Gene Devine on Unsplash My emotions hold the key to functioning well — shutting them down isn’t going to work for me. For me, the emotional flow is pretty much constant, and until recently I found it very distracting and hampering. In my case, it’s been things like, for example, feeling really restless when I have to stick with a project that isn’t intrinsically interesting at the moment: I would let that restlessness completely carry me away from what I needed — and really wanted — to be accomplishing. Or, when I moved toward working on my novel, I would have a wave of feelings about it not being good enough, or feeling futility, like success will never come to me no matter how good I am or how hard I try. I found it really hard to set those feelings aside in order to focus on my work. I suspect that people who have closed down their souls don’t experience emotions like those so keenly, or they’re able to push them away fairly easily, and that’s one reason they get a lot done. I find it incredibly hard to do something I don’t fully want to be doing. I have to feel hopeful and excited about it, and that it’s the right thing for me and, ideally, beneficial for the world in some way. From sweeping the floor of my kitchen to building my writing career, I have to stir up some level of excitement and a feeling of congruence with the task before I can give it my energy and engagement. On the other hand, any negative feelings can completely prevent me from working — or even keeping my house tidy. So this is the solution I’ve discovered: Instead of trying to ignore or push away these unhelpful emotions, I turn toward them and give them the attention they seem to want. Before I start work, I first sit and self-reflect for a moment to sense what I’m feeling about what I’m about to do. Sometimes I find that I’m really excited and eager, and it’s great to notice that and be able to ride that energy into the session. But if it’s feelings that are pulling me away from the task instead of toward it, I will sit with them for a while and give them some time and attention. Sometimes, especially if I’m having trouble figuring out what’s going on, journaling helps me identify what it is that’s trying to make itself known. If I’m alone, I’ll even talk to myself out loud: “Wow, I feel really sad about doing this today, and I don’t know why, but crap do I feel sad.” Figuring out why I’m feeling a particular way can be useful information, but it seems most important just to identify and acknowledge the feeling itself, and sit with it till it softens. Sometimes the emotion is just sort of like an itch that needs to be scratched or a pebble I have to take out of my shoe — it just needs a few minutes of undivided attention, and it will fade away. Sometimes it’s more intense or durable. Sometimes journaling about it or crying a little will soften or dispel it, and even if it doesn’t completely go away, I’m still able to work now. There are times when I decide to accept that it’s there and get to work anyway, not trying to stifle the feeling, but just letting it be a presence while I do the work I really want to be doing. Photo by seth schwiet on Unsplash It’s so much more peaceful and productive when I’m honest about what I’m feeling. 
When the feeling goes against my chosen goals and plans, I don’t have to let it “win” and deflect me. But recognizing that it’s there can drain a lot of the undermining power out of it. Obviously, this practice can take a bit of time, but if it saves you from getting completely distracted from what you want to do and not doing anything, you’ll come out ahead. And I think it’s worth it in itself for the self-knowledge you gain from it. Acknowledging and sitting with the emotions can be truly healing, too. I’ve learned I don’t need to stifle myself in order to be productive and successful. Exactly the opposite: I can work productively when I accept and allow who I really am and what’s going on for me. I’ve learned that my soul is the source of my creativity, energy, and unique gifts. Shutting it down won’t get me anywhere that I actually want to go — and anyway, it hurts too much. I’ve learned that my truth, such as it is, really can be a gift to the world, to people who are yearning for truth and authenticity and for the specific life lessons that I’ve managed to learn and can now echo. It’s been so encouraging and life-changing to get that. Obviously, the same goes for you. So when your emotions are tripping you up, maybe give them the respect and attention that every inch of your soul deserves. You can still get the work done, set and achieve ambitious goals, and be as productive as anyone else — you just need to work with your soul, not against it.
https://medium.com/swlh/how-to-be-productive-and-achieve-if-you-have-a-tender-soul-1576b72ae4c0
['Sk Camille']
2019-09-13 05:53:38.611000+00:00
['Life Lessons', 'Emotions', 'Productivity', 'Self', 'Work']
Title Productive Achieve Tender SoulContent Photo Fabrizio Verrecchia Unsplash Work soul tender soul respond everything happens like feather caught wind Successes put moon slightest discouragement knock flat selfesteem isn’t great criticism feel like stabbing knife taking step might bring disapproval feel like herculean task Maybe worry making mistake would hurt someone giving bad advice getting something wrong offending someone whenever try something that’s right conscience scream stop Even right welter emotion get you’re trying accomplish Sometimes might envy people steelier soul People work like machine without getting tripped seventeen time day feeling I’m tell there’s nothing envy people who’ve shut emotional life there’s reason can’t create achieve magnificent thing — without putting gag soul I’ve tried way doesn’t work — way many year — trying slog work life academic program didn’t chime soul Trying ignore pain misalignment finding end day curled sofa fetal position drinking wine every night contracting mysterious illness wouldn’t go away I’m 52 think I’m finally figuring Two attitude one major strategy helping stay productive move toward exciting goal without feeling like stifle soul Photo Wolfgang Hasselmann Unsplash Knowing truly don’t choose one world seems structured work reward people who’ve discarded emotion That’s probably true large swathe modern life encourages focus financial bottom line mechanistic production feeding people’s addiction sake easy sale immense profit rather nourishing soul integrity imagination that’s whole world still million people value — crave long — beauty truth authenticity vision playfulness delight inspiration — thing person tender soul offer world world might quite profitable one definitely enough Nurturing sheltering world pretty dark dreary even sharpedged someone who’s sensitive I’m learning take care mean making sure get emotional sensory nourishment need taking break listen favorite music filling space light color beautiful scent ultimately finding place live feel free safe inspired I’ve discovered extracareful boundary acid rain world eat away joy I’m everything think protect maintain sense wonder delight doesn’t mean withdrawal isolation There’s difference taking positive interest world people around — engaging lovingly — allowing harmed brought I’m learning always remember energy accomplishment grounded sensitivity compassion vision joy need nurture shelter quality Photo Gene Devine Unsplash emotion hold key functioning well — shutting isn’t going work emotional flow pretty much constant recently found distracting hampering case it’s thing like example feeling really restless stick project isn’t intrinsically interesting moment would let restlessness completely carry away needed — really wanted — accomplishing moved toward working novel would wave feeling good enough feeling futility like success never come matter good hard try found really hard set feeling aside order focus work suspect people closed soul don’t experience emotion like keenly they’re able push away fairly easily that’s one reason get lot done find incredibly hard something don’t fully want feel hopeful excited it’s right thing ideally beneficial world way sweeping floor kitchen building writing career stir level excitement feeling congruence task give energy engagement hand negative feeling completely prevent working — even keeping house tidy solution I’ve discovered Instead trying ignore push away unhelpful emotion turn toward give attention seem want start work first sit 
selfreflect moment sense I’m feeling I’m Sometimes find I’m really excited eager it’s great notice able ride energy session it’s feeling pulling away task instead toward sit give time attention Sometimes especially I’m trouble figuring what’s going journaling help identify that’s trying make known I’m alone I’ll even talk loud “Wow feel really sad today don’t know crap feel sad” Figuring I’m feeling particular way useful information seems important identify acknowledge feeling sit till softens Sometimes emotion sort like itch need scratched pebble take shoe — need minute undivided attention fade away Sometimes it’s intense durable Sometimes journaling cry little soften dispel even doesn’t completely go away I’m still able work time decide accept it’s get work anyway trying stifle feeling letting presence work really want Photo seth schwiet Unsplash It’s much peaceful productive I’m honest I’m feeling feeling go chosen goal plan don’t let “win” deflect recognizing it’s drain lot undermining power Obviously practice take bit time save getting completely distracted want anything you’ll come ahead think it’s worth selfknowledge gain Acknowledging sitting emotion truly healing I’ve learned don’t need stifle order productive successful Exactly opposite work productively accept allow really what’s going I’ve learned soul source creativity energy unique gift Shutting won’t get anywhere actually want go — anyway hurt much I’ve learned truth really gift world people yearning truth authenticity specific life lesson I’ve managed learn echo It’s encouraging lifechanging get Obviously go emotion tripping maybe give respect attention every inch soul deserves still get work done set achieve ambitious goal productive anyone else — need work soul itTags Life Lessons Emotions Productivity Self Work
4,285
Just walk out Amazon Go — the most convincing future of retail
JUST WALK OUT TECHNOLOGY: the key phrase used for Amazon’s cashier-less convenience stores, Amazon Go. These stores resemble the look of normal convenience stores, but customers don’t need to wait or scan to pay; they just have to walk out of the stores with items. Amazon opened its second New York City location on June 11th, 2019. This location is the 13th overall, joining other locations in Seattle, Chicago, and San Francisco. Amazon’s initiatives to apply its online experience to brick-and-mortar shops are nothing new. Back in 2017, Amazon acquired Whole Foods in order to expand its fresh grocery lines and physical store footprints. Amazon has also experimented with brick-and-mortar shops like Amazon 4-star, which carries highly reviewed and rated items from amazon.com, and Amazon Books, which was literally a physical version of amazon.com book stores (Amazon Books NYC: Does it predict the future of retail?). Although these experiments weren’t the solution for the future of retail, large retail enterprises, including Amazon, have tried to reinvent the physical shopping experience to be more reachable and convenient with the use of technology. Image source: Tesco virtual supermarket in a subway station via designboom In 2011, Tesco in South Korea installed a virtual shopping experience in Seoul’s subway stations — customers could scan QR codes on printed supermarket shelves on the station platforms. The idea was simple: hard-working people didn’t have time for grocery shopping, and Tesco tapped into this concept by having them multitask during everyday commutes. Although this attempt was more about marketing than a practical solution, their registered members rose by 76%, and their online sales increased 130%. Unlike in Tesco’s case, customers of Amazon Go still need to go to physical stores. Presumably, Amazon Go can help customers save time in its target market, which includes dense downtown settings, where register lines get long during peak hours. However, one of Amazon Go’s main agendas is to reduce a major operating cost: human staff. The history of physical stores One’s grocery shopping experience from markets in the 1800s was simply inefficient. Customers needed to visit individual stores that sold different goods. In 1916, the first Piggly Wiggly store in Memphis completely changed this flow. The customers were led to the store’s storage to pick up items themselves, and then to the centralized register area to pay for all items together. This system didn’t only cut operating costs, but it also stimulated customers to buy more as they spent time picking up different items. In the 1930s, the Great Depression pressured more supermarkets in the same direction and to pursue economies of scale, which ultimately led to the success of Walmart. This then led to e-commerce giants like Amazon in later days. In the meantime, physical stores adopted various technologies to run a centralized register even more efficiently with fewer human staff. In 1972, Kroger agreed to test the barcode system to manage inventories better, soon creating the industry standard, Universal Product Code (UPC). Image source: The History of the Bar Code Timeline of the modern supermarket 1916 — The first Piggly Wiggly store let customers pick items from its storage and pay at the central location. 1930s — The Great Depression directed many stores to adopt the centralized register with the large quantity model. 1950s — Many big-box supermarkets appeared in suburban settings due to motorization.
1969 — The Walmart chain was founded. The original store was Walton’s 5–10, opened in 1950. 1972 — Kroger agreed with Radio Corporation of America (RCA) to test the barcode system. 1974 — The first use of the standardized barcode system, Universal Product Code (UPC), at Troy’s Marsh Supermarket. 1997 — Contactless payment system, Speedpass by Mobil, which looks like a keychain, was introduced to make a purchase without the use of cash or credit cards. Image source: Esso Mobil 2001 — Kmart adopted self-checkout as a big-box player, but it then removed it from its stores by 2003. Image source: starts at 60 2014 — Apple Pay expanded the use of contactless payment to a wider set of merchants in the US. 2016 — The first Amazon Go store opened in Seattle. What does Amazon Go try to solve? Customers save time by NOT waiting in a cashier line. The closest precedent may be the self-checkout system in some large-scale supermarkets or drugstores. I personally find it useful for a faster process, though it receives much criticism, mainly because customers are not trained to use registers (a 2014 poll suggested that 93% of people disliked them). Customers don’t have to carry their wallets. The contactless payment system, such as Apple Pay, is the closest solution allowing customers to shop cashless. However, identifying and counting items still relies on human staff. To address this, technology to detect which items each customer has picked is being explored. For example, the creative unit teamLab created a hanger that reacts as a customer picks it up. Similarly, tagging items had been the mainstream solution, but tags alone do not connect the product with the customer’s identity. For connecting individual customer identity, personal mobile phones play an important role. The remaining question is how to make the connection, and how to make the process frictionless. Low-energy Bluetooth devices, known as beacons, were seen as a solution. The company Estimote introduced small, stylish, and affordable beacon devices that were also easy to install in retail stores. These devices could emit Bluetooth signals constantly, and they did not require pairing steps like regular Bluetooth. This way, the system could identify when a customer entered the store and track their location in the store. However, the solution did not come with a practical way to identify the items the customer chose. Additionally, customers needed to download an app to make the connection. Amazon also expects its Go stores to cut operating costs. How does Amazon Go work? The smaller Amazon Go store format. (Image source: Amazon via Business Insider) In order to successfully achieve this JUST WALK OUT TECHNOLOGY, Amazon Go stores have to achieve the following with extremely high accuracy: Register a customer — so the store can link their Amazon account. Track the customer’s location — so the system can correlate the customer data and the actions taking place. Detect an item that was picked up — so the system can add items to the virtual shopping cart of the customer who was at the location. Detect an item if it was put back onto the shelf — so the system can remove items from the customer’s virtual shopping cart. Detect when the customer leaves the store — so the customer’s online transaction can be completed. 1. Register a customer This is the most conventional part of the experience. Customers have to download a dedicated app to their phone, which is not part of the main Amazon app.
At the store entrance, they have to scan the QR code in their app at the gate, which almost looks like some sort of a subway entrance. When I visited the store, I was with my wife and baby. I thought each person had to scan different QR codes and enter separately. However, I was told that all of us could use the same QR code. 2. Track the customer’s location There are hundreds of cameras mounted on the ceiling; they are RGB cameras for tracking individual customers. Amazon has mentioned that its Go stores don’t use any facial recognition technology. Instead, these cameras detect each customer’s general profile and track individuals with motion detection. The system correlates a customer leaving Camera A’s view with the same customer entering Camera B’s view. The accuracy of tracking is augmented by the use of separate depth-sensing cameras, according to a TechCrunch article. There is also a separate gate for staff to exit. 3. Detect an item that was picked up and 4. Detect an item if it was put back onto the shelf This is the most unique characteristic of Amazon Go and is represented in its store design. Each shelf has a weight sensor that knows the exact weight of each item. When an item is picked up, the sensor can tell exactly which shelf the item is from. Similarly, the sensor detects when an object with the same weight is put back. The central processing unit relates the information about each customer’s location and the actions taking place on each shelf. Because of this system design, each shelf has clear guides separating each row, and the shelves are more spacious compared to those in regular grocery stores. The store always looks tidy and well organized, because items need to be placed precisely, and space helps accurately detect customers. 5. Detect when the customer leaves the store Customers don’t have to scan the QR code to exit like they do when they enter. In-store tracking detects when they leave the store. When I walked out from the store, I was curious if the store successfully detected items that my wife picked up. In fact, it took about 5 minutes after leaving the store to receive my receipt and see any updates on the app. I am not sure if this was by design, but I hope the Amazon Go app had updated my virtual shopping cart while I was in the store. What does Amazon Go look like in the future? From my experience at the small Go store, there was plenty of human staff. Amazon Go is still an early initiative, and it needs people to help operate it. For example, detecting the right item for the right customer’s virtual shopping cart is still assisted by human staff when the processing’s confidence score is low. In addition, the friendly staff standing by the gate was helpful for answering questions and assisting customers who were not used to the new-age shopping experience. Restocking and reorganizing items on the shelves were also handled by humans. Having the right items on the right shelves, which could have been misplaced by customers, is the key for the entire system. Human staff are an indispensable workforce, flexible enough to correct misoperations. Nonetheless, Amazon’s vision is to make the operation as efficient as possible. According to new estimates from RBC Capital Markets analysts, Amazon Go brings in about 50% more revenue than traditional convenience stores. Although the initial store cost $1 million in hardware alone, the cost can be drastically reduced by learning from the earlier builds and deploying them on a large scale.
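To make the detection steps just described concrete, here is a deliberately simplified sketch of the virtual-cart correlation logic such a system might run. The event shape, the one-SKU-per-shelf layout, and the weight-matching rule are illustrative assumptions for this sketch, not Amazon’s actual implementation.

from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Shelf:
    item: str           # assume each shelf row holds exactly one SKU
    unit_weight: float  # known per-unit weight of that SKU, in grams

@dataclass
class Store:
    shelves: dict                              # shelf_id -> Shelf
    carts: dict = field(default_factory=dict)  # customer_id -> Counter

    def on_weight_change(self, shelf_id, delta_grams, customer_id):
        # Correlate a shelf weight change with the customer that the
        # overhead cameras place in front of that shelf at that moment.
        shelf = self.shelves[shelf_id]
        units = round(abs(delta_grams) / shelf.unit_weight)
        if units == 0:
            return  # sensor noise, or an item from another shelf put back here
        cart = self.carts.setdefault(customer_id, Counter())
        if delta_grams < 0:   # weight dropped: items were picked up
            cart[shelf.item] += units
        else:                 # weight rose: items were put back
            cart[shelf.item] = max(0, cart[shelf.item] - units)

store = Store(shelves={"A3": Shelf("sparkling water", 355.0)})
store.on_weight_change("A3", -710.0, customer_id="c42")  # customer picks up two
store.on_weight_change("A3", +355.0, customer_id="c42")  # puts one back
print(store.carts["c42"])  # Counter({'sparkling water': 1})

When a weight delta does not cleanly match the item stocked on a shelf, a real system would fall back to computer vision or, as the article notes above, to a human reviewer, which is one reason precise item placement matters so much in these stores.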
Bloomberg reported that Amazon is aiming to open 3,000 locations by 2021. Annual revenue per square foot, based on store size (Source: Amazon’s cashierless Go stores could be a $4 billion business by 2021, new research suggests | recode) Amazon Go is a sophisticated version of future retail that has been attempted by many of its competitors. If it had not been built by Amazon, I can imagine the exact same solution being deployed to more convenience stores by other providers. For consumers, it is only about 30 seconds that they save from a regular shopping trip at a convenience store. Yes, we don’t need to bring our wallets, but we need to carry our phones anyway, open the app, and scan. So is it really that much better for consumers’ shopping experience and efficiency? One thing I really liked about Amazon Go is its limited product variety in the relatively spacious store, a choice made to preserve its high accuracy of product detection. This stands in contrast to regular convenience stores, which look messy with overwhelming product variety. I used to like this about convenience stores. Today, however, with so much information and so many items accessible online, I am probably not the only one who prefers something simpler, at least in physical spaces. Original post: http://www.ta-kuma.com/experience-design/just-walk-out-amazon-go%E2%80%8A-%E2%80%8Athe-most-convincing-future-of-retail/ References Inside Amazon’s surveillance-powered, no-checkout convenience store | TechCrunch Stepping Into An Amazon Store Helps It Get Inside Your Head | WIRED Amazon’s cashierless Go stores could be a $4 billion business by 2021, new research suggests | recode Wouldn’t it be better if self-checkout just died? | Vox Amazon’s store of the future has no cashiers, but humans are watching from behind the scenes | recode Meet the duo who make Amazon Go | Fast Company Only Amazon Could Make A Checkout-free Grocery Store A Reality | WIRED The technology behind Amazon’s surveillance-heavy Go store | WIRED Amazon Go is the inevitable evolution of supermarket retail | engadget Amazon opens its second Go store in New York | engadget The 6 Most Surprising Things About the New Amazon Go (No Cash Registers) Convenience Store | inc. The History of the Bar Code | Smithsonian The Man Who Invented the Grocery Store | The Wall Street Journal
https://uxdesign.cc/just-walk-out-amazon-go-the-most-convincing-future-of-retail-469b5794d65c
['Takuma Kakehi']
2019-11-15 02:12:16.315000+00:00
['Future Of Retail', 'Amazon', 'Tracking', 'Retail', 'Amazon Go']
Title walk Amazon Go — convincing future retailContent WALK TECHNOLOGY key phrase used Amazon’s cashierless convenient store Amazon Go store resemble look normal convenience store customer don’t need wait scan pay walk store item Amazon opened second New York City location June 11th 2019 location 13th amongst location Seattle Chicago San Francisco Amazon’s initiative apply online experience brickandmortar shop new thing Back 2017 Amazon acquired Whole Foods order expand fresh grocery line physical store footprint Amazon also experimented brickandmortar shop like Amazon 4star highly reviewed rated item amazoncom Amazon Books literally physical version amazoncom book store Amazon Books NYC predict future retail Although experiment weren’t solution future retail large retail enterprise including Amazon tried reinvent physical shopping experience reachable convenient use technology Image source Tesco virtual supermarket subway station via designboom 2011 Tesco South Korea installed virtual shopping experience Seoul’s subway station — customer could scan QR code printed supermarket shelf station platform idea simple hard working people didn’t time grocery shopping Tesco tapped concept multitask everyday commute Although attempt marketing rather practical solution registered member rose 76 online sale increased 130 Unlike Tesco’s case case Amazon GO customer still need go physical store Presumably Amazon Go help customer save time target market include dense downtown setting register line get long peak hour However one Amazon Go’s main agenda reduce operating cost human staff history physical store One’s grocery shopping experience market 1800s simply inefficient Customers needed visit individual store sold different good 1916 first Piggly Wiggly store Memphis completely changed flow customer led store’s storage pick item centralized register area pay item together system didn’t help operation cost also stimulated customer buy spending time picking different item 1930s Great Depression pressured supermarket direction pursue economy scale ultimately lead success Walmart led ecommerce giant like Amazon later day meantime physical store adopted various technology run centralized register even efficiently le human staff 1972 Kroger agreed test barcode system manage inventory better soon creating industry standard Universal Product Code UPC Image source History Bar Code Timeline modern supermarket 1916 — first Piggly Wiggly store let customer pick item storage pay central location 1930s — Great Depression directed many store adopt centralized register large quantity model 1950s — Many bigbox supermarket appeared suburban setting due motorization 1969 — Walmart chain founded original store Walton’s 5–10 opened 1950 1972 — Kroger agreed Radio Corporation America RCA test barcode system 1974 — first use standardized barcode system Universal Product Code UPC Troy’s Marsh Supermarket 1997 — Contactless payment system Speedpass Mobil look like keychain introduced make purchase without use cash credit card Image source Esso Mobil 2001 — Kmart adopted selfcheckout bigbox player removed store 2003 Image source start 60 2014 — Apple Pay expanded use contactless payment wider merchant US 2016 — first Amazon Go store opened Seattle Amazon Go try solve Customers save time waiting cashier line closest precedent may selfcheckout system largescale supermarket drugstore personally find useful faster process though receives criticism mainly customer trained use register poll 2014 suggested 93 people disliked Customers 
don’t carry wallet contactless payment system Apple Pay closest solution allowing customer shop cashless However identifying counting item still relies human staff address technology detect item customer picked explored example creative unit teamLab created hanger reacts customer pick Similarly tagging item mainstream solution solution connect product customer’s identity connecting individual customer identity personal mobile phone take important role remaining question make connection make process frictionless low energy bluetooth device Beacon seen solution company Emoticons introduced small stylish affordable Beacon device also easy install retail store device could emit bluetooth signal constantly require pairing step like regular bluetooth way system could identify customer entered store track location store However solution come practical way identify item customer chose Additionally customer needed download app connection Amazon also foresees Amazon Go store cut operation cost Amazon Go work smaller Amazon Go store format Image source Amazon via Business Insider order successfully achieve WALK TECHNOLOGY Amazon Go store achieve following extremely high accuracy Register customer — store link Amazon account Track customer’s location — system correlate customer data action taken place Detect item picked — system add item virtual shopping cart customer location Detect item put back onto shelf — system remove item customer’s virtual shopping cart Detect customer leaf store — customer’s online transaction completed 1 Register customer conventional part experience Customers download app phone part Amazon app store entrance scan QR code app gate almost look like sort subway entrance visited store wife baby thought person scan different QR code enter separately However told u use QR code 2 Track customer’s location hundred camera mounted ceiling RGB camera tracking individual customer Amazon mentioned Go store don’t use facial recognition technology Instead camera detect customer’s general profile track individual motion detection camera correlate customer leaving Camera pick customer entering Camera B accuracy tracking augmented use separate depthsensing camera according TechCrunch article also gate staff exist 3 Detect item picked 4 Detect item put back onto shelf unique characteristic Amazon Go represented store design shelf weight sensor know exact weight item item picked sensor tell exactly shelf item Similarly sensor detects object weight put back central processing unit relates information customer’s location action taken place shelf system design shelf clear guide separating row spacious compared regular grocery store store always look tidy well organized item need placed precisely space help accurately detect customer 5 Detect customer leaf store Customers don’t scan QR code exit like enter Instore tracking detects leave store walked store curious store successfully detected item wife picked fact took 5 minute leaving store receive receipt see update app sure design hope Amazon Go app updated virtual shopping cart store Amazon Go look like future experience small Go store plenty human staff Amazon Go still early initiative need people help operate example detecting right item right customer’s virtual shopping cart still assisted human staff processing’s confidence score low addition friendly staff standing gate helpful answering question assisting customer used newage shopping experience Restocking reorganizing item shelf also handled human right item right shelf could misplaced 
customer key entire system Human staff inevitable workforce flexible enough adjust misoperations Nonetheless Amazon’s vision make operation efficient possible According new estimate RBC Capital Markets analyst Amazon Go brings 50 revenue compared traditional convenience store Although initial store cost 1 million hardware alone cost drastically reduced reverse engineering earlier case deploying large scale Bloomberg reported Amazon aiming open 3000 location 2021 Annual revenue per square foot based store size Source Amazon’s cashierless Go store could 4 billion business 2021 new research suggests recode Amazon Go sophisticated version future retail attempted many competitor Amazon imagine exact solution deployed convenient store provider consumer 30 second save regular shopping trip convenience store Yes don’t need bring wallet need carry phone anyways open app scan really much better consumers’ shopping experience efficiency One thing really liked Amazon Go limited product variety relatively spacious store place pursuing high accuracy product detection lie comparison regular convenience store look messy overwhelming product variation used like convenience store Today however much information item accessible online probably one prefers something simpler least physical space Original post httpwwwtakumacomexperiencedesignjustwalkoutamazongoE2808AE2808Athemostconvincingfutureofretail Reference Inside Amazon’s surveillancepowered nocheckout convenience store TechCrunch Stepping Amazon Store Helps Get Inside Head WIRED Amazon’s cashierless Go store could 4 billion business 2021 new research suggests recode Wouldn’t better selfcheckout died Vox Amazon’s store future cashier human watching behind scene recode Meet duo make Amazon Go Fast Company Amazon Could Make Checkoutfree Grocery Store Reality WIRED technology behind Amazon’s surveillanceheavy Go store WIRED Amazon Go inevitable evolution supermarket retail engadget Amazon open second Go store New York engadget 6 Surprising Things New Amazon Go Cash Registers Convenience Store inc History Bar Code Smithonian Man Invented Grocery Store Wall Street JournalTags Future Retail Amazon Tracking Retail Amazon Go
4,286
I’ve Been Plant-Based For A Month, Here’s How It's Gone
I've Been Plant-Based For A Month, Here's How It's Gone So far, so good. Photo by Anna Pelzer on Unsplash For the last six weeks, I've followed a completely plant-based diet. I steer away from saying Vegan, as I do believe that labels matter to a degree. However, I've not eaten any animal products at all! Before going completely plant-based, I was the biggest meat-eater going. Beef, chicken, lamb, veal, duck — you name it, I've eaten it. Food is something I've always appreciated because I came from a family of foodies. My dad is a French-trained chef, my mum is Serbian, and my aunt also worked in restaurants and cafes for a large portion of my life. Going plant-based, or following a vegan lifestyle, was something I always turned my nose up at, mostly because I thought it was pointless and because I had absolutely no desire to give up animal products. However, my gut health started to deteriorate and I was certain that dairy products were causing me issues. I wasn't sick by any stretch of the imagination, but I had constant anxiety about my skin and felt like I couldn't eat a meal without feeling uncomfortable. I wanted to be open and honest about my journey, as I do believe there is a stigma attached to being vegan/plant-based. Below I've highlighted the good, the bad and the ugly from my own experience — as someone who is a self-employed freelance writer and has to think about budgeting fairly often. The good Let's start off with the positives, as that's always a good place to begin. I feel "lighter" and have more energy: The biggest difference I've noticed, especially in weeks five and six, is the energy levels. I'm a naturally fast eater, which often meant at lunchtimes I'd end up feeling uncomfortably full. Since following a plant-based diet my energy levels seem to stay at one level for the majority of the day and naturally drop off by the time I need to sleep. Complexion and weight: My weight has naturally fluctuated over the years, and my BMI has been at both ends of the spectrum — overweight (68kg) as well as underweight (53kg). Please note that this was in relation to my height and age at the time, and definitely correlated with the food I was eating and the alcohol I was consuming. Now I sit at a healthy 60kg, which I'm really proud of, and following a plant-based diet seems to work a lot better for my digestion and overall weight management. I've never looked at being plant-based as a weight loss ploy, as I don't think that's a healthy way to look at things. However, after dealing with a lot of fluctuating weight from 16–21, at 25 I find that this way of eating works well both physically and mentally. In terms of complexion, I started to develop problematic skin at 24, which seemed to directly correlate with the amount of dairy I was eating. Cutting this out has helped my complexion to recover. Accessibility: I'm incredibly fortunate and blessed to have access to large supermarkets and whole food stores, meaning I can buy good quality fresh and frozen products. My budget doesn't allow for me to constantly buy the top-end products on the market, so having shops that cater to varying price ranges has made a plant-based diet really positive from an accessibility perspective. I appreciate that accessibility plays a huge part in what kind of food you can eat, and I don't think this is spoken about enough. Snacks: I love to graze, and definitely was a goat or sheep in a previous life. The snack options for those who follow a plant-based diet are varied, and usually pretty healthy. This was something I didn't realise until I started to read up on what I could eat. The bad Portion management: The first month of being plant-based came with some challenges, as I lived in a state of constant hunger due to not eating larger portions. This was something I worked on straight away, and I now eat around 25% larger portions than I did when I was eating animal products. I don't advise counting calories as that isn't healthy — but intuitive eating was something I had to get my head around to find a portion size that would work for me! Time: Being plant-based takes a lot of time. I don't want to skirt around that. I work from home and I'm self-employed, so the luxury of time means I can marinate and cook things at home from scratch, I can go to a market if I want something specific, and preparing meals is factored into my day. If I worked in an office or a standard 9–5 I definitely think I'd struggle, so time is a privilege which is often overlooked. Finances: I'd say on average my food shop costs the same as when I bought animal products, so financially I don't feel like I've saved any money or spent any more than I usually would. A lot of the expensive things are usually pre-made plant-based meals, as well as supplements, nuts, seeds and spices. You can live on a plant-based diet comfortably, but I can imagine that if I had less time to prepare things from scratch I'd end up spending more on pre-made products. Accessibility: Although I have a lot of access where I live, there are certain circumstances where things may be sold out or low in stock, and then I'll either have to buy a more expensive alternative or go without. Of course, this is a privileged problem, but accessibility is one of the main reasons why I think fewer people are plant-based. This pressure should be put on retailers, as opposed to the average person having to travel to multiple stores just to be able to buy the food they need. Socially: Having a supportive friendship group is great, and in that aspect I feel very comfortable socially eating plant-based food; however, once we're out of a pandemic I imagine this will come with challenges. My main worry is being a burden to other people if they have to cater to my eating habits. The ugly Fast food: Plant-based fast food is delicious, but it's a lot more expensive in comparison to your average McDonald's or Burger King. I guess this is positive as it does deter me from eating bad food (even though it tastes so good); however, this does tie into accessibility and finances. I paid £14.99 for a burger and chips yesterday, and although it was delicious — that's a helluva lot of money. Skin purging: I noticed in weeks three and four that my skin seemed to be in a hurry to get rid of any blemishes and spots that I had on my face. It alarmed me at first as I thought the plant-based diet wasn't working for me, but after reading and understanding what had happened — it looked as though my skin was clearing itself. Now, crystal clear aside from the odd hormonal spot! Conclusion Overall, I do think there have been a lot of positives to my journey so far; however, I think the real test of time will be three, six and twelve months down the line to see how it impacts me across the aforementioned points. If you're following a plant-based diet I'd love to hear from you. Follow me on Twitter!
https://medium.com/the-innovation/ive-been-plant-based-for-a-month-here-s-how-its-gone-12517ed6215
['Claire Stapley']
2020-12-30 11:07:01.358000+00:00
['Plant Based', 'Sustainabilityms', 'Lifestyle', 'Vegan', 'Eating']
Title I’ve PlantBased Month Here’s GoneContent I’ve PlantBased Month Here’s Gone far good Photo Anna Pelzer Unsplash last six week I’ve followed completely plantbased diet steer away saying Vegan believe label matter degree However I’ve eaten animal product going completely plantbased biggest meateater going Beef chicken lamb veal duck — name I’ve eaten Food something I’ve always appreciated came family foodie dad Frenchtrained Chef mum Serbian Aunt also worked restaurant cafe large portion life Going plantbased following Vegan lifestyle something always turned nose mostly thought pointless also — absolutely desire give animal product However gut health started deteriorate certain dairy product causing issue wasn’t sick stretch imagination constant anxiety skin felt like couldn’t eat meal without feeling uncomfortable wanted open honest journey believe stigma attached VeganPlantBased I’ve highlighted good bad ugly experience — someone selfemployed freelance writer think budgeting fairly often good Let’s start positive that’s always good place begin feel “lighter” energy biggest difference I’ve noticed especially week five six energy level I’m naturally fast eater often meant lunchtime I’d end feeling uncomfortably full Since following plantbased diet energy level seem stay one level majority day naturally drop time need sleep biggest difference I’ve noticed especially week five six energy level I’m naturally fast eater often meant lunchtime I’d end feeling uncomfortably full Since following plantbased diet energy level seem stay one level majority day naturally drop time need sleep Complexion weight weight naturally fluctuated year BMI end spectrum — overweight 68kg well underweight 53kg Please note relation height age time definitely correlation food eating alcohol consuming sit healthy 60kg I’m really proud following plantbased diet seems work lot better digestion overall weight management I’ve never looked plantbased weight loss ploy don’t think that’s healthy way look thing However dealing lot fluctuating weight 16–21 25 find way eating work well physically mentally term complexion started develop problematic skin 24 seemed directly correlate amount dairy eating Cutting helped complexion recover Accessibility I’m incredibly fortunate blessed access large supermarket whole food store meaning buy good quality fresh frozen product budget doesn’t allow constantly buy top end product market shop cater varying price range made plantbased diet accessibility perspective really positive appreciate accessibility play huge part kind food eat don’t think spoken enough I’m incredibly fortunate blessed access large supermarket whole food store meaning buy good quality fresh frozen product budget doesn’t allow constantly buy top end product market shop cater varying price range made plantbased diet accessibility perspective really positive appreciate accessibility play part kind food eat don’t think spoken enough Snacks love graze definitely goat sheep previous life snack option follow plantbased diet varied usually pretty healthy something didn’t realise started read could eat bad Portion management first month plantbased came challenge lived state constant hunger due larger portion something worked straight away eat around 25 larger portion eating animal product don’t advise counting calorie isn’t healthy — intuitive eating something get head around find portion size would work first month plantbased came challenge lived state constant hunger due larger portion something worked straight away eat 
around 25 larger portion eating animal product don’t advise counting calorie isn’t healthy — intuitive eating something get head around find portion size would work Time plantbased take lot time don’t want skirt around work home I’m selfemployed luxury time mean marinate cook thing home scratch go market want something specific preparing meal factored day worked office standard 9–5 definitely think I’d struggle time privilege often overlooked plantbased take lot time don’t want skirt around work home I’m selfemployed luxury time mean marinate cook thing home scratch go market want something specific preparing meal factored day worked office standard 9–5 definitely think I’d struggle time privilege often overlooked Finances I’d say average food shop still bought animal product financially don’t feel like I’ve saved money spent usually would lot expensive thing usually premade plantbased meal well supplement nut seed spice live plantbased diet comfortably imagine le time prepare thing scratch I’d end spending premade product I’d say average food shop still bought animal product financially don’t feel like I’ve saved money spent usually would lot expensive thing usually premade plantbased meal well supplement nut seed spice live plantbased diet comfortably imagine le time prepare thing scratch I’d end spending premade product Accessibility Although lot access live certain circumstance thing may sold outlow stock I’ll either buy expensive alternative go without course privileged problem accessibility one main reason think fewer people plantbased pressure put retailer opposed average person travel multiple store able buy food need Although lot access live certain circumstance thing may sold outlow stock I’ll either buy expensive alternative go without course privileged problem accessibility one main reason think fewer people plantbased pressure put retailer opposed average person travel multiple store able buy food need Socially supportive friendship group great aspect feel comfortable socially eating plantbased food however we’re pandemic imagine come challenge main worry burden people cater eating habit ugly Fast food Plantbased fast food delicious it’s lot expensive comparison average McDonalds Burger King guess positive deter eating bad food even though taste good however tie accessibility finance paid £1499 burger chip yesterday although delicious — that’s helluva lot money Plantbased fast food it’s lot expensive comparison average McDonalds Burger King guess positive deter eating bad food even though taste good however tie accessibility finance paid £1499 burger chip yesterday although delicious — that’s helluva lot money Skin purging noticed week three four skin seemed hurry get rid blemish spot face alarmed first thought plantbased diet wasn’t working reading understanding happened — looked though skin clearing crystal clear aside odd hormonal spot Conclusion Overall think lot positive journey far however think real test time three six twelve month line see impact across aforementioned point you’re following plantbased diet I’d love hear Follow TwitterTags Plant Based Sustainabilityms Lifestyle Vegan Eating
4,287
Universal Health Coverage Should Be a Fundamental Human Right
Many people are confused about what the ACA is actually supposed to do. One of the biggest ACA reforms is the establishment of public health insurance exchanges, which are marketplaces that allow individuals and families to seek out and buy affordable, comprehensive health insurance plans. The ACA also provides increased government subsidies to help low- and middle-income families afford health insurance. Additionally, it prohibits insurance companies from refusing service or charging higher rates to people with pre-existing conditions, making health insurance more affordable and accessible to all. The ACA also prohibits insurance companies from placing an annual or lifetime cap on how much money they're willing to pay for an individual's healthcare. Finally, the ACA requires all companies with at least 50 employees to offer affordable, comprehensive health insurance to all of their full-time employees. Although the ACA has made considerable strides towards the goal of achieving universal health coverage for all Americans, it's not a perfect system and has faced considerable pushback, especially from Republican politicians. One of the ACA's major limitations involves Medicaid, a program established in 1965 to provide affordable healthcare for low-income Americans. When the ACA was first established, one of its main goals was to expand Medicaid across all 50 states in the hopes that more low-income individuals could gain access to affordable health insurance. However, in 2012, the Supreme Court ruled that requiring states to adopt the Medicaid expansion was unconstitutionally coercive, which means that individual states are still allowed to opt out of providing expanded Medicaid coverage to their residents. As of 2019, 37 states (including Washington DC) have adopted the ACA's Medicaid expansion, but 14 states have chosen not to. This has created a coverage gap for low-income individuals in these 14 states, which means that about 2 million Americans still do not have affordable or accessible health coverage. Until all Americans, including those who live at or under the poverty line, are given access to affordable healthcare, we cannot claim to be a nation that values the fundamental human right of health. In March of 2019, the Trump Administration announced that it wanted to strike down the entire Affordable Care Act, which would nullify advances in healthcare coverage for over 30 million Americans. To do this, the Trump Administration is banking on a lawsuit against the ACA, Texas v. Azar, which seeks to declare the entirety of the ACA unconstitutional. Legal scholars are divided on whether or not this lawsuit poses a serious threat to the ACA, so in the coming months, the Texas v. Azar suit is definitely something to keep your eye on if you're interested in following the debate surrounding the ACA. To combat the Trump Administration, House Democrats recently introduced a bill to strengthen the Affordable Care Act. Provisions in this bill include increasing subsidies for low-income individuals, expanding federal assistance to include individuals at higher income levels, and fixing the ACA's notorious "family glitch," which currently makes it difficult for employed individuals to afford insurance plans that include their spouses and children. However, because of rampant partisanship in Congress, it's still unclear whether this bill will gain any ground.
Universal healthcare and 'Medicare for All' have become the battleground of a fierce partisan debate, with Republicans and Democrats vying for political power by trying to repeal or strengthen the ACA. Although the debate swirling around universal health coverage and the ACA can be incredibly tense and confusing, it's important to always keep in mind the core tenet of human rights that serves as the foundation of the argument for universal healthcare. Regardless of what form it ends up taking, access to quality healthcare is a fundamental human right, and every attempt to deny this healthcare is a degradation of the United States' commitment to upholding human rights. Subscribe to our Newsletter
https://medium.com/in-kind/universal-health-coverage-should-be-a-fundamental-human-right-f1991d575b6c
['In Kind']
2019-05-14 17:35:52.968000+00:00
['Politics', 'Affordable Care Act', 'Healthcare', 'Wellness', 'Insurance']
Title Universal Health Coverage Fundamental Human RightContent Many people confused ACA actually supposed One biggest ACA reform establishment public health insurance exchange like marketplace allow individual family seek buy affordable comprehensive health insurance plan ACA also provides increased government subsidy help low middleincome family afford health insurance Additionally prohibits insurance company refusing service charging higher rate people preexisting condition making health insurance affordable accessible ACA also prohibits insurance company placing annual lifetime cap much money they’re willing pay individual’s healthcare Finally ACA requires company least 50 employee offer affordable comprehensive health insurance fulltime employee Although ACA made considerable stride towards goal achieving universal health coverage Americans it’s perfect system faced considerable pushback especially Republican politician One ACA’s major flaw involves Medicaid program established 1980s provide affordable healthcare lowincome Americans ACA first established one main goal expand Medicaid 50 state hope lowincome individual could gain access affordable health insurance However 2012 Supreme Court declared expansion Medicaid unconstitutional mean individual state still allowed opt providing expanded Medicaid coverage resident 2019 37 state including Washington DC adopted ACA’s Medicaid expansion 14 state chosen created coverage gap lowincome individual 14 state mean 2 million Americans still affordable accessible health coverage Americans including live poverty line given access affordable healthcare cannot claim nation value fundamental human right health March 2019 Trump Administration announced wanted overthrow entire Affordable Care Act nullifying advance healthcare coverage 30 million Americans Trump Administration banking lawsuit ACA Texas v Azar seek declare entirety ACA unconstitutional Legal scholar divided whether lawsuit pose serious threat ACA coming month Texas v Azar suit definitely something keep eye you’re interested following debate surrounding ACA combat Trump Administration House Democrats recently introduced bill strengthen Affordable Care Act Provisions bill include increasing subsidy lowincome individual expanding federal assistance include individual higher income level fixing ACA’s notorious “family glitch” currently make difficult employed individual afford insurance plan include spouse child However rampant partisanship Congress it’s still unclear whether bill make ground Universal healthcare ‘Medicaid All’ become battleground fierce partisan debate Republicans Democrats vying political power trying repeal strengthen ACA Although debate swirling around universal health coverage ACA incredibly tense confusing it’s important always keep mind core tenet human right serf foundation argument universal healthcare Regardless form end taking access quality healthcare fundamental human right every attempt deny healthcare degradation United States’ commitment upholding human right Subscribe NewsletterTags Politics Affordable Care Act Healthcare Wellness Insurance
4,288
Where science meets business — crafting a career of impact
CAREERS Where science meets business — crafting a career of impact A doctoral degree has long been the gold-standard credential for predicting the impact an individual can have in contributing to the bioeconomy. Bioeconomy.XYZ writers have been leading a dialogue around a particularly important topic — the importance of the Ph.D. Alexander Titus's article "PhD not required" has certainly made waves: he challenges the limitations of this gold standard and presents the novel thought that impact within this space is not constrained to a certain type of educational background. Joseph Buccina picks up the metaphorical baton as he answers the question "If a PhD is not required … then what is?" He gives tangible insights for how to grow within the bioeconomy without a doctorate. I would highly recommend reading these two articles. As an individual breaking into the bioeconomy and looking to make an impact while not currently possessing a Ph.D., I have found this discussion extremely appealing. Finding your path from undergrad to the workforce can be incredibly daunting. So how do we get there? I recently wrote an article about unlocking the potential of networking, especially in light of the uncertainty of the pandemic. All five recommendations were essential to forming meaningful connections with mavericks within the bioeconomy. One of the most impactful conversations that I have had to date was with Chris Hsu. When searching LinkedIn looking for individuals to interview (point #1), Chris's profile caught my attention first due to parallels in our academic backgrounds. We both have undergraduate degrees in a scientific discipline and both hold master's degrees in a business concentration; an MBA in Chris's case. Beyond that, Chris leads innovation within the bioeconomy through his work at GSK, a multinational pharmaceutical and consumer healthcare company. Now that is a level of impact I would love to have. To point #3, Chris embodies the hungry and humble mentality, which is evident not only in his work experience but also in that he was willing to speak with me about his journey. This conversation with Chris gave me unique confidence about the decisions I have made about my professional journey thus far and the ones I will have to make in the future. Others looking to break into the bioeconomy with a nontraditional background would greatly benefit from Chris's wisdom, which he gave me permission to share. I used many of the questions I shared from point #4 in framing our conversation, so I hope Chris's story will resonate with you just as much as it did for me. Why did you decide to study science, specifically biology and public health, in undergrad? Photo by National Cancer Institute on Unsplash For me, my journey started in high school when I was really fascinated by the Human Genome Project. At that time in the 2000s, we were really just starting to make breakthroughs in understanding how the human genetic sequence could help unlock the mysteries of how diseases impact the body, as well as the potential for how we could edit or manage some of these genes in order to find cures. I remember a ton of excitement around the idea of unlocking the codes that help trigger certain types of cancer and being able to edit those cleanly to help patients through gene therapy find a path to recovery. That really intrigued me.
There was a movie I watched back in 2000 called Gattaca, which was set in the future and where the career paths and futures of individuals were determined by eugenics. In that dystopia, only those individuals with the best and strongest hereditary traits were favored. It was an extreme example, but the implications of that movie also really sparked my interest in what we knew about DNA and the potential for curing diseases. I learned that Down syndrome is caused by an extra chromosome and how genes can determine your gender and eye color. Thanks, high school biology class! If you fast forward to today, we now have gene-editing technology like CRISPR, and we have this up-and-coming mRNA technology that companies like Moderna are using, in which engineered mRNA is delivered into the body to produce specific proteins. And they are using the platform to develop vaccines for RSV or COVID, for example. So it was really the wow factor: just being in awe of what the Human Genome Project could unlock, and the idea that one day I would love to work at a company where we could cure diseases that have been plaguing human society for hundreds of years. The public health emphasis was really about understanding what we can do to promote community protection: protecting society as a whole versus at the individual level. As graduation approached, why did you decide not to go down the medical school or graduate school path? When considering medical school, becoming a physician was never really an interest for me. I know that typically for most science majors, medical school, dental school or veterinary school is where the vast majority of students go (at least at my university). I chose not to go to graduate school immediately after because I saw a need to have work experience before I decided to develop a specialization through graduate school. When I graduated in 2007, it was absolutely one of the worst times to be a college graduate in the job market. Companies were only hiring experienced candidates, and all of us new grads had a tough time getting a foot in the door while companies were reducing their workforce. Most of my peers tried to go straight to graduate school, but I tried to ride it out and get a job. I chose to pursue a career in the pharma/biotech industry because of the passion and inspiration that began in high school, and knowing that the medicines I helped develop and produce could have a profound impact on a large patient community. Why did you choose your first job? I remember that upon graduation I had three job offers on the table. Two were through federal agencies, the NIH and the FDA, and the third was a private sector company. I ended up selecting my first job because it was in cancer research and I had the chance to work with the National Cancer Institute. It was actually the lowest paying job out of the three offers that I received. But experience at the time was more important to me than pay. I always believed that getting the right experience now would translate into better compensation down the road. It ended up being a great opportunity for me to learn more about regulatory affairs, which plays a multi-functional role in developing the clinical and filing strategy to bring a drug to market. Photo by Science in HD on Unsplash When you decided to pivot from your associate positions to consulting — how did you know it was time to make a change, and how did you evaluate the offer?
By the time I had switched over to consulting, I had already had three different jobs in industry. I had switched jobs almost year-to-year in the first three years of my post-undergraduate experience. I often get asked, "Chris, I see that you moved around a lot in your career early on — why is that?" And for me, I would say: pharma is such a large industry with a lot of different career paths. I figured the best time to learn, make mistakes, and figure out what I was interested in was as a young professional, so it made sense to pick up experience in different functional areas. It ended up being a great decision because it gave me a lot of early exposure to different aspects of the industry and helped me develop a better understanding of drug development. I eventually transitioned into life science consulting because I wanted the ability to work with a variety of companies and across diverse projects. Consulting really was the best opportunity to get into a whole different pace of work as well. In consulting you work on projects for three months, six months or a year at a time, switching clients on a regular basis. This really accelerates your learning curve, instead of being in the same job for two to three years as you normally would be in an industry role. In that same time span in consulting I worked with at least 2–3 different companies and maybe on 4–5 different projects spanning various functional areas. Consulting was an opportunity for me to really accelerate my learning and my development. Photo by You X Ventures on Unsplash Why did you decide to move to GSK? This reason is more personal. Prior to coming to GSK, I had relocated to San Francisco to support a leading pharmaceutical company client. My wife and I were engaged, but we were basically doing long distance, planning a wedding while we were on opposite sides of the country. The long distance and constant travel were difficult to manage, so I agreed to move back to the East Coast. Coincidentally, GSK was at the time opening its third R&D vaccine center in Rockville. It was a really great time to come in because the site was just starting up and I was able to start my career in Vaccines as one of the first 75 employees on the site. Today we are over 400. How did your academic background prepare you to first be a Senior Program Manager for Global Meningitis Vaccines, Strategy, Portfolio, and Operations and now a Commercial Launch Excellence Lead? It really was a combination of academic and work experience that helped prepare me for my current role as Senior Program Manager. One of the reasons that I got the job was my work versatility: I had prior work experience in multiple functional areas and as a result was able to better support my team and my stakeholders. I remember receiving feedback from the interviews that they really liked my consulting experience combined with my experience in clinical, manufacturing, and regulatory affairs. Having that broad experience and exposure to these areas is really important in my current role, where I help manage the overall portfolio of our key initiatives and the execution of our strategic objectives. In terms of academic background, having prior knowledge of immunology or biological systems is critically important in understanding the scientific and technical development of medicines, and how the body is able to benefit from them.
If I came from a non-science background, it would certainly be harder for me to understand disease progression and the science behind the vaccine itself in terms of the antigens, how it drives the immunological response, and how antibodies are produced. The scientific knowledge is something you can develop over time if you have a non-science background. Pursuing my MBA was one of the best decisions I made in my career. Paired with a scientific degree, the MBA helped prepare me to develop and execute strategy, perform complex financial analyses, and understand the fundamentals of marketing. One of the more memorable quotes a colleague shared with me was "you can't develop a medicine you can't sell, and if you can sell it, why should people choose your product?" Simple but true. R&D and Commercial are very complementary, and having both a scientific and business background brings that relationship to life. What is the most gratifying part of your role? The most gratifying part of my role, hands down, is knowing that we have an impact on patients. Going back to why I wanted to get into the industry in the first place, it's the idea that we could do something at scale that could make an impact on the broader population. My dad is a doctor and has likely seen thousands of patients over his lifetime, and it's extremely admirable seeing the time and dedication he gives to his patients. With vaccinations, you have the potential to change an entire generation globally. What is super rewarding about my role is knowing that millions of infants, children, and adolescents are receiving the vaccine that we developed, and knowing we are potentially providing them a measure of protection against a severe disease. Why do you think that pairing a STEM undergraduate degree with a business degree is an advantage in the bioeconomy space? Photo by Jaron Nix on Unsplash I think it is such a powerful combination because you have the scientific background to understand how medicines work and how diseases progress, but also the scientific challenges of creating drugs or vaccines. As I mentioned earlier, there will always be a fine balance between how much a company can commit to R&D investment and how much its commercial arm can generate in sales. Ultimately, a business is only sustainable if you are able to generate consistent revenue and then invest that money back into your R&D for new medicines. It is a very cyclical process, but it's pretty simple: current R&D investment drives future revenue, and current revenue drives future R&D investment. A STEM undergraduate degree and business degree combination gives you a more powerful presence as a leader, where you understand the nature of R&D and drug development but also understand the needs of the business: who your customers are and how you can ultimately best benefit patients. I want to contribute to this industry — what skills should I be developing now to set myself up to make a meaningful contribution? I think it is important to be naturally curious. Be willing to discover, explore, research, and learn about different technologies and trends in what is happening in the industry. I subscribe to the daily newsletters Fierce Pharma and Fierce Biotech. Every day I get a newsletter highlighting the newest drug approvals or clinical studies that didn't go well. Getting that industry insight gives you a little more understanding of the different technologies that are out there, but also the different companies and players that are out there.
There are mergers and acquisitions happening all the time (R&D by M&A), and it's not uncommon that during the year a company releases positive data and by the end of the year you see some form of licensing or acquisition happening with another company. Also, take some risks early on in your career. If your current job isn't fulfilling or you feel like you have maxed out, don't shy away from moving on and trying something different. Especially for young professionals, this is the time of your life when I think you should learn and try different roles. Broad experience is always going to make you more marketable to companies. Be willing to network and connect. Reach out to different people across the industry. One suggestion is to have an informational interview to learn more about who they are, the job they are doing, and how they got there (like you did, Katy!). If you are already in a company, connect and network with your colleagues, because one of them could become a mentor or a champion for you to develop within the company. Finally, you don't know what you don't know. Be open and proactive in finding the answers to the questions you have. Who are your mentors and how did you foster those relationships? I have multiple mentors who have supported me and helped influence me throughout my career. You will undoubtedly come across people in your career that you have good chemistry with, where you immediately feel there is a trust and confidence to confide in them; and these people genuinely (the key word being genuinely) want to see you succeed. I have developed what I'd call lifelong career mentors, despite having left the companies where I worked with them. Part of what makes this mentor relationship so important is that these people really care about you, want to see you grow, and see your potential. Mentors wouldn't be helpful if they didn't see your talent and potential; otherwise, they would not be using their precious time effectively. The way that I have fostered these relationships is to first be myself. I am clear about what my interests are and what I'd like to develop. Also, keep these mentors in the loop about what is going on with you. You don't need to talk to them on a weekly or monthly basis, but keep the relationships warm. Keep them regularly updated on what's happening in your life or professionally, even if there isn't a clear ask for them to do something for you. This helps your mentors be aware of what is happening so that if there is a time you need some advice or you need to come to them for help — they are up to speed. Don't come to them only when it's time to find your next position; otherwise the relationship will feel one-sided. It's on you to have regular check-ins, and you can define "regular" together (it could be monthly, quarterly, or semi-annually), but if they genuinely care about you then they will want to keep in touch with you. I also think that it is important to have multiple mentors. It is always good to have multiple perspectives. And there may be times when you are seeking advice and you want the counsel of different people with different experiences, which can give you a more balanced perspective. You will find that at times their messages are consistent, and at other times you may find that their messages conflict. But I have always found it valuable to have at least several mentors that you can lean on and that can be fully honest with you.
Finally, your mentors should always be trying to challenge you as well. Any advice for those looking to contribute to this space with a non-traditional academic path? The good news is that I have met a lot of people who work in the field who don't have scientific backgrounds. I have met journalism majors and English majors who have done very well. There are definitely opportunities to work in a pharma company across different functions, depending on what your interests are. For example, I work with someone who started out as a journalism major and now works in a marketing role. That role typically requires an understanding of the scientific technology behind our vaccine; however, because they've been able to demonstrate learning agility, they've been successful. There are also other career paths for contributing in other ways, like in finance, accounting, legal, and supply chain, where you don't need a scientific degree. If you are passionate about the bioeconomy, Chris is proof that it is possible to craft your own path of impact with a nontraditional background. I hope his journey can encourage others to be bold in their own paths. If Chris's journey resonates with you, reach out and start a conversation! Want to talk about biotechnology or bioeconomy innovation? Working on some cool science you think is essential to the conversation? Let's connect! But most importantly, make sure you are following Bioeconomy.XYZ for accessible information about biotechnology and the bioeconomy.
https://medium.com/bioeconomy-xyz/where-science-meets-business-crafting-a-career-of-impact-35cd6ddf552e
['Kathryn Hamilton']
2020-11-18 21:07:14.844000+00:00
['Interview', 'Careers', 'Bioeconomy', 'Biotechnology', 'Graduate School']
Title science meet business — crafting career impactContent CAREERS science meet business — crafting career impact doctoral degree gold standard metric come predicting potential impact individual contributing bioeconomy BioeconomyXYZ writer leading dialogue around particularly important topic — importance PhD Alexander Titus’s article “PhD required” certainly made wave challenge limitation gold standard present novel thought impact within space constrained possessing certain type educational background Joseph Buccina pick metaphorical baton answer question “If PhD required … is” give tangible insight grow within bioeconomy without doctorate would highly recommend reading two article individual breaking bioeconomy looking make impact currently possessing PhD discussion extremely appealing Finding path undergrad workforce incredibly daunting get recently wrote article unlocking potential networking especially light uncertainty pandemic five recommendation essential forming meaningful connection maverick within bioeconomy One impactful conversation date Chris Hsu searching LinkedIn looking individual interview point 1 Chris’s profile caught attention first due parallel academic background undergraduate degree scientific discipline hold master degree business concentration MBA Chris’s case Beyond Chris lead innovation within bioeconomy work GSK multinational pharmaceutical consumer healthcare company level impact would love point 3 Chris embodies hungry humble mentality evident work experience also willing speak journey conversation Chris gave unique confidence decision made professional journey thus far one make future Others looking break bioeconomy nontraditional background would greatly benefit Chris’s wisdom gave permission share used many question shared point 4 framing conversation hope Chris’s story resonate much decide study science specifically biology public health undergrad Photo National Cancer Institute Unsplash journey started high school really fascinated Human Genome Project time 2000’s really starting make breakthrough understanding human genetic sequence could help unlock mystery disease impact body well potential could edit manage gene order find cure remember ton excitement around idea unlocking code help trigger certain type cancer able edit cleanly help patient genetherapy find path recovery really intrigued movie watched back 2000 called Gattaca set future career path future individual determined eugenics dystopia individual best strongest hereditary trait favored extreme example implication movie also really sparked interest knew DNA potential curing disease learned syndrome called extra chromosome gene determine gender eye color Thanks high school biology class fast forward today geneediting technology like CRISPR coming mRNA technology company like Moderna using use viral vector insert revised mRNA back body produce specific protein using platform develop vaccine RSV COVID example really wow factor awe Human Genome Project could unlock idea one day would love work company could cure disease plaguing human society hundred year public health emphasis really understanding promote community protection protect society whole versus individual level graduation approached — decide go medical school graduate school path considering medical school becoming physician never really interest know typically science major going medical school dental school veterinarian school vast majority student go least university chose go graduate school immediately saw need work experience 
decided develop specialization graduate school graduated 2007 absolutely one worst time college graduate job market Companies hiring experienced candidate u new grad tough time getting foot door company reducing workforce people peer tried go back graduate school immediately tried ride get job chose pursue career pharmabiotech industry passion inspiration began high school knowing medicine helped develop produce could profound impact large patient community choose first job remembered upon graduation three job offer table Two federal agency NIH FDA third private sector company ended selecting first job cancer research chance work National Cancer Institute actually lowest paying job three offer received experience time important pay always believed getting right experience would translate better compensation road ended great opportunity learn regulatory affair play multifunctional role developing clinical filing strategy bring drug market Photo Science HD Unsplash decided pivot associate position consulting — know time make change evaluate offer time switched consulting already three different job industry almost switched job yeartoyear first three year postundergraduate experience often get asked “Chris see moved around lot career early — that” would say — pharma large industry lot different career path figured best time learn make mistake figure interested young professional made sense pick experience different functional area ended great decision gave lot early exposure different aspect industry develop better understanding drug development eventually transitioned life science consulting wanted ability work variety company across diverse project Consulting really best opportunity get whole different pace work well consulting work project three month six month year time switching client regular basis really accelerates learning curve instead job two three year would normally industry role time span consulting worked least 2–3 different company maybe 4–5 different project spanning various functional area Consulting opportunity really accelerate learning development Photo X Ventures Unsplash decide move GSK reason personal Prior coming GSK relocated San Francisco support leading pharmaceutical company client wife engaged basically long distance planning wedding opposite side country longdistance constant travel difficult manage agreed move back east coast Coincidentally GSK time opening third RD vaccine center Rockville really great time come site starting able start career Vaccines one first 75 employee site Today 400 academic background prepare first Senior Program Manager Global Meningitis Vaccines Strategy Portfolio Operations Commercial Launch Excellence Lead really combination academic work experience helped prepare current role Senior Program Manager One reason got job work versatility prior work experience multiple functional area result able better support team stakeholder remember receiving feedback interview really liked consulting experience combined experience clinical manufacturing regulatory affair broad experience exposure area really important current role help manage overall portfolio key initiative execution strategic objective term academic background prior knowledge immunology biological system critically important understanding scientific technical development medicine body able benefit came nonscience background would certainly harder understand disease progression science behind vaccine term antigen drive immunological response antibody produced scientific knowledge 
something develop time nonscience background Pursuing MBA one best decision made career Paired scientific degree MBA helped prepare develop execute strategy perform complex financial analysis understand fundamental marketing One memorable quote colleague shared “you can’t develop medicine can’t sell sell people choose product” Simple true RD Commercial complimentary scientific business background bring relationship life gratifying part role gratifying part role hand knowing impact patient Going back wanted get industry first place it’s idea could something large could make impact broader population dad doctor likely seen thousand patient lifetime it’s extremely admirable seeing time dedication give patient vaccination potential change entire generation globally super rewarding role knowing million infant child adolescent receiving vaccine developed knowing potentially providing measure protection severe disease think pairing STEM undergraduate degree Business degree advantage bioeconomy space Photo Jaron Nix Unsplash think powerful combination scientific background understand medicine work disease progress also scientific challenge creating drug vaccine mentioned earlier also fine balance much company commit RD investment much Commercial generate sale Ultimately business sustainable able generate consistent sustainable revenue invest money back RD new medicine cyclical process it’s pretty simple current RD investment drive future Revenue current Revenue drive future RD investment STEM undergraduate degree business degree combination give powerful presence leader understand nature RD drug development also understanding need business customer best benefit patient ultimately want contribute industry — skill developing set make meaningful contribution think important naturally curious willing discover explore research learn different technology trend happening industry subscribe daily newsletter Fierce Pharma Fierce Biotech Every day get newsletter highlighting newest drug approval clinical study didn’t go well Getting industry insight give little understanding different technology also different company player merger acquisition happening time RD it’s uncommon year company release positive data end year see form licensing acquisition happening another company Also take risk early career current job isn’t fulfilling feel like maxed don’t shy away moving trying something different Especially young professional time life think learn try different role Broad experience always going make marketable company willing network connect Reach different people across industry One suggestion informational interview learn job get like Katy already company connect network colleague one could become mentor champion develop within company Finally don’t know don’t know open proactive finding answer question mentor foster relationship multiple mentor supported helped influence throughout career undoubtedly come across people career good chemistry immediately feel trust confidence confide people genuinely keyword genuine want see succeed developed I’d call lifelong career mentor despite left company worked Part make mentor relationship important people really care want see grow see potential Mentors wouldn’t helpful didn’t see talent potential otherwise it’s precious time using effectively way fostered relationship first clear interest I’d like develop Also keep mentor loop going don’t need talk weekly monthly basis keep relationship warm Keep regularly updated what’s happening life professionally even isn’t 
clear ask something help mentor aware happening time need advice need come help — speed Don’t come help come time finding next position otherwise relationship feel onesided It’s regular checkins define regular together could monthly quarterly semiannually — genuinely care want keep touch also think important multiple mentor always good multiple perspective may time seeking advice want get counsel different people different experience give balanced perspective find time message consistent time may find message conflicting always found valuable least several mentor lean fully honest Finally mentor always trying challenge well advice looking contribute space nontraditional academic path good news met lot people work field don’t scientific background met Journalism major English major done well definitely opportunity work pharma company across different function depending interest example work someone started Journalism major work marketing role role typically requires understanding scientific technology behind vaccine however they’ve able demonstrate ability learn agility they’ve successful also career path contributing way like finance accounting legal supply chain don’t need scientific degree passionate bioeconomy — Chris proof possible craft path impact nontraditional background hope journey encourage others bold path Chris’s journey resonates reach start conversation Want talk biotechnology bioeconomy innovation Working cool science think essential conversation Let’s connect importantly make sure following BioeconomyXYZ accessible information biotechnology bioeconomyTags Interview Careers Bioeconomy Biotechnology Graduate School
4,289
Picking Peaches With Python in Animal Crossing New Horizons
Getting Started Before running ACNH Automator, you will need to input some information about your tree grid and where Nook's Cranny is located in your town. In a future release this information will be entered into a command line prompt when running joycontrol, but for the current release you must edit the run_controller_cli.py file. On line 63 of run_controller_cli.py you will find tree_pick_data being defined as an instance of the TreePickLogic class. It is populated with sample data that you will have to change. There are also secondary defaults that are defined, which can be updated as necessary. I've included a full explanation of each value you will have to change in the Readme, but I've also included some reference images to clarify how the grid system is set up. Grid Information ACNH Automator v1.0 assumes that your trees are spaced exactly one grid space apart from each other in the x and y directions. Options for updating this will be available in a future release. Grid space is measured in [x,y] and assumes that [0,0] is the space directly to the left of the top-left tree. The nook_grid value should be exactly 2 spaces below Nook's Cranny to avoid running into the building by accident. Other recommendations You MUST make sure that your inventory selector (the hand icon when you're in your inventory) is located on the first inventory space, or the selling process will not work properly. I also recommend clearing out your inventory of anything you don't want to accidentally sell until you are very comfortable with this toolset. It's important to have Nook's Cranny located as close as possible to your tree grid in order to make traveling to sell your fruit easier. I recommend separating your tree grid from the rest of the town to avoid the possibility of villagers getting in your way; I solved this by building on a cliff that is inaccessible to villagers. Try to only have one space available on either side of your tree grid; this will help your character "get back on track" if the automation goes awry. Once you've entered your town's data into run_controller_cli.py you can navigate your character to grid space [0,0] and move on to the next step. Emulating the controller and running "pick_trees" ACNH Automator relies on joycontrol to run, so you'll need to first navigate to the Change Grip/Order menu on your Nintendo Switch, run joycontrol to begin emulating a controller, and then navigate back to Animal Crossing before running the pick_trees command. To start this process, cd into the main joycontrol directory and run the following command: sudo python3 run_controller_cli.py PRO_CONTROLLER Here is an example of what running joycontrol will look like. You might have to hit CTRL-C once or twice if it doesn't connect to your Switch within a few seconds. Once joycontrol is up and running and your character is at [0,0] facing the first tree, you can simply run the following command: pick_trees Running pick_trees should look something like this. Based on the information you entered about your town, your character will: Navigate through the grid, harvesting fruit from each tree in the x direction until it reaches the last tree in the row. Travel down two spaces in the y direction to proceed to the next row, and change direction accordingly. Stop picking trees when a threshold is met for the amount of fruit that can be safely stored in your inventory. Travel to Nook's Cranny to sell all of the fruit, and travel back to the next tree that needs to be picked.
Repeat this process until all fruit is harvested and sold. [The original post ends with a GIF showing this process in action.]
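For orientation, the line-63 edit described earlier might look roughly like the sketch below. Only nook_grid is named in this post; every other parameter name and value here is a hypothetical placeholder, so defer to the sample data in run_controller_cli.py and the Readme for the real TreePickLogic signature:

# hypothetical sketch of the line-63 edit in run_controller_cli.py
# (parameter names other than nook_grid are illustrative guesses)
tree_pick_data = TreePickLogic(
    tree_rows=3,        # hypothetical: rows of trees in your grid
    tree_columns=5,     # hypothetical: trees per row
    nook_grid=[12, 8],  # [x, y] space exactly 2 spaces below Nook's Cranny
)

Whatever the real names turn out to be, the values follow the grid convention above: [0,0] is the space directly to the left of the top-left tree, and nook_grid sits two spaces below the shop.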
https://medium.com/swlh/picking-peaches-with-python-in-animal-crossing-new-horizons-75274706ee79
['Arthur Wilton']
2020-11-11 15:53:12.004000+00:00
['Python', 'Nintendo Switch', 'Animal Crossing Switch', 'Animal Crossing', 'Github']
Title Picking Peaches Python Animal Crossing New HorizonsContent Getting Started running ACNH Automator need input information tree grid Nook’s Cranny located town future release information entered command line prompt running joycontrol current release must edit runcontrollerclipy file line 63 runcontrollerclipy find treepickdata defined instance TreePickLogic class populated sample data change also secondary default defined updated necessary I’ve included full explanation value change Readme I’ve also included reference image clarify grid system set Grid Information ACNH Automator v10 assumes tree spaced exactly one grid space apart x direction Options updating available future release Grid space measured xy assumes 00 space directly left topleft tree nookgrid value exactly 2 space Nook’s Cranny avoid running building accident recommendation MUST make sure inventory selector hand icon you’re inventory located first inventory space selling process work properly also recommend clearing inventory anything don’t want accidentally sell comfortable toolset It’s important Nook’s Cranny located close possible tree grid order make traveling sell fruit easier recommend separating tree grid rest town avoid possibility villager getting way solved building cliff inaccessible villager Try one space available either side tree grid help character “get back track” automation go awry you’ve entered town’s data runcontrollerclipy navigate character grid space 00 move next step Emulating controller running “picktrees” ACNH Automator relies joycontrol run you’ll need first navigate Change GripOrder menu Nintendo Switch run joycontrol begin emulating controller navigate back Animal Crossing running picktrees command start process cd main joycontrol directory run following command sudo python3 runcontrollerclipy PROCONTROLLER example running joycontrol look like might hit CTRLC twice doesn’t connect Switch within second joycontrol running character 00 facing first tree simply run following command picktrees Running picktrees look something like Based information entered town character Navigate grid harvesting fruit tree x direction reach last tree row Travel two space direction proceed next row change direction accordingly Stop picking tree threshold met amount fruit safely stored inventory Travel Nook’s Cranny sell fruit travel back next tree need picked Repeat process fruit harvested sold example process look like actionTags Python Nintendo Switch Animal Crossing Switch Animal Crossing Github
4,290
Don't make errors on error messages
When users are exploring a system, it's like they are walking a path towards their goal. Mistakes are unavoidable on any path. And when they happen, UX writers must come in as tour guides to quickly help users out so they can continue their journey. Product teams may sometimes overlook tiny error messages and let the developers decide the wording, which often sounds like a robot talking to a human. I do not agree with this approach: as you may have experienced yourself, a small but careless message such as "Invalid password" can leave you frustrated, or even make you abandon the system, if it does not tell you where you went wrong. Therefore, error messages should be taken care of, as they should help users solve the problem and move on. 3 types of errors In-line errors These are small errors that happen while users are taking an action; they can still move forward, but they are advised to make a correction before moving on. For example, when users make a mistake inputting their phone number into a field, an error message appears to notify them and helps them move on by asking them to fill in their number with 10 digits. Example of an in-line error Tips on these errors: The text can be very short and, in general, can clarify, remind, or instruct within an ongoing conversation between the person and the experience instead of stopping their actions. Detour errors These are errors that occur when the person can't get where they want to go in the way they anticipated, but they can still get there (usually when they need to complete an action before continuing). In the example below, when users make a payment, they are required to add a card first. They can still get where they are going, but they have to complete an action first. Tips on these errors: Provide instruction first, then explanation, and then the single action to take to move forward Blocking errors These are errors that occur when the way forward is blocked until the person takes an action that is outside the scope of the experience (Internet off, site under construction). Example: below is a blocking error message shown when the Internet is off. Users are required to take an action outside the scope of the app (turn on Wi-Fi and connect to the Internet). Tips on these errors: Provide instruction first, then explanation, and then the single action to take to move forward Common rules when writing error messages Besides the tips for each type of error message, there are common rules for any time UX writers work with words. Purposeful: Have a purpose in mind Error messages must have purposes aligned with users' purposes: telling users what they are experiencing, why it happened, and what they can do to move on. Don't speak the "what" without the "why" and "so what". In this case, the purpose of the user is to make a payment. And when an error happens, if no care is put into the writing, the error message could look like this. Only talk about the "what" No, we cannot let this happen. Imagine the user sees this, and then what? They would ask: why is it unsuccessful? And what could I do next? Imagine you are the user: you want to make a payment. To get to a successful payment, you need to know why it failed and what you can do to make it succeed. Then the message could look like this: The "what", with the "why" and "so what" Concise: Cut them short and meaningful Let's face it, copywriting is there to sell, but UX writing is here to guide.
People have a goal when they come to a system and they have no time to read UX texts. So make every message short and straight to the point. The easiest way to do this is to start with an imperative verb that tells users how to get past the problem and move on. Take an example: Long, and not meaningful about what to do Well, what is the standard format, and what is the required format? Users do not want a long story like this; they want something short and straight to the point, like this: Short and meaningful This message concisely tells them what to do, in the right way. Conversational: Talk to users like a human Most users are not interested in the technical details of the problem that occurred. So make people feel they are in a conversation with the experience, not with a machine. It means that messages should be in plain language, without technical terms or codes. Like these: This message contains technical jargon This message contains technical jargon This message contains technical jargon Do normal humans understand these codes? … Instead of doing this: Talk like a robot that users cannot understand Please do this: Talk like a human Clear: Remove ambiguity The right words will be the ones that the people using the experience will recognize immediately, without having to think. They must not be ambiguous about the problem and make users ask questions like "Exactly what is going on?" Like these bad messages: Windows makes it hard to realize what kind of problem users are in Why is it invalid? These messages are unhelpful because they do not tell users clearly what they are experiencing, and therefore they cannot find a way to move forward. Instead of saying this: Vague about the problem Please say this: Longer but clearer Conclusion Error messages have a great influence on user experience, and they reflect brand voice and personality. Pay attention to error messages to better communicate with users and make the experience worth their time. References Strategic writing for UX — Torrey Podmajersky
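To make these rules concrete in code, here is a minimal, hypothetical sketch of the in-line phone-number check from the first example, written so the message is purposeful, concise, conversational, and clear (the function and message text are illustrative, not from any real product):

def validate_phone(phone):
    """Return an in-line error message, or None when the input is valid."""
    digits = [ch for ch in phone if ch.isdigit()]
    if len(digits) == 10:
        return None  # valid input: say nothing and let the user move on
    # instruction first, in plain language, with no jargon or error codes
    return "Enter your phone number with 10 digits, e.g. 0912345678."

print(validate_phone("09-1234"))  # prints the instruction instead of "Invalid input"

Note that the message starts with an imperative verb and tells the user exactly how to move forward.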
https://medium.com/uxpress/dont-make-errors-on-error-messages-a132f3770bf2
['Nguyen Anh Linh Giang']
2020-02-18 14:17:02.286000+00:00
['UX Design', 'Ux Writing', 'Content Strategy', 'Design', 'UX']
Title Dont make error error messagesContent user exploring system it’s like walking path towards goal Mistakes unavoidable path happen UX writer must come tour guide quickly help continue journey Product team may sometimes overlook tiny error message let developer decide word often sound like robot talking human agree point may experience small careless message “Invalid password” could get frustrated leave system tell go wrong Therefore error message taken care help user solve problem move 3 type error Inline error small error happen user taking action still move forward advised make correction moving example user make mistake inputting phone number field error message appear notify help move asking fill number 10 digit Example inline error Tips error text short general clarify remind instruct ongoing conversation person experience instead stopping action Detour error error occur person can’t get want go way anticipated still get usually need complete action keeping going Example user make payment required add card first still get going complete action Tips error provide instruction first explanation single action take move forward Blocking error error occur way forward blocked person take action outside scope experience Internet Site construction Example one blocking error message Internet Users required take action outside scope app turn Wifi connect Internet Tips error provide instruction first explanation single action take move forward Common rule writing error message Besides tip type error message common rule time UX writer playing word Purposeful purpose mind Error message must purpose aligned users’ purpose telling user experiencing happens move Don’t speak “what” without “why” “so what” case purpose user make payment error happens focus put writing error message could look like talk “what” cannot let happen Imagine user see would ask oh unsuccessful could next case imagine user want make payment get successful payment need know unsuccessful could make successful fulfill goal message could look like “what” “why” “so what” Concise Cut short meaningful Let’s face copywriting sell UX writing guide People goal come system time read UX text make every message short straight point easiest way start imperative verb user may get problem move Take example Long meaningful Well standard format required format Users want long story like want something short straight point like Short meaningful message concisely tell right way Conversational Talk user like human user interested technical detail problem occurred make human recognize interacting word conversation experience mean message normal language technical term code Like message contains technical jargon message contains technical jargon message contains technical jargon normal human understand code … Instead Talk like robot user cannot understand Please Talk like human Clear Cut short right word one people using experience recognize immediately without think must ambiguous problem make user ask question like “Exactly going on” Like bad message Windows make hard realize kind problem user invalid message helpless tell user clearly experiencing therefore cannot find way move forward Instead saying Vague problem Please say Longer clearer Conclusion Error message great influence user experience reflecting brand voice personality Pay attention error message better communicate user make experience worth time References Strategic writing UX — Torrey PodmajerskyTags UX Design Ux Writing Content Strategy Design UX
4,291
Creative & social "hotel project" wins award
in In Fitness And In Health
https://medium.com/workersonthefield/kreatives-soziales-hotelprojekt-ausgezeichnet-dd85cbf1bdd5
['Reinhard Lanner']
2016-06-26 08:32:00.122000+00:00
['Architektur', 'Hotel', 'Design']
Title Kreatives soziales “Hotelprojekt” ausgezeichnetContent Fitness HealthTags Architektur Hotel Design
4,292
You Belong Here
You Belong Here A poem Photo by Noah Silliman on Unsplash Yesterday, your heart broke into a million pieces, and then it broke into a million more. And yet, you’re still here. Yesterday, you cried enough tears to fill all of the oceans in this great big world. And yet, you’re still here. Yesterday, you threw your hands up to the sky, and held onto the lie that you’re not strong enough to withstand all of this. And yet, you’re still here. You’re still here. You’re still here, with lungs still breathing, and eyes still blinking and tearing and seeing. You’re still here, with tender wounds and silver scars, and a heart that has had a million breaks, and yet, it continues to beat. You’re still here, with lessons you’ve learned from this life that you’re living, and a heart that can continue to keep loving. The world has tried to break your spirit and steal your light. And yet, you’re still here. You belong here.
https://medium.com/assemblage/you-belong-here-6c264128e9ad
['Megan Minutillo']
2020-12-23 14:28:16.058000+00:00
['Poetry', 'Poesía', 'Poetry On Medium', 'Encouragement', 'Self-awareness']
Title Belong HereContent Belong poem Photo Noah Silliman Unsplash Yesterday heart broke million piece broke million yet you’re still Yesterday cried enough tear fill ocean great big world yet you’re still Yesterday threw hand sky held onto lie you’re strong enough withstand yet you’re still You’re still You’re still lung still breathing eye still blinking tearing seeing You’re still tender wound silver scar heart million break yet continues beat You’re still lesson you’ve learned life you’re living heart continue keep loving world tried break spirit steal light yet you’re still belong hereTags Poetry Poesía Poetry Medium Encouragement Selfawareness
4,293
Manning Park Resort — March 6/20. Aging gracefully, and still skiing up a…
Gord represents my inspiration for continuing to enjoy this sport. He is 77 years young. I love it!! He boasts having hit the magic number for the $25 pass: age 75. He actually resides in the Yukon, but spends his winters here as the snow is better for skiing. Very cool. A fun fact about Gord: he has tried to ski as many times each year as his age. At age 65, he almost made it. He skied 63 times that season!! That is A LOT!! I also loved something important that Gord had to say, which really mimics what I stand for, and why I even blog about skiing (as it relates to mental health, and getting outside): Any time I have to stay in a big city for more than four or five days, I get what I call "Nature Deficit Disorder" [NDD] A great expression, indeed! I think I will adopt it. In that vein, Gord also commented on another important fact that goes along with being active and outdoors in the winter. He noticed that if he does not get himself out on the hill much during the winter, then in the off season "his tummy is a little larger" and his joints a little stiffer. He actually took up telemark skiing only about five years ago to increase his mobility (much to his wife's chagrin). Anyway, Gord said that following a less active winter, he then has less ability to do the summer things that he enjoys, like hiking and such. This is such an important factor for all of us to embrace, with whatever sport/hobby we can engage in during the winter months. Gord was also full of interesting historical information about Manning. For example, there is a run called Featherstone, which he said was named after Frank Featherstone. He skied here until he was 91! His wife also skied until she was 88 or 89. How spectacular is that?! Fun fact: apparently Frank was about 5 feet tall, and his wife about 4'6". I would say that makes things easier when you don't have far to fall, lol. With regards to Manning itself, I would highly recommend it! It only has two chairlifts (a brand new one last year), but covers a great deal of terrain. The landscape is very beautiful. I understand Manning also regularly gets over 10 cm of fresh snow overnight. Lots of powder to be found! Whether it is skiing, hiking, biking, running or walking, please get outside yourself! Your mental health will thank you. :)
https://medium.com/mind-your-madness/manning-park-resort-march-6-20-9c3b2a7846eb
['Jennifer Hammersmark']
2020-03-09 15:11:31.641000+00:00
['Outdoors', 'Exercise', 'Skiing', 'Vitamin D', 'Mental Health']
Title Manning Park Resort — March 620 Aging gracefully still skiing a…Content Gord represents inspiration continuing enjoy sport 77 year young love boast hit magic number 25 pas age 75 actually resides Yukon spends winter snow better skiing cool fun fact Gord tried ski number time year match age age 65 almost made skied 63 time season LOT also loved something important Gord say really mimic stand even blog skiing relates mental health getting outside time stay big city four five day get call “Nature Deficit Disorder” NDD great expression indeed think adopt vein Gord also commented another important fact go along active outdoors winter noticed get hill much winter season “his tummy little larger” joint little stiffer actually took telemark skiing five year ago increase mobility much wife’s chagrin Anyway Gord said following le active winter le ability summer thing enjoys like hiking important factor u embrace whatever sporthobby engage winter month Gord also full interesting historical information Manning example run called Featherstone said named Frank Featherstone skied 91 wife also skied 88 89 spectacular Fun fact apparently Frank 5 foot tall wife 46 would say make thing easier don’t far fall lol regard Manning would highly recommend two chairlift brand new one last year cover great deal terrain landscape beautiful understand also common Manning 10 cm fresh snow overnight regular basis Lots powder found Whether skiing hiking biking running walkingplease get outside mental health thank Tags Outdoors Exercise Skiing Vitamin Mental Health
4,294
Special benefits are the decider for workers’ happiness
Special benefits are the decider for workers' happiness The right programs stop people from walking out the door Photo by Fauxels Organizations struggle to provide the right benefits for their workers. Many leaders and managers don't understand the basic wants and needs of rank-and-file employees, which are likely different from those of the top echelon. Because managers and executives come from different professional and personal backgrounds than their workers, companies large or small can't rely on them to know what everyone in the organization desires. "Your organization probably invests a lot of time, energy and money to retain top employees," said Meghan M. Biro, analyst, brand strategist, podcaster and TalentCulture chief executive officer. "Yet, at least occasionally, you still wind up losing them to competitors." She wondered about how to put an end to that unproductive cycle. "What can you offer your employees that means enough for them to stay?" Biro said. "As an employer, what's your real value proposition? A beautiful office? No, not when we're working remotely. Free gym memberships or great retreats? Soon, hopefully, but not now." She contends that to retain top talent in today's work environment, it's not about perks. "Retention is about what employees really need," Biro said, turning to Chris Wakely, executive vice president of global sales at Benify, which specializes in employee benefits around the world. His company compiled The Benefits and Engagement Report: A European Employer's Guide to Employee Experience for the 2020s. The survey was conducted early in 2020, near the beginning of the global pandemic. "Despite all the craziness, about 5,000 people took the survey," Wakely said. "We asked them what they think about their employer. What benefits, other than salary, do they want? "It was a really interesting time to be asking these questions as people dug into their new reality," he said. "We really got an understanding of how employees think and act in the middle of change." Benefits rule One takeaway, according to Wakely: Nine out of 10 employees aged under 30 say they would consider changing employers to receive better employee benefits. The revolving door of worker turnover is real. "A huge reason organizations struggle with providing the right benefits is that there's a misconception," Biro said. "What benefits employees is far beyond simply providing health and dental insurance. There's so much more that goes into it." Wakely breaks the problem down into two main reasons. "Benefits aren't a one-size-fits-all model," he said. "Each generation has its own needs and preferences. A company's employee benefits offering needs to be personalized. One way is through offering a flexible benefits plan. "Human resources professionals might not have access to insights about their employees' needs and wants," Wakely said. "The guesswork can be removed through a global dashboard where administrators get an overview of benefits in use along with spending and supplier costs." Employers must adapt as circumstances change. "When it comes to building a benefits strategy, perhaps the most important thing of all is flexibility — allowing employees to customize and personalize their benefits based on their needs," Wakely said. "There are several ways to offer flexibility. "You can remove assumptions and find out what employees really want," he said, citing one of his company's related posts. Employees need to understand what's in it for them when it comes to benefits.
That means engaging education on their level of understanding. The common person is not a licensed insurance agent well versed in arcane legal language. Workers need translators who care. Well-chosen words matter “Evaluate your benefits,” Wakely said. “Find out what your employees think about your offer and which benefits are working. Align your benefits to other organizational goals. For example, if your goal includes promoting more remote working, offer more digital benefits. “The greatest benefits in the world aren’t worth anything if they aren’t communicated properly,” he said. “Thinking outside the box is important along with giving employees the flexibility to choose.” Conventional approaches will stymie creativity. “Benefits can include everything beyond compensation,” Biro said. “There are so many ways to provide them that meet employees’ real-life and working needs. “Small to medium-sized organizations should consider working with an outside service provider to improve the benefits experience,” she said. “It’s not just the what, it’s the how, the where, the when, too.” Biro questions how well employers perceive their workers’ benefits experience. “I often say this: Before you embark on changes, find out,” she said. “Take the pulse of your workforce.” A total rewards experience is a valuable hiring and retention tool. Management should not make employees cherry pick happiness. One benefit that addresses a particular need will not satisfy those with lingering wants in other areas. A bad overall experience will send people out the door to greener pastures. “So much influences an employee’s decision to share an experience — which affects the employer brand for prospective hires,” Biro said. “How they’re treated is clearly a major factor. “Consider the isolation, the disruptions, the noise, the pressures of working from home — even for those who love it,” she said. “Now balance a moment of happiness, a gesture of recognition against that. It’s a big deal.” Full view brings clarity Employees’ satisfaction rests with having the big picture about their benefits. “When employees only see part of their compensation, other important benefits such as insurance, pension and add-ons are overlooked,” Wakely said. “This undervalues the employee’s total reward package and wastes money on unused benefits from the employer’s perspective. “In today’s competitive job market where companies compete to attract and retain talent, this can make the difference of a candidate choosing one employer over the other,” he said. “Knowing what your employees want is essential. Give them the flexibility to choose their compensation package.” He referred to Benify’s benefit and engagement report from a survey of 5,000 employees to back up his recommendations. About The Author Jim Katzaman is a manager at Largo Financial Services and worked in public affairs for the Air Force and federal government. You can connect with him on Twitter, Facebook and LinkedIn.
https://medium.com/datadriveninvestor/special-benefits-are-the-decider-for-workers-happiness-bdc4b2207410
['Jim Katzaman - Get Out Of Debt']
2020-10-26 10:27:09.684000+00:00
['Entrepreneurship', 'Management', 'Remote Working', 'Benefits', 'Recruiting']
Title Special benefit decider workers’ happinessContent Special benefit decider workers’ happiness right program stop people walking door Photo Fauxels Organizations struggle provide right benefit worker Many leader manager don’t understand basic want need rankandfile employee likely different top echelon Coming different professional personal background company large small can’t rely manager executive know everyone organization desire “Your organization probably invests lot time energy money retain top employees” said Meghan Biro analyst brand strategist podcaster TalentCulture chief executive officer “Yet least occasionally still wind losing competitors” wondered put end unproductive cycle “What offer employee mean enough stay” Biro said “As employer what’s real value proposition beautiful office we’re working remotely Free gym membership great retreat Soon hopefully now” contends retain top talent today’s work environment it’s perk “Retention employee really need” Biro said turning Chris Wakely executive vice president global sale Benify specializes employee benefit around world company compiled Benefits Engagement Report European Employer’s Guide Employee Experience 2020s survey conducted early 2020 near beginning global pandemic “Despite craziness 5000 people took survey” Wakely said “We asked think employer benefit salary want “It really interesting time asking question people dug new reality” said “We really got understanding employee think act middle change” Benefits rule One takeaway according Wakely Nine 10 employee aged 30 say would consider changing employer receive better employee benefit revolving door worker turnover real “A huge reason organization struggle providing right benefit there’s misconception” Biro said “What benefit employee far beyond simply providing health dental insurance There’s much go it” Wakely break problem two main reason “Benefits aren’t onesizefitsall model” said “Each generation need preference company’s employee benefit offering need personalized One way offering flexible benefit plan “Human resource professional might access insight employees’ need wants” Wakely said “The guesswork removed global dashboard administrator get overview benefit use along spending supplier costs” Employers must adapt circumstance change “When come building benefit strategy perhaps important thing flexibility — allowing employee customize personalize benefit based needs” Wakely said “There several way offer flexibility “You remove assumption find employee really want” said citing one company’s related post Employees need understand what’s come benefit mean engaging education level understanding common person licensed insurance agent well versed arcane legal language Workers need translator care Wellchosen word matter “Evaluate benefits” Wakely said “Find employee think offer benefit working Align benefit organizational goal example goal includes promoting remote working offer digital benefit “The greatest benefit world aren’t worth anything aren’t communicated properly” said “Thinking outside box important along giving employee flexibility choose” Conventional approach stymie creativity “Benefits include everything beyond compensation” Biro said “There many way provide meet employees’ reallife working need “Small mediumsized organization consider working outside service provider improve benefit experience” said “It’s it’s too” Biro question well employer perceive workers’ benefit experience “I often say embark change find out” said “Take pulse workforce” total reward 
experience valuable hiring retention tool Management make employee cherry pick happiness One benefit address particular need satisfy lingering want area bad overall experience send people door greener pasture “So much influence employee’s decision share experience — affect employer brand prospective hires” Biro said “How they’re treated clearly major factor “Consider isolation disruption noise pressure working home — even love it” said “Now balance moment happiness gesture recognition It’s big deal” Full view brings clarity Employees’ satisfaction rest big picture benefit “When employee see part compensation important benefit insurance pension addons overlooked” Wakely said “This undervalues employee’s total reward package waste money unused benefit employer’s perspective “In today’s competitive job market company compete attract retain talent make difference candidate choosing one employer other” said “Knowing employee want essential Give flexibility choose compensation package” referred Benify’s benefit engagement report survey 5000 employee back recommendation Author Jim Katzaman manager Largo Financial Services worked public affair Air Force federal government connect Twitter Facebook LinkedInTags Entrepreneurship Management Remote Working Benefits Recruiting
4,295
Text Classification with NLP: Tf-Idf vs Word2Vec vs BERT
Setup First of all, I need to import the following libraries: ## for data import json import pandas as pd import numpy as np ## for plotting import matplotlib.pyplot as plt import seaborn as sns ## for processing import re import nltk ## for bag-of-words from sklearn import feature_extraction, feature_selection, model_selection, naive_bayes, pipeline, manifold, preprocessing, metrics ## for explainer from lime import lime_text ## for word embedding import gensim import gensim.downloader as gensim_api ## for deep learning from tensorflow.keras import models, layers, preprocessing as kprocessing from tensorflow.keras import backend as K ## for bert language model import transformers The dataset is contained in a json file, so I will first read it into a list of dictionaries with json and then transform it into a pandas Dataframe. lst_dics = [] with open('data.json', mode='r', errors='ignore') as json_file: for dic in json_file: lst_dics.append( json.loads(dic) ) ## print the first one lst_dics[0] The original dataset contains over 30 categories, but for the purposes of this tutorial, I will work with a subset of 3: Entertainment, Politics, and Tech. ## create dtf dtf = pd.DataFrame(lst_dics) ## filter categories dtf = dtf[ dtf["category"].isin(['ENTERTAINMENT','POLITICS','TECH']) ][["category","headline"]] ## rename columns dtf = dtf.rename(columns={"category":"y", "headline":"text"}) ## print 5 random rows dtf.sample(5) In order to understand the composition of the dataset, I am going to look into the univariate distribution of the target by showing label frequencies with a bar plot. fig, ax = plt.subplots() fig.suptitle("y", fontsize=12) dtf["y"].reset_index().groupby("y").count().sort_values(by= "index").plot(kind="barh", legend=False, ax=ax).grid(axis='x') plt.show() The dataset is imbalanced: the proportion of Tech news is really small compared to the others, which will make it rather tough for models to recognize Tech news. Before explaining and building the models, I am going to give an example of preprocessing by cleaning text, removing stop words, and applying lemmatization. I will write a function and apply it to the whole data set. ''' Preprocess a string. :parameter :param text: string - name of column containing text :param lst_stopwords: list - list of stopwords to remove :param flg_stemm: bool - whether stemming is to be applied :param flg_lemm: bool - whether lemmatisation is to be applied :return cleaned text ''' def utils_preprocess_text(text, flg_stemm=False, flg_lemm=True, lst_stopwords=None): ## clean (convert to lowercase, remove punctuation and special characters, then strip) text = re.sub(r'[^\w\s]', '', str(text).lower().strip()) ## Tokenize (convert from string to list) lst_text = text.split() ## remove Stopwords if lst_stopwords is not None: lst_text = [word for word in lst_text if word not in lst_stopwords] ## Stemming (remove -ing, -ly, ...) if flg_stemm == True: ps = nltk.stem.porter.PorterStemmer() lst_text = [ps.stem(word) for word in lst_text] ## Lemmatisation (convert the word into its root word) if flg_lemm == True: lem = nltk.stem.wordnet.WordNetLemmatizer() lst_text = [lem.lemmatize(word) for word in lst_text] ## back to string from list text = " ".join(lst_text) return text That function removes a set of words from the corpus if one is given. I can create a list of generic stop words for the English vocabulary with nltk (we could edit this list by adding or removing words).
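Before building the stop-word list, a one-off call shows what the function does (the headline below is a made-up example, and lemmatization assumes nltk's wordnet data has been downloaded, e.g. with nltk.download('wordnet')):

## quick sanity check on a single made-up headline, without stop-word removal
print(utils_preprocess_text("Apple's New iPhones Are Selling Fast!",
                            flg_stemm=False, flg_lemm=True))
## prints roughly: "apple new iphones are selling fast"
## (the exact lemmas depend on the nltk data installed)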
lst_stopwords = nltk.corpus.stopwords.words("english") lst_stopwords Now I shall apply the function I wrote on the whole dataset and store the result in a new column named "text_clean" so that you can choose to work with the raw corpus or the preprocessed text. dtf["text_clean"] = dtf["text"].apply(lambda x: utils_preprocess_text(x, flg_stemm=False, flg_lemm=True, lst_stopwords=lst_stopwords)) dtf.head() If you are interested in a deeper text analysis and preprocessing, you can check this article. With this in mind, I am going to partition the dataset into a training set (70%) and a test set (30%) in order to evaluate the models' performance. ## split dataset dtf_train, dtf_test = model_selection.train_test_split(dtf, test_size=0.3) ## get target y_train = dtf_train["y"].values y_test = dtf_test["y"].values Let's get started, shall we? Bag-of-Words The Bag-of-Words model is simple: it builds a vocabulary from a corpus of documents and counts how many times the words appear in each document. To put it another way, each word in the vocabulary becomes a feature and a document is represented by a vector with the same length as the vocabulary (a "bag of words"). For instance, let's take 3 sentences and represent them with this approach: Feature matrix shape: Number of documents x Length of vocabulary As you can imagine, this approach causes a significant dimensionality problem: the more documents you have, the larger the vocabulary, so the feature matrix will be a huge sparse matrix. Therefore, the Bag-of-Words model is usually preceded by important preprocessing (word cleaning, stop-word removal, stemming/lemmatization) aimed at reducing the dimensionality problem. Term frequency is not necessarily the best representation for text. In fact, you can find in the corpus common words with the highest frequency but little predictive power over the target variable. To address this problem there is an advanced variant of the Bag-of-Words that, instead of simple counting, uses the term frequency–inverse document frequency (or Tf–Idf). Basically, the value of a word increases proportionally to its count, but it is inversely proportional to the frequency of the word in the corpus. Let's start with the Feature Engineering, the process of creating features by extracting information from the data. I am going to use the Tf-Idf vectorizer with a limit of 10,000 words (so the length of my vocabulary will be 10k), capturing unigrams (e.g. "new" and "york") and bigrams (e.g. "new york"). I will provide the code for the classic count vectorizer as well: ## Count (classic BoW) vectorizer = feature_extraction.text.CountVectorizer(max_features=10000, ngram_range=(1,2)) ## Tf-Idf (advanced variant of BoW) vectorizer = feature_extraction.text.TfidfVectorizer(max_features=10000, ngram_range=(1,2)) Now I will use the vectorizer on the preprocessed corpus of the train set to extract a vocabulary and create the feature matrix.
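As a quick aside before the real fit, running the same kind of vectorizer on 3 toy sentences (an illustrative example, not data from the dataset) shows the shape of what it produces:

## toy illustration: 3 tiny documents
toy_corpus = ["i like this article", "i like this blog", "this is different"]
toy_vectorizer = feature_extraction.text.TfidfVectorizer()
toy_X = toy_vectorizer.fit_transform(toy_corpus)
print(toy_X.shape)                        ## (3, size of the toy vocabulary)
print(toy_vectorizer.get_feature_names()) ## the words that became features

Each row is a document and each column a vocabulary word, exactly the structure described above, just thousands of times smaller. With that picture in mind, back to the real corpus.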
corpus = dtf_train["text_clean"] vectorizer.fit(corpus) X_train = vectorizer.transform(corpus) dic_vocabulary = vectorizer.vocabulary_ The feature matrix X_train has a shape of 34,265 (Number of documents in training) x 10,000 (Length of vocabulary) and it's pretty sparse: sns.heatmap(X_train.todense()[:,np.random.randint(0,X_train.shape[1],100)]==0, vmin=0, vmax=1, cbar=False).set_title('Sparse Matrix Sample') Random sample from the feature matrix (non-zero values in black) In order to know the position of a certain word, we can look it up in the vocabulary: word = "new york" dic_vocabulary[word] If the word exists in the vocabulary, this command prints a number N, meaning that the Nth feature of the matrix is that word. In order to drop some columns and reduce the matrix dimensionality, we can carry out some Feature Selection, the process of selecting a subset of relevant variables. I will proceed as follows: treat each category as binary (for example, the "Tech" category is 1 for the Tech news and 0 for the others); perform a Chi-Square test to determine whether a feature and the (binary) target are independent; keep only the features with a certain p-value from the Chi-Square test. y = dtf_train["y"] X_names = vectorizer.get_feature_names() p_value_limit = 0.95 dtf_features = pd.DataFrame() for cat in np.unique(y): chi2, p = feature_selection.chi2(X_train, y==cat) dtf_features = dtf_features.append(pd.DataFrame( {"feature":X_names, "score":1-p, "y":cat})) dtf_features = dtf_features.sort_values(["y","score"], ascending=[True,False]) dtf_features = dtf_features[dtf_features["score"]>p_value_limit] X_names = dtf_features["feature"].unique().tolist() I reduced the number of features from 10,000 to 3,152 by keeping the most statistically relevant ones. Let's print some: for cat in np.unique(y): print("# {}:".format(cat)) print(" . selected features:", len(dtf_features[dtf_features["y"]==cat])) print(" . top features:", ",".join( dtf_features[dtf_features["y"]==cat]["feature"].values[:10])) print(" ") We can refit the vectorizer on the corpus by giving this new set of words as input. That will produce a smaller feature matrix and a shorter vocabulary. vectorizer = feature_extraction.text.TfidfVectorizer(vocabulary=X_names) vectorizer.fit(corpus) X_train = vectorizer.transform(corpus) dic_vocabulary = vectorizer.vocabulary_ The new feature matrix X_train has a shape of 34,265 (Number of documents in training) x 3,152 (Length of the given vocabulary). Let's see if the matrix is less sparse: Random sample from the new feature matrix (non-zero values in black) It's time to train a machine learning model and test it. I recommend using a Naive Bayes algorithm: a probabilistic classifier that makes use of Bayes' Theorem, a rule that uses probability to make predictions based on prior knowledge of conditions that might be related. This algorithm is the most suitable for such a large dataset as it considers each feature independently, calculates the probability of each category, and then predicts the category with the highest probability. classifier = naive_bayes.MultinomialNB() I'm going to train this classifier on the feature matrix and then test it on the transformed test set. To that end, I need to build a scikit-learn pipeline: a sequential application of a list of transformations and a final estimator. Putting the Tf-Idf vectorizer and the Naive Bayes classifier in a pipeline allows us to transform and predict test data in just one step.
## pipeline model = pipeline.Pipeline([("vectorizer", vectorizer), ("classifier", classifier)]) ## train classifier model["classifier"].fit(X_train, y_train) ## test X_test = dtf_test["text_clean"].values predicted = model.predict(X_test) predicted_prob = model.predict_proba(X_test) We can now evaluate the performance of the Bag-of-Words model. I will use the following metrics: Accuracy: the fraction of predictions the model got right. Confusion Matrix: a summary table that breaks down the number of correct and incorrect predictions by each class. ROC: a plot that illustrates the true positive rate against the false positive rate at various threshold settings. The area under the curve (AUC) indicates the probability that the classifier will rank a randomly chosen positive observation higher than a randomly chosen negative one. Precision: the fraction of relevant instances among the retrieved instances. Recall: the fraction of the total amount of relevant instances that were actually retrieved. classes = np.unique(y_test) y_test_array = pd.get_dummies(y_test, drop_first=False).values ## Accuracy, Precision, Recall accuracy = metrics.accuracy_score(y_test, predicted) auc = metrics.roc_auc_score(y_test, predicted_prob, multi_class="ovr") print("Accuracy:", round(accuracy,2)) print("Auc:", round(auc,2)) print("Detail:") print(metrics.classification_report(y_test, predicted)) ## Plot confusion matrix cm = metrics.confusion_matrix(y_test, predicted) fig, ax = plt.subplots() sns.heatmap(cm, annot=True, fmt='d', ax=ax, cmap=plt.cm.Blues, cbar=False) ax.set(xlabel="Pred", ylabel="True", xticklabels=classes, yticklabels=classes, title="Confusion matrix") plt.yticks(rotation=0) fig, ax = plt.subplots(nrows=1, ncols=2) ## Plot roc for i in range(len(classes)): fpr, tpr, thresholds = metrics.roc_curve(y_test_array[:,i], predicted_prob[:,i]) ax[0].plot(fpr, tpr, lw=3, label='{0} (area={1:0.2f})'.format(classes[i], metrics.auc(fpr, tpr)) ) ax[0].plot([0,1], [0,1], color='navy', lw=3, linestyle='--') ax[0].set(xlim=[-0.05,1.0], ylim=[0.0,1.05], xlabel='False Positive Rate', ylabel="True Positive Rate (Recall)", title="Receiver operating characteristic") ax[0].legend(loc="lower right") ax[0].grid(True) ## Plot precision-recall curve for i in range(len(classes)): precision, recall, thresholds = metrics.precision_recall_curve( y_test_array[:,i], predicted_prob[:,i]) ax[1].plot(recall, precision, lw=3, label='{0} (area={1:0.2f})'.format(classes[i], metrics.auc(recall, precision)) ) ax[1].set(xlim=[0.0,1.05], ylim=[0.0,1.05], xlabel='Recall', ylabel="Precision", title="Precision-Recall curve") ax[1].legend(loc="best") ax[1].grid(True) plt.show() The BoW model got 85% of the test set right (Accuracy is 0.85), but struggles to recognize Tech news (only 252 predicted correctly). Let's try to understand why the model classifies news into a certain category and assess the explainability of these predictions. The lime package can help us build an explainer. To give an illustration, I will take a random observation from the test set and see what the model predicts and why.
## select observation i = 0 txt_instance = dtf_test["text"].iloc[i] ## check true value and predicted value print("True:", y_test[i], "--> Pred:", predicted[i], "| Prob:", round(np.max(predicted_prob[i]),2)) ## show explanation explainer = lime_text.LimeTextExplainer(class_names= np.unique(y_train)) explained = explainer.explain_instance(txt_instance, model.predict_proba, num_features=3) explained.show_in_notebook(text=txt_instance, predict_proba=False) That makes sense: the words "Clinton" and "GOP" pointed the model in the right direction (Politics news) even if the word "Stage" is more common among Entertainment news. Word Embedding Word Embedding is the collective name for feature learning techniques where words from the vocabulary are mapped to vectors of real numbers. These vectors are calculated from the probability distribution for each word appearing before or after another. To put it another way, words of the same context usually appear together in the corpus, so they will be close in the vector space as well. For instance, let's take the 3 sentences from the previous example: Words embedded in 2D vector space In this tutorial, I'm going to use the first model of this family: Google's Word2Vec (2013). Other popular Word Embedding models are Stanford's GloVe (2014) and Facebook's FastText (2016). Word2Vec produces a vector space, typically of several hundred dimensions, with each unique word in the corpus such that words that share common contexts in the corpus are located close to one another in the space. That can be done using 2 different approaches: starting from a single word to predict its context (Skip-gram) or starting from the context to predict a word (Continuous Bag-of-Words). In Python, you can load a pre-trained Word Embedding model from gensim-data like this: nlp = gensim_api.load("word2vec-google-news-300") Instead of using a pre-trained model, I am going to fit my own Word2Vec on the training data corpus with gensim. Before fitting the model, the corpus needs to be transformed into a list of lists of n-grams. In this particular case, I'll try to capture unigrams ("york"), bigrams ("new york"), and trigrams ("new york city"). corpus = dtf_train["text_clean"] ## create list of lists of unigrams lst_corpus = [] for string in corpus: lst_words = string.split() lst_grams = [" ".join(lst_words[i:i+1]) for i in range(0, len(lst_words), 1)] lst_corpus.append(lst_grams) ## detect bigrams and trigrams bigrams_detector = gensim.models.phrases.Phrases(lst_corpus, delimiter=" ".encode(), min_count=5, threshold=10) bigrams_detector = gensim.models.phrases.Phraser(bigrams_detector) trigrams_detector = gensim.models.phrases.Phrases(bigrams_detector[lst_corpus], delimiter=" ".encode(), min_count=5, threshold=10) trigrams_detector = gensim.models.phrases.Phraser(trigrams_detector) When fitting the Word2Vec, you need to specify: the target size of the word vectors, I'll use 300; the window, or the maximum distance between the current and predicted word within a sentence, I'll use the mean length of text in the corpus; the training algorithm, I'll use skip-grams (sg=1) as in general it has better results. ## fit w2v nlp = gensim.models.word2vec.Word2Vec(lst_corpus, size=300, window=8, min_count=1, sg=1, iter=30) We have our embedding model, so we can select any word from the corpus and transform it into a vector.
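A quick way to sanity-check the fit is to look at nearest neighbors in the vector space (the neighbors and similarity scores vary from run to run, so the example output below is purely illustrative):

## illustrative check: nearest neighbors of a word in the embedding space
print(nlp.most_similar("data", topn=3))
## e.g. [('privacy', 0.65), ('users', 0.62), ...] -- actual output depends on training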
word = "data" nlp[word].shape We can even use it to visualize a word and its context into a smaller dimensional space (2D or 3D) by applying any dimensionality reduction algorithm (i.e. TSNE). word = "data" fig = plt.figure() ## word embedding tot_words = [word] + [tupla[0] for tupla in nlp.most_similar(word, topn=20)] X = nlp[tot_words] ## pca to reduce dimensionality from 300 to 3 pca = manifold.TSNE(perplexity=40, n_components=3, init='pca') X = pca.fit_transform(X) ## create dtf dtf_ = pd.DataFrame(X, index=tot_words, columns=["x","y","z"]) dtf_["input"] = 0 dtf_["input"].iloc[0:1] = 1 ## plot 3d from mpl_toolkits.mplot3d import Axes3D ax = fig.add_subplot(111, projection='3d') ax.scatter(dtf_[dtf_["input"]==0]['x'], dtf_[dtf_["input"]==0]['y'], dtf_[dtf_["input"]==0]['z'], c="black") ax.scatter(dtf_[dtf_["input"]==1]['x'], dtf_[dtf_["input"]==1]['y'], dtf_[dtf_["input"]==1]['z'], c="red") ax.set(xlabel=None, ylabel=None, zlabel=None, xticklabels=[], yticklabels=[], zticklabels=[]) for label, row in dtf_[["x","y","z"]].iterrows(): x, y, z = row ax.text(x, y, z, s=label) That’s pretty cool and all, but how can the word embedding be useful to predict the news category? Well, the word vectors can be used in a neural network as weights. This is how: First, transform the corpus into padded sequences of word ids to get a feature matrix. Then, create an embedding matrix so that the vector of the word with id N is located at the Nth row. Finally, build a neural network with an embedding layer that weighs every word in the sequences with the corresponding vector. Let’s start with the Feature Engineering by transforming the same preprocessed corpus (list of lists of n-grams) given to the Word2Vec into a list of sequences using tensorflow/keras: ## tokenize text tokenizer = kprocessing.text.Tokenizer(lower=True, split=' ', oov_token="NaN", filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t ') tokenizer.fit_on_texts(lst_corpus) dic_vocabulary = tokenizer.word_index ## create sequence lst_text2seq= tokenizer.texts_to_sequences(lst_corpus) ## padding sequence X_train = kprocessing.sequence.pad_sequences(lst_text2seq, maxlen=15, padding="post", truncating="post") The feature matrix X_train has a shape of 34,265 x 15 (Number of sequences x Sequences max length). Let’s visualize it: sns.heatmap(X_train==0, vmin=0, vmax=1, cbar=False) plt.show() Feature matrix (34,265 x 15) Every text in the corpus is now an id sequence with length 15. For instance, if a text had 10 tokens in it, then the sequence is composed of 10 ids + 5 0s, which is the padding element (while the id for word not in the vocabulary is 1). Let’s print how a text from the train set has been transformed into a sequence with the padding and the vocabulary. i = 0 ## list of text: ["I like this", ...] len_txt = len(dtf_train["text_clean"].iloc[i].split()) print("from: ", dtf_train["text_clean"].iloc[i], "| len:", len_txt) ## sequence of token ids: [[1, 2, 3], ...] len_tokens = len(X_train[i]) print("to: ", X_train[i], "| len:", len(X_train[i])) ## vocabulary: {"I":1, "like":2, "this":3, ...} print("check: ", dtf_train["text_clean"].iloc[i].split()[0], " -- idx in vocabulary -->", dic_vocabulary[dtf_train["text_clean"].iloc[i].split()[0]]) print("vocabulary: ", dict(list(dic_vocabulary.items())[0:5]), "... 
Before moving on, don't forget to do the same feature engineering on the test set as well:

corpus = dtf_test["text_clean"]

## create list of n-grams
lst_corpus = []
for string in corpus:
    lst_words = string.split()
    lst_grams = [" ".join(lst_words[i:i+1]) for i in range(0, len(lst_words), 1)]
    lst_corpus.append(lst_grams)

## detect common bigrams and trigrams using the fitted detectors
lst_corpus = list(bigrams_detector[lst_corpus])
lst_corpus = list(trigrams_detector[lst_corpus])

## text to sequence with the fitted tokenizer
lst_text2seq = tokenizer.texts_to_sequences(lst_corpus)

## padding sequence
X_test = kprocessing.sequence.pad_sequences(lst_text2seq, maxlen=15, padding="post", truncating="post")

[Figure: X_test (14,697 x 15)]

We've got our X_train and X_test, so now we need to create the matrix of embeddings that will be used as a weight matrix in the neural network classifier.

## start the matrix (length of vocabulary x vector size) with all 0s
embeddings = np.zeros((len(dic_vocabulary)+1, 300))
for word, idx in dic_vocabulary.items():
    ## update the row with vector
    try:
        embeddings[idx] = nlp[word]
    ## if word not in model then skip and the row stays all 0s
    except:
        pass

That code generates a matrix of shape 22,338 x 300 (Length of vocabulary extracted from the corpus x Vector size). It can be navigated by word id, which can be obtained from the vocabulary.

word = "data"
print("dic[word]:", dic_vocabulary[word], "|idx")
print("embeddings[idx]:", embeddings[dic_vocabulary[word]].shape, "|vector")
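Since words unknown to the Word2Vec model keep an all-0s row, it can be worth checking how many of those there are before training. A quick sketch on the matrix built above:

## count vocabulary entries that got no vector (rows left at all 0s)
n_missing = int((np.abs(embeddings).sum(axis=1) == 0).sum())
print("rows without a vector:", n_missing, "/", embeddings.shape[0])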
It's finally time to build the deep learning model. I'm going to use the embedding matrix in the first Embedding layer of the neural network that I will build and train to classify the news. Each id in the input sequence will be used as the index to access the embedding matrix. The output of this Embedding layer will be a 2D matrix with a word vector for each word id in the input sequence (Sequence length x Vector size). Let's use the sentence "I like this article" as an example:

My neural network shall be structured as follows:

- an Embedding layer that takes the sequences as input and the word vectors as weights, just as described before;
- a simple Attention layer that won't affect the predictions but captures the weights of each instance and allows us to build a nice explainer (it isn't necessary for the predictions, just for the explainability, so you can skip it). The Attention mechanism was presented in this paper (2014) as a solution to the problem that sequence models (i.e. LSTM) have in understanding which parts of a long text are actually relevant;
- two layers of Bidirectional LSTM to model the order of words in a sequence in both directions;
- two final Dense layers that will predict the probability of each news category.

## code attention layer
def attention_layer(inputs, neurons):
    x = layers.Permute((2,1))(inputs)
    x = layers.Dense(neurons, activation="softmax")(x)
    x = layers.Permute((2,1), name="attention")(x)
    x = layers.multiply([inputs, x])
    return x

## input
x_in = layers.Input(shape=(15,))

## embedding
x = layers.Embedding(input_dim=embeddings.shape[0], output_dim=embeddings.shape[1], weights=[embeddings], input_length=15, trainable=False)(x_in)

## apply attention
x = attention_layer(x, neurons=15)

## 2 layers of bidirectional lstm
x = layers.Bidirectional(layers.LSTM(units=15, dropout=0.2, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(units=15, dropout=0.2))(x)

## final dense layers
x = layers.Dense(64, activation='relu')(x)
y_out = layers.Dense(3, activation='softmax')(x)

## compile
model = models.Model(x_in, y_out)
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

Now we can train the model and check its performance on a subset of the training set used for validation, before testing it on the actual test set.

## encode y
dic_y_mapping = {n:label for n,label in enumerate(np.unique(y_train))}
inverse_dic = {v:k for k,v in dic_y_mapping.items()}
y_train = np.array([inverse_dic[y] for y in y_train])

## train
training = model.fit(x=X_train, y=y_train, batch_size=256, epochs=10, shuffle=True, verbose=0, validation_split=0.3)

## plot loss and accuracy
metrics = [k for k in training.history.keys() if ("loss" not in k) and ("val" not in k)]
fig, ax = plt.subplots(nrows=1, ncols=2, sharey=True)
ax[0].set(title="Training")
ax11 = ax[0].twinx()
ax[0].plot(training.history['loss'], color='black')
ax[0].set_xlabel('Epochs')
ax[0].set_ylabel('Loss', color='black')
for metric in metrics:
    ax11.plot(training.history[metric], label=metric)
ax11.set_ylabel("Score", color='steelblue')
ax11.legend()
ax[1].set(title="Validation")
ax22 = ax[1].twinx()
ax[1].plot(training.history['val_loss'], color='black')
ax[1].set_xlabel('Epochs')
ax[1].set_ylabel('Loss', color='black')
for metric in metrics:
    ax22.plot(training.history['val_'+metric], label=metric)
ax22.set_ylabel("Score", color="steelblue")
plt.show()

Nice! Within a few epochs, the accuracy reached 0.89.
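If the validation curves start drifting away from the training curves, one optional refinement (not part of the original pipeline) is to stop training early and keep the best weights. A minimal sketch with a standard Keras callback, reusing the same fit arguments as above:

## optional: early stopping on validation loss
from tensorflow.keras import callbacks
es = callbacks.EarlyStopping(monitor="val_loss", patience=2, restore_best_weights=True)
training = model.fit(x=X_train, y=y_train, batch_size=256, epochs=10, shuffle=True, verbose=0, validation_split=0.3, callbacks=[es])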
In order to complete the evaluation of the Word Embedding model, let's predict the test set and compare the same metrics used before (the code for the metrics is the same as before).

## test
predicted_prob = model.predict(X_test)
predicted = [dic_y_mapping[np.argmax(pred)] for pred in predicted_prob]

The model performs as well as the previous one; in fact, it also struggles to classify Tech news. But is it explainable as well? Yes, it is! I put an Attention layer in the neural network to extract the weights of each word and understand how much those contributed to classifying an instance. So I'll try to use the Attention weights to build an explainer (similar to the one seen in the previous section):

## select observation
i = 0
txt_instance = dtf_test["text"].iloc[i]

## check true value and predicted value
print("True:", y_test[i], "--> Pred:", predicted[i], "| Prob:", round(np.max(predicted_prob[i]),2))

## show explanation
### 1. preprocess input
lst_corpus = []
for string in [re.sub(r'[^\w\s]','', txt_instance.lower().strip())]:
    lst_words = string.split()
    lst_grams = [" ".join(lst_words[i:i+1]) for i in range(0, len(lst_words), 1)]
    lst_corpus.append(lst_grams)
lst_corpus = list(bigrams_detector[lst_corpus])
lst_corpus = list(trigrams_detector[lst_corpus])
X_instance = kprocessing.sequence.pad_sequences(tokenizer.texts_to_sequences(lst_corpus), maxlen=15, padding="post", truncating="post")

### 2. get attention weights
layer = [layer for layer in model.layers if "attention" in layer.name][0]
func = K.function([model.input], [layer.output])
weights = func(X_instance)[0]
weights = np.mean(weights, axis=2).flatten()

### 3. rescale weights, remove null vector, map word-weight
weights = preprocessing.MinMaxScaler(feature_range=(0,1)).fit_transform(np.array(weights).reshape(-1,1)).reshape(-1)
weights = [weights[n] for n,idx in enumerate(X_instance[0]) if idx != 0]
dic_word_weight = {word:weights[n] for n,word in enumerate(lst_corpus[0]) if word in tokenizer.word_index.keys()}

### 4. barplot
top = 5  ## how many words to show (this variable was left undefined in the original)
if len(dic_word_weight) > 0:
    dtf = pd.DataFrame.from_dict(dic_word_weight, orient='index', columns=["score"])
    dtf.sort_values(by="score", ascending=True).tail(top).plot(kind="barh", legend=False).grid(axis='x')
    plt.show()
else:
    print("--- No word recognized ---")

### 5. produce html visualization
text = []
for word in lst_corpus[0]:
    weight = dic_word_weight.get(word)
    if weight is not None:
        text.append('<b><span style="background-color:rgba(100,149,237,' + str(weight) + ');">' + word + '</span></b>')
    else:
        text.append(word)
text = ' '.join(text)

### 6. visualize on notebook
print("\033[1m"+"Text with highlighted words")
from IPython.core.display import display, HTML
display(HTML(text))

Just like before, the words "clinton" and "gop" activated the neurons of the model, but this time "high" and "benghazi" have also been considered slightly relevant for the prediction.

Language Models

Language Models, or Contextualized/Dynamic Word Embeddings, overcome the biggest limitation of the classic Word Embedding approach: polysemy disambiguation. In the classic approach, a word with different meanings (e.g. "bank" or "stick") is identified by just one vector. One of the first popular language models was ELMO (2018), which doesn't apply a fixed embedding but, using a bidirectional LSTM, looks at the entire sentence and then assigns an embedding to each word.

Enter Transformers: a new modeling technique presented by Google's paper Attention is All You Need (2017), in which it was demonstrated that sequence models (like LSTM) can be totally replaced by Attention mechanisms, even obtaining better performances.

Google's BERT (Bidirectional Encoder Representations from Transformers, 2018) combines ELMO context embedding and several Transformers, plus it's bidirectional (which was a big novelty for Transformers). The vector BERT assigns to a word is a function of the entire sentence; therefore, a word can have different vectors based on its context. Let's try it using transformers:

txt = "bank river"

## bert tokenizer
tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)

## bert model
nlp = transformers.TFBertModel.from_pretrained('bert-base-uncased')

## return hidden layer with embeddings
input_ids = np.array(tokenizer.encode(txt))[None,:]
embedding = nlp(input_ids)
embedding[0][0]

If we change the input text into "bank money", we get a different embedding for the same word.
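To quantify the context effect rather than eyeballing the numbers, one option is to pull out the vector of the "bank" token from both sentences and compare them with cosine similarity. A minimal sketch (it assumes the tokenizer/nlp objects above plus scipy; the indexing mirrors the embedding[0][0] access used above, with token 0 being [CLS] and token 1 being "bank"):

## compare the two contextual vectors of "bank"
from scipy import spatial
ids_river = np.array(tokenizer.encode("bank river"))[None,:]
ids_money = np.array(tokenizer.encode("bank money"))[None,:]
vec_river = np.array(nlp(ids_river)[0][0])[1]  ## row 1 = "bank"
vec_money = np.array(nlp(ids_money)[0][0])[1]
print("cosine similarity:", round(1 - spatial.distance.cosine(vec_river, vec_money), 3))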
In order to complete a text classification task, you can use BERT in 3 different ways:

- train it all from scratch and use it as a classifier;
- extract the word embeddings and use them in an embedding layer (like I did with Word2Vec);
- fine-tune the pre-trained model (transfer learning).

I'm going with the latter and will do transfer learning from a pre-trained, lighter version of BERT, called Distil-BERT (66 million parameters instead of 110 million!).

## distil-bert tokenizer
tokenizer = transformers.AutoTokenizer.from_pretrained('distilbert-base-uncased', do_lower_case=True)

As usual, before fitting the model there is some Feature Engineering to do, but this time it's going to be a little trickier. To give an illustration of what I'm going to do, let's take as an example our beloved sentence "I like this article", which has to be transformed into 3 vectors (Ids, Mask, Segment):

[Figure: shape 3 x Sequence length]

First of all, we need to select the sequence max length. This time I'm going to choose a much larger number (i.e. 50) because BERT splits unknown words into sub-tokens until it finds known unigrams. For example, if a made-up word like "zzdata" is given, BERT would split it into ["z", "##z", "##data"]. Moreover, we have to insert special tokens into the input text, then generate masks and segments. Finally, we put it all together in a tensor to get the feature matrix, which will have the shape of 3 (ids, masks, segments) x Number of documents in the corpus x Sequence length. Please note that I'm using the raw text as corpus (so far I've been using the clean_text column).

corpus = dtf_train["text"]
maxlen = 50

## add special tokens
maxqnans = np.int((maxlen-20)/2)
corpus_tokenized = ["[CLS] "+
                    " ".join(tokenizer.tokenize(re.sub(r'[^\w\s]+|\n', '', str(txt).lower().strip()))[:maxqnans])+
                    " [SEP] " for txt in corpus]

## generate masks
masks = [[1]*len(txt.split(" ")) + [0]*(maxlen - len(txt.split(" "))) for txt in corpus_tokenized]

## padding
txt2seq = [txt + " [PAD]"*(maxlen-len(txt.split(" "))) if len(txt.split(" ")) != maxlen else txt for txt in corpus_tokenized]

## generate idx
idx = [tokenizer.encode(seq.split(" ")) for seq in txt2seq]

## generate segments
segments = []
for seq in txt2seq:
    temp, i = [], 0
    for token in seq.split(" "):
        temp.append(i)
        if token == "[SEP]":
            i += 1
    segments.append(temp)

## feature matrix
X_train = [np.asarray(idx, dtype='int32'), np.asarray(masks, dtype='int32'), np.asarray(segments, dtype='int32')]

The feature matrix X_train has a shape of 3 x 34,265 x 50. We can check a random observation from the feature matrix:

i = 0
print("txt: ", dtf_train["text"].iloc[0])
print("tokenized:", [tokenizer.convert_ids_to_tokens(idx) for idx in X_train[0][i].tolist()])
print("idx: ", X_train[0][i])
print("mask: ", X_train[1][i])
print("segment: ", X_train[2][i])

You can take the same code and apply it to dtf_test["text"] to get X_test (a reusable version is sketched below).
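Since the same transformation must be applied to the test set, it can be convenient to wrap the steps above into a helper so train and test stay consistent. A minimal sketch (the function name is mine; the body just repeats the code above):

## wrap the BERT feature engineering so it can be reused on any corpus
def bert_features(corpus, tokenizer, maxlen=50):
    maxqnans = int((maxlen-20)/2)
    tokenized = ["[CLS] "+" ".join(tokenizer.tokenize(re.sub(r'[^\w\s]+|\n', '', str(txt).lower().strip()))[:maxqnans])+" [SEP] " for txt in corpus]
    masks = [[1]*len(txt.split(" ")) + [0]*(maxlen - len(txt.split(" "))) for txt in tokenized]
    padded = [txt + " [PAD]"*(maxlen-len(txt.split(" "))) if len(txt.split(" ")) != maxlen else txt for txt in tokenized]
    idx = [tokenizer.encode(seq.split(" ")) for seq in padded]
    segments = []
    for seq in padded:
        temp, i = [], 0
        for token in seq.split(" "):
            temp.append(i)
            if token == "[SEP]":
                i += 1
        segments.append(temp)
    return [np.asarray(idx, dtype='int32'), np.asarray(masks, dtype='int32'), np.asarray(segments, dtype='int32')]

X_test = bert_features(dtf_test["text"], tokenizer, maxlen=50)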
Now, I'm going to build the deep learning model with transfer learning from the pre-trained BERT. Basically, I'm going to summarize the output of BERT into one vector with Average Pooling and then add two final Dense layers to predict the probability of each news category. If you want to use the original version of BERT, here's the code (remember to redo the feature engineering with the right tokenizer):

## inputs
idx = layers.Input((50), dtype="int32", name="input_idx")
masks = layers.Input((50), dtype="int32", name="input_masks")
segments = layers.Input((50), dtype="int32", name="input_segments")

## pre-trained bert
nlp = transformers.TFBertModel.from_pretrained("bert-base-uncased")
bert_out, _ = nlp([idx, masks, segments])

## fine-tuning
x = layers.GlobalAveragePooling1D()(bert_out)
x = layers.Dense(64, activation="relu")(x)
y_out = layers.Dense(len(np.unique(y_train)), activation='softmax')(x)

## compile
model = models.Model([idx, masks, segments], y_out)
for layer in model.layers[:4]:
    layer.trainable = False
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

As I said, I'm going to use the lighter version instead, Distil-BERT:

## inputs
idx = layers.Input((50), dtype="int32", name="input_idx")
masks = layers.Input((50), dtype="int32", name="input_masks")

## pre-trained bert with config
config = transformers.DistilBertConfig(dropout=0.2, attention_dropout=0.2)
config.output_hidden_states = False
nlp = transformers.TFDistilBertModel.from_pretrained('distilbert-base-uncased', config=config)
bert_out = nlp(idx, attention_mask=masks)[0]

## fine-tuning
x = layers.GlobalAveragePooling1D()(bert_out)
x = layers.Dense(64, activation="relu")(x)
y_out = layers.Dense(len(np.unique(y_train)), activation='softmax')(x)

## compile
model = models.Model([idx, masks], y_out)
for layer in model.layers[:3]:
    layer.trainable = False
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

Let's train, test, and evaluate this bad boy (the code for the evaluation is the same as before):

## encode y
dic_y_mapping = {n:label for n,label in enumerate(np.unique(y_train))}
inverse_dic = {v:k for k,v in dic_y_mapping.items()}
y_train = np.array([inverse_dic[y] for y in y_train])

## train
training = model.fit(x=X_train, y=y_train, batch_size=64, epochs=1, shuffle=True, verbose=1, validation_split=0.3)

## test
predicted_prob = model.predict(X_test)
predicted = [dic_y_mapping[np.argmax(pred)] for pred in predicted_prob]

The performance of BERT is slightly better than that of the previous models; in fact, it can recognize more Tech news than the others.

Conclusion

This article has been a tutorial demonstrating how to apply different NLP models to a multiclass classification use case. I compared 3 popular approaches: Bag-of-Words with Tf-Idf, Word Embedding with Word2Vec, and a Language model with BERT. I went through Feature Engineering & Selection, Model Design & Testing, and Evaluation & Explainability, comparing the 3 models in each step (where possible). Please note that I haven't covered explainability for BERT as I'm still working on that, but I will update this article as soon as I can. If you have any useful resources about that, feel free to contact me.
https://towardsdatascience.com/text-classification-with-nlp-tf-idf-vs-word2vec-vs-bert-41ff868d1794
['Mauro Di Pietro']
2020-11-26 09:53:03.349000+00:00
['Data Science', 'Artificial Intelligence', 'Machine Learning', 'Programming', 'NLP']
4,296
Two Things That Separate the Wealthy From the Non-Wealthy
Two Things That Separate the Wealthy From the Non-Wealthy

The mindset of successful people is different from the norm.

[Photo by Keenan Barber on Unsplash]

One of the most essential resources we have is our time, yet so many of us view it the wrong way. Time often creates an illusion that tricks us into thinking we're making the best use of it when, in reality, we're not. For example, we may have a list of small jobs to complete, and because we allocate our time to these necessary tasks, we believe we are spending it wisely.

I'm a big believer that if you want to improve at something, you should model the people who already have the result you want. In terms of time management, the wealthy are the people to follow. The wealthy are where they are because of the unique way they think about time and how much value they place upon it.

My Misconceptions Regarding the Rich

When I was young, I used to think rich people were mean. When I pictured rich people in their homes, I always imagined them ordering their workers around and making them work unnecessarily hard just for the hell of it. To me, rich people had cleaners, cooks, gardeners, and maids for five reasons:

1. They wanted to show how powerful they are.
2. They enjoyed the power trips.
3. They liked ordering poor people around.
4. They were snobs and didn't know how to get their hands dirty.
5. They were clueless about manual labor.

I was wrong. And it wasn't a healthy way to think about the wealthy, especially if I wanted to become one in the future. I know now that the wealthy are wealthy, and continue to become wealthier, because they know how to value their own time and manage themselves better than most people.

Relaxing & Producing Results

The wealthy have caught on to the fact that you should spend your time doing only one of two things:

1. Relaxing
2. Producing results

Each time a wealthy person thinks about doing something, they ask themselves whether the task or activity falls into either of these two categories:

- Point 1 is easy to determine. It's merely a question of asking whether they deem a particular action relaxing or not.
- Point 2 takes a bit more thinking, which reveals the difference between how a wealthy person thinks about his time compared to others.

Let's take the example of everyday housework to explain the differences. Everyone has to do housework. Everyone's house needs to be cleaned. Clothes washed, dishes washed, floors washed and vacuumed — there's no getting around it. But if you were to think about housework, which category does it fall into: A (relaxing) or B (producing results)?

For the sake of keeping things simple, I'd say the majority of people do not enjoy housework, so it isn't relaxing. It's more likely to be categorized under producing results — everyone has to do it, so when it's completed, that's a result.

Even for the wealthy, housework falls into the category of producing results; the only difference is that he or she will get someone else to do it. He knows that he can do this because he places a monetary value on each hour of his time. Depending on the individual, it will be different. You may think you are worth £10 an hour, or the guy sitting next to you may think he's worth £50 an hour — it depends on the individual.
The idea is that if a wealthy person comes across an activity that can be outsourced for less than his per-hour value, in most cases he will get someone else to do it, especially if it's far below his value and he doesn't enjoy doing it. Rather than waste his time on something he doesn't enjoy, he'll hire somebody at a cheaper rate to produce the results for him while he makes better use of his time, such as relaxing or producing high-value work. He could use the time to be with his family, work on his business, or improve his life in some other way.

Once he's put a monetary value on his time, he knows which jobs are worth outsourcing. If he were to put a £100-an-hour value on his time, why should he iron if he can employ somebody to do it for £5 an hour? If the grass is long, why should he use up 2 hours of his time if he can get someone to do it for £10 an hour? With this principle in mind, he knows which jobs he should do and which jobs he should outsource.

A wealthy person thinks about the best use of his time a lot. He has learned to use his time wisely and to focus it on achieving goals instead of meaningless chores somebody else could do. Don't get me wrong, housework must be done — but if anybody can do it, get anybody to do it.

How Does This Apply to Me? I'm Not Wealthy!

I hear you loud and clear, but the principle discussed above should open your mind about how to value your own time. Even if you're sitting around and have time to clean your house, is this the best use of your time? You could spend the whole of your Saturday catching up on the week's housework, but would it not be better spent on ways to achieve your dreams?

You don't have to be rich to use your time wisely and learn to outsource work. If you've put a £20-an-hour value on your time, then it would be worth employing a cleaner for £5 an hour to come and do it for you. Think about it: your time is much better spent producing meaningful results. Anybody can clean, so get anybody to clean for you. Nobody can cross the things off your life's to-do list other than you. Nobody can take action to reach your dreams apart from you.

As the economy stands now, there are plenty of opportunities to employ people at reasonable prices — it doesn't cost much to employ someone to come in once a week to take care of your housework. I have received plenty of flyers from local people wanting to do some extra work. One was from a local pair looking for a bit of extra work, and the other was from a schoolboy on holiday looking to earn a bit of pocket money on the side (you could use him to wash your car).

I'm slowly getting more and more into online outsourcing. I outsource a lot of my boring and time-consuming website work to people through Upwork. If you run an online business and need workers for link building, design, writing, or virtual assistance, it's a great place to find them. Doing so has done wonders for my life compared to how it was before. I used to try to do everything myself, and now that I've outsourced, it's freed up so much of my time to work on other essential things.

If you save the money you had planned to spend on that big night out and spend it on outsourcing your work instead, you've freed up a whole chunk of your time to relax more or focus on producing results. I'll repeat it: there's a reason the wealthy are wealthy, and that's because they put a monetary value on their time.
Learn to leverage your time by producing results that only you can produce. We don't need to busy ourselves with activities that frustrate us throughout our lives. Learning to outsource your work will do wonders for your life, for sure. A great place to start is by putting a monetary value on your own time. Ask yourself the question — how much is your time worth?
https://medium.com/live-your-life-on-purpose/two-things-that-separate-the-wealthy-from-the-non-wealthy-6c2b94280764
['Josef Cruz']
2020-12-17 23:02:53.182000+00:00
['Wealth', 'Mindset', 'Psychology', 'Self Improvement', 'Money']
4,297
Indiana Environmental Groups file lawsuit to stop logging in Hoosier National Forest
A recent lawsuit filed by several Indiana environmental groups accuses the U.S. Forest Service of proposing a project that violates multiple environmental acts, endangers a reservoir that provides clean water to over 140,000 people, and unlawfully imperils endangered species. The project plans to selectively log 4,375 acres and burn 13,500 acres of forest over a span of around 20 years to promote the growth of trees such as oak and hickory and to treat forest health.

The lawsuit accuses the U.S. Forest Service of violating the Council on Environmental Quality (CEQ) Regulations in line with the National Environmental Policy Act (NEPA). It also alleges that the Forest Service violates the goals and objectives of the Indiana Forest Plan, stating that certain practices that were supposed to be analyzed for suitability weren't discussed, in violation of the National Forest Management Act.

“Our lawyers have been talking to the Justice Department about putting the project on hold while we see if we can work out our differences,” said Jeff Stant, an executive director for the Indiana Forest Alliance. No changes to the project have yet been made, but a contract for shelterwood cutting has been delayed. Environmentalists have also proposed many alternatives, including moving the project to a different area outside of the Monroe Reservoir and reducing the volume of logging and burning.

The Bloomington mayor and Monroe County officials say that the largest concern about the project is how it could affect the Lake Monroe Reservoir. The lake provides drinking water to over 140,000 people in South Central Indiana, and the project could increase sediment levels in a reservoir that already suffers from flooding, erosion, and high levels of algae. “By logging the slopes in the Houston South Area…there’s no question that there will be an increase in sediment levels,” said Stant.

The Lake Monroe Reservoir has suffered from contamination concerns for many years. Logging and farming lead to erosion in the area, which increases nutrients in the lake and feeds toxic algae. The Indiana Department of Environmental Management has listed the lake as an impaired water body, meaning it doesn't meet the water quality standards assigned by the Clean Water Act.

The U.S. Forest Service has responded to these concerns, stating that the project would have no significant impact on the sedimentation levels of Lake Monroe. Michelle Paduani, a district ranger for the Hoosier National Forest, said in an emailed statement that “the mitigation measures [the Forest Service] apply are highly effective in protecting water while meeting other objectives of improving wildlife habitat and forest resilience.” However, many in Monroe County aren't willing to take the risk and worry that the studies cited by the U.S. Forest Service don't apply to Lake Monroe. Scientists with concerns about increased erosion in the Monroe Reservoir have proposed setting up monitoring stations for sedimentation in the lake itself.

Environmental activists have also raised concerns about the health of the forest, which is currently classified as mature and contains many oak and hickory trees. The project plans on fulfilling goals outlined in the Indiana Forest Plan, a strategic outline set up to approve funding for forest conservation and desired future conditions on public land. This project would be the largest management project ever to take place in the Hoosier National Forest, affecting about 20,000 acres.

[Photo courtesy of Indiana Forest Alliance]
Dr. Jane Fitzgerald, a coordinator for the Central Hardwoods Joint Venture (CHJV), stated that the project was essential to improving habitat conditions for bird species of conservation concern. “A couple of examples are the Cerulean Warbler, and the Wood Thrush, and probably the Prairie Warbler,” Fitzgerald said. “Those are the top three that come to mind.”

In a letter of endorsement from the CHJV, Fitzgerald writes that the plan would encourage the growth of white oak trees, which forest birds use for foraging and nesting. According to a research article on bird populations and selective cutting, the thinning of branches would increase shrubby growth that provides better habitat structures for juvenile forest-breeding birds and improves population numbers. Another study, completed by the Woodland Steward Institute, showed that oak trees and canopy gaps were important to nesting success in Cerulean Warblers.

However, some studies have shown that opening the closed-canopy forest could have an unfavorable effect on forest songbirds. According to a research report on forest fragmentation (the breaking of large forested areas into smaller pieces), the nesting success of songbirds can decrease in response to selective cutting.

Environmental activists also fear that the prescribed burning could have detrimental effects on vulnerable species of bats, birds, amphibians, and reptiles. The Houston South project area currently supports many species of bats that are federally threatened or endangered. The IFA argues that the planned burning and thinning could harm maternity roosting trees, killing mothers and pups. Stant said that while the project may intend to provide a better habitat for these endangered animals, they may not recover from the prescribed burning.

The burning could also lead to increased air pollution. Studies have shown that forests sequester more carbon as they mature, and the burning could release an unknown amount of carbon into the atmosphere. The Indiana Forest Alliance also worries that the burning could affect recreational activities that normally take place in a public forest, from hiking and jogging to horseback riding.

Another point of contention is that burning was not used by the Native Americans of Indiana to promote the growth of oak trees. According to an article by Cheryl Munson, a research scientist at Indiana University, “no indication exists that intentionally set fire was a key factor in determining the natural composition of the forest in the Houston South area.” The U.S. Forest Service has responded that burning is an essential factor in encouraging oak growth and that careful measures will be taken.

“For the most part,” said Fitzgerald, “fire is what helps to regenerate the oaks… And in terms of the harvest, the new growth is going to sequester carbon too, so it’s not like you’re cutting the trees and paving it and there’s not ever going to be more carbon sequestration, there will be.”

The Hoosier National Forest has struggled with a lack of age diversity for many years. The majority of the trees in the forest are classified in the 20-to-99-year age range. While promoting oak-hickory growth is the goal of the project, opponents believe that it would be best to let the forest naturally regrow and increase the diversity of the trees, without solely focusing on oak and hickory.

It is still not known whether any changes have been made to the Houston South project.
Hopefully, a suitable compromise can be reached that will help maintain Lake Monroe’s water quality and protect endangered species while still achieving the important goals of the project.
https://medium.com/the-climate-reporter/indiana-environmental-groups-file-lawsuit-to-stop-logging-in-hoosier-national-forest-33c8d1860ce2
['Chenyao Liu']
2020-06-25 19:34:46.401000+00:00
['Environment', 'Conservation', 'Politics', 'Law', 'Climate News']
Title Indiana Environmental Groups file lawsuit stop logging Hoosier National ForestContent recent lawsuit filed several Indiana environmental group accuses US Forest Service proposing project violating multiple environmental act endangering reservoir provides clean water 140000 people unlawfully imperiling endangered specie project plan selectively log 4375 acre burn 13500 acre forest time span around 20 year promote growth tree oak hickory treat forest health lawsuit accuses US Forest Service violating Council Environmental Quality CEQ Regulations line National Environmental Policy Act NEPA also alleges Forest Service violates goal objective Indiana Forest Plan stating certain practice supposed analyzed suitability weren’t discussed violating National Forest Management Act “Our lawyer talking Justice Department putting project hold see work differences” said Jeff Stant executive director Indiana Forest Alliance change project yet made contract shelterwood cutting delayed Environmentalists also proposed many alternative including moving project different area outside Monroe Reservoir reducing volume logging burning Bloomington mayor Monroe County official say largest concern project could affect Lake Monroe Reservoir lake provides drinking water 140000 people South Central Indiana project could increase sediment level reservoir already suffers flooding erosion high level algae “By logging slope Houston South Area…there’s question increase sediment levels” said Stant Lake Monroe Reservoir suffered contamination concern many year Logging farming lead erosion area increase nutrient lake feed toxic algae Indiana Department Environmental Management listed lake impaired water body meaning doesn’t meet water quality standard assigned Clean Water Act US Forest Service responded concern stating project would significant impact sedimentation level Lake Monroe Michelle Paduani district ranger Hoosier National Forest emailed statement saying “the mitigation measure Forest Service apply highly effective protecting water meeting objective improving wildlife habitat forest resilience” However many Monroe County aren’t willing take risk worry study cited US Forest Service don’t apply Lake Monroe Scientists concern increased erosion Monroe Reservoir proposed setting monitoring station sedimentation lake Environmental activist also brought concern health forest currently classified mature contains many oak hickory tree project plan fulfilling goal outlined Indiana Forest Plan strategic outline set approve funding forest conservation desired future condition public land project would largest management project ever take place Hoosier National Forest affecting 20000 acre Photo Courtesy Indiana Forest Alliance Dr Jane Fitzgerald coordinator Central Hardwoods Joint Venture CHJV stated project essential improving habitat condition bird specie conservation concern “A couple example Cerulean Warbler Wood Thrush probably Prairie Warbler” Fitzgerald said “Those top three come mind” letter endorsement CHJV Fitzgerald writes plan would encourage growth white oak tree forest bird use foraging nesting According research article bird population selective cutting thinning branch would increase shrubby growth provides better habitat structure juvenile forestbreeding bird improves population number Another study completed Woodland Steward Institute showed oak tree canopy gap important nesting success Cerulean Warblers However study shown opening closed canopy forest could unfavorable effect forest songbird According 
research report forest fragmentation act breaking large forested area smaller piece nesting success songbird decrease response selective cutting Environmental activist also fear prescribed burning could detrimental effect vulnerable specie bat bird amphibian reptile Houston South project area currently support many specie bat federally threatened endangered IFA argues planned burning thinning could result harm maternity roosting tree killing mother pup Stant said project may intend provide better habitat endangered animal may recover prescribed burning burning could also lead increased air pollution Studies shown forest sequester carbon mature burning could release unknown amount carbon atmosphere Indiana Forest Alliance worry burning could affect recreational activity normally take place public forest hiking jogging horseback riding Another point contention burning used Native Americans Indiana promote growth oak tree According article Cheryl Munson research scientist Indiana University “no indication exists intentionally set fire key factor determining natural composition forest Houston South area” US Forest Service responded burning essential factor encouraging oak growth careful measure taken “For part” said Fitzgerald “fire help regenerate oaks… term harvest new growth going sequester carbon it’s like you’re cutting tree paving there’s ever going carbon sequestration be” Hoosier National Forest struggled lack age diversity many year majority tree forest classified 20 99 year age range promoting oakhickory growth goal project opponent believe would best let forest naturally regrow increase diversity tree without solely focusing oak hickory still known whether change made Houston South project Hopefully suitable compromise reached help maintain Lake Monroe’s water quality protect endangered species’ still achieving important goal projectTags Environment Conservation Politics Law Climate News
4,298
A Full-Length Machine Learning Course in Python for Free
Andrew Ng’s Machine Learning Course in Python
One of the most popular machine learning courses is Andrew Ng’s machine learning course on Coursera, offered by Stanford University. I tried a few other machine learning courses before, but I thought he was the best at breaking the concepts into pieces and making them very understandable. But I think there is just one problem. That is, all the assignments and instructions are in Matlab. I am a Python user and did not want to learn Matlab. So, I just learned the concepts from the lectures and developed all the algorithms in Python. I explained all the algorithms in my own way (as simply as I could) and demonstrated the development of almost all the algorithms in different articles before. I thought I should summarise them all on one page so that if anyone wants to follow, it is easier for them. Sometimes a little help goes a long way. If you want to take Andrew Ng’s Machine Learning course, you can audit the complete course for free as many times as you want. Let’s dive in!
Linear Regression
The most basic machine learning algorithm. This algorithm is based on the very basic straight-line formula we all learned in school: Y = AX + B. Remember? If not, no problem. This is a very simple formula. Here is the complete article that explains how this simple formula can be used to make predictions. The article above works only on datasets with a single variable. But in real life, most datasets have multiple variables. Using the same simple formula, you can develop the algorithm with multiple variables:
Polynomial Regression
This one is also a sister of linear regression. But polynomial regression is able to find the relationship between the input variables and the output variable more precisely, even if the relationship between them is not linear:
Logistic Regression
Logistic regression is built on linear regression. It also uses the same simple formula of a straight line. This is a widely used, powerful and popular machine learning algorithm, used to predict a categorical variable. The following article explains the development of logistic regression step by step for binary classification: Based on the concept of binary classification, it is possible to develop a logistic regression for multiclass classification. At the same time, Python has some optimization functions that help to do the calculation a lot faster. In the following article, I worked with both methods to perform a multiclass classification task on a digit recognition dataset:
Neural Network
Neural networks have been getting more and more popular nowadays. If you are reading this article, I guess you have heard of neural networks. A neural network works much faster and much more efficiently on more complex datasets. This one also involves the same formula of a straight line, but the development of the algorithm is a bit more complicated than the previous ones. If you are taking Andrew Ng’s course, you probably know the concepts already. Otherwise, I tried to break down the concepts as much as I could. Hopefully, it is helpful:
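Since every section above leans on the same straight-line formula, here is a minimal illustrative sketch of linear regression trained with gradient descent in plain NumPy. This is my own toy example, not the course’s assignment code or the code from the linked articles; the data, learning rate and iteration count are arbitrary choices.

```python
import numpy as np

# Toy data that roughly follows y = 2x + 1, plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=50)
y = 2 * X + 1 + rng.normal(0, 0.5, size=50)

# Parameters of the straight line y = a*x + b, both starting at zero
a, b = 0.0, 0.0
lr = 0.01           # learning rate (arbitrary choice)
m = len(X)

for _ in range(2000):                      # fixed iteration budget
    y_hat = a * X + b                      # hypothesis: the straight-line formula
    error = y_hat - y
    grad_a = (2 / m) * np.sum(error * X)   # gradient of the MSE cost w.r.t. a
    grad_b = (2 / m) * np.sum(error)       # gradient of the MSE cost w.r.t. b
    a -= lr * grad_a                       # gradient descent update
    b -= lr * grad_b

print(f"learned a={a:.2f}, b={b:.2f}")     # should land close to 2 and 1
```

Swapping the straight-line hypothesis for a sigmoid and the squared-error cost for log loss turns this same loop into the logistic regression described above.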
Learning Curve
What if you spent all that time developing an algorithm, and then it does not work the way you wanted? How do you fix it? You need to figure out first where the problem is. Is your algorithm faulty, or do you need more data to train the model, or do you need more features? So many questions, right? But if you do not figure out the problem first and keep moving in any direction, it may kill too much time unnecessarily. Here is how you may find the problem: On the other hand, if the dataset is too skewed, that is another type of challenge. For example, suppose you are working on a classification problem where 95% of cases are positive and only 5% of cases are negative. In that case, if you just mark all the output as positive, you are 95% correct. On the other hand, if the machine learning algorithm turns out to be 90% accurate, it is still not efficient, right? Because without a machine learning algorithm, you can predict with 95% accuracy. Here are some ideas for dealing with these types of situations:
K-Means Clustering
One of the oldest and most popular unsupervised learning algorithms. This algorithm does not make predictions like the previous algorithms. It makes clusters based on the similarities amongst the data, so it is more about understanding the current data effectively. Then, whenever the algorithm sees new data, it decides which cluster the data belongs to based on its characteristics. This algorithm has other uses as well. It can be used for the dimensionality reduction of images. Why do we need dimensionality reduction of an image? Think of when we need to feed a lot of images to an algorithm to train an image classification model. Very high-resolution images could be too heavy, and the training process can be too slow. In that case, a lower-dimensional picture will do the job in less time. This is just one example; you can probably imagine that there are a lot of uses for the same reason. This article is a complete tutorial on how to develop a k-means clustering algorithm and how to use that algorithm for the dimensionality reduction of an image:
Anomaly Detection
Another core machine learning task, used in credit card fraud detection, to detect faulty manufacturing, or even for rare disease detection or cancer cell detection. It can be done using the Gaussian distribution (or normal distribution) method, or even more simply with a probability formula. Here is a complete step-by-step guide for developing an anomaly detection algorithm using Gaussian distribution concepts: If you need a refresher on the Gaussian distribution method, please check this one:
Recommender System
The recommendation system is everywhere. If you buy something on Amazon, it will recommend more products you may like; YouTube recommends videos you may like; Facebook recommends people you may know. So, we see it everywhere. Andrew Ng’s course teaches how to develop a recommender system using the same formula we used in linear regression. Here is the step-by-step process of developing a movie recommendation algorithm:
Conclusion
Hopefully, this article will help some people get started with machine learning. The best way to learn is by doing. If you notice, most of the algorithms are based on a very simple basic formula. I see a notion that machine learning or artificial intelligence requires very heavy programming knowledge and very difficult math. That’s not always true. With simple code and basic math and stats knowledge, you can go a long way. At the same time, keep improving your programming skills to do more complex tasks. If you are interested in machine learning, just take some time and start working on it. Feel free to follow me on Twitter and like my Facebook page. More Reading:
https://towardsdatascience.com/a-full-length-machine-learning-course-in-python-for-free-f2732954f35f
['Rashida Nasrin Sucky']
2020-12-13 15:39:46.368000+00:00
['Data Science', 'Machine Learning', 'Artificial Intelligence', 'Programming', 'Technology']
Title FullLength Machine Learning Course Python FreeContent FullLength Machine Learning Course Python Free Andrew Ng’s Machine Learning Course Python One popular MachineLeaning course Andrew Ng’s machine learning course Coursera offered Stanford University tried machine learning course thought best break concept piece make understandable think one problem assignment instruction Matlab Python user want learn Matlab learned concept lecture developed algorithm Python explained algorithm wayas simply could demonstrated development almost algorithm different article thought summarise one page anyone want follow easier Sometimes little help go long way want take Andrew Ng’s Machine Learning course audit complete course free many time want Let’s dive Linear Regression basic machine learning algorithm algorithm based basic straight line formula learned school AX B Remember problem simple formula complete article explains simple formula used make prediction article work datasets single variable real life datasets multiple variable Using simple formula develop algorithm multiple variable Polynomial Regression one also sister linear regression polynomial regression able find relationship input variable output variable precisely even relationship linear Logistic Regression Logistic regression developed linear regression also us simple formula straight line widely used powerful popular machine learning algorithm used predict categorical variable following article explains development logistic regression step step binary classification Based concept binary classification possible develop logistic regression multiclass classification time Python optimization function help calculation lot faster following article worked method perform multiclass classification task digit recognition dataset Neural Network Neural Network getting popular nowadays reading article guess heard neural network neural network work much faster much efficiently complex datasets one also involves formula straight line development algorithm bit complicated previous one Andrew Ng’s course probably know concept already Otherwise tried break concept much could Hopefully helpful Learning Curve spent time developed algorithm work way wanted fix need figure first problem algorithm faulty need data train model need feature many question right figure problem first keep moving direction may kill much time unnecessarily may find problem hand dataset skewed another type challenge example working classification problem 95 case positive 5 case negative case randomly put output positive 95 correct hand machine learning algorithm turn 90 accurate still efficient right without machine learning algorithm predict 95 accuracy idea deal type situation K Mean CLustering One popular old unsupervised learning algorithm algorithm make prediction like previous algorithm make cluster based similarity amongst data like understanding current data effectively whenever algorithm see new data based characteristic decides cluster belongs algorithm importance well used dimensionality reduction image need dimensionality reduction image Think need input lot image algorithm train image classification model highresolution image could heavy training process slow case lowerdimensional picture job le time one example probably imagine lot us reason article complete tutorial develop K mean clustering algorithm use algorithm dimensionality reduction image Anomaly Detection Another core machine learning task Used credit card fraud detection detect faulty manufacturing even 
rare disease detection cancer cell detection Using Gaussian distributionor normal distribution method even simply probability formula done complete step step guide developing anomaly detection algorithm using Gaussian distribution concept need refresher Gaussian distribution method please check one Recommender System recommendation system everywhere buy something Amazon recommend product may like YouTube recommends video may like Facebook recommends people may know see everywhere Andrew Ng’s course teach develop recommender system using formula used linear regression step step process developing movie recommendation algorithm Conclusion Hopefully article help people start machine learning best way notice algorithm based simple basic formula see notion machine learning Artificial Intelligence requires heavy programming knowledge difficult math That’s always true simple code basic math stats knowledge go long way time keep improving programming skill complex task interested machine learning take time start working Feel free follow Twitter like Facebook page ReadingTags Data Science Machine Learning Artificial Intelligence Programming Technology
4,299
The Architect of Artificial intelligence — Deep Learning
Artificial Intelligence has been one of the most remarkable advancements of the decade. People are rushing from explicit software development to building AI-based models; businesses are now relying on data-driven decisions rather than on someone manually defining rules. Everything is turning into AI, ranging from AI chat-bots to self-driving cars, speech recognition to language translation, robotics to medicine. AI is not a new thing to researchers, though. It has been present since even before the 90’s. But what’s making it so popular and open to the world now? I’ve been working with Artificial Intelligence and Data Science for almost 2 years now and have worked on a lot of so-called state-of-the-art AI systems like generative chat-bots, speech recognition, language translation, text classification, object recognition, age and expression recognition, etc. So, after spending 2 years in AI, I believe there’s just one major technology (or whatever you call it) behind this AI boom: Deep Learning. This being my introductory blog, I won’t dive into the technical details of Deep Learning and neural nets (I will talk about my work in upcoming blogs), but I will share with you why I think Deep Learning is taking over other traditional methods.
If you are not into Deep Learning and AI stuff, let me explain it to you in simple, non-techie words. Imagine you have to build a method to classify emails into categories like social, promotional or spam, one of the prime AI tasks that Google does for your Gmail inbox! What would you do to achieve this? Maybe you could make a list of words to look for in emails, like ‘advertisement’, ‘subscribe’, ‘newsletter’, etc., then write a simple string-matching regex to look for these words in the emails and classify them as promotional or spam if the words are found. But the problem here is how many keywords you can catch this way, and how many rules you can manually write. The content on the internet is multiplying, and each day new keywords hop in. Thus, this keyword-based approach won’t land you good results. Now, if you give this a closer thought, you have a computer which can do keyword matching a million times faster than you. So rather than using your computationally powerful device just for simple string matching, why not let the computer decide the rules for classification too? What I mean is that a computer can go through thousands of data points and come up with more precise rules for the task in the time you could think of just 5 such rules. This is what Deep Learning is all about! Instead of you explicitly designing rules and conditions which you think would solve the problem (like simple if-else logic, dictionaries of keywords, etc.), Deep Learning gives the computer the capability to produce the rules it can use to solve the problem. This means it’s an end-to-end architecture. You give the data as input to the network and tell it the desired output for each data point. The network then goes through the data and updates its rules accordingly, landing on a set of optimized rules. This decision-making ability is generally limited to us humans, right? This is where Artificial Neural Networks (or simply neural nets) kick in. These are sets of nodes arranged in layers and connected through weights (which are nothing but number matrices), in a similar way as neurons are connected in our brain. Again, I won’t go into the technical details of the architecture, its learning algorithms and the mathematics behind it, but this is the way Deep Learning mimics the brain’s learning process.
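To make the “weights are nothing but number matrices” point concrete, here is a minimal sketch of the standard Wx + b computation: one forward pass through a tiny two-layer network in NumPy. The layer sizes, the random weights and the spam-probability framing are made-up illustrations, not code from any real system.

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into (0, 1); a common activation function
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])    # one input example with 3 features

# Layer 1: 4 neurons, so W1 is a 4x3 matrix of plain numbers
W1 = np.random.randn(4, 3)
b1 = np.zeros(4)
h = sigmoid(W1 @ x + b1)          # hidden-layer activations

# Layer 2: 1 output neuron stacked on the 4 hidden activations
W2 = np.random.randn(1, 4)
b2 = np.zeros(1)
y_hat = sigmoid(W2 @ h + b2)      # the network's prediction, e.g. P(spam)

print(y_hat)   # a number in (0, 1); training adjusts W1, b1, W2, b2
```

Training, via back-propagation, is just gradient descent on these weight matrices: the network keeps nudging them until its outputs match the desired labels, which is exactly the “computer deciding the rules” described above.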
Again I won’t go into technical details of the architecture, their learning algorithms and mathematics behind it, but this is the way Deep Learning mimics brain’s learning process. Lets take another example, suppose you are to recognize human face in an image which could be located anywhere in the image. How would you proceed? One obvious way is to define a set of key-points all over human face which together can characterize the face. Generally these are in sets of 128 or 68. These points when interconnected forms an image mask. But what if the orientation of face changes from frontal view to side view?? The geometry of face which helped these points to identify a face changes and thus, the key-point method won’t detect the face. 68 key points of human face, Image taken from www.pyimagesearch.com Deep Learning makes this possible too ! The key-points we used were based on a human’s perception of face features(like nose, ears, eyes). Hence to detect a face, we try to make the computer find these features together in an image. But guess what, these manually selected features are not so pronounced to computers. Deep Learning rather makes the computer go through a lot of faces (containing all sort of distortions and orientations) and lets the computer decide what feature maps seems relevant to the computer for face detection. After all the computer has to recognize the face, not you! And this gives surprisingly good results. You can go through one of my project here where I used ConvNets (a deep learning architecture) to recognize expression of the face. Having large data set of faces for recognizing a face may occur as a problem to you. But one-shot learning methods such as Siamese Network have solved this problem too. It is an approach based on a special loss function called Contrastive Triplet Loss and was introduced in the FaceNet paper. I won’t discuss about this here. If you wish to know abut it, you can go through the paper here. Siamese Network for Gender Detection, Image taken from www.semanticscholar.org Another myth about Deep Learning is that Deep Learning is a Black Box. There’s no feature engineering and Maths involved behind the architecture. And so it simply replicates the data without actually providing a reliable and long-term solution to the problem. NO, it’s not like ! It has mathematics and probability involved in a similar way traditional Machine Learning methods have, be it simple Linear Regression or Support Vector Machines. Deep Learning uses the same Gradient Descent equation to look for optimized parameter values as Linear Regression does. The cost function, the hypothesis, error calculation from target value (loss) are all done in similar fashion as they are in traditional algorithms (based on equations). Activation functions in deep nets are nothing but mathematical functions. Once you understand every mathematical aspect of Deep Learning, you can figure out how to build the model for a specific task and what changes need to be done. It’s just that the mathematics involved in Deep Learning turns out to be little complex. But if you get the concepts right, it’s no more a Black Box to you! In fact this is true to all the algorithms in the world. As far as I’ve learnt, I’ve made my way through all the mathematics behind it. Beginning right from a simple perceptron, standard Wx+b equation of a neuron and back-propagation to modern architectures such as CNN, LSTM, Encoder-Decoder, Sequence2Sequence etc. 
The purpose of this blog was to create more acceptance for Deep Learning in the field of Machine Learning and Artificial Intelligence. That’s why I didn’t talk about Deep Learning architectures, code and Tensorflow. Companies basing their business on AI need to support Deep Learning along with traditional Machine Learning methods. In my upcoming blog, I will talk about some cool projects I did, maybe a generative chat-bot or maybe neural machine translation. If you are into Artificial Intelligence too, do let me know your opinions on the blog!
https://towardsdatascience.com/the-architect-of-artificial-intelligence-deep-learning-226ac69ab27a
['Saransh Mehta']
2018-10-03 19:15:00.094000+00:00
['Deep Learning', 'Artificial Intelligence', 'Neural Networks', 'Data Science', 'Machine Learning']
Title Architect Artificial intelligence — Deep LearningContent Artificial Intelligence one remarkable advancement decade People hushing explicit software development building Ai based model business relying data driven decision rather someone manually defining rule Everything turning Ai ranging Ai chatbots self driving car speech recognition language translation robotics medicine Ai new thing researcher though present even 90’s what’s making trending open world I’ve working Artificial Intelligence Data Science almost 2 year worked around lot called stateoftheart Ai system like generative chatbots speech recognition language translation text classification object recognition age expression recognition etc spending 2 year Ai believe there’s one major technology whatever call behind Aiboom Deep Learning introductory blog won’t dive technical detail Deep Learning Neural Nets talk work upcoming blog share think Deep learning taking traditional method Deep Learning Ai stuff let explain simple nontechie word Imagine build method classify email category like social promotional spam one prime Ai task Google Gmail inbox would achieve May could make list word look email like ‘advertisement’ ‘subscribe’ ‘newsletter’ etc write simple string matching regex look word email classify promotional spam word found problem many keywords catch way many rule manually write know content internet cross folding day new keywords would hop Thus keywords based approach won’t land good result would give closer thought computer keyword matching million time faster rather using computationally powerful device simple string matching let computer decide rule classification mean computer go thousand data come precise rule task time could think 5 rule deep learning Instead explicitly designing rule condition think would solve problem like simple ifelse making dictionary keywords etc Deep Learning deal giving computer capability produce certain rule use solve problem mean it’s endtoend architecture give data input network tell desired output data point network go data update rule accordingly land set optimized rule decision making ability generally limited human right Artificial Neural Networks simply neural net kick set node arranged layer connected weight nothing number matrix similar way neuron connected brain won’t go technical detail architecture learning algorithm mathematics behind way Deep Learning mimic brain’s learning process Lets take another example suppose recognize human face image could located anywhere image would proceed One obvious way define set keypoints human face together characterize face Generally set 128 68 point interconnected form image mask orientation face change frontal view side view geometry face helped point identify face change thus keypoint method won’t detect face 68 key point human face Image taken wwwpyimagesearchcom Deep Learning make possible keypoints used based human’s perception face featureslike nose ear eye Hence detect face try make computer find feature together image guess manually selected feature pronounced computer Deep Learning rather make computer go lot face containing sort distortion orientation let computer decide feature map seems relevant computer face detection computer recognize face give surprisingly good result go one project used ConvNets deep learning architecture recognize expression face large data set face recognizing face may occur problem oneshot learning method Siamese Network solved problem approach based special loss function called Contrastive Triplet 
Loss introduced FaceNet paper won’t discus wish know abut go paper Siamese Network Gender Detection Image taken wwwsemanticscholarorg Another myth Deep Learning Deep Learning Black Box There’s feature engineering Maths involved behind architecture simply replicates data without actually providing reliable longterm solution problem it’s like mathematics probability involved similar way traditional Machine Learning method simple Linear Regression Support Vector Machines Deep Learning us Gradient Descent equation look optimized parameter value Linear Regression cost function hypothesis error calculation target value loss done similar fashion traditional algorithm based equation Activation function deep net nothing mathematical function understand every mathematical aspect Deep Learning figure build model specific task change need done It’s mathematics involved Deep Learning turn little complex get concept right it’s Black Box fact true algorithm world far I’ve learnt I’ve made way mathematics behind Beginning right simple perceptron standard Wxb equation neuron backpropagation modern architecture CNN LSTM EncoderDecoder Sequence2Sequence etc purpose blog create acceptance Deep Learning field Machine Learning Artificial Intelligence That’s didn’t talk Deep Learning architecture code Tensorflow Companies basing business AI need support Deep Learning along traditional Machine Learning method upcoming blog talk cool project may Generative ChatBot may Neural Machine Translation Artificial Intelligence let know opinion blogTags Deep Learning Artificial Intelligence Neural Networks Data Science Machine Learning