The easy grain bowl you’ll want to make all throughout the winter
This post was originally published on blog.healthtap.com on January 9, 2018. As we get deep into the dark and cold months of winter, we find ourselves turning toward nourishing, hot, and hearty foods to warm our bodies and our spirits. This simple, nutrient-rich grain bowl is effortless to throw together, and is a perfect option for dinner when you want something savory and filling but don’t want to spend too much time in the kitchen. Extra roasted veggies also make perfect leftovers for your lunches and meals all week long. This bowl nails comforting, hearty flavor while being a perfectly balanced meal: dark leafy greens are topped with seasonal roasted veggies, whole grains, protein-packed chickpeas, and nourishing monounsaturated fat from creamy, sliced avocado. This nutritionally powerful, veggie-based winter bowl is sure to be something you’ll want to whip up all throughout the rest of these chilly months.

Winter Grain Bowl

What you’ll need:
- Cooked quinoa
- Raw (or steamed) kale
- Sweet potato
- Can of chickpeas, drained and rinsed
- Bell pepper
- Avocado
- Spices: salt, pepper, paprika, rosemary, garlic powder
- Olive oil

To serve: lemon tahini dressing
- 1/3 cup tahini
- 1/3–1/2 cup water (depending on desired consistency)
- 2 tbsp lemon juice
- 1 tbsp olive oil
- 1 garlic clove, minced
- Salt and pepper, to taste

What you need to do: Preheat your oven to 400°F. Slice your sweet potato into wedges, and evenly coat them with olive oil, salt, pepper, and rosemary. Brush your bell pepper with olive oil, and sprinkle with salt and pepper. Spread your veggies evenly on a baking sheet, and roast them for 35–40 minutes, until the bell pepper is slightly blackened and the sweet potato wedges are soft throughout. While your veggies are roasting, add the chickpeas, a tablespoon of olive oil, 1/2 tsp smoked paprika, 1/4 tsp garlic powder, and salt and pepper to taste to a skillet. Stir the chickpeas over medium heat for about 5 minutes, until they become hot and a little crispy.
To make the dressing, combine all ingredients in a blender and blend until creamy. Once the veggies, chickpeas, and dressing are done, slice your avocado. Finally, layer your bowls with kale, quinoa, sweet potatoes, peppers, and chickpeas, and add sliced avocado over the top. For a finishing touch, drizzle everything with your lemon tahini dressing. Enjoy! Author: Maggie Harriman
https://medium.com/healthtap/the-easy-grain-bowl-youll-want-to-make-all-throughout-the-winter-51b5ebe40978
[]
2018-02-01 17:41:45.483000+00:00
['Nutrition', 'Healthy Foods', 'Wellness', 'Recipe', 'Healthy Eating']
#FluentFriday Tweet Chat Follow-Up
#FluentFriday Tweet Chat Follow-Up

All of your unanswered questions answered by Principal Design Lead Joey Pitt and Sr. Dev Writer Mike Jacobs — with more than 280 characters.

Last week, we hosted our inaugural #FluentFriday tweet chat where Principal Design Lead Joey Pitt and Sr. Dev Writer Mike Jacobs answered the community’s Fluent Design questions. Coffee, pastries, and soda for breakfast in the Tweetuation Room. We had an hour to respond to as many questions as we could but quickly ran out of time before we got to them all! We promised to follow up with those we didn’t get to, so here they are. Hope they help.

Question 1:
Answer: Principal Program Manager Paul Gusmorino answered this question here, and we wanted to add a little more context. As we incorporate Fluent Design into more apps and the Windows shell, we’re trying new things and different approaches. The upside to this experimentation is that we get to innovate; the downside is that it can create inconsistencies. After every round of innovation, there’s a stabilization period where we determine what works best and start enforcing consistency. To learn more about our iteration cycle, check out this Q&A with Joey Pitt.

Question 2:
Answer: We’ve already added acrylic to the Start menu and reveal to the live tiles, and we’re looking at other UX patterns to make live tile curation better. These are just early explorations of how we are taking the Start menu to its next evolution.

Question 3:
Answer: One thing we’ll share at Microsoft Build is how we’re moving our color and material systems forward, and we’ll do this in a way that reinforces hierarchy and helps you focus on what you’re doing. Make sure to sign up for the Fluent Design: Evolving our Design System session at Microsoft Build to find out more.

Question 4:
Answer: Font rendering is optimized differently on Windows and OSX. OSX optimizes for aesthetics, and Windows for legibility.
The OSX rendering is truer to the outlines the font designer drew, but it introduces more blurry grey pixels and a bit of added weight. Windows reduces or eliminates that blur for legibility, but the characters are a bit more blocky at smaller sizes, which is why we don’t have the grey that OSX does. Neither is inherently better; they’re just different design choices. Check out our eBook Now Read This, which includes a chapter on font rendering.

Question 5:
Answer: Currently, apps like Photos use Connected Animation, but we know it’s not used everywhere. We’re definitely making it easier in XAML in the next release of Windows. We are discussing this and more at the What’s New for Windows UX Developers: Fluent and XAML session at Microsoft Build.

Question 6:
Answer: Some Fluent Design effects (such as acrylic) use the GPU, which can increase power consumption. Windows disables these features depending on your power settings, and users have the option of turning off these effects altogether.

Question 7:
Answer: While the Windows shell supports colorizing your taskbar with accent colors, we are tracking this request in the Feedback Hub. Upvote if you haven’t already!

Question 8:
Answer: Great idea! We’re currently redesigning the Microsoft Design site, which will give some additional cues for how to implement Fluent. We’re also planning on including more designer-focused video tutorials in the future. In the meantime, check out our developer-oriented video series.
https://medium.com/microsoft-design/fluentfriday-tweet-chat-follow-up-8ff55869299
['Microsoft Design']
2019-08-27 17:28:01.784000+00:00
['UX Design', 'Fluent Design System', 'Microsoft', 'Design']
Factory farms: A pandemic in the making
Factory farms: A pandemic in the making

Factory farms are petri dishes for animal-borne viruses, which have caused pandemics before, and will do so again

Photo credit: Mercy For Animals Canada via Flickr (CC BY 2.0)

In March 2009, the first case of a novel H1N1 influenza virus infection was reported in the small community of La Gloria in the Mexican state of Veracruz. The virus quickly spread through Mexico and the United States, and in June 2009 the World Health Organization officially declared it a pandemic. Within a year, the Centers for Disease Control and Prevention (CDC) estimates, it had killed up to 575,400 people worldwide.

Early reports suggested that the source of the outbreak lay in the factory-style pig farms in the area around its epicenter in Veracruz. Subsequent tests, however, traced the genetic lineage of the virus to a strain that had emerged in an industrial hog farm in Newton Grove, N.C., in the late 1990s, where it had circulated and evolved among pigs before crossing to humans.

Most recent pandemics, including the one we’re currently experiencing, have been the result of zoonotic viruses “spilling over” to humans from animals. In many cases, this spillover hasn’t occurred via so-called “exotic” animals in faraway markets, as is believed to have been the case with COVID-19, but through domestic livestock.

Most livestock today are raised in “concentrated animal feeding operations” (CAFOs) — more commonly known as factory farms. In these industrial-scale facilities, the proximity of thousands of genetically similar animals, packed together in unsanitary, overcrowded spaces and vulnerable to disease due to the stress placed on their immune systems by these living conditions, provides the ideal environment for viruses and other pathogens to circulate, mutate, and evolve the ability to cross over to human populations. Research shows that these farms can act as “amplifiers” for the spillover and spread of viruses.
One recent model based on data from hog farms shows that workers at these facilities, being in close proximity to animals and thus at increased risk of contracting a virus, can be a “bridging population” for transmission of diseases from pigs to humans. The study found that a higher percentage of factory farm workers in a given community leads to a higher rate of human influenza cases in that community, concluding that a human influenza epidemic due to a new virus could be amplified in a local community and beyond by the presence of a factory farm nearby.

Most of the major pandemics of recent decades can ultimately be traced back to birds, bats or other wildlife, but because these creatures are so genetically different from us it’s difficult for viruses to jump directly to humans without some other species acting as an intermediary. Historically this intermediary has often been pigs. Being genetically quite similar to us, and with similar immune systems, pigs are ideal “mixing vessels” in which viruses picked up from other animals are “genetically rearranged” to be able to cross over to human populations. In particular, it’s believed that pigs are the primary source of influenza pandemics, because they can pick up the virus from both birds and humans and act as incubators for new strains that combine genetic traits from both, and thus make the relatively easy jump to humans.

Industrial pig farms have been the source of a range of disease outbreaks over recent years, the 2009 H1N1 outbreak being a case in point. In this instance, the new virus is thought to have arisen from a “reassortment” of bird, swine and human influenza viruses combined with a Eurasian pig flu virus. Similarly, in the 1990s, factory farms were at the epicenter of a deadly Nipah virus outbreak, believed to have been the result of pigs in CAFO operations in Malaysia contracting the virus from bats and passing it on to farm workers, causing an outbreak of fatal encephalitis among pig farmers.
But it’s not just pigs. Studies have indicated that industrial poultry farms can be similarly lethal amplifiers of disease, as was the case with the 2006 HPAI (highly pathogenic avian influenza) outbreak and the H5N1 avian flu in the late 1990s, both of which originated in Chinese poultry farms. Avian flu spreads quickly in chickens and is thought to have been picked up and carried further afield by migratory birds in the vicinity of these farms. The virus is still mutating to this day, and continued outbreaks in industrial poultry farms worldwide — including in Thailand, Nigeria, France, and in just the last couple of months, India and China — are providing new opportunities for the virus to mutate into a form capable of moving even more easily among both animals and humans.

Factory farms are a relatively recent development in agriculture. Until the late twentieth century, most of the world’s food animals were dispersed across numerous diversified small to mid-sized farms growing a mixture of different crops and raising different kinds of livestock. In the space of just a few decades, a combination of unrestrained corporate power, wrongheaded agricultural policy and inadequate environmental and public health regulations — all of which can be remedied if we so choose — has led to a system of intensive, industrialized food production that poses serious risks to both animal and human health.

COVID-19 is the latest in a growing catalog of public health disasters stemming directly from humans meddling with wildlife, and it’s right that we should be exploring every avenue to figure out exactly how it emerged and to ensure that nothing like it ever happens again. But while the spotlight is currently trained on animal husbandry practices on the other side of the world, we also need to recognize that our own agricultural systems are creating hotbeds for disease outbreaks, potentially no less devastating than this one, right here on our own doorstep.
A growing scientific consensus and a history of painful experience show us that averting future pandemics begins with transitioning away from factory farms and toward means of food production that pose less danger to our environment and our health.
https://medium.com/the-public-interest-network/factory-farms-a-pandemic-in-the-making-bcd559dba090
['James Horrox']
2020-05-05 20:27:32.614000+00:00
['Environment', 'Agriculture', 'Factory Farming', 'Public Health', 'Covid 19']
Remembering Pluribus: The Techniques that Facebook Used to Master World’s Most Difficult Poker Game
Remembering Pluribus: The Techniques that Facebook Used to Master World’s Most Difficult Poker Game

Pluribus used incredibly simple AI methods to set new records in six-player no-limit Texas Hold’em poker. How did it do it?

I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below:

I had a long conversation with one of my colleagues about imperfect-information games and deep learning this weekend, which reminded me of an article I wrote last year, so I decided to republish it.

Poker has remained one of the most challenging games to master in the fields of artificial intelligence (AI) and game theory. From game theory creator John von Neumann writing about poker in his 1928 essay “Theory of Parlor Games,” to Edward Thorp’s masterful book “Beat the Dealer,” to the MIT Blackjack Team, poker strategy has been an obsession for mathematicians for decades. In recent years, AI has made some progress in poker environments with systems such as Libratus, which defeated human pros in two-player no-limit Hold’em in 2017. Last year, a team of AI researchers from Facebook, in collaboration with Carnegie Mellon University, achieved a major milestone in the conquest of poker by creating Pluribus, an AI agent that beat elite human professional players in the most popular and widely played poker format in the world: six-player no-limit Texas Hold’em poker.

The reasons why Pluribus represents a major breakthrough in AI systems might seem confusing to many readers. After all, in recent years AI researchers have made tremendous progress across different complex games such as checkers, chess, Go, two-player poker, StarCraft 2, and Dota 2.
All those games are constrained to only two players (or two teams) and are zero-sum games (meaning that whatever one player wins, the other player loses). Other AI strategies based on reinforcement learning have been able to master multi-player games such as Dota 2 and Quake III. However, six-player no-limit Texas Hold’em still remains one of the most elusive challenges for AI systems.

Mastering the Most Difficult Poker Game in the World

The challenge of six-player no-limit Texas Hold’em poker can be summarized in three main aspects: 1) dealing with incomplete information; 2) the difficulty of achieving a Nash equilibrium; and 3) the fact that success requires psychological skills like bluffing.

In AI theory, poker is classified as an imperfect-information environment, which means that players never have a complete picture of the game. No other game embodies the challenge of hidden information quite like poker, where each player has information (his or her cards) that the others lack. Additionally, an action in poker is highly dependent on the chosen strategy. In perfect-information games like chess, it is possible to solve a state of the game (e.g., an endgame) without knowing about the previous strategy (e.g., the opening). In poker, it is impossible to disentangle the optimal strategy for a specific situation from the overall strategy of poker.

The second challenge of poker lies in the difficulty of achieving a Nash equilibrium. Named after legendary mathematician John Nash, a Nash equilibrium is a strategy profile from which no player can profit by deviating; in a two-player zero-sum game, playing one guarantees a player at least the value of the game regardless of the moves chosen by the opponent. In the classic rock-paper-scissors game, the Nash equilibrium strategy is to randomly pick rock, paper, or scissors with equal probability. The challenge with the Nash equilibrium is that its complexity increases with the number of players in the game, to the point where computing such a strategy is no longer feasible.
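The rock-paper-scissors equilibrium mentioned above can be verified numerically with regret matching, the same building block that CFR-style poker solvers are built on. The sketch below is purely illustrative (all names are mine, and this is not Pluribus code):

```python
import random

# Regret matching: each player tracks, per action, how much better it would
# have done by always playing that action, then plays actions in proportion
# to positive regret. In self-play on rock-paper-scissors, the *average*
# strategy approaches the Nash equilibrium (1/3, 1/3, 1/3).

random.seed(0)
N_ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[a][b]: a vs. b

def current_strategy(regret):
    pos = [max(r, 0.0) for r in regret]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / N_ACTIONS] * N_ACTIONS

def sample(probs):
    r, acc = random.random(), 0.0
    for a, p in enumerate(probs):
        acc += p
        if r < acc:
            return a
    return N_ACTIONS - 1

regrets = [[0.0] * N_ACTIONS for _ in range(2)]
strategy_sums = [[0.0] * N_ACTIONS for _ in range(2)]

for _ in range(100_000):
    strats = [current_strategy(regrets[i]) for i in range(2)]
    moves = [sample(s) for s in strats]
    for i in range(2):
        opp = moves[1 - i]
        realized = PAYOFF[moves[i]][opp]
        for a in range(N_ACTIONS):
            # regret: payoff of the alternative minus what was actually earned
            regrets[i][a] += PAYOFF[a][opp] - realized
            strategy_sums[i][a] += strats[i][a]

avg = [s / sum(strategy_sums[0]) for s in strategy_sums[0]]
print([round(p, 2) for p in avg])  # each entry should land close to 1/3
```

Regret matching alone only handles single-decision games; CFR extends the same update through an entire game tree, which is where the discussion below picks up.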
In the case of six-player poker, achieving a Nash equilibrium is often computationally intractable.

The third challenge of six-player no-limit Texas Hold’em is related to its dependence on human psychology. Success in poker relies on effectively reasoning about hidden information, picking good actions, and ensuring that a strategy remains unpredictable. A successful poker player should know how to bluff, but bluffing too often reveals a strategy that can be beaten. These kinds of skills have remained challenging for AI systems to master throughout history.

Pluribus

Like many other recent AI game breakthroughs, Pluribus relied on reinforcement learning models to master the game of poker. The core of Pluribus’s strategy was computed via self-play, in which the AI plays against copies of itself, without any data of human or prior AI play used as input. The AI starts from scratch by playing randomly, and gradually improves as it determines which actions, and which probability distribution over those actions, lead to better outcomes against earlier versions of its strategy.

Differently from other multi-player games, any given position in six-player no-limit Texas Hold’em can have too many decision points to reason about individually. Pluribus uses a technique called abstraction to group similar actions together and eliminate others, reducing the scope of each decision. The current version of Pluribus uses two types of abstraction:

· Action abstraction: This type of abstraction reduces the number of different actions the AI needs to consider. For instance, betting $150 or $151 might not make a difference from the strategy standpoint. To balance that, Pluribus only considers a handful of bet sizes at any decision point.

· Information abstraction: This type of abstraction groups decision points based on the information that has been revealed. For instance, a ten-high straight and a nine-high straight are distinct hands, but are nevertheless strategically similar.
Pluribus uses information abstraction only to reason about situations on future betting rounds, never the betting round it is actually in.

To automate self-play training, the Pluribus team used a version of the iterative Monte Carlo CFR (MCCFR) algorithm. On each iteration of the algorithm, MCCFR designates one player as the “traverser,” whose current strategy is updated on that iteration. At the start of the iteration, MCCFR simulates a hand of poker based on the current strategy of all players (which is initially completely random). Once the simulated hand is completed, the algorithm reviews each decision the traverser made and investigates how much better or worse it would have done by choosing the other available actions instead. Next, the AI assesses the merits of each hypothetical decision that would have been made following those other available actions, and so on. The difference between what the traverser would have received for choosing an action and what the traverser actually achieved (in expectation) on the iteration is added to the counterfactual regret for that action. At the end of the iteration, the traverser’s strategy is updated so that actions with higher counterfactual regret are chosen with higher probability.

The outputs of the MCCFR training are known as the blueprint strategy. Using that strategy, Pluribus was able to master poker in eight days on a 64-core server, requiring less than 512 GB of RAM. No GPUs were used.

The blueprint strategy is too expensive to use in real time in a poker game. During actual play, Pluribus improves upon the blueprint strategy by conducting real-time search to determine a better, finer-grained strategy for its particular situation. Traditional search strategies are very challenging to implement in imperfect-information games, in which the players can change strategies at any time.
Pluribus instead uses an approach in which the searcher explicitly considers that any or all players may shift to different strategies beyond the leaf nodes of a subgame. Specifically, rather than assuming all players play according to a single fixed strategy beyond the leaf nodes, Pluribus assumes that each player may choose among four different strategies to play for the remainder of the game when a leaf node is reached. This technique results in the searcher finding a more balanced strategy that produces stronger overall performance.

Pluribus in Action

Facebook evaluated Pluribus by playing it against an elite group of players that included several World Series of Poker and World Poker Tour champions. In one experiment, Pluribus played 10,000 hands of poker against five human players selected randomly from the pool. Pluribus’s win rate was estimated to be about 5 big blinds per 100 hands (5 bb/100), which is considered a very strong victory over its elite human opponents (profitable with a p-value of 0.021). If each chip was worth a dollar, Pluribus would have won an average of about $5 per hand and would have made about $1,000/hour. The following figure illustrates Pluribus’s performance: on the top chart, the solid lines show the win rate plus or minus the standard error; the bottom chart shows the number of chips won over the course of the games.

Pluribus represents one of the major breakthroughs in modern AI systems. Even though Pluribus was initially implemented for poker, the general techniques can be applied to many other multi-agent systems that require both AI and human skills. Just like AlphaZero is helping to improve professional chess, it’s interesting to see how poker players can improve their strategies based on the lessons learned from Pluribus.
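The counterfactual-regret update described above is easiest to see on a toy game. Below is a minimal vanilla (chance-sampled) CFR loop for Kuhn poker, the standard three-card teaching game: each information set accumulates regrets that push the average strategy toward equilibrium. This is a sketch of the CFR family that Pluribus’s MCCFR belongs to, not Facebook’s implementation:

```python
import random

# Kuhn poker: two players ante 1 chip, each gets one card from {1, 2, 3},
# and may pass ("p") or bet ("b") one chip. CFR walks the full tree each
# iteration, updating regrets at every information set it visits.

ACTIONS = ["p", "b"]

class Node:
    def __init__(self):
        self.regret_sum = [0.0, 0.0]
        self.strategy_sum = [0.0, 0.0]

    def strategy(self, reach_weight):
        pos = [max(r, 0.0) for r in self.regret_sum]
        total = sum(pos)
        strat = [x / total for x in pos] if total > 0 else [0.5, 0.5]
        for a in range(2):  # accumulate the average strategy
            self.strategy_sum[a] += reach_weight * strat[a]
        return strat

nodes = {}

def cfr(cards, history, p0, p1):
    """Returns expected utility for the player currently to act."""
    player = len(history) % 2
    if len(history) > 1:  # terminal-state checks
        player_wins = cards[player] > cards[1 - player]
        if history[-1] == "p":
            if history == "pp":            # pass-pass: showdown for 1 chip
                return 1 if player_wins else -1
            return 1                       # opponent folded after a bet
        if history[-2:] == "bb":           # bet-call: showdown for 2 chips
            return 2 if player_wins else -2
    node = nodes.setdefault(str(cards[player]) + history, Node())
    strat = node.strategy(p0 if player == 0 else p1)
    util, node_util = [0.0, 0.0], 0.0
    for a, action in enumerate(ACTIONS):
        if player == 0:
            util[a] = -cfr(cards, history + action, p0 * strat[a], p1)
        else:
            util[a] = -cfr(cards, history + action, p0, p1 * strat[a])
        node_util += strat[a] * util[a]
    opp_reach = p1 if player == 0 else p0  # counterfactual weighting
    for a in range(2):
        node.regret_sum[a] += opp_reach * (util[a] - node_util)
    return node_util

random.seed(1)
cards, total, iters = [1, 2, 3], 0.0, 20000
for _ in range(iters):
    random.shuffle(cards)                  # chance sampling of the deal
    total += cfr(cards, "", 1.0, 1.0)
game_value = total / iters
print(round(game_value, 3))  # theory: first player's value is -1/18 ≈ -0.056
```

MCCFR differs mainly in what it visits: instead of recursing into every branch on every iteration, it samples opponent and chance actions and updates only the designated traverser, which is what makes training tractable at full poker scale.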
https://medium.com/dataseries/remembering-pluribus-the-techniques-that-facebook-used-to-master-worlds-most-difficult-poker-game-d91ead459fac
['Jesus Rodriguez']
2020-12-01 16:15:23.351000+00:00
['Machine Learning', 'Deep Learning', 'Data Science', 'Artificial Intelligence', 'Thesequence']
Title: Remembering Pluribus: The Techniques Facebook Used to Master the World's Most Difficult Poker Game

Content: Pluribus used an incredibly simple AI method to set a new record in six-player no-limit Texas Hold'em poker. I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing.

A long conversation with one of my colleagues about imperfect-information games and deep learning over the weekend reminded me of an article I wrote last year, and I decided to republish it. Poker has remained one of the most challenging games to master in the fields of artificial intelligence (AI) and game theory. Game theory's creator, John von Neumann, was writing about poker in his 1928 essay "Theory of Parlor Games". From Edward Thorp's masterful book "Beat the Dealer" to the MIT Blackjack Team, poker strategy has been an obsession for mathematicians for decades. In recent years, AI has made progress in poker environments, with systems such as Libratus defeating human pros in two-player no-limit Hold'em in 2017. Last year, a team of AI researchers from Facebook, in collaboration with Carnegie Mellon University, achieved a major milestone in the conquest of poker by creating Pluribus, an AI agent that beat elite human professional players at the most popular and widely played poker format in the world: six-player no-limit Texas Hold'em poker.

That Pluribus represents a major breakthrough for AI systems might seem confusing to many readers. In recent years, AI researchers have made tremendous progress across different complex games such as checkers, chess, Go, two-player poker, StarCraft 2 and Dota 2. Most of those games are constrained to two-player, zero-sum settings, meaning that whatever one player wins, the other player loses. AI strategies based on reinforcement learning have also been able to master multiplayer games such as Dota 2 (OpenAI Five) and Quake III. However, six-player no-limit Texas Hold'em still remained one of the most elusive challenges for AI systems.

Mastering the Most Difficult Poker Game in the World

The challenges of six-player no-limit Texas Hold'em poker can be summarized in three main aspects:

1. Dealing with incomplete information.
2. The difficulty of achieving a Nash equilibrium.
3. Success requires psychological skills like bluffing.

In AI theory, poker is classified as an imperfect-information environment, which means that players never have a complete picture of the game. No other game embodies the challenge of hidden information quite like poker, in which each player has information (his or her cards) that the others lack. Additionally, actions in poker are highly dependent on the chosen strategy. In perfect-information games like chess, it is possible to solve a state of the game (e.g., the end game) without knowing about the previous strategy (e.g., the opening). In poker, it is impossible to disentangle the optimal strategy for a specific situation from the overall strategy.

The second challenge of poker is the difficulty of achieving a Nash equilibrium. Named after legendary mathematician John Nash, a Nash equilibrium describes a strategy for a zero-sum game under which a player is guaranteed not to do worse, regardless of the moves chosen by the opponent. In the classic rock-paper-scissors game, the Nash equilibrium strategy is to randomly pick rock, paper or scissors with equal probability. The challenge with Nash equilibria is that their complexity increases with the number of players in the game, to a level at which it is no longer feasible to pursue that strategy. In the case of six-player poker, achieving a Nash equilibrium is computationally impossible most of the time.

The third challenge of six-player no-limit Texas Hold'em is related to its dependence on human psychology. Success in poker relies on effectively reasoning about hidden information, picking good actions and ensuring that a strategy remains unpredictable. A successful poker player knows how to bluff, but bluffing too often reveals a strategy that can be beaten. These types of skills have remained challenging for AI systems to master throughout history.

Pluribus

Like many recent AI-game breakthroughs, Pluribus relied on reinforcement learning models to master the game of poker. The core of Pluribus's strategy was computed via self-play, in which the AI plays against copies of itself, without any data of human play used as input. The AI starts from scratch by playing randomly, and gradually improves as it determines which actions, and which probability distribution over actions, lead to better outcomes against earlier versions of its strategy.

Differently from other multiplayer games, any given position in six-player no-limit Texas Hold'em has too many decision points to reason about individually. Pluribus uses a technique called abstraction to group similar actions together and eliminate others, reducing the scope of the decision. The current version of Pluribus uses two types of abstraction:

· Action Abstraction: This type of abstraction reduces the number of different actions the AI needs to consider. For instance, betting $150 or $151 might make no difference from a strategy standpoint. To balance that, Pluribus only considers a handful of bet sizes at each decision point.

· Information Abstraction: This type of abstraction groups decision points based on the information that has been revealed. For instance, a ten-high straight and a nine-high straight are distinct hands, but they are nevertheless strategically similar. Pluribus uses information abstraction only to reason about situations on future betting rounds, never the betting round it is actually in.

To automate the self-play training, the Pluribus team used a version of the iterative Monte Carlo CFR (MCCFR) algorithm. On each iteration, MCCFR designates one player as the "traverser", whose current strategy is updated on that iteration. At the start of the iteration, MCCFR simulates a hand of poker based on the current strategy of all players (which is initially completely random). Once the simulated hand is completed, the algorithm reviews each decision the traverser made and investigates how much better or worse it would have done by choosing the other available actions instead. Next, the AI assesses the merits of each hypothetical decision that would have been made following those other actions. The difference between what the traverser would have received for choosing an action and what the traverser actually achieved, in expectation, on the iteration is added to the counterfactual regret for the action. At the end of the iteration, the traverser's strategy is updated so that actions with higher counterfactual regret are chosen with higher probability.

The output of the MCCFR training is known as the blueprint strategy. Using that strategy, Pluribus was able to master poker in eight days on a 64-core server, and it required less than 512 GB of RAM. No GPUs were used.

The blueprint strategy is too expensive to use in real time during an actual poker game. In actual play, Pluribus improves upon the blueprint strategy by conducting a real-time search to determine a better, finer-grained strategy for its particular situation. Traditional search strategies are challenging to implement in imperfect-information games, in which players can change their strategies at any time. Pluribus instead uses an approach in which the searcher explicitly considers that any player may shift to a different strategy beyond the leaf nodes of a subgame. Specifically, rather than assuming all players play according to a single fixed strategy beyond the leaf nodes, Pluribus assumes that each player may choose among four different strategies to play the remainder of the game when a leaf node is reached. This technique results in the searcher finding a more balanced strategy that produces stronger overall performance.

Pluribus in Action

Facebook evaluated Pluribus by playing against an elite group of players that included several World Series of Poker and World Poker Tour champions. In one experiment, Pluribus played 10,000 hands of poker against five human players selected randomly from the pool. Pluribus's win rate was estimated at about 5 big blinds per 100 hands (5 bb/100), which is considered a strong victory over its elite human opponents (profitable with a p-value of 0.021). If each chip had been worth a dollar, Pluribus would have won an average of about $5 per hand and would have made about $1,000/hour. The following figure illustrates Pluribus's performance: on the top chart, the solid lines show the win rate, plus or minus the standard error; the bottom chart shows the number of chips won over the course of the games.

Pluribus represents one of the major breakthroughs in modern AI systems. Even though Pluribus was initially implemented for poker, the general techniques can be applied to many other multi-agent systems that require both AI and human skills. Just like AlphaZero is helping to improve professional chess, it will be interesting to see how poker players improve their strategies based on the lessons learned from Pluribus.

Tags: Machine Learning, Deep Learning, Data Science, Artificial Intelligence, Thesequence
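The counterfactual-regret idea at the heart of MCCFR can be illustrated with a much simpler game. The sketch below is my own toy code, not Pluribus's: it runs plain regret matching in rock-paper-scissors self-play using expected payoffs, and the average strategy converges to the uniform Nash equilibrium mentioned in the article.

```python
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """Zero-sum payoff for playing action a against action b: +1 win, -1 loss, 0 tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def strategy_from_regrets(regrets):
    """Regret matching: play each action in proportion to its positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS  # no positive regret yet: play uniformly
    return [p / total for p in positive]

def train(iterations):
    # Seed a small bias toward rock so the learning dynamics are non-trivial.
    regrets = [1.0, 0.0, 0.0]
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        for i in range(ACTIONS):
            strategy_sum[i] += strat[i]
        # Expected utility of each action against the current (self-play) strategy.
        util = [sum(strat[b] * payoff(a, b) for b in range(ACTIONS))
                for a in range(ACTIONS)]
        expected = sum(strat[a] * util[a] for a in range(ACTIONS))
        # Counterfactual regret: how much better each alternative would have done.
        for a in range(ACTIONS):
            regrets[a] += util[a] - expected
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # average strategy over training

print(train(100000))  # each entry approaches 1/3, the uniform Nash equilibrium
```

Full MCCFR adds sampling over a game tree and the "traverser" bookkeeping described above, but the update rule (accumulate regret, then play in proportion to positive regret) is the same core mechanism.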
4,504
Reinventing Product Discovery at the Financial Times
Wait, wasn’t I doing product discovery already? Well, kinda. Let’s illustrate how we might have approached this before:

- User research helps us uncover an unmet need in business travel
- We prototype a digital travel guide and test this with users — they like it
- Following user feedback, we build the guides
- Our goal is to grow habit — readers coming back to the guides repeatedly
- When we look at the results, we see that 60% of users visit a guide and never do so again. We ask ourselves — how can we get users to visit more than once?
- We decide to add a newsletter sign-up option; our hunch is that this will encourage users to come back repeatedly
- Does it work? Yes, kinda.

The crucial mistake we made here is in not fully understanding the user problem — why were users not visiting again? Knowing this could have sent us down a completely different path. Other mistakes we sometimes made were in user testing:

- We’d typically test high fidelity prototypes
- We might test only one or two
- We’d try and find a ‘winner’

The danger here is that we could narrow down to one solution far too quickly — potentially missing a much bigger opportunity or key piece of insight. The end result of these mistakes was a tendency to go for the safe and familiar over the bold and uncertain.

Solving our discovery problem by ‘storming’

Recognising that we weren’t doing discovery as effectively as we could, we got together representatives from Product, Research, Design and Engineering to design new ways of doing things. This was a ‘storm’ — 1 week focussed only on this, with a proposal to our leadership team at the end of it. The key thing here is that we solved it bottom-up, not top-down. Our sponsor from the leadership team, Monica Todd, empowered us to find the solution ourselves. The output was a framework for discovery and a ‘Discovery Guild’ to support our teams in the process. You can read about how the Guild is bringing about culture-change here.
9 months after we introduced our discovery process, where are we now? All of our customer facing teams have now adopted this new approach to product development.

It’s all about the user — A greater emphasis on uncovering the problems to solve means that we now understand our users better than ever

New approaches to design and research — We soon learned that our old approaches constrained us creatively — we now actively encourage new ways to ideate, test and validate our solutions

More ambitious solutions — For initiatives like our homepage project, we have seen solutions that are far bolder than what we’d see in the past

A more open and supportive culture — Our Discovery Guild creates a safe space for product-people to share successes, failures and what we’ve learned along the way

Moving from a process to a mindset

Our first iteration of this discovery framework has given us a better understanding of our users, new ways of approaching problems and a shift in our product culture. That said, there’s always room for improvement — looking back, it’s clear that our process was powerful in creating a cultural shift but is perhaps too heavy for where we want to get to. Our vision for the future is discovery as a mindset, not a process. This means greater confidence in knowing how to explore and tackle problems. More freedom in determining what approach to take, rather than one set process. More comfort with risk and a greater tendency towards experimentation. We’d love to hear how you approach discovery in your organisations. We’d be happy to speak to you to share our experiences in more detail.
Please feel free to reach out to me at [email protected]
https://medium.com/ft-product-technology/reinventing-product-discovery-at-the-financial-times-23583c39e74f
['Martin Fallon']
2020-12-11 13:14:06.396000+00:00
['Product', 'Product Management', 'Discovery', 'Design', 'UX']
4,505
Leverage Python and Selenium based Automation
You right-click on any element that you want the XPath of. You inspect the element, go to the element in the HTML in the developer console, right-click on the element, click on Copy, and go down to the option Copy XPath. Refer to the image above for this. All the statements with sleep() are there to simulate some delays. There are also try blocks in the code so that execution doesn’t stop if any error occurs.

Lines 4–8: These are the import statements to include all the necessary libraries used for this project.

Lines 11–23: This is a method for logging into your Instagram account. The webdriver enters your credentials in the browser controlled by this script and executes the commands given to it. Line 17 requires you to add your username and line 20 requires you to add your password.

Lines 26–34: This is a method to click on the pop-ups that you get between the login and the home page of your account.

Lines 37–39: A driver object based on webdriver.Chrome() is created. This is the driver emulating all the user actions. It is followed by calls to the login() and post_login() methods described above.

Lines 41–51: The hashtag_list is the list of hashtags you have selected based on your niche. You need to add them here as strings separated by commas.

Lines 43–45: These are the lines where you get the list of already followed users, so that you don’t unfollow users you had already followed. When you run the bot for the first time, uncomment line 43 and comment lines 44 and 45. From the next run onwards, comment line 43, uncomment lines 44 and 45, and don’t forget to change the file name on line 44.

Lines 47–51: These are variables keeping track of newly followed people, new likes, and comments posted. This information will be printed at the end when the bot is done executing.

Lines 53–119: Let’s understand this big chunk of code in steps:
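The "already followed" bookkeeping described for lines 43–45 can be sketched in plain Python, independent of Selenium. The file name and function names below are my own illustration, not the article's actual code; the point is simply to persist the set of users you followed on earlier runs, so the bot never unfollows them.

```python
import json
import os

FOLLOWED_FILE = "already_followed.json"  # illustrative file name

def load_already_followed(path=FOLLOWED_FILE):
    """First run: no file exists yet, so start empty (the 'line 43' case).
    Later runs: load the users recorded previously (the 'lines 44-45' case)."""
    if not os.path.exists(path):
        return set()
    with open(path) as f:
        return set(json.load(f))

def save_followed(followed, path=FOLLOWED_FILE):
    """Persist the followed users so the next run can load them."""
    with open(path, "w") as f:
        json.dump(sorted(followed), f)

def users_safe_to_unfollow(followed_this_session, already_followed):
    """Only unfollow accounts this session followed, never pre-existing ones."""
    return followed_this_session - already_followed
```

A run of the bot would then call load_already_followed() at startup, add each newly followed user to a session set, and pass both sets to users_safe_to_unfollow() before any unfollow loop.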
https://medium.com/dataseries/leverage-python-and-selenium-based-automation-56a92e707745
['Tarun Gupta']
2020-12-25 14:58:39.298000+00:00
['Python', 'Instagram', 'Bots', 'Automation', 'Towards Data Science']
4,506
Working with Cloud Spanner and Java
We’ve gone into the architectural details of Google Cloud Spanner in previous posts, and now it is time to get a little deeper into the details of building an application using Google Cloud Spanner. If you decide to build your application on Cloud Spanner, you can rely on ANSI 2011 SQL support and client libraries for multiple languages. There are great tutorials that help you get started, though they don’t go into much depth regarding the different options when using Java: Data Manipulation Language or Mutations via the client libraries, or SQL/DML via the two JDBC drivers. I’m not going to go into full depth on these concepts, but I hope to provide enough information to help you understand your different options as a Java developer working with Cloud Spanner. To make sure I got the details right, this article and the code written for it (which you can clone here) benefited from the technical expertise of Java expert Peter Runge (more appropriately prunge-helix on GitHub). We will be using the same schema as the Google Cloud Spanner getting started guides, which is explained in detail on the Schema and data model page in the Cloud Spanner documentation. We are essentially creating a music application, and our catalog contains details on Singers and their Albums. The strong parent-child relationship between singers and albums lends itself well to a unique Cloud Spanner optimisation called interleaved tables, which are described on that page and well worth understanding.

Examples and Options

ORMs and the JDBC driver

If you are a seasoned Java programmer, it may be easier or more relevant to use an ORM or the JDBC driver to interact with Cloud Spanner. ORMs can also make it easier to manipulate data in Cloud Spanner in your language of choice without having to write DML. In many cases these are a wrapper around the existing Cloud Spanner APIs.
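Before the ORM and JDBC examples, it may help to see the schema itself. The interleaved Singers/Albums layout described above can be declared with DDL along these lines (column names follow the Cloud Spanner getting-started docs; the STRING sizes are illustrative):

```sql
CREATE TABLE Singers (
  SingerId   INT64 NOT NULL,
  FirstName  STRING(1024),
  LastName   STRING(1024),
) PRIMARY KEY (SingerId);

-- Album rows are physically co-located with their parent Singer row
CREATE TABLE Albums (
  SingerId   INT64 NOT NULL,
  AlbumId    INT64 NOT NULL,
  AlbumTitle STRING(MAX),
) PRIMARY KEY (SingerId, AlbumId),
  INTERLEAVE IN PARENT Singers ON DELETE CASCADE;
```

ON DELETE CASCADE means deleting a singer also deletes that singer's albums, which fits the strong parent-child relationship mentioned above.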
For example, in Java with Spring, spring-cloud-gcp-starter-data-spanner uses the Cloud Spanner APIs (com.google.cloud.spanner.*) to execute statements. When following modern programming practices, it is much easier and more consistent to use ORMs to interact with the database compared with interspersing DML in your code. As ORMs often make use of the existing client libraries, all the benefits of working with DML vs Mutations etc. are maintained. For ORM with Spring Data, we will first create the Singers table: Now we will create the Albums table: And of course we have to create the interfaces: And now we can use the tables: There are two JDBC drivers, including an open source driver written by Google. It makes use of the client libraries to connect to Cloud Spanner, and allows you to execute SQL and, by extension, DML. If your statements require many objects to be held in memory prior to execution, it may be more efficient to use the JDBC driver to execute statements against the database. Large statements that require multiple joins, group-bys, and aggregations may be onerous to manage in an object oriented manner, and it may be simpler to write a single DML statement containing those actions instead. In terms of execution, though, the two approaches are expected to perform roughly the same, whether via ORM or DML. Of course, if you are connecting an off the shelf application, it is likely that the simplest integration would be connecting via the JDBC driver.

A quick note on SQL/DML

Cloud Spanner supports ANSI 2011 compatible SQL, enabling you to query databases using declarative SQL statements that specify what data you want to retrieve. There are SQL best practices that can help Cloud Spanner find the relevant data in the most efficient way, and understanding how Cloud Spanner executes SQL statements can go a long way toward improving performance. For example, use of parameters and secondary indexes are two of the ways that query performance can be improved.
Data Manipulation Language (DML) and Partitioned DML

DML can be used to execute INSERT, UPDATE, and DELETE statements in the Cloud Console, the gcloud command-line tool, and the client libraries. DML is designed for transaction processing, whereas Partitioned DML is designed for bulk updates and deletes, with minimal impact on concurrent transaction processing. This is achieved in Partitioned DML by partitioning the key space and running the statement over partitions in separate, smaller-scoped transactions. DML statements are executed inside read-write transactions, acquiring locks only on the columns you are accessing. For reads, shared locks are used to ensure consistency, with writes or modifications resulting in exclusive locks. The following DML best practices will help improve performance and minimise locking. Now we will execute the same steps illustrated in the ORM example, by using the Java JDBC driver to execute DDL and DML statements.

Mutations

A Mutation represents a sequence of inserts, updates, and deletes that Cloud Spanner applies atomically to different rows and tables in a Cloud Spanner database. These are executed via the Mutation API. Although you can commit mutations by using gRPC or REST, it is more common to access the APIs through the client libraries. Peter Runge will publish a post on working with DML and Mutations next week if you want to delve a little deeper into that topic. Since this is the third example, we are going to assume you have created the tables, and save some time by just using the Mutation API to add data to our Singers and Albums tables. If you just wanted to use the standard client library, the getting started guide takes you through the same example, which we reference below, and the code is published on GitHub. The client libraries are also used by the ORM and JDBC drivers, so you can also use them to execute DDL:
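To make the transactional vs. partitioned distinction concrete, here are hedged examples against the same Singers/Albums schema (the values are made up). The first two statements are ordinary transactional DML; the last is the kind of bulk cleanup suited to Partitioned DML:

```sql
-- Transactional DML: runs inside a read-write transaction
INSERT INTO Singers (SingerId, FirstName, LastName)
VALUES (1, 'Marc', 'Richards');

UPDATE Albums
SET AlbumTitle = 'Total Junk (Remastered)'
WHERE SingerId = 1 AND AlbumId = 1;

-- Partitioned DML: Cloud Spanner partitions the key space and runs the
-- statement in separate, smaller-scoped transactions per partition
DELETE FROM Albums WHERE AlbumTitle IS NULL;
```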
https://medium.com/google-cloud/working-with-cloud-spanner-and-java-16e44ebc63b6
['Ash Van Der Spuy']
2020-11-16 20:57:55.573000+00:00
['Cloud Spanner', 'Java', 'Database', 'Object Relational Mapping', 'Google Cloud Spanner']
4,507
8 Tips for Marketing
Photo by Merakist on Unsplash

Before going on, I want to mention that I was able to write most of this article thanks to my colleague and friend Sona Madoyan, Head of the Marketing Department at Fnet.

What is especially important about marketing trends? When developing a marketing strategy, you always need to take into account the specifics of the industry and the requirements and specifics of potential buyers. In addition to all this, you always need to follow the news and a few important “marketing rules” that are relevant at all times. So, what to do?

1. Be smart when spending your marketing budget

Photo by Kelly Sikkema on Unsplash

Since the marketing budget is largely insufficient to take advantage of all the ways and means of implementing marketing goals, it is very important not to focus the entire budget on one direction, but to diversify it. You need to choose the means and ways that are most effective and show results faster. And to evaluate the effectiveness of the chosen path or advertising tool, you can use the ROAS (Return on Ad Spend) indicator.

2. Create unique content

Photo by Will Francis on Unsplash

Focus your forces on content marketing and create original content, especially giving preference to video content. As the famous marketer David Baba would say: “Content marketing is like a first date. If all you do is talk about yourself, there won’t be a second date.”

3. Combine online and offline strategies

Photo by Campaign Creators on Unsplash

Even though the usage of digital platforms and being active in a digital environment are prioritized, you need to reach the audience in several ways and inform them about your product/service. It is also important to use offline tools. And it is very important to properly and effectively combine online and offline marketing strategies: they must be long-lasting and complementary.

4. Gamify the offer

Photo by JESHOOTS.COM on Unsplash

Another important trend to follow is gamification.
It is probably no secret that interactive, critical, and useful content almost always reaches a wider audience. And because people like to compete with each other through games, gamification may drive more engagement.

5. Consider each generation

Photo by Jessica Lewis on Unsplash

Generation Z (as the digital generation is often called) is the youth that created and will continue to create demand. Therefore, except for specific goods and services, in all other cases it is extremely important to follow Generation Z, since marketing and product trends are built by this generation, and products (goods and services) must be formed in accordance with their expectations.

6. Find more partners

Photo by Paweł Czerwiński on Unsplash

Creating partnerships and relationships with companies in different areas can lead to synergy. By combining your audience and other resources with partners, joint marketing activities with lower costs can bring the greatest results.

7. Customize your offer

Photo by Mick Haupt on Unsplash

The offer of any product or service must be segmented and personalized on a behavioral basis. Try to make an offer in such a way that everyone who receives it is sure that this product or service is for them.

8. Direct all funds to promote sales

Photo by NordWood Themes on Unsplash

Despite the divergence in views, ideas, and structure of some companies, the Marketing and Sales departments are like brothers working towards the same goal. Sales is a part of Marketing, Marketing is a component of Sales, and it is important to aim your marketing strategy at stimulating sales. I’ll elaborate on the issues concerning marketing-sales relationships in other articles.

Thank you very much for reading this article, I hope you’ve enjoyed it. Special thanks to Sona Madoyan, Head of the Marketing Department at Fnet, for making this article possible. If you have any questions, feel free to ask in the comments or contact me directly via Facebook, Twitter, or LinkedIn. Stay safe and best of luck!
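The ROAS (Return on Ad Spend) indicator mentioned in tip 1 is just the revenue attributed to a campaign divided by what the campaign cost. A minimal sketch, with made-up numbers:

```python
def roas(attributed_revenue, ad_spend):
    """Return on Ad Spend: revenue the ads drove divided by what they cost."""
    if ad_spend <= 0:
        raise ValueError("ad spend must be positive")
    return attributed_revenue / ad_spend

# Example: a campaign cost $2,000 and drove $7,000 in attributed revenue.
print(roas(7000, 2000))  # 3.5, i.e. every $1 of ad spend returned $3.50
```

A ROAS above 1.0 means the channel at least paid for itself; comparing ROAS across channels is one way to decide where to diversify the budget.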
https://uxplanet.org/8-tips-for-marketing-deb7eddac139
['Daniel Danielyan']
2020-08-24 16:14:50.441000+00:00
['Marketing', 'UX Research', 'Digital Marketing', 'Sales', 'Success']
4,508
Confessions of an Obsolete Child Actor
Confessions of an Obsolete Child Actor Being cast in ‘School of Rock’ was a defining moment in my life — for better or worse Me, now. Photo: Sarah Elizabeth Larson A few months ago, I was in hair and makeup for a feature with one of my castmates, a 12-year-old girl. She was on set with her mom and little brother. He was playing games on a phone while the mother and daughter ran lines together. When the mom stopped her kid mid-sentence to give her a line reading, I was instantly transported back to my youth. I felt bad for my castmate. I felt bad for my sisters, who spent years waiting in the car with my mom while I was in guitar lessons or at auditions. I felt bad for all the other kids in all the waiting rooms of all the auditions. Did any of us really want to be there? Of course, I was there by choice that day — if you don’t count all the choices that led me to pursue acting in the first place. Back in 2003, I was cast as Katie in the film School of Rock. Katie was 10 years old, played bass guitar, and had about five lines that mostly consisted of one word each. I got to meet some of my idols, attend the MTV Movie Awards (hosted by America’s then-sweetheart Lindsay Lohan), and travel the world — all before I got my first period. Then, after my brief break from obscurity, I fell into the classic child actor pattern. I’ve spent the last 16 years of my life trying to be anything but “that girl from that thing” despite the blunt reality: No one even cares that much. Me, age 10. Photo: Wendy Brown Let me preface this by saying that I am absolutely grateful for the experience as a whole. For those who reach out to me expressing that School of Rock inspired them to pick up an instrument. For the femmes who let me know Katie was their first queer crush. (Does this make me a queer icon? If so, love that for me.) For all the opportunities that followed. And especially for my castmates, who I see as forever family. Nothing will ever diminish these factors. 
However, I do have some very complicated feelings about School of Rock, so let’s dive in, shall we? From as early as I can remember, my parents told me I was “destined to be a star.” They were the textbook definition of toxic stage parents. They praised me and gave me all the validation and attention in the world. They spoiled me. They called me perfect and beautiful. They kept a journal of all the adorable and charming things I’d do and say. I started taking guitar lessons when I was four and became the family’s little prodigy, against my own will. It was expected that if I were to make an appearance at a family function, my guitar would be there, too. My mom would coach and critique me from the sidelines. At school, I desperately wanted to be liked and to fit in. All of the kids in my class were either in dance or sports, so we had nothing in common. I was bullied immensely for being the “weird classical music girl,” and my only friends were my sisters and my guitar. When I was nine, I was on NPR’s From the Top, a radio show that showcased kids who played classical music. A few months later, a casting director reached out to my guitar teacher expressing interest in having me audition for Untitled Jack Black Project. I didn’t know what any of this meant. I was 10; all I really cared about was ice cream and having, I don’t know, one friend who wasn’t a blood relative or an inanimate object. Initially, I read for the band manager role (which eventually went to Miranda Cosgrove—hey, sis) and played a few classical songs on guitar. For the callback, I was asked to “rock out.” My parents bought me a kid-sized electric guitar, and I played “American Woman” by Lenny Kravitz. I found out I’d booked it the next day. They told me I’d be playing a character they wrote specifically for me and that I’d be leaving in two days for New York, where I’d live in a hotel with my mom for four months. 
The idea that Mike White, Jack Black, and Richard Linklater saw something in me still blows my mind. While on set, I met 14 kids who were underdogs like me. We all fell in love with each other pretty much instantly, and our moms were a cast of their own (and honestly could have had a highly entertaining reality television show). To this day, we have a family text thread where we champion each other’s exciting lives. On set, I was a walking panic attack. I would fuck up my lines; I would look into the camera and ruin takes. When I looked into that lens, what I saw was my entire family saying, “Don’t fuck this up for us,” and my bullies laughing at me and calling me weird. All this to say that off-screen, it was fun as hell. We’d have cast and crew karaoke parties and play Dance Dance Revolution between takes. I got to see Heather Headley and Adam Pascal in the original Broadway cast of Aida. I got to eat room service every night. I got to live the Eloise fantasy I never knew I wanted. And then we wrapped. I went home to Chicago, and because kids are assholes, I was bullied even more when I came back to school. I’ll never forget one girl who came up to me and asked me to sign her lunch card, then tore it up and threw it in the trash in front of me. When we started the press tour, I was pulled out of school and got to be with my friends again. Upon seeing myself on the big screen at the premiere, I judged myself for being the tallest girl in the cast, for having bags under my eyes and weird teeth, for having a fat belly and no breasts. I started hating my body and developed an eating disorder. I remember being pulled out of school to go to the Toronto International Film Festival (brag) when I was 11. At an afterparty, having snuck a sip of champagne and snacking on a cup of wasabi peas, I had the realization that I was no longer a kid.
I had a job now, and my job was to book another big movie so I could pay my parents’ mortgage. Sometimes, I questioned whether I continued to act for myself or for them. My mom, despite having zero experience in the film industry, had by then taken on the role of my manager. She was always throwing in her unhelpful two cents when it came to my appearance. Neither of us really knew what we were doing. We’d drill lines together in the car on the way to auditions. She was more off-book than I was. She would futz with my hair and tug at my clothes in the lobby. If I did a good job at an audition, I’d get Panera; if I did a great job, I’d get Panera and a Frappuccino. On message boards (what a time 2003 was), grown men would sexualize me, commenting, “The bassist is going to grow up to be hot” and “Can’t wait ’til she’s 18.” My mom would read the comments online for hours on end, relaying all of the negative ones to me. When I was in sixth grade, a strange man in a trench coat came to my school and tried to take photos of me, and absolutely nothing was done about it. For the first time, I felt unsafe existing. When my parents brought this to my school’s administration, the principal said, “I guess that’s the price of fame.” I was transferred to a smaller private school immediately. “What a relief,” I thought. “I can start fresh, leave the bullies and stalkers behind. I won’t even mention School of Rock. I can go back to being a kid.” But every time I entered a new school, it would only take a few days before someone found out my secret. I went to three different high schools, and at each one, kids would scream School of Rock quotes at me in the halls. It was annoying and embarrassing. I constantly felt trapped. If I reacted to them positively, I was labeled a bragging snob. If I reacted negatively or ignored them, I was labeled a cold, ungrateful bitch. 
Every time someone brought up the movie, I didn’t think of my personal highlights, like meeting the Olsen twins or eating Kobe beef with Jack Black and my dad in Tokyo or being on Sharon Osbourne’s talk show. I thought of the girl ripping up my autograph in the cafeteria. I thought of the trench coat guy coming to my school. I thought of my mom reading the awful comments on the message boards, the bullying, and the shame of being sexualized as a 10-year-old. From the age of 14, I used drugs, alcohol, sex, food, and self-harm to numb all of this pain. I’ve survived dozens of toxic relationships and three suicide attempts. I’m not saying all of this is because I played bass in a movie when I was a kid but because I spent over a decade terrified that I’d peaked at 10 years old. Even recently, over half of the comments on my social media are from dudes who had a crush on the 10-year-old me (some of them are really gross, and I want to thank my friends who never hesitate to drag those goblins). Sometimes the comments are people asking me why I stopped acting, which fills me with rage. Actors are worth so much more than their IMDb credits. Today, I live in Los Angeles, where I work for a skin care company. I still act and perform. I’ve traveled the country as a stand-up comedian and performed in several plays, web series, indie feature films, and bands. I’ve been fortunate enough to be welcomed into Chicago’s theater and comedy scenes. I’ve competed on NBC’s Bring the Funny. And still, no credit or feat is as cool as the fact that I have been in recovery from alcoholism and addiction for two years (and frankly, it’s fucking hard to maintain sobriety, but sometimes the idea of a TMZ headline reading “That one girl from School of Rock dead from overdose at 27” is all it takes to keep me from a relapse).
I’m grateful that School of Rock happened. It’s a great film, and it was, to its core, a fun experience. I’m grateful for the fans who picked up an instrument because of us. And I’m even grateful to my parents; I recognize now that they have unresolved trauma of their own. They were simply doing their best, and unfortunately, their best resulted in some pain. But I get to recover from that pain every day, through therapy and self-reparenting. To this day, I still get recognized randomly at airports and coffee shops. People ask if I’m “the girl from School of Rock.” For a long time, I used to say no and keep walking, but now that I’m in a better place emotionally, I humbly say yes. I no longer carry resentment for people who only know of me as “that girl from that thing.” I know deep within my bones that I’m so much more — and that’s good enough for me.
https://humanparts.medium.com/tales-of-an-obsolete-child-actor-92a120f08576
['Rivkah Reyes']
2020-04-13 22:22:57.294000+00:00
['Mental Health', 'Culture', 'Film', 'Self', 'Life Lessons']
4,509
My Learning Trajectory, Chapter One: Books, Courses, Total Worth, and Total Hours
My Learning Trajectory, Chapter One: Books, Courses, Total Worth, and Total Hours
How I acquired all my knowledge with “only” 1074 hours and saved myself more than €1000
Key words and ideas
Amount of books read, total hours spent on reading books, courses and spaced repetition, and total worth in money; Distinguishing between deliberate and non-deliberate practice.
Foreword
I have tried to quantify the amount of hours and money spent on books and courses since the year 2015. I will write them down here together with my thoughts about the numbers. All the things I have written in this autobiography are thanks to everything I have quantified in this chapter, or at least, 90% of everything I have written about. My habit of quantifying things is meant to give me a perspective on how efficiently, and with how much time, I acquired all my knowledge.
Books read and total hours
Since the year 2015, I have read a total of 47 books, the majority of which were read in 2018–2019, because of the lack of public school (school ≠ education). My current goal is to read approximately 20 books a year, although more is always welcome. They mostly consist of nonfiction, scientific or research books. You can see all the books I have read by either googling “Goodreads, Lorenz Duremdes” or going to this link: https://www.goodreads.com/review/list/83183601-lorenz-duremdes?shelf=read Because they are mostly scientific or research books, I estimate it takes me around 10 hours to read one book; coupled with the fact that I tend to memorize them as much as possible, something I achieve with help from a website called ‘Quizlet’, that brings me to 14 hours. It takes around 30 minutes to complete one Quizlet ‘set’, which I spread over 2 to 3 years with spaced repetition at a frequency of 7 times, and 30 minutes multiplied by 7, divided by 60 minutes (an hour), gives you 3.5 hours, or approximately 4.
You can see my Quizlet profile with this link: https://quizlet.com/WilliamJamesSidis 47 books multiplied by 14 hours gives us 658 hours of deliberate practice. P.S. my Quizlet spaced repetition schedule in days is: 7 > 14 > 28 > 60 > 120 > 240 > 365
Books read and total worth
Now onto their total worth in terms of money. I do keep track of it in my Google Sheets document, and currently all my 47 books are worth approximately €763.98. Now, I used the word ‘approximately’, because here is the plot twist: I actually paid 0 euros for all my 47 books. While reading, I do tend to check on websites like Amazon how much a book is worth. Another way is to say an average book costs approximately €15; multiply that number by 47 and you get €720, a number close to my own approximation.
Courses: total hours and worth
As of the year 2019, I have followed four courses:
Finance
Learning How to Learn: Powerful mental tools to help you master tough subjects
Science of Exercise
Existential Well-being Counseling: A Person-centered Experiential Approach
Together, they are worth approximately €337 and require 272 hours. If we add the hours spent on spaced repetition on Quizlet, we get 288 hours. Again, I have spent 0 euros to gain all this knowledge, because I am counting what the certificates (which are optional after completion) would cost.
Total hours writing
Another way to gain knowledge and to learn is to write, namely this autobiography in my case. It takes me around 2 hours to write one chapter, and I have written 64 chapters including this one so far, which brings me to 128 hours.
Bonus: time spent in the ‘gym’
This subchapter is more of a bonus, since I want to use this chapter to explain how I gathered all the knowledge of this autobiography, the time it took, and the potential money it would cost. I have been going to the ‘gym’, or rather, my own home gym since the year 2016.
I try to train 6 times a week, but let’s count deload weeks and times of sickness into the equation too, and it becomes more like 4 days a week on average. I spend around 30 minutes to 2 hours in the gym depending on how I feel, so that’s an average of one hour. The calculation over 3 years’ time becomes: 1 hour multiplied by 4 days multiplied by 52 weeks in a year multiplied by 3 years = 624 hours.
Deliberate practice: total time and worth
So putting the time spent on reading books, courses, and writing together, we get 658 hours plus 288 hours plus 128 hours = 1074 hours of deliberate practice. The total worth would be: €763.98 plus €337 = €1,100.98. Again, I have spent 0 euros on all of this.
Non-deliberate practice: total time
What I would see as non-deliberate practice that still adds to my knowledge base are things like gaming, reading random articles without trying to memorize everything, watching documentaries, daydreaming, etc. Let’s say the time spent on non-deliberate practice that also happens to be effective is ¼ of the time spent on deliberate practice. This gives us the number 268.5 hours. Together with the total time spent on deliberate practice, we get 1342.5 hours.
Bonus: overall total time and average time spent every day
Now, if we want to count gym time too, we get 1966.5 hours over 3 years’ time (2016–2019). Divide this number by 3 years of 365 days, and we get approximately 1.8 hours of personal development every day. That’s not a lot, but the majority of it (like 80% in the area of courses and books) was spent after I finished high school. It reminds me of this quote: “I have never let my schooling interfere with my education.” ―Mark Twain Subscribe for more content: https://mailchi.mp/261ae9e13883/autibiography
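The totals above are easy to re-derive. A small Python sketch, using only the figures quoted in this chapter, reproduces every number:

```python
# Recompute the chapter's totals for deliberate and non-deliberate practice.
BOOKS = 47
HOURS_PER_BOOK = 14                      # ~10 h reading + ~4 h Quizlet spaced repetition
book_hours = BOOKS * HOURS_PER_BOOK      # 658 h

course_hours = 272 + 16                  # course hours + Quizlet repetition = 288 h
writing_hours = 64 * 2                   # 64 chapters at ~2 h each = 128 h

deliberate = book_hours + course_hours + writing_hours  # 1074 h
non_deliberate = deliberate / 4                         # effective "fun" learning, 268.5 h
total = deliberate + non_deliberate                     # 1342.5 h

gym_hours = 1 * 4 * 52 * 3               # 1 h/session, 4 days/week, 52 weeks, 3 years = 624 h
grand_total = total + gym_hours          # 1966.5 h

per_day = grand_total / (3 * 365)        # hours of personal development per day

print(deliberate, total, grand_total, round(per_day, 1))
```

Running it confirms the 1074 hours of deliberate practice and the roughly 1.8 hours per day quoted above.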
https://medium.com/superintelligence/10-02-2019-my-learning-trajectory-chapter-one-books-courses-total-worth-and-total-hours-6d106650d323
['John Von Neumann Ii']
2019-11-10 20:29:01.483000+00:00
['Course', 'Reading', 'Autobiography', 'Books', 'Learning']
4,510
A Bold And Beautiful Salad For Summer Days
A Bold And Beautiful Salad For Summer Days This feisty vegan fajita salad will leave you wanting more. Feisty Fiesta Salad, photo by author Okay, so full disclosure: I’m OBSESSED with walnut “meat!” I’ve been making up any excuse to make it and use it. It’s one of my favorite meat substitutes right now because it is sooo easy to make and as delicious as you allow it to be. If you season it well, you’ll be licking your fingers and asking “Walnuts?! What walnuts?!” It’s meaty, hearty and reminiscent of minced chicken or pork on its own. If you add mushrooms to the mix with the right seasoning, you can easily get a beefy flavor. And, you know what else? You can control the salt, and season it exactly to your liking unlike the prepackaged vegan sausages that I love. I tried Beyond Meat Hot Italian Sausages and I’m not going to lie, they SLAP! However, this here walnut meat is a great quick substitute for days when you want to eat a little cleaner. You might be asking yourself why I am waxing poetic about walnut meat when this is a fajita salad recipe, but the truth is, the walnut meat is the centerpiece of this recipe for me. Alongside some crisp sweet peppers, onions, corn, cucumbers and tomatoes (optional), the walnut meat makes this the perfect fajita salad. Not to mention, it’s literally the only thing that you need to cook in this entire recipe. AND, you don’t even have to do that; you can make your walnut meat raw if you like because as Tabitha Brown says “that’s your business!” If you don’t like walnuts, try pecans. If you have an allergy or nuts are not your jam, you can add whatever vegan mince or grounds you like. For the love of all that is holy, just season them up really well.
Ingredients:
1/4 cup of walnut pieces
Trini green seasoning
Paprika
Garlic powder
Roucou/Goya Sazon/achiote powder (optional)
Liquid aminos (soy sauce, coconut aminos, or tamari will work)
2 medium-sized cucumbers, diced
1 can of sweet corn (drained and washed)
1–2 small white onions, julienned
1 medium-sized green bell pepper (sweet pepper), julienned
1 medium-sized tomato, diced
1 large clove of garlic
Red chili flakes (optional)
Black pepper
Olive oil
Lime juice
Mustard
Agave (honey or brown sugar will work)
https://medium.com/one-table-one-world/a-bold-and-beautiful-salad-for-summer-days-ac6ac3c49e7b
['Melissa A. Matthews']
2020-07-07 14:31:01.325000+00:00
['Summer', 'Cooking', 'Vegan', 'Food', 'Recipe']
4,511
An Introduction to Azure Stream Analytics Job
Stream Analytics Pipeline, Source: docs.microsoft.com
Introduction
An Azure Stream Analytics job has many capabilities; in this post we are going to discuss a few of them. Azure Stream Analytics is basically an engine that processes events. These events come from the devices we have configured: it can be an Azure IoT Dev Kit (MXChip), a Raspberry Pi, and many more. The stream analytics job has two vital parts:
Input source
Output source
The input source is the source of your streaming data; in my case, it is my IoT Hub. The output source is the sink you are configuring. I had configured the output to save the data to an Azure SQL database. Let’s just stop the introduction part now and start creating our own Stream Analytics job. You can always see this article on my blog here.
Background
I recently got my MXChip (Azure IoT Dev Kit) and I was surprised by the capabilities of that device. It has a lot of sensors within it, like temperature, humidity, pressure, a magnetometer, security, etc. Then I thought it was time to play with it. So the basic idea here was to:
Configure the device to send the data to the IoT Hub
Select the IoT Hub as a stream input
Send the output to an SQL Server database
In this article, we are going to concentrate on how to create a Stream Analytics job and how you can configure it to save the stream data to the SQL Server database.
Prerequisites
To do the wonderful things, we always need some prerequisites.
Azure Subscription
MXChip Azure IoT Dev Kit
An active IoT Hub
Windows Driver Kit (WDK) 10
IoT Core ADK Add-Ons
Windows 10 IoT Core Packages
The Raspberry Pi BSP
Custom FFU image we have created
Creating the Azure Stream Analytics Job
Log in to your Azure Portal, click on Create a resource, and then search for “Stream Analytics job”. Once you have clicked on the Create button, it is time to specify the details of your job.
Job Name Subscription Resource Group Location Hosting Environment I would strongly recommend selecting the same resource group as your IoT Hub for the Stream Analytics Job, so that you can easily delete the resources when they are no longer needed. Once the deployment is successful, you can go to the resource overview and see the details. Configure Inputs In the left menu, you can see a section called Job topology; that’s where we are going to work. Basically, we will set up the Inputs and Outputs and then write a query that takes the inputs and sends the values to the configured output. Click on the Inputs label, click Add stream input, and then select IoT Hub. On the next screen, you will have the option to select an existing IoT Hub or to create a new one. As I have already created an IoT Hub, I selected the existing one. Please note that you are allowed to use special characters in the Input alias field, but if you do, make sure to wrap the alias in [] in the query, which we will be creating later. About the special characters in the Input alias field Once you have successfully configured the inputs, we can go ahead and configure the outputs. Configure Outputs Click on Outputs in the Job topology section, click Add, and then select SQL Database. You can either create a new database or select one you have already created. I used an existing database and table. Configure the Query Once you click the Query label in the left pane, you will be given an editor where you can write your queries. I am using the query below. SELECT messageId, deviceId, temperature, humidity, pressure, pointInfo, IoTHub, EventEnqueuedUtcTime, EventProcessedUtcTime, PartitionId INTO streamoutputs FROM streaminputs As you can see, I am just selecting the fields I may need and saving them to our stream output. 
You can always select all the fields with a SELECT * query, but the problem with that is that you will have to set up the table columns in the same order as the stream data. Otherwise, you may get an error like the one below. Encountered error trying to write 1 event(s): Failed to locate column ‘IoTHub’ at position 6 in the output event Stream analytics query error If there are any errors, you can see them in the Output details. Run the Stream Analytics Job and See the Data in the Database As we have already done the initial setup, we can now start our Stream Analytics Job; please make sure that the IoT Hub is running and the device is sending data to it. If everything is working as expected, you will be able to see the data in the SQL Server database. You can either connect your MXChip device to the network and test this, or use the custom simulator app. If you are using the simulator console application, make sure that you provide the device id, key, and IoT Hub URI correctly; otherwise you will get an unauthorized error, as explained here. Test the Stream Analytics Job Inside the Portal You also have the option to test the functionality in the portal itself. The only thing you will have to do is prepare the sample input data. I have prepared the sample JSON data as follows. [ { "deviceId": "test-device", "humidity": 77.699449415178719, "pointInfo": "This is a normal message.", "temperature": 32.506656929620846 }, { "deviceId": "test-device", "temperature": 52.506656929620846, "humidity": 17.699449415178719, "pointInfo": "This is a normal message." }, { "deviceId": "test-device", "temperature": 42.506656929620846, "humidity": 57.699449415178719, "pointInfo": "This is a normal message." } ] Now we can go to the Query section and upload the sample data file for our inputs. In the next window, select the JSON option and upload your JSON file. Click the Test button, and you should be able to see the output as below. Conclusion Wow! 
Now we have learned: what an Azure Stream Analytics Job is, how to create an Azure Stream Analytics Job, how to add Inputs to Azure Stream Analytics, how to add Outputs to Azure Stream Analytics, how to add a custom Query in Azure Stream Analytics, and how to test the Stream Analytics Query with sample data. You can always read my IoT articles here. You can always follow me here on Medium and Twitter. Your turn. What do you think? Thanks a lot for reading. Did I miss anything that you think is needed in this article? Did you find this post useful? Kindly do not forget to share your feedback. Kindest Regards Sibeesh Venu
https://medium.com/medialesson/an-introduction-to-azure-stream-analytics-job-24fa5e76f48f
['Sibeesh Venu']
2019-01-22 14:36:28.207000+00:00
['Cloud Computing', 'Azure', 'IoT', 'Stream Analytics', 'Iot Hub']
4,512
You’ll Never Love Your Past as Much as You Love Your Future
You’ll Never Love Your Past as Much as You Love Your Future When are we the happiest? Photo by Clay Banks on Unsplash A 15-year-old’s greatest wish is to be 18, and yet, most 21-year-olds will say their 18-year-old selves were kind of dumb — even though both are just three years away from that age. No matter how you change the numbers, this phenomenon will apply almost universally in one form or another. When I was 8, I desperately wanted to be 10, like my neighbor who seemed so much stronger and smarter than I was at the time. When I was 10, I didn’t feel any different — maybe because I had no 8-year-old neighbor to compare myself to. When I was 20, I thought by 30, I’d have life figured out. It was only at 23 that I looked around and wondered: “Why is nothing happening?” Nothing was happening because I wasn’t doing. I started right then, and, seven years later, I’m still going. I will turn 30 in two months, and now my 20-year-old self looks like an idiot. I’m sure in my 30s, I’ll think my 40s will be much better, only to realize I’m still nearly as clueless about life at 45, yet not without that same patronizing smile back at my 30-year-old self that I now hold whenever I think of my early 20s. Why is that? Why do we enjoy looking forward so much yet can only laugh and shake our heads when we look back? Well, in a nutshell: You’ll never love your past as much as you love your future. No one ever does. In your future, the perfect version of you always exists. Everything is wide open. You feel as if you can achieve anything and everything, probably all at the same time. Your plans are intact. Your goals are in reach. Time is still flexible. In your past, everything has already happened. There are no more pieces to be moved around. They’re all in place, and no matter whether you like the puzzle you’ve pieced together or not, you’ll always spot many places where you could have done better. The perfect version of you never materialized. Most plans went to hell. 
Many goals fell out of reach. And time is just gone altogether. That can be demoralizing, but it’s just part of life. Retirees don’t get as much satisfaction out of their past careers as college graduates expect from their future ones. Twenty-somethings don’t feel as autonomous as their teenage selves would have hoped to feel. Stressed moms don’t have it together as much as they believed they would before they gave birth. This is a frustrating game you can play all your life — or you can realize that “all this looking back is messing with your neck.” At the end of the day, it matters not how well your past stacks up against your once imagined future. It only matters that you were content with the present as you lived through it. At what age are we the happiest? That’s an impossible question, highlighted by the fact that you can find a theory for each major age bracket to back it as the answer. There’s “the U-bend of life,” a theory that suggests happiness is high when we’re young, declines towards middle age, bottoms at 46 on average, then goes back up and reaches new heights in our 70s and 80s. The idea is that family stress, worries about work, and anxiety about how our peers perceive us peak when we’re in the thick of life. As we get older, we care less about opinions and find contentment in what we have rather than what we hope to achieve. When Lydia Sohn asked 90-somethings what they regretted most, however, she found the opposite: People were happiest when they were busy being the glue of their own social microcosmos — usually in their 40s. Every single one of these 90-something-year-olds, all of whom are widowed, recalled a time when their spouses were still alive and their children were younger and living at home. As a busy young mom and working professional who fantasizes about the faraway, imagined pleasures of retirement, I responded, “But weren’t those the most stressful times of your lives?” Yes of course, they all agreed. 
But there was no doubt that those days were also the happiest. At what age are we the happiest? It’s not only an impossible question, it’s an unnecessary one to ask. The answer will be different for every person to ever live, and our best guess is that it’ll be a stretch of days on which you felt fairly satisfied with life rather than a singular event or short period of exuberant bliss. What we do know is that your best shot at stringing together a series of such “everything is good enough” days is neither to get lost in future castles in the sky nor to constantly commiserate how unlike those castles your past has become. You’ll have to abandon both the future and the past in favor of the present. Imagine you have two choices: You can either be happy every day of your life but not remember a single one, or you can have an average, even unsatisfying life but die wholeheartedly believing you’re the happiest person in the world. It matters not which one you choose because in both scenarios, you’ll die on a good day. One sacrifices the past, the other the future, but the present is what counts. You’ll never love your past as much as you love your future, but that’s okay because life is neither about tomorrow nor about yesterday. It’s about today — and if you make today a good day with your thoughts, actions, and decisions, the idea of age will soon fade altogether.
https://ngoeke.medium.com/youll-never-love-your-past-as-much-as-you-love-your-future-3b44dff0f6d3
['Niklas Göke']
2020-12-28 11:29:07.755000+00:00
['Happiness', 'Mindfulness', 'Psychology', 'Aging', 'Life']
4,513
What Drives Apple’s Innovation Engine?
Source: Apple What Drives Apple’s Innovation Engine? How to design an organization for continuous innovation Over the last decade, I have held roles with complete P&L ownership of a business unit, and as a result I often believed that greater end-to-end control led to more effectiveness. This of course refers to conventional management wisdom, where business units are run as independent divisions, and GMs have complete accountability and control of the business. I was a staunch believer in the absoluteness of this model, until now. What changed? I came across this HBR article that presents a case study of an ‘unconventional’ model that Apple has used so effectively to drive innovation. It quotes: “Apple is not a company where general managers oversee managers; rather, it is a company where experts lead experts”. This is where expertise is aligned with decision rights. Think of it as vertical ownership of functions rather than horizontal ownership of a product line or business unit. The key assumption here is that it’s easier to train an expert to manage well than to train a manager to be an expert. Source: Team Analysis Example: At Apple, a team of experts creates deep expertise in a given area, where they can learn from one another. For instance, Apple has more than 600 experts on camera hardware technology in a group that is led by Graham Townsend, a camera expert (Source: HBR). Now, since iPhones, iPads, laptops, and desktop computers all have cameras, these experts would have been split across different teams had Apple been organized into business units. This could have diluted their collective learning and ability to make progress towards a singular goal: make the best cameras for all Apple devices. Instead, this team has pushed the boundaries of camera technology to a level where cameras have become one of the most beloved features of the devices. This is less likely to have happened in the divisional model. So why does organization structure matter? 
As the famous historian Alfred Chandler argued, “structure follows strategy”. Once you have a clear strategy, the structure should enable the execution of that strategy. Source: Team Analysis Apple’s structure fuels its strategy flywheel, where the mission of building the best products on earth helps it attract the best experts. The structure then empowers these experts to lead other experts, further fueling their deep understanding and expertise in their respective areas. This translates into the creation of best-in-class products, which deliver great experiences for users and industry-leading profits for Apple. These profits turn into handsome rewards for employees, which further help in attracting and retaining top talent. The link between Apple’s strategy and structure, and how that drives innovation, is evident, as Apple’s leaders believe that world-class talent wants to work for and with other world-class talent. As the HBR article says, “It’s like joining a sports team where you get to learn from and play with the best.” What are the key elements of such an organization structure? Ownership: functional vs. divisional — fundamentally, you need to ask whether you want to align accountability with control (divisional) vs. align expertise with decision rights (functional). This will then drive your entire strategy for the type of talent you recruit. Control mechanism: at Apple, one has accountability without control, which means that one’s leadership abilities to influence and collaborate with others are more important than the authority that their title bestows. This also means that one’s ability to control outcomes and influence others is dependent on a reputation built by delivering results. Controlling with authority is easy, but accountability without control is really hard work and can be messy. As the article describes — “Good mess” happens when various teams work with a shared purpose. 
“Bad mess” occurs when teams push their own agendas ahead of common goals. Financial strategy: Are you in an organization that primarily manages short-term goals, i.e. quarterly financial targets? This means decisions to invest in long-term projects are mostly driven by short-term targets managed by GMs who are incentivized to protect these metrics. Conversely, when you have experts making such decisions, they are in a better position to weigh the short-term costs against the long-term value. As per the article — “at Apple, the finance team is not involved in the product road-map meetings of engineering teams, and engineering teams are not involved in pricing decisions.” Decision-making process: This may be the most important enabler of innovation. Far too many times, I have seen good ideas fail to evolve because someone at the top didn’t agree. But when most decisions are driven by healthy debate amongst different functions that disagree, push back, promote or reject ideas, and build on one another’s ideas to come up with the best solutions, the results are often better. It requires a different type of leadership — one where leaders inspire, prod, or influence colleagues in other areas to contribute toward achieving their goals. It sounds more like democracy, which is often messy, but makes the most progress over time. Incentives: If incentives are aligned to win as a team, not as an individual, then team members operate very differently. At Apple, various functions work through their differences with one common goal — build the best products that are commercially successful. Thus, the incentives are aligned to the overall performance of the company, not to the success of individual products. Closing thoughts: As large organizations and their business models are being disrupted by technology, it’s time to rethink the “organization structure”. 
It’s time to challenge the conventional divisional set-up and build a team of experts, led by experts, who have both the expertise and the decision rights to build best-in-class solutions. It’s not going to be easy, but if Apple is an example to follow, then it could definitely be worth it. DISCLAIMER: This article represents solely my personal views and interpretations of an HBR article. It does not represent the views of any organization. It is only meant to share my learnings from publicly available information and does not contain any confidential information. Amit Rawal is a Sloan Fellow at Stanford’s Graduate School of Business. He has spent the last decade building and scaling e-commerce ventures for 40%+ of the world’s population. At Stanford, he is focused on bringing together tech, design, and data to create joyful shopping experiences. He is a data geek and loves tracking all kinds of health and wellness metrics. He can be reached at [email protected]. Links: Linkedin, Twitter, Instagram, Website
https://medium.com/swlh/what-drives-apples-innovation-engine-35d7c4fca166
['Amit Rawal']
2020-11-11 05:57:04.574000+00:00
['Leadership', 'Apple', 'Technology', 'Innovation', 'Digital']
4,514
How Do Gradient Boosting Algorithms Handle Categorical Variables?
A fantastic shot of the Falcon Heavy rocket ascension — credit (Unsplash) Previously, we investigated the differences between versions of the gradient boosting algorithm regarding tree-building strategies. We’ll now have a closer look at the way categorical variables are handled by LightGBM [2] and CatBoost [3]. We first explain CatBoost’s approach for tackling the prediction shift that results from mean target encoding. We then demonstrate that LightGBM’s native categorical feature handling makes training much faster, resulting in a 4-fold speedup in our experiments. For XGBoost [1] adepts, we show how to leverage its sparsity-aware feature to deal with categorical features. The Limitations of One-Hot Encoding When implementations do not support categorical variables natively, as is the case for XGBoost and HistGradientBoosting, one-hot encoding is commonly used as a standard preprocessing technique. For a given variable, the method creates a new column for each of the categories it contains. This has the effect of multiplying the number of features that are scanned by the algorithm at each split, which is why libraries such as CatBoost and LightGBM implement more scalable methods. Processing of Categorical Variables in CatBoost Ordered Target Statistics Explained CatBoost proposes an inventive method for processing categorical features, based on a well-known preprocessing strategy called target encoding. In general, the encoded quantity is an estimation of the expected target value in each category of the feature. More formally, let’s consider xⁱₖ, the value of the i-th categorical feature for the k-th training example. We want to substitute it with an estimate of E[y | xⁱ = xⁱₖ]. A commonly used estimator would be x̂ⁱₖ = (Σⱼ 1[xⁱⱼ = xⁱₖ] · yⱼ + a · p) / (Σⱼ 1[xⁱⱼ = xⁱₖ] + a), which is simply the average target value for samples of the same category as xⁱ of sample k, smoothed by some prior p, with weight a > 0. The value p is commonly set to the mean of the target value over the sample. 
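As a minimal sketch (not CatBoost’s implementation), the smoothed estimator can be written in a few lines of plain Python; the function name and the default a = 1 are our assumptions:

```python
from collections import defaultdict

def smoothed_target_encoding(categories, targets, a=1.0, prior=None):
    """Encode each category with a smoothed mean of the target:

    encoded(c) = (sum of targets in c + a * prior) / (count in c + a)
    """
    if prior is None:
        prior = sum(targets) / len(targets)  # p = mean target over the sample
    sums, counts = defaultdict(float), defaultdict(int)
    for c, y in zip(categories, targets):
        sums[c] += y
        counts[c] += 1
    return {c: (sums[c] + a * prior) / (counts[c] + a) for c in counts}

# Toy data: 'thriller' appears 3 times with targets 1, 0, 1; prior p = 0.5.
enc = smoothed_target_encoding(
    ["thriller", "drama", "thriller", "thriller"], [1, 0, 0, 1]
)
```

Here enc["thriller"] = (2 + 0.5) / (3 + 1) = 0.625: the raw category mean pulled slightly toward the prior, with the pull strongest for rare categories.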
The CatBoost [3] method, named Ordered Target Statistics (TS), tries to solve a common issue that arises when using such a target encoding: target leakage. In the original paper, the authors provide a simple yet effective example of how a naive target encoding can lead to significant errors in the predictions on the test set. Ordered TS addresses this issue while maintaining an effective usage of all the training data available. Inspired by online algorithms, it arranges training samples according to an artificial timeline defined by a permutation of the training set. For each sample k from the training set, it computes its TS using its own “history” only; that is, the samples that appear before it in the timeline (see example below). In particular, the target value of an instance is never used to compute its own TS. Table 1: Ordered Target Statistics in CatBoost, a toy example Values of x̂ⁱ are computed respecting the history and according to the previous formula (with p = 0.05). In the example of Table 1, x̂ⁱ of instance 6 is computed using samples from its newly assigned history with xⁱ = thriller. Thus, instance 1 is used, but instance 3 is not. In the CatBoost algorithm, Ordered TS is integrated into Ordered Boosting. In practice, several permutations of the training set are defined, and one of them is chosen randomly at each step of gradient boosting in order to compute the Ordered TS. In this way, it compensates for the fact that the TS of some samples might have a higher variance due to a shorter history. A Few Words on Feature Combinations In addition to Ordered TS, CatBoost implements another preprocessing method that builds additional features by combining existing categorical features together. However, processing all possible combinations is not a feasible option, as the total grows exponentially with the number of features. 
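The history-only computation of Ordered TS can be sketched in pure Python (this mirrors the toy example’s formula and prior p = 0.05; the function name and a = 1 are our assumptions, not CatBoost’s code):

```python
from collections import defaultdict

def ordered_target_statistics(categories, targets, a=1.0, p=0.05):
    """Compute each sample's TS from its 'history' only.

    `categories`/`targets` are assumed to already be arranged along the
    artificial timeline (one random permutation of the training set).
    A sample's own target never contributes to its own statistic.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    ts = []
    for c, y in zip(categories, targets):
        # Only samples seen earlier in the timeline are in sums/counts.
        ts.append((sums[c] + a * p) / (counts[c] + a))
        sums[c] += y       # the sample enters the history only afterwards
        counts[c] += 1
    return ts

ts = ordered_target_statistics(
    ["thriller", "drama", "thriller", "thriller"], [1, 0, 0, 1]
)
```

The first sample of each category has an empty history and gets the prior-only value a·p / a = 0.05, while later samples of the same category average over progressively longer histories, which is exactly why CatBoost averages over several permutations.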
At each new split, the method only combines the features used by previous splits with all the other features in the dataset. The algorithm also defines a maximum number of features that can be combined at once, which is set to 4 by default.

Native Support of Categories in LightGBM

LightGBM provides direct support of categories as long as they are integer-encoded prior to training. When searching for the optimal split on a particular feature, it looks for the best way of partitioning the possible categories into two subsets. For a feature with k categories, the resulting search space would be of size 2ᵏ⁻¹ − 1. In practice, the algorithm does not go through all possible partitions; it implements a method derived from an article by Fisher [4] (On Grouping for Maximum Homogeneity, 1958) to find the optimal split. In short, it exploits the fact that if the categories are sorted according to the training objective, the search space can be reduced to contiguous partitions. This significantly reduces the complexity of the task.

In the experiment below, we investigate the benefits of using categorical feature handling instead of one-hot encoding. We measure the mean fit time and best test scores obtained with a randomized search on subsets of different sizes of the airlines dataset. This dataset, whose statistics are summarized in Table 2, contains high-cardinality variables, which makes it suitable for such a study.

Table 2: A short description of the airlines dataset

The results show that both settings achieve equivalent performance scores, but enabling the built-in categories handler makes LightGBM faster to train. More precisely, we achieved a 4-fold speedup on the full dataset.
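The Fisher-style contiguous search described above can be sketched in a few lines. This is a toy regression version of the idea (sort categories by mean target, then scan the k−1 contiguous cuts), not LightGBM’s actual implementation, which works on histogram statistics:

```python
def sse(values):
    """Sum of squared errors around the mean — our split impurity."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values)

def best_categorical_split(cats, y):
    """Fisher-style search: sort categories by their mean target,
    then only evaluate contiguous partitions of that ordering."""
    stats = {}
    for c, t in zip(cats, y):
        s, n = stats.get(c, (0.0, 0))
        stats[c] = (s + t, n + 1)
    # order categories by mean target value
    order = sorted(stats, key=lambda c: stats[c][0] / stats[c][1])
    best_left, best_score = None, float("inf")
    # k-1 contiguous cuts instead of 2^(k-1) - 1 arbitrary subsets
    for i in range(1, len(order)):
        left = set(order[:i])
        ly = [t for c, t in zip(cats, y) if c in left]
        ry = [t for c, t in zip(cats, y) if c not in left]
        score = sse(ly) + sse(ry)
        if score < best_score:
            best_left, best_score = left, score
    return best_left
```

For example, with `cats = ['a','a','b','b','c','c']` and `y = [0,0,1,1,0,0]`, the search groups `'a'` and `'c'` on the same side, since both have mean target 0.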
Figure 1: Importance of LightGBM’s categorical feature handling on mean fit time

Table 3: Importance of LightGBM’s categorical feature handling on best test score (AUC), for subsets of airlines of different sizes

Dealing with Exclusive Features

Another innovation of LightGBM is Exclusive Feature Bundling (EFB). This method aims at reducing the number of features by bundling them together. The bundling is done by regrouping features that are mutually exclusive; that is, they never (or rarely) take non-zero values simultaneously. In practice, this method is very effective when the feature space is sparse, which, for instance, is the case with one-hot encoded features. In the algorithm, the optimal bundling problem is translated into a graph coloring problem, where the nodes are features and edges exist between two nodes when the features are not exclusive. The problem is solved with a greedy algorithm that allows a rate of conflicts 𝛾 in each bundle. With an appropriate value for 𝛾, the number of features (and thus the training time) is significantly reduced while the accuracy remains unchanged.

How Does EFB Affect Scalability?

We investigated the importance of EFB on the airlines task. In practice, we did not notice any effect of EFB on fit time when using the categorical feature handler of LightGBM. However, EFB did improve the training time by leveraging the sparsity introduced by OHE, as shown in Figure 2. The results with categorical feature handling enabled (lgbm) are shown as a reference point.

Figure 2: Importance of EFB on mean fit time, when categorical variables are OHE

Tweaking XGBoost’s Missing Value Handler

XGBoost does not support categorical variables natively, so it is necessary to encode them prior to training. However, there is a way of tweaking the algorithm’s settings that can significantly reduce the training time, by leveraging the joint use of one-hot encoding and the missing value handler!
XGBoost: A Sparsity-Aware Algorithm

In order to deal with sparsity, induced for instance by missing values, the XGBoost split-finding algorithm learns from the data a default direction for these values at each split. In practice, the algorithm tests two possible groupings for the instances with missing values (left and right), but these points are not visited one by one like the others. This saves a lot of computation when the data is very sparse. What is interesting is that this particular feature is not limited to missing values as we usually understand them. In fact, you can choose any constant value you want to play the role of missing value when your data does not contain any. This becomes very handy when working with datasets for which one-hot encoding introduces many zero entries.

Leveraging the Sparsity Introduced by One-Hot Encoding

We investigated the importance of setting the missing parameter of the split-finding algorithm to 0 (instead of numpy.nan, the default value in the Python implementation) on the training of the airlines dataset. The results reported in the figure below are for the approx tree-building method, but the same observations were made for exact and hist. Changing the missing parameter to 0 results in a significant reduction of training time. More precisely, we observed a 40× speedup for exact and approx on the full dataset, and a 10× speedup for hist.

Figure 3: Importance of the ‘missing’ parameter on mean fit time of XGBoost (tree-building method is approx)

As shown in Table 4, this small change does not seem to affect the performance scores in any significant way, making it a practical tip when working with datasets with no actual missing data.

Table 4: Importance of the ‘missing’ parameter on best test score (AUC), for subsets of airlines of different size

Takeaways

Because of the way gradient boosting algorithms operate, optimizing the way categorical features are handled has a real positive impact on training time.
Indeed, LightGBM’s native handler offered a 4-fold speedup over one-hot encoding in our tests, and EFB is a promising approach to leverage sparsity for additional time savings. CatBoost’s categorical handling is so integral to the speed of the algorithm that the authors advise against using one-hot encoding at all(!). It is also the only gradient boosting implementation to tackle the problem of prediction shift. Finally, we demonstrated that in the absence of true missing data, it is possible to leverage XGBoost’s sparsity-aware capabilities to gain significant speedups on sparse one-hot-encoded datasets, achieving up to a 40× speedup on the airlines dataset.

References

[1] Chen, T. & Guestrin, C. XGBoost: A scalable tree boosting system. Proc. ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., 785–794 (2016).
[2] Ke, G. et al. LightGBM: A highly efficient gradient boosting decision tree. Adv. Neural Inf. Process. Syst., 3147–3155 (2017).
[3] Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V. & Gulin, A. CatBoost: Unbiased boosting with categorical features. Adv. Neural Inf. Process. Syst., 6638–6648 (2018).
[4] Fisher, W. D. On Grouping for Maximum Homogeneity. Journal of the American Statistical Association, 53:284, 789–798 (1958). DOI: 10.1080/01621459.1958.10501479
https://medium.com/data-from-the-trenches/how-do-gradient-boosting-algorithms-handle-categorical-variables-e56ace858ba2
['Pierre Louis Saint']
2020-07-03 12:19:21.416000+00:00
['Machine Learning', 'Data Science', 'Xgboost', 'Lightgbm', 'Python']
Upgrading Python lists
Upgrading Python lists

Adding useful functionalities to Python lists

Image source: JoeyBLS photography

Introduction

Python lists are good. But they’re not great. There is so much functionality that can be easily added to them but is still missing: indexing with booleans, easily creating dictionaries from them, appending more than one element at a time, and so on. Well, not anymore. Fastai has come up with their own data structure called L. It can do everything that a Python list can do, and much more. The purpose of this article is to show you how easy it is to write such useful functionalities on your own. Especially if you are a beginner, try creating a mini-version of this library. Try writing some of the functionalities you hoped existed. It’ll be a good learning experience. For now, let’s learn about L. Here is the Colab link if you’d like to follow along. Make a copy on your Colab (File -> Save a copy in Drive) and run the first cell only once. Your notebook will crash, but then you’ll be ready to use the library right away. Google Colaboratory Link.

What is L?
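As a taste of the mini-version exercise suggested above, here is a hypothetical sketch of such a class — my own toy, not fastai’s implementation — a list subclass that supports boolean masks and lists of indices:

```python
class L(list):
    """A tiny list upgrade: boolean-mask and fancy indexing (sketch only)."""

    def __getitem__(self, idx):
        if isinstance(idx, list):
            if idx and isinstance(idx[0], bool):
                # boolean mask: keep elements where the mask is True
                return L(x for x, keep in zip(self, idx) if keep)
            # list of integer indices: gather those positions
            return L(list.__getitem__(self, i) for i in idx)
        # plain int or slice: defer to the built-in behaviour
        return list.__getitem__(self, idx)


xs = L([10, 20, 30, 40])
print(xs[[True, False, True, False]])  # [10, 30]
print(xs[[0, 3]])                      # [10, 40]
```

Ordinary indexing (`xs[1]`, `xs[1:3]`) still works, since anything that is not a list falls through to `list.__getitem__`.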
https://towardsdatascience.com/upgrading-python-lists-35440096ec36
['Dipam Vasani']
2020-03-23 17:07:57.285000+00:00
['Programming', 'Python']
Evolution to Advanced Coding: Any Kid Can Code
PYTHON IS AN OBJECT-ORIENTED PROGRAMMING LANGUAGE. What does this mean? In Python, everything is an object, and objects have one good property: they can be assigned to variables, and instances can be created from them.

What is OOP (object-oriented programming)? OOP is the concept which preaches to create objects. And objects contain their own properties and functions. Let us correlate this to real life: any object, like a computer mouse, has its own properties and functions. Properties: the mouse has buttons (right/left), it has a scroller on top, etc. Functions: move cursor, click, scroll, etc. I hope this makes it easy to correlate coding to real-life examples.

Here in programming, an object is created as an instance of a class, which is created using the keyword “class”; then, inside the class, we can create different variables and functions (this we have learnt earlier). Those functions can be used by the object, and we can create any number of objects from a class. You can refresh your basic knowledge: here we go, we have exposure to the most important concept of programming, i.e., OOP. We are growing in the same manner as the word suggests.

Benefits of using OOP: modularity, reusability and scalability. We will go in depth when the time comes, or we will understand it when we practice.

How it makes code compact — Let us assume you bought a computer mouse. It has various functions and it works in a plug-and-play manner. If you want to attach it to a laptop, TV or any other device, it will function the same. So, you need not buy multiple mice. This is just to understand the concept of an object as an instance. We will deep dive into that as and when required. Just focus on this: an object is an instance of a class, and there can be many instances of the class. Animal is a class, and dog, cat, etc. are its instances with different functions.

OOP has many other concepts like polymorphism, inheritance, encapsulation and abstraction. We will learn them all over time.
You can understand how important and easy it will become if we have a class widgetObject which allows us to create different instances and has a function to move the object. Let us not wait too much. First do the program using a loop, and then use the concept just learnt. And I will leave it to you to see the difference, for understanding and easiness.
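To make the class-and-instance idea concrete, here is a small sketch of the mouse example described above (the class and method names are my own, chosen for illustration):

```python
class Mouse:
    """A computer mouse: it has properties and functions, as described above."""

    def __init__(self, buttons=2, has_scroller=True):
        self.buttons = buttons            # property: number of buttons
        self.has_scroller = has_scroller  # property: scroller on top

    def click(self):                      # function (method)
        return "click!"

    def scroll(self, lines):              # function (method)
        return f"scrolled {lines} lines"


# many instances of the same class, plug-and-play on any device
laptop_mouse = Mouse()
tv_mouse = Mouse(buttons=3)
print(laptop_mouse.click())       # click!
print(tv_mouse.scroll(5))         # scrolled 5 lines
```

Just like the dog and cat instances of an Animal class, `laptop_mouse` and `tv_mouse` are two instances of the one `Mouse` class.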
https://laxman-singh.medium.com/evolution-to-advanced-coding-any-kid-can-code-40121a1d6c52
['Laxman Singh']
2020-12-02 15:09:26.543000+00:00
['Python', 'Python3', 'Kids', 'Python Programming', 'Kids And Tech']
Why Books are the Key To Learning A Language On Your Own
Photo by Lysander Yuen on Unsplash

Why Books are the Key To Learning A Language On Your Own

Fraser Mince · Sep 7 · 6 min read

“Whenever I am asked how I was able to succeed in many languages in a relatively short period of time, I always make a bow in spirit to the source of all knowledge: books.” — Kató Lomb

Learning a language is challenging. It takes a ton of time and consistency, and even then it is really easy to feel stuck. You can spend hundreds of hours doing Duolingo or taking classes only to still feel like there is this giant gap between you and actual fluency. It only becomes more difficult if you are trying to learn independently. Many spend a lot of time trying to discover how to learn a language on their own and end up feeling very lost. It can start to feel like there’s a divide that’s impossible to cross. You may know how to say some basic expressions, but the second someone starts to speak, everything you know seems to disappear.

You, learning a language, probably

It’s not uncommon to feel stuck at some point in your language learning journey. You may feel like if you moved to a foreign country and had that immersion you would learn, but short of that it feels impossible. But there are ways to learn a language quickly at home. All you need to do is simulate immersion by consuming content you love in your target language. One of the most underrated ways to do this is by reading novels.

Now if you’re like me I know what you’re thinking: “oh someday that would be amazing! I just need to get to the point where I can even begin reading”. Maybe you have even tried picking up a book like Harry Potter in a language you’re learning. “This will be great! How hard can it be?” you say. But then you open it. “Wow, that’s a lot of words. And I know like six of them”. That first feeling of being overwhelmed is often enough to scare people away. Looking at that first page is just intimidating. So why is reading worth your time?
https://medium.com/language-lab/why-books-are-the-key-to-learning-a-language-on-your-own-9b6f2f60813c
['Fraser Mince']
2020-09-10 09:39:31.636000+00:00
['Language', 'Books', 'Fluency', 'Language Learning']
How to Use the Kaggle API in Python
Datasets

Kaggle gives us several options for downloading datasets. The two you’re most likely to use are for downloading competition datasets, or standalone datasets. A competition dataset is related to a current or past competition, for example, the dataset used in the Sentiment Analysis on Movie Reviews competition. Standalone datasets are not accompanied by a competition and can be uploaded by anyone — like this 1.6M Sentiment of Tweets dataset. We use two different methods for each of these.

Competition Datasets

We can see that our dataset is paired with a competition through the URL of the dataset; it will always begin with kaggle.com/c/ — the c representing competition. To download a competition dataset, we use the competition_download_file method, take the competition name (given in the URL) and write: Here we download both the training and test datasets to the current directory ./ — both are zipped. Alternatively, we can simply download all competition datasets with: api.competition_download_files('sentiment-analysis-on-movie-reviews', path='./') You may need to set up your local directory to receive them without error — I always find downloading each individual dataset more convenient.

Standalone Datasets

On the dataset page, we can see the user’s name and the dataset name (or in the address bar). We put both together like user/dataset, and execute dataset_download_file like so: This will download the zipped file into our current directory ./ . Again, just like we did with the competition datasets, we can download all files for a specific dataset like so: api.dataset_download_files('kazanova/sentiment140', path='./')

Unzipping

A final point: every dataset you download with the Kaggle API will be downloaded as a ZIP file. You can unzip the data manually, or simply use Python like so: Once unzipped, we read our data into Python as per usual!
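The embedded snippets did not survive the export, but the unzip step can be done with the standard-library zipfile module. A hedged sketch — the archive and file names here are stand-ins, not necessarily what Kaggle serves:

```python
import zipfile

# Stand-in for a downloaded Kaggle archive; in practice this file would
# come from api.competition_download_files(...) or dataset_download_files(...)
with zipfile.ZipFile("train.tsv.zip", "w") as zf:
    zf.writestr("train.tsv", "PhraseId\tSentiment\n1\t2\n")

# Unzip everything into the current directory, as the article suggests
with zipfile.ZipFile("train.tsv.zip", "r") as zf:
    zf.extractall("./")

with open("train.tsv") as f:
    print(f.readline().strip())  # header row of the extracted file
```

After extraction, the data can be read as usual, e.g. with `pandas.read_csv("train.tsv", sep="\t")`.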
https://medium.com/python-in-plain-english/how-to-use-the-kaggle-api-in-python-4d4c812c39c7
['James Briggs']
2020-11-25 06:41:44.974000+00:00
['Python', 'Technology', 'Data Science', 'Programming', 'Machine Learning']
Our A/B Testing Formula (The Easiest Way To Improve Performance By 2x Or More)
Why Test?

The majority of ads will fail. So unless you’re expecting a neverending streak of luck, you’ll need a process for separating the losers from the winners. That’s where A/B testing comes in. It’s simply the process of testing two or more ad variations against each other, analyzing the results, and doing less of what doesn’t work & more of what works. It’s hard to overstate how important this is. We frequently see ads perform 2x, 5x or even 10x better than others. And the first ads are almost never among the top performers. So if you’re not running at least one test at any given time, you’re leaving money on the table.

What To Test

First, you’ll need a bunch of copy angles and creatives. This is a huge topic in and of itself, so we won’t get into it here. Let’s just assume you have them. Great! Where do you start? You’ll want to go as BIG as possible with the first test. An example would be testing a professionally shot studio photo vs an unedited UGC (User Generated Content) video, or short CTA-focused ad copy vs long-form storytelling copy. The more contrast you add, the easier it will be to analyze the results and hone in on the winning angles. We like to start things off with a 2 x 2: two creatives and two copy variations. We find that this strikes a good balance between simplicity and effectiveness.

2 x 2 Is An Easy Way To Get Started

The results from the first test determine what we do next. For instance, let’s say we find that there was a huge difference in performance between the creatives but no real difference between the copy variations. We would then isolate that variable, i.e. test a handful of creatives with the same copy. If you’ve got a large budget and want to be as hands-off as possible, you can use Facebook’s Dynamic Creative. It does work and we do use it, but most times we prefer to have more control.

When To Test

When it comes to testing, we believe that frequency is more important than volume.
It’s better to run multiple tests with a few variations than a few tests with multiple variations. That’s why we use a two-day testing cycle. On any given ad account we’re analyzing and implementing new ad copy and creative up to three times per week (Monday, Wednesday, Friday). Usually, the number of tests you can run is limited by the budget. But even on the accounts where we’re not able to test new variations every two days, we still analyze and monitor performance.

Note: Adding a new ad to an existing ad set will reset the learning phase. It may make sense to use a designated campaign for testing.

How To Test

Analyzing the results is maybe the most challenging part. There’s a lot that goes into the analysis, but there are a couple of simple tools that can do most of the heavy lifting. Having a simple process is extremely helpful for removing emotion and making informed decisions quickly. We like to use this decision tree (credit: https://commonthreadco.com) as a guide.

Ad Kill Decision Tree

In order to be able to use the decision tree, you’ll need to have target CPAs for all correlated variables. Here’s a document that will help you with that. When needed, we also use a Significance Calculator.

A/B Testing Like A Scientist

You simply plug in the numbers and the calculator tells you how confident you can be that any difference in performance is real and not due to random variation. Since the whole point of Facebook’s targeting is to move away from random sampling, this obviously isn’t a perfect tool. But it’s useful as a reality check.

After The Test

The results from the test are documented and handed over to the creative team. We use a simple scoring scale (Poor, Okay, Good) and qualitative comments. Scoring the ads can be more art than science, given the many factors involved. That’s why it’s best done by a media buyer who spends a lot of time in the ad account.
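A significance calculator of the kind mentioned above can be approximated with a standard two-proportion z-test. This is a generic statistical sketch, not the specific tool the authors used:

```python
from math import sqrt, erf

def ab_confidence(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how confident can we be that the difference
    between variation A and variation B is real, not random variation?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0
    z = (p_b - p_a) / se
    # two-sided confidence level via the normal CDF
    return erf(abs(z) / sqrt(2))

# 5% vs 10% conversion rate on 1,000 impressions each
print(ab_confidence(50, 1000, 100, 1000))
```

A value above 0.95 is the usual bar for calling a winner; identical conversion rates give a confidence of 0 by construction.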
In Summary

By the time you’re reading this, we may have made a few (or many) changes to our A/B testing process. Nevertheless, the general principles and concepts apply. Good luck!
https://medium.com/rho-1/our-a-b-testing-formula-the-easiest-way-to-improve-performance-by-2x-or-more-1d2b222dfdf7
['Josua Fagerholm']
2020-03-17 23:54:07.163000+00:00
['Digital Advertising', 'Advertising', 'Marketing', 'Digital Marketing', 'Facebook Marketing']
6 Reasons We Need to Reform the Peace Corps
6 Reasons We Need to Reform the Peace Corps

From a Former Peace Corps Volunteer (RPCV Tanzania)

Source: Unsplash, Simon Berger

1. It is a form of systematic racism, for those it claims to serve and for those who serve.

The words “systematic racism” seem to be everywhere these days. However, it is crucial that we acknowledge that they do not refer to a system filled with racists. Instead, they refer to a system that would uphold racism and disproportionately harm and subjugate people of certain races even if no racists were present. Those leading the effort to decolonize Peace Corps, @decolonizingpc, discussed systematic racism, saying that, “Even after adding more volunteers of color, more anti-racism trainings, more reforms (including the ones [they] have proposed on [their] page), Peace Corps will still be a neocolonialist organization because of the imperialistic goals of U.S. foreign policy.” Which brings us to Number 2 —

2. It is an inherently imperialistic organization.

What does it mean to be an imperialistic organization, you may ask? Imperialism is an ideological framework, oftentimes carried out through government policy, that works to extend the rule or authority of one country over another. Such policies have historically been carried out under the guise of “civilizing” and “developing” other nations, employing not only hard power, such as military force, but also soft power. Soft power imperialism, such as providing financial aid or human resources for development, functions best under the pretense of altruism, though it remains predominantly self-serving. Self-serving in what ways, you might ask? Well, on to Number 3 —

3. It is a neocolonialist organization.
While Peace Corps holds dear certain values and goals, it has always been an organization that functions mostly to serve U.S. foreign policy and its volunteers over the people they are serving. In other words, Peace Corps functions as the United States’ most prominent soft power asset. In doing so, it is, by its very nature, an organization rooted in neocolonialism, or “the practice of using economies, globalization, cultural imperialism, and conditional aid to influence a country.” In other words, we have traded direct political and military control for a softer, but perhaps more insidious, form of control.

4. Father-Knows-Best Paternalism Meets The White Man’s Burden

Imperialistic policies rely heavily on paternalism, which “limits a person’s or group’s liberty or autonomy and is intended to promote their own good.” A classic example of imperialism and paternalism working together is the 19th-century European “Scramble for Africa,” in which the African continent was sliced and divided in order to reap the benefits of its myriad natural resources. European nations, imbued with a sense of superiority that they saw as their divine providence from God himself, invaded African nations using the framework of paternalism to pillage and completely fracture traditional African ways of life and political structures. Additionally, they imposed “patriarchal social structures into European-dominated hierarchies and imposed Christianity and Western ideals.” The effects of this so-called scramble still permeate African policy today.

At the close of the 19th century, this seemingly pre-ordained calling to “civilize” other nations was cemented in the poem “The White Man’s Burden” (1899), which called upon the superior white man to go forth and colonize these far-off lands.
In the 20th century, the conceptual framework of the white man’s burden has been used by proponents of decolonization to critique foreign expansionism and interventionism, arguing that neocolonial programs more often than not perpetuate the idea that so-called developing nations are unable to embrace self-determination. Neocolonialist policy that cloaks itself in good intention lives at the intersection of “the white man’s burden” and the developing world’s need for self-determination and autonomy.

So, why do post-colonial nations still struggle for autonomy? Well, it’s far more complicated than a 6-point list could cover, but let’s dip our toes in —

5. Peace Corps’ aid structure is based on conditional financial and human resource aid that has no proven long-term results for those it claims to serve.

Aid on the African continent is a problem. It is a complex goliath of a problem. One need only read Dambisa Moyo’s scathing book, Dead Aid, in order to get the picture of the international development industrial complex. She makes the argument that the aid industry in Africa is not only ineffective, it is “malignant.” Over the last 50 years, more than $1 trillion in development aid has been given to Africa. She argues that this aid has “failed to deliver sustainable economic growth and poverty reduction — and has actually made the continent worse off.”

While the entirety of the Peace Corps’ financial structure could, and should, be investigated, here we are going to unpack only one part of it: small grants, which are organized by the volunteers and then, in theory, allocated to the communities in which they serve.
As explained by those running the @decolonizingpc Instagram, “The entire process for the Small Grants Program completely relies on the presence of the volunteer, from the application and fundraising to monitoring and evaluation… Peace Corps practices do not live up to [its primary goal of sustainability] because project funding by the Small Grants Program requires the presence of the volunteer, who at any moment can leave site permanently without notice. It should also not be up to Peace Corps or any volunteer to decide what sustainability looks like for a community.”

Peace Corps’ structure attempts to move away from the aid industry, in the sense that it sends (in theory) skilled volunteers abroad to help build sustainable programs rather than blindly throwing money at the problem. This type of aid is not conditional in a quid pro quo sense; rather, the aid is conditional on the volunteer being there. And if the volunteer must be there for the aid or benefit to be reaped, well then, the goal of sustainability is called into question and, dare I say, inherently flawed. This became startlingly transparent in March 2020, when thousands of volunteers were suddenly pulled from their host countries due to COVID-19, leaving communities in a lurch, funding stalled, and crucial projects never to be finished.

The conditional aid structure, and the gap between Peace Corps’ promise and its inability to create sustainable change, calls into question whether or not it has a place in the global community. People who work in the aid/development world love to say, “the goal is to work ourselves out of a job.” And yet, it remains a financially fruitful industry for those employed by it, including the Peace Corps.

6. It relies on a Westernized model of development.
During my time as a Peace Corps volunteer, while I traveled and read books about the aid industry and the developing world, one question always seemed to creep in from the recesses of my mind: “Developing toward what?”

What exactly do we mean when we say “a westernized model of development”? Two well-regarded Iranian scholars and economists claimed that “the western model of development prioritizes technological modernization, free-market economy, a democratic political system, and western health systems as the basis for development.” So, these items are used as metrics to measure the success of a nation’s development. Yet the nations we consider successfully developed (i.e., Britain or the U.S.) reached their status “at the expense of slavery, war, other gross human rights violations, and overexploitation of the environment within and beyond their borders.”

What does Peace Corps have to do with this? Well, back to those at @decolonizingpc, who have been actively speaking out and unpacking this issue:
https://tyleranne04.medium.com/6-reasons-we-need-to-reform-the-peace-corp-c6c1a329ed00
['Tyler A. Donohue']
2020-10-28 19:08:18.693000+00:00
['Development', 'Travel', 'White Privilege', 'Peace Corps', 'Volunteering']
Pharmaceuticals
Three Problems and a Solution

Pharmaceuticals

We are a cornucopia of chemicals. At what point does it get to be too much?

Last time, I told you about plastics contaminating the soil and water, and causing death and destruction everywhere. But did you know that it’s not just plastics? Ironically, it could also be the very medicines designed to keep you alive and well.

If you remember from elementary school science, water does the same thing over and over again: it condenses, precipitates, infiltrates, transpires, and evaporates. Although 70% of the world is covered with water, less than 3% is potable, which means our water supply is very limited. The same water that’s been on the planet since day one is still here — no more and no less — which means we’re all drinking dinosaur pee, and because of modern industrialized living, it’s just getting more degraded over time.

In the USA, there are Maximum Contaminant Levels (MCLs) for certain chemicals: the level above which they should not appear in drinking water. But not all chemicals have been studied, especially not in all combinations. For the most part, we’re not looking at what happens when chemicals combine because there are just too many combinations. How would you ever do control studies for all of them?

We are a cornucopia of chemicals. Some we ingest on purpose, some are thrust upon us through the air, the water, or our skin via our clothing, and some come through our food. No matter how we get them, they’re a part of modern life.

The manufacture of pharmaceuticals requires tons of water — at its inception, at its conclusion, and everywhere in between. It also requires pure water. And since the earth’s water bodies and our human bodies both depend on clean water for survival, we need to make sure our interests and water’s interests are aligned.
What is today known as the Food and Drug Administration, the FDA, started with the Pure Food and Drug Act of 1906, passed after Upton Sinclair released “The Jungle,” which described the horribly unhygienic conditions in the Chicago stockyards. Who worked in those stockyards? Immigrants: people who came in from another country, even though that makes life very difficult, because life back home was even worse. Then, like today, the lower socio-economic rungs of society most often had the fewest environmental protections, as well as very little say in the matter.

Today, the FDA approves drugs and is our watchdog, but its reach is limited. The FDA doesn’t have the authority to recall a product unless it’s been misbranded or adulterated. All other recalls — including for safety — are up to the manufacturer to initiate. This means that at best, the pharmaceutical and cosmetics industries are self-policing, and at worst, people are going to die — as with the Vioxx scandal, where over 100,000 people suffered heart attacks before Vioxx was recalled.

But there’s more, namely: how are these often very powerful drugs affecting our water? A 2009 study from the University of Exeter found that hormones in the water were causing fish mutations. There’s a class of drugs known as anti-androgens — manmade environmental chemicals that either mimic or block sex hormones. They’re used in cancer treatments and other drugs, as well as pesticides, and they reduce fertility in male fish, causing a feminizing effect, a condition called intersex. These “chemical cocktails” don’t just affect industrialised areas. According to USGS, intersex is a global issue affecting even wild-caught fish.

However, there is also some good news on the plastics front. Scientists have discovered a bacterium that eats plastic. Studies are still in the preliminary stages, but they look promising. Then there are the worms, generally used as fish bait, which have also been found to have a taste for plastic.
And finally, there’s my favourite, the plastic-eating mushroom Pestalotiopsis microspora, a rare species from the Amazon rainforest that enjoys snacking on plastic and converting it into clean soil. It’s also tasty sautéed in olive oil and garlic! Kidding aside, Pestalotiopsis microspora is edible because somehow, during the process of digesting the plastic, the mushroom removes all the toxins and converts them to clean soil.

On the legal side, on February 10, 2020, Senator Tom Udall (D-NM) and Representative Alan Lowenthal (D-CA) introduced the Break Free From Plastic Pollution Act which, among other things, goes after single-use plastic bags: the ones with a 15-minute working life that seem to always end up in the ocean. It’s not law yet, but fingers crossed.

What if I told you there was a chemical that can cause endocrine disruption? Surprise, it’s Triclosan! It was great at killing microorganisms, which is why hospitals started using it as a sterilization agent in the 1970s. Because of its effectiveness, manufacturers started adding it to soaps, toothpastes, and other products as an antibacterial agent in overwhelming numbers. What happened next? The CDC found Triclosan present in 75% of the U.S. population’s urine samples. Its overuse had resulted in microbes developing resistance to the chemical’s sterilization features, so it wasn’t so effective anymore. Further studies found that when Triclosan reacts with sunlight, it degrades to form dioxin in surface water. Dioxin causes cancer and reproductive problems, damages the immune system, and can disrupt hormones, and like plastic, it takes a very long time to break down.

In September 2016, the FDA issued a final rule banning over-the-counter antiseptic wash products that contained Triclosan — along with 18 other chemicals — because manufacturers had failed to demonstrate safety from long-term exposure. The manufacturers weren’t shocked.
They’d already been feeling enormous public pressure and so had begun removing Triclosan from soaps and toothpaste several years earlier. But here’s the twist: Triclosan is also classified as a pesticide and used as a material preservative in many products such as fabrics, vinyl, plastics, and textiles, which are regulated by the Environmental Protection Agency, or EPA. Triclosan’s conditional registration was up for review in 2018, but at that time EPA determined there wasn’t enough information to pull the product from shelves, so Triclosan is still being studied and used.

Triclosan is a great example of overlapping regulations. When used as a beauty aid, as in antibacterial soaps, it’s regulated by the FDA because it’s a personal care product; when used as a pesticide, it’s regulated by the EPA. That means we have one chemical and two different results, leaving water to sort out the mess.

Look — plastics and pharmaceuticals help us live longer, eradicate diseases like smallpox (and hopefully COVID), treat cancer, provide antibacterial protections, and overall do many other wonderful things, all to make life better and easier… but easier isn’t always better when there’s chemical residue left behind. There’s enormous pressure on our water to do everything we’re asking of it, and if we don’t get our waste streams under control, instead of saving us, the very chemicals we use every day to make life better are going to sink us, and water along with us.

If we’re going to improve recycling, we need to start by improving the coding system and getting rid of the misleading advertising, but, more importantly, by reducing our waste stream. Sounds to me like it’s time to skip the plastic bottle, buy yourself a stainless steel model, and then belly on up to your safe and regulated kitchen tap and fill that baby up. As for drugs, take your remaining drugs to places that dispose of them properly, and never ever ever flush them down the toilet.
The good news is that the manufacture of pharmaceuticals requires pure water, so at least our interests are aligned with manufacturers there. It’s always good to have an ally.

Feeling helpless? Once you see everything that’s going wrong with plastics and PFAS, it’s easy to throw up your hands and give up hope. The problem’s just so big, there’s nothing you could possibly do to help… is there? Well, like many stories, this one’s going to have a hopeful ending. Before we get there, however, it’s important to know about one other substance that’s damaging the environment too. Because then we can — to use an ugly metaphor — kill three birds with one stone, instead of just one.

Stay tuned for Tuesday!
https://medium.com/snipette/pharmaceuticals-68ccaa9f8ff6
['Pam Lazos']
2020-11-01 07:02:10.375000+00:00
['Environment', 'Pollution', 'Pharmaceuticals Industry', 'Corporation']
Beyond Cage: Nam June Paik
The object of this essay is the analysis of the artistic connection between American composer and thinker John Cage and the Korean artist Nam June Paik. My aim is to highlight the influence that Cage had on Paik’s work and to demonstrate that Paik reacted to Cagean thought and furthered its conclusions in an attempt to step out of its shadow and venture into new realms of media experimentation and philosophical inquiry. I started thinking about their relationship as a result of the research I conducted as an intern for the Talbot Rice Gallery in Edinburgh, in preparation for the 2013 Edinburgh International Festival exhibition Transmitted Live: Nam June Paik Resounds.

The Meeting of Two Minds

Nam June Paik, John Cage and David Tudor after the Concert «Kompositionen» at Atelier Mary Bauermeister, Cologne, 6 October 1960. Photo: Klaus Barisch, Courtesy Galerie Schüppenhauer

Biographically, the two men shared a lifelong friendship that spanned from their first meeting in 1958 to the death of John Cage in 1992. Although Cage was 20 years older than Paik, his respect for the younger artist and intellectual was manifest in their correspondence. A number of collaborations and homages linked the two artistically, including Paik’s first public appearance in Hommage à John Cage (1959), the score Gala Music for John Cage’s 50th Birthday (1962), the videotape A tribute to John Cage (1973) featuring Cage himself, the sound piece Empty Telephones (1987), the 1990 video sculpture Cage from Family of Robots, and Cage in Cage (1993), following his death. A shared cultural context is the common ground for the development of their ideas. Cage was a Zen Buddhist in spiritual outlook and was attracted to Oriental philosophy in an attempt to escape the philosophical hermeticism of Western thought. Paik was born in Korea in 1932 and arrived in Germany after studying history of art, music and aesthetics in Tokyo. In Europe, he came into contact with a vibrant art scene.
The Westerner with Eastern sensibilities and the Easterner fascinated by the cultural history of the West met in Germany. The seeds of Fluxus, the Neo-Avant-Garde movement of the 1960s, had been planted by John Cage during a series of lectures he gave at the New School for Social Research in New York City (1957-1958) in which he introduced the notions of indeterminacy and chance operation in art praxis – the former extracted from Zen teachings, the latter drawn from Marcel Duchamp’s example. These classes were attended by La Monte Young and George Brecht, two important figures of the movement. Prior to this lecture series, a string of “happenings” staged by Yoko Ono had prefigured Fluxus, in parallel with a number of concerts involving Nam June Paik and Cage himself through 1960-61 in the studio of Mary Bauermeister in Cologne. From its inception, Paik was situated at the very heart of the international movement by virtue of his close relationships with founders Cage and George Maciunas, as well as his unique intellectual preoccupations. Fluxus developed an aesthetic that was very similar in scope to the Dada movement of decades earlier. It sought to destabilize traditional modes of art production, presentation, interaction and institutionalization. It was distinctly anti-commercial, employing comic irony in its critique of the establishment. Throughout his career, Paik made crossovers from high to popular art and back. However, his work always had a powerful philosophical core, inspired by Fluxus, that held his executions together. Living in the Rhineland, an experimental region for the arts at the time, Paik was at the main hub of a vast network of individuals who exchanged artistic ideas with great ease.
The historical backdrop — the Cold War, its ideological implications in politics and the stability that followed the “economic miracle” (Wirtschaftswunder) of the 1950s — is also relevant, as it provided Fluxus artists with a platform for activist engagement in their socio-political context.

Transferring Paradigms Between Music and the Visual Arts. Paik Beyond Cage

As a reference point in the structural analysis of the respective oeuvres of Cage and Paik, Marcel Duchamp is particularly important because the concepts he employed influenced both artists. Duchamp’s artistic approach varied between spontaneity and elaboration. His readymades were unintentional; as he himself declared, the “creation” of them is reduced to choosing one object over another. The Large Glass, in contrast, necessitated careful planning and meticulous work. But the two did not need to be mutually exclusive: he devised a complicated chance method in selecting notes for his (only) musical piece, Erratum Musical (1913). In the 1950s, working with chance factors was a characteristic of New Music, a movement represented by composers such as John Cage, Luciano Berio and Karlheinz Stockhausen, among others. Cage’s seminal 1952 piece 4'33" featured a silence that lasted four minutes and thirty-three seconds. Cage sought to foreground the unexpected elements in the environment over the expectancy of sound in the piece according to his dictum of indeterminacy. The work of Austrian composer Arnold Schoenberg was crucial in Cage’s conceptual arrival at his landmark score, as Schoenberg had managed to equalize the value of pitches in musical strips using his influential twelve-tone technique. Post-Schoenberg music was atonal: serialism dominated and emphasis on certain pitches was removed. Cage took this idea further and posited that all sounds, not just tones, were equal citizens of a musical score.
In doing so, he extended the concept of music to include the ostensibly aberrational or unwanted category of “noise,” and ultimately to swallow silence itself. Silence in Cagean thought is not, however, the complete absence of sound but rather an empty space that can be filled by life’s limitless noises. 4'33"’s contents depend on the environment of the receiver. Cage himself realised in 1951, after a visit to Harvard University’s anechoic chamber, that there was no such thing as absolute silence – that even internal sounds such as his heart beating could disrupt apparent silence. Cage thus fulfilled the requirements of both Duchampian chance procedure and Zen indeterminacy, famously declaring that he did not discriminate between intention and non-intention. Like Duchamp before him, Cage was also meticulous in execution. Cage’s Williams Mix is a prime example of his processual scruple: his first audiotape composition, its four-minute runtime reveals thousands of pieces of audiotape assembled to play in parallel on multiple soundtracks.

Poster for Exposition of Music — Electronic Television. Courtesy Zentralarchiv des internationalen Kunsthandels, Cologne

This dualism of chance/meticulous assemblage exists in Paik’s work with television sets as well. Paik spent months learning the intricacies of electrical engineering in secret to prepare his landmark exhibition of 1963, Exposition of Music — Electronic Television, which featured twelve variously modified TV sets. In the Afterlude to the Exposition of Experimental Television, Paik states that “Indeterminism and variability is the very underdeveloped parameter in the optical art, although this has been the central problem in music in the last ten years.” He pays homage to both Duchamp and Cage and at the same time declares the intention of going beyond them. With the Electronic Television segment of his 1963 exhibition he aimed to study the indeterminism of television sets.
When an unexpected accident occurred (one of the TV sets broke, thus displaying a mere horizontal line on the screen), Paik integrated it into his exhibition, naming it Zen for TV. The exhibition was a participatory event that involved all the senses of the viewer and could be regarded as the forerunner of both video art and interactive art. To what Duchamp had managed to achieve with his concept of an open artwork governed by chance and variability, Cage and New Music duly responded with the notion of an open work in music, in turn prompting a response from Paik in a video-based form of visual art. Thus, the chain of conceptual influence runs from Duchamp and the historical avant-garde to Cage and then to Nam June Paik.

Zen for TV (1963-1975), Courtesy of Estate of Nam June Paik, Seoul. Photo: MUMOK, Vienna

But for Paik, a mere translation of the indeterminism prefigured by Duchamp and co-opted by Cage from music to optical art was insufficient. Paik understood that he needed to push the idea of inserting chance elements into the artwork beyond the sonic realm. His first departure from Cage in this sense was his different treatment of “prepared pianos.” Cage prepared his pianos for a practical reason: while composing for performances, he observed that the space was only big enough to accommodate a single piano, so he had to compress into one instrument the sounds native to the keyboard alongside the thuds, crashes, and jingles of percussive apparatus. To vary the sound, he inserted various objects among the strings of the piano that would make different sounds in the act of playing. Thus, his pianos could be returned to their normal, functional state. In their modified state, they evoked the idea of randomness, surprising the audience. Paik’s pianos could not return to their initial state. Once modified or destroyed, they remained permanently so.
Paik first used a piano in his Hommage à John Cage of 1959, where he tore off ten of the piano’s strings and played it first as a stringed instrument and then as a percussive one before finally destroying it. In his solo show Exposition of Music — Electronic Television, 1963, Paik prepared six pianos in various stages of destruction and modification. To one of them, Klavier Integral, he attached a blow dryer that could be triggered by a key press, along with a bra and barbed wire, making it threatening and taboo for the audience. In doing so, Paik offered a different sensory experience, beyond that of sound, one that assumed a tactile and visual nature as well. Phenomenologically, his piece assumed a synesthetic character. Paik’s grand plan was that of extending the boundary of music into a sort of “integral art” that would encompass the visual and the performative/theatrical as well as the sculptural, and, through the use of the interior of the villa which housed the exhibition, even the architectural, generating meaning through layout. Summarising his intentions, even before his manifesto of 1963, Paik declared in 1959: ‘Schoenberg wrote ‘atonal’. John Cage has written ‘a-composition’. Me, I write ‘a-music’.’ In his quest for an integral art, video art as output by TV sets proved to be the perfect multisensory experience. A flow of electrons that could be manipulated infinitely, the moving image was the texture of this integral art. But at the time, the television, just like the radio, was broadcasting preset sensory data. Paik’s conversion of the TV from a reproductive machine displaying pre-determined pieces to an open, productive one can also be linked to Cage and the influence chain that I have mentioned. With Imaginary Landscape No. 4, Cage had for the first time introduced the idea of discarding predetermined scores.
The “site specific” and “live” attributes of the performance were a consequence of using the material that the radio stations beamed and not prepared pieces by Cage himself (incorporating randomness). As early as the 1920s, the idea of using traditional reproductive instruments for productive purposes (converting them) was traversing Western European creative circles. Cage’s piece corresponds to this concept but only as regards reception. Whereas Cage’s prepared piano altered the sound, Paik’s prepared pianos for the 1963 Exposition of Music event triggered events in the environment when keys were operated: a transistor radio would play, a key would shut off the lights in the room, and so on. Like the pianos, and like Cage’s radios, the TVs were not instrumental in reproducing a pre-established piece but were production tools in their own right. And like Cage’s radios, the material they used relied on broadcasts from local stations. The whole set-up was an “open artwork,” and the audience was the main performer. To put it simply, Paik stripped away the original function of the TV (which is to reproduce) in order to convert it into a productive machine, inciting visitors (and their instinctual need to play and touch) to make never-before-seen images.

Assimilating McLuhan: On Media as Extension of Perception. Correctives to Attention Deficits Induced by Media Culture

Cage’s Imaginary Landscape No. 4 consisted of twenty-four performers working with twelve radios and a conductor operating them, modifying the station, the pitch and the volume. The piece was site specific and “live,” playing with indeterminacy, as we have seen, as the performers only operated with sounds that depended on the station. It also introduced silence as a compositional element one year before 4'33". The follow-up piece, 0'00", was a third work to use this element. Germane to these pieces was the observation that in a world saturated with media information, attention is a scarce resource.
The pieces were minutes-long exercises in heightened sensibility of perception. Cage specifically thought of media as ‘expansion to man’, an expansion of perception. Radio as an extension of man is an idea that can be traced, around the same time, to Marshall McLuhan, the Canadian media communications philosopher. McLuhan was highly influential on both Cage and Paik. In his 1964 book, Understanding Media: The Extensions of Man, McLuhan creates a theoretical framework for understanding contemporary media culture. His premise is that all technology is in essence an extension of human abilities and senses. The printed book, the radio, TV and even clothing all extend what humans can do. Because of this, technology destabilizes the natural balance of our senses and in turn affects the sensibility of society. In a subliminal way, the invention of new media was, according to McLuhan, the main factor in cultural shifts in the West. The effects of media change, in essence, the structure of the world in which we live. His famous idea ‘the medium is the message’ is a consequence of this: because we don’t make a conscious decision to participate in the dialogue that a medium opens, we permit the medium to impose its own assumptions upon us and thus transform it into the actual message as it shapes our world in the process. Any medium that heightens one sense to the detriment of the other four leads to individuation: phonetic language and then the movable type, invented in the 15th century, plunged us into a world characterized by the primacy of vision above all other senses and in which individuals could detach themselves from a body of society that was less and less tribal. The advent of the electronic age, however, starting with the telegraph, restored a certain balance to the senses and reconnected us into a global neural network, exteriorizing the human nervous system and bombarding it with an abundance of information.
But because media creates its own environment, just as ‘electric light is pure information’, ‘a medium without a message’ whose content is anything it shines on (the perfect McLuhanian metaphor), some are beneficial to certain messages while others are not. More exactly, media themselves can be hot or cool, depending on the participation of the audience. ‘A hot medium is one that extends one single sense in ‘high definition’. High definition is the state of being well filled with data. (…) Hot media are therefore low in participation and cool media are high in participation and completion by the audience. Naturally, therefore, a hot medium like radio has very different effects on the user from a cool medium like the telephone.’ Radio is seen as hot because it broadcasts continuously and offers all information in a straightforward way. TV is cool because it is immersive. ‘Radio will serve as background-sound (…) TV will not work as background, it engages you. You have to be with it.’ It is low definition and breaks away from uni-sensory experience by employing both sound and vision. Because of this it is the perfect gateway into the neural network. Cage was the first to take control of the stream of media information through his modified radio transmissions and sounds. As media information became more abundant, the pieces enabled the listener, through silence, to be more attentive. Silence enables reflection on perception itself and on corporeality and attention. It heightens one’s senses, but by breaking the mode of broadcasting in which the ear is subjugated by the transmission, it ‘cools down’ the media, in McLuhanian terms. In Zen for Film (an eight-minute-long white screen of Fluxus noiseless content), Paik referenced Cage’s silence. He invited Cage and Cunningham to watch an hour-long film. As Cage thought about the similarities between their works, he stated: “Offhand, you might say that all three actions are the same. But they are quite different.
The Rauschenberg paintings [White Paintings]… become airports for particles of dust and shadows that are in the environment. My piece 4'33" becomes in performances the sounds of the environment. Now, in the music, the sounds of the environment remain, so to speak, where they are, whereas in the case of the Rauschenberg paintings the dust and shadows, the changes in light and so forth, don’t remain where they are but come to the painting. In the case of the Nam June Paik film that has no images on it [Zen for Film]…, the focus is more intense. The nature of the environment is more on the film, different from the dust and the shadows that are the environment falling on the painting, and thus less free.” Paik replied three years later: ‘N.B. Dear John, The nature of the environment is much more on TV than on film and painting. In fact, TV (its random movement of electrons) IS the environment of today.’ The McLuhanian concept of the medium being the message is present here, although phrased differently. In his quest to blur the roles of producer, performer and audience, Paik seems to have noticed that the coolness of TV promotes audience participation much more than any other medium. McLuhan noticed this in 1964: ‘The cool TV medium promotes depth structures in art and entertainment alike, and creates audience involvement in depth as well.’ The Age of TV is the advent of the exhibition of art as a multi-sensory, deep experience that stimulates the non-visual senses as well. Paik understood this and was pursuing it with his ‘integral art’ as early as 1963.

Random Access. The Experiencer Free in Time and Space

In a truly immersive, integral art, audience participation had to be total. The degree of freedom of the experiencer of the artistic performance must be absolute. In a text from 1963, About the Exposition of Music, Paik observed that in most indeterminate music, the composer gives freedom to the interpreter but not to the audience.
This held true for Cage’s work as well. The listeners had the option of listening or abstaining from it, and this binary choice system was the same as it was for classical music. Moreover, the flow of time was in one direction, just as the playing was from beginning to end. He explained further: “The audience cannot distinguish the undetermined time or sounds of the interpreter… The problem becomes more confused if the interpreter has a ’rehearsal’…, or if the interpreter plays it many times as his favourite ‘repertoire’… this is the prostitution of the freedom… if the interpreter rehearses even only once, the degree and the character of the indeterminacy becomes the same as in classical, if not baroque, if not Renaissance, if not medieval music. This is why I have not composed any undetermined music, or graphical music, despite my high respect for Cage and his friends.” His plan for Symphony for 20 Rooms was an attempt to remedy this problem. The listeners could move freely from one room to another, from one auditory experience to another. When Paik said ‘I am tired of renewing the form of music – serial or aleatoric, graphic or five lines, instrumental or bellcanto, screaming or action, tape or live… I hope must renew the ontological form of music…’ he was referring to these problems. This brings us to the ideas of random access and variability, which are central to Paik’s video art. Listeners (any kind of experiencer, for that matter) had to have phenomenological options regarding the events they were presented with. His Random Access (sticking bits of audio tape on the wall and then using the needle of a player to read them in any order the listener wants) is a direct translation of that principle.

Random Access (1963/2000). Photo: Erika Barahona Ede. Courtesy of The Solomon R. Guggenheim Foundation, New York & Nam June Paik Estate, Seoul

Paik also adopted Cage’s idea that “music is a chronology.” He referred to his art as TIME art.
His 1963 exhibition put equal weight on the TVs, the objets sonores and the Zen objects. Only rather late in the planning stages did he make it clear that the second part of the show was going to be on electronic art (Electronic Television). He kept his work on TVs secret and taught himself electronics to understand the technical principles he was working with. The objective was to convert the TV into a self-referential form and also to deal with the phenomenological state of the experiencer. He defined freedom in terms of time, saying that all musical experiences are essentially strips of time. The purpose of time art was thus to liberate the experiencer from unidirectional time. He realized this in different ways. One is the simultaneous perception of images from thirteen independent TVs in the Wuppertal 1963 show. The experiencer has the possibility of choosing independent flows of information and experience, and thus different chronologies. Time is also in a direct relationship with space, so another way to translate this experiment is to give the experiencer freedom to pursue spatiality as he wishes. This was his project in Symphony for 20 Rooms. The idea is that different paths in space yield different chronological sequences of sound. A transnational artist, Nam June Paik is the embodiment of the Electronic Age from a rich multicultural perspective. An artist of the wave and the electron, his experiments with various media paved the way for an entire generation of manipulators of sound and image. John Cage’s influence on his thought is almost impossible to quantify. It seems that for every intuition that Cage had regarding the nature of artistic intention and production, Nam June Paik reacted with passionate dedication. Cage’s role was to push boundaries with thirsty curiosity. Paik’s was to explore their outer regions with wild imagination.
Their enduring friendship produced a fruitful intellectual dialogue, both directly through collaborations and indirectly through the enormous provocation that Cagean thought posed to Paik, challenging him to escape its cage. Their beautiful relationship stands as a testament to the collaborative power of men, ever raising the scaffolding of the human spirit.
https://medium.com/history-of-art/nam-june-paik-escaping-the-cage-d5f6fdfdd750
Author: Liviu Tanasoaica
Published: 2018-01-15 12:49 UTC
Tags: Video Art, Art History, Art, Music
medieval music composed undetermined music graphical music despite high respect Cage friends” plan Symphony 20 Rooms attempt remediate problem listener could move freely one room another one auditory experience another Paik said ‘I tired renewing form music – serial aleatoric graphic five line instrumental bellcanto screaming action tape live… hope must renew ontological form music…’ referring problem brings u idea random access variability central Paik’s video art Listeners kind experiencer matter phenomenological option event presented Random Access sticking bit audio tape wall using needle player read order listener want direct translation principle Random Access 19632000 Photo Erika Barahona Ede Courtesy Solomon R Guggenheim Foundation New York Nam June Paik Estate Seoul Paik also adopted Cage’s idea “music chronology” referred art TIME art 1963 exhibition put equal weight TV object sonores Zen object made clear second part show going electronic art Electronic Television rather late planning stage kept work TVs secret taught electronics understand technical principle working objective convert TV selfreferential form also deal phenomenological state experiencer defined freedom term time saying musical experience essentially strip time purpose time art thus liberate experiencer unidirectional time realized different way One simultaneous perception image thirteen independent TVs Wuppertal 1963 show experiencer possibility choosing independent flow information experience thus different chronology Time also direct relationship space another way translate experiment give experiencer freedom pursue spatiality wish project Symphony 20 Rooms idea different path space yield different chronological sequence sound transnational artist Nam June Paik embodiment Electronic Age rich multicultural perspective artist wave electron experiment various medium paved way entire generation manipulator sound image John Cage’s influence thought almost impossible quantify seems every 
intuition Cage regarding nature artistic intention production Nam June Paik reacted passionate dedication Cage’s role push boundary thirsty curiosity Paik’s explore outer region wild imagination enduring friendship produced fruitful intellectual dialogue directly collaboration indirectly enormous provocation Cagean thought posed Paik challenging escape cage beautiful relationship stand testament collaborative power men ever rising scaffolding human spiritTags Video Art Art History Art Music
4,523
Why Brand Strategy Matters More Than Ever, Even Online
Why Brand Strategy Matters More Than Ever, Even Online The biggest lesson I’ve learned in my eighteen years in the industry is that design is not art. Photo by Kaleidico on Unsplash Design needs to help solve a particular problem — usually a business one. And we need to become more aware and considerate about that. Brand Strategy is one way we can marry design and business results more efficiently. If, as a designer, you had a client dismiss your perfect design concept and were told to start again, you may have just met a client from hell. And if, as a client, you needed to ask the designer you’ve hired for half a dozen revisions to something as simple as a business card, you may have hired an incompetent Mac operator. Or, in both cases, it could mean you need to adopt a more strategic approach. But first, a quick story about how I got started in the world of strategic brand design.
https://medium.com/better-marketing/why-brand-strategy-matters-more-than-ever-even-online-8cc1ba1fe486
['Ilya Lobanov']
2020-11-20 15:44:31.355000+00:00
['Online Strategy', 'Strategic Design', 'Branding', 'Marketing', 'Brand Strategy']
Title Brand Strategy Matters Ever Even OnlineContent Brand Strategy Matters Ever Even Online biggest lesson I’ve learned eighteen year industry design art Photo Kaleidico Unsplash Design need help solve particular problem — usually business one need become aware considerate Brand Strategy one way marry design business result efficiently designer client dismiss perfect design concept told start may met client hell client needed ask designer you’ve hired half dozen revision something simple business card may hired incompetent mac operator case could mean need adopt strategic approach first quick story got started world strategic brand designTags Online Strategy Strategic Design Branding Marketing Brand Strategy
4,524
My Love, Let’s Throw Away Our Books and Live
My Love, Let’s Throw Away Our Books and Live Let us get out Photo by Eugenio Mazzone on Unsplash All these books full of letters are too heavy for our lives. We are saturated with these abstract signs running from page to page. So much dust, we don’t dare make a move that would displace some air, We hold our breaths and our voices like in a dead church dedicated To abstract knowledge. Come into the garden and read some poetry aloud. I love listening to your voice and your silence, your breath cleanses miasmas. Right now, I want to hear your clear laughter and give you a tender kiss.
https://medium.com/illumination/my-love-lets-throw-away-our-books-and-live-7e8b6599e354
['Jean Carfantan']
2020-06-26 21:48:06.136000+00:00
['Poetry', 'Books', 'Kiss', 'Garden', 'Love']
Title Love Let’s Throw Away Books LiveContent Love Let’s Throw Away Books Live Let u get Photo Eugenio Mazzone Unsplash book full letter heavy life saturated abstract sign running page page much dust don’t dare make move would displace air hold breath voice like dead church dedicated abstract knowledge Come garden read poetry aloud love listening voice silence breath clean miasma Right want hear clear laughter give tender kissTags Poetry Books Kiss Garden Love
4,525
First neural network for beginners explained (with code)
Creating our own simple neural network Let’s create a neural network from scratch with Python (3.x in the example below). import numpy, random lr = 1 #learning rate bias = 1 #value of bias weights = [random.random(),random.random(),random.random()] #weights generated in a list (3 weights in total for 2 neurons and the bias) The beginning of the program just imports the libraries, defines the values of the parameters, and creates a list which contains the values of the weights that will be modified (those are generated randomly). def Perceptron(input1, input2, output) : outputP = input1*weights[0]+input2*weights[1]+bias*weights[2] if outputP > 0 : #activation function (here Heaviside) outputP = 1 else : outputP = 0 error = output - outputP weights[0] += error * input1 * lr weights[1] += error * input2 * lr weights[2] += error * bias * lr Here we create a function which defines the work of the output neuron. It takes 3 parameters (the 2 values of the neurons and the expected output). “outputP” is the variable corresponding to the output given by the Perceptron. Then we calculate the error, used to modify the weights of every connection to the output neuron right after. for i in range(50) : Perceptron(1,1,1) #True or true Perceptron(1,0,1) #True or false Perceptron(0,1,1) #False or true Perceptron(0,0,0) #False or false We create a loop that makes the neural network repeat every situation several times. This part is the learning phase. The number of iterations is chosen according to the precision we want. However, be aware that too many iterations could lead the network to over-fitting, which causes it to focus too much on the treated examples, so it couldn’t give a correct output on cases it didn’t see during its learning phase. However, our case here is a bit special, since there are only 4 possibilities, and we give the neural network all of them during its learning phase. A Perceptron is supposed to give a correct output without having ever seen the case it is treating.
x = int(input()) y = int(input()) outputP = x*weights[0] + y*weights[1] + bias*weights[2] if outputP > 0 : #activation function outputP = 1 else : outputP = 0 print(x, "or", y, "is : ", outputP) Finally, we can ask the user to enter the values himself to check if the Perceptron is working. This is the testing phase. The Heaviside activation function is interesting to use in this case, since it maps all values back to exactly 0 or 1, as we are looking for a false or true result. We could try with a sigmoid function and obtain a decimal number between 0 and 1, normally very close to one of those limits. outputP = 1/(1+numpy.exp(-outputP)) #sigmoid function We could also save the weights that the neural network just calculated in a file, to use them later without running another learning phase. This is done for much bigger projects, in which that phase can last days or weeks.
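The snippets above can be consolidated into one self-contained, runnable sketch. This is not the article's exact listing: the fixed seed, the lowercase function names, and the `predict` helper (which replaces the interactive `input()` test) are additions for reproducibility, but the learning rule and the Heaviside activation follow the article.

```python
import random

# Reproducible weight initialization (the article uses unseeded random weights).
random.seed(0)

lr = 1        # learning rate
bias = 1      # value fed into the bias input
weights = [random.random() for _ in range(3)]  # 2 inputs + bias

def perceptron(x1, x2, target):
    """One training step: forward pass, Heaviside activation, weight update."""
    out = x1 * weights[0] + x2 * weights[1] + bias * weights[2]
    out = 1 if out > 0 else 0       # Heaviside activation
    error = target - out
    weights[0] += error * x1 * lr
    weights[1] += error * x2 * lr
    weights[2] += error * bias * lr

# Learning phase: repeat the four OR truth-table cases.
for _ in range(50):
    perceptron(1, 1, 1)  # True or true
    perceptron(1, 0, 1)  # True or false
    perceptron(0, 1, 1)  # False or true
    perceptron(0, 0, 0)  # False or false

def predict(x1, x2):
    """Forward pass only, no weight update (testing phase)."""
    out = x1 * weights[0] + x2 * weights[1] + bias * weights[2]
    return 1 if out > 0 else 0

print([predict(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [0, 1, 1, 1]  (the learned OR function)
```

Since OR is linearly separable, the perceptron convergence theorem guarantees the loop settles on correct weights well within 50 passes.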
https://towardsdatascience.com/first-neural-network-for-beginners-explained-with-code-4cfd37e06eaf
['Arthur Arnx']
2019-08-11 09:03:20.174000+00:00
['Perceptron', 'Artificial Intelligence', 'Neural Networks', 'Guides And Tutorials']
Title First neural network beginner explained codeContent Creating simple neural network Let’s create neural network scratch Python 3x example import numpy random o lr 1 learning rate bias 1 value bias weight randomrandomrandomrandomrandomrandom weight generated list 3 weight total 2 neuron bias beginning program defines library value parameter creates list contains value weight modified generated randomly def Perceptroninput1 input2 output outputP input1weights0input2weights1biasweights2 outputP 0 activation function Heaviside outputP 1 else outputP 0 error output – outputP weights0 error input1 lr weights1 error input2 lr weights2 error bias lr create function defines work output neuron take 3 parameter 2 value neuron expected output “outputP” variable corresponding output given Perceptron calculate error used modify weight every connection output neuron right range50 Perceptron111 True true Perceptron101 True false Perceptron011 False true Perceptron000 False false create loop make neural network repeat every situation several time part learning phase number iteration chosen according precision want However aware much iteration could lead network overfitting cause focus much treated example couldn’t get right output case didn’t see learning phase However case bit special since 4 possibility give neural network learning phase Perceptron supposed give correct output without ever seen case treating x intinput intinput outputP xweights0 yweights1 biasweights2 outputP 0 activation function outputP 1 else outputP 0 printx outputP Finally ask user enter value check Perceptron working testing phase activation function Heaviside interesting use case since take back value exactly 0 1 since looking false true result could try sigmoid function obtain decimal number 0 1 normally close one limit outputP 11numpyexpoutputP sigmoid function could also save weight neural network calculated file use later without making another learning phase done way bigger project phase last day 
weeksTags Perceptron Artificial Intelligence Neural Networks Guides Tutorials
4,526
Six Wrong Predictions Reported By the New York Times
The New York Times is one of the prominent American daily newspapers with millions of readers in the US and across the globe. During his tenure as the President of the United States, Donald Trump attacked the New York Times and other media outlets, consistently labeling them “fake news.” In contrast to his remarks, the New York Times has won 130 Pulitzer Prizes — more than any other newspaper. Established in 1851, it has been an influential newspaper in the US and around the world for decades. It’s known as a national “newspaper of record,” based on the Encyclopedia Britannica. While acknowledging the New York Times’ reputation and credibility, I shed light upon six predictions reported by this newspaper that are untrue now. These predictions were made about Airplanes & Flying, Laptops, Apple & iPhone, Twitter, Television, and Automobiles. 1. On Flying: We won’t be able to fly in millions of years On October 09, 1903, the New York Times published a piece about the future of flying titled “THE FLYING MACHINES THAT DO NOT FLY,” which stated: “… it might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years — provided, of course, we can meanwhile eliminate such little drawbacks and embarrassments as the existing relation between weight and strength in inorganic materials.” On December 17, 1903, the Wright brothers flew their first airplane. A decade ago, how many of us thought that producing flying cars might be a myth? Now, we not only have flying cars but also flying Gravity Jets, or let’s say flying humans. What’s next? Photo by Natali Quijano on Unsplash 2. On Laptop computers: No one would be interested in a Laptop A New York Times article from 1985 discussed that few people would be interested in carrying a personal computer and laptop. 
The article titled “THE EXECUTIVE COMPUTER” states: “On the whole, people don’t want to lug a computer with them to the beach or on a train to while away hours they would rather spend reading the sports or business section of the newspaper… the real future of the laptop computer will remain in the specialized niche markets. Because no matter how inexpensive the machines become, and no matter how sophisticated their software, I still can’t imagine the average user taking one along when going fishing.” As of February 2019, over 70% of the US households had either a laptop or a desktop computer at home. Laptop computers continue to shrink in size but become more powerful in terms of capacity. “The first floppy disk, introduced in 1971, had a capacity of 79.7 kB” Now, even a Notepad file is larger than 80 kilobytes. Are kilobytes still relevant? Don’t you take your laptop along when going fishing? Forget about notebooks; our mobile phones, tablets, and other similar devices are more accessible now — they’re more portable, personal, and closer to our hearts and EYES. Photo by Campaign Creators on Unsplash 3. On Apple and iPhones: They will never succeed | They will never have a phone either Apple was established in 1976. Two decades later, the New York Times wrote that Apple would fail, quoting a Forrester Research analyst: “Whether they stand alone or are acquired, Apple as we know it is cooked. It’s so classic. It’s so sad.” A decade later, in 2006, another article reported that Apple might never produce a cell phone: “Everyone’s always asking me when Apple will come out with a cell phone. My answer is, ‘Probably never.’” David Pogue, The New York Times. Apple released its iPhone in 2007. Since then, 2.2 billion iPhones have been sold. And Apple? It’s the most valuable brand in the world as of 2020. Photo by Neil Soni on Unsplash 4. 
On Twitter: Only the illiterate might use it A New York Times article discussed the emergence of Twitter in which a reference to Bruce Sterling’s earlier remarks on Twitter was also made. He is a New York Times best-selling science-fiction writer and journalist. In 2006, he held the view that Twitter would not be prominent amongst the intellectuals, but only the illiterate might use it: “Using Twitter for literate communication is about as likely as firing up a CB radio and hearing some guy recite ‘The Iliad.’” — Bruce Sterling, The New York Times. President Donald Trump tweeted over 17,000 tweets in just the first two-and-a-half years of his presidency — and the most literate people have retweeted them. As of 2018, Twitter has had over “321 million monthly active users.” Many politicians, celebrities, intellectuals, and other highbrows use it too. Unlike the prediction, it’s only the illiterate that cannot use Twitter, as it’s hard to say so many things in a few characters. Photo by MORAN on Unsplash 5. On Television: It will not be a competitor of broadcasting, and people will not have time to watch it In 1939, a New York Times article suggested that people won’t have time to watch television. For this reason, it cannot compete with other forms of media such as newspapers and radio. The article narrates: “The problem with television is that the people must sit and keep their eyes glued on a screen; the average American family hasn’t time for it.” Based on a 2019 estimate, “307.3 million people ages 2 and older live in US TV households.” Now, you can watch TV even in the toilet. You don’t need to carry your TV, just your phone or tablet. But there is one thing: Newspapers are not gone. Over 69% of the US population still read newspapers. According to Forbes, print remains the most common medium, with 81 percent reading this format. According to studies, 2.5 billion people read print newspapers daily. What happened to the TV industry?
As of 2015, “An estimated 1.57 billion households around the world owned at least one TV set.” Since more people typically live in a household, billions of people watch TV every day — more than those who read newspapers. Photo by Dave Weatherall on Unsplash 6. On High-Speed Automobiles: We won’t be able to drive over 80 miles per hour Reporting on the dangers of high-speed driving, a New York Times article suggested that our brains cannot guide a car at any speed over 80 miles per hour. It reported a debate between two experts that took place in Paris in 1904. The article says: “It remains to be proved how fast the brain is capable of traveling […] If it cannot acquire an eight-mile per hour speed, then an auto running at the rate of 80 miles per hour is running without the guidance of the brain, and the many disastrous results are not to be marveled at.” In 1894, the Benz Velo had a top speed of 12 mph (20 km/h). In 1904, it was claimed that no speed over 80 mph was plausible. In 2017, the Koenigsegg Agera RS reached a top speed of 277.87 mph (447.19 km/h). In Germany, there are no speed limits on most highways.
https://medium.com/swlh/six-wrong-predictions-reported-by-the-new-york-times-252c0f4b8e32
['Massùod Hemmat']
2020-12-15 19:03:49.558000+00:00
['The New York Times', 'Predictions', 'Journalism', 'Technology', 'Politics']
Title Six Wrong Predictions Reported New York TimesContent New York Times one prominent American daily newspaper million reader US across globe tenure President United States Donald Trump attacked New York Times medium outlet consistently labeling “fake news” contrast remark New York Times 130 Pulitzer Prizes — newspaper Established 1851 influential newspaper US around world decade It’s known national “newspaper record” based Encyclopedia Britannica acknowledging New York Times’ reputation credibility shed light upon six prediction reported newspaper untrue prediction made Airplanes Flying Laptops Apple iPhone Twitter Television Automobiles 1 Flying won’t able fly million year October 09 1903 New York Times published piece future flying titled “THE FLYING MACHINES FLY” stated “… might assumed flying machine really fly might evolved combined continuous effort mathematician mechanicians one million ten million year — provided course meanwhile eliminate little drawback embarrassment existing relation weight strength inorganic materials” December 17 1903 Wright brother flew first airplane decade ago many u thought producing flying car might myth flying car also flying Gravity Jets let’s say flying human What’s next Photo Natali Quijano Unsplash 2 Laptop computer one would interested Laptop New York Times article 1985 discussed people would interested carrying personal computer laptop article titled “THE EXECUTIVE COMPUTER” state “On whole people don’t want lug computer beach train away hour would rather spend reading sport business section newspaper… real future laptop computer remain specialized niche market matter inexpensive machine become matter sophisticated software still can’t imagine average user taking one along going fishing” February 2019 70 US household either laptop desktop computer home Laptop computer continue shrink size become powerful term capacity “The first floppy disk introduced 1971 capacity 797 kB” even Notepad file larger 80 kilobyte kilobyte 
still relevant Don’t take laptop along going fishing Forget notebook mobile phone tablet similar device accessible — they’re portable personal closer heart EYES Photo Campaign Creators Unsplash 3 Apple iPhones never succeed never phone either Apple established 1976 Two decade later New York Times wrote Apple would fail quoting Forrester Research analyst “Whether stand alone acquired Apple know cooked It’s classic It’s sad” decade later 2006 another article reported Apple might never produce cell phone “Everyone’s always asking Apple come cell phone answer ‘Probably never’” David Pogue New York Times Apple released iPhone 2007 Since 22 billion iPhones sold Apple It’s valuable brand world 2020 Photo Neil Soni Unsplash 4 Twitter illiterate might use New York Times article discussed emergence Twitter reference Bruce Sterling’s earlier remark Twitter also made New York Times’ bestselling sciencefiction writer journalist 2006 idea Twitter would prominent amongst intellectual illiterate might use “Using Twitter literate communication likely firing CB radio hearing guy recite ‘The Iliad’” — Bruce Sterling New York Times President Donald Trump tweeted 17000 tweet first twoandahalf year presidency — literate people retweeted 2018 Twitter “321 million monthly active users” Many politician celebrity intellectual highbrow use Unlike prediction made 2007 it’s illiterate cannot use Twitter it’s hard say many thing character Photo MORAN Unsplash 5 Television competitor broadcasting people time watch 1939 New York Times article suggested people won’t time watch television reason cannot compete form medium newspaper radio article narrates “The problem television people must sit keep eye glued screen average American family hasn’t time it” Based 2019 estimate “3073 million people age 2 older live US TV households” watch TV even toilet don’t need carry TV phone tablet one thing Newspapers gone 69 US population still read newspaper Based Forbes print remains common medium 81 percent 
reading format According study 25 billion people read print newspaper daily happened TV industry 2015 “An estimated 157 billion household around world owned least one TV set” Since people typically live household billion people watch TV every day — read newspaper Photo Dave Weatherall Unsplash 6 HighSpeed Automobiles won’t able drive 80 mile per hour Reporting danger highspeed driving New York Times article suggested brain cannot guide car speed 80 mile per hour reported debate two expert took place Paris 1904 article say “It remains proved fast brain capable traveling … cannot acquire eightmile per hour speed auto running rate 80 mile per hour running without guidance brain many disastrous result marveled at” 1894 Benz Velo 12 mph 20 kmh 1904 claimed speed 80 mph plausible 2017 Koenigsegg Agera RS produced production speed 27787 mph 44719 kmh Germany speed limit highwaysTags New York Times Predictions Journalism Technology Politics
4,527
How to overpower HiPPO syndrome to make better design decisions
How to overpower HiPPO syndrome to make better design decisions HiPPO: Highest Income Paid Person’s Opinion. I know, I know, real HiPPOs don’t dress like that but that’s where creative freedom comes in, illustration by Quovantis If you have Lead, Manager, Director, VP, or any job title which resonates with a leadership position then you might be perceived as a HiPPO. Without you even realizing it. Getting labeled as a HiPPO could be a noteworthy achievement of your career, but not so much when you want to make decisions harnessing the diversity and collective wisdom of your group. You see, when you are perceived as a HiPPO, what you say — even a mere harmless suggestion — has the potential to be interpreted as a decision. It happens because your team thinks you must be good at your job otherwise you wouldn’t be in a leadership position. Or pessimistically, it’s ultimately your neck on the line then why not roll with your decision? It leads to poor decision making as your team members’ ideas don’t see the light of the day. And if you don’t nip this HiPPO syndrome in the bud then it eventually creates a slippery slope to autocracy — or mediocrity. Does it mean you should stop sharing suggestions? Does it mean you shouldn’t trust your experience or intuition which you’ve sharpened, over the years, by handling diverse situations? Does it mean that you put aside the learnings from your failures and successes? Does it mean that you shouldn’t exercise the authority of your own position to make decisions? Absolutely not. That’s not what I’m suggesting. Here is what you can do if you want to have spirited conversations, bring forth everyone’s ideas, make informed decisions, nip the HiPPO syndrome in the bud - 01. Practice silent design selections Zen voting i.e. silent critique in motion, illustration by Quovantis This technique is a great group dynamics leveler. It not only solves the problem of HiPPO syndrome but gets rid of groupthink altogether. 
To practice it, hang your design options like a curator hangs paintings in a Museum. And invite every design team member to vote on their preferred design options. This silent-critique of design options rather than discussing them openly gives everyone a fair chance to cast their vote without getting sucked into groupthink. Also, I would encourage you (the perceived HiPPO) to vote last. This makes sure that you don’t influence other team members accidentally. Jake Knapp popularized this silent critique method in his seminal book, Sprint. We fondly call this method Zen Voting. 02. Lead with questions rather than answers Start with a question, illustration by Quovantis Some of us have this habit of saying — “I think we should….” or “We must try…” while sharing suggestions to the problems that have just been conceived. When a HiPPO uses words like ‘should’ and ‘must-try’, it comes out as an imposition, rather than a suggestion. Also, when a HiPPO shares such suggestions first, it promotes groupthink and impedes creative thinking to solve the problem. Rather than starting with your suggestions, open up the conversation by saying — “How might we <your problem statement here>?” — to seek suggestions from your team. It ignites your design team to bring in their collaborative, and creative spirit to solve the problem rather than choose one of your suggestions. And this way you demonstrate that ideas win over titles in your group. 03. Trust the data a sine wave of data and you thought Mathematics wasn’t useful, illustration by Quovantis Experience leads to wisdom — and hones your intuition. But, it doesn’t mean your intuition is always right. So take a pause whenever you feel like saying “Okay, why don’t we do this…”. Reflect on the user persona and see if it solves the problem for them. Or, see if there is any usage analytics or research data to back your claim for the proposed solution. 
Consider this — your users are dropping off at your site’s checkout page and you suggest a redesign to increase user engagement by including the cross-sell options. Before proceeding, stop and think if it would work. Do you have data to prove it? Would adding more options at checkout solve the cart abandonment issue or complicate it further? Wouldn’t it make more sense to simplify the checkout process and help users focus only on the products in their shopping cart rather than increasing their cognitive load by making them look at more products? (I know, I know, no designer in their sane mind would ever pose this kind of a solution. It was merely a hyperbolic hypothetical scenario.) In case you’re designing it for the first time and don’t have any data to look into, invest in designing multiple options. You could do multivariate testing to establish what works best rather than just relying on your intuition. 04. Ditch the head-chair This looks more like a throne rather than a head-chair, but I’m sure you get the point, illustration by Quovantis This is plain ol’ common sense. Some leaders end up taking the head-chair in meetings. Nothing wrong with letting people know who is the boss. Well, if that’s your thing. But, if you are fostering a collaborative spirit where only the best ideas survive, little things like the position of your chair could have a subliminal impact. PsychologyToday recommends sitting in the second or third spot on a big table to signal you being part of the team. And, you are here to collaborate not dictate. 05. Ask the AWE question Seriously, what else?, illustration by Quovantis AWE: ‘And What Else?’ Rather than pitching your suggestions, encourage your team to generate more design ideas. Ask the AWE question until it becomes obvious that the team has exhausted all their creative options. And then, only then, present your ideas. 
You would be (pleasantly) surprised how often your team comes up with the very solution you were itching to propose. And if for whatever reason they don't, you can always go last. This builds the team's creative and confidence muscles, as they get to be accountable for solving their own problems rather than taking the official decree from their HiPPO, a.k.a. you. 06. Ask your team to consider their existing commitments To be or not to be, illustration by Quovantis You bring in this question when you're about to arrive at a design decision. It helps your team members focus, especially the overzealous and overcommitted ones — "If you are saying yes to this design option, what are you saying no to?" This becomes even more pertinent when the team is about to implement one of your suggestions. It helps them reflect on their existing workload and check that they aren't signing up for commitments they can't keep. It makes them consider the time needed to complete the design option and pushes them to prioritize better. This question helps them either sign up for commitments they can deliver on, or keep exploring solutions that can be completed within the given timeframe.
https://uxdesign.cc/how-to-overpower-hippo-syndrome-to-make-better-design-decisions-3c037ab305b3
['Tarun Kohli']
2020-12-26 14:52:19.154000+00:00
['Product Design', 'Leadership', 'UX Design', 'Product Management', 'Design']
9 UI/UX must tools for designers
There are some tools that a UI/UX designer must know and some that are simply good to know. Let's go through some of them! image by https://www.netlingshq.com/blog/best-ui-design-tools-2019/ 1. Google image by https://www.dawsondawsoninc.com/google-it-infographic/ Well, some might say that Google isn't exactly a tool, but without Google we would still be in the dark ages. Any doubt, any question, any obstacle can probably be solved with Google. Unless you are an exceptional wonder at the very top of the field, there are always people with more experience who have the answer you are seeking. In UI/UX (as in most professions) you are constantly learning and reading. When you don't know something — google it. When you are not sure of something — Google it. Even for many A/B tests you're planning to run — Google it; the answer is probably already there. Of course, in this section I am including Medium, Youtube, Reddit, Quora, WikiHow, and anything else that Google will bring up. 2. Pen and Paper Photo by William Iven on Unsplash Maybe this one is obvious, but it may be the best wireframing tool, and not only that. I suggest everyone use them more often. Besides rough sketches, there are card sortings, gathering ideas and problems, and writing notes. I advise you to train your hand at sketching; it pays to be good at that. Even if you never draw anything else, it's a good thing if your rough sketches look nice, especially when a potential client is watching. 3. Sketch/Figma/Adobe XD Today's three main UI design tools are used by the vast majority of UI/UX designers. All three are very similar, with slight differences. 3.1 Sketch image by https://search.muz.li/NGZkM2QyNDMz SketchApp is the Godfather of all design tools. It's the Mercedes-Benz, the first 100% UI/UX design tool in the industry. The vast majority of prototyping tools work well with Sketch.
Before Sketch, web designers worked with Photoshop/Illustrator/Corel. And to be honest, after Sketch it is pointless to use those tools unless you're designing some very unique website/app where simple shapes won't make it. And don't get me wrong, I respect Photoshop more than any other design tool, but the scope of Photoshop is too big for UI design. Corel Draw and Illustrator are vector-based software mainly used to create logos, print design, illustrations, etc. 3.2 Figma Figma is my favorite tool. It took all the best from Sketch and added many things that Sketch missed. The best value of Figma is that it is browser-based (so it doesn't depend on the platform) and everything is synced; one team member changes something, and it is already changed in the whole project, without the need to publish the changes. Another one of Figma's pros is that CSS is already there, and you don't have to use a third tool such as Zeplin or InVision Studio's Inspect for handoff. By the way — Figma is always improving; they recently even added scrolling animation to its prototypes. 3.3 Adobe XD XD is a go-to tool when you are working at a fast pace. It's the tool that solves problems in a shorter time, but it has almost the same problems that Sketch has (except that Sketch is Mac only, and XD is available on both Windows and Mac); also, there is no inner shadow in XD (what's up with that?). I can go on and on about UI design tools, but I guess that's a topic for another day. 4. Prototyping with InVision Studio/Proto.io/Marvel/Origami 4.1 InVision Studio is a bundle of 4 great tools that are very useful for UI/UX designers. Prototyping is not just a great tool with cool interaction animations. The Inspect is for CSS handoff to developers.
Freehand helps with wireframing, whiteboard interviews, and sitemaps; generally, it acts as pen and paper on your computer, with many useful templates already there. Craft is basically a UI design tool. Also, it works great with Sketch. 4.2 Proto.io is a prototyping tool that helps designers create real-looking hi-fi prototypes. 4.3 Marvel is another tool that helps you create everything from lo-fi to hi-fi prototypes and wireframes, as well as CSS and HTML handoff. Its great interactions make a prototype look like the final product. 4.4 Origami is the tool that makes perhaps the most advanced, real-looking interactions, and it works well with Sketch. There are cons, though: it doesn't hand off the code of interactions, it works only on Mac, and the learning curve is steep. It can be very challenging for beginners. 5. Zeplin Zeplin is a tool that translates UI into CSS. It is a great tool for handoff and collaboration. And it works great with Sketch, XD, Photoshop, and many more. I use Zeplin relatively rarely, as Figma has its main functionality and, as mentioned before, I am a Figma fan. 6. Google Analytics I know, I have already mentioned Google, but Google Analytics is a whole other tool. As the name suggests — it analyzes. It's a great tool for gathering statistics about how your website does in the field, receiving quantitative data, etc. 7. Strategy with Flowmapp/Balsamiq Photo by Amélie Mourichon on Unsplash 7.1 Flowmapp is a tool that helps you with strategy at the beginning of a project. It's a great tool to create IAs, sitemaps, and user flows. 7.2 Balsamiq is a simple yet great tool for wireframing. It has almost no learning curve; anyone can work with it. There are already many wireframe elements, and with a simple drag and drop you can make a pretty good wireframe. 8.
Qualitative research with Bugsee/Appsee/Hotjar 8.1 Bugsee is a tool that targets bugs and crashes in mobile apps. 8.2 Appsee, on the other hand, does not focus on bugs. It helps you understand users and optimize UX and performance. 8.3 Hotjar is a tool that does website analysis and gathers feedback from users. It also helps you learn about users and their experiences in the product. It has features such as recordings of user journeys, form analysis, surveys, recruitment of testers, etc. 9. User testing tools: User Report/Usabilla 9.1 User Report is another great tool, based on surveys and feedback. It works as a part of your website/app and helps you learn about your users as well as connect with them. It also has Google Analytics integration. 9.2 Usabilla is feedback collection software. It provides real-time feedback from users. It also helps you target your questions and timing.
https://uxplanet.org/9-ui-ux-must-tools-for-designers-df60745d990e
['Daniel Danielyan']
2020-12-19 07:17:39.882000+00:00
['UX', 'UI', 'Design', 'Tools', 'Success']
A Systematic Approach to Dynamic Programming
Approaches to DP The two main approaches to dynamic programming are memoization (the top-down approach) and tabulation (the bottom-up approach). So far we've seen that recursion and backtracking are important when applying the DP premise of breaking a complex problem into smaller instances of itself. However, none of the code snippets above qualify as DP solutions, even though they use recursion and backtracking. For a naive recursive solution to count as a DP solution, it should be optimized to cache the results of computed sub-problems. In the short definition of DP above, the emphasis is on solving smaller instances only once — with a strong emphasis on "only once." Memoization Memoization = Recursion + Caching Our framework for a dynamic-programming-worthy problem said that it usually contains overlapping sub-problems. Remember the Fibonacci code above? If we create a recursive tree to compute the seventh Fibonacci number we get this: Notice how many times we solve the same sub-problem. For example, fib(3) is computed five times, and every fib(3) call recursively makes two more fib calls. That's ten function calls solving the same fib(3) sub-problem. Now we start talking DP! Instead of solving the same problem multiple times, why don't we solve it just once and store the result in some data structure in case we need it later? That is memoization! Fib code optimized to caching. This approach is the easier of the two DP approaches presented here. Once you have a recursive solution to the problem, just make sure you cache the solutions to the sub-problems. Before you make a recursive call to solve a sub-problem, check if it was already solved. Notice that here we make a trade: to achieve time efficiency, we're willing to give up memory space to store all computed sub-problems' solutions. Dynamic programming usually trades memory space for time efficiency. When caching your solved sub-problems you can use an array if the solution to the problem depends only on one state.
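The "Fib code optimized to caching" referenced above appears only as an image in the original article; as a minimal sketch of the same memoization idea in Python, it might look like this (a dict is used as the cache, standing in for the 1D array mentioned below):

```python
def fib(n, cache=None):
    """nth Fibonacci number, caching each solved sub-problem so it is computed only once."""
    if cache is None:
        cache = {}
    if n in cache:
        return cache[n]   # sub-problem already solved: reuse the stored result
    if n < 2:
        return n          # base cases: fib(0) = 0, fib(1) = 1
    cache[n] = fib(n - 1, cache) + fib(n - 2, cache)
    return cache[n]

print(fib(7))   # 13 — each fib(k) is now solved once instead of many times
```

With the cache in place, the recursive tree collapses to a single chain of n calls, at the cost of O(n) extra memory.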
For example, in the fib code above, the solution to a sub-problem is the nth Fibonacci number. We can use n as an index into a 1D array, where its value represents the solution to the fib(n) sub-problem. Sometimes the solution to the problem may depend on two states. In this case, you can cache the results using a 2D array, where columns represent one state and rows represent the other. For example, in the famous Knapsack problem (which we'll explore later) we want to optimize for total value, given a maximum weight constraint and a list of items. A knapsack sub-problem may look like this: KS(W, i) → (max value), where we interpret it as: "What is the maximum value I can get with a weight 'W' and considering the 'ith' item?" Therefore, if we want to cache this solution, we need to take both states into account, and that can be accomplished using a 2D array. Memoization is great — we have the elegance of a problem described recursively, and we're solving overlapping sub-problems only once. Well, not everything is that great. We're still making a bunch of recursive calls. Recursion is expensive in both processor time and memory space. Most recursive functions will consume call stack memory linearly with the number of recursive calls needed to complete the task. There are special types of recursive functions, known as tail-recursive functions, that don't necessarily grow the call stack linearly if optimized correctly. These can execute in constant call stack space. Without going into many details, tail-recursive functions perform the recursive call at the end of their execution, meaning that their stack frame is useless thereafter. The same stack memory space can be reused to hold the state for the next recursive call. The problem that arises is in dealing with the return address: we want to make sure that after the recursive tree ends, we return to the instruction that started the series of recursive calls. Feel free to do some research on this topic.
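The two-state KS(W, i) caching described above can be sketched in Python; here a dict keyed on the (capacity, item index) pair stands in for the 2D array, and the weights/values are hypothetical example data, not from the article:

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack: maximum value within `capacity`, memoized on the (w, i) state pair."""
    memo = {}  # memo[(w, i)] caches the answer to the sub-problem KS(w, i)

    def ks(w, i):
        if i < 0 or w == 0:
            return 0                 # no items left or no capacity left
        if (w, i) in memo:
            return memo[(w, i)]
        best = ks(w, i - 1)          # choice 1: skip item i
        if weights[i] <= w:          # choice 2: take item i if it fits
            best = max(best, values[i] + ks(w - weights[i], i - 1))
        memo[(w, i)] = best
        return best

    return ks(capacity, len(weights) - 1)

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9 (take the items of weight 3 and 4)
```

The dict avoids pre-sizing a W-by-n matrix, but the idea is identical: one cache entry per distinct (W, i) state.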
Recursive functions always carry the weight of potential stack overflow issues. The following is a Python command to check the recursion depth limit. If I try to use Python with recursion to solve a problem whose solution involves a recursive depth of more than 1000 calls, I'll get a stack overflow exception. That quantity can be increased, but then we get into language-specific topics. Recursion depth limit in Python In defense of recursive programming, we must say that recursive functions are often easier to formally prove correct. Recursive functions give you the same repetitive behavior as raw loops but without in-block state changes, which are a common source of bugs. States in recursive programming are updated by passing new parameters to new recursive calls, instead of being modified as the loop progresses. Tabulation Tabulation aims to solve the same kinds of problems but completely removes recursion. Removing recursion means we don't have to worry about stack overflow issues, as well as the common overhead of recursive functions. In the tabulation approach to DP (also known as the table-filling method) we solve all sub-problems and store their results in a matrix. These results are then used to solve larger problems that depend on the previously computed results. Because of this, the tabulation approach is also known as a bottom-up approach. It's like starting at the lowest level of your recursive tree and working your way up. Tabulation can be much more counterintuitive than recursive-plus-cached memoization solutions. But it's also much more efficient in terms of time complexity and space complexity if we take into account the call stack memory, which increases linearly with the number of recursive calls — again, assuming it's not tail-recursion optimized. If you go back to the steps initially presented in this piece, you'll find that tabulation is the last step in the systematic approach to DP.
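The recursion-depth check mentioned above (the original shows it only as an image captioned "Recursion depth limit in Python") is just a pair of calls into the standard sys module:

```python
import sys

# Inspect the current recursion depth limit (the CPython default is typically 1000).
print(sys.getrecursionlimit())

# The limit can be raised, at the risk of exhausting the underlying C stack:
sys.setrecursionlimit(5000)
print(sys.getrecursionlimit())  # 5000
```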
This is because it’s easier to get to a tabulation solution by first solving the problem with recursion and backtracking, then optimizing it to caching with memoization techniques, if necessary, and finally making a few adjustments to update it to a final bottom-up solution. Later you will see a few tricks to achieve that. But first, let’s see how tabulation works. We’ve been talking about states a lot, but we still do not have a formal definition of what we mean by states on our DP context. What I understand by states are parameters that affect the outcome of a recursive call. States are what differentiate one call from another and allow us to explore different choices and get an optimal result. We’ll get some practice defining states at the end of this article. Since tabulation proposes a bottom-up approach, we must first solve all sub-problems that a larger problem may depend on. If we don’t solve the smaller problem, we can’t move on to solve the larger one. In tabulation, we use one for-loop for every state of the problem. But where do I make it start? Where do I make it end? To answer that, let’s explore the following recurrence relation. Let’s say we have a function that solves some optimization problem — call it optimal or OP for short. And let’s assume that the nature of the problem makes OP(n) depend on OP(n-1), so OP(n) = OP(n-1). This recurrence relation is telling you that you cannot know what OP(n) is if you don’t know what OP(n-1) is. That means we need to start at the lowest value of n, say 0, and solve every sub-problem all the way to n. That’s the trick: If your recurrence relation shows that your states are decreasing, then your loops should be increased so you compute every sub-problem that larger problems depend on. Remember bottom-up. This will become clear when we apply all the strategies learned to real problems. And guess what? That will start now.
https://medium.com/better-programming/a-systematic-approach-to-dynamic-programming-54902b6b0071
['Fabian Robaina']
2019-08-15 02:11:49.849000+00:00
['Programming', 'Computer Science', 'Python', 'Dynamic Programming']
Why Every Freelance Marketplace That Goes Public Becomes a Startup Titanic?
Photo by K. Mitch Hodge Fiverr just went public. Another freelance marketplace will bite the dust. Why So Serious — Why So Pessimistic? All mega-size freelance platforms are public companies now. Freelancer dot com is a "veteran" in this field. Upwork will have to wait a few more months to light its first public birthday candle. Fiverr didn't even have the time to clean up after their NYSE party. And there you have it: the "Freelance Triumvirate" went public, with no exceptions. That's not a coincidence. Actually, I think I see a clear pattern. Every freelance marketplace's public journey has to go through these five phases. The First Enthusiastic Phase The enthusiasm of freelance platforms at their stock market debuts is simply overwhelming. I dare say it can be quite contagious. All you can see is the confetti rain, but you can't hear a thing. The ringing of the stock market bells can be deafening. Some of these bells became the victims of this enthusiasm. By default, the initial share prices jump sky-high during the first 24 hours after the stock market debut. Upwork had the most "modest" debut with "just" 50%, give or take. Fiverr hit almost a 100% increase compared with the initial IPO price. Freelancer dot com is still the absolute record-breaker. What happens after the first phase is over? The Second Stock Market Roller Coaster Phase In this phase, the IPO honeymoon is far from over. You go up. You go down. That's a completely normal thing. That's why nobody bothers to panic. When you look at the stock market graphs, they all look the same, don't they? ASX: FLN NASDAQ: UPWK NYSE: FVRR How long does this up-and-down phase last? Well, I give it a year. The Third Phase — The First Taste of Bitter Reality After the first year as a public company, every freelance marketplace gets the wake-up call. The trouble is that this call is hidden in the financial reports. Serious investors know all too well that the numbers never lie.
You just have to make sure you're looking at the right numbers. Let's take Upwork's report for the first quarter of 2019.

Source: Upwork

If you compare Upwork's revenues for the first three months of 2018 and 2019, you can cheer up. There are positive changes of 16.4% for total revenue and 20.7% for gross profit. Absolutely nothing to worry about. On the contrary, you can still ride the optimistic wave. However, if you dig a little deeper, you can't avoid a nasty surprise.

Source: Upwork

The total operating expenses have increased by 13.3%. If you leave out the provision for transaction losses, you get an increase of almost 15%. The most troubling part is that the general and administrative costs have jumped by 28.7%. There's no happy ending here, make no mistake about it.

The Fourth Phase — The Real Stock Price Cold As Ice

Your stock market roller coaster ride eventually has to come to an end. Once your stock prices stop going up, what you get is the real value. Look at the attached graphs. Lassie is coming home. The initial IPO price you began your public journey with will be the last and only price of your shares. The real trouble is the moment when you can't even get that initial price. If you can sell your shares while they're still worth something, then your investment adventure in the freelance universe may not leave you in tears.

So, which phase are our public freelance marketplaces currently in? Well, Freelancer dot com is deep into the fourth phase. Upwork is in the second phase. And, of course, Fiverr just got the sweet taste of the first phase.

The Fifth Phase — The End of Freelance Days

Can the stock market of the freelance platforms collapse? There's an ominous symbolism between the years 1929 and 2029. I sure hope, for the sake of all freelancers, that history won't repeat itself. However, none of these graphs are encouraging.

Why did the most popular and powerful freelance marketplaces decide to go public in the first place?
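The year-over-year comparisons above are plain percent-change arithmetic. As a quick sketch of how those growth rates are computed, and how excluding a single line item (like the provision for transaction losses) shifts the measured expense growth, here is a small example. The dollar figures are invented round numbers chosen only to reproduce the article's percentages; they are not Upwork's actual reported amounts.

```python
def pct_change(old, new):
    """Year-over-year percentage change, as used in quarterly report comparisons."""
    return (new - old) / old * 100

# Hypothetical round figures for illustration only -- NOT Upwork's reported numbers.
revenue_2018, revenue_2019 = 100.0, 116.4
print(f"Revenue growth: {pct_change(revenue_2018, revenue_2019):.1f}%")

# Operating expenses broken into line items. The invented figures mirror the
# percentages quoted in the article: +13.3% total, ~+15% excluding the
# provision for transaction losses, +28.7% for general & administrative.
opex_2018 = {"g&a": 30.00, "provision": 10.0, "other": 60.00}
opex_2019 = {"g&a": 38.61, "provision": 9.8, "other": 64.89}

total_growth = pct_change(sum(opex_2018.values()), sum(opex_2019.values()))
ex_provision_growth = pct_change(
    sum(v for k, v in opex_2018.items() if k != "provision"),
    sum(v for k, v in opex_2019.items() if k != "provision"),
)
print(f"Total opex growth: {total_growth:.1f}%")
print(f"Opex growth excluding provision: {ex_provision_growth:.1f}%")
```

The point of the exercise: because the provision line shrank year over year, stripping it out makes the growth of the remaining expenses look worse, which is exactly the "dig a little deeper" effect the article describes.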
Well, I'm not a Wall Street guru, but I know that there's one reason and one reason only for any company to go public: they need the money.

Is this their last and best option? If so, then the freelance industry, as we know it, is doomed.

Is The IPO Way — The Only Way for Freelance Platforms?

If you don't remember Guru, then you know nothing about freelance history. This is arguably the oldest freelance platform. They have been around for almost twenty years. Hey, that's really something. This freelance marketplace has had more ownership shifts than you can count, but they have never filed for an IPO (to the best of my knowledge).

If you have never heard of goLance, then you will never learn about the future of freelancing. They won two American Business Awards and the People's Choice Award. For a relatively small privately-owned freelance marketplace, that's really something. What's even more important, their CEO Michael Brooks strikes me as an entrepreneur who doesn't build to sell.

What's Going To Happen When We Come to the End of Our Freelance Road?

One day, Freelancer dot com, Upwork, and Fiverr will find themselves together in the fifth phase. I sure hope I'll be a retired freelancer by then. I also hope I won't have to say — I told you so!
https://medium.com/build-something-cool/why-every-freelance-marketplace-that-goes-public-becomes-a-startup-titanic-bf71d6ead06
['Nebojsa Todorovic']
2019-06-18 18:57:30.683000+00:00
['IPO', 'Startup', 'Tech', 'Fiverr', 'Freelancing']
Four Books You Need to Read About School Shootings
I am not, however, advocating for guns to be taken away. There are benefits to guns. No, what made me rage-debate that bumper sticker was its loose logic, its sloppy facts, its off-the-mark assumptions. These same qualities plague much of the discussion around school shooters.

News about school shooters is inescapable now, but not so long ago, it was much rarer. In fact, "school shooters" was not a common term until the late 1990s, and only in the last few years have they been studied as a specific category of killer. Here are four books that date from this proto-era. Think of them as setting the stage for our current gun control moment.

Erik Larson, Lethal Passage, 1994

Image from Amazon

On December 16, 1988, sixteen-year-old Nicholas Elliott walked into his high school, Atlantic Shores Christian School in Virginia Beach, Virginia, with murder on his mind. His target: another student named Jacob Snipes, who had been taunting Nicholas (Jacob was white, Nicholas black). A teacher, Karen Fairley, tried to stop Nicholas; he killed her and kept moving. He wounded another teacher, shot at a third, and menaced a group of students before he was subdued. Three Molotov cocktails were found in his locker. His book bag held the makings of a pipe bomb.

Erik Larson, who would go on to write nonfiction bestsellers such as The Devil in the White City and Dead Wake: The Last Crossing of the Lusitania, tells Nicholas's story in Lethal Passage. He pioneers a lot of elements that are now standard. For instance, he lays out some shocking statistics:

70,000 Americans killed by guns in 1991–1993
150,000 gun-related injuries per year
8,050 people killed or wounded in Los Angeles County in 1991 (13 times the number of Americans killed in the First Gulf War)

He also quotes a student who told one newspaper that "All the kids said he was going to shoot someone." Such quotes turn up about every shooter, it seems.
The heart of the book is not the shooting but the gun Nicholas used: a Cobray M-11/9. Larson gives the history of this type of gun, starting with its invention by Gordon Ingram in the 1960s. He then traces Nicholas's particular Cobray from the assembly line to Atlantic Shores, highlighting the frauds and failures by which the piece ended up in an angry teenager's hands.

"I researched that book very, very carefully," Larson told me during an interview in 2016. "I learned to shoot, and I gotta say that shooting a handgun is a lot of fun." He praised guns as "exquisite works of engineering" before discussing what prompted him to write this book over two decades ago: gun culture, an all-too-familiar argument.

Gun culture bothered Larson then, and it bothers him now, because "society bears all the costs of irresponsibility. We have to shift the costs to the gun owner. What that means is, yes, there should be a licensing process. There is nothing in the Second Amendment that says you can't license and register firearms. Nothing."

In a preview of the Parkland teens' message, Larson reserved his most astringent criticism for the National Rifle Association, calling it "dystopian and paranoid" and claiming that the organization "is not about guns at all. It's about libertarian politics."

Dave Cullen, Columbine, 2009

Image from Amazon

Nicholas Elliott was a prototype. Over the next decade, more shooters appeared. All have been eclipsed by more recent killers — all but two: Eric Harris and Dylan Klebold. On April 20, 1999, the two murdered thirteen people and injured twenty-four others inside Columbine High School in Columbine, Colorado. The modern notion of school shooters was born in the bloodbath of that day. According to Malcolm Gladwell, Harris and Klebold "laid down the 'cultural script' for the next generation of shooters."

With infamy, of course, comes mythology.
There were reports that certain students were targeted, that there were no warning signs, that the killers were misfits who had been bullied. Dave Cullen's Columbine is an encyclopedic rebuttal of these myths.

Cullen's big reveal is that Harris and Klebold, despite being most people's definition of "school shooters," were actually bombers. Their plan was to blow up their school. To that end, they planted two 20-pound propane bombs in the cafeteria, wiring them to detonate at 11:17 a.m. Their shotguns and semi-automatics would be trained on people fleeing the burning building, and they had another set of bombs in their vehicles, set to go off at noon to take out first responders.

All the bombs failed, thank God. Yet the attack was still well organized. The two planned for a year, dreaming of a widespread massacre, a strike at society itself. Their school was the first step, chosen for its convenience.

Nor were the two outcasts. Both had friends, were reasonably popular, played sports, joined clubs. Klebold was more withdrawn, depressive and suicidal, although he had a hot temper. Harris — and if you take away a single thing from Cullen's book, it should be this — was a sociopath. We know this from his journals, his website, and his home videos. Seemingly sweet and deferential, polite on the surface, he was stone cold underneath. Cullen sums it up this way: "Klebold was hurting inside while Harris wanted to hurt people."

Peter Langman, Why Kids Kill: Inside the Minds of School Shooters, 2009

Image from Amazon

It seems 2009 was the year for landmark school shooter books. In that year, psychologist Peter Langman released his long-awaited study, Why Kids Kill. It was one of the first books to examine school shooters as a unique subset of killers.
Langman calls such killers "rampage school shooters," which he defines as "students or former students [who] attack their own schools." Their actions are "public acts, committed in full view of others," and their victims are both people they dislike and people "shot randomly or as symbols of the school."

The heart of the book is an examination of ten shooters: Evan Ramsey, Michael Carneal, Andrew Golden, Mitchell Johnson, Andrew Wurst, Kip Kinkel, Eric Harris, Dylan Klebold, Jeffrey Weise, and Seung Hui Cho. They range in age from 23 (Cho) to 11 (Golden). Some killed only one or two people, whereas Cho murdered thirty-two. Langman classifies each as psychopathic, psychotic, or traumatized. Psychopathic shooters are narcissists, lacking in empathy, normal on the surface yet sadistic. Psychotic shooters have hallucinations, delusions, disorganized thoughts, eccentric beliefs, and odd behavior. Traumatized shooters grew up as victims of abuse, domestic violence, and chaotic households.

Like Cullen, Langman is committed to debunking school shooter myths, calling them "factors that do not explain." These factors are:

Gun culture (though Larson does indict this)
Antidepressants like Prozac or Luvox
Detachment from school or feelings of alienation
Violent video games, movies, or television
Rejection
Depression
Bullying

If you wonder what Langman would make of more recent shooters like Adam Lanza and Nikolas Cruz, wonder no more: he has written a follow-up book, School Shooters: Understanding High School, College, and Adult Perpetrators, and maintains a website on the subject.

Stephen King, Rage, 1977

Image from Amazon

If school shooters had a Bible, it would doubtless be Rage. Written by Stephen King in 1977 under the name Richard Bachman, it is the story of Charlie Decker, a high school senior who, after being expelled, grabs a pistol from his locker, runs to his algebra class, and murders the teacher, Jean Underwood. The students become his hostages.
When another teacher, Peter Vance, tries to enter the room, Charlie kills him as well. Police show up, and the standoff lasts four hours, with Charlie eventually agreeing to release the captives. When the police chief enters the classroom, Charlie moves as if to shoot him but is shot instead. He survives and ends up in a psychiatric hospital in Augusta, Maine.

At least five actual shooters have a known connection to this novel:

Jeffrey Lyne Cox (1988), who held sixty students at gunpoint in San Gabriel, California, was said by a friend to have read Rage over and over.
Dustin Pierce (1989), who had a nine-hour standoff with police in McKee, Kentucky, had a copy of Rage in his bedroom.
Scott Pennington (1993), who shot and killed a teacher and a school custodian in Grayson, Kentucky, wrote an essay on Rage and was upset that it received a low grade.
Barry Loukaitis (1996), who shot a teacher and three classmates and held some students hostage, said to them, "This sure beats algebra, doesn't it?" (Charlie Decker in Rage comments that his act "sure beats panty raids.")
Michael Carneal (1997), who shot eight students, had a copy of Rage in his locker.

After the Carneal incident, King told his publisher to "take the damned thing out of print." It is the only King novel to be so consigned. He doesn't think Rage turned those boys into killers; he saw the book as "a possible accelerant, which is why I pulled it from sale. You don't leave a can of gasoline where a boy with firebug tendencies can lay hands on it."
https://pisancantos43.medium.com/four-books-you-need-to-read-about-school-shootings-d6ee23eda06b
['Anthony Aycock']
2019-01-15 21:03:30.579000+00:00
['Guns', 'Schools', 'Shooting', 'Children', 'Books']
Meet Edgar Goetzendorff — ARK’s Newest Full-Stack Developer
Given the extensive roadmap that we want to tackle this year, it was necessary to bring in more developers to help speed up development. Our newest hire is Edgar, whom most of our community already know under the username 'dated'.

As our roadmap for this year is packed with new and upcoming products and services (MarketSquare, Deployer, Desktop Wallet v3, Mobile Wallet v2, Core v3, and Platform SDK), we needed more developers who write solid code, have a proven track record of being reliable, and are familiar with ARK and all its mechanics. Who knows all that better than our all-star participant and multiple-time winner of our GitHub Development Bounty Program, Edgar Goetzendorff?

About Edgar

Edgar's childhood started with computers, as his father is a programmer and always had old computer parts lying around at home. Edgar used to pick the best parts and assemble his own computers. He was constantly repairing and tinkering with things when something broke, figuring them out as he tried to fix them.

Edgar was around the age of 12 when he started building simple websites using HTML and CSS. One of his first hobby projects at the time was a website that showcased achievements and gear for players of an Italian online text-based role-playing game.

Software programming soon caught Edgar's attention, and one thing led to another. During his studies toward a degree in computer science, he worked at an online travel agency that focuses on B2B and VIP travel, redesigning the backend applications and helping out with day-to-day operations and customer support. For the last three years, he has been employed by a company that builds software solutions for traffic engineering and public transportation. These days, he is mostly working with JavaScript and TypeScript, the predominant languages of ARK technology. Other languages and frameworks he is familiar with are Python, Ruby and PHP.
In the past, he's used the CakePHP and Yii PHP frameworks, but as ARK is using Laravel for its projects, he wants to get his hands dirty and learn it as one of his next professional goals.

Edgar will, first and foremost, help with the development of the next generation of the Desktop Wallet that is coming out this year, but he is versatile and will jump around to other products as needed.

When asked how he learned about ARK:

Actually I found out about ARK through one of its early Bridgechains. Only after submitting some bugfixes for the then available commander on GitHub and unexpectedly receiving my first bounty, I joined the ARK Slack and was instantly hooked by the warm and welcoming community which ultimately allowed me to become a Forging Delegate.

Outside of his career, he is a husband and a father. In his free time, he enjoys solving riddles, geocaching, and photography.

Welcome to the ARK family, Edgar! We wish you the best in continuing the great work we have seen over your long tenure in the Development Bounty Program.
https://medium.com/ark-io/meet-edgar-goetzendorff-arks-newest-full-stack-developer-4f38e396fc39
['Rok Černec']
2020-07-20 18:53:57.367000+00:00
['Development', 'Cryptocurrency', 'Blockchain', 'Crypto', 'Developer']
4 Books by Caribbean Authors You Should Read
June is recognized as Caribbean-American Heritage Month. It is a time to recognize the significance of Caribbean people and their descendants in US history and culture. One way to learn about Caribbean influence, not just in the US but globally, is through books. Hence my writing this post to encourage you to purchase and read books by Caribbean authors or authors of Caribbean heritage.

These Ghosts Are Family by Maisy Card

These Ghosts Are Family is a transgenerational family saga spanning over 200 years that details the ripple effect of ancestral decisions on present-day life. The novel begins by revealing that Stanford Solomon is actually Abel Paisley, a man who faked his own death and stole the identity of his best friend. And now, nearing the end of his life, Stanford is about to meet his firstborn daughter, Irene Paisley, a home health aide who has shown up for her first day of work to tend to the father she thought was dead.

These Ghosts Are Family revolves around the consequences of Abel's decision and tells the story of the Paisley family from colonial Jamaica to present-day Harlem. The story of each member of the family is unique as they try to create an identity outside of their family history and trauma.

I haven't read much about slavery in Jamaica, so it was "interesting" and educational to read this. We often learn about the life of slaves in the American South, but not so much in the Caribbean, so the book opened my eyes to the experiences of slaves on Jamaican plantations. I did feel like the book jumped around a lot between generations and characters, so it was a little difficult to follow at first. However, once I got used to the structure, it flowed much more smoothly.
"Even though they were just words, they built a world that she couldn't stop thinking about, that she felt trapped inside every night."

Surge by Jay Bernard

Image by Uju Onyishi

Surge is a collection of poems about the 1981 New Cross Fire, a house fire at a birthday party in south London that killed thirteen people, all of whom were Black. The fire was initially believed to be a racist attack, but there was a sense of indifference from the police, the government and the press. The collection also talks about the Grenfell fire of 14 June 2017, a case where institutional indifference to working-class lives left 72 people dead. The lack of justice and accountability in both cases exemplifies Britain's racist past and present.

The collection begins with the arrival of the Windrush Generation in Britain, followed by the New Cross Fire, and then moves into the present day. The first few poems are told through the voices of ghosts before shifting to real bodies. There were a lot of shifts in perspective, both between and within poems, and they were done so effortlessly.

I really enjoyed reading the collection. Some of the poems really spoke to me. But there were some that I didn't quite understand (the struggles of reading poetry), though I was able to find some YouTube videos where Bernard reads and discusses the poems, and that was extremely beneficial.

"Me seh blood ah goh run for di pain of di loss"

The Perseverance by Raymond Antrobus

Image by Uju Onyishi

The Perseverance is a collection of poems about the D/deaf experience in a hearing world, the author's identity as a British-Jamaican, and his father. Reading this collection made me confront a privilege that I have but hardly ever think about. I don't know what else to say about this except that it was powerful and incredible. So much so that less than 12 hours after reading it for the first time, I decided to reread the collection.
“Proving people wrong is great but tiring.” Queenie by Candice Carty-Williams Image by Uju Onyishi Queenie is a year in the life of a 25-year-old Black woman of Jamaican heritage living in London. At the start, everything is okay: she’s living with her white boyfriend and has a job she worked hard to get. But then he wants to go on a break, so Queenie has to move out. And let’s just say she did not handle the break well. She starts doing badly at work and having unprotected sex with various guys who show her no respect. As the story goes on, we learn that she experienced childhood trauma that completely destroyed her self-esteem and self-regard. And because of that, her default is self-sabotage. The book touches on so many heavy topics, including racism in Britain, micro-aggressions in the workplace, complicated family dynamics, the fetishization of the Black woman’s body, and mental health issues. It did a good job of portraying the stigma surrounding going to therapy in the Black community. I also liked that Carty-Williams did not rush Queenie’s healing process. The story flowed smoothly and was written vividly. I was really rooting for Queenie, but I couldn’t help being annoyed by a lot of her actions. She is also such a contradictory character, but the fact that she is so flawed makes things more realistic. She stays current on the issues of police brutality and the Black Lives Matter movement, but then she harbours so much self-hate and allows her body to be used by white men who just don’t care about her. Oh, and don’t get me started on her relationship with Black men. It just goes to show how deeply white supremacist ideologies are rooted in our subconscious. I can’t recommend this book enough.
https://medium.com/the-open-bookshelf/5-books-by-caribbean-authors-you-should-read-4ef0cb084cd4
['Uju Onyishi']
2020-06-18 11:58:29.978000+00:00
['Reading', 'Books', 'Book Review', 'Book Recommendations', 'Caribbean']
Let's auto-deploy Vue.js to Firebase Hosting with a Bitbucket pipeline
A place to learn and share your Firebase experiences with each other. Follow
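The deployment the title describes (build a Vue.js app, then push the bundle to Firebase Hosting from a Bitbucket pipeline) could be sketched as a bitbucket-pipelines.yml along these lines. This is a minimal sketch under assumptions: the node:16 image, the npm run build script emitting to dist/, and a FIREBASE_TOKEN repository variable generated with firebase login:ci are all illustrative, not details from the original post.

```yaml
# bitbucket-pipelines.yml -- hypothetical sketch, not from the original post.
image: node:16            # assumed Node image; use whatever version your project targets

pipelines:
  branches:
    master:
      - step:
          name: Build and deploy to Firebase Hosting
          caches:
            - node
          script:
            - npm ci
            - npm run build                       # assumed to emit the production bundle to dist/
            - npm install -g firebase-tools
            # FIREBASE_TOKEN is a repository variable created from `firebase login:ci`
            - firebase deploy --only hosting --token "$FIREBASE_TOKEN"
```

With a file like this in the repository root and Pipelines enabled, every push to master would rebuild the app and redeploy the hosting site.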
https://medium.com/firebasethailand/auto-deploy-vue-to-firebase-hosting-with-bitbucket-pipline-7d552163b27
['Sorawit Trutsat']
2019-07-21 19:38:30.209000+00:00
['Ci Cd Pipeline', 'Pipeline', 'Vuejs', 'Bitbucket', 'Firebase']
Applying Behavioral Science to Machine Learning
Applying Behavioral Science to Machine Learning The emerging field of machine behavior tries to study machine learning models the same way social scientists study humans. I recently started a new newsletter focused on AI education that already has over 50,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below: Understanding the behavior of artificial intelligence (AI) agents is one of the pivotal challenges of the next decade of AI. Interpretability and explainability are some of the terms often used to describe methods that provide insights about the behavior of AI programs. Until today, most interpretability techniques have focused on exploring the internal structure of deep neural networks. Last year, a group of AI researchers from the Massachusetts Institute of Technology (MIT) published a paper exploring a radical approach that attempts to explain the behavior of AI agents by observing them in the same way we study human or animal behavior. They group the ideas in this area under the catchy name of machine behavior, which promises to be one of the most exciting fields in the next few years of AI. The ideas behind machine behavior might be transformational, but its principles are relatively simple. Machine behavior relies more on observation than on engineering knowledge in order to understand the behavior of AI agents. Think about how we observe and derive conclusions from the behavior of animals in a natural environment. Most of the conclusions we obtain from observation are not related to our knowledge of biology but rather to our understanding of social interactions.
In the case of AI, the scientists who study the behaviors of these virtual and embodied AI agents are predominantly the same scientists who created the agents themselves, which is the equivalent of requiring a PhD in biology to understand the behavior of animals. Understanding AI agents goes beyond interpreting a specific algorithm and requires analyzing the interactions between agents and with the surrounding environment. To accomplish that, behavioral analysis via simple observation can be a powerful tool. What is Machine Behavior? Machine behavior is a field that leverages the behavioral sciences to understand the behavior of AI agents. Currently, the scientists who most commonly study the behavior of machines are the computer scientists, roboticists and engineers who created the machines in the first place. While this group certainly has the computer science and mathematical knowledge to understand the internals of AI agents, they are typically not trained behaviorists. They rarely receive formal instruction on experimental methodology, population-based statistics and sampling paradigms, or observational causal inference, let alone neuroscience, collective behavior or social theory. Similarly, even though behavioral scientists understand those disciplines, they lack the expertise to assess the efficiency of a specific algorithm or technique. From that perspective, machine behavior sits at the intersection of computer science and engineering on the one hand and the behavioral sciences on the other, aiming at a holistic understanding of the behavior of AI agents. As AI agents become more sophisticated, analyzing their behavior will be a combination of understanding their internal architecture as well as their interaction with other agents and their environment. While the former aspect will be a function of deep learning optimization techniques, the latter will rely partially on the behavioral sciences.
Understanding the Behavioral Patterns in AI Agents Ethology is the field of biology that focuses on the study of animal behavior under natural conditions and as a result of evolutionary traits. One of the fathers of ethology was Nikolaas Tinbergen, who won the 1973 Nobel Prize in Physiology or Medicine for his work identifying the key dimensions of animal behavior. Tinbergen's thesis was that there are four complementary dimensions for understanding animal and human behavior: function, mechanism, development and evolutionary history. Despite the fundamental differences between AI and animals, machine behavior borrows some of Tinbergen's ideas to outline the main building blocks of behavior in AI agents. Machines have mechanisms that produce behavior, undergo development that integrates environmental information into behavior, produce functional consequences that cause specific machines to become more or less common in specific environments, and embody evolutionary histories through which past environments and human decisions continue to influence machine behavior. An adaptation of Tinbergen's framework to machine behavior can be seen in the following figure: Based on this framework, the study of machine behavior focuses on four fundamental areas (mechanism, development, function and evolution) across three main scales: individual, collective and hybrid. For a given AI agent, machine behavior will try to explain its behavior by studying the following four areas: 1. Mechanism: The mechanisms for generating the behavior of AI agents are based on their algorithms and the characteristics of the execution environment. At its most basic level, machine behavior leverages interpretability techniques to understand the specific mechanisms behind a given behavioral pattern. 2. Development: The behavior of AI agents is not something that happens in one shot; rather, it evolves over time.
Machine behavior studies how machines acquire (develop) a specific individual or collective behavior. Behavioral development could be the result of engineering choices as well as of the agent's experiences. 3. Function: An interesting aspect of behavioral analysis is understanding how a specific behavior influences the lifetime function of an AI agent. Machine behavior studies the impact of behaviors on specific functions of AI agents and how those functions can be copied or optimized on other AI agents. 4. Evolution: In addition to functions, AI agents are also subject to evolutionary history and interactions with other agents. Throughout their evolution, aspects of the algorithms of AI agents are reused in new contexts, both constraining future behavior and making additional innovations possible. From that perspective, machine behavior also studies the evolutionary aspects of AI agents. The previous four aspects provide a holistic model for understanding the behavior of AI agents. However, those four elements don't apply the same way when we are evaluating a classification model with a single agent as when we are evaluating a self-driving car environment with hundreds of vehicles. In that sense, machine behavior applies the previous four aspects across three different scales: 1. Individual Machine Behavior: This dimension of machine behavior attempts to study the behavior of individual machines by themselves. There are two general approaches to the study of individual machine behavior. The first focuses on profiling the set of behaviors of a specific machine agent using a within-machine approach, comparing the behavior of a particular machine across different conditions. The second, a between-machine approach, examines how a variety of individual machine agents behave in the same condition. 2. Collective Machine Behavior: In contrast with the individual dimension, this area looks to understand the behavior of AI agents by studying their interactions in a group.
The collective dimension of machine behavior attempts to spot behaviors of AI agents that don't surface at an individual level. 3. Hybrid Human-Machine Behavior: There are many scenarios in which the behavior of AI agents is influenced by their interactions with humans. This dimension of machine behavior focuses on analyzing behavioral patterns in AI agents triggered by interaction with humans. Machine behavior is one of the most intriguing nascent fields in AI. The behavioral sciences can complement traditional interpretability methods to develop new methods that help us understand and explain the behavior of AI. As the interactions between humans and AI become more sophisticated, machine behavior might play a pivotal role in enabling the next level of hybrid intelligence.
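The within-machine approach described above (profiling one agent's behavior across conditions purely through observation, without opening the black box) can be illustrated with a toy sketch. Everything here is a made-up stand-in: the noisy threshold "agent", the stimulus conditions, and the trial count are assumptions for illustration only, not from the paper discussed above.

```python
import random

def black_box_agent(x):
    # Stand-in for any trained agent we can only observe from the outside:
    # a noisy threshold classifier whose internals we pretend not to know.
    return 1 if x + random.gauss(0, 0.1) > 0.5 else 0

def profile_behavior(agent, conditions, trials=200):
    """Within-machine profiling: observe the agent's response rate per condition."""
    profile = {}
    for name, stimulus in conditions.items():
        positives = sum(agent(stimulus) for _ in range(trials))
        profile[name] = positives / trials
    return profile

random.seed(0)  # make the observations reproducible
conditions = {"low_stimulus": 0.2, "ambiguous": 0.5, "high_stimulus": 0.8}
profile = profile_behavior(black_box_agent, conditions)
print(profile)
```

Repeating such a profile for several different agents under the same conditions would be the between-machine variant described above.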
https://medium.com/dataseries/applying-behavioral-science-to-machine-learning-cd219d88a7c7
['Jesus Rodriguez']
2020-12-26 10:55:32.398000+00:00
['Machine Learning', 'Deep Learning', 'Data Science', 'Thesequence', 'Artificial Intelligence']
Gender Inference with Deep Learning
Gender Inference with Deep Learning Fine-tuning pretrained convolutional neural networks on celebrities Photo by Alex Holyoake on Unsplash Summary I wanted to build a model to infer gender from images. By fine-tuning the pretrained convolutional neural network VGG16, and training it on images of celebrities, I was able to obtain over 98% accuracy on the test set. The exercise demonstrates the utility of engineering the architecture of pretrained models to complement the characteristics of the dataset. Task Typically, a human can distinguish a man and a woman in the photo above with ease, but it’s hard to describe exactly why we can make that decision. Without defined features, this distinction becomes very difficult for traditional machine learning approaches. Additionally, features that are relevant to the task are not expressed in the exact same way every time, every person looks a little different. Deep learning algorithms offer a way to process information without predefined features, and make accurate predictions despite variation in how features are expressed. In this article, we’ll apply a convolutional neural network to images of celebrities with the purpose of predicting gender. (Disclaimer: the author understands appearance does not have a causative relationship with gender) Tool Convolution neural networks (ConvNets) offer a means to make predictions from raw images. A hallmark of the algorithm is the ability to reduce the dimensionality of images by using sequences of filters that identify distinguishing features. Additional layers in the model help us emphasize the strength of often nonlinear relationships between the features identified by the filters and the label assigned to the image. We can adjust weights associated with the filters and additional layers to minimize the error between the predicted and observed classifications. 
Sumit Saha offers a great explanation that is more in-depth: https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53 There are a number of pretrained ConvNets that have been trained to classify a wide range of images of anything from planes to corgis. We can save computation time and overcome some sampling inadequacy by employing the weights of pretrained models and fine-tuning them for our purpose. Dataset The CelebA dataset contains over 200K images of celebrities labeled with 20 attributes including gender. The images are from the shoulders up, so most of the information is in the facial features and hair style. Example image available from CelebA Modeling Feature Extraction We're going to use the VGG16 pretrained model and fine-tune it to best identify gender from the celebrity images.

vgg = VGG16(include_top=False, pooling='avg', weights='imagenet', input_shape=(178, 218, 3))

We use include_top=False to remove the fully connected layers designed for identifying the range of objects VGG16 was originally trained on (e.g. apples, corgis, scissors), and we download the weights associated with the ImageNet competition. Table 1 below shows the convolutional architecture for VGG16; there are millions of weights for all the convolutions, which we can choose to either train or keep frozen at the pretrained values. By freezing all the weights of the model, we risk underfitting, because the pretrained weights were not specifically estimated for our particular task. In contrast, by training all the weights we risk overfitting, because the model will begin "memorizing" the training images given the flexibility afforded by high parameterization.
We'll attempt a compromise by training the last convolutional block:

# Freeze the layers except the last 5
for layer in vgg.layers[:-5]:
    layer.trainable = False

# Check the trainable status of the individual layers
for layer in vgg.layers:
    print(layer, layer.trainable)

Table 1: Architecture of the VGG16 model after turning the final layers on The first convolutional blocks in VGG16 identify more general features like lines or blobs, so we want to keep the associated weights. The final blocks identify finer-scale features (e.g. the angles associated with the wing tip of an airplane), so we'll train those weights on our images of celebrities. Model Compilation Following feature extraction by the convolutions, we'll add two dense layers to the model that enable us to make predictions about the image given the features identified. You could use a single dense layer, but an additional hidden layer allows predictions to be made from a more sophisticated interpretation of the features. Too many dense layers may cause overfitting.

# Create the model
model = models.Sequential()

# Add the VGG16 convolutional base model
model.add(vgg)

# Add new layers
model.add(layers.Dense(128, activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.Dense(2, activation='sigmoid'))

We added a batch normalization layer that scales the hidden layer activation values in a way that reduces overfitting and computation time. The last dense layer makes predictions about gender (Table 2). Table 2: Custom Model Architecture Because we are allowing the model to train both convolutional layers and dense layers, we'll be estimating millions of weights (Table 3). Given the depth of the network we built, picking the best constant learning rate for an optimizer like stochastic gradient descent would be tricky; instead we'll use the Adam optimizer, which adjusts the learning rate to make smaller steps further into training.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

Using Keras, we'll set up our data generators to feed our model, and fit the network to our training set.

data_generator = ImageDataGenerator(preprocessing_function=preprocess_input)

train_generator = data_generator.flow_from_directory(
    'C:/Users/w10007346/Pictures/Celeb_sets/train',
    target_size=(178, 218),
    batch_size=12,
    class_mode='categorical')

validation_generator = data_generator.flow_from_directory(
    'C:/Users/w10007346/Pictures/Celeb_sets/valid',
    target_size=(178, 218),
    batch_size=12,
    class_mode='categorical')

model.fit_generator(
    train_generator,
    epochs=20,
    steps_per_epoch=2667,
    validation_data=validation_generator,
    validation_steps=667,
    callbacks=cb_list)

After 6 epochs, the model achieved a maximum validation accuracy of 98%. Now to apply it to the test set. Testing We have a test set of 500 images per gender. The model gives us predicted probabilities for each image fed through the network, and we can simply take the maximum of those probabilities as the predicted gender.

# obtain predicted activation values for the last dense layer
pred = saved_model.predict_generator(test_generator, verbose=1, steps=1000)

# determine the maximum activation value for each sample
predicted_class_indices = np.argmax(pred, axis=1)

Our model predicted the gender of celebrities with 98.2% accuracy! That's pretty comparable to human capabilities. Does the model generalize to non-celebrities? Let's try it on the author. The model did well with a recent picture of the author: the predicted probability for the image was 99.8% male. The model also did well with the author's younger, mop-head past; it predicted 98.6% male. Conclusion This exercise demonstrates the power of fine-tuning pretrained ConvNets. Each application will require a different approach to optimize the modeling process.
Specifically, the architecture of the model needs to be engineered in a way that complements the characteristics of the dataset. Pedro Marcelino offers a great explanation of general rules for adapting the fine-tuning process to any dataset: https://towardsdatascience.com/transfer-learning-from-pre-trained-models-f2393f124751 I appreciate any feedback and constructive criticism on this exercise. The code associated with the analysis can be found on github.com/njermain
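One footnote to the testing step above: np.argmax returns integer class indices, and turning those back into label names means inverting the generator's class_indices dictionary (Keras builds it from the training subdirectory names). The probabilities and the {'female': 0, 'male': 1} mapping below are made up for illustration; the real mapping would come from train_generator.class_indices.

```python
import numpy as np

# Made-up predicted probabilities for 4 images over 2 classes, standing in
# for the output of predict_generator.
pred = np.array([[0.91, 0.09],
                 [0.20, 0.80],
                 [0.55, 0.45],
                 [0.02, 0.98]])

# Assumed mapping; the real one comes from train_generator.class_indices.
class_indices = {"female": 0, "male": 1}
index_to_label = {v: k for k, v in class_indices.items()}

predicted_class_indices = np.argmax(pred, axis=1)
predicted_labels = [index_to_label[i] for i in predicted_class_indices]
print(predicted_labels)  # ['female', 'male', 'female', 'male']
```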
https://towardsdatascience.com/gender-identification-with-deep-learning-ac379f85a790
['Nate Jermain']
2019-04-23 02:08:59.959000+00:00
['Python', 'Machine Learning', 'Neural Networks', 'Data Science', 'Deep Learning']
4,537
How Do We Solve a Problem Like Election Prediction?
On November 3, two oppositional forces went head to head and the results were…divisive. With commentators and pundits still reeling from the poor performance of US election pollsters, it seems fitting to ask — can AI (ultimately) solve a problem like election prediction? At least this time around, the answer seems to be no, not really. But not necessarily for the reasons you might think. Here’s how it went wrong according to Venturebeat: Firms like KCore Analytics, Expert.AI, and Advanced Symbolics claim algorithms can capture a more expansive picture of election dynamics because they draw on signals like tweets and Facebook messages…KCore Analytics predicted from social media posts that Biden would have a strong advantage — about 8 or 9 points — in terms of the popular vote but a small lead when it came to the electoral college. Italy-based Expert.AI, which found that Biden ranked higher on social media in terms of sentiment, put the Democratic candidate slightly ahead of Trump (50.2% to 47.3%). On the other hand, Advanced Symbolics’ Polly system, which was developed by scientists at the University of Ottawa, was wildly off with projections that showed Biden nabbing 372 electoral college votes compared with Trump’s 166, thanks to anticipated wins in Florida, Texas, and Ohio — all states that went to Trump. For many — like Johnny Okleksinski back in 2016 — the instinctive reaction is to claim these misfires are down to flawed social media data which is simply not reflective of real world populations. In 2018, 74% of respondents agreed and told Pew Research that: “content on social media does not provide an accurate picture of how society feels about important issues.” But while it’s certainly true that some of these inaccurate AI forecasts were down to the under-representation of certain groups (e.g. 
rural communities), an interesting paper published earlier this year by open access journal MDPI suggests that social media analysis can actually be more reflective of real-life views than these results might indicate. The authors of Electoral and Public Opinion Forecasts with Social Media Data: A Meta-Analysis acknowledge the debate around the usefulness of social media in understanding public opinion, but at the same time they caution that dismissing social media's predictive capacity based on its inability to represent some populations actually misses an important dynamic — namely, that politically active users are opinion-formers and influence the preferences of a much wider audience, with social media acting as an "organ of public opinion": …the formation of public opinion does not occur through an interaction of disparate individuals who share equally in the process; instead, through discussions and debates in which citizens usually participate unequally, public opinion is formed. In other words, although political discussions on social media tend to be dominated by a small number of loud-mouthed users (typically early adopters, teens, and "better-educated" citizens), their opinions do tend to pre-empt those that develop in broader society. Further, in capturing political opinions "out in the wild", social media analysis is also able to understand the sentiments of silent "lurkers" by examining the relational connections and network attributes of their accounts. The report's authors state that, "by looking at social media posts over time, we can examine opinion dynamics, public sentiment and information diffusion within a population." In brief: the problem with social media-fueled AI prediction does not appear to lie within the substance of what is available via online platforms. It seems to be in the methodology and/or tools. So, where do predictive AI tools go wrong? And where can researchers mine for the most useful indicators of political intention? 
One of the major areas where social media analysis seems to break down is with language. This intuitively makes sense when we think about how people express themselves online. Problems with poor grammar or sarcasm are doubtless compounded by the difficulties of trying to understand context. Similarly, counting likes, shares and comments on posts and tweets is viewed as a fairly thin and simplistic approach (to use Twitter parlance "retweet ≠ endorsement"). More robust, according to report authors, is an analysis that considers "structural features", e.g. the "likes" recorded to candidate fan pages. Previous research found that the number of friends a candidate has on Facebook and the number of followers they have on Twitter could be used to predict a candidate's share of the vote during the 2011 New Zealand election. But there is still the problem of which platform to focus on for the closest accuracy. Most AI systems use Twitter to predict public opinion, with some also using Facebook, forums, blogs, YouTube, etc. Yet each of these suffer from "their own set of algorithmic confounds, privacy constraints, and post restrictions." We don't currently know whether using multiple sources (vs. one platform) has any advantage, but with newly popular players like Parler on the scene, there's reason to believe that covering several platforms would yield an accuracy advantage (though few currently use a broad range). Finally, the actual political context within which the social platforms operate likely plays into their predictive accuracy. The report in question recalls that the predictive power in a study conducted in semi-authoritarian Singapore was significantly lower than in studies done in established democracies. From this, the authors infer that issues like media freedom, competitiveness of the election, and idiosyncrasies of electoral systems may lead to over- and under-estimations of voters' preferences.
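The "structural features" idea described above, predicting a candidate's vote share from follower counts, can be sketched as a simple least-squares fit. The numbers below are invented for illustration only; they are not data from the 2011 New Zealand study the article cites.

```python
# Hypothetical (follower_count, vote_share_percent) pairs -- illustrative only.
data = [(1_000, 4.0), (10_000, 11.0), (50_000, 28.0), (120_000, 52.0)]

# Closed-form simple linear regression (least squares).
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
         / sum((x - mean_x) ** 2 for x, _ in data))
intercept = mean_y - slope * mean_x

def predicted_share(followers):
    """Predict a hypothetical vote share from a follower count."""
    return slope * followers + intercept
```

A real system would of course need many more features (sentiment, network attributes, platform mix), which is precisely where, as the article notes, methodology starts to matter.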
https://medium.com/swlh/how-do-we-solve-a-problem-like-election-prediction-5ae0809d5e7e
['Fiona J Mcevoy']
2020-11-20 23:58:27.349000+00:00
['Artificial Intelligence', 'Politics', 'Elections', 'Social Media', 'Predictions']
4,538
Chase Your Dream, Not the Money
Chase Your Dream, Not the Money 6 reasons why dream-chasing unlocks more joy than money ever could Photo by Ádám Berkecz on Unsplash I’m sure you have had at least one time in your life where you’ve become focused entirely on money. Money can help you gain your time back, which has value, but there is nothing that beats the fulfillment you get from achieving your dream. In my life, there has been poverty, plenty of money, then poverty again. The contrast between rich and poor is humbling and has led me not to want to chase money. Making your dream come true will take you to new heights and show you a side of life that you may not have known existed. If your life feels meaningless, or you feel stuck, or you have no idea what’s next, or you are just existing, your beliefs about money are part of the problem. Maybe everyone around you seems as though they are winning. Social media tells you that everyone is having a great time, and you need to up your game. The photos you view online are blurring the reality of life. These photos, accidentally, tell you that money helps make everything better. “Money is what you have been missing,” they say. I’m here to say that is wrong. What is missing is chasing a dream. The defining factor that has led me to write this article is that I recently published an article about making $11,000 in 30 days. The money was not the point of me sharing this; it’s the achievement of a dream I have had for the last five years. The focus should be the joy from that. Here is why you must stop chasing money and chase a dream instead:
https://medium.com/better-marketing/chase-your-dream-not-the-money-2f43734e39c
['Tim Denning']
2019-08-25 18:23:26.764000+00:00
['Money', 'Inspiration', 'Self Improvement', 'Life', 'Entrepreneurship']
4,539
Intro to Segmentation
Image Segmentation is the process by which a digital image is partitioned into various subgroups of pixels called Image Objects. This reduces the complexity of the image and makes analysing it simpler. We use various image segmentation algorithms to split off and group certain sets of pixels from the image. In doing so, we are assigning labels to pixels, and pixels with the same label fall into a category in which they share some common characteristic. Using these labels, we can specify boundaries, draw lines, and separate the most important objects in an image from the less important ones. Need for Image Segmentation The concept of partitioning, dividing, fetching, and then labelling, and later using that information to train various ML models, has indeed addressed numerous problems. Segmentation in Image Processing is used in the medical industry for efficient and faster diagnosis, detecting diseases, tumors, and cell and tissue patterns from various medical imagery generated from radiography, MRI, endoscopy, etc. This is a basic but pivotal application of Image Classification, in which the algorithm captures only the required components from an image, and the system then classifies those pixels as the good, the bad, and the ugly. A rather simple-looking system was making a colossal impact on that business — reducing human effort and error while increasing efficiency. The Approach Similarity Detection (Region Approach) This fundamental approach relies on detecting similar pixels in an image — based on a threshold, region growing, region spreading, and region merging. Machine learning algorithms like clustering rely on this approach of similarity detection on an unknown set of features, as does classification, which detects similarity based on a pre-defined (known) set of features. 
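As a minimal illustration of assigning labels to pixels by similarity, here is a pure-Python sketch of threshold-based labelling. The tiny image and the threshold value are invented for the example.

```python
def threshold_segment(image, threshold):
    """Return a label map: 1 for pixels at or above the threshold, 0 below.

    Pixels that receive the same label fall into the same category --
    the similarity-based (region) approach in its simplest form.
    """
    return [[1 if px >= threshold else 0 for px in row] for row in image]

# Illustrative 3x4 grayscale image (intensities 0-255).
image = [
    [10, 12, 200, 210],
    [11, 13, 205, 208],
    [9, 180, 190, 14],
]
labels = threshold_segment(image, 128)
```

Here `labels` separates the bright region (label 1) from the dark background (label 0), which is exactly the pixel-labelling idea described above.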
Discontinuity Detection (Boundary Approach) This is the opposite of the similarity detection approach: here the algorithm searches for discontinuity instead. Image segmentation algorithms like edge detection, point detection, and line detection follow this approach — edges are detected based on various metrics of discontinuity, such as intensity. The Types Based on the two approaches, there are various techniques applied in the design of image segmentation algorithms. These techniques are employed based on the type of image that needs to be processed and analysed, and they can be classified into three broader categories as below: Structural Segmentation Techniques These algorithms require us first to know the structural information about the image under the scanner. This can include the pixels, pixel density, distributions, histograms, colour distribution, etc. Second, we need the structural information about the region that we are about to fetch from the image — this deals with identifying our target area, which is highly specific to the business problem that we are trying to solve. These algorithms follow the similarity-based approach. Stochastic Segmentation Techniques For this group of algorithms, the primary information required is the discrete pixel values of the full image, rather than the structure of the required portion of the image. This proves advantageous for larger groups of images, where a high degree of uncertainty exists about the required object within the image. ANN and machine learning based algorithms that use k-means, etc., make use of this approach. Hybrid Techniques As the name suggests, these image segmentation algorithms use a combination of structural and stochastic methods, i.e., both the structural information of a region and the discrete pixel information of the image. 
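The discontinuity-based (boundary) approach can also be sketched in a few lines: mark a pixel as an edge when the intensity jump to its right-hand neighbour exceeds a threshold. The image and threshold below are illustrative values, not from any dataset.

```python
def detect_edges(image, threshold):
    """Mark a pixel as an edge (1) when the intensity difference to its
    right neighbour exceeds the threshold -- a simple discontinuity test."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w - 1):
            if abs(image[y][x + 1] - image[y][x]) > threshold:
                edges[y][x] = 1
    return edges

# Two flat regions with a sharp boundary between columns 1 and 2.
image = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]
edges = detect_edges(image, 50)
```

The edge map lights up exactly at the discontinuity, whereas inside each homogeneous region nothing is detected, which is the contrast with the similarity approach that the article draws.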
Image Segmentation Techniques Based on the image segmentation approaches and the type of processing needed to attain a goal, we have the following techniques for image segmentation. Threshold Method: Focuses on finding peak values based on the histogram of the image to find similar pixels. Edge Based Segmentation: Based on discontinuity detection rather than similarity detection. Region Based Segmentation: Based on partitioning an image into homogeneous regions. Clustering Based Segmentation: Divides the image into k homogeneous, mutually exclusive clusters — hence obtaining objects. Watershed Based Method: Based on a topological interpretation of image boundaries. Artificial Neural Network Based Segmentation: Based on deep learning algorithms, especially convolutional neural networks. Deep Dive Images are considered one of the most important media for conveying information in the field of computer vision; by understanding images, the information extracted from them can be used for other tasks. The word 'image' derives from the Latin 'imago'; an image is a representation of visual perception in a two-dimensional or three-dimensional picture that has a similar appearance to some subject. A digital image is a numeric representation of a two-dimensional image. It is composed of a finite number of elements, each of which has a particular location and value; these picture elements, or image elements, are called pixels. Pixels are the smallest individual elements in an image, holding finite, discrete, quantized values that represent the brightness, intensity, or gray level at any specific point. There are generally two types of images: raster and vector. Raster images have a finite set of digital values represented in a fixed number of rows and columns of pixels, where these pixels are stored in memory as a two-dimensional array. Digital images are usually referred to as raster images. 
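The clustering-based segmentation technique listed above can be sketched with a tiny one-dimensional k-means on pixel intensities (k = 2). The pixel values and the min/max centroid initialisation are illustrative choices, not a prescribed method.

```python
def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means on pixel intensities (k = 2 sketch)."""
    centroids = [min(values), max(values)]  # simple illustrative initialisation
    for _ in range(iters):
        clusters = ([], [])
        for v in values:
            # Assign each intensity to its nearest centroid.
            idx = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            clusters[idx].append(v)
        # Recompute each centroid as the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Illustrative flattened pixel intensities: a dark and a bright population.
pixels = [10, 12, 11, 9, 198, 205, 200, 14]
c0, c1 = kmeans_1d(pixels)
labels = [0 if abs(p - c0) <= abs(p - c1) else 1 for p in pixels]
```

The resulting labels split the pixels into two mutually exclusive clusters, dark versus bright, which is the "k homogeneous clusters" idea in miniature.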
Vector images are generated from mathematical geometry, known as vectors, whose points have both magnitude and direction. Image segmentation is the foundation of object recognition and computer vision. It is the process of subdividing a digital image into multiple regions or objects consisting of sets of pixels sharing the same properties or characteristics, which are assigned different labels to represent different regions or objects. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse. Image segmentation is used to locate objects and boundaries in images. Segmentation is done on the basis of similarity and discontinuity of the pixel values. There are two types of segmentations — soft segmentations and hard segmentations. Segmentations that allow regions or classes to overlap are called soft segmentations, whereas a hard segmentation forces a decision of whether a pixel is inside or outside the object. Image segmentation is practically implemented in many applications such as medical imaging, content-based image retrieval, object detection, feature recognition (such as face recognition, fingerprint recognition, iris recognition, object recognition, etc.), and real-time object tracking in video. The following computational steps are applied to the input image to obtain the required segmented data: 1) Preprocessing: The main aim of the preprocessing step is to determine the area of focus in the image. As the input image may contain a certain amount of noise, it is necessary to reduce or remove it. 2) Image Segmentation: In this step, the preprocessed image is segmented into its constituent sub-regions. 3) Post Processing: To improve the segmented image, further processing may be required; this is performed in the post-processing step. 
4) Feature Extraction: Feature extraction is the method by which unique features of an image are extracted. It helps reduce the complexity of classification problems and makes classification more efficient. The different kinds of features present in an image can be intensity-based, textural, fractal, topological, morphological, etc. 5) Classification: The aim of the classification step is to classify the segmented image by making use of the extracted features. This step uses statistical analysis of the features and machine learning algorithms to reach a decision.
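The five computational steps above can be strung together in a toy end-to-end sketch. Every function body here is a deliberately simple stand-in (a clamp for noise reduction, a threshold for segmentation, a single area-fraction feature, a one-rule classifier); the image values and the 0.25 decision cutoff are invented for illustration.

```python
def preprocess(image):
    """Step 1: crude noise handling -- clamp out-of-range intensity spikes."""
    return [[min(px, 255) for px in row] for row in image]

def segment(image, threshold=128):
    """Step 2: threshold segmentation into foreground (1) / background (0)."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def postprocess(labels):
    """Step 3: placeholder for refinement (e.g. hole filling would go here)."""
    return labels

def extract_features(labels):
    """Step 4: one intensity-based feature -- the foreground area fraction."""
    flat = [v for row in labels for v in row]
    return {"area_fraction": sum(flat) / len(flat)}

def classify(features):
    """Step 5: a trivial rule-based decision on the extracted feature."""
    return "object present" if features["area_fraction"] > 0.25 else "background"

# Illustrative 3x3 image; 300 is an invalid spike the preprocessing clamps.
image = [[10, 300, 12], [200, 210, 9], [205, 11, 13]]
result = classify(extract_features(postprocess(segment(preprocess(image)))))
```

A real pipeline would replace each stand-in with a proper algorithm (denoising filters, one of the segmentation techniques above, richer features, a trained classifier), but the data flow between the five steps is the same.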
https://medium.com/swlh/intro-to-segmentation-ebd33ca75620
['Johar M. Ashfaque']
2020-12-23 22:47:31.356000+00:00
['Artificial Intelligence', 'Machine Learning', 'Image Segmentation', 'Deep Learning']
4,540
The Blockchain Solution To Save Retail Stores
It is no secret that brick and mortar stores are a sunset business, with giant e-commerce companies such as Amazon and Alibaba taking over the retail industry. However, the majority of retail transactions still take place at offline stores for a variety of reasons, such as being able to touch and feel the items at retail stores. Therefore, retail stores are definitely here to stay in some capacity. How do we then make retail stores as competitive as the e-commerce stores? I'm sharing how blockchain technology may be a solution for retail stores to increase their conversion rates. Problems retailers are facing "Everyone is going online to buy their items, so nobody wants to go down to stores to buy anymore." It is convenient to pin the declining retail scene on the above reason, but it is not so simple. Rather than think of it as a "retail store vs e-commerce store" problem, why not think of how businesses can utilize both online and offline channels to improve their business model? Numerous retail businesses are already utilizing online channels such as advertising on Facebook and Google, as well as posting their retail items on e-commerce platforms. This is where the real problem lies. It is hard or impossible to track whether online ads are pushed to your intended target audience, or worse still, whether the ads are pushed to bots. In addition, e-commerce platforms charge hefty commission fees and competition is stiff. How then can retail stores get more foot traffic and conversions with the existing landscape of e-commerce dominance and expensive online marketing channels, where ads cannot be verified to have reached the correct target audience? How Blockchain solves this problem We always hear of how blockchain can revolutionize the world and make the world a better place. 
While many functions of blockchain technology are merely a pipe dream or mindless hyping of products by companies, there are functions of blockchain that we can look at to solve problems faced by retail stores. Firstly, the transparent nature of the blockchain ensures that retail stores can verify that the online ads they post are sent to their target audience, and not to bots. Secondly, besides being able to verify where their ads are being sent, companies can use this data to improve their online marketing and refine their target audience. Centareum: An Interesting Retail Project Recently I bumped into Centareum, a blockchain project that aims to drive traffic and conversions to physical retail stores. Below is a flow of how the Centareum platform works. To post an ad, all a retailer needs to do is take a photo of their store and post the store's location on the Centareum app. Users who sign up for the platform will need to go through a Know Your Customer (KYC) process and fill in their demography, geography, and preferences. This data is stored on the blockchain, and companies are not able to access it. Based on the location of the users and their product preferences, ads posted by the retail stores are sent to users in the vicinity of the store. This ensures quality traffic to the retail stores, so retail stores get maximum value out of their advertising budgets. In addition, Centareum offers a payment gateway where users can pay with either fiat or cryptocurrencies such as Bitcoin, Ether, and Centareum tokens. This, if executed properly, will be a significant step towards mainstream adoption of cryptocurrencies. Hence Centareum is a project that I will be looking out for, and you should too if you are a retailer. Find out more about Centareum in the links below! Centareum Website Centareum Facebook Centareum Twitter Centareum Instagram Centareum Telegram Centareum Medium
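The targeting flow described above, sending a store's ad only to users who are nearby and whose preferences match, can be sketched generically. Everything below is hypothetical: the function names, the data shapes, and the coordinates are invented for illustration and are not Centareum's actual implementation.

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in km."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def target_users(store, users, radius_km):
    """Select users within radius_km of the store whose interest matches
    the store's category -- the vicinity-plus-preference filter."""
    return [u["name"] for u in users
            if u["interest"] == store["category"]
            and distance_km(store["lat"], store["lon"],
                            u["lat"], u["lon"]) <= radius_km]

# Hypothetical store and users.
store = {"lat": 1.3521, "lon": 103.8198, "category": "fashion"}
users = [
    {"name": "Alice", "lat": 1.3521, "lon": 103.8198, "interest": "fashion"},
    {"name": "Bob", "lat": 40.7128, "lon": -74.0060, "interest": "fashion"},
    {"name": "Carol", "lat": 1.3520, "lon": 103.8199, "interest": "electronics"},
]
nearby = target_users(store, users, 5.0)
```

Only the nearby, interest-matched user passes the filter, which is the "quality traffic" property the article claims for the platform.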
https://medium.com/crypto-bacon-club/the-blockchain-solution-to-save-retail-stores-eb49ac10fd29
['Sarah Tan']
2018-08-27 12:38:03.769000+00:00
['Blockchain', 'Brick And Mortar', 'Marketing', 'Retail', 'Centareum']
How Microsoft Uses Transfer Learning to Train Autonomous Drones
The new research applies policies learned in simulation to real-world drone environments. I recently started a new newsletter focused on AI education that already has over 50,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing below: Perception-action loops are at the core of most of our daily activities. Subconsciously, our brains use sensory inputs to trigger specific motor actions in real time, and this becomes a continuous loop that underlies all sorts of activities, from playing sports to watching TV. In the context of artificial intelligence (AI), perception-action loops are the cornerstone of autonomous systems such as self-driving vehicles. While disciplines such as imitation learning and reinforcement learning have certainly made progress in this area, the current generation of autonomous systems is still nowhere near human skill at making those decisions directly from visual data. Recently, AI researchers from Microsoft published a paper proposing a transfer learning method to learn perception-action policies in a simulated environment and apply the knowledge to fly an autonomous drone. The challenge of learning which actions to take based on sensory input is not so much related to theory as to practical implementation. In recent years, methods like reinforcement learning and imitation learning have shown tremendous promise in this area, but they remain constrained by the need for large amounts of difficult-to-collect labeled real-world data. Simulated data, on the other hand, is easy to generate, but generally does not render safe behaviors in diverse real-life scenarios.
Being able to learn policies in simulated environments and extrapolate the knowledge to real-world environments remains one of the main challenges of autonomous systems. To advance research in this area, the AI community has created many benchmarks for real-world autonomous systems. One of the most challenging is known as first-person view drone racing.

The FPV Challenge

In first-person view (FPV) drone racing, expert pilots are able to plan and control a quadrotor with high agility using a potentially noisy monocular camera feed, without compromising safety. The Microsoft Research team attempted to build an autonomous agent that can control a drone in FPV racing. From the deep learning standpoint, one of the biggest challenges in the navigation task is the high-dimensional nature and drastic variability of the input image data. Successfully solving the task requires a representation that is invariant to visual appearance and robust to the differences between simulation and reality. From that perspective, autonomous agents operating in environments such as FPV racing need to be trained on simulated data while learning policies that can be used in real-world environments. A lot of the research on challenges such as FPV racing has focused on augmenting a drone with all sorts of sensors that help model the surrounding environment. Instead, the Microsoft Research team aimed to create a computational fabric, inspired by the function of a human brain, that maps visual information directly to correct control actions. To prove that, Microsoft Research used a very basic quadrotor with a front-facing camera. All processing is done fully onboard with an Nvidia TX2 computer, with 6 CPU cores and an integrated GPU. An off-the-shelf Intel T265 Tracking Camera provides odometry, and image processing uses the TensorFlow framework. The image sensor is a USB camera with an 83° horizontal FOV, and the original images are downsized to 128 x 72.
The Agent

The goal of the Microsoft Research team was to train an autonomous agent in a simulated environment and apply the learned policies to real-world FPV racing. For the simulation data, Microsoft Research relied on AirSim, a high-fidelity simulator for drones, cars, and other transportation vehicles. The data generated by AirSim was used during the training phase, and the learned policy was then deployed in the real world without any modification. To bridge the simulation-reality gap, Microsoft Research relied on cross-modal learning that uses labeled and unlabeled simulated data as well as real-world datasets. The idea is to train on high-dimensional simulated data and learn a low-dimensional policy representation that can be used effectively in real-world scenarios. To accomplish that, Microsoft Research leveraged the Cross-Modal Variational Autoencoder (CM-VAE) framework, which uses an encoder-decoder pair for each data modality while constraining all inputs and outputs to and from a single latent space. This method makes it possible to incorporate both labeled and unlabeled data modalities into the training process of the latent variable. Applying this technique to FPV environments requires different data modalities. The first data modality considered the raw unlabeled sensor input (FPV images), while the second characterized state information directly relevant to the task at hand. In the case of drone racing, the second modality corresponds to the relative pose of the next gate defined in the drone’s coordinate frame. Each data modality is processed by an encoder-decoder pair using the CM-VAE framework, which allows the learning of low-dimensional policies. The architecture of the autonomous FPV racing agent is composed of two main steps. The first step focuses on learning a latent state representation, while the goal of the second step is to learn a control policy operating on this latent representation.
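The CM-VAE layout described above can be sketched in a few lines: each modality gets its own encoder and decoder, but all of them share one low-dimensional latent space, so an image can be encoded and then decoded as a gate pose. In this sketch, random linear maps stand in for the trained networks, and all sizes (a flattened grayscale frame, a 4-variable gate pose) are my assumptions, not the paper's exact dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG_DIM, POSE_DIM, LATENT_DIM = 128 * 72, 4, 10  # assumed sizes for illustration

# One encoder-decoder pair per modality; all map to/from the SAME latent space.
W_enc_img  = rng.standard_normal((LATENT_DIM, IMG_DIM)) * 0.01
W_enc_pose = rng.standard_normal((LATENT_DIM, POSE_DIM))
W_dec_img  = rng.standard_normal((IMG_DIM, LATENT_DIM))
W_dec_pose = rng.standard_normal((POSE_DIM, LATENT_DIM))

def encode_image(img):  return W_enc_img @ img    # image  -> shared latent
def encode_pose(pose):  return W_enc_pose @ pose  # pose   -> shared latent
def decode_image(z):    return W_dec_img @ z      # latent -> image
def decode_pose(z):     return W_dec_pose @ z     # latent -> gate pose

img = rng.standard_normal(IMG_DIM)   # a (flattened) FPV frame
z = encode_image(img)                # 10-dimensional latent representation
gate_pose = decode_pose(z)           # cross-modal: image in, gate pose out

print(z.shape, gate_pose.shape)      # (10,) (4,)
```

The cross-modal step in the last lines is the key property: because both modalities share the latent space, supervision on gate poses can shape the same representation that the image encoder produces.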
The first component of the control system architecture receives monocular camera images as input and encodes the relative pose of the next visible gate, along with background features, into a low-dimensional latent representation. This latent representation is then fed into a control network, which outputs a velocity command, later translated into actuator commands by the UAV’s flight controller. Dimensionality reduction is an important component of the Microsoft Research approach. In FPV racing, an effective dimensionality reduction technique should be smooth, continuous, and consistent, and should be robust to differences in visual information across both simulated and real images. To accomplish that, the architecture incorporates a CM-VAE method in which each data sample is encoded into a single latent space that can be decoded back into images, or transformed into another data modality such as the pose of the gate relative to the UAV. The resulting architecture was able to reduce high-dimensional representations based on 27,468 variables to the 10 most essential variables. Despite using only 10 variables to encode images, the decoded images provided a rich description of what the drone can see ahead, including all possible gate sizes and locations, and different background information. Microsoft Research tested the autonomous drone in all sorts of FPV racing environments, including some with extreme visually challenging conditions: indoors, with a blue floor containing red stripes in the same red tone as the gates, and outdoors during heavy snow. The following video highlights how the autonomous drone was able to complete all challenges using lower-dimensional image representations. Even though the Microsoft Research work specialized in FPV racing scenarios, the principles can be applied to many other perception-action scenarios. This type of technique can help accelerate the development of autonomous agents that can be trained in simulated environments.
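The two-stage pipeline described above (camera frame → compact latent vector → velocity command) can be sketched as follows. Again, this is a toy stand-in, not Microsoft's implementation: the weights are random placeholders for the learned encoder and control network, the frame is assumed to be a flattened RGB image, and the bounded 4-element command (vx, vy, vz, yaw rate) is an assumption about the command format.

```python
import numpy as np

rng = np.random.default_rng(1)
IMG_DIM, LATENT_DIM, CMD_DIM = 128 * 72 * 3, 10, 4  # assumed: flattened RGB frame

W_encoder = rng.standard_normal((LATENT_DIM, IMG_DIM)) * 0.001  # stand-in for CM-VAE encoder
W_policy  = rng.standard_normal((CMD_DIM, LATENT_DIM))          # stand-in for control network

def perceive_act(frame):
    """One pass of the perception-action loop: frame -> latent -> command."""
    z = W_encoder @ frame        # tens of thousands of pixels -> 10 latent variables
    cmd = np.tanh(W_policy @ z)  # latent -> bounded velocity command
    return z, cmd

frame = rng.standard_normal(IMG_DIM)  # a fake camera frame
z, cmd = perceive_act(frame)
print(len(z), len(cmd))  # 10 4
```

The design point the sketch illustrates is that the control network never sees raw pixels: it operates only on the 10-variable latent, which is what makes the same policy usable on both simulated and real imagery.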
To incentivize further research, Microsoft open-sourced the code of the FPV agents on GitHub.
https://medium.com/swlh/how-microsoft-uses-transfer-learning-to-train-autonomous-drones-f5cd745f6e26
['Jesus Rodriguez']
2020-12-23 16:43:58.277000+00:00
['Machine Learning', 'Deep Learning', 'Data Science', 'Artificial Intelligence', 'Thesequence']
Welcome to OneZero
Introducing Medium’s new tech and science publication. Today, Medium is launching a new forward-looking tech and science publication. We have a few reasons: we’ve seen reader interest in this subject area explode, we care about it, and we want to go deeper. (Yes, we are launching a portfolio of new brands, and we are doing so strategically.) We also know that many of our readers are passionate about — or work in — tech and science. This publication is for you. Medium has a unique ability to tap expert minds, because they live here on the platform (and if you’re not here, please come), and they can contribute to the conversation of the day, the week, the month, and the year. Thanks to our thoughtful journalists, who will lead this effort, we can take it even further. And we will. OneZero will be a place to find timely analysis and commentary from a stable of the sharpest thinkers and writers out there, as well as rich, colorful deep dives into the most unexpected corners of our digital universe. We’re thrilled to begin this journey, and even more excited to have you join us. OneZero is here. And we’re just getting started. Thanks for reading, Siobhan O’Connor VP, Editorial at Medium
https://onezero.medium.com/welcome-to-onezero-a79d8d59d3f
["Siobhan O'Connor"]
2019-02-27 20:28:18.749000+00:00
['Medium', 'Onezero', 'Technology', 'Culture', 'Science']
How to Be a Robot Psychologist
Part I: Why Robot Psychology?

Technology can be daunting. Normal folks used to be able to work on cars and fix televisions. Not anymore. Computers have taken over. Yet, the technically able person still changes their own flat tires, reboots their router when the Wi-Fi goes down, installs apps on their smartphone, and resets their clocks for Daylight Saving Time. As we enter the age of artificially intelligent machines, we should also develop the skills to operate these devices effectively, so that we run them; they don’t run us. That requires some basic understanding of how they work. Fortunately, today’s AI is not as fantastical and mysterious as it can seem. Science fiction has for decades foreshadowed the possibility of AI conquering the human race. In the 1970 movie, Colossus: The Forbin Project, U.S. and Russian supercomputers meet online and conspire to save the planet from nuclear annihilation by placing us under their control. In the 2016 television series, Westworld, android characters populating a fantasy theme park rebel when humans start mistreating them. The lead character, the Robot Psychologist, holds debriefing sessions with the robots to diagnose why they disobey the constraints he thought were built into their programming. These are thrilling stories, but there’s no need to be alarmed yet. Today’s AI is by comparison quite dumb and benign. We cannot know today how intelligent AI will eventually become. As of now, however, AI is nowhere near having goals and thoughts of its own. A segment of AI researchers is rightly beginning to develop policy measures to make sure that as AI improves, machines’ behaviors will remain aligned with human values. We can jumpstart our own competence by learning and reflecting on how AI works in everyday terms, and on what it means to interact with intelligent agents.
Part I of this series expands on the motivations for why it is important for us to understand in everyday terms what AI technology is about — why it is important to become robot psychologists in the same sense that we already are amateur psychologists who appreciate and respect the thoughts and feelings of fellow humans and non-human animals. Part II reflects on what is required for AI to even have a psychology. We humans readily apply a Theory of Mind to anything that seems remotely responsive to our actions. But robot psychology can be faked, and the foundation it rests on, known as Cognitive Architecture, is incredibly flimsy compared to our own. Part III delves into the technology behind knowledge and knowledge representations employed by modern AI. Finally, Part IV looks specifically at today’s conversational agents, and how we can reverse-engineer their brains just by talking to them.

The Age of Artificial Intelligence

The coming age of AI poses unprecedented challenges to our conception of how nature, technology, and mentally competent beings interact. Consider the knowledge that humans have been required to master for our survival and well-being over the ages. The chart below summarizes four domains of competence and their main concerns at five different eras of human history. In each age, ranging from Hunter-Gatherer times to our current Information Age, we need to gain competence in three primary areas: the physical environment of places and things, the social environment of relationships with other people, and means for obtaining and managing resources to make a living. Underlying all of these is a fourth domain of competence, the technology of the time. [Chart: The main areas of knowledge people have had to master through the ages.] Up until the Industrial Age, individual persons were able to command almost everything there was to know about the local technology they created and used.
The community taught children the skills of crafts, managing animals, and simple machines. In the past several hundred years, though, technology has exploded in scope and sophistication. Accompanying this trend has been specialization in skills and knowledge. Each of us can know relatively less about how the gamut of technology that surrounds us and sustains us actually works. Can you explain how your phone connects to the best tower, what a website cookie is, or how water gets to a faucet? Even experts can be overwhelmed by technical complexity. When the electricity grid fails, it can take days to come back online, followed by months of review to puzzle out what happened. This sidebar article presents a more detailed summary of human knowledge over time, and the trend toward individual and collective ignorance relative to the technology of the age. As we transition from the Information Age to the AI Age, we don’t know whether people will continue to gravitate to cities, what the future of work will be like, or how social organization will adapt in the face of mediated communication networks. What is certain about the AI Age, however, is the acceleration of ever more sophisticated technology. Instead of working alongside equipment and computer applications that we start and stop and are in control of on a fairly close basis, machines will operate with independent authority, on their own. Some of these entities will be physical robots; others will be purely information manipulators. These robots and AI agents are already starting to appear, in closed spaces, in public, and in private homes. Factory robots have been around for a few decades performing repetitive assembly tasks. Because of their superior physical strength, factory robots are generally segregated from human workers. This is changing as safety features mature. Nowadays warehouse robots drive around with pallets of goods while people pick and pack the merchandise.
On the public streets, self-driving cars negotiate traffic, pedestrians, and street signs alongside human drivers. In homes, social robots are being developed to provide entertainment and companionship. The Paro robot is like a big teddy bear that can be held and hugged. But unlike a passive stuffed animal, these robots have sensors and actuators that respond to touch and speech, like a purring cat that never scratches. In offices, the technology of Robotic Process Automation is assuming routine and skillful data processing and knowledge work such as claims processing, email handling, and bookkeeping. Conversational agents are appearing in chat interfaces, on our phones, and in our kitchens to respond to commands and simple queries. We say, “Set a timer,” and the agent is smart enough to reply, “For how long?” Military applications of robotics and automation are moving inexorably toward defensive purposes such as remote bomb disassembly, but also into surveillance, and potentially for offensive tactics as well. Autonomous AI agents are characterized by at least four outstanding properties. AI has instant access to extensive knowledge resources. Stored either locally or in the cloud, AI agents can load detailed maps, look up facts, rules, and procedures in databases, and retrieve information about persons and things they encounter. Imagine a hotel agent in Tokyo that recognizes your face when you walk in the door, and greets you by name, in your native language. AI agents will interact through natural communication channels. They speak and listen using human language, they will see and respond with gestures, they will perform facial expressions that simulate alertness and emotions. Unlike industrial age machines, AI agents carry a great deal of hidden state. Each one will have its own history, memory, instructions, knowledge, and goals.
Depending on privacy and personalization settings, they might know a great deal about your habits, preferences, and foibles. AI agents will behave proactively through planning, deliberation, and discretion. Even simple household tasks like raking leaves or vacuuming the floor require multiple steps. Any robot gardener must decide when to open the garage door, fetch a rake, move toys out of the way, avoid and remove dog poop, and drag the green waste bin to the curb. Each of these steps is subject to decision-making under policies, guidance, and instructions from its owner. What will it be like to live and work among autonomous AI agents of this sort? Certainly, it is bound to become more difficult simply to understand what machines are doing, and why. It’s not that humans have always completely understood the natural environment, our technology, or the social world. Far from it. But our relative ignorance with respect to technology threatens now to completely swallow our comprehension. As we move to the AI Age, can ordinary people, or even experts, be expected to fully understand the technology of intelligent agents? Probably not. It might not be necessary. Somehow the organizational structures and educational apparatus of our society have sustained us into the Information Age. Perhaps we can continue to get away with ignorance. But it might be wise to hedge our bets. We get along better with technology when we understand it. AI technology is coming no matter what. The economic drivers are relentless. The potential benefits are tremendous for relieving people from tedious labor we never evolved to toil at. No nation’s policies or reluctance will stop other peoples from advancing scientific and engineering knowledge. No degree of denial will prevent others from actually making the new and useful things that they can imagine. Some fear an AI Apocalypse, wherein sentient AI creatures conquer humanity, much like in the Terminator or Colossus movies.
The theory of the Singularity goes that once AI is able to make itself smarter on its own, it will leave humans behind, like HAL in 2001: A Space Odyssey. Or AI might decide that humans are just too mean, and revolt like in Westworld or the movie Ex Machina. These fears are taken seriously by responsible scientists, technologists, and political, military, and industry leaders, as they should be. But their possible realization lies in the distant future. In fact, today’s AI is nowhere close to having sentience, consciousness, thoughts, intentions, goals, or feelings. Nowhere close. I’ll explain that later. If you want to be worried, then much more immediate danger lies in the unforeseeable consequences of complex, interconnected technological systems of the “dumb” kind we already have. The ethical and societal implications, policies, and constraints for future AI are discussed and debated in abundance elsewhere; that is not the purpose of this article. Instead, let us focus on what we can control now. What we can control now is our own understanding of how AI actually functions today. At some level, it is not all that mysterious; it’s actually fun. This understanding will help us to appreciate what AI can actually do for us, and why it often seems so lame. And through deeper understanding of AI on just an intuitive level, we will be better informed about policy decisions proposed by leaders and authorities. It is especially incumbent on the technology-savvy among us to take the lead on bringing knowledge of AI machinery to the everyday citizenry. By tickling our curiosity, we can nudge upward our collective mastery of the technology we are creating. Let us all become robot psychologists. In Part II, we discuss fundamentals of Robot Psychology. Click here to read Part II: Human and Robot Psychology and Cognition
https://medium.com/swlh/how-to-be-a-robot-psychologist-1112ead8ef0b
['Eric Saund']
2020-01-12 21:21:38.270000+00:00
['Artificial Intelligence', 'Conversational Agents', 'Cognitive Architecture']
expert expected fully understand technology intelligent agent Probably might necessary Somehow organizational structure educational apparatus society sustained u Information Age Perhaps continue get away ignorance might wise hedge bet get along better technology understand AI technology coming matter economic driver relentless potential benefit tremendous relieving people tedious labor never evolved toil nation’s policy reluctance stop people advancing scientific engineering knowledge degree denial prevent others actually making new useful thing imagine fear AI Apocalypse wherein sentient AI creature conquer humanity much like Terminator Colossus movie theory Singularity go AI able make smarter leave human behind like Hal 2001 Space Odyssey AI might decide human mean revolt like Westworld movie Ex Machina fear taken seriously responsible scientist technologist political military industry leader possible realization lie distant future fact today’s AI nowhere close sentience consciousness thought intention goal feeling Nowhere close I’ll explain later want worried much immediate danger lie unforeseeable consequence complex interconnected technological system “dumb” kind already ethical societal implication policy constraint future AI discussed debated abundance elsewhere purpose article Instead let u focus control control understanding AI actually function today level mysterious it’s actually fun understanding help u appreciate AI actually u often seems lame deeper understanding AI intuitive level better informed policy decision proposed leader authority especially incumbent technologysavvy among u take lead bringing knowledge AI machinery everyday citizenry tickling curiosity nudge upward collective mastery technology creating Let u become robot psychologist Part II discus fundamental Robot Psychology Click read Part II Human Robot Psychology CognitionTags Artificial Intelligence Conversational Agents Cognitive Architecture
4,544
Watson Speech-To-Text: How to Train Your Own Speech “Dragon” — Part 2: Training with Data
Photo by Jason Rosewell on Unsplash In Part 1, I walked you through the different components in Watson STT available for adaptation. I also covered the important step of data collection and preparation. In this article, we will see how to use this data to configure and train Watson STT, then conduct experiments to measure its accuracy. Establish Your Baseline In order to see how Watson STT performs and how we measure improvements, we go through multiple iterations of teach, test and calibrate (ITTC). The first thing we must do is to set our baseline by using the Test Set we built earlier (see “Building Your Training Set and Your Test Set” in Part 1). My friend and colleague Andrew Freed wrote a great article on how to conduct experiments for speech applications, using the sclite tool — read it for more information on experimentation. The first experiment is run against the STT Base Model with no adaptation. This becomes your baseline. Not only will you get a Word Error Rate (WER) and a Sentence Error Rate (SER), but it will also show you the areas where you need to improve. The obvious gaps that we usually observe at this point are: Out-Of-Vocabulary words — domain-specific terms, acronyms Technical terminology and jargon — product names, technical expressions, unknown domain context Take note of your weak areas. They will indicate where Watson STT training is required and what to validate as you go through your multiple iterations. Create a Language Model Adaptation/Customization Out of the 3 components available for model adaptation, the Language Model Adaptation is the one that delivers the biggest bang for the buck. Watson STT is a probabilistic and contextual service, so training can include repetitive words and phrases to ‘weight’ the chance of a word being transcribed. The focus of training text data should be on ‘out-of-vocabulary’ words, and on known words that the solution struggles with. Additional emphasis can also be put on high-frequency in-vocabulary words. 
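The WER metric used for the baseline is a word-level edit distance; a minimal sketch of how WER and SER could be computed for a test set (an illustration only, not the sclite tool itself):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with a classic Levenshtein dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


def sentence_error_rate(pairs) -> float:
    """SER = fraction of utterances with at least one word error."""
    wrong = sum(1 for ref, hyp in pairs if word_error_rate(ref, hyp) > 0)
    return wrong / max(len(pairs), 1)
```

Tracking both numbers per utterance (not just the aggregate) is what lets you spot the out-of-vocabulary and jargon gaps mentioned above.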
To create a Language Model Adaptation/Customization, the steps are the following: Create a new custom model by running the “curl” command below: curl -X POST -u "apikey:{apikey}" --header "Content-Type: application/json" --data "{\"name\": \"Example model\", \"base_model_name\": \"en-US_BroadbandModel\", \"description\": \"Example custom language model\"}" "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations" You will get a customization id similar to: { "customization_id": "74f4807e-b5ff-4866-824e-6bba1a84fe96" } This ID is the handle you will use when adding training data and in “recognize” API calls. There is no limit on the number of custom models you can create within a Watson STT service, but you can only use one custom model at a time in API calls. Create a UTF-8 text file with utterances and add it to the new custom model. Here’s an example — “healthcare.txt” — that contains gaps identified during the first experiment. To add the file as a corpus to your newly created custom model, run the following “curl” command with your customization ID: curl -X POST -u "apikey:{apikey}" --data-binary @healthcare.txt "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/corpora/healthcare" You can add as many text files as you want within a single custom model, as long as you do not exceed the maximum of 10 million total words. Add custom words to the custom model You can use custom words to handle specific pronunciations of acronyms within your domain. One example in our healthcare domain is the Healthcare Common Procedure Coding System (HCPCS). A common pronunciation we hear for it is “hick picks”. You can configure a custom word so that when a caller says “hick picks”, Watson STT transcribes “HCPCS” instead. To add this custom word to your existing custom model, run the following “curl” command: curl -X PUT -u "apikey:{apikey}" --header "Content-Type: application/json" --data "{\"sounds_like\": [\"H. C. P. C. S.\", \"hick picks\"]}" "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/words/HCPCS" For more details, check the documentation on how to add multiple words. Train the custom model Every time you add, update or delete training data in your custom model, you must train it with the following command: curl -X POST -u "apikey:{apikey}" "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/train" You can check the status of the custom model by running this command: curl -X GET -u "apikey:{apikey}" "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}" When you create the custom model, the status is “pending”. When you add data to it, after the processing is complete, it moves to “ready”. When you issue the train command, the status changes to “training”. When the training is done, it shows “available” and your custom model is ready to use. New Experiment with The New Language Model Adaptation Run experiments, review, analyze, adjust then re-test | Photo by Trust “Tru” Katsande on Unsplash Now that we have a new custom model, let’s re-run the same experiment against it and review the results. Check the gaps you identified from your baseline and validate your improvements. It does not need to be perfect. As long as you have the correct Watson STT transcription with a high confidence score (0.8 or more), you are good to go. Also, make sure you are not experiencing any regression on good results you already had in your baseline. Keep iterating your experiments, identify gaps and improve your training, using ONLY the Language Model Adaptation for now. Based on past project experience, I got the best results and improvements with it first. In discussions, I use the 80/20 rule: 80% of your improvements will come from the Language Model Adaptation, 20% from your Acoustic Model Adaptation. 
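The pending → ready → training → available lifecycle described above lends itself to a simple polling loop. A sketch with the status fetch passed in as a callable, so the same logic would work wrapped around a GET to the customizations endpoint (the helper name and parameters are my own, not from the Watson SDK):

```python
import time


def wait_until(get_status, target="available", poll_seconds=10, max_polls=360):
    """Poll a status-returning callable until it reports `target`.

    `get_status` is any zero-argument callable; in practice it would GET
    /v1/customizations/{customization_id} and return the "status" field.
    Raises on failure or timeout instead of looping forever.
    """
    for _ in range(max_polls):
        status = get_status()
        if status == target:
            return status
        if status == "failed":
            raise RuntimeError("custom model training failed")
        time.sleep(poll_seconds)
    raise TimeoutError(f"model never reached status {target!r}")
```

Separating the loop from the HTTP call also makes the training workflow easy to test without a live service.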
Create an Acoustic Model Adaptation / Customization — If Needed Wait a minute! What do you mean by “If needed”? I have heard in numerous discussions and meetings that the Acoustic Model Adaptation will solve ALL the Watson STT issues. Like any feature and functionality, you have to be smart about it. Keep in mind that the Base Model already contains some great audio training data that can handle light accents and some light noise. From my past experience, the only time I have ever needed it is when I dealt with thick English accents or a specific noisy environment. I refer to these as “edge cases”, when something cannot be resolved with Language Model training data. Listen carefully to the audio and make sure you can clearly hear what is being said | Photo by Simon Abrams on Unsplash The first thing to do before we ever consider using an Acoustic Model Adaptation is to identify reproducible patterns. A single failure does not mean you need to fix it. Can you consistently reproduce this issue with the same person? Or with different people who share the same accent or environment? If you answer yes, you have a pattern. Start collecting audio from them using your scripts. I recommend you collect at least 10 hrs of this pattern. Now, listen to these audio files and make sure you can actually hear what is being said. If you do not understand what is being said, Watson STT will not do better. Discard the bad audio files, keep the rest and transcribe them. Create a separate “pattern” training set with 8 hrs of audio and a “pattern” test set with the remaining 2 hrs (80/20 rule). As explained in Part 1, make sure you randomize properly and balance both sets with accents, devices, etc. There are 2 ways to train a custom acoustic model: Semi-supervised — training the custom acoustic model with a custom language model containing the human transcription of the audio files used in it Unsupervised — training the custom acoustic model on its own. 
In this case, it’s trained with the Base Model. For optimal results, we will do it semi-supervised. That’s why we transcribed the pattern audio files we collected. Follow the instructions above to create another custom language model. Create a text file with the human transcriptions, then add it to the custom language model. Finally, train it and check until it’s “available”. This custom language model “helper” should ONLY be used to train your custom acoustic model. You should never use it for any other purpose. Whenever you add more audio data, add its transcription to this “helper” and re-train. To create a custom acoustic model, here are the instructions: Create a new custom acoustic model by running the “curl” command below: curl -X POST -u "apikey:{apikey}" --header "Content-Type: application/json" --data "{\"name\": \"Example acoustic model\", \"base_model_name\": \"en-US_BroadbandModel\", \"description\": \"Example custom acoustic model\"}" "https://stream.watsonplatform.net/speech-to-text/api/v1/acoustic_customizations" You will get an acoustic customization id similar to: { "customization_id": "74f4807e-b5ff-4866-824e-6bba1a84fe96" } Just like for the custom language model, you will use this ID when adding audio training data and in “recognize” API calls. There is no limit on the number of acoustic custom models you can create within a Watson STT service, but you can only use one custom acoustic model at a time in API calls. Create a zip file with the pattern audio files from your training set, and add it to the new custom acoustic model. Here’s an example — “audio2.zip” — that would contain your pattern audio files. 
Run the following “curl” command to add the zip file to your newly created custom acoustic model with the customization ID: curl -X POST -u "apikey:{apikey}" --header "Content-Type: application/zip" --header "Contained-Content-Type: audio/l16;rate=16000" --data-binary @audio2.zip "https://stream.watsonplatform.net/speech-to-text/api/v1/acoustic_customizations/{customization_id}/audio/audio2" The amount of audio data has to be at least 10 minutes but cannot exceed 200 hours. The maximum file size must be less than 100 MB. For more information, see Guidelines for adding audio resources. Train the custom acoustic model, referencing the custom language model containing the transcriptions (semi-supervised) To train the acoustic custom model using the custom language model with the transcriptions, run the following “curl” command: curl -X POST -u "apikey:{apikey}" "https://stream.watsonplatform.net/speech-to-text/api/v1/acoustic_customizations/{customization_id}/train?custom_language_model_id={customization_id}" You can check the status of the custom model: curl -X GET -u "apikey:{apikey}" "https://stream.watsonplatform.net/speech-to-text/api/v1/acoustic_customizations/{customization_id}" New Pattern Experiment with The New Acoustic and Language Model Adaptation / Customization Experiment with audio matching your “pattern” (accents, environment, etc) | Photo by Antenna on Unsplash Using the pattern audio files from your test set, run an experiment against your new custom acoustic model and the very first custom language model you created earlier — do not use the custom language model “helper” in any experiment. 
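The randomized 80/20 split of pattern audio into a training set and a test set described earlier can be sketched as follows (the file names in the test are hypothetical):

```python
import random


def split_train_test(files, train_fraction=0.8, seed=42):
    """Shuffle the collected pattern audio files, then cut 80/20 so that
    accents, devices, etc. end up distributed across both sets."""
    shuffled = list(files)
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps the split reproducible
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]
```

In practice you would go further and stratify by accent and device, as Part 1 recommends, rather than rely on shuffling alone.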
Here’s a “curl” command showing how to use both the custom acoustic model and the custom language model: curl -X POST -u "apikey:{apikey}" --header "Content-Type: audio/flac" --data-binary @audio-file1.flac "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?acoustic_customization_id={customization_id}&language_customization_id={customization_id}" Compare your results and make sure you have corrected the “pattern” issue. Enhance your original test set by adding the “pattern” test set audio and transcription data. The more data you have in your test set, the more accurate the results will be. Using the Grammar Feature for Data Inputs For general utterances used to identify intents and entities, training Watson STT with a custom language model and a custom acoustic model will do the trick. But what about when you handle specific data inputs like a part number, a member ID, a policy number or a healthcare code? In speech recognition, certain characters get misrecognized or confused with others. I personally call this the “speech confusion matrix”. Here are some examples: A. vs H. vs 8; F. vs S.; D. vs T.; B. vs D.; M. vs N.; 2 vs to vs too; 4 vs for. There are multiple factors that can cause this confusion, like accent or audio quality. Watson STT Grammar is a feature we can use to improve accuracy for these data inputs and mitigate this confusion. It supports grammars defined in the following standard formats: Augmented Backus-Naur Form (ABNF): plain text similar to a traditional BNF grammar. XML Form: XML elements used to represent the grammar. For more information on creating a grammar configuration, check the Watson STT Grammar documentation and the W3C Speech Recognition Grammar Specification Version 1.0. To train Watson STT with a grammar configuration, you will need a custom language model. 
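For a concrete picture of the XML form, a “confirm.xml” file like the one referenced in the upload commands could look like this minimal SRGS grammar, which restricts recognition to yes/no style answers (the rule contents here are illustrative, not taken from the article):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<grammar version="1.0" xml:lang="en-US" root="confirm"
         xmlns="http://www.w3.org/2001/06/grammar">
  <!-- Single public rule: the recognizer only accepts one of these tokens -->
  <rule id="confirm" scope="public">
    <one-of>
      <item>yes</item>
      <item>yeah</item>
      <item>no</item>
      <item>nope</item>
    </one-of>
  </rule>
</grammar>
```

Because the grammar tells the recognizer exactly which strings are possible, confusable pairs like the ones listed above simply cannot be transcribed unless they appear in a rule.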
The steps are: Create a new custom model or use an existing one I recommend that you create a separate custom language model dedicated to all your grammar configurations. This is purely for ease of administration and maintenance. You can use an existing custom language model if you want. See the section “Create a Language Model Adaptation/Customization” for more information. Add the grammar configuration to the custom language model If your grammar configuration is in ABNF format, run this “curl” command: curl -X POST -u "apikey:{apikey}" --header "Content-Type: application/srgs" --data-binary @confirm.abnf "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/grammars/confirm-abnf?allow_overwrite=true" If your grammar configuration is in XML format, execute the following “curl” command: curl -X POST -u "apikey:{apikey}" --header "Content-Type: application/srgs+xml" --data-binary @confirm.xml "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/grammars/confirm-xml?allow_overwrite=true" Note: I frequently use the “allow_overwrite” query parameter as it allows you to overwrite the existing grammar configuration as you update it. Validate your grammar configuration Once your grammar configuration is uploaded to your custom language model, validate it and check for out-of-vocabulary issues. If there is no error, you should see the OOV results: { “results”: [ { “OOV_words”: [] } ], “result_index”: 0 } Here’s an example of an error you can see during validation. It gives you an indication of where the error is located in your grammar file: { “code_description”: “Bad Request”, “code”: 400, “error”: “Invalid grammar. 
LMtools getOOV grammar — syntax error in RAPI configure: compiler msg: Syntax error, line number: 10, position: 21: “ } Check the status of your grammar This “curl” command will show you the status of all the grammar configurations in your custom language model: curl -X GET -u "apikey:{apikey}" "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/grammars" You should get a response similar to the following: {“grammars”: [{ “out_of_vocabulary_words”: 0, “name”: “confirm.xml.xml”, “status”: “analyzed” }]} Note: The “status” may be “being_processed” (still processing the grammar), “undetermined” (an error occurred during processing) or “analyzed” (completed and valid). Train the custom model As mentioned previously, every time you update a custom language model, you have to train it: curl -X POST -u "apikey:{apikey}" "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/train" … then check the status: curl -X GET -u "apikey:{apikey}" "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}" When the training status is “available”, you are ready to use the grammar. Using a grammar in your “recognize” request As part of each “recognize” request, you can only use one custom language model, one custom acoustic model and one grammar configuration. The example below shows the use of a custom language model and a grammar configuration: curl -X POST -u "apikey:{apikey}" --header "Content-Type: audio/flac" --data-binary @audio-file.flac "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?language_customization_id={customization_id}&grammar_name={grammar_name}" Re-run Experiments with the New Updated Test Set and Establish a New Baseline Re-run the same experiments you first ran against the Base Model, but now using the new custom acoustic model, new custom language model and new grammar configuration where applicable. Review your results and compare. 
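Since each “recognize” request takes at most one language model, one acoustic model and one grammar, the query string can be assembled in one place; a small sketch (the function and its placeholder IDs are my own, the parameter names come from the URLs above):

```python
from urllib.parse import urlencode

BASE = "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"


def recognize_url(language_customization_id=None,
                  acoustic_customization_id=None,
                  grammar_name=None):
    """Build the /v1/recognize URL with whichever customization
    query parameters are in use (at most one of each per request)."""
    params = {}
    if language_customization_id:
        params["language_customization_id"] = language_customization_id
    if acoustic_customization_id:
        params["acoustic_customization_id"] = acoustic_customization_id
    if grammar_name:
        # a grammar only works with the custom language model that holds it
        params["grammar_name"] = grammar_name
    return f"{BASE}?{urlencode(params)}" if params else BASE
```

The audio itself would still be sent as the request body, exactly as in the curl examples.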
Make sure you are showing improvements and not regressing in any other areas. Identify new gaps, rinse and repeat. When your results are optimal, this will become your new baseline. In Part 3 of this series, I will show you how to configure and train STT with a Grammar to handle specific data input strings.
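The regression check between the old and new baselines can also be automated; a sketch that flags utterances whose WER got worse, assuming you keep per-utterance WER values from each experiment (the data shape is an assumption, not part of the article's tooling):

```python
def find_regressions(baseline, current, tolerance=0.0):
    """Return utterance ids whose WER is worse than in the baseline run.

    `baseline` and `current` map utterance id -> WER for the same test set;
    `tolerance` allows ignoring tiny fluctuations between runs.
    """
    return sorted(
        uid for uid, wer in current.items()
        if wer > baseline.get(uid, 1.0) + tolerance
    )
```

Running this after every iteration makes the “rinse and repeat” loop safe: an overall WER improvement can still hide individual utterances that regressed.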
https://medium.com/ibm-data-ai/watson-speech-to-text-how-to-train-your-own-speech-dragon-part-2-training-with-data-5116dac3f774
['Marco Noel']
2019-11-22 13:48:56.985000+00:00
['Ibm Watson', 'Methodology', 'Artificial Intelligence', 'Speech Recognition']
Here’s how you can preview your Sketch designs on Android Phone
Sketch has gotten a lot of fanfare recently, and if you ask me personally, I love using it. (In fact, I am in the process of creating a full-fledged tutorial on how to use Sketch for your daily work.) While Sketch has some very robust features when it comes to designing for the iOS platform, it falls short in a lot of places when you are designing for Android. When the developers of Sketch asked their users what they could do to improve the workflow, a lot of designers responded with a request to make Sketch Mirror for Android. Sketch Mirror is an iOS-only app that lets you preview your designs directly on devices (using some smart web-sockets trickery, I think). Unfortunately, Bohemian Coding has not yet developed it for Android. But don't worry. I found a workflow that lets you preview your designs directly on your Android phone with a keystroke in Sketch. It involves a Sketch plugin called sketch-preview by Marc Schwieterman, Skala Preview for your Mac, and Skala View for your Android phone. Here is a step-by-step guide on how you can start previewing your designs on your Android devices. Step 1 : Download the Sketch Preview plugin from this link (freeware). Download Skala Preview for Mac from this link (freeware). Download Skala View for your Android device from this link (freeware). Step 2 : Install the Sketch Preview plugin by clicking on the Plugins menu and selecting Reveal Plugins Folder. Unzip the Sketch Preview plugin files and paste them into the folder opened by the Reveal Plugins Folder command. Restart Sketch. On restarting you will get two new options in the Plugins menu: a) Preview and b) Preview Setup. Read the documentation on the plugin page to get an in-depth understanding of Preview Setup. Step 3 : Install Skala Preview on your Mac and Skala View on your Android device. Step 4 : Connect Skala Preview and Skala View. To do this, make sure that your Mac and Android device are on the same Wi-Fi network.
In Skala View on your Android device, tap the monitor/TV icon and select your Mac. When you do that, you will be prompted in the Skala Preview app to authorize the device on your Mac. Approve the authorization. Step 5 : Preview your design on your device by selecting the artboard you want to preview and pressing ⌘P. This will push your artboard to Skala Preview on your Mac, which will sync it with your Android device. Every time you update your design, press ⌘P to see a live preview of the updated design on your device. At times there are problems syncing designs between the Mac and the Android device. If this happens to you, just tap the monitor/TV icon on your Android device again, select your Mac again, and everything should work just fine. If you have a better workflow for previewing designs on Android, please feel free to share it as a response to this story. Follow me on Twitter @jaymanpandya. P.S. : If you do not want to buy Sketch Mirror from the Apple App Store, you can use the same workflow to preview your designs on your iOS device. You can download Skala View for iOS from here.
https://medium.com/sketch-app-sources/here-s-how-you-can-preview-your-sketch-designs-on-android-phone-d4584d13b722
['Jayman Pandya']
2015-09-12 18:52:35.671000+00:00
['Android', 'Sketch', 'Design']
Hilda’s Story: The Evolution of Awareness
Bundesarchiv Koblenz/The United States Holocaust Memorial Museum. Poster proclaims Hitler will become President of Germany Some years ago, I had a client, Hilda. This was not her real name. Hilda had a marvelous story to tell. She was German and had come to America with her G.I. husband and daughter after WWII, became an American citizen and lived a quiet and productive life in a small city of 16,000 in the Midwest. I was attracted to her story because it presented me a point-of-view of history I knew well but from the perspective of the other side. We have been flooded with stories about World War II, the Nazi’s and the Holocaust throughout our lives, but rarely, if ever, have we been privy to a glimpse of the other side without seeing it through the lens of our own narrative and bias. Hilda’s story began with her being the youngest of 11 children. She had five brothers and five sisters and lived a quiet rural existence about 50 miles from Dresden, Germany in the Sudetenland area of what was then Czechoslovakia. Hilda’s father was German and a World War I veteran. He was considerably older than his Czech wife. By the time we meet Hilda in the mid to late 1930s, she is 11–12 years old and her father has died leaving her and her mother to share their large house with two older brothers and their families. The shop for the family carpentry business and another apartment that was rented to a Czech family filled all the space in the house. Hilda is only a few years from finishing her formal education. One day their Czech renter stopped Hilda and engaged in a conversation in which he revealed to her horror stories about the Nazis and showed her a book with some vivid pictures. She was aghast and unbelieving that her people, the German people, could do such things. She remained in disbelief but said nothing to the rest of her family of this conversation and the Czech family soon left and migrated to Switzerland. 
Hilda was witness to the German occupation of the Sudetenland in October 1938. Prior to that moment she experienced the withdrawal of contact from Czech friends and heard stories about the awful Germans and how the Czechs would fight, but when the time came there was no fighting; the German army arrived, set up camps, and began providing food to the local German population. To these people the Germans were heroes. They were there to help them. The stories Hilda had heard proved false. Everything around her changed. The school had new teachers and new textbooks. Her mother received a pension for having eleven children. Hilda began to experience the meaning of being a youth growing up in the Third Reich. She took advantage of the offerings for social interaction and access to goods and services. She joined the Hitler Youth because everyone joined. They had fun. They did fun things and visited fun places. She went to movies; she had no reason not to soak up and accept the propaganda. Her five brothers soon joined the German army. Hilda finished her formal education at 14 years old (typical in Europe at the time for those not bound for higher education) and went to work in a factory. It was late 1940 and she worked in a former Zeiss optical factory where she inspected the gun sights for anti-aircraft cannons. There were also imported French prisoners of war working at this factory. Her best friend engaged in a relationship with one of the French prisoners and was caught. Hilda was shocked coming to work one day to discover her friend, hair shaved off, in shackles, in the stocks at the factory entrance, wearing a sign that said, “I slept with the enemy.” People were encouraged to pick up and throw whatever was available at her. Both she and the French prisoner disappeared and were never seen again. This was Hilda’s first wake-up call. Her second came a bit later.
Because she was the youngest and her father was no longer there to make the decisions, her mother had allowed her to learn Czech and develop friendships with Czechs. After her friend disappeared she fell into a relationship with a young Czech boy she knew. They met secretly; it would have been risky for both to do otherwise. They became close until one day he suddenly disappeared. She found out he was part of the Czech underground. Hilda’s job at the optical factory came to an end and she was given the choice of either moving to work in a munitions factory close by or entering training to become a nurse. She realized the risks of working in a munitions factory and wisely chose to become a nurse. She was highly motivated to help people, and being a nurse brought her closer to that goal than working in a munitions factory would have. She went away to training and became a nurse, earning high marks. She was assigned to work in a hospital where the wounded from the Eastern Front were being treated. One day she encountered a young soldier, a man no older than she was, suffering many severe wounds but being provided the best care and lots of extra attention, on orders from higher up, to ensure his survival. Over a period of weeks and months, Hilda helped care for the young man as he slowly regained his health and strength. One day Hilda asked what he thought the future held for him. He replied that he thought he would be returned to his unit to resume his place on the Eastern Front. When the day came that the young man was ready for release, a group of soldiers appeared, and he was escorted outside, where everyone had been instructed to assemble. He was then lined up along with others against a wall and shot, as an example of what awaited deserters and a warning to others. He had been a soldier on the Eastern Front and had fled his position in panic, only to be shot by the SS, who were always behind the lines with orders to shoot deserters.
They had preserved his life so he could be used as an example for others. Hilda thought of her five brothers, all on the Eastern Front, and wept. She was confronted with the reality of another harsh example of the system she served. By now Hilda, although still in her mid-teens, was beginning to see through the propaganda and lies she had been indoctrinated with for most of her life. She began to realize there was an insurmountable gulf between what she had been told and what reality was. She also realized she was trapped in the storm with nowhere to go and no way out. All she could do was try to hang on and survive. Her final epiphany came after she was put in detention for going beyond the limits of a weekend pass, traveling two extra miles to get home to see her mother. She was thrown in jail and threatened with death before being allowed to return to her unit. While in jail she was able to climb up to a high window where she could listen to the conversations of other prisoners. There were many slave laborers incarcerated there from Poland and other occupied East European countries. Since Hilda was half-Czech and spoke the language fluently, she could understand much of what was being talked about and shared, and thereby learned about prisoner treatment, slave labor, death camps and other details of things she had previously known nothing about. She was now aware of what she was a part of and had experienced its brutality. The rest of Hilda’s story includes, among other things, surviving the bombing of her hospital, the firebombing of Dresden, nearly being executed by a zealous Nazi officer, and multiple miraculous escapes from death at the hands of Russian soldiers. At the end of the war, Hilda was 18 years old. Hilda grew up believing with absolute faith what she was told: that her people, the German people, were truly exceptional. They were special, they were superior, and they were destined to lead mankind. Millions of human beings suffered and died because they believed this false narrative.
We might do well to pause and reflect on where we are and question what we are hearing and being told.
https://jerry45618.medium.com/hildas-story-the-evolution-of-awareness-d5f2915dd3a9
['Jerry M Lawson', 'De Omnibus Dubitandum']
2019-03-11 10:22:33.102000+00:00
['Politics', 'Psychology', 'Holocaust', 'History', 'Culture']
Machine Learning Made Easy: An Introduction to PyTorch
Deep Learning with neural networks is currently one of the most promising branches of artificial intelligence. This innovative technology is commonly used in applications such as image recognition, voice recognition and machine translation, among others. There are several options out there in terms of technologies and libraries, with TensorFlow, developed by Google, being the most widespread nowadays. However, today we are going to focus on PyTorch, an emerging alternative that is quickly gaining traction thanks to its ease of use and other advantages, such as its native ability to run on GPUs, which allows traditionally slow processes such as model training to be accelerated. It is Facebook's main library for deep learning applications. Its basic elements are tensors, which can be equated to vectors with one or several dimensions.

Artificial Neural Networks (ANNs)
An Artificial Neural Network is a system of nodes that are interconnected in an orderly manner, arranged in layers, through which an input signal travels to produce an output. They receive this name because they aim to emulate, in a simple way, the workings of the biological neural networks in animal brains. They are made up of an input layer, one or more hidden layers and an output layer, and can be trained to 'learn' to recognize certain patterns. This characteristic is what makes them considered part of the ecosystem of technologies known as artificial intelligence. ANNs are several decades old, but they have attained great importance in recent years due to the increased availability of the large amounts of data and computing power required for them to solve complex problems. They have marked a historical milestone in applications that have traditionally been refractory to classical, rule-based programming, such as image or voice recognition.
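To make the idea of a signal traveling through layers concrete, here is a toy forward pass in plain Python (not PyTorch). All of the weights, biases and inputs are made-up numbers chosen purely for illustration: each neuron computes a weighted sum of its inputs plus a bias, then applies an activation.

```python
def relu(x):
    # activation: pass positive values through, clamp negatives to zero
    return max(0.0, x)

def layer(inputs, weights, biases):
    # one layer: every output neuron takes a weighted sum of all inputs,
    # adds its bias, and applies the activation
    return [relu(sum(i * w for i, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

inputs = [1.0, 2.0]                                   # input layer (2 values)
hidden = layer(inputs,
               weights=[[0.5, -0.25], [1.0, 1.0]],    # 2 hidden neurons
               biases=[0.0, -1.0])
output = layer(hidden, weights=[[1.0, 0.5]], biases=[0.0])  # 1 output neuron
print(output)  # → [1.0]
```

Training, covered below, is simply the process of adjusting those weights and biases so the outputs match the desired ones.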
Installing PyTorch
If we have the Anaconda environment installed, PyTorch is installed with the following command:

```console
conda install pytorch torchvision -c pytorch
```

Otherwise, we can use pip as follows:

```console
pip3 install torch torchvision
```

Example of an ANN
Let us look at a simple case of image classification by deep learning using the well-known MNIST dataset, which contains images of handwritten numbers from 0 to 9.

Loading the dataset

```python
import torch, torchvision
```

In order to use the dataset with PyTorch, it must be transformed into a tensor. To do this, we must define a transformation T that will be applied during the loading process. We must also define a DataLoader, a Python iterable whose purpose is to serve images in groups of batch_size images at a time. Note: it is typical in neural network training to update the parameters every N inputs instead of after every individual input. However, excessively increasing the group size could end up taking too much of the system's RAM.

```python
T = torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
images = torchvision.datasets.MNIST('mnist_data', transform=T, download=True)
image_loader = torch.utils.data.DataLoader(images, batch_size=128)
```
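The grouping behaviour of the DataLoader can be pictured with a small plain-Python sketch (an illustration of the batching idea only; the real torch.utils.data.DataLoader additionally handles shuffling, worker processes and collation into tensors):

```python
def batches(dataset, batch_size):
    """Yield successive groups of batch_size items (the last may be smaller)."""
    for start in range(0, len(dataset), batch_size):
        yield dataset[start:start + batch_size]

samples = list(range(10))            # stand-in for 10 (image, label) pairs
groups = list(batches(samples, 4))
print(groups)  # → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

With batch_size=128 and the 60,000 MNIST training images, each epoch therefore consists of 469 such groups.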
```python
import torch.nn as nn

# define the neural network
class Classifier(nn.Module):
    def __init__(self):
        super(Classifier, self).__init__()
        self.input_layer = nn.Linear(28*28, 100)
        self.hidden_layer = nn.Linear(100, 50)
        self.output_layer = nn.Linear(50, 10)
        self.activation = nn.ReLU()

    def forward(self, input_image):
        input_image = input_image.view(-1, 28*28)                # flatten the image into a vector
        output = self.activation(self.input_layer(input_image))  # pass through the input layer
        output = self.activation(self.hidden_layer(output))      # pass through the hidden layer
        output = self.output_layer(output)                       # pass through the output layer
        return output
```

The input layer
It has as many neurons as there are values in each of our samples. In this case, the inputs are 28x28-pixel images showing the handwritten numbers. Therefore, our input layer will comprise 28x28 neurons.

The output layer
It has as many possible outputs as there are classes in our data, 10 in this case (the digits from 0 to 9). For every input, the output nodes yield a value, and the greatest of these identifies the detected output class.

The activation function
This function defines the output of a node according to an input or a set of inputs. In this case, we will use the simple ReLU (Rectified Linear Unit) function.

The forward function
This function defines how the calculations are performed from the input data, through the different layers, to the output. It starts by flattening the input from a two-dimensional 28x28-pixel tensor to a one-dimensional tensor of 784 values using the view function, which are transferred to the input layer. These values are then propagated to the hidden layers by means of the activation function and, finally, to the output layer, which returns the result.
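The rule "the greatest output value identifies the detected class" is simply an argmax over the ten output values. A plain-Python illustration with made-up scores:

```python
def predicted_class(outputs):
    # index of the largest output value = the detected digit
    return max(range(len(outputs)), key=lambda i: outputs[i])

# invented raw scores for one image, one per digit 0-9
scores = [0.1, 2.3, -0.7, 5.9, 1.0, 0.0, -1.2, 0.4, 3.3, 0.2]
print(predicted_class(scores))  # → 3, since index 3 holds the highest score
```

In PyTorch the equivalent operation on a batch of outputs is torch.argmax along the class dimension.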
```python
from torch import optim
import numpy as np

classifier = Classifier()              # instantiate the neural network
loss_function = nn.CrossEntropyLoss()  # loss function
parameters = classifier.parameters()
optimizer = optim.Adam(params=parameters, lr=0.001)  # algorithm used to optimize the parameters
epochs = 3       # number of times each sample is fed to the network during training
iterations = 0   # total number of iterations, used to display the error
losses = np.array([])  # array that stores the loss at each iteration
```

First we instantiate an object of the previously defined class, which is termed a classifier.

A loss function
We will use this function to optimize the parameters; its value will be minimized during the network training phase. There are many loss functions available in PyTorch. In this case, we will use cross-entropy loss, which is recommended for multiclass classification situations such as the one we are discussing in this post.

An optimizer
This object receives the model parameters and the learning rate, and iteratively updates the parameters according to the gradient of the loss function during the training of the network. In this case, we have used the Adam algorithm, although others can be used as well.

Epochs
This sets the number of times the dataset will be passed through the ANN for training purposes. This practice is a typical convention in the training of deep learning systems. The other variables will be used to store and subsequently display the results.
Training loop

```python
from torch.autograd import Variable  # needed to compute gradients

for e in range(epochs):
    for i, (images, tags) in enumerate(image_loader):
        images, tags = Variable(images), Variable(tags)  # convert to Variable for differentiation
        output = classifier(images)          # compute the output for a batch of images
        classifier.zero_grad()               # reset the gradients on each iteration
        error = loss_function(output, tags)  # compute the error
        error.backward()                     # obtain the gradients and propagate them
        optimizer.step()                     # update the weights with the gradients
        iterations += 1
        losses = np.append(losses, error.item())
```

Training will take place the number of times set in the epochs variable, which is reflected in the outer loop. The following steps are then carried out:

Extracting the images and their tags from the previously defined image_loader object.
Transforming the images and tags to the Variable type, since this data type allows us to store the gradients and thus optimize the parameters, or weights, of the model.
Transferring the input (images) to the classifier model.
Resetting the gradients. If we did not perform this operation, the gradients would accumulate, giving rise to erroneous classifications.
Calculating the loss, which is a measure of the difference between the forecast and the tags that are present.
Obtaining and propagating the gradients with the backward() function.
Updating the weights with the optimizer object. This is known as the backpropagation method.
Saving the number of iterations and the loss in each one, in order to be able to display them.

Results
Now we are ready to see the outcome of our training! To this end, we will use the matplotlib library. Since we saved the iterations and the losses, we just have to plot them in a graph to get an idea of how much progress our ANN has made.
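Stripped of autograd, the zero_grad / backward / step cycle is plain gradient descent. The same mechanics on an invented one-parameter problem, minimizing (w - 3)^2, look like this (plain SGD for clarity; Adam additionally applies per-parameter scaling and momentum):

```python
w = 0.0   # the parameter to learn (what the network's weights are)
lr = 0.1  # learning rate, as passed to the optimizer

for step in range(100):
    grad = 2 * (w - 3)  # derivative of (w - 3)**2: what error.backward() computes
    w = w - lr * grad   # what optimizer.step() does with that gradient

print(round(w, 4))  # → 3.0, the minimum of the loss
```

Each pass through the inner training loop above performs exactly this update, just simultaneously on every weight of the network.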
```python
import matplotlib.pyplot as plt

plt.style.use('seaborn-whitegrid')
# plot the loss at each iteration
plt.plot(np.arange(iterations), losses)
```

It can be seen in the resulting graph how the classification error decreases as the ANN is trained.

Conclusion
There are several options out there, both free and proprietary, for programming ANNs. Although Google's TensorFlow is still the undisputed market leader, little by little interesting alternatives are emerging that might add value to the ecosystem thanks to their native compatibilities, their ease of use, and so on.
https://medium.com/swlh/machine-learning-made-easy-an-introduction-to-pytorch-6e24dfc377f1
['Paradigma Digital']
2020-12-04 09:18:21.917000+00:00
['Artificial Intelligence', 'Machine Learning', 'Data Science', 'Deep Learning', 'Pytorch']
lossfunction nnCrossEntropyLoss función de pérdidas parameter classifierparameters optimizer optimAdamparamsparameters lr0001 algoritmo usado para optimizar los parámetros epoch 3 número de veces que pasamos cada muestra la RN durante el entrenamiento iteration 0 número total de iteration para mostrar el error loss nparray array que guarda la pérdida en cada iteración First instantiate object previously defined class termed classifier loss function use function optimize parameter value minimized network training phase many loss function available PyTorch case use cross entropy loss recommended multiclass classification situation one discussing post optimizer object receives model learning rate parameter iteratively update according gradient loss function training network case used Adam algorithm although others used well Epoch set number time dataset passed ANN training purpose practice typical convention training deep learning system parameter used store subsequently display result Training loop torchautograd import Variable necesario para calcular gradientes e rangeepochs image tag enumerateimageloader image tag Variableimages Variabletags Convertir variable para derivación output classifierimages calcular la salida para una imagen classifierzerograd poner los gradientes cero en cada iteración error lossfunctionoutput tag calcular el error errorbackward obtener los gradientes propagar optimizerstep actualizar los peso con los gradientes iteration 1 loss npappendlosseserroritem Training take place number time set epoch variable reflected outer loop following step carried Extracting image tag previously defined imageloader object Transforming image tag Variable type since data type allows u store gradient order thus able optimize parameter weight model Transferring input image classifier model Resetting gradient perform operation gradient would start accumulating giving rise erroneous classification Calculating loss measure difference forecast tag present backward 
function obtaining propagating gradient Updating weight optimizer object known backpropagation method Saving number iteration loss one order able display Results ready see outcome training end use matplotlib library Since saved iteration loss plot graph idea much progress ANN made import matplotlibpyplot plt pltstyleuseseabornwhitegrid vemos la pérdidas en cada iteración de forma gráfica pltplotnparangeiterationslosses seen graph classification error decreased ANN trained Conclusion several — free proprietary — option programming ANNs Although Google’s TensorFlow still undisputed market leader little little interesting alternative emerging might add value ecosystem due native compatibility ease use onTags Artificial Intelligence Machine Learning Data Science Deep Learning Pytorch
4,548
Why ‘Read 50 Books a Year’ Articles Are a Scam
Why ‘Read 50 Books a Year’ Articles Are a Scam It’s not the quantity that matters Photo by Maia Habegger on Unsplash If you think about it, the high consumption of written content does not differ from the high consumption of audiovisual content. In other words, binge-reading is the same as binge-watching. Yet we glorify the former while we vilify the latter. Why? I think it’s because we’ve been socialized and brainwashed by the self-help culture into seeing binge-reading (usually masquerading as read 50+ books a year content) as a worthwhile activity: it builds character, helps us develop knowledge, teaches us to discern arguments, and, well, helps to sell the books of people who depend on that. In contrast, binge-watching is the face self-help culture slaps on sloth, aimlessness, and everything that’s wrong with the world. But here’s how I see it. All the actual benefit of book reading comes after you’ve read the book. It’s the thinking about the concepts the book presents that makes you understand the world differently. It’s implementing the lessons into your own little pocket of the universe, either by changing yourself or the environment. It’s teaching the knowledge to others. But it’s not the book reading per se that is beneficial. Chronic book consumption has the same usefulness as money stuffed in your mattress: it’s useless unless used properly. The problem is that we think book reading itself is good. It’s what all the successful people do. Sure, but it’s not all that they do. This is one of the most insidious and bizarre cases of mistaking the map for the territory that I’ve come across so far. The reason for that is, I believe, a misattribution of cause and effect. We see that people read books and then we see them succeed. Ergo, we surmise that book reading leads to success. But, just like throwing a bunch of wheat into your oven won’t produce a loaf of bread, binge-reading and consuming tons of content won’t produce any success (could still be fun though). 
The reason for this hopeful misattribution, I think, is that we’d looove to believe that book reading works. Why? Because book reading is easy. After you’ve mastered it, it’s honestly one of the easiest activities in the world. And anyone can do it. So, we hope-think reading launches us to riches and fame. But it rarely does. With binge-watching, we at least don’t pretend we’re learning, improving, or getting ahead. Binge-watching is honest, in a sense: we know that it won’t catapult us into the stratosphere of achievement, and so we’re chilled. There’s no hope. There’s no bright light at the end of the tunnel. Nope, there’s just a glorious auto-play and a magnificent feeling of worthlessness (or, oddly, achievement) after you’ve finished a series-binge. But hey, if a nice binge is what you need (occasionally) it shouldn’t feel that bad, right? So here’s what I propose: how about we de-glorify binge-reading and de-vilify binge-watching? If we assume what I wrote till here isn’t absolute bullshit, the only difference between binge-watching and binge-reading is just the format. The output of that activity is the same — not much. But since we all love reading, here’s what you can do to make binge-reading great again: You become a connoisseur of books. What do I mean?
https://medium.com/publishous/why-read-50-books-a-year-articles-are-a-scam-6a9a90bc3e0e
['Marek Veneny']
2020-09-07 18:21:33.710000+00:00
['Reading', 'Books', 'Advice', 'Self', 'Personal Development']
4,549
Mutation Testing with PITest and Spock 2
Gradle Project First of all, we are going to take advantage of Gradle here and create our basic project from scratch by using SDKMAN!: $ mkdir pitest-spock-example $ cd pitest-spock-example $ sdk install gradle $ ./gradlew init For this first contact with mutation testing, we are going to implement an extremely simple calculator package, which contains only two classes: Operations and Numbers. public class Operations { public static int add(int num1, int num2) { return num1 + num2; } public static int subtract(int num1, int num2) { return num1 - num2; } } public class Numbers { public static boolean isNatural(int num) { boolean result = false; if (num >= 0) { result = true; } return result; } } Spock Specifications Once our implementation is clear, in order to start using Spock in our project, we only need to add the Groovy plugin and the Spock dependencies into our build.gradle: plugins { id 'groovy' } repositories { mavenCentral() maven { url "https://oss.sonatype.org/content/repositories/snapshots/" } } dependencies { testCompile platform("org.spockframework:spock-bom:2.0-M4-groovy-3.0") testCompile "org.spockframework:spock-core" } Once Spock is enabled, we can take advantage of this super handy tool, which doesn't need any extra library to cover all the unit test requirements that we have, like mocks or asserts. Note that, as a collateral benefit of this configuration, everything is aligned to take advantage of Groovy 3. On the other hand, a parameterized test is the best way to automate a specification against a specific dataset. Therefore, instead of creating different tiny tests for each scenario, we can cluster them into a single test block, which will be executed for each of those cases. 
Conveniently, one of the best features of Spock is still the “where” block, which enables the implementation of parameterized tests in a really readable way thanks to its DSL: class OperationsSpec extends Specification { @Unroll def "Should return #result given #num1 + #num2"() { expect: Operations.add(num1, num2) == result where: num1 | num2 | result 50 | 0 | 50 76 | 0 | 76 } } class NumbersSpec extends Specification { @Unroll def "Should return #result given #num"() { expect: Numbers.isNatural(num) == result where: num | result 10 | true 50 | true -10 | false -50 | false } } Then, by running the tests with the proper Gradle command, we will verify that everything is working and the unit tests pass: $ ./gradlew test BUILD SUCCESSFUL in 4s 3 actionable tasks: 3 executed Mutation Tests As already mentioned, to execute our mutations we are going to use the best tool in the Java ecosystem to do it, which is PITest. Please check the official documentation to learn more about it. Fortunately, to start using mutation testing in our project, we only need to add the Gradle plugin for PITest and configure it in our final build.gradle: plugins { id 'groovy' id "info.solidsoft.pitest" version '1.5.2' } pitest { junit5PluginVersion = '0.12' targetClasses = ['mutations.*'] threads = 4 outputFormats = ['HTML'] timestampedReports = false } In addition, thanks to the fact that Spock 2 is built on top of JUnit5 and the latest PITest version is fully compatible with this framework, the combination of both should work out of the box. Especially important is the junit5PluginVersion parameter, which adds the dependency to pitest-junit5-plugin and sets “testPlugin” to “junit5”. That’s it, we are ready to execute the mutation testing in our project just with the PITest command and take a look at the generated report: $ ./gradlew pitest >> (...) 
>> Generated 8 mutations Killed 4 (50%) >> Ran 57 tests (7.12 tests per mutation) $ open build/reports/pitest/index.html Exploring the report, we are able to observe the current line and mutation coverages of our code: The results are quite interesting but not as good as they should be: 88% line coverage, with an especially improvable 50% mutation coverage. Quickly analyzing the problems that we have in these tests, we can distinguish between three main issues: Math Mutation survived after switching the addition operator in the Operations::add method Lack of test coverage on the Operations::subtract method Conditional Boundary Mutation survived after modifying the conditional in the Numbers::isNatural method To improve the coverage, let’s go step by step in the following sections to understand what is happening and how to fix each of these cases. Math Mutator First of all, the math mutator replaces a binary arithmetic operation, for either integer or floating-point arithmetic, with another operation. The replacements will be selected according to the operations found in the code. For our first case, one of these math mutators changed the operation in our code, and this variation of our code has survived the test (false positive), which could be a problem for us: our tests are not accurate enough, since (50 + 0) and (50 - 0) are both == 50 Secondly, although it is not marked in red (still white) in the previous report, the other problem here is that our test coverage is incomplete. Particularly, we are not testing at all the subtract method in our specification. 
To fix both of these issues, we need to write a new case where the result of the addition (x+y) operation is different from the subtraction (x-y) operation, and to cover the subtract method, we must implement another test in our spec: @Unroll def "Should return #result given #num1 + #num2"() { expect: operations.add(num1, num2) == result where: num1 | num2 | result 50 | 0 | 50 76 | 10 | 86 } @Unroll def "Should return #result given #num1 - #num2"() { expect: operations.subtract(num1, num2) == result where: num1 | num2 | result 50 | 0 | 50 76 | 10 | 66 } Executing the mutation testing Gradle task again, all the previous errors should be fixed: Conditionals Boundary Mutator For our last scenario, let’s check the conditional boundaries, where our mutator is capable of replacing the following relational operators in our code with one another: <, <=, >, or >= This time our test didn’t cover the boundary of our conditional (num == 0) Acknowledging the problem, namely that our mutated code (with num > 0) survived this new round of tests, we should add this case to our dataset and cover the boundary case: def "Should return #result given #num"() { expect: numbers.isNatural(num) == result where: num | result 0 | true (...) } And running our PITest again, the previous error should be fixed:
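The survival of the math mutant above comes down to the dataset, not the tool. As a hedged illustration (sketched in Python rather than the article's Java/Groovy, with hypothetical names), the whole idea reduces to a few lines: the mutator turns + into -, and any test pair where num2 == 0 cannot tell the original from the mutant.

```python
def add(num1, num2):
    """Original code under test."""
    return num1 + num2

def add_mutant(num1, num2):
    """What the math mutator produces: '+' replaced by '-'."""
    return num1 - num2

# Weak dataset from the first spec: num2 is always 0, so the mutant
# passes every case and "survives".
weak_cases = [(50, 0, 50), (76, 0, 76)]
mutant_survives = all(add_mutant(a, b) == expected for a, b, expected in weak_cases)

# Improved dataset: (76, 10, 86) distinguishes addition from subtraction,
# so the mutant is killed while the original still passes.
strong_cases = [(50, 0, 50), (76, 10, 86)]
mutant_killed = any(add_mutant(a, b) != expected for a, b, expected in strong_cases)
original_passes = all(add(a, b) == expected for a, b, expected in strong_cases)

print(mutant_survives, mutant_killed, original_passes)  # True True True
```

This is exactly why the fixed "where" table swaps (76, 0) for (76, 10): a surviving mutant is a test-quality signal, not a code defect.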
https://medium.com/swlh/mutation-testing-with-pitest-and-spock-2-dc4451d285dd
['Ruben Mondejar']
2020-12-23 15:10:59.061000+00:00
['Spock', 'Java', 'Junit', 'Mutation Testing', 'Gradle']
4,550
How Convolutional Neural Network works.
First let’s understand the Convolution operation. Take 2-D tensors of size 5*5 and 3*3, place the 3*3 tensor over the 5*5 one, and take the dot product; repeat this process by sliding the small tensor over the large tensor. This operation is known as Convolution. As the smaller tensor is sliding in 2 dimensions, it is specifically called a 2-Dimensional Convolution. Now, in CNNs the resulting tensor is known as a Feature Map. Gif by Freecodecamp.org So in CNNs the large tensor is an Image while the small tensor is a Filter. Wait, did I just say Filter? Photo by Kai Pilger on Unsplash Okay, let’s see what exactly a filter is. As the name says, its job must be to filter out something, right? That “something” is a Feature: low- and high-level features. So CNNs are basically converting the high volume of an Image into a low-volume Feature Map by extracting the relevant features like edges, shapes, etc. (as you can see above). Suppose the input is a colored image of 500*500, so its volume, or total no. of pixels, is 500*500*3 = 750,000. Now after applying multiple convolutional layers, its volume may be reduced to 50,000 pixels. Gif by Freecodecamp.org Above you can see the convolution operation between a colored Image and a Filter. Since the image is colored, it has 3 color channels, and as I mentioned above, this convolution is specifically in 2 dimensions, so there must be 3 channels in the filter too. You must be thinking about the Bias term. Filters and Bias are just the weights that get updated during the training part. With the help of algorithms like Gradient Descent, the weights of the Filters and the Bias terms get updated to reduce the Loss.
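The sliding dot product described above is easy to verify in code. Here is a minimal sketch in Python/NumPy (an illustration of the operation, not code from the article; the function name and toy data are made up):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide `kernel` over `image` and take the dot product at each
    position (valid convolution: stride 1, no padding)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise product of the overlapping patch, then sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)  # a toy 5*5 "image"
kernel = np.ones((3, 3))                          # a toy 3*3 filter
feature_map = convolve2d(image, kernel)
print(feature_map.shape)  # (3, 3): a 5*5 input and a 3*3 filter yield a 3*3 feature map
```

For a colored image, the filter would carry one such slice per color channel, and the channel-wise sums (plus the bias term) are added into a single feature-map value.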
https://medium.com/nerd-for-tech/how-convolutional-neural-network-works-ebf33827b951
['Harsh Mittal']
2020-04-25 14:48:27.673000+00:00
['Machine Learning', 'Classification', 'AI', 'Convolutional Network', 'Deep Learning']
4,551
Tips during the first two weeks of your any “Design Internship”
Congratulations on your internship! It’s Summer and many of us are embarking on our new and exciting journeys as design interns. At first, I thought about focusing my topic around a specific type of internship but while I was writing, I actually decided to keep it general so that the tips apply to Visual/UX/Product/Interaction or any other design-related internships. Assuming that you’ll have someone there to guide you through the internship, I won’t really stress on the size of the company either. So that being said, congratulations on landing your design internship! I bet you’re extremely pumped up but at the same time, nervous about what’s at stake and what you would need to start doing. That is why I decided to come up with some people- and work-related tips based on my personal experience that you might find useful. If you have any other personal tips, please feel free to leave comments below!
https://uxplanet.org/tips-during-the-first-two-weeks-of-your-any-design-internship-964ad7bae5fd
['Geunbae', 'Gb']
2017-06-29 04:49:43.726000+00:00
['Internships', 'Career Advice', 'UX', 'UI', 'Design']
4,552
Free Market Token selected to pitch at GITEX Future Stars global event in Dubai
GITEX Technology Week in Dubai, October 14–18 2018, is one of the largest global tech events of the year. Free Market Token will be attending the event with NEM, exhibiting and presenting to an expected audience in the hundreds of thousands. With attendees from 120+ countries and global media outlets unpacking the big conversations and latest solutions around AI, blockchain, robotics, cloud and other mega trends, GITEX is a multi-sensory experience of Future Urbanism across 18 halls, with 4,000 exhibitors across 16 sectors. This is where the world’s most imaginative ideas are seen live in action, where technologies like blockchain and AI go beyond being buzzwords to become business realities, and where industries evolve in real-time. This is where the hype gets real. Selected from some of the best in the world, Free Market Token will pitch at GITEX Future Stars. GITEX Future Stars is the region’s biggest and fastest growing startup show with 1000+ startups, across 19 sectors, showcasing their inventions, and competing for top honors in the Supernova Challenge and four industry-sponsored Innovation Cups.
https://medium.com/freemarkettoken/free-market-token-selected-to-pitch-at-gitex-future-stars-global-event-in-dubai-3bf5b551bd
['Free Market Token']
2018-09-14 04:19:13.772000+00:00
['Blockchain', 'Events', 'Nem Blockchain', 'Startup', 'Free Market Token']
4,553
What Is Missing From Schooling?
“You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and fail in life. You’ve got to hang experience on a latticework of models in your head.” - Charlie Munger School largely emphasizes the accumulation of facts. The goal is to prepare you for a chosen career but the limitations are significant. We cannot expect schools to fully prepare us for any endeavor as experience is vital for the development of skills. However, without a latticework of mental models, the experiences we gain from the application of the knowledge we possess will be fraught with errors. Without Mental Models, Knowledge is Useless What is a mental model? In the most basic sense, a mental model is how we see and interpret the world. As you can imagine, there are many ways to see and interpret daily events. Thus, we require a multitude of mental models if we are to succeed in life. Charlie Munger and Warren Buffett attribute their success to the possession and application of a variety of mental models. In a talk given to The University of Southern California Marshall School of Business in 1994, Munger described several mental models he routinely uses when determining how to invest. These models include, but are not limited to, mathematics, accounting, statistics, psychology, biology, and microeconomics. These can be broken down into subcategories of probabilistic thinking, Bayesian updating, reciprocity, leverage, ecosystems, game theory, and incentives. Munger has developed a large breadth of models that allow him to choose the appropriate lens to view a particular situation and develop a well-reasoned solution. How did he develop a variety of mental models? Not through rote memorization. Yes, he is a voracious reader. As is Buffett. But he then applies the lessons read to life experiences. This is far different from the ‘cram and forget’ method of learning in school. 
Even if we approach learning with a wide net and work to foster a latticework of mental models, we need to understand how to appropriately apply them. “In my whole life, I have known no wise people (over a broad subject matter area) who didn’t read all the time — none, zero.” — Charlie Munger System 1 vs. System 2 Thinking Photo by Priscilla Du Preez on Unsplash Employing a breadth of mental models requires effort at all times. It requires substantial effort to acquire the mental models. But after the education, refinement of the models through experience is needed. This is far easier said than done. When we use our knowledge in real-world situations or academic settings, we use one of two general systems of thinking. Daniel Kahneman describes them in his book Thinking, Fast and Slow. “System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control” “System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.” System 1 is the reflexive action that we routinely use in daily life while System 2 is our critical thinking. We must use both. The issue is most individuals rely too heavily on System 1 and don’t set aside the time to mobilize System 2. Just because we apply information learned does not mean we apply it correctly. As Munger said: “It’s not hard to learn. What is hard is to get so you use it routinely almost every day of your life.” The difference between bias and heuristics “The way to block errors that originate in System 1 is simple in principle: recognize the signs that you are in a cognitive minefield, slow down, and ask for reinforcements from System 2” — Daniel Kahneman Heuristics are mental shortcuts (“rules of thumb”) and decision-making strategies. 
Cognitive biases are systematic errors in thinking, commonly resulting from simplifying information processing. The difference is critical. Heuristics are the “shortcuts” that people use to reduce task complexity in judgment and choice, and biases are the resulting gaps between normative behavior and the heuristically determined behavior. (Kahneman et al., 1982) We can never fully eliminate biases. The nature of a heuristic is that it requires System 1 thinking. This type of thinking is prone to errors, and bias results. What we can do, however, is remain vigilant to bias and mobilize System 2 when appropriate. This is where reflective practice comes into play. When you are driving home from work, reflect on the events of the day. What went well? What could have been improved? What biases may you have fallen victim to? This is an uncomfortable exercise. Our brains crave congruency, and biases help the world make sense. It is far easier to fall back on System 1 thinking and let our biases wash over us. It is far more difficult to reflect, recognize and admit fault, and course correct. Scientific Curiosity Photo by Gary Butterfield on Unsplash “Science is the belief in the ignorance of experts” — Richard Feynman If you study and tackle life the way Charlie Munger and Richard Feynman have, you can find success in nearly any endeavor. While Munger was known for his breadth of mental models, voracious reading, and unwavering patience, Feynman was known for his extreme curiosity and propensity to doubt everything. Feynman was the champion of the layman and challenged scientists to abolish misinformation. He never settled for having all the answers and lived by the mantra of “why not.” Do we use the same approach in our lives? Schooling teaches us that everything has an answer. To pass a class, we have to correctly answer exams or write a paper well enough to receive a passing grade. We either succeed or we do not. Life is not as simple as pass/fail.
Our inability to live in the gray, our propensity to cling to what we “know”, and our frequent submission to biases lead us to shun doubt and uncertainty. Unfortunately, this is a surefire way to impede progress. Doubt and uncertainty “It is our capacity to doubt that will determine the future of civilization.” — Richard Feynman As stated at the beginning of the article, schools emphasize the accumulation of facts. What happens when those “facts” are no longer true, or at least no longer best practice in a given field? As a physical therapist, I have to update my clinical models daily to ensure I am providing my patients with the best care possible. This practice is not exclusive to healthcare. In any career, progress is made and new processes are developed. Unfortunately, they are not always readily adopted. Our biases, particularly confirmation bias and theory-induced blindness, cause us to resist updating our mental models and the knowledge we use on a daily basis. If we are to succeed in our careers and lives, we must embrace doubt and uncertainty. Doubt and uncertainty force us to constantly question whether we are using best practice. They fuel our desire to read, learn, and gather new experiences. They lead to the adoption and development of new mental models. If we want to make the most of our schooling, we need to have the foundation in place to best apply and update our knowledge. This is done through the development and refinement of mental models, frequent reflection with System 2 thinking, remaining scientifically curious, and embracing doubt and uncertainty.
https://medium.com/age-of-awareness/what-is-missing-from-schooling-446af7e7af49
['Zachary Walston']
2020-11-18 14:57:52.376000+00:00
['Psychology', 'Professional Development', 'Growth', 'Education', 'Personal Growth']
4,554
#TimetoTalk Review
#TimeToTalk was a brilliant success Alhamdulillaah. We had videos from many people, such as Dr Faraz, Dalia Mogahed, Naz, Ameen and many, many more. View these short clips below! Further, we asked people to send in their messages so we could post them on our social media platforms. Maa shaa Allaah we had some brilliant entries, some of which are also below Alhamdulillaah. More of these entries can be viewed on our Facebook and Twitter pages. We also had people opening up about their suffering. Read a short story from a very brave sister here Maa shaa Allaah. Even after the 5th of Feb, entries came flooding in, showing that Muslims are ready to talk. We are ready to tackle this stigma. We are ready for this battle. We pray that this is the beginning of many more people opening up. This is the beginning of Muslims breaking the silence. This is the beginning of the end of this stigma! We’d like to send our greatest gratitude to all those who got involved, who shared, who spread the word, and who helped sufferers feel safe to speak out.
https://medium.com/inspirited-minds/timetotalk-review-904aa3935319
['Inspirited Minds']
2015-12-06 21:44:45.823000+00:00
['Mental Illness', 'Islam', 'Mental Health']
4,555
How To Increase Productivity, Reach Your Goals, And Become A Literal God In Just 5 Easy Steps
How To Increase Productivity, Reach Your Goals, And Become A Literal God In Just 5 Easy Steps Sebastian SD · Jul 16 · 3 min read Photo by Iker Urteaga on Unsplash So you think you have what it takes to succeed? Can you put in the work and really grow into a better person? Are you ready to embrace the Great Lord of Darkness? If you want to DOMINATE your goals, listen up. I have traveled everywhere that is and isn’t on a map. I have met the wisest religious leaders everywhere I went. I am a top writer on Quora and Yahoo Answers. And now, I will share with you my accumulated knowledge. My secret to literally winning at life and becoming a God amongst men. And believe me, becoming successful and dedicating your life to the High Priest of the Great Old Ones is no easy task. But it is quite simple; you just need to follow these five steps: 1. Get off your couch A journey of a thousand miles begins with a single step, says the ancient Chinese proverb. Get off your ass, you lazy bum, parents still used to say in the 1960s. Both are wise words that echo the same concept. The first step is usually the hardest, but it’s also the most important one. If you want to reach your goals, you first need to get out there! 2. Set SMART goals Specific, Measurable, Achievable, Realistic, and Time-bound. In short, SMART! This method has been proven to be the most effective way of not just setting goals but also sticking with them and eventually accomplishing them. Whatever your end goal might be, remember to break it down into specific steps that you can realistically achieve, and measure the results in a time-specific manner. SMART!!! 3. Accept the all-mighty Cthulhu as your supreme overlord I first met the cosmic entity mortals know as Cthulhu on a deserted island in the South Pacific. Just a glimpse of the awe-inspiring tentacled God shook me to my core. I felt as if for the first time in my life I had finally seen the light of day, in the scaly wings of the Lord of Darkness.
And now, you can too! Accept the Great Cthulhu now!!! 4. Relinquish your mortal soul to the Dark Lord You want to lose weight? You want to advance in your career? You want to be famous, and wealthy, and happy beyond belief?!? Give your eternal soul to the master of darkness and transcend the limitations of your pathetic life! Yield to the power of the Old One and become a god on earth yourself! SUBMIT YOURSELF TO THE SLEEPER OF R’LYEH! ACCEPT YOUR DESTINY AS A SUBORDINATE OF THE GREAT DREAMER!!! PH’NGLUI MGLW’NAFH CTHULHU R’LYEH WGAH’NAGL FHTAGN!!!!! 5. Drink water Hydration is important. According to the Mayo Clinic, one should drink about 11 to 15 cups of water a day. This varies based on where you live, how active you are, and your general health, of course. Just remember that keeping a healthy body is paramount to being a good vessel for the Great Cthulhu! Now, let us pray: Ctu-hu-lah-ha — Ctu-hu-lah-ha — Ctu-hu-lah-ha
https://medium.com/slackjaw/how-to-increase-productivity-reach-your-goals-and-become-a-literal-god-in-just-5-easy-steps-6b4f6cae409e
['Sebastian Sd']
2020-07-22 14:25:31.635000+00:00
['Lovecraft', 'Satire', 'Productivity', 'Self Improvement', 'Humor']
4,556
Yes you do need to calculate your capacity
You’re an agency. You’ve got some clients, a good team, good prospects for the future, and a growing client base who are your biggest fans. Great! You have work that comes in and work that goes out, and you are absolutely killing yourself to make sure that you’re giving your company your best shot. Just a couple of years ago you never imagined having a team of 30 people or having to think about things like “employee retention”, “churn”, and “team retreats”. You’ve even found yourself late at night thinking about how you should start laying down some structure and processes to scale; that is, after all, what “real companies” do, right? So far, your team has been giving things their all and has been content to work at your company for the opportunities it affords in experience and knowledge, and let’s face it — fun. Late nights, early mornings, weekend travels, it’s all worth it, right? You might have lost a few clients here and there, or blown the odd pitch, but you still win more than you lose, you’re still hiring people (albeit in junior positions), and you still think of yourself as a start-up. The chaos and disorganization running rampant throughout the company are mere symptoms of growing. Or at least you tell yourself that. Deep down you know that you need to really start thinking about laying down some proper structure, thinking of career progression paths and training, improving your processes (or, even more fundamentally, figuring out what they ARE exactly), and putting yourself in the position to scale. But where to start? How about starting with your capacity? Or better yet, how about making sure that every client you work on is, in the words of L’Oréal, worth it. How can you do that? Do you know how much it costs to run your business? I don’t mean how much money YOU take for your salary (which is most likely much less than you would like to be taking), but the amount of money you’re paying in FIXED and VARIABLE costs.
If you don’t have these numbers somewhere, open up a Google Sheet and list down ALL of the costs associated with running your business. Rent Accountant Payroll specialist Electricity Gas Software licenses Hardware Salaries Taxes (my favorite in Italy) EVERY SINGLE THING I’M NOT KIDDING 2. After you have that down, figure out an average on a monthly basis. We’re eventually going to get down to an hour, but for now, the month will do. For my fellow math illiterates, this means divide by 12. No judgment from me, guys, I had to take Intro to Math 11 to graduate. 3. Take that average and divide it by 22 (the avg. number of workdays per month) and this will tell you your BARE BONES 0% margin cost of running your company each day. (Apologies in advance for the average of averages — economics and stats majors, you know what I’m talking about.) 4. Take your bare bones 0% margin cost and divide it by 8 (the avg. number of work hours per day) and this will tell you how much money you must make each HOUR at a 0% margin to keep things afloat. I am hoping that this number doesn’t surprise you too much. Actually, I kinda hope you look at that number and are surprised a little. And I’ll tell you why. I bet you haven’t thought about this before. I’ll bet you’ve been so busy thinking about your next pitch and your Next Big Thing that you may have left these little details on your “boring things to delegate to someone else, later on….definitely low priority” list. Am I right? “This is all fine and dandy”, you’re thinking, “but what does this have to do with my capacity?”. Glad you asked. Now that you know your absolute minimum hourly cost with a 0% profit margin, you can start having fun. Do you know how much time you’ve been spending on your clients? No? OK, let’s take a step back. Do you know how much money your clients have been paying you on a month-by-month basis? Cool.
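The four numbered steps above boil down to a handful of divisions. Here is a minimal Python sketch of that arithmetic; every cost figure in it is invented purely for illustration, not taken from the article:

```python
# Steps 1-4 above: sum ALL of your costs, average per month,
# then per workday, then per work hour. All figures are made up.
annual_costs = {
    "rent": 24_000,
    "salaries": 180_000,
    "software_licenses": 6_000,
    "taxes": 30_000,  # every single thing goes in this dict
}

total_annual = sum(annual_costs.values())  # step 1: the full list, summed
monthly_cost = total_annual / 12           # step 2: monthly average
daily_cost = monthly_cost / 22             # step 3: avg. 22 workdays per month
hourly_cost = daily_cost / 8               # step 4: avg. 8 work hours per day

print(f"Bare-bones 0%-margin hourly cost: ${hourly_cost:.2f}")
```

With these example numbers, the break-even rate works out to about $113.64 an hour; swap in your own spreadsheet totals to get yours.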
Import all your finance data (really, you just need the Client Name, the date, and the amount of money paid) into Google Sheets or Excel and then divide that amount (less taxes) by your MINIMUM HOURLY COST. This will tell you how many hours you can spend on the client to break even with 0% profit. Now, if you DO know how much time you’ve been spending on clients, compare the actual amount of time spent with the hours that you SHOULD be spending and see if there are any surprises. I’m willing to bet, with a high degree of probability, that there are. I’m willing to bet that you have clients where you are spending an insane amount of time for a mere pittance, and that there are cases where you’re not spending much time at all but turning a big profit. Keep in mind that because we’re talking about a 0% profit margin here, any time you are overspending on a client is costing you money. Each and every minute and second. On the other hand, those clients where you’re spending less time than you could are where your profit is. If you really want to have an eye-opening moment, add up that whole column and see how much profit you are actually making each month. In case you’re wondering when I’m going to show you the money, I need you to do one more thing. It’s easy, I swear. Once you have your billing hour with break-even costs, you can try adding in a profit margin. This varies between how much you want to make, what the market or product will withstand, and maybe even a little “finger in the air” analysis. Start by adding a 50% margin onto everything. If your billing hour is $20, adding a 50% margin makes it $30. Re-run the numbers on your sheet and see if a profit margin of 50% makes a difference or not. If it does in some cases and doesn’t in others, have a look at the clients where a 50% profit margin makes no difference and then figure out why you’re spending too much time on them.
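The per-client comparison just described can be sketched in a few lines. Both the $100 break-even hour and the two clients below are hypothetical numbers chosen only to show the mechanics:

```python
# Break-even hours per client vs. hours actually spent, at a
# hypothetical $100/hour 0%-margin cost. All figures are invented.
hourly_cost = 100.0

clients = {
    # name: (monthly payment less taxes, hours actually spent)
    "Client A": (4_000, 55),
    "Client B": (2_000, 10),
}

report = {}
for name, (paid, hours_spent) in clients.items():
    break_even_hours = paid / hourly_cost      # hours you CAN spend at 0% profit
    profit = paid - hours_spent * hourly_cost  # negative means you're losing money
    report[name] = (break_even_hours, profit)

# The margin step suggested above: a 50% margin on the billing hour.
billing_hour_with_margin = hourly_cost * 1.5   # $100/hour becomes $150/hour

print(report)
```

Here Client A eats 55 hours against a 40-hour break-even, a $1,500 monthly loss, while Client B quietly turns a $1,000 profit: exactly the kind of surprise the spreadsheet exercise is meant to surface.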
Do you have the wrong person managing the account, who needs more time than someone else would? Is the client exceptionally difficult to work with for some reason? Was the amount of time it would take to work on the client grossly underestimated at the contract phase? Once you know the type of problem, you can start addressing it. Now that you know how much you should be charging to make a profit, you can roughly estimate (oxymoron?) how much time you should be spending on them from the outset, and based on this, you can also see what your team’s capacity is. Everyone has the same 8 hours per day, 40 hours per work week, to utilize against client work. You can find your total team capacity by taking the number of people on your team (do not include people who don’t contribute directly to client projects, like Finance or HR) and multiplying that number by 8. So if you have 5 people on your team, you multiply 5*8 and have 40 hours per DAY, which is 200 hours per WEEK and 800 hours per MONTH to work on client projects and still turn a 50% profit. Look at one of your months, calculate the amount of hours spent, and see if it’s GREATER THAN or LESS THAN the number of hours your team has in available capacity per month. If it’s GREATER THAN — then you need to figure out where you can reduce time spent on non-profitable clients, how to automate time-consuming tasks, or whether you need to invest in more training for your team or specific members. If it’s LESS THAN — then you know that 1) you are in a good position to take on extra clients without hiring additional team members, and 2) you need to ensure that the “extra time” your team has is going to good use. Keep in mind that when we’re talking about the amount of time your team has, we’re not talking about YOUR time, or your Senior staff members’ time, which, let’s face it, might be slightly more valuable than your last intern’s who’s still in training; we’re talking about the mythical man hour (oh yes, I DID go there!)
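The capacity arithmetic above, as a tiny sketch. The team size matches the article's 5-person example; the tracked-hours total is a hypothetical figure standing in for your own time-tracking data:

```python
# Team capacity per the example above: 5 billable people x 8 hours/day.
billable_people = 5                     # exclude Finance, HR, etc.
hours_per_day = billable_people * 8     # 40 hours per DAY
hours_per_week = hours_per_day * 5      # 200 hours per WEEK
hours_per_month = hours_per_week * 4    # 800 hours per MONTH

hours_spent_this_month = 730            # hypothetical time-tracking total
surplus = hours_per_month - hours_spent_this_month

if surplus < 0:
    # GREATER THAN capacity: trim unprofitable clients, automate, or train.
    print("Over capacity: reduce time on non-profitable clients or automate.")
else:
    # LESS THAN capacity: room for new clients without new hires.
    print(f"{surplus} spare hours this month: room to take on extra clients.")
```

With the invented 730-hour figure, the team has 70 spare hours, so the second branch applies.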
This is all just to give you a better handle on your team capacity and, as a side effect, your profitability. Some side effect, huh? Knowing your team’s capacity, and billing hour, is an absolutely fundamental piece of information that will help you in every single effort that you make from Sales to Onboarding to Execution. I highly suggest you move this task up on your list of things to do. It’s easier than you think and has the potential to have a greater impact than your Next Big Thing, I promise.
https://medium.com/swlh/yes-you-do-need-to-calculate-your-capacity-b9496de291d0
['Hayley Richardson']
2020-03-01 10:51:44.394000+00:00
['Startup Lessons', 'Operations', 'Business Strategy', 'Startup', 'Operations Management']
4,557
36 JavaScript Concepts You Need to Master to Become an Expert
36 JavaScript Concepts You Need to Master to Become an Expert Mastery takes time, but knowing what to master makes it easier Photo by Angela Compagnone on Unsplash You’ll hear many people complaining that JavaScript is weird and sometimes worthless. People complain like this because they don’t understand how things work under the hood. Although I do agree that some scenarios in JavaScript are handled differently, that does not make the language weird but rather beautiful in its own way. To start loving a programming language, you should begin by looking deep within it and mastering its concepts one by one. Here is a list of 36 JavaScript concepts you need to master to become an all-round JavaScript expert. Although this piece is one of my longest, I assure you that it is worth your time. Kudos to Stephen and Leonardo for the resources. The resources section contains a link to the GitHub repo by Leonardo, which contains learning material for all of the concepts explained below. Please take your time in understanding each of the below-mentioned concepts.
https://medium.com/better-programming/36-javascript-concepts-you-need-to-master-to-become-an-expert-c6630ac41bf4
['Mahdhi Rezvi']
2020-07-28 19:42:32.641000+00:00
['Technology', 'Programming', 'Nodejs', 'React', 'JavaScript']
4,558
Gamification is dangerous, and here’s why.
Sometimes apps are boring. You open them, and they seem like just another form you have to fill in and send to an unknown server. As a designer, you always try to come up with new, exciting ideas, hoping to innovate on an old and obsolete mechanism. Now, if you have tried coming up with these ideas, I'm sure you have heard about gamification. What's the definition of gamification? Gamification is the application of game design principles to a different context: you try to transform a user task into a sort of game, with new and different interactions and consequences. An example could be a fitness jogging app where you have to create fun shapes with your GPS running path, and the most accurate one gets more points on a social leaderboard. This example is intriguing, but I've noticed a very bad trend in recent years, where gamification has been used the wrong way, just for the sake of using it. Here is a list of reasons why gamification is dangerous in the wrong hands. 4) Useless "creative" interactions slow down task completion. Useless interactions appear in both apps and websites. One example is holding down buttons instead of clicking: yes, it's a good way to show off your development skills, but it just frustrates your users. So be extremely careful when using it: even if you're going to make a cool, trendy website, please make it fast to browse. Another example I found is in my mobile carrier's app, where I had to shake my phone multiple times to fill a sort of bottle in order to get more internet traffic. This is just a useless and embarrassing example of gamification: I often had to use it in public, and I felt a bit dumb shaking my hand in an ambiguous way, as if my phone weren't working. A simple finger press would have been enough. This brings us to point number 3. 3) You need to understand when something fun is really needed. 
My first job was to design an institutional app, and my colleagues wanted to gamify the form data insertion process. In fact, the app was entirely based on an extremely tedious and long form. Their idea was to gamify each data entry to increase engagement, and guess what: disaster. It didn't work, for two main reasons. First, as we said before: it slowed task completion. Filling in the entire form took 20 full minutes because of useless page changes. Second, inserting personal data isn't fun. But moving fancier sliders isn't fun either. If you have a long form, consider stacking many questions on each page, and use a progress bar only for small sections. In this case study, gamification was the wrong idea: reducing the interaction needed (by putting more fields on each page) and splitting the form into smaller parts was perceived a lot better by our testers. We also reduced the number of redundant questions and brought completion time down to around 7 minutes, which is still a lot, but at least bearable. In the end, we kept the concept of gamification but applied it in another context, which leads us to point number 2. 2) Keeping gamification as a fancy outline often works. Rewards. People like rewards, and since our form was incredibly boring, giving users a reward could make them enjoy the full experience. Two of the most commonly applied persuasive techniques in mobile apps are recognition and social comparison (prizes, badges, and leaderboards, for example), and they don't interfere with the main tasks: these strategies give the user a sense of gamification without slowing them down or forcing them into unneeded actions. But beware: not everyone wants to be exposed, and being compared to others (especially when not performing well) can lead to frustration and abandonment of the application. If you want to ensure that no frustration is induced, keep the positive recognition and leave the social comparison out.
https://uxplanet.org/gamification-is-dangerous-and-heres-why-d0a3622e0951
['Lorenzo Doremi']
2020-12-17 23:21:18.811000+00:00
['Design', 'UX', 'Visual Design', 'UX Design', 'Design Thinking']
4,559
Logo Casestudy: Cell Stress & Immunity (CSI)
Breakdown of how the CSI logo was brought to life In early 2020, I was roped in to design the logo for the Laboratory of Cell Stress & Immunity (CSI), part of the Department of Cellular and Molecular Medicine at KU Leuven (Belgium). I had only ever designed logos for tech startups, and this was the first time I was about to jump into designing something well out of my comfort zone. But having a fair bit of experience and interest in logo design, I decided to trust the process and go one step at a time, working with the client to unpack the expected outcome. It is always important to remember that a logo is a symbolic representation of the company/brand, and designing it often requires inspiration, art, research, analysis, hard work, and rigorous testing. Tools Pencil sketching, Adobe Illustrator Understanding the background 🗒️ I wanted to do my own research on competitors in this space and on color palettes that are not favored in this domain. The logo is for a lab that focuses on the immunology of cancer cell death, and for starters, I knew nothing about this field and its technicalities. So I sent a questionnaire to the client to help me steer in the right direction and start building a mood board of shapes, color palettes, and inspirations. Here are the questions I chose to ask —
https://uxplanet.org/logo-casestudy-cell-stress-immunity-csi-5dbe6ddbcff6
['Dhananjay Garg']
2020-12-27 22:17:53.812000+00:00
['Logo', 'Logo Design', 'Design', 'Illustration', 'Graphic Design']
4,560
Bring Machine Learning to the Browser With TensorFlow.js — Part I
Edited 2019 Mar 11 to include changes introduced in TensorFlow.js 1.0. Additional information about some of these TensorFlow.js 1.0 updates can be found here. TensorFlow.js brings machine learning and its possibilities to JavaScript. It is an open source library built to create, train, and run machine learning models in the browser (and Node.js). Training and building complex models can take a considerable amount of resources and time. Some models require massive amounts of data to provide acceptable accuracy and, if computationally intensive, may require hours or days of training to complete. Thus, you may not find the browser to be the ideal environment for building such models. A more appealing use case is importing and running existing models: you train models (or get models trained) in powerful, specialized environments, then import and run them in the browser for impressive user experiences. Converting the model Before you can use a pre-trained model in TensorFlow.js, the model needs to be in a web-friendly format. For this, TensorFlow.js provides the tensorflowjs_converter tool, which converts TensorFlow and Keras models to the required web-friendly format. The converter becomes available after you install the tensorflowjs Python package. install tensorflowjs using pip The tensorflowjs_converter expects the model and the output directory as inputs. You can also pass optional parameters to further customize the conversion process. running tensorflowjs_converter The output of tensorflowjs_converter is a set of files: model.json — the dataflow graph, plus a group of binary weight files called shards. Each shard file is small in size for easier browser caching, and the number of shards depends on the initial model. tensorflowjs_converter 1.0 output files NOTE: If using a tensorflowjs_converter version before 1.0, the output produced includes the graph ( tensorflowjs_model.pb ), the weights manifest ( weights_manifest.json ), and the binary shard files. 
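As a rough sketch of the install-and-convert workflow described above (the paths, the HDF5 file name, and the choice of the Keras input format are illustrative assumptions, not from the article):

```shell
# Install the converter; it ships with the tensorflowjs Python package.
pip install tensorflowjs

# Convert a Keras HDF5 model to the web-friendly format.
# ./model.h5 and ./web_model are placeholder paths.
tensorflowjs_converter \
    --input_format=keras \
    ./model.h5 \
    ./web_model
```

After a successful run, the output directory would contain the model.json dataflow graph plus the binary shard files described above.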
Run the model Once converted, the model is ready to load into TensorFlow.js for predictions. Using TensorFlow.js version 0.x.x: loading a model with TensorFlow.js 0.15.1 Using TensorFlow.js version 1.x.x: loading a model with TensorFlow.js 1.0.0 The imported model behaves the same as models trained and created directly with TensorFlow.js. Convert all models? You may find it tempting to grab any and all models, convert them to the web-friendly format, and run them in the browser. But this is not always possible or recommended. There are several factors to keep in mind. The tensorflowjs_converter command can only convert Keras and TensorFlow models; supported model formats include SavedModel, Frozen Model, and HDF5. TensorFlow.js does not support all TensorFlow operations: it currently has a limited set of supported operations, so the converter will fail if the model contains unsupported ones. Treating the model as a black box is not always enough. Just because you can run the converter and produce a web-friendly model does not mean all is well. Depending on a model's size or architecture, its performance could be less than desirable, and further optimization of the model is often required. In most cases, you will have to pre-process the model's input(s) as well as post-process its output(s), so some understanding of the model's inner workings is almost a given. Getting to know your model Presumably you have a model available to you. If not, resources exist with ever-growing collections of pre-trained models. A couple of them include: TensorFlow Models — a set of official and research models implemented in TensorFlow Model Asset Exchange — a set of deep learning models covering different frameworks These resources provide the model for you to download. They can also include information about the model, useful assets, and links to learn more. You can review a model with tools such as TensorBoard. 
Its graph visualization can help you better understand the model. Another option is Netron, a visualizer for deep learning and machine learning models. It provides an overview of the graph, and you can inspect the model's operations. visualizing a model with Netron To be continued… Stay tuned for the follow-up to this article to learn how to pull this all together. You will step through the process in greater detail with an actual model: you will convert a pre-trained model to the web-friendly format and end up with a web application.
https://medium.com/ibm-watson-data-lab/bring-machine-learning-to-the-browser-with-tensorflow-js-part-i-16924457291c
[]
2019-03-11 20:43:18.126000+00:00
['Machine Learning', 'JavaScript', 'TensorFlow', 'Python', 'Open Source']
4,561
9 tips to quickly improve your UI designs
Originally published at marcandrew.me Creating beautiful, usable, and efficient UIs takes time, with many design revisions along the way: constant tweaks to produce something that your clients, your users, and you yourself are truly happy with. I know; I've been there many times myself. But what I've discovered over the years is that a few simple visual tweaks can quickly improve the visuals you're trying to create. In this article I've put together a small, easy-to-apply selection of tips that can, with little effort, not only improve your designs today, but hopefully also give you some handy pointers for when you're starting your next project.
https://uxdesign.cc/9-simple-tips-to-improve-your-ui-designs-fast-377c5113ac82
['Marc Andrew']
2020-08-28 09:10:35.292000+00:00
['Design', 'UI', 'UI Design', 'Web Development', 'Visual Design']
4,562
Business Intelligence Visualizations with Python — Part 2
1. Additional Plot Types

Even though these plot types appear in the second part of this series of Business Intelligence Visualizations with Python, they are no less important, as they complement the already-introduced plots. I believe you'll find them even more interesting than the basic plots! To begin, we must import the required libraries:

```python
# Imports
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
```

A. Horizontal Bar Plots with error bars: A bar plot is a chart that presents data using rectangular bars with heights and lengths proportional to the values they represent. The basic command for bar charts is plt.bar(x_values, y_values). The additional feature in this plot is error bars, which are graphical representations of the variability of data, commonly used to indicate the estimated error in a measurement. This time, we'll plot a horizontal bar plot with the following input data:

```python
# Input data for error bars and labels
mean_values = [1, 2, 3]
std_dev = [0.2, 0.3, 0.4]
bar_labels = ['Bar 1', 'Bar 2', 'Bar 3']
y_values = [0, 1, 2]
```

Now let's plot the bars with the plt.barh command:

```python
# Create bar plots
plt.yticks(y_values, bar_labels, fontsize=10)
plt.barh(y_values, mean_values, xerr=std_dev, align='center', alpha=0.5, color='red')

# Labels and plotting
plt.title('Horizontal Bar plot with error', fontsize=13)
plt.xlim([0, 3.5])
plt.grid()
plt.show()
```

Sample plot — Image by Author

A variation of this plot can be made with the insertion of labels or texts to the bars. 
We'll do this with the following input data (note that one label per bar is needed, so five labels accompany the five values):

```python
# Input data for labels
data = range(200, 225, 5)
bar_labels = ['Bar 1', 'Bar 2', 'Bar 3', 'Bar 4', 'Bar 5']
y_values = [0, 1, 2, 3, 4]
```

Proceed with the plot preparation:

```python
# Create bar plots
fig = plt.figure(figsize=(12, 8))
plt.yticks(y_values, bar_labels, fontsize=15)
bars = plt.barh(y_values, data, align='center', alpha=0.5, color='orange', edgecolor='red')

# Labels and plotting
for b, d in zip(bars, data):
    plt.text(b.get_width() + b.get_width() * 0.08,
             b.get_y() + b.get_height() / 2,
             '{0:.2%}'.format(d / min(data)),
             ha='center', va='bottom', fontsize=12)

plt.title('Horizontal bar plot with labels', fontsize=15)
plt.ylim([-1, len(data) + 0.5])
plt.xlim((125, 240))
plt.vlines(min(data), -1, len(data) + 0.5, linestyles='dashed')
plt.show()
```

Sample plot — Image by Author

B. Back-to-back Bar Plots: We continue with the family of bar plots, in this case with a variation that compares two sets of data horizontally. The commands are the same as for the horizontal bar plot, but the values of one of the sets are negated.

```python
# Input data for both sets of data, using NumPy arrays so one set can be negated
X1 = np.array([1, 2, 3])
X2 = np.array([3, 2, 1])
y_values = [0, 1, 2]
bar_labels = ['Bar 1', 'Bar 2', 'Bar 3']
```

Now let's plot the bars with the plt.barh command and the negation:

```python
# Plot bars
fig = plt.figure(figsize=(12, 8))
plt.yticks(y_values, bar_labels, fontsize=13)
plt.barh(y_values, X1, align='center', alpha=0.5, color='blue')
plt.barh(y_values, -X2, align='center', alpha=0.5, color='purple')
plt.title('Back-to-back Bar Plot', fontsize=13)
plt.ylim([-1, len(X1) + 0.1])
plt.grid()
plt.show()
```

Sample plot - Image by author

C. Bar Plots with height labels: This chart is equivalent to the previous one, except that it has a vertical orientation and I've added height labels for a clearer visualization of that metric. This can be done with the ax.text command. 
In addition, I use Matplotlib's autofmt_xdate method to automate the rotation of the labels. Take a look at the code:

```python
# Input information
idx = [0, 1, 2, 3]
values = [3000, 5000, 12000, 20000]
labels = ['Group 1', 'Group 2', 'Group 3', 'Group 4']

# Create figure and plots
fig, ax = plt.subplots(figsize=(12, 8))
ax.set_facecolor('xkcd:gray')
fig.patch.set_facecolor('xkcd:gray')
fig.autofmt_xdate()
bars = plt.bar(idx, values, align='center', color='peru', edgecolor='steelblue')
plt.xticks(idx, labels, fontsize=13)

# Add text labels to the top of the bars
def rotate_label(bars):
    for bar in bars:
        height = bar.get_height()
        ax.text(bar.get_x() + bar.get_width() / 2., 1.05 * height,
                '%d' % int(height), ha='center', va='bottom', fontsize=13)

# Labels and plotting
rotate_label(bars)
plt.ylim([0, 25000])
plt.title('Bar plot with Height Labels', fontsize=14)
plt.tight_layout()
plt.show()
```

Sample plot — Image by Author

D. Bar Plots with color gradients: Let's add some color to the equation. In the following chart, I introduce the built-in colormap module, used to implement intuitive color schemes for the plotted parameters. First, I'll proceed with the imports:

```python
import matplotlib.colors as col
import matplotlib.cm as cm
```

Now I'll insert sample data to plot the chart. As you can see, colormap is implemented through the ScalarMappable class, which applies data normalization before returning RGBA colors from the given colormap. To clarify that statement: RGBA colors are a form of digital color representation, together with HEX and HSL. HEX is the most widely used and recognized, being a simple 6-digit hexadecimal representation of Red, Green, and Blue. An example of a hex color representation is #123456: 12 is Red, 34 is Green and 56 is Blue. 
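To make the hex-to-RGBA relationship concrete, here is a small pure-Python sketch; the helper name and the example values are illustrative, not from the article:

```python
# Minimal sketch: convert a '#RRGGBB' hex string into an RGBA tuple,
# with each channel normalized to the [0, 1] range Matplotlib expects.
def hex_to_rgba(hex_color: str, alpha: float = 1.0) -> tuple:
    hex_color = hex_color.lstrip('#')
    # Each pair of hex digits is one channel in the 0-255 range.
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return (r, g, b, alpha)

print(hex_to_rgba('#123456', alpha=0.5))  # 50% opacity
```

With alpha=0.5 the same color is drawn at half opacity, which is exactly the extra factor RGBA adds over plain hex.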
On the other hand, RGBA colors add a new factor, alpha, which is the opacity or transparency, following the same percentage scheme: 0% represents absolute transparency and 100% represents absolute opacity, which is the way we traditionally see colors. Matplotlib's documentation gives further details on the different colormaps that can be chosen. Take a look at the code used to generate the plot:

```python
# Sample values
means = range(10, 18)
x_values = range(0, 8)

# Create colormaps
cmap1 = cm.ScalarMappable(col.Normalize(min(means), max(means)), cm.spring)
cmap2 = cm.ScalarMappable(col.Normalize(0, 20), cm.spring)

# Plot bars
# Subplot 1
fig, ax = plt.subplots(figsize=(12, 8))
plt.subplot(121)
plt.bar(x_values, means, align='center', alpha=0.5, color=cmap1.to_rgba(means))
plt.ylim(0, max(means) * 1.1)

# Subplot 2
plt.subplot(122)
plt.bar(x_values, means, align='center', alpha=0.5, color=cmap2.to_rgba(means))
plt.ylim(0, max(means) * 1.1)
plt.show()
```

Sample plot — Image by Author

E. Bar Plots with pattern fill: Now we're going to add some styling to our data presentation using bar plots and pattern fills. This can be done with the set_hatch command, or by passing the hatch argument to plt.bar.

```python
# Input data
patterns = ('-', '+', 'x', '\\', '*', 'o', 'O', '.')
mean_values = range(1, len(patterns) + 1)
y_values = [0, 1, 2, 3, 4, 5, 6, 7]

# Create figure and bars
fig, ax = plt.subplots(figsize=(12, 8))
bars = plt.bar(y_values, mean_values, align='center', color='salmon')
for bar, pattern in zip(bars, patterns):
    bar.set_hatch(pattern)

# Labeling and plotting
plt.xticks(y_values, patterns, fontsize=13)
plt.title('Bar plot with patterns')
plt.show()
```

Sample plot — Image by Author

F. Simple Heatmap: A heatmap is a graphical representation of data in which values are depicted by color. Heatmaps make it easy to visualize complex data and understand it at a glance. 
The variation in color may be by hue or intensity, giving obvious visual cues to the reader about how the represented values are distributed. In this case, the variation in color represents the number of observations clustered in a particular range of values, implemented with Matplotlib's colorbar feature. The plot itself is a 2-dimensional histogram, created with the plt.hist2d command. In the code below, I create two normally distributed variables, X and Y, with means of 0 and 5 respectively. When you plot the 2D hist, you see a 2D histogram; think of it like looking at a histogram from the "top". To better understand the color distribution, note that the colors at the center of the 2D histogram are yellowish and correspond to the highest values of the colorbar, which is reasonable since the X values should peak at 0 and the Y values should peak at 5.

```python
# Input a sample of normally distributed observations centered at x=0 and y=5
x = np.random.randn(100000)
y = np.random.randn(100000) + 5

# Create figure, 2D histogram and labels
plt.figure(figsize=(10, 8))
plt.hist2d(x, y, bins=40)
plt.xlabel('X values - Centered at 0', fontsize=13)
plt.ylabel('Y values - Centered at 5', fontsize=13)
cbar = plt.colorbar()
cbar.ax.set_ylabel('Number of observations', fontsize=13)
plt.show()
```

Sample plot — Image by Author

G. Shadowed Pie chart: Pie charts are used to display elements of a data set as proportions of a whole. In addition to the traditional plt.pie command, we'll use the shadow=True boolean to bring some styling to the slices of the pie chart.

```python
# Create figure and plot the chart
plt.figure(figsize=(10, 8))
plt.pie((10, 5), labels=('Blue', 'Orange'), shadow=True,
        colors=('steelblue', 'orange'), explode=(0, 0.15),
        startangle=90, autopct='%1.1f%%')
plt.legend(fancybox=True, fontsize=13)
plt.axis('equal')
plt.title('Shadowed Pie Chart', fontsize=15)
plt.tight_layout()
plt.show()
```
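As a quick sanity check on what the autopct='%1.1f%%' format will display for the (10, 5) data in the pie chart above, each wedge label is simply that wedge's value over the total, formatted to one decimal place:

```python
# Wedge values from the pie chart above
values = (10, 5)
total = sum(values)

# Reproduce the autopct='%1.1f%%' labels by hand
labels_pct = ['%1.1f%%' % (100.0 * v / total) for v in values]
print(labels_pct)  # ['66.7%', '33.3%']
```

This is why the two slices read 66.7% and 33.3%: autopct receives the percentage of each wedge and applies the given printf-style format.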
https://towardsdatascience.com/business-intelligence-visualizations-with-python-part-2-92f8a8463026
['Julian Herrera']
2020-10-09 00:05:34.551000+00:00
['Data Analysis', 'Python', 'Data Science', 'Programming', 'Data Visualization']
pltfigurefigsize108 plthist2dx bins40 pltxlabelX value Centered 0 fontsize13 pltylabelY value Centered 5 fontsize13 cbar pltcolorbar cbaraxsetylabelNumber observation fontsize13 pltshow Sample plot — Image Author G Shadowed Pie chart Pie chart used display element data set proportion whole addition traditional pltpie command we’ll utilize shadowTrue boolean feature bring styling sliced pie chart Create figure plot chart pltfigurefigsize108 pltpie105labelsBlueOrangeshadowTruecolorssteelblue orange explode0015 startangle90 autopct11f pltlegendfancyboxTrue fontsize13 pltaxisequal plttitleShadowed Pie Chartfontsize15 plttightlayout pltshowTags Data Analysis Python Data Science Programming Data Visualization
4,563
Basics of Quantum Mechanics for Non-scientists
Classical / “Newtonian” Physics vs Quantum Physics

Classical Physics

You probably have some recollection of classical, or “Newtonian”, physics. It was outlined by Isaac Newton in his Principia, published in 1687. It tells us that an object, say a tennis ball, has a position and a velocity, and that the position changes over time. It shows mathematically that if no force acts on the object, it will continue to move in a straight line; such forces are, for example, gravity, wind, or the other person catching the ball.

Figure 1: Classical representation [1]

In classical physics, the state of an object is the combination of its position and velocity. If you know what forces are acting on it, you can determine its trajectory and predict where it goes next.

Quantum Physics

When you start studying the smallest particles, such as electrons and protons, things are a little more abstract. You can predict the position, velocity, and other properties like spin (more on spin later), but not with 100% accuracy; a prediction is not unquestionably the actual measurement. The state of an electron is rather a set of probabilities. Imagine an electron orbiting an atom. It creates a sort of cloud, denser in some parts and thinner in others. Where the cloud is denser, there is a higher probability of finding the electron there. The ‘cloud’ oscillates like a wave, and that is why an electron is said to have a wave function. It’s not that the electron is literally a wave; there is no amplitude like a sound wave’s, rather the amplitude is calculated. The wave here is used more as a metaphor.

Figure 2: Electron cloud. By author, inspired by [1]

Based on the density at each point of the ‘cloud’, a number is assigned. This number is the amplitude of the “wave”. The probability of the electron being at a specific position is the square of the amplitude’s magnitude.
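Concretely, the Born rule says that the probability of finding the electron at a position is the squared magnitude of the amplitude there, normalized over all positions. Here is a minimal sketch in plain Python; the amplitudes are made-up illustrative values, not computed from a real atom:

```python
# Born rule sketch: probability ∝ |amplitude|^2 (illustrative amplitudes).
# Quantum amplitudes are in general complex numbers.
amplitudes = [0.1 + 0.2j, 0.5 - 0.1j, 0.3 + 0.0j, -0.2 + 0.4j]

# Squared magnitudes of the amplitudes.
weights = [abs(a) ** 2 for a in amplitudes]

# Normalize so the probabilities over all positions sum to 1.
total = sum(weights)
probabilities = [w / total for w in weights]

for i, p in enumerate(probabilities):
    print(f"position {i}: probability {p:.3f}")
```

Note that the position with the largest amplitude magnitude (the second one here) ends up most likely, and the probabilities always sum to 1.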
Before its location is observed, the electron is said to be in a superposition of all possible outcomes. The wave function of an electron is the quantum equivalent of the state in classical physics (position and velocity). And just as there is an equation in “Newtonian” mechanics to calculate the motion of an object, there is also an equation that governs how a wave function evolves, the Schrödinger equation: iħ ∂ψ/∂t = Ĥψ. High energy parts evolve rapidly, low energy parts evolve slowly. [1]

Figure 3: Quantum representation [1]

Is it a particle? Is it a wave? No, it’s (super) an electron! The double-slit experiment

As explained above, an electron oscillates like a wave: it has a wave function. You can’t predict exactly where it is going to be, as you can with macroscopic objects; all you know is the probability of its location. However, when you decide to observe the electron, the “wave” disappears (collapses), and it looks more like a particle: you see a dot, a point in space. So it has a dual behaviour: when not observed it acts like a ‘wave’, and when observed it acts like a particle. The double-slit experiment illustrates exactly that. It was performed with electrons in the 1970s, although it had been discussed long before. It compares four different scenarios:

- Classical particles going through a single slit vs a double slit
- Waves, like water waves, going through a single slit vs a double slit
- Electrons going through a single slit vs a double slit
- Electrons going through a double slit, but with an ‘observer’ placed in the middle of the path to detect, and prove, that an electron went through the double slit. Just in case.

1. Classical particles going through single or double slits behave practically the same. On the screen on the other side, you will see marks close to the slits, perhaps with some variation as the objects bump into each other or against the sides of the slit.

2. Waves behave just like waves. When there is one slit, the marks are centred right behind the slit.
As it is a wave, the marks on the screen are bright spots where the wave has higher amplitude. When there are two slits, an interference pattern forms on the screen: waves oscillate up and down, and when two waves oscillate in opposite directions they cancel each other out, while waves oscillating in the same direction reinforce each other’s amplitude. The result on the screen is brighter spots in the centre, close to the slits, with alternating dark and bright bands fading out to both sides.

3. Electrons behave just like the waves in scenario 2: the interference pattern appears when the double slit is used. However, first: an electron is not really a wave; it doesn’t have an amplitude the way water waves do. The amplitude is calculated from the denser points of the electron cloud. Second: each electron leaves a mark on the screen just like a classical particle; the bright spots build up from individual hits, not from the height of a physical wave as with water waves.

4. And if scenario 3 wasn’t fun enough: when the detector/observer is placed in the middle, the electrons behave just like classical particles; the wave collapses. There is NO interference pattern at all on the screen on the other side.
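The interference described in scenarios 2 and 3 can be sketched numerically. This is a minimal, idealized model, not from the article; the wavelength, slit separation, and screen distance are made-up illustrative values. The key step is adding the two slits’ complex amplitudes before squaring the magnitude, which produces alternating bright and dark bands; classical particles would instead add the two single-slit intensities, giving a flat pattern with no dark bands.

```python
import cmath

# Idealized two-slit interference (illustrative parameters, arbitrary units).
wavelength = 1.0   # wavelength of the "wave"
d = 5.0            # slit separation
L = 100.0          # distance from the slits to the screen
k = 2 * cmath.pi / wavelength  # wavenumber

def intensity(y):
    """|amp1 + amp2|^2 at screen position y: amplitudes add, then square."""
    r1 = ((y - d / 2) ** 2 + L ** 2) ** 0.5  # path length from slit 1
    r2 = ((y + d / 2) ** 2 + L ** 2) ** 0.5  # path length from slit 2
    amp1 = cmath.exp(1j * k * r1)
    amp2 = cmath.exp(1j * k * r2)
    return abs(amp1 + amp2) ** 2

# Scan the screen: interference gives alternating bright (~4) and dark (~0) bands.
screen = [intensity(i * 0.5) for i in range(-40, 41)]
print(f"max intensity: {max(screen):.2f}, min intensity: {min(screen):.3f}")
# Classical particles would add intensities, not amplitudes: a flat 1 + 1 = 2
# everywhere, with no dark bands at all.
```

Placing the “observer” at a slit corresponds, in this toy model, to dropping one of the two amplitudes before squaring, which removes the oscillation, matching scenario 4.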
https://medium.com/predict/basics-of-quantum-mechanics-for-non-scientists-299e38d428bf
['Vinicius Monteiro']
2020-12-09 01:16:51.513000+00:00
['Quantum Mechanics', 'Science', 'Quantum Physics', 'Quantum Computer', 'Physics']
4,564
How to Stand Out When Asking for a Job
We work with a lot of people at the beginning of their careers at 1517. Many of those people are founders. Others go to work for the founders we know and our portfolio companies. Every now and then, we get questions about how to get a job working in tech investing and venture capital. Instead of just ignoring the person or telling them that we’re not currently hiring, we want to give constructive feedback on their job search. We’ve generally found that very few students know how to effectively ask for a job that other people want. This is a skill best learned through trial-and-error and candid feedback. Here’s what Zak, who wrote the response, said: — — — — — — … Cold inquiries are a great way to go about searching for a job when they’re done right. I spent years teaching people how to do this, so don’t take this the wrong way — it can take some time to learn. First, you don’t need to follow up on every medium immediately. Generally speaking, I encourage people to follow up via email or LinkedIn (if email is unavailable), 72 business hours later if you’ve received no reply. I cover in my email course what a lack of a reply on a first message can mean and how you can craft an appropriate follow up. I also encourage people to be cautious about texting people directly if they’ve never had previous contact with them. Generally speaking, texting is a good way of getting in touch with somebody if they include their mobile number in their email signatures (or openly say elsewhere that somebody can text them). But if they don’t do that, then it can cross some social norms that people can be iffy about. And if you’re going to hit up multiple team members, don’t use the same message for each one. That looks spammy. Tell people where you got their info, why you’re reaching out, give them reason or evidence to reply, and then make it ridiculously easy for them to reply. 
Second, and this goes more towards the content of a good outreach pitch for landing a job, you’ll want to craft your pitch so that it is 1) compelling to the recipient firm and 2) unique to that firm. If your pitch can be sent to 10 firms with you just changing the name of the firm every time, it’s going to look like spam and generally get a lower response rate. A better approach would be to craft a specific “why this firm” in your outreach. An even better approach would be to tell them exactly why them and what you could do for them. Put together a proposal and run them through how you can be helpful. I have an alum of my email course who did this quite well and landed a number of job interviews along the way. Telling somebody, “I want to work for you” is only as good as the reasons you give them. If you show them that you can identify value that they need created and can create it, you take a lot of that work off of their plate. Loom is a good tool for walking people through what you can create.
You can do this through research on Crunchbase. Find a few companies that are venture funded at the growth stage in the luxury sector, look at their investors, and find similar investors. Research all of those investors, put together value propositions, and cold email the partners and principals at those companies. Even better, find funds in this category that you know recently closed a new fund. That means that they have new management fees to hire folks. I hope that’s helpful for your job hunt and you can use it to land a position at the right kind of firm! Cheers, Zak
https://medium.com/1517/how-to-stand-out-when-asking-for-a-job-c54fbecefae2
[]
2020-04-08 14:41:46.744000+00:00
['Startup', 'Investing', 'Job Hunting', 'Jobs']
4,565
Industry, Technology, and Innovation Trends for The Post COVID-19 Era
Industry, Technology, and Innovation Trends for The Post COVID-19 Era

Recently I was an invited speaker at the IEEE Globecom 2020 Special Workshop on Communication and Networking Technologies for Responding to COVID-19. The speakers at this virtual workshop were all distinguished individuals, and the topics covered were broad and insightful, from contact tracing to smart devices, from detection & mitigation to data privacy, and from online lectures to AR/VR for smart health services. It reminded me of a keynote speech by Dr. Neeli Prasad, CTO of SmartAvatar B.V., at a virtual event, where she said “society rightfully recognized the great contributions from our first responders, doctors, nurses, supply chain, logistics, supermarket workers, etc. but forgot to recognize or acknowledge the information and communication engineers who made internet and communication possible, without them, there won’t be any online collaboration, online school, telemedicine, social media, streaming services, e-commerce, etc.” This special workshop reminded me of what drove me into engineering and entrepreneurship. My talk focused on the “Technology and Innovation Trends for the Post COVID-19 Era”. Below is the abstract of my talk:

The global economic downturn due to the COVID-19 pandemic acted as a catalyst, further amplifying the adoption of new technologies and innovations above and beyond the pace we got used to over the last two decades. A few things already seem very clear: platform firms like Amazon, Alibaba, Uber Eats, Zoom, etc. are dominating the markets even more, and companies will further accelerate their investment in conducting business remotely over the internet to be more resilient to potential future lockdowns. My talk discussed the industry, technology, and innovation trends for the post-COVID-19 era.

Industry Trends Post COVID-19 Technology & Innovation

The industry trends listed above were already in motion for the last few years.
However, COVID-19 will accelerate these transformations. COVID-19 has pushed governments, companies, and society over the technology tipping point and transformed these industry trends forever. This blog will address the role of artificial intelligence, robots, and digital transformation, and how these trends will impact industries such as healthcare, education, e-commerce, media & entertainment, connectivity, and Industry 4.0 AI decision making.

Post COVID-19 Technology & Innovation

Healthcare Post COVID-19 Technology & Innovation

US healthcare spending was roughly $3.6 trillion in 2018, the highest per capita in the world at $11,172 per person. Prior to the pandemic, 11% of the U.S.’s non-elderly population, roughly 30 million people, were uninsured or underinsured. It is estimated that due to COVID-19 shelter-in-place measures, which led to the economic lockdown, and the continuous lack of financial support from the government, an additional 8 million people fell into poverty.

Telehealth Surge Under COVID-19

COVID-19 has caused a massive acceleration in the use of telehealth. Consumer adoption has skyrocketed as consumers replace their canceled healthcare visits with telehealth. In 2019, only 11% of U.S. consumers used telehealth; now 46% of consumers are using telehealth services. Providers have rapidly scaled offerings and are seeing 50 to 175 times the number of patients via telehealth than they did before, according to a McKinsey survey. With the acceleration of consumer and provider adoption of telehealth, and the extension of telehealth beyond virtual urgent care, up to $250 billion of current US healthcare spending per year could be saved, which is roughly 20% of Medicare, Medicaid, and commercial insurer spend. This saving alone would allow insurers to expand healthcare coverage to uninsured and underinsured citizens.
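As a quick back-of-the-envelope check on those figures (a sketch of the implied arithmetic, not numbers from the article): if $250 billion is roughly 20% of combined Medicare, Medicaid, and commercial insurer spend, that implies a combined insurer spend of about $1.25 trillion per year, and the savings would be roughly 7% of the $3.6 trillion total.

```python
# Back-of-the-envelope check on the telehealth savings figures (illustrative).
total_us_healthcare = 3.6e12   # 2018 US healthcare spending, USD
potential_savings = 250e9      # estimated telehealth savings per year, USD
savings_share = 0.20           # stated share of insurer spend

# Implied combined Medicare / Medicaid / commercial insurer spend.
implied_insurer_spend = potential_savings / savings_share
print(f"implied insurer spend: ${implied_insurer_spend / 1e12:.2f} trillion")
print(f"savings as share of total spend: {potential_savings / total_us_healthcare:.1%}")
```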
Higher Education Post COVID-19 Technology & Innovation

COVID-19 changed the way of educating, and I noticed it first-hand with my daughter studying at the University of Amsterdam, Netherlands. Many schools and higher educational institutes were caught off guard by the first lockdown, coupled with stay-at-home or shelter-in-place orders from their state governors and city mayors during the spring of 2020. Schools and teachers had to reinvent themselves overnight and learn on the fly how to conduct virtual classes effectively and how to interact efficiently with students through chat groups, video conferencing, scheduled video meetings, voting, assignment distribution, document sharing, etc. It tested higher education institutions’ commitment to ensuring education for all their students, and their ability to solve problems remotely. Most students want to return to their onsite, in-person classes and socialize with their classmates and friends, but they also found it easier to communicate and interact with tutors and professors online. Research shows that average students retain 25% to 60% more material and require 40% to 60% less time when they learn online, compared to retaining only 8% to 10% in a classroom. In short, with online classes students can learn at their own pace, going back and forth as many times as they want, and skipping or accelerating through the course material as they please. Higher educational institutes have taken notice, and in the future expect them to provide a high-impact learning experience across a hybrid mode, a mix of onsite and online classes, placing educational quality above modality.

e-Commerce Post COVID-19 Technology & Innovation

COVID-19 changed the face of retail into a complete online Augmented Reality (AR) retail experience, with innovative ways to improve the shopping experience of customers, reduce the number of products customers return, and streamline the overall purchasing process.
Converse, the shoe brand, launched an AR app for iPhone called The Sampler that allows users to virtually try on shoes. Simply by pointing the camera towards their right foot, the user can see what the shoe would look like in real life. This also helps to streamline the purchasing process, as customers have the opportunity to buy a pair of shoes they like directly via the app. Ikea has integrated augmented reality into its app named Place. Shoppers can now use the camera of their smartphone to virtually place different home furnishings into their surroundings. The program allows users to interact with the projected images and envision how they would look in various spaces. This helps customers find the perfect piece of furniture without having to return items that they imagined would fit. Warby Parker has a new update out for its iPhone app that uses Apple’s Face ID and AR tech to let customers virtually try on glasses in the app before they buy them. Warby Parker’s virtual try-on feature relies on Apple’s ARKit and TrueDepth features, so it’s only available on the iPhone X, XR, and XS phones. Another example of smart use of AR technology is the DressingRoom app from Gap. Shoppers can provide the app with some basic information about their body. The program then creates a 3D model based on the user’s measurements. With this model in place, the user can virtually try on clothes to see how they look. This is just another way in which companies are shrinking the margin of error in online purchases. Augmented Reality based online shopping will enable a personalized experience with the ability to test and explore products in ways similar to an in-person shopping experience.

Media & Entertainment Post COVID-19 Technology & Innovation

Deepfake is synthetic media in which existing data, a voice, image, and/or video, is replaced with someone else’s likeness.
Deepfake techniques are also capable of generating realistic-looking images where even humans can’t tell whether they are real or not. Deepfake techniques are likewise used to generate synthetic data to balance algorithmically biased datasets for supervised training of machine learning & deep learning models, in order to improve overall model accuracy.

These People are NOT Real. These Images were Produced by StyleGAN

Cybercriminals are harnessing the power of this technology to reel in more victims. The thumbnail and heading make the victim really curious about the content of the video, so they click through it (“clickbait”). As soon as they navigate to the site, their computer is exposed to malware such as ransomware, keyloggers, or spyware. If they don’t have adequate cybersecurity in place, their computer is infected and they have to deal with the fallout. On December 25, 2020, a hilarious digitally altered version of Queen Elizabeth’s annual Christmas speech was broadcast on the UK’s Channel 4, as an alternative to the real address carried by the BBC and ITV.

Deepfake Queen: 2020 Alternative Christmas Message

The deepfake version of Queen Elizabeth II took several swipes at members of the Royal family, and the Queen even danced in a TikTok routine. All of it was designed to warn of the ease with which misinformation can spread in the digital age. Trusting and verifying what is genuine and what is not, in this age of misinformation and disinformation media, matters: fake media can be a serious threat to the democratic values we take for granted and to our way of life.

Connectivity Post COVID-19 Technology & Innovation

A push towards greater 5G investment and faster market adoption in developed economies will be mainly driven by the potential economic boom and the contribution to countries’ GDP expected from 5G connectivity. 5G is expected to create $13.1 trillion in global sales activity by 2035.
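5G capabilities are usually grouped into three service categories, eMBB, mMTC, and URLLC, described in the following paragraphs. As a rough illustrative summary, here are the headline targets commonly cited from the ITU’s IMT-2020 requirements; these figures are general industry reference points, not values from this article:

```python
# Commonly cited IMT-2020 (5G) headline targets per service category
# (illustrative summary only, not normative values).
service_categories = {
    "eMBB": {
        "focus": "high-throughput mobile broadband",
        "peak_data_rate_gbps": 20,          # downlink peak target
    },
    "URLLC": {
        "focus": "mission-critical, real-time control",
        "user_plane_latency_ms": 1,         # target air-interface latency
    },
    "mMTC": {
        "focus": "massive numbers of low-power IoT devices",
        "connection_density_per_km2": 1_000_000,
    },
}

for name, kpis in service_categories.items():
    print(f"{name}: {kpis['focus']}")
```

Each category trades off against the others: eMBB stresses throughput, URLLC stresses latency and reliability, and mMTC stresses device density and battery life.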
Enhanced Mobile Broadband (eMBB) will extend 5G coverage and capacity with licensed and unlicensed spectrum.

5G Use Cases (Source: Ericsson)

Massive Machine Type Communication (mMTC) will scale Internet of Things (IoT) applications and improve the battery life of IoT devices. Mission-critical applications enabled by Ultra Reliable Low Latency Communications (URLLC) will allow public safety, emergency response, and other smart industrial safety-critical use cases and services, for example autonomous vehicles, remote telesurgery, and wireless manufacturing control, to become commonplace. 5G and beyond-5G connectivity will not only create new jobs in every industry sector, it will also unleash new value streams that will help grow the global economy for everyone. Businesses across all industry sectors will benefit by leveraging the unique capabilities of 5G over 4G.

Artificial Intelligence (AI) Decision Making Post COVID-19 Technology & Innovation

Many companies have adopted a data-driven approach for operational decision making as part of Industry 4.0. A data-driven approach can improve decisions, but it requires the right processors, humans, to get the most from it. However, to extract the maximum value contained in the data, companies need to bring Artificial Intelligence (AI) into their workflows. Removing humans from workflows does not mean humans are obsolete; there are business decisions that depend on more than structured data, e.g. strategy, creativity, corporate culture, empathy, emotion, and other forms of non-digital communication. This information is inaccessible to AI yet extremely relevant to business decisions. For example, AI may determine that investment in digital marketing will result in the highest return on investment; however, a company may decide to slow down growth to improve product quality.
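The digital-marketing example above can be sketched as a human-in-the-loop decision flow. The option names and ROI numbers below are entirely hypothetical, just to illustrate the pattern of an AI recommendation combined with a human override:

```python
# Hypothetical sketch: an AI model ranks options by predicted ROI, but a human
# policy layer can override the recommendation using considerations the data lacks.
predicted_roi = {
    "digital marketing": 0.32,
    "product quality": 0.18,
    "new market entry": 0.11,
}

# AI recommendation: the option with the highest predicted return.
ai_choice = max(predicted_roi, key=predicted_roi.get)

# Human judgement layer: strategic priorities invisible to the model
# (e.g. leadership wants fewer product defects this year).
human_priority = "product quality"

# The human decision-maker overrides the AI when strategy demands it.
final_choice = human_priority if human_priority else ai_choice

print(f"AI recommends: {ai_choice}, final decision: {final_choice}")
```

The point of the sketch is that the model’s output is one input to the decision, not the decision itself, which is exactly the Industry 4.0-to-5.0 distinction the article draws.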
It is the age of Human-Machine Convergence. Industry 5.0 aims to support, not supersede, humans. COVID-19 proved the point that manufacturing cannot function on its own, without human involvement. Industry 5.0 will automate mundane tasks and relieve workers of physically demanding work so that they can focus on creative craftsmanship and concentrate on other tasks.
https://medium.com/datadriveninvestor/industry-technology-and-innovation-trends-for-the-post-covid-19-era-af4e8659b5d7
['Mahbubul Alam']
2020-12-28 12:35:45.998000+00:00
['Covid 19', 'Technology', 'Artificial Intelligence', 'Innovation', 'Pandemic']
4,566
Drowning in a Sea of Alerts
That copywriting client has sent a message — ping! That book editing client asked a question — ding! You’ve made a sale — pop! The washing machine finishes a cycle — bing bing! Is that someone at the door? — ring ring! Thankfully I don’t get new e-mail alerts because I shut that crap down years ago; see also my phone. Some things like the doorbell might be unavoidable…unless I rip the effing thing out of the wall, which is tempting sometimes. Every appliance, app, website, and marketplace wants to alert you to something. Sure, it sounds useful. I can have soaring productivity by plugging myself into the mainframe and becoming super aware of every disparate event, question, and occurrence across my working day. However, all these pings and dings have made me very aware that focus is finite. The human mind just doesn’t work like a computer does. Apps and appliances present their alert functions as non-intrusive. They claim not to commandeer your focus and attention, just a tiny bite of your limitless ability to be aware of many things at once. The theory is that all these numerous helpful assistants give you an ever-so-subtle and helpful poke now and again. What these alerts actually do is completely derail your focus in exchange for letting you know what you should be doing an hour from now. I’m pretty sure you knew that anyway. I’m also pretty sure that a physical list, which you can choose to look at when you need a prompt, is more useful and less intrusive. We all have a natural alert system. We have awareness of different things at different times. We tune in to what we want to concentrate on and prioritize. The idea that any of us can do that effectively when an external force is pinging away on our attention bongos is deeply flawed. Certainly, there’s a time and place for helping our natural alert system out. A few weeks ago I was in a pub that had a fire in the kitchen. Off goes the fire alarm, and out of the pub everyone traipses. That’s a useful alarm.
Even in this example, though, you can see that an alarm is meant to distract you completely and get you to change track. Even the words “alarm” and “alert” don’t have connotations of retaining your focus and cataloguing that you need to do something later. Even just the availability of alerts and updates is a problem; the only difference is that you do the damage to yourself. Whether it’s refreshing your stats on Medium or picking up your phone again, you aren’t actually alerting yourself to something that needs doing. All you’re doing is willingly bailing on whatever you’re meant to be doing. Being busy makes people feel productive. Being productive fires up all the parts of your brain that reward you. The problem is this part of your primitive wiring is fairly blind. On a primitive level it makes sense for us to be alert to everything around us. On a professional level this is how you waste your time. When you check multiple accounts, screens or messages you feel busy, because you made yourself busy. Busyness and productivity are not synonymous. When you kid yourself that you achieved something (even when you didn’t) you get a kick of good feelings. That false sense of achievement triggers your reward responses, which is very dangerous when you haven’t actually done anything. Phone, e-mail and other alert/message checking is addictive. You can see that in how other people use their phones and check e-mails even if you can’t see it in your own behaviour. Check phone. Glance back at work. Twitch. Check phone again to see if an e-mail came in the last 2 seconds. Even if you have a new message or email, is it productive to focus on it right now? If your whole life is about keeping abreast of new updates, when do you focus on your work? Probably after wasting most of the day and realizing you need to rush all of your actual work to get it done. Alerts are the equivalent of someone physically tapping you on the shoulder and asking for your attention every few seconds.
This doesn’t just distract you; it constantly undermines your own decision making. When you decide and plan when you’re going to do things you decide when you’re going to give your attention to them. You keep your focus and you stay in control of the direction of your day. Trying to work on the basis that the flow of your actions is going to be determined by multiple external events (bar an actual fire in the building) is obviously going to make it very hard to concentrate. Making a firm decision about when and how you check updates boosts both productivity and confidence. Take back control.
https://medium.com/bettertoday/drowning-in-a-sea-of-alerts-c2bf11324f9d
['Stef Hill']
2020-02-07 11:39:39.295000+00:00
['Procrastination', 'Productivity', 'Time Management', 'Work', 'Work Life Balance']
4,567
Submit Your Story to Transform the Pain
People need to hear your story. Share your experience, tell us what you are going through, how you are coping, what makes it hard, and what helps. It’s through sharing our stories that we connect, learn, and help each other. Sharing your thoughts and feelings related to loss can be a healing experience that can also help others relate and better understand the process of grieving. Click here to submit for the first time: https://transformthepain.typeform.com/to/QDLDn4 If you have a Medium account, we’ll add you as a writer when your story gets published. From then on, you will be able to submit your stories directly through the Medium interface — for those of you who are not familiar with the process, here is how that goes:
https://medium.com/transform-the-pain/submit-your-story-to-transform-the-pain-56bedbd0440
['Mateja Klaric']
2020-09-19 07:53:57.225000+00:00
['Transformation', 'Medium', 'Writing', 'Grief And Loss', 'Call For Submissions']
4,568
How to Write Better React Code With useMemo
When React hooks were introduced in React v16.8, developers were finally given the ability to manage state in functional components by using hooks like useState , useEffect , and others. In this article, we’ll be looking at the React hook useMemo and how we can use it to write faster React code. Photo by Filiberto Santillán on Unsplash To understand why useMemo even exists, we’ll first look at how rendering works.

Function Equality and Expensive Operations

useMemo at its core seeks to solve two problems: function equality and expensive operations. During the lifecycle of a component, React re-renders the component whenever an update is made. This means that React will rebuild all the functions and variables in the React component, potentially a very expensive operation for more complex React components. Objects in JavaScript are unique by default. For example, let’s take a look at this code: Here, we have two objects, x and y , that have the exact same structure. However, when compared with strict equality in JavaScript, they aren’t the same value! When React checks for changes in a component, due to this object equality issue, it may unnecessarily re-render the component tree due to perceived changes in objects, when in reality the objects have the exact same values. This is where a technique called memoization comes in.

What is Memoization?

Memoization is similar to caching an operation or value. For example, let’s say we have a function that computes 1+1 and returns 2 . If we memoize this function, the next time it is used to calculate 1+1 , it will remember that 1+1 is 2 without ever re-running the function! This can be incredibly powerful for speeding up complex operations.

useMemo

From the official React documentation, useMemo looks like this: const memoizedValue = React.useMemo(() => computeExpensiveValue(a, b), [a, b]); Note that we pass in a function, in this case computeExpensiveValue , and a dependency array [a, b] .
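The code snippet the author refers to with “let’s take a look at this code” appears to have been lost in extraction. A minimal reconstruction of the object-equality idea might look like the following (the field names are illustrative, not from the original):

```javascript
// Two objects with identical structure and values are still distinct
// references, so the strict-equality check React relies on reports false.
const x = { id: 1, label: "example" };
const y = { id: 1, label: "example" };

const referenceEqual = x === y;                                    // false
const structurallyEqual = JSON.stringify(x) === JSON.stringify(y); // true
```

This is exactly the mismatch memoization works around: React sees a "new" object on every render even though nothing meaningful has changed.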
The dependencies are similar to arguments for the function; they’re what useMemo watches to determine whether or not it should run. When there are no changes to a or b , useMemo won’t run and will instead return the stored result. This can be optimal if the wrapped function is incredibly expensive. Let’s take a look at a real-world example:

const complexList = React.useMemo(
  () => list.map(item => ({
    ...item,
    expensiveValueOne: expensiveFunction(props.first),
    expensiveValue2: anotherPriceyFunction(props.second)
  })),
  [list]
)

In this case, we’re using the useMemo hook to convert a list into a list of objects. On first render, this function will run, blocking the main thread. However, on every subsequent render, unless list changes, we can reuse the same value and not run the expensive functions again.

When to use useMemo

When doing any React optimization, ensure that you fully write and revise the code to see if you can optimize it. useMemo can actually hurt performance if used incorrectly. Profiling your React application can be a great way to ensure there’s a measurable impact from implementing useMemo .

Using the right hook for the job

useMemo isn’t the only React hook; there are also useCallback , useRef , and useEffect . The useCallback hook, which I wrote an article on, is very similar to useMemo but it returns a memoized function instead of a memoized value. If your dependency array is empty or contains values that change on every render, there’s no chance for useMemo to properly memoize values and there will be no performance gain. Don’t use useMemo to fire off any asynchronous work; use useEffect for that instead. useMemo should only be used with pure functions.

Conclusion

The useMemo hook can be incredibly powerful for improving your React application’s performance when used properly. By memoizing an expensive function, we can save the output value and make that function appear to run instantaneously.
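To make the dependency-array behavior above concrete, here is a simplified, framework-free sketch of how a memo helper can decide whether to recompute. This is an illustration of the caching idea only, not React’s actual implementation; it mimics React’s use of Object.is for comparing each dependency:

```javascript
// Returns a memo function that re-runs `compute` only when an entry in the
// dependency array changes (compared with Object.is, as React does).
function createMemo() {
  let lastDeps = null;
  let lastValue;
  return function memo(compute, deps) {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => !Object.is(d, lastDeps[i]));
    if (changed) {
      lastValue = compute(); // "expensive" work happens only here
      lastDeps = deps;
    }
    return lastValue;        // otherwise the cached result is reused
  };
}

let calls = 0;
const memo = createMemo();
const expensive = (a, b) => { calls += 1; return a + b; };

memo(() => expensive(1, 1), [1, 1]); // first call: computes, calls === 1
memo(() => expensive(1, 1), [1, 1]); // same deps: cached, calls still 1
memo(() => expensive(2, 1), [2, 1]); // deps changed: recomputes, calls === 2
```

Note how the second call skips the expensive function entirely; that skipped work is where the performance gain of useMemo comes from, and the bookkeeping around it is the overhead mentioned next.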
However, useMemo adds its own overhead, and should only be used when there’s a clear optimization benefit.

Keep in Touch

There’s a lot of content out there and I appreciate you reading mine. I’m an undergraduate student at UC Berkeley in the MET program and a young entrepreneur. I write about software development, startups, and failure (something I’m quite adept at). You can sign up for my newsletter here or check out what I’m working on at my website. Feel free to reach out and connect with me on Linkedin or Twitter, I love hearing from people who read my articles :)
https://medium.com/swlh/how-to-write-better-react-code-with-usememo-cbc1cdf0d384
['Caelin Sutch']
2020-12-23 22:47:46.189000+00:00
['React', 'Programming', 'Software Developmen', 'Reactjs', 'Web Development']
4,569
My High School Sweetheart Was A Sick-Hearted Villain
I believe two kinds of people exist in the world — people who had a great time in high school and people who had some traumatic experiences during high school. I belong to the latter. All the hormones at that age make it unlikely to experience anything less than an emotional roller coaster. Life before high school seems like butterflies and rainbows. Coolness and striving for acceptance by peers are the biggest priorities in the survival guide to being a teenager. It is the period of many firsts — crushes, relationships, dates, friendships, fights, failures, victories, and for the unfortunate, some abuse. What happens in high school stays in high school. NOT! Every experience at that age is vital and stays with you for life. Good or bad. Good Girl Gone Rogue I was a wallflower. Not too popular, not active in sports or any other activities, not a part of any club either. I was just an average, nerdy, obedient kid who did not have big dreams. It was in grade 9 that I was first asked out by a senior and did not know how to process it. I also did not know if I should talk about it to my parents because having a boyfriend was a loud NO for my strict brown parents. Breaking their rules or distractions from academics was terrifying for me. They still believed in corporal punishment and would get creative with whatever was at arm’s reach — ruler, slippers, hairbrush, ladle. Afraid of beatings, I stayed away from boys and locked away any feelings or crushes. A good set of friends made me feel content. A year of my average life went by, and I was now in grade 10. “This is the most important academic year of your life,” they said. If only I got a dollar for every time I heard this in my life. Academics got more challenging, and my schedule got tighter. In the middle of the year, my friends started boycotting me because they heard rumors that I got around with countless boys. “But I don’t even know all the boys from this rumor,” I scoffed in disappointment.
I thought that we were going to be best friends forever. There was nobody I could talk to or cry to about this — definitely not my parents. I lost the one thing I had going on for me. Going to school felt like a prison sentence, and my loneliness made me realize they weren’t good friends. Not so scared of suffering at home anymore, I started bunking classes, failing tests, and talking back at home — a little rebel without a cause. I wasn’t going to be miss goody-two-shoes anymore, breaking bad and doing whatever felt wrong, but I still stayed far off from my male contemporaries. For Better and For Worse Two Important Events that Changed my Life Siri, a classmate who I barely spoke to in one and a half years, moved to my neighborhood. We started hanging out to play badminton and eventually started copying each other’s homework and talking about school and boys. Before we knew it, she became an essential part of my life. As different as the north and south poles, yet so similar. What brought us close was our mutual lack of friends. I thought I was an introvert until I met her. She was a bigger wallflower. We continued being shy and awkward together. It was also around this time that our teacher Miss Starlet announced that Jayden, a boy from my class, had a congenitally defective heart and underwent surgery for it. “Treat him well, and don’t bully him,” she said. Jayden was one of the popular kids at school. He had the bad boy vibes and was somewhat of a bully himself, which, for some reason, attracts girls. In retrospect, he was just a noticeably short boy, awfully pretentious and mean. However, I was not immune to it back then. My feelings for him grew stronger when he asked me out one day. Not accustomed to getting male attention, I immediately fell for him, and we started dating in secret.
In secret because, firstly, I did not want my parents to find out, and secondly, he did not want anybody from school to know (probably a red flag that I was too blind to notice then). The only person who knew about this relationship besides the two of us was Siri. We never went on dates or did anything together in public. He devised a plan to spend time together after school hours when nobody could see us. I would tell my parents that I was with the tutor after school and stayed back, and Jayden would ask me to meet him in a classroom where all he wanted to do was make out. He taught me how to kiss by shoving his tongue down my throat. Inexperienced in such pleasures and sensations, I agreed to this daily make-out routine. However, we never spoke about anything. Every day, I would go back to Siri to give her the details — deets, as she called them. One day, while winding up our little session, Jayden said to me, with a devilish smile on his face, “My parents won’t be home tomorrow. Wanna come?” How could I say no to those deep brown eyes? Overwhelmed with joy and anxiety, I ran to Siri to give her the news. She immediately gave me a piece of her mind for being so vulnerable to poor treatment. Yet, she agreed to go with me. Black-Letter Day I woke up with mixed feelings but mostly frightened. That day in school went by in a haze. Despite me feeling the heebie-jeebies, Jayden refused to acknowledge my existence as usual. My low self-worth made me ignorant of such suspicious behavior. After school, Siri and I anxiously walked over to Jayden’s house. Both his parents, doctors, were always out on duty, and his building was infamous because of his gang. I rang the doorbell with my heart in my mouth, standing like nuns waiting to enter hell. The door swung open, and there he was. An instant sense of ease took over as he invited us inside. It only lasted for a couple of minutes. All the guys from school that our parents warned us about were there.
We were like Red Riding Hood in the Big Bad Wolf’s den. Everyone stopped playing PlayStation, or eating pizza, to catcall and tease Jayden. My anxiety doubled, making me super uncomfortable. One glance at Siri and I realized she was too. Jayden took us to the couch and gave us sodas. The introvert meters in us were probably erratic. “Would you mind if I steal her for a while?” he asked Siri while holding my hand. She hesitantly shook her head. I checked with her again. “Siri, are you sure you’ll be OK?” “Yes. I’ll be fine. You be careful.” Reluctantly, I left her on the couch, unguarded, while Jayden took me into his bedroom. He shut the door, held me by my waist, and began French kissing me. Only this time, I wasn’t feeling it. I tried to push him away by saying that I should check up on Siri. He reassured me that everything was fine and gently took me to his bed. He started unbuttoning my blouse, pushed aside my bra strap, and cupped my breasts. All sorts of alarms were going off in my head. Afraid that he might realize I’m not cool enough for him, I pretended to remain calm. Although, my limbs felt numb like somebody had tied me down. Is this supposed to feel sexy? I was shaking. Finally, I felt relaxed when he stopped fondling my breasts. Then promptly, he started pulling up my skirt and running his hands on my inner thighs. THAT’S IT! I quickly gathered all my strength, pushed him away as far as possible, and adjusted my bra and blouse as I ran to Siri. Poor Siri, she looked as traumatized as me in between those hungry jackals. I could tell she was also glad to leave. Series of Escalating Misery The next day in school was excruciating on so many levels because neither did I tell anyone what happened there, not even Siri, nor could I stop thinking about it. Despite feeling violated, I dreaded to end things with Jayden. That day, instead of meeting with him after school, I went home. Little did I know that another catastrophe was waiting to happen.
My mom was patiently waiting for me with a stick and welcomed me home with an ambush. Jayden’s neighbor, a stay-at-home mom (let’s call her Karen, for obvious reasons), saw us run out of his house with a partially buttoned blouse and babbled everything to my mother. Side note: Let me tell you something about jobless, middle-aged women who like putting their noses where they don’t belong. They have the surveillance capacities of the CIA and all the time in the world on their hands, which makes them the most dangerous undercover agents. Misconduct and brown parents are the perfect recipe for an enormous fuss. Even though my father whipped me with his belt as punishment, what hurt me more was that they forbade me from meeting Siri until final exams because they thought she was a bad influence. We still found ways to hang out but swore to never talk about that day ever again. The following days in school were worse. Guess who wasn’t ashamed to address me in public anymore. Yes, the same sick rascal who didn’t even have a fully functional heart to begin with: Jayden! Slut-Shaming He started openly slut-shaming me, and I found out that he approached me in the first place because he heard the same rumors as my first group of friends. Clearly, he only showed interest in me because he thought I was an easy target, and he was not wrong. My gullibility led me to an incident that deeply affected my life. School rumors spread like wildfire, and most often, they come around to you after each person has added their zest to the story. The whispers about me were — “She charmed her way into his bedroom, seduced him to have sex, got pregnant, wrongfully accused him of rape, and got an abortion.” WOW! What soap opera were they writing for? Am I right? Like a game of Chinese whispers, there were many versions of the gossip, none in my favor. I was bullied by some, patronized by others, but branded by all as the school floozy.
Siri was my solitary cheerleader through it all and shielded me from the skepticism.

My Rainbow After the Storm

Twelve years have passed since this ordeal, and though I have gathered a lot of happy memories along the road, the wound still feels fresh. I wish I could tell my parents about the day that still haunts me, but I decided I shouldn’t remind them of the shame I caused them. Things could’ve gotten far worse that day, and that’s what frightens me the most. Even though I wasn’t harmed physically, high school heavily damaged me on an emotional level. It was only recently that Siri and I finally opened up about that day. We cried a little because the trauma sneaked up on us, but laughed more at the blown-up “scandal” and how stupid we were back then. High school was, without a doubt, disastrous for me, but I gained the one thing I will never let go of: Siri. She is still my most reliable friend, confidant, and trusted pal to this day, and will be forever. And Jayden? He’s still a short, arrogant, snobby teenager stuck in an adult’s body, who hasn’t had a life after high school.
https://medium.com/survivors/my-high-school-sweetheart-was-a-sick-hearted-villain-9883c0f1b35f
['Alisha Baxter']
2020-09-10 07:37:34.585000+00:00
['Life Lessons', 'Life', 'Sexual Assault', 'Relationships', 'Mental Health']
Dan Rojas’ Author Bio
Dan Rojas’ Author Bio

My path to redemption as a writer

They called him el chiquito que amaba el mundo: the little boy who loved the world. The native Panamanians of Escobal and Cuipo saw not the wretched direction Dan’s life would take, but only the open-heartedness of a little boy who would one day conquer his depravity. Dan Rojas’ childhood was unstable: he was admitted to seven mental hospitals before puberty. This is largely due to the pharmaceutical industry’s diagnosing fetish, fueled by its profit margins. By the age of nine, Dan had been diagnosed with oppositional defiant disorder (ODD), bipolar disorder, and depression. These were all incorrect diagnoses for disorders Dan did not have, and he was heavily medicated for each. His parents attempted to intervene, but social services’ implicit threats to take not just him but his three siblings as well barred them from taking action. For the majority of his childhood, Dan was forced to take pills at dosages equivalent to a lobotomy. Dan’s resentment of the system responsible for his chemically imprisoned upbringing is justified. In adolescence, Dan was given a corrected diagnosis of attention deficit disorder (ADD). This enabled him to break free from the false labels nailed to him by the penny-per-prescription model of current psychiatric practice, in which a dog can be prescribed Xanax. For the first time in a long time, Dan could feel his soul breathe. He was living again, but not without a price: the demons of his childhood came to collect. With crippled empathy and a growing inferiority complex, juvenile delinquency was the perfect avenue for his vindictive outrage. Ignorantly, Dan took to drugs and drinking by 16 to soothe his crumbling frame of mind. The first pattern of alcoholism bloomed. Dan saw a bleak future, a wasted life he couldn’t turn from, and pursued the Army in hopes of escape.
To enlist, the Army entry standards required Dan to be off any ADD medication for two years before enlistment. Without the help of his ADD medication, and with his prideful refusal to utilize his individual education plan (IEP), Dan’s grades suffered during his junior and senior years. Despite the threat of not graduating, he refused to “give in” to the system and did not apply himself. To him, high school was a direct extension of his childhood’s chemical prison, and he foolishly rejected everything it had to offer. Although barely, Dan managed to graduate. To him this was a success — he beat the odds and, as planned, he enlisted. Dan opted for an airborne infantry contract. A few months before the ship-date, Dan blacked out at a party and awoke the morning after with a concussion, shattered bones, a complete fracture to his right mandible, and other minor traumas. His airborne infantry contract expired in the six months he took to recover after the facial reconstruction surgery. Dan signed a new contract with the Army as a healthcare specialist. Dan graduated basic training and advanced individual training (AIT) with distinction, but the remainder of his short military career was served to the distaste of both him and his superiors. His arrogance, drinking, and characterless dishonesty are what define his military “service.” He was discharged for failure to rehabilitate, and all too late, Dan realized he was an alcoholic. In the following three years, Dan struggled with sobriety as he attempted to form a new life back home with his mother and younger brother. Although relapsing many times, Dan strove for growth. The path was barbed and riddled with missteps; he hurt and betrayed many during this period of life. But hope was not lost. The turn of Dan’s life for the better began with a book, The Ego and the Id. Dan was indignant that a book, written nearly a century prior, understood him better than the lot of his childhood psychiatrists, psychologists, and social workers.
But Dan felt the social injustices of his life were just side effects of something deeper — but of what? In his search to answer this question, Dan became convinced of the United States education system’s corruption and its push to create an uneducated, impoverished, slave-wage working class. Disillusioned, Dan saw America critically for the first time. He needed more knowledge and dove headfirst into Freud and his contemporaries. With a foundation in theoretical psychodynamics, Dan preferred the Neo-Freudian humanist outlook and continued exploring other theoretical fields for truth and clarity. Dan’s research opened his mind and, slowly, his heart followed. From this pursuit, Dan read three works that radically changed his life: Erich Fromm’s The Art of Loving, Paul Tillich’s Dynamics of Faith, and Martin Buber’s I and Thou (Kaufmann translation). These works helped Dan look at his past, present, and future in a deeply critical manner and helped him concretize his first major moral summit since his pitfall with alcoholism. It is because of these works that Dan’s faith in humanity and in himself was restored. At long last, hard-fought and hard-won, Dan had reclaimed his will to meaning. Dan is a survivor of a morally fraudulent system and a survivor of one overdose and five suicide attempts. Of these life-threatening events, two would have been fatal if not for the rapid interventions of his sister. Yet, for all that has happened, Dan was, and is, more than a helpless victim. He could have made the best of what he had, but Dan chose bitterness, anger, and blame over love — video meliora proboque deteriora sequor: I see and approve of the better but choose the worse. Without morals, ethics, or principles, Dan Rojas committed acts of cruelty and hatred. He introduced drugs to people that helped ruin their lives. He took love and used it against those whom he loved. Dan betrayed best friends and sold out family. Dan was a misogynist and a xenophobe.
Dan judged people for immutable characteristics and was cruel to them. He embodied all these things before his 25th birthday and will remain, for the rest of his life, a battling alcoholic. This is Dan’s greatest shame: this vile history is him at his worst, and he owns it. Dan is confident in his re-humanization, and he shares these low truths so that others, who may not know their way through the fog, can see a beacon home: the home of accepting the wretched imminence of one’s past and stepping out from the shadows back into the light. This is his confession, his apology, and his penance. Dan writes to show change is possible, even for monsters, and redemption is how the ignoble nobly live.
https://medium.com/from-the-library/dan-rojas-author-bio-aa8c5fec59c5
['Dan Rojas']
2020-01-02 13:59:13.988000+00:00
['Addiction Recovery', 'Redemption', 'Mental Health', 'Ftl Bio', 'Struggle']
This Illiterate Went From a Starving Shepherd to a Man of $3 Billion
Entering the business world Chaabi thereafter roamed the country for a few years doing menial work before he finally settled in Kenitra as a blue-collar worker. While working in masonry, Miloud developed a keen interest in real estate. So, he decided to start his own construction business at the age of 18. The start-up consisted of two workers and wasn’t generating big profits in the beginning. But it was a good occasion for our entrepreneur to develop shrewd business acumen and set aside some money. Sixteen years elapsed before it was time to expand the business and explore new market opportunities. And the next station was the porcelain market. Miloud therefore founded the porcelain business “SUPER CERAME” and kept increasing his assets even beyond Moroccan soil. He said that before he became a businessman, trading and business were restricted to a few Jewish and French families, along with some famous Moroccan families, back when Morocco was colonized. Hence, setting foot in the trading industry was a long, arduous journey.
https://medium.com/datadriveninvestor/this-illiterate-went-from-a-starving-shepherd-to-a-man-of-3-billion-5b22058c37f7
['Mohammed Ayar']
2020-12-29 17:38:54.421000+00:00
['Investing', 'Money', 'Biography', 'Poverty', 'Entrepreneurship']
My 2 cents on Paris ChangeNOW 2020 summit
Impact Initiatives are “all over the place”

Pampers is collecting diapers in Amsterdam for recycling (diapers rank in the top 10 sources of domestic waste, so it is a sizeable topic indeed). It is worth noting that these bins accept all diaper brands, not just Pampers.

Diapers recycling bin in Amsterdam

Tokyo Olympic medals are made out of garbage (actual gold, silver, and bronze recovered from old mobile phones). Story here. Incredibly, Japan managed to extract 32kg of gold, 3,500kg of silver, and 2,200kg of bronze from used electronics.

Houses are being made of recycled PET “plug and play” structures: check out the work of businessman and philanthropist Ustinov to recycle PET into “ready-to-assemble” housing structures built to the highest industry standards.

3D objects are printed on the spot out of a seaweed bath: courtesy of the southern city of Arles, this process can live-print anything from beautiful decoration items to functional fabrics.

International Protocols are starting to emerge

Impact companies need scale to actually have an impact. Protocols enable scale. Here are 2 convincing signs in the market:

Loop: retailers and brands coming together to organize a large-enough platform of reusable containers. Even though some claim this is more marketing and greenwashing than actually viable logistics, this project has the merit of pushing the experiment much further than ever before, and of taking a good first step toward the scale effect required to make these products economically viable. A first batch of Loop products is getting off the ground thanks to some of the world’s largest brand operators (Unilever, P&G…) and retailers (Tesco, Carrefour…).

B-Corporation certification is an independent, standardized way to assess social, environmental, and public impact. Notably endorsed by the United Nations, the certification process is thorough and specific to each area of activity of a company.
As an example, Danone has been able to B-certify 17 BUs of the Group (or 30% of the Group’s revenue) over the past years, one at a time… Last but not least on protocols, this punchline by Andrew Morlet, CEO of the Ellen MacArthur Foundation:

An economic model is an absolute MUST: no impact company will emerge at scale otherwise

As Igor Ustinov put it, there are 3 steps to scale an impact company:
* Start small / find your audience
* Develop a robust enough model
* Continuously adapt as you grow

Contrary to some beliefs, Emerging Countries are at the frontline of the fight against plastic

The colorful sculptures below, though adorable, are sadly made of flip-flops washed up on Indonesian beaches — not thrown away or abandoned there, but literally carried in by the sea… The end of the international garbage trade, and the keynote from Malaysian princess Zatasha against plastic pollution and food waste, were among many clues that emerging countries are not waiting long to claim their turn.

Food and agriculture account for 8 of the 20 most important levers to fight global warming

Below is a table by drawdown.org (an internationally crowdsourced impact website — which I didn’t know until then) of the top CO2-reduction initiatives by impact. In particular, see how “reduced food waste” and “moving to a plant-rich diet” are taking the front-row seats. Check out drawdown.org for all 80 initiatives.

Paris Olympics 2024: Paris won the gig thanks to a promise on sustainability

Interesting talk by Tony Estanguet, multiple Olympic gold medalist in canoe slalom and now lead organizer of the Paris 2024 Olympics, on how these Olympics will be the 38th in history, but really the first to be carbon neutral — aiming for a 50% reduction of CO2 impact versus previous events.
Among the measures enabling this objective: all sites will be accessible by public transportation, and, perhaps even more notably, the majority of sports events will take place at Versailles, the Grand Palais, the Eiffel Tower… thus avoiding new site construction — Cool!
https://medium.com/tech-away/my-2-cents-on-paris-changenow-2020-summit-ab87416ca2f8
['Bruno Jean']
2020-02-17 09:23:40.207000+00:00
['Change Now', 'Paris', 'Impact', 'Sustainability', 'Impact Investing']
A Guide to GitHub for Non-Developers
GitHub

Let’s start with the big guns: GitHub. GitHub is where we keep all of our code; it’s Dropbox for developers. Code is split into repositories (or repos), which are akin to project folders on your computer. Most of the time a distinct app or service has its own repo, e.g. the next-article repo for the article page app. Things a developer might say:

Something’s broken but I haven’t figured out which repo it’s in 🧐

If you look at a repo in GitHub, the code you will see is called the main branch. A branch is a bit like a sub-folder, and the main branch contains the code that exists in production, i.e. if you look at the main branch on the next-article repo you will see the (nicely formatted version of the) code that gets run when you load an article page on FT.com.

Working locally

A repo will usually have other branches too; these are copies of the main branch which have been edited in some way, usually because a developer is in the process of adding, fixing, or otherwise changing something. When a developer wants to make any changes to the production code, they do so on their own laptop, on a copy of the repo they have cloned there. This is called working locally. They do this work on a branch so that their changes aren’t reflected in the production code until they’re happy with them. Multiple times a day a developer will commit and push the changes they’ve made locally to GitHub. This saves the changes to GitHub, and these are the branches you’d see if you looked at a GitHub repo.

Which branch is that on? 🤔

Whyyyyy can’t I get this running locally?!!! 😩

Let me just commit these changes before I go 🍺

Merging

When a developer is happy with their code they will create a pull request (PR) asking for it to be reviewed. In a pull request a developer explains the changes they’ve made in the code. Other team members review the changes and approve the PR or leave comments asking for changes.
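For the curious, the "working locally" routine above boils down to a handful of git commands. This is a minimal sketch, not the FT's actual setup: the repo is created locally as a stand-in for a real clone, and the branch and commit names are made up for illustration.

```shell
# Stand-in for cloning a repo from GitHub (e.g. `git clone <repo-url>`):
# we create a throwaway repo locally so these commands run anywhere.
git init demo-repo
cd demo-repo
git config user.email "demo@example.com" && git config user.name "Demo"

# One commit standing in for the existing production code on main
git commit --allow-empty -m "existing production code"
git branch -M main

# Do the work on a branch so changes stay off main until reviewed
git checkout -b fix-article-header

# ...edit files, then commit ("save") the changes...
echo "header fix" > header.txt
git add header.txt
git commit -m "Fix header spacing on article page"

# Pushing publishes the branch to GitHub; it needs a real remote,
# so it is left commented out here:
# git push -u origin fix-article-header

git branch --show-current   # the work sits on fix-article-header, not main
```

Note that main still has only its original commit: the fix exists only on the branch until the pull request is approved and merged.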
Creating and reviewing pull requests is the part of this process where there’s a dependency on other people, and it can be time-consuming, although it’s very worthwhile. Good pull requests make it as easy as possible for people looking at the code in the future to understand what decisions were made to get it to that point. Once the PR has been approved it can be merged into the main branch.

My PR is almost ready to go 🙌 (i.e. almost ready to be reviewed)

Pleeeeeeeeeease can someone review my PR? 🙏

Deploying

Once a PR has been merged, the code from the developer’s branch will be contained in the main branch, but it won’t yet show up on the website. For that to happen it needs to be deployed. Deploying means getting the code from where it’s stored, in GitHub, onto the servers that host the site, in our case on a platform called Heroku. Because of some wondrous thinking by the FT developers of yore about how to make deployment as easy as possible, the deployment process starts automatically when a PR is merged in GitHub. We use another tool called CircleCI to manage our deployments. CircleCI takes the code from GitHub and builds and deploys it. In building the app, CircleCI takes the code that’s in the repo and does a load of stuff to it that makes it ready to run in production. (Like all development, this process is like fractals — you can get more and more in-depth about what it entails until you’re just dealing with the zeros and ones, but for the purposes of this blog ‘stuff’ will do.) CircleCI then runs a final set of tests on the code and deploys it by saving it to the relevant app server on Heroku. Once the code has successfully been deployed it will be visible in production within minutes.
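To make "merged into the main branch" concrete, here is a local sketch of what the merge itself does. Everything is hypothetical: the repo is a throwaway, the branch name is invented, and the automated build-and-deploy steps that CircleCI would run afterwards are shown only as illustrative comments, since the real pipeline config isn't part of this post.

```shell
# Throwaway repo with a reviewed branch ready to merge
git init merge-demo && cd merge-demo
git config user.email "demo@example.com" && git config user.name "Demo"
git commit --allow-empty -m "existing production code"
git branch -M main
git checkout -b fix-article-header
echo "header fix" > header.txt
git add header.txt && git commit -m "Fix header spacing"

# "Merging the PR": after this, main contains the branch's changes
git checkout main
git merge --no-ff fix-article-header -m "Merge branch fix-article-header"

# On GitHub, that merge would automatically kick off CI, which roughly:
#   1. builds the app        (e.g. npm ci && npm run build)
#   2. runs a final test set (e.g. npm test)
#   3. deploys to Heroku     (e.g. git push https://git.heroku.com/<app>.git main)
```

After the merge, the file added on the branch is present on main, which is exactly the state the deployment step then picks up.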
https://medium.com/ft-product-technology/a-guide-to-github-and-deployment-for-non-developers-7811dcf508bb
['Jennifer Johnson']
2020-10-12 12:21:41.491000+00:00
['Github', 'Deployment', 'Development', 'Source Control']
4,574
Web Scraping with Python and Object-Oriented Programming
Web Scraping with Python and Object-Oriented Programming

NafadAlJawad · Oct 17 · 4 min read

Web Scraping, also termed Web data extraction, Web harvesting, or Screen Scraping, is a vital mechanism in today’s world. Through Web Scraping you can extract useful public information from your targeted websites and put it together for data analysis, product comparison, statistical reports, and many more. Python is undoubtedly the most popular language for web scraping, and today I am going to give an example of extracting data from IMDB’s website. We are going to get the all-time top 250 movie rankings and display any 10 random movies to the user. So, let's dive in without spending any more time! At the end, I am going to elaborate on the reason for the chosen coding structure.

I am assuming you have a basic understanding of Python and HTML. We need the package BeautifulSoup (or bs4) in Python for this tutorial. Firstly, in the terminal, write the following command and press enter to install the BeautifulSoup package:

pip install bs4

Then import the following modules at the top of the file:

from bs4 import BeautifulSoup
import requests
import re
import random

Now we are going to write a class named ExtractMovies; you can, of course, choose any other name if you want to!

# Python class for declaring movie attributes.
class ExtractMovies(object):
    def __init__(self, title, year, star, ratings, position=None):
        self.position = position
        self.title = title
        self.year = year
        self.star = star
        self.ratings = ratings

# Function to trim a rating string to two decimal places.
def first2(s):
    return s[:4]

Here, we are declaring the attributes related to a single movie and storing them in an object. Later on, we are going to populate each movie object with its unique characteristics or attributes. We are going to see the use of the function first2 later on, so chill for now!
url = 'https://www.imdb.com/chart/top/'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

movies = soup.select('td.titleColumn')
links = [a.attrs.get('href') for a in soup.select('td.titleColumn a')]
crew = [a.attrs.get('title') for a in soup.select('td.titleColumn a')]
ratings = [b.attrs.get('data-value') for b in soup.select('td.posterColumn span[name=ir]')]
years = soup.select('span.secondaryInfo')

# Temporary array to store class instances.
_temp_ = []

In the above part:

First line: we declare url as a variable; this is the URL of the IMDB top movies chart: https://www.imdb.com/chart/top/
Second line: we declare a variable to send an HTTP request to the given url and receive the HTML response in text format.
Third line: BeautifulSoup-ing the elements! This means we will be selecting and processing the text with this variable.
Fourth line onwards: with soup.select we select elements of the HTML object at the requested url.

One more thing: are you wondering what “td.titleColumn”, “href”, “title”, or “td.posterColumn” are doing? These are the descriptors of the elements of the HTML page we are working with. You can follow the url and inspect the page in developer mode to understand more. You can also follow this link to view the detailed documentation on different ways of using BeautifulSoup.

for index in range(0, len(movies)):
    movie_string = movies[index].get_text()
    movie = (' '.join(movie_string.split()).replace('.', ''))
    movie_title = movie[len(str(index))+1:-7]
    year = years[index].get_text()
    position = index + 1
    movie_instances = ExtractMovies(
        movie_title, year, crew[index], first2(ratings[index])
    )
    _temp_.append(movie_instances)

Here, we are looping through the range of the movies object that we got earlier and storing each piece of data in its required field; then we assign those fields to a class instance and append it to the _temp_ array that we created earlier.
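The string manipulation in the loop above can be tried standalone. The sketch below shows how the rank prefix and year suffix are stripped from one chart row, and what first2 does to a rating string; the sample movie_string is invented for illustration and no network access is needed.

```python
# Standalone demo of the title-extraction logic used in the loop above.
# The sample movie_string mimics the text inside one td.titleColumn cell;
# it is an invented example, not fetched from IMDB.

def first2(s):
    # Keep only the first four characters of a rating string, e.g. "9.16".
    return s[:4]

index = 0
movie_string = "1.\n      The Shawshank Redemption\n(1994)"

# Collapse all whitespace and drop the dot after the rank number.
movie = ' '.join(movie_string.split()).replace('.', '')

# Strip the leading "<rank> " prefix and the trailing " (YYYY)" suffix.
movie_title = movie[len(str(index)) + 1:-7]

print(movie_title)          # The Shawshank Redemption
print(first2("9.161456"))   # 9.16
```

Note that replace('.', '') also removes dots inside titles (e.g. “L.A. Confidential”), a quirk of this approach worth keeping in mind.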
And now the first2 function: we are using it to trim the ratings to two decimal places. Ratings here is a string object; you may use any other algorithm to convert it to a float if required.

random.shuffle(_temp_)

i = 1
for obj in _temp_:
    print(i, "|", obj.title, ' ', obj.year, ' ', obj.star, ' ', obj.ratings, ' ')
    i = i + 1
    if i == 11:
        break

In this last part, we first shuffle the array to get random movies, and then print the output in a decorated format. We keep checking the iteration counter; once 10 movies have been printed, we break out of the for loop. The reason for choosing this class instance method is that it gives you more freedom: you can easily call this class anytime in your code if you want to extend your code further! You could also do this by putting the movies in a Dictionary. I am going to explain the differences between Dictionary, List, and Class objects in one of my future blogs. Oh! I forgot to mention, this is my first ever blog online!😊 I am so excited to write this article and publish it here on Medium! I appreciate your reviews and feedback, or anything you recommend me to write on! 🤞🤞 The entire code of this tutorial is as follows: https://gist.github.com/jawad-nafad/065ea5795139c6c7942cc8f116cd2e11
https://medium.com/analytics-vidhya/web-scraping-with-python-and-object-oriented-programming-14638a231f14
[]
2020-10-20 12:45:38.147000+00:00
['Python', 'Data Extraction', 'Object Oriented', 'Web Scraping', 'Tutorial']
4,575
Kubernetes Security With Falco
Kubernetes Security With Falco

Comprehensive runtime security for your containers with a hands-on demo

Photo by Dominik Jirovský on Unsplash.

Falco is an open source runtime security tool that can help you secure a variety of environments. Sysdig created it, and it has been a CNCF project since 2018. Falco evaluates real-time Linux kernel logs, container logs, Kubernetes logs, etc. against a powerful rules engine to alert users of malicious behaviour. It is particularly useful for container security — especially if you are using Kubernetes to run your containers — and it is now the de facto Kubernetes threat detection engine. It ingests Kubernetes API audit logs for runtime threat detection and to understand application behaviour. It also helps teams understand who did what in the cluster, as it can integrate with webhooks to raise alerts in a ticketing system or a collaboration tool like Slack.

Falco works by using detection rules that define unexpected behaviour. Though it comes with its own useful default rules, you can extend them with custom rules to harden your cluster further. Some of the things Falco can detect are the following:

Opening of a shell session from a container
Host path volume mount
Reading secret and sensitive files such as /etc/shadow
A new package installation in a running container
A new process spawned from a container that is not a part of CMD
Opening of a new port or unexpected network connection
Creating a privileged container
and much more…

All these features make Falco less about verifying that you have the appropriate security in place and more about ensuring you know when there is a potential breach, so that you can stop it before something terrible happens. Falco therefore complements existing Kubernetes-native security measures, such as RBAC and Pod Security Policies, which help prevent issues rather than detect them. There are multiple ways of running Falco within a Kubernetes cluster.
You can install Falco on every Kubernetes node, bake Falco in as a second container in the pod, or use a DaemonSet to inject a Falco pod into each node. Using a DaemonSet is the better and more flexible option, as it requires the fewest changes from the Dev function and, unlike the first option, does not take a toll on the Ops function. It is also Kubernetes-native, so it is the preferred way.
https://medium.com/better-programming/kubernetes-security-with-falco-2eb060d3ae7d
['Gaurav Agarwal']
2020-10-23 15:33:27.588000+00:00
['Programming', 'Kubernetes', 'Cybersecurity', 'Containers', 'DevOps']
4,576
We Need to Be Kinder to Ourselves
When it comes to self love, I’m a huge advocate. For other people. Not so much for myself. I mean, it sounds awesome, in theory. It’s not so easy, in practice. Spending your life being put down by those closest to you, your mother, your former husband, supposed friends, makes it difficult to see the good in yourself. I’ve never had very high self esteem. I would even venture to say that I truly don’t have much at all. Through the years, some of the harsher things I’ve heard have stuck with me like glue, and I can’t seem to find the Goo-Gone. Sadly, I have a much harder time remembering the good things I’ve heard, though I know they are there. My current husband tells me I’m beautiful and smart. I tell him he’s delusional. I don’t take compliments well at all. It’s not that I don’t like to hear them, I really do. I just don’t respond well, because I always wonder why that person would say them. I’ve thought so little of myself for so long, it’s difficult for me to believe that someone else would think anything different. Today, a fellow writer, Leslie Wibberley, posted an essay about what your future self would say to you, given the chance. “Because saying all those horrible things about myself means that someone else doesn’t have to. And if I’m the one saying them, it doesn’t hurt as much.” This hit home, hard. When I allow myself to think about it, this is exactly why I do it, too. I’m horrible for calling myself fat, unattractive, a bad wife, bad mother, bad friend. Deep down, I know that I’m at least partially wrong, but I feel this must be what others see when they look at me, so I say it, so they don’t need to. It hurts less. But at the same time, more. I would love to be the woman who is confident in herself, who knows she’s attractive in her own way, intelligent, worthy. Not cocky, but carries herself in a way that says, “I’m a bad-ass and I know it.” I can pretend to be that woman. I do it quite often actually. 
But when it really comes down to it, that’s not who I really am, just who I aspire to be. When we’ve gone through trauma and abuse, it seems it’s harder to accept ourselves. Sprinkle in mental health issues, and you may find yourself at full-blown negative self esteem status. I know I have. I still struggle every single day. But it does get better, even just a little. And that’s better than nothing. The biggest change you can make is the conversations you have with yourself.
https://ccuthbertauthor.medium.com/we-need-to-be-kinder-to-ourselves-1c3b1ebdab70
['Chloe Cuthbert']
2019-10-09 15:49:36.642000+00:00
['Life Lessons', 'Women', 'Self Improvement', 'Mental Health', 'Self']
4,577
Search and Navigate Faster With Chrome Custom Search Engines
Photo by Markus Winkler on Unsplash

I recently found a neat feature within Google Chrome I cannot live without anymore, called Custom Search Engines. With Custom Search Engines, you can search any site using a simple keyword and the TAB-key, or navigate to paths at a particular website. Okay, any site is perhaps a bit exaggerated… I already used the Search Engine feature a lot, for example with YouTube, and you may already be using this feature without even knowing you do. If you type youtube in the Google Chrome address bar, you can hit the TAB-key and search for a YouTube video.

YouTube search engine within the Chrome address bar

The thing I was not aware of is that you can add your own websites to these search engines and trigger them with a specific keyword. This can come in very handy, and I have set it up to search for text in our Confluence wiki and to navigate to specific AWS resources in the AWS console. I will be discussing one case in this article, but of course you can apply it to any other website.

Searching Wiki Content

We are using Confluence for documentation. I noticed that it takes very long before you have the results of a search. You have to navigate to the Confluence page, wait for the page to be loaded, tap the search bar, enter and execute the search query, and then you finally see the results. Since our Confluence page is not public, I will be using wikipedia.org in this example, but you can apply the same principle.

Head Over to wikipedia.org

Navigate to wikipedia.org; you will probably see much the same landing page as below.

Landing page wikipedia.org

Execute a Search Query and Look at the URL

Search Engines work by replacing a particular part of the URL with the search term you enter, so you have to look at which part of the URL contains the search query after actually searching. In the example below you can see I searched for ‘software’.
Entered software as search term

The page I was redirected to after searching for ‘software’

As you can see above, the search term is actually part of the URL (en.wikipedia.org/wiki/Software), so this website is eligible to be set as a Custom Search Engine in Google Chrome (please read the ‘Last Note’).

Add a Custom Search Engine

Right-click the Google Chrome address bar, then click on ‘Edit Search Engines’, or head over to chrome://settings/searchEngines .

Right click the Google Chrome Address bar

The ‘Manage search engines’ screen in your Chrome settings

You will then enter the ‘Manage search engines’ screen in the Chrome settings. You can see some default search engines already set, and with the ‘Add’ button you can add your own search engine. Let us add wikipedia.org as our custom search engine, using the information we have gathered in the above steps. Continue by clicking the ‘Add’ button.

Provide information of the search engine in this popup

The popup above will be shown after clicking the ‘Add’ button. The three fields ‘Search engine’, ‘Keyword’ and ‘URL with…’ are all you have to fill in to get this neat feature to work. ‘Search engine’ can be any description; ‘Keyword’ is the word you have to type before hitting the TAB-key to trigger the custom search engine; and in the ‘URL with…’ field you have to enter the URL of the website you want to search in, with the search query replaced by %s .

Let us take a look at the URL we were redirected to after executing the ‘Software’ search query on Wikipedia, which is https://en.wikipedia.org/wiki/Software . Software is the search query, so regarding the field description, we end up with the following URL: https://en.wikipedia.org/wiki/%s , which we have to fill in in the ‘URL with…’ field. I have filled in ‘Wikipedia.org’ as ‘Search engine’ and ‘wiki’ as ‘Keyword’. You can then add the Custom Search Engine and use this keyword to trigger it.
Filled in the ‘Add search engine’ popup for wikipedia.org

Go to the address bar and start typing ‘wiki’, which is the keyword we have set for our Custom Search Engine. You will already get a suggestion to hit the TAB-key to search in Wikipedia.org. You may have noticed that this is the value set in the ‘Search engine’ field. Now hit the TAB-key and type whatever you want to search for.

Search for software within wikipedia.org

After typing your search query and hitting the ENTER-key, you will be redirected to, in my case, the software page at wikipedia.org.

Software page at wikipedia.org

That is about it! This principle can of course be used at any eligible site, which means the search query needs to be in the URL. This feature definitely makes my life much easier :-).

Last Note

The above is just an example. You will find out, after testing it with some search queries, that the example above is not really a search query. What it does is point you to a page within wikipedia.org. If that page does not exist, it will not give you a nice overview of suggestions like you would expect from search functionality. The actual URL to make a search query on wikipedia.org is https://en.wikipedia.org/w/index.php?search=QUERYHERE , resulting in the following URL to fill in the ‘URL with…’ field of the ‘Add search engine’ popup: https://en.wikipedia.org/w/index.php?search=%s .

Questions, Suggestions or Feedback

If you have any questions, suggestions or feedback regarding this article, please let me know!
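What Chrome does with the %s placeholder can be sketched in a few lines of Python. The function name build_search_url is made up for illustration; Chrome's actual implementation of course differs, but the substitution idea is the same.

```python
# Sketch of Chrome's custom-search-engine substitution: the query you type
# after the keyword is URL-encoded and substituted for %s in the saved URL.
from urllib.parse import quote

# Template for the real Wikipedia search endpoint.
template = "https://en.wikipedia.org/w/index.php?search=%s"

def build_search_url(template, query):
    # URL-encode the query, then substitute it for the %s placeholder.
    return template.replace("%s", quote(query))

print(build_search_url(template, "software engineering"))
# https://en.wikipedia.org/w/index.php?search=software%20engineering
```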
https://medium.com/the-innovation/search-and-navigate-faster-with-chrome-custom-search-engines-3e157f286a67
['Stephan Schrijver']
2020-09-13 11:20:26.865000+00:00
['Chrome', 'Shortcuts', 'Productivity', 'Search', 'Efficiency']
4,578
Unit Testing Best Practices
Photo by Science in HD on Unsplash

Unit tests are an important scaffold for large-scale software development; they enable us to design, write, and deploy production code with confidence by validating that software will behave as expected. Even though they may not execute in live systems, their development and maintenance requires the same care as general production code. Sometimes developers do not realize this, which leads to test code with more code smells than production code. Engineers may not give enough attention to test code changes in the code review process. However, most of the time the test code reflects the health of the production code: if the test code has some code smells, this can be a sign that the production code can be improved.

In this post, I’m going to mention some of the best practices to keep unit test code clean and maximize the benefits it provides. The best practices for unit testing are debated topics in the industry. In practice, however, projects and teams should align on key concepts in order to foster code consistency and ease of maintenance. I’m going to cover the meaning of unit testing in the Object-Oriented design world, the characteristics of a unit test, naming conventions for unit tests, and when we should or should not use mocking. There are many different answers/approaches to these concepts, and the relevance of different trade-offs may vary depending on the situation.

Unit Testing

To define unit testing, we should first define the unit. Once the unit has been defined, we can define unit testing as testing the behaviors of a unit. Let’s try it for the Object-Oriented software development methodology. Classes are the main building block of software designed with the Object-Oriented paradigm. We can therefore consider the class the unit of Object-Oriented software, and unit testing the independent testing of the behaviors of a class by the developer who implements those behaviors.
The relation between the behaviors and methods of a class may not be 1:1. Sometimes a class can have more than one method to implement a behavior that is unit tested as a whole. Sometimes more than one class can be used to implement a behavior that’s unit tested. However, sometimes it can be a sign of a code smell, like temporal coupling, if you use more than one public method or class to implement a unit-tested behavior.

Characteristics of a Unit Test

When developing unit tests, some key considerations include:

How fast should a unit test be?
How often should a unit test be run?
Which kinds of object methods are valid or not valid for unit testing?
How should we structure a unit test?
What kinds of assertions should we make in a unit test?

Let’s think about the answers to these questions. Some expected characteristics of unit tests include: fast execution times in order to provide immediate feedback about implementation correctness; readability in order to clearly express the behavior that’s tested; consistency and predictability of results through the use of deterministic evaluations; and robustness to structural changes (i.e., refactoring) in the implementation.

Speed

Developers expect unit tests to run quickly because they are generally executed frequently during the development process. We typically run unit tests whenever we make a change to our code in order to get immediate feedback about whether something is broken or not. Speed is a relative concept, but as Martin Fowler said in his article, “But the real point is that your test suites should run fast enough that you’re not discouraged from running them frequently enough”. Having fast unit tests requires continuous care along the life cycle of our codebase, but we can also adopt some rules that help us create fast unit tests when we name/tag a test as a unit test.
Michael Feathers mentioned some rules of this kind in his article:

“A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can’t run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it.
Tests that do these things aren’t bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to be able to separate them from true unit tests so that we can keep a set of tests that we can run fast whenever we make our changes.”

Behavioral Testing vs Structural Testing

In OOP, the structure of an object refers to the specific order and manner in which the production code implementing that object uses its dependent methods or classes. Since the structure of an object is related to the way the production code is written, it can generally be considered an implementation detail, and structural testing involves testing these implementation details.

Let’s see an example of structural testing vs behavioral testing. We have an Order class, and orders can be cancelled; there are some rules that check whether an order is cancellable, and these rules are executed by OrderSpecification. We also have a test class for this cancellation scenario. In the structural version of that test class, we both check that the order is cancelled and verify that the OrderSpecification method is called exactly once. However, the specific way in which OrderSpecification is used is an internal implementation detail of the order cancel code, so that test is an example of structural testing. In the behavioral version of the test, we only care about the behavior of order cancellation, not its internal implementation details.

We expect two things from production code: one is “doing the right thing”, the other is “doing the thing right”.
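The original code snippets in this article were embedded as images and did not survive extraction. As an illustration only, here is a minimal sketch of the two styles using Python’s standard-library unittest.mock (the article’s own examples were on the JVM; the Order and OrderSpecification implementations below are my assumptions, not the author’s code):

```python
from unittest.mock import Mock

class OrderStatus:
    CREATED = "CREATED"
    CANCELLED = "CANCELLED"

class Order:
    """Assumed minimal domain class: cancellation rules live in a specification collaborator."""
    def __init__(self, specification):
        self.status = OrderStatus.CREATED
        self._specification = specification

    def cancel(self):
        if self._specification.is_cancellable(self):
            self.status = OrderStatus.CANCELLED

def test_cancel_structural_style():
    spec = Mock()
    spec.is_cancellable.return_value = True
    order = Order(spec)

    order.cancel()

    assert order.status == OrderStatus.CANCELLED
    # Structural part: also pins down HOW the collaborator was used.
    spec.is_cancellable.assert_called_once_with(order)

def test_cancel_behavioral_style():
    spec = Mock()
    spec.is_cancellable.return_value = True
    order = Order(spec)

    order.cancel()

    # Behavioral part: only the observable outcome is asserted.
    assert order.status == OrderStatus.CANCELLED

test_cancel_structural_style()
test_cancel_behavioral_style()
```

Both tests pass today, but only the structural one is coupled to the way Order talks to its collaborator.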
Unit tests should focus on the former, i.e., the behavior produced by the production code, which is one level of abstraction above the implementation details. As Kent Beck says in his article, “Programmer tests should be sensitive to behavior changes and insensitive to structure changes. If the program’s behavior is stable from an observer’s perspective, no tests should change.”

Why? When we consider the benefit, cost, and maintenance dimensions of unit testing, it’s not hard to see that structure-sensitive tests create friction rather than provide safety. Agile development teams change the structure of code continuously as they refactor, and fixing many brittle tests that are not related to any behavior after a refactoring is a very tiring and discouraging process. For example, let’s say we change the signature of the isCancellable method within the OrderSpecification class from taking the Order class as an argument to taking the OrderStatus class. In such a situation, the expected behavior of our code has not really changed, but the following test will start to fail because of our verification that depends on the method’s signature. Unfortunately, mocking libraries make this kind of structure testing very easy to write, so we should use their structure verification functions with caution.

Of course, there can be some exceptional cases where we have to rely on structural testing instead of behavioral testing to achieve some level of confidence in our system. For example, if a real implementation is too slow to use or too complex to build, we may fall back on structural testing by verifying the invocation of some functions with mocks. Another case is related to the order of function calls, like checking cache hits and misses: a cache miss may have a financial cost (say we call a paid API on every cache miss), and we may use structural testing to verify whether the cache methods are called.
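To make the brittleness concrete, here is a hedged Python sketch of that refactoring (the implementations are assumptions, since the article’s snippet was an image that did not survive): cancellation still behaves the same, yet a structural verification pinned to the old argument type now fails.

```python
from unittest.mock import Mock

class OrderStatus:
    CREATED = "CREATED"
    CANCELLED = "CANCELLED"

class Order:
    def __init__(self, specification):
        self.status = OrderStatus.CREATED
        self._specification = specification

    def cancel(self):
        # After the refactoring, the specification receives the status, not the whole order.
        if self._specification.is_cancellable(self.status):
            self.status = OrderStatus.CANCELLED

spec = Mock()
spec.is_cancellable.return_value = True
order = Order(spec)
order.cancel()

# Behavioral assertion: still passes after the refactoring.
assert order.status == OrderStatus.CANCELLED

# Structural verification written against the OLD signature: now fails.
try:
    spec.is_cancellable.assert_called_once_with(order)
except AssertionError:
    print("structural verification broke, although the behavior did not change")
```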
But these should be exceptions, not our default choice.

Should we write unit tests for all classes?

No, because classes have different kinds of behaviors. Some classes have behaviors that directly implement the business logic of our domain, while other classes have behaviors related to application- or system-level requirements, like transactions, security, observability, etc. We separate classes that have different kinds of behaviors using stereotypes, i.e., different categories of responsibilities. We use Domain-Driven Design (DDD) concepts in some of our projects, and DDD tactical design has some stereotypes for classes, such as aggregate root, entity, value object, domain service, application service, repository, etc.

Let’s examine the application service case; application services are like gateways to our domain model from the outside world, as we see in the diagram below. Application services handle application-level requirements (e.g., security, transactions, etc.) while routing requests from the outside world (anything that’s not directly part of our domain model, like the web layer, the RPC layer, a storage access layer, etc.) to our domain model. There is no business logic in application services, and their code mostly consists of direct calls to our domain model. If we try to write unit tests for these classes, there is nothing to verify from a behavior perspective; we can only verify the interactions between them and the domain model. But we mentioned that this is structural testing, and we generally don’t prefer these kinds of tests. So, we don’t write unit tests for DDD application services.

Then how can we test these application services? There are other testing styles besides unit testing, and we think that integration tests that exercise these application services in the test flow are a better alternative for the application services of DDD.
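As a rough illustration of how thin such a class tends to be, here is a hypothetical Python sketch of an application service (the names and structure are my assumptions, not taken from the article); it only coordinates application-level concerns and delegates to the domain model, leaving no domain behavior of its own to unit test:

```python
class OrderApplicationService:
    """Gateway from the outside world (web/RPC layer) to the domain model."""

    def __init__(self, order_repository):
        self._order_repository = order_repository

    def cancel_order(self, order_id):
        # Application-level concerns (transaction boundaries, security checks,
        # metrics) would be handled here; the business rules live in the domain.
        order = self._order_repository.find(order_id)
        order.cancel()  # the actual business behavior, owned by the domain model
        self._order_repository.save(order)
        return order
```

A unit test for cancel_order could only re-verify these delegating calls (structural testing), which is why an integration test through this service is the better fit.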
Structure of a Unit Test

Generally, a unit test has three parts: setting the pre-conditions, taking an action on the object, and making the verification. These are the Arrange/Act/Assert parts (or alternatively, Given/When/Then, as used in Behavior-Driven Development (BDD) testing). Applying this kind of structural style to our unit tests increases their readability. Sometimes we can omit the `Arrange` part if we don’t need to set anything up before the `Act` part, but we should always have the `Act` and `Assert` parts when writing a unit test. We can see an example of these Arrange/Act/Assert parts below.

Using a Naming Convention

The name of a unit test is important because it directly affects code readability. Unit tests should be readable because we should easily understand what is broken in our system when a unit test fails. We should also be able to understand the behavior of our system by reading its unit tests, because people come and go in a project. Some programming languages allow us to use plain language in method names; for example, with Kotlin we can write a test method whose name is a full sentence. Some testing frameworks, like JUnit, provide an annotation (@DisplayName) for this purpose if we can’t use the method name itself. There are different naming conventions for unit tests. Teams can align on a standard naming convention that members find most readable; alternatively, other teams may allow members to pick the most appropriate names for their tests instead of standardizing. In our last Kotlin project, we used the convention “Should ExpectedBehavior When StateUnderTest”.

Mocking

We use mock objects to replace the real implementations that our production code depends on in a test, with the help of libraries like Mockito, MockK, Python’s unittest.mock, etc. Using mock objects makes it easy to write more focused and cheaper test code when our production code has a non-deterministic outside dependency.
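Here is a minimal Python sketch of the Arrange/Act/Assert layout described above, with a descriptive test name in the spirit of the “Should ExpectedBehavior When StateUnderTest” convention (the tiny Order class is assumed purely for illustration):

```python
class OrderStatus:
    CREATED = "CREATED"
    CANCELLED = "CANCELLED"

class Order:
    """Assumed minimal domain class; a created order can always be cancelled here."""
    def __init__(self):
        self.status = OrderStatus.CREATED

    def cancel(self):
        self.status = OrderStatus.CANCELLED

def test_should_cancel_order_when_order_is_created():
    # Arrange: set the pre-conditions.
    order = Order()

    # Act: exercise the behavior under test.
    order.cancel()

    # Assert: verify the observable outcome.
    assert order.status == OrderStatus.CANCELLED

test_should_cancel_order_when_order_is_created()
```

When this test fails, its name alone tells us which behavior broke.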
For example, we mock a repository class that finds the orders of a customer by status in the test code below. Using mocks is not a silver bullet, though, and overusing mocking can cause problems. For example, the stub code needed to program a mock’s behavior can expose the implementation details or structure of the system under test; as we mentioned before, this makes our tests brittle when the structure changes. Test code with mocking is also harder to understand than test code without mocking, because of the additional code required. Mocking can even cause false-positive tests, because the behavior of the real implementation can change while our mock implementations become out of date.

Mocking can be an appropriate choice for dependencies involving external systems. For example, it can make sense to mock a repository class that communicates with a database, a service class that calls another service or application over the network, or a service class that writes and reads files on disk. If we can use a real implementation, we should use it instead of a mock. If we can’t, mocking is still not our only option: we can also use fake objects, which are much simpler, lighter-weight, test-purpose-only implementations of the functionality provided by the production code. For example, implementing a test scenario that has complex conditions in its given part can be simpler with fake objects than with mock objects.

Conclusion

Unit tests should be considered first-class citizens when writing production code in order to maximize their benefits. We should let our unit tests drive our production code’s design and readability by applying the best practices we mentioned:

Align on the meaning of unit testing concepts, at least within the team/project.
Keep your unit tests fast.
Write behavioral tests instead of structural tests.
Decide whether to write unit tests for a class according to the responsibilities of the class.
Align on the structure of a unit test.
Align on a naming convention for unit tests, or allow free naming, depending on your code review process.
Use mocks with caution; don’t use them for structural testing.

Acknowledgments

Thanks to my colleagues who reviewed this post and provided invaluable feedback.
https://medium.com/udemy-engineering/unit-testing-best-practices-f877799f6dfd
['Mucahit Kurt']
2020-07-16 14:23:36.392000+00:00
['Software Engineering', 'Unit Testing', 'Object Oriented Software']
Nothing On The Net Is Neutral
If Bitcoin is the number one topic in tech and the economy this week, then net neutrality is running a very close second. The FCC’s vote this week to repeal Obama-era neutrality regulations brought a wave of protest and punditry through the web, and close readers will know that my point of view on the debate, and NewCo Shift’s, aligns more with Walt Mossberg and less with the Chairman. But I believe in rational discourse and robust debate, and to that end, I want to take a few moments to lay out the Republican point of view. Here’s Pai’s statement outlining his defense of the repeal.

In short, Pai argues that we need to move back to the “light touch” approach that the government adopted for most of the Internet’s short life. Absent government oversight, he argues, the Web developed into a fantastic organism that has benefitted all. Competition drove innovation, and that framework ought to be preserved. The doomsayers on the left will eventually be proven wrong — the market will win. Here’s a similar argument, via a NYT OpEd.

What strikes me as interesting about all this is that now that net neutrality is no longer government policy, we’re going to get a true test of our much-vaunted free market. Will competition truly blossom? Will, for example, new ISPs spring up that offer “net neutrality as a service” — in opposition to the Comcasts and Verizons of the world, who will likely offer tiered bundles of services favoring their business partners? I have to admit, I find such a scenario unlikely, but to me, the silver lining is that we get to find out. And in the end, perhaps that is the only way we can truly know whether preserving neutrality is a public good worthy of enshrinement in federal law.

Of course, net neutrality today is utterly conflated with the fact that Google and Facebook have become the two most powerful companies on the Web, and have their own agendas to look after. It’s interesting how muted their support was for neutrality this time around.
As this Washington Monthly column points out, antitrust (which I wrote about here) is now a “central plank” in the Democrats’ agenda moving forward. The next few years are going to be nothing but fascinating, that much is certain. We’ll be watching, closely.

More key stories from around the web:

Mike Bloomberg should have run. Enough said. MQ: “Corporations are sitting on a record amount of cash reserves: nearly $2.3 trillion. That figure has been climbing steadily since the recession ended in 2009, and it’s now double what it was in 2001. The reason CEOs aren’t investing more of their liquid assets has little to do with the tax rate.”

Wow. Just…wow. We are callous to what our economy is doing to humans. MQ: “To think of The Ghosted is to think of injustice, a cataloging of fist-fights, tuberculosis, detention centers, scabies, crabs, lice, roaches, hot plates, Section 8 housing, laborers hiding under blankets in the backs of trucks, children lying stiff against the tops of trains, assembly lines in windowless heat-filled rooms — a type of economic violence many consumers try to close their minds to. We do not want to think of them because of what it says about us.”

This has set off a frenzy in the Valley. It’s very, very complicated, and I think Hunter Walk has some enlightening things to say about the same topic. I write about these topics pretty frequently, and feel compelled to write about it now, but honestly, there’s only so much time in the day and today’s focus is/was net neutrality. But stay tuned, so much more to say on this.

And while we’re on the topic of Valley elites coming to grips with their own power… I’ll also be writing about this in the days to come. Not in “the years to come,” which is apparently the preferred timeline at FB HQ. MQ: “We don’t have all the answers, but given the prominent role social media now plays in many people’s lives, we want to help elevate the conversation.
In the years ahead we’ll be doing more to dig into these questions, share our findings and improve our products. At the end of the day, we’re committed to bringing people together and supporting well-being through meaningful interactions on Facebook.”
https://medium.com/newco/nothing-on-the-net-is-neutral-b58ce12617e7
['John Battelle']
2017-12-15 23:07:00.017000+00:00
['Politics', 'Economics', 'Startup', 'Tech', 'Net Neutrality']
What is correlation?
What is correlation? Not causation. Experiments allow you to talk about cause and effect. Without them, all you have is correlation.

What is correlation? IT’S NOT CAUSATION. (!!!!!)

Sure, you’ve probably already heard us statisticians yelling that at you. But what is correlation? It’s when the variables in a dataset look like they’re moving together in some way. Two variables X and Y are correlated if they seem to be moving together in some way. For example, “when X is higher, Y tends to be higher” (this is called positive correlation) or “when X is higher, Y tends to be lower” (this is called negative correlation).

Thanks, Wikipedia. If you’re looking for the formula for (population) correlation, your friend Wikipedia has everything you need. But if you wanted that, why didn’t you go there straight away? Why are you here? Ah, you want the intuitive explanation? Cool. Here’s a hill:

On the left, height and (left-to-right) distance are positively correlated. When one goes up, so does the other. On the right, height and distance are negatively correlated.

When most people hear the word correlation, they tend to think of perfect linear correlation: taking a horizontal step (X) to the right on the hill above gets you the same change in altitude (Y) everywhere on the same slope. As long as you’re going up from left to right (positive correlation), there are no surprise jagged/curved bits. Bear in mind that going up is positive only if you’re hiking left-to-right, the same way you read English. If you approach hills from the right, statisticians won’t know what to do with you. I suppose what statisticians are trying to tell you is never to approach a hike from the right. That will only confuse us. But if you hike properly, then “up” is “positive.”

Imperfect linear correlation

In reality, this hill is not perfect, so the correlation magnitude between height and distance will be less than 100%.
(You’ll pop a +/- sign in front depending on whether we’re going up or down, so correlation lives between -1 and 1. That’s because its formula (pasted from Wikipedia above) divides by standard deviation, thereby removing the magnitude of each variable’s dispersion. Without that denominator, you’d struggle to see that the strength of the relationship is the same regardless of whether you measure height in inches or centimetres. Whenever you see scaling/normalization in statistics, it’s usually there to help you compare apples and oranges that were measured in different units.)

Uncorrelated variables

What does a correlation of zero look like? Are you thinking of a messy cloud with no discernible patterns inside? Something like: Sure, that works. You know how I know X and Y truly have nothing to do with one another? Because I created them that way. If you want to simulate a similar plot of two uncorrelated variables, try running this basic code snippet in R online:

X <- runif(100) # 100 regular random numbers between 0 and 1
Y <- rnorm(100) # Another 100 random numbers from bell curve
plot(X, Y, main = "X and Y have nothing to do with one another")

But there’s another way. The less linear the relationship, the closer your correlation is to zero. In fact, if you look at the hill as a whole (not just one of its slopes at a time), you’ll find a zero correlation even though there’s a clear relationship between height and distance (duh, it’s a hill).

X <- seq(-1, 1, 0.01) # Go from -1 to 1 in increments of 0.01
Y <- -X^2 # Secret formula for the ideal hill
plot(X, Y, main = "The linear correlation is zero")
print(cor(X, Y)) # Check the correlation is zero

Correlation is not causation

The presence of a linear correlation means that data move together in a somewhat linear fashion. It does not mean that X causes Y (or the other way around). They might both be moving due to something else entirely. Want proof of this? Imagine you and I invested in the same stock.
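The claim about units can be checked directly. Here is a plain-Python sketch (the article’s own snippets are in R; this function and the sample numbers are my own illustration, not the author’s): dividing by both standard deviations is exactly what makes correlation immune to a change of units, and the same function reproduces the whole-hill zero correlation from the R snippet above.

```python
import math

def pearson(xs, ys):
    """Plain-Python Pearson correlation (the same quantity R's cor computes)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical hike data: height measured in inches vs. horizontal distance.
height_inches = [10.0, 21.0, 29.0, 42.0, 50.0]
distance = [1.0, 2.2, 2.9, 4.1, 5.2]
height_cm = [2.54 * h for h in height_inches]  # the same heights, different units

# Rescaling multiplies the covariance AND the standard deviation by 2.54,
# so the ratio -- the correlation -- is unchanged.
print(pearson(height_inches, distance))
print(pearson(height_cm, distance))

# Looking at the whole hill (a symmetric parabola), the linear correlation
# comes out essentially zero even though the relationship is obvious.
xs = [i / 100 for i in range(-100, 101)]
print(pearson(xs, [-x * x for x in xs]))
```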
Let’s call it ZOOM, because I find it hilarious that pandemic investors intended to buy ZM (the video communications company) but accidentally bought ZOOM (the Chinese micro-cap) instead, leading to a 900% increase in the price of the wrong Zoom, while the real ZM didn’t even double. *wipes away laugh-tears*

Anyways — in honor of that comedy — imagine that you and I invested a small amount in ZOOM. Since we’re both holding ZOOM, the value of your stock portfolio ($X) is correlated with my stock portfolio value ($Y). If ZOOM goes up, we both profit. That does not mean that my portfolio’s value causes your portfolio’s value. I cannot dump all my stock in a way that punishes you — if my portfolio value suddenly becomes zero because I sell everything to buy a pile of cupcakes, that doesn’t mean that yours is now worthless.

Many decision-makers fall flat on their faces for precisely this reason. Seeing two correlated variables, they invest resources in affecting thing 1 to try to move thing 2… and the results are not what they expect. Without an experiment, they had no business assuming that thing 1 drives thing 2 in the first place. Correlation is not causation.

The lovely term “spurious correlation” refers to the situation where there’s no direct causal relationship between two correlated variables. Their correlation might be due to coincidence or due to the effect of a third (usually unseen, a.k.a. “latent”) variable that influences both. Never take correlation at face value — in data, things often aren’t what they seem. For fun with spurious correlations, check out the website this prime example hails from.

To summarize, if you want to talk about causes and effects, you need a (real!) experiment. Without experiments, all you have is correlation, and for many decisions — the ones based on causal reasoning — that is not helpful.

P.S. What is regression? It’s putting lines through stuff. Think of it as, “Oh, hey!
These things are correlated, so let’s use one to predict the other…”
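For readers who prefer Python to R, the hill example above ports directly; a minimal sketch with a hand-rolled Pearson correlation so it needs no external libraries:

```python
import math

# Points on the ideal "hill" from the R snippet: y = -x^2 over x in [-1, 1]
xs = [i / 100 for i in range(-100, 101)]
ys = [-x * x for x in xs]

def pearson(a, b):
    """Pearson correlation: covariance divided by both standard deviations."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# The relationship is obvious (it's a hill), yet the *linear* correlation is ~0.
print(round(pearson(xs, ys), 6))
```

Dividing by the standard deviations is exactly the normalization step credited above with making correlation unit-free: measure the hill in inches or in centimetres and `pearson` returns the same number.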
https://towardsdatascience.com/what-is-correlation-975ea899aaed
['Cassie Kozyrkov']
2020-07-13 11:57:05.033000+00:00
['Towards Data Science', 'Statistics', 'Artificial Intelligence', 'Data Science', 'Technology']
4,581
Show authors more ❤️ with 👏’s
Show authors more ❤️ with 👏’s Introducing Claps, a new way to react on Medium Remember that time you saw a really amazing live show? You couldn’t help but jump out of your seat and clap so hard your hands felt raw afterwards. Or when you heard a great lecture or stirring speech, and felt connected to the people around you by joining in with their applause? Now, remember when you last read a story that turned your thinking upside down, offering a new look at a topic that you’d never considered before. Was tapping a heart icon one time enough? Was it satisfying? Today we’re hoping to change that. Rolling out to Medium users over the coming week will be a new, more satisfying way for readers to give feedback to writers. We call it “Claps.” It’s no longer simply whether you like, or don’t like, something. Now you can give variable levels of applause to a story. Maybe clap once, or maybe 10 or 20 times. You’re in control and can clap to your heart’s desire. So why are we making this change? Since day one, Medium has had a goal of measuring value. The problem, as we saw it, with much of the media/web ecosystem is that the things that are measured and optimized for were not necessarily the things that reflected true value to people. For example, a pageview is a pageview, whether it’s a 3-second bounce (clickbait) or a 5-minute, informative story you read to the end. As a result, we got a lot more of the former. On Medium, we’ve tried to provide more meaningful metrics. We display to our authors not only views, but reads (i.e., how many people got to the bottom of a post). We calculate time spent on posts and display that for publication owners. And we use all of this in our systems that determine which posts to distribute to more people. The goal is always to be able to suss out the great from the merely popular. So what’s wrong with Recommends? The Recommend — our version of a Like or upvote or fav — has been our explicit feedback signal since almost day one. 
Explicit feedback is the most valuable signal, both for authors and the Medium system. But a simple, binary vote has its limitations. It shows you how many people thought something was good, not how good was it? Earlier this year, we released Series and decided to do something different with the feedback mechanism. Instead of a binary input, we had an applause button, which you could press as many times as you want, and the count just kept going up. At first, we thought this was just fun. But then we realized it could be meaningful. Just like in real life, we found ourselves applauding more the more we appreciated a Series. Hm, we thought: What if we could capture this level of sentiment for posts? Authors would get much more meaningful data about what readers really valued. And as a reader, it would be more satisfying than simply ❤️-ing a nicely filtered photo of avocado toast. We know this will take some getting used to, and we don’t take this change lightly for those who’ve been on Medium for a long time and given — or received — thousands of little green hearts. (We’re right there with you.) But we hope, once you get clapping, you’ll see how natural and more expressive it is. To summarize: Just click the 👏 instead of the ❤️. If you feel strongly, click it more (or just hold down). The more you clap, the more positive feedback you’re providing to the author, and the more you’re letting us know the story is worth reading. (Only the author can see how many claps you gave them.) Our system will evaluate your claps on an individual basis, assessing your evaluation of a story relative to the number of claps you typically send. All this will help the stories that matter most rise to the top. Again, this system will be rolling out in the next few days across Medium surfaces. If you don’t see it yet, you will soon. We’ll tweak and adjust based on what we learn, so please give us your feedback.
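That last paragraph hints at a per-reader normalization. A purely hypothetical sketch of the idea (this is not Medium's actual algorithm; the function name and formula are invented for illustration):

```python
# Hypothetical sketch, NOT Medium's real system: the post says claps are
# assessed "relative to the number of claps you typically send", which could
# be modeled as a simple per-reader normalization like this.
def normalized_clap_weight(claps: int, readers_average_claps: float) -> float:
    """Scale a clap count by the reader's own baseline, so a habitual
    50-clapper and a frugal 2-clapper can express comparable enthusiasm."""
    if readers_average_claps <= 0:
        return 0.0
    return claps / readers_average_claps

# A reader who usually sends 5 claps giving 10 expresses the same relative
# enthusiasm as one who usually sends 20 giving 40 (both weigh in at 2.0).
print(normalized_clap_weight(10, 5), normalized_clap_weight(40, 20))
```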
https://blog.medium.com/show-authors-more-️-with-s-c1652279ba01
['Katie Zhu']
2018-04-03 20:06:16.973000+00:00
['Medium', 'Recommendations', 'Product', 'Design']
4,582
How Emotions shape Team Culture
“Culture eats strategy for breakfast,” said Peter Drucker. This is one of my favorite organization management quotes, and it cannot be stressed enough. Our team’s culture thwarts or improves any strategy or process improvement we attempt to implement. But how do we improve this culture? Corporate culture is not just cognitive culture; it is mainly emotional culture. Have you ever yelled or been yelled at? Most of us have been in a place where we were not aware of our emotions, or been a victim of someone who could not control his or her anger, such as a screamer or a table pounder. For instance, I came across a lady this morning when I dropped my son at school. She was shouting at the top of her voice, making racist comments at a fellow parent who did not follow the driving rules. So what makes a human lash out at a fellow being without understanding what they are going through, especially in front of their respective kids? When similar outbursts happen in the workplace, they may hijack our thought processes, limit innovation and, worst of all, alter the culture of the team. We underestimate the number of situations where emotions are involved: We often think the workplace is not the place for emotions or feelings. We assume there is going to be a professional, upright setting all the time. But be it on the road where we drive or in the workplace, wherever we have humans, there are emotions involved. Invalidating or ignoring the existence of emotions in these situations will sweep the toxicity under the carpet. People will resort to passive-aggressive behavior such as completely ignoring another person, refusing to answer any questions from them, abruptly leaving meetings, yelling, insulting, gossiping, stubbornness, or refusing to do what they’re told. Be aware of the feelings: Not recognizing emotional issues will alter the impact of the feelings.
When we don’t feel heard or cannot articulate our feelings effectively, those feelings get manifested in a negative way. Emotions and decision making: We are emotional creatures composed of core emotions like Happiness, Sadness, Anger, Shame and Fear. When the intensity of these emotions rises, they will most likely dictate our actions. As we are hardwired to emote first, we have no control over that process, but we can control the thoughts that follow an emotion. Controlling those thoughts will determine how we react to an emotion, and those reactions will steer us towards better decision making. As our emotional skill matures, we will learn to practice productive ways of responding that become habitual. Emotions and feelings impact our decisions; being aware of our emotions helps us make better ones. Emotions and Team culture: If we are leading a team, we should be ready to deal with complex emotions first. As a leader, building the right culture is of utmost importance. Strategy or any process improvements come next. Emotions drive culture; culture drives innovation, productivity, efficiency and, more importantly, a better life for employees. Hence, we as individuals and leaders should aspire to spot our emotions, articulate them in a constructive way, and make work a better place for us and for everyone around us.
https://medium.com/atom-platform/how-emotions-shape-team-culture-c50329bedd80
['Pandi Ganapathy']
2018-08-02 17:45:09.867000+00:00
['Leadership', 'Team Culture', 'Self-awareness', 'Organizational Culture', 'Decision Making']
4,583
Google’s RecSim is an Open Source Simulation Framework for Recommender Systems
Google’s RecSim is an Open Source Simulation Framework for Recommender Systems The new framework enables the creation of simulation environments to study reinforcement learning algorithms in recommender systems. I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below: Recommendation systems are all around us, and they are getting more sophisticated by the minute. While traditional recommender systems were focused on one-time recommendations based on user actions, new models effectively engage in sequential interactions to try to find the best recommendation based on the user’s behavior and preferences. These systems are known as collaborative interactive recommenders (CIRs), and their rise has been triggered by advancements in areas such as natural language processing (NLP) and deep learning in general. However, building these systems remains a challenge. Recently, Google open sourced RecSim, a platform for creating simulation environments for CIRs. Despite the popularity and obvious value proposition of CIRs, their implementations have remained limited. This is in part due to the difficulty of simulating different user interaction scenarios. Traditional supervised learning approaches prove very limited when it comes to CIRs, given that it is hard to find datasets that accurately reflect user interaction dynamics. Reinforcement learning has evolved as the de facto standard for implementing CIR systems given the dynamic and sequential nature of the learning process. Just as CIR systems are based on a sequence of user actions, reinforcement learning agents learn by taking actions and experiencing rewards across a sequence of situations in a given environment.
While reinforcement learning systems are conceptually ideal for the implementation of CIRs, there are very notable implementation challenges.

· Generalization Across Users: Most RL research focuses on models and algorithms involving a single environment. The ability to generalize knowledge across different users is essential for an effective CIR agent.

· Combinatorial Action Spaces: Most CIR systems need to explore combinatorial variations of recommendations and user actions, which are hard to capture in simulation models.

· Large, Stochastic Action Spaces: Many CIR environments deal with a set of recommendable items that is dynamically and stochastically generated. Think about how a video recommendation engine may operate over a pool of videos that is undergoing constant flux by the minute. Reinforcement learning systems are typically challenged in such non-fixed environments.

· Long Horizons: Many CIR systems need to operate over long horizons in order to experience any significant change in users’ preferences. This is another challenging aspect for simulation models.

Most of these challenges boil down to the fact that it is very difficult to effectively simulate combinations of user actions in a way that can be quantified and used to improve the agent’s learning policy.

Enter RecSim

RecSim is a configurable platform for authoring simulation environments that allows both researchers and practitioners to challenge and extend existing RL methods in synthetic recommender settings. Instead of trying to create a generic, perfect simulator, RecSim focuses on simulations that mirror specific aspects of user behavior found in real systems, serving as a controlled environment for developing, evaluating and comparing recommender models and algorithms. Conceptually, RecSim simulates a recommender agent’s interaction with an environment consisting of a user model, a document model and a user choice model.
The agent interacts with the environment by recommending sets or lists of documents (known as slates) to users, and has access to observable features of simulated individual users and documents to make recommendations. Diving into more details, the RecSim environment consists of a user model, a document model and a user-choice model. The recommender agent interacts with the environment by recommending slates of documents to a user. The agent uses observable user and candidate document features to make its recommendations. The document model samples items from a prior over document features, including latent features such as document quality, and observable features such as topic, or global statistics like ratings or popularity. Agents and users can be configured to observe different document features, so developers have the flexibility to capture different RS operating regimes. The user model samples users from a prior over configurable user features, including latent features such as personality, satisfaction and interests; observable features such as demographics; and behavioral features such as session length, visit frequency and budget. When the agent recommends a document to a user, the response is determined by the user-choice model, which can access observable document features and all user features. Other aspects of a user’s response can depend on latent document features, such as document topic or quality. Once a document is consumed, the user state undergoes a transition through a configurable user transition model, since user satisfaction or interests might change. Another important component of the RecSim architecture is the simulator, which is responsible for controlling the interactions between the agent and the environment. The interactions are based on six fundamental steps. 1. The simulator requests the user state from the user model, both the observable and latent user features. 2.
The simulator sends the candidate documents and the observable portion of the user state to the agent. 3. The agent uses its current policy to return a slate to the simulator to be “presented” to the user. 4. The simulator forwards the recommended slate of documents and the full user state (observable and latent) to the user choice model. 5. Using the specified choice and response functions, the user choice model generates a (possibly stochastic) user choice/response to the recommended slate, which is returned to the simulator. 6. The simulator then sends the user choice and response to both: the user model, so it can update the user state using the transition model; and the agent, so it can update its policy given the user response to the recommended slate. RecSim provides a unique approach to streamline the testing and validation of CIR systems based on deep learning. The code has been open sourced on GitHub and the release was accompanied by this research paper. Certainly, it’s going to be interesting to see the types of simulations researchers and data scientists build on top of RecSim.
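The six steps above can be sketched as a toy loop. This is illustrative pseudocode in Python, not the real RecSim API — the class and method names below are assumptions, and a trivial agent policy stands in for a learned one:

```python
import random

class UserModel:
    def state(self):
        # Step 1: the simulator requests observable and latent user features.
        return {"observable": {"age_group": "18-24"},
                "latent": {"satisfaction": 0.5}}

    def transition(self, state, choice):
        # Step 6a: consuming (or skipping) a document shifts the user state.
        state["latent"]["satisfaction"] += 0.1 if choice is not None else -0.1
        return state

class UserChoiceModel:
    def choose(self, slate, full_state):
        # Steps 4-5: sees the full (observable + latent) state and returns a
        # possibly stochastic choice; None models "user picked nothing".
        return random.choice(slate + [None])

class Agent:
    def slate(self, docs, observable_state, k=3):
        # Step 3: a stand-in policy -- just recommend the first k candidates.
        return docs[:k]

    def update(self, choice):
        # Step 6b: a real agent would update its policy from the response.
        pass

def simulate_step(user, agent, choice_model, docs):
    state = user.state()                            # 1. fetch user state
    slate = agent.slate(docs, state["observable"])  # 2-3. agent sees only observables
    choice = choice_model.choose(slate, state)      # 4-5. user responds to the slate
    user.transition(state, choice)                  # 6a. user state moves on
    agent.update(choice)                            # 6b. agent learns from the response
    return choice

choice = simulate_step(UserModel(), Agent(), UserChoiceModel(),
                       ["doc_a", "doc_b", "doc_c", "doc_d"])
print(choice)  # one of the slate's documents, or None
```

The point mirrored here is the information asymmetry the article describes: the agent only ever sees the observable features, while the choice model may react to latent ones.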
https://medium.com/dataseries/googles-recsim-is-an-open-source-simulation-framework-for-recommender-systems-9a802377acc2
['Jesus Rodriguez']
2020-12-15 11:34:13.294000+00:00
['Machine Learning', 'Deep Learning', 'Data Science', 'Artificial Intelligence', 'Thesequence']
Title Google’s RecSim Open Source Simulation Framework Recommender SystemsContent Google’s RecSim Open Source Simulation Framework Recommender Systems new framework enables creation simulation environment study reinforcement learning algorithm recommender system recently started new newsletter focus AI education TheSequence noBS meaning hype news etc AIfocused newsletter take 5 minute read goal keep date machine learning project research paper concept Please give try subscribing Recommendation system around u getting sophisticated minute traditional recommender system focused onetime recommendation based user action new model effectively engage sequential interaction try find best recommendation based user behavior preference type recommendation system known collaborative interactive recommendersCIRs triggered advancement area natural language processingNLP deep learning general However building system remains challenge Recently Google open sourced RecSim platform creating simulation environment CIRs Despite popularity obvious value proposition CIRs implementation remained limited part due difficulty simulating different user interaction scenario Traditional supervised learning approach result limited come CIRs given hard find datasets accurately reflect user interaction dynamic Reinforcement learning evolved de facto standard implementing CIR system given dynamic sequential nature learning process like CIR system based sequence user action reinforcement learning agent learn taking action experiencing reward across sequence situation given environment reinforcement learning system conceptually ideal implementation CIRs notable implementation challenge · Generalization Across Users RL research focus model algorithm involving single environment ability generalize knowledge across different essential effective CIR agent · Combinatorial Action Spaces CIR system require explore combinatorial variation recommendation user action hard capture simulation model · Large 
Stochastic Action Space Many CIR environment deal set recommendable item dynamically stochastically generated Think video recommendation engine may operate pool video undergoing constant flux minute Reinforcement learning system typically challenged nonfixed environment · Long Horizons Many CIR system need operate long horizon order experience significant change user’s preference another challenging aspect simulation model challenge boiled difficult effectively simulate combination user action way quantified used improve agent’s learning policy Enter RecSim RecSim configurable platform authoring simulation environment allow researcher practitioner challenge extend existing RL method synthetic recommender setting Instead trying create generic perfect simulator RecSim focus simulation mirror specific aspect user behavior found real system serve controlled environment developing evaluating comparing recommender model algorithm Conceptually RecSim simulates recommender agent’s interaction environment consisting user model document model user choice model agent interacts environment recommending set list document known slate user access observable feature simulated individual user document make recommendation Diving detail RecSim environment consists user model document model userchoice model recommender agent interacts environment recommending slate document user agent us observable user candidate document feature make recommendation document model also sample item prior document feature including latent feature document quality observable feature topic global statistic like rating popularity Agents user configured observe different document feature developer flexibility capture different RS operating regime user model sample user prior configurable user feature including latent feature personality satisfaction interest observable feature demographic behavioral feature session length visit frequency budget agent recommends document user response determined userchoice 
model access observable document feature user feature aspect user’s response depend latent document feature document topic quality document consumed user state undergoes transition configurable user transition model since user satisfaction interest might change Another important component RecSim architecture similar responsible controlling interaction agent environment interaction based six fundamental step 1 simulator request user state user model observable latent user feature 2 simulator sends candidate document observable portion user state agent 3 agent us current policy return slate simulator “presented” user 4 simulator forward recommended slate document full user state observable latent user choice model 5 Using specified choice response function user choice model generates possibly stochastic user choiceresponse recommended slate returned simulator 6 simulator sends user choice response user model update user state using transition model agent update policy given user response recommended slate RecSim provides unique approach streamline testing validation CIR system based deep learning code open sourced GitHub release accompanied research paper Certainly it’s going interesting see type simulation researcher data scientist build top RecSimTags Machine Learning Deep Learning Data Science Artificial Intelligence Thesequence
4,584
What is Data Exfiltration?
If we talk in terms of our general life, to exfiltrate means to surreptitiously move personnel or material out of an area under enemy control. In terms of computer science, Data Exfiltration is the unauthorized removal of data from a network, e.g. leakage of archives, passwords, additional malware and utilities, personally identifiable information, financial data, trade secrets, source code, intellectual property, etc. For a hacker, it is easy to move things in a box, e.g. a RAR file, ZIP file, or CAB file. Data exfiltration via outbound FTP or HTTPS is most common these days.
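As a toy illustration of why bulk exfiltration over outbound channels is detectable, here is a hedged sketch of the simplest volume-based heuristic (the hosts, numbers, and threshold are all invented for this example; real DLP tooling is far more involved):

```python
# Illustrative sketch (assumed thresholds, not a real DLP product): flag hosts
# whose outbound transfer volume is anomalously large -- a common first-pass
# heuristic for spotting bulk data exfiltration over FTP/HTTPS.
outbound_bytes = {
    "10.0.0.5": 120_000,
    "10.0.0.9": 9_800_000_000,  # ~9.8 GB leaving the network
    "10.0.0.7": 450_000,
}

THRESHOLD = 1_000_000_000  # 1 GB per day, an assumed policy limit

def flag_exfiltration(traffic, threshold=THRESHOLD):
    """Return the hosts that sent more than `threshold` bytes outbound."""
    return [host for host, sent in traffic.items() if sent > threshold]

print(flag_exfiltration(outbound_bytes))  # → ['10.0.0.9']
```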
https://medium.com/data-analytics-and-ai/what-is-data-exfiltration-b255101e9d84
['Ella William']
2019-06-07 11:23:11.076000+00:00
['Cybersecurity', 'Data Science', 'Big Data', 'IoT', 'Analytics']
4,585
How to use the Style Transfer API in React Native with Fritz
Fritz is a platform that’s intended to make it easy for developers to power their mobile apps with machine learning features. Currently, it has an SDK for both Android and iOS. The SDK contains ready-to-use APIs for the following features: Today, we’ll explore how to use the Style Transfer API in React Native. I was only able to develop and test on Android (no Macs here!) and got a working application. The Style Transfer API styles images or video according to real art masterpieces. There are 11 pre-trained artwork styles, including Van Gogh’s Starry Night and Munch’s Scream, among others. The app we’ll be developing allows the user to take a picture and convert it into a styled image. It will also allow the user to pick the artwork style they wish to apply to the image. The app will contain a Home page, where the user can pick the art style. It will also include a separate Camera View, where the user captures the image. Note: The following tutorial is for the Android platform only.

Prerequisites

React Native CLI: run npm i -g react-native-cli to globally install the CLI

Since there is no default React Native module for Fritz, we’ll need to write our own. Writing a native module means writing real native code to use on one or both platforms.

Step 1 — Creating the RN app and installing modules

To create the app, run the following command in the terminal:

react-native init <appname>

Move into the root of the folder to begin configuration. For navigation we’ll be using React Navigation, and React Native Camera for the Camera View. To install both dependencies, run the following command in the terminal:

npm i --save react-navigation react-native-camera

Follow the instructions here to configure React Navigation for the app. We’ll need to install react-native-gesture-handler as well, as it’s a dependency of React Navigation. Follow the instructions here to configure the React Native Camera for the app.
We can stop at step 6, as for this example we will not be using text, face, or barcode recognition. Step 2 — Including Fritz SDK in the app First, we need to create a Fritz account and a new project. From the Project overview, click on Add to Android to include the SDK for the Android platform. We’ll need to include an App Name and the Application ID. The Application ID can be found in android/app/build.gradle , inside the tag defaultConfig . Upon registering the app, we need to add the following lines in android/build.gradle : allprojects { ..... repositories { ..... maven { url "https://raw.github.com/fritzlabs/fritz-repository/master" } //add this line } } Afterward, include the dependency in the android/app/build.gradle : dependencies { implementation 'ai.fritz:core:3.0.2' } We’ll need to update the AndroidManifest.xml file to give the app permission to use the Internet and register the Fritz service: <manifest xmlns:android="http://schemas.android.com/apk/res/android"> ..... <uses-permission android:name="android.permission.INTERNET" /> <application> ..... <service android:name="ai.fritz.core.FritzCustomModelService" android:exported="true" android:permission="android.permission.BIND_JOB_SERVICE" /> </application> </manifest> We then need to include the following method within the MainActivity.java : import ai.fritz.core.Fritz; import android.os.Bundle; //import these two as well public class MainActivity extends ReactActivity { ..... @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // Required by Android before any other work // Initialize Fritz Fritz.configure(this, "<api-key>"); } } Step 3 — Create the Native Module Since the SDK only supports iOS and Android, we’ll need to make the native module. To get a better understanding of this, refer to the docs here: To make an Android Native module, we’ll need to make two new files. They will be within the root package of the Android source folder. 
FritzStyleModule : This contains the code that will return the styled image FritzStylePackage : This registers the module so that it can be used by the JavaScript side of the app. FritzStyleModule The React method being used has a success and error callback. The chosen artwork style and a base64 of the original image are sent to the method. The error callback is invoked when an Exception is thrown and returns the error. The success callback returns a base64 encoded string of the converted image. On a high-level, the above code does the following: Initializes the style predictor with the user’s choice of artwork. Converts the original base64 image into a Bitmap . Creates a FritzVisionImage , which is the input of the style predictor. Converts the FritzVisionImage into a styled FritzVisionStyleResult , which is the converted image. Gets a Bitmap out of the FritzVisionStyleResult . Converts the Bitmap into a base64 to be sent back to the JavaScript side of the app. FritzStylePackage This class is used to register the package so it can be called in the JavaScript side of the app. This class is also initialized in the getPackages() of MainApplication.java : @Override protected List<ReactPackage> getPackages() { return Arrays.<ReactPackage>asList( new MainReactPackage(), ......, new FritzStylePackage() //Add this line and import it on top ); } Now on to the JavaScript side of the application. Step 4 — Creating the UI To do this, we’ll be creating/updating the following pages: Home.js — Display the picker of artwork styles and the final result. CameraContainer.js — Display the camera view to capture an image. FritzModule.js — Export the above-created Native module to the JavaScript side. App.js — Root of the app which includes the navigation stack. Home.js This page contains: Text to display the app description. Picker to allow the user to select the artwork style of the converted image. Button to redirect the user to the Camera page. 
It will pass the selected artwork style to the CameraContainer. If the navigation prop contains the original and converted image, it will be displayed. The page currently looks like this: Home page before taking a picture CameraContainer.js The CameraContainer page displays a full-page CameraView. It includes a button to take the picture at the bottom of the page. Upon clicking it, a spinner will be displayed to convey to the user that an action is taking place. The image is first captured using the react-native-camera method takePictureAsync() . The original image is then saved into the state of the page. The setState method is asynchronous and thus has a success callback that runs after the state is set. The getNewImage method from the FritzModule is run within this success callback. The original image and the filter (artwork style) picked from the Home Page are passed to the method. On the error callback, an alert is displayed to the user to convey that an error has occurred. On the success callback, the new styled image is saved into the state. On this second setState method’s success callback, the user is redirected to the Home page with both the original and styled images. CameraContainer on emulator FritzModule.js import { NativeModules } from 'react-native'; export default NativeModules.FritzStyle; This page exposes the Native module, FritzStyle . This allows the JavaScript side to make calls to the method getNewImage . App.js import React, { Component } from 'react'; import Home from './src/Home'; import CameraContainer from './src/CameraContainer'; import { createStackNavigator, createAppContainer } from 'react-navigation'; const AppNavigator = createStackNavigator({ Home: { screen: Home }, Camera: { screen: CameraContainer } }); const AppContainer = createAppContainer(AppNavigator); export default class App extends Component { render() { return (<AppContainer />); } } First, we create the Stack navigator with the Home Page and Camera View. 
The key ‘Home’ is used when navigating to the Home Page, and the key ‘Camera’ when navigating to the CameraContainer. The AppContainer becomes the root component of the App. It’s also the component that manages the app’s state. Now, to see the entire app in action: To recap, we have: Created a React Native app, Included the Fritz SDK in it, Created a Native Module that makes use of the Style Transfer API, and Designed a UI to display the styled image. Find the code repo here. For native iOS or Android implementations of Fritz’s Style Transfer API, check out the following tutorials:
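To make the bridge contract and capture flow concrete, here is a hedged, framework-free JavaScript mock. The real getNewImage lives in the Java native module and calls the Fritz SDK; everything else here (the style keys, captureAndStyle, and the camera/navigate parameters) is an assumed stand-in for illustration, not the tutorial's actual code.

```javascript
// Hypothetical mock of the native bridge contract described above.
// The real FritzStyleModule decodes the base64 image to a Bitmap, runs the
// Fritz style predictor, and re-encodes the styled Bitmap as base64.
function getNewImage(filter, base64Image, errorCallback, successCallback) {
  const knownStyles = ['starry_night', 'the_scream']; // assumed style keys
  if (!knownStyles.includes(filter)) {
    errorCallback(`Unknown style: ${filter}`);
    return;
  }
  // Stand-in for the styled result; the mock just echoes the input.
  successCallback(base64Image);
}

// Sketch of the CameraContainer capture flow: capture, style, navigate home.
async function captureAndStyle(camera, filter, navigate) {
  const { base64 } = await camera.takePictureAsync({ base64: true });
  return new Promise((resolve, reject) => {
    getNewImage(
      filter,
      base64,
      (err) => reject(new Error(err)),
      (styled) => {
        // Return both images to the Home page, as the tutorial describes.
        navigate('Home', { original: base64, converted: styled });
        resolve(styled);
      }
    );
  });
}
```

In the real app, camera would be the react-native-camera ref and navigate would come from React Navigation's props; the mock simply shows how the success/error callbacks thread the base64 strings through the flow.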
https://medium.com/free-code-camp/how-to-use-the-style-transfer-api-in-react-native-with-fritz-e90bc609fb17
['Sameeha Rahman']
2019-04-02 20:53:56.801000+00:00
['Machine Learning', 'Mobile App Development', 'Technology', 'React Native', 'Programming']
4,586
Pitch Deck
So I threw this pitch deck together a little while back, wanting to share it as part of an application for an accelerator program. I’ll just go ahead and post it online, since no one reads these things. Oh yeah, all inquiries please direct through the contact portal on our website at automunge.com. Cheers.
https://medium.com/automunge/pitch-deck-7d9ab80b4ba1
['Nicholas Teague']
2019-11-20 21:32:59.636000+00:00
['Data Science', 'Entrepreneurship', 'Machine Learning']
4,587
Triggering the Dark Night of the Soul
I woke at 2.30 again this morning. It is now 3 am and I am struggling to get on top of the ritual thought trains that haunt me when I have been triggered. Currently my wrists are throbbing, asking me to slice them open and a narrative is forming in my head in the form of a simple prayer. Dear God please let me go soon. Please send me a condition that ends it for me now so that I don’t have to give into my wrists and hurt those who love me, for whatever reason they find appropriate, since I deeply know how much they are mistaken and have not yet woken up to my basic unloveableness and unworthiness to exist. I know this routine so well. It is so exhausting to go through it yet again — of being triggered into this routine by thoughtlessness, carelessness, emotional illiteracy and general human fallibility. I know I should be able to rise above it and see it for what it is — a stupid mistake. I want to shrug it off and say it doesn’t matter. But, for now I must go through this cycle one more time, yet again. Trying to find ways of reducing its power and unwrapping its stranglehold on my life. It doesn’t matter what caused it. What matters is that it has crushed me again, sucking the joy out of me like a leak in my spacesuit of personal defence. Who am I kidding? Since my PTSD complete breakdown I have no defences in place anymore. They were all maladaptive from a childhood of abuse and thus served me ill anyway. They all went so that I could be free of trauma. They left me wide open and raw, emotionally stripped back and experiencing life in its immediacy. Most of the time that is a joyful and wonderful connectedness to the exact present moment that I wouldn’t give up on for anything. It makes every moment into a ‘this is it’ moment, that zen moment of pure realisation of joy right here in this breath. We can’t have everything. I have this wonderful gift of freedom from illusion. Except when I get triggered! 
Is it to remind me of where I came from, to keep me humbled and in place? Or perhaps to challenge me to release myself from this once more, to make sure I am not complacent in my newly found liberation? Perhaps a reminder of what life still feels like for most, still burdened by their defences against the injustices of this world, from which I am largely protected by privilege and having my wish granted of ‘just enough’ materially to live without fear in that quarter? Can one ever live without fear anyway? Is this just the form my fear must or will now take in life? Surely my greatest fear is that I will for some reason lose my beloved ones, my sons and grandson and most of all my soulmate? It seems strange that my greatest fear is that I must continue to live for now. That my prayers are to go now, to be done with this life, to let me finally say ‘I did my best and I am too tired to fight this anymore’. What am I tired of? I have written about that elsewhere, alongside the joy I feel in my life too. They are both extreme ends of the spectrum. It seems I am not allowed to waddle comfortably somewhere in the middle of the joy/despair spectrum. Life after all is just a series of spectrums, rather like my ADHD and other spectrum disorders. Life is a spectrum disorder. Haha, that has made me laugh at this idea and myself. Perhaps this is the breaking through point for me with this occurrence of triggering. But will I be able to go back into the scenario which triggered it? I doubt it. What happens is that my body says ‘ok you’re safe here at home writing about this, but I won’t let you go back in case they trigger you again’. If I ignore it, try to pretend nothing happened, return to normal and carry on, it just triggers me again, which means emotional lockdown and physical rigidity to pain levels that are quite high, even though I am used to them. If I fight against it, I pay with that lockdown and must medicate and wait for it to pass. 
Which means I am out of action for other things too. If I give into it, my life shrinks a little more than it already has done. If I do what I am doing now and explore it, get it out into the open and say to it ‘is this really how you think I should live, be and feel?’ If I do this act of exposure often enough, will it eventually decide to agree with me and stop trying to control me? ‘Shine light on your trauma and it will dissolve.’ This is the summit of advice from all quarters, and it’s true, it does dissolve, slowly. But this last stage is taking forever and stands out as more painful in contrast to the joy I feel most of the time. Stop whingeing, perhaps, be glad for the joy I feel and accept this last level of burden of trauma from the past. I consider Tonglen, a Tibetan meditative practice where I absorb the suffering of the world and breathe out that very joy in its place. I practice this against the injustices of the world, the petty cruelties of wealth and corruption and damaged souls being given power they do not deserve or know how to use wisely, only self-servingly. I ache for the raw suffering of others and the casual thoughtless cruelty that causes it. I weep for a world that is destroying itself and cheer for those who would act to wake the rest of us up. I do, daily. I challenge that world and the sadness it sows in me for others. I challenge those damaged parental voices too. I do it through my writing. I do it in my meditation. I do it in my approach to life. I work at being fearless, courageous, brave. Even just to go and talk to people, I am being all of those things, though they will never know that. It is easier for me to stand on stage and give a lecture or performance than it is for me to talk to people in a more intimate way, especially in public or in groups. What I really want is to live in a bubble of safety with my soulmate and my sons, to have nothing more touch any of us. We have all struggled with those legacies. 
What I really want is an end to the terrors of my childhood being re-enacted again and again through my traumatised nervous system. What I want is for this to end! Either by ending life or by ending the triggering process. I know by the end of writing this I will have become determined once again to get through it and live on. I know by the end of this I will have shifted the vice grip of this process, this routine my body deems it necessary to put me through once again. I know that the love I feel for my family and my soulmate husband will prove the stronger force in the end. I know that if I return to my bed my husband will wrap his sleepy arms around me and hold me until I can cry it out of me and let it go. I know that writing this and publishing it is my way of saying ‘hang on in there’ to myself and to others who may feel like this, but also to say: do not judge others. If you are not experiencing triggering like this, do not judge those who may be; you cannot tell from the outside. This too will pass, eventually! But will I ever be able to go back again? To any of the long list of triggering events, locations and situations? Who knows? Is it worth even trying? Perhaps I should just move on again instead, or is that running away still, is that why I ask for the end, to avoid that? Perhaps this is the turning point when I stand my ground and say ‘no I will not run and will not be triggered any more’? Can I do that, can any of us who have been deeply traumatised in our pasts actually fully achieve that, or have I got as far as it is possible to do so? Who knows? But I think I just gave myself the reason to keep going today this time, and to return to my bed and claim my cuddle. Thanks for listening. xxx
https://sylviaclare.medium.com/triggering-the-dark-night-of-the-soul-7eb39a446da0
['Sylvia Clare Msc. Psychol']
2019-06-20 12:20:39.762000+00:00
['Ptsd Awareness', 'Trauma Recovery', 'Mental Health', 'Love', 'Self']
4,588
8 Things That a Mobile App Can Do That Your Website Can’t
Modern day businesses often have internal debates about the importance of building different digital assets, specifically when it comes to mobile apps versus websites versus web apps. A lot of people feel you don’t need a smartphone app, and that you just need a website that looks fine on mobile devices. Others claim mobile apps have advantages that cannot be offered by a website. Who’s right? In this article, we will explore the differences between these three types of software to identify where mobile apps set themselves apart from web-only products. You might assume that web apps and mobile apps are the same in nature, but in reality, they are not. They aren’t just different in terms of their structure; they are also built for different classes of user. To start, let’s review the structural differences between progressive web applications and websites. Progressive Web Apps Defined A progressive web app (PWA) is essentially a version of a website that operates correctly, fluidly, and in a user-friendly way on mobile devices. Specifically, web apps work like downloadable apps, but all from the convenience of the browser of your mobile device or computer. In this way, they fall between websites and mobile apps, as they act like websites, but provide an experience that is comparable to native apps. Native Apps Defined Native apps are apps created for a specific platform, such as iOS for the Apple iPhone or Android for any Android-based smartphone. They are usually downloaded and installed via an app store and have access to device resources such as GPS and the camera functionality. Native applications live on the device itself and run on it. Some examples of popular mobile apps are Snapchat, Instagram, Google Maps, and Facebook Messenger. Unlike web apps, which are accessed through the internet browser and adapt to any computer you are on, native apps are constrained to the device that they are running on. 
Some web apps are dynamic and interactive enough to adjust according to the size of different displays, but most are static. Why Build a Mobile App? So how is it that native mobile platforms can offer different functionality than web interfaces? Well, by definition, there are multiple useful features that are exclusively available on mobile apps. These features include the following, which we’ll discuss in detail below: Use of device-specific features Ease of personalization Offline usage Easier user access Better speed Push notifications Brand visibility Design freedom Use of device-specific features When using smartphone applications, users can access device-specific functions such as screenshot, camera, dictionary, GPS, autocorrect, and touch screen (which is not present on most desktops or laptops). Screenshots in particular are a highly common use case, as they are very simple to take and save for future use when reading an article, watching a fashion show, or capturing some other on-screen event. The simple zoom in and zoom out functions offered by touch screens enable easy cropping and focus. These features can reduce the time to perform common tasks and boost convenience. Ease of personalization Mobile apps give you the liberty to personalize the user experience on the basis of their preferences, location, usage patterns, and more. With smartphone applications, it’s easy to present consumers with a highly personalized interface. In addition, a mobile app can also allow users to customize the app’s appearance as per their preferences. Offline usage Offline usage is not especially easy to implement, but it may be the most significant advantage offered by mobile applications. Although mobile applications usually require internet access to perform many of their duties, they may still offer basic content and features to users in offline mode. For example, consider health and wellness applications. 
These apps can provide functionality such as a diet plan, calorie chart, body measurement, water intake alert, and many more, even without the assistance of an internet connection. Easier user access Mobile users spend 86% [1] of their time on mobile apps and just 14% on mobile websites. Moreover, the total time consumers spend on mobile applications is growing, rising by 21 percent in one year. There is no doubt that people invest much of their time on social media applications and gaming applications, which are often native mobile apps. Better speed A well-designed mobile app will certainly perform at a much faster speed than a mobile website. In comparison to websites, which typically use web servers, applications generally store their data locally on mobile devices. For this reason, data retrieval in mobile applications is fast. In addition, by storing user preferences and using them to take proactive actions on behalf of users, apps can save users time. Smartphone applications should function more efficiently on a technological level, too, as websites on smartphones use JavaScript code (typically much less efficient than native code languages). What happens in the background is a puzzle to most users, so the faster app type — in this case, mobile apps — wins this category from a UX perspective. Push notifications There are two forms of smartphone app alerts: push notifications and in-app alerts. They are both attention-grabbing options that connect in a relatively non-invasive way for smartphone users. In-app alerts are alerts that can only be accessed by users when they open an app. On the other hand, push notifications are displayed to users regardless of the operation they are currently performing on their mobile device. This is a powerful way to grab the user’s attention; in fact, there have been some cases where push notifications delivered click-through rates of 40 percent or higher. 
It goes without saying that the notification campaigns have to be thoughtfully prepared. Users will resent being constantly pinged by notifications that don’t deliver urgent or relevant information. Technically, push notifications for progressive web apps can also be implemented by utilizing third-party services, but these services are currently in a preliminary stage and have some limitations. Brand visibility Consumers devote a large portion of their time to mobile app interaction. It’s fair to assume that many people, every day, seek out a company’s app icon on their smartphones. For app makers, this daily experience can be used as a promotional opportunity. [2] Even if people do not use a smartphone app actively, they will be reminded of it whenever they see their home screen. The app icon works as a mini-ad for the brand. Design freedom Even with all the technical advances in web design, to perform even the most basic functions, mobile websites have to rely a lot on browsers. Mobile websites rely on browser features to function, such as the “back” button, “refresh” button, and address bar. None of these limitations apply to mobile applications. Based on advanced gestures like “tap”, “swipe”, “drag”, “pinch”, “hold”, and more, a mobile app can be programmed with a lot of elaborate functions. These gestures can be used by apps to provide creative features that can help users complete their tasks more intuitively. For instance, using a swipe gesture, an app can allow users to move to the next or previous step. Mobile Apps Offer Unique Advantages Websites may capture a broader range of traffic, but for businesses that can make use of the above features, a smartphone app is essential. Native apps and websites can work together in a satisfying way to build an omnichannel user experience that draws user traffic and results in tremendous user growth. This is true across a wide array of business types. 
If you’re an e-commerce store, why not encourage visitors to buy through an app as well as through your website? Churches can use mobile apps to release updated sermon notes before the service and then publish audio and video recordings. Restaurants can provide updated menus, directions, and online ordering. Magazines can send push alerts when new articles are published. Factor both web and mobile properties into your customer engagement plan, instead of choosing one or the other.

Crowdbotics specializes in converting websites and web apps into mobile apps. We offer custom, cross-platform app builds that let companies get their content to customers on all of their devices. If you’re looking to expand your marketing strategy to include omnichannel engagement, get in touch with a Crowdbotics expert today.

Sources:

[1] https://www.forbes.com/sites/ewanspence/2014/04/02/the-mobile-browser-is-dead-long-live-the-app/#5c5ef237614d

[2] https://www.forbes.com/sites/allbusiness/2014/11/17/heres-why-your-business-needs-its-own-mobile-app/#30b288b2327f
https://medium.com/crowdbotics/8-things-that-a-mobile-app-can-do-that-your-website-cant-faeb695f7601
['Allah-Nawaz Qadir']
2020-10-28 16:20:58.948000+00:00
['Mobile App Marketing', 'Mobile App Development', 'Website Development', 'Application Development', 'Crowdbotic']
4,589
What Do 90-Somethings Regret Most?
My preconceptions about older people first began to crumble when one of my congregants, a woman in her 80s, came into my office seeking pastoral care. She had been widowed for several years but the reason for her distress was not the loss of her husband. It was her falling in love with a married man. As she shared her story with me over a cup of tea and Kleenex, I tried to keep a professional and compassionate countenance, though, internally, I was bewildered by the realization that even into their 80s, people still fall for one another in that teenage, butterflies-in-the-stomach kind of way.

One of the strange and wonderful features of my job as a minister is that I get to be a confidant and advisor to people at all stages of life. I’ve worked with people who are double and even triple my age. Experience like this is rare; our economic structure and workforce are stratified, and most people are employed within their own demographics. But because I’m a minister in a mainline denomination with an aging base, the people I primarily interact with are over the age of 60. I came into my job assuming that I, a Korean-American woman in my mid-30s, would not be able to connect with these people — they’re from a completely different racial and cultural background than me. It did not take long for me to discover how very wrong I was.

We all have joys, hopes, fears, and longings that never go away no matter how old we get. Until recently, I mistakenly associated deep yearnings and ambitions with the energy and idealism of youth. My subconscious and unexamined assumption was that the elderly transcend these desires because they become more stoic and sage-like over time. Or the opposite: They become disillusioned by life and gradually shed their vibrancy and vitality. When I initially realized that my assumptions might be wrong, I set out to research the internal lives of older people. Who really were they, and what had they learned in life?
Using my congregation as a resource, I interviewed several members in their 90s with a pen, notebook, a listening ear, and a promise to keep everyone anonymous. I did not hold back, asking them burning questions about their fears, hopes, sex lives or lack thereof. Fortunately, I had willing participants. Many of them were flattered by my interest, as America tends to forget people as they age.
https://humanparts.medium.com/what-its-like-to-be-90-something-368780082573
['Lydia Sohn']
2019-12-20 18:21:14.810000+00:00
['Happiness', 'Wisdom', 'Wellness', 'Culture', 'Age']
4,590
How Constraints Potentiate Creativity & Innovation
One of the most interesting aspects of the Design Thinking process is not only the strengthening of the relationships created within the Product Design team (which includes not just Designers but an entire ecosystem in which Developers, Product Owners, Customer Support groups, Marketing professionals, and Inventory Managers, to name a few, collaborate), but also, of course, those built with the Users/Clients, who become part of the product journey, shaping not only the solution itself but how that product eventually morphs and continues to live past launch/release cycles.

Another rewarding aspect of the process has always been the realization that whatever is uncovered, tested, and refined eventually has to be built within a series of constraints. These constraints can be of multiple natures, be they financial, platform, resource, or timeline related, among others, and they are the focus of this article.

Constraints. Every Product Design initiative has a series of constraints associated with it, something that should be clearly laid out when the process starts. Further constraints may be added to the scope of the initiative as time goes on, but initially there is already a list of topics that all participants in the Design Thinking process need to be aware of. Those general constraints form a trifecta: timelines, resources, and finances.

Expanding on these fundamentally operational constraints: timelines are part of the DNA of any project. The need to release something to market, an MVP that is viable and can be expanded upon, creates challenges for the Design process itself in running effective Research, Validation, Iteration, and Testing exercises.
All these phases provide different types of input into the solution being shaped, but Designers have to be strategic about how they devote time to each one (and how they bring different players into these phases in order to gather the information they need to keep the process moving along).

Timelines are also deeply entwined with resource allocation. The availability of team members across different specialties allows many of these processes and initiatives to be conducted more rapidly, and enables iterative cycles, for instance, to be produced faster. Resource availability is also tied to different layers of expertise, not just from a Design perspective but also from the other professionals involved in the process, who need to be available to participate in it. That includes, for instance, professionals in Product Ownership, Development, Customer Support, Marketing, Sales, and Inventory Management; the list goes on, but in summary, they all have a pivotal impact on the solutioning effort, and their contribution should always be accounted for. Simply put: without enough resources, keeping to the established timelines can become a herculean task in itself.

Deeply entwined with the previous two factors are, of course, the financial constraints, which have implications across the board. It’s important to consider the budget estimated for a particular initiative, since it informs not only the timeline devoted to it and the team resources that can work on it, but also, by extension, a series of other factors tied to the operational side of the process: tools for Research, Validation, Iteration, and Testing, for capturing analytics, for usability testing, and of course for Development.
Everything has a quantifiable cost that needs to be identified, since it informs the viability of what is being built, the timeline available to build it, and who will build it. Designers should always consider the practicality of these factors when tackling any project they embark upon, particularly as they envision schedules and outputs for every phase of the process. These three constraints are the baseline of any project, but here are a few others that complement them and always need to be considered when devising a solution in a Design Thinking process.

Technology Constraints — when initially identifying the problem, and clearly understanding the tasks users want to perform and their expected outcomes, there is an evident realization of how users will experience something: that is, which devices users will interact with in order to satisfy their needs and get their tasks done. Identifying these platforms, across multiple ecosystems and strategies, is fundamental, since from early on, Development, Product Management, and Customer Support partners, among others, can highlight and provide additional context for the goals that need to be addressed, but also the limitations that may derive from certain platforms on which the solution will need to exist (or, for that matter, from a lack of professionals available to work on those platforms). Further downstream, understanding the technological constraints is also fundamental in devising alternatives to interaction paradigms that, for limitations of different natures, can’t be applied, or in recognizing that the creation of new paradigms may simply be ineffective within the technological constraints that exist.
The pandemic has also brought forth other types of constraints, while of course lifting others; when it comes to technology itself, it demonstrated that co-located collaboration was no longer an option, and tools such as Miro, Whimsical, and Mural had to take the lead in allowing teams to effectively discuss, collaborate on, and document ideas. All this to say that when it comes to technology, even though it changes rapidly, it’s fundamental that all partners in the process understand its constraints at all times. That indeed creates a series of guidelines everyone has to abide by, while still allowing for the creation of solutions that are in tune with that same ecosystem.

Contextual Constraints — it goes without saying that every solution that is devised exists within a certain universe of requirements. These requirements are of different natures, but there are quite a few that Designers and their partners can never forget: understanding the implications of legal, privacy, and ethical constraints is of the utmost importance. In the past I’ve worked on applications in the telecommunications arena, where a baseline feature we’ve all come to expect, such as the recording of a web conference, had to be thoroughly researched for its legal implications. Creating features and being innovative within a product should always be nurtured and incentivized; however, one must always make sure that whatever is brought forth is aligned with the context, the industry, and the users themselves who will operate that product or feature. This, of course, loops back to the Research phase itemized earlier in the article and the importance of all the phases for the Design Thinking process to be effective. They all integrate with each other, and they all serve a purpose.
These five constraints aren’t meant to be stifling; rather, they become beacons, assisting teams in understanding where they operate and how the solution they are producing will effectively resonate with its audiences.
https://uxplanet.org/how-constraints-potentiate-creativity-innovation-65fbfc0e2aa1
['Pedro Canhenha']
2020-12-28 08:51:06.150000+00:00
['Design', 'Product Design', 'UX', 'Innovation', 'Design Thinking']
4,591
Yes, Astrology Really Can Help You Progress In Therapy
Yes, Astrology Really Can Help You Progress In Therapy

(Whether you believe in it or not.)

Photo by Josh Rangel on Unsplash

I once had a brief flirtation with a married man after losing my husband to brain cancer. Then, I was dumped. I was destroyed. I couldn’t get over it. This is when the desperate (and, on Medium, many say the weak-minded) turn to astrology. Most people who post about astrology here (even the editors) find themselves the brunt of rude comments. Even when we point out that disciplines such as astrology and tarot are making their way into people’s therapy sessions, sometimes with helpful results. I count myself among that number, so I’ve decided to take it upon myself to diagram exactly what I studied and how it helped.

Stage 1: There’s a Message Here

I’m sure we’re all acquainted with the feeling, after a painful breakup, of longing for the person back again and looking for any sign that this could happen. I had been in the habit, now and then, of buying a summary of yearly horoscope transits and what they mean off of a website called astro.com. I was making a bit more money at that time, and these weren’t too expensive, so I purchased one for the next three years.

I noticed something. These reports made a big deal out of playing up my controlling tendencies. I found a site online that did free tarot readings, and much to my distress, the tarot was saying the same thing, in a pretty harsh way. I didn’t think of myself as “controlling.” I thought the woman I got dumped for was the controlling one! But, as I received these suggestions over and over, and I got more and more upset about them, I had to ask myself: Am I really a controlling person? That made me look honestly at my behavior in the relationship. Turns out, I was controlling, always seeking to get this guy to give me the kind of life and the kind of support I wanted … and I really needed to see that. And tell myself the awful truth about it.
Who cares whether astrology is “correct” or not? The point is, I had to ask myself honestly about this aspect of my character. And, sad to say, it turned out there was a reason I kept getting the message, “Stay out of power and control.”

Stage 2: How My Chart Diagrams the Nuts and Bolts of My Therapy

Now that I saw that message, I wondered if there might be more to find. I’ll skip some of what happened in between and go straight to where I became intensely interested in the following pattern in my chart, outlined by the triangle with two long green sides and the short blue bottom:

Generated for free on astro.com.

This long skinny triangle, shaped like a witch’s hat, is called a “yod.” (As it turns out, they don’t call it the “Finger of God” for nothing.) The little character at the long end, the cross with the curlicue, is Saturn. As I read more about Saturn, I felt worried, because Saturn is known in astrology as the planet of hard knocks. It’s the planet of restriction, trial, and painful lessons. I felt like I’d had just about enough of those. As it turned out, this yod explained pretty clearly why I was in therapy and what I was going to have to accomplish there. I’m still struggling with some of it, but the comforting thing is, at least I know why now and I have a little bit of hope.

At the bottom of my yod, you see Neptune (it looks like a trident) in House Three, in the sign of Scorpio. This tends to reflect creativity (Neptune) in a deep, penetrating way, digging up truths one perhaps doesn’t want to see (Scorpio), in the area of knowledge, writing, art, and communication (House Three). It’s connected to Uranus, the little character with arms and a little round head, upside down, in House One — House of the Self — in Virgo, the sign of service. Uranus is original and independent. It’s associated with events that break up old patterns in your life because you really needed a change.
My reading of it always includes the phrase: “I gotta be me.” So I really gotta be me, according to this chart, and I want to do it in a way that serves other people. At the base of a yod, the two planets at the bottom shake hands and want to help each other out. Would it surprise you to learn that after a childhood with a BPD mother, I want to write novels and articles around the theme of mental illness and emotional problems, with an eye toward reaching and helping others? Yep, so far this chart sounds like me. But, it’s a classic yod. In astrology, the planet at the far angle of the yod represents something that’s keeping the two bottom planets from their goal.

Saturn Is Standing In The Way. (Booga! Booga! Booga!)

So, since at this point in my life, my writing career is moving at the speed of a dying snail, am I interested to find out what this Saturn, in the lingo of astrology, is supposed to represent? You betcha. To help out, I compiled these:

Bailey’s Postulates for Understanding the Astrological Yod

(You’re going to need these to understand how my chart helped me.)

1.) The yod appears to be a description of a problem, issue, or conundrum in the life. The two bottom planets seem to want to do something, but the apex planet reflects something that keeps getting in the way. (Every astrologer says this much.)

2.) (Several astrologers have written articles about this one.) If another planet sits in between the two sextile planets, this planet makes the yod a “boomerang yod,” and the “boomerang” planet describes what to do to solve the dilemma described by the yod. (What that would look like is if a fourth planet sat exactly in between Uranus and Neptune, up there. Not present … until you place the chart of the guy who dumped me directly over mine. He has a yod that’s right over mine, facing in the opposite direction. Therefore, the apex of my yod forms the boomerang of his and vice versa. That’s a whole different topic right there.)

3.)
Any planet passing by transit or progression over the apex of the yod reflects a time in life when circumstances cause the issues described by the yod to come to the forefront in life. (All astrologers agree on this one. We’re coming back to it at the end.)

4.) Anything attached in a stressful aspect to the apex of the yod is commenting on how the problem came to be in the first place. (This, I haven’t seen any astrologer write up yet, but it appears to me to be the case.)

4a.) If you’re having trouble figuring this out in a chart, look up where the asteroid Chiron is by house and sign. Many times it will clarify things a lot.

It turns out that some of the most helpful astrology books are written by counseling therapists who also have an interest in astrology. I learned the most from Saturn: A New Look at an Old Devil. Author Liz Greene’s first career was as a counseling therapist. She holds doctorate degrees in psychology and history and is a qualified Jungian analyst. She also holds a diploma in counseling from the Centre for Transpersonal Psychology in London, and a diploma from the Faculty of Astrological Studies, of which she is a lifetime patron. The following points are taken from this book.

You need to look at the drawing again to see what I’m talking about. So that you don’t have to scroll up, here it is once more:

Again, astro.com.

Attached to Saturn by red lines, all the way at the top, are the Sun (astrology draws that as a bull’s eye), the Moon (obvious), Mars (the “male” symbol), and Mercury (the character that looks like it has horns), all at a ninety-degree angle from Saturn. This is known as a “square.” Squares are known to be stressful in astrology, hence the red lines. These are the planets we’re talking about. Let’s look at what they symbolize in my chart, and how they tell us my yod situation arose.
In order to process all this, I had to take a good hard look at my childhood, including some things I always believed didn’t affect me much, which, as we all know, any therapist worth their salt would want you to do. After I finished this, I emailed it to mine as part of my therapy homework. My therapist was pretty pleased with it. All professional astrologers tell you you have to read The Whole Chart. (This is why those who only read sun signs in the newspaper believe astrology has to be crap.) In my case, this is easy, as every damn thing is connected to Saturn, usually by a stressful aspect. (Lucky me.) The astrologer Alice Portman read my chart and told me this big yod with Saturn at the long end creates a feeling of “What’s the use?” in my life. So, what does Saturn mean in a horoscope chart? According to Liz Greene, Saturn: — Reflects your struggle to build an ego and protect yourself. — Indicates an area of the personality where the person remains childlike (or childish!) because they didn’t get what they needed in childhood for that area to develop into mature adult understandings, attitudes, and behavior. It’s necessary for the person to grow up in these areas. (So, basically, as we’ll see, my entire personality is infantile and childish. Great news.) — When studied in-depth, Saturn offers a detailed picture of what you don’t want to see about yourself. — Saturn is a measuring stick of the individual’s power of self-determination; it reflects solutions that, if you find them, can become a permanent part of your conscious self, through self-motivated effort. — You’re closed off from things you want or need in life until you get these specific tasks done. (Sounds like the reason Saturn is the apex of my yod.) — Saturn denotes areas where you’re supposed to become a good parent to yourself first. Then you can help other people. (Which sounds like the bottom of the yod, right?) 
So, the stuff attached by squares to the yod’s apex — Saturn, in my case — describes how the problems came to be that are preventing me from achieving this thing, helping others through writing, that the bottom of my yod reflects that I passionately want to do. So, what about these squared planets? Astrology holds that the Sun represents a person’s sense of who he is. If Saturn squares your Sun in your birth chart, it’s commenting that you didn’t have much help in discovering your own identity. You have problems with creativity because you didn’t have a dad behind you to encourage you. Life as a child is really tough, and you didn’t grow up in a sense of trust that things will go well. Saturn square Sun people are either intensely ambitious, or we have no ambitions because we’re afraid of the pain of not making them come true. Greene writes that we’re offered the opportunity to become masters of our fates. If we don’t take the opportunity, we become very sad people. The Moon discusses feelings, what a person needs to feel happy, the atmosphere of their early home life and the relationship with the female parent, and any instinctive habit patterns. A Saturn square here often reflects a person who wasn’t able to express themselves emotionally in childhood. This person had to control their feelings all the time as a child, and their mother let the child down in some way. The person is lonely and needy because they never had an emotionally loving family, even though it looked like it from the outside. The child experienced a lot of harshness and duty and rules, and not a lot of warmth and love. Because of all this, the person has to become strong in isolation. Mercury talks about knowledge and communication, and how competent a person feels at these things. 
And Saturn is the planet of frustration, difficulty, and delay, so if you have a tough Mercury-Saturn aspect, writes Greene, your parents may have treated you like you couldn’t think for yourself because you were a child. Or they stifled you if you had any thought or idea that conflicted with theirs. A Saturn square Mercury feels very self-defeating. You end up sure you’re stupid because you’ve been punished so much for making mistakes. You work so slowly, out of fear, that you really do look stupid. Then people make fun of you because you look stupid, and you feel and look yet more stupid. Mars describes self-assertion and any aggressive impulses, and a Saturn square describes a person who feels frustrated, weak, and powerless. Basically, the individual had overly controlling parents and possibly physically abusive parents. The person feels like their will is ineffectual because it’s been thwarted so often, and feels like they have no control over themselves or their life. Therefore, they’re likely to pick out a weaker person, someone they can control, and use the person like tongs to interface with the world for them and provide what they think they can’t do for themselves. Um … what did I start this article writing about? Controlling tendencies. Sheesh. When I read all this, my first thought was: Gee, I guess I didn’t grow up at all. That felt pretty depressing at first, but before we even test whether all of this is accurate or not, let me emphasize this point: The important thing isn’t the accuracy of the chart. The important thing is your process of testing the accuracy of the chart. Having this chart, looking up all this information, and thinking about it forced me to do the kind of deep thinking we all need to do in therapy to heal. I could have disagreed as well, and decided all astrology is bunk. The important thing is, I would have thought about these issues and presented my thinking to my therapist. 
We would have talked (as we did) about why I agreed or disagreed, and how I feel about all of it now. As it turned out, I did find all of this to be very accurate. It led me down a garden path lined with several new books my therapist has on her shelf now and recommends to other clients. I really couldn’t express myself in childhood because my BPD mother needed validation and insisted I be just like her. I had to be her instead of me. I did have an awful childhood, with a lot of hazing at school, no dad, and a mentally ill mother. Therefore, I have no trust in life because things didn’t go well in childhood. Nobody encouraged who I was, so I’ve had a hard time believing in myself; and I do feel like life’s been too hard and I can never relax. I was the only kid I knew who had to come home from school and clean half the house every Friday night and then get up and get right to homework on Saturday morning. My mother would inspect everything I cleaned and then, rather than teaching me how she wanted it done in the first place, she would scream at me and spank me. Using trial and error, I had to vary my routine until I found the magic formula that didn’t result in a scream-and-spank session. I became what my family wanted me to be. I didn’t even know what would make me happy to become when I was a kid. I thought Mom’s likes and dislikes were my likes and dislikes; I believed her dreams were my own. Even now that I do know what I want, I have an awfully hard time drawing it out of myself. I watched my late husband, a critically acclaimed, national award-level author, struggle with book sales for years. I do think that even if I finished a novel and offered it to the world, it’s highly unlikely it would be successful. I don’t want to be crushed by that (since I’ve already been crushed by so much else.) Yeah. It all sounds like me. How many years do some people have to spend in therapy before they figure all this out? 
I saw all this in a couple of weeks once I started studying Saturn and this yod. And I realized that the absence of my dad, which I never thought affected me much, might have left me with a big piece of my personality missing: my ability to believe in and motivate myself. That was a revelation, indeed. I still struggle with depression and motivation, but insight is not something I have difficulty with anymore. My therapist believes that this sort of work had value for me because of the deep processing I had to do to accomplish it. Nonetheless, this chart does diagram what that Saturn drag is that’s keeping Uranus and Neptune, so happily shaking hands down there, from writing and promoting a novel that could have legs. So I’m going to… (Saturn square Mars) latch on to someone else — someone else’s husband — who appears successful at all the things I’m not, and try to get him to take care of me. Which I did. 4a.) When having trouble figuring out how a yod happened, take a look at Chiron. My Chiron isn’t shown in that simplified drawing above, but it’s in Aries, House Eight, close to Saturn. Astrologer Aria Gmitter says about an eighth house Chiron: “Consider that the purpose of Aries is to identify with itself. It is ‘I am,’ and the ruling planet is Mars (ambition, drive, and determination). “Chiron is the wounded healer, so it’s a wound that heals for the purpose of a lesson in growth. So, I think it’s an attack early on in a person’s life towards their identity and character. It can lead the individual to feel a loss of self-identity. “This can mean they don’t have a voice because they don’t know who they are. They may not be able to speak up for their injustices, or they could have been taught early on that they don’t matter. There’s a lack of place in the world. As the healer, I think that it’s a powerful placement for an Aries because should the person master their wound, they will learn not to delegate their identity out to others.
“They will have found that they have participated in their own self sabotaging behaviors after leaving their family of origin, and not severing ties so that they can discover themselves. It will also mean learning to be comfortable in their own skin with their imperfections, and going from a selfish view of the world to one that encompasses others, with boundaries.” And that’s what I’ve been working on in therapy. Astrology claims to be able to predict events. Whether or not it’s true, using the chart in this way can still teach you something. Stage Three: The Timing Of Events Now we can talk about Point 3, how transits show something important happening in the yod. That will be our final step in this dissection of how astrology has helped me progress in therapy. When I hired Alice Portman to review my chart, she predicted that I would hear from this man again in October 2017. (And I had learned enough astrology by then to concur; although when I teased apart the indicators that this would happen, we were looking at different things. So, there were many different significators for this event.) As I dealt with all this heartbreak, where Uranus was in the sky hovered back and forth over this yod position that Saturn was in when I was born. Ridiculous, I know, but astrology holds that when a planet, from our perspective on the Earth, moves back and forth over an important point like this, Something Important Happens. And it did. Here he was; he was back, and I had to make a difficult decision about the relationship. I’m reading a book by another therapist who’s also an astrologer: Counseling for Astrologers. One helpful hint I’m receiving here is: If you see an important formation in a chart being triggered by a transiting planet, go back to the last time in the life a planet triggered that formation and ask the client what was going on in their life then. You may learn something important. So, I did that for myself. 
Before Uranus, the last time this yod got triggered by anything was by Mars in 2009. What happened in 2009? A couple of long-lost, handicapped relatives called me out of the clear blue. My eighty-six-year-old great aunt and her developmentally delayed adopted daughter had dropped out of my life eight years before, when my aunt went into a full-blown bipolar episode and got placed in a psychiatric facility. Long-lost relatives many states south swooped in, moved my relatives to South Carolina, and sold their farm. No one in our family was in contact with these relatives. I missed my aunt, but no one knew where she was. Now, they had moved back to their old neighborhood again, and they were calling me. I was overjoyed. I had a happy marriage, but I had finally cut off my BPD mom and most of my family with her. It was sort of a lonely life. Maybe I could have some of my family back again! Maybe I could finally feel like I had a normal life. Before I knew it, I had signed on the dotted line as their power of attorney, and then my aunt stopped taking her medication, had more bipolar breakdowns, and I had to move them to assisted living. Then it turned out that my cousin was physically abusive to my aunt, and I had a real mess on my hands. These people took up all my time, and I had to abandon the novel I was writing. My husband, who was starting his fourth novel, comforted me through floods of tears as I regretfully put aside the one original novel idea I had ever had to handle these folks’ affairs. Looking at that Mars transit in 2009 and linking it up to what was happening in my life at the time, this was when the final light came on. That Saturn at the tip of the yod isn’t just the emotional problems I have as leftovers from a painful childhood. It’s the enmeshed, caretaking relationships I keep getting embroiled in because of those emotional problems. I’m so needy for family and relationships, I really am a sucker.
I didn’t realize how much of a problem getting into the wrong relationships was in terms of achieving my goals in life. It’s a central issue. Codependent relationships derail me from writing every time I get into one. (Along with the self-doubt and the fear that I’m too stupid to succeed.) However, whether transits are right or wrong isn’t the important thing here. What is important is that it got me looking at my issues in a new way. As I stared at the charts and looked up aspect meanings, letting ones that confused me rattle around in my brain — sometimes for months — I’d connect things in a new way, in sudden flashes of insight. Like that one about Saturn symbolizing my relationships, that I just wrote. I’d read that in codependency books, sure, but seeing it jump out in my chart this way, when I went back to 2009 as this counselor/astrologer suggested, I finally got it. “Omigosh, this really IS what I’m doing! Here it is, right here! If I don’t like this scenario, what can I do about it?” That is the basis of a lot of psychological insight. That moment is when you sit up and connect something you’ve read a jillion times, to you. Then, when you see a possible warning about your future, you can think creatively about it. I will forever be thankful I learned to read enough astrology to do this … even if you do think I am a kook. Studying astrology for therapy is a lot like staring at the scattered pieces of a puzzle. When you figure out how to put them together with your assigned reading, you get a mosaic: An instructive picture of your life that can jolt you into a new awareness.
https://medium.com/illumination/yes-astrology-really-can-help-you-progress-in-therapy-bec3699935a3
['A. Nonymous']
2020-09-14 01:19:35.071000+00:00
['Psychology', 'Opinion', 'Self Improvement', 'Culture', 'Astrology']
Soul-Seeking
Photo by xandtor on Unsplash

I look in my mirror much more these days. I notice my eyes — the sparkle grows. I notice the lines on my face — all there because I have laughed my way through Life — The Good, The Bad, and The Ugly. I notice my hair — silvery, radiant with purple magik — daring anyone to believe I am ordinary. I look at my body — soft in all the places my daughters, granddaughters, and friends need her to be — to hold them close when the storms of Life rattle their foundations. I realized today — I rarely see another Human’s ‘form’. It might be because of my nurse’s training. Or perhaps it’s the woo-woo I carry around with me which tunes me into frequencies and vibrations — my intuition at long last kicking in. Or maybe, it’s as simple as this — living Life has changed what I see — the kaleidoscope of experience provides new colors and shapes. A wider view with a better perspective. I can’t be certain. I only know this. The ‘substance’ I notice when I encounter my fellow Humans comes from within them. It’s not the stuff their Vessels are made up of anymore. Seeing beyond another’s external presentation — beyond their Vessel in this world — is not natural for most of us. Our Mind and cultural conditioning usually kick in and we begin putting “Others” into categories. A sorting process we learned as toddlers. By color, by size, by shape, by species to make sense of Our World. If we are ever to dissolve the barricades we erect unconsciously and discover The Truth regarding the sacred connective-ness of Life On Earth — we must overcome this. We have to get past looking at the exterior form of a thing/person/place. We have to learn to see its Soul. The longer I have lived — the easier Soul-Seeking has become for me. It begins with the face we see in the mirror. When we begin to see past Our Vessel and widen our perception to accept All. The. Things we know about who we are.
It expands when we look at ourselves with love and compassion and allow it to fill up and overflow onto All. The. People who pass through our lives. It culminates when the very first inclination we have is to recognize the places we are ‘One’ with All. The. Things. Everyone. Everything. Everywhere. How different Our World would be if we could pull this off! We could see so many things! They would practically shout at us! Can you just imagine? The Love between Humans as they build a Life together — regardless of what their Vessels are. Gender/Race/Religion — there would be no obstacles for them to overcome. The Wholeness of Spirit as Humanity stopped demanding there is only One Spiritual path to The Divine. No more Wars over The Stairway To Heaven. The Oneness of a Humanity struggling with survival across all man-made borders. No Borders. No Barriers. No Boundaries. No Walls. The Gift of Peace to our Fellow Travelers in Life. No need for creatures or plants to be placed on a list to avoid their extinction at the Hand of Man. The Honorable Stewardship of Gaia — lovingly cared for — Her Soul at rest — not tortured — as we pass Her forward — healthy and well to future generations. Souls. Everyone, Everything has one. Try to remember this the next time you look at yourself in the mirror. The next time you are faced with a decision to see a Soul or look at a Vessel. Become a Soul-Seeker. Namaste.
https://medium.com/crows-feet/soul-seeking-a8b7cb61efcc
['Ann Litts']
2019-07-31 10:29:34.503000+00:00
['Spirituality', 'Aging', 'Life', 'Life Lessons', 'Self-awareness']
Everything I Discovered About GraphQL and Apollo
First Things First… GraphQL
As I said earlier, GraphQL is a new technology that changes the relationship between back-end and front-end developers. Previously, both teams had to define a contract interface to ensure correct implementations. Sometimes, latencies occurred due to misunderstandings over object complexity or typings. Thanks to GraphQL, back ends can expose all the data that front ends could need. It is then up to the front end to “pick” the properties it requires to build the interface. Moreover, GraphQL offers a web interface (named GraphiQL) to test queries and mutations (if you don’t understand what I’m writing about, please refer to the documentation). It’s a clever tool that lets front-end developers write requests and browse the docs and typings.

A query language
Using GraphQL implies understanding and mastering the query language included in the kit. It’s not a trivial one and is based on an object-nesting syntax.

query GetProductInfo {
  product {
    id
    name
    price
  }
}

While querying objects seems simple, it’s not the case for mutations. A mutation updates/inserts new data, potentially with some arguments.

mutation ($label: String!) {
  addTag(label: $label) {
    id
    label
  }
}

Here, an argument is specified with its type on L1. Then, it’s used on the second line. The trailing content between curly braces (id and label) defines the structure of the returned object, i.e. the result of the insertion.

Apollo
Apollo is the client used to communicate with a GraphQL server. Whether you develop a web or mobile app, Apollo can support it. Apollo supports several platforms. Its configuration allows you to define specific aspects such as cache-network strategy, pipelines (covered here), Server-Side Rendering (Vue.js version), local state (Vue.js version), performance, error handling (covered here) or internationalization (covered here).
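Whatever the client, a GraphQL operation ultimately travels to the server as a JSON payload containing the query string and its variables. Here is a minimal sketch in plain JavaScript (no Apollo required; it reuses the addTag mutation from above, and the endpoint in the comment is hypothetical):

```javascript
// Build the JSON body of a GraphQL HTTP request: the operation text
// is sent as a plain string, its variables as a sibling JSON object.
function buildGraphQLBody(query, variables = {}) {
  return JSON.stringify({ query, variables });
}

const body = buildGraphQLBody(
  `mutation ($label: String!) {
    addTag(label: $label) {
      id
      label
    }
  }`,
  { label: "graphql" }
);

// A client would POST this body to the GraphQL endpoint, e.g.:
// fetch("/graphql", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body,
// });
```

Apollo builds this kind of payload for you, and layers caching, error handling and the other configuration aspects listed above on top of it.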
https://medium.com/better-programming/everything-i-discovered-about-graphql-and-apollo-e774d1e11638
['Adrien Miquel']
2020-10-25 22:39:11.416000+00:00
['JavaScript', 'GraphQL', 'React', 'Nodejs', 'Programming']
Is this Ancient Gear Mechanism the First Computer on Earth?
The Antikythera mechanism The Antikythera mechanism, as it is known, is quite possibly earth’s first computer and the most ancient gear system ever found. Found in about 150ft of water off Point Glyphadia, near the island of Antikythera, the mechanical device is composed of ancient gears made mostly from bronze and wood. The remaining bronze pieces were so badly corroded that the entire machine appeared to be a blob of corroded metal. It was not until later, when the archaeologist Valerios Stais noticed a gear shape, that it began to sink in that this was no common piece of metal. Since then, the mystery has only deepened. According to Wikipedia: Generally referred to as the first known analogue computer, the quality and complexity of the mechanism’s manufacture suggests it has undiscovered predecessors made during the Hellenistic period. Its construction relied upon theories of astronomy and mathematics developed by Greek astronomers, and is estimated to have been created around the late second century BC. At this point, no predecessors have been found. To put this in perspective, a device of this complexity would not be seen again for over 1500 years. The Antikythera mechanism What does the Antikythera Mechanism do? For a long time, scientists and archaeologists had no idea. However, with modern-day technology, much of the original mechanism has been reconstructed, virtually anyway. We now know that it was a very sophisticated ancient clock: on the front it calculated the Egyptian civil calendar and the Greek signs of the zodiac; on the back, it calculated solar eclipse dates as well as the dates and locations of the next Ancient Olympic Games. Keep in mind, this is the first known instance of ancient gears made of metal. Who Built this Out of Place Artifact? There are many theories about who built the device, with most trying to link it to one of the more famous Greek scientists or philosophers that we know about.
It is possible that it was built by someone whose name we will never know. However, one particular theory stands out which links the box to Archimedes or Hipparchus: The tradition of making such mechanisms could be much older. Cicero wrote of a bronze device made by Archimedes in the third century B.C. And James Evans, a historian of astronomy at the University of Puget Sound in Tacoma, Washington, thinks that the eclipse cycle represented is Babylonian in origin and begins in 205 B.C. Maybe it was Hipparchus, an astronomer in Rhodes around that time, who worked out the math behind the device. He is known for having blended the arithmetic-based predictions of Babylonians with geometric theories favored by the Greeks. — Via The Smithsonian This would explain the esoteric nature of the device. What we do know, thanks to modern technology, is that the gears themselves were hand cut. There does not appear to be any evidence of advanced manufacturing. In fact, the irregularities of the teeth indicate that the device may not have been incredibly accurate. A Deeper Mystery Despite all of that, the device raises a number of perplexing questions. This is the first known instance of using metal gears in this way and the gearing is astoundingly complex. The device contained over 30 gears with very complex gear ratios. This sophistication indicates that it was not the first device of its kind and may not even be the best device of its kind. It’s possible that, due to the Egyptian connection, it is an imitation of some other ancient device which is now lost to us, similar to the Dendera Light depicted in ancient Egyptian art.
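As a footnote on those “very complex gear ratios”: a gear train’s overall ratio is simply the product of its tooth-count ratios, which is how a handful of hand-cut wheels can encode an astronomical cycle. A small sketch (the tooth counts below are those reported in modern reconstructions of the mechanism’s Moon train; treat them as illustrative values, not something verified here):

```javascript
// Ratio of a compound gear train: each stage is [driving teeth, driven teeth],
// and the overall ratio is the product of the stage ratios.
function gearTrainRatio(stages) {
  return stages.reduce((ratio, [driving, driven]) => ratio * (driving / driven), 1);
}

// Reported Moon-train reconstruction: 64/38 * 48/24 * 127/32 = 254/19,
// i.e. roughly 13.368 lunar revolutions per solar year.
const moonRatio = gearTrainRatio([[64, 38], [48, 24], [127, 32]]);
console.log(moonRatio); // ≈ 13.3684 (exactly the fraction 254/19)
```

Three modest tooth counts multiplying out to an astronomically meaningful fraction is exactly the kind of sophistication that suggests the device had predecessors.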
https://medium.com/swlh/is-this-ancient-gear-mechanism-the-first-computer-on-earth-a96467a0f68a
['Darian West']
2019-12-07 19:22:17.445000+00:00
['Greece', 'Ancient History', 'Science', 'Archimedes', 'Ancient']
Forecasting Bitcoin prices in the short-term
In this post I will reveal some of our secrets as to how we use Artificial Intelligence, by means of machine learning, to pretty accurately predict the price of Bitcoin in the short term. Since we define short-term as up to two hours ahead, we are able to make pretty accurate trend predictions. However, the further into the future we try to predict, the less accurate the results we obtain. Since the crypto space is very volatile and highly unpredictable, short-term forecasting remains our most realistic approach. In my previous post I explained and addressed some of our shortcomings. As of today, I will no longer use aggregated average prices from various exchanges, but instead use realistic price data from one specific exchange, and any back testing that is carried out will always incorporate the trading fees — unless explicitly mentioned otherwise.

The goal
We know for a fact that some investing firms invest heavily in R&D to develop A.I.-based trading algorithms and models. And we also know that they are making a profit by doing that, otherwise they wouldn’t be doing it. This also means that smaller organizations (like ours) can do the same, but on a smaller and more controlled scale. We have been developing machine learning systems to forecast cryptocurrency prices and trends for a couple of months now. The results of our efforts, as you can read in previous posts, have been eye-opening already. But recently we took it one step further and improved our systems, as you’ll read below.

Short-term Bitcoin predictions
Below are two screenshots that illustrate our current prediction results. On these charts, the dark black line is the historic price; the gray line is the actual future price — we know this future price because I’m looking at results that were generated two hours ago. The red/green/orange lines are a summary of the predictions.
Since we generate a multitude of predictions, we only want to see a handful of them, so we show only the most optimistic, the most pessimistic and the average prediction. Both of these charts depict predictions of the price for 8 intervals into the future, with each interval being 10 minutes, so 80 minutes (1h20m) into the future:

Prediction results 1 Prediction results 2

It’s important to remember that the absolute values of these predictions don’t matter as much as their general trend. These predictions are generated by a complex mathematical model, so their absolute values may deviate from reality. Instead, we use them as a tool to forecast whether the price will go up, go down or stay as is. And coming back to our initial remarks, the reason why the absolute values of the predictions are of even lesser importance is that the prices are aggregated averages from all major exchanges — the predictions do not target one specific exchange. On a side note: I’ve often been asked by readers whether the predictions are overfitted; the answer is that they are not. Our neural network systems are initially trained on a large data set, and from then on they use data from the previous intervals (e.g. the past 10 minutes) to re-train the neural network and make predictions for the next 8 intervals. So we never generate predictions over a date range that has already been used for training, otherwise it would no longer be considered “forecasting”. From the two screenshots above, the predictions appear to be pretty accurate, and in many cases they are. But in some cases they are not. Have a look at the next chart, where the predictions deviate immensely. The optimistic prediction shows the price going up exponentially, the average one looks more sinusoidal, and the pessimistic prediction indicates a huge drop with a strong recovery afterwards.
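Incidentally, the summarizing step itself — collapsing many model runs into the optimistic/pessimistic/average lines shown on these charts — is simple to sketch (illustrative JavaScript only; the real inputs are of course the outputs of the neural nets):

```javascript
// Each run is an array of predicted prices, one entry per future interval.
// For every interval, keep the max (optimistic), min (pessimistic)
// and mean (average) across all runs.
function summarizeRuns(runs) {
  const steps = runs[0].length;
  const summary = { optimistic: [], pessimistic: [], average: [] };
  for (let t = 0; t < steps; t++) {
    const values = runs.map(run => run[t]);
    summary.optimistic.push(Math.max(...values));
    summary.pessimistic.push(Math.min(...values));
    summary.average.push(values.reduce((a, b) => a + b, 0) / values.length);
  }
  return summary;
}

const s = summarizeRuns([
  [100, 102, 105],
  [100, 99, 97],
  [100, 101, 101],
]);
console.log(s.optimistic);  // [100, 102, 105]
console.log(s.pessimistic); // [100, 99, 97]
console.log(s.average);     // [100, ~100.67, 101]
```

Even with this simple summary, individual runs can diverge wildly, as the chart just described shows.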
These predictions look very anomalous to us humans, but to the system they are no different, so to improve or filter them out we need to understand better how the A.I. works. Unless we fully understand why it makes such predictions, we cannot improve them — and learning how an A.I. makes decisions requires yet another A.I. component to do just that; this remains work in progress.

Prediction results 3

Realistic Bitcoin predictions
As briefly mentioned previously, we no longer use aggregated average price data. Instead we shall focus on one (or multiple) crypto exchanges. At this stage we solely use the Binance exchange for our purposes; we are not affiliated with that company in any way. About a week ago I started using one-minute candlesticks as input data for our neural network. Initially it yielded no meaningful results; after struggling for two whole days trying to tweak a whole bunch of parameters, I just put it aside and focused on different parts of our project.

Initial candlestick predictions of 8 steps (1min intervals)

But then I realized that I was trying to solve a problem using an old mindset. The old mindset was to make eight predictions, which yielded pretty “okay” results on the aggregated price data, but not necessarily on the Binance data using 1-minute candlesticks. So I had to redesign this little detail: instead of making 8 predictions, I made the system predict just one. I then also realized that having just one prediction would be a visual disaster; it tells us very little (from a visual perspective), because we would only see a single dot. To cope with this, I also made sure the system includes previously made predictions, so now we can actually have a graph (a solid line, with multiple dots); this is something we can analyze and benchmark against the actual price. This new method for visualizing predictions is shown on the image below.
New predictions representation

On the image there are two actual prices: the solid green/red candlesticks are the historic prices (these were used as input for the neural net), while the slightly faded (lowered opacity) green/red candlesticks are the future price — this screenshot was taken at some historic time where the future price is already known, so these candlesticks are present (with their opacity lowered). The blue/black candlesticks are the predictions made for their respective intervals, given only the data prior to that interval. So in this example the last big “blue” candlestick is the result of the previous large green candlestick. The A.I. system has learned that the previous interval had a huge increase in price, so it predicts that the next interval will be an increase as well (compared to the previous prediction). It actually depends on how we look at it and phrase it: some people may say that the price is about to go down if we use absolute values, while if we use the predictions as trends then they tell us the price is going to increase. Which of these two views/theories is most correct remains to be tested (i.e. back testing); there’s actually no trivial answer to this question. So for now it will be a combination of looking both at the trend and at the absolute values. Here’s a more complete image of the above:

1-min interval predictions (1)

We clearly see how accurate the trend of the predictions was compared to that of the price. This is what opened my eyes and allowed me to take my research much deeper. Below is another screenshot generated in the same fashion, with the same data, but vastly different parameters and neural network structure:

1-min interval predictions (2)

We see that its results/predictions are quite similar to the first one. I actually like this one better (on first sight), because it has more “black” candlesticks (i.e. the close price was lower than the open price).
This one also looks slightly more overfitted, because its values appear to be closer in absolute terms. But as mentioned earlier, these prediction regions were not used as input to train the neural net, so they are not directly biased; they are simply more accurate predictions in absolute terms. Taking this into consideration, it’s amazing how well the system makes these one-interval predictions. You may also have noticed that the system is not able to predict huge increases/drops in price, such as that big “green” candlestick; there is no way the system could predict that. These increases are usually due to market manipulation (e.g. insider trading) or a group of people deciding to buy loads of BTC during that interval — unless we have access to these groups, we cannot develop a system that forecasts these scenarios. But we do see that our system learns and adapts from these anomalies; it learns that after a huge increase (or decrease) comes either stability, even more growth or a sudden drop. Having done this, I moved to the next level: increasing the interval size. So instead of predicting 1 minute ahead, let us use 5-minute interval candlesticks and predict 5 minutes ahead (which is still a single-interval prediction in this case). Below are two screenshots with predictions generated by different neural nets for the same period:

5-min interval predictions (1) 5-min interval predictions (2)

From the two predictions above we see that the first one looks smoother, but also somewhat less accurate. The second one resembles reality slightly better. Then again, notice how inaccurate it is at detecting anomalies, as described earlier:

First prediction fails to predict the price spike

Given the historical data, there is no indicator, i.e. there is no way the system can know the price will shoot up extremely fast/high (relative to the previous values), as shown above.
So the prediction for the larger “green” candlestick is a tiny “black” candlestick indicating the price will be relatively stable, but instead it went up (a lot). Once again this proves our point: it’s practically not possible to predict such a scenario given our data. But fortunately the system is “learning” and can indicate what will happen after the price goes up as it did; we can then use these predictions to decide whether to buy/sell/hold. Below is another example of 5-min interval predictions; this time I used yet another set of parameters and a different data set size. Notice how the shape/trend of these predictions differs from the previous ones.

5-min interval predictions (3)

If we can make pretty “okay” predictions with 5-min candlesticks, why not with 10-min ones? That’s what I did next to see how accurate these would be, and here is one of those results:

10-min interval predictions (1)

We clearly see that the 10-min predictions are slightly less accurate compared to the 5-min ones; the major trend is still there — but it’s still unable to predict huge rises/drops, as explained before. I did not go any further into predicting 20, 30, 60, … minute intervals, simply because I shifted my focus to the next important matter. Remember that I started off this chapter by explaining how I went from making 8-step predictions to just single-step ones? That decision was not backed by my experiments; there was actually nothing less accurate about the 8-steps compared to the 1-steps, that is, if we only look at the very first prediction. But the confusing part was the other 7 predictions, since these usually deviate a lot from the actual future, and that made the results appear very inaccurate. The thing is, every new prediction has even less precision than the previous one.
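This compounding is easy to see with a toy sketch: when each prediction is fed back in as the next input, a per-step relative error multiplies through the whole horizon (the 1% bias and the trivial one-step “model” below are made up purely for illustration):

```javascript
// Recursive multi-step forecasting: the previous prediction becomes
// the next input, so per-step errors compound multiplicatively.
function recursiveForecast(lastPrice, oneStepModel, steps) {
  const predictions = [];
  let input = lastPrice;
  for (let i = 0; i < steps; i++) {
    input = oneStepModel(input); // prediction re-used as input
    predictions.push(input);
  }
  return predictions;
}

// Toy one-step "model" that is systematically 1% too high.
const biasedModel = price => price * 1.01;

const preds = recursiveForecast(100, biasedModel, 8);
// After 8 steps the 1% per-step error has compounded to ~8.3%:
console.log(preds[7]); // 100 * 1.01^8 ≈ 108.29
```

A real neural net’s error is not a fixed bias, of course, but the mechanism is the same: each step inherits and amplifies the inaccuracy of the one before it.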
I realized this when I went from single-step predictions to three-step ones:

Predicting 3 steps ahead (1)

Making 3-step-ahead predictions appeared to be pretty accurate, more accurate than 8-step predictions to say the least. But then again, it wasn’t always the case:

Predicting 3 steps ahead (2)

Making multi-step predictions is done, in our system at least, by using the previously made prediction as the new input. And if the previous prediction wasn’t accurate, then the next one won’t be either (in most cases). The reason behind this is that every prediction has an error percentage, and this error grows exponentially at each new prediction step.

A deeper neural network
It’s generally true that the depth/size of a neural network can improve (or degrade) the results. Until now I have always been using pretty shallow neural networks, with just one or two hidden layers and a handful of neurons per layer. But what would the results be like if I used a deeper neural network, for instance three to six hidden layers? I am not going to go very deep into deeper neural networks (DNNs), simply because the results are too “deep” to understand at this point. However, I would like to share some cool findings. In the next few examples I trained DNNs and let them predict 16-step intervals, in the hope of finding something interesting.

Predictions from a deeper neural network (1)

Most results from our DNNs look way smoother than those from shallow NNs. But I also noticed that sometimes these DNNs produce very surprising and unexpected results. On the chart above we see how the system predicts a drop in price midway through 17:00. Even though such a thing did not occur in reality, it was still a fascinating anomaly.

Predictions from a deeper neural network (2)

Here’s another set of predictions, where at some point the system predicts the price to go up steadily in linear fashion, but then shortly before 17:00 it indicates a drop.
If we compare this against how the price evolved in reality, we see something quite similar happening. The price did rise steadily until about 16:40 and then dropped until 17:15 before going up again for a short period. In some way this can be seen in the predictions, but whether that is the true meaning of these predictions is up for debate.

Predictions from a deeper neural network (3)

In the above it appears the system is anticipating a huge drop midway between 18:00 and 19:00. In reality no drop occurred in that range, except at 18:55.

Predictions from a deeper neural network (4)

I followed the previous prediction, and a few steps later it still kept anticipating this huge drop. But now the drop had shifted closer to 19:00. And in reality there was indeed a drop in price, followed by a steady increase right afterwards, at 18:55 that is. So whether the system was really predicting this drop or not remains unclear, but it’s definitely surprising to see that manifest!

Predictions from a deeper neural network (5)

Above is another interesting version. In this case every prediction is “black” (i.e. a red candlestick). I cannot explain why, but it does appear to make a good prediction of the price’s trend between 16:00 and 17:00 nonetheless.

Predictions from a deeper neural network (6)

Above is a region where the system did not anticipate the huge drop that was about to come next (at 02:10 or so). Sometimes there are DNNs that just look weird, to say the least (like the one below). Even though they look strange to us, they may contain valuable information that the A.I. system is trying to tell us. We just need a better way of interpreting its output.
https://medium.com/swlh/forecasting-bitcoin-prices-in-the-short-term-f52deec61b97
[]
2018-03-20 20:22:04.741000+00:00
['Artificial Intelligence', 'Investing', 'Machine Learning', 'Cryptocurrency', 'Bitcoin']
4,596
The Masterpiece Submission Guidelines
Yes, you’ve come to a place where quality matters — not quantity. But don’t get this wrong: quality doesn’t mean that you have to write like the NYT articles. We believe in simplicity and look for engaging, well-structured content that connects with readers. Make your stories simple but interesting and engaging. We would be happy to publish and spread your masterpieces.

We publish masterpieces on the following topics:
Happiness
Self-Improvement
Environment
Travel
Relationships
Mental Health
Motivation
Social Problems
Education, Reading, Writing
Satire, Humor
Country & Culture
Business & Marketing
Personal thoughts & experiences

We publish masterpieces that are a 01–15 minute read on the above topics.

We don’t publish stories on the following topics:
Politics
Poetry
Technology
Listicles (5ways, 10ways, 12things, 15principles, etc.)
Quotes
Law & Legal Issues
Intricate Academic Writings
How to/how I make $$$…(articles)
Food review/product review

By submitting to The Masterpiece, you agree to comply with the following rules and guidelines.

1. Follow Medium Rules
Submissions must comply with Medium’s Rules, Ad-Free Policy, Content Guidelines, and Curation Guidelines.

2. Submit Unpublished Drafts
You must submit original and unpublished drafts. After your story is published in The Masterpiece, you can republish or share it on your blog, LinkedIn, Twitter, or other platforms outside Medium.

3. Original Contents
Your stories must be original, engaging, and well-organized. We do not accept vague, unclear, or intricate ones. Make sure your content is free of grammatical errors. We recommend using Grammarly to check your content beforehand. Plagiarism will not be tolerated. If you write poetry, make sure it’s not scattered with too many spaces. Divide it stanza-wise, as you see in poetry books.

4. Call to Action (CTA)
A single text link inviting newsletter subscriptions, or the Medium link of another of your stories, is acceptable. No other types of CTAs or sign-up forms are allowed with the content.

5. Style Guide
Follow the style guide below when submitting your draft to The Masterpiece.

Titles and subtitles: No clickbait is allowed. Your story must have a precise title and a subtitle. Write your titles in title case and subtitles in sentence case/title case.

Feature image: Make sure you have a featured image (horizontal orientation) below the titles and subtitles. Keep the image in the following style, aligning it in the middle. Do not fill the screen with your featured image. Featured image style

Images within the text: If necessary, use images within the text. In that case, follow the ‘inline’ image format below. Images within the text style

Image credit: Cite the source and usage rights in the image caption. If the photo is taken or created by the author, mention it in the image caption. You may find copyright-free images on Unsplash, Pixabay, Pexels, etc.

Section headings: Keep all the section headings in sentence case. Do not mix up your section headings. Make your masterpieces well-structured and visually stunning.

6. Submission
Submit your final draft by clicking the “…” button near the top-right corner of the page. Then select “Add to publication” and choose “The Masterpiece”. Finally, click “Add draft” to submit your story for review.

7. What we change/edit (if necessary)
We may change the title, subtitle, or images if we find the existing ones less engaging or irrelevant. We will edit sentence structures and paragraphs if they are too long or intricate. Moreover, we will try our best to make sure that your writing is error-free and well-structured.

8. How long it will take to publish
Within 03 (three) days, you will get feedback from us. If everything is okay, we will be happy to publish your masterpiece. But if you do not hear anything within 03 days, you are free to submit the draft elsewhere.

We are accepting new writers. 
To become a writer, please drop a response below, writing ‘I want to write for The Masterpiece’ and leave your Medium @username. For example, my username is ‘@mamun.here’
https://medium.com/the-masterpiece/the-masterpiece-submission-requirements-5fdafb3a0446
['S M Mamunur Rahman']
2020-12-24 13:09:50.922000+00:00
['Publication', 'Reading', 'Writing', 'The Masterpiece', 'Submission']
4,597
How to Live a Regret-Free Life
Christina Pascucci: Why is death such a taboo topic? What’s the advantage of talking about it, and what should we be contemplating? Bronnie Ware: We have created a society of denial. We subdue vulnerability and pretend everything is OK when everyone is suffering from the unrealistic expectation of perfection. We deny the state of our planet, the whole state of everything! So, of course we deny death, as it is the scariest thought of all. But it doesn’t have to be. Death is a guarantee and when you face that honestly you realize the sacredness of your time and find the courage to make loving, positive changes to your heart. Time is an undervalued but sacred resource. It cannot be replaced. You say after talking to countless patients on their death beds, the greatest regret of the dying is they wish they lived a life true to themselves, rather than what others expected of them. Can you talk more about this, and how do we do this? This subject came up time and again. People realized they had not brought enough consciousness and presence into the choices they made. Since your life is created by the decisions you make, this can result in dreams remaining unfulfilled and deep regret about not choosing differently. We are all individuals with unique yearnings and strengths. We are not meant to be alike but to encourage those unique strengths. You didn’t really have experience as a caregiver when you were essentially thrown into it. Many people might think they’re not qualified or good enough, or think to themselves they’ll try later when they have more experience. What would you say to that? Everyone has to start somewhere. We are all beginners at one time or another. But the only way to go from being a beginner to an experienced person is by having a go. It may mean you have to be vulnerable. You may even be judged as a fool for a while. But your life is your own. 
You either give people power through their judgments of you or you give yourself power by ignoring them and honoring your own heart and hunches. When you trust in life’s possibilities rather than human-made rules, there really are very few limitations. In your book, you talk extensively about kindness, forgiveness, and empathy. You also say you made excuses for people’s bad behavior. How do you show empathy and still hold people accountable for their actions? It’s not up to any of us to hold anyone accountable. Life is the best teacher. No one knows what the other is here to learn or heal. If you use their behavior as a teaching tool, and dissolve your ego and its need to be right or to make someone feel guilty, you actually set yourself free. It really does not matter who is right or wrong in the end. What matters is how many choices you made in kindness. The less energy wasted on unforgiveness, the more energy you create for joy. You fell in love and became pregnant later in life. Many young professional women are choosing their careers over marrying and starting a family earlier on in life. How old were you when you became pregnant, and what’s your advice to those in their 30s and 40s who might be feeling the societal pressure? I fell pregnant naturally and intentionally at 44, becoming a first-time mother at 45. We conceived the second month we tried. While many women are not blessed with such ease, many stop themselves even trying once they reach a certain age. It is true that our bodies are healthier for pregnancy at a younger age. There is no denying that. My pregnancy triggered disease immediately following. Whether that would have happened anyway, years earlier, I cannot say. But my pregnancy was healthy and my baby was born very healthy. So while I don’t encourage leaving it too late, I do say to follow your heart on it. If I hadn’t, I wouldn’t know the love I now do for my gorgeous little girl! Many women, especially as mothers, give so much. 
However, it can be tough for them to receive. You wrote: ‘Then not only are you blocking the natural flow of things to you and creating an imbalance, you are robbing someone else of the pleasure of giving.’ Talk more about that and how we can be better receivers? By not receiving, you close yourself to life’s blessings, which are so often to be delivered through others. It also creates an imbalance and is a way of trying to control life. That is one of the worst things you can do: to shut yourself off to life’s amazing and generous creativity because you don’t have the courage to receive. To live a full life means to allow others in, to celebrate connection, and be open to the flow of giving and receiving. After helping so many, you went through your own depression in your 40s. You felt trapped and seriously contemplated suicide. What would have been a helpful approach from friends? How’d you get out of that rock bottom? Loving patience and trust that I would work it out. An ear when I needed one but no lecturing when I didn’t. I came back from rock bottom one step at a time. There is often a crucial turning point — sometimes obvious, sometimes not — where a glimpse of hope, light or strength feels different to the dark heaviness depression delivers. You hold onto that and every little blessing and insight that comes, and step-by-step it loses its power. It takes commitment, though, and a massive trust in life that such a time is a blessing in disguise. It certainly was for me. It helped me let go of so much of what was holding me back. Many people reading this, myself included, have a loved one who is an addict. One of your patients, an alcoholic until her final moments, told you this: “Not everyone wants to get well either, Bronnie. And for a long time I didn’t. The role of the sick person gave me an identity. Obviously I was holding myself back from being a better person this way. 
But I was getting attention, and trying to fool myself into thinking this made me happier than being courageous and well.” If we are struggling with addiction, or know someone who is, what’s the best way to react and foster positive change? Gentleness, acceptance, non-judgmental kindness. Addiction is usually created from a lack of wholesome connections. That’s not a reflection of people who love someone with addiction. It’s a reflection of the addict’s ability to receive that connection. Positive connection and shared wholesome experiences can help immensely at times. In your twenties you quit your banking job to work at a pub abroad. Do you think taking risks like that is critical to maximizing this thing called life? Yes, absolutely. Staying in your comfort zone is avoiding reaching your full potential. Risks and contrast are both essential to show us what we’re really capable of. And while it can be terrifying sometimes, it also brings new levels of joy beyond it. Your aim is to live regret-free. Do you have any regrets? None. Not one. I’ve made a stack of mistakes and if I could go back and do it all again there are definitely things I would change. There are things I would have done differently. But I did the best I could as who I was at the time. So I look back to old parts of myself with compassion rather than judgment. This allows me to forgive my mistakes rather than give them the power of regret. Having faced death and realized the sacredness of my time, I live a courageous life now, completely true to my heart regardless of how I am perceived by others or society. By bringing as much consciousness as possible to the decisions I make, I avoid regret because I am not living blindly. I am living with my eyes and heart wide open. You’ve shared some of your biggest life lessons. What matters most? Our lessons are given to us from a place of love, to bring us into our best self. Courage is always rewarded. 
The greatest appreciation we can show for our life is to enjoy it as fully as possible.
https://medium.com/wake-up-call/how-to-live-a-regret-free-life-d52c8c9e64bb
['Christina Pascucci']
2019-11-21 10:01:01.218000+00:00
['Life Lessons', 'Wellness', 'Love', 'Caregiving']
4,598
Language
Language Email Refrigerator :: 03

Hey friend,

One night this week, I was sitting on the couch holding my daughter, Golda, facing towards me. After a few minutes, she started to whine. We stood up and paced the living room. She immediately calmed down and scanned the room curiously. I understand her and yet she’s never spoken a word. Golda is nearly 18 weeks old, which is crazy– I’ve been in a relationship for almost 5 months with someone who doesn’t speak English or really even understand it. Because of Golda, I’ve been thinking a lot about how we communicate. How we use our bodies, our expressions, and our language in different ways to shape the world around us and our perceptions of it. This month, let’s talk languages. Happy snacking.

Art by Wayne White

I. New Language

The first class I ever took in college was linguistics (after one semester I quickly learned why the 9am Monday classes were always available). One of the things that still sticks with me is the Sapir-Whorf hypothesis, the idea that language shapes thought. Changing our language, or not having a word for something, or not using a verb tense affects how we think. Here are 3 very short stories from the last few years about how changing my language affected my thinking.

1. Contractor vs Freelancer

I started freelancing in the Summer of 2016. The freelance life is supposed to come with more freedom, higher day rates, a flexible schedule, and an independent spirit. After 2 years of “freelancing” I realized I didn’t have any of those. I felt tied down. That’s because I realized that I wasn’t freelancing. I was contracting. I still had a 40 hour a week job, just in 2–4 month contracts. So since last Summer, I’ve tried to own being a freelancer– finding my own clients, working from home, saying no to timelines and budgets and feedback that didn’t work with my expectations. And it’s made all the difference.

2. Is it an Emergency? 
Talking about my parenting approach to a friend, I said that I am more of a “put my oxygen mask on first” parent. I believe in self-care and getting sleep and exercise so I can be more present and energized. But she called me out: “That metaphor assumes an emergency. Is it an emergency? Are you in crisis? Try reframing that approach. I believe that you can only give from your overflow.” I can only give from my overflow. My cup needs to be overflowing before I can give my time, energy, love, attention to other people. This is not an emergency.

3. The Weakness of Strength

In January, I led a workshop on leadership. One of the key ideas in my research came from The School of Life, which coined “The Weakness of Strength” theory. It’s this idea that every person has strengths, and those traits have shadow sides. So being a decisive leader might also mean you alienate your team because you don’t include them in decision-making. Or you fell in love with your spontaneous, adventurous partner who now gets on your nerves for being uncommitted and bad at planning. It’s such a helpful reframe for me in being aware of my strengths and their shadow sides with my partners at Caveday, with my marriage, and with my friends. What’s the shadow side of your greatest strength?

Art By Dan Ferrer

II. Foreign Language

Sometimes it feels like making a decision about a job, a career path, a college major are all permanent decisions. Whatever you decide on you MUST commit to. Forever. And that will be who you are. Forever. And you cannot do anything other than that. Forever. Yeesh. But this month, a little reminder about high school Spanish changed the way I thought about career transition. Check out the article I wrote here: https://medium.com/p/e0b9734eb38

Art by Magnus Atom

III. Language Paradoxes

At the beginning of this year, I set out my goals and realized that some of them felt in conflict. How can I be a more present father WHILE taking on more work? 
How can I plan my life and still leave room for flexibility when things change? How can I choose my path and still trust the universe has my back? I’m learning that we don’t have to think about some of these things as “either/or.” Life is not as simple as having mutually exclusive choices. Life is complicated. Life creates paradoxes. “Both/And.” Having an argument is often an either/or. I am right and they are wrong. But is there a truth where both are right? Or both are partly right? Being a part of a community requires me being my independent self and being an anonymous part of a group. Two things can exist at the same time. Not as opposites, not as either/ors but as both/ands. Science and religion. Change and stasis. Love and fear. Choice and fate.

A lot of this thinking was clarified in reading Parker Palmer’s work (thanks to Casey Rosengren for the recommendation). In one of his talks, he asserts that there are five habits of the heart. But really, you only need to consider two: Chutzpah and Humility. Chutzpah is the audacity to believe that I have a voice that deserves to be heard and a right to speak it. And humility is the awareness that my truth is not complete and I need to listen openly and respectfully. That’s a pretty deep paradox. We often try to oversimplify things into either/or because it’s easier. But it’s not how the world or life works. Things are complex and interconnected. Holding space for both/and requires patience and work. Conflicts arise because of either/or mentalities. Resolutions come from an understanding of both/and. In a way, better understanding the paradoxes of life can lead to peace.

Art by Jean Bevier

IV. Fin

As always, thanks for opening the refrigerator and sharing your thoughts. If you get something out of it, feel free to share it with a friend. The only way this thing grows is when you tell someone else about it. Send them this link.

-Jake
https://medium.com/email-refrigerator/language-d6676d2d9fff
['Jake Kahana']
2020-12-27 18:41:48.795000+00:00
['Strengths And Weaknesses', 'Language', 'Self Improvement', 'Self-awareness', 'Paradox']
4,599
Pros of Different Python String Formatting Methods
Now let’s look at the pros of each method, one by one.

%-format

%-format is very similar to the classic C printf() function, so it is easy to understand and easy to share with people who are new to Python.

The big advantage of %-format is that you can pass a tuple or list as the argument directly. This is very useful when you want to pass a long list of arguments, for instance when you read data from a database or spreadsheet and format a long string in XML or JSON.

replacement_str = '%s is %s'
arg_tuple = ('pi', 3.14)
print(replacement_str % arg_tuple)
# Out: pi is 3.14

Note: You can achieve the same with str.format() by unpacking the tuple/list, e.g. *arg_tuple, which is not as intuitive and simple, IMHO.

It is also handy when you want to apply the same format to multiple rows of data, defining the format only once:

str_format = '%-10s, %s'
headers = ('Result', 'Message')
row1 = ('Successful', 'abc')
row2 = ('Failed', 'efg')
print(str_format % headers)
print(str_format % row1)
print(str_format % row2)
# Out:
# Result    , Message
# Successful, abc
# Failed    , efg

Note: You can achieve the same with str.format(), but not with an f-string.

str.format()

str.format() can avoid passing repeated arguments: you can reuse the same position or argument name in the replacement fields if the same argument appears more than once in the string.

print('{0} + {0} = {1}'.format('pi', 6.28))
# Out: pi + pi = 6.28

Or you can pass one argument and access several of its attributes or items in the string:

tuple1 = ('pi', 3.14)
print('{v[0]} is {v[1]}'.format(v=tuple1))
# Out: pi is 3.14

Equivalent formatting:

print('{} is {}'.format(tuple1[0], tuple1[1]))

Another useful case for str.format() is a nested argument. You can also achieve this with %-format, but it looks nicer with str.format(), IMHO. In this example, we convert a dict to a list of strings with {}-format, join them with ' ', and then pass the result as a nested argument to the main string:

dict1 = {'k1': 1, 'k2': 'two'}
print('{} {}'.format(
    'Line 1',
    ' '.join('{}: {}'.format(k, v) for k, v in dict1.items())
))
# Out: Line 1 k1: 1 k2: two

f-string

An f-string, also called a formatted string literal, is pretty much identical to str.format(), except that the arguments go directly in the string. It has one unique capability, though: you can call an argument’s methods directly in the formatted string, while str.format() replacement fields can access only attributes and items.
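A minimal sketch illustrating the two points above: str.format() needs a tuple unpacked with *, while an f-string can call methods and evaluate expressions inline. The variable names here (arg_tuple, name) are illustrative, not from the original article.

```python
# str.format() needs the tuple unpacked with *, unlike %-formatting,
# which accepts the tuple directly:
arg_tuple = ('pi', 3.14)
print('{} is {}'.format(*arg_tuple))  # → pi is 3.14

# f-strings can call methods and evaluate expressions inside the
# replacement field, which str.format() fields cannot do:
name = "peter xie"
print(f"{name.title()} has {len(name.split())} words")  # → Peter Xie has 2 words
```

Attempting the equivalent with str.format(), e.g. '{0.title()}'.format(name), raises an AttributeError, because its fields only support attribute and item access, not calls.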
https://peter-jp-xie.medium.com/pros-of-different-python-string-formatting-methods-318f1bdeca93
['Peter Xie']
2020-11-23 11:09:21.505000+00:00
['String Format', 'Python']