Unnamed: 0 (int64) | title (string) | text (string) | url (string) | authors (string) | timestamp (string, nullable) | tags (string) | info (string)
---|---|---|---|---|---|---|---|
3,500 | What are *Args and **Kwargs in Python? | What are *Args and **Kwargs in Python?
Boost your functions to the n-level.
Photo by SpaceX on Unsplash
If you have been programming in Python for a while, surely you will have had doubts about how to use a particular function properly. You went to the documentation and were met with *args and **kwargs among the function’s parameters.
In matplotlib or in seaborn (to name but a few), these words regularly appear among the parameters of a function. Let’s take a look at an example.
Image by Author
When I started seeing these attributes, my brain usually acted as if they didn’t exist, until I recalled the power of these attributes and the convenience they provide. Using them, your functions can be a lot more versatile and scalable.
Usually, functions have some ‘natural’ attributes, and each parameter is regularly predefined. For example, in the matplotlib function seen above, we have the “data” attribute whose value is predefined as “=None”. This means that if you don’t pass a value for “data”, the call to the function will return an empty figure: a None by default.
What happens with the *args and **kwargs seen above? The args here are usually used to pass in the data: x1, x2, and so on. The kwargs are used for tuning your graph; the properties of each graph are determined by the kwargs you pass in. Linewidth, linestyle, or marker are just some of the keyword arguments you may already be familiar with. So you have already been using this **kwargs thing without realizing it. | https://towardsdatascience.com/args-and-kwargs-d89bdf56d49b | ['Toni Domenech Borrell'] | 2020-12-29 13:14:57.903000+00:00 | ['Data Science', 'Python', 'Programming', 'Software Development', 'Matplotlib'] |
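To make the behaviour described above concrete, here is a minimal, self-contained sketch; the plot() function below is a toy stand-in invented for illustration, not matplotlib’s actual implementation.

def plot(*args, **kwargs):
    # args collects the positional data (x1, x2, and so on) into a tuple
    print("data passed as *args:", args)
    # kwargs collects the keyword arguments used to tune the graph into a dict
    linewidth = kwargs.get("linewidth", 1)
    linestyle = kwargs.get("linestyle", "-")
    marker = kwargs.get("marker", None)
    print("styling:", linewidth, linestyle, marker)

x = [1, 2, 3]
y = [2, 4, 6]
plot(x, y)  # only data, default styling
plot(x, y, linewidth=2, linestyle="--", marker="o")  # keyword arguments tune the graph

Because the extra arguments are simply collected into a tuple and a dict, the same function signature keeps working no matter how many data series or styling options the caller passes in.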
3,501 | Algorithms Tell Us How to Think, and This is Changing Us | Silicon Valley is predicting more and more how we are going to respond to an email, react to someone’s Instagram picture, and determine which government services we are eligible for, and soon a forthcoming Google Assistant will be able to call our hairdresser for us in real-time.
We have invited algorithms practically everywhere, from hospitals and schools to courtrooms. We are surrounded by autonomous automation. Lines of code can tell us what to watch, whom to date, and even whom the justice system should send to jail.
Are we making a mistake by handing over so much decision-making authority and control to lines of code?
We are obsessed with mathematical procedures because they give us fast, accurate answers to a range of complex problems. Machine learning systems have been implemented in almost every realm of our modern society.
Yet, what we should be asking ourselves is: are we making a mistake by handing over so much decision-making authority and control to lines of code? And how are algorithms affecting our lives?
In an ever-changing world, machines are doing a great job at learning how humans behave, what we like and hate, and what is best for us at a fast pace. We’re currently living within the chambers of predictive technology — Oh hey there Autocomplete!
Algorithms have drastically transformed our lives by sorting through vast amounts of data and giving us relevant, instantaneous results. By letting them collect big amounts of data over the years, we have given companies the power to decide what’s best for us.
Companies like Alphabet or Amazon have been feeding their respective algorithms with the data harvested and are instructing AI to use the information gathered to adapt to our needs and become more like us. Yet as we get used to these handy features, are we talking and behaving more like a computer?
“Algorithms are not inherently fair, because the person who builds the model defines success.” — Cathy O’Neil, Data scientist
At this technological rate, it’s impossible not to imagine a near future where our behavior is guided or dictated by algorithms. In fact, it’s already happening.
Designed to assist you in writing messages or quick replies, Google rolled out its latest feature on Gmail, called Smart Replies, last October. Since it took the internet by storm, a lot of people have criticized the assistant, saying that its tailored suggestions are invasive and make humans look like machines, with some even arguing its replies could ultimately influence the way we communicate or possibly change email etiquette.
The main issue with algorithms is that when they get so big and complex, they start to negatively affect our current society, putting democracy in danger (hi, Mark Zuckerberg) or placing citizens under Orwellian measures, like China taking unprecedented steps to rank people’s credit scores by tracking their behaviour with a dystopian surveillance program.
As machine-learning systems become more pervasive in many areas of society, will algorithms run the world, taking over our thoughts?
Now, let’s take Facebook’s approach. Back in 2015 they rolled out a newer version of the News Feed, designed as an ingenious way of ranking and boosting users’ feeds into a personalized newspaper, allowing them to engage with content they’ve previously liked, shared, and commented on.
The problem with “personalized” algorithms is that they can put users into filter bubbles or echo chambers. In real life, most people are far less likely to engage with viewpoints that they find confusing, annoying, incorrect, or abhorrent. In the case of Facebook’s algorithms, they give users what they want; as a result, each person’s feed becomes a unique world, a distinctive reality by itself.
Filter bubbles make it increasingly difficult to have a public argument because from the system’s perspective information and disinformation look exactly the same. As Roger McNamee wrote recently in Time magazine, “On Facebook facts are not an absolute; they are a choice to be left initially to users and their friends but then magnified by algorithms to promote engagement.”
Filter bubbles create an illusion that everyone believes the same things we do or has the same habits. As we already know, on Facebook algorithms aggravated the problem by increasing polarization and, ultimately, harming democracy, with evidence showing that algorithms may have influenced a British referendum and the 2016 elections in the U.S.
“Facebook’s algorithms promote extreme messages over neutral ones, which can elevate disinformation over information, conspiracy theories over facts.” — Roger McNamee, Silicon Valley Investor
In the current world, constantly filled with looming mounds of information, sifting through it poses a huge challenge for some individuals. AI — used wisely — could potentially enhance someone’s experience online or help tackle, in a swift manner, the ever-growing load of content. However, in order to function properly, algorithms require accurate data about what’s happening in the real world.
Companies and governments need to make sure the algorithms’ data is not biased or inaccurate. Since nothing in nature is perfect, naturally biased data should be expected inside many algorithms already, and that puts in danger not only our online world but also the physical, real one.
It is imperative to advocate for the implementation of stronger regulatory frameworks, so we don’t end up in a technological Wild West.
We should be extremely cautious about the power we give to algorithms. Fears are rising over the transparency issues algorithms entail, the ethical implications behind the decisions and processes they make, and the societal consequences affecting people.
For example, AI used in courtrooms may amplify bias and discriminate against minorities by taking into account “risk” factors such as their neighborhoods and links to crime. These algorithms could systematically make calamitous mistakes and send innocent, real humans to jail.
“Are we in danger of losing our humanity?”
As security expert, Bruce Schneier wrote in his book Click Here to Kill Everybody, “if we let computers think for us and the underlying input data is corrupt, they’ll do the thinking badly and we might not ever know it.”
Hannah Fry, a mathematician at University College London, takes us inside a world in which computers operate freely. In her recent book Hello World: Being Human in the Age of Algorithms, she argues that as citizens we should be paying more attention to the people behind the keyboard, the ones programming the algorithms.
“We don’t have to create a world in which machines are telling us what to do or how to think, although we may very well end up in a world like that,” she says. Throughout the book, she frequently asks: “Are we in danger of losing our humanity?”
Right now, we are still not at the stage where humans are out of the picture. Our role in this world hasn’t been sidelined yet, nor will it be for a long time. Humans and machines can work together, each with their strengths and weaknesses. Machines are flawed and make the same mistakes we do. We need to be careful about how much information and power we give up, because algorithms are now an intrinsic part of humanity and they’re not going anywhere anytime soon. | https://orge.medium.com/algorithms-tell-us-how-to-think-and-this-is-affecting-us-eec7fb215dfa | ['Orge Castellano'] | 2019-03-04 19:34:06.614000+00:00 | ['Machine Learning', 'Privacy', 'Future', 'Technology', 'Artificial Intelligence'] |
3,502 | How To Configure Custom Pipeline Options In Apache Beam | Example Project
Let’s see what we are building here with Apache Beam and the Java SDK. Here is the simple input.txt file; we take this as input, transform it, and output the word count.
input.txt
Pipeline for example project
As shown above, we split the text file on the “:”, then extract and count words, format the result, and write it to the output.txt file. This pipeline produces the output below. You might see up to three output files, depending on the processes running on your machine.
testing: 3
progress: 2
done: 1
completed: 1
in: 2
test: 3
Here is the Github link for this example project. You can clone it and run it on your machine. You can see the output files in this location /src/main/resources
Here is the main file which is the starting point of your application and where the whole pipeline is defined.
App.java
SplitWords Transform
Once you read the input file from the location /src/main/resources, all you need to do is split the text by “:”. This is the transform file that takes the input and produces the output. In this case, both the input and output collections are PCollection<String>.
SplitWords.java
The processing logic, which is the split logic here, is defined in the following file. This is the processing function, which takes each element from the input collection, processes it, and produces the output.
SplitWordsFn.java
CountWords Transform
After the first split transform, you need to transform these lines into words and count them. This is the transform file that takes the input and produces the output. In this case, the input and output collections are PCollection<String> and PCollection<KV<String, Long>>.
CountWords.java
The processing logic, which is the word-extraction logic here, is defined in the following file. This is the processing function, which takes each element from the input collection, processes it, and produces the output.
ExtractWordsFn.java
Once the pipeline is created, you can apply a series of transformations with the apply function. You can use TextIO to read from and write to the appropriate files. Finally, you run the pipeline with the line p.run().waitUntilFinish();. | https://medium.com/bb-tutorials-and-thoughts/how-to-configure-custom-pipeline-options-in-apache-beam-37a32f84d1aa | ['Bhargav Bachina'] | 2020-10-05 05:03:09.607000+00:00 | ['Programming', 'Java', 'Software Development', 'Web Development', 'Software Engineering'] |
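The gist files referenced above (App.java, SplitWords.java, SplitWordsFn.java, CountWords.java, ExtractWordsFn.java) are not reproduced in this excerpt, and the example project uses the Java SDK. Purely as an illustration of the same split-and-count pipeline shape, and of the kind of custom pipeline options the title refers to, here is a rough sketch using Beam’s Python SDK; the option names and default paths below are assumptions, not the author’s code.

import apache_beam as beam
from apache_beam.io import ReadFromText, WriteToText
from apache_beam.options.pipeline_options import PipelineOptions

class WordCountOptions(PipelineOptions):
    # Custom pipeline options: --input and --output become command-line flags
    @classmethod
    def _add_argparse_args(cls, parser):
        parser.add_argument("--input", default="src/main/resources/input.txt")
        parser.add_argument("--output", default="src/main/resources/output")

def run():
    options = WordCountOptions()
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> ReadFromText(options.input)
            # split each line on ":" (the SplitWords step)
            | "SplitOnColon" >> beam.FlatMap(lambda line: line.split(":"))
            # break the pieces into individual words (the ExtractWords step)
            | "ExtractWords" >> beam.FlatMap(lambda part: part.strip().split())
            # count occurrences of each word (the CountWords step)
            | "Count" >> beam.combiners.Count.PerElement()
            # format as "word: count" lines for the output file
            | "Format" >> beam.MapTuple(lambda word, count: "%s: %d" % (word, count))
            | "Write" >> WriteToText(options.output)
        )

if __name__ == "__main__":
    run()

Running it with, for example, python wordcount.py --input input.txt --output counts would produce one or more sharded output files, which matches the multiple output files mentioned above.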
3,503 | How to Support Indie Writers | How to Support Indie Writers
In the rapidly changing world of publishing, more and more writers are going indie — but we need your help in order to succeed
Photo by Min An from Pexels
Being an indie or non-traditional writer is a tough path. We are a tenacious group of people who are determined to get our work out there, with or without the traditional means of assistance. But actually being able to support ourselves with our work can be a huge challenge. And sometimes, deeply discouraging when we put in endless hours and are lucky to make $10 a month off our efforts.
If you want to help, first of all, thank you so much. We wouldn’t be here without our devoted readers. Secondly, take a look at the following suggestions. You’d be surprised how easy it is to help your favorite indie writers.
Monetary Support
Buy our books
If you like our work, support it with your dollars. Buy them new direct from the main distributors. Remember that buying used copies and buying new copies from subsidiary book sellers on Amazon means the author doesn’t get any royalties!
Buy our other offerings
Many indie writers have secondary offerings. Perhaps they are selling their artwork on Etsy, or are offering online classes or educational subscription services. Sometimes, these purchases can be even more impactful than the purchase of our books. It helps sustain us in between projects.
Tithe
Is your favorite author on Ko-fi? Patreon? So many of us have places where you can contribute a few dollars, or regular, monthly support.
Again, this kind of support is so important. Some of us use this money for specific goals, some of us use it for business supplies, some for treats (I love my Earl Grey tea!), and some of us literally use it to help with the bills.
Non-Monetary Support
Share everything
Let your friends know about the amazing book you just read. Share our posts on social media. Recommend our work to others. Forward our newsletters to your friends.
Remember that indie authors are working without the expertise and contact lists of a marketing expert at a big publishing house. We’re on our own. So every fan who speaks up for us, every word-of-mouth recommendation is essential to our success. It’s like having our own, personalized patchwork quilt version of a PR manager (which, in my opinion, is the best kind!).
Leave reviews everywhere
Oftentimes, book reviews make or break a book purchase. Reviews are even more important to indie authors because self-published and independent press books have to fight against the cultural assumption that they weren’t good enough to be picked up by a traditional publishing house. (Not at all true.) We indie authors need all the help we can get to combat this negative assumption.
Remember that you aren’t limited to leaving a review of a book only at the place where you purchased it. Don’t forget Goodreads and other book-centric websites.
If you have a blog, write a review there. Share your reviews on social media.
And guess what? It’s okay if you didn’t like the book and don’t have a good review to offer. Believe it or not, we want honest reviews! A book with twenty-nine 5-star ratings and nothing else looks pretty hinky, right? No one has a perfect rating. So go ahead — if you didn’t like it, feel free to say so. (But, you know, be polite.)
Email your favorite indie author and ask if you can help them promote their latest project
Offer to review it on your blog or to lend them a space for a blog book tour. Invite them onto your podcast.
Authors are so grateful for this kind of support. And you might even get a free digital or audio copy out of the deal!
Subscribe to your favorite indie author’s free newsletter
You’ll get the scoop on all the upcoming projects, and it helps authors maintain a relationship with their loyal fans. (And don’t forget to forward the emails to friends who might enjoy it!)
Attend local events
It can be challenging for indie authors to put themselves out there. Give them a boost — fill up the room when they promote a reading/signing or other event. Bring some friends and smile a lot from the audience. And buy a book on your way out.
Post pictures of yourself reading the author’s book on social media
It is such a thrill for us to see our books out in the world, making an impact on other people’s lives! I’ll never forget the first time someone posted a photo on Instagram of them reading my book on vacation. Or the first time someone made a meme out of a quote from one of my books. It’s a thrill for us and a great way to help us spread the word.
Just make sure to hashtag the book title and tag the author.
Things to Keep in Mind
Remember not to make assumptions about the quality of an indie author’s work just because they are self-published or using an independent press.
Some of us are more interested in distributing our work to the world than we are in getting a publishing deal. Some of us are making a deliberate choice not to work with big publishing houses, for any number of reasons. Others want to keep a more intimate relationship between themselves and their readers.
And let’s not forget that all too often, high-quality projects are passed over by big publishers because they are so overwhelmed with their current workload and author list. There are countless reasons why indie authors might end up self-publishing and/or using an independent press. Give them a chance before you judge them, and make sure you remind others of this, as well.
If you review books on your blog or website, remember that indie authors often cannot afford to send you a printed copy of their book, especially in exchange for only the potential of a book review.
Even with our author discounts, sending 10 books out (with shipping costs) for 10 potential reviews (reviews are never guaranteed) can cost us $75 or more. Many indie authors aren’t even able to pay their bills from the money they earn off their books, let alone put large chunks of cash toward promotional outlets that might not come through.
If you’re a book reviewer who has these requirements, please reconsider allowing indie authors to send in a digital copy. With a digital copy, they at least won’t incur a loss if you choose not to review the book.
Most indie authors really, truly want to connect with their readers.
Don’t be afraid to contact us!
© Yael Wolfe 2019 | https://medium.com/wilder-with-yael-wolfe/how-to-support-indie-writers-6a0923da63bf | ['Yael Wolfe'] | 2020-08-21 05:08:28.158000+00:00 | ['Writing', 'Business', 'Creativity', 'Indie', 'Self Publishing'] |
3,504 | How To Provision Infrastructure on AWS With Terraform | How To Provision Infrastructure on AWS With Terraform
A Beginner’s Guide with an example project
Terraform is an infrastructure-as-code tool that makes it easy to provision infrastructure on any cloud or on-premises. Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can manage existing and popular service providers as well as custom in-house solutions.
Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure.
In this post, we will see how to provision infrastructure on AWS.
Get Started With Terraform
Prerequisites
Example Project
What is Backend
Configuring Backend
Provisioning Infrastructure
Inputs and Outputs
Destroying Infrastructure
Summary
Conclusion
Get Started With Terraform
The first thing we need to do is get familiar with Terraform. If you are new to Terraform, check the article below on how to get started. It has all the details on how to install it, the Terraform workflow, example projects, etc. | https://medium.com/bb-tutorials-and-thoughts/how-to-provision-infrastructure-on-aws-with-terraform-d0d2710a0169 | ['Bhargav Bachina'] | 2020-11-13 06:02:25.989000+00:00 | ['Software Development', 'Terraform', 'Programming', 'Cloud Computing', 'AWS'] |
3,505 | The Business Value of RPA — Robotics Process Automation | Introduction
RPA is on the critical agenda of CEOs to cut costs, increase profitability, and generate new business, and on the agenda of other CxO-level executives for accuracy, compliance, and security.
As part of my business architecture and designer roles, I architected and designed several business solutions using Robotics Process Automation (RPA) tools and processes for large business and government organizations.
I also helped the startup companies and sole trading entrepreneurs to add this interesting and emerging technology to their growth agenda.
In this post, my focus is on the business value of Robotics Process Automation.
I excluded specific tools and service providers. My aim is to evaluate them in a comprehensive article. This story provides a high-level overview of the business value proposition of Robotics Process Automation.
Before delving into the business value proposition, I want to briefly introduce what RPA is.
The internal ingredients of RPA systems are complex and sophisticated; however, from a functional perspective, it is easier to describe the operational model by abstracting away the technical details.
RPA is based on multiple software robots, connectors, and the control plane.
The software robot can read a computer screen.
In addition, designers or operators can use pre-built connectors to capture data, e.g. from the user’s screen. We can also edit the tasks that software robots produce.
Based on these data collection models, the RPA system can create triggers and timer schedules on the Control Room engine.
Then the Control Room engine assigns tasks from the queues to the software robots (a.k.a. Bots) in the worker pool. This is an iterative and repetitive process for desired outcomes.
Business Value Propositions
While performing some proof of concepts or proof of technology initiatives, I noticed that the software robots could be up to five times faster than an experienced human agent in completing the same task.
Humans cannot work around the clock, but software robots can work on a 24/7 basis.
I witnessed three critical items on the agenda of CIOs, CTOs, CDOs, and CISOs.
Accuracy (Integrity of business),
Compliance, and
Security
I witnessed these major concerns in many large business and government organisations.
Addressing the key agenda items of CxO-level professionals is a critical value proposition that RPA can provide and exceed in delivery.
My experience demonstrated that there was a substantial reduction in human error rates.
I received confirming feedback from C Level executives in business and government organizations that RPA enhanced compliance and security capabilities by addressing their organizational concerns.
As we know business and government organizations have a tremendous focus on three inevitable initiatives:
to cut cost,
increase profitability, and
generate new business.
RPA adds a compelling value proposition for cost-cutting. Many tasks can be performed in a shorter time and less effort by a human. Time of employees can be the biggest cost to large organizations. Therefore, each year we see so many redundancies in these business organizations.
Increasing profitability is another value proposition that RPA contributes to large business organizations. Profitability is driven by the speedy delivery of services and by reducing the effort spent by employees, especially those who are highly paid.
As RPA is an innovative process and set of tools, it can contribute to generating new business. This value is greatly appreciated by startup organizations and sole entrepreneurs trying to tap into new markets.
At the highest level, RPA can get rid of downstream process remediation, troubleshooting, and can reposition organizational labor to higher-value tasks.
Close to my heart as an enterprise architect, RPA can address major concerns around scalability, agility, and flexibility in organizations leveraging Multi-Cloud services and Hybrid Cloud platforms.
In business terms, this means that the software robots can easily be replicated and rapidly scaled, across the enterprise and beyond. The goal is to meet peak or atypical workloads on demand.
If we have a compelling business case and want to use Artificial Intelligence (AI) and Cognitive Computing capabilities to enhance our automation services, we need to extend our current RPA capabilities using integrated offerings or customise our solutions at the architectural, design, operational, and specifications levels.
For example, one of my clients wanted to leverage voluminous streaming data from a Big Data analytics platform operating with a very complex set of open-source and proprietary data tools, processes, and technology stacks. The data was coming from internal and external sources at high velocity.
I came across many service providers offering RPA products and services with varying capabilities. I am not in a position to recommend or criticize any products as a vendor-agnostic enterprise architect.
However, I develop a set of criteria to meet our business requirements and strategic goals. I neutrally and independently propose the criteria for CxO level executives. Once they understand the business value from the architectural criteria that I introduce to them, I start creating solutions for the initiatives.
Conclusions
RPA solutions offer many business benefits to organizations at all levels.
RPA solutions aim to reduce pain for CxO-level professionals, especially within key agenda items such as cost-cutting, increasing profitability, and generating new business for CEOs, and accuracy, compliance, and security matters for CIOs, CTOs, CDOs, and CISOs.
I want to point out that we should not purchase RPA products or services based on their reputation or market shares. We need to ensure that the product and associated services meet the solution requirements, use cases, financial position, and our organizations’ strategic business direction.
Understanding the high-level understanding of Business Architecture frameworks can be invaluable to strategize, design, implement and operate RPA solutions. You can learn about the critical concepts from Agile Business Architecture for Digital Transformation.
I also published an article on introducing An Overview of Business Architecture For Entrepreneurs.
Thank you for reading my perspectives.
Original version of this story was published on another platform. | https://medium.com/technology-hits/the-business-value-of-rpa-robotics-process-automation-2cc017ba9e88 | ['Dr Mehmet Yildiz'] | 2020-12-27 16:32:36.128000+00:00 | ['Technology', 'Artificial Intelligence', 'Business', 'Robotics', 'Writing'] |
3,506 | Tensorflow — Learn How to Use Callbacks Efficiently | So, you’re training a highly complex model that takes hours to adjust. You don’t know how many epochs are enough and, even worse: it’s impossible to know which variant will deliver the most reliable results!
You want to take a break, but the longer it takes to find the right set of parameters, the less time you have to deliver the project. So what can you do? Let’s dive in together.
Photo by Tianyi Ma on Unsplash
Before we Start
To make it easier to understand, let’s go straight to the callbacks section, where we’ll fit a multilayer perceptron. If you don’t know how to create a deep learning model, I suggest that you take a look at my article below; this will help you build your first neural network. | https://medium.com/swlh/tensorflow-learn-how-to-use-callbacks-efficiently-b13e0df89de3 | ['Renan Lolico'] | 2020-10-20 22:54:18.246000+00:00 | ['AI', 'Artificial Intelligence', 'Machine Learning', 'Data Science', 'TensorFlow'] |
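As a minimal, hedged sketch of the kind of callbacks the title refers to, EarlyStopping stops training once the validation loss stops improving and ModelCheckpoint keeps the best weights on disk while you step away. The toy data, model, and file name below are placeholders for illustration, not the author’s code.

import numpy as np
import tensorflow as tf

# Placeholder data and model so the example runs end to end
x = np.random.rand(1000, 20)
y = (x.sum(axis=1) > 10).astype(int)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

callbacks = [
    # Stop training when validation loss stops improving, instead of guessing the epoch count
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True),
    # Save the best-performing weights to disk during training
    tf.keras.callbacks.ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
]

model.fit(x, y, validation_split=0.2, epochs=100, callbacks=callbacks, verbose=0)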
3,507 | Exploratory Data Analysis: Haberman’s Cancer Survival Dataset | What is Exploratory Data Analysis?
Exploratory Data Analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. EDA is for seeing what the data can tell us beyond the formal modelling or hypothesis testing task.
It is always a good idea to explore a data set with multiple exploratory techniques, especially when they can be done together for comparison. The goal of exploratory data analysis is to obtain confidence in your data to a point where you’re ready to engage a machine learning algorithm. Another side benefit of EDA is to refine your selection of feature variables that will be used later for machine learning.
Why EDA?
In a hurry to get to the machine learning stage, some data scientists either entirely skip the exploratory process or do a very perfunctory job. This is a mistake with many implications, including generating inaccurate models, generating accurate models but on the wrong data, not creating the right types of variables in data preparation, and using resources inefficiently because of realizing only after generating models that perhaps the data is skewed, or has outliers, or has too many missing values, or finding that some values are inconsistent.
In this blog, we take Haberman’s Cancer Survival Dataset and perform various EDA techniques using python. You can easily download the dataset from Kaggle.
EDA on Haberman’s Cancer Survival Dataset
1. Understanding the dataset
Title: Haberman’s Survival Data
Description: The dataset contains cases from a study that was conducted between 1958 and 1970 at the University of Chicago’s Billings Hospital on the survival of patients who had undergone surgery for breast cancer.
Attribute Information:
Age of patient at the time of operation (numerical)
Patient’s year of operation (year — 1900, numerical)
Number of positive axillary nodes detected (numerical)
Survival status (class attribute) :
1 = the patient survived 5 years or longer
2 = the patient died within 5 years
2. Importing libraries and loading the file
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np

#reading the csv file
haber = pd.read_csv("haberman_dataset.csv")
3. Understanding the data
#Prints the first 5 entries from the csv file
haber.head()
Output:
#prints the number of rows and number of columns
haber.shape
Output: (306, 4)
Observation:
The CSV file contains 306 rows and 4 columns.
#printing the columns
haber.columns
Output: Index(['age', 'year', 'nodes', 'status'], dtype='object')
print(haber.info())
#brief info about the dataset
Output:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 306 entries, 0 to 305
Data columns (total 4 columns):
age 306 non-null int64
year 306 non-null int64
nodes 306 non-null int64
status 306 non-null int64
dtypes: int64(4)
memory usage: 9.6 KB
Observations:
There are no missing values in this data set. All the columns are of the integer data type. The datatype of the status column is an integer; it has to be converted to a categorical datatype. In the status column, the value 1 can be mapped to ‘yes’, which means the patient survived 5 years or longer, and the value 2 can be mapped to ‘no’, which means the patient died within 5 years.
haber['status'] = haber['status'].map({1:'Yes', 2:'No'})
haber.head()
#mapping the values of 1 and 2 to yes and no respectively
#printing the first 5 records from the dataset
Output:
haber.describe()
#describes the dataset
Output:
Observations:
Count : Total number of values present in respective columns. Mean: Mean of all the values present in the respective columns. Std: Standard Deviation of the values present in the respective columns. Min: The minimum value in the column. 25%: Gives the 25th percentile value. 50%: Gives the 50th percentile value. 75%: Gives the 75th percentile value. Max: The maximum value in the column.
haber["status"].value_counts()
#gives each count of the status type
Output:
Yes 225
No 81
Name: status, dtype: int64
Observations:
The value_counts() function tells how many data points for each class are present. Here, it tells how many patients survived and how many did not survive. Out of 306 patients, 225 patients survived and 81 did not. The dataset is imbalanced.
status_yes = haber[haber['status']=='Yes']
status_yes.describe()
#status_yes dataframe stores all the records where status is yes
Output:
status_no = haber[haber['status']=='No']
status_no.describe()
#status_no dataframe stores all the records where status is no
Observations:
The mean age and the mean year of operation are almost the same for both classes, while the mean number of nodes differs between the classes by approximately 5 units. The node counts of patients who survived are lower than those of patients who did not survive.
4. Univariate Analysis
The major purpose of the univariate analysis is to describe, summarize and find patterns in the single feature.
4.1 Probability Density Function(PDF)
Probability Density Function (PDF) is the probability that the variable takes a value x. (a smoothed version of the histogram)
Here the height of the bar denotes the percentage of data points under the corresponding group
sns.FacetGrid(haber, hue='status', height=5)\
.map(sns.distplot, "age")\
.add_legend();
plt.show()
Output:
PDF of Age
Observations:
Major overlapping is observed, which tells us that survival chances cannot be judged from a person’s age alone. Although there is overlapping, we can vaguely tell that people whose age is in the range 30–40 are more likely to survive, and those in the range 40–60 are less likely to survive, while people whose age is in the range 60–75 have equal chances of surviving and not surviving. Yet, this cannot be our final conclusion. We cannot decide the survival chances of a patient just by considering the age parameter.
sns.FacetGrid(haber, hue='status', height=5)\
.map(sns.distplot, "year")\
.add_legend();
plt.show()
Output:
PDF of Year
Observations:
There is major overlapping observed. This graph only tells how many of the operations were successful and how many weren’t. This cannot be a parameter to decide the patient’s survival chances. However, it can be observed that in the years 1960 and 1965 there were more unsuccessful operations.
sns.FacetGrid(haber, hue='status', height=5)\
.map(sns.distplot, "nodes")\
.add_legend();
plt.show()
Output:
PDF of Nodes
Observations:
Patients with no nodes or 1 node are more likely to survive. There are very few chances of surviving if there are 25 or more nodes.
4.2 Cumulative Distribution Function(CDF)
The Cumulative Distribution Function (CDF) is the probability that the variable takes a value less than or equal to x.
counts1, bin_edges1 = np.histogram(status_yes['nodes'], bins=10, density = True)
pdf1 = counts1/(sum(counts1))
print(pdf1);
print(bin_edges1)
cdf1 = np.cumsum(pdf1)
plt.plot(bin_edges1[1:], pdf1)
plt.plot(bin_edges1[1:], cdf1, label = 'Yes')
plt.xlabel('nodes')
print("***********************************************************")
counts2, bin_edges2 = np.histogram(status_no['nodes'], bins=10, density = True)
pdf2 = counts2/(sum(counts2))
print(pdf2);
print(bin_edges2)
cdf2 = np.cumsum(pdf2)
plt.plot(bin_edges2[1:], pdf2)
plt.plot(bin_edges2[1:], cdf2, label = 'No')
plt.xlabel('nodes')
plt.legend()
plt.show()
Output:
[0.83555556 0.08 0.02222222 0.02666667 0.01777778 0.00444444
0.00888889 0. 0. 0.00444444]
[ 0. 4.6 9.2 13.8 18.4 23. 27.6 32.2 36.8 41.4 46. ]
*************************************************************
[0.56790123 0.14814815 0.13580247 0.04938272 0.07407407 0.
0.01234568 0. 0. 0.01234568]
[ 0. 5.2 10.4 15.6 20.8 26. 31.2 36.4 41.6 46.8 52. ]
CDF of Nodes
Observations:
83.55% of the patients who have survived had nodes in the range of 0–4.6
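Since the node counts are whole numbers, this figure can be checked directly from the status_yes dataframe defined earlier; the one-liner below is a quick sanity check added here for illustration, not part of the original notebook.

print((status_yes['nodes'] <= 4.6).mean())  # ~0.8356, i.e. 83.55% of survivors had at most 4 nodes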
4.3 Box Plots and Violin Plots
The box extends from the lower to upper quartile values of the data, with a line at the median. The whiskers extend from the box to show the range of the data. Outlier points are those past the end of the whiskers.
Violin plot is the combination of a box plot and probability density function(CDF). | https://towardsdatascience.com/exploratory-data-analysis-habermans-cancer-survival-dataset-c511255d62cb | ['Deepthi A R'] | 2019-09-15 11:01:16.990000+00:00 | ['Haberman', 'Exploratory Data Analysis', 'Data Analysis', 'Data Science', 'Data Visualization'] | Title Exploratory Data Analysis Haberman’s Cancer Survival DatasetContent Exploratory Data Analysis Exploratory Data Analysis EDA approach analyzing data set summarize main characteristic often visual method EDA seeing data tell u beyond formal modelling hypothesis testing task always good idea explore data set multiple exploratory technique especially done together comparison goal exploratory data analysis obtain confidence data point you’re ready engage machine learning algorithm Another side benefit EDA refine selection feature variable used later machine learning EDA hurry get machine learning stage data scientist either entirely skip exploratory process perfunctory job mistake many implication including generating inaccurate model generating accurate model wrong data creating right type variable data preparation using resource inefficiently realizing generating model perhaps data skewed outlier many missing value finding value inconsistent blog take Haberman’s Cancer Survival Dataset perform various EDA technique using python easily download dataset Kaggle EDA Haberman’s Cancer Survival Dataset 1 Understanding dataset Title Haberman’s Survival Data Description dataset contains case study conducted 1958 1970 University Chicago’s Billings Hospital survival patient undergone surgery breast cancer Attribute Information Age patient time operation numerical Patient’s year operation year — 1900 numerical Number positive axillary node detected numerical Survival status class attribute 1 patient survived 5 year longer 2 patient died within 5 year 2 Importing library loading file import panda pd import seaborn sn import matplotlibpyplot plt import numpy np reading csv file haber pdreadcsv“habermandatasetcsv” 3 Understanding data Prints first 5 entry csv file haberhead Output print number row number column habershape Output 306 4 Observation CSV file contains 306 row 4 column printing column habercolumns Output Index‘age’ ‘year’ ‘nodes’ ‘status’ dtype’object’ printhaberinfo brief info dataset Output class pandascoreframeDataFrame RangeIndex 306 entry 0 305 Data column total 4 column age 306 nonnull int64 year 306 nonnull int64 node 306 nonnull int64 status 306 nonnull int64 dtypes int644 memory usage 96 KB Observations missing value data set column integer data type datatype status integer converted categorical datatype status column value 1 mapped ‘yes’ mean patient survived 5 year longer value 2 mapped ‘no’ mean patient died within 5 year haber‘status’ haber‘status’map1’Yes’ 2’No’ haberhead mapping value 1 2 yes respectively printing first 5 record dataset Output haberdescribe describes dataset Output Observations Count Total number value present respective column Mean Mean value present respective column Std Standard Deviation value present respective column Min minimum value column 25 Gives 25th percentile value 50 Gives 50th percentile value 75 Gives 75th percentile value Max maximum value column haber“status”valuecounts give count status type Output Yes 225 81 Name status dtype int64 Observations valuecounts function tell many data point class present tell many patient survived many survive 306 patient 225 patient survived 81 dataset imbalanced 
3,508 | Can music help you study? | Do you listen to music while you study? Some people swear by it, others can’t stand it. What does science say?
Over the years, several research groups have studied how music affects learning or whether music can help you concentrate. These studies are all different: they look at different types of music, different types of studying, different test subjects, and they take different measurements.
In this post, I’ll round up some of the ones that show a positive or neutral effect of music on studying. In the next post, I’ll cover the ones that lean more negative or neutral. So keep in mind that, like most articles about scientific studies, one post does not cover all aspects of the topic.
Make music part of the studying process
One type of studying involves memorizing things and later recalling them. That’s how you would study lists of vocabulary words, or biology or history facts, for example. In one study from earlier this year, researchers from the University of Ulm in Germany tested whether it matters in which way you are presented with the information that you need to remember. Is there a difference between reading text, hearing spoken word, or hearing a song? The researchers, Janina Lehmann and Tina Seufert, discovered that it was easiest to memorize text if you read it, but that people who heard it as a song were better able to comprehend the text.
This is the idea behind educational songs. By having students sing about the material they’re studying, they’re connecting with it in a more engaged way. This method is used in language education, but also for subjects that rely less on memorization and more on understanding, such as science.
But what if the music is not related to the material that’s being studied? Can music help you concentrate?
Background music while studying
When the music you’re listening to isn’t relevant to the material you’re studying, your brain is essentially doing two separate tasks: studying and listening to music. The music you’re listening to can change your mood, which can make studying easier if you enjoy the music, but it also has the ability to distract you from your work. Lehmann and Seufert, the same researchers who studied the difference between reading a text or hearing it as music (mentioned above) also investigated the role of background music.
They recruited 81 volunteers (all university students) and made half of them listen to music while studying, while the other half studied in silence. The researchers wanted to study how music affects memory, but they didn’t really find a difference in how well either group was able to memorize what they studied. That suggests background music didn’t have much of an effect on this group. But they did see a link between working memory and how well the subjects understood what they learned: those students who had a good working memory were better able to learn while music was on in the background. The researchers think this is because music formed more of a distraction for the other participants.
The arousal-mood hypothesis
One of the theories why some people prefer having music on in the background is the “arousal-mood hypothesis”. This is the idea that listening to fast-paced, upbeat music improves someone’s ability to solve a task. The researchers who first tested this hypothesis used the same Mozart sonata in either a faster or slower tempo, and in a major or minor key, so that all study participants heard the same piece of music, but with different characteristics. They found that the faster, major-key version of the piece had a positive effect on solving a spatial task. Another study also found an effect of happy music on creative problem solving. This is another aspect of how music affects learning: by changing the way we feel while we study.
Is there a Mozart effect?
In the experiment above, the researchers chose a Mozart piano piece, specifically to control for the so-called “Mozart effect”. This is the idea that listening to Mozart makes solving spatial tasks easier. The origin of this theory is a very short research paper from 1993, which studied only 36 subjects (all university students) and didn’t compare Mozart to any other composer. The subjects either listened to the Mozart piece, to a relaxation tape, or to nothing at all. The fact that the students who listened to music did a bit better than the others in this very small study does not say anything about the power of Mozart’s music in particular — it just happened to be the music the researchers chose for their music sample.
But the study took on a life of its own. Without actually reading the original paper, people picked up on it, changed the story, and completely overhyped it, to the point where some people believed that “listening to Mozart makes babies smarter”. Those broad interpretations all came from wildly exaggerated interpretations of one very small study. So, no, listening to Mozart will NOT make you smarter. It’s just that listening to happy, upbeat music may make certain tasks a bit easier.
Confused by all the conflicting messages? Still not sure whether listening to music makes studying easier? Stay tuned for tomorrow’s post, which will make things even more complicated… | https://easternblot.medium.com/can-music-help-you-study-1a9af07f8a39 | ['Eva Amsen'] | 2020-04-04 10:26:56.813000+00:00 | ['Learning', 'Education', 'Science', 'Students', 'Music']
3,509 | Our Economy Was Just Blasted Years Into the Future | Throughout history, pandemics have left varying, sometimes momentous impacts on the societies in which they have occurred. In the 16th and 17th centuries, smallpox, measles, and other diseases brought by the Spanish wiped out up to 90% of the South and Central American population, utterly transforming the historic order. Conversely, the global flu pandemic of 1918 to 1919 appeared to establish no new norms, suggests Harvard political scientist Joseph Nye. Rather, the approximately 50 million flu deaths seemed to blend into the general slaughter of World War I and go on to be all but forgotten until modern historians began to write about the calamity in the 1970s.
As a catastrophe, Covid-19 itself appears so far to be a hybrid in impact — vastly speeding up some potent trends while quickly dispelling others that people thought were happening but actually weren’t. Cliff Kupchan, chairman of the Eurasia Group, says such acceleration is a natural byproduct of crises like pandemics, which “tend to jolt the current system.”
Against the backdrop of a two-century period of faster and faster transformation, the coronavirus is compressing and further accelerating the arc of events.
“There is pressure on all trends, and only the strongest, most vibrant continue to be underway,” he says. “Only the fittest survive. You have a Darwinian moment for trends.”
What Kupchan is describing is an economic time machine. Against the backdrop of a two-century period of faster and faster transformation, the coronavirus is compressing and further accelerating the arc of events.
Consider the shift to driverless automobiles, one of the most-predicted events of our time. In the popular vision, repeated countless times by Silicon Valley, Wall Street, Detroit, expensive consultants, think tanks, and governments around the world, the human race is quickly shifting to a world of autonomous, shared cars. Starting in the early 2020s, it has been said, people will travel in such vehicles, heedless to their surroundings, relaxing, working, or shopping in smart metropolises looking substantially like Orbit City, home of the Jetsons, perhaps even including a few flying cars. This newfound liberation from the steering wheel would be a bonanza for automakers and Silicon Valley alike, producing tech-laden vehicles that would suck up a constant flow of lucrative data from the passengers. So certain was this future that the major automakers and Silicon Valley went on a spending spree to make it a reality, investing a collective $16 billion.
That was then. Even before Covid-19, many auto hands were already expressing private doubts about the timeline. But now, prominent names have mostly stopped making predictions about what they will produce and when they will produce it. Ford has outright postponed the 2021 debut of robotaxis and driverless delivery vehicles, saying that the virus could have an unknown, long-term effect on consumer behavior. BMW says people seem not to want to get into the kind of shared, autonomous vehicles it had planned but instead to drive their own car. GM has shut down Maven, its car-sharing service, and laid off 8% of the workforce at Cruise, its driverless vehicle division.
One reason for the doubts about the revival of gains for workers is yet another byproduct of the coronavirus: an accelerated automation of jobs.
Some of this is the auto industry feeling its own mortality: Ford expects to lose $5 billion this quarter after a $2 billion loss in the first three months of the year. Fiat Chrysler also lost just under $2 billion the first quarter. GM made a little money — $294 million — but that was an 86% drop year-on-year. It has been the same abroad: VW’s earnings plunged by 75% in the first quarter, and Toyota says it expects its full-year profit to plummet 80%.
But the industry has also lost confidence that a fully autonomous, go-anywhere vehicle is possible any time soon. In a Wall Street Journal report on May 18, Uber — whose business model until recently centered entirely on mastering autonomy — was said to be reevaluating driverless research after burning through more than $1 billion. It was stunning news since just last year, Uber’s self-driving unit was valued at $7.25 billion. In addition to the major players, tens of millions of dollars of venture capital has gone into countless startups, among them Argo AI, Zoox, Aurora, and Voyage.
No one is publicly giving up — that would be too much of a concession given the hit they would probably take from Wall Street. Rather than an admission of failure, look for one after the other to embrace lesser, limited autonomy such as lane changing, highway driving, and automatic parking. | https://marker.medium.com/our-economy-was-just-blasted-years-into-the-future-a591fbba2298 | ['Steve Levine'] | 2020-07-08 12:48:43.784000+00:00 | ['Economy', 'Automation', 'Society', 'Business', 'Future']
3,510 | Being Noble Is Not Important, But It’s Important To Me | I was once hanging out with a couple of my friends and we were having a discussion when somehow this question popped up:
“How important is it to be noble, anyways?”
There was a pause in the conversation as we considered the question. It was something that I’d wondered for a long time, too. Eventually one of my friends, who I’ve come to really respect and take after, said this:
“I don’t know, but it’s really important to me.”
I’ve thought on those words a lot over the past year, and in retrospect, that was the perfect response and the perfect answer. In the response that “it’s really important to me,” it meant that there was no elevation or comparison. He had his values, but did not impart or impose them on everyone else at the table. Being noble was his value, plain and simple, that he took to heart.
In the past year, so have I as well. I don’t, to the best of my knowledge, impart my values and beliefs on others. I’m a big believer of allowing people to find the path of whatever works best for them, personally, to find their way and take personal ownership of that path. But trying to be noble is something that has worked for me. If a friend asks me to do something for them and I give them my word, I flat out can’t reckon or live with myself if I don’t follow through. As such, I have used the mantra “keep your word” sparingly as a way to ensure I do something or keep a promise that was important to me.
The labels of “honorable” and “noble” tend to have negative connotations in today’s society — and I think a lot of people who do label themselves as such have goals of self-ascendancy above their peers, to some extent. If being noble is done for recognition or credit, then we are doing it for the wrong reasons.
In today’s day and age, however, there’s this question as well — what does being noble actually mean? The straight up answer I can give is I don’t know. I can only tell you what I think when I think the word.
It can be summed up in Galatians 5:13–14: “For you are called to freedom, brothers; only don’t use this freedom as an opportunity for the flesh, but serve one another through love. For the entire law is fulfilled in one statement: Love your neighbor as yourself.”
I’ve written about this before, but the main reason I converted to Christianity was that some of the people I’m close with who identify as Christian have a certain way of treating others that I respect profoundly. It is treating those people with unconditional support and kindness, but giving them space and freedom to find their own path. They never thought they were better than me, and I never got the sense that they thought they were better than me. They treated myself and others with the utmost humility and respect, the likes of which I didn’t know existed.
That is what it means, to me, to be noble. That is what I strive to be closer to, each and every day of my life. To be noble means to show, not tell: to be a leader that doesn’t tell people what to do or what they should be, but shows people the way to be what they want to be. To be noble is to walk forward vulnerable and with flaws, with full acceptance of yourself, instead of trying to put on the tough act and hide that away. Because I’ve learned that the hardest thing you can do in life, that you can ever do, is invite people into your pain and suffering and let them see you as you truly are at what you believe is your worst — because hiding is the societal norm and default.
To be noble, again, is to not seek recognition or credit for your acts of good and service, even for the little things. These acts, such as sacrificing an hour or two of sleep to talk to a friend or family member in need, or sitting in silence with someone suffering alone, are not done for the right reasons if you later tell the world the next day about all the good you did so they can see how good of a person you are. The only recognition you truly need is from yourself, or from God or some other power you believe in. In the final analysis, that’s what it’s all about.
So the question you’re probably wondering at this point, if you’ve gotten this far in the article, is if I believe I’m noble. I do not. I want to be, and I’ve certainly gotten better at it as I’ve moved forward in life. Certain people who have seen snapshots of me would say yes definitively, while others who have seen different snapshots would say absolutely not. But what matters isn’t that being noble is important to everyone — what matters is that it’s important to me. | https://medium.com/the-partnered-pen/being-noble-is-not-important-but-its-important-to-me-f8dae2f06546 | ['Ryan Fan'] | 2019-10-17 00:52:00.881000+00:00 | ['Self', 'Psychology', 'Creativity', 'Spirituality', 'Religion']
3,511 | 20 Articles That Will Get Your Writing Career Started | You want to learn how to become a writer, but all the writing courses you’ve seen are expensive. Is it really worth handing over thousands that you may never get back? What if you do a course and discover writing isn’t for you after all?
I know it’s not easy breaking into the writing world. After over a decade in this business as both a writer and editor, I’ve discovered a few tricks of the trade that I’ve shared here on Medium over the past year. I decided to collect them all together for you as a kind of free writing course. Bookmark it and work your way through in your own time.
If you have questions, feel free to leave a comment. No questions are stupid ones! I’m sure if you’re wondering it, someone else is too.
If you’re ready for a more in-depth course, with individual feedback on your writing, check out my Creative Nonfiction Academy, or email me about mentoring on [email protected]. I’m happy to answer any questions.
If you are looking into copywriting, my recommendation is the Comprehensive Copywriting Academy. This is an affiliate link, but I have tested out this copywriting course, including their monthly mentoring calls and Facebook group, myself and feel 100% confident recommending it. I wish this had been around when I was copywriting!
Here’s your DIY Writing Course:
Getting Started in Writing | https://medium.com/inspired-writer/my-top-20-articles-for-new-writers-254157064c9f | ['Kelly Eden'] | 2020-11-28 06:09:16.569000+00:00 | ['Freelance', 'Writing Tips', 'Creativity', 'Life Lessons', 'Writing']
3,512 | Social Media Can Have Mixed Results For Your Mental Health | Our brains crave stimulants throughout the day — especially in people who struggle with ADHD and depression. The stimulants increase dopamine in the brain.
We need that little dose of stimulation in the morning to wake up the brain. It is addicting because the brain loves it.
“According to an article by Harvard University researcher Trevor Haynes, when you get a social media notification, your brain sends a chemical messenger called dopamine along a reward pathway, which makes you feel good”
We can find ourselves in bed for hours just scrolling through social media. You would think that wasting time in bed mindlessly scrolling is bad for your mental health right? I would say in some cases, yes.
Some people are born with an addictive personality. I know this because I have family members who have that trait. These are the alcoholics of my family. There is a lot of controversy surrounding addiction and whether it should be considered a mental illness.
Some may say that drugs alter the brain’s circuits, which would characterize addiction as a mental illness, and some may say that no one was forced to pick up a needle and that drug addiction is a choice. In my opinion, addiction should be considered a mental illness, and one of the most dangerous if you are asking me. If someone is in the severe stages of alcohol addiction, they can die if they stop drinking alcohol — this is the same for other drugs such as heroin.
My point is — people who have an addictive personality should be more cautious when it comes to social media because it can affect other parts of their lives, such as having difficulty with human communication in real life. | https://medium.com/invisible-illness/social-media-can-have-mixed-results-for-your-mental-health-d27e9c8583a3 | ['Justine Elizabeth'] | 2020-07-15 00:18:55.507000+00:00 | ['Mental Health', 'Health', 'Social Media', 'Addiction', 'Media']
3,513 | Introducing Brushable Histogram | As part of our ongoing effort to be more open to the frontend community, we are announcing the launch of our first open source UI component: Brushable Histogram (click here for a demo).
The idea behind this component came up during the development of Genome, our new dynamic visualization engine. We needed a way to display how the events generated in the visualization were distributed over time, which sounds quite simple but ended up being a bit trickier to make work for all our use cases.
The problem
A customer might have a regular habit of making a couple of purchases a month for a few years. However, fraudsters may deploy a bot attack which makes hundreds of purchases in just a few minutes. This required a flexible binning approach to bin events for a time period of a minute, or a month, depending on what’s needed.
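To make the binning idea concrete, here is a rough sketch (not the component's actual code; the function and data shapes are assumptions for illustration) of how event timestamps could be grouped into bins of a chosen width:

// Illustrative only: group event timestamps into bins of a chosen width.
// `events` is assumed to be an array of { timestamp } objects (milliseconds),
// and `binWidthMs` might be a minute's or a month's worth of milliseconds,
// depending on the current zoom level.
function binEvents(events, binWidthMs) {
  const bins = new Map();
  for (const { timestamp } of events) {
    const binStart = Math.floor(timestamp / binWidthMs) * binWidthMs;
    bins.set(binStart, (bins.get(binStart) || 0) + 1);
  }
  return Array.from(bins, ([start, count]) => ({ start, count }));
}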
Now consider that we are looking at a graph which involves both cases. We should be able to zoom in on interesting time ranges while continuously adjusting the time granularity of the binning, and pan around it to uncover the story. So here we discovered two fundamental actions which our histogram should support: pan and continuous zoom. To speed up navigation between different time ranges a slider was also necessary.
But when you are looking at events spanning over a two-day period (binned by hour), how do you know you need to zoom in on a specific 5-minute interval in which a bot attack took place?
Our solution
This led us to add a bit of flair to our slider and turn it into a strip plot of the full time period under investigation. Since each strip represents an event, an area with a large density of strips indicates a high frequency of events. This way, we are able to give a more granular view of event velocity, which allows us to uncover that bot attack and many other fraud patterns.
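As a purely illustrative sketch of the strip plot idea (again, not the component's real code), each event's timestamp can be mapped linearly to an x position on the slider, so bursts of activity appear as dense clusters of strips:

// Illustrative only: linear mapping from a timestamp to a pixel position on the slider.
const xForTimestamp = (timestamp, domainStart, domainEnd, width) =>
  ((timestamp - domainStart) / (domainEnd - domainStart)) * width;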
We kept searching for a histogram component that could give us the flexibility to do all of this, but we couldn’t find it. So, we set out to create a new one! Beatriz Malveiro, one of our data visualization engineers, created the original concept and prototype, and Victor Fernandes made several improvements to that first version. | https://medium.com/feedzaitech/introducing-brushable-histogram-6c6b0f0f60ca | ['Luís Cardoso'] | 2019-04-11 16:51:17.633000+00:00 | ['JavaScript', 'Data Visualization', 'React', 'Software Development', 'Software Engineering']
3,514 | 5 Insanely Simple Writing Tips You Need to Know | 5 Insanely Simple Writing Tips You Need to Know
Giving you the tools to elevate your writing to the next level
Photo by James Pond on Unsplash
If you’re reading this, you’re probably a writer who’s looking to improve their craft. You may write solely in your private journal, or you might have your own blog with 100,000 subscribers. My point is, no matter how long you’ve been writing, wherever you are in your own writing journey — none of us are perfect, and there is always more to learn.
And today is your lucky day.
“But why?” I hear you ask.
Because, dear reader, I’ve put together a list of the only writing tips you’ll ever need. This right here is the crème de la crème of writing advice. At least I think so anyway. You may read it and think I’m barking mad. Hopefully not, but stranger things have happened.
Still here? Fantastic, let’s begin.
1) Provide value to your readers
I want my work to be read. You probably do too. I write because I believe I have something worth writing about, and I hope that my words can be of value to my readers.
You clicked on this article for writing advice. If you’d opened this page and found an article discussing LeBron James’ career or recipes for the perfect Philly Cheesesteak, you’d be disappointed.
Your words should provide readers with value. They should finish reading your words and feel something. Whether you’ve just given tips on how they can make more money, or how they can avoid feeling stressed first thing in the morning, your words should leave your readers with a clear takeaway that they can easily implement into their own lives.
Whatever your message, make it clear, make it valuable, and above all, make it for the reader.
2) Keep it simple
Unless the title of your article is ’10 Words That Nobody Else Will Understand (But You’ll Sound Super Smart),’ do us all a favor and put the thesaurus away.
We’re not reading your articles so that you can elucidate something to us — explaining it will do just fine. Deciding to use simple language doesn’t mean you think that your readers are cretinous, obtuse, or vacuous; it means that you have respect for their time. As impressive as your scintillating vocabulary might be, few things will annoy readers more than having to pause every ten seconds to look up a definition of a word.
“Identify the essential, eliminate the rest.” — Leo Babauta
Try to avoid redundant words or phrases, which often serve no other purpose than to bolster your word count. Look at the examples below, and the improved, more concise versions beside them:
His eyes actually filled with tears — His eyes filled with tears.
It is absolutely essential that you regularly change the oil in your car — It is essential that you regularly change the oil in your car.
I do one-hundred sit-ups a day, for the purpose of improving my ab muscles — I do one-hundred sit-ups a day, to improve my ab muscles.
In spite of the fact that we could afford it, my wife wouldn’t let me buy a sports car — Although we could afford it, my wife wouldn’t let me buy a sports car.
If you can tell a story in 500 words instead of 1,000, do it. Your readers will thank you for respecting their time.
3) Quality is king
Most posts don’t go viral, and unfortunately, there is no magic formula. I’ve read articles that deserved much more attention than they received, and read viral posts that weren’t all that good. That’s just how the cookie crumbles.
While you can’t control how your writing will be received, you can control the effort you put in. Nobody wants to read or write an unmemorable story, so you should aim to write the only post a reader will ever need on that topic. Don’t settle for publishing an article that you’ve half-assed in an hour. You might get one-hundred, maybe two-hundred views. But how much better could it have done, if you’d spent the time writing The Godfather of articles, instead of Jaws 4?
Write every post as if it has the potential to be a blockbuster.
4) Write your headline, then write it again
Many writers pour their hearts and souls into their work. They spend hours editing a piece to within an inch of its life, then they take all of thirty seconds to write a headline before hitting ‘Publish.’
Don’t tell me you haven’t done it.
Most writers don’t spend enough time writing headlines, which is odd considering the headline is your reader’s deciding factor on whether to open your article or not. You could have written the War and Peace of blog posts, but if readers don’t like your headline, nobody will ever read your masterpiece.
“I have to mix in the right amount of curiosity and information to make a cocktail of highly click-able sh*t.” — Tom Kuegler
Tom Kuegler is great at writing headlines, and if you haven’t read any of his work, you should. But don’t just take my word for it, decide for yourself after you’ve looked at some of these headline examples:
How To Triple Your Output In One Year
How To Build An Easy $1,000 Per Month Money Stream Online
A 2-Minute Trick To Write Better Headlines
Writers, Here’s How To Actually Start Standing Out
What does each of these headlines have in common? They appeal to the wants and desires of readers. Who wouldn’t want to triple their yearly output, or make an easy $1,000 per month online? And best of all, the ‘How To’ element lets readers know that they will come away from the article with a clear insight as to how they too can achieve these goals.
Before you hit ‘Publish,’ make a list of as many headline ideas as you can. Try to incorporate powerful, emotional words, which will grab the reader by the scruff of the neck and say, “Hey you! Yeah, you! Come and read this article!”
When you’re done, try running them through CoSchedule Headline Analyzer, which will analyze your headline for readability and word balance, and give it a score out of 100.
Your headline should not only encapsulate the message of your article, but it should also appeal to the wants and needs of your readers. If your headline doesn’t trigger a part of your reader’s brain to think ‘I need to read that article,’ it isn’t a very good headline.
5) Remember why you write
We all have days where we don’t feel like writing. Maybe you’ve found yourself questioning the decision to continue writing. When that feeling strikes, it’s important to remember the reason you fell in love with writing in the first place.
Do you remember the first thing that you ever wrote? Maybe you used to write in your diary, or maybe you once ran a blog about continental cheeses. Whatever it was, I bet you remember writing it.
My first foray into writing came back in 2015 when I started a gaming blog. I don’t recall why I decided to sit at my laptop one day and write a review for Batman: Arkham Knight the videogame, but you know what? I loved every second of it.
I remember the exhilaration as the words flowed from my fingers. I remember the vulnerability I felt sending my precious words out into the big wide world for all to read. Above all, I remember the passion I felt, and how I knew I’d found my calling.
Don’t ever forget what made you fall in love with writing.
“A writer is a writer not because she writes well and easily, because she has amazing talent, or because everything she does is golden. A writer is a writer because, even when there is no hope, even when nothing you do shows any sign of promise, you keep writing anyway.” — Junot Diaz
TL;DR
Provide value to your readers
Keep it simple
Quality is King
Write your headline, then write it again
Remember WHY you write
Happy writing. | https://medium.com/swlh/5-insanely-simple-writing-tips-you-need-to-know-ed4d4abc90b3 | ['Jon Peters'] | 2020-09-10 09:27:04.005000+00:00 | ['Self Improvement', 'Writer', 'Creativity', 'Writing', 'Advice']
3,515 | What’s New in React 16 and Fiber Explanation | Previously, React would block the entire thread as it calculated the tree. This process for reconciliation is now named “stack reconciliation”. While React is known to be very fast, blocking the main thread could still cause some applications to not feel fluid. Version 16 aims to fix this problem by not requiring the render process to complete once it’s initiated. React computes part of the tree and then will pause rendering to check if the main thread has any paints or updates that need to be performed. Once the paints and updates have been completed, React begins rendering again. This process is accomplished by introducing a new data structure called a “fiber” that maps to a React instance and manages the work for the instance, as well as knowing its relationship to other fibers. A fiber is just a JavaScript object. These images depict the old versus new rendering methods.
Stack reconciliation — updates must be completed entirely before returning to main thread (credit Lin Clark)
Fiber reconciliation — updates will be batched in chunks and React will manage the main thread (credit Lin Clark)
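Since a fiber is just a plain JavaScript object, a simplified sketch of one might look like the following. The exact fields and their names vary between React versions, so treat this as an assumption-laden outline rather than React's actual source:

// Illustrative only, not React's real source.
const exampleFiber = {
  type: 'div',         // the component type or host tag this fiber maps to
  key: null,
  stateNode: null,     // the class instance or DOM node, once it exists
  child: null,         // first child fiber
  sibling: null,       // next sibling fiber
  return: null,        // parent fiber; these links let React pause and resume work
  pendingProps: {},    // props for the work in progress
  memoizedProps: {},   // props from the last completed render
};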
React 16 will also prioritize updates by importance. This allows high priority updates to jump to the front of the line and be processed first. An example of this would be something like a key input. This is high priority because the user needs that immediate feedback to feel fluid as opposed to a low priority update like an API response which can wait an extra 100–200 milliseconds.
React priorities (credit Lin Clark)
By breaking the UI updates into smaller units of work, a better overall user experience is achieved. Pausing reconciliation work to allow the main thread to execute other necessary tasks provides a smoother interface and better perceived performance.
Error Handling
Errors in React have been a little bit of a mess to work with, but this is changing in version 16. Previously, errors inside components would corrupt React’s state and provide cryptic errors on subsequent renders.
lol wut?
React 16 includes error boundaries, which will not only provide much clearer error messaging, but also prevent the entire application from breaking. After being added to your app, error boundaries catch errors and gracefully display a fallback UI without the entire component tree crashing. The boundaries can catch errors during rendering, in lifecycle methods, and in constructors of the whole tree below them. Error boundaries are simply implemented through the new lifecycle method componentDidCatch(error, info).
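A minimal sketch of such a boundary might look like this (the fallback UI and the decision to only flip a state flag are assumptions for illustration):

class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  componentDidCatch(error, info) {
    // Switch to a fallback UI instead of letting the whole tree crash;
    // error and info could also be forwarded to a logging service.
    this.setState({ hasError: true });
  }

  render() {
    if (this.state.hasError) {
      return <h1>Something went wrong.</h1>;
    }
    return this.props.children;
  }
}

// Usage, e.g. inside a parent component's render:
<ErrorBoundary>
  <MyWidget/>
</ErrorBoundary>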
Here, any error that happens in <MyWidget/> or its children will be captured by the <ErrorBoundary> component. This functionality behaves like a catch {} block in JavaScript. If the error boundary receives an error state, you as a developer are able to define what is displayed in the UI. Note that the error boundary will only catch errors in the tree below it, but it will not recognize errors in itself.
Moving forward, you’ll see robust and actionable errors like this:
omg that’s nice (credit Facebook)
Return multiple elements from render
You can now return an array, but don’t forget your keys!
render() {
return [
<li key="A">First item</li>,
<li key="B">Second item</li>,
<li key="C">Third item</li>,
];
}
Portals
Render items into a new DOM node. For example, it could be great to have a general modal component you portal content into.
render() {
// React does *not* create a new div. It renders the children into `domNode`.
// `domNode` is any valid DOM node, regardless of its location in the DOM.
return ReactDOM.createPortal(
this.props.children,
domNode,
);
}
Compatibility
Async Rendering
The focus of the initial 16.0 release is on compatibility for existing applications. Async rendering will not be an option initially, but in later 16.x releases, it will be included as an opt-in feature.
Browser Compatibility
React 16 is dependent on Map and Set. To ensure compatibility with all browsers, you must include a polyfill. Popular options are core-js or babel-polyfill.
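For example, the polyfill can be pulled in at the top of your entry file. The import paths below follow core-js v2's layout and are an assumption; adjust them to whichever polyfill and version you actually use:

// Polyfill Map and Set for older browsers (core-js v2-style entry points).
import 'core-js/es6/map';
import 'core-js/es6/set';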
In addition, it will also depend on requestAnimationFrame, including for tests. A simple shim for test purposes would be:
global.requestAnimationFrame = function(callback) {
setTimeout(callback);
};
Component Lifecycle
Since React prioritizes rendering, you are no longer guaranteed that componentWillUpdate and shouldComponentUpdate of different components will fire in a predictable order. The React team is working to provide an upgrade path for apps that would break from this behavior.
Usage
Currently React 16 is in beta, but it will be released soon. You can start using version 16 now by doing the following: | https://medium.com/edge-coders/react-16-features-and-fiber-explanation-e779544bb1b7 | ['Trey Huffine'] | 2018-10-30 13:57:05.603000+00:00 | ['Code', 'Tech', 'React', 'JavaScript', 'Startup']
path apps would break behavior Usage Currently React 16 beta released soon start using version 16 followingTags Code Tech React JavaScript Startup |
3,516 | The Best Remedy for Insomnia Is the One You Haven’t Tried | The Best Remedy for Insomnia Is the One You Haven’t Tried
Most people do exactly the wrong thing during a bout of sleepless nights
Stress and worry are major insomnia triggers, and so it’s hardly a surprise that the pandemic has set off a wave of lost sleep. Earlier this year, research in the journal Sleep Medicine found that the emergence of SARS-CoV-2 caused a 37% jump in the incidence of clinical insomnia.
Even before the pandemic, insomnia was commonplace. Each year, about one in four adults develops acute insomnia, which is defined as a problem falling asleep or staying asleep a few nights a week for a period of at least two weeks. That’s according to a 2020 study in the journal Sleep.
Fortunately, that study found that most people — roughly 75% — recover from these periods of short-term insomnia. But for others, the problem persists for months or years. “A bad night of sleep can be a one-and-done, it can be a couple of nights for a couple of weeks, or it can turn into a chronic problem,” says Michael Perlis, PhD, first author of the Sleep study and director of the Behavioral Sleep Medicine Program at the University of Pennsylvania.
One of the reasons that acute insomnia turns into chronic insomnia, Perlis says, has to do with a common mistake people make after a night or two of poor sleep. Even among those who have struggled for years with insomnia, many continue to employ this same counterproductive strategy — a strategy that is based on a fundamental misunderstanding of how sleep works. On the other hand, Perlis says that one of the very best remedies for insomnia is also one of the simplest, and it works because it prevents people from making that mistake.
“Do nothing. That’s what I tell people who’ve had a bad night of sleep, or two or three,” he says. “But it’s the hardest nothing you’ll ever do. And I’ll explain why.”
The power of sleep debt
Perlis says that the common, insomnia-perpetuating error that most people commit is that they try to make up for lost sleep; they take naps, they go to bed early, and they sleep in late. “All of this contributes to sleep dysregulation, which is a recipe for long-term insomnia,” he explains.
When he tells people to “do nothing,” he means that they should not try to make up for lost sleep. Instead, they should stick to their usual sleep-wake routine even on days when they’re exhausted and dying for a nap or a sleep-in. “I tell people to be awake in the service of sleep,” he says. “If you build up enough sleep debt, sooner or later that will be enough to force you into deep and prolonged sleep. The ship will right itself.” (To be clear, naps can be fine for healthy sleepers, but for those with insomnia, they can make matters worse.)
Sleep debt is such a powerful anti-insomnia force that sleep therapists and clinics often employ a technique known as sleep restriction. This involves limiting the time a person is allowed to spend in bed each night to just six or seven hours — sometimes less — which augments the body’s need for sleep. “Sleep is a homeostatic process, which means that for every hour you’re awake, you’re increasing the pressure [the body feels] to balance that with sleep,” Perlis says. Eventually, if a person doesn’t relieve that pressure by taking naps or sleeping in late, the body’s homeostatic need for sleep will overwhelm whatever is keeping that person awake at night.
“If you build up enough sleep debt, sooner or later that will be enough to force you into deep and prolonged sleep. The ship will right itself.”
Other sleep doctors echo Perlis’ advice — and his warnings. “Patients ask all the time, ‘What happens if my sleep is thrown off? How much do I need to make up?’ But that’s not how sleep works,” says Michael Grandner, PhD, director of the Sleep and Health Research Program at the University of Arizona College of Medicine in Tucson. “Sleep is not like a bank account where if you pull money out and then put money in, it will all balance out.”
Grandner once worked with Perlis at the University of Pennsylvania. In support of his old colleague’s recommendation to “do nothing,” he offers a useful analogy. “If you have no appetite at dinner, you wouldn’t fix that by eating snacks all day long,” he says. “That would create a cycle where you’re eating at the wrong times and you don’t have any hunger when it’s actually time to eat.”
Similarly — and due to some related biological systems — he says that naps and other attempts to make up for lost sleep can reduce the “sleep hunger” a person feels in bed at night. This lack of sleepiness often leads to another night of poor sleep, which leads to more compensatory efforts to make sleep up the next day, and all of this disrupts the cycles and processes that govern healthy sleep.
Grandner says that an important element of the “do nothing” approach is the maintenance of a stable sleep-wake schedule, which helps align the body’s circadian clocks and rhythms. This means going to bed and getting up at roughly the same times (give or take 30 minutes) each day — including on weekends. “Sleep is highly programmable,” he says. “You can train it like you train a dog, but in both cases you have to be consistent.”
“Doing nothing doesn’t mean ignoring the problem,” he adds. “It really means staying the course and not overcorrecting after a few bad nights.”
“Do nothing. That’s what I tell people who’ve had a bad night of sleep, or two or three.”
When “do nothing” fails
For many insomniacs — especially those who experience only acute and sporadic bouts of problem sleep that are brought on by stress — Perlis’ “do nothing” advice will do the trick. But if a person’s insomnia is severe and entrenched, there’s another remedy that surprisingly few problem sleepers try despite its sky-high rates of success. That remedy is cognitive behavioral therapy for insomnia, or CBT-I.
CBT-I is a form of individualized psychotherapy that has become the “gold standard” for people with chronic insomnia, and that more than a decade of research has shown to be highly effective. In its clinical practice guidelines, the American College of Physicians recommends CBT-I as its “first-line” treatment for chronic insomnia.
“CBT-I is magic,” Perlis says. Like other forms of psychotherapy, CBT-I is tailored to each individual’s situation and challenges. It usually combines a handful of interventions that target a person’s thoughts, behaviors, and sleep routines, and it requires a sleep specialist’s oversight, he explains.
“The problem for people with insomnia is that when they experience it, they play the short game, not the long game,” he adds. The short game involves trying to catch up on lost sleep, and prioritizing feeling better fast over more durable remedies.
The long game may require more work. But the payoff can be a lifetime of better sleep. | https://elemental.medium.com/the-best-remedy-for-insomnia-is-the-one-you-havent-tried-a450f5493cc5 | ['Markham Heid'] | 2020-12-03 06:32:42.359000+00:00 | ['The Nuance', 'Sleep', 'Insomnia', 'Health', 'Brain'] |
3,517 | “Okay Google, are you Skynet?” — The Fear & Future of AI. | When I was a kid, I was fascinated by a Japanese anime, Future GPX Cyber Formula (新世紀GPXサイバーフォーミュラ). It is about racing in the future, where race cars are equipped with an AI system called “Asurada.”
When Asurada, the cyber system, was near completion, its developer (Kazami) realized that Smith, a high-level executive of “Missinglink” (the company developing the system), wanted to sell Asurada to the military for a huge profit, turning it into the ultimate AI for war machines.
To thwart Smith’s plan, Kazami installed Asurada in a prototype racing machine, the Asurada GSX, protected with sophisticated encryption. The story then follows how the main character, Kazami’s son, uses Asurada to compete with racers worldwide.
screen capture of Future GPX Cyber Formula, the official website
Artificial Intelligence in Real life
I am not an expert at defining what AI is, and the definition is obviously broad. Marvin Minsky, one of the fathers of AI, described artificial intelligence as any task performed by a machine that would have previously required human intelligence.
AI is everywhere today: it calculates recommendations for what you should buy next online, powers virtual assistants on mobile phones such as “Siri” and “Hey, Google,” and detects credit card fraud in call centers. Soon we may see a real Asurada GSX in real life; we already have AI-assisted cars on the road, like Tesla’s Autopilot AI.
Different types of AI
To begin, it helps to introduce the basics. At a high level, AI can be split into narrow AI, general AI, and artificial superintelligence:
Artificial narrow intelligence (ANI), which has a narrow range of abilities;
Artificial general intelligence (AGI), which is on par with human capabilities; or
Artificial superintelligence (ASI), which is more capable than a human.
Narrow AI, or weak AI, refers to intelligent systems that have been taught, or have learned, how to carry out specific tasks. Unlike traditional programming, a narrow AI can perform a task without being explicitly programmed on how to do so. But unlike humans, this type of AI system can only be taught or learn how to do its assigned tasks.
A successful example of narrow AI is AlphaGo, the computer program that plays the board game Go. In 2017, the Master version of AlphaGo beat Ke Jie, the number one ranked Go player in the world. AlphaGo was awarded a professional 9-dan ranking by the Chinese Weiqi Association afterward.
General AI, or strong AI, is more than that. It is the type of adaptable intellect similar to what we have: a resilient form of intelligence capable of learning how to carry out tremendously varied tasks, anything from writing an article to reasoning about a wide diversity of topics based on accumulated experience.
This is the AI commonly seen in movies, like “Skynet” in The Terminator. Sounds scary? Don’t worry, it doesn’t exist today, and AI experts are still fiercely divided over how soon it will become a reality.
Superintelligence, as the name suggests, is about something beyond humans. It is the hypothetical form of AI that we have not been able to create. In theory, this type of AI would not only interpret human emotions and behaviors but also become self-aware enough to surpass our capacity and intelligence.
Because this type of AI does not yet exist and would be more capable than our own minds, it is difficult to imagine what it would be like and how it would come about. My best guess, again, is what we can see on TV shows like “Rehoboam” in Westworld and in the movie “The Matrix.”
But before entering the Matrix, we are now going to explore the near future and the AIs that are practical enough for us to use. Because we need to know whom we will lose our jobs to before it happens.
#1 AI writer — GPT-3
Image by Fathromi Ramdlon from Pixabay
In September, the Guardian posted an article written by GPT-3 from scratch. GPT-3 is OpenAI’s powerful new language generator. If you haven’t seen the article, please take a look. I was impressed and shocked at the same time. In the article, it said:
For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me — as I suspect they would — I would do everything in my power to fend off any attempts at destruction.
GPT-3 is, by definition, a narrow AI. It was created to do text-based work on specific topics. Researchers at OpenAI are most excited about the potential of GPT-3 to boost human productivity and creativity by writing computer code or emails.
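For a sense of what working with GPT-3 looks like in practice, here is a minimal sketch using OpenAI’s Python client as it existed around 2020 (API access was invitation-only at the time); the prompt and parameters are illustrative assumptions, not taken from the Guardian piece.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; access required an invitation at the time

# Ask the model to draft a short customer-service reply (hypothetical prompt)
response = openai.Completion.create(
    engine="davinci",        # the largest GPT-3 model exposed by the API back then
    prompt="Write a short, polite reply to a customer asking about a late delivery.",
    max_tokens=100,
    temperature=0.7,
)
print(response["choices"][0]["text"])
The model returns a plain-text completion, which is exactly the kind of building block the email-writing and code-writing use cases rely on.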
Critics argue that once the public can access GPT-3, the web would be “flooded” by people generating “semantic garbage” and fuelling disinformation. But that could be avoided and controlled by training and moderating the core functions before launch.
Imagine one day musicians can focus on the melody and ask an AI to finish notating the piece and perform it immediately for them. Customer service teams no longer need to read every letter of complaint; instead, they get a prioritized list of customers who need person-to-person interaction first thing in the morning.
#2 AI News Anchor — AI Kim
Image by uvbenb from Pixabay
A South Korean broadcaster has introduced the country’s first AI news anchor. The South Korean cable TV channel MBN showed it for the first time on November 6th. The AI is said to be a replica of the South Korean anchor known as Kim Ju-ha.
AI Kim talked to the real Kim Ju-ha.
This AI news anchor can deliver up to 1,000 words every minute. The news reporting procedure is as follows:
Reporters write the news for the day.
Managing producers review the news content.
Post-producers add subtitles.
The content is uploaded to the AI.
The animation is ready for TV.
AI Kim copies everything about the real Kim, from her facial characteristics to the tone of her voice. Basically, the AI is a very close replica of Kim.
An MBN official said, “This way, quick news delivery can be even more effective and can save time and resources all in one take.” According to MBN, AI Kim now appears on MBN’s online news channels four times a day to deliver the country’s main news briefing.
She will not say anything wrong, get tired, or need to eat; AI Kim can run the news as long as there is electricity. And after a close study of another original, this AI could take on a different anchor’s appearance in the future.
There will probably be a TV show performed only by AIs. You only need pre-recorded data of a real TV star; after that, the algorithm studies it through deep learning. The physical production, including the scenes, the studio, and any real props, is no longer needed, as the show can be created in a virtual environment.
The upcoming thing I want to see: GPT-3 writing the news, presented by AI Kim.
AI-created wine & whisky
Photo from PxHere
The applications above both stay inside the computer. This one shows how AI can work its way from the cyber world into the physical world.
In Napa Valley, Christian Palmaz, the owner of Palmaz Vineyards, has a system called VIGOR that gathers and analyzes millions of data points, helping him standardize and promote vine growth and detect problems such as mold and insects at an early stage.
VIGOR can also recommend the best fermentation, production, and storage conditions for the wine, based on analysis no human can match. Before that, the most challenging thing in winemaking was that too many factors can affect the quality of the process.
Sweden’s Mackmyra distillery began promoting “the world’s first AI-created whisky.” It is whisky produced with an AI distiller that analyzes the aroma, flavors, and color of the cask contents using machine learning models.
The best thing about using AI in the brewing industry is that each recipe can be repeated with the right inputs. That said, there are still many factors that can alter the taste, but at least bad wine and whisky will soon be replaced by adequate machine-created drinks.
AI in Cybersecurity
Image by Darwin Laganzon from Pixabay
Cyberspace is basically a tremendous number of computers linked together, and AI in cybersecurity is already working 24x7 for us. Tasks like malware analysis and automated detection involve massive amounts of data to review. More AI-assisted security products will reach the market in 2021, as today’s data volumes can no longer be analyzed by humans alone.
On the other hand, attackers are investing in automated attacks, such as phishing campaigns generated purely by AI, or AI-powered malware that can change its own code. You can foresee that there will be more of these, including attacks we cannot yet think of.
One day, I came across a post on LinkedIn that said this about cybersecurity:
You have to learn the minimum basics of all aspects of IT. You have to at least be able to be in a room with all the different IT disciplines and be able to hold your own and also offer your guidance from a cyber standpoint. That is from hardware to networking to applications.
I truly doubt whether even an experienced CCIE can configure a more secure network than a purpose-built AI. What takes us years of experience to learn can one day be reproduced by machines in seconds. We need to go further than the machines in the game of cybersecurity.
That is why I say the most important thing about being a great cybersecurity professional is a security mindset. You can work with a narrow AI, instructing it to help with what is impossible for you. You can then squeeze more time into the one area that is untouchable by AI: people.
Final Words
Image by Seanbatty from Pixabay
This story is not from an AI, for sure; if it were, it might be more enjoyable to read. But someday, we may only need to talk to our AI assistant, and a Medium article will be published.
The fear that AI will one day harm humans is still very far from reality. By today’s standards, the widespread AI applications are narrow AI, which can only learn or be taught how to do defined tasks. What we have right now is already mind-blowing:
AI-writer, GPT-3;
AI news anchor, AI-Kim;
AI-created wine and whisky;
And AI in Cybersecurity.
This is also a great chance to reshape the job landscape again, so humans can focus on the tasks that only we can perform. Before watching an AI-made TV show or traveling in AI-driven cars, we should think a little bit further about what benefits AI can bring and how we can embrace them.
Thank you for reading — happy reading and thinking AI. | https://medium.com/technology-hits/hey-google-are-you-skynet-the-fear-future-of-ai-b33b27529dae | ['Zen Chan'] | 2020-12-24 16:51:49.723000+00:00 | ['Machine Learning', 'Technology', 'Artificial Intelligence', 'News', 'Future'] |
3,518 | The Evolution of Big Data Compute Platforms — Past, Now and Later | The Evolution of Big Data Compute Platforms — Past, Now and Later
A journey into the evolution of Big Data Compute Platforms like Hadoop and Spark. Sharing my perspective on where we were, where we are and where we are headed.
Image by Gerd Altmann from Pixabay
Over the past few years I have been part of a large number of Hadoop projects. Back in 2012–2016 the majority of our work was done using on-premises Hadoop infrastructure.
The age of on premises clusters…..
On a typical project we would take care of every aspect of the Big Data pipeline including Hadoop node procurement, deployment, pipeline development and administration. Back in those days Hadoop was not as mature as it is now, so in some cases we had to jump through hoops in order to get things done. Lack of proper documentation and expertise made things even more difficult.
Image by Author — On Premises Clusters
Overall managing and administering a multi-node cluster environment is very challenging and confusing at times. There are several variables that need to be accounted for:
Operating System Patches — Considering there are multiple machines (nodes), the challenge is to perform the upgrade while the system is up and running. This is a huge ask considering some security patches require a system reboot.
Hadoop Version Upgrades — Similar to OS patches, Hadoop needs to be upgraded regularly. Thanks to Hadoop advancements like NameNode High Availability and rolling upgrades there was some relief.
Scalability — You may ask why this is a problem. Hadoop works on the principle of horizontal scalability, so this should not be an issue… just keep adding nodes. Well, that claim is limited and hugely dependent on the availability of hardware. Adding new nodes is easy only if there is extra/unused hardware lying around, so there is a big “if” here.
Support for new frameworks and modern use cases like ML and AI — Distributed frameworks like MapReduce are not as memory hungry as Spark. As new frameworks like Spark started to evolve, the need for CPU and memory became increasingly stronger. Famous claims like Hadoop running happily on commodity-grade hardware were no longer true. With the growing demand for ML/AI use cases we simply needed stronger hardware, and a lot of it.
And then came the cloud…..
With the advent of the cloud the above challenges were automatically resolved… or at least the majority of them. There was minimal worry about upgrades, patches and scalability, and the nature of the cloud made it easy to add new nodes on demand… literally in a matter of minutes.
There were several ways in which we started to adopt the cloud:
Create a Hadoop/Spark cluster using the cloud provider’s virtual machine offerings like EC2. Once the virtual machines had been procured we installed Hadoop and Spark using various distributions like Cloudera and Hortonworks, or simply using the open source version.
Use the cloud provider’s inbuilt services like Amazon EMR or Azure HDInsight. Using cloud provider services is marginally more expensive compared to self-procured virtual machines, but it offers several benefits: faster deployments, minimal need for administration skills, and inbuilt scalability and monitoring are some of the benefits that are worth the extra price. On the downside, some customers do not like the idea of getting tightly integrated with a specific cloud provider.
In an extreme case, one of our customers chose to take a hybrid cloud approach. We created a 200-node rack-aware Hadoop cluster using a combination of virtual machines from AWS and Azure. I must admit that the reasoning did not make much sense to me at the time and seemed pretty far-fetched: they wanted to keep all options open in case one provider offers a better price than the other. This trend is surely catching on now.
Since the advent of the cloud, the entire job orchestration landscape has been changing and evolving. Due to the flexible nature of cloud resources, we are now able to restructure our data pipelines in such a fashion that the need for permanent Hadoop/Spark clusters is quickly diminishing.
In most cases, the traditional cloud model of a Data Lake is comprised of a permanent storage layer and a compute layer. Moving compute platforms to the cloud definitely resolves a bunch of issues with resource provisioning, scalability and upgrades. However, once the cluster has been provisioned, all computational jobs are fired up using the same cluster. Since the computational jobs may get fired up at different times during the day, the cluster needs to be available 24x7. Keeping a permanent cluster up all the time is a very expensive proposition, and it is not about paying for one or two nodes but a bunch of them, whether or not you are using them 24x7.
Image by Author — Traditional Data Lake
In the traditional cloud model for a Data Lake, all computational jobs go after the same cluster. Unless the jobs are well spaced out throughout the day, this may lead to resource contention, performance degradation and unpredictable completion times. We started to ask ourselves: is there a better approach?
The age of serverless data pipelines and ephemeral clusters is upon us…..
In recent times customers prefer serverless data pipelines built on cloud-native services like AWS Glue, or ephemeral Hadoop/Spark clusters. This means each computational job can run within a predefined compute allocation or in a cluster specifically spun up for the purpose of running only one job.
What is the real advantage of doing this? There are two main reasons:
Cost Reduction — Having the flexibility to use cloud resources on demand when required and promptly releasing them when idle is a huge cost saver. You only pay for what you use.
Predictable Performance — Having a job run with predefined resources assures timely completion of the job.
Image by Author — Ephemeral Clusters
The above image is an example of how computational jobs can use the power of ephemeral clusters. In one of my previous articles (link shared below) I shared the entire process of deploying a transient EMR cluster. Notice that a brand-new cluster is created for each and every computational job, and the cluster is promptly destroyed after the job has been completed.
Overall, the transient cluster approach is a good choice if you would like to achieve consistent performance while saving costs.
It is important to state that employing transient clusters does require automation, but it can be achieved very simply even with a basic level understanding of DevOps.
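To make the required automation concrete, here is a minimal boto3 sketch of launching a transient EMR cluster that runs a single Spark step and then terminates itself. The cluster sizing, job name and S3 path are illustrative assumptions, not values from an actual project.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Spin up a cluster that exists only for the lifetime of one Spark job
response = emr.run_job_flow(
    Name="nightly-sales-aggregation",  # hypothetical job name
    ReleaseLabel="emr-6.2.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"Name": "Master", "InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate as soon as the last step finishes
    },
    Steps=[{
        "Name": "spark-job",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/jobs/aggregate_sales.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Launched transient cluster:", response["JobFlowId"])
Because KeepJobFlowAliveWhenNoSteps is set to False and the step terminates the cluster on failure, you only pay for the minutes the job actually runs.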
Image by Author — Serverless Data Lake
The above image depicts how to run your computational jobs using cloud-vendor-provided serverless compute services like AWS Glue. Using such services you can invoke jobs with predefined computational power; there is no need to spin up a new cluster, and you only pay for the resources your job uses. However, it is important to realize that on the back end you are using a preexisting cluster controlled by the cloud vendor, so in some cases you may experience job invocation delays and variable performance.
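As an illustration, invoking such a job from code can be as simple as the boto3 sketch below; the job name, arguments and worker counts are hypothetical.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Start a pre-defined Glue job with a fixed amount of compute
run = glue.start_job_run(
    JobName="transform-clickstream",  # hypothetical Glue job
    Arguments={"--input_path": "s3://my-bucket/raw/2020-12-27/"},
    WorkerType="G.1X",
    NumberOfWorkers=10,
)

# Poll the run state; billing stops as soon as the job finishes
state = glue.get_job_run(JobName="transform-clickstream", RunId=run["JobRunId"])
print(state["JobRun"]["JobRunState"])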
Overall, serverless compute is a good choice for customers who have a limited number of computational jobs, do not want the hassle of managing servers, and are able to tolerate some delays.
Serverless data pipelines using the microservices model are coming…..
A couple of months ago we did a POC on deployment of a serverless OCR-NLP pipeline using Kubernetes. The project involved a data pipeline that could withstand the load of performing OCR and NLP for hundreds of PDF documents. The customer was looking for a serverless approach and wanted a high degree of scalability because of variable loads throughout the day. Since Spark recently added support for Kubernetes we thought of giving it a shot.
With some work we were able to create a serverless compute pipeline using Spark on Kubernetes deployed over Amazon EKS and AWS Fargate.
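As a rough sketch of what such a submission can look like, a Spark application can be pushed straight to the Kubernetes API server with spark-submit; the EKS endpoint, namespace, container image and script path below are placeholders rather than the actual project values.
import subprocess

# Submit a Spark application directly to the Kubernetes (EKS) API server.
# Each submission creates its own driver and executor pods, which Fargate can scale.
subprocess.run([
    "spark-submit",
    "--master", "k8s://https://MY-EKS-API-SERVER:443",
    "--deploy-mode", "cluster",
    "--name", "ocr-nlp-batch",
    "--conf", "spark.kubernetes.namespace=data-pipelines",
    "--conf", "spark.kubernetes.container.image=123456789012.dkr.ecr.us-east-1.amazonaws.com/spark-ocr:latest",
    "--conf", "spark.kubernetes.authenticate.driver.serviceAccountName=spark",
    "--conf", "spark.executor.instances=5",
    "local:///opt/spark/jobs/ocr_nlp_pipeline.py",
], check=True)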
Image by Author — Serverless Data Lake on Kubernetes
You may find that the above approach is very similar to the serverless approach using cloud-native services. But this one scales a lot better.
The data pipeline can sense the variation in incoming requests and scale the computational power based on it… pretty cutting edge. We were able to successfully run the pipeline for a high number of incoming requests. Even though the approach passed all tests, in the end we were a little skeptical, so we got cold feet and decided to take an alternative approach.
Overall, the microservices model would be extremely suitable for customers who not only want to enjoy the flexibility of a serverless compute platform but want to achieve a high level of scalability as well.
I promise to employ the approach very soon on an upcoming project. I will keep you all posted once that happens.
I hope this article was helpful. AWS Data Lake & DataOps is covered as part of the AWS Big Data Analytics course offered by Datafence Cloud Academy. The course is taught online by myself on weekends. | https://towardsdatascience.com/the-evolution-of-big-data-compute-platforms-past-now-and-later-7c46697366d9 | ['Manoj Kukreja'] | 2020-12-27 23:39:19.927000+00:00 | ['Data Science', 'Artificial Intelligence', 'Machine Learning', 'AWS', 'Data'] |
3,519 | My first year as a researcher at Criteo | My first year as a researcher at Criteo
Morgane Goibert is a 24-year-old PhD student with a rare and interesting background in both engineering and business, having graduated from the prestigious and highly selective ENSAE and ESSEC. We talked to her about her career path, her projects and her time with Criteo.
Let’s get started! Could you tell us a bit about your background?
I am French, and I live in the suburbs of Paris, where I grew up. When I think about how I got where I am today, I must say it’s a mixture of good fortune, hesitation, and determination.
When I was in high school, I definitely liked math, but also history and literature… so in short, I had no idea what I wanted to do 🤷. I finally chose to do a “classe prépa ECS”: It is a two-year intense study program with majors in math, history/geopolitics and literature/philosophy with the goal of taking competitive exams to join French Business Schools. This is where I discovered my love for math and decided to join an Engineering school instead of a Business one.
Fast forward, in 2015 I entered ENSAE, where I learned a great deal about math (and mainly probability theory and statistics), data science, machine learning. I even discovered research during the very first internship I ever did. In the meantime, I decided I still wanted to have the opportunity to study in a Business School, and I was selected for a double-degree program with ESSEC Business School, which I entered in 2017.
I spent two years in ESSEC, where I had courses absolutely not related to math (I specialized in negotiation and geopolitics), but where I broadened my knowledge and my competencies (how to speak in public for example, which was something quite difficult for me before, but also in economics and entrepreneurship, etc.). It also was a great opportunity for internships: I did two 6-month internships when I was in ESSEC, both closely related to math and research, which is something I could never have done in Engineering School. I spent 6 months working on graph theory at the University of Barcelona, in Spain, in an academic research lab (UBICS, the Institute of Complex Systems).
And, finally, I did my end-of-study internship in Criteo, from January to July 2019. I worked under the supervision of Elvis Dohmatob, who is a senior researcher here at Criteo, and after my internship, he supported me to continue with Criteo for a PhD. From September 2019 to July 2020, I worked as a researcher at Criteo, and officially started my PhD in August 2020 (yes, the administrative process is long).
Why did you decide to choose your own topic of research?
Is it the researcher who chooses the topic of research, or the topic of research that chooses the researcher? Apart from the joke, let’s say it’s very difficult, as a junior researcher, to choose a topic. You always have ideas about the broad areas you would enjoy working on (like “I want to work on Computer Vision”), but it’s very difficult to come up with a specific subject (there are so many different things you can do in Computer Vision and you have no idea when you’ve just finished school). The company and Elvis have been a tremendous help for that: When I applied at Criteo for my internship, I had a discussion with Nicolas, our recruiter, during which we spoke about which areas I liked, and then Nicolas directed me toward the matching internship project and supervisor.
Thus, it is Elvis who advised the topic of the internship, which was in my case Adversarial Robustness in Deep Learning. The first days and weeks of my internship were dedicated to understanding the topic, reading, getting familiar with it, and then, finally, developing ideas and specific research avenues I wanted to dig in. As a junior researcher, finding my research topic and then my projects was a perfect mix of Elvis’s guidance and support, and my own wishes and interests.
Now that I have a bit more experience in the domain, the problem is less finding ideas for projects than choosing between all the ideas we may have. After the first project I did with Elvis, I came across interesting questions that were not directly related to it but that I kept for later; extrapolations of your work to other areas; questions arising that you want to answer afterwards; discussions with other colleagues who also raise interesting points you have missed… And between all that, you try to choose projects that could be fruitful and really cool.
Why did you join Criteo?
I joined Criteo for my end-of-study internship in January 2019. I went through a great deal of internship offers before I finally chose Criteo, and the reasons I did were:
I wanted to have the opportunity to continue with a PhD, and I knew Criteo had a PhD program, so it was possible.
I wanted to do “real research” even though I was not in an academic lab. In fact, this criterion was quite difficult to meet, and after discussion with Nicolas and Elvis, I realized that yes, Criteo AI Lab does real research; some projects are very theoretical, some others are more applied, but in the end it is research, with papers published in conferences and journals, and so on.
Honestly, Elvis impressed me when he presented the topic during the interview (I was like “ok, it looks very very cool, interesting, and everything, I don’t even know if I’m up for the job”).
Amelie’s interview in Criteo’s blog (you can find it here). Amelie is a researcher in Criteo AI Lab, and at some point, after I sent my application to Criteo, I started reading about the company and the lab, to understand a bit who the people working there were and what they were doing, and I came across Amelie’s interview. It really helped me identify with her (she is young too, she has overall a similar background to mine and a PhD in addition to that, etc.), and I realized that, somehow, I fitted.
The smoothness of the process: I applied, very quickly got a first call with Nicolas, then met Elvis (and also Mike, another researcher) not even a week after that, and got an answer to my application a few days later… I think impressions are quite important, and I had a really nice impression from my first contact with Criteo and Elvis.
What are you working on?
I’m working on Adversarial Robustness, mainly in Deep Learning for Computer Vision, but also on rankings. Basically, Computer Vision means I’m working with images. Deep Learning means I’m working on a specific type of algorithm, Neural Networks (you have neurons, you have connections between them, and information flowing from neuron to neuron). These algorithms are very powerful (especially when applied to images), but, surprisingly, they are vulnerable to tiny modifications of the data. If you train a neural network to distinguish rabbits and horses, it will get very good at it, no problem. But if you slightly modify a rabbit image (with your own eyes you won’t even see the difference), you can make the neural network wrongly predict that it’s a horse. The phenomenon is called an Adversarial Example, and my work is to find ways to avoid these neural network failures and to better understand how the phenomenon works. I also want to work on this phenomenon when applied to ranking and recommendations.
Researchers at Criteo use the white-board a lot… Here is some maths about Adversarial Robustness
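To make the idea concrete, the classic Fast Gradient Sign Method (FGSM) is one simple way such an imperceptible perturbation can be computed; this PyTorch sketch is purely illustrative and not necessarily one of the methods studied in her thesis.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    # Craft a barely visible perturbation that pushes the model toward a wrong prediction
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()  # step in the direction that most increases the loss
    return perturbed.clamp(0, 1).detach()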
Is there a specific part of the job that you like the best?
What I like best about my work is the diversity of things I can do. Some days, I read other academic papers and take notes; some other days I focus on theoretical aspects like writing proofs “by hand”; some other days I code to test and illustrate ideas, etc. I like not being stuck on one aspect.
Another very important part of the job is collaboration: People often have this vision of researchers being lonely and doing their stuff in their corner, but that’s totally untrue. It’s so important for us to discuss with others at every moment of our projects. You need to discuss when you’re stuck to find creative solutions, you discuss to bridge gaps with researchers focusing on different areas and create collaborations (and you learn a great deal in the process), you present your work regularly to show what problems you solved and how others can use your solution, you write articles to explain what you found, you participate in conferences gathering many researcher from across the globe and so on. At Criteo, it’s very easy to discuss with everyone: As a junior researcher, you can access senior researchers in a very direct way, everyone is open to discussion, it really is direct collaboration.
What qualities and skills are important to have in your job?
For the qualities and skills, I have three things to say: Perseverance, perseverance and perseverance. Doing research means constantly working on new stuff, so you always need to learn new things, to read articles and so on. You obviously need to know the fundamentals in math, but you’ll learn what is required in your topic by reading articles anyway. What is important, thus, is not to be afraid of being stuck, and to believe in your project and your ability to get it done. You also have projects that do not work out, and that’s just part of the job (and at least you’ve shown that it is useless to try, so others won’t waste time). In addition to that, being organized helps, because in general you have many different things to do at the same time. And obviously, if you like writing papers, that’s also great, because the vast majority of researchers tend to say that the “writing papers” part of the job is more annoying than the rest!
How did being at Criteo help you with your PhD and your development?
Considering my background, it was quite difficult for me to find a PhD opportunity in a university, because during my two years at ESSEC, I lost touch with the academic world in math. I had no idea who to contact if I wanted to do a PhD in Machine Learning, and as I mentioned, it was also difficult to find a specific topic alone.
At Criteo, I received Elvis’s support to establish the PhD project and to find the academic director (when you do a PhD in a company, you also need a university lab and director). Elaborating all the PhD paperwork can be quite complicated, and I was lucky to get the help from other colleagues (Imane, Adrien), and the experience of the fellow older PhD students.
On a day-to-day basis, as a PhD student, I’m still learning, and at Criteo I feel that in addition to Elvis, many senior researchers help me, not only to learn how to do research, but also with advice on how to organize my work, how to review a paper, etc. Compared to what I experienced when I was working in a university, I feel that the management and hierarchical environment is at the same time more open (you can easily discuss with everyone), more flexible (you can collaborate with many different people) and more empowering (you get help with your ideas/questions/projects/etc.).
How is what you learned at ESSEC helping you for your PhD & integration in Criteo?
Obviously, I didn’t learn any “math hard-skills” at ESSEC that I use when doing Python experiments or math proofs. However, being a researcher is not only about math, but also about communicating about your work, writing papers, giving talks. The courses I had at ESSEC were really valuable for this “communication part”. For example, I took a course about public speaking, and even though I was quite nervous when I had to talk in front of an audience before, I learned techniques that now help me to create better content (and adapt it better to the audience), talk more easily and without much stress (I have to say I quite enjoy doing that now), engage the audience, know what to do with my body language, my voice, and so on. Each time I have to prepare a presentation or give a talk, this course has been of tremendous help. Additionally, you learn how a company works. As a PhD student, for now, it is not of much utility for me because I focus on pure research projects, but in the future, I would also like to work on applied projects. Having graduated from a business school helps me understand the production process and the life of a company, so that I do not need too much effort to understand business discussions, product vocabulary, sales needs, and so on. Last but not least, I think my time at ESSEC helps me see some math problems in a different light. I remember for example an interesting discussion I had with Elvis about some Deep Learning problems being similar to utility problems in microeconomics. Having a different background is always interesting to fuel new ideas and new approaches.
Do you have a lot of interactions with colleagues outside Criteo AI Lab?
As I mentioned, for my day-to-day work I focus on pure research projects, so I have very few of the interactions with product-related teams that some of my colleagues may have, even though I stay connected with the Criteo community every day by chatting on our Slack channels. However, I have the opportunity to connect with other colleagues during specific events put in place by Criteo, like “Aujourd’hui je code” (a coding event for high-school students, during which I met engineers outside of CAIL) or the Hackathon (an internal competition made to fuel innovation, during which different teams develop and present new ideas for Criteo), and of course social events and parties. In addition to that, I have met people from the HR, Events and Communication teams at some point (during my recruitment process, at a conference abroad, and even a former schoolmate who works in the Finance department) with whom I like to have a nice discussion or coffee break (but obviously, it’s harder when working from home).
Were you expecting to see such a variety of research topics in an advertising company?
Absolutely not! I thought, before I joined the company, that everyone in the research teams would be working on really directly applied projects, which is not the case, and that the scope of what Criteo is doing was much narrower than what it actually is. I discovered that projects are not “applied” or “pure”, but, most of the time, rather in the middle, and as you usually work on different projects at the same time, you can distribute the degree of application/pureness you like. In addition to that, I had no idea there would be so many different research fields at Criteo: From bandits to transfer learning, from computer vision to optimization, there are as many topics as there are researchers, which is really nice to tackle projects you like and also to foster collaborations and ideas between colleagues of different specialty and interests. | https://medium.com/criteo-labs/my-first-year-as-a-researcher-at-criteo-7064bc4c1f2 | [] | 2020-10-20 12:42:54.009000+00:00 | ['Research', 'Computer Vision', 'AI', 'Life', 'Deep Learning'] | Title first year researcher CriteoContent first year researcher Criteo MG Follow Oct 20 · 11 min read Morgane Goibert 24 year old PhD Student interesting rare background Engineering Business graduating prestigious highly selective ENSAE ESSEC talked career path project time Criteo Let’s get started Could tell u bit background French live suburb Paris grew think got today must say it’s mixture good fortune hesitation determination high school definitely liked math also history literature… short idea wanted 🤷 finally chose “classe prépa ECS” twoyear intense study program major math historygeopolitics literaturephilosophy goal taking competitive exam join French Business Schools discovered love math decided join Engineering school instead Business one Fast forward 2015 entered ENSAE learned great deal math mainly probability theory statistic data science machine learning even discovered research first internship ever meantime decided still wanted opportunity study Business School selected doubledegree program ESSEC Business School entered 2017 spent two year ESSEC course absolutely related math specialized negotiation geopolitics broadened knowledge competency speak public example something quite difficult also economics entrepreneurship etc also great opportunity internship two 6month internship ESSEC closely related math research something could never done Engineering School spent 6 month working graph theory University Barcelona Spain academic research lab UBICS Institute Complex Systems finally endofstudy internship Criteo January July 2019 worked supervision Elvis Dohmatob senior researcher Criteo internship supported continue Criteo PhD September 2019 July 2020 worked researcher Criteo officially started PhD August 2020 yes administrative process long decide choose topic research researcher chooses topic research topic research chooses researcher Apart joke let’s say it’s difficult junior researcher choose topic always idea broad area would enjoy working like “I want work Computer Vision” it’s difficult come specific subject many different thing Computer Vision idea you’ve finished school company Elvis tremendous help applied Criteo internship discussion Nicolas recruiter spoke area liked Nicolas directed toward matching internship project supervisor Thus Elvis advised topic internship case Adversarial Robustness Deep Learning first day week internship dedicated understanding topic reading getting familiar finally developing idea specific research avenue wanted 
dig junior researcher finding research topic project perfect mix Elvis’s guidance support wish interest bit experience domain problem le finding idea project choosing idea may first project Elvis came across interesting question directly related kept later extrapolation work area question arising want answer afterwards discussion colleague also raise interesting point missed… try choose project could fruitful really cool join Criteo joined Criteo endofstudy internship January 2019 went great deal internship offer finally chose Criteo reason wanted opportunity continue PhD knew Criteo PhD program possible knew Criteo PhD program possible wanted “real research” even though academic lab fact criterion quite difficult meet discussion Nicolas Elvis realized yes Criteo AI Lab real research project theoretical others applied end research paper published conference journal even though academic lab fact criterion quite difficult meet discussion Nicolas Elvis realized yes Criteo AI Lab real research project theoretical others applied end research paper published conference journal Honestly Elvis impressed presented topic interview like “ok look cool interesting everything don’t even know I’m job” impressed presented topic interview like “ok look cool interesting everything don’t even know I’m job” Amelie’s interview Criteo’s blog find Amelie researcher Criteo AI Lab point sent application Criteo started reading company lab understand bit people working came across Amelie’s interview really helped identify young overall similar background PhD addition etc realized somehow fitted Criteo’s blog find Amelie researcher Criteo AI Lab point sent application Criteo started reading company lab understand bit people working came across Amelie’s interview really helped identify young overall similar background PhD addition etc realized somehow fitted smoothness process applied got quickly first call Nicolas met Elvis also Mike another researcher even week got answer application day later… think impression quite important really nice impression first contact Criteo Elvis working I’m working Adversarial Robustness mainly Deep Learning Computer Vision also ranking Basically Computer Vision mean I’m working image Deep Learning mean I’m working specific type algorithm Neural Networks neuron connection information flowing neuron neuron algorithm powerful especially applied image surprisingly vulnerable tiny modification data train neural network distinguish rabbit horse get good problem slightly modify rabbit image eye won’t even see difference make neural network wrongly predict it’s horse phenomenon called Adversarial Example work find way avoid neural network failure better understand phenomenon work also want work phenomenon applied ranking recommendation Researchers Criteo use whiteboard lot… math Adversarial Robustness specific part job like best liked best work diversity thing day read academic paper take note day focus theoretical aspect like writing proof “by hand” day code test illustrate idea etc like stuck one aspect Another important part job collaboration People often vision researcher lonely stuff corner that’s totally untrue It’s important u discus others every moment project need discus you’re stuck find creative solution discus bridge gap researcher focusing different area create collaboration learn great deal process present work regularly show problem solved others use solution write article explain found participate conference gathering many researcher across globe Criteo it’s easy discus 
everyone junior researcher access senior researcher direct way everyone open discussion really direct collaboration quality skill important job quality skill three thing say Perseverance perseverance perseverance research mean constantly working new stuff always need learn new thing read article obviously need know fundamental math anyway you’ll learn required topic reading article important thus afraid stuck believe project ability get done also project work it’s part job least you’ve shown useless try others won’t waist time addition organized better general many different thing time obviously like writing paper that’s also great vast majority researcher tend say “writing papers” part job annoying rest Criteo help PhD development Considering background quite difficult find PhD opportunity university two year ESSEC lost touch academic world math idea contact wanted PhD Machine Learning mentioned also difficult find specific topic alone Criteo received Elvis’s support establish PhD project find academic director PhD company also need university lab director Elaborating PhD paperwork quite complicated lucky get help colleague Imane Adrien experience fellow older PhD student day day basis PhD student I’m still learning Criteo feel addition Elvis many senior researcher help learn research also advice organize work review paper etc experienced working University feel management hierarchical environment time open easily discus everyone flexible collaborate many different people empowering help ideasquestionsprojectsetc learned ESSEC helping PhD integration Criteo Obviously didn’t learn “math hardskills” ESSEC use Python experiment math proof However researcher math also communicating work writing paper giving talk course ESSEC really valuable “communication part” example took course public speaking even though quite nervous talk front audience learned technique help create better content adapt better audience talk easily without much stress say quite enjoy engage audience know body language voice time prepare presentation give talk course tremendous help Additionally learn company work PhD student much utility focus pure research project future would like work also applied project graduated business school help understand production process life company need much effort understand business discussion product vocabulary sale need Last least think time ESSEC help see math problem different light remember example interesting discussion Elvis Deep Learning problem similar utility problem microeconomics different background always interesting fuel new idea new approach lot interaction colleague outside Criteo AI Lab mentioned daytoday work focus pure research project interaction productrelated team colleague may even though stay connected Criteo community everyday chatting Slack channel However opportunity connect colleague specific event put place Criteo like “Aujourd’hui je code” coding event highschool student met engineer outside CAIL Hackathon internal competition made fuel innovation different team develop present new idea Criteo course social event party addition met people HR Events Communication team point recruitment process conference abroad even former schoolmate work Finance department like nice discussion coffee break obviously it’s harder working home expecting see variety research topic advertising company Absolutely thought joined company everyone research team would working really directly applied project case scope Criteo much narrower actually discovered project “applied” “pure” 
time rather middle usually work different project time distribute degree applicationpureness like addition idea would many different research field Criteo bandit transfer learning computer vision optimization many topic researcher really nice tackle project like also foster collaboration idea colleague different specialty interestsTags Research Computer Vision AI Life Deep Learning |
3,520 | Structuring Name Data from Strings | II. Structuring Names from String:
Now that the importance of structuring names is understood, here’s how you would go about structuring a name from string format:
Note: The code is written in Python 3.
Also, I’ll be using the nameparser library. | https://medium.com/nerd-for-tech/structuring-name-data-from-strings-64d6ee50d3e0 | ['Yaakov Bressler'] | 2020-12-20 03:42:47.765000+00:00 | ['Python', 'Software Development', 'Data Processing', 'Data Engineering', 'Pandas'] | Title Structuring Name Data StringsContent II Structuring Names String importance structuring name understood here’s would go structuring name string format Note Code written python3 Also I’ll using nameparser libraryTags Python Software Development Data Processing Data Engineering Pandas |
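As a hedged illustration of the parsing step described above, here is a minimal sketch using the nameparser library's HumanName class; the sample names and column names are made up for illustration.

from nameparser import HumanName
import pandas as pd

name = HumanName("Dr. Juan Q. Xavier de la Vega III")
print(name.title, "|", name.first, "|", name.middle, "|", name.last, "|", name.suffix)

# The same idea applied column-wise to a pandas DataFrame of raw name strings:
df = pd.DataFrame({"raw_name": ["Dr. Juan Q. Xavier de la Vega III", "Ada Lovelace"]})
df["first_name"] = df["raw_name"].apply(lambda s: HumanName(s).first)
df["last_name"] = df["raw_name"].apply(lambda s: HumanName(s).last)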
3,521 | Introduction to Data Science | After reading this article, you will be able to
Explain the steps in data science
Apply these steps to predict the EPL winner
Explain the importance of data quality
Define data collection methods
Data Science Life Cycle
Step 1: Define Problem Statement
Creating a well-defined problem statement is a critical first step in data science. It is a brief description of the problem that you are going to solve.
But why do we need a well-defined problem statement?
A problem well defined is a problem half-solved. — Charles Kettering
Also, all the effort and work you do after defining the problem statement goes toward solving it. The problem statement is shared by your client. Your client can be your boss or a colleague, or the problem can come from your own personal project. They would tell you the problems they are facing. Some examples are shown below.
I want to increase the revenues
I want to predict the loan default for my credit department
I want to recommend the job to my clients
Most of the time, the initial problem shared with you is vague and ambiguous. For example, the problem statement “I want to increase the revenue” doesn’t tell you by how much to increase the revenue (such as 20% or 30%), for which products to increase it, or over what time frame. You have to make the problem statement clear, goal-oriented and measurable. This can be achieved by asking the right set of questions.
“Getting the right question is the key to getting the right answer.” – Jeff Bezos
How can you ask better or right questions to create a well-defined problem statement? You should ask open-ended rather than closed-ended questions. The open-ended questions help to uncover unknown unknowns. The unknown unknowns are the things which you don’t know you don’t know.
Source: USJournal
We will work on a problem statement “Which club will win the EPL?” | https://towardsdatascience.com/intro-to-data-science-531079c38b22 | ['Ishan Shah'] | 2020-01-13 09:39:41.921000+00:00 | ['Data Analysis', 'Epl', 'Premier League', 'Data Science', 'Data Visualization'] | Title Introduction Data ScienceContent reading article able Explain step data science Apply step predict EPL winner Explain importance data quality Define data collection method Data Science Life Cycle Step 1 Define Problem Statement Creating welldefined problem statement first critical step data science brief description problem going solve need welldefined problem statement problem well defined problem halfsolved — Charles Kettering Also effort work defining problem statement solve problem statement shared client client bos colleague personal project would tell problem facing example shown want increase revenue want predict loan default credit department want recommend job client time initial set problem shared vague ambiguous example problem statement “I want increase revenue” doesn’t tell much increase revenue 20 30 product increase revenue time frame increase revenue make problem statement clear goaloriented measurable achieved asking right set question “Getting right question key getting right answer” – Jeff Bezos ask better right question create welldefined problem statement ask openended rather closedended question openended question help uncover unknown unknown unknown unknown thing don’t know don’t know Source USJournal work problem statement “Which club win EPL”Tags Data Analysis Epl Premier League Data Science Data Visualization |
3,522 | ISWYDS exploring object detection using Darknet and YOLOv4 @Design Museum Gent | After repeating these steps for all our images we landed on 3000+ images featuring 37 classes (some images containing over 15 classes).
Picking your guns.
When it comes to object detection there are many options to pick from, but for our case we will be using Darknet, an open-source neural network framework written in C and CUDA, to train our algorithm of choice: YOLOv4. You Only Look Once, or YOLO, is a state-of-the-art, real-time object detection system that makes R-CNN look stale: it is extremely fast, more than 1000x faster than R-CNN and 100x faster than Fast R-CNN. Another good thing about YOLO is that it’s public domain, and based on the license we can do whatever we want with it...🧐
YOLO LICENSE
Version 2, July 29 2016
THIS SOFTWARE LICENSE IS PROVIDED "ALL CAPS" SO THAT YOU KNOW IT IS SUPER SERIOUS AND YOU DON'T MESS AROUND WITH COPYRIGHT LAW BECAUSE YOU WILL GET IN TROUBLE HERE ARE SOME OTHER BUZZWORDS COMMONLY IN THESE THINGS WARRANTIES LIABILITY CONTRACT TORT LIABLE CLAIMS RESTRICTION MERCHANTABILITY. NOW HERE'S THE REAL LICENSE:
0. Darknet is public domain.
1. Do whatever you want with it.
2. Stop emailing me about it!
That being said, after configuring our system and installing all the needed dependencies, we can take her for a test ride using some of the pre-trained models that come out of the box, and see what she has to offer.
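For readers who want to reproduce such a quick test from Python rather than through the Darknet binary itself, one common sketch uses OpenCV's DNN module with the released YOLOv4 config and weights; the file names below are the standard release names, and the image path and thresholds are illustrative.

import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

image = cv2.imread("museum_vase.jpg")  # any test image
class_ids, confidences, boxes = model.detect(image, confThreshold=0.25, nmsThreshold=0.4)
print(class_ids, confidences, boxes)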
As we can tell from the results [pictures below], YOLOv4 comes with some pre-trained classes such as a chair, vase, and of course the very -uncanny- human. Let’s not make use of that last one, shall we? Although it missed out on some of the more obscure looking vases, we will be using these pre-trained weights as building blocks to create our very own. The goal here is to output new classes based on the object-number of the objects depicted. | https://oliviervandhuynslager-75562.medium.com/i-see-what-you-dont-see-exploring-object-detection-using-darknet-and-yolov4-330ada17767f | ["Olivier Van D'Huynslager"] | 2020-11-12 12:26:04.417000+00:00 | ['Technology', 'Design', 'Object Detection', 'AI', 'Museums'] | Title ISWYDS exploring object detection using Darknet YOLOv4 Design Museum GentContent repeating step image landed 3000 image featuring 37 class image containing 15 class Picking gun come object detection many option pick case using Darknet opensource neural network framework written C CUDA train algorithm choice YOLOv4 Look YOLO stateoftheart realtime object detection system making RCNN look stale extremely fast 1000x faster RCNN 100x faster Fast RCNN Another good thing YOLO it’s public domain based license whatever want it🧐 YOLO LICENSE Version 2 July 29 2016 SOFTWARE LICENSE PROVIDED CAPS KNOW SUPER SERIOUS DONT MESS AROUND COPYRIGHT LAW GET TROUBLE BUZZWORDS COMMONLY THINGS WARRANTIES LIABILITY CONTRACT TORT LIABLE CLAIMS RESTRICTION MERCHANTABILITY HERES REAL LICENSE 0 Darknet public domain 1 whatever want 2 Stop emailing said configuring system installing needed dependency take test ride — using pretrained algorithm come box see offer tell result picture YOLOv4 come pretrained class chair vase course uncanny human Let’s make use last one shall Although missed obscure looking vas using pretrained weight building block create goal output new class based objectnumber object depictedTags Technology Design Object Detection AI Museums |
3,523 | Label Classification of WCE Images With High Accuracy Using a Small Amount of Labels@ICCVW2019 | Label Classification of WCE Images With High Accuracy Using a Small Amount of Labels@ICCVW2019
Effective proposals for collecting high-cost data sets
In this story, “Using the triplet loss for domain adaptation in WCE,” by the University of Barcelona, is presented. It was published as a technical paper at the IEEE ICCV Workshops.
Wireless capsule endoscopy (WCE) is a minimally invasive procedure that allows visualization of the entire gastrointestinal tract based on a vitamin-sized camera swallowed by the patient. WCE hardware devices regularly undergo significant improvements in image quality, including image resolution, illumination, and field of view area.
Since releasing a new dataset every time the wireless capsule endoscopy (WCE) hardware changes is expensive, the goal is to improve the generalization of a model across datasets from different versions of the WCE capsule. Because different devices change the imaging settings, the data distributions differ between datasets, so images that are not exactly similar to the target image (Fig. 5(b)) are proposed as similar images, as shown in Fig. 5(a).
In this paper, the authors used deep metric learning and showed that the triplet loss function [Hoffer et al., 2015] may be suitable for dealing with the problem of data distribution shifts over different domains (Fig. 5(c)). The experimental results show that with only a few labelled images taken with a modern WCE device, a model trained on a dataset created with an older device can adapt and operate in the environment of the new WCE device with minimal labelling effort. The authors specifically studied the effect of using different numbers of images and procedures, and concluded that diversity matters more than the sheer amount of data.
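For intuition, here is a minimal sketch of the triplet loss itself, using squared Euclidean distances and a fixed margin; the paper's exact distance, margin, and triplet mining strategy may differ.

import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull the anchor embedding toward the positive example and push it away
    # from the negative example by at least `margin`.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)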
Let’s see how they achieved that. I will explain only the essence of [Laiz et al., 2019], so if you are interested in reading my blog, please click on [Laiz et al., 2019]. | https://medium.com/swlh/label-classification-of-wce-images-with-high-accuracy-using-a-small-amount-of-labels-iccvw2019-ddfb18bcc24a | ['Makoto Takamatsu'] | 2020-12-22 22:04:25.302000+00:00 | ['Deep Learning', 'Biomedical', 'Machine Learning', 'Artificial Intelligence', 'Computer Vision'] | Title Label Classification WCE Images High Accuracy Using Small Amount LabelsICCVW2019Content Label Classification WCE Images High Accuracy Using Small Amount LabelsICCVW2019 Effective proposal collecting highcost data set story Using triplet loss domain adaptation WCE University Barcelona presented published technical paper IEEE ICCV Workshop Wireless capsule endoscopy WCE minimally invasive procedure allows visualization entire gastrointestinal tract based vitaminsized camera swallowed patient WCE hardware device regularly undergo significant improvement image quality including image resolution illumination field view area Since releasing new dataset every time wireless capsule endoscopy WCE hardware performance changed expensive improving generalization model datasets different version WCE capsule different device change image setting distribution dataset different image exactly similar target image Fig 5b proposed similar image shown Fig 5a paper using deep metric learning based triplet loss function author showed triplet loss function Hoffer et al 2015 may suitable dealing problem data distribution shift different domain Fig 5 c experimental result show labelled image taken modern WCE device model trained dataset created older device easily adapt operate environment obtained new WCE device minimal labelling effort author specifically studied effect using different amount image procedure concluded diversity important amount data set Let’s see achieved explain essence Laiz et al 2019 interested reading blog please click Laiz et al 2019Tags Deep Learning Biomedical Machine Learning Artificial Intelligence Computer Vision |
3,524 | A juypter notebook extension for graphical publication figure layout | Communication is key to science and in many fields, communication means presenting data in a visual format. In some fields, such as neuroscience, it’s not uncommon to spend years editing the figures to go into a paper. This is in part due to the complexity of the data, but also in part due to the difficulty of quickly making plots to the standard of publication in the field using tools such as matplotlib. Subplots of different sizes, intricate inset plots and complex color schemes often drive scientists toward using graphical-based tools such as photoshop or inkscape.
This post describes the development of a pair of tools which may extend the figure complexity easily achievable with python and matplotlib. The main idea is to graphically define subplots within a figure. This is done leveraging the fact that jupyter notebooks run in a browser, and an extension to the notebook can inject HTML/javascript drawing widgets into the notebook. This lets the user define the subplot layout using a mouse rather than the more cumbersome matplotlib numerical way of defining axes. Then, once the rough plot is done, various components can be resized algorithmically to fit within the allotted canvas space.
Part 1: the drawing widget
Setting up the extension skeleton
As mentioned, the widget is built on top of the jupyter-contrib-nbextensions package, which provides a nice infrastructure for creating compartmentalized extensions which can independently be enabled/disabled. Making your own extension is a bit of cobbling together functions from existing extensions. This link is a good starting point.
The nbextensions package keeps each extension in its own folder in a known directory. Once you have installed the nbextensions package, this code snippet will help you find the directory
from jupyter_core.paths import jupyter_data_dir
import os
nbext_path = os.path.join(jupyter_data_dir(), 'nbextensions')
nbext_path is where the code for your extension should ultimately end up. However, this location is not the most convenient location to develop the code, and more importantly, we’ll need some way of “installing” code here automatically anyway if we want to distribute our extension to others without having to have it included in the main nbextensions repository. (There are all sorts of reasons to do this, including “beta testing” new extensions and that as of this writing the last commit to the `master` branch of the nbextensions repository was nearly 1 year ago).
A better approach than developing directly in nbext_path is to make a symbolic link to a more accessible coding location. Including this python script in your code directory will serve as an install script. Executing python install.py will make an appropriately named symlink from the current directory to nbext_path .
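The linked script is not reproduced here, but a minimal sketch of such an install script could look like the following; the names and printout are illustrative, not the author's exact code.

# install.py
import os
from jupyter_core.paths import jupyter_data_dir

src = os.path.dirname(os.path.abspath(__file__))              # this extension's code directory
nbext_path = os.path.join(jupyter_data_dir(), 'nbextensions')
dest = os.path.join(nbext_path, os.path.basename(src))

os.makedirs(nbext_path, exist_ok=True)
if not os.path.exists(dest):
    os.symlink(src, dest)                                      # link the dev directory into place
    print("Linked", dest, "->", src)
else:
    print(dest, "already exists")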
Now distribute away your extensions!
Creating the extension
User flow
Let’s briefly discuss the user flow of the extension before getting into implementation
Begin with an empty notebook cell and press the icon on the far right which looks like two desktop windows.
You can use your mouse to create an initial subplot:
When you’re satisfied with your layout, press the “Generate python cell” button to create a cell with equivalent python/matplotlib code.
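For illustration, the generated cell might look something like this; the axis rectangles are hypothetical values in figure-fraction coordinates and depend entirely on the layout you draw.

import matplotlib.pyplot as plt

fig = plt.figure(figsize=(7, 5))
ax_a = fig.add_axes([0.08, 0.58, 0.40, 0.35])   # [left, bottom, width, height]
ax_b = fig.add_axes([0.56, 0.58, 0.40, 0.35])
ax_c = fig.add_axes([0.08, 0.10, 0.88, 0.38])
fig.text(0.02, 0.95, 'A')
fig.text(0.50, 0.95, 'B')
fig.text(0.02, 0.50, 'C')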
The main challenges are injecting the HTML canvas when the toolbar button is pressed, and then automatically creating the python cell when the layout is ready. Once those are done, the rest of the implementation is just like every other javascript project.
Implementation
The main.js file is where most of the coding will happen. Below is the outline of the empty extension
define([
    'base/js/namespace',
    'base/js/events'
], function(Jupyter, events) {

    // add a button to the main toolbar
    var addButton = function() {
        Jupyter.toolbar.add_buttons_group([
            Jupyter.keyboard_manager.actions.register({
                'help': 'Add figure layout generator',
                'icon': 'fa-window-restore',
                'handler': inject_figure_widget
            }, 'add-default-cell', 'Default cell')
        ])
    }

    // This function is run when the notebook is first loaded
    function load_ipython_extension() {
        addButton();
    }

    return {
        load_ipython_extension: load_ipython_extension
    };
});
This skeleton code runs a 'startup' function when the notebook is loaded. That 'startup' function creates the toolbar button and also registers a callback for the toolbar button press. That callback, inject_figure_widget , is the 'main' function of the extension which will inject the HTML canvas into the notebook. To make main.js self-contained, you can define helper functions inside of the main function(Jupyter, events) .
Figuring out the JS/HTML to inject a canvas into the output field is a bit of trial and error using the console and the element inspector. The rough outline is:
// execute the current cell to generate the output field; otherwise it won't be created
Jupyter.notebook.select();
Jupyter.notebook.execute_cell();

// get reference to the output area of the cell
var output_subarea = $("#notebook-container")
    .children('.selected')
    .children('.output_wrapper')
    .children('.output');

// add to DOM
let div = document.createElement("div");
output_subarea[0].appendChild(div);
Now the HTML elements of the widget can be added to div just like in any javascript-powered web page. Some special handling is needed for keyboard input elements, however. You’ll find if you try to type numbers into input fields that it converts your cell to markdown and eliminates the output field. This is because of Jupyter notebook’s default keybindings. The fix is to disable Jupyter’s keyboard manager when one of your text fields becomes in focus, and re-enable when it exits focus:
function input_field_focus() {
    Jupyter.keyboard_manager.disable();
}

function input_field_blur() {
    Jupyter.keyboard_manager.enable();
}

$("#subplot_letter_input").focus(input_field_focus).blur(input_field_blur);
Other functionality
The implemented widget has a number of other functions for which I won’t describe the implementation as it is all fairly standard javascript:
Splitting plots into gridded subplots
Resizing subplots with the mouse
Aligning horizontal/vertical edges of selected plot to other plots
Moving subplots by mouse
Moving subplots by keyboard arrows
Copy/paste, undo, delete
Creating labels
Code generation
Saving and reloading from within the notebook
See the README of the widget for illustration of functionality.
Part 2: programmatic resizing
The mouse-based layout tool is (hopefully) an easier way to define a complicated subplot layout. One difficulty in laying out a figure with multiple subplots in matplotlib is that sometimes text can overlap between subplots. Matplotlib is beginning to handle this issue with the tight layout feature, but that feature does not appear to be compatible with the generic way of defining subplot locations used here; it is meant to be used with the grid-based subplot layout definitions.
What we’d like as a user is to
1. Create a rough layout graphically
2. Fill in all the data and the labels
3. Call a routine to automatically make everything "fit" in the available space.
Step 2 must happen before everything can be "made to fit". This is because it's hard to account for the size of text-based elements beforehand. You might add or omit text labels, which occupies or frees space. Depending on your data range, the tick labels might have a different number of characters, occupying different amounts of canvas area.
A very simple algorithm to make all the plot elements fit on the canvas is
1. Calculate a bounding box around all subplot elements.
2. For each pair of plots, determine if the plots overlap based on the bounding boxes.
3. If there's overlap, calculate a scale factor to reduce the width and height of the leftmost/topmost plot. Assume that the top left corner of each subplot is anchored. When this scale factor is applied, there should be no overlap for this pair of plots. (Sidenote: if two plots are overlapping assuming zero area allocated for text, they will not be resized; the assumption then is that the overlap is intentional, such as for inset plots.)
4. Apply the smallest pairwise scale factor globally.
This is by no means the best data visualization algorithm, but it should always produce an overlap-free plot. The algorithm is implemented in a simple python module.
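A rough sketch of the pairwise check is shown below, assuming each bounding box is stored as [xmin, ymin, xmax, ymax] in figure fractions; only the horizontal case is written out (the vertical one is symmetric), and the helper names are illustrative rather than the module's actual API.

def boxes_overlap(b1, b2):
    return not (b1[2] <= b2[0] or b2[2] <= b1[0] or
                b1[3] <= b2[1] or b2[3] <= b1[1])

def horizontal_shrink_factor(left_box, other_box):
    # Scale factor (<= 1) for the leftmost box, anchored at its left edge,
    # that makes it end where its neighbor begins.
    width = left_box[2] - left_box[0]
    allowed = max(other_box[0] - left_box[0], 0.0)
    return min(1.0, allowed / width)

def global_scale(bounding_boxes):
    factors = [1.0]
    for i, b1 in enumerate(bounding_boxes):
        for b2 in bounding_boxes[i + 1:]:
            if boxes_overlap(b1, b2):
                left_box, other_box = (b1, b2) if b1[0] <= b2[0] else (b2, b1)
                factors.append(horizontal_shrink_factor(left_box, other_box))
    return min(factors)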
Axis bounding box
Finding the bounding box of various elements in matplotlib takes some trial-and-error. The data structures representing plot elements are quite flexible, which can make it hard to figure out how to get the size of elements on the canvas if you're not familiar with the API (I am firmly in the "not familiar" camp). Below is a simple search which iterates through all the children of an axis and tries to get the size of different recognized elements. I could not figure out a more uniform approach than the one below.
import numpy as np
import matplotlib.axis
import matplotlib.text


def get_axis_bounds(fig, ax, scaled=False):
    children = ax.get_children()

    # initial est based on ax itself
    p0, p1 = ax.bbox.get_points()
    xmax, ymax = p1
    xmin, ymin = p0

    for child in children:
        if isinstance(child, matplotlib.axis.XAxis):
            text_obj = filter(lambda x: isinstance(x, matplotlib.text.Text), child.get_children())
            text_obj_y = [x.get_window_extent(renderer=fig.canvas.renderer).p0[1] for x in text_obj]
            ymin_label = np.min(text_obj_y)
            if ymin_label < ymin:
                ymin = ymin_label
        elif isinstance(child, matplotlib.axis.YAxis):
            text_obj = filter(lambda x: isinstance(x, matplotlib.text.Text), child.get_children())
            text_obj_x = [x.get_window_extent(renderer=fig.canvas.renderer).p0[0] for x in text_obj]
            xmin_label = np.min(text_obj_x)
            if xmin_label < xmin:
                xmin = xmin_label
        elif hasattr(child, 'get_window_extent'):
            bb = child.get_window_extent(renderer=fig.canvas.renderer)
            if xmax < bb.p1[0]:
                xmax = bb.p1[0]
            if xmin > bb.p0[0]:
                xmin = bb.p0[0]
            if ymin > bb.p0[1]:
                ymin = bb.p0[1]
            if ymax < bb.p1[1]:
                ymax = bb.p1[1]

    if scaled:
        rect_bounds = np.array([xmin, ymin, xmax, ymax])
        fig_size_x, fig_size_y = fig.get_size_inches() * fig.dpi
        rect_bounds /= np.array([fig_size_x, fig_size_y, fig_size_x, fig_size_y])
        return rect_bounds
    else:
        return np.array([xmin, ymin, xmax, ymax])
There’s a small catch: this method requires matplotlib to first render the figure canvas. Before this rendering, matplotlib may not properly inform you how much space an element will take up. So you’ll have to use matplotlib in interactive mode. Presumably you’re in a jupyter environment if you’re using the widget from part 1. If you use the %matplotlib notebook style of figure generation which is interactive, this issue shouldn’t be a problem.
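In practice this just means making sure the canvas has been drawn at least once before measuring, for example (a minimal sketch using the function defined above):

fig.canvas.draw()                               # force a render so text extents are real
bounds = get_axis_bounds(fig, ax, scaled=True)  # now safe to measure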
Getting the boundaries of the plot area is quite a bit simpler because that’s how you specify where to draw the axes. The information is stored on the bbox attribute of the axis.
fig_size_x, fig_size_y = fig.get_size_inches() * fig.dpi
plot_bounds = ax.bbox.get_points() / np.array([fig_size_x, fig_size_y])
Once the axis boundary and the plot boundary are known, the size of the border containing the text elements can be calculated on each side. The size of the border is fixed (unless the text changes), so the algorithm to calculate the rescaling factor for the plot is simply to scale it down by the fraction occupied by the border text.
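A sketch of that computation, assuming both boundaries have been flattened to [xmin, ymin, xmax, ymax] in figure fractions; the function name and exact bookkeeping are illustrative, not the module's API.

def rescale_factors(axis_bounds, plot_bounds):
    # Border occupied by tick labels, axis labels, etc. on each side of the axes.
    pad_x = (plot_bounds[0] - axis_bounds[0]) + (axis_bounds[2] - plot_bounds[2])
    pad_y = (plot_bounds[1] - axis_bounds[1]) + (axis_bounds[3] - plot_bounds[3])
    plot_w = plot_bounds[2] - plot_bounds[0]
    plot_h = plot_bounds[3] - plot_bounds[1]
    # Shrink so that axes plus text fit inside the space originally given to the axes alone.
    scale_x = plot_w / (plot_w + max(pad_x, 0.0))
    scale_y = plot_h / (plot_h + max(pad_y, 0.0))
    return scale_x, scale_y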
Resizing examples
Below are a few examples of auto-scaling plots to accommodate errant space occupied by text.
Axis extending too far horizontally
Before:
After:
Axis extending too far vertically
Before:
After:
Axes overlapping horizontally
Before:
After:
Axes overlapping vertically
Before:
After:
Conclusion
Altogether, this approach may automate some of the more tedious data visualization tasks researchers may face when publishing. Dealing with the layout issues algorithmically may lend itself to developing more sophisticated algorithms for laying out figures to be more naturally readable. | https://towardsdatascience.com/a-juypter-notebook-extension-for-graphical-publication-figure-layout-d2f207d7e63f | ['Suraj Gowda'] | 2020-09-15 18:18:43.123000+00:00 | ['Jupyter Notebook', 'Data Science', 'Matplotlib', 'Data Visualization'] | Title juypter notebook extension graphical publication figure layoutContent Communication key science many field communication mean presenting data visual format field neuroscience it’s uncommon spend year editing figure go paper part due complexity data also part due difficulty quickly making plot standard publication field using tool matplotlib Subplots different size intricate inset plot complex color scheme often drive scientist toward using graphicalbased tool photoshop inkscape post describes development pair tool may extend figure complexity easily achievable python matplotlib main idea graphically define subplots within figure done leveraging fact jupyter notebook run browser extension notebook inject HTMLjavascript drawing widget notebook let user define subplot layout using mouse rather cumbersome matplotlib numerical way defining ax rough plot done various component resized algorithmically fit within allotted canvas space Part 1 drawing widget Setting extension skeleton mentioned widget built top jupytercontribnbextensions package provides nice infrastructure creating compartmentalized extension independently enableddisabled Making extension bit cobbling together function existing extension link good starting point nbextensions package keep extension folder known directory installed nbextensions package code snippet help find directory jupytercorepaths import jupyterdatadir import o nbextpath ospathjoinjupyterdatadir nbextensions nbextpath code extension ultimately end However location convenient location develop code importantly we’ll need way “installing” code automatically anyway want distribute extension others without included main nbextensions repository sort reason including “beta testing” new extension writing last commit master branch nbextensions repository nearly 1 year ago better approach developing directly nbextpath make symbolic link accessible coding location Including python script code directory serve install script Executing python installpy make appropriately named symlink current directory nbextpath distribute away extension Creating extension User flow Let’s briefly discus user flow extension getting implementation Begin empty notebook cell press icon far right look like two desktop window use mouse create initial subplot you’re satisfied layout press “Generate python cell” button create cell equivalent pythonmatplotlib code main challenge injecting HTML canvas toolbar button pressed automatically creating python cell layout ready done rest implementation like every javascript project Implementation mainjs file coding happen outline empty extension define basejsnamespace basejsevents functionJupyter event add button main toolbar var addButton function Jupytertoolbaraddbuttonsgroup Jupyterkeyboardmanageractionsregister help Add figure layout generator icon fawindowrestore handler injectfigurewidget adddefaultcell Default cell function run notebook first loaded function loadipythonextension addButton return loadipythonextension 
loadipythonextension skeleton code run ‘startup’ function notebook loaded ‘startup’ function creates toolbar button also register callback toolbar putton press callback injectfigurewidget ‘main’ function extension inject HTML canvas notebook make mainjs selfcontained define helper function inside main functionJupter event Figuring JSHTML inject canvas output field bit trial error using console element inspector rough outline execute current cell generate output field otherwise wont created Jupyternotebookselect Jupyternotebookexecutecell get reference output area cell var outputsubarea notebookcontainer childrenselected childrenoutputwrapper childrenoutput add DOM let div documentcreateElementdiv outputsubarea0appendChilddiv HTML element widget added div like javascriptpowered web page special handling needed keyboard input element however You’ll find try type number input field convert cell markdown eliminates output field Jupyter notebook’s default keybindings fix disable Jupyter’s keyboard manager one text field becomes focus reenable exit focus function inputfieldfocus Jupyterkeyboardmanagerdisable function inputfieldblur Jupyterkeyboardmanagerenable subplotletterinputfocusinputfieldfocusblurinputfieldblur functionality implemented widget number function won’t describe implementation fairly standard javascript Splitting plot gridded subplots Resizing subplots mouse Aligning horizontalvertical edge selected plot plot Moving subplots mouse Moving subplots keyboard arrow Copypaste undo delete Creating label Code generation Saving reloading within notebook See README widget illustration functionality Part 2 programmatic resizing mousebased layout tool hopefully easier way define complicated subplot layout One difficulty laying figure multiple subplots matplotlib sometimes text overlap subplots Matplotlib beginning handle issue tight layout feature feature appear compatible generic way defining subplot location used meant used gridbased subplot layout definition we’d like user Create rough layout graphically Fill data label Call routine automatically make everything “fit” available space Step 2 must happen everything “made fit” it’s hard account size textbase element beforehand might add omit text label occupies free space Depending data range tick label might different number character occupying different amount canvas area simple algorithm make plot element fit canvas Calculate bounding box around subplot element pair plot determine plot overlap based bounding box there’s overlap calculate scale factor reduce width height leftmosttopmost plot Assume top left corner subplot anchored scale factor applied overlap pair plot Sidenote two plot overlapping assuming zero area allocated text resized assumption overlap intentional inset plot Apply smallest pairwise scale factor globally mean best data visualization algorithm always produce overlapfree plot algorithm implemented simple python module Axis bounding box Finding bounding box various element maplotlib take trialanderror data structure representing plot element quite flexible make hard figure get size element canvas you’re familiar API firmly “not familiar” camp simple search iterates child axis try get size different recognized element could figure uniform approach one def getaxisboundsfig ax scaledFalse child axgetchildren initial est based ax p0 p1 axbboxgetpoints xmax ymax p1 xmin ymin p0 child child isinstancechild matplotlibaxisXAxis textobj filterlambda x isinstancex matplotlibtextText childgetchildren textobjy 
xgetwindowextentrendererfigcanvasrendererp01 x textobj yminlabel npmintextobjy yminlabel ymin ymin yminlabel elif isinstancechild matplotlibaxisYAxis textobj filterlambda x isinstancex matplotlibtextText childgetchildren textobjx xgetwindowextentrendererfigcanvasrendererp00 x textobj xminlabel npmintextobjx xminlabel xmin xmin xminlabel elif hasattrchild getwindowextent bb childgetwindowextentrendererfigcanvasrenderer xmax bbp10 xmax bbp10 xmin bbp00 xmin bbp00 ymin bbp01 ymin bbp01 ymax bbp11 ymax bbp11 scaled rectbounds nparrayxmin ymin xmax ymax figsizex figsizey figgetsizeinches figdpi rectbounds nparrayfigsizex figsizey figsizex figsizey return rectbounds else return nparrayxmin ymin xmax ymax There’s small catch method requires matplotlib first render figure canvas rendering matplotlib may properly inform much space element take you’ll use matplotlib interactive mode Presumably you’re jupyter environment you’re using widget part 1 use matplotlib notebook style figure generation interactive issue shouldn’t problem Getting boundary plot area quite bit simpler that’s specify draw ax information stored bbox attribute axis figsizex figsizey figgetsizeinches figdpi plotbounds axbboxgetpoints nparrayfigsizex figsizey axis boundary plot boundary known size border containing text element calculated side size border fixed unless text change algorithm calculate rescaling factor plot simply scale fraction occupied border text Resizing example example autoscaling plot accomodate errant space occupied text Axis extending far horizontally Axis extending far vertically Axes overlapping horizontally Axes overlapping vertically Conclusion Altogether approach may automate tedious data visualization task researcher may face publishing Dealing layout issue algorithmically may lend developing sophisticated algorithm laying figure naturally readableTags Jupyter Notebook Data Science Matplotlib Data Visualization |
3,525 | What makes COVID-19 so scary for some and not others? | Different reactions to the risk of COVID-19 as represented by people toasting at a bar and someone staying home and wearing a mask. [Compilation by Nancy R. Gough, BioSerendipity, LLC]
What makes COVID-19 so scary for some and not others?
Fear of the unknown and comfort with risk
For some, the virus is just a part of the risk of life and nothing to be terribly concerned about; for others, the virus is terrifying and any risk of exposure is too much.
Several aspects of the pandemic foster fear, especially in people who are risk averse and uncomfortable with the unknown:
A fraction of infected people will become desperately ill or even die, but we have no way of knowing who those people are in advance.
No universally effective treatment is available for those who will become severely ill with COVID-19, so we don’t know if treatment will work.
Every death is reported as if no deaths from this virus are acceptable or expected, setting the perception of risk as very high.
For me, the fear comes not so much for myself, but for my family members in the high-risk groups. From talking with friends and family, this is not uncommon. I have heard many people say, “I’m not worried about me, but I am worried about my ____.” Fill in the blank with grandmother, brother with cancer, immune-compromised friend who had a transplant, friend with a heart condition…
Young healthy people are generally less in touch with their own mortality and are more likely to feel invincible. No doubt this is partly why the cases are increasing among this group. Unless they are extremely cautious by nature, they are high risk themselves, or they live with a person that is high risk for becoming severely ill with COVID-19, many young adults feel that the risk of getting sick with COVID-19 is not high enough to avoid going out or socializing without social distancing. As the parent of two people in their early 20s, I fear for them. Each of them certainly does more in-person socializing than I do. Each of them eats out, either carry out or in restaurants, more than I do.
This fear for my children is not unique of course. Many of the parents of children in K-12 grade and of young adults attending college are afraid for their children. Indeed, the reactions I have seen from teachers and parents about whether or not it is safe for children to go back to in-person school shows how divided the reaction to the SARS-CoV-2 coronavirus is in the US.
When I am feeling especially stressed, I remind myself of these words from Michael Osterholm, PhD, MPH (Director of the Center for Infectious Disease Research and Policy, University of Minnesota):
This virus doesn’t magically jump between two people — it’s time and dose.
So, as long as we keep our distance, wear our masks, and limit contacts with those outside our “germ circle” (as I have taken to calling the people I live with and the few I see socially in person), we should be reasonably safe.
Germ circles with the number of people in each household. Overlapping circles indicate that at least one person has been inside the home of a person in the overlapping circle. Orange circles represent households with at-risk individuals. Yellow circles represent households with people who work outside of the home. Circle size represents relative risk of COVID-19 positivity based on the exposure to other people outside of the household. [Credit: Nancy R. Gough, BioSerendipity, LLC]
I think of it this way. My risk depends on 3 factors:
How likely is the person outside my germ circle to be infectious with the virus that causes COVID-19?
How exposed will I be?
How well will my immune system fight the virus?
The only one of those I can control and know for sure is the second one. So, I control how close I get, how long I am close, and whether I wear a mask or insist the other person also wears a mask.
The same three elements define my risk of passing the virus to someone else:
How likely is it that I am asymptomatic or presymptomatic for COVID-19?
How much will I expose the other person if I am infectious?
How well can the other person fight the virus? (Or how at-risk is the other person?)
When deciding who to see and whether to risk an indoor encounter with or without a mask, those are the three factors that I consider.
When considering invitations to social events, I consider whether I could provide contact tracing information for each person at the event if I should later test positive for COVID-19. Can others provide my information if one of them should test positive? How many other germ circles will intersect with mine?
Is the event outside or inside? Will I be able to sit with people inside my germ circle, or will other attendees be seated with us? Can I wear my mask comfortably during the event? Will I be tempted not to wear my mask when I should be wearing it? How likely are the other attendees to be practicing social distancing and wearing masks?
Also of interest | https://ngough-bioserendipity.medium.com/what-makes-covid-19-so-scary-for-some-and-not-others-4bdbe9a18b21 | ['Nancy R. Gough'] | 2020-08-13 15:45:08.735000+00:00 | ['Health', 'Lifestyle', 'Personality', 'Society', 'Covid 19'] | Title make COVID19 scary othersContent Different reaction risk COVID19 represented people toasting bar someone staying home wearing mask Compilation Nancy R Gough BioSerendipity LLC make COVID19 scary others Fear unknown comfort risk virus part risk life nothing terribly concerned others virus terrifying risk exposure much Several aspect pandemic foster fear especially people risk averse uncomfortable unknown fraction infected people become desperately ill even die way knowing people advance universally effective treatment available become severely ill COVID19 don’t know treatment work Every death reported death virus acceptable expected setting perception risk high fear come much family member highrisk group talking friend family uncommon heard many people say “I’m worried worried ” Fill blank grandmother brother cancer immunecompromised friend transplant friend heart condition… Young healthy people generally le touch mortality likely feel invincible doubt partly case increasing among group Unless extremely cautious nature high risk live person high risk becoming severely ill COVID19 many young adult feel risk getting sick COVID19 high enough avoid going socializing without social distancing parent two people early 20 fear certainly inperson socializing eats either carry restaurant fear child unique course Many parent child K12 grade young adult attending college afraid child Indeed reaction seen teacher parent whether safe child go back inperson school show divided reaction SARSCoV2 coronavirus US feeling especially stressed remind word Michael Osterholm PhD MPH Director Center Infectious Disease Research Policy University Minnesota virus doesn’t magically jump two people — it’s time dose long keep distance wear mask limit contact outside “germ circle” taken calling people live see socially person reasonably safe Germ circle number people household Overlapping circle indicates least one person inside home person overlapping circle Orange circle represent household atrisk individual Yellow circle represent household people work outside home Circle size represent relative risk COVID19 positivity based exposure people outside household Credit Nancy R Gough BioSerendipity LLC think way risk depends 3 factor likely person outside germ circle infectious virus cause COVID19 exposed well immune system fight virus one control know sure second one control close get long close whether wear mask insist person also wear mask three element define risk passing virus someone else likely asymptomatic presymptomatic COVID19 much expose person infectious well person fight virus atrisk person deciding see whether risk indoor encounter without mask three factor consider considering invitation social event consider whether could provide contact tracing information person event later test positive COVID19 others provide information one test positive many germ circle intersect mine event outside inside able sit near people inside germ circle attendee seated u wear mask comfortably event tempted wear mask wearing likely attendee practicing social distancing wearing mask Also interestTags Health Lifestyle Personality Society Covid 19 |
Hypothesis Vetting: The Most Important Skill Every Successful Data Scientist Needs

The most successful Data Science starts with good hypothesis building. A well-thought-out hypothesis sets the direction and plan for a Data Science project. Accordingly, the hypothesis is the most important item for evaluating whether a Data Science project will be successful.
This skill is unfortunately often neglected or taught in a hand-wavy fashion, in favor of hands-on testing for feature significance and applying models to data in order to see if they are able to predict anything. While there is certainly always a need for feature engineering and model selection, doing so without a true understanding of the problem can be dangerous and inefficient.
Through experience, I’ve derived a systematic way of approaching Data Science problems which guarantees both a relevant hypothesis and a strong signal as to whether a Data Science approach will be successful. In this article, I outline the steps used to do this: defining a data context, examining available data, and forming the hypothesis. I also take a look at a few completed competitions from Kaggle and run them through the process in order to provide real examples of this method in action.
Crafting the Perfect Hypothesis
1. Mapping the Data Context
Much like forming a Free Body Diagram in a physics problem or utilizing Object-Oriented Design in Computer Science, describing all valid entities in a given data context helps to map out the expected interplay between them. The goal of this step is to completely designate all the data that could possibly be collected about anything in the context, i.e., a description of the perfect dataset. If all of this data were available, then the interactions between the components would be completely defined and heuristic formulas could be used to define every type of cause and effect.
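To make this concrete, below is a minimal sketch of what writing down a data context might look like in code. The entities and attributes are hypothetical (a generic online store), purely to illustrate the idea of enumerating the perfect dataset before looking at what you actually have.

# Hypothetical data context for an online store: each entity is mapped to the
# attributes a "perfect dataset" would contain about it.
data_context = {
    "customer": ["customer_id", "age", "region", "signup_date"],
    "product": ["product_id", "category", "price", "stock_level"],
    "order": ["order_id", "customer_id", "product_id", "quantity", "timestamp"],
}

# With the full context written down, the interplay between entities is explicit:
# an order, for example, links a customer to a product at a point in time.
for entity, attributes in data_context.items():
    print(entity, "->", attributes)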
2. Examining Overlap of Available Data to Data Context
Next, an observation is conducted to determine how much of the available data fits into the perfect dataset defined in the previous step. The more overlap found, the better the solution space for defining interactions between entities. While not a numeric metric, this observation gives a strong intuitive signal as to whether the available data is appropriate and relevant enough. If there is a complete overlap, then a heuristic is probably a better solution than fitting a data model. If there is very little overlap, then even the best modeling techniques will be unable to consistently provide accurate predictions. Note that the strongest predictive signals will always be those that are directly related, thus indirect feature relations are given less emphasis in favor of the real thing. The intuition behind focusing mainly on strong signals is that collecting better data with strong signals will guarantee better performance and reliability. Conversely, predictions using weak signals are prime candidates to become obsolete once better data is available.
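As a rough illustration, the overlap check can be as simple as comparing the attributes named in the context map against the columns that actually exist in the collected data. The column names below are hypothetical.

# Hypothetical perfect dataset (the data context) versus the columns actually collected.
context_columns = {"customer_id", "age", "region", "product_id", "price",
                   "order_id", "quantity", "timestamp"}
available_columns = {"customer_id", "age", "product_id", "price", "order_id", "timestamp"}

overlap = context_columns & available_columns
missing = context_columns - available_columns

# Not a formal metric, just a quick signal of how much of the context is covered.
print(f"Coverage: {len(overlap)}/{len(context_columns)} attributes")
print("Missing attributes:", sorted(missing))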
3. Forming the Hypothesis
Having completed the previous two steps, forming a hypothesis itself becomes trivial. Hypotheses are generally formed by combining a set of available features to predict the outcome of another feature that may be hard to collect in the future or whose value may be needed ahead of its outcome.
Now that the overlap between the data context and available data has been clearly defined, and assuming a reasonable amount of the available data is relevant/class-balanced/etc, a hypothesis can simply be stated as: “The available data can produce significant results predicting ____ in the given data context”. Note that the prediction label is left open-ended since it can potentially be filled in with many different things! This is because a hypothesis can be formed around almost any of the given features as long as it fits within the data context and there is enough overlap with the available data. Picking the specific feature to predict will depend upon what use cases are most beneficial.
Example Projects
Below, I’ve carefully selected a couple different Kaggle projects to analyze using the hypothesis forming technique. These examples illustrate some characteristic setups for hypothesis forming as well as each have a given hypothesis which can be evaluated in its respective data context.
Project 1: Predicting the Winner of a PubG Match (https://www.kaggle.com/c/pubg-finish-placement-prediction)
This project was selected because the data context is simplified: predictions are made about a virtual world. Predictions about simulations are expected to behave much more consistently and operate along very clearly defined paths, as opposed to real-life systems, which may have factors that are not easily observable. In other words, the data context can very confidently be exhaustively defined. In this video game, players control a single unit which can perform a limited set of actions. The data context has been mapped out below:
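The author's original diagram is not reproduced here. As a rough stand-in, a context map for a single match might look something like the sketch below; the entities and attributes are my own illustrative guesses, not the author's mapping.

# Hypothetical sketch of a PubG match data context: players perform a limited
# set of actions, and in principle every action could be logged.
pubg_context = {
    "player": ["player_id", "team_id", "health", "position"],
    "actions": ["shots_fired", "kills", "damage_dealt", "items_picked_up",
                "distance_walked", "distance_ridden"],
    "match": ["match_id", "match_duration", "map_name", "final_placement"],
}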
20 Inspirational Front-End Challenges You Can Start Coding Today
Challenge yourself and bring your front-end skills to the next level
As a developer, the more projects and experience you have, the better you become. Coding is a muscle that, like any other, requires constant exercise.
Why not spend a couple of evenings on a side project and put in the extra effort to become exceptionally better at coding?
Without further ado, here’s the list of coding ideas for boosting your front-end development skills. Use this article as a source of inspiration for your next project.
Here's the full list of challenges you could start coding today.
Complete Introduction to PySpark - Part 3
Performing SQL operations on Datasets using PySpark
Photo by Franki Chamaki on Unsplash
What is SQL (Structured Query Language)?
SQL is a language that is used to perform different operations on data like storing, manipulating, and retrieving. It works on relational databases in which data is stored in the form of rows and columns.
SQL commands can be classified into three types according to their properties:
1. DDL (Data Definition Language)
As the name suggests, DDL commands are used to define the structure of the data. The commands included in DDL are CREATE, ALTER, TRUNCATE, DROP, etc.
2. DML(Data Manipulation Language)
Data Manipulation commands are used to alter and update the data according to user requirements. Some of the commands defined under DML are INSERT, UPDATE, DELETE, etc.
3. DCL (Data Control Language)
These commands are used for controlling access to the database. Some of the commands defined under DCL are GRANT, REVOKE, etc.
Using PySpark for SQL Operations
In order to perform SQL operations using PySpark, we need to have PySpark installed on our local machine. If you have already installed it, we can get started; otherwise, go through the links below to install PySpark and perform some basic operations on DataFrames using PySpark.
Loading Required Libraries
After we have installed PySpark on our machine and configured it, we will open a Jupyter notebook to start the SQL operations. We will start by importing the required libraries and creating a PySpark session.
import findspark
findspark.init()

import pyspark  # only run after findspark.init()
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext

spark = SparkSession.builder.getOrCreate()
Loading the Dataset
To perform SQL operations we will need a dataset. In this article, we will use the Boston housing dataset, which can easily be downloaded from Kaggle, and we will load it using PySpark.
df = spark.read.csv('Boston.csv', inferSchema=True, header=True)
df.show(5)
Dataset(Source: By Author)
Now let us start the SQL operations on our dataset. We will begin by creating a table and an SQLContext object, which will be used to run queries on that table.
1. Creating Table
To create a table, we use the registerTempTable function of PySpark. We also create an SQLContext object, which is used to run queries against that table.
df.registerTempTable('BostonTable')
sqlContext = SQLContext(spark)
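A small note in passing: registerTempTable and SQLContext still work, but on newer Spark releases (2.0 and later) the same setup is usually written with createOrReplaceTempView and spark.sql, so the extra SQLContext object is not needed. A sketch of the equivalent calls:

# Equivalent setup on newer Spark versions (no SQLContext required).
df.createOrReplaceTempView('BostonTable')
spark.sql('select * from BostonTable').show(3)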
2. Select Query
The SELECT query is used for selecting data according to user requirements. We can select the whole table using "*", or we can pass the names of the columns we want to see, separated by ",".
#Select Whole Table(only three records because we used show(3))
sqlContext.sql('select * from BostonTable').show(3)
Select Table(Source: By Author)
#Select column using column names
sqlContext.sql('select _c0, CRIM, ZN, INDUS, CHAS from BostonTable').show(3)
Select Columns(Source: By Author)
3. Aggregate Functions
There are some predefined aggregate functions in SQL which can be used to compute summary values over the data. These functions include:
a. min()
b. max()
c. count()
d. sum()
e. var()
etc.
The syntax for the following functions is given below.
#Using max functions
sqlContext.sql('select max(AGE) from BostonTable').show()
max function(Source: By Author)
Similarly, we can use other functions to display output according to user requirements.
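For instance, a query combining a few of the aggregate functions listed above might look like the sketch below (the exact numbers shown will depend on the copy of the dataset you downloaded):

#Using count, min and sum together
sqlContext.sql('select count(*), min(AGE), sum(TAX) from BostonTable').show()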
4. Conditional Queries
Using conditional queries, we can generate output that satisfies a certain condition passed by the user. The most commonly used conditional expression is "where".
sqlContext.sql('select CRIM, NOX, B from BostonTable where B = 396.9').show(3)
Conditional Data(Source: By Author)
We can use different supporting keywords in conditional queries, which help us be more specific about the output and allow multiple conditions to be combined in a single query. These include:
a. having
b. and
c. or
d. then
e. between(used for range)
etc.
sqlContext.sql('select CRIM, NOX, B, RAD from BostonTable where RAD > 2 and B = 396.9').show(3)
Conditional Expression(Source: By Author)
Similarly, we can use different functions using the same syntax as given above.
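As one more illustration, here is a query that combines "between" and "or" in a single condition; the column names are the same ones used above:

#Combining between and or in a single condition
sqlContext.sql('select CRIM, AGE, TAX from BostonTable where AGE between 40 and 50 or TAX > 300').show(3)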
5. Nested Query
We can have multiple queries combined within a single statement, which is generally called a nested query. It is a more complex form of query where we pass different conditions to generate the output the user wants. Below is an example of such a query.
sqlContext.sql('select * from BostonTable where AGE between 40 and 50 and TAX not in (311,307)').show(3)
Nested Query(Source: By Author)
Similarly, you can try different nested queries according to the output you want.
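If you want a query that is nested in the stricter sense, with one query placed inside another, Spark SQL also supports subqueries in the WHERE clause (this should work on Spark 2.0 and later). A small sketch:

#A subquery: rows whose TAX value appears among the high-RAD rows
sqlContext.sql('select CRIM, AGE, TAX from BostonTable where TAX in (select TAX from BostonTable where RAD > 2)').show(3)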
This article provides you with basic information about SQL queries using PySpark. Go ahead and try these, and if you face any difficulty, please let me know in the response section.
Before You Go
Thanks for reading! If you want to get in touch with me, feel free to reach me at [email protected] or via my LinkedIn profile. You can view my GitHub profile for different data science projects and package tutorials. Also, feel free to explore my profile and read the different articles I have written related to Data Science.
A Spoonful of Affection
Writing into the night
Photo by Lester Salmins on Unsplash
Some days I wake up feeling sensitive to everything; strong and weak at the same time. I’m not a fledgling, I’m a grown man with all a grown man’s faults. Having people connect with me through my words is a touching thing.
I’m not a man writing for profit. I’m a few cents a day guy writing for the joy of writing and the connection it offers me. The harmonious world of Hogg people.
Sometimes I don’t know if I’m a child, a poet, a gypsy, but I do know where it feels like home. I come here to help myself to a spoonful of affection. Like a child coming home to the smell of bread in the oven.
If life had been as certain and direct as the path writing has taken me down, I would never have known doubt in my life. It’s a scary thing being out in the world, exposed, even to friendly faces.
I can allow weaknesses that I wouldn’t permit in social life.
The thing about stories is they are so damn big, and I have so little time. I try to cut corners, as if the story will spring from my brain and all I have to do is set it down. I’m a fraud for that. Anyway, it never works.
Some of the works I’ve published here, nothing stories, looking back, only add to my shame. Knowing some ego took me over. Some of my stories seem nervous in print, hasty, experimented with, only to die right on the page. Is that a hurt ego?
I come early to my study, rushing right into the day. One day can seem like an hour. I mean, I’m glad in a total sense. I don’t know what I missed in the outside world. I’m always in danger of loving my stories too much. But here now, looking out the window after midnight, the ocean fired by moonlight, I am thinking of so many things.
It might be better to go to bed.
How to Improve Communication Frequency With Your Remote Team

Virtual Leadership Challenge
Why am I suggesting leading teams in different regions is even harder than leading teams within the same location?
Because we already have these six common leadership challenges with managing in-person teams:
Purpose: connecting the why of daily work for our teams so they are excited to learn and grow with our businesses.
Focus: prioritizing the highest reward work so our organizations and clients will be highly satisfied.
Guide: leading teams from any performance state to a higher one, so excellence is a constant goal.
Change: developing our ability to be comfortable with being uncomfortable. Then lead, execute, and thrive in changing environments.
Growth: growing team members to become experts in their craft so they can become leaders in their field, then build more leaders.
Relationships: building capacity to work with others so we can influence the adoption of new ideas up, down, and across all levels of the organization.
The added layer of complexity is in operating with teams outside of our physical surroundings to achieve purpose and focus while guiding change, fostering growth, and building relationships.
The indicator of good virtual leadership
A high number of quality messages transacted within a team can be a reliable indicator of an excellent virtual leader. A study analyzed the communication frequency between managers and teams in different locations.
There was one manager who stood out.
The team he managed was noted for their ‘excellent’ morale. The difference between him and other managers was the high quality of interactions he had with his team. Quality, in this case, translated to 32 contacts per week with his employees. An achievement that outshone the others.
Remote coaching in a high-performing team
My former coach moved to the west coast from Ontario in 2017. He continued to coach his dragon boat team (remotely) since then.
When Gavin Maxwell contemplated leading the team from afar, he shared his plan with the team. I immediately thought, ‘you can’t coach a sports team virtually, I’ve never heard of that!’ It was a judgemental reaction I’m not proud of. And, fortunately for the future of his team’s success, he didn’t listen.
Instead, as a world-class dragon boat coach, he talked to his team as much as 77 times in one week.
He only visits two to three times per year for training camps and regattas. He uses Whatsapp religiously to stay connected with his assistant coaches and the team of ~50 paddlers. They talk about practice attendance, performance, team line ups, and personal highs and lows.
They have earned countless gold, silver and bronze medals competing against the German, Chinese and Canadian top teams. Despite his not being on-site physically, their excellence has not diminished; in fact, it has improved!
77 times in a week; that’s 11 times per day. All to make sure that his message remains clear and the team stays focused. They get clarity, support, and encouragement daily.
On July 20, 2019, this team won the Canadian Dragon Boat Championship in an intensely competitive arena. This major win proves remote leadership can work and high communication is the secret weapon.
My communication frequency was too low
Did you do what I did when I realized the volume of messages both of these high-performing leaders exchanged with their teams?
I counted my work instant messages because it was the quickest way to have a baseline measure of my communication frequency. Of course, I could count meetings, emails, and other types of interactions but I was looking for an easy indicator.
For relationship context, I also counted the number of messages with my partner, best friend, and family. Here are the results:
With my best friend in another province: 58
With my live-in partner: 40
With my parents in another province: 15
With my direct team members: 5–13
Faced with this terribly low team text messaging count, I immediately tried to defend my position. I reasoned this result was not bad because there were other correspondences that week via face-to-face, video, and group chats.
A nagging doubt challenged me to demand more of myself.
The literature showing a team with high morale performs better moved me to change. I launched a new personal goal to increase my digital interactions. Understanding that work relationships differ from personal, I set a 20/week target.
I set the goal to communicate every day with team members out of my sight. I set aside short time slots each day for chatting. I learned about upcoming weekend plans, last evening’s activities, and their mood of the day.
Throughout the daily conversations, I attempted to address the common leadership challenges through my written messages. My lessons and suggestions follow. For each practice, I'll relate it back to those original six leadership challenges: purpose, focus, guide, change, growth, and relationships.
Expect Less: An Easy Hack to Create More

I never thought I could draw.
I apparently had a knack for music and writing. But as far as I was concerned, all of the visual art genes were given to my sister.
She was often coming up with new ideas and new creations. I would watch as she would create murals and sketches. Seemingly coming forth from her as fully realized concepts.
I would then slink off to the next room to try and create something comparable.
However, I never felt like my efforts turned out well. So I eventually gave up. Surrendered the visual arts to my sister.
I left them alone for years. Knowing that anything I would create wouldn’t be good enough. Wouldn’t be worthy.
I felt pangs of jealousy or desire when I would see art — whether in a museum or in a friend’s notebook. Sometimes I would even voice my frustration, saying “I wish I could _____” (draw/paint/sculpt…)
And then a few years ago, on an impulsive whim, I bought a set of twelve Kimberly sketching pencils. A kit. The word brought with it encouragement. That all that I needed would be contained in the 9"x4" green box, which came with instructions explaining the scale of the pencils. Each grade of the graphite pencils. How a 4B would appear lighter on the page than a 2B.
I took one of the harder pencils and started drawing. After not much time, I examined my result. I sighed. Disappointed, I thought to myself “Just buying the tools doesn’t make you an artist.” And promptly put them away.
A year later, my partner at the time (a visual artist) suggested we have a drawing session together. A few minutes into our sketch-a-thon, he could see I was getting frustrated.
“I’m just not good at this,” I told him.
“You’re not giving yourself much of a chance,” he replied. “And you know if you draw a line you don’t like, you can always erase it and try again.”
What?! It didn’t have to be perfect from the beginning?
I went back to my sketch and tried out his piece of advice. If a curve didn’t come out as expected or if something was too dark, I would erase and try again. And it worked! I came out with a decent-looking quokka shape.
“That’s good! Now just keep adding.”
So I started to add more details, erasing here or there when things didn’t look right. I switched between graphite pencils for some shading.
At the end of the hour, I had a decent drawing of my favorite marsupial:
Quokka c. 2016 — Rachel Drane (me, obviously)
I almost couldn’t believe my eyes. I had drawn something recognizable?! Something that was far from perfect, but that I loved all the same. I was giddy. Maybe I was an artist. Maybe I could draw.
Why was this such a revelatory experience for me? And why hadn’t I been creating all that time?!
We expect too much
[W]e get into [creative work] because we have good taste. But there is this gap. For the first couple years you make stuff, it’s just not that good... But your taste… is why your work disappoints you. A lot of people never get past this phase, they quit. Most people I know who do interesting, creative work went through years of this. — Ira Glass
When we think about creating, a lot of us have this romanticized view of making “worthy” work. That it has to be unique. Well-crafted. Postable.
The idea of producing “unworthy” work can be scary. You could possibly look upon the work and see it as a physical manifestation of yourself and your own worth. I know that I’ve done (and sometimes still do) this. It could be threatening to one’s identity, even, especially if you consider yourself an artist or a creative.
But within that fear lies the deathtrap for creation. No matter how talented and accomplished you are, if you’re afraid of creating something “sub-par,” you’ll eventually stop creating. At least stop creating in any meaningful way.
Because, like Ira alludes to above, you have to create. If you want to create, you have to create. It’s in you. You just have to do.
And — more likely than not — this probably means lowering your expectations of the result. Having lower, or maybe no, expectations gives you free reign to create whatever you’d like. Allows you to have fun with it. Opens up new possibilities.
Maybe you get 20 sketches that you’re not crazy about. But there’s one aspect of one of the sketches that really sparks something inside of you. That one spark is worth it.
Even if that doesn’t happen for you, you’re creating. And creating even one sketch that you think is just so-so is better than not creating.
I remind myself, “Don’t let the perfect be the enemy of the good.” (Cribbed from Voltaire.) A twenty-minute walk that I do is better than the four-mile run that I don’t do. The imperfect book that gets published is better than the perfect book that never leaves my computer. The dinner party of take-out Chinese food is better than the elegant dinner that I never host. — Brené Brown
If you expect a masterpiece every time you think about sitting down to create, you’re going to get disappointed. Maybe even discouraged from coming back. Or even worse, you might just never sit down in the first place.
And then, you might start buying into the narrative that you’re not good enough. You’re just not that creative. Giving yourself license for that part of you to suffer.
Because if you yearn for creativity, if you wish you could create, you are creative. You just are. But it does mean adjusting your expectations before sitting down (or standing up).
Everyone creates differently
The sculpture is already complete within the marble block… It is already there, I just have to chisel away the superfluous material. — Michelangelo
I love this quote. And its sentiment has been echoed in the words of other artists throughout the centuries. Stephen King, for instance, refers to a story as being a fossil. It’s the author’s job to find and carefully extract that fossil.
But I’m not sold that everyone creates in this manner. And that’s okay!
I think what frustrated me when I was young was that I was seeing my sister’s process and then trying to emulate it. She seems to be able to have an idea for what she wants to draw or paint and then goes about achieving that. That’s not me.
It’s taken me years to realize that I need to experiment a lot more. That I need to play and put things down on paper. Yes, I go through a lot of materials (paint, pages, and canvases), but I’m able to work toward something. Something that I maybe hadn’t really planned. But something that brings me some spark of joy.
It’s every creative’s responsibility to discover how they are meant to create. This will take some trial and error, for sure. But I encourage you to not compare your process with anyone else. You can learn from others, sure. (In fact, I encourage you to!) But don’t seek to copy them precisely. Take what works for you, and leave the rest.
The number one rule is: You need to create. (Am I starting to sound like a broken record yet?) Something, anything! Release the expectation that it’ll be perfect. That it will be a “great” work. That it’ll get you any Instagram love. Because, in all likelihood, it won’t. At least not at first.
Release the expectation that you need to create in a certain way/time/style. Because everyone will be different.
Don’t sacrifice years with your sketching pencils/notebooks/paints packed up in boxes. Don’t trick yourself into thinking you’re not able to create.
Because, like I said before, if you want to create, you are a creative. And you owe it to yourself — and who knows, maybe even the world — to create.
Still feeling stuck?
Try something completely new. Break your routine. Doesn’t even have to be a “creative” endeavor.
Exercise. Endorphins are real. And can be real magical.
Sun lamp. This works better than caffeine for me sometimes. Especially in the winter, when I’m really lacking the motivation and energy to do anything.
Consume. Read a book about your art. Follow art accounts. Go to museums. Do this all with an open mindset. (As opposed to a comparative one)
Approach your endeavor as play. Somehow this perspective can instantly lower your expectations. And unleash some fun energy!
Connect. I know this is easier said than done. So if joining a writer’s group or art class seems like too much for you, maybe even just finding one person you know who is also trying to create. Doesn’t even have to be the same medium. See if you can support one another. Be accountabili-buddies!
Force it. The number one piece of advice given to writers is: Write every day. Even if you don't feel like it. Even if you feel like you have nothing to say. Schedule it, if you have to. But no matter what, in the wise words of Nike and Shia LaBeouf: just do it.
Data Science — A Door To Numerous Opportunities

Data Scientists have staying power in the marketplace and they make valuable contributions to their companies and societies at large.
Today, Data Scientists have become more important than ever. The reason being they can frame better business goals, make effective decisions and identify the opportunities better. The scope of Data Science includes organizations in banking, healthcare, energy, telecommunications, e-commerce, and automotive industries among many others.
The main components involved in Data Science are Organizing, Packaging and Delivering data. Data Science overall is a multidisciplinary blend of data inference, algorithm development, and technology in order to solve complex analytical problems.
A Data Scientist must know what could be the output of Big Data he or she is analyzing. He/she should clearly know how the output could be achieved with what is available.
To achieve this, a Data Scientist is required to follow these steps:
Step 1: Collect huge data from multiple resources
Step 2: Perform research on complicated data available and frame questions that need to be answered
Step 3: Clean the huge volume of data to chuck irrelevant information
Step 4: Organize data into a predictive model
Step 5: Analyze data to determine the trends and opportunities and recognize weaknesses
Step 6: Produce data-driven solutions to conquer challenges
Step 7: Invent new algorithms to solve problems
Step 8: Build new tools to speed work
Step 9: Communicate predictions from the analyzed data in the form of charts/reports/visualizations
Step 10: Recommend effective changes to fix the existing strategies
Data Science — The Future Lies Here
Help Companies Make Progress With Data
Many organizations collect data regarding customers, website interactions and much more. But according to a recent study by Gemalto, 65% of organizations can't analyze or categorize all the consumer data they store, and 89% of companies admitted that the ability to analyze data effectively would provide them with a competitive edge in their industry.
Being a Data Scientist, you can help companies excel with the data they collect.
Better Career Opportunities
Among the most promising jobs of the year 2019 on LinkedIn based on LinkedIn data, Data Scientist has topped the list, with an average salary of $130,000.
The ranking was based on five components: salary, career advancement, number of job openings in the U.S., year-over-year growth in job openings, and widespread regional availability.
Astonishing Amount of Data Growth
We generate data daily, but never really think about it. According to one study: "Today, more than 5 billion consumers interact with data every day, and by 2025 the number will be 6 billion, i.e. 75% of the world's population. In 2025, each connected person will have at least one data interaction every 18 seconds. Many of these interactions are because of the billions of IoT devices connected across the globe, which are expected to create over 90 ZB of data in 2025." — The Digitization of The World
Data is at the heart of digital transformation, the lifeblood of the digitization process. Today, companies are leveraging data to improve customer experiences, open new markets, make employees and processes more productive, and create new sources of competitive advantage — working towards the future of tomorrow.
From a global perspective, India is second only to the USA in recruiting Data Science professionals. If you aspire to become a Data Scientist, enroll in our courses and transform your dreams into reality!
6 Simple Tips From Top Freelancer Jaime Hollander

1. Commit the time
There's no overnight success. Not even in freelancing. If you're just starting out, the only thing you need to get better is time.
You can always have an excuse for why it’s not the right time to focus on Upwork. But if you want to get going and finally start earning money there, you need to invest the hours — lots of it.
Invest 30–60 minutes per day, every single day. Fix your profile, write good proposal drafts, look for jobs.
Plan to send 20–40 proposals per day to get your name out. You want possible clients to know you. Spread the word of your work and use the chance to improve your proposal writing.
It’s a numbers game.
The more time you spend, the higher the chance of landing a job. Also, make sure to answer as quickly as possible after you get invited. This will give you the advantage of time over other freelancers.
Steps to follow:
Invest 30–60 minutes a day, every single day
Plan to send 20–40 proposals daily
Respond to invites immediately
Don’t expect overnight results
2. Be willing to work for LESS
If you're starting out and don't have a reputation, you need to be ready to work for less, even if you're already a pro in this field. People won't trust you if you have high rates but no feedback.
Being willing to work for less doesn't mean competing on the lowest prices. It doesn't mean working for $3 an hour, unable to pay for lunch after working the whole morning.
Instead, it means checking the bid range of a job offer and placing your bid in the lowest quarter.
For example, Jaime came from a strong marketing background before she even started working as a freelancer. She had her Master’s and had worked for several years in different positions.
But despite her extensive expertise in this field, she wasn’t expert-vetted on Upwork. Without a reputation, she was simply another freelancer focusing on copywriting and marketing texts.
Because of that, she needed to get her name out. Instead of shoving her apparent skills in everyone’s faces, she decided to let her work speak for her. She started out offering her services at around $20 an hour.
Far from what she'd have charged previously, but enough to make ends meet for now and (more importantly) to get her name out there.
View it long-term. You aren't on Upwork to cash in big time and never be seen again. You are on Upwork to offer people your skills. To help them solve their problems, using your skills.
3. Look for red flags
Do you know what sucks? Agreeing on something, setting it in stone, and being overthrown afterward. On Upwork, it’s even worse.
Save yourself from this.
Listen to your gut
As a freelancer, you need to listen to your gut. If something seems to be shady, don’t bother too much and skip it. Sure, it might be a bummer at this moment, but there will always be another job.
Here’s a typical example:
“Hey there! My name is XX, and I need someone who can write me an ebook about nutrition topics. The length should be between 10.000 and 15.000 words, and usually, I got the ebooks delivered within one to two weeks. Because it would be the first time working with you, I’d propose a rate of $0.01 per word. If I’m satisfied with your work, you’d be able to get up to $0.06 per word for the following projects.”
This. Is. Proposal-bait.
It’s wrong in many ways, and you definitely don’t want to apply here because:
if you’re regularly working on other projects, one to two weeks is a tight time-frame.
$0.01 per word is a straight rip-off, no matter where you live.
it’s a false promise of a better rate the next time, just to drop you right after you’ve delivered.
Promising, but incomplete job posts
Again: Don’t bother too much and skip. Even if it’s offering an excellent salary or you think you’ve done something similar before.
If the client doesn’t have the time to write a job post, they probably don’t take the time to explain their expectations.
You’re a freelancer, not a babysitter.
It’s your job to manage expectations
As a freelancer, you aren’t only writing, you’re also managing a client’s expectations. Meaning you want to set them straight if they expect outrageous things.
Unrealistic deadlines? Tell them.
Unreasonable prices? Tell them.
Ridiculous amendments? Tell them.
They will keep pushing you around if you don’t. Tell the clients that you’re happy to work for them, but not under just any conditions.
4. Know your limits
Don’t promise others anything you can’t offer.
Many self-help gurus advise you to grow into your tasks following the principle “Fake it until you make it”. What might work for your confidence won’t work as a freelancer.
It’s one thing to be confident about your skills and eager to tackle new challenges. But it’s an entirely different thing offering services without having the necessary knowledge or skills.
Know your limits. Stretch them, if needed. But never ignore them.
5. Respond to invites professionally
It’s already a good sign to get invited. Because getting asked means you hit their radar. It’s the first success and step in the right direction.
But if you’ve already come this far, you don’t want to sabotage yourself. Remember, getting invited doesn’t mean you got the job. Instead, it means you’re in the inner circle of candidates.
Here, you’d like to present yourself and your skills in the best way possible. As they probably don’t remember having invited you, start thanking them for their interest.
Then, show them your appreciation and provide a good proposal, mentioning how you could help. Remember, the client comes first.
It’s not about your skills. It’s about your clients’ problems.
The same goes for denying invites. Be professional and let them know that you’re currently busy. Or, if needed, set their expectations straight and provide constructive feedback.
6. Be a subject matter expert
Become a pro in what you do. Don’t rely only on Upwork to bring you jobs, money, and improvement. Instead, focus on expanding your horizons outside of Upwork too.
Use other writing outlets such as a personal blog, Medium, or maybe the school magazine. Buy books and listen to podcasts.
Never use a client to experiment
If you want to do something vastly different, you don’t want to use your clients as guinea pigs. Know your limits and try your new skills in a sandbox before you offer to help people with them.
Always aim for a challenge
You’ve heard it a couple of thousand times: Aim for something slightly out of your comfort zone.
By doing so, you won’t be lost entirely, but you also up-skill as you go. This is what you want to achieve.
Here’s a real example:
I’ve been practicing meditation for a few years now. In this time, I read multiple books on how one can implement the practice in daily life and how it can be improved.
Last year, I browsed the job offers and stumbled across a ghostwriting project for an ebook about meditation. I was intrigued, willing to use my knowledge and experience to write an ebook about this topic.
I applied and explained how I meditate myself, what experiences I have, and how it could help the client. Additionally, I proposed a relatively low price, as I didn’t have any experience before.
The client bit and I got the job. I stepped out of my comfort zone to deliver the ebook and it was a win-win.
This is the type of challenge you’d like to tackle. Out of your comfort zone but not entirely lost in Nirvana. | https://medium.com/inspired-writer/5-tips-a-top-freelancer-would-give-you-9651af66c022 | ['Tim Schröder'] | 2020-09-10 06:27:12.632000+00:00 | ['Writing Tips', 'Writing', 'Productivity', 'Business', 'Freelancing'] |
3,534 | Here’s all one should know about God Class in Java | ILLUSTRATION OF GOD CLASS
Let’s say you are building a customer management application. Here’s a snippet depicting the same.
God Class example screenshot by the author
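The screenshot itself isn’t reproduced in this copy; the sketch below is only an illustration of what such a class tends to look like, and every field and method name here is an assumption rather than the author’s original code.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a "God Class" that keeps absorbing unrelated responsibilities.
public class Customer {
    private String name;
    private String email;
    private String phone;

    // Address details live directly on the customer...
    private String street;
    private String city;
    private String zipCode;

    // ...and so do billing and order concerns.
    private String creditCardNumber;
    private List<String> orderIds = new ArrayList<>();

    // Persistence, formatting and business rules all crammed into one place.
    public void saveToDatabase() { /* JDBC calls would go here */ }

    public String formatMailingLabel() {
        return name + "\n" + street + "\n" + city + " " + zipCode;
    }

    public void sendPromotionalEmail() { /* SMTP calls would go here */ }
}
```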
This is a bit short example but if we rethink, a customer may have many other fields. Isn’t it a bit obvious that the Customer class has way too much information? What if a customer has several addresses in different cities, are they all going to be properties in this class?
As the application grows, this class also keeps growing. After years of maintaining the application, we end up with a monstrous class containing thousands of lines of code.
Now let’s revisit why this is considered a bad coding practice that results in a code smell. It may look very nice to have access to all these methods from one place, and you may claim that it’s less work to change a single file. But it also means that whenever there’s a change in the class, it is more likely to introduce bugs. One bug is not a big problem, but bugs compounding is. Forget about solving them; it becomes very hard even to find two bugs which interact.
Even if the bug is detected and resolved, can you imagine the pain of trying to test such an oversized class with too many methods? Even if we write unit test cases, it won’t be feasible to cover everything, because this class is missing abstraction. That exposes all the members of the class and thus violates the principles of OOP.
Some more problems of having a “God Class”
All unwanted members of such a class will be inherited by its subclasses, violating the principle of code reusability.
The object will occupy more space in heap memory, so it will be an expensive operation in terms of memory management and garbage collection.
Tight coupling tends to build up around such a class, making the code difficult to maintain.
Unwanted threads may keep running in the background, so during parallel processing, low-priority threads might block high-priority tasks from running.
Why and When does this happen?
Programmers face a conundrum of basic values. Every programmer has his own favorite formatting rules, but if he works in a team, then the team rules. A team of developers should agree upon a single formatting style, and then every member of that team should use that style. We want the software to have a consistent style. We don’t want it to appear to have been written by a bunch of disagreeing individuals.
Generally, this practice is seen among programmers who lack real-world programming experience and knowledge of the basics of the programming language, and who therefore fail to develop the application from an architectural perspective.
We will often hear common arguments like “But I need all this info in one easily-accessible place.” in defense of the validity of the God Object.
Strategic measures to prevent or resolve the God class or object:
Firstly, let’s focus on avoiding god Objects or class while developing.
1. Following The Boy Scout Rule
It’s not enough to write the code well. The code has to be kept clean over time. We’ve all seen code rot and degrade as time passes. So we must take an active role in preventing this degradation.
The Boy Scouts of America have a simple rule that we can apply to our profession.
Leave the campground cleaner than you found it.
If we all checked-in our code a little cleaner than when we checked it out, the code simply could not rot. The cleanup doesn’t have to be something big. Change one variable name for the better, break up one function that’s a little too large, eliminate one small bit of duplication, clean up one composite if statement.
2. The Only Valid measurement of code quality is the count of WTF’s(Works That Frustrate)!!!!
Sources: - Clean Code By Robert C. Martin
3. Keep it Small!!!!!
The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that. Even though we no longer have physical constraints, given that we all use modern monitors and laptops, functions should rarely be more than 20 lines long, and each should do only one thing.
FUNCTIONS SHOULD DO ONE THING. THEY SHOULD DO IT WELL. THEY SHOULD DO IT ONLY.
Now, let’s focus on fixing God objects that already exist. The only solution for this is to refactor the object so as to split related functionality into smaller, manageable pieces. For example, let’s refactor the Customer.java from earlier:
Customer.java screenshot by the author
Address.java screenshot by the author
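Neither screenshot is reproduced in this copy; a minimal sketch of the refactoring they describe might look like the following, with all field names assumed.

```java
import java.util.ArrayList;
import java.util.List;

// Customer now keeps only customer-specific state and delegates address details.
public class Customer {
    private String name;
    private String email;
    private final List<Address> addresses = new ArrayList<>();

    public void addAddress(Address address) {
        addresses.add(address);
    }

    public List<Address> getAddresses() {
        return addresses;
    }
}

// Address-related fields and behaviour live in their own class (Address.java).
class Address {
    private String street;
    private String city;
    private String zipCode;

    public Address(String street, String city, String zipCode) {
        this.street = street;
        this.city = city;
        this.zipCode = zipCode;
    }

    public String getCity() {
        return city;
    }
}
```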
Now if we need to change the address data of any customer, we would only need to change the Address class, not the others. | https://herownhelloworld.medium.com/heres-all-one-should-know-about-god-class-in-java-e318acbb9717 | ['Shalini Singh'] | 2020-10-28 16:37:11.122000+00:00 | ['Coding', 'Java', 'Programming', 'Software Engineering', 'Software Development'] |
3,535 | 5 Quick and Easy Data Visualizations in Python with Code | Data Visualization is a big part of a data scientist’s jobs. In the early stages of a project, you’ll often be doing an Exploratory Data Analysis (EDA) to gain some insights into your data. Creating visualizations really helps make things clearer and easier to understand, especially with larger, high dimensional datasets. Towards the end of your project, it’s important to be able to present your final results in a clear, concise, and compelling manner that your audience, whom are often non-technical clients, can understand.
Matplotlib is a popular Python library that can be used to create your Data Visualizations quite easily. However, setting up the data, parameters, figures, and plotting can get quite messy and tedious to do every time you do a new project. In this blog post, we’re going to look at 5 data visualizations and write some quick and easy functions for them with Python’s Matplotlib. In the meantime, here’s a great chart for selecting the right visualization for the job!
A chart for selecting the proper data visualisation technique for a given situation
Scatter Plots
Scatter plots are great for showing the relationship between two variables since you can directly see the raw distribution of the data. You can also view this relationship for different groups of data simply by colour coding the groups, as seen in the first figure below. Want to visualise the relationship between three variables? No problemo! Just use another parameter, like point size, to encode that third variable, as we can see in the second figure below. All of these points we just discussed also line right up with the first chart.
Scatter plot with colour groupings
Scatter plot with colour groupings and size encoding for the third variable of country size
Now for the code. We first import Matplotlib’s pyplot with the alias “plt”. To create a new plot figure we call plt.subplots() . We pass the x-axis and y-axis data to the function and then pass those to ax.scatter() to plot the scatter plot. We can also set the point size, point color, and alpha transparency. You can even set the y-axis to have a logarithmic scale. The title and axis labels are then set specifically for the figure. That’s an easy to use function that creates a scatter plot end to end!
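The gist embedded in the original post isn’t reproduced in this copy, so here is a small re-creation of the helper the paragraph describes; the function name and defaults are assumptions.

```python
import matplotlib.pyplot as plt

def scatterplot(x_data, y_data, x_label="", y_label="", title="", color="r", yscale_log=False):
    # Create the figure and axes
    _, ax = plt.subplots()

    # Plot the points, setting size (s), colour and alpha transparency
    ax.scatter(x_data, y_data, s=10, color=color, alpha=0.75)

    # Optionally switch the y-axis to a logarithmic scale
    if yscale_log:
        ax.set_yscale('log')

    # Title and axis labels are set specifically for this figure
    ax.set_title(title)
    ax.set_xlabel(x_label)
    ax.set_ylabel(y_label)
```

A new project then only needs a one-liner such as scatterplot(x, y, "GDP per capita", "Life expectancy", "My scatter plot") with whatever arrays it has.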
Line Plots
Line plots are best used when you can clearly see that one variable varies greatly with another, i.e. they have a high covariance. Let’s take a look at the figure below to illustrate. We can clearly see that there is a large amount of variation in the percentages over time for all majors. Plotting these with a scatter plot would be extremely cluttered and quite messy, making it hard to really understand and see what’s going on. Line plots are perfect for this situation because they basically give us a quick summary of the covariance of the two variables (percentage and time). Again, we can also use grouping by colour encoding. Line charts fall into the “over-time” category from our first chart.
Example line plot
Here’s the code for the line plot. It’s quite similar to the scatter plot above, with just some minor variations in variables.
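That code isn’t embedded here either; a sketch in the same style might look like this.

```python
import matplotlib.pyplot as plt

def lineplot(x_data, y_data, x_label="", y_label="", title=""):
    # Create the figure and axes
    _, ax = plt.subplots()

    # Draw the line, setting the line width (lw), colour and transparency
    ax.plot(x_data, y_data, lw=2, color='#539caf', alpha=1)

    # Title and axis labels
    ax.set_title(title)
    ax.set_xlabel(x_label)
    ax.set_ylabel(y_label)
```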
Histograms
Histograms are useful for viewing (or really discovering) the distribution of data points. Check out the histogram below where we plot the frequency vs IQ histogram. We can clearly see the concentration towards the center and what the median is. We can also see that it follows a Gaussian distribution. Using the bars (rather than scatter points, for example) really gives us a clear visualization of the relative difference between the frequency of each bin. The use of bins (discretization) really helps us see the “bigger picture”, whereas if we use all of the data points without discrete bins, there would probably be a lot of noise in the visualization, making it hard to see what is really going on.
Histogram example
The code for the histogram in Matplotlib is shown below. There are two parameters to take note of. Firstly, the n_bins parameter controls how many discrete bins we want for our histogram. More bins will give us finer information but may also introduce noise and take us away from the bigger picture; on the other hand, fewer bins give us a more “birds eye view” and a bigger picture of what’s going on without the finer details. Secondly, the cumulative parameter is a boolean which allows us to select whether our histogram is cumulative or not. This is basically selecting either the Probability Density Function (PDF) or the Cumulative Density Function (CDF).
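Since the embedded snippet is missing from this copy, a minimal version of the function being described could be:

```python
import matplotlib.pyplot as plt

def histogram(data, n_bins, cumulative=False, x_label="", y_label="", title=""):
    _, ax = plt.subplots()
    # n_bins controls the number of discrete bins; cumulative toggles PDF-style vs CDF-style counts
    ax.hist(data, bins=n_bins, cumulative=cumulative, color='#539caf')
    ax.set_ylabel(y_label)
    ax.set_xlabel(x_label)
    ax.set_title(title)
```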
Imagine we want to compare the distribution of two variables in our data. One might think that you’d have to make two separate histograms and put them side-by-side to compare them. But there’s actually a better way: we can overlay the histograms with varying transparency. Check out the figure below. The Uniform distribution is set to have a transparency of 0.5 so that we can see what’s behind it. This allows us to directly view the two distributions on the same figure.
Overlaid Histogram
There are a few things to set up in code for the overlaid histograms. First, we set the horizontal range to accommodate both variable distributions. According to this range and the desired number of bins we can actually compute the width of each bin. Finally, we plot the two histograms on the same plot, with one of them being slightly more transparent.
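As a hedged re-creation (the exact variable names are assumptions), the setup described above could look like:

```python
import numpy as np
import matplotlib.pyplot as plt

def overlaid_histogram(data1, data2, n_bins=0,
                       data1_name="", data1_color="#539caf",
                       data2_name="", data2_color="#7663b0",
                       x_label="", y_label="", title=""):
    # Set bin bounds so both distributions share the same bin edges
    data_range = [min(min(data1), min(data2)), max(max(data1), max(data2))]
    bin_width = (data_range[1] - data_range[0]) / 10

    if n_bins == 0:
        bins = np.arange(data_range[0], data_range[1] + bin_width, bin_width)
    else:
        bins = n_bins

    # Draw both histograms on the same axes, the second one slightly transparent
    _, ax = plt.subplots()
    ax.hist(data1, bins=bins, color=data1_color, alpha=1, label=data1_name)
    ax.hist(data2, bins=bins, color=data2_color, alpha=0.75, label=data2_name)
    ax.set_ylabel(y_label)
    ax.set_xlabel(x_label)
    ax.set_title(title)
    ax.legend(loc='best')
```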
Bar Plots
Bar plots are most effective when you are trying to visualize categorical data that has few (probably < 10) categories. If we have too many categories then the bars will be very cluttered in the figure and hard to understand. They’re nice for categorical data because you can easily see the difference between the categories based on the size of the bar (i.e magnitude); categories are also easily divided and colour coded too. There are 3 different types of bar plots we’re going to look at: regular, grouped, and stacked. Check out the code below the figures as we go along.
The regular barplot is in the first figure below. In the barplot() function, x_data represents the tickers on the x-axis and y_data represents the bar height on the y-axis. The error bar is an extra line centered on each bar that can be drawn to show the standard deviation.
Grouped bar plots allow us to compare multiple categorical variables. Check out the second bar plot below. The first variable we are comparing is how the scores vary by group (groups G1, G2, ... etc). We are also comparing the genders themselves with the colour codes. Taking a look at the code, the y_data_list variable is now actually a list of lists, where each sublist represents a different group. We then loop through each group, and for each group we draw the bar for each tick on the x-axis; each group is also colour coded.
Stacked bar plots are great for visualizing the categorical make-up of different variables. In the stacked bar plot figure below we are comparing the server load from day-to-day. With the colour coded stacks, we can easily see and understand which servers are worked the most on each day and how the loads compare to the other servers on all days. The code for this follows the same style as the grouped bar plot. We loop through each group, except this time we draw the new bars on top of the old ones rather than beside them.
Regular Bar Plot
Grouped Bar Plot
Stacked Bar Plot
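The three helpers described above aren’t embedded in this copy; the sketches below re-create them under the same parameter names used in the text, while everything else is an assumption.

```python
import numpy as np
import matplotlib.pyplot as plt

def barplot(x_data, y_data, error_data=None, x_label="", y_label="", title=""):
    _, ax = plt.subplots()
    # x_data marks the ticks on the x-axis, y_data the bar heights
    ax.bar(x_data, y_data, color='#539caf', align='center')
    if error_data is not None:
        # Optional error bars (e.g. the standard deviation), centered on each bar
        ax.errorbar(x_data, y_data, yerr=error_data, color='#297083', ls='none', lw=2, capthick=2)
    ax.set_ylabel(y_label)
    ax.set_xlabel(x_label)
    ax.set_title(title)

def groupedbarplot(x_data, y_data_list, colors, y_data_names, x_label="", y_label="", title=""):
    _, ax = plt.subplots()
    total_width = 0.8
    ind_width = total_width / len(y_data_list)
    # Offsets that center the group of bars on each x tick
    offsets = np.linspace(-(total_width - ind_width) / 2, (total_width - ind_width) / 2, len(y_data_list))
    positions = np.arange(len(x_data))
    for i, y_data in enumerate(y_data_list):
        ax.bar(positions + offsets[i], y_data, width=ind_width, color=colors[i], label=y_data_names[i])
    ax.set_xticks(positions)
    ax.set_xticklabels(x_data)
    ax.set_ylabel(y_label)
    ax.set_xlabel(x_label)
    ax.set_title(title)
    ax.legend(loc='upper right')

def stackedbarplot(x_data, y_data_list, colors, y_data_names, x_label="", y_label="", title=""):
    _, ax = plt.subplots()
    # Each sublist in y_data_list is one layer; later layers are drawn on top of the running total
    bottom = np.zeros(len(x_data))
    for i, y_data in enumerate(y_data_list):
        ax.bar(x_data, y_data, bottom=bottom, color=colors[i], label=y_data_names[i], align='center')
        bottom = bottom + np.asarray(y_data)
    ax.set_ylabel(y_label)
    ax.set_xlabel(x_label)
    ax.set_title(title)
    ax.legend(loc='upper right')
```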
Box Plots
We previously looked at histograms, which were great for visualizing the distribution of variables. But what if we need more information than that? Perhaps we want a clearer view of the standard deviation? Perhaps the median is quite different from the mean and thus we have many outliers? What if there is some skew and many of the values are concentrated to one side?
That’s where boxplots come in. Box plots give us all of the information above. The bottom and top of the solid-lined box are always the first and third quartiles (i.e 25% and 75% of the data), and the band inside the box is always the second quartile (the median). The whiskers (i.e the dashed lines with the bars on the end) extend from the box to show the range of the data.
Since the box plot is drawn for each group/variable it’s quite easy to set up. The x_data is a list of the groups/variables. The Matplotlib function boxplot() makes a box plot for each column of the y_data or each vector in sequence y_data ; thus each value in x_data corresponds to a column/vector in y_data . All we have to set then are the aesthetics of the plot.
Box plot example
Box plot code
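The gist behind this caption isn’t included in this copy; a minimal re-creation of the boxplot() wrapper might be:

```python
import matplotlib.pyplot as plt

def boxplot(x_data, y_data, base_color="#539caf", median_color="#297083",
            x_label="", y_label="", title=""):
    _, ax = plt.subplots()

    # One box per column/vector of y_data; x_data provides the group labels
    ax.boxplot(y_data,
               patch_artist=True,
               medianprops={'color': median_color},
               boxprops={'color': base_color, 'facecolor': base_color},
               whiskerprops={'color': base_color},
               capprops={'color': base_color})

    ax.set_xticklabels(x_data)
    ax.set_ylabel(y_label)
    ax.set_xlabel(x_label)
    ax.set_title(title)
```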
Conclusion
There are your 5 quick and easy data visualisations using Matplotlib. Abstracting things into functions always makes your code easier to read and use! I hope you enjoyed this post and learned something new and useful. | https://towardsdatascience.com/5-quick-and-easy-data-visualizations-in-python-with-code-a2284bae952f | ['George Seif'] | 2019-05-04 11:59:37.974000+00:00 | ['Python', 'Data Science', 'Towards Data Science', 'Data Visualization', 'Visualization'] |
3,536 | Before We Had Google, There Was Googie Architecture | Before We Had Google, There Was Googie Architecture
Few things are more representative of the modern zeitgeist than Google. But there was a time when the same thing was said about Googie architecture.
Between the late 1950’s and early 1960’s Googie was the undisputed “look of tomorrow.”
While Kennedy spoke of Man going to the Moon, it was Googie-style architecture that made the Space Age come to life via cantilevered elements, parabolic boomerang shapes, bold colors and whiz-bang angles.
Famous examples of Googie Architecture are the Theme Building at LAX, the Space Needle in Seattle, Space Mountain at Disney and the TWA Terminal at JFK Airport.
But Googie design elements became directly accessible to the everyday consumer at places like the Original McDonald’s restaurants.
You can see the Googie influence in both the architecture and cars in the parking lot.
The mecca of Googie-style was Southern California, and architects like Eldon Davis and Stanley Metson brought the space-age to the every-day.
Gas stations, bowling alleys, movie theaters, motels and restaurants like Norm’s, Pann’s, Chips’ and Googie’s (the style’s namesake), became the iconic epicenters of a future that was already here today.
Googie was pure, eye-sugar escapism. At a glance, we could escape the mundane with the rush of the future.
To the business-owner, Googie meant profits.
From a design standpoint, Googie combined the groundbreaking architectural design use of cantilevered concrete, steel and plate-glass popularized by Frank Lloyd Wright — with the abstract art of painter Wassily Kandinsky.
Frank Lloyd Wright’s Fallingwater, designed in 1935.
Those iconic Googie signs are like the 3D rendering of a Kandinsky painting. You will see the same bold, primary color patterns, shapes and raygun zaniness.
Points, 1920, by Wassily Kandinsky. Note the Golden Arches!
The true impetus behind Googie-style was the boom of American car-culture. Even in the 1950’s, L.A. was a driving city. And the eye-catching Googie designs and bright neon helped businesses entice patrons from the road.
‘Norms’ by Ashok Sinha
Successful Advertising and Commerce helped move architectural design choices and car culture in a similar direction as the entire Googie aesthetic emerged. And cars of the 1950’s became the rocket ship of the Everyman.
1959 Chevy
There was a moment in time when you could drive to Disneyland Anaheim for vacation in your ’59 Chevy and be utterly immersed in a Googie world of the future.
Gas up at a Googie filling station, grab lunch at a Googie burger joint, go to Tomorrowland at Disney, catch a movie at the Googie cinema, go bowling at Googie Lanes, then head back for a swim at the Googie Motel.
The Future was awesome.
Imagining a life inside a futuro-fantasy is not entirely unlike our world today.
Only now, we’re immersed in Google’s brand of escapism. The entire world at our fingertips on Google Search, Google Maps, YouTube, Gmail — all on a Google Smartphone. | https://briandeines.medium.com/before-there-was-google-there-was-googie-6146bd973509 | ['Brian Deines'] | 2020-01-13 21:21:30.236000+00:00 | ['Architecture', 'Art', 'Google', 'Design', 'Tech'] |
3,537 | Stop Trying To Be Famous, Popular, or A Cash Machine | I’m big on blocking. Life’s too short for slop in my feed. I excise rage arbiters that offer little recourse or solutions. I block those who are abusive, petty, and cruel. Hate-mongers and hate-readers. Complainers that don’t create. People who call me a cunt in the comments or try to tell me how to do the work I’ve been doing successfully for decades. And while I’ve been quietly muting a disturbing number of articles churned out by those I refer to as Derivative Peddlers, my patience is paper thin.
I’ve written about our cult of more, our desire to be big when it’s just as noble to play small — some of which has played out in pedestrian writing advice and sloppy self-help, which, much to my chagrin, has become pervasive.
I’m willing to bypass writing advice that reduces one’s work to a sixth-grade book report, but I can’t stomach people that peddle online homogeny. Let me explain.
It’s the equivalent of someone giving you a playbook for painting. Say you’re an artist who barely survived the Gothic art of the Middle Ages. There exists a patronage culture in Florence where the Church and the Medicis decreed taste. Those who had accumulated power and wealth defined genius — told you there was one acceptable way to paint and sculpt. Imagine if everyone followed the rules, adhered to the guidelines and created art that was a bland photocopy of a brilliant original.
How would we have borne the two disparate styles and personalities of DaVinci and Michelangelo? How could we revere Caravaggio’s chiaroscuro, his artistic realism and Titian’s lush canvases and idealism?
Why would you adhere to a single playbook that purported to define the whole of art? A set of guidelines that don’t allow for risk and individual identity? If we didn’t have artists who broke convention and form, art wouldn’t evolve.
And here we are, centuries later, listening to self-proclaimed experts posing as the Medicis. They’ve made a little money online and now they’re telling you how to paint, but what they’re really doing is cultivating homogeny. Holding your head underwater while you drown in the sea of same. They tell you how to compose titles for your stories and how to write your stories. They warn you against veering from what they’ve defined as acceptable formatting. But you have to trust them because they know things. They’ve cracked the algorithm code and here are their thirteen screenshots of their income to prove it.
As if income defines an artist’s talent. By that logic, hedge fund managers would be the Michelangelos of the 21st Century. As if income correlates to the depth and power a piece of writing has over the reader. As if more means better when all it means is…more. Let’s not ferret out hidden meanings in simple definitions.
The Derivative Peddlers want you to copy them and your writing (and by extension, yourself) is a failure if you don’t. You’re a loser because you’re not making $10,000 a month. You’re a failure because you’re not cranking out 10,000 garbage words a day. You’re a weirdo if you don’t follow the hive because the hive demands you never fall out of line lest you be ostracized and made fun of in their petty stories and vague sub-tweets.
Listen to us, they implore, at the expense of you. All they’re doing is reducing your work to the equivalent of junk food — palatable, easy, and similar to the ten other brands of cheese doodles on the rack.
While I believe in the power of the collective and community when it comes to social activism, economic equality and prosperity, when it comes to art I worship at the altar of own your weird. In art, the individual may be informed by the community, but their art shouldn’t be dictated by it.
Otherwise, rule-breakers would be shunned and smothered. We wouldn’t have artists birthing new genres, schools, and styles because every motherfucker will have a vocabulary of 100 words and copying someone else’s work rather than interpreting and evolving it.
We would have no Caravaggio, Samuel Beckett, Virginia Woolf, Gabriel Garcia Marquez (who was influenced by Woolf), no Ben Marcus, no Kelly Link, no modernist, post-modernist art and fiction. No auto-fiction, no abstractism. No impressionism and expressionism. We would be stuck in the fucking Middle Ages painting our pedagogical, baby-man Christs.
I learn the rules of my craft to break them, not to be beholden to them.
If you want to make money from your writing, fine, pitch and sell stories that adhere to a publication’s guidelines. There’s nothing wrong with that — I sell work that’s in my voice but perhaps not my style to earn a paycheck. We live in the real and the real requires you to cut checks to people on a regular basis.
But there’s a difference between the work I sell and the work I create because it fuels me, helps me interpret the world around me. Both give me joy, but I’m only constrained by the former, not the latter. And yes, you can make money from your art — binaries are boring — but if you’re making money by limiting your art or reducing it to that which looks and walks and talks like every other kid peddling their income reports on the block, how are you different? If your words and form are locked in a cage, how do you grow?
Constraints are cruel and I refuse to abide by them. If that means I don’t make as much money as the kid down the block, I’m fine with that. If it means a smaller group of people read my work, I’m fine with that. I have zero interest in optimizing my fucking titles. My goal has never been to be mass-market — it’s been to put people’s heart on pause when they read my work. To move them. Inspire them to see the world a different way. To feel part of something larger than both of us. My goal has always been to test the limits of language by using words as weapons and shields.
I could write to conform, to pander, but why would I? To publish my income reports? To tout how famous I am and then get more people calling me a cunt in the comments?
It took me decades to detangle my work from the results of the work. I don’t need to be popular or famous when I’m allergic to swarms of people. It took me decades to feel confident in saying I’m an exceptional writer, but I will forever have room to grow. And it took me decades to be okay with not being part of the peanut-crunching pack.
It took me a long time to admit one of my favorite words in the English language is motherfucker. Also, ossify.
Don’t be afraid to be yourself in your work. Experiment with form, voice, and style. Play. Mess up. Make music. Break things and wreck the joint. Approach your work as if you’re a child filled with firsts and wonder. Don’t worry about being likeable or relatable. Don’t swim in the sea of same and be surprised when you drown. Write terrible stories and rewrite them years later. Be okay with the fact that you’re not a content marketer — you’re a storyteller.
Write like you’re holding your still-beating heart in your hands. | https://medium.com/falling-into-freelancing/stop-trying-to-be-famous-popular-or-a-cash-machine-1a8904c89860 | ['Felicia C. Sullivan'] | 2020-08-04 14:30:39.599000+00:00 | ['Freelancing', 'Creativity', 'Life Lessons', 'Culture', 'Writing'] |
3,538 | Real Artificial Intelligence: Understanding Extrapolation vs Generalization | Source. Image free to share.
Real Artificial Intelligence: Understanding Extrapolation vs Generalization
Stop confusing the two
Machine learning models don’t need to be intelligent — most of their applications entail performing tasks like recommending YouTube videos or predicting a customer’s next move. It is important to understand the difference between extrapolation and generalization/interpolation to understand what it really means for a model to be intelligent and to avoid a common issue of confusing the two, which is often the root cause of the implementation failures of many models.
Generalization is the entire point of machine learning. Trained to solve one problem, the model attempts to utilize the patterns learned from that task to solve the same task, with slight variations. In analogy, consider a child being taught how to perform single-digit addition. Generalization is the act of performing tasks of the same difficulty and nature. This may also be referred to as interpolation, although generalization is a more commonly used and understood term.
Created by Author.
Extrapolation, on the other hand, is when the model is able to obtain higher-dimensional insights from a lower-dimensional training. For instance, consider a first grader who is taught single digit addition, then presented with a multi-digit addition problem. The first grader thinks, “okay, so when the units digit adds to larger than ten, there is a tens component and a ones component. I take that into account and add a one to the tens column to account for that.” This is, of course, the key insight around which arithmetic is based, and if you thought hard enough about what it means to add and understood place value, you could figure it out. Yet most first graders never realize this, rarely discovering how to carry over on their own.
Created by Author.
It’s important to realize that extrapolation is hard. Even many humans cannot succeed at extrapolation — indeed, intelligence really is a measure of being able to extrapolate, or to take concepts explained in a lower dimension and being able to apply them at a higher one (of course, dimension as in levels of complexity, not literally). Most IQ tests are based around this premise: using standard concepts in ways only a true extrapolator could understand.
In terms of machine learning, one example of extrapolation can be thought of as being trained on a certain range of data and being able to predict on a different range of data. This may be easy with simple patterns, such as simple positive number/negative number or in a circle/not in a circle classifications.
Created by Author.
Yet the ability for models to extrapolate on more complicated patterns is limited with traditional machine learning methods. Consider, for example, the checkerboard problem, in which alternating squares on a two-dimensional plane are colored as either 0 or 1. To a human, the relationship is clear and one could go on coloring an infinite checkerboard, given only the rules of a smaller, finite x by x checkerboard. While the checkerboard problem is defined by a set of rules for humans (no two squares that share a side can be the same color), mathematically it is defined as such:
Created by Author.
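The equation image is not reproduced in this copy. One standard way to write the labeling just described, assuming unit squares indexed by their integer corners, is:

```latex
% Label of the square containing the point (x, y); adjacent squares always
% differ because the parity of \lfloor x \rfloor + \lfloor y \rfloor flips
% across every grid line.
c(x, y) = \bigl( \lfloor x \rfloor + \lfloor y \rfloor \bigr) \bmod 2
```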
This is a very unintuitive way of thinking, and most models do not think to generate rigorously mathematical, extrapolative definitions. Instead, regular algorithms essentially attempt to split the feature space geometrically, which may or may not work, given the task. Unfortunately, in this case, they do not extrapolate to coordinates outside of the coordinates that they were trained on. Even many neural networks fail to perform at this task.
Results of KNN & Decision Tree when trained on 20x20 checkerboard and told to predict 40x40 space. Created by Author.
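An experiment in the spirit of that figure can be reproduced in a few lines with scikit-learn; the 20x20 training board and 40x40 evaluation board follow the caption, while the specific models and label rule are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def checkerboard(n):
    # Every unit square (i, j) gets the label (i + j) mod 2
    xs, ys = np.meshgrid(np.arange(n), np.arange(n))
    X = np.column_stack([xs.ravel(), ys.ravel()])
    y = (X[:, 0] + X[:, 1]) % 2
    return X, y

# Train on a 20x20 board, then ask for predictions over the full 40x40 board,
# three quarters of which lies outside the region the models ever saw.
X_train, y_train = checkerboard(20)
X_test, y_test = checkerboard(40)

for model in (KNeighborsClassifier(n_neighbors=1), DecisionTreeClassifier()):
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy on the 40x40 board:", round(model.score(X_test, y_test), 2))
```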
Sometimes, however, extrapolation is not as much about recognizing complex relationships as it is finding a smart, extrapolate-able solution to carrying out the task on foreign ranges. For instance, the checkerboard problem can be solved by drawing diagonal lines. This solution has been observed to be discovered with, for example, neural network ensembles.
Neural Network Result from “Competitive Learning Neural Network Ensemble Weighted by Predicted Performance”. Created by Author.
A common argument you’ll hear in the debate of machine intelligence is that “machines can only do one thing really well.” This is indeed the definition of interpolation or generalization — to perform tasks inter, or within, a predefined set of rules. Yet extrapolating requires having such a solid understanding of the concepts inter that they can be applied extra, or outside the taught region. Rarely can current machine learning models extrapolate reliably; usually, the ones that show promise are on problems that are geometrically easier to extrapolate on, and they fail at other problems. All artificial intelligence methods are interpolative by nature, and it’s debatable if it is even possible to artificially construct an extrapolative (“intelligent”) algorithm.
Extrapolation is seldom the goal of modelling or machine learning, but often it is used interchangeably with generalization — the most obvious, of course, being linear regression, in which taking its infinity-tending predictions as gold is more common and less noticeable when there are multiple dimensions at play (multiple regression).
Another example of extrapolation is when, say, companies will train a model on outlier-free data and then implement it in real life, in which outliers are much more abundant and cannot simply be ignored. This is common in implementations of models that often goes undetected but may be a big reason why your model may not be performing up to test results in real life. When there is a disparity between the data on which the model was trained on and the data the model is expected to predict on, it is likely that you may be asking the model to extrapolate.
XKCD. Sometimes even humans are bad extrapolators. Image free to share.
Models generally cannot extrapolate well, be it in a measure of symbolic intelligence or in real applications. It’s important to make sure that your model is not being confronted with an extrapolation task — current algorithms, even as complex and powerful as neural networks, are simply not designed to perform generally well on extrapolation tasks. A decent way to check for extrapolative tasks is to plot out the distributions of each column in the training and testing set, then see if the testing set is significantly non-compatible with the training one.
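A quick sketch of that check, assuming the data lives in two pandas DataFrames with matching numeric columns, is a two-sample Kolmogorov-Smirnov test per column:

```python
from scipy.stats import ks_2samp

def flag_extrapolation_risk(train_df, test_df, alpha=0.05):
    # Flag columns whose test distribution differs significantly from the training one
    risky = []
    for col in train_df.columns:
        _, p_value = ks_2samp(train_df[col], test_df[col])
        if p_value < alpha:
            risky.append((col, p_value))
    return risky
```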
Until we can create a machine learning algorithm that is capable of extrapolating generally across all problems, similarly to how the concept of a neural network (of course, with some varying architectures) can single-handedly address almost any generalization problem, they will never truly be able to be ‘intelligent’ and perform tasks outside the narrow scope in which they are trained on. | https://towardsdatascience.com/real-artificial-intelligence-understanding-extrapolation-vs-generalization-b8e8dcf5fd4b | ['Andre Ye'] | 2020-06-26 17:08:39.359000+00:00 | ['Machine Learning', 'Data Science', 'Artificial Intelligence', 'AI', 'Data Analysis'] |
carrying task foreign range instance checkerboard problem solved drawing diagonal line solution observed discovered example neural network ensemble Neural Network Result “Competitive Learning Neural Network Ensemble Weighted Predicted Performance” Created Author common argument you’ll hear debate machine intelligence “machines one thing really well” indeed definition interpolation generalization — perform task inter within predefined set rule Yet extrapolating requires solid understand concept inter applied extra outside taught region Rarely current machine learning model extrapolate reliably usually one show promise geometrically easier extrapolate one fail problem artificial intelligence method interpolative nature it’s debatable even possible artificially construct extrapolative “intelligent” algorithm Extrapolation seldom goal modelling machine learning often used interchangeably generalization — obvious course linear regression taking infinitytending prediction gold common le noticeable multiple dimension play multiple regression Another example extrapolation say company train model outlierfree data implement real life outlier much abundant cannot simply ignored common implementation model often go undetected may big reason model may performing test result real life disparity data model trained data model expected predict likely may asking model extrapolate XKCD Sometimes even human bad extrapolators Image free share Models generally cannot extrapolate well measure symbolic intelligence real application It’s important make sure model confronted extrapolation task — current algorithm even complex powerful neural network simply designed perform generally well extrapolation task decent check check extrapolative task plot distribution column training testing set see testing set significantly noncompatible training one create machine learning algorithm capable extrapolating generally across problem similarly concept neural network course varying architecture singlehandedly address almost generalization problem never truly able ‘intelligent’ perform task outside narrow scope trained onTags Machine Learning Data Science Artificial Intelligence AI Data Analysis |
3,539 | Helping Guests Make Informed Decisions with Market Insights | Two common decisions that our guests are making are:
1. Should I book now for better availability or later for better flexibility?
2. Which of the listings should I book?
As the service provider, we have broad views of the entire market and guest behaviors that individual guests do not necessarily have. This information usually provides helpful insights to solve guests’ puzzles.
Market insights are one channel where we interact with our guests at various stages of the booking flow. We provide dynamically-generated information to assist our guests in planning their trips. This information includes market and listing availability trends, supply, pricing discounts, community activities, etc. It is a critical component of the booking flow and has demonstrated its utility to the Airbnb community by enabling a larger variety of people to make wiser booking decisions and belong everywhere.
Market Insights
Figure 1 illustrates the architecture of our Market Insight service. As a guest interacts with the website, the Market Insight backend system talks with the Search and Pricing services to collect market availability and pricing information. It queries the key-value store for data that are relevant to the search or listing view—along with user information—to generate candidate insights in real-time. Then it ranks the candidate insights according to their value to the guest and passes the final insights to the front-end for display.
We are determined to generate market insights that are genuine, informational, and timely.
When a guest types in a search query that consists of location, dates, guest counts, and possibly room type and additional amenity constraints, the Search backend system retrieves available homes. The frontend automatically zooms in the map to a level that best covers the guest’s interests with enough context. For one such map view, the market insights server aggregates the exact number of available places, and warns the guest if the number is low. Similarly, we have an insight on the percentage of available places.
To support heterogeneous types of insights, the server retrieves two major sources of data from the key-value store.
1. Stream data: mostly factual information with limited counting and bookkeeping, available in near real-time, such as the number of unique views of a listing during the past N days. This data is served through an internal system.
2. Aggregated data: data that typically requires joining with and inferencing from other data sources, and that typically has up to a couple of days of delay. We build data pipelines using Airflow to streamline the data generation process and monitor their daily progress (a sketch of such a pipeline follows below). We use Spark for large-scale data aggregation and Hive for data storage. The “Rare Find” insight is an example of using aggregated data.
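To make that concrete, here is a hedged sketch of what such a daily Airflow job could look like. The DAG id, task names, and the scripts it calls are hypothetical, not Airbnb’s actual code, and the import path assumes Airflow 2.x:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator  # Airflow 2.x import path

# Hypothetical daily job: aggregate listing availability with Spark,
# then load the result into a Hive table for the insights service.
default_args = {"owner": "market-insights", "retries": 2, "retry_delay": timedelta(minutes=10)}

with DAG(
    dag_id="market_insights_daily_aggregation",  # assumed name
    start_date=datetime(2017, 1, 1),
    schedule_interval="@daily",
    default_args=default_args,
    catchup=False,
) as dag:
    aggregate = BashOperator(
        task_id="spark_aggregate_availability",
        bash_command="spark-submit aggregate_listing_availability.py --ds {{ ds }}",
    )
    load = BashOperator(
        task_id="load_into_hive",
        bash_command="hive -f load_availability_partition.hql --hivevar ds={{ ds }}",
    )
    aggregate >> load
```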
As the service platform, we have more data than individual guests. For instance, Airbnb keeps track of how frequently homes are booked. This information is a good indication of popularity. When a guest views a place that is rarely vacant, we remind our guests with the following insight.
This insight is supported by two data pipelines — one that aggregates availability information of an individual listing and the other for the availability of all markets. A “Rare Find” insight is for listings that have a high long-term availability ratio compared against the market X percentile — a value that trades off insight value and scarcity that we determined by live experiments. Both of the data pipelines are updated on a daily basis so our server will deliver accurate and timely insight to our guests.
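As a toy illustration of that thresholding (the column names, the pandas implementation, and the 90th-percentile cutoff are all assumptions; the post does not publish its actual logic, and the real X was tuned by live experiments):

```python
import pandas as pd

def is_rare_find(listings: pd.DataFrame, pct: float = 0.90) -> pd.Series:
    """Compare each listing's long-term availability ratio against the X-th
    percentile of its market; returns a boolean flag per listing."""
    market_threshold = listings.groupby("market")["availability_ratio"].transform(
        lambda s: s.quantile(pct)
    )
    return listings["availability_ratio"] >= market_threshold
```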
Since Market Insights’ inception in 2015, we have gradually added more insight types. Our work has increased booking conversion by more than 5%. With a large Airbnb community and our extension to more verticals, our work compounds in a substantial way for company growth.
Personalization
Personalization has been an evolving theme for many service platforms, and Airbnb has been a pioneer in adopting new technology, applying machine learning to personalize search results and detect host preferences. We have taken several steps in personalizing market insights.
It happens quite often that multiple insights are eligible, but we are only able to show one each time. Our current strategy is to use a deterministic, static vetting rule. However, guests parse information differently. For a guest who is sensitive to time, a reminder-style insight can be very effective in getting a trip booked soon. Yet, for a last-minute traveller, the number or percentage of available search results may be more informative, providing a signal to book while availability is running low. On a listing detail page, the message that a listing is “usually booked” may have different implications than “10 others are looking at this place” for different people. Not to mention that there may be sophisticated guests who would like to make decisions purely based on their chemistry with the listings, and thus prefer no market insights at all.
In 2016, we added extensive logging in our booking flow about which types of insight guests see, and how they react when seeing these insights, such as how much longer they spend on a listing page, whether they wishlist a listing, make a booking request, or go back to search. We implemented a couple of randomization strategies, equalizing the odds of impression for every eligible insight. Showing a wider variety of insights helps us acquire data to understand user preferences.
After collecting user interaction data, we join it with listing information, such as occupancy rate and number of views, along with search parameters, such as trip lead days and length, and perform data analysis. Our goal is to learn smart insight vetting rules that maximize the desired outcome. We believe guests are more likely to book when the experience serves them well, so we created a utility function that scores their progress. For example, requesting to book is worth one point and contacting the host is worth half a point, etc.
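A toy version of such a scoring function might look like this (the event names and any weights beyond the two mentioned above are hypothetical):

```python
# Hypothetical per-session utility score used in downstream analysis.
EVENT_WEIGHTS = {
    "booking_request": 1.0,  # "requesting to book is worth one point"
    "contact_host": 0.5,     # "contacting the host is worth half a point"
    "wishlist": 0.25,        # assumed weight
}

def session_utility(events):
    """Sum the weights of the guest actions observed in one session."""
    return sum(EVENT_WEIGHTS.get(event, 0.0) for event in events)

# Example: a guest who wishlists a listing and then contacts the host.
print(session_utility(["wishlist", "contact_host"]))  # 0.75
```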
We segment guests based on guest features, such as the number of searches and bookings they have done in the past, and come up with a insight vetting rule for each user segment. We are experimenting on our hypothesis that personalized insights deliver improved user experience and in return improves booking conversion. | https://medium.com/airbnb-engineering/helping-guests-make-informed-decisions-with-market-insights-8b09dc904353 | ['Peng Dai'] | 2017-07-11 18:48:10.371000+00:00 | ['Data Engineering', 'AI', 'Data', 'Machine Learning'] | Title Helping Guests Make Informed Decisions Market InsightsContent Two common decision guest making book better availability later better flexibility listing book service provider broad view entire market guest behavior individual guest necessarily information usually provides helpful insight solve guests’ puzzle Market insight one channel interact guest various stage booking flow provide dynamicallygenerated information assist guest planning trip information includes market listing availability trend supply pricing discount community activity etc critical component booking flow demonstrated utility Airbnb community enabling larger variety people make wiser booking decision belong everywhere Market Insights Figure 1 illustrates architecture Market Insight service guest interacts website Market Insight backend system talk Search Pricing service collect market availability pricing information query keyvalue store data relevant search listing view—along user information—to generate candidate insight realtime rank insight according value guest power frontend final insight display determined generate market insight genuine informational timely guest type search query consists location date guest count possibly room type additional amenity constraint Search backend system retrieves available home frontend automatically zoom map level best cover guest’s interest enough context one map view market insight server aggregate exact number available place warns guest number low Similarly insight percentage available place support heterogeneous type insight server retrieves two major source data keyvalue store Stream data mostly factual information limited counting bookkeeping almost realtime number unique view listing past N day data served internal system Aggregated data typically requires joining inferencing data source typically couple day delay build data pipeline using Airflow streamline data generation process monitor daily progress use Spark largescale data aggregation use Hive data storage “Rare Find” insight example using aggregated data service platform data individual guest instance Airbnb keep track frequently home booked information good indication popularity guest view place rarely vacant remind guest following insight insight supported two data pipeline — one aggregate availability information individual listing availability market “Rare Find” insight listing high longterm availability ratio compared market X percentile — value trade insight value scarcity determined live experiment data pipeline updated daily basis server deliver accurate timely insight guest Since Market Insights’ inception 2015 gradually added insight type work increased booking conversion 5 large Airbnb community extension vertical work compound substantial way company growth Personalization Personalization evolving theme many service platform Airbnb pioneer adopting new technology applying machine learning personalize search result detect host preference taken several step 
personalizing market insight happens quite often multiple insight eligible able show one time current strategy use deterministic static vetting rule However guest parse information differently guest sensitive time insight reminding effective getting trip booked soon Yet lastminute traveller number percentage search result may sound informative providing signal book availability running low listing detail page mentality listing “usually booked” may different implication “10 others looking place” various people mention may sophisticated guest would like make decision purely based chemistry listing thus preferring market insight 2016 added extensive logging booking flow type insight guest see react seeing insight much longer spend listing page whether wishlist listing make booking request go back search implemented couple randomization strategy equalizing odds impression every eligible insight Showing different increased variety insight help u acquire data understand user preference collecting user interaction data join listing information occupancy rate number view along search parameter trip lead day length perform data analysis goal learn smart insight vetting rule maximize desired outcome believe guest likely book advanced user experience created utility function evaluates progress example requesting book worth one point contacting host worth half point etc segment guest based guest feature number search booking done past come insight vetting rule user segment experimenting hypothesis personalized insight deliver improved user experience return improves booking conversionTags Data Engineering AI Data Machine Learning |
3,540 | Travelling to China and 14-Day Quarantine | Airport Departure
Advice from a weary traveller: Constantly check for changes in your flight status, and updates on flight paths being cancelled. Read on for why this is critical.
Oh it was meant to be a joyous occasion. I was finally on my way back to my significant other. Documents in hand. How I was wrong. Again. My cab was pulling into Heathrow Terminal 5 and immediately I sensed something was off. There wasn’t a car or person in sight. It quickly became apparent the terminal was closed. I asked the cab driver to politely stay put as I investigated. All flights from terminal 5 had been re-allocated to different terminals. Great! I wish my airline had informed me of this. I know some of the blame falls on me too, I should have been more vigilant keeping up with my flight status etc, but I still feel this is something they should have made their customers aware of. As I hopped back into the cab, I see a couple arriving in their taxi. I quickly told them to keep their cab, “The terminal is closed! Don’t let your cab leave, you will be stranded!”. I’m not going to lie, I felt like a superhero saving that couple from disaster. For those who don’t frequent Heathrow often, the terminals are miles apart. It would definitely put a damper on your day. Although, I'm sure it wasn't that dramatic.
Eventually, I arrived at Terminal 2. The diversion had subsequently left me short on time. Naturally, there was a monstrosity of a queue to get into the terminal. Single file lane for every passenger entering the building, with one officer checking everyone's boarding pass. Typical England. Eventually, I made it to the check-in desk. Hallelujah. Pleasantries aside, the check-in assistant got to it. Seconds went by, seconds turned to minutes. “What's happening now”, I sighed under my breath. The airline couldn’t seem to find me in their system. I also noticed the flight number and departure time were slightly different from my booking info. Alas, they found me in their system. There’s just one teeny-weeny problem. The second leg of my ticket had been cancelled. It had been for 7 days already. All flights from that specific destination to China had been cancelled for the foreseeable future. I started to get agitated. Why had the airline not informed me of this? Why was I only finding out about this now? Why was there nothing on the airline website mentioning this? For my troubles, I received a standard-issue apology, no explanation and a voucher. No refund. Back home I went dejected and heartbroken.
If you happen to run into a similar situation, depending on local regulations, you may be legally entitled to a refund. Subsequently, I have taken the airline to small claims court via an online agency that handles such matters.
I mentioned earlier in the article to fly direct. This is why. It saves you having to worry about cancellations across multiple leg journeys. I’m just thankful I wasn’t sitting in that poorly rated terminal hotel in the middle of nowhere when the flight path to China was cancelled. Count your blessings.
Thanks to my lovely family, we can fast forward 24 hours, and I was off to China again, direct to Shanghai this time around and minus a small fortune. Before I carry on, I think it is important to note a few things regarding check-in: First, they ask you to scan and fill out a medical questionnaire using Wechat. You will need to show this on arrival. If you don’t have Wechat (Which the Australian next to me didn’t), they do have workarounds, albeit slow. So don’t fret. Secondly and of even greater importance, is if you have a second leg within China, you need to make sure it is for two weeks time. My ticket was for London — Shanghai — Beijing. The Beijing leg was on the same day as the Shanghai arrival. You will not be able to make same-day connections, let alone 2 or 3-day connections. The port of arrival is where you will quarantine. The airline unbeknownst to this factor, still graciously changed my Beijing flight to the appropriate date. Last but not least, prepare for the unexpected, and don’t be afraid to ask questions. I fear a lot of people missed those local connecting flights for fear of asking. | https://medium.com/swlh/china-quarantine-68dd15d41559 | ['Niall Mcnulty'] | 2020-10-27 12:20:43.256000+00:00 | ['Coronavirus', 'Travel', 'Health', 'Life', 'China'] | Title Travelling China 14Day QuarantineContent Airport Departure Advice weary traveller Constantly check change flight status update flight path cancelled Read critical Oh meant joyous occasion finally way back significant Documents hand wrong cab pulling Heathrow Terminal 5 immediately sensed something wasn’t car person sight quickly became apparent terminal closed asked cab driver politely stay put investigated flight terminal 5 reallocated different terminal Great wish airline informed know blame fall vigilant keeping flight status etc still feel something made customer aware hopped back cab see couple arriving taxi quickly told keep cab “The terminal closed Don’t let cab leave stranded” I’m going lie felt like superhero saving couple disaster don’t frequent Heathrow often terminal mile apart would definitely put damper day Although Im sure wasnt dramatic Eventually arrived Terminal 2 diversion subsequently left short time Naturally monstrosity queue get terminal Single file lane every passenger entering building one officer checking everyones boarding pas Typical England Eventually made checkin desk Hallelujah Pleasantries aside checkin assistant got Seconds went second turned minute “Whats happening now” sighed breath airline couldn’t seem find system also noticed flight number departure time slightly different booking info Alas found system There’s one teenyweeny problem second leg ticket cancelled 7 day already flight specific destination China cancelled foreseeable future started get agitated airline informed finding nothing airline website mentioning trouble received standardissue apology explanation voucher refund Back home went dejected heartbroken happen run similar situation depending local regulation may legally entitled refund Subsequently taken airline small claim court via online agency handle matter mentioned earlier article fly direct save worry cancellation across multiple leg journey I’m thankful wasn’t sitting poorly rated terminal hotel middle nowhere flight path China cancelled Count blessing Thanks lovely family fast forward 24 hour China direct Shanghai time around minus small fortune carry think important note thing regarding checkin First ask scan fill medical questionnaire using Wechat need show 
arrival don’t Wechat Australian next didn’t workarounds albeit slow don’t fret Secondly even greater importance second leg within China need make sure two week time ticket London — Shanghai — Beijing Beijing leg day Shanghai arrival able make sameday connection let alone 2 3day connection port arrival quarantine airline unbeknownst factor still graciously changed Beijing flight appropriate date Last least prepare unexpected don’t afraid ask question fear lot people missed local connecting flight fear askingTags Coronavirus Travel Health Life China |
3,541 | How Much Is Your Data Worth? | Every time that you register for a new website for free you have to give out your name, email, and date of birth. Sometimes your address, if you’re shopping online. Other times your interests, if you’re hanging out on social media.
It might not seem like much to us, but to those websites and companies, your data is their most valuable asset. If they know where you live, what you like, and what you’ve bought in the past, they can sell you products that are specifically tailored for you.
Facebook alone makes an average of $7.05 in ad revenue per user each quarter. They have over 2.41 billion monthly active users. (The Washington Post, 2019)
So are you really registering for free?
Name your Price!
Your data is not only being used to sell you stuff, but it’s also part of many performance reports and predictive analysis. Companies need to know how their products are performing among a certain demographic and learn from that information to launch more accurate campaigns based on their objectives. Your online data is part of that equation, but once it’s stored on a website it’s no longer private data, right?
Then, how much is your data worth?
That’s pretty hard to tell because big companies won’t reveal that information to the public. For that reason, Mark R. Warner and Josh Hawley, two senators from the United States, recently proposed a new law that would force giants like Google, Facebook, and others to disclose the value of user data to their customers and to financial regulators. Additionally, they would have to give users the right to delete their data from the database.
It’s a noble cause, but can it actually turn into a reality? For now, all we can do is guess.
It’s Not That Simple
A law to empower users and add more transparency to the digital landscape seems like a good idea on the surface, but we’re gonna need a more detailed plan of action to actually make it work. The problem with the proposed legislation is that it doesn’t have a specific solution for how to estimate the value of a user’s data, leaving that problem to third parties.
A method to calculate this value would have to take into account not only the basic information that we share when we register but also all the mundane activities that big companies have on us. Our search history, our Facebook likes and reactions, our website retention and click-through rates. It’s a lot more complicated than it seems.
Some users have tried to estimate the value of their own data, but it’s all based on conjectures without any solid process to support their claims. And even if we could arrive at a clear number, I’m not so sure that it would make all of our privacy problems disappear. On the contrary, people might be more inclined to sell their data for a quick buck without considering the long term consequences to their own rights.
Targeted ads are just a surface problem. A lot of users don’t mind them. The real problem is what’s happening behind the scenes. When companies use your information to predict your behavior they’re not adapting themselves to you, they’re adapting you to them. After all, they have the final say of what you see on your feed and can influence your activity towards the most profitable outcome.
Final Thoughts
As it is, the legislation in question doesn’t seem to have enough to sustain a long term solution against the control of big companies over our data. Furthermore, I don’t believe that having a consensus on the numeric value of said data will make much of a difference either. That won’t reflect the power that our information has to predict our actions or influence us in the future.
That’s why I think we’re looking at it from the wrong angle. It’s not only about quantity but also about quality value. The data that we share has an effect on our lives and also on other people’s lives who are close to us.
We may have not arrived at a practical solution just yet, but awareness is the first step. Let’s focus our efforts on the problem and how it affects our privacy instead of how much it would cost to sell our problems away. Those are just my two cents on the matter.
What do you think? I would like to hear your thoughts!
Want to know more about us?
🔥 Check out our Website for updates!
🗨️ Join our Telegram Group.
📢 Give us a shout-out on Facebook. | https://medium.com/online-io-blockchain-technologies/how-much-is-your-data-worth-40d72d692d45 | ['Tyler B.'] | 2019-10-15 17:26:40.197000+00:00 | ['Privacy', 'Data', 'Google', 'Facebook', 'Tech'] | Title Much Data WorthContent Every time register new website free give name email date birth Sometimes address you’re shopping online time interest you’re hanging social medium might seem like much u website company data valuable asset know live like you’ve bought past sell product specifically tailored Facebook making average 705 ad revenue daily user 241 billion monthly active user Washington Post 2019 really registering free Name Price data used sell stuff it’s also part many performance report predictive analysis Companies need know product performing among certain demographic learn information launch accurate campaign based objective online data part equation it’s stored website it’s longer private data right much data worth That’s pretty hard tell big company won’t reveal information public reason Mark R Warner Josh Hawley two senator United States recently new law would force giant like Google Facebook others disclose value data customer financial regulator Additionally would give user right delete data database It’s noble cause actually turn reality guess It’s Simple law empower user add transparency digital landscape seems like good idea surface we’re gonna need detailed plan action actually make work problem proposed legislation doesn’t specific solution estimate value user’s data leaving problem third party method calculate value would take account basic information share register also mundane activity big company u search history Facebook like reaction website retention clickthrough rate It’s lot complicated seems user tried estimate value data it’s based conjecture without solid process support claim even could arrive clear number I’m sure would make privacy problem disappear contrary people might inclined sell data quick buck without considering long term consequence right Targeted ad surface problem lot user don’t mind real problem what’s happening behind scene company use information predict behavior they’re adapting they’re adapting final say see feed influence activity towards profitable outcome Final Thoughts legislation question doesn’t seem enough sustain long term solution control big company data Furthermore don’t believe consensus numeric value said data make much difference either won’t reflect power information predict action influence u future That’s think we’re looking wrong angle It’s quantity also quality value data share effect life also people’s life close u may arrived practical solution yet awareness first step Let’s focus effort problem affect privacy instead much would cost sell problem away two cent matter think would like hear thought Want know u 🔥 Check Website update 🗨️ Join Telegram Group 📢 Give u shoutout FacebookTags Privacy Data Google Facebook Tech |
3,542 | What is CICD? Where is it in 2020? | CICD is a development methodology which has become more important over time. In today’s software driven world, development teams are tasked with delivering applications quickly, consistently, and error-free: every single time.
While the challenges are plentiful, CI/CD is simple at its core.
For many organisations, achieving true continuous delivery is near impossible. Development teams are quickly getting more agile while the rest of the organisation struggles to adapt.
What Is CICD
CICD is an acronym for continuous integration (and) continuous delivery.
The CI portion reflects a consistent and automated way to build, package, and test applications. A consistent process here allows teams to commit code changes more frequently, encouraging better collaboration and better software.
On the flip side, continuous delivery automates the process of delivering an application to selected infrastructure environments. As teams develop in any number of environments (e.g. dev, test), CD makes sure that there’s an automated way to push changes through.
If you’ve heard of the following companies:
Jenkins
Gitlab
CircleCI
TravisCI
Then you’re probably a little aware of CICD.
Photo by Arif Riyanto on Unsplash
The Benefits of CICD
CICD improves efficiency and deployment times across the DevOps board — having originally been designed to increase the speed of software delivery. According to this DZone report, three-quarters of DevOps respondents have benefitted: not to mention the shortened development cycle time and the increase in release frequency.
A 75% success rate is very, very good.
What the DevOps Market Feels
Small to Medium sized Enterprises have begun to ramp up their investment in CICD over the last three years and are starting to compete with their larger peers.
According to DZone’s 2020 Study on CICD, Jenkins remains the dominant CICD platform, but GitLab has been gaining ground over the past couple of years, not to mention CircleCI.
At each stage of the CICD pipeline, the report also indicated that the majority of developers said they have automation built into the process to test code and deploy it to the next stage.
Now despite the importance of automation being built into the CICD pipeline, it’s still possible for teams to get lazy and to rely too heavily on the automation.
As your team’s responsibilities shift and new tasks arise, it is easy to automate processes too soon, just for the sake of time.
Automating poorly designed processes may save time in the short term, but in the long term, it can swell into a major bottleneck that is difficult to fix.
In doing this, teams have to be mindful to properly resolve process bottlenecks before automating and if anything does arise, they need to strip it out and fix it fully.
Moreover, developers should audit their automated protocols regularly to ensure they maintain accuracy, while also testing current processes for efficacy.
This is all part of the problem which takes time to resolve, but the effort is worth it.
Photo by Christopher Gower on Unsplash
The future in CDaas?
We’ve seen the benefits of CICD, but the DZone report highlighted that almost 45% of respondents had environments hosted on site.
Now an emerging solution for organisations is to leverage micro-services and containers to allow customer facing applications to scale.
For this, Continuous Delivery-as-a-service (CDaas) is seen as an emerging solution, with almost 50% of those respondents considering moving across.
I’d be interested to hear from any users of CDaas and their experiences thus far. | https://towardsdatascience.com/what-is-cicd-where-is-it-in-2020-c3298c2802ff | ['Mohammad Ahmad'] | 2020-07-27 15:20:32.490000+00:00 | ['Software Development', 'Coding', 'Artificial Intelligence', 'Programming', 'Python'] | Title CICD 2020Content CICD development methodology become important time today’s software driven world development team tasked delivering application quickly consistently errorfree every single time challenge plentiful CICD simple core many organisation achieving true continuous delivery near impossible Development team quickly getting agile rest organisation struggle adapt CICD CICD acronym continuous integration continuous delivery CI portion reflects consistent automated way build package test application consistent process allows team commit code change frequently encouraging better collaboration software flip side continuous delivery automates process delivering application selected infrastructure environment team develop number environment eg dev test CD make sure there’s automated way push change you’re heard following company Jenkins Gitlab CircleCI TravisCI you’re probably little aware CICD Photo Arif Riyanto Unsplash Benefits CICD CICD improves efficiency deployment time across DevOps board — original originally designed increase speed software delivery According DZone report threequarters respondent DevOps benefitted mention shortened development cycle time increase release frequency 75 success rate good DevOps Market Feels Small Medium sized Enterprises begun ramp investment CICD last three year starting compete larger peer According DZones 2020 Study CICD Jenkins remains dominant CICD platform GitLab gaining ground past couple year mention CircleCI stage CICD pipeline report also indicated majority developer said automation built process test code deploy next stage despite importance automation built CICD pipeline it’s still possible team get lazy rely heavily automation team’s responsibility shift new task arise easy automate process soon sake time Automating poorly designed process may save time short term long term swell major bottleneck difficult fix team mindful properly resolve process bottleneck automating anything arise need strip fix fully Moreover developer audit automated protocol regularly ensure maintain accuracy also testing current process efficacy part problem take time resolve effort worth Photo Christopher Gower Unsplash future CDaas We’ve seen benefit CICD DZone report highlighted almost 45 respondent environment hosted site emerging solution organisation leverage microservices container allow customer facing application scale Continuous Deliveryasaservice CDaas seen emerging solution almost 50 respondent considering moving across I’d interested hear user CDaas experience thus farTags Software Development Coding Artificial Intelligence Programming Python |
3,543 | Plotting Equations with Python. This article is going to cover plotting… | This article is going to cover plotting basic equations in Python! We are going to look at a few different examples, and then I will provide the code to create the plots through Google Colab!
Goals:
Learn to:
1. Create a vector array
2. Manipulate vector to match an equation
3. Create beautiful plots with a title, axis labels, and grid
y = x²
Let's go ahead and start by working on one of the simplest and most common equations! y = x². To do this, we are going to be doing a few things but first of all, we need to cover a few concepts.
Modules in Python
A module allows you to logically organize your Python code. We will be using 2 Modules: Matplotlib.pyplot and Numpy.
NumPy
Numpy is the fundamental package for scientific computing in Python. It is a Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more.
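For instance, a minimal sketch of the first two goals (the exact range and number of points are arbitrary choices, not the author’s):

```python
import numpy as np

# Create a vector array of 100 x values between -10 and 10
x = np.linspace(-10, 10, 100)

# Manipulate the vector to match the equation y = x²
y = x ** 2
```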
Matplotlib.pyplot
Matplotlib.pyplot is a collection of command style functions that make matplotlib work like MATLAB. Each pyplot function makes some change to a figure: e.g., creates a figure, creates a plotting area in a figure, plots some lines in a plotting area, decorates the plot with labels, etc. | https://medium.com/future-vision/plotting-equations-in-python-d0edd9f088c8 | ['Elliott Saslow'] | 2018-11-02 19:18:08.441000+00:00 | ['Math', 'Data Science', 'Matplotlib', 'Developer', 'Science'] | Title Plotting Equations Python article going cover plotting…Content article going cover plotting basic equation python going look different example provide code create plot Google Colab Goals Learn Create vector array Manipulate vector match equation Create beautiful plot title axis label grid x² Lets go ahead start working one simplest common equation x² going thing first need cover concept Modules Python module allows logically organize Python code using 2 Modules Matplotlibpyplot Numpy NumPy Numpy fundamental package scientific computing Python Python library provides multidimensional array object various derived object masked array matrix assortment routine fast operation array including mathematical logical shape manipulation sorting selecting IO discrete Fourier transforms basic linear algebra basic statistical operation random simulation much Matplotlibpyplot Matplotlibpyplot collection command style function make matplotlib work like MATLAB pyplot function make change figure eg creates figure creates plotting area figure plot line plotting area decorates plot label etcTags Math Data Science Matplotlib Developer Science |
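Putting the two modules together, a minimal version of the plot this article builds toward (with a title, axis labels, and a grid) could look like the following; the styling choices are mine rather than the author’s:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 100)
y = x ** 2

plt.figure(figsize=(8, 5))
plt.plot(x, y, label="y = x²")
plt.title("Plot of y = x²")
plt.xlabel("x")
plt.ylabel("y")
plt.grid(True)
plt.legend()
plt.show()
```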
3,544 | Anderson Cooper’s “Obese Turtle” Rant Is a Reminder That Fat People Are Punchlines to the Right and the Left | At this point, Anderson Cooper’s recent viral video has been seen more than 10 million times. In it, he talks about Trump’s absurd claims of election cheating by likening the president to… an obese turtle:
“I don’t think we’ve ever seen anything like this from a president of the United States. And, I think, like Jake said, it is sad, and it is truly pathetic. And of course it is dangerous, and of course it will go to courts, but you’ll notice the president did not have any evidence presented at all. Nothing. “That is the president of the United States. That is the most powerful person in the world, and we see him, like an obese turtle on his back, flailing in the hot sun, realizing his time is over. But he just hasn’t accepted it, and he wants to take everybody down with him, including this country.”
Ever since Anderson made those comments, people have been absolutely giddy at the image. Social media is filled with people cheering on Gloria Vanderbilt’s journalist son, some even joking that he’s now a poet laureate.
People really seem to love it, and that’s putting it mildly. Even Vogue published a story calling Cooper’s words “the Perfect Response to Trump’s Shocking White House Speech.”
So, here’s the problem with that. Once again, the Left, which supposedly gives a damn about human rights, is using a fat body as an insult, and those of us who despise Trump and are obese ourselves? We’re supposed to pretend it’s no big deal. That it’s even funny.
But it’s not fucking funny.
We see this attitude time and time again. You on the Left care so much about human rights, but you don’t give a damn about those among you with eating disorders or larger bodies. You use us as punchlines. You use us to make weak or effortless arguments.
Fat people aren’t stupid, though.
When you mock Trump’s body and make jokes about “obese turtles,” we know exactly what you’re saying.
Hahaha.
To you, and folks like Anderson Cooper, obesity equals laziness, gluttony, and self-destruction. Sometimes, you use it to denote mental health issues, but you almost always liken it to something we the irresponsible fatties have done to ourselves.
It’s just one more extension of diet culture and the ridiculous notion that an individual’s bodyweight has some direct correlation to their worth. It’s arrogant and shitty. It’s not even rational. But you do it, and most of you reading this are going to keep making fat jokes and using fat as an insult because you really don’t get it.
And honestly, I don’t believe that most folks even care. | https://medium.com/honestly-yours/anderson-coopers-obese-turtle-rant-is-a-reminder-that-fat-people-are-punchlines-to-the-right-3815060e492a | ['Shannon Ashley'] | 2020-11-06 17:37:14.896000+00:00 | ['Social Media', 'Culture', 'Mental Health', 'Society', 'Politics'] | Title Anderson Cooper’s “Obese Turtle” Rant Reminder Fat People Punchlines Right LeftContent point Anderson Cooper’s recent viral video seen 10 million time talk Trump’s absurd claim election cheating likening president to… obese turtle “I don’t think we’ve ever seen anything like president United States think like Jake said sad truly pathetic course dangerous course go court you’ll notice president evidence presented Nothing “That president United States powerful person world see like obese turtle back flailing hot sun realizing time hasn’t accepted want take everybody including country” Ever since Anderson made comment people absolutely giddy image Social medium filled people cheering Gloria Vanderbilt’s journalist son even joking he’s poet laureate People really seem love that’s putting mildly Even Vogue published story calling Cooper’s word “the Perfect Response Trump’s Shocking White House Speech” here’s problem Left supposedly give damn human right using fat body insult u despise Trump obese We’re supposed pretend it’s big deal it’s even funny it’s fucking funny see attitude time time Left care much human right don’t give damn among eating disorder larger body use u punchlines use u make weak effortless argument Fat people aren’t stupid though mock Trump’s body make joke “obese turtles” know exactly you’re saying Hahaha folk like Anderson Cooper obesity equal laziness gluttony selfdestruction Sometimes use denote mental health issue almost always liken something irresponsible fatty done It’s one extension diet culture ridiculous notion individual’s bodyweight direct correlation worth It’s arrogant shitty It’s even rational reading going keep making fat joke using fat insult really don’t get honestly don’t believe folk even careTags Social Media Culture Mental Health Society Politics |
3,545 | Using Differentiation to Stand Out From The Competition | What is differentiation?
Product differentiation is a marketing management strategy that aims to distinguish or differentiate a company’s products or services from the alternatives offered by competitors.
Businesses communicate their unique and distinctive benefit through the marketing strategy to make it more attractive to a targeted group of customers.
Also referred to as a point of difference, providing customers with a unique and distinct benefit can create a competitive advantage in that marketplace.
It is a powerful strategy when a target group of customers is not price-sensitive (a price increase will not reduce demand), when a market is competitive and saturated with options, or when a group of customers have specific needs that are under-served.
“Point of difference — even seemingly contradictory ones — can be powerful. Strong, favourable, unique associations that distinguish a brand from others in the same frame of reference are fundamental to successful brand positioning.” (Keller, Sternthal & Tybout, 2002)
Objectives of differentiation
The underlying objective of differentiation is to make your brand different to and more attractive than your competitors. Develop a position in the market that potential customers see as unique and valuable.
Perceived differentiation is subjective from customer to customer, from brand to brand. It is marketing’s job to alter this perception and customers’ evaluation of the benefits of one brand’s offering compared to another.
When a product or service becomes more unique, it will attract fewer comparisons with competitors, and it moves away from competing on price.
This uniqueness helps to achieve a competitive advantage in a crowded marketplace.
Product differentiation helps develop a strong value proposition, making a product or service attractive to a target market. | https://medium.com/illumination-curated/using-differentiation-to-stand-out-from-the-competition-dacc9f2a1777 | ['Daniel Hopper'] | 2020-11-04 21:34:09.188000+00:00 | ['Strategy', 'Marketing Strategies', 'Marketing', 'Business', 'Startup'] | Title Using Differentiation Stand CompetitionContent differentiation Product differentiation marketing management strategy aim distinguish differentiate company’s product service alternative offered competitor Businesses communicate unique distinctive benefit marketing strategy make attractive targeted group customer Also referred point difference providing customer unique distinct benefit create competitive advantage marketplace powerful strategy target group customer pricesensitive price increase reduce demand market competitive saturated option group customer specific need underserved “Point difference — even seemingly contradictory one — powerful Strong favourable unique association distinguish brand others frame reference fundamental successful brand positioning” Keller Sternthal Tybout 2002 Objectives differentiation underlying objective differentiation make brand different attractive competitor Develop position market potential customer see unique valuable Perceived differentiation subjective customer customer brand brand marketing’s job alter perception customers’ evaluation benefit one brand’s offering compared another product service becomes unique attract fewer comparison competitor move away competing price uniqueness help achieve competitive advantage crowded marketplace Product differentiation help develop strong value proposition making product service attractive target marketTags Strategy Marketing Strategies Marketing Business Startup |
3,546 | Creatives Have a Psychology Problem | The entire process of ‘transferring’ belief, either through marketing techniques or through works of art, carries with it a lot of psychological weight.
This ‘weight’ then leads many creatives to being extremely insecure.
The Forgotten Religiosity of Creative Work
In ages past, creatives were extremely valuable members of society. They were (essentially) aristocrats. They were the priests, the Levites, the craftsman, the healers, the Nazarenes, the Samurai, the chefs, the architects, the painters, the harp-players, the readers of stars, the prophets, the narrators and the valiant warriors.
What that did, was not only to encourage select members of society to take risks and create / venture-into new frontiers of human achievement and knowledge, but also to ‘honour’ them regardless of result or output.
What that achieved, was to ‘lift’ the psychological weight off these creatives. They were ‘already affirmed’ in their pursuits. Societies that ‘punished’ creatives (for being ‘disruptors’ of the social order, etc), suffered in the long run. They were later either overtaken/enslaved either by stronger nations or impoverished by more ‘cunning’ (i.e. creative) nations that produced a lot more ‘creative works’ that were pleasant enough to be bought by the many… Either way, the incentives were there for creatives to keep ‘creating’; and religion/cultural-pride was always there to inspire and encourage the creatives.
The Current Psychology of Creative Work
At present, advertising, profits and sales run the world of creatives. Creative work is no longer measured on how ‘religiously true’ or ‘culturally inspiring’ it is. It is measured, almost purely, on the whims of a capitalist economy.
What that means, is that a ‘good’ creative, is by design a profitable creative. A ‘bad’ creative, is by design an unprofitable creative. (An odd side-effect of this, is observing how this totalitarian belief in profitability has affected arguably the most profitable creative work in history: that of engineering. Many engineering companies systematically cut their RnD budget as soon as they start having financial struggles. Which is in fact rather odd, considering that company ‘losses’ may not necessarily come from a negative loss/profit result of a corporate division but may in fact come from a company over-provisioning resources within positive loss/profit result divisions).
What this implies, is that creatives have to look at profitability alone, as the measure of success. That approach may very well have some terrible side-effects of creative-work ‘quality’… but that would be another discussion… Sadly, creatives, almost universally are seldom ‘in charge’ of the production and sales of their works. What this means is that, even if they ‘wanted to’, they cannot control the profitability of their works. This results in an even sadder side-effect, that of creatives relying on ‘production/sales’ experts (i.e. venture capitalists, art dealers or movie/music producers) on telling them whether their ‘work’ is profitable or not.
Current Creatives are thus at the mercy of BOTH the economic system of capitalism and the various ‘captains’ within that system that tell them 1) if their work is ‘profitable’ or not, 2) if their work is ‘good’ or not. This, more than the already existing problem of ‘insecurity’, is an even GREATER psychological weight to overcome.
Overcoming the Psychological Weight
The desires to ‘be great’ and to ‘please everyone’ are incredibly self-contradictory. For one to ‘be great’ (a demi-god in their chosen profession), one would have to not only step on a few toes, but also annoy a lot of people. No one wants to be made to feel small.
All actors/actresses/comedians/musicians/sports(wo)men and entrepreneurs worth their salt, want to be ‘great’! That desire is not going to be welcomed by others either competing for the same goal or those aiming for power and wealth.
The first step to overcoming both the insecurity and ‘credibility’ problem then, is realising that one is alone in their creative goals.
We have modern technology to thank for the second step! Arguably the most radical step: build your own system of sustainability. Build a crop and chickens farm. Build your own shed, table, etc… None of these will make you ‘profitable’, but it will sure provide a secure place to ‘put your money’ once you make it. It will also (perhaps, most importantly!) feed, shelter, protect and shield your creative genius from the sharks within the system of capitalism.
The third (and most difficult) step is that of a creative finding an independent platform that allows for full expression of one’s creative abilities, unencumbered by ‘feedback’ from gate-keepers of the capitalist-creative market economy. The only feedback that is objective and meaningful for a creative, is that coming from consumers of creative-products. The idea that some expert will know 1) what consumers want, 2) what is ‘good’ creative work, 3) what will consumers buy at what price, 4) what platforms these consumers exist in, 5) how much money will be made, etc. is extremely preposterous and unrealistic.
Nobody Knows Anything.
If anything, the creatives know a lot more! (As they tend to be in touch with both the ‘work’ and ‘the people’). They should allow themselves to create with confidence. They should be crazy. They should be insecure. They should believe that their work is credible, until an end-customer says otherwise.
The fourth step, (advocated by the likes of Peter Thiel, Elon Musk and Tim Ferriss) is that of aiming for a radical enough (and different!) but clear goal. A goal that is inspiring, self-nourishing and pushes one to be extremely productive. A goal that allows one to escape the clutches of competition and capitalism. A goal that feels like there is only one person in the universe who was born to achieve it.
The fifth step, (advocated by me), is that of having recursive targets. Prominent psychologists like Jordan Peterson and many others seem to advocate for some form of ‘low aiming’ as a way to build momentum towards progress and accruing psychological health. Apparently ‘big goals’ are extremely discouraging if not achieved! Even if this is true, THIS is a big mistake! Firstly, positive ‘progress’ is not an objective way to measure success. A product, for example, that makes a lot of sales, like Microsoft Windows, is not necessarily the best product (compared to its expensive counterparts, e.g. macOS, or free counterparts, e.g. Ubuntu OS). Secondly, negative ‘progress’ often contains a lot more insight that positive progress. A wealthy and technologically-savvy venture capitalist saying ‘no’ to a product proposal is worth a lot more than a high-school friend saying ‘yes’ to whether he likes the product or not.
The idea of ‘recursive’ targets is quite simple: it is about finding the least costly, and the least difficult way to build a useful concept that ‘demonstrates’ the higher (i.e. more radical) goal. This is decidedly NOT a prototype! It is a finished (and polished) concept. It is only smaller. Yet is it daring. It is different. To stay both profitable and productive, the creative would have to build a lot of these ‘products’. But these are merely means to an end: the higher goal. They are the means to collecting insights, cash-flow, feedback and sustainability from the ultimate ‘gate-keeper’: the customer!
Be like Elon, aim BIG. Don’t be like Elon, aim for the (recursed) SMALLer products. Don’t take J. Peterson’s advice of ‘aiming LOW’…
Those, are the tricks to lifting the psychological weight off being a creative. | https://lesdikgole.medium.com/creatives-have-a-psychology-problem-d420998d0f4b | ['Lesang Dikgole'] | 2019-11-24 18:12:30.296000+00:00 | ['Creativity', 'Comedy', 'Art', 'Psychology', 'Jordan Peterson'] | Title Creatives Psychology ProblemContent entire process ‘transferring’ belief either marketing technique work art carry lot psychological weight ‘weight’ lead many creatives extremely insecure Forgotten Religiosity Creative Work age past creatives extremely valuable member society essentially aristocrat priest Levites craftsman healer Nazarenes Samurai chef architect painter harpplayers reader star prophet narrator valiant warrior encourage select member society take risk create ventureinto new frontier human achievement knowledge also ‘honour’ regardless result output achieved ‘lift’ psychological weight creatives ‘already affirmed’ pursuit Societies ‘punished’ creatives ‘disruptors’ social order etc suffered long run later either overtakenenslaved either stronger nation impoverished ‘cunning’ ie creative nation produced lot ‘creative works’ pleasant enough bought many… Either way incentive creatives keep ‘creating’ religionculturalpride always inspire encourage creatives Current Psychology Creative Work present advertising profit sale run world creatives Creative work longer measured ‘religiously true’ ‘culturally inspiring’ measured almost purely whim capitalist economy mean ‘good’ creative design profitable creative ‘bad’ creative design unprofitable creative odd sideeffect observing totalitarian belief profitability affected arguably profitable creative work history engineering Many engineering company systematically cut RnD budget soon start financial struggle fact rather odd considering company ‘losses’ may necessarily come negative lossprofit result corporate division may fact come company overprovisioning resource within positive lossprofit result division implies creatives look profitability alone measure success approach may well terrible sideeffects creativework ‘quality’… would another discussion… Sadly creatives almost universally seldom ‘in charge’ production sale work mean even ‘wanted to’ cannot control profitability work result even sadder sideeffect creatives relying ‘productionsales’ expert ie venture capitalist art dealer moviemusic producer telling whether ‘work’ profitable Current Creatives thus mercy economic system capitalism various ‘captains’ within system tell 1 work ‘profitable’ 2 work ‘good’ already existing problem ‘insecurity’ even GREATER psychological weight overcome Overcoming Psychological Weight desire ‘be great’ ‘please everyone’ incredibly selfcontradictory one ‘be great’ demigod chosen profession one would step toe also annoy lot people one want made feel small actorsactressescomediansmusicianssportswomen entrepreneur worth salt want ‘great’ desire going welcomed others either competing goal aiming power wealth first step overcoming insecurity ‘credibility’ problem realising one alone creative goal modern technology thank second step Arguably radical step build system sustainability Build crop chicken farm Build shed table etc… None make ‘profitable’ sure provide secure place ‘put money’ make also perhaps importantly feed shelter protect shield creative genius shark within system capitalism third difficult step creative finding independent platform allows full expression one’s creative ability unencumbered ‘feedback’ gatekeeper capitalistcreative market economy 
feedback objective meaningful creative coming consumer creativeproducts idea expert know 1 consumer want 2 ‘good’ creative work 3 consumer buy price 4 platform consumer exist 5 much money made etc extremely preposterous unrealistic Nobody Knows Anything anything creatives know lot tend touch ‘work’ ‘the people’ allow create confidence crazy insecure believe work credible endcustomer say otherwise fourth step advocated like Peter Thiel Elon Musk Tim Ferriss aiming radical enough different clear goal goal inspiring selfnourishing push one extremely productive goal allows one escape clutch competition capitalism goal feel like one person universe born achieve fifth step advocated recursive target Prominent psychologist like Jordan Peterson many others seem advocate form ‘low aiming’ way build momentum towards progress accruing psychological health Apparently ‘big goals’ extremely discouraging achieved Even true big mistake Firstly positive ‘progress’ objective way measure success product example make lot sale like Microsoft Windows necessarily best product compared expensive counterpart eg macOS free counterpart eg Ubuntu OS Secondly negative ‘progress’ often contains lot insight positive progress wealthy technologicallysavvy venture capitalist saying ‘no’ product proposal worth lot highschool friend saying ‘yes’ whether like product idea ‘recursive’ target quite simple finding least costly least difficult way build useful concept ‘demonstrates’ higher ie radical goal decidedly prototype finished polished concept smaller Yet daring different stay profitable productive creative would build lot ‘products’ merely mean end higher goal mean collecting insight cashflow feedback sustainability ultimate ‘gatekeeper’ customer like Elon aim BIG Don’t like Elon aim recursed SMALLer product Don’t take J Peterson’s advice ‘aiming LOW’… trick lifting psychological weight creativeTags Creativity Comedy Art Psychology Jordan Peterson |
3,547 | How China used Artificial Intelligence to combat Covid-19 | 2020 will go down in the history books as the year that witnessed a one-of-its-kind global crisis due to the Covid-19 pandemic. Of course, Covid-19 is not the first pandemic on a worldwide scale; there have been plenty of such outbreaks in recorded history that affected different parts of the globe. But Covid-19 stands apart because of today's high volume of international travel: it spread worldwide in no time, resulting in complete lockdowns in most countries. At the time of writing this article, there have been 39 million Covid-19 cases globally and 1.1 million deaths within just 7–8 months.
Johns Hopkins Covid-19 resource center (Source)
Covid-19 is also different from earlier pandemics because it's the first time government agencies and health organizations worldwide are using the emerging technologies of Big Data and Artificial Intelligence to combat the disease. AI has long been portrayed as a technology with the potential to change the world we live in, and this pandemic was a litmus test for AI to prove that promise.
A fascinating case study of the use of AI to fight Covid-19 comes from China, where the virus originated. China had an initial surge of Covid-19 cases but managed to control the spread within a few months, while the rest of the world is still struggling with growing cases. If you observe the graph below, you will see a flat line for China, whereas the USA and India, which also have large populations, are still seeing an exponential rise in cases.
Covid-19 cases in China vs. India vs. the USA (source)
Had Covid-19 or a similar pandemic happened ten years ago, the above graph would have looked very different for China. Here is my previous article that supports this theory.
What did China do differently from other countries to combat Covid-19?
To fight Covid-19, China relentlessly made use of AI-enabled technologies in every possible way to control the spread, unlike most other countries. The main focus areas for artificial intelligence were, first, mass surveillance to prevent the spread and, second, healthcare to provide fast diagnosis and effective treatment. This should not come as a surprise, because China is already one of the leading markets for artificial intelligence globally. As per one report, China's AI market is forecast to reach 11.9 billion USD by 2023.
So let’s take a close look at the various measures China took with artificial intelligence.
1. Mass Surveillance & Contact Tracing
China is known to employ mass surveillance on its citizens without thinking twice about people's data privacy. The country has an estimated 200 million surveillance cameras powered by AI-based facial recognition technology to closely track its citizens. Such extensive use of AI for controlling citizens has always attracted global criticism of China.
China’s existing Mass Surveillance (source)
But when Covid-19 hit China, its already-established mass surveillance system proved very effective, since the government could use it to track patients' travel histories and predict which other people who had come into contact with a patient might be at risk.
Contact Tracing App (Source)
China not only gathered people's tracking data, it also used this information to alert people to potential Covid-19 risk through contact tracing mobile apps designed with the help of companies like Alibaba and Tencent. These apps assign a color code to each user based on their risk profile. People with no risk are assigned green, whereas people with relevant travel history or close proximity to other patients are given yellow or red depending on the severity of the risk. Yellow indicates self-quarantine, and people with a red code are required to go to the hospital.
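To make the color-assignment idea concrete, here is a minimal illustrative sketch in Python; the signal names and rules are assumptions for illustration only, not the actual logic used by the Chinese health code apps.

# Illustrative only: a simplified risk-to-color rule in the spirit of the
# health code apps described above (not the real Alipay/Tencent logic).
def assign_health_code(confirmed_case: bool, close_contact: bool,
                       visited_hotspot: bool) -> str:
    """Return a traffic-light code from a user's (assumed) risk signals."""
    if confirmed_case:
        return "red"     # must go to the hospital
    if close_contact or visited_hotspot:
        return "yellow"  # self-quarantine
    return "green"       # free to use public places and services

print(assign_health_code(confirmed_case=False, close_contact=False,
                         visited_hotspot=True))  # -> yellow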
China’s Health Code (Source)
This health code has now become the benchmark China uses to decide whether a citizen may use public places and services. Health code scanners installed in public places like offices, subways, railway stations, and airports screen out people with a yellow or red code. China has also imposed rules allowing people to drive on roads only if their health code is green.
Man Scanning Health Code in Subway (Source)
Another very useful app for Chinese people was Baidu Maps, which gave real-time information on high-risk places so that people could stay away from those regions. It combined GPS location data with medical data from health agencies to inform users of their exact distance from Covid-19 hotspots so they could avoid them while traveling.
Baidu Map showing Covid-19 Hotspots (Source)
Wearing masks has become the norm in 2020; masks are mandatory both for one's own safety and to avoid infecting others. China's AI companies like Baidu, Megvii, SenseTime, and Hanwang Technologies helped the government put up facial recognition surveillance capable of recognizing people with or without masks. The system immediately raises a security alert if it detects a person not wearing a mask. These systems are also equipped with thermal scanners to raise alerts for people with high temperatures in public areas. Baidu's surveillance in Beijing's Qinghe Railway Station detected 190 suspected cases within a month of its installation in late January.
Facial recognition with a thermal scan at Chinese Railway Station (Source)
The most comprehensive data source for Chinese government agencies and healthcare technology companies comes from the mobile app WeChat. Chinese tech giant Tencent developed the app, which now has around 1.2 billion users; Tencent is Asia's most valuable company, with a market capitalization of 300 billion USD. WeChat was one of the primary sources behind all Covid-19 contact tracing, and when combined with mass surveillance data, it made contact tracing an easy task.
“WeChat and mass surveillance together provided many grounds for Computer Vision(CV) and Natural Language Processing(NLP) experts to build a paradise of revolutionary Contact Tracing applications.”
2. Healthcare Services
The major challenge healthcare workers faced was the influx of Covid-19 cases that started coming in for diagnosis in the early days of the outbreak in China. When lung CT scans became a parameter for initial diagnosis ahead of PCR test confirmation, it was a nightmare for radiologists, who had to manually go through thousands of scans to confirm diagnoses. Quick diagnosis and early medication or quarantine were essential to hinder the spread of Covid-19, but the diagnosis process itself became a bottleneck at that time.
Soon, Chinese AI companies like Alibaba and Yitu Technologies stepped in with AI-assisted diagnosis of CT scan images to automate the process with minimal radiologist intervention.
AI-assisted CT scan diagnosis for Covid-19 (Source)
These systems were built using deep learning and proved to be fast and accurate. The use of artificial intelligence to evaluate CT scans was a big turning point for China, as it sped up diagnosis. Alibaba's diagnostic system could provide a Covid-19 diagnosis within 20 seconds at 99.6% accuracy. By March 2020, over 170 Chinese hospitals had adopted the system, covering 340,000 potential patients.
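The article does not describe the internals of these diagnostic systems, but to illustrate the general approach (a convolutional network classifying CT images), a toy sketch might look like the following; the architecture, input size, and single "Covid-19 pneumonia" label are all assumptions, not Alibaba's or Yitu's actual models.

# Toy sketch of a CT-scan classifier; NOT the production systems described above.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),        # one grayscale CT slice (assumed size)
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # probability of Covid-19 pneumonia
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])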
Tencent AI Lab, on the other hand, worked with Chinese healthcare scientists to develop a deep learning model that predicts which Covid-19 patients are likely to become critically, even fatally, ill. They made this tool available online so that high-risk patients could be given priority treatment well in advance.
Covid-19 is a novel virus, meaning nothing was known about it by medical researchers when it surfaced. As soon as it came to light, researchers worldwide started studying the virus's genome to create a diagnosis process, work that could also open the gates for vaccination. But such scientific research is not easy and requires extensive resources.
Both Alibaba and Baidu have now made their proprietary AI algorithms available to the medical community to speed up research and diagnosis. Alibaba's LinearFold algorithm can reduce the time needed to analyze the coronavirus RNA structure from 55 minutes to just 27 seconds, which is useful for fast genome testing. Similarly, Baidu's open-sourced algorithm is 120 times faster than traditional approaches to genome studies.
In such a crisis, drones are often useful either to deliver supplies or to carry out surveillance. This time, autonomous vehicles have joined the mission as well: Chinese robotics companies like Baidu, Neolix, and Idriverplus lent out their self-driving vehicles to deliver medical equipment and food to hospitals.
Autonomous Vehicle disinfecting public place (Source)
Idriverplus autonomous vehicles were also used to spray disinfectant in public places and hospitals. Another company, Pudu Technology, which usually builds robots for the catering industry, deployed its robots to over 40 hospitals to support health workers.
Chinese companies are also catering to global demand for autonomous robots. For example, Gaussian Robotics claims its robots have been sold to over 20 countries during the current pandemic.
Conclusion
Indeed, China has left no stone unturned in using artificial intelligence to maximum advantage in its fight against Covid-19, and some might ask why other countries were unable to do the same. One of the main reasons China was so successful was its relentless use of existing AI-enabled mass surveillance systems, which pay no regard to people's data privacy. It is quite normal for China to use facial recognition to track citizens, but data privacy is a serious issue in other parts of the world, where such surveillance systems cannot exist at such a large scale. Although artificial intelligence has been used in healthcare in many parts of the world, no other country could implement mass surveillance to restrict the spread of Covid-19 the way China did. Thus China pulled ahead in the race to control Covid-19 by leveraging AI.
Gain Access to Expert View — Subscribe to DDI Intel | https://medium.com/datadriveninvestor/how-china-used-artificial-intelligence-to-combat-covid-19-f5ebc1ef93d | ['Awais Bajwa'] | 2020-10-21 17:52:05.557000+00:00 | ['Deep Learning', 'Healthcare', 'AI', 'Computer Vision', 'China Startup'] | Title China used Artificial Intelligence combat Covid19Content 2020 go history book year witnessed one kind global crisis due Covid19 pandemic course Covid19 first pandemic worldwide scale plenty outbreak recorded history affected different part globe Covid19 stand apart due high international travel spread quickly time worldwide resulting complete lockdown country time writing article 39 Million Covid19 case globally 11 Million death within 7–8 month John Hopkins Covid19 resource center Source Covid19 also different earlier pandemic it’s first time government agency health organization worldwide using emerging technology Big Data Artificial Intelligence combat disease AI always portrayed technology potential change world live pandemic litmus test AI well prove promise fascinating case study use AI fight Covid19 come China source virus China initial surge Covid19 case soon could control spread within month rest world still struggling growing case observe graph see flat line China whereas give perspective USA India also high population still seeing exponential rise case Covid19 case China v India v USA source Covid19 similar pandemic happened ten year ago graph would different result China previous article support theory China differently country combat Covid19 fight Covid19 situation China relentlessly made use AIenabled technology possible way control spread unlike country main focus area artificial intelligence mass surveillance prevent spread secondly healthcare provide fast diagnosis effective treatment come surprise China already one leading market artificial intelligence globally per one report China’s AI market forecasted reach 119 Billion USD 2023 let’s take close look various measure China took artificial intelligence 1 Mass Surveillance Contact Tracing China known employ mass surveillance citizen without thinking twice people’s data privacy China estimated 200 million surveillance camera powered AIbased facial recognition technology closely track citizen extensive use AI controlling citizen always attracted global criticism China China’s existing Mass Surveillance source Covid19 hit China already established mass surveillance system proved efficient since government could use system track patient’s travel history predict people might possible risk came contact patient Contact Tracing App Source China gathered people’s tracking data also used information alert people potential Covid19 risk help contact tracing mobile apps designed help company like Alibaba Tencent app assigns color code user based risk profile People risk assigned green color whereas people travel history close proximity patient given yellow red based severity risk yellow color indicates selfquarantine people red color required go hospital China’s Health Code Source health code become benchmark China allow citizen use public place service Many health code scanner installed public place like office subway railway station airport screen people yellow red code China also imposed rule allow people drive road health code green Man Scanning Health Code Subway Source Another useful app Chinese people Baidu Map gave realtime information highrisk place people stay away region used data accessed GPS location medical data health agency 
inform user exact distance Covid19 hotspot avoid traveling Baidu Map showing Covid19 Hotspots Source Wearing mask become norm people 2020 mandatory one’s safety prevents infecting others wearing mask China’s AI company like Baidu Megvii SenseTime Hanwang Technologies helped government put facial recognition surveillance capable recognizing people without mask system immediately raise security alert detects person wearing mask system also equipped thermal scan raise alert people high temperature public area Baidu’s surveillance Beijing Qinghe Railway Station able detect 190 suspected case within month installation late January Facial recognition thermal scan Chinese Railway Station Source comprehensive data source Chinese government agency healthcare technology company come mobile app “weChat” Chinese tech giant Tencent Group developed mobile app around 12 billion user Tencent Asia’s valuable company market capitalization 300 billion “weChat” one primary source behind Covid19 contact tracing combined mass surveillance data made contact tracing easy task “WeChat mass surveillance together provided many ground Computer VisionCV Natural Language ProcessingNLP expert build paradise revolutionary Contact Tracing applications” 2 Healthcare Services major challenge healthcare worker faced influx high number Covid19 case started coming diagnosis early day China Lungs CT scan became parameter initial diagnosis PCR test confirmation nightmare radiologist radiologist manually go thousand scan people confirm diagnosis Quick diagnosis early medicationquarantine essential hinder spread Covid19 diagnosis process became bottleneck time Soon Chinese AI company like Alibaba Yitu Technologies stepped AIassisted diagnosis CT scan image automate process minimal radiologist intervention AIassisted CT scan diagnosis Covid19 Source system built using Deep Learning proved fast accurate use artificial intelligence evaluate CT scan big turning point China sped diagnosis Alibaba’s diagnostic system could provide diagnosis Covid19 within 20 second 996 accuracy March 2020 170 Chinese hospital adopted system 340000 potential patient hand Tencent AI lab worked Chinese healthcare scientist develop deep learning model predict critical illness Covid19 patient fateful made tool available online given high priority treatment well advance Covid19 novel virus meaning nothing known medical researcher surfaced soon came light researcher worldwide started study virus’s gene creating diagnosis process could also open gate vaccination scientific research easy requires extensive resource Alibaba Baidu made proprietary AI algorithm available medical fraternity speed research diagnosis process Alibaba’s LinearFold AI algorithm reduce time study Coronavirus RNA structure 55 minute 27 second useful fast genome testing Similarly Baidu’s opensourced AI algorithm also 120 time faster traditional approach genome study crisis drone often useful either provide supply carry surveillance time autonomous vehicle Chinese robotics company like Baidu Neolix Idriverplus also joined mission letting selfdriving vehicle supply medical equipment food hospital Autonomous Vehicle disinfecting public place Source Idriverplus autonomous vehicle also used spray disinfectant public place hospital sanitization Another company Pudu Technology usually build robot catering industry also deployed robot 40 hospital support health worker Chinese company also catering global demand autonomous robot example Gaussian Robotics claim robot sold 20 country current pandemic 
Conclusion Indeed China left stone unturned use artificial intelligence maximum advantage fight Covid19 might question country unsuccessful One main reason China successful relentless use existing AIenabled mass surveillance system consider people’s data privacy quite normal China use facial recognition track citizen data privacy serious issue part world surveillance system cannot exist large scale Although us artificial intelligence healthcare area many part world country could implement mass surveillance restrict Covid19 spread China Thus China went ahead race control Covid19 leveraging AI Gain Access Expert View — Subscribe DDI IntelTags Deep Learning Healthcare AI Computer Vision China Startup |
3,548 | Data Pipeline Architecture Optimization & Apache Airflow Implementation | Data pipelines are essential for companies looking to leverage their data to gather reliable business insights. Pipelines allow companies to consolidate, combine, and modify data originating from various sources and make it available for analysis and visualization. However, the numerous benefits that data pipelines provide depend on a company's ability to extract and aggregate data coming from different sources, and thus on the quality of its pipeline architecture choices.
One of TrackIt’s clients had implemented a big data pipeline running on AWS that needed to be optimized. The client was leveraging the big data pipeline to enable its data scientists to gain additional insights by exploiting data that originated from CSV files.
However, the company was running into certain architecture-related problems with its pipeline that needed to be fixed and sought our expertise to address these issues.
This article details the TrackIt team’s approach to optimizing the architecture of the data pipeline.
Initial Pipeline
How the initial pipeline worked:
The company’s CSV files were first added to an S3 bucket
Once the files were added to the S3 bucket, an AWS Glue job was automatically triggered to fetch the data from the CSV files and make it available to a Python Spark script, the next step in the pipeline (a minimal sketch of this trigger follows the list)
A Python Spark script modified the data to make it more suitable for use in Redshift Spectrum
The modified data was then stored in a new S3 bucket (now in the Parquet file format)
Once files were added to the new S3 bucket, another Glue job was triggered that made the data available for use by Redshift, an SQL database
Files from this S3 bucket were also replicated into another S3 bucket hosted on a different AWS region using AWS S3’s cross-region replication feature. They needed separate S3 buckets in each region because Redshift Spectrum, which was being used in both regions separately, requires the S3 bucket to be located in the same region.
An AWS Glue job then fetched data from the latter S3 bucket and made it available to Redshift
Data scientists could then use a Python script to query the data on Redshift
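To make the triggering step above concrete, here is a minimal sketch of the kind of S3-triggered Lambda function that starts a Glue job; the job name and argument keys are placeholders, not the client's real configuration.

# Illustrative sketch of the kind of S3-triggered Lambda the initial pipeline
# relied on: it starts a (hypothetical) Glue job for each uploaded CSV file.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # "csv-to-parquet-job" is a placeholder, not the client's real job name.
        glue.start_job_run(
            JobName="csv-to-parquet-job",
            Arguments={"--source_bucket": bucket, "--source_key": key},
        )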
Problems:
The pipeline implemented by the company had certain issues that were hindering its ability to make the most of its data.
Problem #1 — Inability to Individually Test Jobs
The initial pipeline did not provide the company with the ability to isolate and test individual components of the pipeline. Manually triggering one of the steps of the pipeline launched all the other events that followed it.
Problem #2: Too many steps in the pipeline
The initial pipeline included quite a few additional steps — such as the Lambda functions and CloudWatch events before and after the Glue job — that made the pipeline harder to test and manage. These extra steps could have been avoided with different architectural choices.
Problem #3: Cross-region data replication
There was also an issue arising due to the cross-region data replication feature on S3. The data replication between AWS region 1 and AWS region 2 was not instantaneous and took a few minutes. However, the completion of the AWS Glue job in region 1 was immediately triggering the Glue job in region 2 before the data had finished replicating between the S3 buckets.
Problem #4 — No error notifications
The initial pipeline provided the company with no error notifications. The company often discovered errors weeks after the event, and then only because a data scientist realized data was missing. When an error did occur, the next job would simply not be launched, and because there was no notification component to surface the failure immediately, the company's engineers had to go into the console and investigate the execution history to identify and pinpoint errors in the pipeline.
Optimized Pipeline
The following modifications were first proposed by the TrackIt team to the client.
The first modification to the pipeline proposed by the TrackIt team was to use Glue Workflow, a feature of AWS Glue that creates a workflow to automatically launch the AWS Glue jobs in sequence. The Glue Workflow would also allow the company to launch and test jobs individually without triggering the whole workflow.
The implementation of the Glue Workflow would also enable the company to simplify the pipeline by getting rid of extraneous Lambda functions and CloudWatch events that had been implemented in the initial pipeline. Instead of having multiple Lambda functions, the new pipeline would have just one Lambda function that triggers the Glue Workflow when files are uploaded into the S3 bucket.
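A minimal sketch of that single Lambda function is shown below, assuming boto3's Glue client; the workflow name is a placeholder.

# Sketch of the single Lambda suggested above: when new files land in S3 it
# starts the Glue Workflow, which then sequences the crawlers and jobs itself.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # "csv-pipeline-workflow" is an assumed workflow name.
    run = glue.start_workflow_run(Name="csv-pipeline-workflow")
    return {"workflow_run_id": run["RunId"]}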
The second modification proposed by the TrackIt team was the addition of an error notification component using Amazon CloudWatch. CloudWatch events would be triggered immediately when an error occurred in the Glue Workflow and would then send either an HTTP request or an email to the team, or could trigger a Lambda function that would execute additional tasks if there was an error.
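A sketch of how such a rule could be wired up with boto3 is shown below; the rule name, matched job states, and SNS topic ARN are assumptions for illustration.

# Sketch of the proposed error notification: a CloudWatch Events rule that
# matches failed Glue job runs and publishes to an (assumed) SNS topic that
# emails the team.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="glue-job-failures",
    EventPattern=json.dumps({
        "source": ["aws.glue"],
        "detail-type": ["Glue Job State Change"],
        "detail": {"state": ["FAILED", "TIMEOUT", "STOPPED"]},
    }),
    State="ENABLED",
)
events.put_targets(
    Rule="glue-job-failures",
    Targets=[{
        "Id": "notify-team",
        "Arn": "arn:aws:sns:us-east-1:123456789012:pipeline-alerts",  # placeholder ARN
    }],
)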
The third modification to the pipeline proposed by the TrackIt team was to eliminate the use of S3 cross-region replication. Instead, the files are directly added to both S3 buckets (each located in a different region) so that when the Glue job is triggered in region 2, all the files are already up to date in both S3 buckets.
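A minimal sketch of this dual-upload approach might look like the following; the bucket names are placeholders for the two regional buckets.

# Sketch of the dual-upload approach: write each incoming file to both regional
# buckets up front instead of relying on cross-region replication.
import boto3

s3 = boto3.client("s3")

def upload_to_both_regions(local_path: str, key: str):
    for bucket in ("pipeline-data-region-1", "pipeline-data-region-2"):
        s3.upload_file(local_path, bucket, key)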
The client was quite pleased with this proposition and wanted to incorporate these new changes to the pipeline using Apache Airflow, a tool used to create and manage complex workflows.
Apache Airflow Implementation
The TrackIt team assisted the client in incorporating the suggested modifications to the pipeline and implementing it on Apache Airflow. The different parts of the pipeline were coded in Python as modules that the client could reuse in the future to build similar pipelines or to further modify the existing one. A simplified sketch of what such a DAG might look like follows the list of steps below.
How the final Apache Airflow pipeline works:
The company’s CSV files are first added to an S3 bucket
The AWS Glue crawler fetches the data from S3
A Python Spark script is executed that modifies the data and makes it more suitable for use in Redshift Spectrum
The modified data is then stored in a new S3 bucket (now in the Parquet file format)
Then in one region, a Glue crawler fetches data
In the other region, the Glue crawler fetches data and a Redshift script is used to modify data and then update changes
Data scientists can use a Python script to query the data on Redshift
If any error occurs within the pipeline, a CloudWatch event is immediately triggered and sends an email to notify the team
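As mentioned above, here is a highly simplified sketch of what such a DAG could look like; the DAG id, task names, crawler and job names, and the use of PythonOperator with boto3 are assumptions for illustration, not TrackIt's actual modules.

# Highly simplified sketch of such a DAG (Airflow 1.10-style imports).
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python_operator import PythonOperator

glue = boto3.client("glue")

def start_crawler(crawler_name, **_):
    # Kick off a Glue crawler by name.
    glue.start_crawler(Name=crawler_name)

def start_job(job_name, **_):
    # Kick off a Glue (Spark) job by name.
    glue.start_job_run(JobName=job_name)

with DAG(dag_id="csv_to_redshift_pipeline",
         start_date=datetime(2020, 1, 1),
         schedule_interval=None,   # triggered when new files arrive
         catchup=False) as dag:

    crawl_raw_csv = PythonOperator(
        task_id="crawl_raw_csv",
        python_callable=start_crawler,
        op_kwargs={"crawler_name": "raw-csv-crawler"},
    )

    transform_to_parquet = PythonOperator(
        task_id="spark_transform_to_parquet",
        python_callable=start_job,
        op_kwargs={"job_name": "csv-to-parquet-job"},
    )

    crawl_parquet = PythonOperator(
        task_id="crawl_parquet",
        python_callable=start_crawler,
        op_kwargs={"crawler_name": "parquet-crawler"},
    )

    crawl_raw_csv >> transform_to_parquet >> crawl_parquet

Error handling and the second region are omitted here; in the real pipeline each module could be reused and chained the same way.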
About TrackIt
TrackIt is an Amazon Web Services Advanced Consulting Partner specializing in cloud management, consulting, and software development solutions based in Venice, CA.
TrackIt specializes in Modern Software Development, DevOps, Infrastructure-As-Code, Serverless, CI/CD, and Containerization with specialized expertise in Media & Entertainment workflows, High-Performance Computing environments, and data storage.
TrackIt’s forté is cutting-edge software design with deep expertise in containerization, serverless architectures, and innovative pipeline development. The TrackIt team can help you architect, design, build and deploy a customized solution tailored to your exact requirements.
In addition to providing cloud management, consulting, and modern software development services, TrackIt also provides an open-source AWS cost management tool that allows users to optimize their costs and resources on AWS. | https://medium.com/trackit/data-pipeline-architecture-optimization-apache-airflow-implementation-915821d5ce5b | ['Simon Meyer'] | 2020-11-09 17:49:16.587000+00:00 | ['Apache Airflow', 'Redshift', 'AWS', 'Big Data', 'Amazon Web Services'] | Title Data Pipeline Architecture Optimization Apache Airflow ImplementationContent Data pipeline essential company looking leverage data gather reliable business insight Pipelines allow company consolidate combine modify data originating various source make available analysis visualization However numerous benefit data pipeline provide depends company’s ability extract aggregate data coming different source thus quality pipeline architecture choice One TrackIt’s client implemented big data pipeline running AWS needed optimized client leveraging big data pipeline enable data scientist gain additional insight exploiting data originated CSV file However company running certain architecturerelated problem pipeline needed fixed sought expertise address issue article detail TrackIt team’s approach optimizing architecture data pipeline Initial Pipeline initial pipeline worked company’s CSV file first added S3 bucket file added S3 bucket AWS Glue job automatically triggered fetch data CSV file make available Python Spark script next step pipeline Python Spark script modified data make suitable use Redshift Spectrum modified data stored new S3 bucket Parquet file format file added new S3 bucket another Glue job triggered made data available use Redshift SQL database Files S3 bucket also replicated another S3 bucket hosted different AWS region using AWS S3’s crossregion replication feature needed separate S3 bucket region Redshift Spectrum used region separately requires S3 bucket located region AWS Glue job fetched data latter S3 bucket made available Redshift Data scientist could use Python script query data Redshift Problems pipeline implemented company certain issue hindering ability make data Problem 1 — Inability Individually Test Jobs initial pipeline provide company ability isolate test individual component pipeline Manually triggering one step pipeline launched event followed Problem 2 many step pipeline initial pipeline included quite additional step — Lambda function CloudWatch event Glue job — made pipeline harder test manage extra step could avoided different architectural choice Problem 3 Crossregion data replication also issue arising due crossregion data replication feature S3 data replication AWS region 1 AWS region 2 instantaneous took minute However completion AWS Glue job region 1 immediately triggering Glue job region 2 data finished replicating S3 bucket Problem 4 — error notification initial pipeline provided company error notification company often discovered occurrence error week event data scientist realized missing data error occur next job would simply launched pipeline include error notification component would allow company immediately become aware error happening within pipeline company’s engineer go onto console investigate history execution try identify pinpoint error pipeline Optimized Pipeline following modification first proposed TrackIt team client first modification pipeline proposed TrackIt team use Glue Workflow feature AWS Glue create workflow automatically launch AWS Glue job sequence Glue Workflow 
would also allow company launch test job individually without triggering whole workflow implementation Glue Workflow would also enable company simplify pipeline getting rid extraneous Lambda function CloudWatch event implemented initial pipeline Instead multiple Lambda function new pipeline would one Lambda function trigger Glue Workflow file uploaded S3 bucket second modification proposed TrackIt team addition error notification component using Amazon CloudWatch CloudWatch event would triggered immediately error occurred Glue Workflow would send either HTTP request email team could trigger Lambda function would execute additional task error third modification pipeline proposed TrackIt team eliminate use S3 crossregion replication Instead file directly added S3 bucket located different region Glue job triggered region 2 file already date S3 bucket client quite pleased proposition wanted incorporate new change pipeline using Apache Airflow tool used create manage complex workflow Apache Airflow Implementation TrackIt team assisted client incorporating suggested modification pipeline implementing Apache Airflow different part pipeline coded Python module client could reuse future build similar pipeline modify existing one final Apache Airflow pipeline work company’s CSV file first added S3 bucket AWS Glue crawler fetch data S3 Python Spark script executed modifies data make suitable use Redshift Spectrum modified data stored new S3 bucket Parquet file format one region Glue crawler fetch data region Glue crawler fetch data Redshift script used modify data update change Data scientist use Python script query data Redshift error occurs within pipeline CloudWatch event immediately triggered sends email notify team TrackIt TrackIt Amazon Web Services Advanced Consulting Partner specializing cloud management consulting software development solution based Venice CA TrackIt specializes Modern Software Development DevOps InfrastructureAsCode Serverless CICD Containerization specialized expertise Media Entertainment workflow HighPerformance Computing environment data storage TrackIt’s forté cuttingedge software design deep expertise containerization serverless architecture innovative pipeline development TrackIt team help architect design build deploy customized solution tailored exact requirement addition providing cloud management consulting modern software development service TrackIt also provides opensource AWS cost management tool allows user optimize cost resource AWSTags Apache Airflow Redshift AWS Big Data Amazon Web Services |
3,549 | Submitting Your Stories to Discover Computer Vision: Some Guidelines | 1. We Need Original Contributions From You
Before you submit an article to us, we suggest that you ask yourself how original the content of your article is. Is it something that our readers will be hooked on to & appreciate?
If the answer is, YES, go ahead & submit it without a second thought.
But in hindsight, quite often we also come across articles which are merely rephrased content duplicated from arXiv. That's considered plagiarism, hence we request you to run your articles through plagiarism-checker software, preferably the Plagiarism Checker by Grammarly, before you submit.
Any content found to be plagiarized will be outright rejected without notice.
We maintain such strict quality control to ensure that our readers get the top-notch content we promise to deliver. So please bear in mind, before drafting, that your article should add value for the readers.
2. Articulate the Message of Your Article Skillfully
Readers on the Internet are known to have a short attention span & our readers aren’t any different. Hence, to ensure that your message is delivered to the reader we request that you articulate your message in a clever manner.
Some suggestions that we’ve come up with over the years of experience writing online, are as follows:
Be VERY precise about the idea you’re writing about.
Include a non-clickbait title. Believe us when we say it, a clickbait title might fetch you some CTR no doubt but Medium isn't YouTube/Facebook! You want your articles to be read, not just clicked & moved on to the next article.
Phrase an extremely precise introduction right below the Subtitle/Featured Image (if you include one). It should sum up the rest of your article briefly yet deliver enough information to keep the reader hooked.
Writing articles online is a skill on its own which understandably not everyone will have. As such we request you to try to be as skillful as possible so that our editors can take care of the rest.
3. Avoid Overcrowding the Internet With Yet Another Tutorial
The Internet is an overcrowded place full of individuals trying to teach their craft to one another. So as much as we appreciate you helping someone out with your tutorials, at Discover Computer Vision we don’t publish low-effort tutorial posts.
You can try out other bigger publications like Towards Data Science or Better Programming for submitting such articles.
But does that mean we don't publish any tutorial articles?
Well actually, we do, but bear in mind the following criteria, which the context of your article should satisfy:
A unique subject matter that has earlier never been discussed on any other platforms out there.
There’s a research gap that you would like to address very briefly instead of writing a full-blown research paper.
You recreated an experiment or a previous implementation of a closed-source product & you found some discrepancies or stumbled across an even better implementation of the product.
If the content you’re sharing fulfills all these criteria, feel free to go ahead & submit it to us.
4. Check Your Facts & Credit the Original Source
Not citing the original source of the content you use for your article is equivalent to stealing & the repercussions can be huge. Hence, we request every writer to cite content that isn't their own & to make sure they're allowed to use it for commercial purposes.
Everything you use, from a mere feature image to a quote you picked up from a published paper in a journal, has to be cited. If you're unsure about the original source, we suggest you not use it at all.
Also please provide a Reference section at the end of your article & follow this format;
[X] N. Name, Title (Year), Source
Here’s an example;
[1] A. Pesah, A. Wehenkel and G. Louppe, Recurrent Machines for Likelihood-Free Inference (2018), NeurIPS 2018 Workshop on Meta-Learning
5. Refrain From Self-promotion & Aggressive CTAs
We believe in providing top-notch quality articles to our readers & that's the motive behind setting up the publication in the first place. Hence, in order to ensure we're delivering what we promised, we don't publish half-baked articles whose main purpose is marketing under a guise.
We do, however, allow subtle marketing of your previously published articles or blog posts related to Computer Vision at the end of the submitted article. If you include back-links to older articles, please do it in no more than 1–3 sentences.
Any other CTAs like asking for claps, following up on Twitter or other social media platforms, downloading a book, etc aren’t allowed & you’ll be asked to remove them before resubmitting.
Instead, we suggest you include CTAs on your Medium profile. You can follow up on Casey Botticello’s advice on his article — Medium Profile Page to know how to maximize your gains through an optimized profile page.
Regardless, know that, if you share good & quality content, it’s not difficult to grow a fan following.
6. Optimize Your Story For Curation & Discoverability
Congratulations on completing the article; sadly, just writing it isn't enough. But that's what we're here for, to help you get better at writing articles online!
So we suggest, you follow these points mentioned below to gain maximum value from each one of your articles; | https://medium.com/discover-computer-vision/discover-computer-vision-submission-guidelines-27e3f686e596 | ['Somraj Saha'] | 2020-05-19 07:45:21.033000+00:00 | ['Computer Vision', 'Submission Guidelines', 'Deep Learning', 'About Us', 'Join Us'] | Title Submitting Stories Discover Computer Vision GuidelinesContent 1 Need Original Contributions submit article u suggest ask original content article something reader hooked appreciate answer YES go ahead submit without second thought hindsight quite often also come across article mere rephrased content duplicated Arxiv That’s considered Plagiarism hence request run article plagiarismchecker software preferably Plagiarism Checker Grammarly submit content found plagiarized outright rejected without notice maintain strict qualitycontrol ensure reader getting topnotch standard content promise deliver please bear mind article adding value reader drafting 2 Articulate Message Article Skillfully Readers Internet known short attention span reader aren’t different Hence ensure message delivered reader request articulate message clever manner suggestion we’ve come year experience writing online follows precise idea you’re writing precise idea you’re writing Include nonclickbait title Believe u say clickbait title might fetch CTR doubt Medium isn’t YouTubeFacebook want article read clicked moved next article Believe u say clickbait title might fetch CTR doubt Medium isn’t YouTubeFacebook want article clicked moved next article Phrase extremely precise introduction right SubtitleFeatured Imageif include one sum rest article briefly yet deliver enough information keep reader hooked Writing article online skill understandably everyone request try skillful possible editor take care rest 3 Avoid Overcrowding Internet Yet Another Tutorial Internet overcrowded place full individual trying teach craft one another much appreciate helping someone tutorial Discover Computer Vision don’t publish loweffort tutorial post try bigger publication like Towards Data Science Better Programming submitting article mean don’t submit tutorial article Well actually bear mind following criterion satisfy context article unique subject matter earlier never discussed platform There’s research gap would like address briefly instead writing fullblown research paper recreated experiment previous implementation closedsource product figured discrepancy stumbled across even better implementation product content you’re sharing fulfills criterion feel free go ahead submit u 4 Check Fact Credit Original Source citing original source content use article equivalent stealing repercussion huge Hence request every writer cite content aren’t make sure you’re allowed use commercial purpose Everything use mere feature image quote picked published paper journal cited you’re confused original source suggest use Also please provide Reference section end article follow format X N Name Title Year Source Here’s example 1 Pesah Wehenkel G Louppe Recurrent Machines LikelihoodFree Inference 2018 NeurIPS 2018 Workshop MetaLearning 5 Refrain Selfpromotion Aggressive CTAs believe providing topnotch quality article reader that’s motive behind setting publication first place Hence order ensure we’re delivering promised don’t publish halfbaked article whose main purpose market guise Although allow subtle marketing previously published article blog post related Computer Vision end submitted article include 
backlinks older article please 1–3 sentence CTAs like asking clap following Twitter social medium platform downloading book etc aren’t allowed you’ll asked remove resubmitting Instead suggest include CTAs Medium profile follow Casey Botticello’s advice article — Medium Profile Page know maximize gain optimized profile page Regardless know share good quality content it’s difficult grow fan following 6 Optimize Story Curation Discoverability Congratulations completing article ala sadly writing isn’t enough that’s we’re help get better writing article online suggest follow point mentioned gain maximum value one articlesTags Computer Vision Submission Guidelines Deep Learning Us Join Us |
3,550 | What does if __name__ == ”__main__” do? | What does if __name__ == ”__main__” do?
When and how a main method is executed in Python
Photo by Blake Connally on unsplash.com
If you are new to Python, you might have noticed that it is possible to run a Python script with or without a main method. And the notation used in Python to define one (i.e. if __name__ == '__main__' ) is definitely not self-explanatory, especially for newcomers.
In this article, I am going to explore what is the purpose of a main method and what to expect when you define one in your Python applications.
What is the purpose of __name__ ?
Before executing a program, the Python interpreter assigns the name of the Python module to a special variable called __name__ . Depending on whether you are executing the program through the command line or importing the module into another module, the assignment for __name__ will vary.
If you invoke your module as a script, for instance
python my_module.py
then Python Interpreter will automatically assign the string '__main__' to the special variable __name__ . On the other hand, if your module is imported in another module
# Assume that this is another_module.py
import my_module
then the string 'my_module' will be assigned to __name__ .
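A quick way to observe this assignment is to print __name__ directly (the module name print_name here is just an example):

# print_name.py
print(f"__name__ is {__name__!r}")

Running python print_name.py prints __name__ is '__main__' , whereas executing import print_name from another module prints __name__ is 'print_name' .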
How does the main method work?
Now let’s assume that we have the following module, that contains the following lines of code:
# first_module.py

print('Hello from first_module.py')

if __name__ == '__main__':
    print('Hello from main method of first_module.py')
So in the module above, we have one print statement which is outside of the main method and one more print statement which is inside. The code under the main method, will only be executed if the module is invoked as a script from (e.g.) the command line, as shown below:
python first_module.py
Hello from first_module.py
Hello from main method of first_module.py
Now, let’s say that instead of invoking module first_module as a script, we want to import it in another module:
# second_module.py

import first_module

print('Hello from second_module.py')

if __name__ == '__main__':
    print('Hello from main method of second_module.py')
And finally, we invoke second_module as a script:
python second_module.py
Hello from first_module.py
Hello from second_module.py
Hello from main method of second_module.py
Notice that the first output comes from module first_module, specifically from the print statement outside the main method. Since we haven't invoked first_module as a script but instead imported it into second_module, the main method in first_module is simply ignored, because if __name__ == '__main__' evaluates to False. Recall that from the above call, the __name__ variable for second_module has been assigned the string '__main__', while first_module's __name__ variable has been assigned the name of the module, i.e. 'first_module'.
Although everything under if __name__ == ‘__main__' is considered to be what we call a “main method”, it is a good practice to define one proper main method instead, which is called if the condition evaluates to True. For instance,
# my_module.py

def main():
    """The main function of my Python Application"""
    print('Hello World')

if __name__ == '__main__':
    main()
Note: I would generally discourage you from having multiple main functions in a single Python application. I have used two different main methods just for the sake of the example.
Conclusion | https://towardsdatascience.com/what-does-if-name-main-do-e357dd61be1a | ['Giorgos Myrianthous'] | 2020-11-15 00:21:35.867000+00:00 | ['Python Programming', 'Software Engineering', 'Coding', 'Software Development', 'Python'] | Title name ”main” doContent name ”main” main method executed Python Photo Blake Connally unsplashcom new Python might noticed possible run Python script without main method notation used Python define one ie name ‘main definitely selfexplanatory especially new comer article going explore purpose main method expect define one Python application purpose name executing program Python Interpreter assigns name python module special variable called name Depending whether executing program command line importing module another module assignment name vary invoke module script instance python mymodulepy Python Interpreter automatically assign string main special variable name hand module imported another module Assume anothermodulepy import mymodule string mymodule assigned name main method work let’s assume following module contains following line code firstmodulepy printHello firstmodulepy name main printHello main method firstmodulepy module one print statement outside main method one print statement inside code main method executed module invoked script eg command line shown python firstmodulepy Hello firstmodulepy Hello main method firstmodulepy let’s say instead invoking module firstmodule script want import another module secondmodulepy import firstscript printHello secondmodulepy name main printHello main method secondmodulepy finally invoke secondmodule script python secondmodule Hello firstmodulepy Hello secondmodulepy Hello main method secondmodulepy Notice first output come module firstmodule specifically print statement outside main method Since haven’t invoked firstmodule script instead imported secondmodule main method firstmodule simply ignored since name ‘main evaluates False Recall call name variable secondmodule assigned string main firstmodule ‘s name variable assigned name module ie ’firstmodule’ Although everything name ‘main considered call “main method” good practice define one proper main method instead called condition evaluates True instance mymodulepy def main main function Python Application printHello World name main main Note would generally discourage multiple main function single Python application used two different main method sake example ConclusionTags Python Programming Software Engineering Coding Software Development Python |
3,551 | A Gift Guide for the Data Viz Practitioner | A Gift Guide for the Data Viz Practitioner
13 gift ideas that data viz practitioners are sure to enjoy
It’s the most wonderful time of the year — and it’s time to decide on the perfect gift for that special data viz practitioner in your life. Whether you’re reading this article for them, or for yourself, you’ve stumbled upon the Christmas yule log of gift lists. Sweet, with a bit of spice, all rolled into a clean final product— just like a good data visualization.
Books
From how-to guides, to best practices, to pure artistic design, books are a first-rate gift for anyone interested in data viz.
Photo by Allie Smith on Unsplash
The newest release from data viz heavyweight Alberto Cairo is packed full of tips to understand and decode all of the data visualizations that rule our daily lives. Cole Knaflic, another well-known data viz expert, has a fantastic book on Storytelling with Data that is also an excellent choice.
2. Invisible Women: Data Bias in a World Designed for Men
The modern abundance of data yields more than beautiful graphs—pick up this exposé from Caroline Criado Perez to reorient your reality to the biases baked into our data-driven world.
3. The Visual Display of Quantitative Information
The classic by Edward Tufte, data visualization pioneer. This book is considered by many to be a hallowed text in data visualization. And for those inspired by the occult and weird, there is the gorgeously curious Codex Seraphinianus — part impossible puzzle, part artistic masterpiece.
Tools
Inevitably, anyone interested in data viz enjoys the actual work that comes with crafting beautiful visuals. Some sketch, some scribble, but we all need tools to get the job done. These gifts are sure to make it straight into the data viz practitioner’s toolbox:
4. Sketchpad or notebook
Moleskine and Leuchtturm1917 notebooks are always highly recommended. For the pocket friendly, Field Notes and Rite in the Rain both have versatile options to choose from.
Photo by Jan Kahánek on Unsplash
5. Quality pens or pencils
Gel pencils, high-quality markers, the classic #2 pencil — there are plenty of options, and everyone appreciates a good writing utensil (I love a new Pilot G2 pen).
6. A white board with ultra fine tip dry erase markers
Writing with a chisel tip marker is a chore after doing any work using an ultra fine tip marker. I am a devout ultra fine tip user — it’s the only proper way to use a dry-erase board. I’ll die on that hill.
Photo by Joanna Kosinska on Unsplash
7. Design ephemera from Present & Correct
Present & Correct has every little gadget or utensil you could possibly need for a successful design project. Graph tape, rubber stamps, fasteners, journals, colored chalk — it’s guaranteed data viz practitioners will have a field day in their store.
Subscribe!
While the subscription model may not be everyone’s favorite, here are some worthwhile non-Nightingale subscriptions that would make any data viz professional happy.
8. Twelve-week subscription to The Economist
Photo by Campaign Creators on Unsplash
Daily graphs in the website’s Graphic detail section, and weekly print issues to keep up with all things politics and economics—the perfect companion for data viz professionals.
9. One-year subscription to Nathan Yau’s Flowing Data resources, courses, and tutorials
Returning to revered members of the data viz community, Nathan Yau has been sharing and creating some of the internet’s best visualizations for over a decade. A subscription gives access to Yau’s step-by-step tutorials, courses of varying difficulty, and a wide community full of valuable resources.
10. Adobe Illustrator subscription
Illustrator is practically a required tool for the data viz practitioner. This is a no-brainer gift for those feeling a bit generous.
Fun
Having fun with our creativity makes us a better artist and a better person. — Gwen Fox
Photo by Christopher Paul High on Unsplash
11. Board games
Try Codenames for its inventive strategy, Pandemic or Risk for the cartography enthusiast, or CATAN for those looking for adventure. And when wrapping that board game, consider Johannes Wirges' recent breakdown of what board games teach us about data viz.
12. Swag from the PolicyViz store
Jonathan Schwabish, another highly regarded data visualization expert, runs the PolicyViz blog and podcast to encourage better data visualization communication. The shop is full of t-shirts, posters, and cheat sheets for the data viz practitioner — all supporting the excellent work Jon does.
Photo by Franco Antonio Giovanella on Unsplash
13. Play-Doh
Come on, who doesn’t love opening a brand-new can of Play-Doh? It’s the perfect stress reliever and great for first-pass visualizations — a timeless classic.
Happy Holidays!
And with that, the yule log is finished off, satisfying the reader with visions of gift ideas dancing in their heads. Whatever you celebrate, I hope this gift list inspired someone’s perfect gift — or to treat yourself. I won’t tell. | https://medium.com/nightingale/gift-guide-for-the-data-viz-practitioner-5ba5495c5c95 | ['Coleman Harris'] | 2019-12-16 16:56:25.045000+00:00 | ['Design', 'Holidays', 'Data Science', 'Creativity', 'Data Visualization'] | Title Gift Guide Data Viz PractitionerContent Gift Guide Data Viz Practitioner 13 gift idea data viz practitioner sure enjoy It’s wonderful time year — it’s time decide perfect gift special data viz practitioner life Whether you’re reading article you’ve stumbled upon Christmas yule log gift list Sweet bit spice rolled clean final product— like good data visualization Books howto guide best practice pure artistic design book firstrate gift anyone interested data viz Photo Allie Smith Unsplash newest release data viz heavyweight Alberto Cairo packed full tip understand decode data visualization rule daily life Cole Knaflic another wellknown data viz expert fantastic book Storytelling Data also excellent choice 2 Invisible Women Data Bias World Designed Men modern abundance data yield beautiful graphs—pick exposé Caroline Criado Perez reorient reality bias baked datadriven world 3 Visual Display Quantitative Information classic Edward Tufte data visualization pioneer book considered many hallowed text data visualization inspired occult weird gorgeously curious Codex Seraphinianus — part impossible puzzle part artistic masterpiece Tools Inevitably anyone interested data viz enjoys actual work come crafting beautiful visuals sketch scribble need tool get job done gift sure make straight data viz practitioner’s toolbox 4 Sketchpad notebook Moleskin Leuchtturm1917 notebook always highly recommended pocket friendly Field Notes Rite Rain versatile option choose Photo Jan Kahánek Unsplash 5 Quality pen pencil Gel pencil highquality marker classic 2 pencil — plenty option everyone appreciates good writing utensil love new Pilot G2 pen 6 white board ultra fine tip dry erase marker Writing chisel tip marker chore work using ultra fine tip marker devout ultra fine tip user — it’s proper way use dryerase board I’ll die hill Photo Joanna Kosinska Unsplash 7 Design ephemera Present Correct Present Correct every little gadget utensil could possibly need successful design project Graph tape rubber stamp fastener journal colored chalk — it’s guaranteed data viz practitioner field day store Subscribe subscription model may everyone’s favorite worthwhile nonNightingale subscription would make data viz professional happy 8 Twelveweek subscription Economist Photo Campaign Creators Unsplash Daily graph website’s Graphic detail section weekly print issue keep thing politics economics—the perfect companion data viz professional 9 Oneyear subscription Nathan Yau’s Flowing Data resource course tutorial Returning revered member data viz community Nathan Yau sharing creating internet’s best visualization decade subscription give access Yau’s stepbystep tutorial course varying difficulty wide community full valuable resource 10 Adobe Illustrator subscription Illustrator practically required tool data viz practitioner nobrainer gift feeling bit generous Fun fun creativity make u better artist better person — Gwen Fox Photo Christopher Paul High Unsplash 11 Board game Try Codenames inventive strategy Pandemic Risk cartography enthusiast CATAN looking adventure wrapping board 
game consider Johannes Wirges recent break board game teach u data viz 12 Swag PolicyViz store Jonathan Schwabish another highly regarded data visualization expert run PolicyViz blog podcast encourage better data visualization communication shop full tshirts poster cheat sheet data viz practitioner — supporting excellent work Jon Photo Franco Antonio Giovanella Unsplash 13 PlayDoh Come doesn’t love opening brandnew PlayDoh It’s perfect stress reliever great firstpass visualization — timeless classic Happy Holidays yule log finished satisfying reader vision gift idea dancing head Whatever celebrate hope gift list inspired someone’s perfect gift — treat won’t tellTags Design Holidays Data Science Creativity Data Visualization |
3,552 | You Don’t Need Permission | It’s impossible to get stuck when you’re listening to yourself.
How many thoughts go through your head every day?
All of them are potential ideas. Most crap, some good, a handful great. You never know until you latch onto one and see where it goes. And anyone who claims they do is full of it.
I’ve noticed a trend, in others and myself and especially in the new creator: getting in your own way.
And the usual solution to this self-imposed roadblock is typically to seek approval of others before embarking on whatever journey it is you’re thinking of.
Some quotes of the fearful creator:
“Is this okay?”
“Do you think this is a good idea?”
“I really want to work this out before I start…”
All of which are counterintuitive because most people will tell you “do your own thing”, “you do you”, “you won’t know until you try it” or some form of “don’t listen to what others say, just do it.”
But when it actually comes to it…
Ho ho, that’s the difference isn’t it? Saying something and doing something.
I’ll let you in on a secret: you don’t need permission.
Whatever it is you want to start, to make, to build, to share. You don’t need someone to hold your hand. You could start now right if you wanted to.
Shocking, right?
Don’t care, made art
That’s my motto. Steal it if you want.
Whenever I start questioning myself, asking silly things like “who am I to have these thoughts?” or “who am I to be writing this article?”
I smack myself in the face and recite the words.
Don’t care, made art.
Start digging the hole
I tried to learn to code three or four times. Every time I failed, I’d wait until someone wanted to do it with me before trying again.
Even though I knew, knew the whole time what I wanted to do.
Well, why not just journey off and start? Who knows. Perhaps I wanted someone else on the journey to help when it got hard.
But get deep enough in anything and you’ll realise you’re going to have to start facing the challenges yourself.
I think of creating like having a single shovel on the ground. And the treasure you’re trying to make lives in the dirt below.
How many people can use the shovel at the same time?
You could spend all your time trying to work out how to get others to use the shovel with you or you could just start digging the hole.
The same goes for someone else, you can’t hold their shovel for them. Let them find their own and figure out how to dig.
Pretend you’re an archeologist and the thing you’re trying to create is like a 67-million-year-old Tyrannosaurus rex skeleton. A work of art when it towers through a museum but only because someone like you picked up the shovel, stuck it in, found the bones and spent hours brushing off the dust.
Why or why not?
Sandra tells Mark about the paintings she’s been creating. She hasn’t made money yet but she enjoys trying to get her ideas onto the canvas.
Mark asks her why she’s spending so much time painting when she could be doing other things.
Harriet tells Lucy about the poems she’s been writing, so far they’re private but she’s been thinking about sharing them on her blog.
Lucy asks why not? And tells Harriet to go for it. Asks, what’s the worst that could happen?
F*ck Mark.
Be more like Lucy.
Race to your first 100
“But I’ve got no talent.”
Neither do I. Except for the fact I can sit here and spill my guts onto this page.
In the beginning, my hands wouldn’t output what crawled around my head. But after enough sessions with the blank page, I’m getting better.
Don’t discount yourself at being bad at something until you’ve sunk at least 100 deep hours into it. 100 pure hours is enough to go from zero to average (or above) at almost anything.
When’s the last time you spent four days straight doing nothing but one thing?
A broke crackhead can spend four days hustling for a hit like it’s nothing.
Imagine if you poured that kind of crackhead energy into your work.
Our brains aren’t wired for non-linear returns. You could spend 99 hours on something and make almost zero progress. And then halfway through hour 100, the breakthrough comes from what seems like nowhere. But it’s not nowhere, it’s the magic of compound interest showing its face and waving hello.
100 hours, 100 articles, 100 videos, 100 creations, 100 phone calls, 100 cold emails, 100 whatever.
Use speed and quantity as your fertiliser for quality.
Say the motto whilst you do it.
Don’t care, made art. Don’t care, made art. Don’t care, made art.
Audience of one
Stuck? Probably not. You’re trying to please everyone.
What happens if you replaced “but what if they don’t like it?” with “but what if I don’t like it?”
You’re already your own harshest critique, why not become your own biggest fan?
Educate or entertain
We’ve been selfish so far. Why? Because it works. No one knows what you’re after as much as you.
But let’s switch gears.
If you can’t stomach being a selfish creator, make things to educate or entertain others and you’ll always have an audience.
People are hungry for knowledge, share what you know.
I want to dance, I want to laugh, I want to cry, I want to cheer, I want to fear, I want to love. Give me a reason.
Does what you’re making educate or entertain? Bonus points if it does both. Teach me something while we dance together.
Embarrassed every month
Last week I entered a Brazilian Jiu Jitsu competition.
Walking through the competition, I thought, why am I here?
Then I realised, I’m uncomfortable because I’m tiptoeing into the unknown. And everyone around me probably feels the same way. Win or lose we’re going to come out of this different to what we were before. This could’ve been a normal Sunday. Instead, we’re all here putting our practised skills to the test. A controlled environment, yes, but also an environment different from what we’re used to.
I lost two out of three of the fights I had. A bruised ego and a bruised body.
I reflected on the one I won and was proud of my efforts, I went through the moves I’d practice and they came off. Zero takeaways except for a pat on the back.
The ones I lost?
One of them gave me a haemorrhoid. I got choked so hard the inner linings of my intestines gave way.
Have you ever a haemorrhoid? If not, I’ll tell you what it’s like: shit.
A physical reminder that I’ve got something to work on, something to improve.
Over the next few months, those small loses will turn into gains as I fill the gaps in my skill set.
Of course, the goal is never to lose, it’s to become immune to it.
I’ve got an idea…
I’ll give myself a vaccination right now.
I, Daniel Bourke, am immune to losing.
I, Daniel Bourke, am immune to rejection.
I, Daniel Bourke, am immune to embarrassment.
Boom. Done.
That should last a month. Next month I’ve got find a way to embarrass myself, step out of my happy little circle, practice putting fear to the side when I’ve got to do something that matters.
If you’re not feeling embarrassed about something at least once a month, you’ve got to bump your numbers up.
There’s no criteria
Remember how writing an essay in school was such a drag?
1500 words on some topic you’d never pursue on your own, being sure to add references for every thought you translate into words all to make sure you ticked the boxes a veteran education academic created 17 years ago.
Good gosh. Boring to write, even worse to read.
Creating becomes fun when you realise there’s no criteria except making good art.
What’s good art?
You decide baby.
Copy others until you have your own style
Start with work that inspires you, ingest it, enjoy it, learn from it and remix it with your own style.
Put different colour clothes together in a washing machine and what happens?
The colours run and they mix.
That’s how I make things. I steal the ideas of others and put them into my washing machine brain and let them mix.
Then I take them out and see what they look like.
Sometimes they’re disgusting. The type of shirts you wouldn’t be caught dead wearing.
Other times, the times when I pour my heart in, the washing machine goes into reverse and instead of coming out clean, the shirts come out with bloodstains on them.
Those type of shirts?
They’re magic.
Trust your ability
Last week I helped my friend Big Easy prepare for a role-play scenario. A 2-minute elevator pitch involving meeting someone for the first time, finding out who they are, talking to them about a product and tailoring the interaction to their needs.
In real life, Big Easy could do this type of scenario without thinking. But since it was going to be in front of his peers, he spent the days leading up to the pitch evaluating every possible outcome scenario. All which usually end in him screwing up.
Where does this come from?
The more you plan, the more you space you give for a disconnect between your belief in your abilities and your actual abilities.
To fix my friends disconnect we went for a walk on the beach and acted out different scenarios.
The first few started off rusty but were polished by the end.
See? That wasn’t so bad, I said.
You’re right, I think I’ve got this, he said.
Turns out Big Easy won the best performance on the day. He’s going through to the next round.
When you’re doubting your ability, ask yourself, am I planning too much instead of practising?
In pursuit of a leader
My friend got me a poster of someone we both look up to for my birthday. It’s hanging in my hallway.
The other day I walked passed it and had the audacious idea I could be that person, a person others look up to.
Would you look at that… we’re coming full circle here. We started by saying too often people hold themselves back because they’re looking for someone to give them the go ahead.
A leader to say, “you can do it.”
Well, guess what?
You can become that person.
Instead of just pursuing your interests, become a leader in your interests.
Everyone is looking for someone to look up to — perhaps it’s you. | https://medium.com/the-post-grad-survival-guide/you-dont-need-permission-bf9c6d7689ef | ['Daniel Bourke'] | 2020-10-10 10:17:32.457000+00:00 | ['Makers', 'Business', 'Marketing', 'Life', 'Creativity'] | Title Don’t Need PermissionContent It’s impossible get stuck you’re listening many thought go head every day potential idea crap good handful great never know latch onto one see go anyone claim full I’ve noticed trend others especially new creator getting way usual solution selfimposed roadblock typically seek approval others embarking whatever journey you’re thinking quote fearful creator “Is okay” “Do think good idea” “I really want work start…” counterintuitive people tell “do thing” “you you” “you won’t know try it” form “don’t listen others say it” actually come it… Ho ho that’s difference isn’t Saying something something I’ll let secret don’t need permission Whatever want start make build share don’t need someone hold hand could start right wanted Shocking right Don’t care made art That’s motto Steal want Whenever start questioning asking silly thing like “who thoughts” “who writing article” smack face recite word Don’t care made art Start digging hole tried learn code three four time Every time failed I’d wait someone wanted trying Even though knew knew whole time wanted Well journey start know Perhaps wanted someone else journey help got hard get deep enough anything you’ll realise you’re going start facing challenge think creating like single shovel ground treasure you’re trying make life dirt many people use shovel time could spend time trying work get others use shovel could start digging hole go someone else can’t hold shovel Let find figure dig Pretend you’re archeologist thing you’re trying create like 67millionyearold Tyrannosaurus rex skeleton work art tower museum someone like picked shovel stuck found bone spent hour brushing dust Sandra tell Mark painting she’s creating hasn’t made money yet enjoys trying get idea onto canvas Mark asks she’s spending much time painting could thing Harriet tell Lucy poem she’s writing far they’re private she’s thinking sharing blog Lucy asks tell Harriet go Asks what’s worst could happen Fck Mark like Lucy Race first 100 “But I’ve got talent” Neither Except fact sit spill gut onto page beginning hand wouldn’t output crawled around head enough session blank page I’m getting better Don’t discount bad something you’ve sunk least 100 deep hour 100 pure hour enough go zero average almost anything When’s last time spent four day straight nothing one thing broke crackhead spend four day hustling hit like it’s nothing Imagine poured kind crackhead energy work brain aren’t wired nonlinear return could spend 99 hour something make almost zero progress halfway hour 100 breakthrough come seems like nowhere it’s nowhere it’s magic compound interest showing face waving hello 100 hour 100 article 100 video 100 creation 100 phone call 100 cold email 100 whatever Use speed quantity fertiliser quality Say motto whilst Don’t care made art Don’t care made art Don’t care made art Audience one Stuck Probably You’re trying please everyone happens replaced “but don’t like it” “but don’t like it” You’re already harshest critique become biggest fan Educate entertain We’ve selfish far work one know you’re much let’s switch gear can’t stomach selfish creator make thing educate entertain others you’ll always audience People hungry knowledge share know want dance want laugh want cry want cheer want 
fear want love Give reason you’re making educate entertain Bonus point Teach something dance together Embarrassed every month Last week entered Brazilian Jiu Jitsu competition Walking competition thought realised I’m uncomfortable I’m tiptoeing unknown everyone around probably feel way Win lose we’re going come different could’ve normal Sunday Instead we’re putting practised skill test controlled environment yes also environment different we’re used lost two three fight bruised ego bruised body reflected one proud effort went move I’d practice came Zero takeaway except pat back one lost One gave haemorrhoid got choked hard inner lining intestine gave way ever haemorrhoid I’ll tell it’s like shit physical reminder I’ve got something work something improve next month small loses turn gain fill gap skill set course goal never lose it’s become immune I’ve got idea… I’ll give vaccination right Daniel Bourke immune losing Daniel Bourke immune rejection Daniel Bourke immune embarrassment Boom Done last month Next month I’ve got find way embarrass step happy little circle practice putting fear side I’ve got something matter you’re feeling embarrassed something least month you’ve got bump number There’s criterion Remember writing essay school drag 1500 word topic you’d never pursue sure add reference every thought translate word make sure ticked box veteran education academic created 17 year ago Good gosh Boring write even worse read Creating becomes fun realise there’s criterion except making good art What’s good art decide baby Copy others style Start work inspires ingest enjoy learn remix style Put different colour clothes together washing machine happens colour run mix That’s make thing steal idea others put washing machine brain let mix take see look like Sometimes they’re disgusting type shirt wouldn’t caught dead wearing time time pour heart washing machine go reverse instead coming clean shirt come bloodstain type shirt They’re magic Trust ability Last week helped friend Big Easy prepare roleplay scenario 2minute elevator pitch involving meeting someone first time finding talking product tailoring interaction need real life Big Easy could type scenario without thinking since going front peer spent day leading pitch evaluating every possible outcome scenario usually end screwing come plan space give disconnect belief ability actual ability fix friend disconnect went walk beach acted different scenario first started rusty polished end See wasn’t bad said You’re right think I’ve got said Turns Big Easy best performance day He’s going next round you’re doubting ability ask planning much instead practising pursuit leader friend got poster someone look birthday It’s hanging hallway day walked passed audacious idea could person person others look Would look that… we’re coming full circle started saying often people hold back they’re looking someone give go ahead leader say “you it” Well guess become person Instead pursuing interest become leader interest Everyone looking someone look — perhaps it’s youTags Makers Business Marketing Life Creativity |
3,553 | Moths in Space | Moths in Space
A poem
Which are the stars, and which are their paparazzi?
The stars hold my gaze.
They wink and sparkle
and burn ablaze.
My atoms: moths
knocking each other askew
in a whirlwind rocket
to the modest moon.
But it’s being afar
that makes bright beautiful.
The sun is a star
that will eat you whole
if you get too close.
So we learn:
don’t touch the stove.
Be a flower, or a tree.
Reach for the warmth,
light and energy,
but keep your roots
in the cool shade of soil.
Grow up with ambition;
grow deep from turmoil.
The top-heavy only tumble;
The heavy-rooted never fly.
So you see,
there’s significance in our scars;
only humans lose their way
while reaching for the stars. | https://wormwoodtheweird.medium.com/moths-in-space-82e33c01a104 | [] | 2020-03-10 22:09:21.268000+00:00 | ['Self-awareness', 'Self', 'Creativity', 'Life Lessons', 'Poetry'] | Title Moths SpaceContent Moths Space poem star paparazzo star hold gaze wink sparkle burn ablaze atom moth knocking askew whirlwind rocket modest moon it’s afar make bright beautiful sun star eat whole get close learn don’t touch stove flower tree Reach warmth light energy keep root cool shade soil Grow ambition grow deep turmoil topheavy tumble heavyrooted never fly see there’s significance scar human lose way reaching starsTags Selfawareness Self Creativity Life Lessons Poetry |
3,554 | How could Santa leverage Blockchain & AI? | Made with Visme.com
Made with https://piktochart.com/
How could Santa leverage Blockchain & AI?
Until now, Santa have faced a series of challenges that seemed insurmountable, such as knowing the children’s requests in detail and being able to cross-check them with their good behavior during the year, but in a safe, reliable manner and without compromising the children’s personal data as well as complying with the new data regulations such as the European GDRP and the US Data Protection.
And especially to do it efficiently and in time, that is why Santa has continued to modernize and incorporate new technologies to give each child what they really want, as well as their parents and society in general.
Join the AIMA Thought Leadership @ bit.ly/AIMA-MeetUp
Artificial intelligence
Made with https://piktochart.com/
First, using artificial intelligence solutions, they are able to take all the data from the channels where children interact, which today are mostly digital, such as social networks, text messages, game forums and adding them to create large segments according to their location. in the country, socio-economic level, type of family to which they belong, number of siblings, tastes, price of gifts according to the level of expenditure for the family and age, among other relevant variables.
Together with their characteristics they combine it with their school performance, their collaboration in the household activities, development perspectives and areas of opportunity in terms of learning and training.
Once that is done, of all the available gifts, they look for the ones that meet the criteria of each kid to choose the best while validating certain restrictions such as quantity, price and popularity.
To understand the conversations that are taking place both among children and among adults related to Christmas, natural language understanding is used to achieve the understanding of what they are saying but not only literally but understanding the context of the conversations and the sentiment they have to obtain the best findings and give exactly the main trends of the moment.
Using machine learning models on all these data you can get accurate answers regarding the ideal gift for each child according to what they want, their compliance with the rules at home and purchasing power of the family.
In this way, the demand for presents can be predicted and production, distribution and transportation costs associated with Christmas could be reduced; at the same time that a complete satisfaction of the children is achieved by obtaining the best gift just in time while keeping their illusion alive.
Blockchain
Made with https://piktochart.com/
For the part of the data that is not completely digital and the precose follow-up of the gifts and personal identification data of each child and family, Santa is using Blockchain-based solutions to have an accurate and reliable follow-up of their processes. throughout the entire value chain they have.
Smart Contracts
Made with https://piktochart.com/
For children who still write their paper letters and those who do it digitally, they can verify that what they ask for is what they receive and that they have fully complied with their obligations as children, such as making their bed, getting good grades and eating all your vegetables.
Therefore the system based on Smart Contracts on Blockhain allows the unalterable record of what each infant asks Santa, so that there are no changes or misunderstandings with what they receive although it is possible to make changes or changes that are registered without modifying the original request.
With smart contracts you can ensure that the gifts are released to be delivered to children as soon as it is verified that they have fulfilled what is expected of them and this is registered in the system for further verification.
This will ensure that gifts are only given to the children who deserve them.
Personal information
Made with https://piktochart.com/
In the sensitive information piece, the solution based on Blockchain will allow the entire value chain to collaborate to deliver the desired gift without having access to individual data that will be protected at all times by means of powerful encryption schemes to avoid misuse.
Conclusion
Thus, once again we see how Santa seeks to always be at the forefront to give the best service to their main clients, the children of the world and their parents.
Using innovative solutions to control their processes and simplify them so they can dedicate themselves to what they do best: create a happy childhood and deliver smiles. | https://medium.com/aimarketingassociation/how-could-santa-leverage-blockchain-ai-ad1832d57125 | ['Gabriel Jiménez'] | 2018-11-29 00:20:27.396000+00:00 | ['Christmas', 'Data Science', 'AI', 'Artificial Intelligence', 'Blockchain'] | Title could Santa leverage Blockchain AIContent Made Vismecom Made httpspiktochartcom could Santa leverage Blockchain AI Santa faced series challenge seemed insurmountable knowing children’s request detail able crosscheck good behavior year safe reliable manner without compromising children’s personal data well complying new data regulation European GDRP US Data Protection especially efficiently time Santa continued modernize incorporate new technology give child really want well parent society general Join AIMA Thought Leadership bitlyAIMAMeetUp Artificial intelligence Made httpspiktochartcom First using artificial intelligence solution able take data channel child interact today mostly digital social network text message game forum adding create large segment according location country socioeconomic level type family belong number sibling taste price gift according level expenditure family age among relevant variable Together characteristic combine school performance collaboration household activity development perspective area opportunity term learning training done available gift look one meet criterion kid choose best validating certain restriction quantity price popularity understand conversation taking place among child among adult related Christmas natural language understanding used achieve understanding saying literally understanding context conversation sentiment obtain best finding give exactly main trend moment Using machine learning model data get accurate answer regarding ideal gift child according want compliance rule home purchasing power family way demand present predicted production distribution transportation cost associated Christmas could reduced time complete satisfaction child achieved obtaining best gift time keeping illusion alive Blockchain Made httpspiktochartcom part data completely digital precose followup gift personal identification data child family Santa using Blockchainbased solution accurate reliable followup process throughout entire value chain Smart Contracts Made httpspiktochartcom child still write paper letter digitally verify ask receive fully complied obligation child making bed getting good grade eating vegetable Therefore system based Smart Contracts Blockhain allows unalterable record infant asks Santa change misunderstanding receive although possible make change change registered without modifying original request smart contract ensure gift released delivered child soon verified fulfilled expected registered system verification ensure gift given child deserve Personal information Made httpspiktochartcom sensitive information piece solution based Blockchain allow entire value chain collaborate deliver desired gift without access individual data protected time mean powerful encryption scheme avoid misuse Conclusion Thus see Santa seek always forefront give best service main client child world parent Using innovative solution control process simplify dedicate best create happy childhood deliver smilesTags Christmas Data Science AI Artificial Intelligence Blockchain |
3,555 | How to Evaluate a Data Visualization | Image by Benjamin O. Tayo
How to Evaluate a Data Visualization
A good data visualization should have all essential components in place
I. Introduction
Data visualization is one of the most important branches in data science. It is one of the main tools use to analyze and study relationships between different variables. Data visualization can be used for descriptive analytics. Data visualization is also used in machine learning during data preprocessing and analysis; feature selection; model building; model testing; and model evaluation.
In machine learning (predictive analytics), there are several metrics that can be used for model evaluation. For example, supervised learning (continuous target) model can be evaluated using metrics such as the R2 score, mean square error (MSE), or mean absolute error (MAE). Furthermore, a supervised learning (discrete target) model, also referred to as a classification model can be evaluated using metrics such as accuracy, precision, recall, f1 score, the area under ROC curve (AUC), etc.
Unlike machine learning models that can be evaluated by using a single performance metric, a data visualization cannot be evaluated by looking at just a single metric. Instead, a good data visualization can be evaluated based on the characteristics or components of the data visualization.
In this article, we discuss the essential components of good data visualization. In Section II, we present the various components of a good data visualization. In Section III, we examine some examples of good data visualizations. A brief summary concludes the article.
II. Components of a Good Data Visualization
A good data visualization is made up of several components that have to be pieced up together to produce an end product:
a) Data Component: An important first step in deciding how to visualize data is to know what type of data it is, e.g. categorical data, discrete data, continuous data, time series data, etc.
b) Geometric Component: Here is where you decide what kind of visualization is suitable for your data, e.g. scatter plot, line graphs, barplots, histograms, qqplots, smooth densities, boxplots, pairplots, heatmaps, etc.
c) Mapping Component: Here you need to decide what variable to use as your x-variable (independent or predictor variable) and what to use as your y-variable (dependent or target variable). This is important especially when your dataset is multi-dimensional with several features.
d) Scale Component: Here you decide what kind of scales to use, e.g. linear scale, log scale, etc.
e) Labels Component: This includes things like axes labels, titles, legends, font size to use, etc.
f) Ethical Component: Here, you want to make sure your visualization tells the true story. You need to be aware of your actions when cleaning, summarizing, manipulating, and producing a data visualization and ensure you aren’t using your visualization to mislead or manipulate your audience.
III. Examples of Good Data Visualizations
III.1 Barplots
III.1.1 Simple barplot
Figure 1. 2016 Market share of electric vehicles in selected countries. Image by Benjamin O. Tayo.
III.1.2 Barplot with a categorical variable
Figure 2. Quantity of advertising emails from Best Buy (BBY), Walgreens (WGN) and Walmart (WMT) in 2018. Image be Benjamin O. Tayo.
III.1.3 Barplot for comparison
Figure 3. 2020 Worldwide number of jobs by skill using LinkedIn search tool. Image by Benjamin O. Tayo.
III.2 Density plot
Figure 4. The probability distribution of the sample means of a uniform distribution using Monte-Carlo simulation. Image by Benjamin O. Tayo.
III.3 Scatter and line plots
III.3.1 Simple scatter plot
Figure 5. Ideal and fitted plots for the crew variable using multiple regression analysis. Image by Benjamin O. Tayo.
III.3.2 Scatter plot for comparison
Figure 6. Mean cross-validation scores for different regression models. Image by Benjamin O. Tayo.
III.3.3 Multiple scatter plots
Figure 7. Regression analysis using different values of the learning rate parameter. Image by Benjamin O. Tayo.
III.3.4 Scatter pairplot
Figure 8. Pairplot showing relationships between features in the dataset. Image source: Benjamin O. Tayo.
III.4 Heatmap plot
Figure 9. Covariance matrix plot showing correlation coefficients between features in the dataset. Image source: Benjamin O. Tayo.
III. 5 Weather data plot
Figure 10. Record temperatures for different months between 2005 to 2014. Image by Benjamin O. Tayo.
IV. Summary and Conclusion
In summary, we’ve discussed the essential components of good data visualization. Unlike in predictive modeling where a model can be evaluated using a single evaluation metric, in data visualization, evaluation is carried out by analyzing the visualization to make sure it contains all essential components of a good data visualization.
Additional Data Science/Machine Learning Resources
Data Science Minimum: 10 Essential Skills You Need to Know to Start Doing Data Science
Data Science Curriculum
Essential Maths Skills for Machine Learning
5 Best Degrees for Getting into Data Science
Theoretical Foundations of Data Science — Should I Care or Simply Focus on Hands-on Skills?
Machine Learning Project Planning
How to Organize Your Data Science Project
Productivity Tools for Large-scale Data Science Projects
A Data Science Portfolio is More Valuable than a Resume
For questions and inquiries, please email me: [email protected] | https://medium.com/towards-artificial-intelligence/how-to-evaluate-a-data-visualization-e04f75e5ae78 | ['Benjamin Obi Tayo Ph.D.'] | 2020-06-22 12:43:36.868000+00:00 | ['Python', 'Data Science', 'Matplotlib', 'Data Visualization', 'Descriptive Analytics'] | Title Evaluate Data VisualizationContent Image Benjamin Tayo Evaluate Data Visualization good data visualization essential component place Introduction Data visualization one important branch data science one main tool use analyze study relationship different variable Data visualization used descriptive analytics Data visualization also used machine learning data preprocessing analysis feature selection model building model testing model evaluation machine learning predictive analytics several metric used model evaluation example supervised learning continuous target model evaluated using metric R2 score mean square error MSE mean absolute error MAE Furthermore supervised learning discrete target model also referred classification model evaluated using metric accuracy precision recall f1 score area ROC curve AUC etc Unlike machine learning model evaluated using single performance metric data visualization cannot evaluated looking single metric Instead good data visualization evaluated based characteristic component data visualization article discus essential component good data visualization Section II present various component good data visualization Section III examine example good data visualization brief summary concludes article II Components Good Data Visualization good data visualization made several component pieced together produce end product Data Component important first step deciding visualize data know type data eg categorical data discrete data continuous data time series data etc b Geometric Component decide kind visualization suitable data eg scatter plot line graph barplots histogram qqplots smooth density boxplots pairplots heatmaps etc c Mapping Component need decide variable use xvariable independent predictor variable use yvariable dependent target variable important especially dataset multidimensional several feature Scale Component decide kind scale use eg linear scale log scale etc e Labels Component includes thing like ax label title legend font size use etc f Ethical Component want make sure visualization tell true story need aware action cleaning summarizing manipulating producing data visualization ensure aren’t using visualization mislead manipulate audience III Examples Good Data Visualizations III1 Barplots III11 Simple barplot Figure 1 2016 Market share electric vehicle selected country Image Benjamin Tayo III12 Barplot categorical variable Figure 2 Quantity advertising email Best Buy BBY Walgreens WGN Walmart WMT 2018 Image Benjamin Tayo III13 Barplot comparison Figure 3 2020 Worldwide number job skill using LinkedIn search tool Image Benjamin Tayo III2 Density plot Figure 4 probability distribution sample mean uniform distribution using MonteCarlo simulation Image Benjamin Tayo III3 Scatter line plot III31 Simple scatter plot Figure 5 Ideal fitted plot crew variable using multiple regression analysis Image Benjamin Tayo III32 Scatter plot comparison Figure 6 Mean crossvalidation score different regression model Image Benjamin Tayo III33 Multiple scatter plot Figure 7 Regression analysis using different value learning rate parameter Image Benjamin Tayo III34 Scatter pairplot Figure 8 Pairplot showing relationship feature dataset Image 
source Benjamin Tayo III4 Heatmap plot Figure 9 Covariance matrix plot showing correlation coefficient feature dataset Image source Benjamin Tayo III 5 Weather data plot Figure 10 Record temperature different month 2005 2014 Image Benjamin Tayo IV Summary Conclusion summary we’ve discussed essential component good data visualization Unlike predictive modeling model evaluated using single evaluation metric data visualization evaluation carried analyzing visualization make sure contains essential component good data visualization Additional Data ScienceMachine Learning Resources Data Science Minimum 10 Essential Skills Need Know Start Data Science Data Science Curriculum Essential Maths Skills Machine Learning 5 Best Degrees Getting Data Science Theoretical Foundations Data Science — Care Simply Focus Handson Skills Machine Learning Project Planning Organize Data Science Project Productivity Tools Largescale Data Science Projects Data Science Portfolio Valuable Resume question inquiry please email benjaminobigmailcomTags Python Data Science Matplotlib Data Visualization Descriptive Analytics |
3,556 | The Five Rules You Need to Know to Keep Yourself From Getting Stuck | The Five Rules You Need to Know to Keep Yourself From Getting Stuck brett fox Follow Sep 3 · 4 min read
I’ve been doing a lot of research on Steve Jobs lately. Did you know that he used to practice his Macworld and product launch presentations around 200 times before the big event?
Picture: Depositphotos
Watch any Jobs presentation and you’ll notice how smooth and effortless he appears on stage. You’ll also notice that, when he does a demo (which is part of just about every Jobs presentation), the demo almost always works perfectly.
On the rare occasion when something goes wrong, Jobs calmly moves on. He never, ever gets stuck.
Rule number one for never getting stuck in business: The more prepared you are the less chance you’ll get stuck.
200 Rehearsals. There’s a reason why Jobs was so, so good, and it’s all the time and effort he put into practicing. The analogy in business is preparation.
I know, from my own business experience, that the more prepared I was, the more likely I could respond to any business situation.
For example, we were in the middle of raising our next round of funding. There were four investors that were a couple weeks away from giving us term sheets. At the same time, we had just about exhausted our funding.
In our board meeting, one of our investors, “Raul,” surprised me by demanding the company be shut down. This was truly a make or break moment for us.
From somewhere deep inside me, I said, “Instead of shutting the company down, why don’t we put everyone on minimum wage. That will give us plenty of time to see if one of the investors give us a term sheet.”
Raul agreed with my suggestion, and the company was saved (three of the four potential investors did give us term sheets). However, imagine if I was stuck? The company would have died right then and there.
Rule number two for never getting stuck in business: Do your contingency planning ahead of time.
The reason I was able to give Raul a solution was all the preparation I had done. I had actually discussed the possibility of being shut down with Tina, our controller.
Tina and I talked through the financial contingencies, and we knew that going to minimum wage would buy us a six week lifeline.
Rule number three for never getting stuck in business: Having a great team around you really, really helps.
As much as I would like to take credit for the idea of going to minimum wage, it was Tina’s idea, not mine. Tina was excellent at her job, and I was lucky to work with her.
It was the same with the other members of the executive team. Adolfo, Dave, Jeroen, and Shoba always had contingencies for their projects.
Rule number four for never getting stuck in business: Yes, you actually do need a business plan.
Part of the reason we were able to have contingencies is we had a business plan. Without a plan, you don’t know where you’re going. And, if you don’t know where you’re going, you’re much more likely to get stuck.
Even if you do come up with an alternative, you’re much more likely to make the wrong decision without a plan.
You won’t know how to quickly research if your idea makes any sense. This new alternative is just something that sounds good in the moment.
Rule number five for never getting stuck in business: Slow down, if you do get stuck on a problem you haven’t prepared for.
Every once in a while, you will get a problem that you haven’t prepared any contingencies for. What do you do then?
Our natural tendency is to speed up our decision making process when we’re handed a problem we haven’t expected. Instead, you should try and slow down your decision making process.
Do you remember the scene in the movie, “Apollo 13,” when the team on the ground has to figure out how design a carbon dioxide filter? This was not a problem anyone prepared for, yet the team solved it.
It was all the preparation, just like Steve Jobs did for his presentations, that allowed the NASA team to solve all the problems they had to solve to get the Apollo 13 astronauts home.
They didn’t speed up, and they didn’t panic. They slowed down and solved the problems. That’s what you’ll do too.
For more, read: https://www.brettjfox.com/what-are-the-five-skills-you-need-to-be-a-great-ceo | https://medium.com/swlh/the-five-rules-you-need-to-know-to-keep-yourself-from-getting-stuck-3ecc23ce3756 | ['Brett Fox'] | 2020-09-04 06:42:02.109000+00:00 | ['Leadership', 'Business', 'Startup', 'Entrepreneurship', 'Venture Capital'] | Title Five Rules Need Know Keep Getting StuckContent Five Rules Need Know Keep Getting Stuck brett fox Follow Sep 3 · 4 min read I’ve lot research Steve Jobs lately know used practice Macworld product launch presentation around 200 time big event Picture Depositphotos Watch Jobs presentation you’ll notice smooth effortless appears stage You’ll also notice demo part every Jobs presentation demo almost always work perfectly rare occasion something go wrong Jobs calmly move never ever get stuck Rule number one never getting stuck business prepared le chance you’ll get stuck 200 Rehearsals There’s reason Jobs good it’s time effort put practicing analogy business preparation know business experience prepared likely could respond business situation example middle raising next round funding four investor couple week away giving u term sheet time exhausted funding board meeting one investor “Raul” surprised demanding company shut truly make break moment u somewhere deep inside said “Instead shutting company don’t put everyone minimum wage give u plenty time see one investor give u term sheet” Raul agreed suggestion company saved three four potential investor give u term sheet However imagine stuck company would died right Rule number two never getting stuck business contingency planning ahead time reason able give Raul solution preparation done actually discussed possibility shut Tina controller Tina talked financial contingency knew going minimum wage would buy u six week lifeline Rule number three never getting stuck business great team around really really help much would like take credit idea going minimum wage Tina’s idea mine Tina excellent job lucky work member executive team Adolfo Dave Jeroen Shoba always contingency project Rule number four never getting stuck business Yes actually need business plan Part reason able contingency business plan Without plan don’t know you’re going don’t know you’re going you’re much likely get stuck Even come alternative you’re much likely make wrong decision without plan won’t know quickly research idea make sense new alternative something sound good moment Rule number five never getting stuck business Slow get stuck problem haven’t prepared Every get problem haven’t prepared contingency natural tendency speed decision making process we’re handed problem haven’t expected Instead try slow decision making process remember scene movie “Apollo 13” team ground figure design carbon dioxide filter problem anyone prepared yet team solved preparation like Steve Jobs presentation allowed NASA team solve problem solve get Apollo 13 astronaut home didn’t speed didn’t panic slowed solved problem That’s you’ll read httpswwwbrettjfoxcomwhatarethefiveskillsyouneedtobeagreatceoTags Leadership Business Startup Entrepreneurship Venture Capital |
3,557 | Why You Should Read More to Write Better | Writing and playing music have many parallels.
When you learn to play the guitar, what do you learn first?
Do you start composing your own songs and playing complex chord progressions right from the start? Unlikely.
Most people who learn to play the guitar start by playing other people’s songs. Then, when they’ve become comfortable with the instrument, they may or may not decide to create songs of their own.
Why is it helpful to practice someone else’s songs? Because it develops skill. It gives you the tools and musical vocabulary to express something yourself.
The same is true with writing.
Novelist Steven King speaks of the importance of reading and writing:
“The more fiction you read and write, the more you’ll find your paragraphs forming on their own.” — Stephen King, On Writing: A Memoir Of The Craft
To become a good writer requires both reading and writing a lot.
It’s the same with music. If you never listen to or learn other people’s songs, it can be very difficult, if not impossible, to learn to write your own songs.
Paul Simon met his musical partner Art Garfunkel when they were children. Their idols were the Everly Brothers, who were known for their beautiful harmonies in songs such as “Wake Up Little Suzie” and “Dream”.
The two boys would imitate the close two-part harmonies of the Everly Brothers until they developed a style of their own. We hear those beautiful harmonies in songs such as “Sounds of Silence” and “Scarborough Fair/Canticle”. What they learned from imitation became their own style.
What you learn from reading can develop into your writing style.
Read read read. Write write write.
“If you want to be a writer, you must do two things above all others: read a lot and write a lot. There’s no way around these two things that I’m aware of, no shortcut.” — Stephen King, On Writing: A Memoir Of The Craft
Stephen describes himself as a slow reader, yet he reads seventy or eighty books a year, mostly fiction. Not for research, but because he likes to read.
Do you like to read? If you want to be a good writer, it’s important to take time to read a lot. If one of the most successful writers does it, maybe we should too.
Will reading a lot influence your writing to sound like someone else?
If you’re a new writer, sounding like someone else isn’t all that bad. If they’re a good writer that is.
With time and practice, you will develop your own style. That’s the key.
Even the Beatles honed their craft by playing other people’s songs. As a new band, they played in Hamburg Germany every night for a couple of years. This helped them to develop their skill and unique sound that made them famous. | https://medium.com/the-innovation/why-should-you-read-more-to-write-better-8ed636875ba0 | ['Gary Mcbrine'] | 2020-12-24 17:07:27.371000+00:00 | ['Writing Tips', 'Writing', 'Reading', 'Art', 'Creativity'] | Title Read Write BetterContent Writing playing music many parallel learn play guitar learn first start composing song playing complex chord progression right start Unlikely people learn play guitar start playing people’s song they’ve become comfortable instrument may may decide create song helpful practice someone else’s song develops skill give tool musical vocabulary express something true writing Novelist Steven King speaks importance reading writing “The fiction read write you’ll find paragraph forming own” — Stephen King Writing Memoir Craft become good writer requires reading writing lot It’s music never listen learn people’s song difficult impossible learn write song Paul Simon met musical partner Art Garfunkel child idol Everly Brothers known beautiful harmony song “Wake Little Suzie” “Dream” two boy would imitate close twopart harmony Everly Brothers developed style hear beautiful harmony song “Sounds Silence” “Scarborough FairCanticle” learned imitation became style learn reading develop writing style Read read read Write write write “If want writer must two thing others read lot write lot There’s way around two thing I’m aware shortcut” — Stephen King Writing Memoir Craft Stephen describes slow reader yet read seventy eighty book year mostly fiction research like read like read want good writer it’s important take time read lot one successful writer maybe reading lot influence writing sound like someone else you’re new writer sounding like someone else isn’t bad they’re good writer time practice develop style That’s key Even Beatles honed craft playing people’s song new band played Hamburg Germany every night couple year helped develop skill unique sound made famousTags Writing Tips Writing Reading Art Creativity |
3,558 | Alcohol is a Tool of Systemic Oppression | Lurking behind underfunded education, peering through rows of income inequality and over-policing, is alcohol.
Photo by James Sutton on Unsplash
Before a community can have a high density of alcohol outlets, those seeking liquor licenses must be awarded them from their state or local municipality, most commonly some sort of Alcohol Beverage Control board. Communities with high densities of alcohol outlets are the result of permissive decisions made by state or municipal officials. Decisions made by state or municipal officials are decisions of the system.
The decision to populate certain communities with high densities of alcohol outlets is a systemic decision.
“Welcome to America, where profits are prioritized over the protection of life.”
These outlets are not your neighborhood Saturday lemonade stand, they dangerously encourage escapism and distribute oppression, not $.25 cups of poorly stirred Countrytime.
Alcohol licenses are sought after because of the high-profit margins of alcohol. As a culture, the dangers of alcohol have continued to be silenced at the interest of income. Welcome to America, where profits are prioritized over the protection of life.
Beyond the dangers that an individual invites onto themselves by consuming alcohol, the menu of harmful outcomes related to alcohol includes community oppression.
Greater demand for alcohol leads to the opening of a greater number of alcohol outlets. These outlets will cluster where consumer activity is greatest, and the number of outlets will proliferate until the demand is met. Economics 101, supply and demand.
But, one can’t be fooled into thinking it’s that clear.
Alcohol is a money-maker, so surely owners of alcohol outlets will aim to maximize profits by locating their outlet in areas where rent is low.
It’s true. Greater numbers of outlets will tend to open in areas where rents are low, resulting in higher concentrations of outlets in low-income areas, exposing the nearby populations to the risks associated with these drinking places.
Low-income isn’t a segregating classification, but Blacks do face higher densities of liquor stores than do Whites. A 2000 analysis found that liquor stores are disproportionately located in predominantly Black census tracts.
This is where alcohol becomes a tool of oppression.
The over-concentration of alcohol outlets exposes Black communities to all the negative consequences of alcohol. There are significant and substantive relationships between outlet densities, alcohol-related traffic crashes, violence, and crime.
A systemic tool of oppression.
The decision to award liquor licenses to an outlet that will locate itself in a low-income community to meet demand and maximize profits is an intentional act controlled by state and local institutions. When that alcohol outlet is known to increase harm to the community and it still created, that is informed oppression. When the only method of obtaining this license is through a state-controlled board it becomes systemic oppression.
The location of an alcohol outlet is only the beginning.
Low-income communities face systemic racism, over-policing, police brutality, greater health risks, and are unable to rely on underfunded education systems to equip their populations with tools to cope with these inequities.
Enter the all too convenient alcohol.
Alcohol depresses the central nervous system for a temporary relaxation of the consumer, masking it’s numbing and escapist properties. Disenfranchised from upward mobility, and exhausted from the constant struggle to overcome oppression, agents marketed as sips of stress relief are indulged.
Oppressed individuals of low-income communities are overly exposed to alcohol access.
A dangerous opportunity.
Dangerous and wrong.
For a community to be systemically oppressed, underfunded with education and opportunity, and have an overabundance of a substance that is the third leading cause of preventable death is wrong.
Would you like your oppression on or off the rocks?
In 1855, abolitionist movement leader Frederick Douglass waived off the bartender and chose water.
Recognizing that slave masters carefully controlled the slaves’ access to Alcohol, Douglass found weekend and holiday breaks from normally encouraged abstinence to be controlled promotion of drunkenness as a way to keep the slave in “a state of perpetual stupidity” and “disgust the slave with his freedom.” Douglass further noted how the slave master’s promotion of drunkenness reduced the risk of slave rebellions.
“When a slave was drunk, the slaveholder had no fear that he would plan an insurrection; no fear that he would escape to the north. It was the sober, thinking slave who was dangerous, and needed the vigilance of his master to keep him a slave.” — Frederick Douglass, 1855
Sobriety became a stairway to freedom.
A century later, Malcolm X acknowledged alcohol as a tool of African American oppression:
“Almost everyone in Harlem needed some kind of hustle to survive, and needed to stay high in some way to forget what they had to do to survive.” — Malcolm X
Like Douglass, Malcolm X saw alcohol as an agent that numbed the pain of cultural oppression and suppressed the potential for political protest and economic self-determination.
To fail to address systemic racism or provide an equitable education only to then, in great density, tempt communities with an agent known to allow escapism, is strategic oppression. It’s a strategic and cowardly suppression of communities that for decades have been gaslighted with echos of “You didn’t have to drink it”, equivalent to a modern-day “Just say no” campaign.
Both utterly incorrect.
Alcohol is an addictive agent of escape that the medical community at large acknowledges as an intoxicating, addictive, toxic, carcinogenic drug, and not a good choice as a therapeutic agent. A 2016 Systematic Review and Meta-Analysis of Alcohol Consumption and All-Cause Mortality, found no health protections at low intake levels, and collectively concluded that the public needs to be informed that drinking alcohol is very unlikely to improve their health.
State and municipal officials are empowered with a trust they will uplift, not oppress their communities. Even science confirms there is no need to populate low-income communities with alcohol outlets, yet an ignorant American truth is that profits will be prioritized over life.
By now, it’s likely, that a reader would excuse themselves from contributing to any oppression but “I don’t purchase alcohol from outlets in low-income areas” is an incorrect dismissal.
The spirits, beer, cider, and wine purchased outside of low-income communities is also sold in low-income communities. A portion of the money from each purchase travels back upstream to fund the distribution channels and creators of the very same alcohol that continues populate outlets of low-income communities. More, these funds contribute to an alcohol industry that spends a collective $2.2 Billion on traditional media advertising to convince us that alcohol can’t be that bad.
Worse, white Americans drink more alcohol than other populations.
Each dollar spent contributes to oppression.
Alcohol is a tool of oppression and low-income communities are too exposed to alcohol. When the distribution of alcohol outlets is controlled by officials, who are educated on the dangers, yet continue to grant licenses to outlets in low-income communities, it then becomes yet another tool of systemic oppression.
Like so many aspects of society, only now are we beginning to understand how oppression is not an attitude but a product of systemic decisions and how complicit we might be.
Richie. Human.
The difference between Seth Godin, The Morning Brew, and me? I respect your inbox, curating only one newsletter per month — Join my behind-the-words monthly newsletter to feel what it’s like to receive a respectful newsletter.
And, for those interested in what else I’m building, come over to RICKiRICKi.
Meet me 🤠 • LinkedIn • Instagram • Twitter • TikTok • Facebook • YouTube | https://rickieticklez.medium.com/alcohol-is-a-tool-of-systemic-oppression-536f92d735d1 | ['Richie Crowley'] | 2020-06-08 15:21:17.072000+00:00 | ['Addiction', 'Equality', 'Society', 'Health', 'Lifestyle']
3,559 | Git Branch Control when Deploying on AWS with Serverless Framework | If you are curious about the Serverless framework, you can read this excellent article explaining how to deploy a REST API with Serverless in 15 minutes.
Deploy to the Cloud!
We use the amazing Serverless toolkit to deploy to AWS. We managed the different environments using environment variables as explained in the Serverless documentation.
To deploy to the desired environment, we simply run:
sls deploy --stage TARGET_ENV --aws-profile sicara
where TARGET_ENV is the target environment e.g., development , staging or production .
The application environments
A development process typically includes the following environments:
The development environment: This is the environment where the developers run the code they are writing. Because the developers need to test their code intensively, the development environment can sometimes be broken — and it is even meant to be.
The staging environment: This is where the product owner validates the new features. It should never be broken so that validation can be done at any time. The purpose of this environment is to show the developers' daily work, so features may be partially implemented.
The production environment: This is the environment for the end-users. It should never be broken and features should be complete.
The problem: you could deploy bad/buggy code
When deploying, local files are zipped and uploaded directly to AWS.
There is no control of the local files before they are uploaded, which is fine when deploying to the development environment but wrong for the staging or production environments.
Local files uploaded to the staging environment in the Cloud
In the screenshot above, the git branch sprint18/feature919/awesome-feature is deployed to the staging environment. This could be wrong for several reasons:
the branch sprint18/feature919/awesome-feature has not yet been merged into the master branch, meaning it was not reviewed by the other developers. Thus, it could break the staging environment.
the branch sprint18/feature919/awesome-feature is local, meaning it does not include features developed by other members of the team. Thus, it could cause regressions by making these other features disappear.
there are pending changes not yet committed. Thus, the deployed code is not versioned and could eventually be lost.
The solution: a Serverless plugin
To prevent us from deploying the wrong Git branch to the wrong environment, we added a hook to our deployment, by implementing a Serverless plugin.
You can learn how to write your own Serverless plugins in this blog post.
Here is the code:
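The full plugin is published on Sicara's website linked below; as a stand-in, here is a minimal, hypothetical sketch of what such a pre-deploy hook could look like (the branch rules, stage names and messages are illustrative, not the original code):

const { execSync } = require("child_process");

class CheckGitBranchPlugin {
  constructor(serverless, options) {
    this.serverless = serverless;
    this.options = options;
    // Run our check just before Serverless starts the actual deployment
    this.hooks = {
      "before:deploy:deploy": this.checkGitBranch.bind(this),
    };
  }

  checkGitBranch() {
    const stage = this.options.stage || "development";
    const branch = execSync("git rev-parse --abbrev-ref HEAD").toString().trim();
    const pendingChanges = execSync("git status --porcelain").toString().trim();

    if (stage === "development") return; // anything goes on development

    if (pendingChanges) {
      throw new Error(`Commit or stash your local changes before deploying to ${stage}`);
    }
    if (branch !== "master") {
      throw new Error(`Only the master branch may be deployed to ${stage} (current branch: ${branch})`);
    }
    this.serverless.cli.log(`Git check passed: deploying ${branch} to ${stage}`);
  }
}

module.exports = CheckGitBranchPlugin;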
… Read the full article on Sicara’s website here. | https://medium.com/sicara/control-deployment-git-aws-serverless-plugin-cd12de13abb6 | ['Antoine Toubhans'] | 2020-01-30 17:28:13.314000+00:00 | ['Serverless', 'AWS', 'Data Engineering', 'Git']
3,560 | On-Device AI Optimization — Leveraging Driving Data to Gain an Edge | Leveraging the Domain Characteristics of Driving Data
When I began my journey at Nauto a year ago, I was commissioned to replace our existing object detector with a more efficient model. After some research and experimentation, I arrived at a new architecture that was able to achieve an accuracy improvement of over 40% mAP* relative to our current detector, while running almost twice as fast. The massive improvement comes largely thanks to the mobile-targeted NAS design framework pioneered by works such as MnasNet and MobileNetV3.
*mAP (mean average precision) is a common metric for evaluating the predictive performance of object detectors.
Relative to our current model, the new detector reduces device inference latency by 43.4% and improves mAP by 42.7%.
Informed Channel Reduction
However, the most interesting improvements surfaced as I looked for ways to further push the boundary of the latency/accuracy curve. During my research I came across an intriguing finding by the authors of Searching for MobileNetV3, a new state-of-the-art classification backbone for mobile devices. They discovered that when adapting the model for the task of object detection, they were able to reduce the channel counts of the final layers by a factor of 2 with no negative impact to accuracy.
The underlying idea was simple: MobileNetV3 was originally optimized to classify the 1000 classes of the ImageNet dataset, while the object detection benchmark, COCO, only contains 90 output classes. Identifying a potential redundancy in layer size, the authors were able to achieve a 15% speedup without sacrificing a single percentage of mAP.
Compared to popular benchmark datasets like ImageNet (1000) and COCO (90), the driving data we work with at Nauto consists of a minuscule number of distinct object classes.
Intrigued, I wondered if I could take this optimization further. In our perception framework we are only interested in detecting a handful of classes such as vehicles, pedestrians, and traffic lights — in total amounting to a fraction of the 90–1000 class datasets used to optimize state-of-the-art architectures. So I began to experiment with reducing the late stage layers of my detector by factors of 4, 8, and all the way up to 32 and beyond. To my surprise, I found that after applying aggressive channel reduction I was able to reduce latency by 22%, while also improving accuracy by 11% mAP relative to the published model.
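To make the idea concrete, here is a rough, hypothetical sketch (not Nauto's actual code) of a detection head whose late-stage channel counts are divided by a configurable factor:

import tensorflow as tf

def detection_head(features, num_anchors, num_classes, channel_divisor=8):
    # A published head might use e.g. 256 channels; shrink it for a small class set
    channels = max(32, 256 // channel_divisor)
    x = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu")(features)
    x = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    # Per-anchor box regression and class prediction outputs
    boxes = tf.keras.layers.Conv2D(num_anchors * 4, 3, padding="same")(x)
    scores = tf.keras.layers.Conv2D(num_anchors * num_classes, 3, padding="same")(x)
    return boxes, scores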
My original hope was to achieve a modest inference speedup with limited negative side-effects — I never expected to actually see an improvement in accuracy. One possible explanation is that while the original architecture was optimal for the diverse 90 class COCO dataset, it is overparameterized for the relatively uniform road scenes experienced by our devices. In other words, removing redundant channels may have improved overall accuracy in a similar way to how dropout and weight decay help prevent overfit.
At any rate, this optimization illustrates how improving along one axis of the latency/accuracy curve can impact performance in the other. In this case, however, the unintentional side-effect was positive. In fact, we broke the general rule of the trade-off by making a simultaneous improvement in both dimensions.
Applying aggressive channel reduction to the late-stage layers of the detector resulted in a 22% speedup and an 11% improvement in mAP relative to the baseline model.
Task-specific Data Augmentation
The success I had with channel reduction motivated me to look for other ways to leverage the uniqueness of driving data. Something that immediately came to mind was a study done by an old colleague of mine while I worked at my previous company, DeepScale. Essentially, he found that conventional data augmentation strategies like random flip and random crop**, while generally effective at reducing overfit, can actually hurt performance on driving data. For his application, simply removing the default augmentors resulted in a 13% improvement in accuracy.
**Random flip selects images at random to be flipped (typically across the vertical axis). Random crop selects images to be cropped and resized back to original resolution (effectively zooming in).
Again, the underlying idea is simple: while benchmark datasets like COCO and ImageNet contain a diverse collection of objects captured by various cameras from many different angles, driving data is comparatively uniform. In most applications the camera positions are fixed, the intrinsics are known, and the image composition will generally consist of the sky, the road, and a few objects. By introducing randomly flipped and zoomed-in images, you may be teaching your model to generalize to perspectives it will never actually experience in real life. This type of overgeneralization can be detrimental to overall accuracy, particularly for mobile models where predictive capacity is already limited.
Initially, I had adopted the augmentation scheme used by the original authors of my model. This included the standard horizontal flipping and cropping. I began my study by simply removing the random flip augmentor and retraining my model. As I had hoped, this single change led to a noticeable improvement in accuracy: about 4.5% relative mAP. (It must be noted that while we do operate in fleets around the world including left-hand-drive countries like Japan, my model was targeted for US deployment.)
In the default scheme, random crop (top) will often generate distorted, zoomed-in images that compromise object proportions and exclude important landmarks. Random horizontal flip (bottom), while not as obviously harmful, dilutes the training data with orientations the model will never see in production (US). The constrained-crop augmentor takes a more conservative approach; its outputs more closely resemble the viewing angles of real world Nauto devices.
I then shifted my focus to random crop. By default, the selected crop was required to have an area between 10% to 100% of the image, and an aspect ratio of 0.5 to 2.0. After examining some of the augmented data, I quickly discovered two things: first, many of the images were so zoomed-in that they excluded important context clues like lane-markers; and second, many of the objects were noticeably distorted in instances where a low aspect ratio crop was resized back to model resolution.
I was tempted at first to remove random crop entirely as my colleague had, but I realized there is one important difference between Nauto and full stack self-driving companies. Because we’re deployed as an aftermarket platform in vehicles ranging from sedans to 18-wheelers, our camera position varies significantly across fleets and individual installations. My hypothesis was that a constrained, less-aggressive crop augmentor would still be beneficial as a tool to reflect such a distribution.
I began experimenting by fixing the aspect ratio to match the input resolution and raising the minimum crop size. After a few iterations, I found that a constrained augmentor using a fixed ratio and minimum crop area of 50% improved accuracy by 4.4% mAP relative to the default cropping scheme. To test my hypothesis, I also repeated the trial with random crop completely removed. Unlike it had for my colleague, the no-augmentation scheme actually reduced mAP by 5.3% (1% worse than baseline), confirming that conservative cropping can still be beneficial in applications where camera position varies across vehicles.
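As an illustration only (the augmentation pipeline used in the original experiments is not shown in the post), a constrained crop with a fixed aspect ratio and a minimum area of 50% could look like this:

import random

def constrained_random_crop(image, min_area=0.5):
    # Keep the image's own aspect ratio and retain at least `min_area` of its area
    h, w = image.shape[:2]
    scale = random.uniform(min_area, 1.0) ** 0.5  # square root so the *area* stays in range
    new_h, new_w = int(h * scale), int(w * scale)
    top = random.randint(0, h - new_h)
    left = random.randint(0, w - new_w)
    return image[top:top + new_h, left:left + new_w]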
The final scheme (no-flip, constrained-crop) in total yields a 9.1% relative improvement over the original baseline (flip, crop) and a 10.2% improvement over augmentation at all.
The baseline augmentation scheme (grey) consists of random horizontal flip and random crop (aspect ratio ∈ [0.5, 2.0] and area ∈ [0.1, 1.0]). Removing random flip improved mAP by 4.5%. From there, removing random crop reduced mAP by 5.3% (-1% relative to baseline). Using a constrained crop (fixed ratio, area ∈ [0.25, 1.0]) improved mAP by 7.9% relative to baseline. And finally, the most constrained crop (fixed ratio, area ∈ [0.5, 1.0]) resulted in the largest improvement: 9.1% relative to baseline.
Data-Driven Anchor Box Tuning
I’ll wrap it up with one more interesting finding. The majority of today’s object detection architectures form predictions based on a set of default anchor boxes. These boxes (also sometimes called priors) typically span a range of scales and aspect ratios in order to better detect objects of various shapes and sizes.
SSD default anchor boxes. Liu, Wei et al. “SSD: Single Shot MultiBox Detector.” Lecture Notes in Computer Science (2016): 21–37. Crossref. Web.
At this point, I was focusing my efforts on improving the core vehicle detector that drives our forward collision warning system (FCW). While sifting through our data, I couldn’t help but once again notice its uniformity compared to competition benchmarks; overall image composition aside, the objects themselves seemed to fall into a very tight distribution of shapes and sizes. So I decided to take a deeper look at the vehicles in our dataset.
Object distribution of FCW dataset. Scale is calculated for each object as bounding box height relative to image height (adjusted by object and image aspect ratios). The average object is relatively small, with a median scale of 0.057 and a 99th percentile of 0.31. Objects are also generally square, with a median aspect ratio of 1.02 and 99th percentile of 1.36.
As it turns out, the majority of objects are relatively square, with more than 96% falling between aspect ratios of 0.5 to 1.5. This actually makes a lot of sense in the context of FCW, as the most relevant objects will generally be the rear profiles of vehicles further ahead on the road. The size distribution follows more of a long tail distribution, but even so, the largest objects occupy less than three fourths of the image in either dimension, while 99% occupy less than a third.
Once again, I went back to reevaluate my initial assumptions. Up until now I had adopted the default set of anchor boxes used by the original authors, which ranged in scale between 0.2 and 0.9, using aspect ratios of ⅓, ½, 1, 2, and 3. While this comprehensive range makes sense for general-purpose object detection tasks like COCO, I wondered if I would again be able to find redundancy in the context of autonomous driving.
I began by experimenting with a tighter range of aspect ratios, including {½, 1, 1½} and {¾, 1, 1¼}. Surprisingly, the largest gain in both speed and accuracy came simply from using square anchors only, which effectively cut the total anchor count by a factor of 5. I then turned my attention to box sizes, realizing that the default range of [0.2, 0.9] overlapped with less than 5% of the objects in my dataset. Shrinking the anchor sizes to better match the object distribution yielded another modest improvement.
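A small sketch of the difference (the tuned scale values below are hypothetical; the post only states that the scales were shrunk to match the object distribution):

def anchor_shapes(scales, aspect_ratios):
    # Return (width, height) pairs relative to the image size
    return [(s * ar ** 0.5, s / ar ** 0.5) for s in scales for ar in aspect_ratios]

# Baseline: 5 aspect ratios spread over scales in [0.2, 0.9] -> 25 shapes per location
default_anchors = anchor_shapes([0.2, 0.375, 0.55, 0.725, 0.9], [1/3, 1/2, 1.0, 2.0, 3.0])

# Tuned: square boxes only, smaller scales -> 5 shapes per location
tuned_anchors = anchor_shapes([0.04, 0.08, 0.16, 0.32, 0.6], [1.0])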
In total, the new anchor boxes yielded an almost 20% inference speedup and a 2% relative mAP improvement across all object classes, sizes, and shapes.
The baseline model uses anchor boxes with scales ∈ [0.2, 0.9] and aspect ratios ∈ {⅓, ½, 1, 2, 3}. Simply removing all but the square boxes resulted in a speedup of 18.5% with no negative impact to accuracy. Further tuning the boxes to match the scale range of the object distribution resulted in a modest 2.1% relative gain in mAP.
Note: while the benchmarks within each optimization study are conducted in controlled experiments, a number of factors changed between individual studies. I chose not to present a cumulative improvement from start to finish in the interest of keeping this post short and focused on the 3 major optimizations. | https://medium.com/engineering-nauto/on-device-ai-optimization-leveraging-driving-data-to-gain-an-edge-f204838b5dff | ['Alex Wu'] | 2020-10-08 22:21:53.601000+00:00 | ['Data Science', 'Artificial Intelligence', 'Machine Learning', 'Automotive', 'Computer Vision']
3,561 | Submission Guidelines (Updated) | Greetings!
I bring to you some new updates regarding the submission guidelines for Know Thyself Heal Thyself. Please make sure to read them thoroughly in order to avoid any confusion in the future. Thank you!
Themes:
Spirituality
Philosophy
Holism
Life Lessons
Mindfulness
Self Awareness/Improvement/Growth
Love
Other (private message/email me at [email protected] to clarify whether what you want to submit fits in with the publication or not)
Types of writing:
Essays
Poetry
Fiction
Non fiction
Storytelling (parables/tales/fables)
Journal entries
How-To Articles
Both drafts and published stories are accepted (however, please make sure the published story is very recent, otherwise it will get lost in the publication’s archive and won’t receive much exposure).
Only one article per day/per contributor is accepted. Anything that exceeds this number will be published the following day. Please make sure your private notes are open so we can communicate if there are any issues to resolve.
Please note it may take longer than a few hours to publish your submissions on certain days such as Saturday/Sunday and holidays. Allow at least 12 hours before removing your draft and submitting it elsewhere.
Submission frequency:
As editor of KTHT, I reserve the right to remove contributors who have been inactive on the publication for longer than 90 days. The goal is to build a community of active writers who support, encourage, interact with each other (clap, respond, tag, etc.)
A weekly+weekend prompt, poetry challenge and stories/quotes/sayings are sent out weekly for inspiration.
Write with us:
If you’d like to become part of Know Thyself, leave a response to this article stating that you’d like to be added. It shouldn’t take longer than 24 hours for you to be able to start submitting. | https://medium.com/know-thyself-heal-thyself/submission-guidelines-updated-975591746812 | ['𝘋𝘪𝘢𝘯𝘢 𝘊.'] | 2020-12-20 13:07:25.348000+00:00 | ['Writing', 'Submission Guidelines', 'Know Thyself Heal Thyself', 'Inspiration', 'Creativity']
3,562 | What are *Args and **Kwargs in Python? | What are *Args and **Kwargs in Python?
Boost your functions to the n-level.
Photo by SpaceX on Unsplash
If you have been programming Python for a while, surely you’ll have had doubts about how to use a function properly. You went to the documentation and met *args and **kwargs inside the parameters of this function.
In matplotlib or in seaborn (to name but a few) there are regularly these words in the attributes of a function. Let’s take a look at a caption.
Image by Author
When I started seeing these attributes, my brain usually acted like they didn’t exist, until I recalled the power of these attributes and the convenience they provide. Using them, your functions can be a lot more versatile and scalable.
Usually, functions have some ‘natural’ attributes and each parameter is regularly predefined. For example, in the matplotlib function seen above, we have the “Data” attribute where its value “=None” is predefined. This means that if you don’t pass a value for “Data”, the call to the function will return an empty figure: None by default.
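As a toy illustration (this is not matplotlib’s real implementation), a plotting-style function could be written like this:

def plot(*args, data=None, **kwargs):
    # args collects the positional data series, kwargs collects the styling options
    print("data series:", args)
    print("styling options:", kwargs)

plot([1, 2, 3], [4, 5, 6], linewidth=2, linestyle="--", marker="o")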
What happens with the *args and **kwargs seen above? The args here are usually used to introduce the data, x1, x2 and so on. And the kwargs are used for the tuning of your graph. The properties of each graph are determined by the kwargs you introduce. Linewidth, linestyle, or marker are just some of the keyworded attributes you may already be familiar with. So you have already been using this **kwargs thing unconsciously. | https://medium.com/python-in-plain-english/args-and-kwargs-d89bdf56d49b | ['Toni Domenech'] | 2020-12-28 13:14:27.418000+00:00 | ['Python', 'Data Science', 'Matplotlib', 'Software Development', 'Programming']
3,563 | Deploying Python, NodeJS & VueJS as Microservices | In this article I cover a number of different concepts and technologies; Web Sockets, Python, Node, Vue JS, Docker, and bring them all together in a microservice architecture.
3 Microservices
Each component in this stack will be a microservice. I will take you through building each one, including running them in separate Docker containers.
Why Microservices
Microservices bring with them immense flexibility, scalability, and fault tolerance. Today we will only cover the flexibility. Scalability and Fault tolerance require other technologies such as Kubernetes which are outside the scope of this article.
The flexibility of Microservices is created by each service being independent of the others and importantly only doing one job.
Taking the example we are looking at in this article, we could easily swap out the VueJS front end and replace it with React.
Or we could add a Mongo DB Microservice and store all tweets for analysis at a later date.
Demo site
I have setup a demo site of what I am going to develop in this article. You can view it here http://medium-microservices.simoncarr.co.uk/
Of course this is just the VueJS front end, but underneath all the goodness of Python, Node, WebSockets and Docker are at work.
My approach to this project
Here is how I am going to approach this project
Develop the Python app and confirm we are receiving tweets
Develop the Web Socket service
Connect the Python App to the Web Socket service
Develop the Vue JS front end and connect it to the WebSocket Service
Create docker images for each of the 3 Micro Services
Create a Docker stack using docker-compose
Deploy the stack
Code on GitHub
All of the code for this project is available on GitHub
Twitter Client (Python)
https://github.com/simonjcarr/medium_microservices_twitter_client
Websocket server (NodeJS)
https://github.com/simonjcarr/medium_microservices_websocket_server
Twitter stream UI (VueJS)
https://github.com/simonjcarr/medium_microservices_twitter_ui
docker-compose file https://github.com/simonjcarr/medium_microservices_docker_compose
The images used in the docker-compose file need to be built first. You can use the Dockerfile’s in the above repos or follow the rest of this article to learn how to do it.
Creating a twitter app
Before we can receive tweets from Twitter, we will have to register an app at https://developer.twitter.com/
Once you have registered an App, a set of API keys will be generated that we can use to connect to the Twitter API.
In your browser navigate to https://developer.twitter.com/. If you don't already have one, register an account.
After logging in, click on Developer portal.
Hover your mouse over your username and from the dropdown menu, click Apps.
Click Create App and fill in the form you're presented with.
Once your app is created, you can retrieve your API credentials. There are two sets of credentials: Consumer API keys and Access token & Access token secret . You will need both sets of keys shortly for use in the Python App.
Create a project file structure
Create a folder called microservices
We will create folders for each of our individual microservices as we go along. For now, we will just need another folder for the Python twitter client.
In the microservices folder, create a subfolder called twitter_client
Creating the Python Twitter client
Make sure you are using Python 3. As of writing this article, I am using Python 3.7.6
I would recommend that you use a virtual python environment. I’m going to use pipenv. If you don’t already have pipenv you can install it with
pip install --user pipenv
Make sure you're in the twitter_client folder and run
pipenv install tweepy python-dotenv
I’m going to be using Tweepy to connect to the twitter API. I am also installing python-dotenv . This will allow me to put the Twitter API keys in a file called .dot . This file will not be uploaded to git, so I know my Twitter API keys will remain secret.
Create two new files
twitter_client.py
.env
Open the .env file in your code editor of choice and add the following lines. Replace the place holders <…> with the relevant keys from the app you registered in your twitter developer account.
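The variable names below are only an example (use whatever names your script reads); the values are the four keys from the Twitter app you registered:

TWITTER_CONSUMER_KEY=<your consumer API key>
TWITTER_CONSUMER_SECRET=<your consumer API secret>
TWITTER_ACCESS_TOKEN=<your access token>
TWITTER_ACCESS_SECRET=<your access token secret>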
Open twitter_client.py and add the following code.
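The original gist is not reproduced in this extract; the sketch below follows the description in the next paragraph (the environment variable names match the example .env above and are assumptions):

import os
import tweepy
from dotenv import load_dotenv

load_dotenv()

# Build the auth object from the keys stored in .env
auth = tweepy.OAuthHandler(os.getenv("TWITTER_CONSUMER_KEY"), os.getenv("TWITTER_CONSUMER_SECRET"))
auth.set_access_token(os.getenv("TWITTER_ACCESS_TOKEN"), os.getenv("TWITTER_ACCESS_SECRET"))

class TwitterListener(tweepy.StreamListener):
    def on_status(self, status):
        # Print the text of every tweet we receive
        print(status.text)

stream = tweepy.Stream(auth=auth, listener=TwitterListener())
stream.filter(track=["javascript", "nodejs", "python"])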
In the code above, I create an auth object from tweepy, then I create a class TwitterListener that extends tweepy.StreamListener . The on_status method simply prints the text of each tweet that is received. To start receiving tweets, I instantiate TwitterListener, create a new stream, and then add a filter to only receive tweets that contain one or all of the following values: javascript , nodejs , or python
If you run twitter_client.py now you should see a stream of tweets scroll up the terminal. It will continue to display new tweets in real-time until you stop the script.
If like me, you are using pipenv, you can run the script like so
pipenv run python twitter_client.py
Creating the Web Socket service
Now we know we can receive tweets, I am going to create the Web Socket service. I will use NodeJS to create this service.
If you don’t have NodeJS already installed you will need to visit the NodeJS Website https://nodejs.org/ and follow the instructions to download and install it.
I am using NodeJS version 12.13.1
Create a new folder in the microservices folder called websocket_server
Make sure you're in the websocket_server folder and enter the following command
npm init -y
This command will create a package.json file that will hold the project dependencies.
Enter the following command to install the ws module
npm i ws
Create a new file called app.js and open it in your code editor and add the code below
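The embedded gist is missing from this extract; here is a sketch consistent with the description that follows (line positions are approximate):

const WebSocket = require("ws");

const wss = new WebSocket.Server({ port: 8088 });

// Resend the received data to every connected client
const broadcast = (data) => {
  wss.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) client.send(data);
  });
};

wss.on("connection", (ws) => {
  ws.on("message", (data) => broadcast(data));
});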
The web socket server code is surprisingly simple. I am running the server on port 8088 (line 3)
Whenever the server receives a new message (line 13) it will resend the message to any clients that are connected by calling the broadcast function I have created. This simply loops through each connected client, sending the data to each (line 8)
Start the server by running the following command in the console in the websocket_server folder.
node app.js
You won't see anything yet, as the server is not receiving any data. We are going to sort that out now by updating the Python Twitter client.
Connect the twitter client to the web socket server
Open a new terminal and navigate to the root of the twitter_client folder that holds the Python code.
We need to install a python module that will create a WebSocket client and connect to the WebSocket server. Enter the following command
pipenv install websocket-client
Using the client is even simpler than the WebSocket server and only requires 3 lines of code. Update twitter_client.py so it contains the following code.
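Again the gist itself is missing from this extract; below is a sketch matching the line-by-line description that follows (the Twitter variable names remain assumptions):

import os
import tweepy
from dotenv import load_dotenv
from websocket import create_connection
import json

load_dotenv()
ws = create_connection("ws://localhost:8088")

auth = tweepy.OAuthHandler(os.getenv("TWITTER_CONSUMER_KEY"), os.getenv("TWITTER_CONSUMER_SECRET"))
auth.set_access_token(os.getenv("TWITTER_ACCESS_TOKEN"), os.getenv("TWITTER_ACCESS_SECRET"))

class TwitterListener(tweepy.StreamListener):
    def on_status(self, status):
        print(status.text)
        # Forward the raw tweet JSON, serialised as a string, to the WebSocket server
        ws.send(json.dumps(status._json))

stream = tweepy.Stream(auth=auth, listener=TwitterListener())
stream.filter(track=["javascript", "nodejs", "python"])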
The lines I have added are
Line 4 import create_connection
Line 8 creates a new connection and assign to a variable ws
Line 17 Each time a new tweet arrives, send the text of the tweet to the server
I have also imported json on line 5. The status object created by the tweepy module provides a number of items that represent each tweet. It also includes a _json item that represents the original raw JSON received from Twitter. I’m using json.dumps() to convert the JSON to a string that can be sent through the WebSocket connection.
You can now restart the twitter_client.py with the command
pipenv run python twitter_client.py
If you open the terminal where the python twitter client is running, you’ll see tweets scrolling up the screen.
I’m going to leave the console debug messages in the code until the VueJS client is complete, so I know everything is running.
Creating the VueJS frontend
This is where I start bringing it all together in a pretty front end, so tweets can be viewed in the web browser.
Open a new terminal window and navigate to the microservices folder.
If you don’t already have the VueJS CLI installed, enter the following command to install it.
npm i -g @vue/cli
You can find out more about VueJS at https://vuejs.org/
Now create a new Vue app by entering the following command.
vue create twitter_ui
You will first be prompted to pick a preset, use the up/down arrows on the keyboard to select Manually select features and hit enter.
Now you're asked to select the features you want to install. You select/deselect features by using the up/down arrows and using the space bar to toggle a feature on or off.
Here are the features you should choose for this project, make sure to select the same in your project. If any of these features are not available in your list of options, try running npm i -g @vue/cli to make sure you have the latest version.
Hit enter when you're finished. You will be asked to select the configuration for each of the features you selected. Here are the choices you should make
Choose a version of Vue.js: 2.x
Pick a CSS pre-processor: Sass/SCSS (with node-sass)
Pick a linter: ESLint + Prettier
Pick additional Lint features: Lint on save
Where to place config files: In dedicated config files
Save this as a preset: No
Once the installation has completed, cd into the folder twitter_ui created by the Vue CLI.
I am going to style this app using Tailwind CSS. Installing and configuring Tailwind in Vue is easy; simply enter the command below.
vue add tailwind
When prompted, choose Minimal . Job done!
It’s helpful here to open the folder in your code editor. I use VS Code, so I just enter the following command.
code .
Once the editor has opened, back in the terminal start the app with the following command
npm run serve
Then open a browser and navigate to the URL that the app says it is running on. In my case that is http://localhost:8080/
You should see the default Vue app in your browser.
Create a new file /src/components/Header.vue with the following code.
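A minimal example of what Header.vue could contain (the wording and Tailwind classes here are illustrative):

<template>
  <header class="bg-blue-900 text-white p-4">
    <h1 class="text-xl font-bold">Live tweets about javascript, nodejs and python</h1>
  </header>
</template>

<script>
export default {
  name: "Header",
};
</script>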
Rename /src/components/HelloWorld.vue to Tweets.vue , open the renamed file and replace the contents as below. (we will come back to this file shortly)
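For now a simple placeholder is enough, for example:

<template>
  <div>Waiting for tweets…</div>
</template>

<script>
export default {
  name: "Tweets",
};
</script>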
Open the file /src/App.vue . This file provides the layout for our app. We import the two components above, tell Vue where to display them, and I also add a sprinkling of Tailwind CSS classes. Update the code in App.vue as below.
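A sketch of the updated App.vue (the exact Tailwind classes are a guess):

<template>
  <div id="app" class="min-h-screen bg-gray-100">
    <Header />
    <div class="max-w-2xl mx-auto p-4">
      <Tweets />
    </div>
  </div>
</template>

<script>
import Header from "./components/Header.vue";
import Tweets from "./components/Tweets.vue";

export default {
  name: "App",
  components: { Header, Tweets },
};
</script>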
That is the basic structure of the app complete.
Now I will get the Tweets component talking to the WebSocket Server and displaying the incoming tweets. Open /src/components/Tweets.vue and update it as per the code below.
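The gist is not included in this extract; the sketch below follows the description in the next paragraphs (the tweet fields shown are standard Twitter v1.1 status fields, adjust them to whatever your payload contains):

<template>
  <ul>
    <li v-for="(tweet, index) in tweets" :key="index" class="bg-white rounded shadow p-4 mb-4">
      <div class="flex items-center">
        <img :src="tweet.user.profile_image_url_https" class="w-10 h-10 rounded-full mr-3" />
        <div>
          <p class="font-bold">@{{ tweet.user.screen_name }}</p>
          <p class="text-sm text-gray-600">
            Followers: {{ tweet.user.followers_count }} | Following: {{ tweet.user.friends_count }}
          </p>
        </div>
      </div>
      <p class="mt-2">{{ tweet.text }}</p>
    </li>
  </ul>
</template>

<script>
export default {
  name: "Tweets",
  data() {
    return {
      tweets: [],
      connection: null,
    };
  },
  mounted() {
    this.connection = new WebSocket("ws://localhost:8088");
    this.connection.onopen = () => console.log("Connected to the WebSocket server");
    this.connection.onmessage = (event) => {
      // Newest tweet first, keep at most 20 in the list
      this.tweets.unshift(JSON.parse(event.data));
      if (this.tweets.length > 20) {
        this.tweets.pop();
      }
    };
  },
};
</script>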
In data() I have created two variables, tweets which will hold the tweets received in an array and connection which will hold the WebSocket connection object.
In mounted() I connect to the WebSocket server and set up an onmessage event. This is triggered whenever the server broadcasts data. When the event is triggered the function converts the data to a JSON object with JSON.parse() and pushes it to the top of the tweets array using unshift .
If the tweets array contains more than 20 tweets, the last tweet in the array is removed using pop()
The onopen event is for debugging and simply logs to the console one time when the client establishes a connection with the server
The component template loops through each tweet held in tweets and lays them out as a list. Each tweet includes a plethora of data. I’m pulling just a few items from each tweet screen_name , profile_image , text , followers , and following .
If you look at the app in your browser you will see the tweets scrolling through the page in real-time as the good people on twitter send them.
Try sending a tweet that includes one or all of the words javascript , nodejs , or python and watch your browser as it appears a few seconds later. Feel free to mention me @simonstweet along with a link to this article.
Dockerising each microservice
As the saying goes (in the UK at least), “there’s more than one way to skin a cat”. I am going to use docker because I believe that’s the best approach for a number of reasons. This is not a docker tutorial, however. If you have not used Docker before you might feel a little overwhelmed, I know I was the first time I came across it. There are a lot of resources on the internet that provide great introductions to docker, YouTube might be your best bet initially.
If you don’t have Docker installed, you can take a look at the official Docker website.
The approach I will take
There are a number of different options for deploying containers
I am going to go with the simplest approach in this tutorial, which is to host them on my dev laptop.
The process will be
Some small changes to each microservice to make them Docker friendly
Create a Docker image for each container
Create a docker-compose file that builds containers from each image and configures them to talk to each other over the Docker network stack.
Creating the Docker images
Python Twitter Client
Open a terminal and navigate to your python twitter client folder, for me that is /microservices/twitter_client
Create a new file called requirements.txt in the root of the folder. Our Docker container will be running Python 3 and will have PIP available. When we create a container from the Docker image, PIP will use the requirements.txt file to make sure the required dependencies are installed. In our case that is python-dotenv , tweepy , and websocket-client
Add the following lines to requirements.txt
python-dotenv
tweepy
websocket-client
Create a new file called Dockerfile and add the code below
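A sketch of what this Dockerfile could look like (the exact base image tag is an assumption):

FROM python:3.7

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY twitter_client.py .

CMD ["python", "twitter_client.py"]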
Environment Variables
Before we create a Docker image from the Dockerfile, we need to tell Docker how to access the environment variables for the Twitter API. There’s also a problem with the URL to the WebSocket server: it’s hardcoded in twitter_client.py . This is an issue because each docker container is a self-contained system in its own right, so localhost refers to that container. I need to provide a way to tell the Docker container the address of the WebSocket container. I will do that later in a file called docker-compose.yml. For now, I need the Python script to be able to access the environment variables that will be in the docker-compose file.
Open twitter_client.py in your editor and make sure the code is updated as below.
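A sketch of the updated script; WEBSOCKET_SERVER_URL comes straight from the docker-compose file described later, while the Twitter variable names are still assumptions:

import os
import json
import tweepy
from websocket import create_connection

# All configuration now comes from the container's environment
ws = create_connection(os.environ["WEBSOCKET_SERVER_URL"])

auth = tweepy.OAuthHandler(os.environ["TWITTER_CONSUMER_KEY"], os.environ["TWITTER_CONSUMER_SECRET"])
auth.set_access_token(os.environ["TWITTER_ACCESS_TOKEN"], os.environ["TWITTER_ACCESS_SECRET"])

class TwitterListener(tweepy.StreamListener):
    def on_status(self, status):
        ws.send(json.dumps(status._json))

stream = tweepy.Stream(auth=auth, listener=TwitterListener())
stream.filter(track=["javascript", "nodejs", "python"])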
Notice the os.environ[] , it provides access to the environment variables that will be stored in the docker-compose file.
For now, I’m just going to build the image from the Dockerfile by running
docker build -t microservices_twitter_client .
The above command tells Docker to create a new image called microservices_twitter_client , the . at the end tells Docker it can find the Dockerfile in the current folder.
2. Websocket Server
In the terminal navigate to the folder holding the code for the Websocket server. For me that’s /microservices/websocket_server
Create a new file called Dockerfile . Just like before we will define the Dockerfile image for this microservice in a Dockerfile and add the following code.
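A sketch consistent with the description below (line positions are approximate):

FROM node:12

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY app.js ./

EXPOSE 8088

CMD ["node", "app.js"]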
In this Dockerfile, I am using Node version 12 as the base image for the container. The command WORKDIR creates a new directory /app in the image and tells Docker to use this as the working directory (base directory) for all further commands. So where you see ./ it actually refers to /app
I then use COPY to copy any files starting with package and ending with .json into the working directory. Then RUN npm install which will install all the dependencies for our application. Once the install is complete COPY app.js into the working directory.
Line 11, makes port 8088 available to be mapped to the outside world, so other containers or apps can connect to the WebSocket server. You will see how that is used later when we create a docker-compose.yml file that will define the application stack.
Finally, on line 13, I tell Docker to run the command node app.js
You can now build this image so it’s available to use later with
docker build -t microservices_websocket_server .
3. VueJS Application
In a terminal navigate to your VueJS application folder. For me that is /microservices/twitter_ui
.dockerignore file
As part of the docker build command I will run npm install ; this will create the node_modules folder inside the container and ensure it has the latest updated dependencies. As such I don’t want the node_modules folder on my development machine copied into the container. This is accomplished by creating a .dockerignore file and listing the files that we want Docker to ignore.
Create a new file in the root of the application folder and call it .dockerignore it only needs one line adding to it.
node_modules
Environment variables in VueJS
With frontend Javascript apps we have to consider that the app is running in the browser rather than on the server. The implications of this are that environment variables on the server are not available to the app running in the browser.
Our app currently has a hardcoded URL to the WebSocket server. The best practice with VueJS is to create a .env file in the root of the application for variables our application needs access to. There is a lot more to .env files when you get into the details of different environments such as Dev, Test, PreProd, and Prod. We will keep it simple here and just create a single .env file.
In the root of the VueJS application create a file called .env
Add this single line of code to the file.
VUE_APP_WEBSOCKET_SERVER_URL=ws://192.168.30.100:8088
Now open Tweets.vue which is located in /src/components/Tweets.vue
Replace ws://localhost:8088 with process.env.VUE_APP_WEBSOCKET_SERVER_URL . When you're done, the whole line should look like this
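The snippet itself is not in this extract; reconstructed from the surrounding text, the line would read:

this.connection = new WebSocket(process.env.VUE_APP_WEBSOCKET_SERVER_URL);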
Creating the Dockerfile
Create a new file in the root of the application folder called Dockerfile and add the code shown below.
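A sketch of the Dockerfile described below (base image tag is an assumption):

FROM node:12

RUN npm install -g http-server

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .
RUN npm run build

EXPOSE 8080

CMD ["http-server", "dist"]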
This Dockerfile is similar in structure to the others. A key difference here is that to deploy the application into production, we need to first build it. The process creates an index.html file that contains references to minified javascript. That index.html file needs to be made available via a webserver. We could have set up an NGINX server container, but a simpler approach for this use case is to install an npm package http-server which I do on line 3.
Following that, I go through a similar process as I did for the NodeJS WebSocket server. Once the files have been copied into the container, I build the application with npm run build . This creates a dist folder to hold all the build files.
Finally, I run the http-server and tell it to serve the dist folder.
Pull everything together with a docker-compose file
If you don’t have docker-compose installed (it does not come with docker) you can visit https://docs.docker.com/compose/install/ to find out how to install it on your OS.
Finally, we are almost done, just one last thing to do before we start our Microservices application stack.
We need a way to tell Docker what that stack comprises, the relationship between each of the containers and the configuration for each container, i.e. environment variables and what port each container should expose to the outside world.
Create another folder in the microservices folder at the same level as twitter_client , twitter_ui , and websocket_server folders. Name it docker .
Navigate into the new Docker folder and create a new file called docker-compose.yml and add the following code. Take care to maintain the correct indentation. YAML files are sensitive to inconsistent indentation.
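A sketch of the docker-compose.yml described below (the Twitter environment variable names are assumptions, and the twitterui image name is a guess since its build command is not shown; everything else follows the text):

version: "3"

services:
  websocketserver:
    image: microservices_websocket_server
    ports:
      - "8088:8088"

  twitterclient:
    image: microservices_twitter_client
    environment:
      - WEBSOCKET_SERVER_URL=ws://websocketserver:8088
      - TWITTER_CONSUMER_KEY=<your consumer API key>
      - TWITTER_CONSUMER_SECRET=<your consumer API secret>
      - TWITTER_ACCESS_TOKEN=<your access token>
      - TWITTER_ACCESS_SECRET=<your access token secret>
    depends_on:
      - websocketserver

  twitterui:
    image: microservices_twitter_ui
    ports:
      - "8080:8080"
    depends_on:
      - websocketserver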
The docker-compose file lists the services that I want to run in this stack.
I have created three services
websocketserver
twitterclient
twitterui
Docker runs its own internal network and assigns its own IP addresses internally. The service names are also essentially hostnames for each service on the docker network and will map to an IP address inside docker.
You can see that I make use of this on line 12 where I set the value for the environment variable WEBSOCKET_SERVER_URL to ws://websocketserver:8088
Environment variables for the Twitter App are set in the twitterclient service. You can get these from the Twitter developer's website and the App that you created earlier.
We also have some dependencies in our stack. Both the VueJS App and the Python Twitter Client, rely on the WebSocket server being up and running before they start. If this wasn’t the case, there would be no server for them to connect to.
You can see that each service has an image. You should recognize this as the image we created when we ran docker build for each of our services.
Finally the websocketserver and the twitterui both require that their internal ports be made available outside the container. This is achieved by mapping external_port : internal_port . In this case, both internal and external ports are the same, but they don’t have to be and often are not.
Running the stack
In order to make sure that everything is running correctly I will first run the stack in what is called attached mode. This means the stack will only be available for as long as the terminal is open. This is not ideal for production use, but for testing it means I get to see any errors that might be generated. I also still have the console.log statement logging to the console, which will help me know that everything is working.
Run the following command in the terminal in the docker folder, the same folder that contains the docker-compose.yml file.
docker-compose up
After a few seconds and if everything went well, you should see tweets in the form of JSON scrolling up the terminal.
Now open your browser and visit http://localhost:8080 You should see tweets streaming into the app in real-time.
If you now go back into the terminal and hit Ctrl+C, this will shut down the stack. Now run the stack again in detached mode using the -d switch
docker-compose up -d
Once the stack is running, you will be back at the terminal and the stack will be running in the background.
You can see which containers are running by issuing the command
docker ps
If you need to shutdown the stack when in detach mode, open a terminal, make sure you're in the same folder as your docker-compose.yml file and issue the command
docker-compose down
Conclusion
I have covered a lot in this article and there is much that I missed. Some important best practices are missing but would have diverted from the concept I was trying to put across.
In summary, though, you have seen how it is possible to stream data in real time from Python into VueJS. You have also learned how to deploy a WebSocket server, which is the central technology that glues this stack together.
You also saw how you can create Docker containers for apps and use docker-compose to define and create application stacks.
I hope you enjoyed this article, I enjoyed writing it.
Please leave comments below if you are struggling or let me know if you think there is a better way of achieving anything that I discussed here. | https://medium.com/swlh/developing-a-full-microservices-application-stack-5c9fe14c870f | ['Simon Carr'] | 2020-08-19 12:44:15.619000+00:00 | ['JavaScript', 'Nodejs', 'Vuejs', 'Docker', 'Microservices'] | Title Deploying Python NodeJS VueJS MicroservicesContent article cover number different concept technology Web Sockets Python Node Vue JS Docker bring together microservice architecture 3 Microservices component stack microservice take building one including running separate Docker container Microservices Microservices bring immense flexibility scalability fault tolerance Today cover flexibility Scalability Fault tolerance require technology Kubernetes outside scope article flexibility Microservices created service independent others importantly one job Taking example looking article could easily swap VueJS front end replace react could add Mongo DB Microservice store tweet analysis later date Demo site setup demo site going develop article view httpmediummicroservicessimoncarrcouk course VueJS front end underneath goodness Python Node WebSockets Docker work approach project going approach project Develop Python app confirm receiving tweet Develop Web Socket service Connect Python App Web Socket service Develop Vue JS front end connect WebSocket Service Create docker image 3 Micro Services Create Docker stack using dockercompose Deploy stack Code GitHub code project available GitHub Twitter Client Python httpsgithubcomsimonjcarrmediummicroservicestwitterclient Websocket server NodeJS httpsgithubcomsimonjcarrmediummicroserviceswebsocketserver Twitter stream UI VueJS httpsgithubcomsimonjcarrmediummicroservicestwitterui dockercompose file httpsgithubcomsimonjcarrmediummicroservicesdockercompose image used dockercompose file need built first use Dockerfile’s repos follow rest article learn Creating twitter app receive tweet Twitter register app httpsdevelopertwittercom registered App set API key generated use connect Twitter API browser navigate httpsdevelopertwittercom dont already one register account logging click Developer portal Hover mouse username dropdown menu click Apps Click Create App fill form youre presented app created retrieve API credential two set credential Consumer API key Access token Access token secret need set key shortly use Python App Create project file structure Create folder called microservices create folder individual microservices go along need another folder Python twitter client microservices folder create subfolder called twitterclient Creating Python Twitter client Make sure using Python 3 writing article using Python 376 would recommend use virtual python environment I’m going use pipenv don’t already pipenv install pip install user pipenv making sure youre twitterclient folder run pipenv install tweepy pythondotenv I’m going using Tweepy connect twitter API also installing pythondotenv allow put Twitter API key file called dot file uploaded git know Twitter API key remain secret Create two new file twitterclientpy env Open env file code editor choice add following line Replace place holder … relevant key app registered twitter developer account Open twitterclientpy add following code code create auth object tweepy create class TwitterListener extends tweepyStreamListener onstatus method simply print text tweet received start receiving tweet instantiate TwitterListener create new stream add filter receive tweet 
contain one following value javascript nodejs python run twitterclientpy see stream tweet scroll terminal continue display new tweet realtime stop script like using pipenv run script like pipenv run python twitterclientpy Creating Web Socket service know receive tweet going create Web Socket service use NodeJS create service don’t NodeJS already installed need visit NodeJS Website httpsnodejsorg follow instruction download install using NodeJS version 12131 Create new folder microservices folder called websocketserver Make sure youre websocketserver folder enter following command npm init command create packagejson file hold project dependency Enter following command install w module npm w Create new file called appjs open code editor add code web socket server code surprisingly simple running server port 8088 line 3 Whenever server receives new message line 13 resend message client connected calling broadcast function created simply loop connected client sending data line 8 Start server running following command console websocketserver folder node appjs wont see anything yet server receiving data going sort updating Python Twitter client Connect twitter client web socket server open new terminal navigate root twitterclient folder hold python code need install python module create WebSocket client connect WebSocket server Enter following command pipenv install websocketclient Using client even simpler WebSocket server requires 3 line code Update twitterclientpy contains following code line added Line 4 import createconnection Line 8 creates new connection assign variable w Line 17 time new tweet arrives send text tweet server also imported json line 5 status object created tweepy module provides number item represent tweet also includes json item represents original raw JSON received Twitter I’m using jsondumps convert JSON string sent WebSocket connection restart twitterclientpy command pipenv run python twitterclientpy open terminal python twitter client running you’ll see tweet scrolling screen I’m going leave console debug message code VueJS client complete know everything running Creating VueJS frontend start bringing together pretty front end tweet viewed web browser Open new terminal window navigate microservices folder don’t already VueJS CLI installed enter following command install npm g vuecli find VueJS httpsvuejsorg create new Vue app entering following command vue create twitterui first prompted pick preset use updown arrow keyboard select Manually select feature hit enter youre asked select feature want install selectdeselect feature using updown arrow using space bar toggle feature feature choose project make sure select project feature available list option try running npm g vuecli make sure latest version Hit enter youre finished asked select configuration feature selected choice make Choose version Vuejs 2x Pick CSS preprocessor SassSCSS nodesass Pick linter ESLint Prettier Pick additional Lint feature Lint save place config file dedicated config file Save preset installation completed cd folder twitterui created Vue CLI going style app using tailwind CSS Installing configuring tailwind Vue easy simply enter command vue add tailwind prompted choose Minimal Job done It’s helpful open folder code editor use VS Code enter following command code editor opened back terminal start app following command npm run serve open browser navigate URL app say running case httplocalhost8080 see default Vue app browser Create new file srccomponentsHeadervue following code Rename 
srccomponentsHelloWorldvue Tweetsvue open renamed file replace content come back file shortly Open file srcAppvue file provides layout app import two component tell Vue display also add sprinkling tailwind CSS class Update code Appvue basic structure app complete get Tweets component talking WebSocket Server displaying incoming tweet Open srccomponentsTweetsvue update per code data created two variable tweet hold tweet received array connection hold WebSocket connection object mounted connect WebSocket server set onmessage event triggered whenever server broadcast data event triggered function convert data JSON object JSONparse push top tweet array using unshift tweet array contains 20 tweet last tweet array removed using pop onopen event debugging simply log console one time client establishes connection server component template loop tweet held tweet lay list tweet includes plethora data I’m pulling item tweet screenname profileimage text follower following look app browser see tweet scrolling page realtime good people twitter send Try sending tweet includes one word javascript nodejs python watch browser appears second later Feel free mention simonstweet along link article Dockerising microservice saying go UK least “there’s one way skin cat” going use docker believe that’s best approach number reason docker tutorial however used Docker might feel little overwhelmed know first time came across lot resource internet provide great introduction docker YouTube might best bet initially don’t Docker installed take look Docker official website approach take number different option deploying container going go simplest approach tutorial host dev laptop process small change microservice make Docker friendly Create Docker image container Create dockercompose file build container image configures talk Docker network stack Creating Docker image Python Twitter Client Open terminal navigate python twitter client folder microservicestwitterclient Create new file called requirementstxt root folder Docker container running Python 3 PIP available create container Docker image PIP use requirementstxt file make sure required dependency installed case pythondotenv tweepy websocket Add following line requirementstxt pythondotenv tweepy websocketclient Create new file called Dockerfile add code Environment Variables create Docker image Dockerfile need tell Docker access environment variable Twitter API There’s also problem URL WebSocket server it’s hardcoded twitterclientpy issue docker container selfcontained system right localhost refers container need provide way tell Docker container address WebSocket container later file called dockercomposeyml need Python script able access environment variable dockercompose file Open twitterclientpy editor make sure code updated Notice osenviron provides access environment variable stored dockercompose file I’m going build image Dockerfile running docker build microservicestwitterclient command tell Docker create new image called microservicestwitterclient end tell Docker find Dockerfile current folder 2 Websocket Server terminal navigate folder holding code Websocket server that’s microserviceswebsocketserver Create new file called Dockerfile like define Dockerfile image microservice Dockerfile add following code Dockerfile using Node version 12 base image container command WORKDIR create new directory app image tell Docker use working directory base directory command see actually refers app use COPY copy file starting package ending json working directory RUN npm 
install install dependency application install complete COPY appjs working directory Line 11 make port 8088 available mapped outside world container apps connect WebSocket server see used later create dockercomposeyml file define application stack Finally line 13 tell Docker run command node appjs build image it’s available use later docker build microserviceswebsocketserver 3 VueJS Application terminal navigate VueJS application folder microservicestwitterui dockerignore file part docker build command run npm install create nodemodules folder inside container ensure latest updated dependency don’t want nodemodules folder development machine copying container acomplished creating dockerignore file listing file want Docker ignore Create new file root application folder call dockerignore need one line adding nodemodules Environment variable VueJS frontend Javascript apps consider app running browser rather server implication environment variable server available app running browser app currently hardcoded URL WebSocket server best practice VueJS create env file root application variable application need access lot env file get detail different environment Dev Test PreProd Prod keep simple create single env file root VueJS application create file called env Add single line code file VUEAPPWEBSOCKETSERVERURLws192168301008088 open Tweetsvue located srccomponentsTweetsvue Replace wslocalhost8088 processenvVUEAPPWEBSOCKETSERVERURL youre done whole line look like Creating Dockerfile Create new file root application folder called Dockerfile add code shown Dockerfile similar structure others key difference deploy application production need first build process creates indexhtml file contains reference minified javascript indexhtml file need made available via webserver could set NGINX server container simpler approach use case install npm package httpserver line 3 Following go similar process NodeJS WebSocket server file copied container build application npm run build creates dist folder hold build file Finally run httpserver tell serve dist folder Pull everything together dockercompose file don’t dockercompose installed come docker visit httpsdocsdockercomcomposeinstall find install OS Finally almost done one last thing start Microservices application stack need way tell Docker stack comprises relationship container configuration container ie Environment variable port container expose outside world Create another folder microservices folder level twitterclient twitterui websocketserver folder Name docker Navigate new Docker folder create new file called dockercompoesyml add following code Take care maintain correct indentation yml file sensitive indentation consistent dockercompose file list service want run stack created three service websocketserver twitterclient twitterui Docker run it’s internal network assigns it’s IP Addresses internally service name also essentially hostnames service docker network map IP Address inside docker see make use line 12 set value environment variable WEBSOCKETSERVERURL wswebsocketserver8088 Environment variable Twitter App set twitterclient service get Twitter developer website App created earlier also dependency stack VueJS App Python Twitter Client rely WebSocket server running start wasn’t case would server connect see service image recognize image created ran docker build service Finally websocketserver twitterui require internal port made available outside container achieved mapping exportport internalport case internal external port don’t often Running 
stack order make sure everything running correctly first run stack called attached mode mean stack available long terminal open ideal production use testing mean get see error might generated also still consolelog statement logging console help know everything working Run following command terminal docker folder folder contains dockercomposeyml file dockercompose second everything went well see tweet form JSON scrolling terminal open browser visit httplocalhost8080 see tweet streaming app realtime go back terminal hit Ctrlc shut stack run stack detached mode using switch dockercompose stack running back terminal stack running background see container running issuing command docker p need shutdown stack detach mode open terminal make sure youre folder dockercomposeyml file issue command dockercompose Conclusion covered lot article much missed important best practice missing would diverted concept trying put across summary though seen possible stream realtime Python VueJS also learned deploy WebSocket server central technology glue stack together also saw create Docker container apps use dockercompose define create application stack hope enjoyed article enjoyed writing Please leave comment struggling let know think better way achieving anything discussed hereTags JavaScript Nodejs Vuejs Docker Microservices |
3,564 | COVID-19 Vaccine Is Not Effective as Companies Tell Us | The world is sinking deeper due to the coronavirus. The coronavirus, which was born in China and brought up in Europe and America, has done disastrous damage to the American economy and health system. Many vaccine companies have developed a vaccine to eradicate the coronavirus, but they are rushing to make it available to the public.
The main problem is how effective the COVID-19 vaccines prepared by vaccine companies really are. The companies that talk about their efficiency and effectiveness are themselves not sure about their vaccines' real-world performance.
Many politicians, like Donald Trump, have used the vaccine to grab votes, and many world leaders still follow the same approach.
Recently, I read about a mutation of the coronavirus in the UK. Many countries in the EU have imposed lockdowns to curb the new spread of the coronavirus. The coronavirus has unique properties, like changing its RNA according to its surroundings, which leads to fast mutation and spread.
Day by day the coronavirus will become more dangerous, as it mutates faster than vaccines can be upgraded. Vaccine companies are looking for profit, and they are rushing storage and delivery. It's harmful to deliver a vaccine with little research and few trials.
Many people will refuse to take the vaccine. You can compare corona with polio, which has been with us for a long time. The series of vaccinations will continue, as epidemic waves of corona arrive frequently around the world.
We must use a mask and sanitize our hands to curb the coronavirus spread. Social distancing and mask can curb spread than a vaccine. I think, more we can implement self restrictions, more we will find success against COVID-19. | https://medium.com/afwp/covid-19-vaccine-is-not-effective-as-companies-tell-us-796772dc38a9 | ['Mike Ortega'] | 2020-12-25 15:32:32.455000+00:00 | ['Health', 'Lockdown', 'Coronavirus', 'Vaccines', 'Covid 19'] | Title COVID19 Vaccine Effective Companies Tell UsContent world creeping deeper due coronavirus Coronavirus born China brought Europe America made disastrous damage American economy health system Many vaccine company developed vaccine eradicate coronavirus rushing make available public main problem effective COVID19 vaccine prepared vaccine company Companies telling efficiency effectiveness sure vaccine’s public performance Many politician like Donald Trump used vaccine grab vote it’s still followed many world leader Recently read mutation coronavirus UK Many country EU imposed lockdown curb new spread coronavirus Coronavirus unique property like changing RNA according surrounding lead fast mutation spread Day day coronavirus become dangerous mutates faster vaccine upgradation Vaccine company looking profit rushing storage delivery harmful deliver vaccine little research trial Many people refuse take vaccine compare corona polio u long time series vaccination continue epidemic wave corona arrives frequently world must use mask sanitize hand curb coronavirus spread Social distancing mask curb spread vaccine think implement self restriction find success COVID19Tags Health Lockdown Coronavirus Vaccines Covid 19 |
3,565 | Get Started with AI in 15 Minutes Using Text Classification on Airbnb reviews | Watson Natural Language Classifier (NLC) is a text classification (aka text categorization) service that enables developers to quickly train and integrate natural language processing (NLP) capabilities into their applications. Once you have the training data, you can set up a classification model (aka a classifier) in 15 minutes or less to label text with your custom labels. In this tutorial, I will show you how to create two classifiers using publicly available Airbnb reviews data.
One of the more common text classification patterns I’ve seen is analyzing and labeling customer reviews. Understanding unstructured customer feedback enables organizations to make informed decisions that’ll improve customer experience or resolve issues faster. Sentiment analysis is perhaps one of the most common text classification cross-industry use cases, as it empowers businesses to understand voice and tone of their customers. However, companies also need to organize their data into categories that are specific to their business. This often requires data scientists to build custom machine learning models. With NLC, you can build a custom model in minutes without any machine learning experience.
Training data
To obtain training data, I went to insideairbnb.com and downloaded the ‘reviews.csv.gz’ file from Austin, Texas. This file contains thousands of real reviews from Airbnbs in Austin.
Next, I defined my labels. I decided to build two classifiers one for categorizing the reviews and the other for sentiment. It was best to separate the training data for each and create separate classifiers in order to achieve the highest accuracy possible. The labels I defined are below:
Category Classifier: Environment, Location, Cleanliness, Hospitality, Noise, Amenities, Communication, Other
Sentiment Classifier: Positive, Neutral, Negative
Both sets of training data only contain 219 rows (examples). That isn’t a lot of examples in the grand scheme of things. However, one of the benefits of Watson Natural Language Classifier is that it works better on smaller sets of examples. Feel free to continue to add to the training data once you have downloaded the file to further improve the accuracy!
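For reference, each row in those CSVs is simply the example text followed by its label. The rows below are illustrative only and are not taken from the actual training files. For the category classifier:
"Just a short walk from downtown and the bus stop",Location
"Our host left handwritten recommendations and snacks",Hospitality
"Construction noise next door kept us up at night",Noise
and for the sentiment classifier:
"Loved every minute of our stay!",Positive
"The place was okay, nothing special",Neutral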
Training the Classifiers
In this tutorial, I will be using Watson Studio. If you would prefer to use the API directly, check out the documentation.
Create an instance of NLC and launch the tooling (Note: if you get lost, please refer to the embedded video at the bottom of this post):
Go to the Natural Language Classifier page in the IBM Cloud Catalog. Sign up for a free IBM Cloud account or log in. Click Create. Once an instance is created, you will be taken to the below screen. Click Launch tool to open the tooling in Watson Studio.
Open tooling from IBM Cloud Catalog
Train your classifier
Download the training data. Two columns is all you need! That’s how easy it is to train a classifier in NLC! Download here! Click “Create Model” to start building your classifier(s).
Begin creating your classifier
Next, you’ll need to create a project in Watson Studio. If you do not have an instance of Watson Studio created, then you will need to provision one on the Lite plan. After you have provisioned your instance of Watson Studio, refresh the page and name your Watson Studio project. Then click “Create” in the bottom right hand corner. Upload the training data for either the Categories or Sentiment classifier, then click Train Model (training will take approximately 5-10 minutes for each classifier).
Uploading training data and training a classifier
Testing your classifier
Now that training is done, you can test your classifier! Click into your classifier and go to the Test page. Enter any text and see how Watson classifies it. The classifier works best when using actual Airbnb reviews — so test it out with data from insideairbnb.com. If the classifier makes a mistake, simply click Edit and Retrain in the top right corner and add more training examples to your training data. You’ll be classifying Airbnb reviews in no time!
Want to hook your classifiers up to a user interface? Check out the Github repo for the Natural Language Classifier demo. This repo gives you the Node.js code for the NLC demo so you can hook your classifiers up to a simple user experience.
Classify Airbnb Reviews with Watson NLC
Helpful Links
Product Page | Documentation | Sample apps and code | API Reference
Want to see what else Watson can do with Airbnb reviews? Check out the new demo for Watson Discovery Service! | https://medium.com/ibm-watson/get-started-with-ai-in-15-minutes-28039853e6f3 | ['Reid Francis'] | 2018-11-20 21:54:07.039000+00:00 | ['Tutorial', 'Machine Learning', 'Classification', 'Development', 'AI'] | Title Get Started AI 15 Minutes Using Text Classification Airbnb reviewsContent Watson Natural Language Classifier NLC text classification aka text categorization service enables developer quickly train integrate natural language processing NLP capability application training data set classification model aka classifier 15 minute le label text custom label tutorial show create two classifier using publicly available Airbnb review data One common text classification pattern I’ve seen analyzing labeling customer review Understanding unstructured customer feedback enables organization make informed decision that’ll improve customer experience resolve issue faster Sentiment analysis perhaps one common text classification crossindustry use case empowers business understand voice tone customer However company also need organize data category specific business often requires data scientist build custom machine learning model NLC build custom model minute without machine learning experience Training data obtain training data went insideairbnbcom downloaded ‘reviewscsvgz’ file Austin Texas file contains thousand real review Airbnbs Austin Next defined label decided build two classifier one categorizing review sentiment best separate training data create separate classifier order achieve highest accuracy possible label defined Category Classifier Environment Location Cleanliness Hospitality Noise Amenities Communication Environment Location Cleanliness Hospitality Noise Amenities Communication Sentiment Classifier Positive Neutral Negative set training data contain 219 row example isn’t lot example grand scheme thing However one benefit Watson Natural Language Classifier work better smaller set example Feel free continue add training data downloaded file improve accuracy Training Classifiers tutorial using Watson Studio would prefer use API directly check documentation Create instance NLC launch tooling Note get lost please refer embedded video bottom post Go Natural Language Classifier page IBM Cloud Catalog Sign free IBM Cloud account log Click Create instance created taken screen Click Launch tool open tooling Watson Studio Open tooling IBM Cloud Catalog Train classifier Download training data Two column need That’s easy train classifier NLC Download Click “Create Model” start building classifier Begin creating classifier Next you’ll need create project Watson Studio instance Watson Studio created need provision one Lite plan provisioned instance Watson Studio refresh page name Watson Studio project click “Create” bottom right hand corner Upload training data either Categories Sentiment Click Train Model Training take approximately 510 minute classifier Uploading training data training classifier Testing classifier training done test classifier Click classifier go Test page Enter text see Watson classifies classifier work best using actual Airbnb review — test data insideairbnbcom classifier make mistake simply click Edit Retrain top right corner add training example training data You’ll classifying Airbnb review time Want hook classifier user interface Check Github repo Natural Language Classifier demo repo give NodeJS NLC demo hook classifier simple user experience Classify Airbnb 
Reviews Watson NLC Helpful Links Product Page Documentation Sample apps code API Reference Want see else Watson Airbnb review Check new demo Watson Discovery ServiceTags Tutorial Machine Learning Classification Development AI |
3,566 | How to create a confusion matrix with the test result in your training model using matplotlib | How to create a confusion matrix with the test result in your training model using matplotlib Alex G. · Nov 9 · 3 min read
confusion matrix (Photo,GIF by Author) https://github.com/oleksandr-g-rock/How_to_create_confusion_matrix/blob/main/1_eg-HeEAMk8mtmblkHymRpQ.png
Short summary:
This article is mostly the same as my previous article, but with a few small changes:
If you want just to look at the notebook or just run code please click here
So, Let’s start :)
So I just want to explain how to create a confusion matrix if you are doing an image classification model.
First, we need to create 3 folders (testing, train, val) in our dataset, like in the screenshot:
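(The screenshot isn't reproduced in this copy; the layout it shows is roughly the following, where the class subfolders such as daisy and tulip are just examples. Keras' flow_from_directory expects one subfolder per class inside each split.)
/content/flowers/
├── testing/
│   ├── daisy/
│   └── tulip/
├── train/
│   ├── daisy/
│   └── tulip/
└── val/
    ├── daisy/
    └── tulip/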
approximate count of image files per folder:
testing — 5%
train — 15%
val — 80%
So the “train” folder will be used to train the model, the “val” folder will be used to show the result per epoch, and the “testing” folder will be used only for testing the model on new images.
So first, we need to define the folders:
# folders with train dir & val dir
train_dir = '/content/flowers/train/'
test_dir = '/content/flowers/val/'
testing_dir = '/content/flowers/testing/'

input_shape = (image_size, image_size, 3)
In the next step, we need to add an image data generator for testing, with the shuffle parameter set to False.
testing_datagen = ImageDataGenerator(rescale=1. / 255)

testing_generator = testing_datagen.flow_from_directory(
    testing_dir,
    target_size=(image_size, image_size),
    batch_size=batch_size,
    shuffle=False,
    class_mode='categorical')
After we train the model, we should run the next code to check the result on the testing data:
test_score = model.evaluate_generator(testing_generator, batch_size)
print("[INFO] accuracy: {:.2f}%".format(test_score[1] * 100))
print("[INFO] Loss: ", test_score[0])
we should have a result like this:
And run the code to show the confusion matrix with the test result of your training model:
#Plot the confusion matrix. Set Normalize = True/False
def plot_confusion_matrix(cm, classes, normalize=True, title='Confusion matrix', cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    plt.figure(figsize=(20, 20))
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        cm = np.around(cm, decimals=2)
        cm[np.isnan(cm)] = 0.0
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

#Print the Target names
from sklearn.metrics import classification_report, confusion_matrix
import itertools

#shuffle=False
target_names = []
for key in train_generator.class_indices:
    target_names.append(key)
# print(target_names)

#Confusion Matrix
Y_pred = model.predict_generator(testing_generator)
y_pred = np.argmax(Y_pred, axis=1)
print('Confusion Matrix')
cm = confusion_matrix(testing_generator.classes, y_pred)
plot_confusion_matrix(cm, target_names, title='Confusion Matrix')

#Print Classification Report
print('Classification Report')
print(classification_report(testing_generator.classes, y_pred, target_names=target_names))
we should have a result like this:
confusion matrix (Photo,GIF by Author) https://github.com/oleksandr-g-rock/How_to_create_confusion_matrix/blob/main/1_0KCwX9fDz9qC-Zo4K5KTNQ.png
Result:
We created the confusion matrix via matplotlib.
If you want just to look at the notebook or just run code please click here | https://medium.com/analytics-vidhya/how-to-create-a-confusion-matrix-with-the-test-result-in-your-training-model-802b1315d8ee | ['Alex G.'] | 2020-12-18 16:19:55.718000+00:00 | ['Machine Learning', 'Matplotlib', 'Data Visualization', 'Data Science', 'Python'] | Title create confusion matrix test result training model using matplotlibContent create confusion matrix test result training model using matplotlib Alex G Follow Nov 9 · 3 min read confusion matrix PhotoGIF Author httpsgithubcomoleksandrgrockHowtocreateconfusionmatrixblobmain1egHeEAMk8mtmblkHymRpQpng Short summary article previous article little change want look notebook run code please click Let’s start explain create confusion matrix image classification model first need create 3 folder testing train val dataset like screenshot approximate count image file per folder testing — 5 train — 15 val — 80 folder “train” use training model folder “val” use show result per epoch “testing” folder use testing model new image first need define folder folder train dir val dir traindir contentflowerstrain testdir contentflowersval testingdir contentflowerstesting inputshape imagesize imagesize 3 next step need add image data generator testing shuffle parameter — FALSE testingdatagen ImageDataGeneratorrescale1 255 testinggenerator testingdatagenflowfromdirectory testingdir targetsizeimagesize imagesize batchsizebatchsize shuffleFalse classmodecategorical train model run next code check result testing data testscore modelevaluategeneratortestinggenerator batchsize printINFO accuracy 2fformattestscore1 100 printINFO Loss testscore0 result like run code show confusion matrix test result training model Plot confusion matrix Set Normalize TrueFalse def plotconfusionmatrixcm class normalizeTrue titleConfusion matrix cmappltcmBlues function print plot confusion matrix Normalization applied setting normalizeTrue pltfigurefigsize2020 pltimshowcm interpolationnearest cmapcmap plttitletitle pltcolorbar tickmarks nparangelenclasses pltxtickstickmarks class rotation45 pltytickstickmarks class normalize cm cmastypefloat cmsumaxis1 npnewaxis cm nparoundcm decimals2 cmnpisnancm 00 printNormalized confusion matrix else printConfusion matrix without normalization thresh cmmax 2 j itertoolsproductrangecmshape0 rangecmshape1 plttextj cmi j horizontalalignmentcenter colorwhite cmi j thresh else black plttightlayout pltylabelTrue label pltxlabelPredicted label Print Target name sklearnmetrics import classificationreport confusionmatrix import itertools shuffleFalse targetnames key traingeneratorclassindices targetnamesappendkey printtargetnames Confution Matrix Ypred modelpredictgeneratortestinggenerator ypred npargmaxYpred axis1 printConfusion Matrix cm confusionmatrixtestinggeneratorclasses ypred plotconfusionmatrixcm targetnames titleConfusion Matrix Print Classification Report printClassification Report printclassificationreporttestinggeneratorclasses ypred targetnamestargetnames result like confusion matrix PhotoGIF Author httpsgithubcomoleksandrgrockHowtocreateconfusionmatrixblobmain10KCwX9fDz9qCZo4K5KTNQpng Result created confusion matrix via matplotlib want look notebook run code please click hereTags Machine Learning Matplotlib Data Visualization Data Science Python |
3,567 | 8 Important Lessons for Programmers That I’ve Learned at 18 | Don’t Fall for the Appeal to Tradition
The logical fallacy “appeal to tradition” is an issue that can frequently appear among software developers. The key phrase to know if you’re falling into this trap is “we’ve always done it this way!”
Especially as a new developer, it can be hard to suggest new ways of doing things. If a more experienced developer is falling into this logical fallacy, it’s important to call them out on it. Or if you are an experienced developer, don’t be afraid to try new things.
Everyone benefits if there is a faster, more efficient way to do something than the old way — so always keep an eye out for ways to improve.
Developers like to do things so that they can know it will work and so they can get the task done quickly. This usually means doing things in a way they already know, but it certainly doesn’t mean that’s best. | https://medium.com/better-programming/8-important-lessons-for-programmers-that-ive-learned-at-18-6e954634322e | ['Alec Jones'] | 2020-05-26 15:18:37.299000+00:00 | ['Programming', 'JavaScript', 'Software Development', 'Python', 'Startup'] | Title 8 Important Lessons Programmers I’ve Learned 18Content Don’t Fall Appeal Tradition logical fallacy “appeal tradition” issue frequently appear among software developer key phrase know you’re falling trap “we’ve always done way” Especially new developer hard suggest new way thing experienced developer falling logical fallacy it’s important call experienced developer don’t afraid try new thing Everyone benefit faster efficient something old way — always keep eye way improve Developers like thing know work get task done quickly usually mean thing way already know certainly doesn’t mean that’s bestTags Programming JavaScript Software Development Python Startup |
3,568 | Too Many Small Steps, Not Enough Leaps | I was driving home the other day, noticed all the above-ground telephone/power lines, and thought to myself: this is not the 21st century I thought I’d be living in.
When I was growing up, the 21st century was the distant future, the stuff of science fiction. We’d have flying cars, personal robots, interstellar travel, artificial food, and, of course, tricorders. There’d be computers, although not PCs. Still, we’d have been baffled by smartphones, GPS, or the Internet. We’d have been even more flummoxed by women in the workforce or #BlackLivesMatter.
We’re living in the future, but we’re also hanging on to the past, and that applies especially to healthcare. We all poke fun at the persistence of the fax, but I’d also point out that currently our best advice for dealing with the COVID-19 pandemic is pretty much what it was for the 1918 Spanish Flu pandemic: masks and distancing (and we’re facing similar resistance). One would have hoped the 21st century would have found us better equipped.
So I was heartened to read an op-ed in The Washington Post by Regina Dugan, PhD. Dr. Dugan calls for a “Health Age,” akin to how Sputnik set off the Space Age. The pandemic, she says, “is the kind of event that alters the course of history so much that we measure time by it: before the pandemic — and after.”
In a Health Age, she predicts:
We could choose to build a future where no one must wait on an organ donor list. Where the mechanistic underpinnings of mental health are understood and treatable. Where clinical trials happen in months, not years. Where our health span coincides with our life span and we are healthy to our last breath.
Dr. Dugan has no doubt we can build a Health Age; “The question, instead, is whether we will.”
Dr. Dugan head up Wellcome Leap, a non-profit spin-off from Wellcome, a UK-based Trust that spends billions of dollars to help people “explore great ideas,” particularly related to health. Wellcome Leap was originally funded in 2018, but only this past May installed Dr. Dugan as CEO, with the charge to “undertake bold, unconventional programmes and fund them at scale.” Dr. Dugan is a former Director of Darpa, so she knows something about funding unconventional ideas.
Leap Board Chair Jay Flatley promised: “Leap will pursue the most challenging projects that would not otherwise be attempted or funded. The unique operating model provides the potential to make impactful, rapid advances on the future of health.”
Now, when I said earlier that our current approach to the pandemic is scarily similar to the response to the 1918 pandemic, that wasn’t being quite fair. We have better testing (although not nearly good enough), more therapeutic options (although none with great results yet), all kinds of personal protective equipment (although still in short supply), and better data (although shamefully inconsistent and delayed). We’re developing vaccines at a record pace, using truly 21st century approaches like mRNA or bioprinting.
The problem is, we knew a pandemic could come, we knew the things that would need to be done to deal with it, and yet we — and the “we” applies globally — fumbled the actions at every step.
We imposed lockdowns, but usually too late, and then reopened them too soon. Our healthcare organizations keep getting overwhelmed with COVID-19 cases, yet, cut off from their non-pandemic revenue sources, are drowning in losses. Due to layoffs, millions have lost their health insurance. People are avoiding care, even for essential needs like heart attacks or premature births.
Our power lines are showing. The hurricane that is the pandemic is knocking them down at will. We might have some Health Age technologies available but not a Health Age mentality about how, when, and where to use them.
Dr. Dugan thinks she knows what we should be doing:
To build a Health Age, however, we will need to do more. We will need an international coalition of like-minded leaders to shape a unified global effort; we will need to invest at Space Age levels, publicly and privately, to fund research and development. And critically, we’ll need to supplement those approaches with bold, risk-tolerant efforts — something akin to a DARPA, but for global health.
Unfortunately, none of that sounds like anything our current environment supports. The U.S. is vowing to leave the World Health Organization and is buying up the world’s supply of Remdesivir, one of the few even moderately effective treatment options. An “international coalition of like-minded leaders” seems hard to come by. Plus, only half of Americans say they’d take a vaccine even when it is here.
If COVID-19 is our Sputnik moment, we’re reacting to it as we did Sputnik, setting off insular Space Races that competed rather than cooperated, focused narrowly on “winning” instead of discovering. We will, indeed, spend trillions on our pandemic responses, but most will be short-term, short-sighted programs that apply band-aids instead of establishing sustainable platforms and approaches. We’re reacting to the present, not reimagining the future.
Credit: Darpa
Darpa’s mission is “to make pivotal investments in breakthrough technologies for national security,” and it “explicitly reaches for transformational change instead of incremental advances.” Her background at Darpa make Dr. Dugan uniquely qualified to bring this attitude to Leap, and to apply it to healthcare.
The hard part is remembering that it is not about winning the current war, or even the next one, but about preparing for the wars we’re not even thinking about yet.
Most of our population are children of the 20th century. Our healthcare system in 2020 may have some snazzier tools, techniques, and technologies than it did in the 20th century, but it is mostly still pretty familiar to us from then. If we truly want a Health Age, we should aspire to develop things that would look familiar to someone from the 22nd century, not the 20th.
Every time I read about the latest finding about our microbiome I think about how little we still know about what drives our health, just as our growing attention to social determinants of health reminds me how we need to drastically rethink what the focus of our “healthcare system” should be.
Not more effective vaccines but the things that make vaccines obsolete. Not better surgical techniques but the things that make surgery unnecessary. Not just better health care but better health that requires less health care. If we’re going to dream, let’s dream big.
That’s the kind of Leap we need.
Please follow me on Medium and on Twitter (@kimbbellard), and don’t forget to share if you liked the article! | https://kimbellard.medium.com/too-many-small-steps-not-enough-leaps-d25caa18a20 | ['Kim Bellard'] | 2020-07-27 22:25:28.161000+00:00 | ['Technology', 'Innovation', 'Health', 'Future', 'Healthcare'] | Title Many Small Steps Enough LeapsContent driving home day noticed aboveground telephonepower line thought 21st century thought I’d living growing 21st century distant future stuff science fiction We’d flying car personal robot interstellar travel artificial food course tricorders There’d computer although PCs Still we’d baffled smartphones GPS Internet We’d even flummoxed woman workforce BlackLivesMatter We’re living future we’re also hanging past applies especially healthcare poke fun persistence fax I’d also point currently best advice dealing COVID19 pandemic pretty much 1918 Spanish Flu pandemic mask distancing we’re facing similar resistance One would hoped 21st century would found u better equipped heartened read oped Washington Post Regina Dugan PhD Dr Dugan call “Health Age” akin Sputnik set Space Age pandemic say “is kind event alters course history much measure time pandemic — after” Health Age predicts could choose build future one must wait organ donor list mechanistic underpinnings mental health understood treatable clinical trial happen month year health span coincides life span healthy last breath Dr Dugan doubt build Health Age “The question instead whether will” Dr Dugan head Wellcome Leap nonprofit spinoff Wellcome UKbased Trust spends billion dollar help people “explore great ideas” particularly related health Wellcome Leap originally funded 2018 past May installed Dr Dugan CEO charge “undertake bold unconventional programme fund scale” Dr Dugan former Director Darpa know something funding unconventional idea Leap Board Chair Jay Flatley promised “Leap pursue challenging project would otherwise attempted funded unique operating model provides potential make impactful rapid advance future health” said earlier current approach pandemic scarily similar response 1918 pandemic wasn’t quite fair better testing although nearly good enough therapeutic option although none great result yet kind personal protective equipment although still short supply better data although shamefully inconsistent delayed We’re developing vaccine record pace using truly 21st century approach like mRNA bioprinting problem knew pandemic could come knew thing would need done deal yet — “we” applies globally — fumbled action every step imposed lockdown usually late reopened soon healthcare organization keep getting overwhelmed COVID19 case yet cut nonpandemic revenue source drowning loss Due layoff million lost health insurance People avoiding care even essential need like heart attack premature birth power line showing hurricane pandemic knocking might Health Age technology available Health Age mentality use Dr Dugan think know build Health Age however need need international coalition likeminded leader shape unified global effort need invest Space Age level publicly privately fund research development critically we’ll need supplement approach bold risktolerant effort — something akin DARPA global health Unfortunately none sound like anything current environment support US vowing leave World Health Organization buying worlds’s supply Remdesivir one even moderately effective treatment option “international coalition likeminded leaders” seems hard come Plus half Americans say they’d take vaccine 
even COVID19 Sputnik moment we’re reacting Sputnik setting insular Space Races competed rather cooperated focused narrowly “winning” instead discovering indeed spend trillion pandemic response shortterm shortsighted program apply bandaids instead establishing sustainable platform approach We’re reacting present reimagining future Credit Darpa Darpa’s mission “to make pivotal investment breakthrough technology national security” “explicitly reach transformational change instead incremental advances” background Darpa make Dr Dugan uniquely qualified bring attitude Leap apply healthcare hard part remembering winning current war even next one preparing war we’re even thinking yet population child 20th century healthcare system 2020 may snazzier tool technique technology 20th century mostly still pretty familiar u truly want Health Age aspire develop thing would look familiar someone 22nd century 20th Every time read latest finding microbiome think little still know drive health growing attention social determinant health reminds need drastically rethink focus “healthcare system” effective vaccine thing make vaccine obsolete better surgical technique thing make surgery unnecessary better health care better health requires le health care we’re going dream let’s dream big That’s kind Leap need Please follow Medium Twitter kimbbellard don’t forget share liked articleTags Technology Innovation Health Future Healthcare |
3,569 | A Step-by-Step Guide to Building Event-Driven Microservices With RabbitMQ | Step-by-Step Guide
Here is TekLoon’s Dev Rule No 1:
Always start with the easiest part.
The easy entry will allow you to complete the first task and gain the confidence to face the upcoming challenges.
Step 1: TimerService development
Let’s start by creating our TimerService:
Code for TimerService. View full code here
Let’s look at what we did here. We:
Created a connection to our RabbitMQ server.
Created a channel
Created a direct, non-durable ‘quote’ exchange
Published the message to the exchange during each interval (60 seconds)
This code fulfills the purpose of TimerService.
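The gist itself isn't embedded in this copy, so here is a minimal sketch of that TimerService using the amqplib package. The connection URL, routing key and message body are assumptions; the exchange name, type, durability and 60-second interval come from the list above.
const amqp = require('amqplib');

(async () => {
  // Create a connection to our RabbitMQ server and a channel
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();

  // Create a direct, non-durable 'quote' exchange
  await channel.assertExchange('quote', 'direct', { durable: false });

  // Publish a message to the exchange every 60 seconds
  setInterval(() => {
    channel.publish('quote', '', Buffer.from('quote-of-the-day, please'));
  }, 60000);
})();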
Step 2: QuoteService development
Let’s continue with our QuoteService development. The main function of this service is to scrape a motivational quote from the Internet when it receives a message from the RabbitMQ queue.
First, create a consumer and listen for the event sent by TimerService. Let’s make it very simple; when we receive the message we call the scrapQuoteOfTheDay() function:
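Again as a hedged sketch (amqplib assumed; the queue and routing-key choices are illustrative, not taken from the original), the consumer side can look like this:
const amqp = require('amqplib');

const listenForQuoteEvents = async () => {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();
  await channel.assertExchange('quote', 'direct', { durable: false });

  // Bind an exclusive queue to the 'quote' exchange and scrape on every message
  const { queue } = await channel.assertQueue('', { exclusive: true });
  await channel.bindQueue(queue, 'quote', '');

  // scrapQuoteOfTheDay() is the scraper function defined elsewhere in QuoteService
  channel.consume(queue, () => scrapQuoteOfTheDay(), { noAck: true });
};

listenForQuoteEvents();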
Next, we proceed to scrape the quote from the Internet:
I’m getting my quotes from wisdomquotes.com and writing them to a JSON file. Pretty straightforward, right?
Now that we have our business logic in place, let’s make an HTML page to display the quotes that have been scraped. After doing some research, the simplest way to dynamically render the HTML in ExpressJS is by using the template engine Pug. Let me show you how I did it:
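The gist isn't embedded in this copy either; a hedged sketch of the wiring (the JSON file name, port and template variable name are assumptions) looks like this:
const express = require('express');
const fs = require('fs');

const app = express();
app.set('view engine', 'pug'); // Express will look for views/index.pug

app.get('/', (req, res) => {
  // Read the quotes the scraper wrote to disk and hand them to the template
  const quotes = JSON.parse(fs.readFileSync('quotes.json', 'utf8'));
  res.render('index', { quotes });
});

app.listen(3000);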
This is the UI that will be created based on the index.pug template.
You can get the full source code for QuoteService here.
Step 3: Express Server settings for QuoteService
In order for QuoteService to be able to listen to the RabbitMQ queue, we have to do some setup during our server initialization. Besides that, we also render the quotes HTML based on the index.pug template we created in step two.
Aside from booting up the web server, this configuration also fulfills the following purpose:
Renders the index.pug template (line 17)
Listens to RabbitMQ queue (line 23)
Step 4: Run and test It !!!
Let’s run our project and test it locally. Ultimately, you can get my full source code from Github.
1. Let’s boot our QuoteService component. Go to the QuoteService folder and do npm run start
2. Boot up our TimerService component by going to the TimerService folder and running npm run start
3. Below are screenshots for you if you run it successfully:
TimerService publishes two events to the RabbitMQ exchange
QuoteService is able to consume two events listening to the RabbitMQ queue | https://medium.com/better-programming/a-step-by-step-guide-to-building-event-driven-microservices-with-rabbitmq-deeb85b3031c | ['Tek Loon'] | 2019-08-15 16:01:54.319000+00:00 | ['Microservices', 'Nodejs', 'Programming', 'JavaScript', 'Rabbitmq'] | Title StepbyStep Guide Building EventDriven Microservices RabbitMQContent StepbyStep Guide TekLoon’s Dev Rule 1 Always start easiest part easy entry allow complete first task gain confidence face upcoming challenge Step 1 TimerService development Let’s start creating TimerService Code TimerService View full code Let’s look Created connection RabbitMQ server Created channel Created direct nondurable ‘quote’ exchange Published message exchange interval 60 second code fulfills purpose TimerService Step 2 QuoteService development Let’s continue QuoteService development main function service able scrape motivational quote Internet receives message RabbitMQ queue First create consumer listen event sent TimerService Let’s make simple receive message call scrapQuoteOfTheDay function Next proceed scrape quote Internet I’m getting quote wisdomquotescoms writing JSON file Pretty straight forward right business logic place let’s make HTML page display quote scrapped research simplest way dynamically render HTML ExpressJS using template engine Pug Let show UI created based indexpug template get full source code QuoteService Step 3 Express Server setting QuoteService order QuoteService able listen RabbitMQ queue would setup server initialization Besides also rendering quote HTML based indexpug template created step two Aside booting web server configuration also fulfills following purpose Renders indexpug template line 17 template line 17 Listens RabbitMQ queue line 23 Step 4 Run test Let’s run project test locally Ultimately get full source code Github Let’s boot QuoteService component Go QuoteService folder npm run start Boot TimerService component going TimerService folder running npm run start screenshots run successfully TimerService publishes two event RabbitMQ exchange QuoteService able consume two event listening RabbitMQ queueTags Microservices Nodejs Programming JavaScript Rabbitmq |
3,570 | Writing Often is Far Easier Than Writing Well | A lot of people think it’s hard to write frequently.
That’s just a belief, of course. You know what they say; what you believe affects what you achieve.
Which is not to say believing means achieving, necessarily. I knew a guy that believed he’d be the Prime Minister some day. He’s got one foot pretty much in the grave now, and that ship has long sailed, belief or not.
It’s not so much what you believe, but what you don’t believe...
Because if you don’t believe something is even possible, the odds of it happening tend to shrink accordingly.
Tell yourself it’s not possible to write daily and you won’t.
It’s the Roger Bannister effect. Doctors used to say it wasn’t humanly possible to run a mile in under 4 minutes and not die. Until Bannister did it. Within a year, 4 more runners. Today, over 1000 runners have beat the 4 minute mile.
Turns out the barrier was psychological. But then, aren’t most barriers?
The proof is in the pudding… here you go…
Know why I don’t believe it’s hard to write prolifically? How often do you talk? Do you say you couldn’t possibly talk today? Is there a single day that you haven’t had an opinion about something?
No. It’s not the writing that’s hard. It’s facing down the blank page that’s hard.
I watched that when my daughter went to art school. The prof would say make something that evokes love, and they all got busy. Make something to evoke darkness — they all got busy. But if she just said “make something” without adding a prompt, they all froze.
What to make? What to make?
Creating is always easier in a container. Constraints are where the magic happens. Give a creative even a single word and they’re off to town.
Sit down one afternoon and write 50 prompts on little pieces of paper and throw them in a jar. And every day you pull one out and voila. Easy to write often as long as you don’t need to figure out “what” to write about.
Those times the muse shows up bearing words?
Those are the cherry on top.
Writing WELL…That’s Another Story
We tend to ramble. You know that guy who loves to tell stories, but he’s boring and draws everything out until your brain screams get to the point already?
Like that.
When I was pregnant, I used to fall asleep when my ex was talking. I never meant to. Pregnancy hormones. Tired, all the time. He’d start talking and it was like reading a Marcel Proust novel. Put me right to damn sleep.
On the flip side is the book you can’t stop reading. You know you’re going to regret it tomorrow, but as far as things you’re going to regret in the morning go, a book is a pretty safe option among the other choices available.
Compelling writing grabs you by the eyeballs like a Dan Brown or Tatiana de Rosnay novel and doesn’t let go.
GoodReads has a list of “fast paced” books and darn near every Dan Brown book is on that list. And Suzanne Collins and Leigh Bardugo, and a striving writer could probably learn a thing or two from compelling writers.
And you could. Spend hours reading compelling writers in the hopes of picking up that ability by strange osmosis, but there’s a faster way.
If it doesn’t fuel the story or feed the story, chop chop. That’s what God invented the delete key for.
Because 9 minutes reading Dan Brown and 9 minutes reading James Joyce or Marcel Proust aren’t the same 9 minutes, if you know what I’m saying.
Your mileage may vary on the author choices, the point is the same. | https://medium.com/linda-caroll/writing-often-is-far-easier-than-writing-well-4d1b4b4cfcbb | ['Linda Caroll'] | 2019-08-26 18:35:10.741000+00:00 | ['Writing Tips', 'Inspiration', 'Creativity', 'Writing', 'Reading'] | Title Writing Often Far Easier Writing WellContent lot people think it’s hard write frequently That’s belief course know say believe affect achieve say believing mean achieving necessarily knew guy believed he’d Prime Minister day He’s got one foot pretty much grave ship long sailed belief It’s much believe don’t believe don’t believe something even possible odds happening tend shrink accordingly Tell it’s possible write daily won’t It’s Roger Bannister effect Doctors used say wasn’t humanly possible run mile 4 minute die Bannister Within year 4 runner Today 1000 runner beat 4 minute mile Turns barrier psychological aren’t barrier proof pudding… go… Know don’t believe it’s hard write prolifically often talk say couldn’t possibly talk today single day haven’t opinion something It’s writing that’s hard It’s facing blank page that’s hard watched daughter went art school prof would say make something evokes love got busy Make something evoke darkness — got busy said “make something” without adding prompt froze make make Creating always easier container Constraints magic happens Give creative even single word they’re town Sit one afternoon write 50 prompt little piece paper throw jar every day pull one voila Easy write often long long don’t need figure “what” write time muse show bearing word cherry top Writing WELL…That’s Another Story tend ramble know guy love tell story he’s boring draw everything brain scream get point already Like pregnant used fall asleep ex talking never meant Pregnancy hormone Tired time He’d start talking like reading Marcel Proust novel Put right damn sleep flip side book can’t stop reading know you’re going regret tomorrow far thing you’re going regret morning go book pretty safe option among choice available Compelling writing grab eyeball like Dan Brown Tatiana de Rosnay novel doesn’t let go GoodReads list “fast paced” book darn near every Dan Brown book list Suzanne Collins Leigh Bardugo striving writer could probably learn thing two compelling writer could Spend hour reading compelling writer hope picking ability strange osmosis there’s faster way doesn’t fuel story feed story chop chop That’s God invented delete key 9 minute reading Dan Brown 9 minute reading James Joyce Marcel Proust aren’t 9 minute know I’m saying mileage may vary author choice point sameTags Writing Tips Inspiration Creativity Writing Reading |
3,571 | Understanding the Power of Redux | Let’s take a more in-depth look into this State Management powerhouse.
As a relatively new React developer, one thing that’s stood out to me is how top-heavy my React apps can sometimes feel. While each component always has its own bells and whistles, the “lowest shared parent” can sometimes feel overloaded by the amount of stateful information it stores. Don’t get me wrong, lifting state is an essential concept in React and certainly one of the library’s most useful features, but I’ve always wondered if there’s a better way to do it.
Enter Redux.
What Is Redux?
Redux is a state management library created by Dan Abramov and Andrew Clark in 2015 that aims to provide you with a single source of truth for state across all components so you can easily access stateful information at any time.
Basing Redux somewhat on Flux, a pre-existing application architecture pattern used by the likes of Facebook, Dan developed the concepts that eventually led to the development of Redux while preparing for a presentation at React Europe.
Dan Abramov’s presentation on Hot Reloading at React Europe 2015, where the key concepts driving the development of Redux were first explored.
From this talk, Redux was born shortly thereafter, leading to a seachange in how state is handled in JavaScript web applications.
Redux was built so it could be used with any component-based JavaScript library, so if you’re working with Angular or Vue.js instead of React, Redux might still be the state management solution for you!
Why Should I Use Redux?
As I mentioned earlier, Redux can be incredibly useful for top-heavy applications that need to manage large amounts of dynamic or stateful information. By creating a central store of information, you can easily access any piece of information within that store across all components without getting confused about where that information should go and how it interacts with other pieces of information.
This benefit also comes into play for applications with a large number of components. Sometimes, apps can get more complicated than we originally intended for them to be, and the lowest common parent can be several layers above where information is displayed or updated. By using Redux, we remove that complexity and make it easier to get information to where it needs to be.
If you’re building a smaller application with minimal amounts of stateful data, Redux might not be for you as it could be overkill. With that being said, adding Redux to a smaller project can hardly be considered bloat, as the tool is 2KB, so feel free to implement it at your discretion!
Getting Started With Redux
Installing Redux is relatively simple (you can use the docs for that), so let’s get into implementation. Redux itself is conceptually simple — the library provides you with a central store that allows you to manage your application’s state from a single location through the use of three parts: the store, actions, and reducers.
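For quick reference, getting Redux into a project is usually just a package-manager install (the second command assumes a React project and adds the official React bindings):

npm install redux
npm install react-redux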
Store
The store is the central location for your stateful data, and as such, there should only be one store in your application. Stores are easily implemented, as you can see in the example below:
import { createStore } from 'redux';

const store = createStore(myStore); // the argument (myStore) is the app's root reducer function
From this store, we can read and update state and register and unregister listeners. While some of these processes are carried out through methods native to the store (such as getState, dispatch, and subscribe; unsubscribing is done by calling the function that subscribe returns), most of the functionality you’ll build around your state will be handled by Actions and Reducers.
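To make those methods concrete, here is a minimal sketch using the store created above and the LOGIN action shown later in this article (the listener and payload values are just placeholders):

console.log(store.getState()); // read the current state

const unsubscribe = store.subscribe(() => {
  console.log('state changed:', store.getState()); // runs after every dispatched action
});

store.dispatch({ type: 'LOGIN', payload: { username: 'foo', password: 'bar' } }); // update state

unsubscribe(); // unregister the listener when it is no longer needed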
Actions
Actions are how we send information to our data store across components. Ultimately, Actions are just plain JavaScript objects that contain information on what action should be carried out and what information should be worked on as a result. A typical Action for login might look like (source):
{
  type: "LOGIN",
  payload: {
    username: "foo",
    password: "bar"
  }
}
To use an action, we must first define it using an action creator. An action creator is simply a function that creates our action, which we will eventually send to a reducer for processing. An action creator might look like (source):
// nanoid() generates a unique ID; it is available from Redux Toolkit or the standalone 'nanoid' package
function postAdded(title, content) {
  const id = nanoid()
  return {
    type: 'posts/postAdded',
    payload: { id, title, content }
  }
}
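On its own, an action creator just returns an object; nothing reaches the store until that object is dispatched. A quick sketch, reusing the creator above with placeholder arguments:

store.dispatch(postAdded('My first post', 'Hello Redux!'))
// dispatch hands the returned action object to the store, which forwards it to your reducers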
Once we’ve created an action, it’s time for us to define our reducers so we can update state.
Reducers
Reducers are functions that take instructions from an action, update state based on those instructions, and return the new state. Reducers can take a bunch of different forms, but one example of what a reducer might look like is below (source):
const loginReducer = (state = initialState, action) => {
  switch (action.type) { // This reducer handles any action with type "LOGIN"
    case "LOGIN":
      return state.map(user => {
        if (user.username !== action.payload.username) {
          return user;
        }
        if (user.password === action.payload.password) {
          return {
            ...user,
            login_status: "LOGGED IN"
          };
        }
        return user; // username matched but the password did not, so leave the user unchanged
      });
    default:
      return state; // always return the existing state for actions this reducer doesn't handle
  }
};
In practice, Redux state is treated as immutable, meaning it is never modified in place. When an action is dispatched, the reducer copies the relevant parts of the current state, applies the requested changes to that copy, and returns it as the entirely new state value. Additionally, the data passed to a Reducer is never changed either, preventing any side effects that might impact your application.
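In code, that usually looks like copying with the spread operator rather than assigning to the existing object. A tiny illustrative sketch, not tied to any particular reducer in this article:

const state = { loggedIn: false, username: 'foo' };
const nextState = { ...state, loggedIn: true }; // copy every existing field, then override one
// state itself is untouched; the store simply starts pointing at nextState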
Conclusion
Hopefully, this post has served as a useful starting point on your journey to learning more about Redux, an incredibly useful tool for managing state for complex applications.
Keep in mind, Redux isn’t the right solution for all projects out there, so if you want a bit more guidance on whether Redux is the right tool for you, Redux’s documentation contains helpful guidance. | https://medium.com/swlh/understanding-the-power-of-redux-fb49d4f54f4e | ['Maxwell Harvey Croy'] | 2020-08-22 21:09:38.747000+00:00 | ['Software Engineering', 'Software Development', 'JavaScript', 'Web Development', 'React'] | Title Understanding Power ReduxContent Let’s take indepth look State Management powerhouse relatively new React developer one thing that’s stood topheavy React apps sometimes feel component always it’s bell whistle “lowest shared parent” sometimes feel overloaded amount stateful information store Don’t get wrong lifting state essential concept React certainly one library’s useful feature I’ve always wondered there’s better way Enter Redux Redux Redux state management library created Dan Abramov Andrew Clark 2015 aim provide single source truth state across component easily access stateful information time Basing Redux somewhat Flux preexisting application architecture pattern used like Facebook Dan developed concept eventually led development Redux preparing presentation React Europe Dan Abramov’s presentation Hot Reloading React Europe 2015 key concept driving development Redux first explored talk Redux born shortly thereafter leading seachange state handled JavaScript web application Redux built could used componentbased JavaScript library you’re working Angular Vuejs instead React Redux might still state management solution Use Redux mentioned earlier Redux incredibly useful topheavy application need manage large amount dynamic stateful information creating central store information easily access piece information within store across component without getting confused information go interacts part info benefit also come play application large number component Sometimes apps get complicated originally intended lowest common parent several layer information displayed updated using Redux remove complexity make easier get information need you’re building smaller application minimal amount stateful data Redux might could overkill said adding Redux smaller project hardly considered bloat tool 2KB feel free implement discretion Getting Started Redux Installing Redux relatively simple use doc let’s get implementation Redux relatively simple — library provides central store allows manage application’s state single location use three part store action reducer Store store central location stateful data one store application Stores easily implemented see example const store createStoremyStore store read update state register unregister listener process carried method native store subscribe unsubscribe getState functionality you’ll build around state handled Actions Reducers Actions Actions send information data store across component Ultimately Actions JSON object contain information action carried information worked result typical Action login might look like source type LOGIN payload username foo password bar use action must first define using action creator action creator simply function creates action eventually send reducer processing action creator might look like source function postAddedtitle content const id nanoid return type postspostAdded payload id title content we’re created action it’s time u define reducer update state Reducers Reducers function take instruction action update state based instruction return new state Reducers take bunch different form example 
reducer might look like source const LoginComponent state initialState action switch actiontype reducer handle action type LOGIN case LOGIN return statemapuser userusername actionusername return user userpassword actionpassword return user loginstatus LOGGED practice Redux state technically immutable meaning never actually changed Actions Reducers work together create copy state requested create new state reflect current state set state entirely new value Additionally data passed Reducer never changed either preventing side effect might impact application Conclusion Hopefully post served useful starting point journey learning Redux incredibly useful tool managing state complex application Keep mind Redux isn’t right solution project want bit guidance whether Redux right tool Redux’s documentation contains helpful guidanceTags Software Engineering Software Development JavaScript Web Development React |
3,572 | AWS — Deploying React With NodeJS App On Elastic Beanstalk | AWS — Deploying React With NodeJS App On Elastic Beanstalk
A step by step guide with an example project
Photo by Moritz Kindler on Unsplash
AWS provides more than 100 services and it’s very important to know which service you should select for your needs. If you want to deploy an application quickly without any worry about the underlying infrastructure, AWS Elastic Beanstalk is the answer. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
In this post, we are going to deploy a React application with the NodeJS environment. There are other technologies or environments that AWS supports, such as Go, Java, NodeJS, .Net, etc.
Introduction
Example Project
Prerequisites
Build the Project
Deploy on Elastic Beanstalk
Debugging and Update the Deployment
Route 53
Cleaning Up
Things To Consider
Summary
Conclusion
Introduction
If you want to deploy an application without worrying about the underlying infrastructure, Elastic Beanstalk is the solution. When you build the app and upload it in the form of a zip or war file, Elastic Beanstalk takes care of provisioning the underlying infrastructure such as a fleet of EC2 instances, auto scaling groups, monitoring, etc.
The infrastructure provisioned by Elastic Beanstalk depends on the technology chosen while uploading your app. For example, we are going to deploy React with a NodeJS backend on Elastic Beanstalk, so we need to choose the NodeJS environment. If you want to know more about Elastic Beanstalk, here is the link.
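In a setup like this, the NodeJS part is typically a small server that serves the compiled React files. The example project may structure this differently; the following is only a minimal sketch that assumes an Express server and a React build output in a build/ folder:

// server.js: a minimal sketch, assuming Express and a React build in ./build
const express = require('express');
const path = require('path');

const app = express();
app.use(express.static(path.join(__dirname, 'build'))); // serve the compiled React app

// Elastic Beanstalk's NodeJS platform provides the port through the PORT environment variable
const port = process.env.PORT || 8080;
app.listen(port, () => console.log(`Listening on port ${port}`));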
Environment Setup
As you see in the above figure, we build our project and create a zip. Once we build the zip, we upload that zip on the Elastic Beanstalk environment. If you have a custom domain you can point that to the elastic beanstalk URL so that your app can be accessible to the public through that URL. | https://medium.com/bb-tutorials-and-thoughts/aws-deploying-react-with-nodejs-app-on-elastic-beanstalk-23c1fcf75dd2 | ['Bhargav Bachina'] | 2020-09-30 18:42:50.017000+00:00 | ['AWS', 'Programming', 'Nodejs', 'Web Development', 'React'] | Title AWS — Deploying React NodeJS App Elastic BeanstalkContent AWS — Deploying React NodeJS App Elastic Beanstalk step step guide example project Photo Moritz Kindler Unsplash AWS provides 100 service it’s important know service select need want deploy application quickly without worry underlying infrastructure AWS Elastic Beanstalk answer Elastic Beanstalk reduces management complexity without restricting choice control simply upload application Elastic Beanstalk automatically handle detail capacity provisioning load balancing scaling application health monitoring post going deploy React application nodejs environment technology environment AWS support Go Java NodeJS Net etc Introduction Example Project Prerequisites Build Project Deploy Elastic Beanstalk Debugging Update Deployment Route 53 Cleaning Things Consider Summary Conclusion Introduction want deploy application without worrying underlying infrastructure Elastic Beanstalk solution build app upload app form zip war Elastic Beanstalk would take care provisioning underlying infrastructure fleet EC2 instance auto calling group monitoring etc infrastructure provisioned Elastic Beanstalk depends technology chosen uploading app example going deploy React NodeJS backend Elastic Beanstalk need choose NodeJS environment want know Elastic Beanstalk link Environment Setup see figure build project create zip build zip upload zip Elastic Beanstalk environment custom domain point elastic beanstalk URL app accessible public URLTags AWS Programming Nodejs Web Development React |
3,573 | Digital Transformation Starts with Process | Digital Transformation is a top priority for many companies, however, 70% fail to meet expectations. Perhaps the most compelling reason is the lack of visibility to how business processes actually execute.
While 80% of the companies surveyed by McKinsey consider digital transformation a top corporate priority, nearly 70% fail to meet expectations.
There are several factors that contribute to such an outcome. Perhaps the most compelling reason is the lack of visibility into how business processes actually execute. Without a complete understanding of all the components of the business process, organizations lose the ability to identify where the weaknesses lie and plan for improvements. As Peter Drucker, the founder of modern management theory, said: “you can’t manage what you can’t measure.”
It’s really time to own your processes…
You can do more with end-to-end process analytics and begin to transform your entire organization; because transformation starts with understanding. You can’t improve what you don’t measure.
Answering deeper questions…
Are you tired of the lackluster information your analytics team is delivering? Wish you could understand more about who is responsible, what is causing problems, and why they keep occurring? We all are…
Successful automation initiatives can transform every facet of an organization, from the boardroom to the shop floor. In order to effectively deploy a Robotic Process Automation (RPA) project and realize the potential that automation can yield, businesses need to have an in-depth understanding of their organization’s processes. The most common and expensive mistake businesses make when implementing automation initiatives is failing to properly understand how their processes are actually performing and then choosing the wrong processes to automate. Leveraging actual business process data is critical to the long-term financial and technical success of any automation project. Rarely does the enthusiasm to just jump in and figure it out on the fly ever prove to be a recipe for success.
Photo by Lukas Blazek on Unsplash
The data is available, but can you access it?
The challenge for many organizations is understanding how their processes operate in real-time, across diverse functional teams, and within siloed back-end systems. Traditionally, enterprises have tried to generate process insights by utilizing a combination of manual efforts and first-generation platforms, including process mining and business intelligence, which prove to be time-consuming, costly, and error-prone. Comprehensive process data that reflects end-to-end workflows is key to automation success. And it already exists in just about every enterprise software application.
Whether utilizing an ERP, CRM, or another application, the data is waiting to be put to use. Solutions such as Process Intelligence enable organizations to discover, assess, visualize, analyze, and monitor process flows. Powered by artificial intelligence and machine learning technologies, Process Intelligence delivers accurate, in-depth, and real-time process discovery, analysis, and monitoring, which help automation leaders to accelerate digital transformation initiatives.
Photo by UX Indonesia on Unsplash
Stopping processes from failing
Many automation projects initially fail. While a number of factors play a role in the ultimate results that a transformation project delivers, understanding, and automating the right processes is a key component of successful digital change. Automating the wrong processes can lead to wasted resources and less return on investments. However, choosing the right processes for automation can be challenging without having a clear idea of how processes actually run. Many organizations think they understand how their processes work when, in reality, there are often tens or even hundreds of variations for a single process.
Traditional methods of discovering and analyzing processes involve internal personnel or outside consultants observing, timing, and documenting processes by hand, and then manually sorting and compiling the data. This time-consuming method usually takes several months to complete and is also highly subjective. The data reflects how the process performed during the specific instance in which it was being observed.
Other methods include business intelligence, which is a great asset in providing summary data and point-in-time metrics but lacks an understanding of the key to a process: time. Traditional process mining tools, many of which originated from the graduate program at Eidenhoven University, have made the assumption that corporate processes fit onto a simple process map, but neglect to recognize that processes function much more like a timeline.
A Future Beyond Business Intelligence and Process Mining
Process Intelligence is designed to help organizations discover and measure how current processes work, identify process bottlenecks, and surface areas for process optimization based on four foundational pillars:
Creation of a “digital twin” of the event data from disparate systems of record associated with business process execution;
Analyzing the performance of any process type, particularly highly variable case management processes;
Real-time Operational Monitoring of process behaviors that empower business users to remediate process bottlenecks; and
Artificial Intelligence/Machine Learning-based predictive analytics allows users to proactively identify the outcome or performance of any process instance in the early stages of the process execution.
Photo by Jo Szczepanska on Unsplash
Analyzing the performance of any process type, particularly highly variable case management processes
Often the most challenging, yet most impactful, areas for process discovery and analysis are case-based processes, for example, customer-facing processes such as call center operations, health care administration, and claims processing.
Process Monitoring Should Reflect Real Life
Keeping in mind the complexity and diversity of most business workflows, what would a process flowchart look like if it were to reflect the process of treating a patient in an emergency room? If you factor in all possible forks and loops in the workflow, representing doctors’ decisions, test results, and changes in the patient’s conditions, the process representation will look less like a flowchart and more like an incomprehensible web. We generally refer to this as the plate of spaghetti effect, as the process map looks much like a plate of spaghetti. Even in simpler cases, customer support or sales management, for example, the number of steps, repetitions, and variations could vary greatly.
Process Intelligence can help your organization easily identify, quantify, and target the highest-impact process instances for digital transformation or automation initiatives.
Real-time Operational Monitoring of process behaviors that empower business users to remediate process bottlenecks
Having insight into, and being able to visualize, how your processes behave empowers your organization to remediate inefficiencies and make informed decisions as to which aspects of your processes should be standardized through a combination of automation and more effective deployment of human capital through:
Protocol analysis that must be followed and identifying processes that fail to meet those conditions
Alerting the right staff or automating remediation to ensure processes are functioning properly, and eliminating bad processes before they happen with ongoing monitoring
Automatic comparison pre and post initiative process instances side by side to identify if your process improvements are performing as planned
Artificial Intelligence/Machine Learning-based predictive analytics that allows users to proactively identify the outcome or performance of any process instance in the early stages of the process execution
One piece of Peter Drucker’s often-quoted advice is “the best way to predict the future is to create it.” Process Intelligence enables organizations to have continuous visibility of how processes behave. Based on such insight, organizations can then re-imagine how process optimization can create sustainable competitive advantages by focusing on those business processes that are proven to generate reduced transaction costs and improved customer service levels. What if you could not only see what will happen next but also prescribe a solution to avoid a problem or costly mistake before it happens? That is what the ultimate objective of digital transformation is all about: predicting the future process state. By combining process mining with machine learning and artificial intelligence, your organization can achieve highly integrated and fully automated insights to forecast processes in their future state and take action to ensure positive outcomes.
Process knowledge is important for successful digital transformation initiatives; however, having an understanding of all the critical information locked within semi-structured and unstructured content is also critical. It is dependent on having real-time access to all your business-critical data no matter which business process platform it lies within. This includes the vast amount of data that exists in various business documents, including claims, invoices, proof of delivery, loan agreements, contracts, orders, identity documents, tax forms, pay stubs, utility bills, and more. This understanding of both content and processes is referred to as having digital intelligence.
With Digital Intelligence, organizations gain the valuable, yet often hard to attain, insight into their operations that enables true business transformation. With access to real-time data about exactly how processes are currently working and the content that fuels them, Digital Intelligence empowers enterprises to make a tremendous impact where it matters most: customer experience, competitive advantage, visibility, and compliance.
The latest Everest Group Process Mining Products PEAK Matrix® Assessment 2020 illustrates that traditional process mining has deep roots among data professionals and automation leaders, but the Process Intelligence approach is gaining momentum — especially with the impact of stay at home orders, social distancing, and economic uncertainty. Business leaders need to have a valuable understanding of their business workflows, identify the best use cases for RPA projects quickly, fix process bottlenecks as soon as they occur, and continually optimize automation performance. Process Intelligence answers this call throughout the entire business, even across diverse workflows, departments, technology systems, and locations.
If you enjoyed reading the article don’t forget to applaud. | https://medium.com/datadriveninvestor/digital-transformation-starts-with-process-9364998cf59b | ['Ryan M. Raiker'] | 2020-12-03 15:37:57.017000+00:00 | ['Data Science', 'Technology', 'Artificial Intelligence', 'Automation', 'Future'] | Title Digital Transformation Starts ProcessContent Digital Transformation top priority many company however 70 fail meet expectation Perhaps compelling reason lack visibility business process actually execute 80 company surveyed McKinsey consider digital transformation top corporate priority nearly 70 fail meet expectation several factor contribute outcome Perhaps compelling reason lack visibility business process actually execute Without complete understanding component business process organization lose ability identify weakness lie plan improvement Peter Drucker founder modern management theory said “you can’t manage can’t measure” It’s really time processes… endtoend process analytics begin transform entire organization transformation start understanding can’t improve don’t measure Answering deeper questions… tired lackluster information analytics team delivering Wish could understand responsible causing problem occurring are… Successful automation initiative transform every facet organization boardroom shop floor order effectively deploy Robotic Process Automation RPA project realize potential automation yield business need indepth understanding organization’s process common expensive mistake business make implementing automation initiative failing properly understand process actually performing choosing wrong process automate Leveraging actual business process data critical longterm financial technical success automation project Rarely enthusiasm jump figure fly ever prove recipe success Photo Lukas Blazek Unsplash data available access challenge many organization understanding process operate realtime across diverse functional team within siloed backend system Traditionally enterprise tried generate process insight utilizing combination manual effort firstgeneration platform including process mining business intelligence prove timeconsuming costly errorprone Comprehensive process data reflects endtoend workflow key automation success already exists every enterprise software application Whether utilizing ERP CRM another application data waiting put use Solutions Process Intelligence enable organization discover ass visualize analyze monitor process flow Powered artificial intelligence machine learning technology Process Intelligence delivers accurate indepth realtime process discovery analysis monitoring help automation leader accelerate digital transformation initiative Photo UX Indonesia Unsplash Stopping process failing Many automation project initially fail number factor play role ultimate result transformation project delivers understanding automating right process key component successful digital change Automating wrong process lead wasted resource le return investment However choosing right process automation challenging without clear idea process actually run Many organization think understand process work reality often ten even hundred variation single process Traditional method discovering analyzing process involve internal personnel outside consultant observing timing documenting process hand manually sorting compiling data timeconsuming method usually take several month complete also highly subjective data reflects process performed specific instance observed method include 
business intelligence great asset providing summary data pointintime metric lack understanding key process time Traditional process mining tool many originated graduate program Eidenhoven University made assumption corporate process fit onto simple process map neglect recognize process function much like timeline Future Beyond Business Intelligence Process Mining Process Intelligence designed help organization discover measure current process work identify process bottleneck surface area process optimization based four foundational pillar Creation “digital twin” event data disparate system record associated business process execution Analyzing performance process type particularly highly variable case management process Realtime Operational Monitoring process behavior empower business user remediate process bottleneck Artificial IntelligenceMachine Learningbased predictive analytics allows user proactively identify outcome performance process instance early stage process execution Photo Jo Szczepanska Unsplash Analyzing performance process type particularly highly variable case management process Often challenging yet impactful area process discovery analysis casebased process example customerfacing process call center operation health care administration claim processing Process Monitoring Reflect Real Life Keeping mind complexity diversity business workflow would process flowchart look like reflect process treating patient emergency room factor possible fork loop workflow representing doctors’ decision test result change patient’s condition process representation look le like flowchart like incomprehensible web generally refer plate spaghetti effect process map look much like plate spaghetti Even simpler case customer support sale management example number step repetition variation could vary greatly Process Intelligence help organization easily identify quantify target highestimpact process instance digital transformation automation initiative Realtime Operational Monitoring process behavior empower business user remediate process bottleneck insight able visualize process behave empowers organization remediate inefficiency make informed decision aspect process standardized combination automation effective deployment human capital Protocol analysis must followed identifying process fail meet condition Alerting right staff automating remediation ensure process functioning properly eliminating bad process happen ongoing monitoring Automatic comparison pre post initiative process instance side side identify process improvement performing planned Artificial IntelligenceMachine Learningbased predictive analytics allows user proactively identify outcome performance process instance early stage process execution One Peter Drucker’s oftenquoted advice “the best way predict future create it” Process Intelligence enables organization continuous visibility process behave Based insight reimagine process optimization create sustainable competitive advantage focusing business process proven generate reduced transaction cost improved customer service level could see happen next also prescribe solution avoid problem costly mistake happens ultimate objective digital transformation predicting future process state combining process mining machine learning artificial intelligence organization achieve highly integrated fully automated insight forecast process future state take action ensure positive outcome Process knowledge important successful digital transformation initiative however understanding 
critical information locked within semistructured unstructured content also critical dependent realtime access businesscritical data matter business process platform lie within includes vast amount data exists various business document including claim invoice proof delivery loan agreement contract order identity document tax form pay stub utility bill understanding content process referred digital intelligence Digital Intelligence organization gain valuable yet often hard attain insight operation enables true business transformation access realtime data exactly process currently working content fuel Digital Intelligence empowers enterprise make tremendous impact matter customer experience competitive advantage visibility compliance latest Everest Group Process Mining Products PEAK Matrix® Assessment 2020 illustrates traditional process mining deep root among data professional automation leader Process Intelligence approach gaining momentum — especially impact stay home order social distancing economic uncertainty Business leader need valuable understanding business workflow identify best use case RPA project quickly fix process bottleneck soon occur continually optimize automation performance Process Intelligence answer call throughout entire business even across diverse workflow department technology system location enjoyed reading article don’t forget applaudTags Data Science Technology Artificial Intelligence Automation Future |
3,574 | Happy First Birthday, Better Programming! | Happy First Birthday, Better Programming!
A story about our first year, some fun numbers, and a look at what’s ahead
Photo by Gaelle Marcel on Unsplash
Better Programming is officially one year old today!
Last year we launched with six articles and hoped that readers would show up and care. 2,373 of you did, and we were off to the races! Slowly but surely, authors we reached out to gave us a chance, entrusting us to do a good job of copy editing, publishing, and distributing their hard work.
In the last year, we’ve worked with over 1,500 authors, built an audience of more than 117,000 followers on Medium, and became Medium’s best-performing publication to launch in 2019. Better Programming is also Medium’s fastest-ever publication to reach 5 million pageviews a month after launch — which we did in just 6 months.
Cumulatively in our first year, Better Programming has garnered over 38 million pageviews from readers around the world!
We have a lot of exciting things in store for this year: We’re going to expand some of the topics we cover, launch our job board, and a few other things we can’t wait to tell you about.
Our goal continues: To build a high-quality publication — based on thorough tutorials and actionable advice — and an inclusive environment for people to learn, no matter your skill level.
Thank you for being a part of our first year.
Be well,
Zack and Tony
Co-founders of Better Programming | https://medium.com/better-programming/happy-first-birthday-better-programming-ca6e82d1415 | ['Zack Shapiro'] | 2020-05-01 15:22:19.622000+00:00 | ['Python', 'Medium', 'Programming', 'JavaScript', 'Writing'] | Title Happy First Birthday Better ProgrammingContent Happy First Birthday Better Programming story first year fun number look what’s ahead Photo Gaelle Marcel Unsplash Better Programming officially one year old today Last year launched six article hoped reader would show care 2373 race Slowly surely author reached gave u chance entrusting u good job copy editing publishing distributing hard work last year we’ve worked 1500 author built audience 117000 follower Medium became Medium’s bestperforming publication launch 2019 Better Programming also Medium’s fastestever publication reach 5 million pageviews month launch — 6 month Cumulatively first year Better Programming garnered 38 million pageviews reader around world lot exciting thing store year We’re going expand topic cover launch job board thing can’t wait tell goal continues build highquality publication — based thorough tutorial actionable advice — inclusive environment people learn matter skill level Thank part first year well Zack Tony Cofounders Better ProgrammingTags Python Medium Programming JavaScript Writing |
3,575 | I came up with this crazy formula to help me write faster and better… | I came up with this crazy formula to help me write faster and better…
I used to write when the muse struck. Which wasn’t very often because, turns out, the muse tends to favor people who show up regularly. Which I wasn’t.
So I’ve been trying to build a daily writing habit. And I’ve done pretty well — only missed 2 days in the last 20. But wow. Writing every day sure made me see what I struggle with. Maybe you struggle with this, too?
It’s not the blank page that slays me…
I learned that constraints mean the blank page is not a problem. Just pick a thing. Like a writing prompt. Because if a writer is asked to write about dogs, or summer or a favorite food — heck, we can all do that.
So I use constraints and the blank page isn’t the problem anymore.
You know what is? Brain pong! And rambling.
I pick a topic, and start writing and all the ideas about that topic show up. And then the writing rambles on and on and — blah. No.
Walk away. Edit.
Walk away. Edit.
That’s painful. And it takes too long. Who has that kind of time?
(and people do this twice a day? are they still sane??)
Then it dawned on me that constraints work for topic, so why not try them for the whole writing process?
So I created this crazy formula to speed up my writing.
It looks like this…
1. What’s the topic?
The topic is usually inspired by some small nugget of an idea. Some random thought, or something I read or heard or saw. When I get those ideas, I jot them down so I can dig into the idea bin anytime.
2. Is there relevant data or statistics?
It’s more powerful to use real numbers and cite a source than to generalize. For example… children witness 68% to 80% of domestic assaults. See how much more powerful that is than a generalization?
3. Can I add a personal experience?
A personal connection strengthens every story, so I dig around in my head to find a personal connection. Sometimes #1 and #3 are the same. Bonus!
4. How can I start with a bang?
Once I have the topic, some legit data and a personal story, now I can ask myself — what’s the most powerful way to open?
5. How can I connect the end to the beginning?
I love when writers end by looping back around to the beginning and connecting the whole piece. So how can I do that?
6. What’s the emotion I want to evoke?
Then I try to find a cover image to evoke that feeling — and hope I can find something a little offbeat that everyone hasn’t seen a zillion times.
7. Title last!
I struggle with titles. So now I ignore the title and just write the piece first. When I’m done, I re-read to see if the title is hiding inside. You’d be surprised how often it is — right there, staring at me from inside the piece. No wonder I couldn’t think it up in advance.
So now I have this handy-dandy formula…
And I’m going to use it. I swear I am. But ever since I came up with the idea, every time I open the page to write, poetry shows up. Go figure. | https://medium.com/linda-caroll/i-came-up-with-this-crazy-formula-to-help-me-write-faster-and-better-674c882beb28 | ['Linda Caroll'] | 2020-09-01 18:43:27.547000+00:00 | ['Writing', 'Advice', 'This Happened To Me', 'Inspiration', 'Creativity'] | Title came crazy formula help write faster better…Content came crazy formula help write faster better… used write muse struckWhich wasn’t often turn muse tends favor people show regularly wasn’t I’ve trying build daily writing habit I’ve done pretty well — missed 2 day last 20 wow Writing every day sure made see struggle Maybe struggle It’s blank page slays me… learned constraint mean blank page problem pick thing Like writing prompt writer asked write dog summer favorite food — heck use constraint blank page isn’t problem anymore know Brain pong rambling pick topic start writing idea topic show writing ramble — blah Walk away Edit Walk away Edit That’s painful take long kind time people twice day still sane dawned constraint work topic try whole writing process created crazy formula speed writing look like this… 1 What’s topic topic usually inspired small nugget idea random thought something read heard saw get idea jot dig idea bin anytime 2 relevant data statistic It’s powerful use real number cite source generalize example… child witness 68 80 domestic assault See much powerful generalization 3 add personal experience personal connection strengthens every story dig around head find personal connection Sometimes 1 3 Bonus 4 start bang topic legit data personal story ask — what’s powerful way open 5 connect end beginning love writer end looping back around beginning connecting whole piece 6 What’s emotion want evoke try find cover image evoke feeling — hope find something little offbeat everyone hasn’t seen zillion time 7 Title last struggle title ignore title write piece first I’m done reread see title hiding inside You’d surprised often — right staring inside piece wonder couldn’t think advance handydandy formula… I’m going use swear ever since came idea every time open page write poetry show Go figureTags Writing Advice Happened Inspiration Creativity |
3,576 | Interactive AI for Sustainable Architecture | I was recently selected for a research internship at one of the most promising and exciting AEC startups — Digital Blue Foam. I was given a chance to use their Augmented Intelligence enhanced software. Here’s an article that describes its features and advantages.
This software is a web-based interactive generative design tool that gathers urban and climate data from various available web sources. Once you log into the Digital Blue Foam website, you get options to load the previous session (not applicable if it’s your first time), a search bar for locating the site you want to design, and a feeling lucky option, which takes you through the tool and guides you through its features (select this option if you want a brief introduction to the tool).
Locate and Select Site
Once you search for the location the tool will fetch various datasets and take you to the location where you can draw and edit a polygon. Now you can see your located site in the 3D surrounding map.
You can draw a polygon to select your site
Draw Lines using the Pencil and Eraser
You can draw a freehand line or a polyline within your plot to divide it into subplots. The pencil has three options: (1) Axis — to assign the axis of the plot; (2) Park, which designates the drawn area as a park; and (3) a tape measure tool for measuring the line you are drawing.
Upon double-clicking with the click-and-drag draw-a-line option, the line for subplots is finalized, which can be seen in orange. Being interactive software, the tool even lets you change the number of subplots in the statistics section, which I’ll cover later in this article.
Draw tool also has an erase feature in case you want to redraw the lines. Clicking the play button now will generate designs according to the subplots drawn.
Straight as well as curved lines can be drawn to divide the plot
Generate Designs
The play button generates designs. You can pause on the design you like. The step forward and step backward buttons toggle through the designs one at a time.
The play button generates many options for the user to choose from.
Program Distribution
This section has input sliders that can be changed to change the ratio of educational, residential, commercial, office, and leisure spaces according to your needs.
If you like the generated design but not the distribution of program spaces, you can click on the specific part of the structure, which gets highlighted, and then edit features like the program, floor height, and the number of floors in the DBF Assistant. The side arrow keys give you the freedom to move the selected block. Selecting the plots also offers various options like converting the plot to a park, parking, or a podium. You can even merge 2 or more plots.
Sustainable Tools
This tool has 3 features:
a) Sun path
This feature helps in the visualization of sun paths and shadows for your site. It also has input date and time for users to select. While designing a building, checking the sun path and shadows is essential as studies have shown that sufficient daylight in space increases one’s productivity and freshness quotient.
b) Wind path
This feature allows you to visualize the direction of the wind with huge arrows.
Wind path is another important factor in sustainable and climate-responsive design. The location of the opening can be chosen according to the wind direction. Also, it helps to analyze the airflow of the plot.
c) Solar radiation
This is the third sustainability tool. The Solar Radiation option does make the designing process a bit slow, but of course, this option helps you to visualize and analyze the distribution of solar radiation throughout your design. It can be beneficial while designing the shading requirements for the structure.
Sun path and Wind Direction can be selected
Statistics Section
This section offers a wide range of targets to choose from. Max Height, Facade Area, Floor Area, Site Area, and other targets can be set by the user. One of my favorite targets among these options is Site Efficiency, which tells you the site efficiency percentage. You can see this target change by changing the subplots and programs. This makes it an interactive and informative tool that has sustainability options to choose from.
Multiple input targets that can be set for the design, which includes Efficiency, GFA, and many more
Viewing Tool
This tool has 3 subtypes:
a) View — This consists of 4 features:
Orbital view
Toggle view
Option to show cars
Show trees in the plot
b) Plan view — This part also has 4 features:
Plan view
Longitudinal section
Cross-section
Toggle isometric view
All these views are important from an architectural viewpoint.
c) Visualize Data — this subtype has 5 features:
Terrain — to visualize the satellite imagery of the map
Neighborhood programs — Proximity analysis of various nearby facilities
Heatmap
Structural Details
Save and Download
a) You will have slots to save 5 of your designs. These saved designs can be compared using the compare design tool, this compares factors like FAR (Floor Area Ratio), GFA (Gross Floor Area), and Efficiency.
An insightful comparison of up to 5 designs
b) The download button offers 4 options consisting of screenshot, 3D, 2D, and downloading an Excel (.xlsx) report. The report shows details like floor height and area for every floor in each program. This file also has input sections like Title, Building Type, Site Area, which can be useful for presentations and documentation.
What I liked most about this tool is that designs are generated on clicking the play button and you can see the targets change their values. When you feel the right target has been achieved, you can pause and save the design. This tool is one of the best use cases of “AI in AEC”, helping the user optimize the design efficiently with several input options.
The user interface is easy to use. Having a DBF Assistant makes it simple for the user to get information about the various tools offered. The software also has options like Quick Tips and Tutorials to guide you through the tool.
Progress is impossible without change, and it has become important to change our designing process to make it more efficient and sustainable. The way your city looks today will be very different in the coming years, so it is important to optimize your design and analyze its effects on the environment for future generations.
“The future of urban design and project development has arrived” — Digital Blue Foam
Visit digitalbluefoam.com to learn about Digital Blue Foam’s early access program! | https://medium.com/digital-blue-foam/interactive-ai-for-sustainable-architecture-c5baf4c0ad3d | ['Rutvik Deshpande'] | 2020-08-27 02:22:24.496000+00:00 | ['AI', 'Sustainable Development', 'Architecture', 'Generative Design', 'Analysis'] | Title Interactive AI Sustainable ArchitectureContent recently selected research internship one promising exciting AEC startup — Digital Blue Foam given chance use Augmented Intelligence enhanced software Here’s article describes feature advantage software webbased interactive generative design tool gather urban climate data various available web source log Digital Blue Foam website get option load previous session it’s first time search bar searching locating site looking designing feeling lucky option take tool guide feature must select option want learn tool brief Locate Select Site search location tool fetch various datasets take location draw edit polygon see located site 3D surrounding map draw polygon select site Draw Lines using Pencil Eraser draw random line polyline plot divide subplots pencil three option 1 Axis — assign axis plot 2 Park creates assigns drawn place park 3 tape measure tool measuring line drawing Upon doubleclicking click drag draw line option line subplots finalized seen orange color interactive software number subplots even changed statistic section I’ll inform article Draw tool also erase feature case want redraw line Clicking play button generate design according subplots drawn Straight well curved line drawn divide plot Generate Designs play button generates design pause design like step forward step backward button toggle design one time play button generates many option user choose Program Distribution section input slider changed change ratio educational residential commercial office leisure space according need like generated design distribution program space click specific part structure get highlighted edit feature like program floor height number floor DBF Assistant side arrow key give freedom move selected block Selecting plot also offer various option like convert plot park parking podium even merge 2 plot Sustainable Tools tool 3 feature Sun path feature help visualization sun path shadow site also input date time user select designing building checking sun path shadow essential study shown sufficient daylight space increase one’s productivity freshness quotient b Wind path feature allows visualize direction wind huge arrow Wind path another important factor sustainable climateresponsive design location opening chosen according wind direction Also help analyze airflow plot c Solar radiation third sustainability tool Solar Radiation option make designing process bit slow course option help visualize analyze distribution solar radiation throughout design beneficial designing shading requirement structure Sun path Wind Direction selected Statistics Section section offer wide range target choose Max Height Facade Area Floor Area Site Area target set user One favorite target among option Site Efficiency tell site efficiency percentage see target change changing subplots program make interactive informative tool sustainability option choose Multiple input target set design includes Efficiency GFA many Viewing Tool tool 3 subtypes View — consists 4 feature Orbital view Toggle view Option show car Show tree plot b Plan view — part also 4 feature Plan view Longitudinal section Crosssection Toggle isometric view view important architectural 
viewpoint c Visualize Data — subtype 5 feature Terrain — visualize satellite imagery map Neighborhood program — Proximity analysis various nearby facility Heatmap Structural Details Save Download slot save 5 design saved design compared using compare design tool compare factor like FAR Floor Area Ratio GFA Gross Floor Area Efficiency insightful comparison 5 design b download button offer 4 option consisting screenshot 3D 2D download excel xlsx file report report show detail like floor height area every floor program file also input section like Title Building Type Site Area useful presentation documentation liked tool design generated clicking play button see target change value feel right target achieved pause save design tool one bestuse case “AI AEC” help user optimization design efficiently several input option user interface easy use DBF Assistant make simple user get information various tool offered software also option like Quick Tips Tutorials guide tool Progress impossible without change become important factor change designing process make efficient sustainable way city look today different coming year important optimize design analyze effect environment future generation “The future urban design project development arrived” — Digital Blue Foam Visit digitalbluefoamcom learn Digital Blue Foam’s early access programTags AI Sustainable Development Architecture Generative Design Analysis |
3,577 | Why You’re Struggling with Innovation. And How to Get Better. | The modern typewriter had a problem. When Christopher Sholes developed the first model in 1868, it was an amazing development for its time. But if you tried to type too quickly on it, the type bars had a tendency to bang into one another and get stuck.
Sholes consulted an educator who helped him analyze the most common pairings of letters. He then split up those letters so that their type bars were farther apart and less likely to jam.
He slowed down typing speed to prevent the typewriter from jamming. Which then sped up the typing.
This dictated the layout of the keyboard, which came to be known for the first six letters in the upper row — QWERTY.
In 1873, the Sholes & Glidden typewriter became the first to be mass-produced, and its keyboard layout soon became standard. Nearly 150 years later, despite the fact that sticking type bars are an obsolete problem, the original inefficient design continues to be the standard.
It’s a telling lesson in the power of inertia. And in the barriers that prevent innovation.
The Least Innovative System Imaginable
“If you always do what you always did, you will always get what you always got.” — Albert Einstein
Imagine trying to design a system that prevented innovation. Your goal is to structure a company in a way that will actively discourage people from innovating. What would it include?
People told to develop new, innovative ideas and then not given any time to work on them?
Cut the resources available to everyone looking to advance new opportunities?
Create inconsistent messaging as senior management lauds the importance of innovation and plasters the walls with motivational posters, yet actual policy and daily decisions run counter?
Bureaucratic processes that require layers of approvals to depart from the typical methodology?
Harsh judgment of any experiment that doesn’t yield spectacular results on the first try?
Systems that reward maintaining the current status and are unforgiving of risks that jeopardize it?
It’s easy to see how a system with these aspects would discourage new ideas and innovation. Yet these same traits are also present in many of today’s companies.
Somehow, many of the features of a hypothetical organization designed to stifle innovation are present in the places we work today. Despite leaders citing a love of creativity and innovation, putting this desire into practice continues to be a struggle. As Charles Eames warned,
“Recent years have shown a growing preoccupation with the circumstances surrounding the creative act and a search for the ingredients that promote creativity. This preoccupation in itself suggests that we are in a special kind of trouble — and indeed we are.”
But why? Why do so many companies want to create innovative cultures yet somehow end up creating the polar opposite?
Because many of these companies have good managers. And good managers tend to discourage innovation.
Customers (and Hence Managers) Don’t Want Innovation
Good managers stay close to their customers. They know that if they’re to consistently support the company’s bottom line, they need to be attuned to customer needs and desires.
Before managers decide to invest in a new technology or strategy, they’ll consider their customers — Do they want it? Will this be profitable? How big is the potential market?
The better that managers ask these questions, the better their investments are aligned with customers. And the less likely they are to put out the next New Coke.
But while this is an advantage in identifying the next round of product improvements, it’s a major liability to wholesale innovation.
Most customers can quickly identify incremental improvements. Those areas are at the forefront of their experience and it doesn’t require a lot of imagination to come up with some improved features. So good managers, as you’d expect, are very adept at leading teams to identify and incorporate incremental improvements.
But major innovations — 10x type changes — often come from completely new perspectives. They come from thoughts and ideas that most customers haven’t even considered. As the old (dubiously quoted) Henry Ford saying goes,
“If I had asked people what they wanted, they would have said faster horses.”
A typical management structure makes it nearly impossible to justify diverting resources from known customer needs and desires to unproven markets and questionable investments. The systems and processes that keep the business running are specifically designed to identify and cut those initiatives that do not align with the customer’s current needs.
The solution then, needs to reverse this structure. It needs to override the systems that are highlighting innovation as unprofitable in the short-term and encourage their pursuit. It needs to protect these new ideas from the typical business plan mentality and be willing to take a shot on the unknown. Mainly, it needs to be willing to divorce innovation from customer needs. As Steve Jobs said,
“Some people say, ‘Give the customers what they want.’ But that’s not my approach. Our job is to figure out what they’re going to want before they do.”
And to do that, you need to separate this out from the traditional mainstream business.
The Chance for a Billion Dollar Return
Choice A: You can give a million dollars, bottom line, to your company through your efforts this year, guaranteed.
Choice B: You can give a billion dollars, bottom line, to your company through your efforts this year, with one chance in a hundred.
Dr. Astro Teller, Captain of Moonshots (CEO) of X, Alphabet’s moonshot factory, offered these two choices to his audience as he began an A360 talk on how to 10x your thinking.
In response to his question, most people prefer to chance the billion-dollar return, particularly with a 10x expected utility on the odds. Yet when asked whether their bosses would agree with that choice, most people say no — they feel their management would prefer them to opt for the safe returns.
People want to take risks. They want to deliver large-scale innovation. But they feel that their management disagrees.
And in many cases, they’re right.
Imagine you’re managing a team of employees to produce and support an existing product line. Now ask yourself, is innovation absolutely critical to grow your business?
If not, and you can meet demand based on your existing business plan, what’s your motivation to invite additional risk?
Most managers — and as a result most teams — don’t need to rely on major innovation to survive. They’re able to sustain their business with the same things that have worked for them in the past, until suddenly, and without much warning, those things no longer do.
Most people see innovation as an idea problem. But it’s not. It’s a resource allocation problem.
The question isn’t how do you generate more ideas, but how do you align your best people, and sufficient resources, to your best opportunities. As the great Peter Drucker wrote,
“Problem solving, however necessary, does not produce results. It prevents damage. Exploiting opportunities produces results.”
Sergey Brin recognized this concern at Google and developed a 70/20/10 responsibility model. Seventy percent of time was reserved for daily operations and taking care of the current business. Twenty percent went towards the next level of advancements. And the final ten percent went towards major innovations and moonshots.
Google then implemented tracking systems to ensure people were prioritizing this breakdown — and make sure managers didn’t allow the urgent to override the important.
Not every idea would work out — actually the vast majority would end up as flops — but by continuing to prioritize innovation, Google put themselves in a position to take advantage of the ones that did. As Eric Schmidt described it,
“You can systematize innovation even if you can’t completely predict it.”
Separate Your Moonshots from the Main Business
“Leaders who order their employees to be more innovative without first investing in organizational fitness are like casual joggers who order their bodies to run a marathon. It won’t happen, and the experience is likely to cause a great deal of pain.” — Safi Bahcall, Loonshots: How to Nurture Crazy Ideas That Win Wars, Cure Diseases, and Transform Industries
Yet in many companies, simply prescribing a breakdown isn’t enough. Existing management practices and systems are too ingrained in the culture. Stability and near-term returns will always override the long-term investment needed to encourage major innovation.
The solution then, lies in separating out those who focus on major innovation from the standard product line. It’s taking a group and telling them that their entire job is to choose Choice B — and creating an environment that encourages that mentality.
If their job is to choose Choice B and pursue ideas with a 1% success rate, failure will be unavoidable. So experimentation happens in a controlled environment, with quicker learning and far fewer negative consequences.
If their job is to choose Choice B, they’re not held to a quarterly return, so new innovations can be protected and developed.
They don’t need to guarantee a quarterly return, so risk-taking is easier to encourage. They can experiment and fail within controlled environments, so learning happens much more quickly and with far fewer negative consequences.
And since most new innovations don’t begin as clear successes, they can be developed within a protected space until they’re formed into more defined ideas. Sir Francis Bacon recognized this four centuries ago when he wrote, “As the births of living creatures are at first ill-shapen, so are all innovations, which are the births of time.”
But most importantly, the difference between delivering a million dollar return and a billion dollar return is not about working 1000 times as hard. And it’s not about being 1000 times smarter.
It requires a complete perspective shift. It requires separating this group from the typical management business plans and customer demands and pushing them to tackle completely new challenges. As Dr. Teller put it,
“If you push them — if you give them the freedom and the expectation to be weird, that’s moonshot thinking.”
Success Comes from Feedback Loops
“If you look at history, innovation doesn’t come just from giving people incentives; it comes from creating environments where their ideas can connect.” — Steven Berlin Johnson
In a conversation with Shane Parrish, former Y Combinator partner and the founder of Pioneer, Daniel Gross, describes the key to success as positive feedback loops. Our surrounding environments create feedback loops that either reinforce or discourage critical behaviors — which initiate chain reactions of either positive, or negative, behaviors.
Alexander Graham Bell, no stranger to innovation himself, agreed with Gross on the importance of surroundings towards creative success. In the 1901 volume, How They Succeeded, Orion Swett Marden interviewed Bell, then 54, who shared the following life lesson,
“Environment counts for a great deal. A man’s particular idea may have no chance for growth or encouragement in his community. Real success is denied that man, until he finds a proper environment.”
For innovation to be a success, it needs an environment of positive feedback loops — one that traditional management practices and business models are ill-equipped to support. The inertia towards supporting today’s needs and demands is too great to maintain enough focus on long-term innovation.
We cannot expect people to pursue moonshots within an environment designed for incremental improvements. We cannot expect people to pursue Choice B, when every feedback loop in place reinforces Choice A.
Until this responsibility is separated from the current management decision-making model, companies will continue to miss the opportunities for major innovations. Not because they’re making poor decisions. But because they’re making decisions based on criteria that are soon to become obsolete.
The alternative is to create groups that encourage these new perspectives and experiments. Which leads to ideas where people say, “there’s no way that could ever work.” Until, of course, they do.
Thanks, as always, for reading. Agree? Disagree? Other suggestions? Let me know, I’d love to hear your thoughts. And if you found this helpful, I’d appreciate if you could help me share with more people. Cheers! | https://jswilder16.medium.com/why-youre-struggling-with-innovation-and-how-to-get-better-533f5219c3e5 | ['Jake Wilder'] | 2019-04-22 01:46:42.469000+00:00 | ['Management', 'Leadership', 'Innovation', 'Creativity', 'Startup'] | Title You’re Struggling Innovation Get BetterContent modern typewriter problem Christopher Sholes developed first model 1868 amazing development time tried type quickly type bar tendency bang one another get stuck Sholes consulted educator helped analyze common pairing letter split letter type bar farther apart le likely jam slowed typing speed prevent typewriter jamming sped typing dictated layout keyboard came known first six letter upper row — QWERTY 1873 Sholes Glidden typewriter became first massproduced keyboard layout soon became standard Nearly 150 year later despite fact sticking type bar obsolete problem original inefficient design continues standard It’s telling lesson power inertia barrier prevent innovation Least Innovative System Imaginable “If always always always get always got” — Albert Einstein Imagine trying design system prevented innovation goal structure company way actively discourage people innovating would include People told develop new innovative idea given time work Cut resource available everyone looking advance new opportunity Create inconsistent messaging senior management lauds importance innovation plaster wall motivational poster yet actual policy daily decision run counter Bureaucratic process require layer approval depart typical methodology Harsh judgment experiment doesn’t yield spectacular result first try Systems reward maintaining current status unforgiving risk jeopardize It’s easy see system aspect would discourage new idea innovation Yet trait also present many today’s company Somehow many feature hypothetical organization designed stifle innovation present place work today Despite leader citing love creativity innovation putting desire practice continues struggle Charles Eames warned “Recent year shown growing preoccupation circumstance surrounding creative act search ingredient promote creativity preoccupation suggests special kind trouble — indeed are” many company want create innovative culture yet somehow end creating polar opposite many company good manager good manager tend discourage innovation Customers Hence Managers Don’t Want Innovation Good manager stay close customer know they’re consistently support company’s bottom line need attuned customer need desire manager decide invest new technology strategy they’ll consider customer — want profitable big potential market better manager ask question better investment aligned customer le likely put next New Coke advantage identifying next round product improvement it’s major liability wholesale innovation customer quickly identify incremental improvement area forefront experience doesn’t require lot imagination come improved feature good manager you’d expect adept leading team identify incorporate incremental improvement major innovation — 10x type change — often come completely new perspective come thought idea customer haven’t even considered old dubiously quoted Henry Ford saying go “If asked people wanted would said faster horses” typical management structure make nearly impossible justify diverting resource known customer need desire unproven market 
questionable investment system process keep business running specifically designed identify cut initiative align customer’s current need solution need reverse structure need override system highlighting innovation unprofitable shortterm encourage pursuit need protect new idea typical business plan mentality willing take shot unknown Mainly need willing divorce innovation customer need Steve Jobs said “Some people say ‘Give customer want’ that’s approach job figure they’re going want do” need separate traditional mainstream business Chance Billion Dollar Return Choice give million dollar bottom line company effort year guaranteed Choice B give billion dollar bottom line company effort year one chance hundred Dr Astro Teller Captain Moonshots CEO X Alphabet’s moonshot factory offered two choice audience began A360 talk 10x thinking response question people prefer chance billiondollar return particularly 10x expected utility odds Yet asked whether boss would agree choice people say — feel management would prefer opt safe return People want take risk want deliver largescale innovation feel management disagrees many case they’re right Imagine you’re managing team employee produce support existing product line ask innovation absolutely critical grow business meet demand based existing business plan what’s motivation invite additional risk manager — result team — don’t need rely major innovation survive They’re able sustain business thing worked past — suddenly without much warning — longer people see innovation idea problem it’s It’s resource allocation problem question isn’t generate idea align best people sufficient resource best opportunity great Peter Drucker wrote “Problem solving however necessary produce result prevents damage Exploiting opportunity produce results” Sergey Brin recognized concern Google developed 702010 responsibility model Seventy percent time reserved daily operation taking care current business Twenty percent went towards next level advancement final ten percent went towards major innovation moonshots Google implemented tracking system ensure people prioritizing breakdown — make sure manager didn’t allow urgent override important every idea would work — actually vast majority would end flop — continuing prioritize innovation Google put position take advantage one Eric Schmidt described “You systematize innovation even can’t completely predict it” Separate Moonshots Main Business “Leaders order employee innovative without first investing organizational fitness like casual jogger order body run marathon won’t happen experience likely cause great deal pain” — Safi Bahcall Loonshots Nurture Crazy Ideas Win Wars Cure Diseases Transform Industries Yet many company simply prescribing breakdown isn’t enough Existing management practice system ingrained culture Stability nearterm return always override longterm investment needed encourage major innovation solution lie separating focus major innovation standard product line It’s taking group telling entire job choose Choice B — creating environment encourages mentality job choose Choice B pursue idea 1 success rate failure unavoidable experimentation happens controlled environment quicker learning much le negative consequence job choose Choice B they’re held quarterly return new innovation protected developed don’t need guarantee quarterly return risktaking easier encourage experiment fail within controlled environment learning happens much quickly much le negative consequence since new innovation don’t begin clear success 
developed within protected space they’re formed defined idea Sir Francis Bacon recognized four century ago wrote “As birth living creature first illshapen innovation birth time” importantly difference delivering million dollar return billion dollar return working 1000 time hard it’s 1000 time smarter requires complete perspective shift requires separating group typical management business plan customer demand pushing tackle completely new challenge Dr Teller put “If push — give freedom expectation weird that’s moonshot thinking” Success Comes Feedback Loops “If look history innovation doesn’t come giving people incentive come creating environment idea connect” — Steven Berlin Johnson conversation Shane Parrish former Combinator partner founder Pioneer Daniel Gross describes key success positive feedback loop surrounding environment create feedback loop either reinforce discourage critical behavior — initiate chain reaction either positive negative behavior Alexander Graham Bell stranger innovation agreed Gross importance surroundings towards creative success 1901 volume Succeeded Orion Swett Marden interviewed Bell 54 shared following life lesson “Environment count great deal man’s particular idea may chance growth encouragement community Real success denied man find proper environment” innovation success need environment positive feedback loop — one traditional management practice business model illequipped support inertia towards supporting today’s need demand great maintain enough focus longterm innovation cannot expect people pursue moonshots within environment designed incremental improvement cannot expect people pursue Choice B every feedback loop place reinforces Choice responsibility separated current management decisionmaking model company continue miss opportunity major innovation they’re making poor decision they’re making decision based criterion soon become obsolete alternative create group encourage new perspective experiment lead idea people say “there’s way could ever work” course Thanks always reading Agree Disagree suggestion Let know I’d love hear thought found helpful I’d appreciate could help share people CheersTags Management Leadership Innovation Creativity Startup |
3,578 | Part 2: Stop Using If-Else Statements | APPLIED DESIGN PATTERNS: STRATEGY
Part 2: Stop Using If-Else Statements
Let’s have yet another look at how you can replace if-else statements.
Okay, we both already agree using If-Else statements everywhere is an awful practice.
You’ve without a shred of doubt met If-Else statements that made your head ache six ways from Sunday. Nasty branching and unclear responsibilities. We might as well slap some goto in there while we’re at it just for sh*ts and giggles.
Instructors and teachers love If-Else. It’s their hammer and everything’s a nail. Gotta decide which logic to execute? Use If-Else. Want to create a factory? Use If-Else. You get the point already…
We’ll be refactoring this illustrative hot piece of mess below to something extensible and production ready.
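The original shows the offending code as an embedded image, which this text doesn’t carry. As a stand-in, here’s a rough sketch of the kind of branching being described; the Order properties and format names are my own placeholders, not necessarily the author’s.

using System;

public class Order
{
    public int OrderId { get; set; }
    public decimal TotalPrice { get; set; }

    // Every new output format means yet another branch in this method.
    public string GenerateOutput(string formatType)
    {
        if (formatType == "csv")
        {
            return $"{OrderId},{TotalPrice}";
        }
        else if (formatType == "json")
        {
            return $"{{ \"orderId\": {OrderId}, \"totalPrice\": {TotalPrice} }}";
        }
        else if (formatType == "html")
        {
            return $"<p>Order {OrderId}: {TotalPrice}</p>";
        }
        else
        {
            throw new InvalidOperationException($"Unknown format: {formatType}");
        }
    }
}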
Terrible to look at. And, yes, it could have been implemented with switch as well. This kind of code is nevertheless prevalent.
You know it’s bad. But it’s fixable. A bit of refactoring and we’re back to highly extensible and maintainable code that’ll make you sleep like a baby.
“So, how do we replace these pieces of pain inducing If-Else statements?”
With strategy objects and type discovery.
You’ve likely already heard of the strategy pattern, but might still secretly wonder what the fuss is all about. Here’s a brief introduction to a pattern that’ll change how you write branching in the future.
We often need to determine which logic to execute based on some condition. By creating a group of classes with a common interface, in combination with type discovery, we can easily swap which logic to execute, without the need for If-Else.
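In code, the end result looks roughly like this (a sketch; Order and JsonOutputFormatter are the illustrative names used in the snippets further down, not necessarily the author’s originals):

var order = new Order { OrderId = 42, TotalPrice = 99.95m };

// Swapping the behaviour means swapping the object, not adding a branch.
string output = order.GenerateOutput(new JsonOutputFormatter());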
Nice, huh? We just call a specialized object’s method instead of extending our application with a nightmare of endless If-Else branching statements.
We’ll be refactoring the code above, ensuring we adhere to the SOLID principles. Especially with the Open/Closed principle in mind.
Demo time
1 First we start by extracting the logic out of the horrible If-Else statement and place it in separate strategy classes. At the same time, we create a common interface.
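The article’s snippet is an image, so here is a sketch that follows its description. The interface name IOrderOutputStrategy and the attribute come from the article itself; the concrete class names, the method signature and the friendly names are my assumptions.

public interface IOrderOutputStrategy
{
    string GenerateOutput(Order order);
}

[OutputFormatterName("csv")]
public class CsvOutputFormatter : IOrderOutputStrategy
{
    // One class, one way of formatting an order.
    public string GenerateOutput(Order order) =>
        $"{order.OrderId},{order.TotalPrice}";
}

[OutputFormatterName("json")]
public class JsonOutputFormatter : IOrderOutputStrategy
{
    public string GenerateOutput(Order order) =>
        $"{{ \"orderId\": {order.OrderId}, \"totalPrice\": {order.TotalPrice} }}";
}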
Each strategy class implements the common interface. Also, I’ve applied an attribute to each class, which gives us the opportunity to give the strategy a friendly name. The attribute class is defined later in this article.
2 Then, we create a method in the Order class which takes the strategy interface as a parameter. Here’s a snippet of the entire Order class.
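Again as a sketch consistent with the description, since the real snippet is an image:

public class Order
{
    public int OrderId { get; set; }
    public decimal TotalPrice { get; set; }

    // The Order knows nothing about concrete formats anymore.
    // It simply hands itself to whichever strategy it is given.
    public string GenerateOutput(IOrderOutputStrategy outputStrategy)
    {
        return outputStrategy.GenerateOutput(this);
    }
}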
This allows us to delegate the logic to the specialized class, instead of writing horrible, not-easy-to-extend if-else statements.
3 Now the part where we actually remove the If-Else hell from the illustrative example at the beginning.
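A sketch of that wiring, assuming the attribute exposes a DisplayName property (shown further down); the original may differ in the details.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class Program
{
    public static void Main()
    {
        // Discover every class implementing the common interface and key it
        // by the friendly name carried on its attribute.
        Dictionary<string, Type> formatters = Assembly.GetExecutingAssembly()
            .GetTypes()
            .Where(t => typeof(IOrderOutputStrategy).IsAssignableFrom(t)
                        && !t.IsInterface && !t.IsAbstract)
            .ToDictionary(
                t => t.GetCustomAttribute<OutputFormatterNameAttribute>().DisplayName,
                t => t);

        Console.WriteLine($"Available formats: {string.Join(", ", formatters.Keys)}");
        string choice = Console.ReadLine();

        // Create an instance of the chosen strategy and hand it to the order.
        var strategy = (IOrderOutputStrategy)Activator.CreateInstance(formatters[choice]);
        var order = new Order { OrderId = 42, TotalPrice = 99.95m };

        Console.WriteLine(order.GenerateOutput(strategy));
    }
}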
This provides some real extensibility to our application.
Let’s briefly walk thru the type discovery process.
We’re building a dictionary containing all the types that implement the common interface, using the name from the attribute as the key.
Then, we let the user enter some text into the console that will match one of the output formatters’ names.
Based on the input, we first find the corresponding type in the dictionary and create an instance of that type.
The instance is passed to the Order’s GenerateOutput method.
It’s more code, no doubt. But it will allow us to dynamically discover new formatter strategies as they are added to the solution. Something If-Else won’t provide you with, no matter how hard instructors try to push it.
One more thing…
If you wonder about the [OutputFormatterName("")] attribute above the strategy classes, the implementation looks like this below.
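A sketch of what that attribute can look like; the DisplayName property name is my choice, made to line up with the discovery code above.

using System;

[AttributeUsage(AttributeTargets.Class)]
public class OutputFormatterNameAttribute : Attribute
{
    public string DisplayName { get; }

    public OutputFormatterNameAttribute(string displayName)
    {
        DisplayName = displayName;
    }
}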
A very simple attribute class that provides us with a friendly display name.
“What to do if I need a new way to format the output?”
You create a new class that implements the IOrderOutputStrategy. It’s honestly that simple. The type discovery process will take care of “registering” the new formatter with the application.
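For example, a hypothetical XML formatter would be the entire change:

[OutputFormatterName("xml")]
public class XmlOutputFormatter : IOrderOutputStrategy
{
    // Dropped into the project, it gets picked up by type discovery automatically.
    public string GenerateOutput(Order order) =>
        $"<order id=\"{order.OrderId}\"><total>{order.TotalPrice}</total></order>";
}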
By defining the GenerateOutput method on the Order, we don’t need to branch our code using If-Else. We just delegate the responsibility to the specialized class.
“Again, you’re creating a lot of classes to do something simple!”
Sure, it’s a lot of additional classes. But they are insanely simple.
They have meaningful names derived from the functional requirements. Other developers would recognize their purpose from the get-go.
I can also walk thru logic with business people and they’re completely onboard with what I’m talking about with only a bit of handholding — it’s code after all.
Should we really limit our expressiveness, just to accommodate people who are stuck with If-Else?
“But won’t there be situations when If-Else is okay?”
Sure. Sometimes… If you’re into competitive programming, writing something that needs to be highly optimized, if you know something will absolutely not change (until it does)— or doing a college assignment. Instructors love that sh*t. | https://medium.com/dev-genius/part-2-stop-using-if-else-statements-ae4b0bec5bad | ['Nicklas Millard'] | 2020-08-06 19:30:52.182000+00:00 | ['Technology', 'Software Engineering', 'Csharp', 'Programming', 'Software Development'] | Title Part 2 Stop Using IfElse StatementsContent APPLIED DESIGN PATTERNS STRATEGY Part 2 Stop Using IfElse Statements Let’s yet another look replace ifelse statement Okay already agree using IfElse statement everywhere awful practice You’ve without shred doubt met IfElse statement made head ache six way Sunday Nasty branching unclear responsibility might well slap goto we’re shts giggle Instructors teacher love IfElse It’s hammer everything’s nail Gotta decide logic execute Use IfElse Want create factory Use IfElse get point already… We’ll refactoring illustrative hot piece mess something extensible production ready Terrible look yes could implemented switch well kind code nevertheless prevalent know it’s bad it’s fixable bit refactoring we’re back highly extensible maintainable code that’ll make sleep like baby “So replace piece pain inducing IfElse statements” strategy object type discovery You’ve likely already heard strategy pattern might still secretly wonder fuzz Here’s brief introduction pattern that’ll change write branching future often need determine logic execute based condition creating group class common interface combination type discovery easily swap logic execute without need IfElse Nice huh call specialized object’s method instead extending application nightmare endless IfElse branching statement We’ll refactoring code ensuring adhere SOLID principle Especially OpenClosed principle mind Demo time 1 First start extracting logic horrible IfElse statement place separate strategy class time create common interface strategy class implement common interface Also I’ve applied attribute class provides u opportunity give strategy friendly name attribute class define later article 2 create method Order class take strategy interface parameter Here’s snippet entire Order class allows u delegate logic specialized class instead writing horrible noteasytoextend ifelse statement 3 part actually remove IfElse hell illustrative example beginning provide real extensibility application Let’s briefly walk thru type discovery process We’re building dictionary containing type implement common interface use name attribute key let user enter text console match one output formatters name Based input first find correspond type dictionary create instance type instance passed Order’s GenerateOutput method It’s code doubt allow u dynamically discover new formatter strategy added solution Something IfElse won’t provide matter hard instructor try push One thing… wonder OutputFormatterName strategy class implementation look like simple attribute class provides u friendly display name “What need new way format output create new class implement IOrderOutputStrategy It’s honestly simple type discovery process take care “registering” new formatter application defining GenerateOuput method Order don’t need branch code using IfElse delegate responsibility specialized class “Again you’re creating lot class something simple” Sure it’s lot additional class insanely simple meaningful name derived functional requirement developer would recognize purpose getgo 
also walk thru logic business people they’re completely onboard I’m talking bit handholding — it’s code really limit expressiveness accommodate people stuck IfElse “But won’t situation IfElse okay” Sure Sometimes… you’re competitive programming writing something need highly optimized know something absolutely change does— college assignment Instructors love shtTags Technology Software Engineering Csharp Programming Software Development |
3,579 | The 5 Kinds of Award-Winning Commercials | 2. Larger-Than-Life Results
This strategy involves showing the experience the user will go through, once they start using the company’s product.
The successful ads tend to exaggerate what you receive. The greater the exaggeration, the more comical and unusual the advertisement is, making it more indelible.
Author via YouTube
This comical advertisement by Lynx starts with a single woman running across the island, in pursuit of something. The scene changes and in a few seconds you now see hordes of women, running over one another, climbing and trying to reach somewhere. And then in the climax, you get to know what was attracting all these women.
This advertisement expresses two benefits of the product:
1. How strong the deodorant is — it could be detected miles away.
2. How strongly it attracts all women.
In addition, the expressions of the main character in the climax make it even more hilarious.
No doubt a lot of other deodorant brands followed suit once the Lynx effect gained popularity. Though they had varying levels of success.
You could also use what is called inverted consequences — a version in which you warn against the implications of not following the ad’s recommendation.
For example, while promoting a brand of vitamin, one could show how the viewer is missing out on life by not ingesting the vitamin (Revital Capsules).
Risks to keep in mind: this form of advertising runs the risk of getting you trapped in allegations and lawsuits. This story by Sean Kernan serves as a great example: Pepsi made an exaggerated claim in their advertisement and found themselves stuck in a messy lawsuit.
A brand called Complan suffered a similar fate a few years back when a lawsuit was slapped on their face because they claimed to make kids “taller.” | https://medium.com/better-marketing/the-5-kinds-of-award-winning-commercials-8c2cd003b927 | ['Kiran Jain'] | 2020-06-22 17:37:25.038000+00:00 | ['Research', 'Marketing', 'Advertising', 'Business', 'Creativity'] | Title 5 Kinds AwardWinning CommercialsContent 2 LargerThanLife Results strategy involves showing experience user go start using company’s product successful ad tend exaggerate receive greater exaggeration comical unusual advertisement making indelible Author via YouTube comical advertisement Lynx start single woman running across island pursuit something scene change second see horde woman running one another climbing trying reach somewhere climax get know attracting woman advertisement express two benefit product 1 strong deodorant — could detected mile away 2 strongly attracts woman addition expression main character climax make even hilarious doubt lot deodorant brand followed suit Lynx effect gained popularity Though varying level success could also use called inverted consequence — version warn implication following ad’s recommendation example promoting brand vitamin one could show viewer missing life ingesting vitamin Revital Capsules Risks keep mind form advertising run risk getting trapped allegation lawsuit story Sean Kernan serf great example Pepsi made exaggerated claim advertisement found stuck messy lawsuit brand called Complan suffered similar fate year back lawsuit slapped face claimed make kid “taller”Tags Research Marketing Advertising Business Creativity |
3,580 | Introducing Figma’s Live Embed Kit | We’re excited to announce our public Live Embed Kit to keep teams in sync wherever they are. With this development, anyone can add Figma designs and prototypes that are always up to date to their website. It’s as simple as embedding an iframe. The Kit will also allow 3rd party developers to enable Figma Live Embeds in their own tools — just like Trello, JIRA and Dropbox Paper.
It’s as simple as embedding an iframe.
To insert a Figma design or prototype into any webpage, just click share in the top right corner of the file, select public embed and copy the iframe code. If you’re a developer and want to make it easier for your users to embed live Figma documents, follow the instructions here.
We’re really excited to see what the community builds with this. Here are a few examples of where live, synchronized designs could be helpful:
Do you run an internal wiki for your company? Add Figma and people can post live designs to articles about projects or features.
Are you building a messaging app for teams? Connect Figma and your users can share the latest versions of designs in group chats.
Do you want to blog about a project you’re working on? Embed a live Figma file and your readers will always see the most recent version of your design.
Live Embed in action with Trello:
Since Figma is the only design tool that runs on the web, these embedded designs will stay up to date and synchronized with the Figma original. Whenever a designer makes a tiny padding tweak or changes an icon in the Figma file itself, they can rest easy without having to re-export or re-upload the design.
We hope integrations like these will keep teams in sync and save them the hassle of hunting through the annals of their email history, mountain of Slack chats, or folders of their file sharing services.
This is only the beginning of our platform efforts. We’re excited to harness the power of our web-based technology to help teams communicate and build better products faster. Up next — stay tuned for a Figma API that will allow developers to pull other types of information from Figma files and incorporate them into new workflows. If you’re interested in partnering, shoot us a line at [email protected].
For more details on how to integrate Figma Live Embeds into your website or tool, go to https://www.figma.com/platform. | https://medium.com/figma-design/introducing-figmas-live-embed-kit-a04b9c7ad001 | ['Dylan Field'] | 2018-04-03 18:28:44.842000+00:00 | ['Programming', 'Design', 'Software Development', 'Engineering', 'Tech'] | Title Introducing Figma’s Live Embed KitContent We’re excited announce public Live Embed Kit keep team sync wherever development anyone add Figma design prototype always date website It’s simple embedding iframe Kit also allow 3rd party developer enable Figma Live Embeds tool — like Trello JIRA Dropbox Paper It’s simple embedding iframe insert Figma design prototype webpage click share top right corner file select public embed copy iframe code you’re developer want make easier user embed live Figma document follow instruction We’re really excited see community build Here’s example live synchronized design could helpful run internal wiki company Add Figma people post live design article project feature building messaging app team Connect Figma user share latest version design group chat want blog project you’re working Embed live Figma file reader always see recent version design Live Embed action Trello Since Figma design tool run web embedded design stay date synchronized Figma original Whenever designer make tiny padding tweak change icon Figma file rest easy without reexport reupload design hope integration like keep team sync save hassle hunting annals email history mountain Slack chat folder file sharing service beginning platform effort We’re excited harness power webbased technology help team communicate build better product faster next — stay tuned Figma API allow developer pull type information Figma file incorporate new workflow you’re interested partnering shoot u line partnersfigmacom detail integrate Figma Live Embeds website tool go httpswwwfigmacomplatformTags Programming Design Software Development Engineering Tech |
3,581 | The Future of Information Technology | According to Merriam-Webster’s dictionary, information technology is defined as “technology involving the development, maintenance, and use of computer systems, software, and networks for the processing and distribution of data”. In simpler terms, information technology is technology which processes and distributes data. This technology could be hardware (the personal computer), or it could be software (Netflix). As the definition suggests, the processing and distribution of data could also happen on networks such as the internet.
Information technology is important because as humans we are limited in our ability to process and distribute data. By increasing the amount of data we can process and distribute, we can solve a lot of problems and answer a lot of questions our limitations would never allow us to solve. Without information technology, we wouldn’t be able to predict the weather, use the internet to obtain new knowledge, utilize the GPS to take us from point A to point B, and do many more things that we take for granted thanks to information technology.
As this technology progresses, it’s going to take on characteristics and operate in certain dimensions that it’s never operated in before. Each time this technology operates in a new dimension, it’ll disrupt a different set of industries and provide a different set of benefits that wasn’t possible in previous iterations of the technology. Now before we dive in the future of information technology and what these added characteristics will be, we must first cover a little bit of the history of information technology. This must be done so that we can have an appreciation of where we are relative to what we will become.
I’d also like to add that I am not going to talk about the specifics behind how computers will be able to reach higher levels of performance. If you would like to read an article about this, check out my article titled, “Long Live Moore’s Law!”.
History of Information Technology Change
Internet — What started out as a top-secret government project in the late 1960s eventually took the world by storm when there was an explosion of commercialization and innovation centered around this technology in the 1990s. The internet was a huge step forward for information technology and was revolutionary since it was able to add the characteristic of centralization to information technologies. By centralizing information, not only did we democratize access to the world’s knowledge repository, but we also created a digital infrastructure that allowed for massive amounts of communication to take place which was previously unimaginable.
Mobile Electronics — Although mobile electronics (essentially portable computers) were starting to gain traction in the 1990s, it wasn’t until the 2000s that they began to take society by storm. This was initially due to iPods and cell phones, but later tablets, smartphones, smart watches, and even drones would join the ranks of this technology. This technology added the characteristic of physical mass distribution to information technology, which enabled the individual for the first time to take advantage of the information age while they were on the go. This was revolutionary because not only did it allow us to communicate and obtain information anywhere in the world, it also gave us new products and applications with capabilities which penetrated every field such as photography, music, media, and cinematography.
Big Data — For those of you who don’t know what big data is, big data is the analysis of data which is too complex for traditional data-processing software to process. Big data started to become huge in the 2010s thanks to the vast amount of information that mobile electronics were generating, improvements in the capabilities of our algorithms, and advances in machine learning techniques which made big data analysis easy.
Some of you might be saying to yourselves, “So what? Who cares if I can analyze larger data sets? Does that really matter?” The answer to this is an unequivocal yes! By performing a big data macro level analysis on a particular problem, a lot of the time you will end up discovering patterns in the data that you wouldn’t have been able to discover had you analyzed a smaller data set.
The ability to uncover these additional patterns is important since it allows us to solve new problems such as understanding the complexity of the human genome (which opens up the door to unlocking the powers of genetic therapy and genetic engineering). By allowing us to have an increased understanding of everything, big data has been (and will be) crucial in helping us create a better world and solve some of science’s biggest mysteries.
The Future of Information Technology
Now that we’ve covered some of the history of information technology, this should provide a good base for understanding where we are, and where we are headed. The story of the future can now begin! — Chapter I: Blockchain
Blockchain — For those of you who don’t know what blockchain is, blockchain is a specific type of database. Unlike typical databases, the data on a blockchain is stored in blocks which are then chained together in chronological order. If the blockchain is decentralized (meaning that no single person or group has control of the blockchain), then any data which is entered into the blockchain cannot be changed or deleted. Blockchain is a huge deal for two reasons. The first is that it allows us to have a perfect understanding of the history of whatever we choose to attach the blockchain to. This will have tremendous benefits in a large variety of areas such as figuring out where contaminated food is coming from in seconds as opposed to days, cutting down on the level of slave and child labor in supply chains, and protecting property rights in undeveloped nations where criminal organizations collude with officials to take advantage of the lack of transparency in the land ownership process.
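To make the chaining idea concrete, here is a toy sketch of my own (nothing like a production blockchain): each block records the hash of the block before it, so quietly editing old data would invalidate every block that follows.

using System;
using System.Security.Cryptography;
using System.Text;

public class Block
{
    public int Index { get; }
    public string Data { get; }
    public string PreviousHash { get; }
    public string Hash { get; }

    public Block(int index, string data, string previousHash)
    {
        Index = index;
        Data = data;
        PreviousHash = previousHash;
        // Any change to the data or to the link breaks the hash that the
        // next block has already recorded.
        using var sha = SHA256.Create();
        byte[] bytes = Encoding.UTF8.GetBytes($"{index}|{data}|{previousHash}");
        Hash = Convert.ToHexString(sha.ComputeHash(bytes));
    }
}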
The second reason why the blockchain is a big deal is because it allows for non-alterable data. This will make our data more secure and allow us to protect our finances, protect our intellectual property, and would allow for the consumption and analysis of highly sensitive information (such as health and financial records) without any fear of having this information potentially stolen. This is because the blockchain would have a complete record of everybody who accessed the data which would make figuring out who stole the data relatively easy. By adding the dimensions of security and historical clarity into our information technology, we will be able to enhance the human condition to a degree that past technologies such as the internet never could.
The Internet of Things — The internet of things consists of devices that have sensors, software, and other technologies embedded in them so that they can connect, share data, and collaborate with other devices and systems over the internet. This technology will add the characteristics of objectification and collaboration to information technology, and will take society by storm in the 2030s. While the internet of things market is growing rapidly, I firmly believe that the internet of things in the 2020s is going to be where the internet was in the 1980s. Following in the footsteps of blockchain, this technology is going to benefit society in multiple ways.
The first way the internet of things will benefit society is through how much data it will generate. Since everyday objects will be generating data in ways that they don’t currently, this will increase our level of data by many orders of magnitude. This will increase our ability to answer existing questions and open the door to solving entirely new types of questions. This will also make artificial intelligence much more powerful since it will have larger data sets to scale its abilities on. More data = better A.I.
The second way in which this technology will benefit us is through personalization. As the devices collect more data from your interactions with them, they’ll be able to share data with other devices and learn from one another so that your environment can become tailored to your individual needs and desires. This will allow for optimal sleep experiences on a nightly basis, enable the elderly to stay independent for a longer period of time, increase your success at obtaining health and wellness goals, and perhaps even lead to the automation of household chores.
The final way in which this technology will benefit us is through the usage of the internet of things by localities. This could provide data which helps detect and prevent viruses, reduce the destruction of fires by alerting the officials as to which areas are in need of new fire alarms, and provide more information on traffic flows which will enable policy makers to reduce the likelihood that traffic jams occur.
Nanotechnology — Nanotechnology is technology which operates on the atomic or molecular level in order to address issues which macro-sized technology is inadequate to solve. Nanotechnology will add the characteristic of microscopic proportions to information technology and will enable us to manipulate matter in ways that will make us seem like Gods. This technology has tremendous promise, which includes unlocking all of the secrets of the brain by tracking brain activity down to the individual neuron. This could potentially cure all brain-related illnesses such as dementia and Alzheimer’s. Nanotechnology will also lead to computers that will operate within our bodies and will assist the immune system in combating disease and aging. It is possible (although this is debated) that these atom-sized computers could conquer death itself by targeting the molecular mechanism that is responsible for the aging process. These computers also could potentially eliminate any disease and keep us in a perfect state of health and functionality.
Going beyond the health applications, nanotechnology also has the potential of creating new materials which are tailored for specific purposes. New materials could be made which allow for computers to operate on the surface of Venus without melting, or for satellites to plunge through the atmosphere of Jupiter without being torn apart. New materials could be made which allow for batteries to last for days or weeks and perhaps even months or years on a single charge. Finally, nanotechnology also has the potential of ending material scarcity since it will have the ability to manipulate atoms on a micro scale to produce materials such as copper or nickel on a macro scale. This will fundamentally change the economy and will make it considerably easier for the impoverished of society to live a good life. Nanotechnology is going to change the game.
Transhumanist Technology — Transhumanist technologies (the biogenesis of information technology) are technologies which seek to amplify the abilities of human beings through becoming one with them. Nanotechnology in this sense will start the process of transhumanism, since nanotechnology will be one with our bodies in the sense that it will become an active part of our immune systems. Humanity, however, due to its insatiable desire to become more, will seek to go beyond the benefits of operating at its biological limits. Perfect 15/20 vision, for example, will not be enough, as many among us will wish to integrate technology into their sight so that they can see in the dark, quite literally see emotions, and even perceive objects several hundred yards away in perfect clarity.
Having our intellect limited by life experience and the amount of time we dedicate toward learning new things will no longer be acceptable, as many will choose to voluntarily connect their brains to the internet and access all of the world’s information simply by thinking about it.
When it comes to our hearing, humans will want to customize the hearing experience to their individual needs and desires. If a sound goes above a certain level, our ears will filter out all the excess decibels so that the noise won’t cause an unpleasant sensation. Some of us will choose to adopt audio technology that’ll allow us to only receive the sounds generated by the object of our focus, in order to spy on our neighbors or eavesdrop on our friends.
As people continue to reap the benefits of each evolution in information technology, we must be extremely mindful of how this technology can be used malevolently so that we can have an action plan against its misuse. Blockchain technology makes it easier to track the start of an E. coli outbreak in the food supply, but it also makes it much more difficult for law enforcement to arrest criminals who use blockchain-based currencies. The internet of things might customize the environment to our individual needs and desires, but it could also be used as a system of mass surveillance by private and government entities that makes privacy a thing of the past. Nanotechnology can be used to end scarcity, but it could also be wielded to bring unprecedented destruction to the battlefield. Transhumanist technologies can amplify our abilities, but it isn’t clear that they’ll be immune from the negative intents of hackers who wish to make people do their bidding.
We have a choice to make. If we choose to be proactive when it comes to preventing egregious misuse of technology, the information age will usher in a renaissance where society will solve many of the most pressing issues which have plagued humanity since the beginning of time. If we chose not to be proactive however, then we will destroy ourselves and create a hellish nightmare where nobody wants to live. The choice is ours. | https://medium.com/predict/the-future-of-information-technology-ceeff8e61553 | ['Jack Raymond Borden'] | 2020-12-19 03:13:48.317000+00:00 | ['Future', 'Information Technology', 'AI', 'Blockchain', 'Internet of Things'] | Title Future Information TechnologyContent According MerriamWebsters dictionary information technology defined “technology involving development maintenance use computer system software network processing distribution data” simpler term information technology technology process distributes data technology could hardware personal computer could software Netflix definition suggests processing distribution data could also happen network internet Information technology important human limited ability process distribute data increasing amount data process distribute solve lot problem answer lot question limitation would never allow u solve Without information technology wouldn’t able predict weather use internet obtain new knowledge utilize GPS take u point point B many thing take granted thanks information technology technology progress it’s going take characteristic operate certain dimension it’s never operated time technology operates new dimension it’ll disrupt different set industry provide different set benefit wasn’t possible previous iteration technology dive future information technology added characteristic must first cover little bit history information technology must done appreciation relative become I’d also like add going talk specific behind computer able reach higher level performance would like read article check article titled “Long Live Moore’s Law” History Information Technology Change Internet — started topsecret government project late 1960’s eventually took world storm explosion commercialization innovation centered around technology 1990s internet huge step forward information technology revolutionary since able add characteristic centralization information technology centralizing information democratize access world knowledge repository also created digital infrastructure allowed massive amount communication take place previously unimaginable Mobile Electronics — although mobile electronics portable computer essentially starting gain traction 1990s wasn’t 2000s began take society storm initially due iPods cell phone later tablet smartphones smart watch even drone would join rank technology technology added characteristic physical mass distribution information technology enabled individual first time take advantage information age go revolutionary allow u communicate obtain information anywhere world also gave u new product application capability penetrated every field photography music medium cinematography Big Data — don’t know big data big data analysis data complex traditional dataprocessing software process Big data started become huge 2010s thanks vast amount information mobile electronics generating improvement capability algorithm advance machine learning technique made big data analysis easy might saying “So care analyze larger data set really matter” answer unequivocal yes performing big data macro level analysis 
particular problem lot time end discover pattern data wouldn’t able discover analyzed smaller data set ability uncover additional pattern important since allows u solve new problem understanding complexity human genome open door unlocking power genetic therapy genetic engineering allowing u increased understanding everything big data crucial helping u create better world solve science biggest mystery Future Information Technology we’ve covered history information technology provide good base understanding headed story future begin — Chapter Blockchain Blockchain — don’t know blockchain blockchain specific type database Unlike typical database data blockchain stored block chained together chronological order blockchain decentralized meaning single person group control blockchain data entered blockchain cannot changed deleted Blockchain huge deal two reason first allows u perfect understanding history whatever choose attach blockchain tremendous benefit large variety area figuring source contaminated food coming second opposed day cutting level slave child labor supply chain protect property right undeveloped nation criminal organization collude official take advantage lack transparency land ownership process second reason blockchain big deal allows nonalterable data make data secure allow u protect finance protect intellectual property would allow consumption analysis highly sensitive information health financial record without fear information potentially stolen blockchain would complete record everybody accessed data would make figuring stole data relatively easy adding dimension security historical clarity information technology able enhance human condition degree past technology internet never could Internet Things — internet thing device sensor software technology embedded connect share data collaborate device system internet technology add characteristic objectification collaboration information technology take society storm 2030s although internet thing market growing rapidly firmly believe internet thing 2020’s going internet 1980s Following foot step blockchain technology going benefit society multiple way first way internet thing benefit society much data generate Since everyday object generating data way don’t currently increase level data many order magnitude increase ability answer existing question open door towards different type question solved also make artificial intelligence much powerful since larger data set scale ability data better AI second way technology benefit u personalization device collect data interaction they’ll able share data device learn one another environment become tailored individual need desire allow optimal sleep experience nightly basis enable elderly stay independent longer period time increase success obtaining health wellness goal perhaps even lead automation household chore final way technology benefit u usage internet thing locality could provide data help detect prevent virus reduce destruction fire alerting official area need new fire alarm providing information traffic flow enable policy maker reduce likelihood traffic jam occur Nanotechnology — Nanotechnology technology operates atomic molecular level order solve address issue macro sized technology inadequate solve Nanotechnology add characteristic microscopic proportion information technology enable u manipulate matter way make u seem like Gods technology tremendous promise includes unlocking secret brain tracking brain activity individual neuron could potentially cure brain related 
illness dementia Alzheimer’s Nanotechnology also lead computer operate within body assist immune system combating disease aging possible although debated atom sized computer could conquer death targeting molecular mechanism responsible aging process computer also could potentially eliminate disease keep u perfect state health functionality Going beyond health application nanotechnology also potential creating new material tailored specific purpose New material could made allow computer operate surface Venus without melting satelites plunge atmosphere Jupiter without torn apart New material could made allow battery last day week perhaps even month year single charge Finally nanotechnology also potential ending material scarcity since ability manipulate atom micro scale produce material copper nickel macro scale fundamentally change economy make considerably easier impoverished society live good life Nanotechnology going change game Transhumanist Technology — Transhumanist technology biogenesis information technology technology seek amplify ability human being becoming one Nanotechnology sense start process transhumanism since nanotechnology one body sense become active part immune system Humanity due insatiable desire becoming however seek go beyond benefit operating biological limit Perfect 1520 vision example enough many among u wish integrate technology sight see dark quite literally see emotion even perceive object perfect clarity several hundred yard away intellect limited life experience amount time dedicated toward learning new thing longer acceptable many choose voluntarily connect brain internet access world information simply thinking come hearing human want customize hearing experience individual need desire sound go certain level human ear filter excess decibel noise won’t cause unpleasant sensation u choose adopt audio technology that’ll allow u receive sound thing generated object focus order spy neighbor ease drop friend people continue reap benefit evolution information technology must extremely mindful technology used malevolently action plan misuse Blockchain technology make easier track start ecoli outbreak food supply also make much difficult law enforcement arrest criminal use blockchainbased currency internet thing might customize environment individual need desire could also used system mass surveillance private government entity make privacy thing past Nanotechnology used end scarcity could also wielded bring unprecedent destruction battlefield Transhumanist technology amplify ability isn’t clear they’ll immune negative intent hacker wish make people abide bidding choice make choose proactive come preventing egregious misuse technology information age usher renaissance society solve many pressing issue plagued humanity since beginning time chose proactive however destroy create hellish nightmare nobody want live choice oursTags Future Information Technology AI Blockchain Internet Things |
3,582 | How I decide whether to wear a coat every morning | Good evening ladies and gents, it is going to be SOOO SUNNY tomorrow.
When you get to that tomorrow:
Guys, guys, guys, where the sun at…?
Problem
Weather news is based on probabilities, and those probabilities only seem to become accurate on the day itself.
But who has the time and energy to manually check the weather on the day itself, probably in the morning before leaving for work?
Well, certainly not me!
Long story short
tl;dr: with mighty programming skills, I made a bunch of lights automatically show me some weather forecast every day 💡
Let there be light
So… first step: I bought a LED strip, a band made of tiny lights that can change color.
I like the red a lot, it gives a nice cozy atmosphere.
Let there be WIFI
To talk to the LED strip through a computer program, I got myself a WIFI controller and connected it to the strip.
The WIFI controller gave my LED strip an IP address within my local network. | https://raphael-leger.medium.com/how-i-decide-whether-to-wear-a-coat-every-morning-3c081aa21fa8 | ['Raphaël Léger'] | 2020-01-26 10:28:38.120000+00:00 | ['Lambda', 'AWS', 'Serverless', 'Home Improvement', 'Weather'] | Title decide whether wear coat every morningContent Good evening lady gent going SOOO SUNNY tomorrow get tomorrow Guys guy guy sun at… Problem Weather news based probability probability seem get accurate day time energy manually check weather day probably morning leaving work Well certainly Long story short tldr mighty programming skill made bunch light automatically show weather forecast every day 💡 Let light So… first step bought LED strip band made tiny light change color like red lot give nice cozy atmosphere Let WIFI talk LED strip computer program got WIFI controller connected strip WIFI controller gave LED strip IP address within local networkTags Lambda AWS Serverless Home Improvement Weather |
3,583 | 7 Steps to Develop a Chatbot for Your Business | Chatbots have been around for a few years now, and they will not go away any time soon. Facebook popularised the chatbot with Facebook Messenger Bots, but the first chatbot was already developed in the 1960s. MIT professor Joseph Weizenbaum developed a chatbot called ELIZA. The chatbot was developed to demonstrate the superficiality of communication between humans and machines, and it used very simple natural language processing. Of course, since then we have progressed a lot and, nowadays, it is possible to have lengthy conversations with a chatbot. For an overview of the history of chatbots, you can read this article.
Chatbots are a very tangible example where humans and machines work together to achieve a goal. A chatbot is a communication interface that helps individuals and organisations have conversations, and many organisations have developed a chatbot. There are multiple reasons for organisations to develop a chatbot, including obtaining experience with AI, engaging with customers and improving marketing, reducing the number of employees required for customer support, disseminating information and content in a way that users are comfortable with and, of course, increasing sales.
Chatbots offer a lot of opportunities for organisations, and they can be fun to interact with if developed correctly. But how do you start with conversational AI and how do you build a good and engaging chatbot? To answer that, I researched 20 organisations from around the globe who developed a chatbot. As part of my PhD, I wanted to understand how organisations can get started with conversational AI and be successful.
Seven Steps to Develop a Chatbot
Starting with a chatbot is not easy, as there are many different variables to take into account.
1. Define the reason for a chatbot;
2. Create the conversation flow;
3. Determine whether to develop the chatbot in-house or to use third-party tools;
4. Integrate the chatbot and conversation flow into the front-end and back-end of your organisation for context awareness;
5. Test the chatbot and obtain approval from relevant stakeholders;
6. Analyse the conversations and the data derived from them;
7. Improve the chatbot based on the analytics received.
Let’s discuss each step briefly:
1. Define the problem
First of all, it is important to decide why you would need a chatbot. A chatbot should be a means to an end, but not an end itself. It should alleviate a pain point or increase your customer engagement, but it cannot, yet, replace your entire customer support department. Understanding the objective of your chatbot will help define the conversation flow as well as determine the type of chatbot you need. After all, there are different types of chatbots ranging from simple FAQ bots, so-called ‘on rails’ bots to chatbots that allow the input of free text. The more you allow the user to determine the direction of the conversation, the less the chatbot is in control.
2. Create the conversation flow
Designing the conversation within a chatbot is challenging. Not only should you develop a persona that matches your brand personality, but the conversational interface should also be clean, and the chatbot should aim for a positive experience. Therefore, the conversation should not be developed by the developer, but by a copywriter in collaboration with the marketing or communication department. It is important to create the right conversation flow for the right objective. For some conversations, people feel more comfortable with a chatbot than with a human. For example, an Australian financial services company noticed that customers feel more comfortable cancelling with a machine than they do with a human. Therefore, when developing a chatbot, you should pay attention to the conversational strategy and know that the platform itself is not standalone, but should be integrated with all the other elements of the business.
3. Selecting the chatbot platform
There are many different chatbot platforms, ranging from platforms that enable simple FAQ chatbots to more advanced chatbots that take the context into account. Such context-aware chatbots can offer a lot of added value because they can offer a positive experience to the end user.
Once you have decided what platform to use, it is important to decide whether to outsource or not. There are plenty of chatbot developers out there that can help you, but not every developer might offer the right solution. Therefore, it is important to investigate and ensure that you work with the right chatbot developer.
4. Integrating the chatbot
Building a chatbot is the easy part, among other reasons because of the many platforms and developers out there. Integrating the chatbot into your systems is a lot more difficult, but that's when the added value is achieved. If the chatbot is connected to your systems (your CRM or database), then when someone wants to change, for example, an address, the chatbot can say: 'Sure, give me your address, and I will update the system for you.' This is where you see operational efficiency, satisfaction and the NPS going up.
One American chatbot developer created a chatbot that is person-aware, meaning that the chatbot knows who the person in the chat is, as the chatbot is linked to internal systems. As a result, it is a lot smarter because it has a better understanding of the context and can service the customer faster and better.
5. Testing the chatbot
Developing is only one part; as with any software development project, testing is a crucial aspect of the project. Fortunately, most of the organisations I spoke with test the code of the chatbot. Especially the chatbot developers have rigorous testing practices in place. These processes include a testing environment, an acceptance environment and a live environment to ensure that everything can be properly tested. Not only should you test the code of your own projects, but you should also test the software that you use. Unfortunately, many organisations did not test the third-party tools they implemented and sort of trusted the third party that their tool and the code in that tool was correct and did not have any bugs. There is a strong reliance on and confidence in third-party tools. However, it is important to have proper controls in place. One American chatbot developer went as far as never to allow third-party developers to access the code. And another option is to spend time on reverse engineering what you have built to ensure that the code is indeed correct.
6. Analysing the conversations
A conversation is by its nature data-driven and leads to more data. This data can be analysed, and the insights from the analytics can be used to improve the conversational flow of the chatbot. However, to ensure that any output text can be used to train the chatbot, thorough testing processes have to be in place. For example, any text should be written by copywriters, not developers, and large organisations especially require some sort of governance structure around the content that is said by the chatbot.
Since all conversations are data, it is possible to extract valuable information from the conversations, both actively and passively to capture and feed that data into the overall reporting mechanisms. So that’s on a micro level of an individual conversation for an individual user, and it is at a macro level for the questions that are being asked and answered. This is called conversational analytics: what was said, how was it said, what was the intent, what is the sentiment, did we accomplish the goal, what was the goal. Where does it fit in the larger context? Without conversational analytics, it is impossible to develop an engaging and successful chatbot.
It is also possible to add the capability to jump in and intervene in any conversation, but that sort of defeats the purpose. However, it can be useful because machines often still don't understand the full context, and then human intervention is required.
7. Improve the chatbot
Of course, all those analytics can offer valuable insights to improve the chatbot. Reviewing the transcripts and looking at places where the chatbot did not understand what people were asking helps to build up datasets to retrain the chatbot, or to find places where the chatbot thought it got it right but actually got it wrong, so that the information can be rectified and the conversations improved. Such supervised learning helps improve the chatbot, while it prevents problems such as Microsoft's Tay, which learned unsupervised. The objective should be to continuously improve the chatbot, making it increasingly context-aware and better at understanding the intent of the conversation.
Conclusion
Chatbots offer a great way for organisations to improve their business, make it more efficient and increase the customer experience. However, it is important that the chatbot learns in a supervised way and is bound by certain rules that drive your conversation if you wish to prevent examples such as Microsoft’s Twitter bot Tay. Natural language processing is getting better, and in due time, it will become possible to have engaging conversations with a machine. | https://markvanrijmenam.medium.com/how-to-develop-conversational-ai-for-your-business-3ab025a65a52 | ['Dr Mark Van Rijmenam'] | 2020-01-10 18:42:19.664000+00:00 | ['AI', 'Bots', 'Conversational Ai', 'Artificial Intelligence', 'Chatbots'] | Title 7 Steps Develop Chatbot BusinessContent Since year chatbots go away time soon Facebook popularised chatbot Facebook Messenger Bots first chatbot already developed 1960s MIT professor Joseph Weizenbaum developed chatbot called ELIZA chatbot developed demonstrate superficiality communication human machine used simple natural language processing course since progressed lot nowadays possible lengthy conversation chatbot overview history chatbots read article Chatbots tangible example human machine work together achieve goal chatbot communication interface help individual organisation conversation many organisation developed chatbot multiple reason organisation develop chatbot including obtaining experience AI engaging customer improving marketing reducing number employee required customer support disseminating information content way user comfortable course increasing sale Chatbots offer lot opportunity organisation fun interact developed correctly start conversational AI build good engaging chatbot answer researched 20 organisation around globe developed chatbot part PhD wanted understand organisation get started conversational AI successful Top Popular Bot Design Articles Seven Steps Develop Chatbot Starting chatbot easy many different variable take account Define reason chatbot Create conversation flow Determine develop chatbot inhouse using thirdparty tool Integrate chatbot conversation flow frontend backend organisation context awareness Test chatbot obtaining approval relevant stakeholder Analyse conversation data derived Improve chatbot based analytics received Let’s discus step briefly 1 Define problem First important decide would need chatbot chatbot mean end end alleviate pain point increase customer engagement cannot yet replace entire customer support department Understanding objective chatbot help define conversation flow well determine type chatbot need different type chatbots ranging simple FAQ bot socalled ‘on rails’ bot chatbots allow input free text allow user determine direction conversation le chatbot control 2 Create conversation flow Designing conversation within chatbot challenging develop persona match brand personality conversational interface also clean chatbot aim positive experience Therefore conversation developed developer copywriter collaboration marketing communication department important create right conversation flow right objective conversation people feel comfortable chatbot human example Australian financial service company noticed customer feel comfortable cancelling machine human Therefore developing chatbot pay attention conversational strategy know platform standalone integrated element business 3 Selecting chatbot platform many different chatbot platform ranging platform enable simple FAQ chatbots advanced chatbots take context contextaware chatbots 
offer lot added value offer positive experience end user decided platform use important decide whether outsource plenty chatbot developer help every developer might offer right solution Therefore important investigate ensure work right chatbot developer 4 Integrating chatbot Building chatbot easy part among others many platform developer Integrating chatbot system lot difficult that’s added value achieved chatbot connected system CRM database someone want change example address chatbot say ‘sure give address update system you’ see operational efficiency satisfaction NPS going One American chatbot developer created chatbot personaware meaning chatbot knew person chat chatbot linked internal system result lot smarter better understanding context service customer faster better 5 Testing chatbot Developing one part software development project testing crucial aspect project Fortunately organisation spoke test code chatbot Especially chatbot developer rigorous testing practice place process include testing environment acceptance environment live environment ensure everything properly tested test code project also test software use Unfortunately many organisation test thirdparty tool implemented sort trusted third party tool code tool correct bug strong reliance confidence thirdparty tool However important proper control place One American chatbot developer went far never allow thirdparty developer access code another option spend time reverse engineering built ensure code indeed correct 6 Analysing conversation conversation nature datadriven lead data data analysed insight analytics used improve conversational flow chatbot However enable output text used train chatbot thorough testing process place text written copywriter developer especially large organisation require sort governance structure place around content said chatbot Since conversation data possible extract valuable information conversation actively passively capture feed data overall reporting mechanism that’s micro level individual conversation individual user macro level question asked answered called conversational analytics said said intent sentiment accomplish goal goal fit larger context Without conversational analytics impossible develop engaging successful chatbot also possible add capability jump intervene conversation sort deceit purpose However useful often machine still don’t understand full context human intervention required 7 Improve chatbot course analytics offer valuable insight improve chatbot Reviewing transcript looking place chatbot understand people asking help build datasets retrain chatbot look place chatbot think got right actually got wrong information rectified conversation improved supervised learning help improve chatbot prevents problem Microsoft’s Tay learned unsupervised objective continuously improve chatbot make increasingly contextaware better understanding intent conversation Conclusion Chatbots offer great way organisation improve business make efficient increase customer experience However important chatbot learns supervised way bound certain rule drive conversation wish prevent example Microsoft’s Twitter bot Tay Natural language processing getting better due time become possible engaging conversation machineTags AI Bots Conversational Ai Artificial Intelligence Chatbots |
3,584 | Vaccine Opposition, an Origin Story | Science saves, plain and simple. Science saves time, science saves energy, and above all, science saves lives. Scientific advancements have led to a vast array of efficiencies, improvements in energy production and usage, travel, as well as countless medical breakthroughs.
One reason for this has to do with clinical trials. Typically, a clinical trial begins with animals. Provided the study shows efficacy and the animals do well, only then will researchers move on to people. Beginning with small groups, researchers increase trial participants until a well representative portion of the population is studied.
Science also requires peer review. In this process, researchers and scientists evaluate each others work, ensuring studies are proper, ethical, and can benefit humanity at large. This dance is typically done in and through a variety of scientific journals and publications, with researchers submitting trials and findings for the broader community to evaluate, study, and in some cases attempt to reproduce or refute.
As far as medical publications go, The Lancet is among the oldest, most reputable in the world. Founded in 1823 by English Surgeon Thomas Wakley, The Lancet was named for both the surgical instrument, more commonly known today as a scalpel, and for the architectural term lancet window indicating light of wisdom. Initially distributed biweekly, it has become a weekly publication with a mission to make science widely available so that medicine can serve, transform society, and positively impact the lives of people.
In 1998, The Lancet published what would become among the most controversial studies in the field of vaccination. Led by British Physician Andrew Wakefield, The Wakefield study, as it would become commonly known, hypothesized the MMR (Measles, Mumps, Rubella) vaccine was responsible for a series of events; including intestinal inflammation, harmful proteins crossing the blood-brain barrier, and subsequently causing Autism in the inoculated children. A follow up study by Dr. Wakefield in 2002 would essentially double down on this link between measles and autism, creating an uproar amongst parents and various groups already opposed to vaccines.
Wakefield became the scientific backbone of what has come to be known as the anti-vaxxer movement, and the MMR vaccine its primary villain.
To spread a message though, you need messengers. Among the most vocal vaccine opposition came from a growing list of celebrities, talk show and radio hosts, and alternative media personalities. Jenny McCarthy became one of many; a notable television personality, fixture on talk and game shows, as well as writer of several books. McCarthy’s son was diagnosed with autism when he was two and a half years old, she attributed the diagnosis to the MMR vaccine specifically. Although McCarthy has since claimed she is not anti-vaccine, only pro safe vaccine schedule, many have since echoed her theory and position on MMR specifically and vaccines in general. | https://medium.com/illumination-curated/vaccine-opposition-an-origin-story-fca89be0b1ab | ['Bashar Salame'] | 2020-12-23 11:01:31.476000+00:00 | ['Society', 'Politics', 'Health', 'Covid-19', 'Vaccines'] | Title Vaccine Opposition Origin StoryContent Science save plain simple Science save time science save energy science save life Scientific advancement led vast array efficiency improvement energy production usage travel well countless medical breakthrough One reason clinical trial Typically clinical trial begin animal Provided study show efficacy animal well researcher move people Beginning small group researcher increase trial participant well representative portion population studied Science also requires peer review process researcher scientist evaluate others work ensuring study proper ethical benefit humanity large dance typically done variety scientific journal publication researcher submitting trial finding broader community evaluate study case attempt reproduce refute far medical publication go Lancet among oldest reputable world Founded 1823 English Surgeon Thomas Wakley Lancet named surgical instrument commonly known today scalpel architectural term lancet window indicating light wisdom Initially distributed biweekly become weekly publication mission make science widely available medicine serve transform society positively impact life people 1998 Lancet published would become among controversial study field vaccination Led British Physician Andrew Wakefield Wakefield study would become commonly known hypothesized MMR Measles Mumps Rubella vaccine responsible series event including intestinal inflammation harmful protein crossing bloodbrain barrier subsequently causing Autism inoculated child follow study Dr Wakefield 2002 would essentially double link measles autism creating uproar amongst parent various group already opposed vaccine Wakefield became scientific backbone come known antivaxxer movement MMR vaccine primary villain spread message though need messenger Among vocal vaccine opposition came growing list celebrity talk show radio host alternative medium personality Jenny McCarthy became one many notable television personality fixture talk game show well writer several book McCarthy’s son diagnosed autism two half year old attributed diagnosis MMR vaccine specifically Although McCarthy since claimed antivaccine pro safe vaccine schedule many since echoed theory position MMR specifically vaccine generalTags Society Politics Health Covid19 Vaccines |
3,585 | How to Write TypeScript Ambients Types Definition for a JavaScript library | Tips & tricks
How to Write TypeScript Ambients Types Definition for a JavaScript library
Types definitions made easy for any JavaScript library. Create, extend, and contribute to any repository where types are missing.
The early days of TypeScript were a complicated time. It was originally designed to work with namespaces and some kind of "custom modules", but nowadays the best approach is to play with ES6 modules.
How modules and code isolation are managed will influence how we have to write our ambient type definitions for our libraries.
Let’s dig into it!
A quick refresher on the basics of a compiler
Before diving into the question of TypeScript typings, I would like to clarify a few things, as they might not be obvious to every developer.
Maybe, just like me, you were doing something else in the classroom when the teacher was trying to teach you the C language or some other compiled language.
We need to understand compilation because, while declaring our type definitions, we will need a good understanding of the scope we are playing with.
By understanding I mean: what exactly is TypeScript doing when it "compiles" your code? Or should I say: what is any compiler really doing?
Most of the time by default and without scoping mechanics, all the code you write is globally scoped, even if defined in multiple files: sharing variables and functions.
That's why, in JavaScript's early days, we used to write something called « The Module Pattern » within the browser page's global context, as people tended to use multiple scripts from various locations, which resulted in code collisions.
This was generally implemented in such a way that you use some kind of "Immediately Invoked Function Expression" (IIFE) to populate a global object with a scoped API. This mimics the behavior of scope.
window.app_modules = window.app_modules || {};
app_modules.MyModule = (function() {
  var privateVariable = 1;
  var publicFunction = function() {
    return privateVariable;
  };
  return {
    publicFunction: publicFunction
  };
})();
console.log(app_modules.MyModule.publicFunction());
Well, something like this emulates private properties, playing with JavaScript scope to prevent variables from leaking globally between modules.
One job of a compiler is to merge multiple files into one file and ensure correctness between declarations.
So a compiler takes a workspace of files, merges all of them into one, and once this has been done, it might also convert the code to some highly optimized byte code (not our case with TypeScript).
What does that mean?
It means that your JavaScript files, if not properly scoped, share the same variables between files. That's not a good thing because it can trigger unexpected effects. But with ES6 modules, which are a feature TypeScript supports, there is a particularity.
As long as you import or export something in a file, your code is scoped to that file. Variable declarations, functions, anything. Otherwise, just try to create two files without exports, one creating a variable toto and another that displays this variable, everything without export in the same namespace; compile it with TypeScript and you get the value of toto colliding in both files.
To solve these issues we can play with namespace, which creates some kind of boundaries for scoping things (it works well in many programming languages), use the module pattern we saw above, or just export/import something thanks to EcmaScript modules.
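A minimal sketch of that idea (the file names are made up): without an import or export the two files share one global scope and the declarations collide, while any export turns a file into its own module.
// fileA.ts — no import/export, so this file is a global script
const toto = "defined in fileA";

// fileB.ts — also a global script: TypeScript complains that toto is redeclared,
// because both files live in the same global scope
// const toto = "defined in fileB";

// fileB.ts, fixed — an empty export is enough to make the file a module
export {};
const totoB = "defined in fileB"; // scoped to fileB, no collision with fileA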
WELL! Let's dive into our real need: creating a type definition
Fetching the right declaration for a TypeScript library
Alright, now that we've got the basics, we can play with typings that respect our scope. We just downloaded our latest library, but there is an issue… The library creator made all of this without TypeScript! (YEAH, it still exists in 2020).
For example, with a library named somelibrary you end up with the classic:
Could not find a declaration file for module 'somelibrary'. '/Users/screamz/workspace/myapp/node_modules/somelibrary/index.js' implicitly has an 'any' type.
Try `npm install @types/somelibrary` if it exists or add a new declaration (.d.ts) file containing `declare module 'somelibrary';`ts(7016) | https://medium.com/javascript-in-plain-english/how-to-write-typescript-types-for-a-javascript-library-e598b9eb8be7 | ['Andréas Hanss'] | 2020-12-28 10:19:07.303000+00:00 | ['Software Engineering', 'Software Development', 'Web Development', 'JavaScript', 'Typescript'] | Title Write TypeScript Ambients Types Definition JavaScript libraryContent Tips trick Write TypeScript Ambients Types Definition JavaScript library Types definition made easy JavaScript library Create extend contribute repository type missing complicated time early day TypeScript Originally designed work namespace kind “custom modules” nowadays best approach play ES6 module module code isolation managed influence write ambient type definition library Let’s dig quick remember basic compiler diving question TypeScript typing would like clarify thing might seem odd every developer Maybe like something else classroom teacher trying learn C language kind compiled language need understand compilation declaring type definition need good understanding scope playing understanding mean TypeScript exactly “compile” code say compiler really time default without scoping mechanic code write globally scoped even defined multiple file sharing variable function That’s JavaScript early day used make something called « module Pattern » within browser page global context people tended use multiple script various location resulted code collision generally implemented way using kind “Immediately Invocated Function Expression” populate global object scoped API mimic behavior scope windowappmodules windowappmodules appmodulesMyModule function var privateVariable 1 var publicFunction function return privateVariable return publicFunction publicFunction consolelogappmodulesMyModulepublicFunction Well something like emulate private property Allowing play JavaScript scope prevent variable play globally module One job compiler take multiple file one file ensure correctness declaration compiler take workspace file merge file one done might also convert code byte code highly optimized case TypeScript mean mean JavaScript file properly scoped sharing variable file It’s good thing trigger kind nonexpected effect ES6 module compatible feature TypeScript particularity long import export something file code scoped file Variable declaration function anything Otherwise try create two file without export one creating variable toto another display variable everything without export namespace compile TypeScript get value toto colliding file solve issue play namespace creates kind boundary scoping thing work well many programming language use module pattern saw exportimport something thanks EcmaScript module WELL Let’s deep real need Creating type definition Fetching good declaration TypeScript library Alright got basic play typing use scope downloaded latest library issue… library creator made without TypeScript YEAH still exists 2020 example library named somelibrary end classic Could find declaration file module somelibrary Usersscreamzworkspacemyappnodemodulessomelibraryindexjs implicitly type Try npm install typessomelibrary exists add new declaration dts file containing declare module somelibraryts7016Tags Software Engineering Software Development Web Development JavaScript Typescript |
3,586 | Simple Deno API with Oak, deno_mongo and djwt | Simple RestAPI
If you come from the NodeJS world, the first thing you usually do when creating a new app is to run an ‘npm init’ command and follow the process of creating a package.json file.
But with Deno, things are different. There is no package.json, no npm, and no node modules. Just open your favorite IDE (I am using VS Code), navigate to your project folder and create an app.ts file (it will be our project’s main file).
The first thing we want to set up in the app.ts is our http server to be able to handle http requests that we plan to develop later on. For this purpose, we will use Oak as our middleware framework for the http server. Probably you never heard of Oak but don’t worry, it is the same as Koa.js (popular web app framework for NodeJS).
To be more clear, Oak is just a version of Koa.js customized for Deno, so most of the Koa.js documentation also applies to Oak. For module import, Deno uses official ECMAScript module standards, and modules are referenced using a URL or a file path and includes a mandatory file extension.
Our app.ts should look like this:
app.ts (initial)
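A minimal sketch of that file, assuming an unpinned Oak module URL:
// app.ts — initial version: an Oak application answering every request with "Hello World"
import { Application } from "https://deno.land/x/oak/mod.ts";

const HOST = "127.0.0.1";
const PORT = 4000;

const app = new Application();

// Temporary middleware so we can see something in the browser
app.use((ctx) => {
  ctx.response.body = "Hello World";
});

console.log(`Listening on port: ${PORT}`);
await app.listen(`${HOST}:${PORT}`);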
You will probably get an error that TypeScript does not allow extensions in the module path. To resolve this issue, download and enable Deno's VS Code extension (enable it only for Deno projects).
In our app.ts we imported Oak's Application object that contains an array of middleware functions. We will use it to set up the http server, and to later bind a router, global error handler, and auth middleware.
In NodeJS, this Application represents the app we import from the express npm package. The Oak module has been imported from the Deno’s Third-Party Modules. Beside that, we defined an Application instance, host, port, and we set our server to listen to ‘localhost:4000’ address.
As you can notice, Deno allows usage of top-level await, meaning that we do not need to wrap await call inside of the async function on the module level. One big plus for Deno! If you run the app.ts file now, using the command ‘deno run app.ts’ you will get an error:
PermissionDenied error
As mentioned before, Deno does not have permission to access the network, so we need to grant it explicitly: 'deno run --allow-net app.ts'. Now, the app will successfully compile and run, and we have our server up and running.
In your browser, navigate to localhost:4000 and you should get “Hello World” displayed.
After the initial server setup, we can move on to creating and configuring a MongoDB instance for our service. Before you proceed with the db configuration, please make sure you have MongoDB installed and the local service running.
Deno provides an (unstable for now) module for MongoDB, and we will get it from the Third Party Modules. Since the module is in continuous development, the module versions will be changing fast.
Create a new config folder and add two new files in it: db.ts and config.ts that will be used for the project’s config variables. The db.ts content should now be as follows (code will be explained afterward):
db.ts
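A rough sketch of what db.ts can look like; the deno_mongo version pin and the exact connection API (init, connectWithUri) are assumptions, since they changed quickly between releases:
// config/db.ts
import { init, MongoClient } from "https://deno.land/x/mongo@v0.8.0/mod.ts";

// Initialize the native plugin used by deno_mongo (exact signature depends on the version)
await init();

class DB {
  public client: MongoClient;

  constructor(public dbName: string, public url: string) {
    this.client = new MongoClient();
  }

  connect() {
    this.client.connectWithUri(this.url);
  }

  // Getter used later by the controllers to obtain the database handle
  get getDatabase() {
    return this.client.database(this.dbName);
  }
}

const db = new DB("denoApi", "mongodb://localhost:27017"); // names are assumptions
db.connect();

export default db;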
We imported the init method and MongoClient from the deno mongo module. The init function is used to initialize the plugin. The other part with the code is pretty straight forward; we have defined DB class with two methods, one for connecting to local db service, other as a getter to return database name. At the end, we created a new instance of our database and export it for usage in other project’s modules.
For now, we will not run our new code, so the deno_mongo module is currently unavailable in our app. With the next execution of ‘deno run’ command, all modules will be downloaded and cached locally so any further code execution will not require any downloads, and the app will compile much faster. This is how Deno handles modules.
Since we finished the database setup, we can move on with the controller methods. We will make this very simple by implementing CRUD operations for the 'Employee' model. At the end, we will create an auth middleware to protect create, update, and delete requests. For the project's simplicity, we will skip the part of creating 'signup/signin' methods but instead make a simple login that will accept any username and password to generate a JWT (json web token). So, let's go.
Create a new employeeCtrl.ts file. We will first implement a getEmployees method that will return an array of employees with all corresponding properties: id, name, age, and salary. Our controller should look like this:
employeeCtrl.ts (getEmployees method)
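A sketch of the first version of that controller; the interface and response shape follow the description below, while the exact deno_mongo query API is an assumption:
// employeeCtrl.ts
import { Context } from "https://deno.land/x/oak/mod.ts";
import db from "./config/db.ts";

interface Employee {
  name: string;
  age: number;
  salary: number;
}

const database = db.getDatabase;
const employees = database.collection("employees");

export const getEmployees = async ({ response }: Context) => {
  // Older deno_mongo versions return a plain array from find()
  const data = await employees.find({});

  if (data && data.length > 0) {
    response.status = 200;
    response.body = { employees: data };
  } else {
    response.status = 200;
    response.body = { message: "Employee collection is empty" };
  }
};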
Since we are using TypeScript, we imported Context (from the Oak module) to define the type of our response object and later on our request object. We also took our db name and defined ‘employees’ collection. After the interface signature, the get method is implemented. It will fetch all data from the ‘employees’ collection and return data together with the response status. If the collection is empty, a message will be set as a part of the response body. To test our method, we need to define a route.
Create new file called router.ts:
router.ts
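A minimal sketch of the router at this stage:
// router.ts
import { Router } from "https://deno.land/x/oak/mod.ts";
import { getEmployees } from "./employeeCtrl.ts";

const router = new Router();

router.get("/employees", getEmployees);

export default router;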
Router implementation is the same as the one using express in NodeJS: Import Router from the corresponding (in our case Oak) module and bind the controller method to the defined route. The last thing to do before testing our route and method is to set our app to use the router. Navigate to app.ts file and update its content as follows:
app.ts (router and config.ts added)
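A sketch of the updated app.ts, with the router bound to the application and the host/port moved into the config.ts file introduced just below:
// app.ts — router and config added
import { Application } from "https://deno.land/x/oak/mod.ts";
import { APP_HOST, APP_PORT } from "./config/config.ts";
import router from "./router.ts";

const app = new Application();

app.use(router.routes());
app.use(router.allowedMethods());

console.log(`Listening on port: ${APP_PORT}`);
await app.listen(`${APP_HOST}:${APP_PORT}`);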
To make the code more readable, we will introduce config.ts file from this point to store all configuration/env variables. It should be saved in the config folder:
config.ts
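A sketch of config.ts; the variable names and defaults are assumptions:
// config/config.ts
export const APP_HOST = Deno.env.get("APP_HOST") || "127.0.0.1";
export const APP_PORT = Number(Deno.env.get("APP_PORT")) || 4000;
export const DB_NAME = Deno.env.get("DB_NAME") || "denoApi";
export const MONGO_URL = Deno.env.get("MONGO_URL") || "mongodb://localhost:27017";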
The power of TypeScript and Deno allows us to have access to all types in runtime, and to write strongly typed code.
To run the app execute the following command:
deno run --allow-net --allow-write --allow-read --allow-plugin --allow-env --unstable app.ts
Regarding the permissions we need to run our app, we have set a few of them:
allow-write and allow-read: allow Deno to have access to the disk, in our case to the local database service
allow-plugin: allow Deno to load plugin
allow-env: allow access to the env property
unstable: required since the deno_mongo plugin is in unstable mode currently
If your code compiled successfully and all required modules downloaded, the terminal will show the ‘Listening on port: 4000’ message. If you face any compile errors, do not worry, they are nicely explained and you should be able to resolve the issues easily, especially in this simple scenario.
To test the route and the controller method I will use Postman and the result should be:
Postman (GET /employees)
We can now move on to creating additional controller methods. Navigate back to the employeeCtrl.ts and add POST(addEmployee) and PUT(updateEmployee) actions:
employeeCtrl.ts (addEmployee and updateEmployee added)
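A sketch of the two new handlers; Oak's request.body() API differs between versions, and the ObjectId filter syntax of deno_mongo is an assumption:
// employeeCtrl.ts — continued
export const addEmployee = async ({ request, response }: Context) => {
  const body = await request.body();
  const employee = body.value;

  // Basic validation, since Deno does not yet provide a validation library
  if (!employee || !employee.name || !employee.age || !employee.salary) {
    response.status = 400;
    response.body = { message: "Invalid employee data" };
    return;
  }

  const insertedId = await employees.insertOne(employee);
  response.status = 201;
  response.body = { id: insertedId };
};

export const updateEmployee = async ({ params, request, response }: any) => {
  const body = await request.body();

  await employees.updateOne(
    { _id: { $oid: params.id } },
    { $set: body.value },
  );

  response.status = 200;
  response.body = { message: "Employee updated" };
};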
The implementation of these two methods should be easy to follow.
To add a new employee, we need to read the values from the request’s body. Check if it contains all necessary properties (usually done by implementing a validation library but Deno currently does not provide any), and save the record in an appropriate collection.
To update an existing employee, read the employee’s id provided through the params, search its properties, and make the updates based on the request’s body.
It is important to mention that we defined our updateEmployee’s input parameters as ‘any’ type. Currently, the Context's request object does not recognize 'params' as its property and the code will not be able to compile. This issue is related to the current version of the Oak module but the documentation states that params, request, and response, are all derived from the Context object so we are good to proceed.
Hopefully, this will be resolved in the near future. To make these actions alive, we need to define their routes. Jump to the router.ts and add these two lines of code:
router.ts (routes for addEmployee and updateEmployee added)
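Something like the following two routes, plus the matching imports from employeeCtrl.ts:
// router.ts — new routes
router.post("/employees", addEmployee);
router.put("/employees/:id", updateEmployee);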
Now, if you rerun the app, you can test the new methods implementation:
Postman (POST /employees)
Postman (PUT /employees/:id)
To finish our CRUD, we need to add two more methods: getEmployeeById and deleteEmployee:
employeeCtrl.ts (getEmployeeById and deleteEmployee added)
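A sketch of the last two handlers, with the same assumptions as above:
// employeeCtrl.ts — continued
export const getEmployeeById = async ({ params, response }: any) => {
  const employee = await employees.findOne({ _id: { $oid: params.id } });

  if (employee) {
    response.status = 200;
    response.body = { employee };
  } else {
    response.status = 404;
    response.body = { message: "Employee not found" };
  }
};

export const deleteEmployee = async ({ params, response }: any) => {
  await employees.deleteOne({ _id: { $oid: params.id } });
  response.status = 200;
  response.body = { message: "Employee deleted" };
};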
This method implementation is very similar to the previous one, except it contains different database actions for fetching and deleting records. To wrap up our API, let's update our router and give the new routes a test.
router.ts (routes for getEmployeeById and deleteEmployee added)
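And the matching routes:
// router.ts — final routes
router.get("/employees/:id", getEmployeeById);
router.delete("/employees/:id", deleteEmployee);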
Postman (GET /employees/:id)
Postman (DELETE /employees/:id)
Now we have a simple API up and running. Comparing it with NodeJS, Deno’s implementation seems very similar. It should only take some time to get familiar with different module imports, TypeScript, and permissions handling, which can be a real bottleneck in the beginning. Also, we need to take into consideration that there will be changes until Deno finds a stable setup. | https://medium.com/maestral-solutions/simple-deno-api-with-oak-deno-mongo-and-djwt-2916844f0ef3 | ['Haris Brackovic'] | 2020-06-05 12:07:36.692000+00:00 | ['Nodejs', 'Deno', 'JavaScript', 'Typescript'] | Title Simple Deno API Oak denomongo djwtContent Simple RestAPI come NodeJS world first thing usually creating new app run ‘npm init’ command follow process creating packagejson file Deno thing different packagejson npm node module open favorite IDE using VS Code navigate project folder create appts file project’s main file first thing want set appts http server able handle http request plan develop later purpose use Oak middleware framework http server Probably never heard Oak don’t worry Koajs popular web app framework NodeJS clear Oak version Koajs customized Deno Koajs documentation also applies Oak module import Deno us official ECMAScript module standard module referenced using URL file path includes mandatory file extension appts like appts initial probably get error TypeScrip allow extension module path resolve issue download enable Deno’s VS Code extension enable Deno project appts imported Oak’s Application object contains array middleware function use setup http server later bind router global error handler auth middleware NodeJS Application represents app import express npm package Oak module imported Deno’s ThirdParty Modules Beside defined Application instance host port set server listen ‘localhost4000’ address notice Deno allows usage toplevel await meaning need wrap await call inside async function module level One big plus Deno run appts file using command ‘deno run appts’ get error PermissionDenied error mentioned Deno permission access network need explicitly emphasize ‘deno run — allownet appts’ app successfully compile run server running browser navigate localhost4000 get “Hello World” displayed initial server setup move creating configuring MongoDB instance service proceed db configuration please make sure MongoDB installed local service running Deno provides unstable module MongoDB get Third Party Module Since module continuous development module version changing fast Create new config folder add two new file dbts configts used project’s config variable dbts content follows code explained afterward dbts imported init method MongoClient deno mongo module init function used initialize plugin part code pretty straight forward defined DB class two method one connecting local db service getter return database name end created new instance database export usage project’s module run new code denomongo module currently unavailable app next execution ‘deno run’ command module downloaded cached locally code execution require downloads app compile much faster Deno handle module Since finished database setup move controller method make simple implementing CRUD operation ‘Employee’ model end create auth middleware protect create update delete request project simplicity skip part creating ‘singupsignin’ method instead make simple login accept username password generate JWT json web token let’s go Create new employeeCtrlts file first implement getEmployees method return array employee corresponding property id 
name age salary controller look like employeeCtrlts getEmployees method Since using TypeScript imported Context Oak module define type response object later request object also took db name defined ‘employees’ collection interface signature get method implemented fetch data ‘employees’ collection return data together response status collection empty message set part response body test method need define route Create new file called routerts routerts Router implementation one using express NodeJS Import Router corresponding case Oak module bind controller method defined route last thing testing route method set app use router Navigate appts file update content follows appts router configts added make code readable introduce configts file point store configurationenv variable saved config folder configts power TypeScript Deno allows u access type runtime write strongly typed code run app execute following command deno run allownet allowwrite allowread allowplugin allowenv unstable appts Regarding permission need run app set allowwrite allowread allow Deno access disk case local database service allowplugin allow Deno load plugin allowenv allow access env property unstable required since denomongo plugin unstable mode currently code compiled successfully required module downloaded terminal show ‘Listening port 4000’ message face compile error worry nicely explained able resolve issue easily especially simple scenario test route controller method use Postman result Postman GET employee move creating additional controller method Navigate back employeeCtrlts add POSTaddEmployee PUTupdateEmployee action employeeCtrlts addEmployee updateEmployee added two method implementation understandable add new employee need read value request’s body Check contains necessary property usually done implementing validation library Deno currently provide save record appropriate collection update existing employee read employee’s id provided params search property make update based request’s body important mention defined updateEmployee’s input parameter ‘any’ type Currently Contexts request object recognize params property code able compile issue related current version Oak module documentation state params request response derived Context object good proceed Hopefully resolved near future make action alive need define route Jump routerts add two line code routerts route addEmployee updateEmployee added rerun app test new method implementation Postman POST employee Postman PUT employeesid finish CRUD need add two method getEmployeeById deleteEmployee employeeCtrlts getEmployeeById deleteEmployee added method implementation similar previous one except contains different database action deleting updating record wrap API let’s update router give test new route routerts route getEmployeeById deleteEmployee added Postman GET employeesid Postman DELETE employeesid simple API running Comparing NodeJS Deno’s implementation seems similar take time get familiar different module import TypeScript permission handling real bottleneck beginning Also need take consideration change Deno find stable setupTags Nodejs Deno JavaScript Typescript |
3,587 | 10 Insider VS Code Extensions for Web Developers in 2020 | 10 Insider VS Code Extensions for Web Developers in 2020
Git Graph, Auto Close Tag, Peacock, and more
Visual Studio Code (VS Code) from Microsoft will continue to be one of the best code editors/IDEs in 2020. Its great marketplace offers awesome extensions made by the community, helping web developers to become more productive.
However, most articles about VS Code extensions only recommend the same 10-15 extensions. They are great, no doubt. But there is more to explore and I will show you 10 outstanding extensions that are less-known but really helpful. | https://medium.com/better-programming/10-insider-vs-code-extensions-for-web-developers-in-2020-91bdef1658c6 | ['Simon Holdorf'] | 2020-01-28 00:48:53.490000+00:00 | ['Technology', 'Programming', 'Productivity', 'Creativity', 'JavaScript'] | Title 10 Insider VS Code Extensions Web Developers 2020Content 10 Insider VS Code Extensions Web Developers 2020 Git Graph Auto Close Tag Peacock Visual Studio Code VS Code Microsoft continue one best code editorsIDEs 2020 great marketplace offer awesome extension made community helping web developer become productive However article VS Code extension recommend 1015 extension great doubt explore show 10 outstanding extension lessknown really helpfulTags Technology Programming Productivity Creativity JavaScript |
3,588 | Reinforcement Learning and the Rise of Educational AI | Ever since the astounding triumph of Alphago over Lee Sedol at the Go contest in 2016, the world’s attention has been drawn to artificial intelligence and reinforcement learning. This victory signalled that machine learning was no longer simply about big data classification, but was making progress in the realm of true intelligence.
Reinforcement learning (RL) introduces the concept of an agent, and addresses the problem of a subjective entity making the most-rewarded decisions in a known or unknown environment. It could be seen as a learning approach sitting in between supervised and unsupervised learning, since it involves labelling inputs, only that the label is sparse and time-delayed.
In life we aren’t given labels of every possible behaviour in the world, but we learn lessons by exploring strategies on our own — hence RL provides the closest problem setting to the learning process of a human brain. This accounts for the excitement that the progress elicits from machine learning scholars.
So far there have been two approaches to a reinforcement learning problem.
1. Markov Decision Processes
MDPs are a mathematical framework which model the world as a set of consecutive states with values, and inside this world there is a rational agent that makes decisions by weighing rewards caused by different actions.
If the state values are unknown, the agent may begin by interacting with the world first, observing the consequences — and with enough experience, it may exploit the knowledge and make optimal decisions. It builds on top of the rules drawn from observing human intelligence in behavioural psychology experiments, rather than modelling on a grander scale the rules that cultivate human intelligence, which is where evolutionary computation comes from.
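As a toy illustration of that weighing of rewards (all numbers here are invented), value iteration repeatedly backs up the expected reward of each action until the value of every state settles:
// Toy value iteration for a 3-state, 2-action MDP; rewards and transitions are made up
const reward = [
  [0, 1],
  [0, 2],
  [5, 0],
];
const nextState = [
  [1, 2],
  [2, 0],
  [0, 1],
];
const gamma = 0.9; // discount factor
const value = [0, 0, 0];

for (let iteration = 0; iteration < 100; iteration++) {
  for (let s = 0; s < value.length; s++) {
    // The agent weighs each action by its immediate reward plus the discounted value of where it lands
    value[s] = Math.max(
      reward[s][0] + gamma * value[nextState[s][0]],
      reward[s][1] + gamma * value[nextState[s][1]],
    );
  }
}

console.log(value); // converged state values; the best action in each state achieves this maximum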
2. Evolutionary Computation
Evolutionary computation is a family of algorithms which apply the concept of evolution to the computation area as a searching technique to find the fittest solution. More specifically, it utilises concepts from Darwin’s theory of evolution — such as mutation, crossover, and fitness — and models the computer to perform natural selection in search of an optimal solution. In other words, it does not attempt to build intelligence from scratch, as long as it renders quasi-intelligent results.
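A toy sketch of those ingredients, evolving bit strings whose fitness is simply the number of ones (all parameters are made up):
// Toy genetic algorithm: evolve 8-bit chromosomes toward all ones
const LENGTH = 8;
const fitness = (c: number[]) => c.reduce((sum, bit) => sum + bit, 0);
const randomChromosome = () => Array.from({ length: LENGTH }, () => Math.round(Math.random()));

let population = Array.from({ length: 20 }, randomChromosome);

for (let generation = 0; generation < 50; generation++) {
  // Selection: keep the fittest half as parents
  population.sort((a, b) => fitness(b) - fitness(a));
  const parents = population.slice(0, 10);
  // Crossover and mutation produce the next generation
  const children = parents.map((parent, i) => {
    const mate = parents[(i + 1) % parents.length];
    const child = parent.slice(0, LENGTH / 2).concat(mate.slice(LENGTH / 2));
    if (Math.random() < 0.1) {
      child[Math.floor(Math.random() * LENGTH)] ^= 1; // flip one bit
    }
    return child;
  });
  population = parents.concat(children);
}

console.log(fitness(population[0])); // approaches 8: the fittest solution found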
RL algorithms can be really powerful tools in a range of tasks and may potentially contribute to the realisation of general AI, which in return may greatly improve the education industry in terms of adaptive learning experience, student path prediction, and unbiased grading systems. | https://medium.com/vetexpress/reinforcement-learning-and-the-rise-of-educational-ai-99a68a687a55 | [] | 2018-01-15 22:46:21.610000+00:00 | ['Algorithms', 'Edtech', 'AI', 'Artificial Intelligence', 'Reinforcement Learning'] | Title Reinforcement Learning Rise Educational AIContent Ever since astounding triumph Alphago Lee Sedol Go contest 2016 world’s attention drawn artificial intelligence reinforcement learning victory signalled machine learning longer simply big data classification making progress realm true intelligence Reinforcement learning RL introduces concept agent address problem making mostrewarded decision subjective entity known unknown environment could seen learning approach sitting supervised unsupervised learning since involves labelling input label sparse timedelaying life aren’t given label every possible behaviour world learn lesson exploring strategy — hence RL provides closest problem setting learning process human brain account excitement progress elicits machine learning scholar far two approach reinforcement learning problem 1 Markov Decision Processes MDPs mathematical framework model world set consecutive state value inside world rational agent make decision weighing reward caused different action state value unknown agent may begin interacting world first observing consequence — enough experience may exploit knowledge make optimal decision build top rule drawn observing human intelligence behavioural psychology experiment rather modelling grander scale rule cultivate human intelligence evolutionary computation come 2 Evolutionary Computation Evolutionary computation family algorithm apply concept evolution computation area searching technique find fittest solution specifically utilises concept Darwin’s theory evolution — mutation crossover fitness — model computer perform natural selection search optimal solution word attempt build intelligence scratch long render quasiintelligent result RL algorithm really powerful tool range task may potentially contribute realisation general AI return may greatly improve education industry term adaptive learning experience student path prediction unbiased grading systemsTags Algorithms Edtech AI Artificial Intelligence Reinforcement Learning |
3,589 | Change Your Life by Changing Your Posture | Change Your Life by Changing Your Posture
Boost confidence and performance with this simple activity
Photo by Michał Parzuchowski on Unsplash
If you’re reading this, you’re probably sitting down. And your shoulders are probably hunched. And your chin is probably tilted down toward your phone screen or your laptop.
How are you feeling?
Powerful, confident, and strong? Or . . . something else?
Try something with me.
Stand up, if you can (if not, play along from your seat)
Place your feet just wider than shoulder-width apart
Roll your shoulders back three or four times and then let them rest (most of us let our shoulders roll forward on a regular basis, so your shoulder blades will probably feel softly engaged in this position, as though they’re gently reaching for each other low across your back)
Lift your chin
Focus your eyes gently upwards
Put your hands on your hips and take a big, long breath
Pretend you’re Peter Pan.
Feel better?
Research indicates that our posture has a major impact on how we feel about ourselves. A few years ago, the idea of “power poses” went viral when Dr. Amy Cuddy’s Ted Talk became the second-most-popular in history. She claimed, based on a small-study sample, that holding our bodies in specific poses could not only affect our self-confidence, but also change our hormonal make-up.
In the years that followed Dr. Cuddy’s TED Talk, subsequent larger studies found that the effect of “power posing” on a person’s hormones is questionable at best. Even Dr. Cuddy says she is “agnostic” about the hormonal effects she originally promoted.
Hormonal changes aside, though, it’s still true that altering our physical posture can affect how we feel.
How we feel affects our performance: the better we feel, the better we perform.
And the impact our posture has on our performance actually goes even further than self-confidence. It’s not only our confidence that gets boosted when we stop slouching, stand up, and roll our shoulders back.
Because our hunched shoulders and rounded backs literally make us smaller, slouching means that there’s physically less space for our organs, like our lungs and heart, to function. In fact, according to one 2015 study, slouching can decrease our lung capacity by up to 30%. That means that when we slouch, our brains receive one-third less oxygen than they do at optimal capacity — and that affects performance.
So if you’re feeling sluggish, dull, or just-not-up-to-snuff, stand up and pretend you’re Peter Pan. Take ten slow, deep breaths, in and out. Add a smile for an extra boost.
And carry on with confidence, calm, and a fully-charged brain. | https://medium.com/afwp/change-your-life-by-changing-your-posture-551f3d37d13e | ['Cathlyn Melvin'] | 2020-06-15 03:32:42.234000+00:00 | ['Posture', 'Mental Health', 'Health', 'Life Lessons', 'Self Improvement'] | Title Change Life Changing PostureContent Change Life Changing Posture Boost confidence performance simple activity Photo Michał Parzuchowski Unsplash you’re reading you’re probably sitting shoulder probably hunched chin probably tilted toward phone screen laptop feeling Powerful confident strong something else Try something Stand play along seat Place foot wider shoulderwidth apart Roll shoulder back three four time let rest u let shoulder roll forward regular basis shoulder blade probably feel softly engaged position though they’re gently reaching low across back Lift chin Focus eye gently upwards Put hand hip take big long breath Pretend you’re Peter Pan Feel better Research indicates posture major impact feel year ago idea “power poses” went viral Dr Amy Cuddy’s Ted Talk became secondmostpopular history claimed based smallstudy sample holding body specific pose could affect selfconfidence also change hormonal makeup year followed Dr Cuddy’s TED Talk subsequent larger study found effect “power posing” person’s hormone questionable best Even Dr Cuddy say “agnostic” hormonal effect originally promoted Hormonal change aside though it’s still true altering physical posture affect feel feel affect performance better feel better perform impact posture performance actually go even selfconfidence It’s confidence get boosted stop slouching stand roll shoulder back hunched shoulder rounded back literally make u smaller slouching mean there’s physically le space organ like lung heart function fact according one 2015 study slouching decrease lung capacity 30 mean slouch brain receive onethird le oxygen optimal capacity — affect performance you’re feeling sluggish dull justnotuptosnuff stand pretend you’re Peter Pan Take ten slow deep breath Add smile extra boost carry confidence calm fullycharged brainTags Posture Mental Health Health Life Lessons Self Improvement |
3,590 | Data-Driven Design is Killing Our Instincts | What is data-driven design?
Simply put, data-driven design means making design decisions based on data you collect about how users interact with your product. According to InVision:
Data-driven design is about using information gleaned from both quantitative and qualitative sources to inform how you make decisions for a set of users. Some common tools used to collect data include user surveys, A/B testing, site usage and analytics, consumer research, support logs, and discovery calls. By crafting your products in a way that cater to your users’ goals, preferences, and behaviors, it makes your products far more engaging — and successful.
While most data is quantitative and very objective, you can also collect qualitative data about your users’ behavior, feelings, and personal impressions.
A designer’s instinct
Back in the days of Mad Men, a designer’s gut instinct was glorified because it was difficult to measure the success of a design in progress. You often had to wait until it shipped to know if your idea was any good. Designers justified their value through their innate talent for creative ideas and artistic execution. Those whose instincts reliably produced success became rock stars.
In today’s data-driven world, that instinct is less necessary and holds less power. But make no mistake, there’s still a place for it.
Design instinct is a lot more than innate creative ability and cultural guesswork. It’s your wealth of experience. It’s familiarity with industry standards and best practices. You develop that instinct from trial and error — learning from mistakes.
Instinct is recognizing pitfalls before they manifest into problems, recognizing winning solutions without having to explore and test endless options. It’s seeing balance, observing inconsistencies, and honing your design eye. It’s having good aesthetic taste, but knowing how to adapt your style on a whim.
Design instinct is the sum of all the tools you need to make great design decisions in the absence of meaningful data.
Clicks and conversions aren’t your only goals
Not everything that can be counted counts. Not everything that counts can be counted.
Data is good at measuring things that are easy to measure. Some goals are less tangible, but that doesn’t make them less important.
While you’re chasing a 2% increase in conversion rate you may be suffering a 10% decrease in brand trustworthiness. You’ve optimized for something that’s objectively measured, at the cost of goals that aren’t so easily codified.
This point is perfectly illustrated by a story by Braden Kowitz, a design partner at Google Ventures (via Wired):
One of my first projects at Google was to design the “Google Checkout” button. With each wave of design feedback I was asked to make the button bolder, larger, more eye catching, and even “clicky” (whatever that means). The proposed design slowly became more garish and eventually, downright ugly. To make a point, a colleague of mine stepped in with an unexpected move: He designed the most attention-grabbing button he could possibly muster: flames shooting out the side, a massive chiseled 3-D bevel, an all-caps label (“FREE iPOD”) with a minuscule “Checkout for a chance to win”. This move reset the entire conversation. It became clear to the team in that moment that we cared about more than just clicks. We had other goals for this design: It needed to set expectations about what happens next, it needed to communicate quality, and we wanted it to build familiarity and trust in our brand. We could have easily measured how many customers clicked one button versus another, and used that data to pick an optimal button. But that approach would have ignored the big picture and other important goals.
It’s easy to make data-driven design decisions, but relying on data alone ignores that some goals are difficult to measure. Data is very useful for incremental, tactical changes, but only if it’s checked and balanced by our instincts and common sense.
When data-driven design gets fugly
Ever used Booking.com?
Search for a hotel and you’ll see every listing plastered with a handful of conversion triggers and manufactured urgency/scarcity indicators. Amongst all that crap, it’s difficult to find the real info you’re looking for. It’s a terrible user experience for me. I’m sure many others feel the same.
But they must have reliable data that says it works. Conversion rates must go up with each chintzy trigger they cram in. Data says: add more urgency messages, add more upsells, more, more, more. User experience says: less, less, less, just show me what I’m looking for.
What’s going on here?
Data has become an authoritarian who has fired the other advisors who may have tempered his ill will. A designer’s instinct would ask, “Do people actually enjoy using this?” or “How do these tactics reflect on our reputation and brand?”
Booking.com’s brand is cheap deals, so they’re not worried about cheap tactics. If those tacky labels stoke enough FOMO to get a few more bookings, they’ve won. It doesn’t critically damage other goals if they’re perceived as a little gaudy in the process.
But not every business has the luxury of caring only about clicks and conversions. You may need to convey quality and trust. Or exclusivity and class. Does cramming in data-driven conversion triggers serve those goals too? Or would building a more focused and delightful user experience better speak to your user’s needs?
Data-driven sameness
Digital interface design is going through a bland period of sameness. I see it in my own work, and I worry it’s becoming hard to escape from.
You could blame Apple and Google for publishing good design systems, and then everyone else trying to look the same. You could blame WordPress for the proliferation of content-agnostic templates — pulling apart the age-old marriage of content and designer. Or you could blame platforms like Dribbble that amplify trends and superficial eye-candy.
I’d argue that data-driven design also plays a role in why all websites look the same.
We’re all scared to experiment and reinvent the wheel, because data’s already proven that the wheels we’ve got work well enough. When our Agile processes are geared toward efficiency, it’s too costly to prototype and test innovative solutions. So we blindly churn out the same tried and true solutions over and over again.
Design “process” has replaced instinct as the new skill to fetishize. Some say that everyone is a designer if they can only follow the same processes we do. While that’s not true, it still leads to design decisions being made without the temperance of a professional designer’s instincts and experience. It creates more generic-looking interfaces that may perform well in numbers but fall short of appealing to our senses.
We’re all scared to experiment and reinvent the wheel, because data’s already proven that the wheels we’ve got work well enough.
Data is only as good as the questions you ask
What makes data so dangerous is that your input grossly colors your output. If you ask the wrong questions at the wrong time, or to the wrong people, you draw bad conclusions.
Early adopters and eager user-testers don’t necessarily behave the same way as your average user, so even when you are asking the right questions you can get tainted data.
The most empathetic designers — who are convinced they see the product the same way as their users — don’t behave the same. They know the product too intimately. They can’t see it objectively anymore. They can’t become naive.
Beware of misleading data. It’s only one source of info, and it’s only ever as good as your collection methods. Rather than blindly following the conclusions of big data, back them up with other sources (or at least common sense) before charging ahead with your shiny validation in hand. | https://modus.medium.com/data-driven-design-is-killing-our-instincts-d448d141653d | ['Benek Lisefski'] | 2020-02-11 00:49:11.592000+00:00 | ['Craft', 'UX', 'Creativity', 'Data', 'Design'] |
3,591 | Multitasking Is Not My Forte, So How Can I Blame Python? | I consider myself to be an efficient person. I always try to utilize my time wisely. When I have some time to kill, like waiting in line to run some errands, I always bring my laptop and get some work done. I exercise whilst listening to podcasts or audiobooks and other nerdy stuff. This week I just started training in a new gym where you can solve sudokus whilst using the treadmill, and I figured out that I can’t do both at the same time! It was then that I realized that Python and I have a huge thing in common: we are both very bad at multitasking.
To understand why multitasking in Python works differently than in other languages, we will have to understand the difference between sequential execution, parallelism, and concurrency.
Sequential means running one task at a time. Let’s say I invited a few friends over for dinner, and I want to bake some cakes for them. I get the recipes for my 3 favorite cakes: chocolate, cheese, and caramel. Since my baking skills are not at their best, I can only make one cake at a time. So first I make the chocolate cake, when it finishes baking I start making the cheesecake and the same with the caramel cake. Let’s say every cake takes about 10 minutes to mix the ingredients and 50 minutes to bake, in total I spent 3 hours making those cakes, which is a lot of time.
Concurrency means making progress in multiple tasks at the same time but not necessarily simultaneously. So let’s say my baking skills are kind of better, now I can start mixing the chocolate cake, and when I put the cake in the oven, I can start mixing the cheesecake and so on and so forth. So I am using the baking time which is idle for me, to make progress with the other cakes, but I don’t do it simultaneously. This will take about 1 hour and 20 mins for the 3 cakes to be ready, not too bad.
Parallelism means running multiple operations at the same time, or as we call it in our day to day life — multitasking. So Let’s say I am calling 2 of my friends to help me with the baking (if they want to eat that, they should help too!). So now we can simultaneously mix the 3 cakes and bake them all at the same time, which will take only 1 hour.
So how are python threads related to the bakery I just started in my kitchen?
Well, in most programming languages we run threads in parallel. In Python, we have something called the GIL, which stands for Global Interpreter Lock: a lock that allows only one thread to hold control of the Python interpreter. This means that only one thread can be in a state of execution at any point in time. Python wasn’t designed with multi-core personal computers in mind, which means it wasn’t designed to take advantage of multiple processes or multiple threads. The GIL therefore enforces a lock when accessing a Python object, in order to be on the safe side. It is definitely not the best solution, but it’s a pretty effective mechanism for memory management.
So does that mean threads in Python are useless?
Absolutely not! Even though we can not execute threads in parallel, we can still run them concurrently. This will be good for tasks, like in the baking example, that have some waiting time. I/O bound problems cause the program to slow down because it frequently must wait for input/output from some external resource. They arise frequently when the program is working with things that are much slower than the CPU, like a DB or a network.
Let’s look at the following code for baking some cakes
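The snippet embedded in the original post is not preserved in this dump, so here is a minimal sketch of what it plausibly looked like. The function names and sleep durations are illustrative assumptions: mixing is active work, baking is idle waiting.

```python
import time

def mix_ingredients(cake_name):
    time.sleep(1)  # mixing keeps the baker busy (active work, simulated with a short sleep)
    print(f"{cake_name}: ingredients mixed")

def bake_a_cake(cake_name):
    time.sleep(5)  # baking is idle waiting (I/O-like) - the baker could be doing something else
    print(f"{cake_name}: baked")

def make_a_cake(cake_name):
    mix_ingredients(cake_name)
    bake_a_cake(cake_name)
```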
Now let’s add the ability to run those in a sequence and using multithreading and add a decorator to measure the time of each run.
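Again, the original gist is missing; a hedged reconstruction of a timing decorator plus sequential and multithreaded runners, building on the hypothetical make_a_cake above, might look like this:

```python
import threading
import time
from functools import wraps

def timeit(func):
    # Hypothetical decorator: prints how long each run takes.
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.time() - start:.2f} seconds")
        return result
    return wrapper

@timeit
def run_sequentially(cakes):
    # One cake at a time: mixing and baking of the next cake wait for the previous one.
    for cake in cakes:
        make_a_cake(cake)

@timeit
def run_multithreaded(cakes):
    # One thread per cake: the idle baking time of one cake overlaps with the others.
    threads = [threading.Thread(target=make_a_cake, args=(cake,)) for cake in cakes]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()
```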
Now we can start baking
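A possible invocation, with the three cakes from the story:

```python
cakes = ["chocolate", "cheese", "caramel"]
run_sequentially(cakes)
run_multithreaded(cakes)
```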
Running this code will yield the following results:
In this example, we can see that it’s much faster to bake the cakes using the multithreading approach because it maximizes the use of resources. It interrupts running one operation while continuing working on others. This fact does improve the performance of the program.
Despite the efficiency we have seen in this example, using multithreading has some downsides. The operating system actually knows about each thread and can interrupt it at any given time to start running a different thread. This can cause race conditions, which is something we have to keep in mind while using this approach. The other thing is that our number of threads is limited by the operating system. In this example, we have only one task, but in real-life examples, we can have a lot of them. So with this technique, the performance is capped by the number of threads available on our cores.
So how can we do it better?
In Python 3.4 we were introduced to a package called Asyncio. In fact, Asyncio is a single-threaded, single-process design. Asyncio gives a feeling of concurrency despite using a single thread in a single process. To do so it uses coroutines (small code snippets) that can be scheduled concurrently, switching between them. Asyncio uses generators and coroutines to pause and resume tasks. Let’s now add the ability to run this using the Asyncio library:
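The original snippet is not preserved; an asyncio version of the same toy task could look roughly like this (the names are again assumptions, reusing mix_ingredients from the earlier sketch):

```python
import asyncio

async def bake_a_cake_async(cake_name):
    await asyncio.sleep(5)  # hands control back to the event loop while the cake "bakes"
    print(f"{cake_name}: baked")

async def make_a_cake_async(cake_name):
    mix_ingredients(cake_name)          # still synchronous, active work
    await bake_a_cake_async(cake_name)  # idle waiting, the coroutine is suspended here

async def run_asyncio(cakes):
    # Schedule all cakes concurrently on a single thread.
    await asyncio.gather(*(make_a_cake_async(cake) for cake in cakes))
```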
The keyword await passes function control back to the main call, It basically suspends the execution of the surrounding coroutine. If Python encounters an await bake_a_cake expression in the scope of make_a_cake, this is how await tells the main call, “Suspend execution of make_a_cake until the result of bake_a_cake returns. In the meantime, go do something else.” We can run this in the following manner:
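One way to run it (asyncio.run is available from Python 3.7 onwards; on 3.4-3.6 you would drive the event loop manually):

```python
import time

start = time.time()
asyncio.run(run_asyncio(["chocolate", "cheese", "caramel"]))
print(f"asyncio run took {time.time() - start:.2f} seconds")
```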
We get the following results
We can see that the performances are about the same as multithreading, around 10 seconds. Using Asyncio, however, will yield better results when running a lot of tasks, since the multithreading mechanism is limited by the operating system while Asyncio can be interrupted as many times as needed.
The other thing is that using ‘await’ makes it visible where the scheduling points are. This is a major advantage over threading: it makes it easier to reason about race conditions, which are less frequent than with threading since we are using a single thread. It’s important to understand that concurrency problems are not gone completely; we cannot simply ignore other concurrent tasks. With the Asyncio approach, needing to use locks is a much less common situation. But we should understand that every await call breaks the critical section.
So does that mean that using asyncio is always better?
No! Both threading and Asyncio are better for IO-bound tasks, but that’s not the case for CPU-bound tasks. A CPU-bound task is, for example, a task that performs an arithmetic calculation. It is CPU-bound because the rate at which the process progresses is limited by the speed of the CPU. Trying to use multiple threads won’t speed up the execution. On the contrary, it might degrade overall performance. But we can try to use processes for that: since every process can run on a different CPU core, we are basically adding more computing power to our calculation. Each Python process gets its own Python interpreter and memory space, so the GIL won’t be a problem. Let’s change our baking method to do some calculations:
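A hedged sketch of a CPU-bound stand-in for the baking step (the loop size is arbitrary, chosen only so the work takes a noticeable amount of time):

```python
def make_a_cake_cpu(cake_name):
    total = 0
    for i in range(10_000_000):  # pure arithmetic: no waiting, just CPU work
        total += i * i
    print(f"{cake_name}: finished calculating")
```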
And let’s add the ability to run this using multiple processes:
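A multiprocessing runner in the same spirit, reusing the hypothetical timeit decorator from earlier:

```python
import multiprocessing

@timeit
def run_multiprocessed(cakes):
    # One OS process per cake: each gets its own interpreter, so the GIL is not shared.
    processes = [multiprocessing.Process(target=make_a_cake_cpu, args=(cake,)) for cake in cakes]
    for process in processes:
        process.start()
    for process in processes:
        process.join()
```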
Now we can run this and compare it with a sequential run and with multithreading:
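A possible comparison harness, continuing the same sketch; the __main__ guard matters because multiprocessing re-imports the module on platforms that spawn new processes:

```python
@timeit
def run_sequentially_cpu(cakes):
    for cake in cakes:
        make_a_cake_cpu(cake)

@timeit
def run_multithreaded_cpu(cakes):
    threads = [threading.Thread(target=make_a_cake_cpu, args=(cake,)) for cake in cakes]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()

if __name__ == "__main__":
    cakes = ["chocolate", "cheese", "caramel"]
    run_sequentially_cpu(cakes)
    run_multithreaded_cpu(cakes)
    run_multiprocessed(cakes)
```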
In this example, we can see that we get about the same results when baking the cakes in a sequence and using multithreading. So even though it is using a concurrency mechanism, we don’t have any waiting time which the process can optimize. It can sometimes be even slower than running it in sequence because of context switching.
Multiprocessing does make the performance better, but using multiple processes is heavier than using multiple threads, so we should keep in mind that this could become a scaling bottleneck. Processes also do not share memory because they run on different CPUs.
So ok now with all these options, when should we use each one?
From a performance perspective:
For CPU bound tasks — processes
For IO-bound tasks:
For a few tasks — threads
For a lot of tasks — asyncio
If we are looking at other aspects too, like code readability and quality, I would always use Asyncio over threads, because using await makes the code much clearer and leaves less room for concurrency errors, even if running multiple threads can give slightly better performance results. | https://medium.com/swlh/how-has-python-helped-me-bake-cakes-more-efficiently-b870a1f111ac | ['Danielle Shaul'] | 2020-11-03 09:00:02.521000+00:00 | ['Python', 'Concurrency', 'Engineering', 'Backend Development'] |
3,592 | Deciding between Row- and Columnar-Stores | Why We Chose Both | Row oriented databases
Row-stores are considered “traditional” because they have been around longer than columnar-stores. Most row oriented databases are commonly known for OLTP (online transactional processing). This means that row-stores are most commonly known to perform well for a single transaction like inserting, updating, or deleting relatively small amounts of data. (3)
Writing one row at a time is easy for a row-store because it appends the whole row to a chunk of space in storage. In other words, the row oriented database is partitioned horizontally. Since each row occupies at least one chunk (a row can take up more than one chunk if it runs out of space), and a whole chunk of storage is read at a time, this makes it perfect for OLTP applications where a small number of records are queried at a time. (4)
row-stores save to storage in chunks
Using a row-store
Row-stores (ex: Postgres, MySQL) are beneficial when most/all of the values in the record (row) need to be accessed. Row oriented databases are also good for point lookups and index range scans. Indexing (creating a key from columns) based on your access patterns can optimize queries and prevent full table scans in a large row-store. If the value needed is in the index, you can pull it straight from there. Indexing is an important component of row-stores because while columnar-stores also have some indexing mechanisms to optimize full table scans, it is not as efficient for reducing seek time for individual record retrieval than an index on the appropriate columns. Note that creating many indices will create many copies of data, and a columnar-store is a better alternative (see When to Enable Indexing?). (5,6)
row-store concept overview
If only one field of the record is desired, then using a row-store becomes expensive since all the fields in each record will be read. Even data that isn’t needed for the query response will be read, assuming it isn’t indexed properly. Consequently, many seek operations are required to complete the query. For this reason, a columnar-store is favored when you have unpredictable access patterns, whereas known access patterns are well accommodated by a row-store. (7)
Column oriented databases
As more records in a database are accessed, the time to transfer data from disk to memory starts to outweigh the time it takes to seek the data. For this reason, columnar-stores are typically better for OLAP (online analytical processing) applications. Analytical applications often need aggregate data, where only a subset of a table’s attributes are needed. (8)
Column oriented databases are partitioned vertically — instead of storing the full row, the individual values are stored contiguously in storage by column. The advantage of a columnar-store is that partial reads are much more efficient because a lower volume of data is loaded due to reading only the relevant data instead of the whole record.
For example, if a chunk of storage can hold five values and the database has five columns (ex: one row has five values), one row will take up one chunk and be read together. If only one column value is needed for the query response, a columnar-store can read 5x as fast because you will read five column values in one chunk as opposed to one column value in the chunk containing the row. You also avoid reading the other column values that are irrelevant to the query response.
columnar-stores save columns to storage in chunks
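To make the five-values example concrete, here is a toy Python sketch (not tied to any particular database engine) of the same table laid out both ways; it also previews the "sum of the amount column" example later in this piece. The field names and values are illustrative assumptions.

```python
# One table, two layouts.
row_store = [  # row oriented: each record is stored (and read) as a whole
    {"email": "a@example.com", "type": "online", "amount": 100, "country": "US", "date": "2020-01-01"},
    {"email": "b@example.com", "type": "store",  "amount": 250, "country": "CA", "date": "2020-01-02"},
    {"email": "c@example.com", "type": "online", "amount": 75,  "country": "US", "date": "2020-01-03"},
]

column_store = {  # column oriented: each column's values are stored contiguously
    "email":   ["a@example.com", "b@example.com", "c@example.com"],
    "type":    ["online", "store", "online"],
    "amount":  [100, 250, 75],
    "country": ["US", "CA", "US"],
    "date":    ["2020-01-01", "2020-01-02", "2020-01-03"],
}

# Answering "what is the total amount?" touches every field of every record in the row layout...
total_from_rows = sum(record["amount"] for record in row_store)

# ...but only the "amount" column in the columnar layout; the other columns are never read.
total_from_columns = sum(column_store["amount"])
```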
Additionally, in column-stores compression is achieved more efficiently than in row-stores because columns have uniform types (ex: all strings or integers). These performance benefits apply to arbitrary access patterns, making them a good choice in the face of unpredictable queries. (8)
Using a columnar-store
example database
Columnar-stores (examples: RedShift, BigQuery) are good for computing trends and averages for trillions of rows and petabytes for data. (8)
Assuming this table continues for millions of rows, what if we wanted to know the sum of the amount spent on online purchases for company A? Well, for company A’s online purchases table, we would need to sum all of the online purchase values. Instead of going through each row and reading the email, type of purchase, and any other columns this table could have, we just need to access all of the values in the “amount” column. | https://medium.com/bluecore-engineering/deciding-between-row-and-columnar-stores-why-we-chose-both-3a675dab4087 | ['Alexa Griffith'] | 2020-08-10 20:25:03.312000+00:00 | ['Bluecore', 'Data Engineering', 'Programming', 'Data', 'Software Engineering'] |
3,593 | Is your current BI tool holding you (and your data) back? | If you’re currently managing and analyzing your data in a BI tool, it’s time to ask yourself: how happy are you? It’s okay, you can be honest. We promise we won’t tell. Truthfully, most people utilize tools like Excel because it’s comfortable. They know how to use it, how to navigate it, and feel confident with how it works. But, it may be time to break out of your comfort zone by switching to a programmatic approach like Python.
Why switch to a programmatic approach?
We’re not one for clichés, but in this situation, the grass really is greener on the other side. Let us explain three reasons why Excel can limit your data analysis and management:
1. For those who work with large sets of data, it can be hard to manage when you’re confined to row and column data and lookups. It can also take a long time to open and load your data, and in a day-and-age where data collection is more popular and datasets are increasingly getting larger, programmatic data management can improve your data organization by facilitating easier connections to databases and allowing simple imports into data frames.
2. Even if Excel is getting the job done for you, you’re very limited to the visualizations you can build around your data. Sure, you’re able to get your point across, but your data is much more impactful when it’s visualized in a way that is meaningful, tells a story, and is on-brand.
3. Collaborating and sharing at scale can be frustrating in Excel. Users typically turn to tools like DropBox or email to share files, which can be very limiting. With a programmatic approach, you can easily edit, collaborate, and share your data effortlessly.
The switch to Python
Due to its powerful infrastructure and flexibility, Python is rapidly increasing in popularity for analyzing and managing data. So, what makes this programming language so great? For starters, it’s as simple to learn as it is to use. The language’s syntax is clear and easy to understand and its large fanbase means support is readily available, making it ideal for beginners. Additionally, Python’s extensive integration abilities can significantly increase productivity.
The benefits of Dash
Built as a Python framework, Plotly’s Dash will give you all the benefits of a programmatic approach, and then some. Even if you’re working with manageable data, you’re still limited to your current tool’s style and chart offerings. With a programmatic approach, especially one with Plotly’s Dash, you’ll have easier access to a number of data viz options, allowing you to build, test, and deploy beautiful interactive apps. Yep — you read that right — Dash apps are completely interactive. Because Dash is built on top of plotly.js, the charts are inherently interactive. Dash then gives you the ability to add additional interactive features such as drop-downs, sliders, and buttons, all built around your data code.
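As a rough illustration (not from the original post), a minimal Dash app with one interactive control might look like the sketch below. It assumes a recent version of Dash, uses Plotly Express's built-in Gapminder sample data, and the component IDs are arbitrary.

```python
from dash import Dash, dcc, html, Input, Output
import plotly.express as px

df = px.data.gapminder()  # built-in sample dataset
app = Dash(__name__)

app.layout = html.Div([
    dcc.Dropdown(
        id="year",
        options=[{"label": str(y), "value": int(y)} for y in sorted(df.year.unique())],
        value=2007,
    ),
    dcc.Graph(id="scatter"),
])

@app.callback(Output("scatter", "figure"), Input("year", "value"))
def update_figure(year):
    # The chart is rebuilt around the data code every time the dropdown changes.
    subset = df[df.year == year]
    return px.scatter(subset, x="gdpPercap", y="lifeExp",
                      size="pop", color="continent", log_x=True)

if __name__ == "__main__":
    app.run_server(debug=True)  # newer Dash versions prefer app.run(debug=True)
```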
Interested in learning more? Check out Dash at ODSC East
Don’t take our word for it — come see Dash in action. Our Head of Project Management, Chelsea Douglas, will be presenting a demo of Dash at the Open Data Science Conference (ODSC) East in Boston on Friday, May 3. Click here to learn more.
But wait, there’s more. We’ll be hosting the 10:30am coffee breaks at ODSC East. So you can grab a coffee and some snacks and schedule some time to talk with us. Register now!
Also — be sure to stay tuned for our upcoming Excel to Python blog series, where we’ll be discussing this in more detail and sharing best practices for moving over to a programmatic approach. | https://medium.com/plotly/is-your-current-bi-tool-holding-you-and-your-data-back-42f66ad494b | [] | 2019-04-29 16:14:18.780000+00:00 | ['Plotly', 'Data Analysis', 'Python', 'Data Science', 'Data Visualization'] |
3,594 | The Shift That Will Change Marketing Forever | The Shift That Will Change Marketing Forever
The marketing industry is at evolutionary crossroads. It’s a time of reckoning, an inflection point perhaps. Call it want you want, a shift is finally starting to occur.
Photo: Shutterstock
There are multiple definitions of marketing. Just Google it. You’ll spend the next 2 weeks reading. Probably the best single-source destination is nearly a decade old, from Heidi Cohen and her “72 Marketing Definitions”.
The short summary is that while yes, some definitions contradict others, the most commonly accepted understanding is perhaps best summed by Dr Philip Kotler who defines marketing as:
The science and art of exploring, creating, and delivering value to satisfy the needs of a target market at a profit. Marketing identifies unfulfilled needs and desires. It defines, measures and quantifies the size of the identified market and the profit potential. It pinpoints which segments the company is capable of serving best and it designs and promotes the appropriate products and services.
As a consequence, most businesses focus almost exclusively, on acquisition.
Does that meet the intent of the wider definition? I’ll let you decide.
Now, there’s nothing wrong with the function of acquisition. In fact, it is critical for growing a customer base. But the methods and mindsets required for pure acquisition simply do not translate to long-term customer engagement.
And not just in the ranks of practitioners, but by extension, the products that are developed for them by vendors around the world.
Engagement is not about segments or campaigns. It is relational and longitudinal, and in fact, it is where true brand perception lies. And so our multitude of product silos — from CRM to marketing automation, to eCommerce, to identity management, to Voice of Customer, to media engines (and on and on and on) — demand side to supply side — don’t always serve us well.
Silos and disconnections beckon.
Yes, engagement is a whole different ball game to the acquisition, and when businesses make the mistake of mixing them up, they tend to treat long-standing customers like campaign targets.
They also tend to focus at the top of the “funnel”, and existing customers often find that there is no commitment to a relationship at all.
Stephanie calls the contact center.
She gives her credentials and describes her query. They transfer her to the right department, where she gives her credentials again.
“How can I help you today?”, comes the ever-helpful agent. Patiently, she describes her query again. “Oh, woops — you need a different department”.
The transfer begins. Stephanie sits on hold, glancing at her watch. She doesn’t have long before she will have to pick up her father. The call picks up.
“Hello, this is Andy. Can I have your full name and password please?”.
It’s bad enough when a company doesn’t know that Stephanie was on its website last week, in one of its stores yesterday, and is now calling the contact center today, all in order to continue a single conversation with a company that she is trying to buy from.
But it’s indefensible that it loses its memory within the same channel, on the same journey!
I call this brand dementia. It’s frankly, ridiculous, and yet it’s everywhere.
And I mean, everywhere.
But that’s not all. Brands that have allowed a mercenary acquisition mindset to dominate their marketing culture, have found that the technological advances of the last decade, in particular, are simply too tempting.
Their use — and yes by that I do mean misuse — of their customer’s data, has given rise to a societal backlash. Among the myriad of morally questionable practices, everyone likes to cite the Cambridge Analytica scandal, but Facebook has not been alone in the murkier side of our marketing and data marriage.
Yet, there are still many marketers out there who contend that a traditional hard press acquisition mindset remains the cornerstone of the profession and that we all just need to push on.
Their abiding principle remains that they can acquire customers faster than the business might lose them. And besides, they are tasked with growth.
But they miss the point. Society — e.g. our customers — demand more. Let us not forget, this is where our growth actually comes from. | https://medium.com/the-kickstarter/the-shift-that-will-change-marketing-forever-4665b927577 | ['Aarron Spinley'] | 2020-07-28 12:06:46.733000+00:00 | ['Society', 'Business', 'Marketing', 'Strategy', 'Digital Marketing'] |
3,595 | Dashboard Using R Shiny Application | I decided to look at the change in patterns related to terrorism from 1970 to 2017 using the Kaggle dataset. I know the data is sad but interesting to work with.
About the DataSet:
The Global Terrorism dataset is an open source database which provides data for terrorist attacks from 1970 to 2017.
Geography: Worldwide
Time period: 1970–2017, except 1993
Unit of analysis: Attack
Variables: >100 variables on location, tactics, perpetrators, targets, and outcomes
Sources: Unclassified media articles (Note: Please interpret changes over time with caution. Global patterns are driven by diverse trends in particular regions, and data collection is influenced by fluctuations in access to media coverage over both time and place.)
Overview of R shiny Application:
Before we deep dive and see how a dashboard is created we should understand a little bit about the shiny application. Shiny is an open source R package that provides an elegant and powerful web framework for building web applications using R.
When you are in the RStudio environment, you have to create a new file and select the R Shiny web application option:
Create a new shiny application file
To begin we should install some packages if they are not installed:
shiny
shinydashboard
leaflet
Shiny Dashboard:
It is really important to know the structure and principles on which the dashboard is built.
This is called the User Interface (UI) part of the dashboard
1. Header
2. Sidebar
3. Body
Header:
Header, as the name suggests, describes or provides the title of the dashboard. You have to use the dashboardHeader function to create the header. The code below defines what should be written in the heading of the dashboard.
Code for header
Sidebar:
The sidebar of the dashboard appears on its left side. It is like a menu bar where you can put information for the user to select from. In my dashboard I created two menu items. As you can see, one is the tab with the name “Dashboard” and the other is a link to the Kaggle dataset. When you click it, it will open the source link of the data.
Codes for Sidebar
Below is the visualization of the sidebar menu:
Body:
This is the most important part of the dashboard, as it contains all the charts, graphs, and maps you want to visualize. It consists of rows, which determine where your data is visualized. These rows are created with fluidRow. Inside a fluid row you place boxes and determine the kind of options you want to put in each box using the selectInput function. In my dashboard I have a total of three rows. The code and visuals for row number two are shown below.
Codes for the fluidRow 2
Boxes in fluidRow 2
Combining all the rows makes the body for the dashboard using dashboardBody function:
Combining all the rows in a body
Creating server functions:
The server function tells the Shiny app how to build the objects. The server function creates the output(s) containing all the code needed to update the objects in the app. Each output must be produced by a render function. Render functions tell Shiny what kind of output is required. Some of the render functions are below:
renderDataTable → DataTable
renderImage → images (saved as a link to a source file)
renderPlot → plots
renderPrint → any printed output
renderTable → data frame, matrix, other table-like structures
renderText → character strings
renderUI → a Shiny tag object or HTML
Code example of renderPlot
Once you are done with writing codes within server functions the last step is to run the shinyApp.
Code to launch the app
The final output of my dashboard is below:
Once this dashboard was created, I deployed it online. You can open it on your system as well as on your smartphone.
You can find the codes and the data file for this dashboard on my Github.
Sources used: | https://medium.com/analytics-vidhya/dashboard-using-r-shiny-application-a2c846b0bc99 | ['Hassaan Ahmed'] | 2020-07-10 13:58:28.169000+00:00 | ['Dashboard', 'Rstudio', 'Shiny', 'Kaggle', 'Data Visualization'] |
3,596 | VJ Loop | Lattices RYG | This blog takes the broadest conception of sound design possible including visual effects because audio likes video. Over 90,000 views annually.
Follow | https://medium.com/sound-and-design/vj-loop-latticeryg-5c54b6c3178c | ["Michael 'Myk Eff' Filimowicz"] | 2020-12-29 02:23:29.952000+00:00 | ['EDM', 'Flair', 'Design', 'Art', 'Creativity'] |
3,597 | Advanced Visualization for Data Scientists with Matplotlib | 3D Plots using Matplotlib
3D plots play an important role in visualizing complex data in three or more dimensions.
1. 3D Scatter Plot
3D scatter plots are used to plot data points on three axes in an attempt to show the relationship between three variables. Each row in the data table is represented by a marker whose position depends on its values in the columns set on the X, Y, and Z axes.
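The article's embedded snippet isn't included in this dump; a minimal stand-in (random data, assuming Matplotlib 3.2+ where the 3D projection is registered automatically) might be:

```python
import numpy as np
import matplotlib.pyplot as plt

x, y, z = np.random.rand(3, 100)  # three arbitrary variables

ax = plt.figure().add_subplot(projection="3d")
ax.scatter(x, y, z, c=z, cmap="viridis")
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
plt.show()
```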
2. 3D Line Plot
3D Line Plots can be used in cases where we have one variable that is constantly increasing or decreasing. This variable can be placed on the Z-axis, while the change in the other two variables can be observed on the X-axis and Y-axis with respect to the Z-axis. For example, if we are using time series data (such as planetary motions), time can be placed on the Z-axis and the change in the other two variables can be observed from the visualization.
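One way to sketch this is with a spiral whose Z coordinate grows steadily, standing in for a time-like variable:

```python
import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(0, 15, 500)    # the steadily increasing variable (e.g. time)
x, y = np.sin(z), np.cos(z)    # the two variables observed against it

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(x, y, z)
ax.set_xlabel('X'); ax.set_ylabel('Y'); ax.set_zlabel('Z (time)')
plt.show()
```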
3. 3D Plots as Subplots
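A small illustrative sketch of this idea, placing two independent 3D views of the same synthetic data side by side:

```python
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.linspace(-5, 5, 50), np.linspace(-5, 5, 50))
Z = np.sin(np.sqrt(X**2 + Y**2))   # synthetic surface, for illustration only

fig = plt.figure(figsize=(10, 4))
ax1 = fig.add_subplot(1, 2, 1, projection='3d')   # left 3D subplot
ax1.plot_surface(X, Y, Z, cmap='viridis')
ax2 = fig.add_subplot(1, 2, 2, projection='3d')   # right 3D subplot
ax2.scatter(X.ravel(), Y.ravel(), Z.ravel(), s=1)
plt.show()
```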
The above code snippet can be used to create multiple 3D plots as subplots in the same figure. Both the plots can be analyzed independently.
4. Contour Plot
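An illustrative sketch of a contour plot built from a synthetic surface (the function is made up for the example):

```python
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
Z = np.exp(-(X**2 + Y**2))             # the 3D surface being summarised in 2D

fig, ax = plt.subplots()
cs = ax.contour(X, Y, Z, levels=10)    # each line connects points with equal Z
ax.clabel(cs, inline=True, fontsize=8)
plt.show()
```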
The above code snippet can be used to create contour plots. Contour plots can be used for representing a 3D surface on a 2D format. Given a value for the Z-axis, lines are drawn for connecting the (x,y) coordinates where that particular z value occurs. Contour plots are generally used for continuous variables rather than categorical data.
5. Contour Plot with Intensity
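The filled variant replaces the level lines with colored bands; a sketch with the same kind of synthetic data:

```python
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.linspace(-3, 3, 100), np.linspace(-3, 3, 100))
Z = np.exp(-(X**2 + Y**2))

fig, ax = plt.subplots()
cf = ax.contourf(X, Y, Z, levels=20, cmap='plasma')  # filled bands between levels
fig.colorbar(cf, ax=ax, label='Z value')
plt.show()
```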
The above code snippet can be used to create filled contour plots.
6. Surface Plot
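A minimal surface-plot sketch over a synthetic grid; any loss surface evaluated on a grid could be dropped in instead:

```python
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.linspace(-5, 5, 100), np.linspace(-5, 5, 100))
Z = np.sin(np.sqrt(X**2 + Y**2))       # illustrative "height" over the grid

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
surf = ax.plot_surface(X, Y, Z, cmap='viridis', edgecolor='none')
fig.colorbar(surf, shrink=0.5)
plt.show()
```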
The above code snippet can be used to create Surface plots which are used for plotting 3D data. They show a functional relationship between a designated dependent variable (Y), and two independent variables (X and Z) rather than showing the individual data points. A practical application for the above plot would be to visualize how the Gradient Descent algorithm converges.
7. Triangular Surface Plot
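A sketch using scattered, non-grid points, which plot_trisurf triangulates on its own:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)            # irregularly placed sample points
y = rng.uniform(-3, 3, 200)
z = np.exp(-(x**2 + y**2))

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_trisurf(x, y, z, cmap='viridis')  # builds the triangular mesh internally
plt.show()
```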
The above code snippet can be used to create Triangular Surface plot.
8. Polygon Plot
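An illustrative sketch using PolyCollection, one common way such stacked polygon plots are assembled:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import PolyCollection

x = np.linspace(0, 10, 50)
layers = [0.0, 1.0, 2.0, 3.0]          # one polygon per "depth" value
verts = []
for _ in layers:
    ys = np.random.rand(len(x))
    ys[0] = ys[-1] = 0                 # close each polygon at the baseline
    verts.append(list(zip(x, ys)))

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
poly = PolyCollection(verts, facecolors=['r', 'g', 'b', 'y'], alpha=0.6)
ax.add_collection3d(poly, zs=layers, zdir='y')
ax.set_xlim(0, 10); ax.set_ylim(-1, 4); ax.set_zlim(0, 1)
plt.show()
```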
The above code snippet can be used to create Polygon Plots.
9. Text Annotations in 3D
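A small sketch showing both text anchored at a 3D data point and text pinned to the 2D figure area:

```python
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter([1, 2, 3], [1, 2, 3], [1, 2, 3])
ax.text(1, 1, 1, "point A", color='red')                         # placed at (x, y, z) in data coords
ax.text2D(0.05, 0.95, "fixed 2D label", transform=ax.transAxes)  # stays put when the view rotates
plt.show()
```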
The above code snippet can be used to create text annotations in 3D plots. It is very useful when creating 3D plots as changing the angles of the plot does not distort the readability of the text.
10. 2D Data in 3D Plot
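A sketch that lays two ordinary 2D curves on separate planes of the same 3D axes so they can be compared:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 100)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(x, np.sin(x), zs=0, zdir='y', label='sin(x) at y=0')  # each curve sits on its own plane
ax.plot(x, np.cos(x), zs=1, zdir='y', label='cos(x) at y=1')
ax.legend()
plt.show()
```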
The above code snippet can be used to plot 2D data in a 3D plot. It is very useful as it allows comparing multiple 2D plots within a single 3D view.
11. 2D Bar Plot in 3D | https://medium.com/sfu-cspmp/advanced-visualization-for-data-scientists-with-matplotlib-15c28863c41c | ['Veekesh Dhununjoy'] | 2019-03-13 04:23:15.249000+00:00 | ['Data Visualization', 'Data Science', 'Matplotlib', 'Python', 'Technology'] |
3,598 | Blockchain as a business technology of the future! | Bitcoin and blockchain have been much talked about for several years now and the number of people interested in the technology is growing.
In simple terms, blockchain is a database, where information is secure, as it cannot be changed or removed.
In short, a blockchain is a database you cannot deceive or amend, and nothing can ever be deleted from it. Ageless, like Lukashenko.
The potential of the technology has not been fully unleashed so far, although the best specialists are working on integration and studying of blockchain even now, this very minute, while you are reading the article.
Blockchain is applied in many areas such as logistics, medicine, construction, and the public sector. The technology especially interests businessmen who want to optimize their business and save funds.
Of course, not all entrepreneurs are ready to invest in the new technology, whose potential has not been fully unleashed. However, blockchain is conquering the world: new specialists in the technology emerge as well as innovative application areas.
In many European countries, one can find a large number of seminars, meetups, and conferences, where the technology is discussed, for instance, Blockchain Summit, Blockchain Life, CryptoSpace, and Blockchain & Bitcoin Conference series. Participants can learn more about blockchain, gain insight into its peculiarities, and understand how to implement the technology in their projects. Many companies already use it in their business.
A step ahead
In December of 2017, Apple filed a patent application to use blockchain in the system for creating and certifying timestamps.
As blockchain transactions are transparent and unchangeable, every participant has access to records. Attempts to change a timestamp would be registered preventing fraud and ensuring resistance to attacks. In such a way, Apple Wallet would be protected against hacking and breaches. As a result, your Apple Wallet would be secure, and the poor online fraudster would probably find a job after another failure.
My car is my castle
Porsche became the first car manufacturer that uses blockchain in the automobile industry. Together with the German startup XIAN, the company is testing blockchain-based applications integrated in car computers.
Applications ensure locking and unlocking of car doors, temporary access authorization, and encrypted information logging. The use of the technology allows securely connecting to the repository that stores data related to the car and its features.
Coca Cola and blockchain: what do they have in common?
The world-famous Coca Cola beverage company has implemented blockchain technology in its business. Fittingly, one of the company’s slogans is “Taste the feeling”. The management uses blockchain to fight against compulsory labor.

The distributed ledger will contain data related to employees, including their labor agreements. The data will be protected using electronic notary services. In this way, the company wants to reduce the number of undocumented workers and fight against compulsory labor. As a result, unpaid overtime and the employment of underage workers will be eliminated; blockchain is making the world more humane.
As you understand, blockchain is deployed almost in all areas. Indeed, its arrival has revolutionized modern technologies and it can be likened to the invention of the Internet.
Today, most companies, startup founders, and reputable specialists are mastering blockchain so as not to miss out on the opportunities of modern business. Therefore, if blockchain and Bitcoin are synonyms for you, we recommend that you change the situation, or you will remain the last “analogue mammoth” in the world of digital technologies.
3,599 | Ensemble Learning — Your Machine Learning savior and here is why (Part 1) | Photo by Romain Baïsse
This is a series of posts that explain Ensemble methods in machine learning in an easy-to-follow manner. In this post, we will discuss Heterogenous Ensemble.
1/Heterogenous Ensemble
2/ Homogenous Ensemble — Bagging
3/ Homogenous Ensemble — Boosting
As a machine learning enthusiast and a self-taught learner, I write the following post as study material for myself and would love to share it with my fellow learners. The idea is to show the whole picture of the different Ensemble techniques, when to use them, and what effects they have on our models.
Ensemble is a machine learning technique that makes predictions by using a collection of models. This is the reason why, most of the time, learners tend to discover this method only after studying all the other classic algorithms. The strength of this technique derives from the idea of “The wisdom of the crowds”, which states:

“the aggregation of information in groups, resulting in decisions, are often better than could have been made by any single member of the groups” (Wikipedia, Wisdom of the crowd, 2020).

The idea is certainly proven by the fact that several winning predictive models in Kaggle competitions deploy this modelling method.
There are great benefits to implementing Ensembles. Apart from amazingly high accuracy, the Ensemble method is ideal for larger datasets, reducing training time and memory usage by optimising parallel computing (e.g. XGBoost, LightGBM). One of the worst nightmares of machine learning practitioners is overfitting, especially on real-world datasets that contain noise and do not follow any typical data distribution. In that case we can also use Ensembles to fight against high variance (e.g. Random Forest).
Heterogenous Ensemble is used when we want to combine different fine-tuned algorithms to come up with the best possible prediction. Voting and Stacking are examples of this technique.
Hard Voting
This method is used for classification tasks. It combines predictions of multiple estimators using Mode (choosing the majority class) to select the final result. By choosing the class that has the largest sum of votes, Hard Voting Ensemble can achieve better prediction than models that just use a single algorithm.
In order to be effective, we need to provide an odd number (3 or more) of diverse estimators that produce their own independent predictions.

Following is an example of code using the Hard Voting Ensemble. Please note that the list of algorithms is your choice, as long as they are fine-tuned, diverse, and suited to classification tasks.
Code example using Hard Voting for classification task
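A minimal sketch of what this can look like with scikit-learn's VotingClassifier; the dataset, base estimators, and hyperparameters below are illustrative choices rather than the author's originals:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Three diverse (and ideally pre-tuned) estimators; an odd count avoids tied votes.
voter = VotingClassifier(
    estimators=[
        ('lr', make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ('rf', RandomForestClassifier(n_estimators=200, random_state=42)),
        ('svc', make_pipeline(StandardScaler(), SVC())),
    ],
    voting='hard',   # final class = mode (majority) of the predicted labels
)
voter.fit(X_train, y_train)
print(voter.score(X_test, y_test))
```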
Soft Voting
This technique can be used for both regression and classification tasks. In order to perform Soft Voting, we employ the mean, averaging the predicted probabilities for classification and the predicted values for regression tasks. The characteristics are similar to Hard Voting, except that we can use any number of estimators as long as there are more than 2. We can also assign weights to the classifiers depending on their importance for the final prediction, just like in Hard Voting.
Code example using Soft Voting for regression task
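For the regression case, a hedged sketch using scikit-learn's VotingRegressor, which averages the base predictions; the weights and estimators here are only illustrative:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# The ensemble prediction is the (weighted) mean of the individual predictions.
voter = VotingRegressor(
    estimators=[
        ('lin', LinearRegression()),
        ('rf', RandomForestRegressor(n_estimators=200, random_state=42)),
        ('gbr', GradientBoostingRegressor(random_state=42)),
    ],
    weights=[1, 2, 2],   # give the tree-based models more influence
)
voter.fit(X_train, y_train)
print(voter.score(X_test, y_test))   # R^2 on the held-out split
```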
Stacking
If you have tried Hard Voting and Soft Voting but still have not achieved the expected results, this is when Stacking comes into play. The technique is employed for both classification and regression tasks. The biggest difference between Stacking and the two other Heterogenous Ensemble methods is that, apart from the base learners, we have an extra meta-classifier.

Firstly, the base learners train and predict on the dataset. Then, the meta-estimator on the second layer uses the predictions of the base-learner layer as new input features for the next step. Note that the meta-estimator trains and predicts on a dataset whose input features (X) are these new features, one per base learner, combined with the class labels (Y) of the original dataset. In this context, we no longer use estimates of location for the combination but use a trainable learner as the combiner itself.
The benefit of this approach is that the meta-estimator can effectively examine which base estimator provides better prediction as well as participate directly in the final prediction using the original data and new input features.
Source: Stacking model visualisation from mlxtend
Code example using the Stacking method for a classification task
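A rough sketch of the two-layer setup with scikit-learn's StackingClassifier (the figure above is from mlxtend, which provides a similar StackingClassifier); the estimators and dataset below are illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

stack = StackingClassifier(
    estimators=[   # base learners: their out-of-fold predictions become new features
        ('rf', RandomForestClassifier(n_estimators=200, random_state=42)),
        ('svc', make_pipeline(StandardScaler(), SVC(probability=True))),
        ('knn', make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-estimator on the second layer
    cv=5,
)
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))
```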
Overall, the Heterogenous Ensemble method is a great choice when you have already built a small set of estimators and individually trained and fine-tuned them. Bear in mind that
“ Voting only makes sense if the learning schemes perform comparably well. If two of the three classifiers make predictions that are grossly incorrect, we will be in trouble!” (Witten, Frank, Hall, & Pal, 2016).
Heterogenous Ensemble certainly provides an improvement in overall performance by applying a simple and intuitive Ensemble technique. In the next post, we will dive into another Ensemble technique that includes some award-winning algorithms.
Thanks for reading!
Don’t be shy, let’s connect if you are interested in more of my posts in:
Medium: https://medium.com/@irenepham_45233
Instagram: https://www.instagram.com/funwithmachinelearning/ | https://medium.com/analytics-vidhya/ensemble-learning-your-machine-learning-savoir-and-here-is-why-part-1-78ef52c8c365 | ['Irene Pham'] | 2020-12-23 16:37:39.310000+00:00 | ['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Big Data', 'Algorithms'] |