Dataset columns (name | type | value range):
Unnamed: 0 | int64 | 0 to 192k
title | string | 1 to 200 characters
text | string | 10 to 100k characters
url | string | 32 to 885 characters
authors | string | 2 to 392 characters
timestamp | string | 19 to 32 characters
tags | string | 6 to 263 characters
info | string | 45 to 90.4k characters
3,700
Donut Pie-Chart using Matplotlib
Most data analysts and data scientists use various visualization techniques for data analysis and EDA. But after going through a lot of Kaggle notebooks and projects, I couldn't find data visualized in the form of pie charts. I know people prefer histograms and bar plots over pie charts because of how much they convey in a single view, but if pie charts are used precisely where they fit, they can make more sense than most bar plots and histograms. So we'll take the pie-chart game a level ahead and create a custom donut pie chart. Sounds interesting? Read on to find out! We'll import matplotlib.pyplot, as this is the only library required to generate our donut pie charts. Run "import matplotlib.pyplot as plt" in your first cell. Now we'll define two lists, list_headings and list_data, with some custom data to visualize. After that, we'll create a solid circle using plt.Circle with custom dimensions; drawn over the centre, it creates the hollow space inside our pie chart that gives it a donut-like shape. Next, we'll create a simple pie chart using plt.pie(), passing list_data and list_headings as the initial arguments to visualize the data. After that, we'll call plt.gcf(). This function in the pyplot module of the matplotlib library returns the current figure (creating one if none exists). Now we'll add our solid circle to the plot using the add_artist() method of the current axes, obtained with gca(). And finally, we'll write that simply awesome word combination: plt.show(). And the result would look similar to this: Full Code: But this is not the end of our article. We need to customize this donut pie chart to make it more attractive and visually appealing. We'll add an empty space between each segment of our donut pie chart by styling the wedges. We'll add one more argument to plt.pie() to achieve the desired output: wedgeprops, which lets us give each wedge an edge in the background colour. And your output will look similar to this: Full Code: You can add custom colors to your donut pie chart as well. Define a list of the colors that you want to use for the visualization and pass it to plt.pie() as the colors argument. Full code: This generation's obsession with the color black is totally mind-boggling. Everybody wants to wear black, code in dark mode, and own black accessories. So this one's for them specifically. We'll add a black background to our donut pie chart so that it looks more appealing and readable than the previous visualizations. We'll use fig.patch.set_facecolor() to set a custom background color for the figure. Also, we'll change our labels' text color to white so that it is readable on the black background. And your ultimate result would look like this: Full Code: And yes, this is it. Now you can create your own customized donut pie charts for your visualizations. You can check this Kaggle notebook to see how to plot multiple pie charts in multiple rows and columns and add a legend to pie charts. Have a great day of learning!
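The article's "Full Code" sections were published as embedded code images and are not reproduced above, so the block below is only a minimal sketch of the steps described, folding the customizations (wedge gaps via wedgeprops, custom colors, and the black background) into a single example. The list_headings, list_data, and colors values are hypothetical placeholders, not the author's data.

```python
import matplotlib.pyplot as plt

# Hypothetical data standing in for the article's list_headings / list_data
list_headings = ['Python', 'Java', 'C++', 'JavaScript']
list_data = [40, 25, 20, 15]
colors = ['#4c72b0', '#dd8452', '#55a868', '#c44e52']  # optional custom colors

fig = plt.gcf()                    # get the current figure (creates one if needed)
fig.patch.set_facecolor('black')   # black background, as in the final customization

# Pie chart with white labels and a gap between segments via wedgeprops
plt.pie(list_data,
        labels=list_headings,
        colors=colors,
        textprops={'color': 'white'},
        wedgeprops={'edgecolor': 'black', 'linewidth': 3})

# Solid circle drawn over the centre to hollow the pie out into a donut
centre_circle = plt.Circle((0, 0), 0.70, fc='black')
fig.gca().add_artist(centre_circle)

plt.show()
```

For the plain version described first, drop the set_facecolor and textprops lines and use white for the centre circle and the wedge edges instead.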
https://medium.com/analytics-vidhya/donut-pie-chart-using-matplotlib-dc60f6606359
['Dhruv Anurag']
2020-12-15 16:38:45.604000+00:00
['Exploratory Data Analysis', 'Pie Charts', 'Matplotlib', 'Data Visualization', 'Data Science']
3,701
A Brief Summary of Apache Hadoop: A Solution of Big Data Problem and Hint comes from Google
Introduction to Hadoop Hadoop helps organizations leverage the opportunities provided by Big Data and overcome the challenges it brings. What is Hadoop? Hadoop is an open-source, Java-based programming framework that supports the processing of large data sets in a distributed computing environment. It is based on the Google File System (GFS). Why Hadoop? Hadoop runs applications on distributed systems with thousands of nodes involving petabytes of information. It has a distributed file system, called the Hadoop Distributed File System or HDFS, which enables fast data transfer among the nodes. Hadoop Framework Hadoop Distributed File System (Hadoop HDFS): It provides a storage layer for Hadoop. It is suitable for distributed storage and processing, i.e. the data is first distributed and then processed. HDFS provides a command-line interface to interact with Hadoop and streaming access to file system data, and it includes file permissions and authentication. So what stores data here? It is HBase that stores data in HDFS. HBase: It stores data in HDFS. It is a NoSQL, non-relational database, mainly used when you need random, real-time read/write access to your big data. It supports high volumes of data and high throughput. In HBase, a table can have thousands of columns. So far we have discussed how data is distributed and stored; now let's see how data is ingested and transferred to HDFS. Sqoop does that. Sqoop: Sqoop is a tool designed to transfer data between Hadoop and relational databases. It is used to import data from relational databases such as Oracle and MySQL into HDFS and to export data from HDFS back to a relational database. If you want to ingest data such as streaming data, sensor data or log files, you can use Flume. Flume: Flume is a distributed service for ingesting streaming data; it collects event data and transfers it to HDFS. It is ideally suited for event data from multiple systems. After the data is transferred into HDFS, it is processed, and one of the frameworks that processes data is Spark. Spark: An open-source cluster computing framework. For some applications, its in-memory primitives provide up to 100 times faster performance than the two-stage, disk-based MapReduce. Spark runs in the Hadoop cluster and processes data in HDFS, and it supports a wide variety of workloads. Spark has the following major components (shown in the Spark Major Components figure). Hadoop MapReduce: It is another framework that processes data. It is the original Hadoop processing engine, primarily based on Java and built on the Map and Reduce programming model. Many tools such as Hive and Pig are built on the MapReduce model. It is a broad, mature, fault-tolerant framework and the most commonly used one. After the data is processed, analysis can be done using the open-source dataflow system called Pig. Pig: It is an open-source dataflow system, mainly used for analytics. It converts Pig scripts to MapReduce code, saving the developer from writing MapReduce code. Ad hoc operations like filters and joins, which are challenging to perform in MapReduce, can be done efficiently using Pig. It is an alternative to writing MapReduce code. You can also use Impala to analyze data. Impala: It is a high-performance SQL engine which runs on a Hadoop cluster. It is ideal for interactive analysis. It has very low latency, which can be measured in milliseconds. It supports a dialect of SQL (Impala SQL).
With Impala, data in HDFS is modelled as database tables. You can also implement data analysis using Hive. Hive: It is an abstraction layer on top of Hadoop. It is very similar to Impala; however, it is preferred for data processing and ETL (extract, transform and load) operations, while Impala is preferred for ad hoc queries. Hive executes queries using MapReduce, but the user does not need to write any low-level MapReduce code. Hive is suitable for structured data. After the data is analyzed, it is ready for users to access, and searching the data can be done with Cloudera Search. Cloudera Search: It is a near-real-time access product that enables non-technical users to search and explore data stored in, or ingested into, Hadoop and HBase. Users don't need SQL or programming skills to use Cloudera Search because it provides a simple full-text interface for searching. It is a fully integrated data processing platform: Cloudera Search uses the flexible, scalable and robust storage system combined with CDH, the Cloudera Distribution including Hadoop. This eliminates the need to move large data sets across infrastructures to address business tasks. Hadoop jobs such as MapReduce, Pig, Hive and Sqoop have workflows. Oozie: Oozie is a workflow or coordination system that you can employ to manage Hadoop jobs. The Oozie application lifecycle is shown in the diagram: Oozie lifecycle (Simplilearn.com). Multiple actions occur between the start and end of the workflow. Hue: Hue is an acronym for Hadoop User Experience. It is an open-source web interface for analyzing data with Hadoop. You can perform the following operations using Hue: 1. Upload and browse data 2. Query a table in Hive and Impala 3. Run Spark and Pig jobs 4. Search data with workflows. Hue makes Hadoop easier to use. It also provides editors for Hive, Impala, MySQL, Oracle, PostgreSQL, Spark SQL and Solr SQL. Now we will discuss how all these components work together to process Big Data. There are four stages of Big Data processing. Four stages of Big Data processing (blog.cloudera.com/blog) The first stage is ingestion, where data is ingested or transferred to Hadoop from various sources such as relational database systems or local files. As we discussed earlier, Sqoop transfers data from an RDBMS (relational database) to HDFS, whereas Flume transfers event data. The second stage is processing. In this stage, the data is stored and processed. We discussed earlier that the information is stored in the distributed file system HDFS and in the NoSQL distributed database HBase, while Spark and MapReduce perform the data processing. The third stage is analysis; here the data is interpreted by processing frameworks such as Pig, Hive and Impala. Pig converts the data using Map and Reduce and then analyzes it. Hive is also based on Map and Reduce programming and is more suitable for structured data. The fourth stage is access, which is performed by tools such as Hue and Cloudera Search. In this stage, the analyzed data can be accessed by users. Hue is the web interface for exploring data. Now you know the basics of the Hadoop framework and can build on these skills to become an expert data engineer. I'm going to keep writing about Hadoop and other machine learning topics. If you'd like to stay updated, you can follow me here or on LinkedIn. You can also take a look at my series on importing data from the web. Everything that I talked about in this series is fundamental.
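The article describes the pipeline only in prose, so here is a minimal PySpark sketch of the processing and analysis stages it outlines: Spark reading a file that the ingestion stage (Sqoop or Flume) landed in HDFS, then an SQL-style aggregation of the kind you might also run in Hive or Impala. The HDFS path, view name, and column names are hypothetical.

```python
from pyspark.sql import SparkSession

# Spark running on the Hadoop cluster, reading data already ingested into HDFS
spark = SparkSession.builder.appName("hadoop-pipeline-sketch").getOrCreate()

# Hypothetical HDFS path produced by the ingestion stage (Sqoop/Flume)
orders = spark.read.csv("hdfs:///data/ingested/orders.csv",
                        header=True, inferSchema=True)

# Analysis stage: the same kind of aggregation one might express in Hive or Impala SQL
orders.createOrReplaceTempView("orders")
top_customers = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_spent
    FROM orders
    GROUP BY customer_id
    ORDER BY total_spent DESC
    LIMIT 10
""")

top_customers.show()   # access stage: results could also be explored through Hue
spark.stop()
```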
https://towardsdatascience.com/a-brief-summary-of-apache-hadoop-a-solution-of-big-data-problem-and-hint-comes-from-google-95fd63b83623
['Sahil Dhankhad']
2019-04-29 01:06:34.335000+00:00
['Big Data', 'Software Development', 'Data', 'Data Science', 'Science']
3,702
Blood & Dust: Drawing the Unconscious.
Justice ‘To love another person is to see the face of God’ ~ Victor Hugo, Les Misérables. In 1853, the brutal murder of a woman by her husband shocked the small island of Guernsey. This was not crowded Paris, where people felt distant from each other. This was an island of a few thousand inhabitants, where everyone knew each other, where every event felt tangible, where each tragedy touched every family. Justicia by Victor Hugo | Wiki Art The evidence against Charles Tapner, the man who was accused of murdering his wife, was substantial but not definite. The residents saw no clear motive in Tapner’s cruel action. Despite numerous petitions signed by the residents urging the British Home Secretary to spare Tapner, he was still sentenced to death by hanging. One of the signatories of that petition was Victor Hugo. He stood by his principle that nobody has the right to take someone else’s life. After all, it was for defending those very principles that he had been forced into exile. Hugo sank into a deep melancholy. He sat at his desk and drew one of the darkest and scariest drawings I have ever seen. He called it ‘Justicia’. This drawing was made two decades before the art critic Louis Leroy used the term ‘impressionism’ to describe the artistic style of painters like Monet. But we don’t have to be art critics ourselves to ask: what is the Justicia drawing if not the product of an impression? An impression of Tapner’s soul surrounded by darkness, his floating head screaming in pain. If we step back and look from a distance, we can also see the blurry face of a woman, of his wife. Les Misérables. “Even the darkest night will end and the sun will rise.” ― Victor Hugo, Les Misérables Those who are forced into exile never know if they are ever going to return home. From the very first days they try to create a space where everything looks and feels, tastes and smells like home. The interior of the house where Victor Hugo settled during his time in exile is a piece of art in itself. He designed it entirely by his own hand. Each room in the Hauteville House reflected a historical period of France. If every other room in the house was dedicated to the past, the room on the top floor was dedicated to the present. That room overlooked the sea, and it was there that Hugo began to write his masterpiece Les Misérables. ‘Those who do not weep, do not see’ says one of my favourite lines from that novel. If you see the suffering of others and their pain does not bring tears to your cheeks, then you have not fully grasped their pain. Gavroche a onze ans (“Gavroche at eleven years old”) | Wiki Commons Drawing once again acted as a back door to the unconscious for Hugo. We can see that in his drawing of Gavroche, one of the most iconic characters of Les Misérables. He is a boy who lives on the streets. His character represents what it is like to be someone who was born without the right to a decent future. But Gavroche has a darker symbolism: he symbolises populism, the rule, the impulse, the irrational instinct of the crowd. The drawing of him is as ominous and dark as Hugo’s drawing of Tapner. Gavroche’s wide, dark smile and his narrow eyes remind me of another, more recent fictional character who is also a populist, disenfranchised madman: the Joker played by Joaquin Phoenix in the 2019 adaptation by Todd Phillips. Order & Chaos.
‘One must have chaos in oneself to give birth to a dancing star.’ ~ Friedrich Nietzsche Victor Hugo kept his drawings private and rarely shared them with anyone outside his narrow circle. Yet one of the greatest painters of the time, Eugène Delacroix, said that if Victor Hugo had decided to become a painter instead of a writer, he would have become one of the greatest artists of the century. Why did he keep his drawings secret? Literary critics explain this as his desire to focus the public’s attention on his novels. They might be right, but there may be another reason. Hugo’s heart was a battleground between chaos and order. Throughout his hard life, he tried to tame the surrounding chaos to give birth to exceptional works. He drew his destiny as a strong, uncontrollable ocean wave that he tried to get hold of. The wave of my destiny by Victor Hugo, 1857 | Wiki Art He felt as if destiny were trying to take away everything that he had loved. His eldest and favourite daughter Léopoldine drowned in a boating accident in 1843. Then destiny forced him out of his home. Drawing acted as a therapy. Through drawing he could make those forces of chaos more tangible. He could get into a dialogue with them. Dante, Byron and Wilde did not dare explore other art forms to unlock their unconscious. That is what makes Victor Hugo and his writing exceptional. That is what makes Les Misérables so sublime. That novel is a drawing painted with words.
https://medium.com/lessons-from-history/blood-dust-drawing-the-unconscious-429d522587fb
['Vashik Armenikus']
2020-10-19 10:48:40.912000+00:00
['History', 'Literature', 'Art', 'Psychology', 'Creativity']
3,703
When Should You Use a Pie Chart?
Written by Data Experience @airbnb / Prev: Turn data into pixels @twitter • Invent new vis @UofMaryland HCIL PhD • From @Thailand • http://kristw.yellowpigz.com
https://medium.com/skooldio/%E0%B9%80%E0%B8%A1%E0%B8%B7%E0%B9%88%E0%B8%AD%E0%B9%84%E0%B8%AB%E0%B8%A3%E0%B9%88%E0%B8%88%E0%B8%B6%E0%B8%87%E0%B8%84%E0%B8%A7%E0%B8%A3%E0%B9%83%E0%B8%8A%E0%B9%89%E0%B9%81%E0%B8%9C%E0%B8%99%E0%B8%A0%E0%B8%B9%E0%B8%A1%E0%B8%B4%E0%B8%A7%E0%B8%87%E0%B8%81%E0%B8%A5%E0%B8%A1-pie-chart-3ac273e24463
['Krist Wongsuphasawat']
2017-03-14 17:29:14.588000+00:00
['Design', 'Business', 'Visualization', 'Data', 'Charts']
3,704
5 Powerful Hidden Facebook Page Features for Marketers
Do you manage a Facebook page for your business? Interested in ways to improve your marketing? In addition to the Facebook features you use for business every day, there are some handy ones you may have overlooked. In this article you’ll discover five lesser known Facebook Page features for marketers. Discover five Facebook features page admins need to know about. #1: Free Images for Ads When creating a Facebook ad, you can choose from a searchable database of thousands of free stock images from within the Facebook image library. This takes an extra step out of the ad creation process. You can create Facebook ads quickly by choosing photos from the image library. This image library is powered by Shutterstock, but there’s one important caveat: Not all of the images meet Facebook’s advertising guidelines. For this reason, it’s important to familiarize yourself with the guidelines and choose your images carefully. You don’t want your ads getting rejected over some minor technicality, like the 20% text rule on ad images. #2: Ad Relevance Scores The ad relevance score is basically Facebook’s answer to Google’s quality score for AdWords. The relevance score guides how often your Facebook ad will be displayed and how much you’ll pay for each ad engagement. Facebook considers a lot of different factors when calculating your relevance score, including positive and negative feedback via video views, clicks, comments, likes and other ad interactions. If people report your ad or tell Facebook they don’t want to see it anymore, those actions count against you. This score measures how relevant your ad is to your target audience. Keeping an eye on your ad relevance score can help you determine if your ad needs work. Oddly enough, this setting is unchecked by default. To enable ad relevance scoring, open the ad or ad set in your Ads Manager and navigate to Customize Columns. From the list of available columns, find and select the Relevance Score check box. Enabling this option adds a Relevance Score column to your ads reports so you can keep an eye on this metric. #3: Email Contact Import A great way to grow your audience is to invite the people in your email address book to like your Facebook business page. To do that, go to your Facebook business page, click on the ellipsis (…) button (next to the Share button on your cover image) and then select Invite Email Contacts from the drop-down menu. To build your audience for your Facebook page, invite your email contacts to like your page. Next, you see a pop-up box that lists all of the different integration options you can use to import your contacts. Identify the contact list you want to import and click the Invite Contacts link to the right. After you upload your list, a dialog box appears where you can select which contacts to invite. You have the option to select individual contacts or the group as a whole. After you select your contacts, click Preview Invitation. On the next page,review the invitation, select the check box that you’re authorized to send invitations and click Send. There are a couple of points to keep in mind when sending invitations. You can upload up to 5,000 contacts per day, so if you have large customer or subscriber lists, you’ll have to send invitations in batches. Remember, your page may already be suggested to your contacts who use Facebook, so you can decide whether to email them as well. If you’re already showing up in their recommended pages, it’s just free advertising for you. 
#4: Facebook Post Scheduling The ability to schedule Facebook posts is pretty handy, especially if you’re using promoted posts. The good news is that you don’t need Hootsuite or Buffer to do it. You can schedule future posts right in Facebook. You can even backdate posts so that they appear earlier in your timeline. To access this feature, go to the Publishing Tools tab, select Scheduled Posts and click the Create button. Compose your post and then select Schedule from the Publish drop-down menu. Select the date and time to schedule your post. When you’re finished, click Schedule. Scheduling posts can be especially useful for larger teams where you have different people creating and uploading Facebook content and targeting and launching your social PPC campaigns. #5: Pages to Watch Metrics At the very bottom of your Facebook Insights page, you’ll find a Pages to Watch area where you can track other pages, such as your partners, competitors and friends. You can see metrics for the likes, posts and engagement on these pages. For example, the Pages to Watch metrics below reveal that HubSpot page likes are currently at 813,200, up 40.6% over the previous week. Also, there were five posts to the page, engaging 158 people. Looking at these metrics is an easy way to track your competitors’ fan growth and look at their engagement numbers. This information can also give you a sense of how many times to post per week. In a nutshell, you can see how your social media marketing efforts stack up against others using real benchmarks (their actual performance). Conclusion Facebook is constantly being redesigned and refreshed, so it can be hard to keep up with all of the options available to you. The five hidden Facebook features covered in this article are somewhat buried in the Ads Manager and Publishing Tools, so you probably wouldn’t stumble upon them. But they can be valuable tools for your Facebook marketing. About The Author Larry Kim is the CEO of Mobile Monkey and founder of WordStream. You can connect with him on Twitter, Facebook, LinkedIn and Instagram.
https://medium.com/marketing-and-entrepreneurship/5-powerful-hidden-facebook-page-features-for-marketers-c54f456a2e92
['Larry Kim']
2017-04-30 16:26:46.328000+00:00
['Social Media', 'Social Media Marketing', 'Advertising', 'Marketing', 'Facebook']
3,705
An introduction to Linear Algebra for Programmers
An introduction to Linear Algebra for Programmers A collection of notes on the topic of Linear Algebra, with the intention of helping to understand Neural Networks. Coordinates x = horizontal, y = vertical. Vectors A vector in CS is often characterised as an array with values inside it. So a two-dimensional vector with an x of 2 and a y of 1 would look like this: [2,1]. Vector Addition Let’s say that we have a vector of [2,1] and a second vector of [4,-3]. If we want to add these two vectors together, we add the values that correspond with one another (i.e. add x to x and y to y). So in this case, our addition would result in [6,-2]. If we are visualising this on a graph, we could take our first vector [2,1] and plot it from the origin (which would usually be [0,0]), then take the second vector [4,-3] and plot it as if it were a continuation from [2,1] on the graph. The result would still be the same as if you had taken your final value of [6,-2] and plotted that. Only now we are able to see how the graph progresses through each value (if so required). Scalars This involves taking the values of a vector and multiplying them by whatever value is passed in. This is known as scaling, and the number we multiply by is a scalar. Scalar Multiplication Here are some examples: v = [2,1] Our vector has an x coordinate of 2 and a y coordinate of 1. 2v = [4,2] Here we basically multiply [2,1] by 2, which gives us [4,2] plotted out on a graph. -1.8v = [-3.6, -1.8] Here we take [2,1], first flip it about the origin, and then do the multiplication as if -1.8 were actually 1.8, which gives us [-3.6, -1.8]. To simplify how this operates, we can consider that if we are multiplying by a negative value, we can switch the values in our vector from positive to negative (or negative to positive) and then treat the initial minus value as a positive value. 1/3v = [0.66, 0.33] Here we take [2,1] and reduce it down to one third of its values, so we would then plot [0.66, 0.33] on our graph. The XY Coordinate System With vectors, we can think of each vector value as a scalar that operates on the xy coordinate system. In the xy coordinate system, there are two very special vectors: the one that points to the right of the origin, which is ‘i’, and the one that points vertically up from the origin, which is ‘j’. These both have a length of 1. They are what we refer to as the ‘basis’ of the coordinate system. So now we can look at our two vector values of [2,1] and consider each of those to be a scalar that stretches i and j along their axes. So now we have 2i and 1j. We can then take these two scaled vectors and add them together, which would look like (2)i + (1)j. Any time we scale two vectors and add them together, it is called a ‘linear combination’. One thing to bear in mind is that we could theoretically use different basis vectors if we wanted to. If our basis vectors had a length of 2 instead of the length of 1 that ‘i’ and ‘j’ have, our original vector of [2,1] would no longer plot at the same place on our graph. It would actually end up at [4,2] instead. Linear Transformations and their relation to Matrices Transformation basically just means function: a function that takes an input and returns an output. So a transformation takes in a vector and returns another vector. The word ‘transformation’ is used because it helps to signify movement.
So it is like watching the input vector move from its position over to its new position (the output vector). Visually speaking, a transformation is linear if it has two properties: 1. all lines must remain lines; 2. the origin must remain fixed in place. So if a line curves, it’s not a linear transformation. If we remember that the values of a vector can be used to scale along i and j (for example, v = 2i + 1j), then when we carry out a linear transformation the gridlines still remain evenly spaced, and the place where v lands is still 2i + 1j (using the transformed i and j). So when we transform our vector (which means i and j are also transformed), we still get the same linear combination. This means that we can deduce where v must go based only on where i and j land. Visualising Linear Transformations One way to visualise this: if we had a grid with a vector placed on it, we could imagine that the vector stays attached to the grid while we rotate the grid itself. The vector would now be in a new position, but the calculation of how it arrived there would remain the same, even though the values for i, j and v would be different. Bear in mind that we don’t have to transform simply by rotating the axes. We could stretch out the positions of i and j if we wanted to, so that, for example, i is now twice as long as it was before, while j is whatever it now corresponds to. So if we had i and j, then rotated our grid 90 degrees counterclockwise, i would move from [1,0] to [0,1], and j would rotate from [0,1] to [-1,0]. We could take those values (whether the ones before or the newly rotated values) and create a 2x2 matrix from them. It would look like this (with the top row [0 -1] and the bottom row [1 0] sitting inside one pair of tall brackets): [0 -1] [1 0]. Every time you see a matrix, you can consider it to be a linear transformation in space. Regarding the transformation, it is worth bearing in mind that I think this only works when a grid transformation still takes up the same surface area as before. There are things called rotations, which rotate, and shears, which transform (i.e. stretch), but the diagrams I have seen thus far still take up the same amount of space. So a rotation might be rotating the grid by 90 degrees, while the shear might stretch out a rectangular grid space into a parallelogram. Sometimes this combined transformation of both a rotation and a shear is called a ‘composition’. Matrix Multiplication Matrix multiplication represents applying one transformation after another. The order of the transformations also matters, as it affects the outcome. Function Notation Since we write functions to the left of variables, whenever we have to compose two functions we read from right to left. Imagining that the brackets span the entire height of each matrix (with a semicolon separating the rows), the general product is: [a b; c d] [e f; g h] = [ae+bg af+bh; ce+dg cf+dh]. 3D linear transformations Just like how we have i and j for the x and y axes, we also have k for the z axis. A note regarding what you have just read These notes weren’t necessarily meant for public consumption, but the process of writing about something helps me to solidify my understanding. If you choose to read this, take them with a pinch of salt: that pinch being that I am still way in over my head trying to make sense of the world of Linear Algebra, but I’m certainly trying!
A special thanks goes out to the YouTube channel 3Blue1Brown for making an excellent series titled ‘Essence of Linear Algebra’. This ‘article’ was simply a collection of notes that were made whilst watching it.
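As a companion to these notes, the following NumPy sketch works through the ideas above: vector addition, scalar multiplication, a linear combination of the basis vectors, the 90-degree counterclockwise rotation applied as a matrix-vector product, and a composition of two transformations. The variable names are illustrative only and are not from the original notes.

```python
import numpy as np

# Vector addition: [2,1] + [4,-3] = [6,-2]
v = np.array([2, 1])
w = np.array([4, -3])
print(v + w)                   # [ 6 -2]

# Scalar multiplication: 2v, -1.8v, (1/3)v
print(2 * v)                   # [4 2]
print(-1.8 * v)                # [-3.6 -1.8]
print(v / 3)                   # approximately [0.67 0.33]

# Linear combination of the basis vectors: v = 2i + 1j
i_hat = np.array([1, 0])
j_hat = np.array([0, 1])
print(2 * i_hat + 1 * j_hat)   # [2 1]

# 90-degree counterclockwise rotation: i lands on [0,1], j lands on [-1,0].
# The columns of the matrix record where i and j land.
rotation = np.array([[0, -1],
                     [1,  0]])
print(rotation @ v)            # [-1  2], where [2,1] lands after the rotation

# Matrix multiplication: applying one transformation after another (read right to left)
shear = np.array([[1, 1],
                  [0, 1]])
composition = shear @ rotation  # first rotate, then shear
print(composition @ v)          # [1 2]
```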
https://medium.com/ai-in-plain-english/an-introduction-to-linear-algebra-for-programmers-c737dc2c50a4
['Sunil Sandhu']
2020-04-17 12:01:32.511000+00:00
['Coding', 'Programming', 'AI', 'Artificial Intelligence', 'Machine Learning']
3,706
Top 10 In-Demand programming languages to learn in 2020
1. Python When Guido van Rossum developed Python in the 1990s as his side project, nobody has thought it would be the most popular programming language one day. Considering all well-recognized rankings and industry trends, I put Python as the number one programming language overall. Python has not seen a meteoric rise in popularity like Java or C/C++. Also, Python is not a disruptive programming language. But from the very beginning, Python has focused on developer experience and tried to lower the barrier to programming so that school kids can also write production-grade code. In 2008, Python went through a massive overhaul and improvement with the cost of introducing significant breaking changes by introducing Python 3. Today, Python is omnipresent and used in many areas of software development, with no sign of slowing down. 3 Key Features: The USP of Python is its language design. It is highly productive, elegant, simple, yet powerful. Python has first-class integration with C/C++ and can seamlessly offload the CPU heavy tasks to C/C++. Python has a very active community and support. Popularity: In the last several years, Python has seen enormous growth in demand with no sign of slowing down. The programming language ranking site PYPL has ranked Python as the number one programming language with a considerable popularity gain in 2019: Also, Python has surpassed Java and became the 2nd most popular language according to GitHub repositories contributions: Also, StackOverflow developer survey has ranked Python as the 2nd most popular programming language (4th most popular Technology): Another programming language ranking site TIOBE has ranked Python the 3rd most popular language with a massive gain in last year: Python still has the chance to go further up in ranking this year as Python saw a 50% growth last year according to GitHub Octoverse: StackOverflow developer survey has listed Python as the second most loved programming language: Most of the older and mainstream programming languages have stable or downward traction. Also, Python is an exception here and has an increasingly upward trending during the last five years as clear from Google trends: Job Market: According to Indeed, Python is the most demanding programming language in the USA job market with the highest 74 K job posting in January 2020. Also, Python ranked third with a $120 K yearly salary. Also, StackOverflow developer survey has shown that Python developers earn a high salary with relatively low experience compared to other mainstream programming languages: Main Use Cases: Data Science Data Analytics Artificial Intelligence, Deep Learning Enterprise Application Web Development 2. JavaScript During the first browser war, Netscape had assigned Brendan Eich to develop a new programming language for its Browser. Brendan Eich had developed the initial prototype in only ten days, and the rest is history. Software developers often ridiculed JavaScript in its early days because of its poor language design and lack of features. Over the years, JavaScript has evolved into a multi-paradigm, high-level, dynamic programming language. The first significant breakthrough of JavaScript came in 2009 when Ryan Dahl has released cross-platform JavaScript runtime Node.js and enabled JavaScript to run on Server Side. The other enormous breakthrough of JavaScript came around 2010 when Google has released a JavaScript based Web development framework AngularJS. 
Today, JavaScript is one of the most widely used programming languages in the world and runs virtually everywhere: browsers, servers, mobile devices, the cloud, containers, micro-controllers. 3 Key Features: JavaScript is the undisputed king of browser programming. Thanks to Node.js, JavaScript offers event-driven programming, which is especially suitable for I/O-heavy tasks. JavaScript has gone through massive modernization and overhaul in the last several years, especially in 2015, 2016, and later. Popularity: JavaScript is one of the top-ranked programming languages because of its ubiquitous use on all platforms and its mass adoption. Octoverse has put JavaScript as the number one programming language for five consecutive years by GitHub repository contributions: Also, the StackOverflow developer survey 2019 has ranked JavaScript as the most popular programming language and technology: Another programming language popularity site, PYPL, has ranked JavaScript as the 3rd most popular programming language: The programming language popularity site TIOBE has ranked JavaScript as the 7th most popular programming language: Once dreaded by developers, JavaScript is also ranked as the 11th most loved programming language according to the StackOverflow developer survey: The trend for JavaScript is relatively stable, as shown by Google Trends: Job Market: In the USA job market, Indeed has ranked JavaScript as the third most in-demand programming language with 57 K job postings in January 2020. With a $114 K average yearly salary, JavaScript ranks 4th in terms of salary: Also, the StackOverflow developer survey has shown that JavaScript developers can earn a modest salary with relatively low experience: Main Use Cases: Web Development Backend Development Mobile App Development Serverless Computing Browser Game Development 3. Java Java is one of the most disruptive programming languages to date. Back in the ’90s, business applications were mainly developed using C++, which was quite complicated and platform dependent. James Gosling and his team at Sun lowered the barrier to developing business applications by offering a much simpler, object-oriented, interpreted programming language that also supports multi-threaded programming. Java achieved platform independence by developing the Java Virtual Machine (JVM), which abstracted the low-level operating system away from developers and gave us the first “Write Once, Run Anywhere” programming language. The JVM also offered generational garbage collection, which manages the object life cycle. In recent years, Java has lost some of its market to highly developer-friendly modern languages, especially Python and JavaScript. Also, the JVM is not quite cloud-friendly because of its bulky size. Oracle has recently introduced hefty licensing fees for its JDK, which will dent Java’s popularity further. Fortunately, Java is working on its shortcomings and trying to make itself fit for the cloud via the GraalVM initiative. Also, OpenJDK offers a free alternative to the proprietary Oracle JDK. Java is still the number one programming language for enterprises. 3 Key Features: Java offers a powerful, feature-rich, multi-paradigm, interpreted programming language with a moderate learning curve and high developer productivity. Java is strictly backward compatible, which is a crucial requirement for business applications.
Java’s runtime, the JVM, is a masterpiece of software engineering and one of the best virtual machines in the industry. Popularity: Only five years after its release, Java became the 3rd most popular programming language, and it has remained in the top 3 for the following two decades. Here is the long-term history of Java in the popular TIOBE ranking: Java’s popularity has waned in the last few years, but it is still the most popular programming language, according to TIOBE, as shown below: According to GitHub repository contributions, Java was in the number one spot during 2014-2018 and only slipped to the 3rd position last year: The other popular programming language ranking website, PYPL, has ranked Java as the 2nd most popular programming language: The StackOverflow developer survey also ranked Java high, surpassed only by the JavaScript and Python programming languages: According to Google Trends, Java has been steadily losing traction over the past five years: Job Market: According to Indeed, Java is the second most in-demand programming language in the USA with 69 K job postings in January 2020. Also, Java developers earn the 6th highest annual salary ($104 K): As per the StackOverflow developer survey 2019, Java offers a modest salary after a few years of experience: Main Use Cases: Enterprise Application Development Android App Development Big Data Web Development 4. C# In 2000, tech giant Microsoft decided to create C#, an object-oriented, C-like programming language, as part of its .NET initiative, and it would be managed (run on a virtual machine, like Java). The veteran language designer Anders Hejlsberg designed C# as part of Microsoft’s Common Language Infrastructure (CLI) platform, where many other languages (mainly Microsoft’s) compile into an intermediate format which runs on a runtime named the Common Language Runtime (CLR). During the early days, C# was criticized as an imitation of Java, but later both of the languages diverged. Also, Microsoft’s licensing of the C# compiler/runtime is not always clear. Although Microsoft is currently not enforcing its patents under the Microsoft Open Specification Promise, that may change. Today, C# is a multi-paradigm programming language that is widely used not only on the Windows platform but also on the iOS/Android platform (thanks to Xamarin) and the Linux platform. 3 Key Features: Anders Hejlsberg did an excellent job of bringing C# out of Java’s shadow and giving it its own identity. Backed by Microsoft and in the industry for 20 years, C# has a large ecosystem of libraries and frameworks. Like Java, C# is also platform independent (thanks to the CLR) and runs on Windows, Linux and mobile devices. Popularity: The popular language ranking site TIOBE ranked C# 5th in January 2020 with a huge gain: Also, Octoverse has listed C# as the 5th most popular programming language by GitHub repository contributions: The StackOverflow developer survey has placed C# as the 4th most popular language (7th most popular technology) for 2019: It is interesting to note that the StackOverflow developer survey has ranked C# as the 10th most loved programming language (well above Java): As is clear from Google Trends, C# has not received much hype in the last few years, as shown below: Job Market: Indeed has posted 32 K openings for C# developers in the USA, which makes C# the 5th most in-demand programming language in this list.
With an annual salary of $96 K, C# ranks 8th in this list. The StackOverflow developer survey has placed C# above Java (albeit with more experience) in terms of global average salary.
Main Use Cases: Server-Side Programming, App Development, Web Development, Game Development, Software for the Windows Platform
5. C
During the 1960s and 1970s, every cycle of the CPU and every byte of memory was expensive. Dennis Ritchie, a Bell Labs engineer, developed a procedural, general-purpose programming language that is compiled directly to machine language, between 1969 and 1973. C programming offers low-level access to memory and gives full control over the underlying hardware. Over the years, C became one of the most used programming languages. Besides, C is arguably the most disruptive and influential programming language in history and has influenced almost all other languages on this list. However, C is often criticized for its accidental complexity, unsafe programming, and lack of features. Also, C is platform-dependent, i.e., C code is not portable. But if you want to make the most of your hardware, then C/C++ or Rust is your only option.
3 Key Features: As C gives low-level access to memory and is compiled to machine instructions, it is one of the fastest and most powerful programming languages. C gives full control over the underlying hardware. C is also a “programming language of languages,” i.e., the compilers of many other programming languages like Ruby, PHP, and Python have been written in C.
Popularity: C is the oldest programming language in this list and has dominated the industry for 47 years. C has also ruled the programming language popularity ranking longer than any other language, as is clear from TIOBE’s long-term ranking history. According to the TIOBE ranking, C is the second most popular language with a huge popularity gain in 2019. Octoverse has also ranked C as the 9th most popular language according to GitHub repository contributions. The StackOverflow developer survey has also ranked C in 12th place (8th considering only programming languages). Google Trends also shows a relatively stable interest in C over the last five years.
Job Market: According to Indeed, there are 28 K job postings for C developers in the USA, which makes C the 6th most in-demand programming language. In terms of salary, C ranks 6th, alongside Java ($104 K). The StackOverflow developer survey showed C developers can earn an average wage but need a longer time to achieve it compared to, e.g., Java or Python.
Main Use Cases: System Programming, Game Development, IoT and Real-Time Systems, Machine Learning, Deep Learning, Embedded Systems
6. C++
Bjarne Stroustrup worked with Dennis Ritchie (the creator of C) at Bell Labs during the 1970s. Heavily influenced by C, he first created C++ as an extension of C, adding Object-Oriented features. Over time, C++ has evolved into a multi-paradigm, general-purpose programming language. Like C, C++ also offers low-level memory access and is compiled directly to machine instructions. C++ also offers full control over hardware, but at the cost of accidental complexity, and it does not provide language-level support for memory safety and concurrency safety. Also, C++ offers too many features and is one of the most complicated programming languages to master. For all these factors and its platform dependency, C++ lost popularity to Java, especially in enterprise software development and the Big Data domain, in the early 2000s.
C++ is once again gaining popularity with the rise of GPUs, containerization, and Cloud computing, as it can quickly adapt itself to take advantage of Hardware or Ecosystem changes. Today, C++ is one of the most important and heavily used programming languages in the industry.
3 Key Features: Like Java, C++ is also constantly modernizing and adapting itself to changes in Hardware or Ecosystem. C++ also gives full control over the underlying hardware and can run on every platform and take advantage of every kind of hardware, whether it is a GPU, TPU, Container, Cloud, Mobile device, or Microcontroller. C++ is blazingly fast and used heavily in performance-critical and resource-constrained systems.
Popularity: C++ is the second oldest programming language in this list and ranked 4th in the TIOBE programming language ranking. Octoverse has ranked C++ in 6th position by GitHub repository contributions. Also, the StackOverflow Developer Survey in 2019 has listed C++ as the 9th most popular Technology (6th most popular language). Although C++ is facing massive competition from modern programming languages like Rust or Go, it has still generated stable interest over the last five years.
Job Market: Indeed has ranked C++ as the 4th most in-demand programming language with 41 K job postings. Also, C++ developers earn $108 K per annum, which places them in 5th place. The StackOverflow developer survey has shown that C++ developers can draw a higher salary compared to Java, albeit with longer experience.
Main Use Cases: System Programming, Game Development, IoT and Real-Time Systems, Machine Learning, Deep Learning, Embedded Systems, Distributed Systems
7. PHP
Like Python, PHP is another programming language developed by a single developer as a side project during the ’90s. Software Engineer Rasmus Lerdorf initially created PHP as a set of Common Gateway Interface binaries written in C to create dynamic Web Applications. Later, more functionality was added to the PHP product, and it organically evolved into a fully-fledged programming language. At present, PHP is a general-purpose, dynamic programming language mainly used to develop server-side Web applications. With the rise of JavaScript-based client-side Web application development, PHP is losing its appeal and popularity, and PHP is past its prime. Contrary to popular belief, PHP will not die soon, although its popularity will gradually diminish.
3 Key Features: PHP is one of the most highly productive Server-Side Web development programming languages. As PHP has been used in Web development for roughly the last 25 years, there are many successful and stable PHP frameworks in the market. Many giant companies are using PHP (Facebook, WordPress), which leads to excellent tooling support for it.
Popularity: The programming language ranking site TIOBE has ranked PHP as the 8th most popular programming language in January 2020, although the long-term ranking history of PHP shows that PHP is past its prime and slowly losing its appeal. Octoverse has ranked PHP as the 4th most popular programming language by GitHub repository contributions. As per the StackOverflow developer survey 2019, PHP is the 5th most popular programming language (8th most popular Technology). Although PHP is still one of the most widely used programming languages, its trend is slowly going down, as is clear from Google Trends.
Job Market: The job search site Indeed has ranked PHP as the 7th most in-demand programming language in the USA job market with 18 K positions in January 2020.
Also, PHP developers can expect a reasonable salary ($90 K), which places them in 10th position in this category. The StackOverflow developer survey shows PHP as the lowest-paid programming language in 2019.
Main Use Cases: Server-side Web Application Development, Developing CMS Systems, Standalone Web Application Development
8. Swift
Swift is one of the only two programming languages that have also appeared in my list: “Top 7 modern programming languages to learn now”. A group of Apple engineers led by Chris Lattner worked to develop a new programming language, Swift, mainly to replace Objective-C on the Mac and iOS platforms. It is a multi-paradigm, general-purpose, compiled programming language that also offers high developer productivity. Swift supports the LLVM compiler toolchain (developed by Chris Lattner), like C/C++ and Rust. Swift has excellent interoperability with Objective-C codebases and has already established itself as the primary programming language in iOS App development. As a compiled and powerful language, Swift is gaining increasing popularity in other domains as well.
3 Main Features: One of the main USPs of Swift is its language design. With simpler, concise, and clean syntax and developer-ergonomic features, it offers a more productive and better alternative to Objective-C in the Apple Ecosystem. Swift also offers features of modern programming languages, such as null safety. Also, it provides syntactic sugar to avoid the “Pyramid of Doom.” As a compiled language, Swift is blazing fast, like C++. It is also gaining increasing popularity in system programming and other domains.
Popularity: Like other modern programming languages, Swift is hugely popular among developers and ranked 6th in the list of most beloved languages. Swift has also propelled into the top 10 list of most popular programming languages in the TIOBE index only 5 years after its first stable release. Another popular programming language ranking site, PYPL, has ranked Swift as the 9th most popular programming language. The StackOverflow developer survey has ranked Swift as the 15th most popular Technology (12th most popular programming language). Google Trends also shows a sharp rise in the popularity of Swift.
Job Market: Indeed has ranked Swift as the 9th most in-demand language in the USA with 6 K openings. In terms of salary, Indeed has ranked Swift in 2nd place with a $125 K yearly salary. The StackOverflow developer survey has also revealed that Swift developers can earn a high salary with relatively fewer years of experience compared to Objective-C.
Main Use Cases: iOS App Development, System Programming, Client-side Development (via WebAssembly), Deep Learning, IoT
9. Go
Like Swift, Go is the only other programming language from the last decade in this list. Also, like Swift, Go was created by a Tech giant. In the last decade, Google frustratingly discovered that existing programming languages could not take full advantage of its seemingly unlimited hardware and human resources. For example, compiling the C++ codebase of Google took half an hour. It also wanted to tackle the development scaling issue with the new language. Renowned Software Engineers Rob Pike (UTF-8) and Ken Thompson (UNIX OS) at Google created a new, pragmatic, easy-to-learn, highly scalable system programming language, Go, and released it in 2012. Go has a runtime and Garbage collector (a few Megabytes), but this runtime is packed into the generated executable. Although Go is a bit feature-anemic, it has become a mainstream programming language in a short period.
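To make the concurrency support described in the key features listed next a little more concrete, here is a minimal, self-contained sketch (not taken from the original article; the task names and delays are invented purely for illustration) of how goroutines and channels express CSP-style message passing in Go:

package main

import (
	"fmt"
	"time"
)

// fetch simulates an I/O-bound task (e.g. a network call) and sends its
// result over the channel instead of returning it directly.
func fetch(name string, delay time.Duration, results chan<- string) {
	time.Sleep(delay) // stand-in for network or disk latency
	results <- fmt.Sprintf("%s finished after %v", name, delay)
}

func main() {
	results := make(chan string) // unbuffered channel used for message passing

	// Each call below runs in its own goroutine: a lightweight thread
	// scheduled by the Go runtime rather than by the operating system.
	go fetch("task-A", 50*time.Millisecond, results)
	go fetch("task-B", 20*time.Millisecond, results)

	// Receive both results; whichever task finishes first is printed first.
	for i := 0; i < 2; i++ {
		fmt.Println(<-results)
	}
}

Because the two goroutines communicate only by sending values over the channel, the sketch needs no explicit locks or shared mutable state, which is the essence of the CSP model credited to Go below.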
3 Key Features: Go has language-level support for Concurrency. It offers CSP-based message-passing concurrency via Goroutines (lightweight green threads) and Channels. The biggest USP of Go is its language design and simplicity. It has successfully combined the simplicity and productivity of Python and the power of C. Go has an embedded Garbage Collector (albeit not as mature as the JVM garbage collector). Go developers can write systems programs with the safety of Java or Python.
Popularity: Like Swift, Go has also seen a meteoric rise in popularity. On almost all websites that compare popular programming languages, Go ranks high and has surpassed many existing languages. Here is the TIOBE index ranking from January 2020, where Go ranks 14th. The StackOverflow developer survey 2019 has also ranked Go as the 13th most popular Technology (10th most popular programming language). According to the StackOverflow survey, Go is the 9th most loved programming language. Go is also one of the top 10 fastest growing languages, according to GitHub Octoverse. The increasing popularity of Go is also reflected in Google Trends, which shows increasing traction for Go over the last five years.
Job Market: Indeed has ranked Go as the 10th most in-demand language with 4 K openings in January 2020. In terms of salary, Go is ranked in 9th position. The StackOverflow developer survey 2019 has shown Go as one of the highest-paid programming languages.
Main Use Cases: System Programming, Serverless Computing, Business Applications, Cloud-Native Development, IoT
10. Ruby
Ruby is the third programming language in this list developed by an individual developer during the 1990s. Japanese computer scientist Yukihiro Matsumoto created Ruby as an “Object-Oriented Scripting language” and released it in 1995. Ruby later evolved into an interpreted, dynamically typed, high-level, multi-paradigm, general-purpose programming language. Ruby is implemented in C and offers garbage collection. Like Python, Ruby focused heavily on developer productivity and developer happiness. Although Ruby is not one of the hyped languages at this moment, it is an excellent language for new developers thanks to its flat learning curve.
3 Key Features: Ruby has successfully combined some of the best features of programming languages: dynamic, object-oriented, functional, garbage-collected, and concise. Although Ruby itself is not disruptive, its Web development framework Ruby on Rails is probably the most disruptive and influential Server-side Web development framework. Ruby is used by some of the largest software projects, like Twitter, GitHub, and Airbnb, and has excellent tooling and framework support.
Popularity: TIOBE has ranked Ruby as the 11th most popular programming language in January 2020 with a hugely positive move. Octoverse has also ranked Ruby as the 10th most popular programming language in 2019 by GitHub repository contributions. The StackOverflow Developer survey 2019 has listed Ruby as the 9th most popular programming language (12th most popular Technology). Ruby has not been a hyped language in recent years, but it has maintained its traction, as per Google Trends.
Job Market: In the USA job market, Ruby developers can draw huge salaries and are ranked 1st by Indeed. Also, Indeed has posted 16 K openings for Ruby developers in January 2020, which puts Ruby as the 8th most in-demand programming language in this list.
The StackOverflow developer survey 2019 has also shown that Ruby developers can earn a high salary with relatively low experience.
https://towardsdatascience.com/top-10-in-demand-programming-languages-to-learn-in-2020-4462eb7d8d3e
['Md Kamaruzzaman']
2020-12-22 10:16:21.789000+00:00
['Python', 'JavaScript', 'Software Development', 'Java', 'Programming']
3,707
Data science… without any data?!
Data science… without any data?! Why it’s important to hire data engineers early
“What challenges are you tackling at the moment?” I asked. “Well,” the ex-academic said, “It looks like I’ve been hired as Chief Data Scientist… at a company that has no data.” “Human, the bowl is empty.” — Data Scientist. I don’t know whether to laugh or to cry. You’d think it would be obvious, but data science doesn’t make any sense without data. Alas, this is not an isolated incident. So, let me go ahead and say what so many ambitious data scientists (and their would-be employers) really seem to need to hear.
What is data engineering? If data science is the discipline of making data useful, then you can think of data engineering as the discipline of making data usable. Data engineers are the heroes who provide behind-the-scenes infrastructure support that makes machine logs and colossal data stores compatible with data science toolkits. Unlike data scientists, data engineers tend not to spend much time looking at data. Instead, they look at and work with the infrastructure that holds the data. Data scientists are the data-wranglers, while data engineers are the data-pipeline-wranglers.
What do data engineers do? Data engineering work comes in three main flavors: Enabling data storage (data warehouses) and delivery (data pipelines) at scale. Maintaining data flows that fuel enterprise operations. Supplying datasets to support data science.
Data science is at the mercy of data engineering. You can’t do data science if there’s no data. If you get hired to be head of data science in an organization where there’s no data and no data engineering, guess who’s going to be the data engineer…? You! Exactly.
What’s so hard about data engineering? Grocery shopping is easy if you’re just cooking something for your own dinner, but large scale turns the trivial into the Herculean — how do you acquire, store, and process 20 tons of ice cream… without letting any of it melt? Similarly, “data engineering” is fairly easy when you’re downloading a little spreadsheet for your school project but dizzying when you’re handling data at petabyte scale. Scale makes it a sophisticated engineering discipline in its own right. Unfortunately, knowing one of these disciplines in no way implies that you know anything about the other.
Should you learn both disciplines? If you’ve just felt the urge to run off and study both disciplines, you might be a victim of the (stressful and self-defeating) belief that data professionals have to know the everything of data. The data universe is expanding rapidly — it’s time we started recognizing just how big this field is and that working in one part of it doesn’t automatically require us to be experts of all of it. I’d go so far as to say that it’s too big for even the most determined genius to swallow whole. Instead of expecting data people to be able to do all of it, let’s start asking one another (and ourselves), “Which kind are you?” Let’s embrace working together instead of trying to go it alone.
But isn’t this an incredible opportunity to learn? Maybe.
It depends on how much you love the discipline you already know. Data engineering and data science are different, so if you’re a data scientist who didn’t train for data engineering, you are going to have to start from scratch. Building your data engineering team could take years. This might be exactly the kind of fun you want — as long as you’re going in with open eyes. Sure, it’s nice to have an excuse to learn something new, but in all likelihood, your data science muscles will atrophy as a result. As an analogy, imagine you’re a translator who is fluent in Japanese and English. You’re offered a job called “translator” (so far, so good) but when you arrive at work, you discover that you were hired to translate from Mandarin to Swahili, neither of which you speak. It might be stimulating and rewarding to take the opportunity to become quadrilingual, but do be realistic about how efficiently you’ll be using your primary training (and how terrifying your first performance review may be). Who doesn’t love a good bad translation? In other words, if a company doesn’t have any data or data engineers, then accepting a role as Chief Data Scientist means putting your data science career on hold for a few years in favor of a data engineering career — one that you might not be qualified for — while you build a data engineering team. Eventually, you’ll gaze proudly at the team you’ve built and realize that it no longer makes sense for you to do the nitty-gritty yourself. By the time your team is ripe for those cool neural networks or fancy Bayesian inference that you did your PhD on, you have to sit back and watch someone else score the goal.
Advice for data science leaders and those who love them
Tip #1: Know what you’re getting into. If you’re considering taking a job as a head of data science, your first question should always be, “Who is responsible for making sure my team has data?” If the answer is YOU, well, at least you’ll know what you’re signing up for. Before taking a data science job, always ask about the *who* of data engineering.
Tip #2: Remember that you’re the customer. Since data science is at the mercy of data, merely having data engineering colleagues might not be enough. You might face an uphill struggle if those colleagues fail to recognize you as a key customer for their work. It’s a bad sign if their attitude reminds you more of museum curators, preserving data for its own sake.
Tip #3: See the bigger (organizational) picture. While it’s true that you’re a key customer for data engineering, you’re probably not the only customer. Modern businesses use data to fuel operations, often in ways that can hum along nicely enough without your interference. When your contribution to the business is a nice-to-have (and not a matter of your company’s survival), it’s unwise to behave as if the world revolves around you and your team. A healthy balance is healthy.
Tip #4: Insist on accountability. Position yourself to have some influence over data engineering decisions. Before signing up for your new gig, consider negotiating for ways to hold your data engineering colleagues accountable for collaborating with you. If there are no repercussions for shutting you out, your organization is unlikely to thrive.
Thanks for reading! Liked the author? If you’re keen to read more of my writing, most of the links in this article take you to my other musings. Can’t choose? Try this one:
https://towardsdatascience.com/data-science-without-any-data-6c1ae9509d92
['Cassie Kozyrkov']
2020-11-13 14:56:17.278000+00:00
['Data Science', 'Technology', 'Data Engineering', 'Artificial Intelligence', 'Business']
3,708
How Entrepreneurs Can Thrive in a New Era of Uncertainty
Len Schlesinger is President Emeritus at Babson College and the Baker Foundation Professor at Harvard Business School where he serves as Chair of the School’s Practice-based faculty and Coordinator of the Required Curriculum Section Chairs. He has served as a member of the HBS faculty from 1978 to 1985, 1988 to 1998 and 2013 to the present. During his career at the School, he has taught courses in Organizational Behavior, Organization Design, Human Resources Management, General Management, Neighborhood Business, Entrepreneurial Management, Global Immersion, Leadership and Service Management in MBA and Executive Education programs. He has also served as head of the Service Management Interest Group, Senior Associate Dean for External Relations, and Chair of the School’s (1993–94) MBA program review and redesign process. In this interview with Carbon Radio, he talks about how entrepreneurs will win in this new era of uncertainty. He addresses how healthcare and higher education are changing, and how entrepreneurial thought and action will enable organizations to thrive in a post-Covid world. What do you think about what’s going on right now and how can entrepreneurship can play a role in the recovery of the economy? Satya Nadella, the CEO of Microsoft, has actually nailed the framing of the issue in a very compelling way, and it’s one that I have been using countless number of times with credit to him. He talks about the three phases of current reality, and the first one is obviously restore. There needs to be some mechanism by which we can restore businesses and organizations to some semblance of reality. The second is recover. What are all the things we need to do to get customers back, to get service providers back, to get the systems working? And the third, and obviously the most exciting and most compelling part of the equation, is reimagine. What we have is the opportunity, whether you’re a small business or a large business of any kind, to use the experience of the last several months to think about ways in which you can reinvent every aspect of your business model and every way in which you interact with customers or constituents, and there’s no question that that work has just begun. And much of what has been done to accommodate constituencies in the context of the pandemic will end up proving to be extraordinarily useful on an ongoing basis. The reality, as you suggest in one of your questions, is we’re still left with an enormous amount of uncertainty about what a current reality is and what reality is going to be 90 days from now, let alone a year from now. And those are the times where the winners are always the entrepreneurs. They are the ones who are able to not only cope with uncertainty, but flourish in uncertainty and figure out ways in which they can actually take some small steps to get a sense of what might or might not work in the new reality called Post Covid-19. Until we have a well-established vaccine that has the whole world saying, “OK, we’ve got this one licked”, I can’t imagine anything that approaches a state of normalcy. And given the failures of most governments and healthcare systems and quite honestly, most citizen populations around this particular pandemic, there’s an opportunity to reinvent so many aspects of our lives as a community as an outgrowth of this. 
The question is, “will we have the patience and temperament to do that?” The call I had before you indicated really a deep fear that we’re already seeing that many American populations are just flat out bored with current reality and have just decided they’ve had enough, and so they’re going to misbehave in all sorts of ways. We’re beginning to see the potential for consequences as you see Covid-19 rates begin to spike. I just have a feeling the next few months are going to be pretty ugly. How do you think healthcare entrepreneurs in particular will play a role in reimagining society moving forward? There are three or four ways in which it has already become obvious. One is the spike in telehealth. So at the time of Covid-19, there were very few significant players in telehealth. Kaiser had managed to have more than half of their GP appointments done on telehealth, but other than that, it was an idiosyncrasy. And we got forced into telehealth, and it’s proving to be far more robust and far more powerful than anybody imagined. There’s absolutely no question, as part of the process of reinvention, we will begin to think about where and how you need to have a physical interaction with a doctor, because there’s very few industries that are less customer slash patient centric than healthcare. Particularly as you move into parts of the United States where geographic access to healthcare requires a two hour drive, the notion of being able to handle most basic activities over the phone or over the Internet will change all of that. At this point, the folks who will have a profound influence on whether that happens are the insurance companies. Right now, most insurance companies are paying the same rate for live and for telehealth. And if they immediately go back to depreciating the value of an electronic interaction versus a live interaction, I think you’ll see some slowdown, but there’s no question new mechanisms for interaction with doctors and healthcare providers will change in all sorts of ways. The second piece of that is something I was reading about the other day about how all these people aren’t going to doctors, and there doesn’t appear to be any epidemic of any other kind of healthcare issues over the last several months. So this issue of the habits that we’ve established for visits to doctors and the activities that we go to doctors for, I think lots of people are going to start to challenge that and that has the opportunity to have a profound influence on healthcare costs and old habits that, by and large, are supported by empirical data. The third piece is to understand how much the economy of the healthcare systems are critically dependent on elective procedures and, quite honestly, how unprepared most healthcare systems were to deal with the underlying structure of the pandemic. I’m reading in the paper today that major healthcare systems here in Boston still don’t have access to PPE. You know, you sit there and say, “oh, jeez.” And so what we have demonstrated is, because it’s not “medical”, but it is “critical” for healthcare, there was a systematic inattention to the global logistics system in healthcare. I don’t know who was responsible for it or how it was thought about, but there was this gravitational pull for everything to go to lowest cost providers and everything to get off shore. We had very few domestic providers. We had no emergency supplies. Our stockpile had run low. 
And I’ve got to believe that hopefully this ends up being a scary reminder of just how fragile our global logistics system is, not just in healthcare, but in all sorts of industries. This will raise serious questions. All three of those things — new access to physicians, new access to global supply chains, and rethinking the interaction between patients and doctors and when they need to go and when they don’t. All three of those are going to be stimulated and grown by entrepreneurs. How do you think about small businesses and family businesses in this time and what can we learn or what are we learning about how they’re operating in this time of extreme uncertainty? I will separate them. I think of family businesses different than I think of small businesses. So, let me start with family enterprise. The one thing everybody tends to kind of romance the notion of family enterprise and think that somehow they’re small businesses. We need to understand on a global scale there’s substantially more wealth in family enterprise than there is in the aggregated wealth of all of the public corporations that exist. Families have longer history. Families have longer aggregations of wealth and quite honestly, there are families that have demonstrated extraordinary resilience. You know, multiple generations of family being able to move through in ways in which our theories about organizations would indicate that private organizations, by and large, have not been able to do. So, I think the challenges that are facing family enterprise in the aggregate aren’t really profoundly different than those that are facing any other organization. There are some special issues associated with family dynamics, alongside organizational dynamics, but the nature of the challenges are roughly the same. Small business is a whole different ballgame. The most important thing to understand about small business is it depends what country I’m talking to you from. In the United States, if I look at the Small Business Administration, they define a small business as any business with under five hundred employees. And the reality is, when they talk about the significance of small business, they’re really talking about the very small part of the population that has 350 to 500 employees. They ignore microenterprises. They ignore neighborhood businesses. Those are the ones that are just getting killed. Absolutely getting killed. A lot of them, obviously, in food service and in restaurants. The latest data indicates that probably at least 25 percent of them won’t survive. Literally won’t survive, largely because they don’t have stores of cash. Large organizations today are sitting on absolute hoards cash, trying to figure out what they’re going to do when this is all over, what regime they’re going to buy up and what industries they’re going to go into. The smaller microbusinesses, they need the cash flow to operate the business and deliver. That doesn’t exist. The PPP wasn’t necessarily framed correctly, and the most naive part of the PPP here in the United States, and it really was naïve, was operating off the assumption that you could use the banks as the source of application. That was predicated on the assumption that these smaller businesses have banking relationships. And usually they’re making relationships where they have access to capital. So, it ignores, particularly for minorities, the average net worth of a black adult citizen of Boston is eight and a half dollars. 
If you have eight and a half dollars, you’re not worried about a banking relationship and you’re not calling up your neighborhood banker to get access to it. So, it took a while to understand that. Again, the interim solution for that was the rise of fintech. So, the fintech organizations, most specifically organizations like QuickBooks, stepped in and got authority to actually file the applications, in addition to banks, and stepped in and provided an absolutely critical resource for small businesses that banks, by and large, for the really small ones, don’t play. That being said, the rules change all the time. It was designed for eight weeks. Now it is designed for 24 weeks. If you do it correctly, by and large, it’s a grant. I understand that. But, it was a grant that was actually intended to keep people on your payroll, and when it was designed, nobody forecasted the length of the Covid situation. So, it didn’t hurt, but it really hasn’t helped. How do you think entrepreneurs are uniquely capable of operating in what is seemingly the most uncertain time of most of our lives? Well, I mean, the notion at this point is even in the midst of uncertainty, one can see the opportunity structure and the opportunity structure is entirely driven by uncertainty. So people have hobbies. People have expertise. People have interests. And there’s no better time to imagine new scenarios and experiment. I mean it’s really just that simple. The most powerful way to reduce uncertainty is to take a step and see what happened. As opposed to the traditional business planning process of people sitting around and dreaming of something, the need is now. The problems are now. The steps that one can take to address the problems are now. It takes an entrepreneur with a temperament and a mindset to actually take that step and see what happens to actually create the new solutions in the post-Covid environment. Do you think people will look at risk differently now or through a similar analysis? Well, it’s a more robust analysis. We just got hit by something that wasn’t in anybody’s risk analysis framework. And so now you have to add global pandemics to your list of things to worry about. There are only nine more plagues to work with. The reality is the risk management frameworks are not poorly defined and, by and large, are generally pretty well taken care of. Where we’re going to find people right now in risk is people stimulating, particularly in healthcare entrepreneurial activities, to get things to market faster than they should. We’ve seen this before. We saw it with the swine flu vaccine as well. I do believe that the political pressures to announce a vaccine, given the realities of bringing a vaccine to a market that does the job with minimal risk, those tensions are going to be very powerful at the high end, and there’ll be variations on that tension all the way down to the small neighborhood businesses. You wrote a blog post a few years back titled “Don’t Forget the Mayors”, which focused on the work of Mayors across the country. How should local governments be thinking about investing in entrepreneurial ecosystems? Thank God for the mayors today. If you’re looking at the folks who are closest to the action, who have the most capacity to be able to shape and influence citizenship behavior, it’s at the local level, and we see countless number of examples of both good and bad mayors across the United States. 
And quite honestly, the consequences of bad leadership at that level, which really does involve lives, you know, that’s where those lives are being decided on. The question for a government at the core, which is the question that ethicists and all sorts of other people have raised around Covid-19 is how do you balance the desire to get the economy going with the desire to ensure that lives are saved? And we’ve allowed for that debate to go on and be framed as a political debate. As our administration has oftentimes framed it as we don’t want the cost of compliance and the cost of responding to the coronavirus to exceed the value. And we’re very much in the middle of that right now. Our systematic inability as a nation and as a set of communities to actually have that question addressed without contention is very much at the source of the problems that I expressed that I was concerned about relative for the next several months. What do you think the future of higher education will look like with the pandemic going on and as technology improves? Most of higher education got pushed, and I mean pushed, into online learning. And so lots of educational institutions are busily celebrating their ten-day transition to online learning. Most of it, I would guess, is not very good. Now, as we think about what we’re going to do in the Fall, the question then becomes one of, well, how good can we get between April and August? So we’ve got six months. How good can we get? How can we actually figure out how to use all of the tools that we have to dramatically increase the quality of the online experience? There are three things we know. The online experience can be improved exponentially, and there are countless number of people who are already doing it. They actually tend to have large numbers of students already. What you don’t want to do is ignore the fact that the online leaders, the Arizona States, the Purdues, the Southern New Hampshire University, the Penn States, are already capturing a huge percentage of the capacity in that space. They do it quite well, by and large, and they do it with huge amounts of economic advantages. If I was to wake up this morning and say, “I’m going to go into the online business” and I went and talked to my friend. He’d say, “you’re an idiot.” Unless you have an idiosyncratic niche that hasn’t been covered in any way, shape or form by online learning, you’re just going to get crushed by people who have capacity. That’s number one. Most of these schools that are making these deep commitments to online, they’re doing it in some respects as a hobby and something to pass the time until they can go face to face. They’re not looking at it as a permanent restructuring of their model. The reality is it has raised fundamental challenges to the higher education economic model that have been raised for the last decade. And just like we had a 10 day transition to online learning, we’ve now had a 10 day transition to a serious examination about “why am I paying 50, 60, 70 thousand dollars a year?” Particularly when institutions delivered online, and in addition to delivering online, refused to cut tuition. Colleges can’t cut their tuition given their economic model, which is still dependent on labor, and students are beginning to raise questions that are quite legitimate. And so if this goes on for another Fall, the pressure will be even greater. 
The folks who wrote the book about the college stress test, Zemsky among them, said about six months ago that of the roughly 2,200 colleges in the United States, about 10 percent are on the near-death list. I think today they would say 25 percent. So you will have death, you’ll have consolidation. The longer this goes on, the less able these schools are to defend what it’s all about. Now, there are a whole bunch of other schools that have actually come to the conclusion, and I think it’s a gutsy and appropriate choice, that what they need to do is everything they can to deliver what they do, as much of it as is physically possible, live. And they are now all dealing with government, public health and science to try and figure out what they need to do to get as many people on campus in live situations to create the kind of value equation that they are all about. I applaud those schools for being pretty clear about what their strategy is all about and for not wanting to play the nonresidential game. But, in some respects the deal busters at this point are the folks who have been innovating now for a long time, like Southern New Hampshire University. What they’ve done now is they have a residential campus, which was the core of that school before they went online, and they accept residential students. And now what they’ve said to the students they’ve accepted for this Fall’s class is, “You can all move on campus. We’re delighted to have you on campus if you want to be there on a campus. We’re not going to run the residential freshman year next year. So, we’re going to give you your first year of college absolutely free as an apology for disappointing you. And the commitment is while we’re delivering that for you, absolutely free, we’re going to be entrepreneurs, reinventing residentially-based education and coming back a year from now at ten thousand dollars a year.” So, the online people should worry about the big folks, and the residential people should worry about what comes out of Southern New Hampshire University in just a year from now. This is not something that we’re forecasting 10 years from now. They’ve made an ironclad commitment to be ten thousand dollars a year twelve months from now. How much of a university’s financial sustainability has to do with their endowment and their research funding? First of all, most schools don’t have large endowments. What you’re dealing with there is that the media always writes about the Ivy League and the big state schools that have large endowments, and they ignore two things. One is they have large endowments, but for many of them, it’s 80 to 85 percent restricted. It’s been designated by the donor, and the school has little or no flexibility to figure out what they might be able to do with it and how to use it. So you don’t want to overemphasize. People tend to think about Harvard having 38 billion or 40 billion, whatever the number is now, and assume they should be able to give it all away. Well, they can’t and nor can any school in that space. The reality is the vast majority of schools don’t have large endowments and are critically dependent on tuition, and it’s the dependency on tuition in this incredibly complex environment that is their threat. It’s not the absence of endowment. Is there precedent for the government getting involved to rescue universities? Would it be reasonable to think about it? Do I think someone in Congress will come up with a bill? The answer is yes. Do I think it can pass? 
Not in this environment. I mean not a chance. The colleges and universities came up with a need this spring, I think, of something like 40 to 60 billion dollars. They got 16 billion, and with strings attached, because half of the 16 had to go directly to students. So, they asked for 60 for their needs and they got eight. And that was the first emergency go round. It’s not going to get better. What do you think about remote work and how it’s impacting employees and how employers think about their office space? I think this issue of remote work came out of nowhere, literally came out of nowhere, out of necessity to keep people in our homes, and we’re learning a lot. I mean the reality is we only have four or five months of data at this point. We already have some large companies making significant commitments as a result of it. I live out here in the boonies, and people always say, “Well, how is it where you live out there?” I say, “It’s a great place to live as long as you don’t want to go anywhere.” Because getting into the city for me was two and a half to two and three quarter hours a day, back and forth. I have now gotten that time back. That’s my time. It’s time for sleeping. It’s time for exercising. It’s time for conversation. It’s time for work. So there’s no question people are discovering all sorts of opportunities there. I think there will be a pattern. We’ll be coming back to work. There’s no question about it for most of us, but in environments that are much less densely populated, with fewer people required to come in. And this fantasy of remote work will increasingly become the work du jour. There are plenty of occupations and plenty of professions that don’t require people to be at work all the time. Now, start thinking about the second and third order consequences of that. One is, what does it mean for urban environments? And, what we see here in Boston is rents going down in Boston proper and the suburbs having these incredible spikes of interest as people are looking to move out here. You see that in virtually every major city. Rents down in New York, rents down in San Francisco dramatically, rents down in Boston. Whether that’s temporary or permanent, I tend to think it is a longer-lived phenomenon than people might think. The second issue, which is the most profound issue, is what do we do with all these big office buildings if we can’t figure out how to get people in elevators? When people say, “What’s it going to take to get people to go downtown?” Well, if you can only get two people in an elevator and your office is on the 52nd floor, it’s going to take 12 hours to get people in and 12 hours to get people out for two minutes of work. We built an infrastructure that, by its very nature, is potentially ill-suited for the new reality unless someone convinces us that we can put pandemics to bed forever, which is going to be a hell of a task. Obviously, public transport, the reality is it’s perceived as one of the greatest assets to get people to work and to not put cars on the road, and now there are plenty of people who don’t want to use public transport. If I’m the government in a city, I, too, am dealing with exactly the questions that I started this conversation with. What does it take to restore some sense of normalcy? What does it take to recover from the greatest disruption to my economic base that I’ve ever experienced in modern history over the most sustained period of time? How am I going to reimagine this city? 
If you’re looking for the opportunities for entrepreneurship, the opportunities for local governments to completely rethink what they do and the ability to create ecosystems of all of the players in that local community, to systematically reinvent the logic of that city on a scale never thought about before is completely real. I was supposed to do some executive teaching in April of this past year right before coronavirus, and there was a case that we had written on a business in the UK that decided to go to remote work. My colleagues thought it was kind of a crazy case. I found some old dissertations that were written on it and some early stage stuff that was written on it, but it was kind of a fluke. Now, just three months later, they’re at the epicenter of a long-term solution. What do you think about Andrew Yang’s universal basic income proposals both in terms of policy and in terms of political feasibility? I don’t have deep conviction about the proposals. What we found essentially, in the context of the last four months, was right now the government gave one check, and I guess this morning they’re talking about another check. There’s no question it’s better than nothing, but only marginally better. The other extreme is the Biden proposals, along with the progressives of two thousand dollars a month per person until the Covid situation is over, and the reality there is that’s a big number. And if we’re already complaining about predisposition to not go into work with the 600 dollar supplement on unemployment insurance, that just exacerbates the problem in even greater detail. So, the idea of a universal basic income is not a bad idea, but it can’t be an idea that is devoid of context in terms of all the other things that happen or don’t happen, all the other supports that exist or don’t exist to allow our citizenry to thrive and flourish. It’s a great slogan. Over the last few years, we’ve learned the slogan of “universal basic income”. We learned “no student debt”, “free college”. I mean I can go through that whole list. Every one of them in and of themselves has the capacity to break the bank, and the fact that they’re not embedded in a broader context of how we’re going to do work, and an economic model quite honestly that allows this to work, is the bigger problem. What do you think about this field of futurism and the notion of forecasting, and does it have a role to play in these conversations at the Federal level about how we fund things in the long term? Let’s get very clear about this. This is the joy of entrepreneurship. Entrepreneurship, by and large, is naturally suspicious of forecasts. If you looked at the first seventy-five days of the Coronavirus and you looked at television, it was a never ending stream of competitive forecasts all of which would reach different conclusions in terms of what was going on and what the most appropriate next step is. I’m not suggesting that we are not interested in data, and I’m not suggesting that we’re not interested in improving the quality of our data, but as any social scientist will say, more importantly any economic investor will say, don’t actually take economic steps based on a forecast, that in fact, the forecasts are as good as the algorithms that go in. The algorithms are created by human beings, and they contain all of the biases of the human condition. Is the world going to come to an end in 2020 or 2030? 
I don’t know, but the reality is I don’t spend much time convinced that one forecast is going to be compelling over another, and in some respects, it’s why these dueling forecasts allow us to have political debates about everything. Is there incontrovertible evidence that we are seeing deleterious impacts of climate change? Yes. Is the world going to end in 2030? I don’t know. Is the current reality of climate change an opportunity for a substantial number of entrepreneurs to think about activities they might engage in where you can actually make money and also make a better world? Yep. No question about it. When we look at the organizations that have done well coming out of the pandemic, there is one thing I’m absolutely certain of without making a forecast. I just believe it in my bones. And that is the organizations that have taken care of their people are the ones that are going to win. And our inability to avoid this ideological debate about our staff just drives me crazy. I gave a talk last month, and I was talking about the people who we’re calling our frontline workers, not healthcare workers, but frontline workers, and how they’re all being relabeled as heroes. And I just sit there and say, “You know what? Could we stop calling them heroes and could we pay them a decent wage?” It’s just that simple. I don’t want to give them a greeting card. I don’t want to applaud as they walk down the street. I want to make sure that we are recognizing the risks that they are taking on our behalf, one, and two, that we are recognizing that, given a variety of circumstances, they don’t have a lot of other options, and that the most profound way we can communicate appreciation of their work and their effort is to provide them with all of the support they need to minimize the risk of exposure and to pay them for the risk they’re taking. A few organizations did it for a few weeks, and now they got bored.
https://medium.com/discourse/how-entrepreneurs-can-thrive-in-a-new-era-of-uncertainty-e2da83ae263b
['Carbon Radio']
2020-07-29 13:49:58.309000+00:00
['Leadership', 'Healthcare', 'Future', 'Higher Education', 'Entrepreneurship']
Title Entrepreneurs Thrive New Era UncertaintyContent Len Schlesinger President Emeritus Babson College Baker Foundation Professor Harvard Business School serf Chair School’s Practicebased faculty Coordinator Required Curriculum Section Chairs served member HBS faculty 1978 1985 1988 1998 2013 present career School taught course Organizational Behavior Organization Design Human Resources Management General Management Neighborhood Business Entrepreneurial Management Global Immersion Leadership Service Management MBA Executive Education program also served head Service Management Interest Group Senior Associate Dean External Relations Chair School’s 1993–94 MBA program review redesign process interview Carbon Radio talk entrepreneur win new era uncertainty address healthcare higher education changing entrepreneurial thought action enable organization thrive postCovid world think what’s going right entrepreneurship play role recovery economy Satya Nadella CEO Microsoft actually nailed framing issue compelling way it’s one using countless number time credit talk three phase current reality first one obviously restore need mechanism restore business organization semblance reality second recover thing need get customer back get service provider back get system working third obviously exciting compelling part equation reimagine opportunity whether you’re small business large business kind use experience last several month think way reinvent every aspect business model every way interact customer constituent there’s question work begun much done accommodate constituency context pandemic end proving extraordinarily useful ongoing basis reality suggest one question we’re still left enormous amount uncertainty current reality reality going 90 day let alone year time winner always entrepreneur one able cope uncertainty flourish uncertainty figure way actually take small step get sense might might work new reality called Post Covid19 wellestablished vaccine whole world saying “OK we’ve got one licked” can’t imagine anything approach state normalcy given failure government healthcare system quite honestly citizen population around particular pandemic there’s opportunity reinvent many aspect life community outgrowth question “will patience temperament that” call indicated really deep fear we’re already seeing many American population flat bored current reality decided they’ve enough they’re going misbehave sort way We’re beginning see potential consequence see Covid19 rate begin spike feeling next month going pretty ugly think healthcare entrepreneur particular play role reimagining society moving forward three four way already become obvious One spike telehealth time Covid19 significant player telehealth Kaiser managed half GP appointment done telehealth idiosyncrasy got forced telehealth it’s proving far robust far powerful anybody imagined There’s absolutely question part process reinvention begin think need physical interaction doctor there’s industry le customer slash patient centric healthcare Particularly move part United States geographic access healthcare requires two hour drive notion able handle basic activity phone Internet change point folk profound influence whether happens insurance company Right insurance company paying rate live telehealth immediately go back depreciating value electronic interaction versus live interaction think you’ll see slowdown there’s question new mechanism interaction doctor healthcare provider change sort way second piece something reading day people aren’t going 
doctor doesn’t appear epidemic kind healthcare issue last several month issue habit we’ve established visit doctor activity go doctor think lot people going start challenge opportunity profound influence healthcare cost old habit large supported empirical data third piece understand much economy healthcare system critically dependent elective procedure quite honestly unprepared healthcare system deal underlying structure pandemic I’m reading paper today major healthcare system Boston still don’t access PPE know sit say “oh jeez” demonstrated it’s “medical” “critical” healthcare systematic inattention global logistics system healthcare don’t know responsible thought gravitational pull everything go lowest cost provider everything get shore domestic provider emergency supply stockpile run low I’ve got believe hopefully end scary reminder fragile global logistics system healthcare sort industry raise serious question three thing — new access physician new access global supply chain rethinking interaction patient doctor need go don’t three going stimulated grown entrepreneur think small business family business time learn learning they’re operating time extreme uncertainty separate think family business different think small business let start family enterprise one thing everybody tends kind romance notion family enterprise think somehow they’re small business need understand global scale there’s substantially wealth family enterprise aggregated wealth public corporation exist Families longer history Families longer aggregation wealth quite honestly family demonstrated extraordinary resilience know multiple generation family able move way theory organization would indicate private organization large able think challenge facing family enterprise aggregate aren’t really profoundly different facing organization special issue associated family dynamic alongside organizational dynamic nature challenge roughly Small business whole different ballgame important thing understand small business depends country I’m talking United States look Small Business Administration define small business business five hundred employee reality talk significance small business they’re really talking small part population 350 500 employee ignore microenterprises ignore neighborhood business one getting killed Absolutely getting killed lot obviously food service restaurant latest data indicates probably least 25 percent won’t survive Literally won’t survive largely don’t store cash Large organization today sitting absolute hoard cash trying figure they’re going regime they’re going buy industry they’re going go smaller microbusinesses need cash flow operate business deliver doesn’t exist PPP wasn’t necessarily framed correctly naive part PPP United States really naïve operating assumption could use bank source application predicated assumption smaller business banking relationship usually they’re making relationship access capital ignores particularly minority average net worth black adult citizen Boston eight half dollar eight half dollar you’re worried banking relationship you’re calling neighborhood banker get access took understand interim solution rise fintech fintech organization specifically organization like QuickBooks stepped got authority actually file application addition bank stepped provided absolutely critical resource small business bank large really small one don’t play said rule change time designed eight week designed 24 week correctly large it’s grant understand grant actually intended keep people 
payroll designed nobody forecasted length Covid situation didn’t hurt really hasn’t helped think entrepreneur uniquely capable operating seemingly uncertain time life Well mean notion point even midst uncertainty one see opportunity structure opportunity structure entirely driven uncertainty people hobby People expertise People interest there’s better time imagine new scenario experiment mean it’s really simple powerful way reduce uncertainty take step see happened opposed traditional business planning process people sitting around dreaming something need problem step one take address problem take entrepreneur temperament mindset actually take step see happens actually create new solution postCovid environment think people look risk differently similar analysis Well it’s robust analysis got hit something wasn’t anybody’s risk analysis framework add global pandemic list thing worry nine plague work reality risk management framework poorly defined large generally pretty well taken care we’re going find people right risk people stimulating particularly healthcare entrepreneurial activity get thing market faster We’ve seen saw swine flu vaccine well believe political pressure announce vaccine given reality bringing vaccine market job minimal risk tension going powerful high end there’ll variation tension way small neighborhood business wrote blog post year back titled “Don’t Forget Mayors” focused work Mayors across country local government thinking investing entrepreneurial ecosystem Thank God mayor today you’re looking folk closest action capacity able shape influence citizenship behavior it’s local level see countless number example good bad mayor across United States quite honestly consequence bad leadership level really involve life know that’s life decided question government core question ethicist sort people raised around Covid19 balance desire get economy going desire ensure life saved we’ve allowed debate go framed political debate administration oftentimes framed don’t want cost compliance cost responding coronavirus exceed value we’re much middle right systematic inability nation set community actually question addressed without contention much source problem expressed concerned relative next several month think future higher education look like pandemic going technology improves higher education got pushed mean pushed online learning lot educational institution busily celebrating tenday transition online learning would guess good think we’re going Fall question becomes one well good get April August we’ve got six month good get actually figure use tool dramatically increase quality online experience three thing know online experience improved exponentially countless number people already actually tend large number student already don’t want ignore fact online leader Arizona States Purdues Southern New Hampshire University Penn States already capturing huge percentage capacity space quite well large huge amount economic advantage wake morning say “I’m going go online business” went talked friend He’d say “you’re idiot” Unless idiosyncratic niche hasn’t covered way shape form online learning you’re going get crushed people capacity That’s number one school making deep commitment online they’re respect hobby something pas time go face face They’re looking permanent restructuring model reality raised fundamental challenge higher education economic model raised last decade like 10 day transition online learning we’ve 10 day transition serious examination “why paying 50 60 70 thousand 
dollar year” Particularly institution delivered online addition delivering online refused cut tuition Colleges can’t cut tuition given economic model still dependent labor student beginning raise question quite legitimate go another Fall pressure even greater folk writing book college stress test Zemsky six month ago said 10 percent college United States 2200 college 10 percent near death list think today would say 25 percent death you’ll consolidation longer go le able school defend it’s whole bunch school actually come conclusion think it’s gutsy appropriate choice need need everything need deliver much physically possible live dealing government public health science try figure need get many people campus live situation create kind value equation applaud school pretty clear strategy wanting play nonresidential game respect deal buster point folk innovating long time like Southern New Hampshire University they’ve done residential campus core school went online accept residential student they’ve said student they’ve accepted Fall’s class “You move campus We’re delighted campus want campus We’re going run residential freshman year next year we’re going give first year college absolutely free apology disappointing commitment we’re delivering absolutely free we’re going entrepreneur reinventing residentiallybased education coming back year ten thousand dollar year” online people worry big folk residential people worry come Southern New Hampshire University year something we’re forecasting 10 year They’ve made ironclad commitment ten thousand dollar year twelve month much university’s financial sustainability endowment research funding First school don’t large endowment you’re dealing medium always writes Ivy League big state school large endowment ignore two thing One large endowment many it’s 80 85 percent restricted It’s designated donor school little flexibility figure might able use don’t want overemphasize People tend think Harvard 38 billion 40 billion whatever number able give away Well can’t school space reality vast majority school don’t large endowment critically dependent tuition it’s dependency tuition incredibly complex environment threat It’s absence endowment precedence government getting involved rescue university Would reasonable think think someone Congress come bill answer yes think pas environment mean chance college university came need spring think something like 40 60 billion dollar got 16 billion string attached half 16 go directly student asked 60 need got eight first emergency go round It’s going get better think remote work it’s impacting employee employer think office space think issue remote work came nowhere literally came nowhere necessity keep people home we’re learning lot mean reality four five month data point already large company making significant commitment result live boonies people always say “Well live there” say “It’s great place live long don’t want go anywhere” getting city two half two three quarter hour day back forth picked That’s time It’s time sleeping It’s time exercising It’s time conversation It’s time work there’s question people discovering sort opportunity think pattern We’ll coming back work There’s question u environment much le densely populated fewer people required come fantasy remote work increasingly become work du jour plenty occupation plenty profession don’t require people work time start thinking second third order consequence One mean urban environment see Boston rent going Boston proper suburb incredible spike interest 
people looking move see virtually every major city Rents New York rent San Francisco dramatically rent Boston Whether that’s temporary permanent tend think longer alive phenomenon people might think second issue profound issue big office building can’t figure get people elevator people say “What’s going take get people go downtown” Well get two people elevator office 52nd floor it’s going take 12 hour get people 12 hour get people two minute work built infrastructure nature potentially illsuited new reality unless someone convinces u put pandemic bed forever going hell task Obviously public transport reality it’s perceived one greatest asset get people work put car road plenty people don’t want use public transport I’m government city dealing exactly question started conversation take restore sense normalcy take recover greatest disruption economic base I’ve ever experienced modern history sustained period time going reimagine city you’re looking opportunity entrepreneurship opportunity local government completely rethink ability create ecosystem player local community systematically reinvent logic city scale never thought completely real supposed executive teaching April past year right coronavirus case written business UK decided go remote work colleague thought kind crazy case found old dissertation written early stage stuff written kind fluke three month later they’re epicenter longterm solution think Andrew Yang’s universal basic income proposal term policy term political feasibility don’t deep conviction proposal found essentially context last four month right government gave one check guess morning they’re talking another check There’s question it’s better nothing marginally better extreme Biden proposal along progressive two thousand dollar month per person Covid situation reality that’s big number we’re already complaining predisposition go work 600 dollar supplement unemployment insurance exacerbates problem even greater detail idea universal basic income bad idea can’t idea devoid context term thing happen don’t happen support exist don’t exist allow citizenry thrive flourish It’s great slogan last year we’ve learned slogan “universal basic income” learned “no student debt” “free college” mean go whole list Every one capacity break bank fact they’re embedded broader context we’re going work economic model quite honestly allows work bigger problem think field futurism notion forecasting role play conversation Federal level fund thing long term Let’s get clear joy entrepreneurship Entrepreneurship large naturally suspicious forecast looked first seventyfive day Coronavirus looked television never ending stream competitive forecast would reach different conclusion term going appropriate next step I’m suggesting interested data I’m suggesting we’re interested improving quality data social scientist say importantly economic investor say don’t actually take economic step based forecast fact forecast good algorithm go algorithm created human being contain bias human condition world going come end 2020 2030 don’t know reality don’t spend much time convinced one forecast going compelling another respect it’s dueling forecast allow u political debate everything incontrovertible evidence deleterious impact climate change Yes world going end 2030 don’t know current reality climate change opportunity substantial number entrepreneur think activity might engage actually make money also make better world Yep question look organization done well coming pandemic one thing I’m absolutely certain 
without making forecast believe bone organization taken care people one going win ability avoid ideological debate staff drive crazy gave talk last month talking people we’re calling frontline worker healthcare worker frontline worker they’re relabeled hero sit say “You know Could stop calling hero could pay decent wage” It’s simple don’t want give greeting card don’t want applaud walk street want make sure recognizing risk taking behalf one two recognizing variety circumstance don’t lot option want make sure profound way communicate appreciation work effort provide support need minimize risk exposure pay risk they’re taking organization week got boredTags Leadership Healthcare Future Higher Education Entrepreneurship
3,709
Flutter to the Future: The Inevitability of Cross-Platform Frameworks
Photo by UX Store on Unsplash So, you want to build a tech start-up. You have your product idea, your seed capital, and your founding team. Now you just have to hire three engineering teams and build three versions of your product. Surprised? Let us count them down. A website, obviously, that’s one. An iOS app that works on iPhones, that’s two. And an Android app that works on all the other smartphones, that’s three. Each one requires knowledge of different technology stacks and programming languages, so you need three engineering teams, or at the very least three engineering ninjas, each with total mastery over each of those respective areas. Your seemingly straightforward path to a minimum viable product has just gotten three times thornier, with impacts to resources, costs, and timelines. And that’s before a single line of code has been written. It Used to Be Easy Mark Zuckerberg sitting in his Harvard dorm room did not have to worry about hacking together three separate versions of Facebook. Larry Page and Sergey Brin, grabbing coffee in Palo Alto, did not have to worry about ranking anything other than websites. And Jeff Bezos working out of his garage did not have to pay for three separate Amazon online bookstores. The rise of smartphones and mobile app stores has brought a new reality to the internet. Companies across every industry have recognized that customers demand an omnichannel gateway. We all want to move from laptop to tablet to smartphone and back again with no degradation in user experience. What has been a win for consumers has also created a barrier to entry for start-ups. It used to be that a single engineer could build a new port of call for the entire world because the entire world came visiting on the same type of boat — the browser. No longer. Now some come by browser, some by mobile browser, and many on smartphones, expecting a native app experience. One Size Fits (Not) All In the span of a decade, mobile strategy has gone from afterthought to prerequisite. So much so that a growing number of successful start-ups have bypassed the traditional web application altogether and built strictly for the smartphone. This can work if your app idea lends itself to the medium, as it often does in the worlds of gaming, social networking, and digital entertainment. However, customers for a vast majority of businesses still expect equal play on both web and mobile. That means the typical modern-day start-up must often raise capital to pay not only for the designers and full-stack and devops engineers, but also for iOS and Android developers. Beyond compensation, there is also the question of management. Designing and engineering three stand-alone applications just to provide a single offering to the market means three times the product and project management lift along with coordination among all three efforts. The brick-and-mortar equivalent of this conundrum is the form that must be filled out in triplicate. Remember those? Carbon copies — the real sort, not the email kind — solved that inefficiency some two hundred years ago. So, where is the carbon copy solution for web, iOS and Android? Cross Platform We Go The first iPhone was released in 2007 and the first Android phone followed in 2008. In a testament to the pace of innovation, the first cross-platform frameworks for both iOS and Android were released as early as 2009. The best known of these was eventually (and aptly) titled “PhoneGap”. 
The commercial version of PhoneGap was acquired by Adobe while an open-source version was made available via the Apache foundation under the title “Cordova”. By the mid-2010s, several additional frameworks emerged, including Xamarin, NativeScript, Kivy, and Ionic, with the latter built atop the aforementioned Apache Cordova framework. The challenge with these frameworks was that they offered less granular control than writing native code and remained a couple steps behind the latest SDK improvements from Apple and Google, respectively. However, for those organizations that were able to leverage these frameworks, they offered a 2x savings in development cost and time. Before long, the world’s technology giants recognized that there was a lot of money to be made in providing the means to do something twice as fast, not to mention centralizing their own cross-platform development. In quick succession, Microsoft acquired Xamarin, Facebook developed React Native, and Google developed Flutter. State of Play Modern cross-platform frameworks have come a long way since the first version of PhoneGap. Today, Flutter and React Native sit atop a quintuplet of high-powered and widely used cross-platform mobile frameworks that also include Xamarin, Ionic, and Cordova. For new development, Flutter is the heavy favorite in this five-faced selection. The reason is simple — it supports a single codebase across all platforms. The other frameworks also support a single codebase but with exceptions, particularly for UI rendering. In addition to offering the first purely unified codebase, Flutter has been designed to expedite development tasks and to compile directly into machine code. Bypassing intermediate code, which is relied on by other frameworks, enables Flutter to deliver native level performance even for complex graphics and computations. Flutter to the Future Whether you are a founder determining a development direction, an IT executive selecting a technology stack, or an engineer choosing your next area to upskill, chances are that the right answer is Flutter. It is robust yet bleeding edge. Cross-platform development is the future and Flutter is the clear winner in this space. Each of the other frameworks is based on older approaches and is held back by legacy building blocks in their foundations. Designed atop the lessons learned from the shortfalls of earlier frameworks, Flutter is the first to present what founders and engineers alike thirst for — a truly cross-platform foundation that is the closest modern equivalent to the Write Once Run Anywhere (WORA) slogan first popularized with Java in the 1990s. There are challenges, of course. Flutter is still new and expertise is limited. However, winning in technology is about making bold bets on near-term evolution. Two or three years ago, Flutter made sense on paper, but the ecosystem was still limited. On the eve of 2021, the framework is ready to jump off the page and into your infrastructure. The Inevitability Regardless of whether you make the bet on Flutter, there is no question that cross-platform frameworks are slowly but surely supplanting native approaches. If your source code only runs on one platform then you are limiting your reach and disappointing a lot of customers. And if you are writing three versions of your source code then you are overstretching your resources and overtaxing your investors. It is important to remember that evolution from native to cross-platform has happened before. 
Early assembly languages that were tuned to native hardware architectures were inevitably replaced by higher level languages like C and Java that worked across computer types and operating systems. Technologies change but patterns remain the same. There is a reason car controls, restaurant menus, and computer keyboards all fit the same mold even though they come in different packages. We are wired for comfort and efficiency, and that means learn once or build once, and then re-use as often as possible.
https://medium.com/swlh/flutter-to-the-future-the-inevitability-of-cross-platform-frameworks-d541573b63f2
['Jack Plotkin']
2020-10-12 17:52:47.803000+00:00
['Cross Platform', 'Engineering', 'React Native', 'Startup', 'Flutter']
Title Flutter Future Inevitability CrossPlatform FrameworksContent Photo UX Store Unsplash want build tech startup product idea seed capital founding team hire three engineering team build three version product Surprised Let u count website obviously that’s one iOS app work iPhones that’s two Android app work smartphones that’s three one requires knowledge different technology stack programming language need three engineering team least three engineering ninja total mastery respective area seemingly straightforward path minimally viable product gotten three time thornier impact resource cost timeline that’s single line code written Used Easy Mark Zuckerberg sitting Harvard dorm room worry hacking together three separate version Facebook Larry Page Sergey Brin grabbing coffee Palo Alto worry ranking anything website Jeff Bezos working garage pay three separate Amazon online bookstore rise smartphones mobile app store brought new reality internet Companies across every industry recognized customer demand omnichannel gateway want move laptop tablet smartphone back degradation user experience win consumer also created barrier entry startup used single engineer could build new port call entire world entire world came visiting type boat — browser longer come browser mobile browser many smartphones expecting native app experience One Size Fits span decade mobile strategy gone afterthought prerequisite much growing number successful startup bypassed traditional web application altogether built strictly smartphone work app idea lends medium often world gaming social networking digital entertainment However customer vast majority business still expect equal play web mobile mean typical modernday startup must often raise capital pay designer fullstack devops engineer also iOS Android developer Beyond compensation also question management Designing engineering three standalone application provide single offering market mean three time product project management lift along coordination among three effort brick mortar equivalent conundrum form must filled triplicate Remember Carbon copy — real sort email kind — solved inefficiency two hundred year ago carbon copy solution web iOS Android Cross Platform Go first iPhone released 2007 first Android phone followed 2008 testament pace innovation first cross platform framework iOS Android released early 2009 best known eventually aptly titled “PhoneGap” commercial version PhoneGap acquired Adobe opensource version made available via Apache foundation title “Cordova” mid2010s several additional framework emerged including Xamarin NativeScript Kivy Ionic latter built atop aforementioned Apache Cordova framework challenge framework offered le granular control writing native code remained couple step behind latest SDK improvement Apple Google respectively However organization able leverage framework offered 2x saving development cost time long world’s technology giant recognized lot money made providing mean something twice fast mention centralizing crossplatform development quick succession Microsoft acquired Xamarin Facebook developed React Native Google developed Flutter State Play Modern crossplatform framework come long way since first version PhoneGap Today Flutter React Native sit atop quintuplet highpowered widely used crossplatform mobile framework also include Xamarin Ionic Cordova new development Flutter heavy favorite fivefaced selection reason simple — support single codebase across platform framework also support single codebase exception 
particularly UI rendering addition offering first purely unified codebase Flutter designed expedite development task compile directly machine code Bypassing intermediate code relied framework enables Flutter deliver native level performance even complex graphic computation Flutter Future Whether founder determining development direction executive selecting technology stack engineer choosing next area upskill chance right answer Flutter robust yet bleeding edge Crossplatform development future Flutter clear winner space framework based older approach held back legacy building block foundation Designed atop lesson learned shortfall earlier framework Flutter first present founder engineer alike thirst — truly crossplatform foundation closest modern equivalent Write Run Anywhere WORA slogan first popularized Java 1990s challenge course Flutter still new expertise limited However winning technology making bold bet nearterm evolution Two three year ago Flutter made sense paper ecosystem still limited eve 2021 framework ready jump page infrastructure Inevitability Regardless whether make bet Flutter question crossplatform framework slowly surely supplanting native approach source code run one platform limiting reach disappointing lot customer writing three version source code overstretching resource overtaxing investor important remember evolution native crossplatform happened Early assembly language tuned native hardware architecture inevitably replaced higher level language like C Java worked across computer type operating system Technologies change pattern remain reason car control restaurant menu computer keyboard fit mold even though come different package wired comfort efficiency mean learn build reuse often possibleTags Cross Platform Engineering React Native Startup Flutter
3,710
Seven Different Visualizations of Immunization Data
Photo by Joshua Sortino on Unsplash When working with data, analytics provide answers and insight into the facts collected in a spreadsheet, document, or database. Presenting data in a visual context makes a story pop from the numbers and is easily understood by a wide range of readers or viewers. Visualizations are a quick and easy way to tell a data story, and there are many types to choose from. Using one dataset, let’s show seven ways to see the same information. How are Immunization and Exemption of Immunization Displayed? Immunization records of school-age children in Washington State for 2014–2015 are used to build, in Power BI, the seven visualizations that make analysis of the data easy to understand and to showcase for an audience. Area Plot Area plots show a count by shading the region under a line drawn through data points on the x and y axes. Shown are three lines with shading for comparing data graphically. Bar Chart Bar charts show a count by using the length of a shaded bar to represent the value for a specific label. Shown are three values, drawn horizontally, for each Educational Service District in the dataset. Key Influencers Key Influencers is a newer visualization that answers a binary-style question. Based on an algorithm, the display shows the variables most strongly correlated with, and most influential on, a key variable. Line Plot Line plots show a count by connecting data points with a line. Shown are three lines of different colors for comparing data graphically. Pie Chart Pie charts show a count by the shading and size of wedges, representing each category’s percentage of a whole as slices of a circle. Shown are “slices” of pie for total immunization and exemption per Educational Service District, adding up to total state enrollment. Scatter Plot Scatter plots show a count by the size of a dot and its placement on a plane. Shown are points whose size represents intensity, placed by enrollment and immunization count. Word Cloud A word cloud shows the most used words in a sample of text or an input file, displaying frequent words larger and rare words smaller to evaluate written content. In the graphic, the website for the dataset is fed into an online app that generates the word cloud. Value in Features These are some of the visualizations that many apps or libraries can create from data. The purpose of this form of results is to quickly illustrate information and knowledge for a wide audience, or to form a story out of data for consumption. From the results, some types are better than others for telling a story with this data. Creating a dashboard by placing multiple visuals together requires finding the best visuals for the analysis, so that the results are meaningful and weave the data into a compelling story.
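The article builds these views in Power BI; as a rough code-based parallel for readers who prefer scripting, the sketch below draws a few of the same chart types with matplotlib. The tiny table of counts, the district labels, and the column names are invented for illustration and are not taken from the Washington State dataset.

import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical immunization counts per Educational Service District (illustrative only).
df = pd.DataFrame({
    "district": ["ESD A", "ESD B", "ESD C"],
    "enrolled": [52000, 31000, 47000],
    "immunized": [48000, 28500, 43000],
    "exempt": [2100, 1400, 2600],
})

fig, axes = plt.subplots(1, 3, figsize=(15, 4))

# Bar chart: one horizontal bar per district, as in the bar view described above.
axes[0].barh(df["district"], df["immunized"])
axes[0].set_title("Complete immunization by district")

# Pie chart: each district's share of total exemptions.
axes[1].pie(df["exempt"], labels=df["district"], autopct="%1.1f%%")
axes[1].set_title("Share of exemptions")

# Scatter plot: enrollment vs. immunization count, with dot size scaled by exemptions.
axes[2].scatter(df["enrolled"], df["immunized"], s=df["exempt"] / 10)
axes[2].set_xlabel("Enrollment")
axes[2].set_ylabel("Immunized")
axes[2].set_title("Enrollment vs. immunization")

plt.tight_layout()
plt.show()

A visual like Key Influencers has no one-line matplotlib equivalent, which is one reason a point-and-click tool such as Power BI can be the better fit when the audience cares about the finding rather than the code.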
https://medium.com/ai-in-plain-english/seven-different-visualizations-of-immunization-data-b3185a791014
['Sarah Mason']
2020-11-30 18:40:47.677000+00:00
['Analytics', 'Big Data', 'AI', 'Data Science', 'Data Visualization']
Title Seven Different Visualizations Immunization DataContent Photo Joshua Sortino Unsplash using data analytics show answer insight fact collected spreadsheet document database able understand data visual context make story pop number easily understood wide range reader viewer Visualizations quick easy way tell data story making number pop information easy understand many type visualization Using data let’s show seven way see information Immunization Exemption Immunization Displayed Immunization Records school age child Washington State 2014–2015 show seven visualization make analysis data easy understand showcase audience using Power BI Area Plot Area plot show count placing shading line data point x axis axis Shown three line shading comparing data graphically Bar Chart Bar Charts show count shading show value specific label Shown three value shading Educational Service District dataset horizontally Key Influencers Key Influencers newer visualization answer binary style question Based algorithm display show top correlation variable proving influence data key variable Line Plot Line plot show count connecting data point line Shown three line different color comparing data graphically Pie Chart Pie Charts show count shading size wedge show percentage whole wedge circle Shown “slices” pie show total immunization exemption per Educational Service District adding total state enrollment Scatter Plot Scatter Plots show count size dot placement plane show data Shown point size represent intensity placement enrollment immunization count Word Cloud Word Cloud show used word sample text input file displaying used word larger le used word smaller evaluate written content graphic website dataset used online app generating Word Cloud Value Features visualization many apps library create data purpose form result quickly illustrate information knowledge wide audience form story data consumption result type better telling story data Creating dashboard placing multiple visuals together requires finding best visuals analysis view result meaningful weave data compelling story knowledgeTags Analytics Big Data AI Data Science Data Visualization
3,711
5 Tasks for adaptation communications
THE ADAPTIVE CO: Don’t face your (climate-changed) future without them Climate risk is a team sport. Play to win, with communications and org culture. (Commons image by Pixabay.) “That can’t be,” said your store manager. “We’ll be fine. It won’t get that bad.” When you sent him a memo and told him in a conference call and later personally at a meeting that he and his 100 employees, plus the store’s local service vendors and suppliers, had to align with corporate’s new climate-adaptation plan, he and his principal lieutenants balked. Oh, they carried out the plan, to some extent — it was that, you said, or else — but with hesitation, unmoved by your presentation, unwilling to go all the way, worry employees, change suppliers, relocate facilities, distract from higher priorities, add operating expenses, hurt their numbers. Go through that much trouble? To avoid a scenario they can’t be confident about? Up the chain of command, the regional manager agreed. Up a couple of layers, so did the VP at HQ. Some regions, they noticed, were carrying it out better. But most weren’t. Up further, the board and CEO had approved and launched a TCFD process to assess and disclose the company’s climate risks, which in turn led to the memos, calls and meetings to go beyond disclosure and actually execute a far-reaching, transformative plan. An ambitious change-management program was underway, but like most at this scale, yours ran into organizational obstacles that frustrated results. And that’s assuming you got the climate science right to begin with! If not, if your TCFD team underestimated the immediacy and severity of tipping points and socioeconomic risks, which McKinsey made clear in this recent report, even full engagement and participation by everyone in the company would be falling significantly short of the adaptation truly needed, and your company would remain at risk. Because here’s the fine point of it. For a complete adaptation plan to fully protect your company and secure a brand and organization for the climate-challenged future we will all face, you have to go enterprise-wide. The TCFD process is but the start. When you move to address the risks and capitalize on the opportunities informed by a TCFD assessment, you quickly realize it takes everyone everywhere in the company, mainly because climate impacts happen locally, and your people and suppliers must be ready, as must everyone in the support units up the chain, all the way to the very top. The trick is overcoming the trouble-confidence equation. Each of your 15 relevant stakeholders — board members, investors, senior leaders, the TCFD adaptation lead team itself, mid-level and unit managers, rank and file employees, suppliers/vendors, collaborators/partners, the upstream and downstream trade, bankers, insurers, the relevant government agencies, communities, NGOs and, of course, your customers — must see the adaptation plan not just as totally up to the task, but also as outweighing the pains, costs and hassles of executing it. That trouble must be seen as far less burdensome than the horrifying troubles (climate consequences) that will befall them and the company if they fail to adapt. And the opportunities, along with the challenge of this entire process, must be seen as an exciting journey, one to welcome, not fear or avoid. 
That, in turn, is entirely a communications and organizational-culture exercise, which you can meet by executing five tasks, and that you logically must launch first, so your best-laid plan can be implemented across the organization. It enables the plan, since people must buy in first before they act with the agency, urgency and commitment needed. This framework is the result of a one-year deep dive I led with a collaborative agency and consulting team at COMMON, a leading global network pursuing global change through social enterprise (my firm is an affiliate), informed by learnings from the Center for Public Interest Communications at the University of Florida, where I’m pursuing a graduate degree. It is a unique combination, first of its kind anywhere in the world, of the latest climate science and leading-edge behavior science, the latter focused on overcoming human biases, all applied to corporate communications and culture for deep, sustained organizational change. This column provides a summary of the five tasks to implement. We begin by reiterating the basic principle: this is enterprise-wide, everyone-everywhere change management, but change management you can’t afford to get wrong. The stakes cannot be higher, and you will likely have this one chance to get it right before climate change spirals out of control later this decade and adaptation becomes moot. 1. Paint a new Future Picture The very first communications task is to help people envision the future as it will likely unfold from the 2020s to century’s end. The climate science of RCP 8.5 plus tipping points, as McKinsey explains it, yields a future dramatically different from what your 15 stakeholders expect based on what they know from present and past. This is fundamental. Unable to envision a scenario so unknown or outside their frames of reference, there is no way for them to react appropriately to news of this future and prepare fully. That’s called the Representative Bias. The Ambiguity Bias, Availability Bias and Status Quo Bias are at work here, as well; when people can’t comprehend something, the natural tendency is to stay in the known and familiar, in what is available to the mind, in your status-quo comfort zone. Therefore, unless the future projected in scientific reports is decoded and simplified, something news reports generally fail to do, it remains a thick cloud of complexity, and our minds do not think it through. This has become basic Behavior Science 101. Biases and heuristics (mental shortcuts) get in the way of logic all the time, even when the logic should compel self-preservation and organizational optimization behavior. Your memos, conference calls and meetings haven’t produced the expected response? Are you presenting the future in ways that overcome these biases? It is a task not to be underestimated, or executed timidly. Biases are very stubborn things. They must be attacked in big and bold, yet nuanced ways. How? Start by painting a clear picture of this future for your stakeholders. Create a new mental prototype that replaces or complements their present-and-past references in a way that grabs their attention, makes sense to them, and provokes interest. That includes walking them through (decoding) the likely scenarios from here to there. This can be done with virtual-reality animations, smart videos, art, storytelling, and other communication strategies. 
Do this creatively enough and deliver it persistently enough to everyone, everywhere, and before you know it your 15 stakeholders will get the new-future message. (More on messaging in a bit.) This should, in fact, mark the official launch of your adaptation initiative. Brand it, name it, like you would the launch of any product or social brand. 2. Provide Support. Manage Engagement As your stakeholders become exposed to the Future Picture, you’ll see various reactions. The best one is from those who have been reading the climate news, have grown concerned, and know adaptation is the way to go but have not acted on it. The Future Picture and your whole adaptation plan will give them what they’ve been longing: clearer information, how it applies to them and the company, a pathway there, and the license and empowerment to get involved. The leaders of your adaptation initiative will likely come from this group across the organization. In How Change Happens, Dr. Cass Sunstein presents numerous social-change movements around the world that only tipped into acceleration and effectiveness when a critical mass of believers like this was empowered and activated by a trigger event or organized effort. Your plan “movement” would fill that role in this instance. Then there are those concerned, as well, but not as much. They’ve been passive avoiders this whole time, knowing there’s a climate there-there, but preferring not to go there. They generally suffer from a combination of Optimism Bias and Confirmation Bias. In the first, people can’t help but have a rosy expectation of the future, in dissonance with the truth, and mis-plan accordingly. In the second, they take it one step further and rationalize their choices based only on sources and news accounts that agree, while ignoring actively or subconsciously those that anticipate a more dire outcome. When confronted with the truth, they tend to fight it by entering the Kubler-Ross Cycle of Grief, which begins with denial and goes through several stages of resistance, until the person accepts the inevitable and moves toward proactive action. Others are overtaken by fear, which tends to impede the effective action called for in your adaptation plan. It is a neuro-hormonal reaction known as the Amygdala Hijack, referring to the part of the brain that handles stress, in this case blocking the resourcefulness and initiative your people will need. Some of these reactions will overlap. The mission from a communications and organizational-culture perspective is to manage and redirect them, and that calls for a stakeholder-engagement project that should be placed in the hands of a capable Engagement Team at the company. What will they do? Several things, and this is not an exhaustive list, instead meant to give you an idea of scope and scale: Identify, segment and engage people as they start showing their biases and reactions. This entails a robust internal CRM system, similar to CRM programs used for external audiences, mainly customers, but in this case to finely segment all 15 stakeholders, starting with your board, senior team and TCFD team. They are the first who must get the science and Future Picture right to approve, champion and carry out the best possible plan. Launch a Forum, much like the ones we’ve become accustomed to in social media and corporate intranets. It is a fantastic way for people to connect directly, express what they’re feeling, advise each other, and coordinate collaborations across the organization. 
Engagement Team members would be there to move these conversations along, flag the folks who need special attention, and connect them with resources that provide it. Run sense-making dialogues. This, too, runs deep in behavior science. It’s a directed process to have people in an organization think through an issue, crisis or challenge. As the name implies, the goal is for a solution to make inherent sense, so that a person will act on it from his/her own agency and volition. Create and manage an event calendar throughout the year and across the organization — seminars, webinars, conference calls, physical events, others — to communicate your adaptation program and create the sort of personal networking and engagement that leads to bias-breaking understanding and action hubs. In all of the above and other Engagement Team initiatives, pay special attention to high-transitivity, highly networked influencers and leaders at every level, across all 15 stakeholder categories. Voluminous behavior research shows that difficult change does not happen rapidly or at all — and this certainly qualifies as difficult behavior change! — unless these influencers and leaders buy in and join the effort. Call it Horizontal Leadership, New Power Participation, Connected Networks, or any of its many iterations, the essence is the same: you can flag these folks — using the CRM, and including Sunstein’s activated believers — and get them not just to embrace your adaptation plan, but to do so with leadership zeal, enterprise-wide. 3. Deliver the right Content & Creative What will the Engagement Team use to communicate? This is where the creative and content parts enter the picture. Other experts would probably have started this column with this. We figure it’s better to first understand the imperative, purpose and mechanics of the Future Picture and Engagement Team, so you may then instinctively place this component. It’s what a Corporate Communications Department does, along with Public Relations, Investor Relations, Marketing and their external agencies. When a project team is assembled to manage something like TCFD execution and yields a “product” like your adaptation plan, you usually ask these communication colleagues for help in creating the messaging, artwork, creative pieces, media and channel plan, social-media community management, and other such executions, as part of a coherent multi-stakeholder communications strategy. Relatedly, TCFD includes opportunities to innovate and launch adaptation-related products and services, which Marketing is called on to promote and scale. A tweak on that approach will probably serve you well. Given the highly specialized nature of RCP 8.5 + tipping-point climate science, the science of high-difficulty behavior change, the complex TCFD structure, and the far-reaching, profoundly transformative adaptation process that must stem from it, this is one change-management project better matched with its own, equally specialized communications group. In this column, let’s call it your Messaging & Creative Team. Again, the difficulty bar is really high. You get one shot to get it right, given the daunting climate-change timing. Better to go with a specialized group. Much of the daily work, mind you, may still be done by your regular comm resources, internal and external. 
The big need filled by Messaging & Creative is strategy, direction and coordination. Members will huddle with existing strategists at Corporate Comm, PR and IR to segment the stakeholders and decide on messaging and approaches for each one, a best practice of robust similar efforts. There’s always an umbrella message, but it must be tailored for each audience and delivered across the channels each one uses. Likewise with artwork and creative, including, importantly, the design of the Future Picture! The Engagement Team, for one, will need a highly coordinated stream of speeches, event materials, sense-making materials, training materials, fact sheets, slideshows and videos for key meetings and presentations, mini-documentary films, on-premise posters and materials, intranet and social-media videos and posts, related news and storytelling pieces, and more. Taking the Optimism Bias as an example, they’ll use these tools to redirect motivation to a code driven not by outcomes (which the world now knows will likely be dire), but by the four drivers of new climate optimism: Adaptation as the one big hope. Doing the right thing — focus on ethics and compassion, not outcome. Being comfortable focusing on probable scenarios we can envision, instead of fearful blurry outcomes. Framing the excitement and adventure of facing down this new reality and emerging as one of the brands and companies that drives it. In pop culture, this is already happening. It is called Hopepunk, explained nicely in this recent article. Again, the hope is in the attitude and adaptation, not in the outcomes. Your Messaging & Creative Team can draw from the storytelling of this popular movement and create something special for your 15 stakeholders. Because the future will be hard. You’ll want to be one of the corporate beacons of hope, but that hope must be grounded in truth, not based on false expectations that are bound to crash and undermine your business and reputation. 4. Build an Adaptation Culture To achieve enterprise-wide buy-in, enable everyone everywhere to join with excitement and commitment — from the board and senior team down to the parking attendant and concierge, and over to the most remote supplier — without falling into the uneven, here-yes there-not-so-much gaps of most change management projects, you’ll need a fourth component: an organizational-culture initiative. There are dozens of models. You may be familiar with or have had a good experience with one or two. If so, wonderful. Perhaps you can apply the model to this challenge. For the sake of illustration, let’s use a framework by NOBL, a leading American org-culture firm and COMMON member. They feature five culture levels: Environment, the conditions in which your company operates (local economies, competitors, technologies, partners, etc.). Today, no assessment or management of this environment is complete without including our shared climate future using RCP 8.5 and tipping-point scenarios. Purpose, the reason behind the work you do in response to and within that environment, including the corporate values everyone in the company is supposed to live by. Adaptation should be inserted as one of those values, along with the usual suspects: teamwork, quality, safety, sustainability, others. 
Strategies, the bets you make to fulfill the purpose. The whole TCFD process is designed to land in a strategic planning process that manages every risk and capitalizes on every opportunity. To the extent it’s integrated seamlessly into your pre-TCFD, pre-adaptation corporate strategy, and enhances it to secure an adapted future, you win. Structures, the distribution and allocation of resources you need to execute the strategies, including budgets, chain of command, board and C-suite leadership, etc. This step dictates the resources enterprise-wide allocated to your adaptation project. Systems, the tools and steps that align organizational change to all of the above. For new adaptation behaviors, particularly considering the hard-to-break biases you must overcome, this includes such things as employee hiring, training, networking, recognition, Kubler-Ross grief management, and empowerment, plus risk management processes (financial, insurance, socioeconomic, others), facilities management, supply-chain management, IT systems, innovation feedback loops, and more. Some of this you may already be pursuing in your TCFD or other adaptation process. And just as the Messaging & Creative Team would work with existing internal and external comm folks at the company, so too would this new Culture Team get in sync with your existing efforts and resources, in this case with the objective of scaling adaptation enterprise-wide, and here again, deploying specialized expertise to secure optimized and rapid results. The Communications and Engagement teams, for their part, would work in total collaboration with Culture, the first to provide the needed messaging and materials, the second to “distribute” the systems, structures, strategies and values to the whole organization. 5. Capitalize on Trigger Events I mentioned earlier that Cass Sunstein’s How Change Happens research documented how certain incidents and events, most of the time spontaneous and unpredictable, have sparked successful change movements across history by turning theretofore passive believers into a determined mobilization. People, he discovered, tend to keep quiet about opinions boiling inside, until some event awakens them from passivity and they decide to burst onto the scene. As others do, as well, and they realize the number of silents was far greater than they assumed, they grow in number, confidence and action. So it is within companies. 
There is absolutely no reason to believe your employees and other stakeholders have a different belief level than the rest of society, which polls indicate are in large and growing majorities concerned about the present and future effects of runaway climate change that can no longer be solved. This fifth task is one more way for you to take advantage of that and awaken your people into action. How? Climate-related trigger events happen all the time, mostly across three categories: a) climate impacts themselves (storms, floods, fires, droughts, heat or cold waves, others); b) policy and legal, as when a law is enacted or a judge rules on a related issue; and c) industry and corporate, when you announce a major corporate policy change or a trade association launches a related initiative. This task would have you assemble a fourth and final group, the Trigger Events Team, to serve like a war room or a rapid-reaction force to:
https://medium.com/predict/5-tasks-for-successful-corporate-adaptation-a49916ef131c
['Alexander Díaz']
2020-03-02 13:49:55.345000+00:00
['Management', 'Sustainability', 'Future', 'Climate Change', 'Predict Column']
Title 5 Tasks adaptation communicationsContent ADAPTIVE CO Don’t face climate changed future without Climate risk team sport Play win communication org culture Commons image Pixabay “That can’t be” said store manager “We’ll fine won’t get bad” sent memo told conference call later personally meeting 100 employee plus store’s local service vendor supplier align corporate’s new climateadaptation plan principal lieutenant balked Oh carried plan extent — said else — hesitation unmoved presentation unwilling go way worry employee change supplier relocate facility distract higher priority add operating expense hurt number Go much trouble avoid scenario can’t confident chain command regional manager agreed couple layer VP HQ region noticed carrying better weren’t board CEO approved launched TCFD process ass disclose company’s climate risk turn led memo call meeting go beyond disclosure actually execute farreaching transformative plan ambitious changemanagement program underway like scale ran organizational obstacle flustered result that’s assuming got climate science right begin TCFD team underestimated immediacy severity tipping point socioeconomic risk McKinsey made clear recent report even full engagement participation everyone company would falling significantly short adaptation truly needed company would remain risk here’s fine point complete adaptation plan fully protect company secure brand organization climatechallenged future face go enterprisewide TCFD process start move address risk capitalize opportunity informed TCFD assessment quickly realize take everyone everywhere company mainly climate impact happen locally people supplier must ready must everyone support unit chain way top trick overcoming troubleconfidence equation 15 relevant stakeholder — board member investor senior leader TCFD adaptation lead team midlevel unit manager rank file employee suppliersvendors collaboratorspartners upstream downstream trade banker insurer relevant government agency community NGOs course customers—must see adaptation plan totally task also outweighing pain cost hassle executing trouble must seen far le burdensome horrifying trouble climate consequence befall company fail adapt opportunity along challenge entire process must seen exciting journey one welcome fear avoid turn entirely communication organizationalculture exercise meet executing five task logically must launch first bestlaid plan implemented across organization enables plan since people must buy first act agency urgency commitment needed framework result oneyear deep dive led collaborative agency consulting team COMMON leading global network pursuing global change social enterprise firm affiliate informed learning Center Public Interest Communications University Florida I’m pursuing graduate degree unique combination first kind anywhere world latest climate science leadingedge behavior science latter focused overcoming human bias applied corporate communication culture deep sustained organizational change column provides summary five task implement begin reiterating basic principle enterprisewide everyoneeverywhere change management change management can’t afford get wrong stake cannot higher likely one chance get right climate change spiral control later decade adaptation becomes moot 1 Paint new Future Picture first communication task help people envision future likely unfold 2020s century’s end climate science RCP 85 plus tipping point McKinsey explains yield future dramatically different 15 stakeholder expect based know present 
past fundamental Unable envision scenario unknown outside frame reference way react appropriately news future prepare fully That’s called Representative Bias Ambiguity Bias Availability Bias Status Quo Bias work well people can’t comprehend something natural tendency stay known familiar available mind statusquo comfort zone Therefore unless future projected scientific report decoded simplified something news report generally fail remains thick cloud complexity mind think become basic Behavior Science 101 Biases heuristic mental shortcut get way logic time even logic compel selfpreservation organizational optimization behavior memo conference call meeting haven’t produced expected response presenting future way overcome bias task underestimated executed timidly Biases stubborn thing must attacked big bold yet nuanced way Start painting clear picture future stakeholder Create new mental prototype replaces complement presentandpast reference way grab attention make sense provokes interest includes walking decoding likely scenario done virtualreality animation smart video art storytelling communication strategy creatively enough deliver persistently enough everyone everywhere know 15 stakeholder get newfuture message messaging bit fact mark official launch adaptation initiative Brand name like would launch product social brand 2 Provide Support Manage Engagement stakeholder become exposed Future Picture you’ll see various reaction best one reading climate news grown concerned know adaptation way go acted Future Picture whole adaptation plan give they’ve longing clearer information applies company pathway license empowerment get involved leader adaptation initiative likely come group across organization Change Happens Dr Cass Sunstein present numerous socialchange movement around world tipped acceleration effectiveness critical mass believer like empowered activated trigger event organized effort plan “movement” would fill role instance concerned well much They’ve passive avoiders whole time knowing there’s climate therethere preferring go generally suffer combination Optimism Bias Confirmation Bias first people can’t help rosy expectation future dissonance truth misplan accordingly second take one step rationalize choice based source news account agree ignoring actively subconsciously anticipate dire outcome confronted truth tend fight entering KublerRoss Cycle Grief begin denial go several stage resistance person accepts inevitable move toward proactive action Others overtaken fear tends impede effective action called adaptation plan neurohormonal reaction known Amygdala Hijack referring part brain handle stress case blocking resourcefulness initiative people need reaction overlap mission communication organizationalculture perspective manage redirect call stakeholderengagement project placed hand capable Engagement Team company Several thing exhaustive list instead meant give idea scope scale Identify segment engage people start showing bias reaction entail robust internal CRM system similar CRM program used external audience mainly customer case finely segment 15 stakeholder starting board senior team TCFD team first must get science Future Picture right approve champion carry best possible plan Launch Forum much like one we’ve become accustomed social medium corporate intranet fantastic way people connect directly express they’re feeling advise coordinate collaboration across organization Engagement Team member would move conversation along flag folk need special attention connect resource 
provide member would move conversation along flag folk need special attention connect resource provide Run sensemaking dialogue run deep behavior science It’s directed process people organization think issue crisis challenge name implies goal solution make inherent sense person act hisher agency volition Create manage event calendar throughout year across organization— seminar webinars conference call physical event others — communicate adaptation program create sort personal networking engagement lead biasbreaking understanding action hub Engagement Team initiative pay special attention hightransitivity highly networked influencers leader every level across 15 stakeholder category Voluminous behavior research show difficult change happen rapidly — certainly qualifies difficult behavior change—unless influencers leader buy join effort Call Horizontal Leadership New Power Participation Connected Networks many iteration essence flag folk — using CRM including Sunstein’s activated believer — get embrace adaptation plan leadership zeal enterprisewide 3 Deliver right Content Creative Engagement Team use communicate creative content part enter picture expert would probably started column figure it’s better first understand imperative purpose mechanic Future Picture Engagement Team may instinctively place component It’s Corporate Communications Department along Public Relations Investor Relations Marketing external agency project team assembled manage something like TCFD execution yield “product” like adaptation plan usually ask communication colleague help creating messaging artwork creative piece medium channel plan socialmedia community management execution part coherent multistakeholder communication strategy Relatedly TCFD includes opportunity innovate launch adaptationrelated product service Marketing called promote scale tweak approach probably serve well Given highly specialized nature RCP 85 tippingpoint climate science science highdifficulty behavior change complex TCFD structure farreaching profoundly transformative adaptation process must stem one changemanagement project better matched equally specialized communication group column let’s call Messaging Creative Team difficulty bar really high get one shot get right given daunting climatechange timing Better go specialized group Much daily work mind may still done regular comm resource internal external big need filled Messaging Creative strategy direction coordination Members huddle existing strategist Corporate Comm PR IR segment stakeholder decide messaging approach one best practice robust similar effort There’s always umbrella message must tailored audience delivered across channel one us Likewise artwork creative including importantly design Future Picture Engagement Team one need highly coordinated stream speech event material sensemaking material training material fact sheet slideshows video key meeting presentation minidocumentary film onpremise poster material intranet socialmedia video post related news storytelling piece Taking Optimism Bias example they’ll use tool redirect motivation code driven outcome world know likely dire four driver new climate optimism Adaptation one big hope right thing — focus ethic compassion outcome comfortable focusing probable scenario envision instead fearful blurry outcome Framing excitement adventure facing new reality emerging one brand company drive pop culture already happening called Hopepunk explained nicely recent article hope attitude adaptation outcome Messaging Creative Team draw 
storytelling popular movement create something special 15 stakeholder future hard You’ll want one corporate beacon hope hope must grounded truth based false expectation bound crash undermine business reputation 4 Build Adaptation Culture achieve enterprisewide buyin enable everyone everywhere join excitement commitment — board senior team parking attendant concierge remote supplier — without falling uneven hereyes therenotsomuch gap change management project you’ll need fourth component organizationalculture initiative dozen model may familiar good experience one two wonderful Perhaps apply model challenge sake illustration let’s use framework NOBL leading American orgculture firm COMMON member feature five culture level Environment condition company operates local economy competitor technology partner etc Today assessment management environment complete without including shared climate future using RCP 85 tippingpoint scenario condition company operates local economy competitor technology partner etc Today assessment management environment complete without including shared climate future using RCP 85 tippingpoint scenario Purpose reason behind work response within environment including corporate value everyone company supposed live Adaptation inserted one value along usual suspect teamwork quality safety sustainability others reason behind work response within environment including corporate value everyone company supposed live Adaptation inserted one value along usual suspect teamwork quality safety sustainability others Strategies bet make fulfill purpose whole TCFD process designed land strategic planning process manages every risk capitalizes every opportunity extent it’s integrated seamlessly preTCFD preadaptation corporate strategy enhances secure adapted future win bet make fulfill purpose whole TCFD process designed land strategic planning process manages every risk capitalizes every opportunity extent it’s integrated seamlessly preTCFD preadaptation corporate strategy enhances secure adapted future win Structures distribution allocation resource need execute strategy including budget chain command board Csuite leadership etc step dictate resource enterprisewide allocated adaptation project distribution allocation resource need execute strategy including budget chain command board Csuite leadership etc step dictate resource enterprisewide allocated adaptation project Systems tool step align organizational change new adaptation behavior particularly considering hardtobreak bias must overcome includes thing employee hiring training networking recognition KublerRoss grief management empowerment plus risk management process financial insurance socioeconomic others facility management supplychain management system innovation feedback loop may already pursuing TCFD adaptation process Messaging Creative Team would work existing internal external comm folk company would new Culture Team get sync existing effort resource case objective scaling adaptation enterprisewide deploying specialized expertise secure optimized rapid result Communications Engagement team part would work total collaboration Culture first provide needed messaging material second “distribute” system structure strategy value whole organization 5 Capitalize Trigger Events mentioned earlier Cass Sunstein’s Change Happens research documented certain incident event time spontaneous unpredictable sparked successful change movement across history turning theretofore passive believer determined mobilization People discovered 
tend keep quiet opinion boiling inside event awakens passivity decide burst onto scene others well realize number silents far greater assumed grow number confidence action within company absolutely reason believe employee stakeholder different belief level rest society poll indicate large growing majority concerned present future effect runaway climate change longer solved fifth task one way take advantage awaken people action Climaterelated trigger event happen time mostly across three category climate impact storm flood fire drought heat cold wave others b policy legal law enacted judge rule related issue c industry corporate announce major corporate policy change trade association launch related initiative task would assemble fourth final group Trigger Events Team serve like war room rapidreaction force toTags Management Sustainability Future Climate Change Predict Column
3,712
The Biochemistry of Lust: How Hormones Impact Women’s Sexuality
Estrogen (Marilyn Monroe: The Venus Hormone) Estrogen holds court on the dance floor. She is having a ball flirting and dancing. Her ample backside swings with the rhythm of the music, while her satiny skin glows. Estrogen is a total package deal with a quick wit and a strong mind. But yeah, her physical allure doesn’t hurt either. She’s impossible to ignore. Her laugh is contagious, and her hourglass curves make all her dance partners weak in the knees. She’s not afraid to make a fool of herself either, and she falls down a few times while dancing. But that’s okay, her bones are strong and resilient. She notices Testosterone checking her out by the buffet. Wow, he is soo gorgeous and sexy, he makes her all tingly. She can feel her panties getting moist. Um… Estrogen, the Marilyn Monroe of hormones, dominates the first half of a woman’s menstrual cycle and is opposed by progesterone in the second half. Estrogen comes in three forms: estradiol (E2), estriol (E3) and estrone (E1). Estradiol is the most biologically active hormone for premenopausal women, while estrone is more active after menopause. Estriol is primarily active during pregnancy (2). Remember what I said about hormones being shapeshifters? One of the most fascinating facts in human physiology has got to be the fact that estradiol, the hormone most associated with femininity, is synthesized from testosterone (16). Estrogen is responsible for more than just breasts and baby-making. It affects every part of a woman’s body and brain, and it has a profound impact on her sexual functioning. It is responsible for maintaining pelvic blood flow, the creation of vaginal lubrication, as well as the maintenance of genital tissue (17). When estrogen is in short supply, women struggle with diminished genital and nipple sensitivity, difficulty achieving orgasm, increased sexual pain, and inadequate lubrication (17). Women with low estrogen are at risk for vaginal atrophy, which has to be among the most delightful aspects of aging (NOT). Another issue I am intimately familiar with. As I moved deeper into the menopausal rabbit hole, I experienced vaginal irritation, dryness, and constant UTIs, all of which were due to estrogen bidding me a fond farewell. When estrogen leaves the building, the vaginal lining (the epithelium) gets thinner, and the vagina itself may shrink and lose muscle tone and elasticity. And as for those persistent UTIs that bedevil menopausal women like me, they are due to the increase in vaginal pH. When the vagina becomes more alkaline, it kills off good bacteria, leaving a woman a sitting duck for a number of vaginal and urinary tract infections. Remember this, a happy pussy is an acidic one (ideal pH 3.8–4.2). The normal level of estradiol in a menstruating woman’s body is around 50 to 400 picograms per milliliter (pg/mL). This fluctuates with the menstrual cycle. Below this threshold, there is an increased risk for the problems mentioned above. When women are in menopause, estradiol levels are often as low as 10–20 (pg/mL) (17). Estrogen: The True Lady of Lust? Testosterone, the loud and proud androgen, is usually assumed to be the sexual mover and shaker for both men and women. Estrogen, it has been argued, just gives a woman a wet vagina; the motivation to use it comes from her testosterone. This is the view expressed by Theresa Crenshaw in The Alchemy of Love and Lust. In contrast to men, she argues that women have four sexual drives: 1. Active (aggressive) 2. Receptive (passive) 3. Proceptive (seductive) and 4. 
Adverse (reverse). These drives are representative of our hormonal makeup. She differentiates along standard party lines and claims that testosterone fuels women’s active sex drive, while estrogen fuels the receptive and proceptive drives. According to Crenshaw, ever contrary progesterone doesn’t fuel anything but a nap (the adverse drive). However, some researchers believe that estrogen’s role is underestimated in female desire and that the conversion of testosterone to free estrogen in women might play a major role in female desire (18). “Free” in this case means a hormone that is biologically active and available for our bodies to use. According to Emory professors Cappelletti and Wallen, for most female mammals the most important hormone governing sexual behavior is estrogen. That would make human females rather weird and unique if our sexuality was testosterone-driven. Plus, research does show that estrogen alone is capable of increasing desire in women (19). Estrogen Replacement Mode of delivery (e.g., by mouth, or transdermal) is an important and possibly overlooked factor when looking into HRT. One major problem with oral estrogens like Premarin (aside from the fact they’re made of horse pee!) is that when estrogen is taken by mouth it raises levels of SHBG (sex hormone-binding globulin). SHBG is a protein secreted by the liver that binds both estrogen and androgens. It prefers androgens. This means that it will reduce free androgens and estrogens, both of which are associated with sex drive. In a randomized, controlled study of 670 women comparing transdermal estrogen therapy with oral (Premarin), it was found that transdermal estrogen improved sexual functioning according to scores on a self-report measure. Women who used horse pee (Premarin) showed no improvement in sexual functioning and presumably had to come up with some new hobbies (20). As a side note, I keep visualizing a poor, pregnant mare being badgered by some pharmaceutical rep going, “Just pee in the bucket Seabiscuit; we need the money!” But I digress… Bioidentical Hormone Replacement Women who are interested in HRT often opt for bioidentical hormones. They have become popular for a few reasons. In 2002, the WHI (Women’s Health Initiative) study dropped a bombshell on the world’s menopausal women and linked hormone replacement with a 26% increased risk of breast cancer and an increased risk of cardiovascular events and stroke. Within three months of published reports of the dire findings, prescriptions for hormone therapy (HT) dropped by 63% (21). Also, popular books like The Sexy Years by Suzanne Somers have promoted the use of compounded bioidenticals instead of FDA approved drugs. Compounded bioidentical hormone therapy (CBHT) is custom formulated by a compounding pharmacy and tailored to the individual. They are often perceived as safer and more natural. What Are Bioidentical Hormones? From my readings, this may be short-sighted. First up, let’s talk about what bioidentical hormones are. According to the Endocrine Society, bioidentical hormones are “compounds that have exactly the same chemical and molecular structure as hormones that are produced in the human body.” They are often plant-derived, in comparison to the Premarin and Provera (used in the WHI study), which are a synthetic estrogen synthesized from conjugated horse urine and a synthetic progestin, respectively. Note that Premarin could be considered “natural” given the fact there’s nothing more natural than horse pee! 
However, it isn’t identical to what your body makes. Bioidentical progesterone is made from diosgenin that is derived from wild Mexican yam or soy, while bioidentical estrogen is often synthesized from soy. Both bioidenticals, like all hormone therapies, are extensively processed in a lab (22). The Endocrine Society’s definition is broad and doesn’t refer to the sourcing, manufacturing, or delivery method of bioidenticals. This definition can refer to both FDA approved HRT as well as non-FDA approved hormone replacement. There is no evidence that bioidenticals are safer than synthetic hormones. Nor is there any evidence supporting CBHT as a better alternative. With CBHT there are issues regarding dosage, purity, and strength. According to an article in The Mayo Clinic Proceedings, “Compounded hormone preparations are not required to undergo the rigorous safety and efficacy studies required of FDA-approved HT and can demonstrate wide variation in active and inactive ingredients.” (21). There are several FDA approved bioidentical hormones that are on the market. They differ from CBHT in that they have some science behind them and they are carefully formulated and manufactured according to strict specifications (21). Is Hormone Therapy Safe? I think it depends on who you ask and what you read. It also depends on your particular situation. I recommend any woman interested in hormone replacement do some serious study on this issue. The WHI study scared the bejesus out of women and their doctors, and created a lot of hysteria. There were several issues with that study that are beyond the scope of this article. One book I recommend is Menopause: Change, Choice, and HRT by Australian physician Dr. Barry Wren. He goes into detail about the WHI study and its shortcomings, including the fact that the women who participated in the study were older (average age 63), smokers/former smokers, overweight/obese, and in poor health. There is a critical “window of opportunity” for women to go on HRT. It is recommended that women do it within ten years of their last period. Primarily because going for many years without estrogen can cause permanent changes to the body that HRT could exacerbate. For example, estrogen helps prevent cholesterol from building up in your arteries. After you have been without it for a while, your arteries will likely have some damage. Taking an estrogen, particularly in oral form, increases the presence of liver proteins that cause blood to clot. This factor, combined with atherosclerotic buildup, could lead to an increased risk of stroke or heart attack. But taking estrogen before arterial damage has occurred, and within the 10-year window of opportunity, might reduce your risk of heart attack or stroke (23). Estrogen: Points to Remember
https://kayesmith-21920.medium.com/the-biochemistry-of-lust-how-hormones-impact-womens-sexuality-574040b59ebe
['Kaye Smith Phd']
2020-05-01 02:43:28.540000+00:00
['Health', 'Science', 'Sexuality', 'Sex', 'Women']
Title Biochemistry Lust Hormones Impact Women’s SexualityContent Estrogen Marilyn Monroe Venus Hormone Estrogen hold court dance floor ball flirting dancing ample backside swing rhythm music satiny skin glow Estrogen total package deal quick wit strong mind yeah physical allure doesn’t hurt either She’s impossible ignore laugh contagious hourglass curve make dance partner weak knee She’s afraid make fool ether fall time dancing that’s okay bone strong resilient notice Testosterone checking buffet Wow soo gorgeous sexy make tingly feel panty getting moist Um… Estrogen Marilyn Monroe hormone dominates first half woman’s menstrual cycle opposed progesterone second half Estrogen come three form estradiol E2 estriol E3 estrone E1 Estradiol biologically active hormone premenopausal woman estrone active menopause Estriol primarily active pregnancy 2 Remember said hormone shapeshifters One fascinating fact human physiology got fact estradiol hormone associated femininity synthesized testosterone 16 Estrogen responsible breast babymaking affect every part woman’s body brain profound impact sexual functioning responsible maintaining pelvic blood flow creation vaginal lubrication well maintenance genital tissue 17 estrogen short supply woman struggle diminished genital nipple sensitivity difficulty achieving orgasm increased sexual pain inadequate lubrication 17 Women low estrogen risk vaginal atrophy among delightful aspect aging Another issue intimately familiar moved deeper menopausal rabbit hole experience vaginal irritation dryness constant UTIs due estrogen bidding fond farewell estrogen leaf building vaginal lining epithelium get thinner vagina may shrink lose muscle tone elasticity persistent UTIs bedevil menopausal woman like due increase vaginal pH vagina becomes alkaline kill good bacteria leaving woman sitting duck number vaginal urinary tract infection Remember happy pussy acidic one ideal pH 38–42 normal level estradiol menstruating woman’s body around 50 400 picograms per milliliter pgmL fluctuates menstrual cycle threshold increased risk problem mentioned woman menopause estradiol level often low 10–20 pgmL 17 Estrogen True Lady Lust Testosterone loud proud androgen usually assumed sexual mover shaker men woman Estrogen argued give woman wet vagina motivation use come testosterone view expressed Theresa Crenshaw Alchemy Love Lust contrast men argues woman four sexual drive 1 Active aggressive 2 Receptive passive 3 Proceptive seductive 4 Adverse reverse drive representative hormonal makeup differentiates along standard party line claim testosterone fuel women’s active sex drive estrogen fuel receptive proceptive drive According Crenshaw ever contrary progesterone doesn’t fuel anything nap adverse drive However researcher believe estrogen’s role underestimated female desire conversion testosterone free estrogen woman might play major role female desire 18 “Free” case mean hormone biologically active available body use According Emory professor Cappelletti Wallen female mammal important hormone governing sexual behavior estrogen would make human female rather weird unique sexuality testosteronedriven Plus research show estrogen alone capable increasing desire women19 Estrogen Replacement Mode delivery eg mouth transdermal important possibly overlooked factor looking HRT One major problem oral estrogen’s like Premarin aside fact they’re made horse pee estrogen taken mouth raise level SHBG sex hormonebinding globulin SHBG protein secreted liver bind estrogen androgen prefers androgen mean 
reduce free androgen estrogen associated sex drive randomized controlled study 670 woman comparing transdermal estrogen therapy oral Premarin found transdermal estrogen improved sexual functioning according score selfreport measure Women used horse pee Premarin showed improvement sexual functioning presumably come new hobby 20 side note keep visualizing poor pregnant mare badgered pharmaceutical rep going “Just pee bucket Seabiscuit need money” digress… Bioidentical Hormone Replacement Women interested HRT often opt bioidentical hormone become popular reason 2002 WHI Women’s Health Initiative study dropped bombshell world’s menopausal woman linked hormone replacement 26 increased risk breast cancer increased risk cardiovascular event stroke Within three month published report dire finding prescription hormone therapy HT dropped 63 21 Also popular book like Sexy Years Suzanne Somers promoted use compounded bioidenticals instead FDA approved drug Compounded bioidentical hormone therapy CBHT custom formulated compounding pharmacy tailored individual often perceived safer natural Bioidentical Hormones reading may shortsighted First let’s talk bioidentical hormone According Endocrine Society bioidentical hormone “compounds exactly chemical molecular structure hormone produced human body” often plantderived comparison Premarin Provera used WHI study synthetic estrogen synthesized conjugated horse urine synthetic progestin respectively Note Premarin could considered “natural” given fact there’s nothing natural horse pee However isn’t identical body make Bioidentical progesterone made diosgenin derived wild Mexican yam soy bioidentical estrogen often synthesized soy bioidenticals like hormone therapy extensively processed lab 22 Endocrine Society’s definition broad doesn’t refer sourcing manufacturing delivery method bioidenticals definition refer FDA approved HRT well nonFDA approved hormone replacement evidence bioidenticals safer synthetic hormone isn’t evidence supporting CBHT better alternative CBHT issue regarding dosage purity strength According article Mayo Clinic Proceedings “Compounded hormone preparation required undergo rigorous safety efficacy study required FDAapproved HT demonstrate wide variation active inactive ingredients” 21 several FDA approved bioidentical hormone market differ CBHT science behind carefully formulated manufactured according strict specification 21 Hormone Therapy Safe think depends ask read also depends particular situation recommend woman interested hormone replacement serious study issue WHI study scared bejesus woman doctor created lot hysteria several issue study beyond scope article One book recommend Menopause Change Choice HRT Australian physician Dr Barry Wren go detail WHI study shortcoming including fact woman participated study older average age 63 smokersformer smoker overweightobese poor health critical “window opportunity” woman go HRT recommended woman within ten year last period Primarily going many year without estrogen cause permanent change body HRT could exacerbate example estrogen help prevent cholesterol building artery without artery likely damage Taking estrogen particularly oral form increase presence liver protein cause blood clot factor combined arthroscopic buildup could lead increased risk stroke heart attack taking estrogen arterial damage occurred within 10year window opportunity might reduce risk heart attack stroke 23 Estrogen Points RememberTags Health Science Sexuality Sex Women
3,713
Hands-on: Customer Segmentation
Knowing your customers is the foundation of any successful business. The better you understand their needs, their desires and wishes, the better you may serve them. That’s the reason why market or customer segmentation is so useful in the long run: You create profound knowledge about your customers, their characteristics and their behaviours to finally improve your business model, marketing campaigns, product features and many more… Hands-on: Customer Segmentation (Photo by Max McKinnon on Unsplash) In this article you will learn all necessary basics about customer segmentation and the application of an unsupervised learning method with the help of Python to finally build clusters for a customer sample dataset. This tutorial is set up in a way that you will succeed in identifying clusters with little to even no prior coding knowledge. Have fun! How will we segment our customers? We will start out by learning the basic theory about clustering and clustering with K-means. Afterwards the ingested theory will be applied to our sample customer segmentation dataset, which we will firstly explore, secondly prepare and thirdly cluster with the help of the K-means algorithm. High Level Process To segment our customers we are working with Python and its amazing open source libraries. First of all we use Jupyter Notebook, an open-source application for live coding that allows us to tell better stories with our code. Furthermore we import Pandas, which puts our data in an easy-to-use structure for data analysis and data transformation. To make data exploration more graspable, we use Plotly to visualise some of our insights. Finally, with Scikit-learn we will split our dataset and train our predictive model. Tech Stack To Build Segments Basics about clustering with K-Means While we distinguish between supervised and unsupervised learning, clustering belongs to the unsupervised learning algorithms and is probably considered to be the most important one. Machine Learning Overview We are given a collection of unlabelled data, meaning the dataset is not tagged with a desired outcome. The goal is to identify patterns in this data. Clustering describes the process of finding structures where similar points are grouped together. Following that definition, a cluster is a collection of similar data points. Dissimilar data points shall belong to different clusters. Clustering There are various clustering algorithms for identifying these patterns, such as DBSCAN, Hierarchical Clustering or Expectation Maximisation Clustering. While each algorithm has its individual strengths, we are starting with K-means as one of the simplest clustering algorithms. How does the K-Means algorithm work? K-means belongs to the centroid-based cluster algorithms and assigns each object or datapoint to the nearest cluster center in such a way that the squared distances to the cluster centers are minimised. “K” stands in this context for the number of clusters, or more specifically cluster centroids. The objective is to minimise the within cluster sum of squares: Step 1: Initialisation As a first step we have to choose the number of centroids for our clustering algorithm. While a good choice can save a lot of effort, a bad one may result in missing out on natural clusters. But how can we choose the optimal number of clusters? For our purpose this will be done with the elbow method, a heuristic approach towards finding the right number of clusters. 
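The objective mentioned above lost its formula in this copy. A standard way to write the within cluster sum of squares, in common notation rather than necessarily the exact figure the author showed, is

WCSS = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^{2}

where C_k is the set of points assigned to cluster k and \mu_k is the centroid of that cluster. K-means searches for the assignments and centroids that make this sum as small as possible.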
Elbow Method Example (Code Further Below) Recall that the basic idea of K-means clustering is to minimise the within cluster sum of squares. It measures the compactness of the clusters and we want it to be as small as possible. For the elbow method the sum of squares is calculated for an increasing number of clusters and plotted accordingly. We then choose the number of clusters where the sum of squares no longer changes significantly — basically where we can see the “elbow” in our plot. Step 2: Building the Clusters Secondly we determine the minimum distance for each datapoint to the nearest cluster centroid. No worries, this does not have to be done manually but will be solved by Python. It is just good to understand what the algorithm is basically doing repetitively. Step 3: Update & Iterate Thirdly the cluster means or centroids have to be updated. This is done until there are no more changes in the assignment of data points towards other centroids. While dividing the clustering process with K-Means into three simple steps sounds pretty straightforward, there are certain disadvantages we should be aware of. One is that K-Means is very sensitive towards outliers, as they strongly influence the within cluster sum of squares. Therefore we should consider removing them before applying the algorithm. A second disadvantage is the random choice of initial cluster centroids with K-Means. This may leave us ending up with slightly different results on different runs of the unsupervised learning algorithm, which is not optimal for a reproducible research approach. Nevertheless, by understanding these weaknesses we can still apply K-Means, especially when we want quick and practically useful results. The Dataset For the purpose of this project we are working with a publicly available dataset from Kaggle. The dataset includes some basic data about the customer such as age, gender, annual income, customerID and spending score. In this scenario we want to find out which customer segments show which characteristics in order to plan an adequate marketing strategy with individual campaigns for each segment.
# for basic mathematic operations
import numpy as np
import pandas as pd

# for visualizations
import matplotlib.pyplot as plt
import seaborn as sns

data = pd.read_csv('../Clustering/Mall_Customers.csv')
data.head(10)
For better insights and unprepared datasets it is recommended to do an explorative data analysis, data cleaning and data preparation upfront. For the sole purpose of demonstrating K-Means and customer segmentation we will keep this to an absolute minimum and focus on our main objective. The Clustering — Elbow Method Once the dataset is loaded and cleaned, we can start clustering the dataset. In this case we will cluster initially according to Annual Income and Spending Score, as our main objective is a marketing campaign targeting people with high income who are willing to spend. To do so we have to select all rows and columns 3 and 4.
x = data.iloc[:, [3, 4]].values
As previously described we have to find out which number of centroids is optimal to minimise the within cluster sum of squares. To do so we run our code for one to ten clusters with the help of a for loop. The result for each number of clusters is then appended to the wcss list. 
from sklearn.cluster import KMeans

wcss = []
for i in range(1, 11):
    km = KMeans(n_clusters = i, init = 'k-means++', max_iter = 300, n_init = 10, random_state = 0)
    km.fit(x)
    wcss.append(km.inertia_)

plt.plot(range(1, 11), wcss, c="purple")
plt.title('The Elbow Method', fontsize = 30)
plt.xlabel('No of Clusters', fontsize = 20)
plt.ylabel('WCSS', fontsize = 20)
plt.show()
To identify the optimum number of centroids we have to look for the “elbow” by plotting each within cluster sum of squares value on the y-axis and the number of centroids on the x-axis. Elbow Method It is found that after five clusters the WCSS value decreases only marginally when adding more clusters. In this case we got what we want: the optimum number of clusters seems to be five. The Clustering — Visualising K-Means What we want to do next is visualise our five clusters in order to identify our target customers and have an opportunity to present our results to colleagues and other stakeholders. To do so we run our K-Means algorithm and determine the clusters within Annual Income and Spending Score (the previously defined x).
km = KMeans(n_clusters = 5, init = 'k-means++', max_iter = 300, n_init = 10, random_state = 0)
y_means = km.fit_predict(x)
With the prediction alone we cannot see much and have to use matplotlib to create a nice graph for our clusters.
plt.scatter(x[y_means == 0, 0], x[y_means == 0, 1], s = 100, c = 'orangered', label = 'potential')
plt.scatter(x[y_means == 1, 0], x[y_means == 1, 1], s = 100, c = 'darksalmon', label = 'creditcheck')
plt.scatter(x[y_means == 2, 0], x[y_means == 2, 1], s = 100, c = 'goldenrod', label = 'target')
plt.scatter(x[y_means == 3, 0], x[y_means == 3, 1], s = 100, c = 'magenta', label = 'spendthrift')
plt.scatter(x[y_means == 4, 0], x[y_means == 4, 1], s = 100, c = 'aquamarine', label = 'careful')
plt.scatter(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1], s = 200, c = 'darkseagreen', label = 'centroid')
sns.set(style = 'whitegrid')
plt.title('K Means Clustering', fontsize = 30)
plt.xlabel('Annual Income', fontsize = 20)
plt.ylabel('Spending Score', fontsize = 20)
plt.legend()
plt.grid()
plt.show()
The visualisation allows us to clearly identify the five clusters. The five centroids are visible in dark green. Our main target group, named “target” and shown in gold, has the highest spending score and annual income. Clustering with K Means Furthermore we can find four additional groups that may be interesting for us to approach. In this case we named them “potential”, “creditcheck”, “spendthrift” and “careful”. Well done — we did some very basic clustering to segment a customer dataset. What’s next? For now we have segmented our customers according to Annual Income and Spending Score. But of course there are other factors that may influence your decision on which customers you want to target. In our example you could further investigate “Age” as a feature and see its impact on the clustering results, as sketched below. For businesses it is most common to segment their customers according to four different categories: Business Customer Segmentation Categories After expanding, exploring and defining the different customer segments, be creative about how to use your gained knowledge. Optimise pricing, reduce customer churn, increase retention, improve your product, … there are endless opportunities. 
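To act on the “Age” suggestion above, here is a minimal sketch of how that could look. It assumes the data frame loaded earlier and the usual Kaggle column names 'Age', 'Annual Income (k$)' and 'Spending Score (1-100)'; the choice of five clusters is simply carried over from the two-feature run (ideally you would re-run the elbow method on the new feature set), so treat the names and parameters as assumptions rather than part of the original tutorial.
# Sketch: extend the clustering with 'Age' and profile the resulting segments.
# Assumes 'data' is the dataframe loaded above and that it contains the
# columns 'Age', 'Annual Income (k$)' and 'Spending Score (1-100)'.
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

features = ['Age', 'Annual Income (k$)', 'Spending Score (1-100)']
X = data[features].values

# Standardise so that income (in k$) does not dominate age and score.
X_scaled = StandardScaler().fit_transform(X)

# Five clusters carried over from the 2-feature run; re-check with the elbow method.
km3 = KMeans(n_clusters = 5, init = 'k-means++', n_init = 10, random_state = 0)
data['segment'] = km3.fit_predict(X_scaled)

# Profile each segment: mean age, income and spending score, plus segment size.
profile = data.groupby('segment')[features].mean().round(1)
profile['count'] = data['segment'].value_counts().sort_index()
print(profile)
Scaling is added here because age, income and spending score live on very different ranges; without it, the income axis would dominate the distance calculation and largely dictate the clusters.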
Now it’s up to you :)
Articles related to this one:
Hands-on: Predict Customer Churn
Applied Machine Learning For Improved Startup Valuation
Hands-on: Setup Your Data Environment With Docker
Eliminating Churn is Growth Hacking 2.0
Misleading with Data & Statistics
https://towardsdatascience.com/hands-on-customer-segmentation-9aeed83f5763
[]
2020-12-29 14:29:05.675000+00:00
['Unsupervised Learning', 'Customer Segmentation', 'Python', 'Clustering', 'Data Science']
Title Handson Customer SegmentationContent Knowing customer foundation successful business better understand need desire wish better may serve That’s reason market customer segmentation useful long run create profound knowledge customer characteristic behaviour finally improve business model marketing campaign product feature many more… Handson Customer Segmentation Photo Max McKinnon Unsplash article learn necessary basic customer segmentation application unsupervised learning method help Python finally build cluster customer sample dataset tutorial set way succeed identifying cluster little even prior coding knowledge Fun segment customer start learning basic theory clustering clustering Kmeans Afterwards ingested theory applied sample customer segmentation dataset firstly explore secondly prepare thirdly cluster dataset help Kmeans algorithm High Level Process segment customer working Python its’ amazing open source library First use Jupyter Notebook used opensource application live coding allows u tell better story code Furthermore import Pandas put data easytouse structure data analysis data transformation make data exploration graspable use Plotly visualise insight Finally Scikitlearn split dataset train predictive model Tech Stack Build Segments Basics clustering KMeans distinguish supervised unsupervised learning clustering belongs unsupervised learning algorithm probably considered important one Machine Learning Overview Given collection unlabelled data meaning dataset tagged desired outcome goal identify pattern data Clustering describes process finding structure similar point grouped together Following definition cluster collection similar data point Dissimilar data point shall belong different cluster Clustering various clustering algorithm identifying pattern DBCSAN Hierarchical Clustering Expectation Maximisation Clustering algorithm individual strength starting Kmeans one simplest clustering algorithm KMean algorithm work Kmeans belongs centroidbased cluster algorithm assigns object datapoint nearest cluster center way squared distance cluster minimised “K” stand context amount cluster specifically cluster centroid objective minimise within cluster sum square Step 1 Initialisation first step choose amount centroid clustering algorithm good choice save lot effort bad one may result missing natural cluster choose optimal number cluster purpose done elbow method heuristic approach towards finding right amount cluster Elbow Method Example Code Recall basic idea Kmeans clustering minimise within cluster sum square measure compactness cluster want small possible elbow method sum square calculated decreasing amount cluster plotted accordingly choose number cluster sum square change significantly — basically see “elbow” plot Step 2 Building Clusters Secondly determining minimum distance datapoint nearest cluster centroid worry done manually solved Python good understand algorithm basically repetitively Step 3 Update Iterate Thirdly cluster mean centroid updated done change assignment data point towards centroid dividing clustering process KMeans three simple step sound pretty straightforward certain disadvantage aware One KMeans sensitive towards outlier strongly influence within cluster sum square Therefore consider removing applying algorithm Second disadvantage random choice cluster centroid KMeans may leaf u ending slightly different result different run unsupervised learning algorithm optimal reproducible research approach Nevertheless understanding weakness still apply KMeans 
especially want quick practical useful result Dataset purpose project working publicly available dataset Kaggle dataset includes basic data customer age gender annual income customerID spending score scenario want find customer segment show characteristic order plan adequate marketing strategy individual campaign segment basic mathematic operation import numpy np import panda pd visualization import matplotlibpyplot plt import seaborn sn data pdreadcsvClusteringMallCustomerscsv datahead10 better insight unprepared datasets recommended explorative data analysis data cleaning data preparation upfront sole purpose demonstrating KMeans customer segmentation keep absolute minimum focus main objective Clustering — Elbow Method dataset loaded cleaned start clustering dataset case cluster initially according Annual Income Spending Score main objective marketing campaign targeting people high income willing spend select row column 3 4 x datailoc 3 4values previously described find amount centroid optimal amount minimise within cluster sum square run code one ten cluster help loop result amount cluster appended wcss list sklearncluster import KMeans wcss range1 11 km KMeansnclusters init kmeans maxiter 300 ninit 10 randomstate 0 kmfitx wcssappendkminertia pltplotrange1 11 wcss cpurple plttitleThe Elbow Method fontsize 30 pltxlabelNo Clusters fontsize 20 pltylabelWCSS fontsize 20 pltshow identify optimum amount centroid look “elbow” plotting within cluster sum square value yAxis amount centroid xAxis Elbow Method found five cluster wcss value decreasing marginally adding cluster case got want optimum amount cluster seems five Clustering — Visualising KMeans want next visualising five cluster order identify target customer opportunity present result colleague stakeholder run KMeans algorithm determine cluster within Annual Income spending score previously defined x km KMeansnclusters 5 init kmeans maxiter 300 ninit 10 randomstate 0 ymeans kmfitpredictx prediction alone cannot see much use plotly create nice graph cluster pltscatterxymeans 0 0 xymeans 0 1 100 c orangered label potential pltscatterxymeans 1 0 xymeans 1 1 100 c darksalmon label creditcheck pltscatterxymeans 2 0 xymeans 2 1 100 c goldenrod label target pltscatterxymeans 3 0 xymeans 3 1 100 c magenta label spendthrift pltscatterxymeans 4 0 xymeans 4 1 100 c aquamarine label careful pltscatterkmclustercenters0 kmclustercenters 1 200 c darkseagreen label centroid snssetstyle whitegrid plttitleK Means Clustering fontsize 30 pltxlabelAnnual Income fontsize 20 pltylabelSpending Score fontsize 20 pltlegend pltgrid pltshow visualisation allows u clearly identify five cluster five centroid visible darkgreen main target group named “target” gold color highest spending score annual income Clustering K Means Furthermore find four additional group may interesting u approach case named “potential” “creditcheck” “spendthrift” “careful” Well done — basic clustering segment customer dataset What’s next segmented customer according Annual Income Spending Score course factor may influence decision customer want target example could investigate “Age” feature see impact clustering result business common segment customer according four different category Business Customer Segmentation Categories expanding exploring defining different customer segment creative use gained knowledge Optimise pricing reduce customer churn increase retention improve product … endless opportunity it’s Articles related one Handson Predict Customer Churn Applied Machine Learning 
Improved Startup Valuation Handson Setup Data Environment Docker Eliminating Churn Growth Hacking 20 Misleading Data Statistics Tags Unsupervised Learning Customer Segmentation Python Clustering Data Science
3,714
9 Edits That Will Improve Your LinkedIn Profile
9 Edits That Will Improve Your LinkedIn Profile Updating your LinkedIn profile to increase inbound leads and elevate yourself into a thought leader. LinkedIn isn’t just for finding new jobs, nor is it only a place to float in a state of passively looking. It’s a platform that can be leveraged to reduce CTA’s, increase brand awareness, and elevate oneself into a thought leader, too. Employees are the first paid ambassadors of any brand. The employee should want their employer to succeed and in their capacity leverage any tools that might drive business to their employer. LinkedIn, the professional networking platform, is a tool that has become a premier source of organic impressions and lead generation for those on it. Even those leading sole proprietorships, or personal brands, can leverage LinkedIn. User profiles on LinkedIn are digital resumes and represent the individual as much as their employer, and with a few adjustments, new copy, and strategic backlinks, every profile in a company can be improved. Let’s begin. Write an engaging headline Before anyone lands on your profile, they’ll search for you or see your comment on a post in their timeline. Users will see your name, the level of connection you are to them, and your headline. LinkedIn profile headlines need to be enticing in order to drive profile visits. Let’s consider the process of a user seeing your image, connection level, and headline in their timeline, and then clicking to visit your profile an “open”. You want a strong open rate, and a well-crafted headline will improve yours. There is room for about 74 characters in a LinkedIn profile headline. Use these wisely to say what you do and who you do it for. There are various recipes to follow when crafting yours, my suggestion is to use the professional verb of your work, the product or service you offer, and a target audience. For example, my headline is 65 characters long: Creating & curating content that younger generations engage with. If I worked on a running sneaker it might be Making Running On All Lands More Comfortable or if I was at a plant-based meat substitute it might be Making Your Plants Taste Like Meat. The headline says what you do, not just your title, and is to be crafted like the subject line of an email you want people to read. Here’s what it will look like in search and timeline. Design a new cover image Your headline worked, and users are beginning to visit your profile, which increases your open rate. Great job. The first place that the eyes of a LinkedIn profile visitor land is the 1440 x 425 px banner image at the top of your profile. What’s yours look like? The LinkedIn banner image is a first impression opportunity to reinforce you, your brand, and your business. For best practices, if you are part of a company, ask your marketing or content team to provide you with a branded LinkedIn cover image to use. Ideally, a company will create 4–5 of these and offer them from a menu in a shared google drive for employees to use at will. This way employees can continue to refresh their page, and marketing teams can update the drive with applicable images. If you are not part of a company, open an account on Canva, and design your own cover image. An example I came across was on the profile of employees at the non-alcoholic beer company Athletic Brewing. It’s sleek, the awards make me believe it’s good, and it tells me what their product is. It’s enough to keep me on the page and scroll down. 
Bolster your “About” section Are you familiar with what bounce rates are? A bounce rate represents the percentage of visitors who enter a website and then leave rather than continuing to view other pages within the same site. On LinkedIn, consider your bounce rate to be whether profile visitors scroll down and discover all that you do, or whether they leave your page. After landing on your page and seeing your cover image, profile visitors will scroll down past your profile picture, headline, and location, and arrive at your “About” section. “About” sections are opportunities to be human, or as human as one can be on a screen. It’s tempting to include your full bio here but don’t. Keep it short, you don’t want to overwhelm someone. You want to usher their scroll to what comes next (more on that in a second). A great example of a brand-focused “About” section is that of NOOMA Founder Jarred Smith, shared below with my LinkedIn “About” section. Feature relevant content Many simply don’t use the “Featured” content section, and that decision is a major miss. The LinkedIn profile “Featured” section is a free lead generation and backlink machine. It’s the first opportunity to intentionally limit your bounce rate by directing a user to a destination of your choice. At most, 2.5 pieces of featured content will be visible on your profile and best practice is to feature a minimum of 4 pieces of content. As for the types of content, there is flexibility here but I would focus on an article you wrote, videos you’ve produced, links to your website or portfolio, or a piece of content that featured you. This section is an opportunity to elevate yourself into a thought leader and can be used as a digital hype sheet. At the moment, I am featuring an article I wrote that went viral, a link to my menu of best writings, a blog full of writing tips, and a link to a panel I moderated at SXSW. Proudly share your experiences The “Experience” section of a LinkedIn profile is where you share all that you’ve done professionally. Unlike the headline, this is where you include your professional working title as outlined by your employer. Each experience on your profile offers space to include the specifics of your role. Here, include the details of your day, your accomplishments, the success you had, the stack you used, the brands and projects you worked on, or any other part of that experience that you are proud of and that elevates you. Do this for each position you have held, and for past experiences include why you moved on from that company. Transparency is great and activates the Law of Candor, which will disarm profile visitors (bounce rate decreased!). Oh! I almost forgot. When you add an experience and press save, refresh your page to make sure the company icon is populated with the correct image. An empty square is lazy: “Your resume says digital-savvy but you can’t even add a logo?” Backlink each of your experiences Remember when you added 4 pieces of content in the “Featured” section and your inbound traffic grew? Well, you can direct traffic from each experience on your LinkedIn profile as well. The magic number, visually, is 2, and you can link each piece by clicking the pencil in the upper right-hand section of your experience and scrolling down to the “Media” section where the “Link” button will be. As to which type of content you should be linking here, my suggestion is that one piece direct traffic to your business’s top-performing landing page and that the second piece be the best piece of press your company has gotten.
To know which landing page is your company’s top-performing, ask your marketing team where they’d like you to direct traffic to from your LinkedIn. This may be a specific case study, a video embedded on the website, or a social media channel so that the user’s digital profiles can be added to the company’s audiences for retargeting campaigns (Yep, digital marketers, each of your employees can funnel thousands of profiles into your audiences). As for which piece of press, if you don’t know which piece, just ask. But then set up Google alerts for your company so you stay in-the-know. Start using recommendations I’m not too high on the “Skills & Endorsements” section of a LinkedIn profile. There’s a very low investment of time required to endorse someone and the available categories are often misaligned with the person and their work. I’d rather discover you through your About, Featured, and Experience sections, which I engaged with earlier on my profile visit. What I do trust are recommendations. The “Recommendations” section is a bit more out of reach, requires a bit more time and thought, and is underused, so when I come across a profile full of positive ones, a level of competency is communicated immediately. To get started, think of five co-workers and five people you have a professional relationship with who exist outside your company, and politely ask them to recommend you. You can even kick things off by recommending each of them first! A great habit to build is to recommend people at the completion of the projects you work on together, inside or outside the company. It’s a good look and is a very selfless way to express public gratitude and appreciation of another. Follow your company’s LinkedIn page Click on the company name listed in your current experience and it will take you to your company’s LinkedIn page (this is a great way to test that you added your experience correctly). In the bottom left-hand corner of the page’s header will be a button that invites you to Follow, press it. Add each of your teammates Your profile is looking good, now it’s time to go show it off. Start by adding all of the members of your company. Click on the company name listed in your current experience and it will take you to your company’s LinkedIn page. In the bottom right-hand corner of the page’s header will be text that reads See all # employees on LinkedIn → Click on that and begin connecting with your teammates. That didn’t take long, right? Maybe one hour? Now that your profile has been set up as a net to capture the interest of all who visit it, I want to offer a few quick tips for content sharing. Slack → open a company-wide #linkedincontent Slack channel dedicated to serving as a menu of content for employees to share on LinkedIn. Include case studies, product launches, product updates, blog posts, podcast episodes, new services, and press. Google Alerts → Set up Google alerts for your company and your category to stay in-the-know. The links included can help to automate sharing and be used to populate your company-wide #linkedincontent Slack channel. Tags → For every post on LinkedIn, tag your company using the @ function. Hashtags → For every post on LinkedIn, use the most relevant 1–2 hashtags. Something to consider is who views the hashtag. For example, if you work in content creation for the consumer packaged goods industry don’t use the #creative or #content hashtags, use #CPG because leaders and followers of the space are following that hashtag.
Message me on LinkedIn if you have any questions, and good luck! The difference between Seth Godin, The Morning Brew, and me? I respect your inbox, curating only one newsletter per month — Join my behind-the-words monthly newsletter to feel what it’s like to receive a respectful newsletter.
https://medium.com/the-post-grad-survival-guide/9-edits-that-will-improve-your-linkedin-profile-966cab9316bd
['Richie Crowley']
2020-07-17 06:41:02.653000+00:00
['Social Media', 'Business', 'Marketing', 'Creativity', 'Work']
Title 9 Edits Improve LinkedIn ProfileContent 9 Edits Improve LinkedIn Profile Updating LinkedIn profile increase inbound lead elevate thought leader LinkedIn isn’t finding new job place float state passively looking It’s platform leveraged reduce CTA’s increase brand awareness elevate oneself thought leader Employees first paid ambassador brand employee want employer succeed capacity leverage tool might drive business employer LinkedIn professional networking platform tool become premier source organic impression lead generation Even leading sole proprietorship personal brand leverage LinkedIn User profile LinkedIn digital resume represent individual much employer adjustment new copy strategic backlinks every profile company improved Let’s begin Write engaging headline anyone land profile they’ll search see comment post timeline Users see name level connection headline LinkedIn profile headline need enticing order drive profile visit Let’s consider process user seeing image connection level headline timeline clicking visit profile “open” want strong open rate wellcrafted headline improve room 74 character LinkedIn profile headline Use wisely say various recipe follow crafting suggestion use professional verb work product service offer target audience example headline 65 character long Creating curating content younger generation engage worked running sneaker might Making Running Lands Comfortable plantbased meat substitute might Making Plants Taste Like Meat headline say title crafted like subject line email want people read Here’s look like search timeline Design new cover image headline worked user beginning visit profile increase open rate Great job first place eye LinkedIn profile visitor land 1440 x 425 px banner image top profile What’s look like LinkedIn banner image first impression opportunity reinforce brand business best practice part company ask marketing content team provide branded LinkedIn cover image use Ideally company create 4–5 offer menu shared google drive employee use way employee continue refresh page marketing team update drive applicable image part company open account Canva design cover image example came across profile employee nonalcoholic beer company Athletic Brewing It’s sleek award make believe it’s good tell product It’s enough keep page scroll Bolster “About” section familiar bounce rate bounce rate represents percentage visitor enter website leave rather continuing view page within site LinkedIn consider bounce rate profile visitor scroll discover leave page landing page seeing cover image profile visitor scroll past profile picture headline location arrive “About” section “About” section opportunity human human one screen It’s tempting include full bio don’t Keep short don’t want overwhelm someone want usher scroll come next second great example brand focused “About” section NOOMA Founder Jarred Smith shared LinkedIn “About” section Feature relevant content Many simply don’t use “Featured” content section decision major miss LinkedIn profile “Featured” section free lead generation backlink machine It’s first opportunity intentionally limit bounce rate directing user destination choice 25 piece featured content visible profile best practice feature minimum 4 piece content type content flexibility would focus article wrote video you’ve produced link website portfolio piece content featured section opportunity elevate thought leader used digital hype sheet moment featuring article wrote went viral link menu best writing blog full writing tip link panel 
moderated SXSW Proudly share experience “Experience” section LinkedIn profile share you’ve done professionally Unlike headline include professional working title outlined employer experience profile offer space include specific role include detail day accomplishment success stack used brand project worked part experience proud elevates position held past experience include moved company Transparency great activates Law Candor disarm profile visitor bounce rate decreased Oh almost forgot add experience press save refresh page make sure company icon populated correct image empty square lazy “Your resume say digitalsavvy can’t even add logo” Backlink experience Remember added 4 piece content “Featured” section inbound traffic grew Well direct traffic experience LinkedIn profile well magic number visually 2 link piece clicking pencil upper righthand section experience scrolling “Media” section “Link” button type content linking suggestion one piece direct traffic business’s top performing landing page second piece best piece press company gotten know landing page company’s topperforming ask marketing team they’d like direct traffic LinkedIn may specific case study video embedded website social medium channel user digital profile added company audience retargeting campaign Yep digital marketer employee funnel thousand profile audience piece press don’t know piece ask set google alert company stay intheknow Start using recommendation I’m high “Skills Endorsements” section LinkedIn profile There’s low investment time required endorse someone available category often misaligned person work I’d rather discover Featured Experience section engaged earlier profile visit trust recommendation “Recommendations” section bit reach requires bit time thought underused come across profile full positive one level competency communicated immediately get started think five coworkers five people professional relationship exist outside company politely ask recommend even kick thing recommending first great habit build recommend people completion project work together inside outside company It’s good look selfless way express pubic gratitude appreciation another Follow company’s LinkedIn page Click company name listed current experience take company’s LinkedIn page great way test added experience correctly bottom lefthand corner page’s header button invite Follow press Add teammate profile looking good it’s time go show Start adding member company Click company name listed current experience take company’s LinkedIn page bottom righthand corner page’s header text read See employee LinkedIn → Click begin connecting teammate didn’t take long right Maybe one hour profile set net capture interest visit want offer quick tip content sharing Slack → open companywide linkedincontent Slack channel dedicated serving menu content employee share LinkedIn Include case study product launch product update blog post podcast episode new service press Google Alerts → Set Google alert company category stay intheknow link included help automate sharing used populate companywide linkedincontent Slack channel Tags → every post LinkedIn tag company using function Hashtags → every post LinkedIn use relevant 1–2 hashtags Something consider view hashtag example work content creation consumer packaged good industry don’t use creative content hashtags use CPG leader follower space following hashtag Message LinkedIn question good luck difference Seth Godin Morning Brew respect inbox curating one newsletter per month — Join behindthewords 
monthly newsletter feel it’s like receive respectful newsletterTags Social Media Business Marketing Creativity Work
3,715
Podcast: How To Handle Success & the Challenges of a Growing R&D Team — Karin Moscovici (Hebrew)
How To Handle Success & the Challenges of a Growing R&D Team — Karin Moscovici Podcast: How To Handle Success & the Challenges of a Growing R&D Team — Karin Moscovici (Hebrew) Riskified Technology Oct 4 There are many technological challenges at a scaling startup, like architecture changes, implementing new technologies, and more. Listen to Karin Moscovici, our VP R&D, on what it’s like to manage a growing R&D organization and create a technological culture. Recorded as part of the Osim Tochna podcast — Click here to hear the full episode.
https://medium.com/riskified-technology/podcast-how-to-handle-success-the-challenges-of-a-growing-r-d-team-karin-moscovici-hebrew-840ad3da8a65
['Riskified Technology']
2020-10-04 14:55:45.615000+00:00
['Development Methods', 'Managment', 'Development', 'Podcast', 'Engineering']
Title Podcast Handle Success Challenges Growing RD Team — Karin Moscovici HebrewContent Handle Success Challenges Growing RD Team — Karin Moscovici Podcast Handle Success Challenges Growing RD Team — Karin Moscovici Hebrew Riskified Technology Follow Oct 4 · 1 min read many technological challenge scaling startup like architecture change implementing new technology Listen Karin Moscovici VP RD it’s like manage growing RD organization create technological culture Recorded part Osim Tochna podcast — Click hear full episodeTags Development Methods Managment Development Podcast Engineering
3,716
Three reasons why you need a Log Aggregation Architecture today
Three reasons why you need a Log Aggregation Architecture today Log aggregation is no longer a commodity but a critical component in container-based platforms Photo by Olav Ahrens Røtne on Unsplash Log Management doesn’t seem like a very fantastic topic. It is not the topic that you see and say: “Oh! Amazing! This is what I was dreaming about my whole life”. No, I’m aware that this is not too fancy, but that doesn’t make it less critical than other capabilities that your architecture needs to have. Since the start of time, we’ve been using log files as the single trustworthy data source when it comes to troubleshooting your applications or knowing what failed in your deployment or any other action regarding a computer. The procedure was easy: launch “something”; “something” failed; check the logs; change something; repeat. And we’ve been doing it that way for a long, long time. Even with other more robust error handling and management approaches like an Audit System, we still go back to logs when we need to get the fine-grained detail about the error: look for a stack trace there, for more detail about the error that was inserted into the Audit System, or for more data than just the error code and description that was provided by a REST API. Systems started to grow and architectures became more complicated, but even with that, we end up with the same method over and over. You’re aware of log aggregation architectures like the ELK stack, commercial solutions like Splunk, or even SaaS offerings like Loggly, but you just think they’re not for you. They’re expensive to buy or expensive to set up, you know your ecosystem very well, and it’s easier to just jump into a machine and tail the log file. Probably you also have your toolbox of scripts to do this as quickly as anyone can open Kibana and search for some instance ID there to see the error for a specific transaction. Ok, I need to tell you something: it’s time to change, and I’m going to explain to you why. Things are changing, and IT and all the new paradigms are based on some common ground: You’re going to have more components that are going to run isolated with their own log files and data. Deployments will be more regular in your production environment, and that means that things are going to go wrong more often (in a controlled way, but more often). Technologies are going to coexist, so logs are going to be very different in terms of patterns and layouts, and you need to be ready for that. So, let’s discuss these three arguments that I hope make you think in a different way about Log Management architectures and approaches. 1.- Your approach just doesn’t scale Your approach is excellent for traditional systems. How many machines do you manage? 30? 50? 100? And you’re able to do it quite fine. Imagine now a container-based platform for a typical enterprise. I think an average number could be around 1000 containers just for business purposes, not counting architecture or basic services. Are you ready to go container by container to check 1000 log streams to find the error? Even if that’s possible, are you going to be the bottleneck for the growth of your company? How many container logs can you keep track of? 2000? As I was saying at the beginning, that just doesn’t scale. 2.- Logs are not there forever And now, having read the first point, you’re probably saying to the screen you’re reading this on: Come on! I already know that logs are not there forever; they get rotated, they get lost, and so on.
Yeah, that’s true, but this is even more important in a cloud-native approach. With container-based platforms, logs are ephemeral, and also, if we follow the 12-factor app manifesto, there is no log file at all: all log traces should be printed to the standard output, and that’s it (see the short sketch below). And when are the logs deleted? When the container fails… and which records are the ones that you need most? The ones from the containers that failed. So, if you don’t do anything, the log traces that you need the most are the ones that you’re going to lose. 3.- You need to be able to predict when things are going to fail Logs are not only valuable when something goes wrong; they are also what lets you detect when something is about to go wrong and predict when things are going to fail. And you need to be able to aggregate that data to generate information and insights from it, and to run ML models that detect whether everything is going as expected or something different is happening that could lead to an issue before it happens. Summary I hope these arguments have made you think that, even for your small-size company or even for your own system, you need to set up a log aggregation approach now and not wait for another moment when it will probably be too late.
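To make the 12-factor point above concrete, here is a minimal, hypothetical sketch (not from the original article) of a Python service configured to send its log traces to standard output instead of a local file, so the container platform or a log shipper can collect them:

import logging
import sys

# Send every log record to stdout; no local log file is created.
# The container runtime or a log shipper is then responsible for
# collecting the stream and forwarding it to the aggregation backend.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s"))

logger = logging.getLogger("payment-service")  # hypothetical service name
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("processing order %s", "A-1234")  # example trace

From there, whichever aggregation stack you choose (ELK, Splunk, Loggly or anything else) only has to consume the stdout stream, which can be collected even when the container itself is gone.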
https://medium.com/dev-genius/three-reasons-why-you-need-a-log-aggregation-architecture-today-e285d18bb1ef
['Alex Vazquez']
2020-07-02 15:53:34.675000+00:00
['Cloud Computing', 'Programming', 'Software Engineering', 'Software Development']
Title Three reason need Log Aggregation Architecture todayContent Three reason need Log Aggregation Architecture today Log Aggregation commodity critical component containerbased platform Photo Olav Ahrens Røtne Unsplash Log Management doesn’t seem like fantastic topic topic see say “Oh Amazing dreaming whole life” I’m aware fancy doesn’t make le critical capability you’re architecture need Since start time we’ve used log file single trustable data source related troubleshoot application know failed deployment action regarding computer procedure easy Launch “something” “something” failed Check log Change something Repeat we’ve way long long time Even robust error handling management approach like Audit System also go back log need get finegrained detail error Look stack trace detail error inserted Audit System data error code description thas provided REST API Systems starting grow architecture became complicated even end method You’re aware log aggregation architecture like ELK stack commercial solution like Splunk even SaaS offering like Loggly think they’re They’re expensive buy expensive set know well ecosystem it’s easier jump machine tail log file Probably also toolbox script quickly anyone open Kibana try search something instance ID see error specific transaction Ok need tell something It’s time change I’m going explain Things changing new paradigm based common ground You’re going component going run isolated log file data Deployments regular production environment mean thing going wrong usual controlled way usual Technologies going coexist log going different term pattern layout need ready let’s discus three argument hope make think different way Log Management architecture approach 1 approach doesn’t scale approach excellent traditional system many machine manage 30 50 100 you’re able quite fine Imagine containerbase platform typical enterprise think average number could around 1000 container business purpose talking architecture basic service able ready go container container check 1000 log stream know error Even that’s possible going bottleneck growth company many container log keep trace 2000 saying beginning scale 2 Logs forever read first topic probably saying screen you’re using read Come already know log they’re getting rotated got lost Yeah that’s true even important cloudnative approach containerbased platform log ephemeral also follow 12factor app manifesto file log log trace printed standard output that’s log deleted container fails record one need one failed don’t anything log trace need one you’re going lose 3 need able predict thing going fail log valid something go wrong adequate detect something going wrong predict thing going fail need able aggregate data able generate information insight able run ML model detect something going expected something different happening could lead issue happens Summary hope argument made think even small size company even system need able set Log Aggregation technique wait another moment probably lateTags Cloud Computing Programming Software Engineering Software Development
3,717
How to Become an DevOps Engineer in 2020
DevOps Practices Now that we’ve gone over what DevOps stands for and what some of its related benefits are, let’s discuss some DevOps practices. A thorough understanding of DevOps methodologies will help clear any lingering queries you may have. That’s not to mention that it will add to your knowledge and come in handy in interviews (which we’ll talk about later). Continuous integration One of the biggest problems resulting from teams working in isolation is merging code when work is completed. It’s not only challenging but also time-consuming. That’s where continuous integration (CI) can help big time. Developers generally make use of a shared repository (using a version control system such as Git) with continuous integration. The fact that a continuous integration service simultaneously builds and runs tests on code changes makes it easier to recognize and handle errors. In the long run, continuous integration can help boost developer productivity, address bugs and errors faster, and it can help speed up updates. Continuous delivery Evolution forged the entirety of sentient life on this planet using only one tool: the mistake. — Westworld Robert Ford may have made some critical errors in Westworld, but the man does have some great lines. And he makes a great point about evolution. Speaking of evolution, many people consider continuous delivery (CD) the next evolutionary step of CI because it pushes development lifecycle automation further. CD is all about compilation, testing, and the staging environment. This stage of the development lifecycle expands on CI by extending code changes to a testing environment (or a production environment) after the build stage. If employed correctly, CD can help developers finetune updates through thorough testing across multiple dimensions before the production stage. Continuous delivery allows developers to run tests such as UI testing, integration testing, and load testing. Microservices Microservices are to software design what production lines are to manufacturing. Or, to put it more verbosely, microservices is a software design architecture that takes a hammer to monolithic systems, where applications are built altogether in one big code repository. Under microservices, each application instead consists of multiple services, and every service is tweaked to excel at one specific function. For example, let’s look at how Amazon decided to move to microservices. Once upon a time, when Amazon wasn’t the behemoth it is today, their API served them just fine. But as their popularity grew so did their need for a better application program interface. Amazon decided to get into microservices. Now, instead of a problematic two-tiered architecture, Amazon has multiple services — one that deals with orders, one service that generates their recommended buys list, a payment service, etc. All these services are actually mini-applications with a single business capability. Infrastructure as code Thanks to technological innovations, servers and critical infrastructure no longer function the way they did a decade ago. Now, you have cloud providers like Google that manage business infrastructure for thousands upon thousands of customers in huge data warehouses. Unsurprisingly, the way engineers manage infrastructure today is way different than what went on previously. And Infrastructure as Code (IaC) is one of the practices that a DevOps environment may apply to handle a shift in scale.
Under IaC, infrastructure is managed using software development techniques and code (such as version control, etc.). Developers can interact with infrastructure programmatically thanks to the cloud’s API-driven model. This allows engineers to handle infrastructure the way they’d tackle application code. This is important because it allows you to test your infrastructure the same way you would test your code. With IaC at the helm, your system administrators don’t have to stress about issues like the webserver not connecting to the database, etc. What’s more, IaC can help businesses auto-provision and shape abstraction layers so that developers can go on building services without needing to know the specific hardware, GPU, firmware, and so on, provided they have a DevOps team that’s developing infrastructure to push automation forward. Picture this: big-time car manufacturers like Mercedes Benz, BMW, and Audi all want to get their hands on the latest in-car experience technologies, right? But if these companies want to ship new services and products they’re going to struggle with the fact that everyone on the road has different hardware. Unless, one fine day, the powers that be decide to have universal hardware, edge-case devices will continue to act as roadblocks when it comes to development. However, this is where a solid DevOps team can help, because they can auto-provision abstraction layers to automate infrastructure services. By solving edge-case challenges in the cloud, a DevOps team can help auto-manufacturers cut down costs and lessen the burden and strain on developers. Configuration management Configuration Management (CM) is important in a DevOps model to encourage continuous integration. It doesn’t matter if you’re hosted in the cloud or managing your systems on-premises, implementing configuration management properly can ensure accuracy, traceability, and consistency. When system administrators use code to automate the operating system, this leads to the standardization of configuration changes. This kind of regularity saves developers from wasting time manually configuring systems or system applications. Policy as code Organizations that have the benefit of infrastructure and configuration codified with the cloud also have the added advantage of monitoring and enforcing compliance at scale. This type of automation allows organizations to oversee changes in resources efficiently, and it allows security measures to be enforced in a disseminated manner (a short illustrative sketch follows at the end of this article). Monitoring and logging Monitoring metrics can help businesses understand the impact of application and infrastructure performance on end-user experience. Analyzing and categorizing data and logs can lead to valuable insights regarding the core causes of problems. Look at it like this — if services are to be made available 24/7, active monitoring becomes exceedingly important as far as update frequency is concerned. If you’re scrambling towards a code release, you know that it isn’t humanly possible to check all your blind spots. Why? Because not every problem pops up in the user interface. Some bugs work like Ethan Hunt to open security holes, others reduce performance, and then there are the wastrel-type bugs that squander resources. On the other hand, the generation of containers and instances can make log management feel like finding a needle in a haystack of needles — unpleasant. The sheer amount of raw data to wade through can make finding meaningful information very difficult.
But if you have monitoring systems, you can depend on the metrics to alert the team about any type of anomaly rearing its head across cloud services or applications. Also, monitoring metrics can help businesses understand the impact of application and infrastructure performance on end-user experience. Logging can help DevOps teams create user-friendly products or services, or push continuous integration/delivery forward. Applied together — monitoring and logging can not only help a business get closer to its customers, but they can also help a business understand its own capacity and scale. For instance, almost all businesses rent a certain amount of cloud space from cloud providers like AWS, Azure, or even Google Cloud throughout the year. But if a company isn’t aware of the fact that its capacity can fluctuate due to peak seasons or holidays, or if its team isn’t prepared to handle the ups and downs by creating provisioning layers, then things can get pretty ugly — like a website crash. Communication and collaboration One of the fundamental cultural aspects of DevOps is communication and collaboration. DevOps tooling and automation (of the software delivery process) focuses on creating collaboration by combining the processes and efficiencies of development and operations. In a DevOps environment, all teams involved work to build cultural norms relating to information sharing and facilitating communication via project tracking systems, chat applications, and so on. This allows quicker communication between developers and helps bring together all parts of an organization to accomplish set goals and projects.
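To make the “policy as code” idea above a bit more concrete, here is a minimal, generic sketch (not from the original article, and not tied to any particular cloud provider or compliance tool): the policy lives as ordinary Python, can be version-controlled and tested like any other code, and can be run automatically against resource configurations on every change.

# Hypothetical "policy as code" check: the rules live in version control
# and can be reviewed, tested, and run automatically in a CI pipeline.

def check_bucket_policy(resource):
    """Return a list of policy violations for a single storage-bucket config."""
    violations = []
    if resource.get("public_access", False):
        violations.append("bucket must not allow public access")
    if not resource.get("encryption_enabled", False):
        violations.append("bucket must have encryption enabled")
    return violations

# Example resource configurations; in practice these would come from your IaC state.
resources = [
    {"name": "logs-bucket", "public_access": False, "encryption_enabled": True},
    {"name": "assets-bucket", "public_access": True, "encryption_enabled": False},
]

for resource in resources:
    for violation in check_bucket_policy(resource):
        print(f"{resource['name']}: {violation}")

Real-world setups usually delegate this to dedicated tooling, but the principle is the same: compliance rules become code that can be enforced at scale rather than checked by hand.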
https://medium.com/swlh/how-to-become-an-devops-engineer-in-2020-80b8740d5a52
['Shane Shown']
2020-09-29 19:36:21.205000+00:00
['Cloud Computing', 'DevOps', 'Software Development', 'Programming', 'Engineering']
Title Become DevOps Engineer 2020Content DevOps Practices we’ve gone DevOps stand related benefit let’s discus DevOps practice thorough understanding DevOps methodology help clear lingering query may That’s mention add knowledge come handy interview we’ll talk later Continuous integration One biggest problem resulting team working isolation merging code work completed It’s challenging also timeconsuming That’s continuous integration CI help big time Developers generally make use shared repository using version control system Git continuous integration fact continuous integration service simultaneously build run test code change make easier recognize handle error long run continuous integration help boost developer productivity address bug error faster help speed update Continuous delivery Evolution forged entirety sentient life planet using one tool mistake — Westworld Robert Ford may made critical error Westworld man great line make great point evolution Speaking evolution many people consider continuous delivery CD next evolutionary step CI push development lifecycle automation CD compilation testing staging environment stage development lifecycle expands CI extending code change testing environment production environment build stage employed correctly CD help developer finetune update thorough testing across multiple dimension production stage Continuous Delivery allows developer run test UI testing integration testing load testing etc Microservices Microservices software design production line manufacturing put verbosely microservices software design architecture take hammer monolithic system Microservices allows application built altogether one big code repository application consists multiple microservices every service tweaked excel one specific function example let’s look Amazon decided move microservices upon time Amazon wasn’t behemoth today API served fine popularity grew need better application program interface Amazon decided get microservices instead problematic twotiered architecture Amazon multiple service — one deal order one service generates recommended buy list payment service etc service actually miniapplications single business capability Infrastructure code Thanks technological innovation server critical infrastructure longer function way decade ago cloud provider like Google manage business infrastructure thousand upon thousand customer huge data warehouse Unsurprisingly way engineer manage infrastructure today way different went previously Infrastructure Code IaC one practice DevOps environment may apply handle shift scale IaC infrastructure managed using software development technique code version control etc Developers interact infrastructure programmatically thanks cloud’s APIdriven model allows engineer handle infrastructure way they’d tackle application code important allows test infrastructure way would test code IaC helm system administrator don’t stress issue like webserver connecting database etc What’s IaC help business autoprovision shape abstraction layer developer go building service without needing know specific hardware GPU firmware DevOps team that’s developing infrastructure push automation forward Picture Bigtime car manufacturer like Mercedes Benz BMW Audi want get hand latest incar experience technology right company want ship new service product they’re going struggle fact everyone road different hardware Unless one fine day power decide universal hardware edge case device continue act roadblock come development However solid DevOps team help 
autoprovision abstraction layer automate infrastructure service solving edgecase challenge cloud DevOps team help automanufacturers cutting cost help lessen burden strain developer Configuration management Configuration Management CM important DevOps model encourage continuous integration doesn’t matter you’re hosted cloud managing system onpremises implementing configuration properly secure accuracy traceability consistency system administrator use code automate operating system lead standardization configuration change kind regularity save developer wasting time manually configuring system system application Policy code Organizations benefit infrastructure configuration codified cloud also added advantage monitoring enforcing compliance scale type automation allows organization oversee change resource efficiently allows security measure enforced disseminated manner Monitoring logging Monitoring metric help business understand impact application infrastructure performance enduser experience Analyzing categorizing data log lead valuable insight regarding core cause problem Look like — service made available 247 active monitoring becomes exceedingly important far update frequency concerned you’re scrambling towards code release know isn’t humanly possible check blind spot every problem pop user interface bug work like Ethan Hunt open security hole others reduce performance wastreltype bug squander resource hand generation container instance make log management feel like finding needle haystack needle — unpleasant sheer amount raw data wade make finding meaningful information difficult monitoring system depend metric alert team type anomaly rearing head across cloud service application Also monitoring metric help business understand impact application infrastructure performance enduser experience Logging help DevOps team create userfriendly product service push continuous integrationdelivery forward Applied together — monitoring logging help business get closer customer also help business understand capacity scale instance almost business rent certain amount cloud space cloud provider like AWS Azure even Google Cloud throughout year company isn’t aware fact capacity fluctuate due peak season holiday team isn’t prepared handle ups down creating provisioning layer thing get pretty ugly — like website crash Communication collaboration One fundamental cultural aspect DevOps communication collaboration DevOps tooling automation software delivery process focus creating collaboration combining process efficiency development operation DevOps environment team involved work build cultural norm relating information sharing facilitating communication via project tracking system chat application allows quicker communication developer help bring together part organization accomplish set goal projectsTags Cloud Computing DevOps Software Development Programming Engineering
3,718
Marketing AI Institute CEO Paul Roetzer on Superpowered Manipulation
Audio + Transcript Paul Roetzer: Most marketers still don’t even know what it is. So if you don’t understand the superpower you’ll have, how could you possibly be planning for how to not use it for evil? James Kotecki: This is Machine Meets World, Infinia ML’s ongoing conversation about artificial intelligence. My guest today is the founder and CEO of the Marketing Artificial Intelligence Institute, Paul Roetzer. Thanks so much for being on Machine Meets World. Paul Roetzer: Absolutely, man. Looking forward to the conversation, I always enjoy talking with you. James Kotecki: So when people hear marketing and they hear artificial intelligence, what do people think that you’re up to? Paul Roetzer: Well, the average marketer, I think, believes it’s just too abstract to care. I mean, that’s our biggest challenge right now is making marketers care enough to take the next step, to ask the first question about what is it actually and how can I use it? So I think a lot of times they just ignore it because it seems abstract or sci-fi. James Kotecki: And the Institute is an educational endeavor at its core, right? It’s trying to convince marketers to use AI in different ways across the different types of marketing that they do? Paul Roetzer: Yeah. We see our mission as making AI approachable and actionable. So it’s, we’re marketers trying to make sense of AI and make it make sense to other marketers. We’re not trying to talk to the machine learning engineers or the data scientists. We’re trying to make the average marketer be able to understand these things and apply it immediately to their career and to their business. James Kotecki: And what’s your scope? What’s your definition of AI? Paul Roetzer: The best definition I’ve seen is Demis Hassabis, who’s the co-founder and CEO of DeepMind, calls AI the science of making machines smart. And I just have always gravitated to that definition because I think it really simplifies it, meaning, machines know nothing. The software, the hardware we use to do our jobs, don’t know anything natively, they’re programmed to do these things. There’s a future for marketing where humans don’t have to write all the rules. That the machines will actually get smarter and that there’s a science behind making marketing smarter. And that’s what we think about marketing AI as. James Kotecki: What marketing technologies are you excited about to come to light in 2021? Paul Roetzer: We look at three main applications of AI: language, vision, and prediction. What you’re trying to do with AI is give machines human-like abilities — of sight, of hearing, of language generation. And so language in particular has just a massive potential within marketing. Think about all the places that you generate language, generate documents that summarize information, create documents from scratch, write emails, like it’s just never ending. And I think you’re going to see lots and lots of companies built in the space that focus explicitly on applications of language generation and understanding. James Kotecki: I looked at the history of the Institute, and it traces back about five years ago… Paul Roetzer: Mmm-hmm. James Kotecki: …to when you were thinking about how to automate the writing of blog posts. And five years ago, that wasn’t really possible, but now, this year, GPT-3 from OpenAI is a technology that looks like we’re either very close or already there to the point where a machine can convincingly write, from almost scratch, narratives, articles, blog posts, et cetera. 
What do you think of that? And what do you think is next if that kind of initial dream has maybe been achieved? Paul Roetzer: So there have definitely been major advances in the last even 18 months. So first we had GPT-2 was the big one that hit the market. I think it was like February of ’19 maybe it was when that surfaced. And then just this year we had GPT-3, which really took it to the next level of this ability to create free-form text from an idea or a topic or a source. And it really is moving very, very quickly. And I think in 2021, 2022, you’re going to start seeing lots and lots of applications of language generation from models like GPT-3, where the average content marketer or email marketer will be using AI-assisted language generation. James Kotecki: People always say when technology like this comes up, “There’s still going to be a place for human creativity. Don’t worry. We still need humans in the mix.” At what point do marketers look at this and start getting scared and saying, “You keep saying that, but the machines keep getting more and more creative.” Paul Roetzer: I am a big believer that the net positive of AI will be more jobs and it will create new opportunities for writers and for marketers. But I’m realistic that things I thought 24 months ago a machine couldn’t do, it’s doing now. And that’s part of why I think it’s so critical that marketers and writers are paying attention because the space is changing very quickly. The tools that you can use to do your job are changing very quickly. It’s going to close some doors. There are going to be some roles or some tasks that writers, marketers do today that they won’t need to do. But it’s also going to open new ones. And I think it’s the people who are at the forefront of this who have a confidence and a competency around AI that are going to be the ones that find the new opportunities and career paths — may even be the ones that build the new tools, the application of it for the specific thing they do. James Kotecki: Do you think marketers, at least marketers who do get it, have an obligation, not just to use AI effectively and ethically, but to use their skills to shape the public perception of AI? Paul Roetzer: That’s what I always tell people. It’s like, think about it. Like, why does Google have Google AI? Why does Microsoft advertise Microsoft AI? They’re all trying to get the average consumer to not be afraid of this idea, this technology because it is so interwoven in every experience they have as consumers now. They don’t realize it though. These big tech companies need consumers to be conditioned to accept AI. And I think in the software world for marketing, you’re going to see a similar movement where we need the users of the software to understand how to use it with their consumers, but also how to embrace what it makes possible within their jobs. James Kotecki: Are there ethical guidelines or any kind of ethical consensus out there for how marketers need to be approaching some of this stuff? I mean, if you took ethics out of it, you could use this technology in ways that were at best amoral and at worst unethical. So what are some guidelines that are actually shaping people’s decision-making here? Paul Roetzer: There aren’t any universal standards that we’re aware of. There is a big movement around this idea of AI for good, at a larger level. So you are seeing organizations created who are trying to integrate ethics and remove bias from AI at a larger application in society and in business. 
Specific to marketing though, it’s really at an individual corporation level. So are companies developing their own ethics guidelines for how they’re going to use data and how they’re going to use the power that AI gives them to reach and influence consumers? And that part’s not moving fast enough. There’s not enough conversation around that because again, most marketers still don’t even know what it is. So if you don’t understand the superpower you’ll have, how could you possibly be planning for how to not use it for evil? And so there’s these steps we’re trying real hard to move the industry through so we can get to the other side of how do we do good with this power that we’re all going to have. James Kotecki: When you look at the totality of AI in marketing, from your perch here, do you feel like you are fighting against trends that are taking things in the wrong direction? Do you feel overall optimistic about the state of things? Paul Roetzer: I feel optimistic, but I do worry a lot about where it could go wrong. And I think if you look at politics, I’m not going to bring in any specific politics into this, but if you look at the political realm, this isn’t new stuff. They’ve been trying to manipulate behavior on every side of it, in every country. It’s all about trying to manipulate people’s views and behaviors. And this is very dangerous stuff to give people like that whose job is to manipulate behaviors. And so if you’re a marketer and you’re so focused on revenue or profits or goals over the other uses of it, you’re going to have the ability to manipulate people in ways you did not before. And I do worry greatly that people will use these tools to take shortcuts, to hack things together, and to affect people in ways that isn’t in the best interest of society. James Kotecki: I’m imagining a bumper sticker for a marketer that says, “You say manipulate human behavior like it’s a bad thing.” Right? Paul Roetzer: I could see that, yeah. James Kotecki: Because the context, even the word “manipulate” has a negative connotation, but it is, if you just look at its neutral meaning, exactly what marketing is trying to accomplish. As we wrap up here, what are your hopes for marketing in 2021 when it comes to AI? Paul Roetzer: I just want marketers to be curious. To understand that there is a chance to create a competitive advantage for themselves and for their companies. And to do that, you just need to know that AI creates smarter solutions, that if you’re going to do email or content marketing or advertising, don’t just rely on the all-human all the time way you’ve previously done it. There are tools that are figuring things out for you that are making you a better marketer by surfacing insights, making recommendations of actions, assessing creative. There’s lots and lots of ways you can use AI. And I just think if people take the step to find a few to try in the coming twelve months, they’ll realize that there’s this whole other world of marketing technology out there that can make them better at their job. James Kotecki: Well, thanks for illuminating us on that. Paul Roetzer, founder and CEO of the Marketing Artificial Intelligence Institute. Thank you for being on Machine Meets World. Paul Roetzer: Absolutely, man. Enjoyed it. James Kotecki: And thank you so much for watching and/or listening. Please like, share, subscribe. You know, give the algorithms what they want. You can also email us at mmw@infiniaml.com. I’m James Kotecki, and that is what happens when Machine Meets World.
https://medium.com/machine-meets-world/marketing-ai-institute-ceo-paul-roetzer-on-superpowered-manipulation-59c05fbf501a
['James Kotecki']
2020-12-16 15:41:39.245000+00:00
['Business', 'Ethics', 'Artificial Intelligence', 'Technology', 'Marketing']
Title Marketing AI Institute CEO Paul Roetzer Superpowered ManipulationContent Audio Transcript Paul Roetzer marketer still don’t even know don’t understand superpower you’ll could possibly planning use evil James Kotecki Machine Meets World Infinia ML’s ongoing conversation artificial intelligence guest today founder CEO Marketing Artificial Intelligence Institute Paul Roetzer Thanks much Machine Meets World Paul Roetzer Absolutely man Looking forward conversation always enjoy talking James Kotecki people hear marketing hear artificial intelligence people think you’re Paul Roetzer Well average marketer think belief it’s abstract care mean that’s biggest challenge right making marketer care enough take next step ask first question actually use think lot time ignore seems abstract scifi James Kotecki Institute educational endeavor core right It’s trying convince marketer use AI different way across different type marketing Paul Roetzer Yeah see mission making AI approachable actionable it’s we’re marketer trying make sense AI make make sense marketer We’re trying talk machine learning engineer data scientist We’re trying make average marketer able understand thing apply immediately career business James Kotecki what’s scope What’s definition AI Paul Roetzer best definition I’ve seen Demis Hassabis who’s cofounder CEO DeepMind call AI science making machine smart always gravitated definition think really simplifies meaning machine know nothing software hardware use job don’t know anything natively they’re programmed thing There’s future marketing human don’t write rule machine actually get smarter there’s science behind making marketing smarter that’s think marketing AI James Kotecki marketing technology excited come light 2021 Paul Roetzer look three main application AI language vision prediction you’re trying AI give machine humanlike ability — sight hearing language generation language particular massive potential within marketing Think place generate language generate document summarize information create document scratch write email like it’s never ending think you’re going see lot lot company built space focus explicitly application language generation understanding James Kotecki looked history Institute trace back five year ago… Paul Roetzer Mmmhmm James Kotecki …to thinking automate writing blog post five year ago wasn’t really possible year GPT3 OpenAI technology look like we’re either close already point machine convincingly write almost scratch narrative article blog post et cetera think think next kind initial dream maybe achieved Paul Roetzer definitely major advance last even 18 month first GPT2 big one hit market think like February ’19 maybe surfaced year GPT3 really took next level ability create freeform text idea topic source really moving quickly think 2021 2022 you’re going start seeing lot lot application language generation model like GPT3 average content marketer email marketer using AIassisted language generation James Kotecki People always say technology like come “There’s still going place human creativity Don’t worry still need human mix” point marketer look start getting scared saying “You keep saying machine keep getting creative” Paul Roetzer big believer net positive AI job create new opportunity writer marketer I’m realistic thing thought 24 month ago machine couldn’t it’s that’s part think it’s critical marketer writer paying attention space changing quickly tool use job changing quickly It’s going close door going role task writer marketer today won’t need 
it’s also going open new one think it’s people forefront confidence competency around AI going one find new opportunity career path — may even one build new tool application specific thing James Kotecki think marketer least marketer get obligation use AI effectively ethically use skill shape public perception AI Paul Roetzer That’s always tell people It’s like think Like Google Google AI Microsoft advertise Microsoft AI They’re trying get average consumer afraid idea technology interwoven every experience consumer don’t realize though big tech company need consumer conditioned accept AI think software world marketing you’re going see similar movement need user software understand use consumer also embrace make possible within job James Kotecki ethical guideline kind ethical consensus marketer need approaching stuff mean took ethic could use technology way best amoral worst unethical guideline actually shaping people’s decisionmaking Paul Roetzer aren’t universal standard we’re aware big movement around idea AI good larger level seeing organization created trying integrate ethic remove bias AI larger application society business Specific marketing though it’s really individual corporation level company developing ethic guideline they’re going use data they’re going use power AI give reach influence consumer part’s moving fast enough There’s enough conversation around marketer still don’t even know don’t understand superpower you’ll could possibly planning use evil there’s step we’re trying real hard move industry get side good power we’re going James Kotecki look totality AI marketing perch feel like fighting trend taking thing wrong direction feel overall optimistic state thing Paul Roetzer feel optimistic worry lot could go wrong think look politics I’m going bring specific politics look political realm isn’t new stuff They’ve trying manipulate behavior every side every country It’s trying manipulate people’s view behavior dangerous stuff give people like whose job manipulate behavior you’re marketer you’re focused revenue profit goal us you’re going ability manipulate people way worry greatly people use tool take shortcut hack thing together affect people way isn’t best interest society James Kotecki I’m imagining bumper sticker marketer say “You say manipulate human behavior like it’s bad thing” Right Paul Roetzer could see yeah James Kotecki context even word “manipulate” negative connotation look neutral meaning exactly marketing trying accomplish wrap hope marketing 2021 come AI Paul Roetzer want marketer curious understand chance create competitive advantage company need know AI creates smarter solution you’re going email content marketing advertising don’t rely allhuman time way you’ve previously done tool figuring thing making better marketer surfacing insight making recommendation action assessing creative There’s lot lot way use AI think people take step find try coming twelve month they’ll realize there’s whole world marketing technology make better job James Kotecki Well thanks illuminating u Paul Roetzer founder CEO Marketing Artificial Intelligence Institute Thank Machine Meets World Paul Roetzer Absolutely man Enjoyed James Kotecki thank much watching andor listening please like share subscribe know give algorithm want also email u mmwinfiniamlcom I’m James Kotecki happens Machine Meets WorldTags Business Ethics Artificial Intelligence Technology Marketing
3,719
10 Algorithms To Solve Before your Python Coding Interview
10 Algorithms To Solve Before your Python Coding Interview In this article I present and share the solutions for a number of basic algorithms that recurrently appear in FAANG interviews Photo by Headway on Unsplash Why Is Practicing Algorithms Key? If you are relatively new to Python and plan to start interviewing for top companies (among them FAANG), listen to this: you need to start practicing algorithms right now. Don’t be naive like I was when I first started solving them. Although I thought that cracking a couple of algorithms every now and then was fun, I never spent too much time practicing and even less time implementing a faster or more efficient solution. Deep down, I was thinking that at the end of the day solving algorithms all day long was a bit too nerdy, that it didn’t really have a practical use in the real daily work environment, and that it would not have brought much to my pocket in the longer term. “Knowing how to solve algorithms will give you a competitive advantage during the job search process” Well…I was wrong (at least partially): I still think that spending too much time on algorithms without focusing on other skills is not enough to make you land your dream job, but I understood that since complex problems present themselves in a programmer’s everyday work, big companies had to find a standardized process to gather insights on the candidate’s problem-solving and attention-to-detail skills. This means that knowing how to solve algorithms will give you a competitive advantage during the job search process, as even less famous companies tend to adopt similar evaluation methods. There Is An Entire World Out There Pretty soon after I started solving algorithms more consistently, I found out that there are plenty of resources out there to practice, learn the most efficient strategies to solve them and get mentally ready for interviews (HackerRank, LeetCode, CodingBat and GeeksForGeeks are just a few examples). Together with practicing the top interview questions, these websites often group algorithms by company, embed active blogs where people share detailed summaries of their interview experience and sometimes even offer mock interview questions as part of premium plans. For example, LeetCode lets you filter top interview questions by specific companies and by frequency. You can also choose the level of difficulty (Easy, Medium and Hard) you feel comfortable with: There are hundreds of different algorithmic problems out there, meaning that being able to recognize the common patterns and code an efficient solution in less than 10 minutes will require a lot of time and dedication. “Don’t be disappointed if you really struggle to solve them at first, this is completely normal” Don’t be disappointed if you really struggle to solve them at first, this is completely normal. Even more experienced Python programmers would find many algorithms challenging to solve in a short time without adequate training. Also don’t be disappointed if your interview doesn’t go as you expected and you just started solving algorithms. There are people that prepare for months, solving a few problems every day and rehearsing them regularly, before they are able to nail an interview. To help you in your training process, below I have selected 10 algorithms (mainly around String Manipulation and Arrays) that I have seen appearing again and again in phone coding interviews. The level of these problems is mainly easy, so consider them a good starting point.
Please note that the solution I shared for each problem is just one of the many potential solutions that could be implemented, and often a BF (“Brute Force”) one. Therefore feel free to code your own version of the algorithm, trying to find the right balance between runtime and employed memory. Strings Manipulation 1. Reverse Integer Output: -132 543 A warm-up algorithm that will help you practice your slicing skills. In effect, the only tricky bit is to make sure you are taking into account the case when the integer is negative. I have seen this problem presented in many different ways but it usually is the starting point for more complex requests. 2. Average Words Length Output: 4.2 4.08 Algorithms that require you to apply some simple calculations using strings are very common, therefore it is important to get familiar with methods like .replace() and .split() that in this case helped me remove the unwanted characters and create a list of words, the length of which can be easily measured and summed. 3. Add Strings Output: 2200 2200 I find both approaches equally sharp: the first one for its brevity and the intuition of using the eval() method to dynamically evaluate string-based inputs, and the second one for the smart use of the ord() function to rebuild the two strings as actual numbers through the Unicode code points of their characters. If I really had to choose between the two, I would probably go for the second approach as it looks more complex at first, but it often comes in handy in solving “Medium” and “Hard” algorithms that require more advanced string manipulation and calculations. 4. First Unique Character Output: 1 2 1 ### 1 2 1 Also in this case, two potential solutions are provided and I guess that, if you are pretty new to algorithms, the first approach looks a bit more familiar as it builds a simple counter starting from an empty dictionary. However, understanding the second approach will help you much more in the longer term, and this is because in this algorithm I simply used collections.Counter(s) instead of building a chars counter myself and replaced range(len(s)) with enumerate(s), a function that can help you identify the index more elegantly. 5. Valid Palindrome Output: True The “Valid Palindrome” problem is a real classic and you will probably find it repeatedly under many different flavors. In this case, the task is to check whether, by removing at most one character, the string matches its reversed counterpart. When s = ‘radkar’ the function returns True, as by excluding the ‘k’ we obtain the word ‘radar’, which is a palindrome. Arrays 6. Monotonic Array Output: True False True This is another very frequently asked problem and the solution provided above is pretty elegant as it can be written as a one-liner. An array is monotonic if and only if it is monotone increasing or monotone decreasing, and in order to assess it, the algorithm above takes advantage of the all() function, which returns True if all items in an iterable are true, otherwise it returns False. If the iterable object is empty, the all() function also returns True. 7. Move Zeroes Output: [1, 3, 12, 0, 0] [1, 7, 8, 10, 12, 4, 0, 0, 0, 0] When you work with arrays, the .remove() and .append() methods are precious allies. In this problem I have used them to first remove each zero from the original array and then append it at the end of the same array. 8.
Fill The Blanks Output: [1, 1, 2, 3, 3, 3, 5, 5] I was asked to solve this problem a couple of times in real interviews; both times the solution had to include edge cases (that I omitted here for simplicity). On paper, this is an easy algorithm to build, but you need to have clear in mind what you want to achieve with the for loop and if statement and be comfortable working with None values. 9. Matched & Mismatched Words Output: (['The','We','a','are','by','heavy','hit','in','meet','our', 'pleased','storm','to','was','you'], ['city', 'really']) The problem is fairly intuitive, but the algorithm takes advantage of a few very common set operations like set(), intersection() or &, and symmetric_difference() or ^ that are extremely useful to make your solution more elegant. If it is the first time you encounter them, make sure to read up on Python set operations before your interview. 10. Prime Numbers Array Output: [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31] I wanted to close this section with another classic problem. A solution can be found pretty easily looping through range(n) if you are familiar with both the definition of prime numbers and the modulus operation. Conclusion In this article I shared the solutions to 10 Python algorithms that are frequently asked problems in coding interview rounds. If you are preparing for an interview with a well-known tech company, this article is a good starting point to get familiar with common algorithmic patterns and then move to more complex questions. Also note that the exercises presented in this post (together with their solutions) are slight reinterpretations of problems available on LeetCode and GeeksForGeeks. I am far from being an expert in the field; therefore, the solutions I presented are just indicative ones.
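Since the gist embeds that originally carried each solution only survive here as plain references, below are minimal sketches of three of the problems described above. These are reconstructions of one possible approach, not necessarily the exact code from the original post, and the sample inputs are assumed rather than taken from the article.

def is_palindrome(s: str) -> bool:
    # Valid Palindrome: True if s matches its reverse after removing
    # at most one character.
    if s == s[::-1]:
        return True
    for i in range(len(s)):
        candidate = s[:i] + s[i + 1:]
        if candidate == candidate[::-1]:
            return True
    return False


def is_monotonic(nums: list) -> bool:
    # Monotonic Array: True if nums is entirely non-decreasing or
    # entirely non-increasing (a one-liner built on all()).
    return (all(nums[i] <= nums[i + 1] for i in range(len(nums) - 1)) or
            all(nums[i] >= nums[i + 1] for i in range(len(nums) - 1)))


def move_zeroes(nums: list) -> list:
    # Move Zeroes: remove each zero and append it to the end, keeping
    # the order of the non-zero elements.
    for _ in range(nums.count(0)):
        nums.remove(0)
        nums.append(0)
    return nums


print(is_palindrome("radkar"))        # True (dropping one character leaves a palindrome)
print(is_monotonic([6, 5, 4, 4]))     # True
print(move_zeroes([0, 1, 0, 3, 12]))  # [1, 3, 12, 0, 0]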
https://towardsdatascience.com/10-algorithms-to-solve-before-your-python-coding-interview-feb74fb9bc27
[]
2020-10-22 06:12:30.168000+00:00
['Python', 'Data Engineering', 'Interview', 'Algorithms', 'Data Science']
Title 10 Algorithms Solve Python Coding InterviewContent 10 Algorithms Solve Python Coding Interview article present share solution number basic algorithm recurrently appear FAANG interview Photo Headway Unsplash Practicing Algorithms Key relatively new Python plan start interviewing top company among FAANG listen need start practicing algorithm right Don’t naive like first started solving Despite thought cracking couple algorithm every fun never spent much time practice even le time implement faster efficient solution thinking end day solving algorithm day long bit nerdy didn’t really practical use real daily work environment would brought much pocket longer term “Knowing solve algorithm give competitive advantage job search process” Well…I wrong least partially still think spending much time algorithm without focusing skill enough make land dream job understood since complex problem present every day work programmer big company find standardized process gather insight candidate’s problem solving attention detail skill mean knowing solve algorithm give competitive advantage job search process even le famous company tend adopt similar evaluation method Entire World Pretty soon started solving algorithm consistently found plenty resource practice learn efficient strategy solve get mentally ready interview HackerRank LeetCode CodingBat GeeksForGeeks example Together practicing top interview question website often group algorithm company embed active blog people share detailed summary interview experience sometimes even offer mock interview question part premium plan example LeetCode let filter top interview question specific company frequency also choose level difficulty Easy Medium Hard feel comfortable hundred different algorithmic problem meaning able recognize common pattern code efficient solution le 10 min require lot time dedication “Don’t disappointed really struggle solve first completely normal” Don’t disappointed really struggle solve first completely normal Even experienced Python programmer would find many algorithm challenging solve short time without adequate training Also don’t disappointed interview doesn’t go expected started solving algorithm people prepare month solving problem every day rehearse regularly able nail interview help training process selected 10 algorithm mainly around String Manipulation Arrays seen appearing phone coding interview level problem mainly easy consider good starting point Please note solution shared problem one many potential solution could implemented often BF “Brute Force” one Therefore feel free code version algorithm trying find right balance runtime employed memory Strings Manipulation 1 Reverse Integer Output 132 543 warmup algorithm help practicing slicing skill effect tricky bit make sure taking account case integer negative seen problem presented many different way usually starting point complex request 2 Average Words Length Output 42 408 Algorithms require apply simple calculation using string common therefore important get familiar method like replace split case helped removing unwanted character create list word length easily measured summed 3 Add Strings Output 2200 2200 find approach equally sharp first one brevity intuition using eval method dynamically evaluate stringbased input second one smart use ord function rebuild two string actual number trough Unicode code point character really chose two would probably go second approach look complex first often come handy solving “Medium” “Hard” algorithm require advanced string 
manipulation calculation 4 First Unique Character Output 1 2 1 1 2 1 Also case two potential solution provided guess pretty new algorithm first approach look bit familiar build simple counter starting empty dictionary However understanding second approach help much longer term algorithm simply used collectionCounters instead building char counter replaced rangelens enumerates function help identify index elegantly 5 Valid Palindrome Output True “Valid Palindrome” problem real classic probably find repeatedly many different flavor case task check weather removing one character string match reversed counterpart ‘radkar’ function return True excluding ‘k’ obtain word ‘radar’ palindrome Arrays 6 Monotonic Array Output True False True another frequently asked problem solution provided pretty elegant written oneliner array monotonic monotone increasing monotone decreasing order ass algorithm take advantage function return True item iterable true otherwise return False iterable object empty function also return True 7 Move Zeroes Output 1 3 12 0 0 1 7 8 10 12 4 0 0 0 0 work array remove append method precious ally problem used first remove zero belongs original array append end array 8 Fill Blanks Output 1 1 2 3 3 3 5 5 asked solve problem couple time real interview time solution include edge case omitted simplicity paper easy algorithm build need clear mind want achieve loop statement comfortable working None value 9 Matched Mismatched Words Output TheWeaarebyheavyhitinmeetour pleasedstormtowasyou city really problem fairly intuitive algorithm take advantage common set operation like set intersection symmetricdifferenceor extremely useful make solution elegant first time encounter make sure check article 10 Prime Numbers Array Output 2 3 5 7 11 13 17 19 23 29 31 wanted close section another classic problem solution found pretty easily looping trough rangen familiar prime number definition modulus operation Conclusion article shared solution 10 Python algorithm frequently asked problem coding interview round preparing interview wellknown tech Company article good starting point get familiar common algorithmic pattern move complex question Also note exercise presented post together solution slight reinterpretation problem available Leetcode GeekForGeeks far expert field therefore solution presented indicative one may also likeTags Python Data Engineering Interview Algorithms Data Science
3,720
We need a new government
We need a new government Starting the changes to make this work again We need new governance. The old style of government is very broken in the US and crippled most everywhere. It has been obvious since November 2016 that the US is the worst case and may not survive the failed federal election. The effects of that election have led to a year of rapid decline and the loss of planetary leadership by the US. You simply can’t have what was the leading nation taken over by a highly questionable candidate elected in new and very questionable conditions. The expected disaster has not disappointed and has, if anything, been worse than people feared. Failure to replace Trump and his cohort and to address the structural problems that produced an invalid and incompetent regime in Washington DC may have already doomed the nation. But the point here is to look to the future and to possible ways to replace the existing mechanisms of electing parliamentary, partially representative governments. This has been a growing concern for decades in the US. The loss of the majority of eligible adult voters, creating governments elected by 20–30% of the population, is not workable. Combining this with the complexity of 21st century government issues and the elimination of citizenship training in public schools leaves a dangerously uninformed electorate. This is the stuff that authoritarian despots breed upon, and the rise of even an incompetent such as Trump shows that danger. Old assumptions The solutions are readily at hand but are complicated by assumptions about the nature of elections and centuries of racism, xenophobia, misogyny, and corruption in all of its forms. As national surveys have consistently shown, the majority of the US population wants what most other post-industrial and even late-industrial countries have. In short, that is the equivalent of a Scandinavian-type nation state. While this is not a clear answer, and not even a good representation of Sweden, Norway or Denmark as examples, it is approximate and totally denied by the existing US power structure. While most European countries have the basic services and active governments that are hopelessly desired by most Americans, the limitations of the existing systems in those countries are well recognized, with growing concern over the growth of neofascism in its various forms. In short, they are better than the US but not the answer for the future. The solutions mentioned above are being discussed in greater detail and with a growing awareness of the need for quick action. If no action is taken against Trump and the current regime before summer, and reliance on Mueller and the formal investigation into electoral illegality is a thin support for hope, the odds of anything other than a reenactment of 2016 in 2018 are small. Nothing has been done to correct or prevent the types of abuses used to create a questionable government. And that government has shown no interest in doing anything but pulling all the levers of power at its disposal to ensure that it is permanent. Congress is so corrupted by gerrymandered districts and outright ownership of representatives and senators that there is a near complete vote of no confidence. At this point the 2018 election appears to be an excellent example of doing the exact same things and expecting a completely different outcome. A new way to vote The answer is obviously in changing the nature of voting as well as the process of voting.
We must move to a direct vote for all federal positions with no gerrymandering. This will require removal of the traditionally accepted weighting toward rural voters by discounting urban votes, as well as the obviously illegal gerrymandering. Probably the most basic solution is a voting district of a fixed number of voters, e.g. 10,000, that may be only a couple of urban blocks or an entire rural county. This is not an issue as the voting needs to be done online with all citizens automatically registered and counted. Needless to say, voting is a duty that is legally required. This would solve most of the current problems producing grossly distorted voting and representation. We will deal with the considerations and problems of this a little later. Knowledgeable Voting Requiring knowledge in order to make a reasoned selection on the basis of policy is a far bigger problem. One reasonable way to do this is by weighting votes based on education or knowledge. My preference is offering an elementary citizenship test, including basic political structure and civic components, in order to double your vote. The basic vote is for all citizens no matter what level of knowledge or formal education. Those that choose to take and pass the basic citizenship exam would gain an additional vote. The argument for this goes all the way back to Plato and is obviously even more important now. The problems that we have had up to now have been based on the difficulty of preventing hacked voting and guaranteeing the identity of each voter. That can now be taken care of with the permanent decentralized recording of the results using blockchain technology. This may require two blockchain ledgers, with one for the permanent record of voting by the individual and the other recording the selections made. With voting required, each citizen must have a voting record or an official waiver. The selections should balance against the votes cast. This would require the addition of a no-selection vote, or “none of the above” as has been discussed for many years. This should improve the audit and limit any type of rigging. A common argument against electronic voting is the presence of people without knowledge of Internet systems or access to them. Since voting is online with automatic registration, this could be handled by using official tutors or assistants for people with limited ability as basic voters. Voting or what? The nature of the electoral system is a much more difficult question. Initially, I expect we would start from the existing concept of representatives, although I think that direct democracy is now possible and, in fact, desirable. The use of blockchain transactions/contracts online removes many of the problems. All Congressional representatives would have the same equivalent number of possible votes. A logical move would be to introduce proportionate voting if representatives are still selected, making the House of Representatives much more parliamentary. This would remove the dead weight of the failed two-party structure and place the emphasis on policy packages supported by weighted voting.
https://medium.com/theotherleft/we-need-a-new-government-cbad6eef37de
['Mike Meyer']
2018-01-18 03:05:09.151000+00:00
['AI', 'Governance', 'Blockchain', 'Future', 'Politics']
Title need new governmentContent need new government Starting change make work need new governance old style government broken US crippled everywhere obvious since November 2016 US worst case may survive failed federal election effect election led year rapid decline loss planetary leadership US simply can’t leading nation taken highly questionable candidate elected new questionable condition expected disaster disappointed anything worse people feared Failure replace Trump cohort address structural problem produced invalid incompetent regime Washington DC may already doomed nation point look future possible way replace existing mechanism electing parliamentary partially representative government growing concern decade US loss majority eligible adult voter creating government elected 20–30 population workable Combining complexity 21st century government issue elimination citizenship training public school leaf dangerously uninformed electorate stuff authoritarian despot breed upon rise even incompetent Trump show danger Old assumption solution readily hand complicated assumption nature election century racism xenophobia misogyny corruption form national survey consistently shown majority US population want post industrial even late industrial country short equivalent Scandinavian type nation state clear answer even good representation Sweden Norway Denmark example approximate totally denied existing US power structure European country basic service active government hopelessly desired Americans limitation existing system country well recognized growing concern growth neofascism various form short better US answer future solution mentioned discussed greater degree detail growing awareness need action quickly action taken Trump current regime summer reliance Mueller formal investigation electoral illegality thin support hope odds anything reenactment 2016 2018 small Nothing done correct prevent type abuse used create questionable government government show interest anything pulling lever power disposal ensure permanent Congress corrupted gerrymandered district outright ownership representative senator near complete vote confidence point 2018 election appears excellent example exact thing expecting completely different outcome new way vote answer obviously changing nature voting well process voting must move direct vote federal position gerrymandering require removal traditionally accepted weighting rural voter discounting urban vote well obviously illegal gerrymandering Probably basic solution voting district fixed number voter eg 10000 may couple urban block entire rural county issue voting need done online citizen automatically registered counted Needless say voting duty legally required would solve current problem producing grossly distorted voting representation deal consideration problem little later Knowledgeable Voting Requiring knowledge order make reasoned selection basis policy far bigger problem One reasonable way weighting vote based education knowledge preference offering elementary citizenship test including basic political structure civic component order double vote basic vote citizen mater level knowledge formal education chose take pas basic citizenship exam would gain additional vote argument go way back Plato obviously even important problem based difficulty preventing hacked voting guaranteeing identity voter taken care permanent decentralized recording result using blockchain technology may require two blockchain ledger one permanent record voting individual recording 
selection made voting required citizen must voting record official waver selection balance vote cast would require addition selection vote “none above” discussed many year improve audit limit type rigging common argument electronic voting presence people without knowledge Internet system access Since voting online automatic registration could handle using official tutor assistant people limited ability basic voter Voting nature electoral system much difficult question Initially expect would start existing concept representative although think direct democracy possible fact desirable use blockchain transactioncontracts online remove many problem Congressional representative would equivalent number possible vote logical move would introduce proportionate voting representative still selected making House Representative much parliamentary would remove dead weight failed two party structure place emphasis policy package supported weighted votingTags AI Governance Blockchain Future Politics
3,721
AWS Cloud Security in a Nutshell
Hi Folks, Today we are going to look at AWS Cloud Security Best Practices. Security has become a trending topic all over the world with more and more data leaks from small enterprises to large-scale enterprises. So let’s see how we can secure our AWS account! AWS Shared Responsibility Model The policies stated below are measures advised by AWS. AWS takes responsibility for its infrastructure, but it does not take responsibility for the security of the environment inside the customer’s account. When an account is first created, the actions mentioned below can be taken to lay out the basic security measures of the root account. 1. Grant Least Privilege Access — Granting the least access needed to perform the desired actions for a particular user. a. As enterprises are handling multiple client workloads on AWS, AWS advises separating these workloads into different accounts using AWS Organizations. b. AWS advises maintaining common permission guardrails that restrict access to all identities. E.g.: blocking users from using multiple regions, restricting users to use only one region, or restricting users from deleting common resources such as security policies, etc. c. Use service control policies. E.g.: avoid users getting unwanted levels of privilege inside the AWS cloud environment. d. Use permission boundaries to control the level of access that administrators have over controlling and managing accounts. E.g.: administrators can’t create policies that escalate their own access. e. Reduce permissions continuously: evaluate access that is not used by identities and remove the unused permissions. 2. Enable Identity Federation: Centrally manage users and access across multiple applications and services. In order to federate multiple accounts in AWS Organizations, use AWS Single Sign-On. 3. Enable MFA in the root account. (Highly Recommended). Require all users to activate MFA as well. 4. Rotate Credentials (Change the passwords and access keys of your account regularly.) Set up an account policy so that other users and the root account are also required to change passwords regularly. 5. Lock away your AWS account root user access keys. AWS recommends this approach because access keys of your root account have access to all resources and services, including billing details, by default. Therefore, delete the root access keys if there are any in your account; if there are no access keys for your root account, don’t create any. 6. Never share the password of your root account with anyone. If other users require access to the AWS cloud environment, create individual IAM users and groups. Give the users the necessary permissions only. For yourself, also create an administrator IAM user. 7. Use Groups to Manage Users. (When your organization is growing and increasing the number of users, create groups with the necessary level of access to your AWS cloud environment and add users to those groups). 8. Be careful when granting users IAM access, as they get the privilege to deal with creating users and groups, using access keys, etc. When revoking permissions of such a user who had administrator access, we never know whether he has created other user accounts using his admin and IAM privileges. In that case, even if the root user revokes access for that admin user, he might use some other account’s access keys and access the account at a later time. This might be one critical threat that no one saw coming.
(It is highly recommended to keep full IAM access with the root user only.) 9. Configure a Strong Password Policy for users. a. If users have permission to create their own passwords for their accounts, there should be a password policy in place to make sure that passwords have a minimum length (14 characters recommended), contain alphabetical and non-alphabetical characters, and are subject to frequent rotation requirements. 10. Use IAM Roles to grant permissions when permission is needed from one AWS service to another. E.g.: when EC2 instances need to access S3 buckets. a. Create an IAM role selecting which service needs to access which service and with what level of rights. E.g.: EC2 instances can only list S3 buckets. Never store access keys on EC2 servers inside the AWS config directory. In case your EC2 instance is hacked, the hacker gets access to the access key information. 11. Do not share AWS access keys. a. Access keys provide programmatic access to the AWS environment. Never share the access keys or expose the keys in unencrypted environments. For applications that need to access AWS services, create roles that provide temporary permissions to the application. 12. Monitor activity in your AWS account using the tools below, available with AWS: a. Amazon CloudFront — Log user requests that CloudFront receives. b. AWS CloudTrail — Logs AWS API calls and related events made by or on behalf of an AWS account. c. Amazon CloudWatch — Monitors your AWS cloud resources and the applications you run on AWS. d. AWS Config — Provides detailed historical information about the configuration of your AWS services, including the IAM users, groups, roles, and policies. Best Practices When Using AWS Services 1. Tighten the CloudTrail configurations. a. CloudTrail is an AWS service that generates log files of all API calls made within AWS, including those from the AWS Management Console, SDKs, command-line tools, etc. This is a very important way of tracking what’s happening inside the AWS account. For auditing as well as post-incident investigation, this is very important. b. If a hacker gets access to the AWS account, there’s a possibility they will try to disable CloudTrail; therefore it’s recommended to keep the CloudTrail permissions only with the root user. c. Enable CloudTrail across all geographic regions and AWS services to prevent activity monitoring gaps. d. Turn on CloudTrail log file validation so that any changes made to the log file itself after it has been delivered to the S3 bucket are trackable, to ensure log file integrity. e. Enable access logging for the CloudTrail S3 bucket so that you can track access requests and identify potentially unauthorized or unwarranted access attempts. f. Turn on multifactor authentication (MFA) to delete CloudTrail S3 buckets, and encrypt all data in flight and at rest. 2. Best Practices when using AWS database and data storage services a. Ensure that S3 buckets don’t have public read/write access unless required by the business. b. Turn on Redshift audit logging in order to support auditing and post-incident forensic investigations for a given database. c. Encrypt data stored on EBS as an extra security layer. d. Encrypt Amazon RDS as an extra security layer. e. Enable the require_ssl parameter in all Redshift clusters to minimize the risk of man-in-the-middle attacks. f. Restrict public access to database instances to avoid malicious attacks such as brute force attacks, SQL injections, or DoS attacks. g.
In all possible cases, place the database instances in private subnets. 3. Automate Detective Controls a. If some incident happens in your AWS account, how do you respond to that event? That’s where automated detective controls come into play. You can use CloudFormation to deploy your infrastructure and AWS CloudTrail to log the events; if some malicious event happens in your account, you can automate the action against that event using this architecture. 4. Secure Your Operating Systems and Applications a. With the AWS shared responsibility model, you manage your operating system and application security. Amazon EC2 presents a true virtual computing environment, in which you can use web service interfaces to launch instances with a variety of operating systems with custom preloaded applications. You can standardize the operating system and application builds and centrally manage the security of your operating systems and applications in a single secure build repository. You can build and test a pre-configured AMI to meet your security requirements. b. Disable root API access keys and secret keys. c. Restrict access to instances from limited IP ranges using security groups. d. Use bastion hosts to access your EC2 instances. e. Password protect the .pem file on user machines. f. Delete users’ public keys from the authorized_keys file on your instances when users leave your organization. g. Rotate credentials (DB, access keys). h. Regularly run least privilege checks using IAM user Access Advisor and IAM user Last Used Access Keys. i. Implement a single primary function per Amazon EC2 instance to keep functions that require different security levels from co-existing on the same server. E.g.: implement web servers, database servers, and DNS servers separately. j. Enable only the necessary and secure services, protocols, daemons, etc. required for the functioning of the operating system. k. Never use password authentication mechanisms to authenticate with servers. (Configure sshd to allow only public key authentication. Set PubkeyAuthentication to Yes and PasswordAuthentication to No in sshd_config.) l. Always use encrypted communication channels. 5. Securing Your AWS Infrastructure a. Use Amazon VPC to define an isolated network for each workload or organizational entity. b. Use private and public subnets to place your components based on business needs. c. Use security groups to manage access to instances that have similar functions and security requirements. d. Use Network Access Control Lists (NACLs) to allow stateless management of IP traffic. NACLs are agnostic of TCP and UDP sessions, but they allow granular control over IP protocols (for example GRE, IPsec ESP, ICMP), as well as control on a per-source/destination IP address and port for TCP and UDP. NACLs work in conjunction with security groups. e. Use host-based firewalls as a last line of defense. 6. Using Tags to Manage AWS Resources a. Tagging AWS resources can help you in many ways when you have hundreds of resources in play within your AWS cloud environment. b. Generate alarms if a resource is not tagged properly. c. Proposed minimum tags: i. Platform_Owner ii. Resource_Owner iii. Project Name iv. Environment (Prod, Stag, Test, Dev) d. Tags can be useful when generating consolidated reports per project or when it comes to billing. e. You can add up to 10 tags per resource. This is a very summarized version of the precautions that we can take to secure our AWS account.
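To make a couple of the account-level recommendations above more concrete, here is a rough boto3 sketch covering the strong password policy from point 9 and a multi-region CloudTrail trail with log file validation. This is only one possible way to script it; the trail and bucket names are placeholders, and the target bucket would still need an appropriate CloudTrail bucket policy in your own environment.

import boto3

# Password policy: minimum 14 characters, mixed character classes,
# and regular rotation, as recommended above.
iam = boto3.client("iam")
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,
    PasswordReusePrevention=5,
)

# CloudTrail: one trail across all regions with log file validation,
# delivering to a (hypothetical) dedicated S3 bucket.
cloudtrail = boto3.client("cloudtrail")
cloudtrail.create_trail(
    Name="org-security-trail",               # placeholder trail name
    S3BucketName="example-cloudtrail-logs",  # placeholder bucket
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="org-security-trail")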
If you have any new ideas or different opinions regarding AWS cloud security, feel free to comment. 👋 Join FAUN today and receive similar stories each week in your inbox! Get your weekly dose of the must-read tech stories, news, and tutorials. Follow us on Twitter 🐦 and Facebook 👥 and Instagram 📷 and join our Facebook and Linkedin Groups 💬
https://medium.com/faun/aws-cloud-security-in-a-nutshell-f9e53907f41d
['Supun Sandeeptha']
2020-11-30 17:23:41.117000+00:00
['Software Development', 'Security', 'AWS', 'DevOps', 'Development']
Title AWS Cloud Security NutshellContent Hi Folks Today going look AWS Cloud Security Best Practices Security become trending topic world data leak small enterprise large scale enterprise let’s see secure AWS Account AWS Shared Responsibility Model AWS Shared Responsibility Model stated policy measure advised AWS Cloud AWS Cloud take responsibility infrastructure don’t take responsibility security environment inside customer’s account Initially account created mentioned action taken layout basic security measure root account 1 Grant Least Privilege Access — Granting least access needed perform desired action particular user enterprise handling multiple client workload AWS AWS advised separate workload using different account using AWS Organizations b AWS advises maintain common permission guardrail restrict access identity Eg Blocking Users using multiple region restricting user use one region Restricting user deleting common resource security policy etc c Use service control policy Eg — Avoid user getting unwanted level privilege inside AWS cloud environment Using Permission Boundaries — Using Permission boundary control level access administrator overcontrolling managing account Eg — Administrators can’t create policy escalate access e Reduce Permissions continuously — Evaluate access used identity remove unused permission 2 Enable Identity Federation Centrally manage user access across multiple application service order federate multiple account AWS Organizations use AWS Single Signon 3 Enable MFA root account Highly Recommended Require activating MFA user well 4 Rotate Credentials Change password access key account regularly Set account policy user root account also needed change password regularly 5 Lock away AWS account root user access key AWS recommends approach access key root account access resource service including billing detail default Therefore delete root access key one account access key root account don’t create one 6 Never Share password root account anyone user require access AWS cloud environment create individual IAM user group Give user necessary permission also create administrator IAM User 7 User Groups Manage Users organization growing increasing number user create group necessary level access AWS cloud environment add user group 8 careful granting user IAM access get privilege deal creating user group using access key etc revoking permission user administrator access never know whether created user account using admin IAM privilege case even root user revoke access admin user might use account access key access account later time might one critical threat anyone never saw coming Highly recommended keep IAM full access root user 9 Configure Strong Password Policy user user permission create password account password policy place order make sure password minimum length 14 character recommended assigned user contains alphabetical nonalphabetical character frequent password rotation requirement 10 Use IAM Roles grant permission permission needed AWS service service Eg EC2 instance need access S3 bucket Create IAM role selecting service need access service level right Eg EC2 instance list S3 bucket Never store access key inside EC2 server inside AWS config directory case EC2 instance hacked hacker get access access key information 11 share AWS access key Access key provide programmatic access AWS environment Never share access key expose key unencrypted environment application need access AWS service create role provide temporary permission application 12 Monitor 
activity AWS account using tool available AWS Amazon CloudFront — Log user request CloudFront receives b AWS CloudTrail — Logs AWS API call related event made behalf AWS account c AWS Cloudwatch — Monitors AWS cloud resource application run AWS AWS Config — Provides detailed historical information configuration AWS service including IAM user group role policy Best Practices using AWS Services 1 Tighten Cloud Trail configuration CloudTrail AWS service generates log file API call made within Aws including AWS management console SDKs commandline tool etc important way tracking what’s happening inside AWS account auditing well postincident investigation important b hacker get access AWS account there’s possibility try disable CloudTrail therefore it’s recommended keep CloudTrail permission root user c Enable CloudTrail across geographic region AWS service prevent activity monitoring gap Turn CloudTrail log file validation change made log file delivered S3 bucket trackable ensure log file integrity e Enable access logging CloudTrail S3 bucket track access request identify potentially unauthorized unwarranted access attempt f Turn multifactor authentication MFA delete CloudTrail S3 bucket encrypt data flight Rest 2 Best Practices using AWS Database data storage service Ensure S3 bucket don’t pubic readwrite access unless required business b Turn RedShift audit logging order support auditing postincident forensic investigation given database c Encrypt data stored EBS extra security layer Encrypt Amazon RDS extra security layer e Enable require ssl parameter Redshift cluster minimize risk maninmiddle attack f Restrict public access database instance avoid malicious attack brute force attack SQL injection DoS attack g possible case place database instance private subnets 3 Automate Detective Control Automating Detective Control incident happens AWS account respond event That’s Automate Detective Controls come play use Cloudformation deploy infrastructure use AWS CloudTrail log event malicious event happens account automate action event using architecture 4 Secure Operating Systems Applications AWS shared responsibility model manage operating system application security Amazon EC2 present true virtual computing environment use web service interface launch instance variety operating system custom preloaded application standardize operating system application build centrally manage security operating system application single secure build repository build test preconfigured AMI meet security requirement b Disable root API access key secret key c Restrict access instance limited IP range using Security Groups Use Bastion host access EC2 instance e Password protect pem file user machine f Delete pub key user authorizedkeys file instance user leave organization g Rotate credential DB Access Keys h Regularly run least privilege check using IAM user Access Advisor IAM user Last Used Access Keys Implement single primary function per Amazon EC2 instance keep function require different security level coexisting server Eg Implement web server database server DNS server separately j Enable necessary secure service protocol daemon etc required functioning operating system k Never use password authentication mechanism authenticate server Configure sshd allow public key authentication Set PubkeyAuthentication Yes PasswordAuthentication sshdconfig l Always use encrypted communication channel 5 Securing AWS Infrastructure Using Amazon VPC define isolated network workload organizational entity b Using private public 
subnets place component based business need c Using security group manage access instance similar function security requirement Using Network Access Control Lists NACLs allow stateless management IP traffic NACLs agnostic TCP UDP session allow granular control IP protocol example GRE IPsec ESP ICMP well control persourcedestination IP address port TCP UDP NACLs work conjunction Security group e Using hostbased firewall last line defense 6 Using Tags manage AWS resource Tagging Aws resource help many way hundred resource play within AWS cloud environment b Generating alarm resource tagged properly c Proposed Minimum Tags PlatformOwner ii ResourceOwner iii Project Name iv Environment Prod Stag Test Dev Tags useful generating consolidated report per project come billing e add 10 tag per resource summarized version precaution take secure AWS account new idea different opinion regarding AWS Cloud Security feel free comment 👋 Join FAUN today receive similar story week inbox ️ Get weekly dose mustread tech story news tutorial Follow u Twitter 🐦 Facebook 👥 Instagram 📷 join Facebook Linkedin Groups 💬Tags Software Development Security AWS DevOps Development
3,722
Webpack 5 Builds for AWS Lambda Functions with TypeScript
Webpack 5 Builds for AWS Lambda Functions with TypeScript In a previous post, I wrote about self-destructing tweets, which runs as an AWS Lambda function every night at midnight. While that post was about the code itself, most of the AWS CDK infrastructure information had been written in a previous post about sending a serverless Slack message, which demonstrated how to run an AWS Lambda on a cron timer. Today’s post will be a short overview that bridges these together: it shows how I bundled the TypeScript code from the Twitter post with node modules and prepared it for deployment. The Folder Structure I am making assumptions here. The most “complex” setup I normally have for Lambdas is to write them in TypeScript and use Babel for transpilation. Given this will be a familiar setup for most, let’s work with that. Here is how most of my lambdas following this structure will look from within the function folder: https://gist.github.com/okeeffed/9b1e7edc86caff76179d434850f063c0.js You might also note I have both an index.ts and index.local.ts file. index.ts in my project is generally the entry point for the lambda, while the index.local.ts file is normally just used for local development, where I swap out my lambda handler for code that lets me run it locally. Both generally import the main function from another file (here denoted as function.ts) and just call it. Webpack will bundle everything into one file later, so it is fine for me to structure the folder however I see fit. Also note: as pointed out in the comments by Maximilian, bundling node modules into the output from Webpack is not always a good idea if your npm packages require binaries. The same goes for any builds that require dynamic imports at runtime. Use your judgement on whether or not to bundle your node modules into the Webpack build, but I will be doing another write-up on using Lambda layers instead to get around the requirement to build one single output. Setting Up Your Own Project Inside of a fresh npm project that houses a TypeScript lambda, we need to add the required Babel and Webpack dependencies: https://gist.github.com/okeeffed/83313ff75f314653c67760251571320d.js Babel Run Command File Inside of .babelrc, add the following: https://gist.github.com/okeeffed/6640da43291ded9bf3ea0fbc88105d0c.js Setting Up TypeScript This part you will need to adjust to your own flavour, but here is the config that I have for the Twitter bot: https://gist.github.com/okeeffed/2f2ce20124863b1b0b2ff1153158b01b.js Webpack In this example, I am expecting that you are using Webpack 5. In webpack.config.js: https://gist.github.com/okeeffed/d63721ddc9715a181a5129cf14d00955.js Here we tell Webpack to set src/index.ts as the entry point and to convert to commonjs. We set our Babel and Cache loaders to test and compile any ts or js file that it finds from that entry point. Given that we are not using Node Externals (which avoids bundling node modules), any node modules required will also be compiled into the output. That means that the output in dist/index.js can run our project without node modules installed, which is perfect for AWS Lambda! Running A Build Add a "build": "webpack" entry to your "scripts" key in the package.json file and you are ready to roll! Run npm run build, let Webpack work its magic and then see the single-file output in dist/index.js. Testing Your Projects I use lambda-local for testing the build before deployment with the AWS CDK. It targets Node.js, which is perfect for your TypeScript/JavaScript projects!
Follow the instructions on the website to install and give it a whirl! If things run smoothly, you can be confident in your deployment. Conclusion This post focused purely on the build process. As mentioned in the intro, some of my other posts will cover writing lambda functions and the actual AWS CDK deployments. Resources and Further Reading Image credit: Jess Bailey Originally posted on my blog.
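For reference, a minimal webpack.config.js along the lines described above might look roughly like this. Treat it as a sketch rather than the exact contents of the gist, since the loader options there may differ:

// webpack.config.js - bundle src/index.ts into a single CommonJS file for Lambda
const path = require("path");

module.exports = {
  mode: "production",
  target: "node", // building for the Node.js Lambda runtime
  entry: "./src/index.ts", // lambda entry point
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "index.js",
    libraryTarget: "commonjs2", // emit a CommonJS module that Lambda can require
  },
  resolve: {
    extensions: [".ts", ".js"],
  },
  module: {
    rules: [
      {
        test: /\.(ts|js)$/, // compile any ts or js file found from the entry point
        exclude: /node_modules/,
        use: ["cache-loader", "babel-loader"],
      },
    ],
  },
  // no externals: node modules are bundled into dist/index.js as described above
};

Excluding node_modules from the Babel rule only skips transpiling those files; Webpack still pulls them into the bundle.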
https://medium.com/javascript-in-plain-english/webpack-5-builds-for-aws-lambda-functions-with-typescript-6603533c85cb
["Dennis O'Keeffe"]
2020-12-02 01:39:58.569000+00:00
['Typescript', 'JavaScript', 'Webpack', 'Lambda', 'AWS']
Title Webpack 5 Builds AWS Lambda Functions TypeScriptContent Webpack 5 Builds AWS Lambda Functions TypeScript previous post wrote selfdestructing tweet run AWS Lambda function every night midnight post code AWS CDK infrastructure information written previous post sending serverless Slack message demonstrated run AWS Lambda cron timer Today’s post short overview bridge together show bundled TypeScript code Twitter post node module prepare deployment Folder Structure making assumption “complex” set normally Lambdas write TypeScript use Babel transpilation Given familiar standing let’s work lambda following structure look within function folder httpsgistgithubcomokeeffed9b1e7edc86caff76179d434850f063c0js might also note indexts indexlocalts file indexts project generally entry point lambda indexlocalts file normally used local development swap lambda handler code let run generally import main function another file denoted functionts call Webpack bundle everything one file later fine structure folder however see fit Also note pointed comment Maximilian bundling node module output Webpack always agood idea npm package require binary go build require dynamic import runtime Use judgement whether bundle node module Webpack build another write using Lambda layer instead get around requirement build one single output Setting Project Inside fresh npm project house TypeScript lambda need add required Babel Webpack dependency httpsgistgithubcomokeeffed83313ff75f314653c67760251571320djs Babel Run Command File Inside babelrc add following httpsgistgithubcomokeeffed6640da43291ded9bf3ea0fbc88105d0cjs Setting TypeScript part need adjust flavour config Twitter bot httpsgistgithubcomokeeffed2f2ce20124863b1b0b2ff1153158b01bjs Webpack example expecting using Webpack 5 webpackconfigjs httpsgistgithubcomokeeffedd63721ddc9715a181a5129cf14d00955js tell Webpack set srcindexts entry point convert commonjs set Babel Cache loader test compile t j file find entry point Given using Node Externals avoids bundling node module node module required also compiled output mean output distindexjs run project without node module installed perfect AWS Lambda Running Build build webpack script key packagejson file ready roll Run npm run build let Webpack work magic see singlefile output distindexjs Testing Projects use lambdalocal testing build deployment AWS CDK target Nodejs perfect TypeScriptJavaScript project Follow instruction website install give whirl thing run smoothly confident deployment Conclusion post focused purely build process mentioned intro post cover writing lambda function actual AWS CDK deployment Resources Reading Image credit Jess Bailey Originally posted blogTags Typescript JavaScript Webpack Lambda AWS
3,723
A Woman Was Assaulted After Telling Someone to Wear Their Mask
She just wanted the people around her to wear masks. In early August, a dispute was caught on camera at a Staples store in Hackensack, New Jersey. 54-year-old Margot Kagan told another woman, 25-year-old Terri Thomas, to properly wear her mask inside the store because Thomas’s mask was not fully covering her nose and mouth. At the time, the two customers were using adjacent fax machines. In the disturbing video, Kagan is seen wearing a face shield. She recently underwent a liver transplant and was using a cane to get around. Because of her age group and her surgery, her body’s susceptibility to complications from COVID-19 is likely higher. In response, Thomas approached Kagan, accosted her with profanity, and when Kagan tried to create distance between Thomas and herself, Thomas grabbed Kagan and flung her to the ground. Then, she casually flipped back her hair and walked out of the store as Kagan remained on the floor, holding her leg. Afterward, Kagan was taken to a hospital, where she had to go through even more surgery, this time to repair the broken tibia she incurred from the assault. It’s a really hard video to watch even once. I’ve watched it a few times for fact-checking, and each time, I’m left feeling more disheartened. We’ve been seeing many instances of this. When asked to wear a mask, people become defensive to the point of aggression. There have even been instances of people coughing on those that asked them to wear a mask. The disregard for human life is revolting. Our racial, religious, and political identities in this regard should come second to our identities as people who have basic regard for human life. This doesn’t have any bearing on BLM. Many onlookers have tried to use this instance to either support or discredit the Black Lives Matter Movement. From my perspective, race was not the contentious issue here. At least, it shouldn’t have been. The fact that Margot Kagan is a White woman does not automatically render her a “Karen.” She is still allowed to ask others to respect her right to safety. Similarly, the fact that Terri Thomas is a black woman does not automatically mean that the BLM movement is just a cover for unscrupulous aggressors. There is too much evidence and far too many lived experiences for us to ignore racism any longer. We can believe that we must end structural injustices against Black Americans and simultaneously find Thomas’s actions unacceptable. Of course white supremacy exists, and it is ubiquitous. It is still not the determinant in every situation all the time. One thing I’ve found from living in bubbles of both political extremes is this: there are people on both sides that don’t understand why not wearing a mask is a serious public health issue. We wear masks for a reason similar to why we get vaccines. We don’t just get them so that we stay safe. By lowering our risk of infection, we also lower the risk that we will pass on the infection to a more vulnerable member of our population. We are still facing the pandemic. We’re not done. Even though many places in the United States are opening back up, COVID-19 has not gone away. Maybe we will have to learn to live with the coronavirus for the long term, but when there is a mandate stating that people in public spaces must wear masks, there really is no reasonable justification for throwing a tantrum and refusing to do so. Our racial, religious, and political identities in this regard should come second to our identities as people who have basic regard for human life. 
Is your “right to not wear a mask” worth your neighbor’s life?
https://medium.com/an-amygdala/a-woman-was-assaulted-after-telling-someone-to-wear-their-mask-e056714c0b6d
['Rebeca Ansar']
2020-09-05 01:31:03.012000+00:00
['Covid 19', 'Society', 'America', 'Culture', 'Health']
Title Woman Assaulted Telling Someone Wear MaskContent wanted people around wear mask early August dispute caught camera Staples store Hackensack New Jersey 54yearold Margot Kagan told another woman 25yearold Terri Thomas properly wear mask inside store Thomas’s mask fully covering nose mouth time two customer using adjacent fax machine disturbing video Kagan seen wearing face shield recently underwent liver transplant using cane get around age group surgery body’s susceptibility complication COVID19 likely higher response Thomas approached Kagan accosted profanity Kagan tried create distance Thomas Thomas grabbed Kagan lunged ground casually flipped back hair walked store Kagan remained floor holding leg Afterward Kagan taken hospital go even surgery time repair broken tibia incurred assault It’s really hard video watch even I’ve watched time factchecking time I’m left feeling disheartened We’ve seeing many instance asked wear mask people become defensive point aggression even instance people coughing asked wear mask disregard human life revolting racial religious political identity regard come second identity people basic regard human life doesn’t bearing BLM Many onlooker tried use instance either support discredit Black Lives Matter Movement perspective race contentious issue least shouldn’t fact Margot Kagan White woman automatically render “Karen” still allowed ask others respect right safety Similarly fact Terri Thomas black woman automatically mean BLM movement cover unscrupulous aggressor much evidence far many lived experience u ignore racism longer believe must end structural injustice Black Americans simultaneously find Thomas’s action unacceptable course white supremacy exists ubiquitous still determinant every situation time One thing I’ve found living bubble political extreme people side don’t understand wearing mask serious public health issue wear mask reason similar get vaccine don’t get stay safe lowering risk infection also lower risk pas infection vulnerable member population still facing pandemic We’re done Even though many place United States opening back COVID19 gone away Maybe learn live coronavirus long term mandate stating people public space must wear mask really reasonable justification throwing tantrum refusing racial religious political identity regard come second identity people basic regard human life “right wear mask” worth neighbor’s lifeTags Covid 19 Society America Culture Health
3,724
I created the exact same app in React and Vue. Here are the differences. [2020 Edition]
I created the exact same app in React and Vue. Here are the differences. [2020 Edition] React vs Vue: Now with React Hooks and Vue 3 Composition API! React vs Vue: the saga continues A few years ago, I decided to try and build a fairly standard To Do App in React and Vue. Both apps were built using the default CLIs (create-react-app for React, and vue-cli for Vue). My aim was to write something that was unbiased and simply provided a snapshot of how you would perform certain tasks with both technologies. When React Hooks were released, I followed up the original article with a ‘2019 Edition’ which replaced the use of Class Components with Functional Hooks. With the release of Vue version 3 and its Composition API, now is the time to once again update this article with a ‘2020 Edition’. Let’s take a quick look at how the two apps look: The CSS code for both apps is exactly the same, but there are differences in where these are located. With that in mind, let’s next have a look at the file structure of both apps: You’ll see that their structures are similar as well. The key difference so far is that the React app has two CSS files, whereas the Vue app doesn’t have any. The reason for this is because create-react-app creates its default React components with a separate CSS file for its styles, whereas Vue CLI creates single files that contain HTML, CSS, and JavaScript for its default Vue components. Ultimately, they both achieve the same thing, and there is nothing to say that you can’t go ahead and structure your files differently in React or Vue. It really comes down to personal preference. You will hear plenty of discussion from the dev community over how CSS should be structured, especially with regard to React, as there are a number of CSS-in-JS solutions such as styled-components, and emotion. CSS-in-JS is literally what it sounds like by the way. While these are useful, for now, we will just follow the structure laid out in both CLIs. But before we go any further, let’s take a quick look at what a typical Vue and React component looks like: A typical React file: A typical Vue file: Now that that’s out of the way, let’s get into the nitty-gritty detail! How do we mutate data? But first, what do we even mean by “mutate data”? Sounds a bit technical, doesn’t it? It basically just means changing the data that we have stored. So if we wanted to change the value of a person’s name from John to Mark, we would be ‘mutating the data’. So this is where a key difference between React and Vue lies. While Vue essentially creates a data object, where data can freely be updated, React handles this through what is known as a state hook. Let’s take a look at the set up for both below, then we will explain what is going on after: React state: Vue state: So you can see that we have passed the same data into both, but the structure is a bit different. With React — or at least since 2019 — we would typically handle state through a series of Hooks. These might look a bit strange at first if you haven’t seen this type of concept before. Basically, it works as follows: Let’s say we want to create a list of todos. We would likely need to create a variable called list and it would likely take an array of either strings or maybe objects (if, say, we want to give each todo string an ID and maybe some other things). We would set this up by writing const [list, setList] = useState([]) . Here we are using what React calls a Hook — called useState . This basically lets us keep local state within our components. 
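In rough terms, pieced together from the snippets later in this post (so the exact fields and initial data may differ, and the todo text here is just a placeholder), the two setups look something like this:

// React state: each piece of state gets its own useState hook
const [list, setList] = useState([
  { id: 1, text: "clean the house" }, // illustrative initial todo
]);
const [toDo, setToDo] = useState("");

// Vue state: refs created inside setup() and returned for the template to use
setup() {
  const list = ref([
    { id: 1, text: "clean the house" }, // same illustrative data
  ]);
  const todo = ref("");
  const showError = ref(false); // default value assumed here

  return { list, todo, showError };
}

The real components also return a handful of functions from setup(), which we will get to further down.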
Also, you may have noticed that we passed in an empty array [] inside of useState() . What we put inside there is what we want list to initially be set to, which in our case, we want to be an empty array. However, you will see from the setup sketched above that we passed in some data inside of the array, which ends up being the initialised data for list. Wondering what setList does? There will be more on this later! In Vue, you would typically place all of your mutable data for a component inside of a setup() function that returns an object with the data and functions you want to expose (which basically just means the things you want to be able to make available for use in your app). You will notice that each piece of state (aka the data we want to be able to mutate) in our app is wrapped inside of a ref() function. This ref() function is something that we import from Vue and makes it possible for our app to update whenever any of those pieces of data are changed/updated. In short, if you want to make mutable data in Vue, assign a variable to the ref() function and place any default data inside of it. So how would we reference mutable data in our app? Well, let’s say that we have some piece of data called name that has been assigned a value of Sunil . In React, as we have our smaller pieces of state that we created with useState() , it is likely that we would have created something along the lines of const [name, setName] = useState('Sunil') . In our app, we would reference the same piece of data by simply calling name. Now the key difference here is that we cannot simply write name = 'John' , because React has restrictions in place to prevent this kind of easy, care-free mutation-making. So in React, we would write setName('John') . This is where the setName bit comes into play. Basically, in const [name, setName] = useState('Sunil') , it creates two variables, one which becomes const name = 'Sunil' , while the second const setName is assigned a function that enables name to be recreated with a new value. In Vue, this would be sitting inside of the setup() function and would have been set up as const name = ref('Sunil') . In our app, we would reference this by calling name.value . With Vue, if we want to use the value created inside of a ref() function, we look for .value on the variable rather than simply calling the variable. In other words, if we want the value of a variable that holds state, we look for name.value , not name . If you want to update the value of name , you would do so by updating name.value . For example, let's say that I want to change my name from Sunil to John. I'd do this by writing name.value = "John" . I’m not sure how I feel about being called John, but hey ho, things happen! 😅 Effectively React and Vue are doing the same thing here, which is creating data that can be updated. Vue essentially combines its own version of name and setName by default whenever a piece of data wrapped inside of a ref() function gets updated. React requires that you call setName() with the value inside in order to update state, while Vue makes an assumption that you’d want to do this if you were ever trying to update values inside the data object. So why does React even bother with separating the value from the function, and why is useState() even needed? Essentially, React wants to be able to re-run certain life cycle hooks whenever state changes. In our example, if setName() is called, React will know that some state has changed and can, therefore, run those lifecycle hooks. 
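To put the two update styles side by side, using the name example from above:

// React: state lives in useState, and updates must go through the setter
const [name, setName] = useState("Sunil");
setName("John"); // React registers the change and re-renders
// name = "John"; // a direct assignment like this would not be tracked by React

// Vue: state lives in a ref, and updates go through .value
const name = ref("Sunil");
name.value = "John"; // Vue tracks this and updates anything using the ref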
If you directly mutated state, React would have to do more work to keep track of changes and what lifecycle hooks to run etc. Now that we have mutations out of the way, let’s get into the nitty-gritty by looking at how we would go about adding new items to both of our To Do Apps. How do we create new To Do Items? React: const createNewToDoItem = () => { const newId = generateId(); const newToDo = { id: newId, text: toDo }; setList([...list, newToDo]); setToDo(""); }; How did React do that? In React, our input field has an attribute on it called value. This value gets automatically updated every time its value changes through what is known as an onChange event listener. The JSX (which is basically a variant of HTML) looks like this: <input type="text" placeholder="I need to..." value={toDo} onChange={handleInput} onKeyPress={handleKeyPress} /> So every time the value is changed, it updates state. The handleInput function looks like this: const handleInput = (e) => { setToDo(e.target.value); }; Now, whenever a user presses the + button on the page to add a new item, the createNewToDoItem function is triggered. Let’s take a look at that function again to break down what is going on: const createNewToDoItem = () => { const newId = generateId(); const newToDo = { id: newId, text: toDo }; setList([...list, newToDo]); setToDo(""); }; Essentially, generateId() creates a new ID, stored in newId, that we will give to our new toDo item. The newToDo variable is an object that has an id key, which is given the value from newId. It also has a text key which takes the value from toDo as its value. That is the same toDo that was being updated whenever the input value changed. We then run our setList function and we pass in an array that includes our entire list as well as the newly created newToDo . If the ...list bit seems strange, the three dots at the beginning are something known as the spread operator, which basically passes in all of the values from the list but as separate items, rather than simply passing in an entire array of items as an array. Confused? If so, I highly recommend reading up on spread because it’s great! Anyway, finally we run setToDo() and pass in an empty string. This is so that our input value is empty, ready for new toDos to be typed in. Vue: function createNewToDoItem() { const newId = generateId(); list.value.push({ id: newId, text: todo.value }); todo.value = ""; } How did Vue do that? In Vue, our input field has a handle on it called v-model. This allows us to do something known as two-way binding. Let’s just quickly look at our input field, then we’ll explain what is going on: <input type="text" placeholder="I need to..." v-model="todo" v-on:keyup.enter="createNewToDoItem" /> V-Model ties the input of this field to a variable we created at the top of our setup() function and then exposed as a key inside of the object we returned. We haven’t covered what is returned from setup() much so far, so for your info, here is what we have returned from our setup() function inside of ToDo.vue: return { list, todo, showError, generateId, createNewToDoItem, onDeleteItem, displayError }; Here, list , todo , and showError are our stateful values, while the rest are functions we want to be able to call in other places of our app. Okay, coming back out from our tangent, when the page loads, we have todo set to an empty string, as such: const todo = ref("") . 
If this had some data already in there, such as const todo = ref("add some text here"), our input field would load with add some text here already inside the input field. Anyway, going back to having it as an empty string, whatever text we type inside the input field gets bound to todo.value . This is effectively two-way binding - the input field can update the ref() value and the ref() value can update the input field. So looking back at the createNewToDoItem() code block from earlier, we see that we push the contents of todo.value into the list array - by pushing todo.value into list.value - and then update todo.value to an empty string. We also used the same newId() function as used in the React example. How do we delete from the list? React: const deleteItem = (id) => { setList(list.filter((item) => item.id !== id)); }; How did React do that? So whilst the deleteItem() function is located inside ToDo.js, I was very easily able to make reference to it inside ToDoItem.js by first passing the deleteItem() function down as a prop, as such: <ToDoItem key={item.id} item={item} deleteItem={deleteItem} /> This passes the function down to make it accessible to the child. Then, inside the ToDoItem component, we do the following: <button className="ToDoItem-Delete" onClick={() => deleteItem(item.id)}> - </button> All I had to do to reference a function that sat inside the parent component was to reference props.deleteItem. Now you may have noticed that in the code example, we just wrote deleteItem instead of props.deleteItem. This is because we used a technique known as destructuring which allows us to take parts of the props object and assign them to variables. So in our ToDoItem.js file, we have the following: const ToDoItem = (props) => { const { item, deleteItem } = props; } This created two variables for us, one called item, which gets assigned the same value as props.item, and deleteItem, which gets assigned the value from props.deleteItem. We could have avoided this whole destructuring thing by simply using props.item and props.deleteItem, but I thought it was worth mentioning! Vue: function onDeleteItem(id) { list.value = list.value.filter(item => item.id !== id); } How did Vue do that? A slightly different approach is required in Vue. We essentially have to do three things here: Firstly, on the element we want to call the function: <button class="ToDoItem-Delete" @click="deleteItem(item.id)"> - </button> Then we have to create a function that emits the event inside the child component (in this case, ToDoItem.vue), which looks like this: function deleteItem(id) { emit("delete", id); } Along with this, you’ll notice that we actually reference a function when we add ToDoItem.vue inside of ToDo.vue: <ToDoItem v-for="item in list" :item="item" @delete="onDeleteItem" :key="item.id" /> This is what is known as a custom event-listener. It listens out for any occasion where an emit is triggered with the string of 'delete'. If it hears this, it triggers a function called onDeleteItem. This function sits inside of ToDo.vue, rather than ToDoItem.vue. This function, as listed earlier, simply filters the id from the list.value array. It’s also worth noting here that in the Vue example, I could have simply written the $emit part inside of the @click listener, as such: <button class="ToDoItem-Delete" @click="$emit('delete', item.id)"> - </button> This would have reduced the number of steps down from 3 to 2, and this is simply down to personal preference. 
In short, child components in React will have access to parent functions via props (providing you are passing props down, which is fairly standard practice and you’ll come across this loads of times in other React examples), whilst in Vue, you have to emit events from the child that will usually be collected inside the parent component. How do we pass event listeners? React: Event listeners for simple things such as click events are straightforward. Here is an example of how we created a click event for a button that creates a new ToDo item: <button className="ToDo-Add" onClick={createNewToDoItem}> + </button> Super easy here and pretty much looks like how we would handle an in-line onClick with vanilla JS. As mentioned in the Vue section, it took a little bit longer to set up an event listener to handle whenever the enter button was pressed. This essentially required an onKeyPress event to be handled by the input tag, as such: <input type="text" placeholder="I need to..." value={toDo} onChange={handleInput} onKeyPress={handleKeyPress} /> This function essentially triggered the createNewToDoItem function whenever it recognised that the ‘enter’ key had been pressed, as such: const handleKeyPress = (e) => { if (e.key === "Enter") { createNewToDoItem(); } }; Vue: In Vue it is super straightforward. We simply use the @ symbol, and then the type of event listener we want to use. So for example, to add a click event listener, we could write the following: <button class="ToDo-Add" @click="createNewToDoItem"> + </button> Note: @click is actually shorthand for writing v-on:click . The cool thing with Vue event listeners is that there are also a bunch of things that you can chain on to them, such as .once which prevents the event listener from being triggered more than once. There are also a bunch of shortcuts when it comes to writing specific event listeners for handling key strokes. I found that it took quite a bit longer to create an event listener in React to create new ToDo items whenever the enter button was pressed. In Vue, I was able to simply write: <input type="text" v-on:keyup.enter="createNewToDoItem"/> How do we pass data through to a child component? React: In React, we pass props onto the child component at the point where it is created. Such as: <ToDoItem key={item.id} item={item} deleteItem={deleteItem} />; Here we see two props passed to the ToDoItem component. From this point on, we can now reference them in the child component via the props object. So to access the item prop, we simply call props.item . You may have noticed that there's also a key prop (so technically we're actually passing three props). This is mainly for React's internals, as it makes things easier when it comes to making updates and tracking changes among multiple versions of the same component (which we have here because each todo is a copy of the ToDoItem component). It's also important to ensure your components have unique keys, otherwise React will warn you about it in the console. Vue: In Vue, we pass props onto the child component at the point where it is created. Such as: <ToDoItem v-for="item in list" :item="item" @delete="onDeleteItem" :key="item.id" /> Once this is done, we then pass them into the props array in the child component, as such: props: [ "item" ] . These can then be referenced in the child by their name — so in our case, item . 
If you're unsure about where to place that prop key, here is what the entire export default object looks like in our child component: export default { name: "ToDoItem", props: ["item"], setup(props, { emit }) { function deleteItem(id) { emit("delete", id); } return { deleteItem, }; }, }; One thing you may have noticed is that when looping through data in Vue, we actually just looped through list rather than list.value . Trying to loop through list.value won't work here. How do we emit data back to a parent component? React: We firstly pass the function down to the child component by referencing it as a prop in the place where we call the child component. We then add the call to the function on the child by whatever means, such as an onClick, by referencing props.whateverTheFunctionIsCalled — or whateverTheFunctionIsCalled if we have used destructuring. This will then trigger the function that sits in the parent component. We can see an example of this entire process in the section ‘How do we delete from the list’. Vue: In our child component, we simply write a function that emits a value back to the parent component. In our parent component, we write a function that listens for when that value is emitted, which can then trigger a function call. We can see an example of this entire process in the section ‘How do we delete from the list’. And there we have it! 🎉 We’ve looked at how we add, remove and change data, pass data in the form of props from parent to child, and send data from the child to the parent in the form of event listeners. There are, of course, lots of other little differences and quirks between React and Vue, but hopefully the contents of this article have helped to serve as a bit of a foundation for understanding how both frameworks handle stuff. If you’re interested in forking the styles used in this article and want to make your own equivalent piece, please feel free to do so! 👍 Github links to both apps: Vue ToDo: https://github.com/sunil-sandhu/vue-todo-2020 React ToDo: https://github.com/sunil-sandhu/react-todo-2020 The 2019 version of this article https://medium.com/javascript-in-plain-english/i-created-the-exact-same-app-in-react-and-vue-here-are-the-differences-2019-edition-42ba2cab9e56 The 2018 version of this article https://medium.com/javascript-in-plain-english/i-created-the-exact-same-app-in-react-and-vue-here-are-the-differences-e9a1ae8077fd If you would like to translate this article into another language, please go ahead and do so — let me know when it is complete so that I can add it to the list of translations above. JavaScript In Plain English Enjoyed this article? If so, get more similar content by subscribing to Decoded, our YouTube channel! Originally posted at: sunilsandhu.com
https://medium.com/javascript-in-plain-english/i-created-the-exact-same-app-in-react-and-vue-here-are-the-differences-2020-edition-36657f5aafdc
['Sunil Sandhu']
2020-08-09 17:30:21.608000+00:00
['JavaScript', 'Web Development', 'React', 'Vuejs', 'Programming']
Title created exact app React Vue difference 2020 EditionContent created exact app React Vue difference 2020 Edition React v Vue React Hooks Vue 3 Composition API React v Vue saga continues year ago decided try build fairly standard App React Vue apps built using default CLIs createreactapp React vuecli Vue aim write something unbiased simply provided snapshot would perform certain task technology React Hooks released followed original article ‘2019 Edition’ replaced use Class Components Functional Hooks release Vue version 3 Composition API time one update article ‘2020 Edition’ Let’s take quick look two apps look CSS code apps exactly difference located mind let’s next look file structure apps You’ll see structure similar well key difference far React app two CSS file whereas Vue app doesn’t reason createreactapp creates default React component separate CSS file style whereas Vue CLI creates single file contain HTML CSS JavaScript default Vue component Ultimately achieve thing nothing say can’t go ahead structure file differently React Vue really come personal preference hear plenty discussion dev community CSS structured especially regard React number CSSinJS solution styledcomponents emotion CSSinJS literally sound like way useful follow structure laid CLIs go let’s take quick look typical Vue React component look like typical React file typical Vue file that’s way let’s get nitty gritty detail mutate data first even mean “mutate data” Sounds bit technical doesn’t basically mean changing data stored wanted change value person’s name John Mark would ‘mutating data’ key difference React Vue lie Vue essentially creates data object data freely updated React handle known state hook Let’s take look set image explain going React state Vue state see passed data structure bit different React — least since 2019 — would typically handle state series Hooks might look bit strange first haven’t seen type concept Basically work follows Let’s say want create list todos would likely need create variable called list would likely take array either string maybe object say want give todo string ID maybe thing would set writing const list setList useState using React call Hook — called useState basically let u keep local state within component Also may noticed passed empty array inside useState put inside want list initially set case want empty array However see image passed data inside array end initialised data list Wondering setList later Vue would typically place mutable data component inside setup function return object data function want expose basically mean thing want able make available use app notice piece state aka data want able mutate data app wrapped inside ref function ref function something import Vue make possible app update whenever piece data changedupdated short want make mutable data Vue assign variable ref function place default data inside would reference mutable data app Well let’s say piece data called name assigned value Sunil React smaller piece state created useState likely would created something along line const name setName useStateSunil app would reference piece data calling simply calling name key difference cannot simply write name John React restriction place prevent kind easy carefree mutationmaking React would write setNameJohn setName bit come play Basically const name setName useStateSunil creates two variable one becomes const name Sunil second const setName assigned function enables name recreated new value Vue would sitting inside setup function would called const 
name ref‘Sunil app would reference calling namevalue Vue want use value created inside ref function look value variable rather simply calling variable word want value variable hold state look namevalue name want update value name would updating namevalue example let say want change name Sunil John Id writing namevalue John I’m sure feel called John hey ho thing happen 😅 Effectively React Vue thing creating data updated Vue essentially combine version name setName default whenever piece data wrappeed inside ref function get updated React requires call setName value inside order update state Vue make assumption you’d want ever trying update value inside data object React even bother separating value function useState even needed Essentially React want able rerun certain life cycle hook whenever state change example setName called React know state changed therefore run lifecycle hook directly mutated state React would work keep track change lifecycle hook run etc mutation way let’s get nitty gritty looking would go adding new item Apps create new Items React const createNewToDoItem const newId generateId const newToDo id newId text toDo setListlist newToDo setToDo React React input field attribute called value value get automatically updated every time value change known onChange event listener JSX basically variant HTML look like input typetext placeholderI need valuetoDo onChangehandleInput onKeyPresshandleKeyPress every time value changed update state handleInput function look like const handleInput e setToDoetargetvalue whenever user press button page add new item createNewToDoItem function triggered Let’s take look function break going const createNewToDoItem const newId generateId const newToDo id newId text toDo setListlist newToDo setToDo Essentially newId function basically creating new ID give new toDo item newToDo variable object take id key given value newId also text key take value toDo value toDo updated whenever input value changed run setList function pas array includes entire list well newly created newToDo list bit seems strange three dot beginning something known spread operator basically pass value list separate item rather simply passing entire array item array Confused highly recommend reading spread it’s great Anyway finally run setToDo pas empty string input value empty ready new toDos typed Vue function createNewToDoItem const newId generateId listvaluepush id newId text todovalue todovalue Vue Vue input field handle called vmodel allows u something known twoway binding Let’s quickly look input field we’ll explain going input typetext placeholderI need vmodeltodo vonkeyupentercreateNewToDoItem VModel tie input field variable created top setup function exposed key inside object returned havent covered returned object much far info returned setup function inside ToDovue return list todo showError generateId createNewToDoItem onDeleteItem displayError list todo showError stateful value everything else function want able call place app Okay coming back tangent page load todo set empty string const todo ref data already const todo refadd text input field would load add text already inside input field Anyway going back empty string whatever text type inside input field get bound todovalue effectively twoway binding input field update ref value ref value update input field looking back createNewToDoItem code block earlier see push content todovalue list array pushing todovalue listvalue update todovalue empty string also used newId function used React example delete list 
React const deleteItem id setListlistfilteritem itemid id React whilst deleteItem function located inside ToDojs easily able make reference inside ToDoItemjs firstly passing deleteItem function prop ToDoItem keyitemid itemitem deleteItemdeleteItem firstly pass function make accessible child inside ToDoItem component following button classNameToDoItemDelete onClick deleteItemitemid button reference function sat inside parent component reference propsdeleteItem may noticed code example wrote deleteItem instead propsdeleteItem used technique known destructuring allows u take part prop object assign variable ToDoItemjs file following const ToDoItem prop const item deleteItem prop created two variable u one called item get assigned value propsitem deleteItem get assigned value propsdeleteItem could avoided whole destructuring thing simply using propsitem propsdeleteItem thought worth mentioning Vue function onDeleteItemid listvalue listvaluefilteritem itemid id Vue slightly different approach required Vue essentially three thing Firstly element want call function button classToDoItemDelete clickdeleteItemitemid button create emit function method inside child component case ToDoItemvue look like function deleteItemid emitdelete id Along you’ll notice actually reference function add ToDoItemvue inside ToDovue ToDoItem vforitem list itemitem deleteonDeleteItem keyitemid known custom eventlistener listens occasion emit triggered string ‘delete’ hears trigger function called onDeleteItem function sits inside ToDovue rather ToDoItemvue function listed earlier simply filter id listvalue array It’s also worth noting Vue example could simply written emit part inside click listener button classToDoItemDelete clickemitdelete itemid button would reduced number step 3 2 simply personal preference short child component React access parent function via prop providing passing prop fairly standard practice you’ll come across load time React example whilst Vue emit event child usually collected inside parent component pas event listener React Event listener simple thing click event straight forward example created click event button creates new ToDo item button classNameToDoAdd onClickcreateNewToDoItem button Super easy pretty much look like would handle inline onClick vanilla JS mentioned Vue section took little bit longer set event listener handle whenever enter button pressed essentially required onKeyPress event handled input tag input typetext placeholderI need valuetoDo onChangehandleInput onKeyPresshandleKeyPress function essentially triggered createNewToDoItem function whenever recognised ‘enter’ key pressed const handleKeyPress e ekey Enter createNewToDoItem Vue Vue super straightforward simply use symbol type eventlistener want example add click event listener could write following button classToDoAdd clickcreateNewToDoItem button Note click actually shorthand writing vonclick cool thing Vue event listener also bunch thing chain prevents event listener triggered also bunch shortcut come writing specific event listener handling key stroke found took quite bit longer create event listener React create new ToDo item whenever enter button pressed Vue able simply write input type”text” vonkeyupenter”createNewToDoItem” pas data child component React react pas prop onto child component point created ToDoItem keyitemid itemitem deleteItemdeleteItem see two prop passed ToDoItem component point reference child component via thisprops access itemtodo prop simply call propsitem may noticed there also key prop 
technically actually passing three prop mainly Reacts internals make thing easier come making update tracking change among multiple version component todo copy ToDoItem component also important ensure component unique key otherwise React warn console Vue Vue pas prop onto child component point created ToDoItem vforitem list itemitem deleteonDeleteItem keyitemid done pas prop array child component prop todo referenced child name — case todo youre unsure place prop key entire export default object look like child component export default name ToDoItem prop item setupprops emit function deleteItemid emitdelete id return deleteItem One thing may noticed looping data Vue actually looped list rather listvalue Trying loop listvalue wont work emit data back parent component React firstly pas function child component referencing prop place call child component add call function child whatever mean onClick referencing propswhateverTheFunctionIsCalled — whateverTheFunctionIsCalled used destructuring trigger function sits parent component see example entire process section ‘How delete list’ Vue child component simply write function emits value back parent function parent component write function listens value emitted trigger function call see example entire process section ‘How delete list’ 🎉 We’ve looked add remove change data pas data form prop parent child send data child parent form event listener course lot little difference quirk React Vue hopefully content article helped serve bit foundation understanding framework handle stuff you’re interested forking style used article want make equivalent piece please feel free 👍 Github link apps Vue ToDo httpsgithubcomsunilsandhuvuetodo2020 React ToDo httpsgithubcomsunilsandhureacttodo2020 2019 version article httpsmediumcomjavascriptinplainenglishicreatedtheexactsameappinreactandvueherearethedifferences2019edition42ba2cab9e56 2018 version article httpsmediumcomjavascriptinplainenglishicreatedtheexactsameappinreactandvueherearethedifferencese9a1ae8077fd would like translate article another language please go ahead — let know complete add list translation JavaScript Plain English Enjoyed article get similar content subscribing Decoded YouTube channel Originally posted sunilsandhucomTags JavaScript Web Development React Vuejs Programming
3,725
Death By a Thousand Hacks
On the drowning of art in a sea of mediocre “content” By MARTIN REZNY Whether you are a customer seeking a great experience, or an author trying desperately not to die of starvation on a daily basis, this essay concerns you. I could write this in practical marketing terms, but that would only add to the sprawling mass of artless placeholders meant to entertain for a moment and then vanish in a black hole of time collectively wasted by all of humanity. Let’s attempt to make it more into a raft to hold onto as we’re circling the drain. To make the issue very clear very fast, the astronomers face an enemy that works as a near perfect metaphor to this problem — the light pollution. It too only really appeared in modern times as a result of advanced technology. Unlike thousands of years before, whole cities recently became awash with artificial light at night, drowning out the most spectacular display of the ages, the sky full of stars, thus rendering millennia of culture obscured to invisible. To many people nowadays, this issue seems eminently unimportant, a minor gripe of a concerned elite minority. More light at night is just better, right? This reasoning may make complete sense if you are a star-blind resident of urban landscapes, but only precisely because of the lack of experience with the real thing, and because of the ignorance of the eternal truth of nature and human condition it conveys. It is a forgetting that artifice is the lesser miracle. Much like observation of stars yields deep secrets about the universe and has inspired classical masterpieces of art that will never die for as long as there will be humans, real art conveys some kind of truth that needs to be expressed, inspiring humans to great feats of learning and accomplishment. Compared to art, “content” exists solely to make someone money, to be consumed and forgotten. It’s highest ambition is to amuse, like neon signs. It should be obvious that no amount of neon sign gazing will make one know more about the world or themselves, and if that’s all that you can see, it’s a crime against your quality of life. Again, it may seem overblown, but as Neil DeGrasse Tyson, one of today’s most prominent astrophysicists, keeps saying, as a kid in New York he practically didn’t believe there were any stars in the sky. It took a visit to a planetarium to even make him aware of their existence. Now imagine how many Neils of artistic expression we’re losing if so much of what we can read, watch, listen to, or play is just a fake replica, a hollow imitation, a dead simulation of art. No one is going to learn how to be a good storyteller by consuming bad storytelling, become a good writer by reading bad writing, or turn into a good musical composer by conforming to genre cliches. I could go on for a very, very long time, but I think you catch my drift. Before someone starts bringing economics into this, profit is not only no justification for a crime against culture of this magnitude, it makes it more condemnable. If at least it was an accident, it would be forgivable. As such, it is more akin to a robbery, extracting value out of values. Even assuming one’s just a passive consumer and not an active perpetrator, to consume invariably means to destroy. Art is not to be consumed, it is to be appreciated, or it dies. This may sound overly dramatic, but it is no exaggeration. Sure, the actual art of our past in some sense still exists, its physical carriers and patterns imprinted on them are preserved somewhere. 
But it is dying to the extent to which it becomes more difficult to access. If not in terms of “views” by being hidden in a pile of loud nonsense, then in terms of diminishing human ability to engage with it on any meaningful level, with understanding and purpose. And it doesn’t much matter what greatness there used to be when the cultural landscape of today is an endless landfill from horizon to horizon. People can only ever truly live in the now, and if the now is thoroughly awful, any art that manages to still somehow be made feels lesser for being invariably an escape attempt. Sure, the contemporary art industry (an oxymoron if there ever was one) has never been bigger, but it has been built out of farts and prison bars. The solution is one of those so simple that they feel impossible — if you’re a creative person, just don’t be a hack. Don’t make things to be consumed and disappear, build things to last, things that will fight against those who would try to use them. Leave the night city for the countryside and sleep under the stars, or if you cannot, if you’re trapped within the walls of false realities imposed upon you by others, at least dream about them. They’re still there. Like what you read? Subscribe to my publication, heart, follow, or… Make me happy and throw something into my tip jar
https://medium.com/words-of-tomorrow/death-by-a-thousand-hacks-f198ef9a61d1
['Martin Rezny']
2020-01-23 16:01:42.306000+00:00
['Storytelling', 'Astronomy', 'Art', 'Neil deGrasse Tyson', 'Creativity']
Title Death Thousand HacksContent drowning art sea mediocre “content” MARTIN REZNY Whether customer seeking great experience author trying desperately die starvation daily basis essay concern could write practical marketing term would add sprawling mass artless placeholder meant entertain moment vanish black hole time collectively wasted humanity Let’s attempt make raft hold onto we’re circling drain make issue clear fast astronomer face enemy work near perfect metaphor problem — light pollution really appeared modern time result advanced technology Unlike thousand year whole city recently became awash artificial light night drowning spectacular display age sky full star thus rendering millennium culture obscured invisible many people nowadays issue seems eminently unimportant minor gripe concerned elite minority light night better right reasoning may make complete sense starblind resident urban landscape precisely lack experience real thing ignorance eternal truth nature human condition conveys forgetting artifice lesser miracle Much like observation star yield deep secret universe inspired classical masterpiece art never die long human real art conveys kind truth need expressed inspiring human great feat learning accomplishment Compared art “content” exists solely make someone money consumed forgotten It’s highest ambition amuse like neon sign obvious amount neon sign gazing make one know world that’s see it’s crime quality life may seem overblown Neil DeGrasse Tyson one today’s prominent astrophysicist keep saying kid New York practically didn’t believe star sky took visit planetarium even make aware existence imagine many Neils artistic expression we’re losing much read watch listen play fake replica hollow imitation dead simulation art one going learn good storyteller consuming bad storytelling become good writer reading bad writing turn good musical composer conforming genre cliche could go long time think catch drift someone start bringing economics profit justification crime culture magnitude make condemnable least accident would forgivable akin robbery extracting value value Even assuming one’s passive consumer active perpetrator consume invariably mean destroy Art consumed appreciated dy may sound overly dramatic exaggeration Sure actual art past sense still exists physical carrier pattern imprinted preserved somewhere dying extent becomes difficult access term “views” hidden pile loud nonsense term diminishing human ability engage meaningful level understanding purpose doesn’t much matter greatness used cultural landscape today endless landfill horizon horizon People ever truly live thoroughly awful art manages still somehow made feel lesser invariably escape attempt Sure contemporary art industry oxymoron ever one never bigger built fart prison bar solution one simple feel impossible — you’re creative person don’t hack Don’t make thing consumed disappear build thing last thing fight would try use Leave night city countryside sleep star cannot you’re trapped within wall false reality imposed upon others least dream They’re still Like read Subscribe publication heart follow or… Make happy throw something tip jarTags Storytelling Astronomy Art Neil deGrasse Tyson Creativity
3,726
S.O.L.I.D Principles Explained In Five Minutes
Dependency Inversion Principle (DIP) This principle states that high-level modules must not depend on low-level modules; instead, both should depend on abstractions. Consider the MessageBoard code snippet below: public class MessageBoard { private WhatsUpMessage message; public MessageBoard(WhatsUpMessage message) { this.message = message; } } The high-level module MessageBoard now depends on the low-level WhatsUpMessage. If we needed to print the underlying message in the high-level module, we would now find ourselves at the mercy of the low-level module. We would have to write WhatsUpMessage-specific logic to print that message. If, later, FacebookMessage needed to be supported, we would have to modify the high-level module (tightly-coupled code). That violates the Dependency Inversion Principle. A way to fix that would be to extract that dependency. Create an interface and add whatever behaviour your high-level module needs. Any low-level class that your high-level module needs to work with would then have to implement that interface. Your interface would look something like this: public interface IMessage { void PrintMessage(); } Your MessageBoard now would look like this: public class MessageBoard { private IMessage message; public MessageBoard(IMessage message) { this.message = message; } public void PrintMessage() { this.message.PrintMessage(); } } The low-level module would look like this: public class WhatsUpMessage : IMessage { public void PrintMessage() { //print whatsup message } } public class FacebookMessage : IMessage { public void PrintMessage() { //print facebook message } } That abstraction removes the high-level module's dependency on the low-level module. The high-level module is now completely independent of any low-level module. Using the S.O.L.I.D principles when writing code will make you a better developer and make your life a lot easier. You might even become the new popular person on the block if you're the only one doing it. Thank you for making it to the end. Until next time, Happy coding.
https://medium.com/swlh/s-o-l-i-d-principles-explained-in-five-minutes-8d36b1da4f6b
[]
2019-12-04 00:17:34.860000+00:00
['Engineering', 'Software', 'Software Development', 'Programming', 'Design Patterns']
Title SOLID Principles Explained Five MinutesContent Dependency inversion Principle DIP principle state highlevel module must depend lowlevel module depend abstraction Consider MessageBoard code snippet public class MessageBoard private WhatUpMessage message public MessageBoardWhatsUpMessage message thismessage message highlevel module MessageBoard depends lowlevel WhatsUpMessage needed print underlying message highlevel module would find mercy lowlevel module would write WhatsUpMessage specific logic print message later FacebookMessage needed supported would modify highlevel module tightlycoupled code violates Dependency inversion principle way fix would extract dependency Create interface add whatever highlevel module need class needed use highlevel module would implement interface interface would look something like public interface IMessage public void PrintMessage MessageBoard would look like public class MessageBoard private IMessage message public MessageBoardIMessage message thismessage message public void PrintMessage thismessagePrintMessage lowlevel module would look like public class WhatUpMessage IMessage public void PrintMessage print whatsup message public class FacebookMessage IMessage public void PrintMessage print facebook message abstraction remove dependency lowlevel module highlevel module highlevel module completely independent lowlevel module Using SOLID principle writing code make better developer make life lot easier might even become new popular person block you’re one Thank making end next time Happy codingTags Engineering Software Software Development Programming Design Patterns
3,727
How the BTS Universe Successfully Engages Thousands of Fans
Images from BigHit’s official Twitter. BTS is known around the world for their relatable music, dynamic concert performances, and their passionate fanbase. But another large element behind this successful group of young men is the “BTS Universe” (BU), a fictional world with a narrative that depicts characters inspired by the BTS members. The BU started with just a handful of music videos, but it later went cross-platform with the introduction of the “HwaYangYeonHwa Notes,” small booklets of text included in the group’s Love Yourself albums. Additional music videos and short films fed the storyline, and BigHit Entertainment recently released a physical HYYH The Notes book and launched a webtoon on Naver titled “Save Me.” Dedicated fans spend hours deconstructing and analyzing this narrative, which spawns countless Twitter threads, blog posts, and YouTube videos about the storyline. Although the narrative began in 2015, fans are still consistently involved in discussing this story as new information continues to come out. It’s clear that the BU successfully intrigues fans, pulling them into the narrative and the world of theories as deeply as they wish to go — but what lies beneath this narrative’s ability to draw people in? In his book on writing technique titled The Emotional Craft of Fiction, author Donald Maass writes, “To entertain, a story must present novelty, challenge, and/or aesthetic value.” Maass encourages writers to “force the reader to figure something out,” because that will both engage the reader and make it more likely they’ll remember the story. The BTS Universe, though not strictly a narrative in the form of a book, manages to hit on all of these points. The BU is a novel concept, a first for K-Pop, as no other group has ventured into storytelling at this level, across this many platforms, and with this level of cohesiveness before. In addition to music videos, short films, texts, and the webtoon, the BU was further expanded by the Smeraldo blog, which provided lore surrounding the smeraldo flower, a fictional flower that appears in BTS’s storyline. Smeraldo was also used to name the Twitter account that promotes the webtoon and the HYYH The Notes book. Additionally, a real Smeraldo shop that sold special flower-themed merchandise opened at the group’s Love Yourself Seoul concerts. The Smeraldo tie-in bridged the gap between the BU world and our own, adding yet another layer of interactivity and immersion. Fans are captivated by the level of detail and amount of content that’s been put into the BU. The challenge of the BU lies in its storytelling. The story is fleshed out mainly in the “HYYH Notes,” which are epistolary in nature. Each note bears a name and a date, but despite the three albums’ worth of Notes and the full-length HYYH The Notes 1 book we have so far, the full story is yet unknown. Events that take place in the Notes sometimes appear in videos or the webtoon, but no medium gives the full picture. Gaps in the narrative leave fans to put the pieces together themselves and to theorize about the missing portions, symbolism, and character motives. It’s particularly effective since bits of the story are released only periodically, intriguing fans to wait for the next piece to drop. BTS’s content is often released in media res, effectively drawing fans in with the promise that more of the backstory will be revealed later. An additional challenge exists in the outside sources that occasionally influence BTS’s work. 
With the release of title track “Blood, Sweat & Tears” off their 2016 album WINGS, the band noted the influence of Demian, a German Bildungsroman by Hermann Hesse. Later, BigHit Entertainment’s official shop released a book bundle that included Demian as well as Erich Fromm’s The Art of Loving and Murray Stein’s Jung’s Map of the Soul, giving fans even more to connect to BTS’s releases. With their upcoming release bearing the title Map of the Soul: Persona, it’s clear that fans will have even more material to unravel. When it comes to aesthetics, there’s much to appreciate in BTS’s music videos and short films, which are all shot cinematically and with great care to detail. Both the visual storytelling and the aesthetically pleasing videos serve to hold the audience’s attention. Since BTS’s content provides a long-running story rich in symbols and connected themes, fans are encouraged to re-watch past videos to look for information they may have missed. Truly, by hitting all three points of novelty, challenge, and aesthetics and utilizing so many forms of media, BigHit ensures that fans stay engaged, and when we stay engaged, we develop deeper attachments. Maass touches on emotional attachments to fiction in his book, discussing how psychology’s affective disposition theory explains why readers become emotionally involved — we tend to make moral judgments about characters and attach emotions to them as a result. If we feel something in relation to a fictional character, we’re that much more bonded to them and the story. From the very start of the BU, even before it was billed as the BU, BTS’s characters played into disposition theory. In the first string of BU music videos including “I Need U” and “Run,” the members of the group are shown as innocent but troubled youth, with each character confronting his own struggles. At the time, fans had nothing more to go on than the music videos, but these videos served as a great emotional hook. Fans could relate to some of the realistic characters and sympathize with others because of how they were depicted — we judged them to be good characters, despite their bad circumstances. Creating relatable and likable characters is one huge step in the direction of successful emotional attachment. What makes the experience even more emotionally invested for fans is that the fictional characters are portrayed by the real BTS members. They use their real names for these characters, and occasionally real personality traits bleed over into their fictional counterparts. Fans who already have an attachment to the real BTS will more easily attach to this fictional story and world. This ease of attachment eliminates a hurdle in traditional fiction writing, because in a book, the characters are unknown. In the BU, however, they’re unknown and revealed only incrementally, but they are presented in a familiar form. With so many sources of information and a slew of gaps to fill in, the BU allows fans to play an active role in the group’s narrative. Other K-Pop releases may be momentarily engaging, but if there’s not much to mull over, we’re not as likely to keep thinking about them and may lose interest. But the BTS Universe is special because it extends its storytelling beyond just a music video, or even a series of videos, enabling fans to actively engage and solidifying the fans’ attachment to the series, the characters, and the members of BTS themselves. 
Maass may be talking about writing novels, but his formula for effective, engaging fiction concisely explains why so many of us are willing participants in this cross-platform fictional universe. Interested in learning more about the BU? I’ve opened up my website, The BTS Effect, where most of my BTS-related content lives!
https://medium.com/bangtan-journal/how-the-bts-universe-successfully-engages-thousands-of-fans-78152ad8338f
['Courtney Lazore']
2019-12-03 14:51:01.913000+00:00
['Storytelling', 'Music', 'Kpop', 'Bts', 'Bts Army']
Title BTS Universe Successfully Engages Thousands FansContent Images BigHit’s official Twitter BTS known around world relatable music dynamic concert performance passionate fanbase another large element behind successful group young men “BTS Universe” BU fictional world narrative depicts character inspired BTS member BU started handful music video later went crossplatform introduction “HwaYangYeonHwa Notes” small booklet text included group’s Love album Additional music video short film fed storyline BigHit Entertainment recently released physical HYYH Notes book launched webtoon Naver titled “Save Me” Dedicated fan spend hour deconstructing analyzing narrative spawn countless Twitter thread blog post YouTube video storyline Although narrative began 2015 fan still consistently involved discussing story new information continues come It’s clear BU successfully intrigue fan pulling narrative world theory deeply wish go — lie beneath narrative’s ability draw people book writing technique titled Emotional Craft Fiction author Donald Maass writes “To entertain story must present novelty challenge andor aesthetic value” Maass encourages writer “force reader figure something out” engage reader make likely they’ll remember story BTS Universe though strictly narrative form book manages hit point BU novel concept first KPop group ventured storytelling level across many platform level cohesiveness addition music video short film text webtoon BU expanded Smeraldo blog provided lore surrounding smeraldo flower fictional flower appears BTS’s storyline Smeraldo also used name Twitter account promotes webtoon HYYH Notes book Additionally real Smeraldo shop sold special flowerthemed merchandise opened group’s Love Seoul concert Smeraldo tiein bridged gap BU world adding yet another layer interactivity immersion Fans captivated level detail amount content that’s put BU challenge BU lie storytelling story fleshed mainly “HYYH Notes” epistolary nature note bear name date despite three albums’ worth Notes fulllength HYYH Notes 1 book far full story yet unknown Events take place Notes sometimes appear video webtoon medium give full picture Gaps narrative leave fan put piece together theorize missing portion symbolism character motif It’s particularly effective since bit story released periodically intriguing fan wait next piece drop BTS’s content often released medium re effectively drawing fan promise backstory revealed later additional challenge exists outside source occasionally influence BTS’s work release title track “Blood Sweat Tears” 2016 album WINGS band noted influence Demian German Bildungsroman Hermann Hesse Later BigHit Entertainment’s official shop released book bundle included Demian well Erich Fromm’s Art Loving Murray Stein’s Jung’s Map Soul giving fan even connect BTS’s release upcoming release bearing title Map Soul Persona it’s clear fan even material unravel come aesthetic there’s much appreciate BTS’s music video short film shot cinematically great care detail visual storytelling aesthetically pleasing video serve hold audience’s attention Since BTS’s content provides longrunning story rich symbol connected theme fan encouraged rewatch past video look information may missed Truly hitting three point novelty challenge aesthetic utilizing many form medium BigHit ensures fan stay engaged stay engaged develop deeper attachment Maass touch emotional attachment fiction book discussing psychology’s affective disposition theory explains reader become emotionally involved — tend make moral judgment 
character attach emotion result feel something relation fictional character we’re much bonded story start BU even billed BU BTS’s character played disposition theory first string BU music video including “I Need U” “Run” member group shown innocent troubled youth character confronting struggle time fan nothing go music video video served great emotional hook Fans could relate realistic character sympathize others depicted — judged good character despite bad circumstance Creating relatable likable character one huge step direction successful emotional attachment make experience even emotionally invested fan fictional character portrayed real BTS member use real name character occasionally real personality trait bleed fictional counterpart Fans already attachment real BTS easily attach fictional story world ease attachment eliminates hurdle traditional fiction writing book character unknown BU however they’re unknown revealed incrementally presented familiar form many source information slew gap fill BU allows fan play active role group’s narrative KPop release may momentarily engaging there’s much mull we’re likely keep thinking may lose interest BTS Universe special extends storytelling beyond music video even series video enabling fan actively engage solidifying fans’ attachment series character member BTS Maass may talking writing novel formula effective engaging fiction concisely explains many u willing participant crossplatform fictional universe Interested learning BU I’ve opened website BTS Effect BTSrelated content livesTags Storytelling Music Kpop Bts Bts Army
3,728
Organise your Jupyter Notebook with these tips
📍 Tip 4. Create user-defined functions and save them in a module You may have heard of the DRY principle: Don’t Repeat Yourself. If you haven’t heard of this software engineering principle before, it is about “not duplicating a piece of knowledge within a system”. One of my interpretations of this principle in Data Science is to create functions to abstract away recurring tasks to reduce copy pasting. You can even use classes if it makes sense in your case. Here are the suggested steps for this tip: Create a function Ensure the function has an intuitive name Document the function with a docstring (Ideally) Unit test the function Save the function in a .py file (a .py file is referred to as a module) Import the module in the Notebook to access the function Use the function in the Notebook Let’s try to contextualise these steps with examples: Here’s a simple way to assess if a function has an intuitive name: If you think a colleague who hasn’t seen the function before could roughly guess what the function does just by looking at its name, then you are on the right track. When documenting these functions, I have adapted a few different styles in a way that made more sense to me. While these examples serve as working example functions for Data Science, I highly encourage you to check out official guides such as the ones below to learn the best practices in naming and documentation conventions, style guides and type hints: You can even browse through modules in a well-established package’s GitHub repository to get inspiration. If you saved these functions in a helpers.py file and imported the helpers module (by the way, a Python module just means a .py file) in your Notebook with import helpers , you can access the documentation by writing the function name followed by Shift + Tab: If you have many functions, you could even categorise them and put them in separate modules. If you take this approach, you may even want to create a folder containing all the modules. While putting stable code into a module makes sense, I think it is fine to keep experimental functions in your Notebook. If you implement this tip, you will soon notice that your Notebook starts to look less cluttered and more organised. In addition, using functions will make you less prone to silly copy-paste mistakes. Unit testing was not covered in this post as it deserves its own section. If you would like to learn about unit testing for Data Science, this PyData talk may be a good starting point.
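A minimal sketch of what this tip might look like in practice (the function name, its contents, and the docstring style here are illustrative assumptions, not code taken from the post):

```python
# helpers.py -- a small module of reusable, documented functions (hypothetical example)
import pandas as pd


def summarise_missing(df: pd.DataFrame) -> pd.Series:
    """Return the fraction of missing values per column, sorted in descending order.

    Parameters
    ----------
    df : pd.DataFrame
        The data to profile.

    Returns
    -------
    pd.Series
        Fraction of missing values for each column.
    """
    return df.isna().mean().sort_values(ascending=False)
```

```python
# In a notebook cell: import the module and call the function.
# Typing helpers.summarise_missing( and pressing Shift + Tab shows the docstring above.
import pandas as pd
import helpers

df = pd.DataFrame({"age": [25, None, 40], "city": ["London", None, "Paris"]})
helpers.summarise_missing(df)
```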
https://towardsdatascience.com/organise-your-jupyter-notebook-with-these-tips-d164d5dcd51f
['Zolzaya Luvsandorj']
2020-11-02 10:35:31.120000+00:00
['Python', 'Data Science', 'Jupyter Notebook', 'Data', 'Workflow']
Title Organise Jupyter Notebook tipsContent 📍 Tip 4 Create userdefined function save module may heard DRY principle Don’t Repeat haven’t heard software engineering principle “not duplicating piece knowledge within system” One interpretation principle Data Science create function abstract away reoccurring task reduce copy pasting even use class make sense case suggested step tip Create function Ensure function intuitive name Document function docstring Ideally Unit test function Save function py file py file referred module Import module Notebook access function Use function Notebook Let’s try contextualise step example Here’s simple way ass function intuitive name think colleague hasn’t seen function could roughly guess function looking name right track documenting function adapted different style way made sense example serve working example function Data Science highly encourage check official guide learn best practice naming documentation convention style guide type hint even browse module wellestablished package’s Github repository get inspiration saved function helperspy file imported helper module way Python module mean py file Notebook import helper access documentation writing function name followed Shift Tab many function could even categorise put separate module take approach may even want create folder containing module putting stable code module make sense think fine keep experimental function Notebook implement tip soon notice Notebook start look le cluttered organised addition using function make le prone silly copy paste mistake Unit testing covered post deserves section would like learn unit testing Data Science PyData talk may good starting pointTags Python Data Science Jupyter Notebook Data Workflow
3,729
Music generation using Deep Learning
“If I had my life to live over again, I would have made a rule to read some poetry and listen to some music at least once every week.” ― Charles Darwin Life exists on the sharp-edged wire of the Guitar. Once you jump, its echoes can be heard with immense intangible pleasure. Let's explore this intangible pleasure… Music is nothing but a sequence of notes (events). Here the input to the model is a sequence of notes. Some examples of music generated using RNNs are shown below. Music Representation: sheet-music ABC-notation: it is a sequence of characters, which is very simple for a Neural Network to train on. https://en.wikipedia.org/wiki/ABC_notation MIDI: https://towardsdatascience.com/how-to-generate-music-using-a-lstm-neural-network-in-keras-68786834d4c5 mp3 - stores only the audio file. Char-RNN Here I'm using a char-RNN structure (Many-to-Many RNN) where one output corresponds to each input (input Ci -> output C(i+1)) at each time step (cell). It can have multiple hidden layers (multiple LSTM layers). Visualizing the predictions and the “neuron” firings in the RNN Under every character, we visualize (in red) the top 5 guesses that the model assigns for the next character. The guesses are colored by their probability (so dark red = judged as very likely, white = not very likely). The input character sequence (blue/green) is colored based on the firing of a randomly chosen neuron in the hidden representation of the RNN. Think about it as green = very excited and blue = not very excited. Process: Obtaining data and preprocessing it (generating batches for SGD) to feed into the char-RNN. Please follow the below link for more datasets. Here I used only the Jigs (340 tunes) dataset in ABC format. The dataset will be fed into RNN training using a batch size of 16. Here two LSTM cells are shown for each input. The input X0 goes into all LSTM cells in the first input layer. You will get an output (h0), and information is sent to the next time step layer. All outputs at time step one, LSTM_t1_1 and LSTM_t1_2, are connected to a dense layer whose output is h0. The dense layer at time-step one is called a time distributed dense layer. Similarly for the next time step. return_sequences=True in Keras is used when you want to generate an output at each input in the timestamp sequence. For every input, we need a sequence of outputs. The same input will go to every cell and generate an output at every cell in one layer. At every time step (i), we will get a vector of outputs (256 in the given problem). 2. Time distributed dense layer. Please follow the above discussion for a better understanding. At every timestep, it will take all LSTM outputs and construct a dense layer of size 86. Here 86 is the number of unique characters in the whole vocabulary. 3. With stateful=True, the last state for each sample at index i in a batch will be used as the initial state for the sample at index i in the following batch. It is used when you want to connect one batch to the next, so that the second batch continues from where the first batch left off. In the case of stateful=False, each batch starts with a zero state at the first time step layer. Model Architecture and Training: It is a multi-class classification problem in which, given an input character, the model gives an output that is any one of the total number of characters. The trained model generates a probability over the 86 characters after every input character; based on this probability, it decides the final output character. Next, we will feed C(i+1) to the model, and it will generate the character C(i+2). This will continue until all batches of characters from the whole data have been fed.
Output: Open the following link and paste your generated music in the given space in order to play it. For Tabla music: If you are able to encode each sequence as characters, then you can use the above char-RNN model. Please read the following blog for a detailed understanding. MIDI music generation: Here we will use the Music21 Python library to read a MIDI file and convert it into a sequence of events. Please read the following blog for a detailed understanding. Models other than char-RNN (very recent blog): It is a survey blog that covers all the Neural Network based models apart from the char-RNN model. Please follow it if you want to explore. Google project on generating music: Based on TensorFlow and LSTM, a project by Google researchers. Reference: Google Images (for images); the rest of the links are given in their respective sections. ========Thanks(Love to hear from your side)========= Find the detailed code on my GitHub account…
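For readers who want a concrete starting point, here is a minimal Keras sketch of the stateful many-to-many char-RNN described above. The batch size of 16, the 256-unit LSTM outputs, return_sequences=True, stateful=True, the time distributed dense layer and the 86-character vocabulary come from the post; the sequence length, the Embedding layer and the use of exactly two LSTM layers are assumptions, and the author's actual code lives in their GitHub repository.

```python
# A minimal sketch of the stateful char-RNN described above (tf.keras).
# Assumed details: sequence length of 64, an Embedding input layer, two LSTM layers.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, TimeDistributed, Dense

BATCH_SIZE = 16   # from the post
SEQ_LEN = 64      # assumed unroll length per batch
VOCAB_SIZE = 86   # unique characters in the ABC corpus, from the post

model = Sequential([
    # A fixed batch size is required for stateful RNNs.
    Embedding(VOCAB_SIZE, 256, batch_input_shape=(BATCH_SIZE, SEQ_LEN)),
    # Emit an output at every time step and carry hidden state across batches.
    LSTM(256, return_sequences=True, stateful=True),
    LSTM(256, return_sequences=True, stateful=True),
    # A time distributed dense layer predicting one of the 86 characters per step.
    TimeDistributed(Dense(VOCAB_SIZE, activation="softmax")),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")

# Dummy batch to illustrate the expected shapes: the target is the input
# sequence shifted by one character (C(i) -> C(i+1)).
x = np.random.randint(0, VOCAB_SIZE, size=(BATCH_SIZE, SEQ_LEN))
y = np.random.randint(0, VOCAB_SIZE, size=(BATCH_SIZE, SEQ_LEN))
model.train_on_batch(x, y)
```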
https://medium.com/analytics-vidhya/music-generation-using-deep-learning-a2b2848ab177
['Rana Singh']
2019-12-16 04:45:40.965000+00:00
['Deep Learning', 'Artificial Intelligence', 'Mathematics', 'Music', 'Machine Learning']
Title Music generation using Deep LearningContent “If life live would made rule read poetry listen music least every week”― Charles Darwin Life exists sharp edged wire Guitar jump it’s echo heard immense intangible pleasure Lets explore intangible pleasure… Music nothing sequence nodesevents input model sequence node music generated example using RNNs shown Music Representation sheetmusic ABCnotation sequence character simple Neural Network train httpsenwikipediaorgwikiABCnotation MIDI httpstowardsdatasciencecomhowtogeneratemusicusingalstmneuralnetworkinkeras68786834d4c5 mp3 store audio file CharRNN Im using charRNN structureManyMany RNN one output corresponds inputinput Ci output Ci1 time stepcell multiple hidden layersmultiple LSTM layer Visualizing prediction “neuron” firing RNN every character visualize red top 5 guess model assigns next character guess colored probability dark red judged likely white likely input character sequence bluegreen colored based firing randomly chosen neuron hidden representation RNN Think green excited blue excited Process Obtaining data preprocessinggenerating batchsgdto feed charRNN Please follow link datasets used Jigs 340 tune dataset ABCformat dataset fed RNN training using batch size 16 two LSTM cell represents input input X0 go LSTM cell first input layer get outputh0 information send next time step layer output time step one LSTMt11 LSTMt12 connected dense layer whose output h0 dense layer timestep one called time distributed dense layer Similarly next time step Return sequenceTrue Keras used case want generate output input timestamp sequence every input need sequence output input go every cell generate output every cell one layer Every time stepi get vector output256 given problem 2 Time distributed dense layer Please follow discussion better understanding every timestep take LSTM output construct dense layer size 86 86 number unique character whole vocabulary 3 StatefulTrue last state sample index batch used initial state sample index following batch used case want connect one batch second batch input second batch output first batch case statefulfalse batch zero input first time step layer Model Architecture Training multiclass classification given input give output anyone total number character training model generates 86 character every input character based probability decide final output character Next feed Ci1 model generate Ci2 character continue batch character feed whole data Output Open following link Paste generated music given space order play Tabla music able change sequence character use charRNN model Please read following blog detailed understanding MIDI music generation use Music21 python library read MIDI file able convert sequence event Please read following blog detailed understanding Models charRNNVery recent blog survey blog model apart charRNN model based Neural Network Please follow want explore Google project generating music Based Tensorflow LSTM project google researcher Reference Google Imagefor image rest link given respective section ThanksLove hear side Find detail code GitHub account…Tags Deep Learning Artificial Intelligence Mathematics Music Machine Learning
3,730
The Rise and Fall of “Social” Media
The Rise and Fall of “Social” Media Broken promises of an ever-connected utopia we don’t even want Thomas Cole, The Course of Empire — Destruction (1836) Imagine a future in which your Instagram, Twitter, and Facebook are so polluted with content by corporate interests that it’s nearly impossible to excavate the content by your friends and family: the real people with whom we were promised social media would make it easier to “stay connected.” This future may not sound too distant. This is probably because, by now, most of us accept that social media is chiefly an avenue for big business marketing as easily as if it were an inborn, unquestionable fact — as though social media in its current form had been bestowed upon us by a divine creator. Just as we all accept that politicians lie yet still we must elect them, we all accept that social media is first and foremost an economic tool that rewards branding and monetization, profits off its users’ time, attention, and personal information — and yet we must participate. The alternative is virtual invisibility, real-world obscurity. But this reality is a far cry from the new dawn of democracy and community envisioned by the hippies and psychedelic cyberpunks who pioneered the World Wide Web; as author and theorist Douglas Rushkoff put it, “The folks who really saw in the internet a way to turn on everybody. We couldn’t get everybody to take acid…but get everybody on the internet, and they will have that all-is-one, connected experience.” The Course of Empire — The Arcadian State (1834) In the days before the zenith of tech billionaires, social networking was still abuzz with that possibility. But those of us who jumped on the first generation of Myspace in 2003 knew that the intrigue of social media lie in the freedom to forge our own identities. Myspace was successful not as a means of staying in touch with family and up to date on friends, but as a platform for autonomy and self-expression. Connection in the early days of social media meant cultivating and asserting (or escaping) our Selves in a virtual world. Mark Zuckerberg, too, likes to claim something along the lines of “connection” as the motivation behind his company. But by the time Facebook was opened to the public around 2006, the Internet was already undergoing a functional repurposing by tech entrepreneurs and investors — from a dream of unity and self-expression to a vehicle for exponential financial gain. This repurposing had been blueprinted in an influential article published in Wired magazine in 1997. The article offered convincing foresight into the monetary potential of the Internet and ubiquitous personal computers: We are watching the beginnings of a global economic boom on a scale never experienced before. We have entered a period of sustained growth that could eventually double the world’s economy every dozen years.…[Historians] will chronicle the 40-year period from 1980 to 2020 as the key years of a remarkable transformation. Notably missing from this forecast are glimpses of the interconnected cooperative envisioned by the Internet’s renegade creators. But Wired was right about the economic implications of digital connectedness. As predicted, social networking sites became, decisively, the new frontier for ad agencies and marketers. For three years, Myspace was the world’s most visited social network. Until 2008, when Facebook eclipsed Myspace in the same category. 
In 2009, MediaPost noted that “The shift reflects the emergence of Facebook this year as the premiere social networking property for marketers.” At the time, this was an unprecedented accomplishment. As noted by Fortune, before Facebook, “the notion of social networking ads as big business was a fantasy.” And while Myspace’s popularity contracted, Facebook’s viewership and ad revenue ballooned. In 2010, Facebook accounted for a quarter of all U.S. ad dollars, “gaining market share at the expense of MySpace,” who, as we know, never recovered. The Course of Empire — The Consummation of Empire (1836) In retrospect, it’s no surprise that entrepreneurs and opportunists looked on at the hippie prospect of digital connectedness with dollar signs in their eyes. After all, in a capitalist society, individual people ever-connected on a tangible plain could be described one way as a breeding ground for exploitation. If it’s any indication: This quarter, Facebook’s stock will reach a record high, despite the unprecedentedly large $3 billion fine Facebook is awaiting from the FTC for dodging around in the shadows and egregiously failing its users for the umpteenth time. The fine made waves more because it is a drop in the bucket for the $550 billion behemoth than because it is a record-setting penalty. Facebook’s grotesque net worth and reckless ethics just show that its purpose has always been profit at all costs — and so the groundwork was laid for all social media to come after it. Zuckerberg may tout the lofty ideal of “connected experience” as the motivation behind his juggernaut corporation, but he definitely wasn’t going after the “all-is-one” effect of an acid trip. After all, whereas psychedelic drugs engender in us a feeling of connection with nature, our own spirit, and our place as human beings in the natural order, social media simply plugs us directly into the motherboard of a digital commercial superpower — whose scope and influence rival that of any government, and whose primary objective is to engorge us with branded content, all under the pretense of “connection.” So to whom or what, exactly, are we connected? I think few would say each other. It’s no secret that despite plans for unity, and business spiels about connectedness, social media has made us individually feel more isolated from one another and lonelier than ever. It’s old news that social interaction on “social” media is fickle at best, lethally cruel at worst. And with commercial content sitting so closely to our own on the digital landscape, the line between where each begins and ends is becoming increasingly blurry. Perhaps, if the Internet is indeed like an acid trip, then the state of social media today is something resembling MKUltra: a tool of the people harnessed by the powerful in order to control the people in turn. For this reason, to lament losing so-called “real” content by our friends and families on our feeds — content that has been mathematically buried by evolving algorithms — is to miss the point. Social media is not and never has been supplemental to human connection, and certainly does not replace it. The Internet by way of social media was never going to cultivate true oneness, despite the aims of its psychedelic pioneers. Social media cannot provide the spiritual enrichment of a well-received acid trip — not in an environment ruled by profit and power. And it’s this spiritual enrichment that’s needed for humanity to truly connect. 
I think the necessary question is not why do we put up with it — social media does have aesthetic and cultural value (think a living, breathing fashion magazine). But with its pervasive influence, its effects are insidious. The question is how do we begin to reconstruct our collective spirit? How do we, as a society, free ourselves from the vice grip of commercialism, branding, and precarious economic growth? So that we may get back in sync with the natural world, take stock of the damage, and salvage what’s left. So that we may truly connect with one another, and with our universal values. The Course of Empire — Desolation (1836) Increasingly lately, I’m reminded of Plato’s story of Atlantis: an advanced and prosperous city, whose people, once generous and good-hearted, became possessed by power and excessive wealth. As punishment for their spiraling greed, the gods assailed the island with earthquakes and floods, and the city and all its riches sank to the bottom of the sea. There is work to be done. Once we collectively trace our steps and acknowledge that as a society, we’ve been following an ouroboros path of greed and spiritual corrosion — not connectedness with one another, as we’ve been made to believe — only then can we get our bearings, and decide with a clear head where we want to go from here. Yes, social media has been usurped by corporate powers. But even in its purest incarnation, is it really what we need? Or is it time for something else, something better.
https://madelinemcgary.medium.com/the-rise-and-fall-of-social-media-f1b411aec4f6
['Madeline Mcgary']
2020-10-15 02:18:31.489000+00:00
['Society', 'Humanity', 'Facebook', 'Tech', 'Social Media']
Title Rise Fall “Social” MediaContent Rise Fall “Social” Media Broken promise everconnected utopia don’t even want Thomas Cole Course Empire — Destruction 1836 Imagine future Instagram Twitter Facebook polluted content corporate interest it’s nearly impossible excavate content friend family real people promised social medium would make easier “stay connected” future may sound distant probably u accept social medium chiefly avenue big business marketing easily inborn unquestionable fact — though social medium current form bestowed upon u divine creator accept politician lie yet still must elect accept social medium first foremost economic tool reward branding monetization profit users’ time attention personal information — yet must participate alternative virtual invisibility realworld obscurity reality far cry new dawn democracy community envisioned hippy psychedelic cyberpunk pioneered World Wide Web author theorist Douglas Rushkoff put “The folk really saw internet way turn everybody couldn’t get everybody take acid…but get everybody internet allisone connected experience” Course Empire — Arcadian State 1834 day zenith tech billionaire social networking still abuzz possibility u jumped first generation Myspace 2003 knew intrigue social medium lie freedom forge identity Myspace successful mean staying touch family date friend platform autonomy selfexpression Connection early day social medium meant cultivating asserting escaping Selves virtual world Mark Zuckerberg like claim something along line “connection” motivation behind company time Facebook opened public around 2006 Internet already undergoing functional repurposing tech entrepreneur investor — dream unity selfexpression vehicle exponential financial gain repurposing blueprinted influential article published Wired magazine 1997 article offered convincing foresight monetary potential Internet ubiquitous personal computer watching beginning global economic boom scale never experienced entered period sustained growth could eventually double world’s economy every dozen years…Historians chronicle 40year period 1980 2020 key year remarkable transformation Notably missing forecast glimpse interconnected cooperative envisioned Internet’s renegade creator Wired right economic implication digital connectedness predicted social networking site became decisively new frontier ad agency marketer three year Myspace world’s visited social network 2008 Facebook eclipsed Myspace category 2009 MediaPost noted “The shift reflects emergence Facebook year premiere social networking property marketers” time unprecedented accomplishment noted Fortune Facebook “the notion social networking ad big business fantasy” Myspace’s popularity contracted Facebook’s viewership ad revenue ballooned 2010 Facebook accounted quarter US ad dollar “gaining market share expense MySpace” know never recovered Course Empire — Consummation Empire 1836 retrospect it’s surprise entrepreneur opportunist looked hippie prospect digital connectedness dollar sign eye capitalist society individual people everconnected tangible plain could described one way breeding ground exploitation it’s indication quarter Facebook’s stock reach record high despite unprecedentedly large 3 billion fine Facebook awaiting FTC dodging around shadow egregiously failing user umpteenth time fine made wave drop bucket 550 billion behemoth recordsetting penalty Facebook’s grotesque net worth reckless ethic show purpose always profit cost — groundwork laid social medium come Zuckerberg may tout lofty ideal 
“connected experience” motivation behind juggernaut corporation definitely wasn’t going “allisone” effect acid trip whereas psychedelic drug engender u feeling connection nature spirit place human being natural order social medium simply plug u directly motherboard digital commercial superpower — whose scope influence rival government whose primary objective engorge u branded content pretense “connection” exactly connected think would say It’s secret despite plan unity business spiel connectedness social medium made u individually feel isolated one another lonelier ever It’s old news social interaction “social” medium fickle best lethally cruel worst commercial content sitting closely digital landscape line begin end becoming increasingly blurry Perhaps Internet indeed like acid trip state social medium today something resembling MKUltra tool people harnessed powerful order control people turn reason lament losing socalled “real” content friend family feed — content mathematically buried evolving algorithm — miss point Social medium never supplemental human connection certainly replace Internet way social medium never going cultivate true oneness despite aim psychedelic pioneer Social medium cannot provide spiritual enrichment wellreceived acid trip — environment ruled profit power it’s spiritual enrichment that’s needed humanity truly connect think necessary question put — social medium aesthetic cultural value think living breathing fashion magazine pervasive influence effect insidious question begin reconstruct collective spirit society free vice grip commercialism branding precarious economic growth may get back sync natural world take stock damage salvage what’s left may truly connect one another universal value Course Empire — Desolation 1836 Increasingly lately I’m reminded Plato’s story Atlantis advanced prosperous city whose people generous goodhearted became possessed power excessive wealth punishment spiraling greed god assailed island earthquake flood city rich sank bottom sea work done collectively trace step acknowledge society we’ve following ouroboros path greed spiritual corrosion — connectedness one another we’ve made believe — get bearing decide clear head want go Yes social medium usurped corporate power even purest incarnation really need time something else something betterTags Society Humanity Facebook Tech Social Media
3,731
Break Your App into Composable Modules
Over the years, my company’s main application has been under continuous development, evolving and gaining many features. Naturally, as time went by, our engineering team developed many frameworks and utilities that were needed for the app. For example — performance tracking, background worker system, analytics, storage access and even our own ORM, and much more. This application is designed as a monolith. All of those frameworks are built in the same codebase as the app, and in many cases they are even coupled to the application’s domain-specific code. As our engineering team (and company as a whole) matured, we started to get requirements for new applications — it could be an in-house dashboards app or a deployment-management system, but also real new products that required actual production systems to support them. We started designing those systems, and guess what — we found out that we needed storage, analytics, performance tracking and all that stuff here as well… This is where things get interesting. What is the biggest benefit of a system that has been around for a few years? It has seen production. It met real users. With traffic. It was under load. It went through a lot of optimizations and improvements over the years. Over time, systems as a whole become more robust, and so do their frameworks. At this point we started to think differently. We can no longer write frameworks that are bound to a certain application. In order to really scale things out, we need to build reusable modules and libraries, which will provide the building blocks for all our applications. We’ve changed our mind-set to module thinking The plan was to break our main app’s frameworks into separate and composable modules that have great APIs. The process at a high level is: Identify and map. It turns out it’s not always simple to think about which frameworks you have. You may have the obvious ones like I mentioned earlier. But you may also discover along the way that you once wrote that cool utility class for handling REST API calls to 3rd parties, which is also usable for the next application you are building. Extract. Write each framework as a separate project with its own API. The idea is that the next application we are going to build can be easily based on a selected group of components (you’re not always going to need them all). A recommended way to do this is to first extract it to a separate package/assembly/jar in the same project, but this time without a direct dependency on the application. This way you can stay in context while refactoring, and at the same time decouple things relatively quickly. Use. After extracting, try integrating it back into the application - this time as a 3rd party library.
You will be amazed how bad your API is when you are the user :) Fix your API. Improve the API until you’re satisfied. Document. Now that you have an awesome API, go ahead and add a README.md file to the repository, so that the module will be easy to integrate and use in future applications. Technology and development process Each module is written in its own project, and has its own tests, with its own CI process. The project is visible to everyone in R&D in a public repo in our internal Bitbucket. We use a package manager in order to manage versioning and dependencies between modules. Each push to master automatically releases a new version of the module to the package manager, and each consumer can decide whether and when he wants to upgrade. The key here is an easy and frictionless development process. If it’s not easy, it won’t happen. From this point, it will be very easy to push changes and upgrade the modules. The benefits of composable modules architecture Reduced development time Reduced creation time of new applications by creating a unified toolbox, from which the modules can be combined to meet the requirements of each app. Reduced maintenance by writing once and using everywhere, instead of copy/pasting around. The bug fixes and upgrades are applied in one place. Reduced boilerplate by creating better and more understandable APIs. Prior to extraction, some of the modules were coupled to the specific app domain. Extracting the module forced us to think about improving the API usage to be more reusable, and to uncouple the module from its domain. R&D growth and organization memory Many of these modules were written a long time ago, by people who have already left the company. Addressing those modules forces us to get to know them and have a deep understanding of them. This enables us to master our tools (which is something I believe in), and take them for granted, and this will help us to grow as a team. Increased quality of frameworks and apps Since the modules exist and are maintained in one place, and are widely used across different apps, they automatically become more mature and stable, since they have more clients and receive more feedback. They also have their own tests and CI, which is a major win. Some ‘do’s and ‘don’ts’ we’ve learned along the way: Have tests and CI for all modules. Obviously. No exceptions. Don’t extract a module for the sake of extraction. ROI. Extract only if there’s another consumer for this module. Don’t let the same engineer extract all components. This won’t scale, we want to share as much knowledge as we can. Haven’t you heard about the bus factor? Each component should be handled by more than one engineer. Again, the bus factor. Don’t create deep dependencies between your modules. Create a flat hierarchy of modules in order to avoid dependency hell. Here’s a great post about this subject. What’s next for us? In the future, we would like to have a true owners/contributors model.
In this model, each module’s owner(s) will be responsible for accepting PRs for their module, and will be responsible for all aspects of the project, such as creating and leading the vision and giving an introduction about it to new developers. A contributor can actually be anyone from the R&D team. This will be a true open source mindset. This model can bring a lot of benefits, not only technical, but this post is getting too long so maybe next time ;) Better visibility We need to find a way to expose a “catalog” of the different kinds of modules. This will create a very dev-friendly ecosystem for our project. Imagine a web-based portal that the engineers in your team can just browse through, see what possibilities are there, who the owner of each module is, and which applications are using a certain module. “You said open source — why not do the real deal?” Maybe we will… Stay tuned :) Closing Breaking our app into composable modules helped us in many aspects. We reduced development time by introducing a way to select the building blocks of an app and start rolling, reducing boilerplate, and focusing our time and effort on domain logic and bringing business value. We also got to know our frameworks better by going over the codebase and getting our hands dirty. We made them better by creating great APIs. The beautiful thing is that new frameworks are now written like this from Day One :) Cheers.
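To make the idea of "extract, then fix the API" more concrete, here is a small illustrative sketch. The post itself shows no code and its stack appears to be .NET/JVM ("package/assembly/jar"), so the module name, the function, and the choice of Python here are purely hypothetical; the point is only that the extracted module exposes a clean API with no reference to any one application's domain.

```python
# analytics.py -- a standalone, reusable module with its own small API (hypothetical).
# Nothing in here knows about orders, users, or any other app-specific domain:
# the transport for events is injected by the consuming application.
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional


@dataclass
class AnalyticsClient:
    """Thin, domain-agnostic wrapper around an event sink."""

    send: Callable[[str, Dict[str, str]], None]
    default_properties: Dict[str, str] = field(default_factory=dict)

    def track(self, event_name: str, properties: Optional[Dict[str, str]] = None) -> None:
        """Send a named event with optional properties through the injected sink."""
        merged = {**self.default_properties, **(properties or {})}
        self.send(event_name, merged)


if __name__ == "__main__":
    # Any application can consume the module as a 3rd-party package and plug in
    # its own transport (HTTP client, message queue, plain logging, ...).
    client = AnalyticsClient(send=lambda name, props: print(name, props),
                             default_properties={"app": "dashboard"})
    client.track("page_viewed", {"page": "home"})
```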
https://medium.com/sears-israel/break-your-app-into-composable-modules-8f8306235e52
['Leeran Yarhi']
2017-12-15 15:11:19.777000+00:00
['Architecture', 'Software Development', 'Tech Culture', 'Design', 'Engineering']
Title Break App Composable ModulesContent year company’s main application continuous development evolving gaining many feature Naturally time went engineering team developed many framework utility needed app example — performance tracking background worker system analytics storage access even ORM much application designed monolith framework built codebase app many case even coupled application’s domainspecific code engineering team company whole matured started get requirement new application — could inhouse dashboard app managing deployment system also real new product required actual production system support started designing system guess — found needed storage analytics performance tracking stuff well… thing get interesting biggest benefit system around year seen production met real user traffic load went lot optimization improvement year time system whole become robust framework point started think differently longer write framework bound certain application order really scale thing need build reusable module library provide building block application We’ve changed mindset module thinking plan break main app’s framework separate composable module great API process high level Identify map turn it’s always simple think framework may obvious one like mentioned earlier may also discover along way wrote cool utility class handling REST API call 3rd party also usable next application building turn it’s always simple think framework may obvious one like mentioned earlier may also discover along way wrote cool utility class handling REST API call 3rd party also usable next application building Extract Write framework separate project API idea next application going build easily based selected group component you’re always going need recommended way first extract separate packageassemblyjar project time without direct dependency application way stay context refactoring time decouple thing relatively quickly Write framework separate project API idea next application going build easily based selected group component you’re always going need recommended way first extract separate packageassemblyjar project time without direct dependency application way stay context refactoring time decouple thing relatively quickly Use extracting try integrating back application time 3rd party library amazed bad API user extracting try integrating back application time 3rd party library amazed bad API user Fix API Improve API you’re satisfied Improve API you’re satisfied Document awesome API go ahead add READMEmd file repository module easy integrate use future application Technology development process module written project test CI process project visible everyone RD public repo internal Bitbucket use package manager order manage versioning dependency module push master automatically release new version module package manager consumer decide whether want upgrade key easy frictionless development process it’s easy won’t happen point easy push change upgrade module benefit composable module architecture Reduced development time Reduced creation time new application creating unified toolbox module combined requirement app Reduced maintenance writing using everywhere instead copypasting around bug fix upgrade applied one place Reduced boilerplate creating better understandable API Prior extraction module coupled specific app domain Extracting module forced u think improving API usage reusable uncouple module domain RD growth organization memory Many module written long time ago people already left company 
Addressing module force u get know deep understanding enables u master tool something believe take granted help u grow team Increased quality framework apps Since module exists maintained one place widely used across different apps automatically become mature stable since client receive feedback also test CI major win ‘do’s ‘don’ts’ we’ve learned along way test CI module Obviously exception Obviously exception Don’t extract module sake extraction ROI Extract there’s another consumer module ROI Extract there’s another consumer module Don’t let engineer extract component won’t scale want share much knowledge Haven’t heard bus factor won’t scale want share much knowledge Haven’t heard bus factor component handled one engineer bus factor bus factor Don’t create deep dependency module Create flat hierarchy module order avoid dependency hell Here’s great post subject What’s next u future would like true ownerscontributors model model module owner responsible accepting PRs module responsible aspect project creating leading vision giving introduction new developer contributor actually anyone RD team true open source mindset model bring lot benefit technical post getting long maybe next time Better visibility need find way expose “catalog” different kind module create devfriendly ecosystem project Imagine webbased portal engineer team browse see possibility owner module application using certain module “You said open source — real deal” Maybe will… Stay tuned Closing Breaking app composable module helped u many aspect reduced development time introducing way select building block app start rolling reducing boilerplate focus time effort domain logic bringing business value also got know framework better going codebase getting hand dirty made better creating great APIs beautiful thing new framework written like Day One CheersTags Architecture Software Development Tech Culture Design Engineering
3,732
This fascinating photo is of a
This fascinating photo is of a suit called The Wildman. No one knows the purpose of this 18th Century suit of armor. It may have been used for bear hunting or worse — bear baiting. It could also be a costume for a festival or a piece of folk art. As of today, it’s displayed in The Menil Collection in Texas, along with other interesting historical artifacts. As far as we know, it’s the only suit of its kind. One thing is for sure. Whoever wore this was not going to be getting a lot of hugs.
https://medium.com/rule-of-one/no-one-knows-the-purpose-of-this-suit-18-century-of-armor-e37c00a5294
['Toni Tails']
2020-12-29 13:22:20.269000+00:00
['Culture', 'History', 'Creativity', 'Art', 'Productivity']
Title fascinating photo aContent fascinating photo suit called Wildman one know purpose 18th Century suit armor may used bear hunting worse — bear baiting could also costume festival piece folk art today it’s displayed Menil Collection Texas along interesting historical artifact far know it’s suit it’s kind One thing sure Whoever wore going getting lot hugsTags Culture History Creativity Art Productivity
3,733
Attached To The Familiar
Rumi says that if a drop of the wine of vision could rinse our eyes, everywhere we looked, we would weep with wonder. Sadly, many of us are stuck in the confines of our culture and beliefs, and are unaware of the splendor that lies beyond what we know, beyond our perceptions and attachments. “One way to expand our awareness, is to travel and experience a variety of unfamiliar cultures […] Another way to embrace the mystery and beauty of life is to learn the art of letting go of all that stands in the way of our inner development: for example, a belief that does not serve the common good, an argument that serves no purpose except saving face, a relationship that is toxic, a grudge that depletes our being.” (Sacred Laughter of The Sufis) What are your thoughts on this matter? Leave some feedback if you feel like it or submit something in response to this article and tag it under “storytelling”. Take care! See you next Thursday with brand new challenges.
https://medium.com/know-thyself-heal-thyself/attached-to-the-familiar-70944e702ea0
['𝘋𝘪𝘢𝘯𝘢 𝘊.']
2020-12-17 09:59:26.962000+00:00
['Storytelling', 'Short Story', 'Parable', 'Energy', 'Creativity']
Title Attached FamiliarContent Rumi say drop wine vision could rinse eye everywhere looked would weep wonder Sadly many u stuck confines culture belief unaware splendor lie beyond know beyond perception attachment “One way expand awareness travel experience variety unfamiliar culture … Another way embrace mystery beauty life learn art letting go stand way inner development example belief serve common good argument serf purpose except saving face relationship toxic grudge depletes being” Sacred Laughter Sufis thought matter Leave feedback feel like submit something response article tag “storytelling” Take care See next Thursday brand new challengesTags Storytelling Short Story Parable Energy Creativity
3,734
The case of the missing deno_modules
When running code in Deno with external dependencies for the first time, you might have noticed some downloading of packages taking place. Also, if the file was a TypeScript file, you might have seen some indication of TypeScript to JavaScript compilation taking place. However, the second time you ran the file, none of that took place and the code just ran immediately. You start to look around for any created folders or files that might be responsible for the immediate execution of the code. But you find nothing. Where are my deno_modules? Let’s shed some light on the mystery. Enter DENO_DIR. By default, DENO_DIR is located in $HOME/.deno. However, since it is an ENV variable, one could customise this. I also found cached Deno files in: $HOME/Library/Caches/deno DENO_DIR is structured in the following way: The case of the missing deno_modules is solved. 🕵️ Hope you learned something new about Deno reading this. Happy Hacking!
https://medium.com/dev-genius/the-case-of-the-missing-deno-modules-8484ac6d529
['Daniel Bark']
2020-06-19 13:38:44.537000+00:00
['Nodejs', 'JavaScript', 'Software Development', 'Typescript', 'Deno']
Title case missing denomodulesContent running code Deno external dependency first time might noticed downloading package taking place Also file Typescript file might seen indication Typescript Javascript compilation taking place However second time ran file none took place code ran immediately start look around created folder file might responsible immediate execution code find nothing denomodules Let’s shed light mystery Enter DENODIR default DENODIR located HOMEdeno However since ENV variable one could customise also found cached Deno file HOMELibraryCachesdeno DENODIR structured following way ️ case missing denomodules solved🕵️‍ Hope learned something new Deno reading Happy HackingTags Nodejs JavaScript Software Development Typescript Deno
3,735
How The Indian Government Failed India in The Crises
How The Indian Government Failed India in The Crises #PMcares doesn’t mean the PM cares Photo by Prashanth Pinha on Unsplash ‘जो जहाँ है वहीं रुक जाये, 21 दिन तक’, which translates to English as “stop, wherever you are, just stop, for 21 days”. These words were spoken by the Hon’ble Prime Minister of India, Mr. Narendra Modi. It’s interesting to note the time: 8:00 PM. He requested the people of India to lock themselves inside their homes for 21 days, suddenly, with no prior notice. But what about the ones who were homeless? Lockdowns, which were imposed in most of the countries of the world to fight the COVID crises, took effect in the country with the second largest population in the world on 25 March 2020. India, which is still a developing nation, has around 7% of its population living below the poverty line. That approximates to nearly 100 million people earning less than $40 a month. Most of these people are daily wage workers, which means they get paid every day for the work they do and have no job security. The beginning On the night of 24 March, the PM of India announced that the country would witness a nation-wide lockdown for a period of three weeks. The lockdown, announced at eight in the night, would come into effect from twelve. It came as a shock to the Indian population, as there was no prior notice of it. People stuck in different cities for work purposes were left unconsidered. While it was easier for the middle and upper classes, who had a house, money in their accounts and food to fill their bellies, to survive the lockdown, it turned out to be a mess for the lower classes, who had no job security and were mostly daily wage workers such as labourers. They were left with absolutely nothing in their hands. Their voices were suppressed and they were left to die on the streets. If only they had been given some time to return to their respective houses, in different states and cities, miles away from where they worked, some of them would have survived. In the crises After the announcement of the sudden pause in the lives of the people, the whole nation lost its peace. The media glorified the Government’s decision, stating “prevention is better than cure”. Indeed, the saying is true, but how does it justify creating another disaster to suppress one? Then, the Government stated that it had taken a very wise decision by announcing the sudden lockdown. Ironically, India has become the 2nd worst country hit by the crises, which shows how wise the decision was. Absolutely, a lockdown was needed, but not at the cost of the lives of thousands of people, who died hungry, homeless, and hopeless. It could have been issued with prior notice to give everyone sufficient time to arrange their requirements for a period of 21 days, which was further extended by another 3 weeks. The Government failed miserably, addressed unimportant problems and glorified its decisions with illogical justifications. Some of the evident disasters that took place and went unheard were as follows: 1. Frontline workers’ safety left unnoticed The nation witnessed incidents such as the lighting of candles and the clapping of hands and banging of utensils in honour of the people working through the pandemic, such as doctors, nurses, policemen, and many others who worked while the whole world was paused. But they got nothing beyond the respect shown in just the two incidents mentioned above. Most hospitals were not provided with PPE kits, masks, or gloves for doctors, nurses and staff members.
An insufficient number of beds were present and no proper arrangements. The Government only focused on its own promotions, hiding facts and statistics from the people of India. They were told that everything was fine, in place and there was no need to worry. The frontline workers were left unheard in their demands to keep their safety in mind. If you moved out on the streets and visited a hospital, only then you would realize what the actual situation was. Many died trying to save the lives of others due to lack of safety arrangements made by the Government. 2. Hiding facts and statistics There are some serious issues with the statistics the Government provides to the people of India. In a recent YouTube video that I watched (shot in the crises), when a person visited the hospital to find out the situation there, he was terrified. Families receiving no further information about the patient once admitted. Doctors working without masks and kits. Death numbers were lied about. After enquiring the watchman of the hospital he said that at least 10 deaths took place each day at the hospital. Think! 10 deaths in just one hospital of one city, imagine how many cities are there in the entire country and how many hospitals. And Government provides stats stating the death rate to be 1000 deaths per day. Later the Government announced a package of worth 265 billion $ for the Indian people, of which no one received a single penny. Huge donations were made by people in the name of a fund( PMCares fund) organized to help people in the crises. But the money donated gained no update and there is no data where all the money was gone. 3. The Youth and citizens left unheard PM organizes a session known as ‘Mann ki Baat', which translates to ‘a talk from the heart’ every few days. In this session, he addresses the problems of the country and presents his views on the same. But ironically it’s not well understood whether the talk is about the topic he wants to address or is it about what the people want to be addressed? Let me explain it further, in the middle of the pandemic he talked about ‘toys’, ‘mud utensils’ and everything unrelated and insignificant as compared to the crises. While the citizens kept trending hashtags of the topics such as student exams, loss of jobs, and lack of facilities provided in the hospitals, all he could find to talk about was ‘toys’. It feels sad to know that in the world’s largest democracy, India, the people’s voice is suppressed. In the end Real stories don’t have a happy ending. And neither does this one. With thousands left to die on the roads, people were left with no better options. They walked thousands of miles crossing state boundaries, remained hungry for days, with no food and water and no money in their pockets. They walked endlessly in the hope to reach their home someday. Thousands lost their jobs and their loved ones. But even today, if you search on the internet about India’s Corona crises, you will rarely find news about the dark sides. Only, what you will see is the glorification of false facts with 0 logical reasons stated. It's important to not blindly trust what is told and showed, but take a step and find it out yourself. It’s pathetic to know the real story, cause everything that glitters is not gold.
https://medium.com/politically-speaking/how-the-indian-government-failed-india-in-the-crises-c6c2a7223cb5
['Niyati Jain']
2020-11-23 13:03:29.651000+00:00
['Coronavirus', 'World', 'Health', 'Politics', 'Leadership']
Title Indian Government Failed India CrisesContent Indian Government Failed India Crises PMcares doesn’t mean PM care Photo Prashanth Pinha Unsplash ‘जो जहाँ है वहीं रुक जाये 21 दिन तक’ translating English say “stop wherever stop 21 day word spoken Hon’ble Prime Minister India Mr Narendra Modi It’s interesting know time 800 PM requested people India lock inside home 21 day suddenly prior notice one homeless Lockdowns imposed country world fight COVID crisis took place country second largest population world 21 July 2020 Stating fact India still developing nation around 7 population living poverty line approximates around 10 million people earning le 40 month people daily wage worker mean get paid every day work job security beginning night 21st PM India announced country would witness nationwide lockdown period three week lockdown announced eight night would come effect twelve gave shock Indian population prior notice People stuck different city work purpose left unconsidered easier middle class upper class survive lockdown house money account food fill belly turned mess lower class survive job security mostly daily wage worker labourer left absolutely nothing hand voice suppressed left die street given time return respective house present different state city mile away worked would survived crisis announcement sudden pause life people whole nation lost peace medium glorified Government’s decision stating “prevention better cure” Indeed saying true justify creating another disaster suppress one Government stated taken wise decision announcing sudden lockdown Ironically India become 2nd worst country hit crisis prof wise decision Absolutely lockdown needed cost life thousand people died hunger homeless hopeless could issued prior notice give everyone sufficient time arrange requirement period 21 day extended another 3 week Government failed miserably addressed unimportant problem glorified decision illogical justification evident disaster took place left unheard follows 1 Frontline worker safety left unnoticed nation witnessed incident lighting candle clapping hand utensil respect people working pandemic doctor nurse policeman many others worked whole world paused got nothing beyond respect given two mentioned incident hospital provided PPE kit mask glove doctor nurse staff member insufficient number bed present proper arrangement Government focused promotion hiding fact statistic people India told everything fine place need worry frontline worker left unheard demand keep safety mind moved street visited hospital would realize actual situation Many died trying save life others due lack safety arrangement made Government 2 Hiding fact statistic serious issue statistic Government provides people India recent YouTube video watched shot crisis person visited hospital find situation terrified Families receiving information patient admitted Doctors working without mask kit Death number lied enquiring watchman hospital said least 10 death took place day hospital Think 10 death one hospital one city imagine many city entire country many hospital Government provides stats stating death rate 1000 death per day Later Government announced package worth 265 billion Indian people one received single penny Huge donation made people name fund PMCares fund organized help people crisis money donated gained update data money gone 3 Youth citizen left unheard PM organizes session known ‘Mann ki Baat translates ‘a talk heart’ every day session address problem country present view ironically it’s well understood 
whether talk topic want address people want addressed Let explain middle pandemic talked ‘toys’ ‘mud utensils’ everything unrelated insignificant compared crisis citizen kept trending hashtags topic student exam loss job lack facility provided hospital could find talk ‘toys’ feel sad know world’s largest democracy India people’s voice suppressed end Real story don’t happy ending neither one thousand left die road people left better option walked thousand mile crossing state boundary remained hungry day food water money pocket walked endlessly hope reach home someday Thousands lost job loved one even today search internet India’s Corona crisis rarely find news dark side see glorification false fact 0 logical reason stated important blindly trust told showed take step find It’s pathetic know real story cause everything glitter goldTags Coronavirus World Health Politics Leadership
3,736
Bernie Sanders’ Dreams Are a Movement, Not a Personality Cult
Bernie Sanders’ Dreams Are a Movement, Not a Personality Cult There is hope for the future. I love Bernie Sanders’ political ideas, and I hate that he did not win the nomination and go on to win the presidency. He is a caring, intelligent guy. He wants to serve. He eschews power. He really and truly wants a society where equality and justice mean something, the least of jobs can be life-sustaining, health care is a human right and not something to be held ransom by corporate thieves, there is affordable housing and education, corporations are denied the purchase of legislation or political offices, the wealthy pay their fair share of taxes, we begin to restore our planet’s climate, and drug laws aren’t adding to the millions incarcerated in a class war waged on the poor and minorities. His campaign is over, but his values and beliefs and the fire he ignited will burn bright into the future. One must keep in mind this is a movement, not a personality cult like Trump’s. As such, it does not need Bernie. If Bernie decides to retire, there are others who think and feel like Bernie to fill his shoes. Notably, Alexandria Ocasio-Cortez will be eligible for the Presidency in 2024. Biden better watch out. Upcoming generations of voters are realizing the current Democratic Party is the Republican Party Light. They may act a little smarter and more humane than their Republican counterparts, but they owe a monetary fealty to corporations and wealthy individuals just like Republicans do. Along with health care and climate, the elimination of the influence of money in government is the key to turning around our corrupt political system. It was one of those things Bernie was most focused on. Without eliminating this exchange of money for power, those other things are not obtainable. Who, except for a bunch of crooks, would think it was okay to legalize bribery like we’ve done in the U.S.? They’ve been doing it so long they no longer comprehend, or think about, the immorality. The pandemic has pointed out how appropriate Bernie’s political ideas are for this day and age. He may be offered the position of Secretary of Labor in the Biden administration. So don’t give up hope. Stay safe and alive and keep that fire kindled for the day it joins other small fires to become a bonfire celebrating the arrival of justice and equality to American society. Here’s to Bernie or AOC in 2024.
https://medium.com/age-of-awareness/bernie-sanders-dreams-are-a-movement-not-a-personality-cult-14942fd7a3e
['Glen Hendrix']
2020-12-13 02:46:23.198000+00:00
['Politics', 'Society', 'Life', 'Future', 'Leadership']
Title Bernie Sander’s Dreams Movement Personality CultContent Bernie Sander’s Dreams Movement Personality Cult hope future love Bernie Sander’s political idea hate win nomination go win presidency caring intelligent guy want serve eschews power really truly want society equality justice mean something least job life sustaining health care human right something held ransom corporate thief affordable housing education corporation denied purchase legislation political office wealthy pay fair share tax begin restore planet’s climate drug law aren’t adding million incarcerated class war waged poor minority campaign value belief fire ignited burn bright future One must keep mind movement personality cult like Trump’s need Bernie Bernie decides retire others think feel like Bernie fill shoe Notably Alexandria OcasioCortez eligible Presidency 2024 Biden better watch Upcoming generation voter realizing current Democratic Party Republican Party Light may act little smarter humane Republican counterpart owe monetary fealty corporation wealthy individual like Republicans Along health care climate elimination influence money government key turning around corrupt political system one thing Bernie focused Without eliminating exchange money power thing obtainable except bunch crook would think okay legalize bribery like we’ve done US They’ve long longer comprehend think immorality pandemic pointed appropriate Bernie’s political idea day age may offered position Secretary Labor Biden administration don’t give hope Stay safe alive keep fire kindled day join small fire become bonfire celebrating arrival justice equality American society Here’s Bernie AOC 2024Tags Politics Society Life Future Leadership
3,737
Google Play Music Is Dead. Long Live Spotify And Apple Music
Google Play Music Is Dead. Long Live Spotify And Apple Music As another product bites the dust, Google now runs the risk of killing its own music business Images from Google, altered We knew this was coming. Google had been planning to draw the curtain on its decade-old music and podcast streaming service for a while now. Now that the time has finally arrived, this all feels so sudden. However, it isn’t too surprising. The search engine giant has such a long history of discontinuing products that there’s a whole graveyard named after them. Yet, seeing the doom of Google Play Music really hurts. After all, only last year, it was the default music player app shipped across millions of Android devices. Today, as Google begins forcing users to switch over to the newer YouTube Music app, it's hard to digest the fact that the much-loved Google Music product is officially dead. Now, this would have been fine if YouTube Music were an equal substitute for Google Play Music. But the thing is, currently, YT Music doesn’t fill that void. Instead, it's a service that primarily focuses on boosting video viewing time rather than music playlists. This makes Google’s whole strategy of replacing a working product by shoehorning it into another one a very questionable decision. It won’t be a stretch to say that Google’s current move could inadvertently benefit its competitors even more.
https://medium.com/big-tech/google-play-music-is-dead-long-live-spotify-and-apple-music-b298228225fc
['Anupam Chugh']
2020-10-31 18:43:26.082000+00:00
['Business', 'Marketing', 'Google', 'Technology', 'Social Media']
Title Google Play Music Dead Long Live Spotify Apple MusicContent Google Play Music Dead Long Live Spotify Apple Music another product bite dust Google stand risk killing music business Images Google altered knew coming Google planning draw curtain decadeold music podcast streaming service time finally arrived feel sudden However isn’t surprising search engine giant long history discontinuing product there’s whole graveyard named Yet seeing doom Google Play Music really hurt last year default music player app shipped across million Android device Today Google begin forcing user switch newer YouTube Music app hard digest fact muchloved Google Music product officially dead would fine YouTube Music equal substitution Google Play Music thing currently YT Music doesn’t fill void Instead service primarily focus boosting video viewing time rather music playlist make Google’s whole strategy replacing working product shoehorning another one questionable decision won’t overstretch say Google’s current move could inadvertently benefit competitor even moreTags Business Marketing Google Technology Social Media
3,738
What I Told My 9-Year-Old About Coronavirus
What I Told My 9-Year-Old About Coronavirus The virus will have a lasting impact on our children’s sense of safety Photo: Tetra Images/Getty Images “I heard 20% of kids are going to die of coronavirus,” my nine-year-old daughter told me matter-of-factly last week. She’d heard that rumor from another classmate, the day before her Brooklyn school was shut down. I explained to her that no, children would not be dying of this virus — that, in fact, kids seemed to be the safest of all of us. It was just that things would be different for a while—like her school closing—out of an abundance of caution. “Okay,” she replied, “I thought that sounded wrong.” Then she got back to asking about playing Minecraft. I’m glad that Layla seemed reassured, but it was a reminder that as adults are panicking, our children are listening. Closely. I’ve seen lots of advice about how to entertain and teach children in the event of long school closures — how we’re supposed to keep to a schedule and maintain normalcy and boundaries. But I haven’t heard any advice on how to explain — to children who are old enough to understand that something is very wrong — what exactly is happening to the world right now. Especially when we know so little ourselves. As adults are panicking, our children are listening. Closely. I can reassure my daughter that she likely won’t get ill because coronavirus is most dangerous for elderly and sick people, but that doesn’t make her feel any better about her grandparents. I can tell her that by washing our hands and walking instead of taking the subway we’re avoiding germs, but she can tell that the virus must be pretty serious for us to be taking extra measures like this. “We don’t walk to school to not get the flu,” she said. She sees the lines at the grocery stores. She notices the people in masks and rubber gloves as we walk around Brooklyn. She even knows I bought a mask for her; one in a flower print that can be readjusted for small faces. I’ve told her as close to the truth as I can: That lots of people are getting sick, and though she’s not in danger, lots of other people are, and we’re nervous that there won’t be enough doctors or rooms in the hospitals. She seems to be taking it all in stride, aside from that initial fear brought on by her misinformed classmate. She’s glad to be having playdates with friends, not knowing how parents are scrambling in the background to make sure all our kids have things to do all day. She’s even happy that the neighborhood playground is mostly empty: “The tire swing is never free!” And so she’s doing all right. What really worries me, though, is not how she will handle the next few weeks; it’s how she will live through the next few years. What eats at me as a parent is not knowing if this is simply her new normal. If she’ll never have the relative health and environmental stability I grew up with. If the idea of going to a concert without a mask on will seem like a fantasy to her. If learning in a classroom alongside her friends and peers will be seen as a privilege rather than a given. I can tell her that the coronavirus will pass and that she will be fine. I feel confident that’s mostly true. But I can’t tell her that the end of this particular virus will be the end of emergencies like it — that’s what breaks my heart. I can’t even tell her when or if she’ll go back to school. A lot of us have taken our country’s stability for granted. I only wish she would be able to do the same.
https://gen.medium.com/what-i-told-my-9-year-old-about-coronavirus-80bac715bb8d
['Jessica Valenti']
2020-03-16 11:47:00.093000+00:00
['Parenting', 'Jessica Valenti', 'Family', 'Coronavirus', 'Health']
Title Told 9YearOld CoronavirusContent Told 9YearOld Coronavirus virus lasting impact children’s sense safety Photo Tetra ImagesGetty Images “I heard 20 kid going die coronavirus” nineyearold daughter told matteroffactly last week She’d heard rumor another classmate day Brooklyn school shut explained child would dying virus — fact kid seemed safest u thing would different while—like school closing—out abundance caution “Okay” replied “I thought sounded wrong” got back asking playing Minecraft I’m glad Layla seemed reassured reminder adult panicking child listening Closely I’ve seen lot advice entertain teach child event long school closure — we’re supposed keep schedule maintain normalcy boundary haven’t heard advice explain — child old enough understand something wrong — exactly happening world right Especially know little adult panicking child listening Closely reassure daughter likely won’t get ill coronavirus dangerous elderly sick people doesn’t make feel better grandparent tell washing hand walking instead taking subway we’re avoiding germ tell virus must pretty serious u taking extra measure like “We don’t walk school get flu” said see line grocery store notice people mask rubber glove walk around Brooklyn even know bought mask one flower print readjusted small face I’ve told close truth lot people getting sick though she’s danger lot people we’re nervous won’t enough doctor room hospital seems taking stride aside initial fear brought misinformed classmate She’s glad playdates friend knowing parent scrambling background make sure kid thing day She’s even happy neighborhood playground mostly empty “The tire swing never free” she’s right really worry though handle next week it’s live next year eats parent knowing simply new normal she’ll never relative health environmental stability grew idea going concert without mask seem like fantasy learning classroom alongside friend peer seen privilege rather given tell coronavirus pas fine feel confident that’s mostly true can’t tell end particular virus end emergency like — that’s break heart can’t even tell she’ll go back school lot u taken country’s stability granted wish would able sameTags Parenting Jessica Valenti Family Coronavirus Health
3,739
Why Is Burger King Asking You to Eat at McDonald's
Now, Burger King asking you to order from McDonald’s is akin to a Republican asking you to vote for Biden. Or Coca-Cola asking you to have a Pepsi. It just doesn’t happen. To get a little perspective, and to understand the intense rivalry between the two giants, let’s dig into some of the bizarre Burger King campaigns that have sucked all the juices out of your Big Macs. The Whopper Detour What do you do if you have to make people download your app on their phones? You give an offer, a discount or a freebie. Likewise, Burger King gave away the Whopper for just one cent. But being able to gorge on the Whopper after downloading the Burger King app was not the reason this marketing campaign won the Titanium, Direct and Grand Prix awards at the 2019 Cannes Lions International festival. The Whopper Detour The Whopper was available for next to nothing only if you downloaded the app and then ordered it within 600 feet of a McDonald’s restaurant. A clever mix of technology (geofencing, if you were wondering) and marketing ingenuity. It was not that Burger King had created a print ad or a TV commercial that trolled its rival, a ploy we often saw during the cola wars. It was the fact that people themselves were trolling McDonald’s that made the campaign so brilliant. People were literally sitting in McDonald’s parking lots, ordering Whoppers. What can be more embarrassing for a behemoth such as McDonald’s than its own employees pointing customers to the nearest Burger King joint? Never Trust a Clown From McDonald’s parking lots, enter the movie hall. Stephen King fans will remember the movie IT. There was much fanfare around the movie, and people went in hordes to watch it. But no one knew that along with the horrifying chills, while munching popcorn and slurping coke, they would also get a shot of marketing ingenuity. As the movie finished and the end credits started rolling, a message flashed on the screen which said, “Moral of the Story: Never Trust a Clown.” Bang! The entire audience went bonkers. Never Trust a Clown Burger King called it their longest ad ever. And indeed it was. Their message fitted right into the movie’s context and delivered a painful low blow to their rival just as the audience was about to dash out of the hall. They topped off the campaign by twisting the McDonald’s tagline into “I am Loving IT” from the original “I’m lovin’ it”. Along with the buns, Burger King really knows how to spice up its puns. Size Matters Do you know that the Whopper is much bigger than McDonald’s Big Mac? You might, or you might not. Some people don’t have an eye for detail. So Burger King cleared the air once and for all with its “Whopper of a Secret” campaign. A Whopper of a Secret The tongue-in-cheek campaign showed people that the Big Mac was hidden behind every Whopper in all of their advertisements. But because the Big Mac is so tiny compared to the Whopper, you could not see it. One Roasted Kanye Meal, Please Kanye West’s association with alt-right trolls such as Candace Owens, and his pro-Trump positions, have often landed him in controversy. He even nominated himself for this year’s presidential election. Talk about wacky ideas! As aptly described in this article in Fast Company, he had veered from being a “revered creative who often makes controversial statements” to a “toxic alt right fartcloud.” So when his tweet saying “McDonald’s is my favorite restaurant” was picked up by Burger King’s social listening algorithm, they latched onto the opportunity to quickly turn the tables.
They tweeted the caption “Explains a Lot” over Kanye West’s tweet. The tweet instantly grabbed the eyeballs of thousands and became the most-liked branded tweet of all time. Burger King’s response to Kanye West’s tweet An Offer They Can’t Refuse The recent friendly gesture from Burger King is not something totally out of the blue. Back in 2015, Burger King had come up with another such campaign called the “peace offering” or the “McWhopper” campaign. The idea behind the campaign was to create awareness of International Peace Day, which is celebrated on 21 September every year. Burger King proposed that McDonald’s should set aside their differences and that the two should offer a joint burger called the McWhopper. The proceeds from the sales would go to a non-profit, Peace One Day. For its sheer brilliance, the McWhopper was named the king of all media at the Cannes International Festival of Creativity and walked away with the coveted Grand Prix award. The McWhopper Campaign Burger King took the first step and even created a website with kick-ass content for the new burger. But McDonald’s did not find the campaign amusing and politely declined the proposal, for which it faced severe backlash. The genius of the campaign was that Burger King would have gained social mileage no matter what McDonald’s response was. McDonald’s had to choose the lesser of two evils. If it had accepted the offer, Burger King still would have had the upper hand of being the one to come up with it. And if McDonald’s rejected it (which it did), it would face a backlash (which it did). The headline of this article in Forbes sums up what people felt about McDonald’s after the refusal: “McDonalds Chooses Pride Over Peace With Burger King’s McWhoppers offer.” King of Hearts Burger King has mastered the art of fine topical marketing. Be it the IT campaign or the Kanye roast, Burger King frequently creates a lasting impact by finding contexts that help further its messages. It rides the wave with perfection. And now, when the virus has sucked the life out of economies the world over, leaving thousands without work and pushing millions back into poverty, Burger King is spreading the message of how important it is to help each other. By setting an example, it is nudging people to help others in this time of chaos. It’s bringing empathy back into vogue. Maybe it is the future of marketing. Or rather, it should be the future of marketing. The ROI of campaigns should not only be measured in terms of impressions or increases in footfall but also in terms of the social change the campaign stirs. Brands change lifestyles, they create habits. They are the millennial religions that billions follow to know what’s moral and what’s not. They help us make sense of the chaotic world that surrounds us. So, along with the awareness and interest that brands generate for their products, they should also work towards creating a better society. As Ann Tran, a brand consultant and TEDx speaker, says,
https://medium.com/swlh/why-is-burger-king-asking-you-to-eat-at-mcdonalds-589d2dbf01f6
['Mehboob Khan']
2020-11-11 05:59:09.158000+00:00
['Marketing', 'Creativity', 'McDonalds', 'Burger King', 'Advertising']
Title Burger King Asking Eat McDonaldsContent Burger king asking order McDonald’s akin Republican asking vote Biden Coca Cola asking Pepsi doesn’t happen get little perspective understand intense rivalry among two giant let’s dig bizarre Burger King campaign sucked juice Big Macs Whopper Detour make people download app phone give offer discount freebie Likewise Burger King gave Whopper 1cents able gorge whopper downloading Burger King app reason marketing campaign Titanium Direct Grand Prix award 2019 Cannes Lions International Whopper Detour whopper available next nothing downloaded app ordered Whopper within 600 ft McDonalds restaurant clever mix technology — Geofencing wondering — marketing ingenuity Burger King created print ad tv commercial trolled rival — ploy often saw cola war fact people trolling McDonald’s made campaign brilliant People literally sitting McDonalds parking lot ordering Whoppers embarrassing behemoth McDonalds employee pointing customer nearest Burger King joint Never Trust Clown McDonalds parking lot enter movie hall Stephen King fan would remember movie much fanfare movie people went horde watch movie one knew along horrifying chill munching popcorn slurping coke also get shot marketing ingenuity movie finished end credit started rolling message flashed screen said “Moral Story Never Trust Clown” Bang entire audience went bonkers Never Trust Clown Burger King called longest ad ever indeed message fitted right movie’s context delivered painful low blow rival dash hall topped campaign sharing McDonalds tagline Loving original loving Along bun Burger King really know spice pun Size Matters know Whopper much bigger McDonald’s might might people don’t eye detail Burger King cleared air “Whopper Secret” campaign Whopper Secret tongueincheek campaign showed people Big Mac hidden behind every Whopper advertisement Big Mac tiny compared Whopper could see One Roasted Kanye Meal Please Kanye West’s association altright troll Candace Owens pro Trump position often landed several controversy even nominated year’s presidential election Talking whacky idea described aptly article Fast company veered “revered creative often make controversial statements” “toxic alt right fartcloud” tweet saying “McDonald’s favorite restaurant” picked Burger King social listening algorithm latched onto opportunity quickly turn table tweeted caption “Explains Lot” Kanye West’s tweet tweet instantly grabbed eyeball thousand became liked branded tweet time Burger King’s response Kanye West’s tweet Offer Can’t Refuse recent friendly gesture Burger King something totally blue Back 2015 Burger King come another campaign called “peace offering” “McWhopper” campaign idea behind campaign create awareness International Peace Day celebrated 21 September every year Burger King proposed Mcdonalds set aside difference offer joint burger called McWhopper proceeds sale would go non profit Peace One Day it’s sheer brilliance McWhopper named king medium Cannes International Festival Creativity walked away coveted Grand Prix award McWhopper Campaign Burger King took first step even created website kickass content new burger McDonalds find campaign amusing politely declined proposal faced severe backlash genius campaign Burger King would gained social mileage matter response McDonalds would McDonalds choose lesser two evil accepted offer Burger King still would upper hand one come offer McDonalds would rejected — — would face backlash headline article Forbes sum people felt McDonalds refusal “McDonalds Chooses Pride 
Peace Burger King’s McWhoppers offer” King Hearts Burger King mastered art fine topical marketing campaign Kanye roast Burger King frequently creates lasting impact finding context would help it’s message ride wave perfection virus sucked life economy world leaving thousand without work pushing million back poverty Burger King spreading message important help setting example nudging people help others time chaos It’s bringing empathy back vogue Maybe future marketing rather future marketing ROI campaign measured term impression increase footfall also term social change campaign stirred Brands change lifestyle create habit millennial religion billion follow know what’s moral what’s help u make sense chaotic world surround u along awareness interest brand generate product also work frequently towards creating better society Ann Tran brand consultant Tedx speaker saysTags Marketing Creativity McDonalds Burger King Advertising
3,740
You’re Creating a New Programming Language — What Will the Syntax Look Like?
You’re Creating a New Programming Language — What Will the Syntax Look Like? I asked a bunch of programmers about their favorite syntax — here’s what they said A little while ago I decided to have a little fun and wrote an article titled “My Favorite Pieces of Syntax in 8 Different Programming Languages.” I published it and then decided to share it on a subreddit — r/ProgrammingLanguages. This led to an interesting discussion about programming language syntax, as users shared their own favorites. It left me with no choice: I had to write a new article with my favorite pieces of syntax from the r/ProgrammingLanguages community.
https://medium.com/better-programming/youre-creating-a-new-programming-language-what-will-the-syntax-look-like-35199d2a44e9
['Yakko Majuri']
2020-09-26 22:27:42.961000+00:00
['JavaScript', 'Technology', 'Programming', 'Software Engineering', 'Python']
Title You’re Creating New Programming Language — Syntax Look LikeContent You’re Creating New Programming Language — Syntax Look Like asked bunch programmer favorite syntax — here’s said little ago decided little fun wrote article titled “My Favorite Pieces Syntax 8 Different Programming Languages” published decided share subreddit — rProgrammingLanguages led interesting discussion programming language syntax user shared favorite left choice write new article favorite piece syntax rProgrammingLanguages communityTags JavaScript Technology Programming Software Engineering Python
3,741
Creative Construction: How Artists and Engineers Collaborate
From the monumental Picasso sculpture in Chicago’s Daley Plaza, to Isamu Noguchi’s Red Cube in Lower Manhattan, SOM’s history of integrating iconic artworks into a wide variety of building sites is well documented. Perhaps less known, however, is the role that engineers have played in helping to realize various works of art. In some cases, SOM has developed structural engineering solutions for executing the artist’s vision. In others, an exploration of technical issues has led the artist to refine or expand their ideas. Over the past decade, SOM’s structural engineers have developed tools, techniques, and approaches that have enhanced the impact of public art installed around the world — from a university campus in Omaha, Nebraska, to the lobby of the world’s tallest building in Dubai. In the summer of 2018, a number of these recent collaborations were featured as part of the exhibition “Poetic Structure: Art + Engineering + Architecture,” at the MAK Center for Art and Architecture in Los Angeles. The contents of this show are now making their way to Mexico City for the annual MEXTRÓPOLI Festival, in March 2019. In anticipation of the opening, we invite you to explore the engineering of art (and the art of engineering) across five creative collaborations. Janet Echelman, “Dream Catcher” (2017) Known for her colorful fiber net sculptures, Janet Echelman describes her installations as a “team sport,” with contributions from engineers, architects, and more. When she was commissioned to create a public artwork for The Jeremy Hotel in West Hollywood, Echelman envisioned a sculpture suspended above an open-air plaza between the hotel’s two buildings on the Sunset Strip. As the architects and engineers for the project, SOM worked closely with Echelman to seamlessly integrate the artwork into the new development. Titled “Dream Catcher,” the sculpture is inspired by the idea of dreaming hotel guests — its interweaving forms of fiber netting are modeled after brainwave activity that occurs during dream states. Suspended 100 feet in the air, the translucent sculpture turns The Jeremy’s plaza into a dynamic and ethereal public space, while making a striking contribution to the streetscape of West Hollywood.
https://som.medium.com/creative-construction-how-artists-and-engineers-collaborate-ef4a80f0b6c5
[]
2019-02-26 21:23:27.798000+00:00
['Design', 'Collaboration', 'Architecture', 'Art', 'Engineering']
Title Creative Construction Artists Engineers CollaborateContent monumental Picasso sculpture Chicago’s Daley Plaza Isamu Noguchi’s Red Cube Lower Manhattan SOM’s history integrating iconic artwork wide variety building site well documented Perhaps le known however role engineer played helping realize various work art case SOM developed structural engineering solution executing artist’s vision others exploration technical issue led artist refine expand idea past decade SOM’s structural engineer developed tool technique approach enhanced impact public art installed around world — university campus Omaha Nebraska lobby world’s tallest building Dubai summer 2018 number recent collaboration featured part exhibition “Poetic Structure Art Engineering Architecture” MAK Center Art Architecture Los Angeles content show making way Mexico City annual MEXTRÓPOLI Festival March 2019 anticipation opening invite explore engineering art art engineering across five creative collaboration Janet Echelman “Dream Catcher” 2017 Known colorful fiber net sculpture Janet Echelman describes installation “team sport” contribution engineer architect commissioned create public artwork Jeremy Hotel West Hollywood Echelman envisioned sculpture suspended openair plaza hotel’s two building Sunset Strip architect engineer project SOM worked closely Echelman seamlessly integrate artwork new development Titled “Dream Catcher” sculpture inspired idea dreaming hotel guest — interweaving form fiber netting modeled brainwave activity occurs dream state Suspended 100 foot air translucent sculpture turn Jeremy’s plaza dynamic ethereal public space making striking contribution streetscape West HollywoodTags Design Collaboration Architecture Art Engineering
3,742
Future Leaders: Samuel Parkinson, Senior Engineer
‘Future Leaders’ is a series of blog posts by the Financial Times in which we interview our team members and ask them how they got into technology, what they are working on and what they want to do in the future. Everyone has a different perspective, story and experience to share. This series will feature colleagues working in our Product & Technology teams. You can also connect with us on Twitter at @lifeatFT. Samuel Parkinson Hi Sam, what is your current role at the FT and what do you spend most of your time doing at work? I am a Senior Engineer within Customer Products, I’ve been at the FT for about two and a half years now. I’m actually leading a team at the moment, so I’m the tech lead for a team called ‘the Enabling Technologies Group’, which in essence is the tooling team for FT.com and the Apps. I spent a lot of time making sure we know what we’re working on and that we’re doing the right thing. Is that more management-focussed than engineering then? Yeah, it’s a nice mix, I think it’s about 50–50 at the moment. It’s definitely a lot more management than I have done before but it’s been quite an interesting jump into the deep end and there’s a lot to learn there. I really like the human side of all of that. So, how did you get into the technology industry? I have always been fascinated by technology, since I was very young. I remember I got asked this question once in an interview, I think it was my FT interview actually. I always used to tinker around and when I was young I was very lucky to have the support from my mum to go out and build a computer, I don’t know where that came from! I think I was a teenager and it was for gaming. My mum said yes, she trusted me when I had no idea what I was doing… but we went out, got the parts for this computer, and I managed to put it together and lo and behold, it actually worked. I surprised myself and it all went on from there. I didn’t study computing at school, it was a pretty terrible department when I was at school so it didn’t seem worth it. I didn’t do very well in my A-levels either and went through clearing for university, managed to get a place at Brunel University for their foundation course doing IT. So I spent five years at Brunel, doing the foundation of IT and then computer science with a year in industry. I think I learned more in that foundation year than the rest of the four years I spent at uni but it was really good, and that was the gateway. Then I went straight into the tech industry. What was your first job after uni? I was an engineer at graze.com. They do snacks through the post and when I first joined the office was in a house, they had a kitchen and it was pretty cool. That was a great time. The Enabling Technologies Group Christmas party, at the Crystal Maze 🔮 That sounds fun! Since you’ve been at the FT, what is the project you’ve worked on that you are most proud of? My most recent favourite, there’s quite a lot actually, was whilst I was on secondment with the Operations & Reliability team. There were two main projects going on at the time and I was tasked with helping out with their monitoring. We have hundreds of systems running at the FT, all of which we need to know if they’re working or not. The system and dashboard that we were using to do the monitoring on was very old and on its last legs. So, the O&R team were looking to refresh the monitoring and make it more reliable, and so I did the discovery work for what system might replace it. We built a tool called, ‘Heimdall’. I didn’t pick that name! 
Heimdall is the watchman of Asgard in Norse mythology. I think he’s part of the Avenger Marvel comics as well. I think he’s the guy in the movies with the big sword that overlooks everything. Under the hood it uses a tool called Prometheus to go out and check each one of our systems across the FT. Like that connection, cool, so that’s been your favourite project to date Yeah, it worked really well, I spent three months on it, heads down, with a great team. It’s currently looking after all of our systems and working really well. Sounds useful! With that in mind, what is the biggest lesson you have learned in recent years? The thing that keeps coming up, again and again, and something that is not always easy for me, is how difficult communication is and [the importance of] getting it right. Going back to the Heimdall project, that was good, it was communicated well and it was handed over to the team, it was a great success because of that, more than anything else. There’s been a lot of hard work in some cases because communication wasn’t good and getting that right is really hard. I think the biggest part of that is communication and collaboration with all the different disciplines, that is the crux of the problem. Looking at different types of communication, do you think communication within a team is more important or communication from a team is more important? Both are important. I think in our team we have got the internal communication down now. It wasn’t always perfect but it’s definitely getting better. For us it’s about communicating as a team outwards and that’s where we’ll hopefully improve. Sam’s team spent an afternoon in a board room overlooking the Thames Ok, final question! What would you like to do in the future? That is a great question. So, the next step up for me would be the ‘Principal Engineer’ role and I really like the sound of what it involves, working across teams, across departments and across disciplines, definitely playing into that communication aspect too. I think it would be a really interesting role. Are there any projects or developments in particular you’re interested in? I think it comes down to what we can improve within our department. We have a lot of work to do and I think the theme would be to do ‘more with less’. We spend a fair bit of time on toil at the moment, a lot of time rotating AWS keys or deleting entries from databases and it’s expensive for engineers to be spending their time on this kind of stuff. So, doing more with less, that would be a really good focus. Ok, food for thought.. Thanks, Sam! Interviewee: Samuel Parkinson Interviewer: Georgina Murray
https://medium.com/ft-product-technology/future-leaders-samuel-parkinson-senior-engineer-c056653749d2
['Ft Product']
2019-03-21 11:44:39.473000+00:00
['Learning', 'Tech', 'Communication', 'AWS', 'Engineering']
Title Future Leaders Samuel Parkinson Senior EngineerContent ‘Future Leaders’ series blog post Financial Times interview team member ask got technology working want future Everyone different perspective story experience share series feature colleague working Product Technology team also connect u Twitter lifeatFT Samuel Parkinson Hi Sam current role FT spend time work Senior Engineer within Customer Products I’ve FT two half year I’m actually leading team moment I’m tech lead team called ‘the Enabling Technologies Group’ essence tooling team FTcom Apps spent lot time making sure know we’re working we’re right thing managementfocussed engineering Yeah it’s nice mix think it’s 50–50 moment It’s definitely lot management done it’s quite interesting jump deep end there’s lot learn really like human side get technology industry always fascinated technology since young remember got asked question interview think FT interview actually always used tinker around young lucky support mum go build computer don’t know came think teenager gaming mum said yes trusted idea doing… went got part computer managed put together lo behold actually worked surprised went didn’t study computing school pretty terrible department school didn’t seem worth didn’t well Alevels either went clearing university managed get place Brunel University foundation course spent five year Brunel foundation computer science year industry think learned foundation year rest four year spent uni really good gateway went straight tech industry first job uni engineer grazecom snack post first joined office house kitchen pretty cool great time Enabling Technologies Group Christmas party Crystal Maze 🔮 sound fun Since you’ve FT project you’ve worked proud recent favourite there’s quite lot actually whilst secondment Operations Reliability team two main project going time tasked helping monitoring hundred system running FT need know they’re working system dashboard using monitoring old last leg team looking refresh monitoring make reliable discovery work system might replace built tool called ‘Heimdall’ didn’t pick name Heimdall watchman Asgard Norse mythology think he’s part Avenger Marvel comic well think he’s guy movie big sword overlook everything hood us tool called Prometheus go check one system across FT Like connection cool that’s favourite project date Yeah worked really well spent three month head great team It’s currently looking system working really well Sounds useful mind biggest lesson learned recent year thing keep coming something always easy difficult communication importance getting right Going back Heimdall project good communicated well handed team great success anything else There’s lot hard work case communication wasn’t good getting right really hard think biggest part communication collaboration different discipline crux problem Looking different type communication think communication within team important communication team important important think team got internal communication wasn’t always perfect it’s definitely getting better u it’s communicating team outwards that’s we’ll hopefully improve Sam’s team spent afternoon board room overlooking Thames Ok final question would like future great question next step would ‘Principal Engineer’ role really like sound involves working across team across department across discipline definitely playing communication aspect think would really interesting role project development particular you’re interested think come improve within department lot work think theme would 
‘more less’ spend fair bit time toil moment lot time rotating AWS key deleting entry database it’s expensive engineer spending time kind stuff le would really good focus Ok food thought Thanks Sam Interviewee Samuel Parkinson Interviewer Georgina MurrayTags Learning Tech Communication AWS Engineering
3,743
Building, authenticating and hosting VueJS App with AWS Amplify
Getting started with VueJS and AWS Amplify The rising popularity of and love for VueJS is no surprise to many developers. With over 160k stars on Github, many developers and companies, big and small, have been adopting it since the very beginning. Given the ease of developing a responsive and impressive frontend application using VueJS, it is no wonder that developers are looking for the same development experience and turning their attention to cloud services and libraries to automatically spin up and connect cloud capabilities together. AWS Amplify is an open-source library that supports many modern javascript frameworks and native mobile platforms, i.e. iOS and Android. The Amplify CLI also gives developers the ability to create a whole set of serverless, feature-rich capabilities such as Auth, API, Analytics, Storage and Hosting, built with best practices in the AWS environment, from the comfort of their own terminal. Since it is open-source and community-driven, any developer who is interested in contributing to AWS Amplify development or its communities can easily vote and create tickets in its respective Github repositories, and see each project’s roadmap (e.g. Amplify JS Roadmap & Projects) as well. Project setup In this project, we will set up a brand new VueJS app, use your own AWS account, and add the Vue CLI and AWS Amplify CLI via your favorite terminal. If you are not familiar with VueJS or AWS, it is okay to take a step back to understand the concept and art of building modern apps first and not get your hands dirty. This guide is meant for everyone and I will add notes and explanations to better guide each step. You can also refer to this Github repository for the source code as we go along. With the ease of developing a responsive and impressive frontend application using VueJS, it is no wonder that developers are looking for the same development experience… NodeJS version To make sure that your terminal is compatible with the latest AWS Amplify CLI (Node 10.0 and above) and Vue CLI (Node 8.9 and above, 8.11.0+ recommended), you need to be running at least node v10 in your terminal. Enter the following command to make sure that you are running the latest node version: node -v If you realize that you are not using the latest node/npm, you can use the Node Version Manager (NVM) to install and select the node version you need. You can enter the following command to install and use node version 13. nvm install 13 && nvm use 13 Install @vue/cli This is optional for starting development with VueJS, but in this project I am going to use @vue/cli to quickly create a Vue project with additional features such as Babel or TypeScript transpilation, ESLint integration and end-to-end testing. yarn global add @vue/cli # OR npm install -g @vue/cli Install @aws-amplify/cli The Amplify Command Line Interface (CLI) is a unified toolchain to create AWS cloud services for your app. Let’s go ahead and install the Amplify CLI. yarn global add @aws-amplify/cli # OR npm install -g @aws-amplify/cli A new Vue app Let’s start with a new Vue app by running the following command: vue create aws-amplify-vuejs I am also going to select the default preset given by @vue/cli, which adds babel and eslint to the Vue project. VueJS default preset Once you have let the CLI finish its job, you should be able to go inside the project folder to begin the next step. cd aws-amplify-vuejs You can use your favorite IDE to open up the project and take a look at it.
In this example, I am going to use VSCode for development. Your VueJS New App Now you can start your VueJS development and see your changes in your local browser: yarn serve It is Amplify time Within the new VueJS app, I am going to configure my cloud services using the Amplify CLI by entering the following command: amplify init You can now step through and key in the respective values, such as the project name to be shown in the AWS console and the environment you are in. In addition, if you do not have an AWS profile in your terminal, you will also enter the AWS credentials for a new AWS profile. amplify init After the amplify configuration, you should be able to see a new folder named amplify and a new aws-exports.js in your VueJS app. These are auto-generated by the AWS Amplify CLI, which adds the identifiers and credentials needed for every new component you add via the CLI. Auto-generated folder and files Adding AWS Amplify to your VueJS app Once you have set up the VueJS app and the AWS environment, you need to add aws-amplify and its UI components to your VueJS app. yarn add aws-amplify @aws-amplify/ui-vue # OR npm install -S aws-amplify @aws-amplify/ui-vue After that, you can add the following JavaScript code to your main.js to configure the AWS Amplify libraries in your VueJS app. import '@aws-amplify/ui-vue'; import Amplify from 'aws-amplify'; import awsconfig from './aws-exports'; Amplify.configure(awsconfig); Add Amplify in your VueJS main.js Add Auth to your VueJS app After you have configured Amplify within the VueJS app, you are ready to add features to your newly built app. We are going to add Auth. Let’s go back to the terminal and run this command in your project folder to add auth features and services. amplify add auth amplify add auth In this example, we are not going to configure a lot and complicate the whole auth process. However, you can easily reconfigure this in the future with the following command. amplify update auth After you have added auth, you can see what kind of resources will be added in your AWS environment by entering the following command. amplify status amplify status After you have confirmed the resources to be added to your AWS environment, you can enter the following command to push your changes to the AWS cloud. amplify push This should take some time, as new auth resources are created in your AWS environment following best practices, such as new IAM roles with the minimum permissions required; all of these changes and resources can be seen in AWS CloudFormation. Back in the VueJS app, you can now open up App.vue to edit the code and include the auth features you need. Default code in App.vue Firstly, let’s refer to the AWS Amplify documentation and take a look at what needs to be added. <template> <amplify-authenticator> <div> My App <amplify-sign-out></amplify-sign-out> </div> </amplify-authenticator> </template> In this code example, you can see that my app is wrapped inside <amplify-authenticator> and I can also add a dedicated sign-out button with <amplify-sign-out> . If you copy-paste the code given in the example, your app will look like this after logging in. Amplify Auth, after logging in It is not that beautiful to begin working with, so instead I am going to use VueJS’s HelloWorld component as my main screen and wrap the amplify auth components around it.
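For reference, here is a minimal sketch of how the Amplify configuration from the main.js step above fits into the default @vue/cli entry file. It assumes the Vue 2 default preset generated by vue create; apart from the Amplify lines shown in this article, the rest is the standard scaffold.
```javascript
// src/main.js: a minimal sketch assuming the Vue 2 default preset from `vue create`.
import Vue from 'vue';
import App from './App.vue';

// Registers the <amplify-*> web components (e.g. <amplify-authenticator>) used in App.vue
import '@aws-amplify/ui-vue';
import Amplify from 'aws-amplify';
// aws-exports.js is generated by the Amplify CLI and holds the resource identifiers
import awsconfig from './aws-exports';

// Point the Amplify libraries at the cloud resources created by `amplify push`
Amplify.configure(awsconfig);

Vue.config.productionTip = false;

new Vue({
  render: (h) => h(App),
}).$mount('#app');
```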
Add auth code for VueJS Next, I want to customize the default Amplify Auth screen a little by adding extra HTML header text, and since I do not want the mega-large sign-out button, I attached a new id to the component to style it better. In addition, you can add more customization and styling through theming too. Custom attributes for Amplify Auth Now that we have added the auth features and given them some customizations and styles, I can test the app via http://localhost:8080/. If you have previously closed/cancelled the node process in your terminal, enter the following command to start development again. yarn serve Amplify Auth Screen with some customization Now, you can go ahead and create a new account. You can also use the following credentials to see the whole login process. username: demo password: P@ssw0rd Have you noticed that you literally did not code any of these auth functionalities, and all we did was add the respective auth components from the Amplify UI libraries? After you have successfully logged in, you should be able to see your main component with a sign-out button below it. VueJS with Amplify Auth Your final JavaScript code with Amplify auth in App.vue should look like the code given below. <template> <div id="app"> <amplify-authenticator> <amplify-sign-in header-text="My Custom Sign In Text" slot="sign-in"></amplify-sign-in> <div> <img alt="Vue logo" src="./assets/logo.png" /> <HelloWorld msg="Welcome to Your Vue.js App" /> <div id="amplify-signout"> <amplify-sign-out></amplify-sign-out> </div> </div> </amplify-authenticator> </div> </template> <script> import HelloWorld from "./components/HelloWorld.vue"; export default { name: "App", components: { HelloWorld } }; </script> <style> #app { font-family: Avenir, Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-align: center; color: #2c3e50; margin-top: 60px; } #amplify-signout { width: 100px; margin: 0 auto; } </style> Add Hosting to your Vue App Now that you have built your first beautiful VueJS app, you want to host it somewhere. The AWS Amplify CLI and Amplify Console have you covered! You can add hosting via the following command and choose git-based deployment under Hosting with Amplify Console . amplify add hosting amplify add hosting Your browser should now open and lead you to the AWS Amplify Console. Firstly, you can link the app to your code repository; in this case, I am using my personal Github account to version all my code. Amplify Console, Add Repository Branch Under step 2, you now need to select the Git repository and its branch so that Amplify Console knows what to deploy. Amplify Console, Configure Build Settings Lastly, under step 3, you can review your configuration one more time before Save and Deploy . Amplify Console, Review and Deploy You should now see your changes being deployed via the AWS Amplify Console; note that for every change you push to your repository on the selected branch, Amplify Console will automatically deploy it to your portal. First Deployment at Amplify Console Now, for Vue routers to work properly, you have to add a rewrite rule under Amplify Console with source address </^[^.]+$|\.(?!(css|gif|ico|jpg|js|png|txt|svg|woff|ttf|map|json)$)([^.]+$)/> , target address /index.html and type 200 for the status code.
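To make it concrete why that rewrite matters: it only comes into play once your app uses client-side routing in history mode. The sample app in this article has no router, so the block below is purely an illustrative sketch with a hypothetical /about route; with a setup like this, a deep link such as /about must be rewritten to /index.html so Vue Router can resolve it in the browser.
```javascript
// src/router.js: an illustrative sketch only; this article's app doesn't use vue-router.
// The /about route is hypothetical. History mode is what makes the /index.html rewrite necessary.
import Vue from 'vue';
import VueRouter from 'vue-router';
import HelloWorld from './components/HelloWorld.vue';

Vue.use(VueRouter);

export default new VueRouter({
  mode: 'history', // clean URLs without the # fragment, served by Amplify Console
  routes: [
    { path: '/', component: HelloWorld },
    { path: '/about', component: HelloWorld }, // placeholder target, for illustration only
  ],
});
```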
Rewrites and redirects under Amplify Console Going back to your terminal, you can proceed by pressing ENTER and you should be able to see your portal URL within the terminal too. Back to your amplify command BONUS: I also went back to the Amplify Console to update my DNS and point vuejs.bryanchua.io to the portal. Add Custom DNS at Amplify Console When you enter the command amplify status, you should be able to see your updated hosting URL too. amplify status What’s next In this small project, I have only covered the basic functionality of both VueJS and AWS Amplify, and showed how easy it is to add server-side functionality to your frontend code and beautify it. The beauty of using these advanced libraries is that there is literally little to no code to write. You can now focus more time on user experience (UX) and delivering value via your app. Any thoughts about it? What do you want to see next? Feel free to reach out if you have any questions; I am available on LinkedIn and Twitter.
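A natural next step from here, hinted at in the closing paragraph, is calling a protected backend on behalf of the signed-in user. The sketch below assumes the standard aws-amplify Auth API; the endpoint URL and the fetchProfile helper are hypothetical and only illustrate the pattern, they are not part of the original walkthrough.

// api.js: illustrative sketch; the endpoint URL is a placeholder
import { Auth } from 'aws-amplify';

export async function fetchProfile() {
  // Grab the current user's session and its JWT id token
  const session = await Auth.currentSession();
  const token = session.getIdToken().getJwtToken();

  // Attach the token so a protected backend can verify the caller
  const response = await fetch('https://api.example.com/profile', {
    headers: { Authorization: `Bearer ${token}` },
  });
  return response.json();
}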
https://medium.com/swlh/building-authenticating-and-hosting-vuejs-app-with-aws-amplify-7285b7a8e90c
['Bryan Chua']
2020-06-05 03:53:12.614000+00:00
['Software Development', 'Vuejs', 'Authentication', 'Front End Development', 'AWS']
Title Building authenticating hosting VueJS App AWS AmplifyContent Getting started VueJS AWS Amplify rising trend love VueJS surprise many developer 160k star Github many developer company including big small adopting since beginning ease developing responsive impressive frontend application using VueJS wonder developer looking development experience turning attention cloud service library automatically spin connect cloud capability together AWS Amplify opensource library support many modern javascript framework native mobile platform ie iOS Android Amplify CLI also provides developer ability create whole set serverless featurerich feature Auth API Analytics Storage Hosting best practice AWS environment comfortable console terminal Since opensource communitydriven developer interested contributing AWS Amplify development community easily vote create ticket respective Github repository see project’s roadmap eg Amplify JS Roadmap Projects well Project setup project setup brand new VueJS app use AWS account add Vue CLI AWS Amplify CLI via favorite terminal familiar VueJS AWS okay take step back understand concept art building modern apps first get hand dirty guide meant everyone add note explanation better guide step also refer Github repository source code go along ease developing responsive impressive frontend application using VueJS wonder developer looking development experience… NodeJS version make sure terminal compatible latest AWS Amplify CLI minimum 100 Vue CLI minimum 89 8110 recommended need running least node v10 terminal Enter following command make sure running latest node version node v realize using latest nodenpm use Node Version Manager NVM install select node version need enter following command install use node version 13 nvm install 13 nvm use 13 Install vuecli optional actually start development VueJS project going use vuecli quickly create vue project additional feature Babel TypeScript transpilation ESLint integration endtoend testing yarn global add vuecli npm install g vuecli Install awsamplifycli Amplify Command Line Interface CLI unified toolchain create AWS cloud service app Let’s go ahead install Amplify CLI yarn global add awsamplifycli npm install g awsamplifycli new Vue app Let’s start new Vue app running following command vue create awsamplifyvuejs also going select default preset given vuecli add babel eslint vue project VueJS default preset let cli finish job able go inside project folder begin next step cd awsamplifyvuejs use favorite IDE open project take look new project example going use VSCode development VueJS New App start VueJS development see change local browser yarn serve Amplify time Within new VueJS app going configure cloud service using Amplify CLI entering following command amplify init stepthrough key respective value project name seen AWS console environment additional AWS profile terminal also entering AWS credential AWS profile terminal amplify init amplify configuration able see new folder named amplify new awsexportjs VueJS app autogenerated AWS Amplify CLI add new identifier credential needed every new component added via CLI Autogenerated folder file Adding AWS Amplify VueJS app setup VueJS app AWS environment need add awsamplify UI component VueJS app yarn add awsamplify awsamplifyuivue npm install awsamplify awsamplifyuivue add following javascript code mainjs configure AWS Amplify library VueJS app import awsamplifyuivue import Amplify awsamplify import awsconfig awsexports Amplifyconfigureawsconfig Add Amplify VueJS mainjs Add 
Auth VueJS app configured amplify within VueJS app ready add feature newly built app going add Auth app Let’s go back terminal run command project folder add auth feature service amplify add auth amplify add auth example going configure lot complicate whole auth process However easily reconfigure future following command amplify update auth added auth amplify see kind resource added AWS environment entering following command amplify status amplify status confirmed resource added AWS environment enter following command push change AWS cloud amplify push take time AWS environment new auth resource added best practice new IAM role minimum permission required event change resource seen AWS CloudFormation Back VueJS app open Appvue edit code include auth feature need Default code Appvue Firstly let’s refer aws amplify documentation take look needed added template amplifyauthenticator div App amplifysignoutamplifysignout div amplifyauthenticator template code example see app wrapped around amplifyauthenticator also add dedicated sign button amplifysignout copypaste code given example app look like logged Amplify Auth logged beautiful begin working instead going use VueJS’s HelloWorld component main screen add amplify auth component around Add auth code VueJS Next want customize little bit default Amplify Auth screen adding extra HTML header text also want megalarge sign button attached new id component better style addition add customization style theming Custom attribute Amplify Auth added auth feature given customizations style test app via httplocalhost8080 previously closedcancelled node process terminal enter following command start development yarn serve Amplify Auth Screen customization go ahead create new account also use following credential see whole login process username demo password Pssw0rd noticed literally code auth functionality add respective auth component Amplify UI library successfully logged able see main component sign button VueJS Amplify Auth final javascript code amplify auth Appvue look like code given template div idapp amplifyauthenticator amplifysignin headertextMy Custom Sign Text slotsigninamplifysignin div img altVue logo srcassetslogopng HelloWorld msgWelcome Vuejs App div idamplifysignout amplifysignoutamplifysignout div div amplifyauthenticator div template script import HelloWorld componentsHelloWorldvue export default name App component HelloWorld script style app fontfamily Avenir Helvetica Arial sansserif webkitfontsmoothing antialiased mozosxfontsmoothing grayscale textalign center color 2c3e50 margintop 60px amplifysignout width 100px margin 0 auto style Add Hosting Vue App built first beautiful VueJS app want host somewhere AWS Amplify CLI Amplify Console got covered Fortunately add hosting via following command choose gitbased deployment Hosting Amplify Console amplify add hosting amplify add hosting browser open lead AWS Amplify Console Firstly link app code repository case using personal Github account version code Amplify Console Add Repository Branch step 2 need select Git Repository Branch Amplify Console know deploy Amplify Console Configure Build Settings Lastly step 3 review configuration one time Save Deploy Amplify Console Review Deploy see change deployed via AWS Amplify Console note every change push repository selected branch earlier Amplify Console also automatically deploy change portal First Deployment Amplify Console Vue router work properly add rewrite rule Amplify Console source address cssgificojpgjspngtxtsvgwoffttfmapjson target 
address indexhtml type 200 status code Rewrites redirects Amplify Console Going back terminal proceed pressing ENTER able see portal URL within terminal Back amplify command BONUS also went back Amplify Console update DNS point vuejsbryanchuaio portal Add Custom DNS Amplify Console enter command amplify status able see updated hosting URL amplify status What’s next small project covered basic functionality VueJS AWS Amplify showed easy add serverside functionality within frontend code beautify beauty using advanced library literally noless code write focus time user experience UX delivering value via app thought want see next Feel free reach question available LinkedIn TwitterTags Software Development Vuejs Authentication Front End Development AWS
3,744
Technologies & Tools to Watch in 2021
An opinionated list of technologies to assess for DevOps Engineers and SREs Photo by NESA by Makers on Unsplash Managing Cloud Services via Kubernetes CRDs All three major cloud providers (AWS/Azure/GCP) now support a way to provision and manage cloud services from Kubernetes via custom resource definitions (CRDs). AWS has AWS Controllers for Kubernetes (ACK) in developer preview; Azure recently launched Azure Service Operator (deprecating Open Service Broker for Azure); GCP has Config Connector as an add-on to GKE. While Infrastructure-as-Code (IaC) tools such as Terraform, Ansible, and Puppet are still widely used to manage cloud infrastructure, the support for Kubernetes-managed cloud services suggests a huge shift towards organizations making Kubernetes the focal point of their cloud infrastructure. The upside here is that developers can now use the same tools to manage Kubernetes applications and other cloud services using the Kubernetes APIs, potentially simplifying the workflow. However, this tight coupling of Kubernetes and the rest of your cloud workloads may not be desired depending on your current infrastructure workflow or Kubernetes expertise. Pulumi Speaking of IaC tools, Pulumi recently announced its $37.5 million Series B funding to challenge Terraform’s dominance in this space. Unlike traditional IaC products, Pulumi opted to enable developers to write infrastructure code in their favorite languages (e.g. Go, Python, Javascript) instead of pushing yet-another JSON/YAML-based domain-specific language. This choice allows Pulumi to be more flexible than Terraform and enables developers to make use of existing testing frameworks to validate their infrastructure. However, given its nascency, Pulumi’s community is quite small compared to Terraform. Terragrunt & TFSEC Unlike Pulumi, Terraform addresses some of its deficiencies through its open-source community. Terragrunt is a thin wrapper around Terraform to help teams manage large Terraform projects by organizing configurations into versioned modules. Terragrunt implements some best practices laid out by Gruntwork co-founder Yevgeniy Brikman. While Terragrunt is fully open-source, Gruntwork recently announced commercial support for enterprises looking for more production-ready services. TFSEC is another open-source tool that complements Terraform projects. It uses static analysis to flag potential security threats to infrastructure code. As security bakes more into the DevSecOps movement, tools like tfsec will become more important in the future. Tekton The CI/CD market is saturated with established tools like Jenkins and Spinnaker as well as emergent cloud-native tools like ArgoCD. Tekton is a new player in this space, focused on Kubernetes workloads. Tekton started as part of the Knative project and was later donated to the Continuous Delivery Foundation (CDF). The differentiating factor for Tekton is that it defines the pipelines via Kubernetes CRDs. This allows pipelines to inherit native Kubernetes features (e.g. rollbacks) and also integrate with existing tools such as Jenkins X or ArgoCD to support complex, end-to-end CI/CD pipelines. Trivy Vulnerability scanning for containers is becoming an important part of any CI/CD pipelines. Like the CI/CD market, there are plenty of open-source and commercial tools including Docker Bench for Security, Clair, Cilium, Anchore Engine, and Falco. Trivy is a tool from Aqua Security that not only scans the container but also the underlying packages in the code. 
Combined with Aqua Security’s kube-bench, organizations can more easily bake security into the application development workflow. ShellCheck Despite tremendous improvements in the infrastructure tooling space, shell scripts remain in various workflows to get simple tasks done. ShellCheck is a static analysis tool that lints shell scripts for syntax issues and common mistakes. ShellCheck can run from the web, the terminal/CI, as well as in your favorite text editor (e.g. Vim, Sublime, Atom, VS Code). Pitest/Stryker Pitest (Java) and Stryker (Javascript, C#, Scala) both implement mutation testing in their respective languages. Mutation testing gauges the quality of tests by injecting faults into the code under test and checking whether the tests still pass despite the mutation. A good unit test should fail when a mutation occurs in the code it covers. Mutation testing complements test coverage to detect both untested and inadequately tested code. Litmus Back in 2011, Netflix popularized chaos engineering with Chaos Monkey as part of the Simian Army suite of tools. In the Kubernetes world, there are plenty of chaos engineering tools such as chaoskube, kube-monkey, and PowerfulSeal as well as commercial platforms like Gremlin. I want to highlight Litmus as a mature chaos engineering solution that is extensible and easy to use. Litmus is a lightweight Kubernetes operator consisting of ChaosEngine, ChaosExperiment, and ChaosResult. Litmus supports fine-grained experiments that go beyond simply killing random pods in a namespace and displays the results via the ChaosResult CRD instead of leaving observability up to the users.
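To make the Pulumi point above more concrete, here is roughly what declaring infrastructure in a general-purpose language looks like: a minimal JavaScript sketch that creates an S3 bucket with Pulumi's AWS provider. The resource name and tags are arbitrary placeholders, and this is a sketch rather than a full program.

// index.js: minimal Pulumi sketch in JavaScript
"use strict";
const aws = require("@pulumi/aws");

// Declaring a resource is just constructing an object; `pulumi up` diffs and applies it
const bucket = new aws.s3.Bucket("app-assets", {
  acl: "private",
  tags: { environment: "dev" },
});

// Expose the generated bucket name as a stack output
exports.bucketName = bucket.id;

Because this is ordinary JavaScript, the existing testing frameworks mentioned above can exercise infrastructure code directly. On a related note, the mutation-testing idea behind Pitest and Stryker can be illustrated with a toy example (plain JavaScript, not tied to either tool): a mutation tool takes working code, flips a small detail, and re-runs your tests to see whether any of them notice.

// isAdult.js
function isAdult(age) {
  return age >= 18; // a mutation tool might flip this to `age > 18`
}

// isAdult.test.js (any test runner): the boundary case is what kills that mutant
console.assert(isAdult(18) === true, "18 should count as adult");
console.assert(isAdult(17) === false, "17 should not count as adult");

If the suite only ever checked isAdult(30), that mutant would survive, which is exactly the kind of gap mutation testing exposes on top of plain line coverage.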
https://medium.com/dev-genius/technologies-tools-to-watch-in-2021-a216dfc30f25
['Yitaek Hwang']
2020-11-16 08:19:52.992000+00:00
['Kubernetes', 'DevOps', 'Software Engineering', 'Software Development']
Title Technologies Tools Watch 2021Content opinionated list technology ass DevOps Engineers SREs Photo NESA Makers Unsplash Managing Cloud Services via Kubernetes CRDs three major cloud provider AWSAzureGCP support way provision manage cloud service Kubernetes via custom resource definition CRDs AWS AWS Controllers Kubernetes ACK developer preview Azure recently launched Azure Service Operator deprecating Open Service Broker Azure GCP Config Connector addon GKE InfrastructureasCode IaC tool Terraform Ansible Puppet still widely used manage cloud infrastructure support Kubernetesmanaged cloud service suggests huge shift towards organization making Kubernetes focal point cloud infrastructure upside developer use tool manage Kubernetes application cloud service using Kubernetes APIs potentially simplifying workflow However tight coupling Kubernetes rest cloud workload may desired depending current infrastructure workflow Kubernetes expertise Pulumi Speaking IaC tool Pulumi recently announced 375 million Series B funding challenge Terraform’s dominance space Unlike traditional IaC product Pulumi opted enable developer write infrastructure code favorite language eg Go Python Javascript instead pushing yetanother JSONYAMLbased domainspecific language choice allows Pulumi flexible Terraform enables developer make use existing testing framework validate infrastructure However given nascency Pulumi’s community quite small compared Terraform Terragrunt TFSEC Unlike Pulumi Terraform address deficiency opensource community Terragrunt thin wrapper around Terraform help team manage large Terraform project organizing configuration versioned module Terragrunt implement best practice laid Gruntwork cofounder Yevgeniy Brikman Terragrunt fully opensource Gruntwork recently announced commercial support enterprise looking productionready service TFSEC another opensource tool complement Terraform project us static analysis flag potential security threat infrastructure code security bakes DevSecOps movement tool like tfsec become important future Tekton CICD market saturated established tool like Jenkins Spinnaker well emergent cloudnative tool like ArgoCD Tekton new player space focused Kubernetes workload Tekton started part Knative project later donated Continuous Delivery Foundation CDF differentiating factor Tekton defines pipeline via Kubernetes CRDs allows pipeline inherit native Kubernetes feature eg rollback also integrate existing tool Jenkins X ArgoCD support complex endtoend CICD pipeline Trivy Vulnerability scanning container becoming important part CICD pipeline Like CICD market plenty opensource commercial tool including Docker Bench Security Clair Cilium Anchore Engine Falco Trivy tool Aqua Security scan container also underlying package code Combined Aqua Security’s kubebench organization easily bake security application development workflow ShellCheck Despite tremendous improvement infrastructure tooling space shell script remain various workflow get simple task done ShellCheck static analysis tool lint shell script syntax common mistake ShellCheck run web terminalCI well favorite text editor eg Vim Sublime Atom VS Code PitestStryker Pitest Java Stryker Javascript C Scala implement mutation testing respective language Mutation testing gauge quality test injecting fault test checking test still pas even mutation good unit test fail mutation occurs test case Mutation testing complement test coverage detect untested inadequately tested code Litmus Back 2011 Netflix popularized chaos engineering 
Chaos Monkey part Simian Army suite tool Kubernetes world plenty chaos engineering tool chaoskube kubemonkey PowerfulSeal well commercial platform like Gremlin want highlight Litmus mature chaos engineering solution extensible easy use Litmus lightweight Kubernetes operator consisting ChaosEngine ChaosExperiment ChaosResult Litmus support finegrained experiment go beyond simply killing random pod namespace display result via ChaosResult CRD instead leaving observability usersTags Kubernetes DevOps Software Engineering Software Development
3,745
The Problem With Unsolicited Redesigns
I recently wrote an article about how side projects will benefit you and your Design career. One of the most popular types of side projects is the unsolicited redesign. They’re all over Dribbble, Medium, and Design Twitter. In fact, they’re so popular they got their own website. That’s not to say the love for them is unanimous though. Once you read some of the comments on these redesigns, or do a quick search on Medium, you’ll quickly discover two very different perspectives: One half of the Design community loves and recommends unsolicited redesigns for all the value they bring — the other half absolutely hates them. While I can certainly empathize with both camps, I know a properly executed unsolicited redesign can provide all the benefits I mentioned in my previous article. An unsolicited redesign can be great practice, give you content for your portfolio, let you try out new tools and methods, explore your creativity, and be a lot of fun. You might not have a great chance of turning your redesign into a business, but you will find a few case studies describing how someone landed a job or a client through an unsolicited redesign. With that being said, I don’t think your goal should be to get hired by the company who’s product you’re redesigning. This simply happens too rarely for it to be a viable strategy. As for my empathy for the haters, let’s get into the problem with unsolicited redesigns. The problem with unsolicited redesigns Designing in the real world is a balancing act between creative freedom and constraints of various kinds. You have a finite amount of time to complete a project, certain features or UI decisions may be out of scope for technical reasons, the budget will obviously put a cap on your research and other activities, and your Design System will limit your creative freedom. On top of all this, you will undoubtedly run into a series of challenges along the way, forcing you to cut corners, negotiate compromises with stakeholders, and settle for “great, but not perfect”. The fact is this: When you do an unsolicited redesign of an existing app or website, you're shielded from all the constraints and challenges faced by the Designers and Developers who created the original. If you’re 1) aware of this, and 2) keep your unsolicited redesign to yourself, you’re home safe. That last part is not what most (aspiring) Designers do though, nor is it what I recommend. Before I address the latter, here's why failing to account for, or at least acknowledge, the real-world constraints is a problem: You’re in for a rude awakening when you get your first Design job if you don’t realize beforehand that constraints and challenges are part of your job. It’s important that you know this so that your decision to get into Design is based on a proper understanding of the field. Knowing about the constraints and challenges of doing Design in the real world is an important skill and valuable experience. It’s part of what a potential employer looks at, alongside your other Design skills, when considering you for a job. While you won’t have a ton of experience when you’re first starting out in the field, it’s important to be aware, and show your awareness, of the difference between an unsolicited redesign without any constraints, and a real-world Design project. You might offend the Designers and Developers who created the original. This is arguably what brings out the most hate toward unsolicited redesigns. 
Since the people who created the original are painfully aware of all the constraints, compromises, and seemingly awesome ideas that had to be left out, seeing an unsolicited redesign from an outsider can feel like a slap in the face. Your unsolicited redesign might read as “here’s what you did wrong”, “here’s what you should have done”, “I’m clearly a better Designer than you”. Especially for someone just starting out in the field, I wouldn’t recommend this entrance in the Design community. Luckily, the problems above are fairly easy to avoid. How to get your unsolicited redesign right Don’t think of it as a redesign The whole problem stems from the idea of remaking something that was already made by others. In other words, design something with an existing company’s name and logo on it, and you’re guaranteed a ton of criticism from Designers who have any kind of relationship with the given product or website. Don’t think “redesign” — just think “design”. Instead of redesigning Spotify, why not simply design a great music player? Turn a feature into a standalone app Facebook, Twitter, Spotify, Airbnb, Uber, and even Medium are among the most popular subjects of unsolicited redesigns. However, due to the age, size, and complexity of these products, the teams working on them are dealing with an enormous amount of legacy, constraints, bureaucracy, and scrutiny from various sides that you can’t possibly account for in your one-person redesign project. Don’t assume you can or attempt to do so. Whether you think of it as a “redesign” or not, instead of attempting a redesign of Facebook in its entirety, pick out an individual feature or part of the system, and reimagine the design of that. How about an online platform to form and get together in groups of likeminded people? Or an app for organizing and promoting events? Or maybe just a messaging app? Basically, use an existing app as the starting point for your project, but then turn it into something much more original. Design a similar concept from scratch, or turn a feature into a standalone app. If you follow this advice, but especially if you don’t, there are a couple of other things you can do to improve your unsolicited redesign: Assume the team behind the app or website already considered your solution Some very skilled people already worked on this and ended up with a solution different from what you consider to be a better one. There’s probably a good explanation behind that. Stay humble and respectful, and avoid coming across as that guy or girl who thinks they're better than the Design team at Airbnb or Twitter. For an excellent example of this, check out this unsolicited redesign of the Medium Claps feature. Consider the constraints and challenges you would have to deal with in the real world Show that you understand how easy an unsolicited redesign is, compared to what the Designers and Developers went through to create the original. Describe how financial, technical and other business constraints could impact a project like this in the real world. Explain how you could, hypothetically, deal with these constraints and challenges, were you a Designer on the actual team. How would you have kicked off the project? How would you have approached Research to decide on the most important features? Who in the organization would you have talked with to uncover any constraints and challenges? How and when would you have evaluated the technical feasibility of your ideas? How feasible do you actually think your solution is? 
How about testing the usability and desirability of it? You would ideally have done some of these things on your own, even in an unsolicited redesign project, but it’s okay to make assumptions and describe “the real world scenario” to strengthen your case even further.
https://medium.com/swlh/the-problem-with-unsolicited-redesigns-5c6d230354ed
['Christian Jensen']
2020-05-13 14:01:39.168000+00:00
['Design', 'Creativity', 'UX', 'Side Project', 'Portfolio']
Title Problem Unsolicited RedesignsContent recently wrote article side project benefit Design career One popular type side project unsolicited redesign They’re Dribbble Medium Design Twitter fact they’re popular got website That’s say love unanimous though read comment redesigns quick search Medium you’ll quickly discover two different perspective One half Design community love recommends unsolicited redesigns value bring — half absolutely hate certainly empathize camp know properly executed unsolicited redesign provide benefit mentioned previous article unsolicited redesign great practice give content portfolio let try new tool method explore creativity lot fun might great chance turning redesign business find case study describing someone landed job client unsolicited redesign said don’t think goal get hired company who’s product you’re redesigning simply happens rarely viable strategy empathy hater let’s get problem unsolicited redesigns problem unsolicited redesigns Designing real world balancing act creative freedom constraint various kind finite amount time complete project certain feature UI decision may scope technical reason budget obviously put cap research activity Design System limit creative freedom top undoubtedly run series challenge along way forcing cut corner negotiate compromise stakeholder settle “great perfect” fact unsolicited redesign existing app website youre shielded constraint challenge faced Designers Developers created original you’re 1 aware 2 keep unsolicited redesign you’re home safe last part aspiring Designers though recommend address latter here failing account least acknowledge realworld constraint problem You’re rude awakening get first Design job don’t realize beforehand constraint challenge part job It’s important know decision get Design based proper understanding field Knowing constraint challenge Design real world important skill valuable experience It’s part potential employer look alongside Design skill considering job won’t ton experience you’re first starting field it’s important aware show awareness difference unsolicited redesign without constraint realworld Design project might offend Designers Developers created original arguably brings hate toward unsolicited redesigns Since people created original painfully aware constraint compromise seemingly awesome idea left seeing unsolicited redesign outsider feel like slap face unsolicited redesign might read “here’s wrong” “here’s done” “I’m clearly better Designer you” Especially someone starting field wouldn’t recommend entrance Design community Luckily problem fairly easy avoid get unsolicited redesign right Don’t think redesign whole problem stem idea remaking something already made others word design something existing company’s name logo you’re guaranteed ton criticism Designers kind relationship given product website Don’t think “redesign” — think “design” Instead redesigning Spotify simply design great music player Turn feature standalone app Facebook Twitter Spotify Airbnb Uber even Medium among popular subject unsolicited redesigns However due age size complexity product team working dealing enormous amount legacy constraint bureaucracy scrutiny various side can’t possibly account oneperson redesign project Don’t assume attempt Whether think “redesign” instead attempting redesign Facebook entirety pick individual feature part system reimagine design online platform form get together group likeminded people app organizing promoting event maybe messaging app Basically use existing app 
starting point project turn something much original Design similar concept scratch turn feature standalone app follow advice especially don’t couple thing improve unsolicited redesign Assume team behind app website already considered solution skilled people already worked ended solution different consider better one There’s probably good explanation behind Stay humble respectful avoid coming across guy girl think theyre better Design team Airbnb Twitter excellent example check unsolicited redesign Medium Claps feature Consider constraint challenge would deal real world Show understand easy unsolicited redesign compared Designers Developers went create original Describe financial technical business constraint could impact project like real world Explain could hypothetically deal constraint challenge Designer actual team would kicked project would approached Research decide important feature organization would talked uncover constraint challenge would evaluated technical feasibility idea feasible actually think solution testing usability desirability would ideally done thing even unsolicited redesign project it’s okay make assumption describe “the real world scenario” strengthen case even furtherTags Design Creativity UX Side Project Portfolio
3,746
Alone You Can Make a Difference, United We Can Transform
The Reality is We put so much time and effort into bettering ourselves. ➰Mentally we look to meditate, do yoga, and be mindful. ➰Physically we exercise, take care of our skin and hair, and spend tons of time and money shopping to look good. ➰Financially we save and invest money to make more money. ➰Professionally we learn new skills, upgrade our qualifications, take new courses, and network with others. Are we doing enough individually to heal the earth? Are we investing our time and money in making decisions and giving back to that which has provided bountifully?
https://medium.com/illumination/alone-you-can-make-a-difference-united-we-can-transform-4c38bb31fb9d
['Chetna Jai']
2020-12-12 22:41:54.897000+00:00
['Environment', 'Illumination', 'Earth', 'Future', 'Climate Change']
Title Alone Make Difference United TransformContent Reality take much time effort bettering ➰Mentally look meditate yoga mindful ➰Physically exercise take care skin hair spend ton time money shopping look good ➰Financially save invest money make money ➰Professionally learn new skill upgrade qualification take new course network others enough individually heal earth investing time money making decision giving back provided bountifullyTags Environment Illumination Earth Future Climate Change
3,747
Meet the Medium “Elevators”
Meet the Medium “Elevators” Stephanie Georgopulos and Harris Sockel spend their days searching for great writing on Medium Stephanie Georgopulos and Harris Sockel are editors at Medium who started out using the platform back in 2013, writing and publishing stories that explored the human condition. Now, they work to “elevate” with independent, self-published writers on Medium. Georgopulos and Sockel scour Medium to find great stories they think deserve a wider audience than they may otherwise be getting. They reach out to the writer and work with them on improving their piece, then distribute it broadly through Medium’s topics, publications, homepage, emails, and social channels. Medium VP, Editorial Siobhan O’Connor explained the various ways that the editorial team works with writers — from the commissioned stories in our monthly magazine to exclusive columnists, plus reported features and insightful essays. She also described how we work with writers self-publishing on Medium — and this interview explains that in greater detail. Hi there. Can you tell us what you do? Harris Sockel: We’re finding great writers on Medium and working with them to develop their stories to reach a wider audience. Basically, we work to find compelling voices and build relationships. How did you begin working at Medium? Stephanie Georgopulos: I’ve been writing on Medium since the site was in beta. I built a publication called Human Parts on Medium back in 2013, and Harris was one of my first contributors. About a year in, I needed help managing the number of submissions I was receiving, and I felt that Harris’s writing embodied the spirit of what I wanted Human Parts to be. We met up for a drink and by the time we left, I had my partner. I joined Medium full time in 2016 as a curator, and you can guess the rest from there. The majority of the writers Harris and I ended up working with on Human Parts started out self-publishing on Medium as well. What we do now — looking for great writing, and not really knowing what we’re going to find when we arrive at work in the morning — originates with that editorial experience. What draws you to this kind of work — collaborating with writers from the platform? Georgopulos: There’s a raw enthusiasm from a lot of these writers, who just had to publish these stories, even without confirmation that payment and readers would be waiting. I’ve been a freelance writer before; I’ve written things just to make fifty dollars. So, I understand there can be a different energy that goes into something you’re writing for an assignment versus something you’re writing for yourself. Sockel: I’ve learned a lot getting to work with experts who write about their industries. Medium is home to doctors, scientists, designers — leaders in their fields. And they don’t necessarily want to be career writers, but they have expertise that’s really valuable. Can you talk a bit about what writers get out of self-publishing on Medium? Sockel: I think writing on Medium means the opportunity to reach a wide audience without the overhead of creating your own blog. You don’t need a reputation, followers, or any type of pre-ordained cred to write here and find people willing to listen. Georgopoulos: If the Medium Partner Program had existed when I was a freelancer, I would’ve had more options. I’ve had lovely relationships with editors, but at the end of the day, they have to commission stories that make sense for their publication and audience. 
So when writing is your primary income, you have to make choices about which ideas to pursue. And I think the Partner Program means writers don’t have to choose — you can pitch a big, ambitious idea to an editor, and you can also write something for which there is no natural, obvious publisher. You can monetize your killed stories, your tweetstorms . . . Medium’s always been good for writing first and placing later, but now you can get paid, too. I think a lot of us have trained ourselves to write what we can sell to publishers, but when you’re “selling” directly to readers, you can create and respond to your own audience rather than borrowing one. This is essentially what we do on social media already, for free. It goes back to the idea that all of our output on digital publishing platforms and even social media websites is a form of work. Georgopulos: Right. On many sites and platforms, the work you’re doing — there are ads being sold against it. Medium has always been . . . people call it longform Twitter, but I don’t think it’s like Twitter at all. It’s a place to parse things, not just throw them out there and forget about them later. I like processing my thoughts through writing without feeling like I need to have something massive to say every single time. Or that I need a news angle just to speak. And I think that’s been a huge problem with the internet, particularly with personal essays. To sell, it always has to be confessional or Sockel: This huge revelation — Georgopulos: Your most private thoughts. Sometimes it’s okay to find meaning in lightness. What’s an example of a story like that? That felt urgent even if it wasn’t timely? Sockel: “Enjoli” by Kristi Coulter. It’s a very personal (and very funny) essay about getting sober in a culture where everyone seems to be drinking all the time. I remember when I first read it (Steph sent it to me) and it was that feeling of, this is the story that she had to tell, and she’s telling it in her own way — she’s writing it for herself. Georgopulos: And it led to Kristi getting a book deal — her first book of essays, Nothing Good Can Come from This, was published last summer by Macmillan. It’s pretty heady to go to work everyday not knowing what opportunities it might create for someone else. What’s a story that more recently came through that you really enjoyed? Georgopulos: There are so many, but “Living in Deep Time” by Elizabeth Childs Kelly was one I really loved on a personal level. As many women noted during and since, the Kavanaugh confirmation illustrated to women how our culture regards and values our experiences, and frankly, that picture was monstrous. Throughout the trial I read many personal accounts that echoed Christine Blasey Ford’s, a lot of articles making logical arguments about why this confirmation couldn’t move forward. And in the aftermath, when it did, it kind of felt like . . . do our words matter that little? Why even bother? “Living in Deep Time” reminded me why we bother. It reminded me that you can seize power without stealing it. Writers are motivated by different things. How do you tailor your approach for each person? Sockel: Every writer is different. Some want to earn money from their work and build careers in writing. Others want to share expertise they’ve gained from working in another industry, so targeted distribution to a niche audience might be more what they’re looking for. The same goes for editing: some writers want to develop a relationship with an editor, and others want to do their own thing. 
It really depends, and there are all kinds of writers along the spectrum. Our relationships vary depending on the person, their goals, and their work. Tell us about the metered paywall. Georgopulos: The stories we work on are chosen for, and funded by, our subscribers, so everything we work on goes behind our metered paywall. We’re an ad-free platform, so we’re less concerned about every piece getting tons of traffic and more concerned with making sure the readers who invest in a subscription are getting value out of that. Sockel: I think writers are starting to see that if you put something behind the meter, the work can go much further. It’s also just thrilling, personally, to see how much more engagement a piece can get after it goes through our process. Lydia Sohn’s “What Do 90-Somethings Regret Most?” is a great example. Sohn is a minister and writer in San Diego, and in the story she describes interviewing her oldest congregants about their hopes, fears, and regrets. It was obvious Sohn came to the interviews with a lot of empathy (and came away with a new perspective on aging). When I found the story, almost no one had seen it. She’s had a lot of success from that piece, and readers got a lot out of the insights in it. What do you want writers to know about Medium? Sockel: I want more people to understand that you can get paid to write what you want to write. I don’t think people quite get that yet. And this doesn’t just go for essayists — I’m waiting for more independent journalists and industry experts outside of tech to try it out. Georgopulos: And I want people to worry less about “what works” and focus on finding their voice. There’s just really no way to skip the line when it comes to that. The stories that hit me hardest are the ones I didn’t know I wanted, and in my experience those resonate because they’re coming from this one-of-a-kind place only that writer has access to. Their perspective. That lifetime that got them here. That’s what I’m looking for in a story. I’m reading all day long, so something really needs to jump out and have an authentic, fresh voice for me to be able to stick with it from beginning to end. There are only so many hours. Sockel: I probably have a thousand tabs open. Georgopulos: Tabs all day long. It’s extremely exciting and refreshing to find that one where you think, “Ah, this is so good.” We answer writers’ questions in a follow-up post here.
https://blog.medium.com/meet-the-medium-elevators-92ab3c47abc8
['Medium Staff']
2020-08-13 16:05:12.064000+00:00
['Writing Tips', 'Writing', 'Medium', 'Partner Program', 'Creativity']
Title Meet Medium “Elevators”Content Meet Medium “Elevators” Stephanie Georgopulos Harris Sockel spend day searching great writing Medium Stephanie Georgopulos Harris Sockel editor Medium started using platform back 2013 writing publishing story explored human condition work “elevate” independent selfpublished writer Medium Georgopulos Sockel scour Medium find great story think deserve wider audience may otherwise getting reach writer work improving piece distribute broadly Medium’s topic publication homepage email social channel Medium VP Editorial Siobhan O’Connor explained various way editorial team work writer — commissioned story monthly magazine exclusive columnist plus reported feature insightful essay also described work writer selfpublishing Medium — interview explains greater detail Hi tell u Harris Sockel We’re finding great writer Medium working develop story reach wider audience Basically work find compelling voice build relationship begin working Medium Stephanie Georgopulos I’ve writing Medium since site beta built publication called Human Parts Medium back 2013 Harris one first contributor year needed help managing number submission receiving felt Harris’s writing embodied spirit wanted Human Parts met drink time left partner joined Medium full time 2016 curator guess rest majority writer Harris ended working Human Parts started selfpublishing Medium well — looking great writing really knowing we’re going find arrive work morning — originates editorial experience draw kind work — collaborating writer platform Georgopulos There’s raw enthusiasm lot writer publish story even without confirmation payment reader would waiting I’ve freelance writer I’ve written thing make fifty dollar understand different energy go something you’re writing assignment versus something you’re writing Sockel I’ve learned lot getting work expert write industry Medium home doctor scientist designer — leader field don’t necessarily want career writer expertise that’s really valuable talk bit writer get selfpublishing Medium Sockel think writing Medium mean opportunity reach wide audience without overhead creating blog don’t need reputation follower type preordained cred write find people willing listen Georgopoulos Medium Partner Program existed freelancer would’ve option I’ve lovely relationship editor end day commission story make sense publication audience writing primary income make choice idea pursue think Partner Program mean writer don’t choose — pitch big ambitious idea editor also write something natural obvious publisher monetize killed story tweetstorms Medium’s always good writing first placing later get paid think lot u trained write sell publisher you’re “selling” directly reader create respond audience rather borrowing one essentially social medium already free go back idea output digital publishing platform even social medium website form work Georgopulos Right many site platform work you’re — ad sold Medium always people call longform Twitter don’t think it’s like Twitter It’s place parse thing throw forget later like processing thought writing without feeling like need something massive say every single time need news angle speak think that’s huge problem internet particularly personal essay sell always confessional Sockel huge revelation — Georgopulos private thought Sometimes it’s okay find meaning lightness What’s example story like felt urgent even wasn’t timely Sockel “Enjoli” Kristi Coulter It’s personal funny essay getting sober culture everyone seems drinking time remember 
first read Steph sent feeling story tell she’s telling way — she’s writing Georgopulos led Kristi getting book deal — first book essay Nothing Good Come published last summer Macmillan It’s pretty heady go work everyday knowing opportunity might create someone else What’s story recently came really enjoyed Georgopulos many “Living Deep Time” Elizabeth Childs Kelly one really loved personal level many woman noted since Kavanaugh confirmation illustrated woman culture regard value experience frankly picture monstrous Throughout trial read many personal account echoed Christine Blasey Ford’s lot article making logical argument confirmation couldn’t move forward aftermath kind felt like word matter little even bother “Living Deep Time” reminded bother reminded seize power without stealing Writers motivated different thing tailor approach person Sockel Every writer different want earn money work build career writing Others want share expertise they’ve gained working another industry targeted distribution niche audience might they’re looking go editing writer want develop relationship editor others want thing really depends kind writer along spectrum relationship vary depending person goal work Tell u metered paywall Georgopulos story work chosen funded subscriber everything work go behind metered paywall We’re adfree platform we’re le concerned every piece getting ton traffic concerned making sure reader invest subscription getting value Sockel think writer starting see put something behind meter work go much It’s also thrilling personally see much engagement piece get go process Lydia Sohn’s “What 90Somethings Regret Most” great example Sohn minister writer San Diego story describes interviewing oldest congregant hope fear regret obvious Sohn came interview lot empathy came away new perspective aging found story almost one seen She’s lot success piece reader got lot insight want writer know Medium Sockel want people understand get paid write want write don’t think people quite get yet doesn’t go essayist — I’m waiting independent journalist industry expert outside tech try Georgopulos want people worry le “what works” focus finding voice There’s really way skip line come story hit hardest one didn’t know wanted experience resonate they’re coming oneofakind place writer access perspective lifetime got That’s I’m looking story I’m reading day long something really need jump authentic fresh voice able stick beginning end many hour Sockel probably thousand tab open Georgopulos Tabs day long It’s extremely exciting refreshing find one think “Ah good” answer writers’ question followup post hereTags Writing Tips Writing Medium Partner Program Creativity
3,748
A Manifesto for the Online Writer Who’s Lost Their Love of Writing
A Manifesto for the Online Writer Who’s Lost Their Love of Writing Stop playing the viral slot machine Photo: Alex/Unsplash My kid was anxious to buy his book about animal kingdoms. He’s 5 and possesses the consumer certainty only a 5-year-old can have. I like this book. I want this book. Buy this book. I, however, am simultaneously filled with joy and dread when walking into a bookstore. I love the joy of being so close to so many great minds. I despise the dread of deciding which one to bring home. My son tugged at my sleeve. He had his sights set on the free bookmarks at the checkout, not to mention the one book, of all the books, he chose to purchase. I didn’t want to leave empty-handed. The pandemic has shuttered all libraries and I was starved for the real feel of paper between my fingers instead of the awkward weight of my Kindle. In my haste I scanned the shelves, looking for something, anything to catch my eye. A red spine. Bird by Bird. Anne Lamott. I grabbed it and paid, falling 86 cents short in cash. The bookstore employee was nice enough to let it slide. My son and I left. He exuberant in his find, me more hesitant. Some instructions on writing and life, the subtitle read. Great, another writer writing about writing, I thought. Little did I realize that this book is the book on writing. When we got home, I opened the front cover, and before finishing the introduction knew with absolute certainty that Lamott wrote this book for me and me alone. Never has anyone spoken so directly to me. Never has anyone rekindled in me a passion for my craft. I started reading Bird by Bird yesterday. I haven’t finished it yet, but in my excitement, I sat down to write this morning and what poured forth was a manifesto outlined below. You see, I began this year with a promise to myself: I would make writing my career. I’ve been writing for over a decade but I never took myself seriously enough to bravely say “I’m a writer” when asked what I do. I didn’t know what exactly, career-wise, my writing would entail. I figured the majority of my writing would be published online on various platforms. I’d keep some writing to myself. And I’d possibly dabble in side projects and a book or two. Three-quarters of the year has gone by and although I haven’t made a living out of it per se, I’ve grown addicted to what I call the viral slot machines. It’s a fun game. I refresh my browser or app and see what coins, I mean views, come tumbling out. The views, the notifications, the praise, the comments, the highlights. They may seem harmless, even a poor analogy to a slot machine. I’m not frivolously throwing money away, right? That’s true in a sense, but the act of playing the viral slot machine does cost me something: my time and my attention. Bird by Bird slapped me across my face. I write not in the hopes of going viral, but for writing’s sake. Somewhere this year I’ve lost that. Could I get back the joy of writing for writing’s sake? Could I get back to waking early, before the other members of my household, sitting at my desk, alone with my thoughts, and through a groggy haze of early morning confusion, stringing together words in a coherent order? Could the pleasure of writing derive from the writing itself and not from the off chance that the thing I wrote “broke the internet” so much so that I feel instantly validated inside? Yes, I believe I can. And again, that’s the reason for this manifesto. Anne Lamott has spoken to me and now I’m speaking to you, dear writer.
I am speaking to you because I know we are both blessed and cursed by our craft. We are blessed in that the gatekeepers of old are long gone. The powers of the interwebs have created a meritocracy where voices that want to be heard can be heard. Yet we are cursed because to acquire other people’s time and attention we must play a game. A game no one understands. A game without rules. A game that feels like you’re losing until you hit it big. And when you hit it big, you want more. You’ve tasted virality and it’s sweet and sour like a stale bag of Sour Patch Kids. It never feels like enough. This is a call to arms my fellow online writers. We must stop playing the game. We must take back our craft. We must find joy and pleasure in the act of writing, not in the downstream effects it may incur. We must write. Here’s how we will do just that. The Online Writer’s Manifesto
https://medium.com/the-post-grad-survival-guide/a-manifesto-for-the-online-writer-whos-lost-their-love-of-writing-b6489678bcdb
['Declan Wilson']
2020-10-06 07:10:35.192000+00:00
['Creativity', 'Writing Tips', 'Self', 'Work', 'Writing']
Title Manifesto Online Writer Who’s Lost Love WritingContent Manifesto Online Writer Who’s Lost Love Writing Stop playing viral slot machine Photo AlexUnsplash kid anxious buy book animal kingdom He’s 5 posse consumer certainty 5yearold like book want book Buy book however simultaneously filled joy dread walking bookstore love joy close many great mind despise dread residing one bring home son tugged sleeve sight set free bookmark checkout mention one book book chose purchase didn’t want leave emptyhanded pandemic shuttered library starved real feel paper finger instead awkward weight Kindle haste scanned shelf looking something anything catch eye red spine Bird Bird Anne Lamott grabbed paid falling 86cents short cash bookstore employee nice enough let slide son left exuberant find hesitant instruction writing life subtitle read Great another writer writing writing thought Little realize book book writing got home opened front cover finishing introduction knew absolute certainty Lamott wrote book alone Never anyone spoken directly Never anyone rekindled passion craft started reading Bird Bird yesterday haven’t finished yet excitement sat write morning poured forth manifesto outlined see began year promise would make writing career I’ve writing decade never took seriously enough bravely say “I’m writer” asked didn’t know exactly careerwise writing would entail figured majority writing would published online various platform I’d keep writing I’d possibly dabble side project book two Threequarters year gone although haven’t made living per se I’ve grown addicted call viral slot machine It’s fun game refresh browser app see coin mean view come tumbling view notification praise comment highlight may seem harmless even poor analogy slot machine I’m frivolously throwing money away right That’s true sense act playing viral slot machine cost something time attention Bird Bird slapped across face write hope going viral writing’s sake Somewhere year I’ve lost Could get back joy writing writing’s sake Could get back waking early member household sitting desk alone thought groggy haze early morning confusion string together word coherent order Could pleasure writing derive writing chance thing wrote “broke internet” much feel instantly validated inside Yes believe that’s reason manifesto Anne Lamott spoken I’m speaking dear writer speaking know blessed cursed craft blessed gatekeeper old long gone power interwebs created meritocracy voice want heard heard Yet cursed acquire people’s time attention must play game game one understands game without rule game feel like you’re losing hit big hit big want You’ve tasted virality it’s sweet sour like stale bag Sour Patch Kids never feel like enough call arm fellow online writer must stop playing game must take back craft must find joy pleasure act writing downstream effect may incur must write Here’s Online Writer’s ManifestoTags Creativity Writing Tips Self Work Writing
3,749
Boosting a Fashion Retailer’s Sales Margins by 4.5 M Euro
Boosting a Fashion Retailer’s Sales Margins by 4.5 M Euro See how an AI-based pricing engine improved sales margins for one of the top Norwegian textile companies. During one seasonal sale. With global e-retail revenues projected to grow to 6.54 trillion US dollars in 2022, the eCommerce sector was already booming. Now, with countries remaining under lockdown and many people’s lives turning upside down along with their past routines, the current crisis reinforced, once again, the immense potential of e-retail. In the Effects of the COVID-19 Outbreak on Fashion, Apparel and Accessory Ecommerce report, Jake Chatt, head of brand marketing at Nosto, stated that: highlighting or showcasing products and collections that are more relevant to people’s new at-home lifestyles can alleviate the stress of trying to find new items that they didn’t think they’d be looking for two weeks ago. Fashion e-retail needs to adopt an effective strategy to match people’s changing needs in real time now more than ever. However, managing customer-targeted campaigns with current stock status is a complex operation, especially for the big players out there. See how we approached the task for the top fashion retail company in Norway last December. A challenge to optimize sales for a big retail operation Varner Gruppen is one of Northern Europe’s biggest fashion retailers with almost 1400 stores, mainly in Scandinavia. Under one roof, they unite brands like Dressman, Bik Bok, Carlings, Cubus and others. Varner needed to track and adjust all items’ prices as well as run campaigns based on their current stock status. To meet the objective, we created an AI-based dynamic pricing and campaign engine. The goal was to optimize sales and, on the other end, to provide customers with the most personalized experience possible. Our main focus was to maintain maximum functionality and efficiency of the tool, especially considering the extent of the project. The engine had to be fast and responsive while handling databases with more than 1 M items. Flexible and effective custom-made Markdown feature The Pricing and Campaign engine was to replace the old functionality based on third-party apps. Now, with the fully custom-made Markdown feature, Varner is able to easily optimize the tool to their current needs as well as seamlessly build additional features. We were able to build a stable application that, in just a few months, increased Varner’s revenue by 4.5 M EUR, demonstrating a proven impact on sales and campaign optimization. See what our client has to say EL Passion enabled us to achieve our business goals with their solution; we managed to grow our sales on old products by more than 16% (>4.5M EUR) during one seasonal sale alone. Andreas Gallefoss, Product Manager at Varner Gruppen The solution will continue to have a huge impact on Varner’s long-term margins. They are knowledgeable experts willing to take ownership of the project, and they delivered a quality solution ready for implementation. A complex ecosystem of tools The whole campaign optimization platform, also built by EL Passion, along with the engine itself, is integrated with numerous other Varner internal tools. On the user’s end, the engine provides customers with better-targeted insights on the ongoing campaigns and will offer highly personalized prices in the near future. Results? A campaigning platform with test coverage above 95% on both backend and frontend. Synchronization with an AI-based price optimization engine.
Export of extensive, highly-configurable price lists to all offline stores. The tech stack behind the project: 🤖 Node.js (Nest.js), TypeScript, React.js, Cloud SQL, Elasticsearch, Google Cloud Platform
https://medium.com/elpassion/boosting-a-fashion-retailers-sales-margins-by-4-5-m-euro-dac91f4bd724
['El Passion']
2020-04-17 09:50:21.567000+00:00
['Development', 'AI', 'Retail', 'Business', 'Ecommerce']
Title Boosting Fashion Retailer’s Sales Margins 45 EuroContent Boosting Fashion Retailer’s Sales Margins 45 Euro See AIbased pricing engine improved sale margin one top Norwegian textile company one seasonal sale global eretail revenue projected grow 654 trillion US dollar 2022 eCommerce sector already booming country remaining lockdown many people’s life turning upside along past routine current crisis reinforced immense potential eretail Effects COVID19 Outbreak Fashion Apparel Accesory Ecommerce report Jake Chatt head brand marketing Nosto stated highlighting showcasing product collection relevant people’s new athome lifestyle alleviate stress trying find new item didn’t think they’d looking two week ago Fashion eretail need adopt effective strategy match people’s changing need realtime ever However managing customertargeted campaign current stock status complex operation especially big player See approached task top fashion retail company Norway last December challenge optimize sale big retail operation Varner Gruppen one Northern Europe’s biggest fashion retailer almost 1400 store mainly Scandinavia one roof unite brand like Dressman Bik Bok Carlings Cubus others Varner needed track adjust items’ price well run campaign based current stock status meet objective created AIbased dynamic pricing campaign engine goal optimize sale end provide customer personalized experience possible main focus maintain maximum functionality efficiency tool especially considering extent project engine fast responsive handling database 1 item Flexible effective custommade Markdown feature Pricing Campaign engine replace old functionality based thirdparty apps fully custommade Markdown feature Varner able easily optimize tool current need well seamlessly build additional feature able build stable application month increased Varner’s revenue 45 EUR demonstrating proven impact sale campaign optimization See client say EL Passion enabled u achieve business goal solution managed grew sale old product 16 45M EUR one seasonal sale alone Andreas Gallefoss Product Manager Varner Gruppen solution continue huge impact Varner’s long term margin knowledgeable expert willing take ownership project delivered quality solution ready implementation complex ecosystem tool whole campaign optimization platform also built EL Passion along engine integrated numerous Varner internal tool user’s end engine provides customer bettertargeted insight ongoing campaign offer highly personalized price nearest future Results campaigning platform test coverage 95 backend frontend Synchronization AIbased price optimization engine Export extensive highlyconfigurable price list offline store tech stack behind project 🤖 Nodejs Nestjs Typescript Reactjs Cloud SQL Elasticsearch Google Cloud PlatformTags Development AI Retail Business Ecommerce
3,750
We Need To Talk About This M1 Mac Mini: My First Impressions
The new Macs with M1 processors are making headlines in the technology press, and with good reason: Apple has surprised just about everyone with the bet materialized in its new M1 chips, and among the new machines, the most talked about are the MacBook Air and MacBook Pro. Energy efficiency, mobility performance, battery life… there are many points to consider in a laptop, and that is why those machines have been placed at the forefront of much of the coverage. We have already seen the transparency and simplicity with which all the applications adapt in our first contact with the laptops, so how is this Mac mini different? This is the first desktop with an Apple Silicon chip, one to which we have to connect a monitor, speakers, and other accessories separately. The Mac mini’s box and its unboxing leave no room for doubt: as with the laptops, Apple doesn’t label its switch to proprietary chips at all. In fact, we don’t even get memory and SSD storage labels; if we want to read the details of the machine, we have to look for the fine print. It details that we have a Mac mini “with 8 CPUs, 8 GPUs, 256 GB of storage, and 16 GB of RAM.” Nothing else. Connecting all the accessories has not been a problem for me. The initial setup was surprisingly fast, taking less than five minutes from when I first turned on the Mac mini until the macOS Big Sur desktop appeared. The only possible bump in the road with this Mac mini is that we need a wired keyboard to do the initial configuration, something I was able to solve easily with my USB mechanical keyboard. By default, macOS applies the retina effect at 4K resolution, turning it into a 1080p monitor. Personally, I have preferred to scale that resolution somewhere between 1080p (too big for 27 inches) and the native 4K resolution (too small): I have kept the 2560x1440p resolution with which I already worked on the 27 inches of my iMac, and thanks to the 4K resolution I get anti-aliasing that improves (and quite a lot) the general quality of the image. With general use of the system, I have noticed, and I say this without hesitation, a noticeable increase in overall system responsiveness. Intel applications run without us even realizing that they are emulated under the Rosetta layer, and applications already compiled for the M1 chip launch instantly, with a snap of the fingers. It does not matter what application we are talking about, whether it is Twitter or Pixelmator Pro: both start so fast that it is absurd to time them. I am not one of those who is always going to demand maximum power from this chip, but it is clear to me that this is a leap in performance like I have rarely experienced. I’ll break down the GeekBench results. GeekBench Results Mac Mini M1 Chip 2020. Source: GeekBench In Geekbench we have slightly better results than the MacBook Air and MacBook Pro, probably thanks to the ventilation that the device has. Although I have to say that I have not heard absolutely any noise from that fan during the tests, the Mac mini has endured them without breaking a sweat. The only effect I have noticed has been that the computer has warmed slightly in its rear area, and only very little. During the rest of the activity, such as while writing this article, the computer has stayed cool. In the absence of more time working with it, and while we wait for the new iMacs, I do not hesitate for a second to say that this Mac mini is the almost-perfect desktop for any general user who works at a table many hours a day. 
It has power to spare even for those who dare to edit photos and video, so we could even recommend it to small professionals. The only question I have left is: if this Mac mini is an entry model, what does the future hold? What will Macs be like with chips that prioritize performance over efficiency? The transition to Apple Silicon is just the beginning, and the M1 chip is just a glimpse into the future.
https://medium.com/macoclock/we-need-to-talk-about-this-m1-mac-mini-my-first-impressions-a2eb05780ca6
[]
2020-11-27 06:13:31.001000+00:00
['Mac', 'SEO', 'Technology', 'Future', 'Apple']
Title Need Talk M1 Mac Mini First ImpressionsContent new Macs M1 processor making headline technology press good reason Apple surprised local stranger bet materialized new M1 chip among often talked MacBook Air MacBook Pro Energy efficiency mobility performance battery … many point consider laptop placed forefront many medium already seen transparency simplicity adapting application first contact laptop difference Mac mini facing first desktop Apple Silicon chip already connect monitor speaker accessory separately Mac mini’s box unboxing leave room doubt laptop Apple doesn’t label switch proprietary chip fact don’t even memory SSD storage label want read detail machine look fine print detail Mac mini “with 8 CPUs 8 GPUs 256 GB storage 16 GB RAM” Nothing else Connecting accessory problem initial setup done surprisingly fast taking le five minute first turned Mac mini macOS Big Sur desktop appeared possible bump find Mac mini need wired keyboard able initial configuration something able solve easily USB mechanical keyboard default macOS applies retina effect 4K resolution turning 1080p monitor Personally preferred scale resolution somewhere 1080p big 27 inch native 4K resolution small kept 2560x1440p resolution already worked 27 inch iMac Thanks 4K resolution get antialiasing improves quite lot general quality image general use system noticed say without hesitation noticeable increase overall system Intel application run without u even realizing emulated Rosetta layer application already compiled M1 chip launch instantly snap finger matter application talking whether Twitter Pixelmator Pro start fast absurd time one going always demand maximum power chip clear made leap performance rarely experienced I’ll break GeekBench result GeekBench Results Mac Mini M1 Chip 2020 Source GeekBench Geekbench slightly better result MacBook Air MacBook Pro probably thanks ventilation device Although say heard absolutely noise fan test Mac mini endured without messing effect noticed computer warmed slightly rear area little rest activity writing article computer cold absence working time wait new iMac hesitate second say Mac mini almostperfect desktop general user work table many hour day envelope power even dare edit photo video could even recommend small professional question left Mac mini entry model future hold Macs like chip prioritize performance efficiency transition Apple Silicon beginning M1 chip glimpse future Read Medium StoriesTags Mac SEO Technology Future Apple
3,751
Set up TensorFlow with Docker + GPU in Minutes
Set up TensorFlow with Docker + GPU in Minutes Along with Jupyter and OpenCV Docker is the best platform to easily install Tensorflow with a GPU. This tutorial aims to demonstrate this and test it on a real-time object recognition application. Docker Image for Tensorflow with GPU Docker is a tool which allows us to pull predefined images. The image we will pull contains TensorFlow and nvidia tools as well as OpenCV. The idea is to package all the necessary tools for image processing. With that, we want to be able to run any image processing algorithm within minutes. First of all, we need to install Docker. > curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - > sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > sudo apt-get update > apt-cache policy docker-ce > sudo apt-get install -y docker-ce > sudo systemctl status docker After that, we will need to install nvidia-docker if we want to use a GPU: > wget https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb > sudo dpkg -i nvidia-docker*.deb At some point, this installation may fail if nvidia-modprobe is not installed; in that case you can try to run (GPU only): > sudo apt-get install nvidia-modprobe > sudo nvidia-docker-plugin & Eventually, you can run this command to test your installation. Hopefully, you will get the following output (GPU only): > sudo nvidia-docker run --rm nvidia/cuda nvidia-smi Result of nvidia-smi Fetch Image and Launch Jupyter You probably are familiar with Jupyter Notebook. Jupyter Notebook documents are both human-readable documents containing the analysis description and the results (figures, tables, etc.) as well as executable documents which can be run to perform data analysis. Jupyter Notebook can also run distributed algorithms with a GPU. To run a Jupyter notebook with TensorFlow powered by GPU and OpenCV, launch: > sudo nvidia-docker run --rm --name tf1 -p 8888:8888 -p 6006:6006 redaboumahdi/image_processing:gpu jupyter notebook --allow-root If you just want to run a Jupyter notebook with TensorFlow powered by CPU and OpenCV, you can run the following command: > sudo docker run --rm --name tf1 -p 8888:8888 -p 6006:6006 redaboumahdi/image_processing:cpu jupyter notebook --allow-root You will get the following result out of your terminal. Then you can navigate to localhost on port 8888; for me, the link looks like this: http://localhost:8888/ You will need to paste your token to identify yourself and access your Jupyter notebooks: 3299304f3cdd149fe0d68ce0a9cb204bfb80c7d4edc42687 And eventually, you will get the following result. You can therefore test your installation by running the Jupyter notebooks. The first link is a hello TensorFlow notebook to get more familiar with this tool. TensorFlow is an open-source software library for dataflow programming across a range of tasks. It is principally used to build deep neural networks. The third link gives an example of using TensorFlow to build a simple fully connected neural network. You can find here a TensorFlow implementation of a convolutional neural network. I highly recommend using a GPU to train CNN / RNN / LSTM networks. Real-Time Object Recognition Now it is time to test our configuration and spend some time with our machine learning algorithms. The following code helps us track objects over frames with our webcam. It is a sample of code taken from the internet; you can find the GitHub repository at the end of the article. 
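To make the demo easier to follow, here is a minimal, self-contained sketch of what such a webcam detection loop generally looks like. It is not the actual code from the repository mentioned above: the detect() function is only a placeholder standing in for the real TensorFlow object detector, and the loop is meant to illustrate the OpenCV capture, annotate and display cycle.

import cv2

def detect(frame):
    # Placeholder for the real TensorFlow detector used in the repository:
    # it would return a list of (label, score, (x, y, w, h)) tuples.
    return []

cap = cv2.VideoCapture(0)                      # open the default webcam
while True:
    ok, frame = cap.read()                     # grab one frame
    if not ok:
        break
    for label, score, (x, y, w, h) in detect(frame):
        # draw a box and a label for every detected object
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, "%s: %.2f" % (label, score), (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
        break
cap.release()
cv2.destroyAllWindows()

The repository's own script wires a trained detection model into this loop; the shape of the loop itself is the part worth keeping in mind for the steps below.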
First of all, we need to open access to the X server for our Docker image. There are different ways of doing so. The first one opens access to your X server to anyone. Other methods are described in the links at the end of the article. > xhost +local:root Then we will open a bash shell in our Docker image using this command: > sudo docker run -p 8888:8888 --device /dev/video0 --env="DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -it image_processing bash We will need to clone the GitHub repository, which is a real-time object detector: > git clone https://github.com/datitran/object_detector_app.git && cd object_detector_app/ Finally, you can launch the python code: > python object_detection_app.py The code that we are using relies on OpenCV. It is known as one of the most used libraries for image processing and is available for C++ as well as Python. You should see the following output: OpenCV will open your webcam and render a video. OpenCV will also find any object in the frame and print the label of the predicted object. Conclusion I showed how one can use Docker to get your computer ready for image processing. This image contains OpenCV and TensorFlow with either GPU or CPU support. We tested our installation through a real-time object detector. I hope it convinced you that most of what you need to process images is contained in this Docker image. Thank you for following my tutorial. Please don’t hesitate to send me any feedback! Useful Links If you want to be notified when the next article comes out, feel free to click on follow just below. Did you like this article? Don’t forget to hit the Follow button!
https://medium.com/sicara/tensorflow-gpu-opencv-jupyter-docker-10705b6cd1d
['Reda Boumahdi']
2018-04-15 20:19:24.232000+00:00
['TensorFlow', 'Docker', 'Data Engineering', 'Computer Vision', 'Gpu']
Title Set TensorFlow Docker GPU MinutesContent Set TensorFlow Docker GPU Minutes Along Jupyter OpenCV Docker best platform easily install Tensorflow GPU tutorial aim demonstrate test realtime object recognition application Docker Image Tensorflow GPU Docker tool allows u pull predefined image image pull contains TensorFlow nvidia tool well OpenCV idea package necessary tool image processing want able run image processing algorithm within minute First need install Docker curl fsSL httpsdownloaddockercomlinuxubuntugpg sudo aptkey add sudo addaptrepository deb archamd64 httpsdownloaddockercomlinuxubuntu lsbrelease c stable sudo aptget update aptcache policy dockerce sudo aptget install dockerce sudo systemctl status docker need install nvidiadocker want use GPU wget httpsgithubcomNVIDIAnvidiadockerreleasesdownloadv101nvidiadocker1011amd64deb sudo dpkg nvidiadockerdeb point installation may fail nvidiamodprobe installed try run GPU sudo aptget install nvidiamodprobe sudo nvidiadockerplugin Eventually run command test installation Hopefully get following output GPU sudo nvidiadocker run rm nvidiacuda nvidiasmi Result nvidiasmi Fetch Image Launch Jupyter probably familiar Jupyter Notebook Jupyter Notebook document humanreadable document containing analysis description result figure table etc well executable document run perform data analysis Jupyter Notebook also run distributed algorithm GPU run jupyter notebook TensorFlow powered GPU OpenCv launch sudo nvidiadocker run rm name tf1 p 88888888 p 60066006 redaboumahdiimageprocessinggpu jupyter notebook allowroot want run jupyter notebook TensorFlow powered CPU OpenCV run following command sudo docker run rm name tf1 p 88888888 p 60066006 redaboumahdiimageprocessingcpu jupyter notebook allowroot get following result terminal navigate localhost use port 8888 link look like httplocalhost8888 need paste token identify access jupyter notebook 3299304f3cdd149fe0d68ce0a9cb204bfb80c7d4edc42687 eventually get following result therefore test installation running jupyter notebook first link hello TensorFlow notebook get familiar tool TensorFlow opensource software library dataflow programming across range task principally used build deep neural network third link give example using TensorFlow build simple fully connected neural network find TensorFlow implementation convolutionnal neural network highly recommand using GPU train CNN RNN LSTM network RealTime Object Recognition time test configuration spend time machine learning algorithm following code help u track object frame webcam sample code taken internet find github repository end article First need open access xserver docker image different way first one open access xserver anyone method described link end article xhost localroot bash Docker image using command sudo docker run p 88888888 device devvideo0 envDISPLAY volumetmpX11unixtmpX11unixrw imageprocessing bash need clone github repository realtime object detector git clone httpsgithubcomdatitranobjectdetectorappgit cd objectdetectorapp Finally launch python code python objectdetectionapppy code using us OpenCV know one used library image processing available C well Python see following output OpenCV open webcam render video OpenCV also find object frame print label predicted object Conclusion showed one use Docker get computer ready image processing image contains OpenCV TensorFlow either GPU CPU tested installation realtime object detector hope convinced need process image contained Docker image Thank following tutorial Please don’t hesitate 
send feedback Useful Links want notified next article come feel free click follow like article Don’t forget hit Follow buttonTags TensorFlow Docker Data Engineering Computer Vision Gpu
3,752
Deploy Machine Learning Models On AWS Lambda
2 — AWS Lambdas: Before we start digging into using this service, let us first define it: AWS Lambda is a compute service that lets you run code without provisioning or managing servers. So what does this mean? In simple words, it means that whenever you have a ready-to-deploy machine learning model, AWS Lambda will act as the server where your model will be deployed; all you have to do is give it the code + the dependencies, and that’s it, it is like pushing your code to a repo. So let me show you how to do that: First, you are going to need the serverless framework — an MIT open-source project — which will be our tool to build our app, so let us start. The steps we will follow are these: Install the serverless framework Create a bucket in AWS Push our trained model to the created bucket Build main.py, the python file that will call our model and do predictions Build the serverless.yml file, in which we will tell the serverless framework what to do (create the lambda function) Test what we have built locally (generating predictions with our model using the serverless framework) Deploy to AWS. Test the deployed app. These will be the steps we are going to follow in this tutorial in order to deploy our trained model on AWS Lambda. So let us start: Important Remark: For the rest of the tutorial, make sure you are always in the directory where the files are: requirements.txt, main.py and saved_adult_model.txt. And since I mentioned it, this is our requirements.txt: lightgbm==2.2.3 numpy==1.17.0 scipy==1.3.0 scikit-learn==0.21.3 2.1 — Install The Serverless Framework: To install the serverless framework on Ubuntu, first you have to install npm. In order to do that, you can run the following commands in your terminal: curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash - sudo apt-get install nodejs The above commands will install nodejs and also npm. Next, you can check that everything was installed correctly by running: $ node -v which will return the version of nodejs, and npm -v which will return the version of npm. Now that we have installed npm, let us install serverless by running the following command: npm install -g serverless You can check that everything is installed successfully by running: serverless If you reached this point with no errors, then congrats, you have serverless installed and you are all set. Let us move on to the next step. 2.2 — Create A Bucket In AWS: Our next step is to push the model we have trained to an AWS bucket, and for that we first need to create a bucket, so let us do that: Creating a bucket on AWS can be done from the command line using the following code: aws s3api create-bucket --acl private --bucket deploy-lgbm --create-bucket-configuration LocationConstraint=eu-west-1 The above command will create a bucket called deploy-lgbm in private mode in the eu-west-1 region. 2.3 — Push Our Trained Model To The Created Bucket: So, now that our bucket is ready, let us push our trained model to it by running the following command: aws s3 cp saved_adult_model.txt s3://deploy-lgbm/model/saved_model.txt Perfect, now let us move on to the next step, building our main.py python file which we will use to call our model and make predictions. 2.4 — Build The Main.py File: When it comes to deploying on AWS Lambda, the main function of your code is a function called lambda_handler (or any other name we choose to give it, although the standard one is lambda_handler). Now, why is this function the important one? 
That function is the one AWS Lambda will execute each time you invoke it (interact with it). Thus, that function is the one that will receive your input, make the prediction, and return the output. If you have ever worked with AWS Lambda from Cloud9, you will notice that when you create a new lambda function and import it, the standard definition of the lambda_function is this: def lambda_handler(event,context): return {'StatusCode':200, 'body':'Hello there, this is Lambda'} As you can see, the lambda function expects 2 inputs — an event, and a context: The event will contain the information that we will send to the lambda, which will in this case be the samples we want to predict (they will be in a JSON format). As for the context, it usually contains information about the invocation, function, and execution environment. For this tutorial we won’t be using it. So let us summarize what we are going to do in this section: First we need to get our trained model from the bucket, initialize our lightgbm model and return it, so we will build a function for that. Then we are going to make predictions with our model, so we are going to build a function for that too. And finally, inside our lambda_handler function we will put all these things together, which means: receive the event, extract the data from the event, get the model, make predictions, and then return the predictions. So simple, right? Now let us build our file (a consolidated sketch of the full main.py is given at the end of this section): First we will build the get_model() function, which will download the trained lightgbm model, then initialize it and return it. In that function, we first create an access to our bucket deploy-lgbm using boto3, and then we use the method download_file to download our saved_model.txt and save it in /tmp/test_model.txt (recall that we saved the model in the bucket using the key model/saved_model.txt). All clear, right? Let us move on then. Now we will build the predict function, the function which will get the model and a data sample, do a prediction and then return it. Let me explain what that function does: it gets the event, extracts our data from the event, and gives the extracted data to the model to make a prediction. So simple, right? Important remarks: For best practice, always use JSON formats to pass your data in the event. In our case, things are simple: we extract the data and pass it to the model directly. In most other cases, there will be some processing on the data before you pass it to the model, so you will need another function for that, which you will call before passing the data to the model. Always split your process into multiple functions; we could have put everything in the lambda_handler function, but our code wouldn’t be as clean anymore. So always use a function when you can. Now the last step is to define our lambda handler function, so let us do that. As you can see below, it is a very basic function, and it will grow more complex in a real-world project. What it does is clear: get the event, send it to the predict function to get the predictions, and then return the output in the standard format (you should always use that format): a dict with a StatusCode, and the results in a body. So, this is the end of this section; let us move on to the next one: building the serverless.yml file. 
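For reference, here is a minimal sketch of what the complete main.py described in this section might look like. This is a reconstruction rather than the author's exact code: the bucket name deploy-lgbm and the key model/saved_model.txt come from the earlier steps, while the boto3 client call and the exact shape of the event body are assumptions made for illustration.

# main.py -- hedged sketch of the three functions described above
import json
import boto3
import numpy as np
import lightgbm as lgb

BUCKET = "deploy-lgbm"                 # bucket created earlier in the tutorial
KEY = "model/saved_model.txt"          # key used when we pushed the model

def get_model():
    # Download the saved LightGBM model from the bucket and load it.
    s3 = boto3.client("s3")
    s3.download_file(BUCKET, KEY, "/tmp/test_model.txt")
    return lgb.Booster(model_file="/tmp/test_model.txt")

def predict(event, model):
    # Extract the samples from the event's 'body' field. With the local invoke
    # used later in this tutorial the body is already a list; through API Gateway
    # it arrives as a JSON string, so both cases are handled here (an assumption).
    body = event["body"]
    if isinstance(body, str):
        body = json.loads(body)
    samples = body["body"] if isinstance(body, dict) else body
    return model.predict(np.array(samples)).tolist()

def lambda_handler(event, context):
    model = get_model()
    predictions = predict(event, model)
    return {"StatusCode": 200, "body": predictions}

Keeping get_model, predict and lambda_handler separate mirrors the advice above: the handler stays a thin entry point while the download and prediction logic remain easy to test on their own.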
2.5 — Build The Serverless.yml File: As we mentioned at the start of this article, the Serverless framework will be our tool to communicate with AWS and create a lambda function which will act as the server that hosts our trained model. For that we need to tell serverless all the information that it needs: Who is the provider (for example, AWS or Google)? What is the language used? What do we want to create? What roles should it have? etc. All of these instructions we will pass in the serverless.yml file, so let us start building it: First, we will give our service a name, let us say: test-deploy service: test-deploy The next section in our file will be about our provider, which in this case is AWS (the provider block appears in the full serverless.yml file shown at the end of this section). So, what do we put in those lines? Let me explain: We set the name of our provider, which is aws, the language used (python3.6), the region where our lambda is going to be deployed, and the deployment bucket that serverless will use to put the package. Then there are the iamRoleStatements, which mean this: Our lambda function is going to download the trained model from a bucket in AWS, and by default, it does not have the permission to do that. So we need to give it that permission, and this is why we created a role, so we can give our lambda the permissions it needs (in this case just access to a bucket; in other cases it could be more, and you can consult the AWS documentation for a detailed explanation on the matter). And to give another example about roles, let us say that you need to invoke another lambda from this lambda; in this case this lambda needs permission for that, so you have to add it in the iamRoleStatements. Important remarks: The bucket where we put our model and the bucket used by lambda should be in the same region (for this tutorial we used eu-west-1); if they are not in the same region, it won’t work. The next section in our serverless.yml file will be about the function we are going to create (again, shown in the full file below). First we define some very basic things, like the name and description. We define our handler: recall what we said about lambda_handler; we mentioned that this function will be the one doing all the work. Now this is the point where you tell serverless which function is your lambda_handler; in this case we have defined it with the name lambda_handler in our main.py file, so we put handler: main.lambda_handler. As we said earlier, we can give it whatever name we want; for example, we can name that function hello, but then we have to put in the handler: main.hello. We define our event: how are we going to communicate with our lambda function, or in other words, how are we going to trigger (invoke) it? For this tutorial we are going to use http events, which means invoking the lambda function by calling a URL; the method will be a POST and the resource will be /predictadult. The next section is about plugins: what does that mean? 
Let me explain: so far we have instructed serverless about who our provider is and what our function is. Now, for our code to work, we need the packages to be installed, and we have already put them in a requirements.txt file, so we need to tell serverless to install those requirements, and for that we will use a plugin called serverless-python-requirements. We will add it to our serverless.yml file like this: plugins: - serverless-python-requirements The last thing we are going to add to our file is an optimization, but why do we need optimizations? Let me explain: Lambda has some limits on the maximum size of the package to be uploaded, and the maximum unzipped size allowed is 250 MB. Sometimes we exceed this amount, and to reduce it we can remove some garbage that exists in our packages, which will save us some megabytes. To do this, we instruct serverless by adding the following lines to our serverless.yml file: custom: pythonRequirements: slim: true And that is it, the full serverless.yml file will look like this: service: test-deploy plugins: - serverless-python-requirements provider: name: aws runtime: python3.6 region: eu-west-1 deploymentBucket: name: deploy-tenserflow iamRoleStatements: - Effect: Allow Action: - s3:GetObject Resource: - "arn:aws:s3:::deploy-lgbm/*" custom: pythonRequirements: slim: true functions: lgbm-lambda: name: lgbm-lambda-function description: deploy trained lightgbm on aws lambda using serverless handler: main.lambda_handler events: - http: POST /predictadult Cool, now let us move to the next chapter: testing what we have built locally. 2.6 — Test What We Have Built Locally: So it is testing time. First, your local directory should contain the files mentioned earlier: requirements.txt, main.py, saved_adult_model.txt and serverless.yml. Now that our model is ready, as well as our serverless.yml file, let us invoke our serverless function locally and test that everything is working by running this in the command line: serverless invoke local -f lgbm-lambda -d '{"body":[[3.900e+01, 7.000e+00, 1.300e+01, 4.000e+00, 1.000e+00, 0.000e+00,4.000e+00, 1.000e+00, 2.174e+03, 0.000e+00, 4.000e+01, 3.900e+01]]}' If you followed the steps correctly, you should get an output from this command. In this case the output is: { “StatusCode”: 200, “body”: 0.0687186340046925 } As you can see, we chose the option invoke local, which means we are using our computer, not the cloud. We also passed only 1 sample through the ‘body’ field (those values are the feature values; not very elegant, right?). So, it seems everything is working locally; now let us deploy our lambda. 2.7 — Deploy To AWS: So, it is deployment time. Once everything is set and working, deploying a lambda is as easy as running this line of command: serverless deploy And that’s it, you will start seeing some log messages about the package getting pushed, and you will also see the size of your zipped package. 2.8 — Test The Deployed Model: Once the deploy command is executed with no errors, and your lambda is deployed, you will get your endpoint (the URL) which we will use to make predictions. This URL will be something like this: https://xxx/predictadult To test our prediction, we send a POST request with the same ‘body’ payload to that endpoint (a sketch of such a request is shown below). And that’s it, congrats, you have deployed your model in an AWS Lambda function and it can now serve you. If you faced any error while re-running the above tutorial, you can reach out to me, my contact info is below, and I will be very happy to help. 
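For illustration, here is a hedged example of such a request using the Python requests library; the endpoint URL is the placeholder from above (your own deploy output will print the real one), and the sample is the same one used for the local invoke.

import requests

url = "https://xxx/predictadult"   # placeholder; use the endpoint printed by 'serverless deploy'
sample = [[3.900e+01, 7.000e+00, 1.300e+01, 4.000e+00, 1.000e+00, 0.000e+00,
           4.000e+00, 1.000e+00, 2.174e+03, 0.000e+00, 4.000e+01, 3.900e+01]]
response = requests.post(url, json={"body": sample})
print(response.json())             # a dict with a StatusCode and the prediction in the body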
I hope you found this tutorial insightful and practical, and I hope you feel like a better data scientist after reading all these words. Until next time; meanwhile, if you have any questions for me, or a suggestion for a future tutorial, my contact info is below. About Me I am the Principal Data Scientist @ Clever Ecommerce Inc, where we help businesses create and manage their Google Ads campaigns with a powerful technology based on Artificial Intelligence. You can reach out to me on LinkedIn or by gmail: [email protected].
https://medium.com/analytics-vidhya/deploy-machine-learning-models-on-aws-lambda-5969b11616bf
['Oussama Errabia']
2020-03-11 10:46:41.379000+00:00
['AI', 'Data Science', 'Deep Learning', 'Machine Learning', 'AWS']
Title Deploy Machine Learning Models AWS LambdaContent 2 — AWS Lambdas start digging using service let u first define AWS Lambda compute service let run code without provisioning managing server mean simple word mean whenever readytodeploy machine learning model AWS lambda act server model deployed give code dependency that’s like pushing code repo let show First going need serverless framework — MIT opensource project — tool build App let u start step follow Install serverless framework Create bucket AWS Push trained model created bucket Build mainpy python file call model prediction Build serverlessyml file tell serverless framwork create lambda function Test built locally generating prediction model using serverless framework Deploy AWS Test deployed app step going follow tutorial order deploy trained model AWS lambda let u start Important Remark rest tutorial make sure always directory file requirementstxt mainpy savedadultmodeltxt since mentioned requirementstxt lightgbm223 numpy1170 scipy130 scikitlearn0213 21 — Install Serverless Framework Install serverless framwork ubuntu first install npm order run following command terminal curl sL httpsdebnodesourcecomsetup10x sudo E bash sudo aptget install nodejs command install nodejs also npm Next check everything installed correctly running node v return version nodejs npm v return version npm installed npm let u install serverless running following command npm install g serverless check everything installed successfully running serverless reached point error congrats serverless installed set Let’s u move next step 22— Create Bucket AWS next step push model trained AWS bucket need first create bucket let u Creating bucket AWS done command line using following code aws s3api createbucket acl private bucket deploylgbm createbucketconfiguration LocationConstrainteuwest1 command create bucket called deploylgbm private mode euwest1 location 23 — Push Trained Model Created Bucket bucket ready let u push trained model running following command aws s3 cp savedadultmodeltxt s3deploylgbmmodelsavedmodeltxt Perfect let u move next step building mainpy python file use call model make prediction 23 — Build Mainpy File come deploying AWS lambda main function code function called lambdahandler name choose give although standard one lambdahandler function important one function one AWS lambda execute time invoke interact itThus function one receive input make prediction return output ever worked AWS lambda cloud9 notice create new lambda function import standard definition lambdafunction def lambdahandlereventcontext return StatusCode200 bodyHello Lambda see lambda function expects 2 input — event context event contains information send lambda case sample want predict json format context usually contains information invocation function execution environment tutorial won’t using let u summarize gonna section First need get trained model bucket initialize lightgbm return build function going make prediction model going build function finally inside lambdahandler function put thing together mean receive event extract data event get model make prediction return prediction simple right let u build file First build getmodel function download trained lightgbm initialize model return Download saved model see first created access bucket deploylgbm using boto3 used method downloadfile download savedmodeltxt save tmptestmodeltxt recall saved model bucket using key modelsavedmodeltxt clear right Let u move build predict function function get model data sample 
prediction return predict function Let explain function function get event extract data event give extracted data model make prediction simple right Important remark best practice always use json format pas data event case thing sample extract data pas model directly case processing data pas model need another function call passing data model Always split process multiple function could put everything lambda function however code won’t beautiful anymore always use function last step define lambda handler function let u lambda handler see basic function grow complex real world project clear get event send predict function get prediction return output standard format always use format dict Statuscode result body end section let u move next building serverlessyml file 24 — Build Serverlessyml File mentioned start article Serverless framework tool communicate AWS create lambda function act server host trained model need tell serverless information need provider example AWS google language used want create role …etc instruction pas serverlessyml file let u start building First give service name let u say testdeploy service testdeploy Next section file provider case AWS instruction yml file look like line command Let explain set name provider aws language used python36 region lambda going deployed deployment bucket serverless use put package iamRoleStatements mean lambda function going download trained model bucket aws default permission need give permission created role give lambda permission need case access bucket case could consult aws documentation detailed explanation matter give example role let u say need invoke another lambda lambda case lambda need permission add iamRoleStatements Important remark bucket put model bucket used lambda region tutorial used euwest1 region won’t work next section serverlessyml file function going create see First define basic thing like name description define handler Recall said lambdafunction mentioned function one work point tell serverless lambdahandler function case defined name lambdahandler mainpy file put handler mainlambdahandler said earlier give ever name want like example name function hello put handler mainhello Recall said lambdafunction mentioned function one work point tell serverless lambdahandler function case defined name lambdahandler mainpy file put handler mainlambdahandler said earlier give ever name want like example name function hello put handler mainhello define event going communicate lambda function word going trigger invoke lambda function tutorial going use http event mean invoke lambda function call url POST resource predictadult Next Section Plugins mean Let explain far instructed serverless provider function code work need package installed already put requirementstxt file need tell serverless install requirement use Plugin called serverlesspythonrequirements add serverlessyml file like plugins serverlesspythonrequirements last thing going add file optimization thing need optimization Let explain Lambda function limitation maximum size package uploaded maximum unzipped file allowed size 250 MB Sometimes exceed amount reduce remove garbage exists package save u Megabytes instruct serverless adding following command serverlessyml file custom pythonRequirements slim true full serverlessyml file look like service testdeploy plugins serverlesspythonrequirements provider name aws runtime python36 region euwest1 deploymentBucket name deploytenserflow iamRoleStatements Effect Allow Action s3GetObject Resource 
arnawss3deplytenserflow custom pythonRequirements slim true function lgbmlambda name lgbmlambdafunction description deploy trained lightgbm aws lambda using serverless handler mainlambdahandler event http POST predictadult Cool let u move next chapter Testing built locally 25 — Test Built Locally testing time First local directory like model ready also serverlessyml file let u invoke serverless locally test everything working running command line serverless invoke local f lgbmlambda body3900e01 7000e00 1300e01 4000e00 1000e00 0000e004000e00 1000e00 2174e03 0000e00 4000e01 3900e01 followed step correctly get output command case output “StatusCode” 200 “body” 00687186340046925 see choose option invoke local mean using computer cloud also passed 1 sample ‘body’ field value feature value elegant right seems everything working locally let u deploy lambda 26 — Deploy AWS deployment time everything set working deploying lambda easy running line command serverless deploy that’s start seeing log message package getting pushed also see size zipped package 27 — Test Deployed Model deploy command executed error lambda deployed get end point url use make prediction url something like httpsxxxpredictadult test prediction run command that’s Congrats deployed model AWS lambda function serve faced error rerunning tutorial reach contact info Iwill happy help hope found tutorial insightful practical hope feeling better Data scientist reading word next time meanwhile question suggestion future tutorial contact info Principal Data Scientist Clever Ecommerce Inc help business Create manage Google Ads campaign powerful technology based Artificial Intelligence reach Linked gmail errabiaoussamagmailcomTags AI Data Science Deep Learning Machine Learning AWS
3,753
Earth: Our Cosmic Unicorn
To most of us, our world is simply the place where we live: you’re born, get an education and/or learn a trade, perhaps start a family of your own, pass on some of the knowledge and wisdom you have gained to others, grow old and eventually die. It’s an oversimplification, but that is the common experience of a human on planet Earth. Earth is just where we do “everything”. If you are very lucky, you have opportunities to actually travel around the Earth and visit continents other than the one you were born on, seeing the true vastness of the planet and the variety of its civilizations and biomes. You realize we are many, but we are also one…all of us together on a single sphere of rock, covered with a thin sheen of water, orbiting a massive ball of fire. For a long time, the view that humans (and Earth) were the centre of the cosmos ruled scientific and philosophic thought. Indeed, great minds like Aristotle and Ptolemy supported this model of the universe. Though a near-contemporary of those two, Aristarchus of Samos, had proposed a heliocentric view of the universe, his ideas didn’t receive enough support to stick. It took nearly 1800 years for the heliocentric model to become generally accepted, under the scientific leadership of Nicolaus Copernicus. The Copernican Revolution, as it is known, gained further support over the succeeding century through the work of Johannes Kepler and Tycho Brahe. Galileo’s telescopic observations of Jupiter’s moons definitely put a nail in the coffin of the geocentric model. Isaac Newton then carried forward with the heliocentric model to show that the Earth and other planets in the Solar System orbited the Sun. China ©NASA As telescopic engineering improved, our view of the local universe grew larger and larger. By 1750, Thomas Wright posited that the Milky Way was a tremendous body of stars all held together by gravity and turning about a galactic centre. To us then, the Milky Way was all there was — all we could observe — and so the Milky Way was the universe. It took until 1920, though, when the observations of incredibly faint and distant nebulae by Heber Doust Curtis led to the ultimate acceptance that the Andromeda Nebula (M31) was actually another galaxy. Optic technology continued to advance, and more and more galaxies were found throughout the 20th century. The first exoplanets were confirmed in 1992, discovered around pulsar PSR B1257+12. These were terrestrial-mass worlds. The next exoplanetary finding occurred in 1995, a gas giant orbiting 51 Pegasi. Since that time, the rate of discovery of exoplanets has accelerated to the point that we can now detect hundreds within the confines of a single project. In 2016, the Kepler space telescope documented 1,284 exoplanets during one such period, over 100 of which are 1.2x Earth-mass or smaller, and most likely rocky in nature. As of September 2018, the combined observatories of the world have detected 3845 exoplanets distributed across 2866 planetary systems, of which 636 are multiple-planet systems. 
Africa and Arabian Peninsula ©NASA These worlds are detected using various methods, including: measuring the radial velocity of the (potential) planet’s host star to get an idea of the planet’s mass by how it affects its star; transit photometry which sees a (potential) planet as it moves between our telescopes and its host star; reflection/emission modulations which might show us the heat energy of a (potential) planet; observation of tidal distortions of a host star caused by the gravity of a (potential) massive gas giant; gravitational microlensing in which two stars line up with each other in relation to our observational view from Earth and their gravity distortions act like a magnifying lens that can help us notice planets around one of them; and nearly a dozen other ways. There are currently 55 potentially habitable exoplanets out of the thousands of worlds we have thus far detected. These are classified into two categories by the Planetary Habitability Laboratory at Arecibo in Puerto Rico: Conservatively habitable worlds are “ more likely to have a rocky composition and maintain surface liquid water (i.e. 0.5 < Planet Radius ≤ 1.5 Earth radii or 0.1 < Planet Minimum Mass ≤ 5 Earth masses, and the planet is orbiting within the conservative habitable zone).” The optimistically habitable planets “are less likely to have a rocky composition or maintain surface liquid water (i.e. 1.5 < Planet Radius ≤ 2.5 Earth radii or 5 < Planet Minimum Mass ≤ 10 Earth masses, or the planet is orbiting within the optimistic habitable zone).” If there are so many potentially habitable exoplanets out there, what is it about Earth that makes it so special? Aside from us, that is? While the exoplanets we have found so far that exist within the confines of what we have deemed “potentially habitable” may indeed be rocky and orbit at just the right distance from their host stars to maintain liquid water and atmosphere, that doesn’t mean they are habitable, or possess the potential to support Earth life or any life, for that matter. They may not even be suitable candidates for terraforming. That is because the conditions that made and preserved Earth as a safe harbour for life are many and seem to have occurred at precisely the right times throughout the 4.543 billion year history of the planet. Costa Rica ©NASA The factors that allowed life to evolve steadily on Earth — “Goldilocks” factors — include the ones that we use to designate exoplanets as potentially habitable: like them, Earth orbits at just the right distance from the Sun to allow liquid H2O, and Earth formed with such a mass and composition that it became a rocky world as opposed to a gas giant. Beyond those primary characteristics, though, Earth possesses other traits that, for the most part, we are still unable to detect on exoplanets. Our molten, mostly iron core spins to create a magnetosphere around the planet that deflects excessive solar and cosmic radiation. Our single, relatively large Moon stabilizes our rotation, gives us a 24-hour day, and creates tides that scientists believe were a large driver of evolution. We have the ozone layer which adds another protective shield for life against UV light. We have two gas-giant worlds in the outer Solar System that have been pulling in a majority of asteroids and comets for billions of years, long before they make it into the inner Solar System to possibly impact Earth. 
California at night ©NASA We are located at the edge of the Orion spiral arm of our galaxy, far from the much denser, crowded centre of the Milky Way where asteroids, comets, stellar collisions and supernovae are much more common. The Late Heavy Bombardment, which pounded the Earth with comet impactors roughly 4 billion years ago, seeded our world with just the right amount of water ice to give us vast oceans. Our Sun is also quite stable for a star, and luckily isn’t part of a binary star system (which may account for up to 85% of all stars!), which would certainly offer difficulties in the form of gravitational pull from 2 stars and more asteroid activity. The Earth has also been remarkably consistent and stable for billions of years, from its atmospheric and chemical composition to its temperature variations. Of all the exoplanets discovered, Earth and its ilk can only exist within a rather narrow band of possibilities. All of these Goldilocks factors added up to a world that has remained a viable habitat for billions of years. Our mineral-rich oceans became a veritable Petri dish in which trillions of generations of single-celled life could mingle and evolve until two such forms merged in a symbiotic relationship that resulted in the first multicellular organism. From there, the diversity of life blossomed uncontrollably. That diversity would be one of the reasons life on Earth continued to survive through multiple mass extinction events: the Cretaceous–Paleogene extinction event — 65 million years ago, 75% species loss; the Triassic–Jurassic extinction event — 199 million to 214 million years ago, 70% species loss; the Permian–Triassic extinction event — 251 million years ago, 96% species loss; the Late Devonian extinction — 364 million years ago, 75% species loss; and the Ordovician–Silurian extinction events — 439 million years ago, 86% species loss. In a strange bit of irony, the earliest two of these great extinctions may have been caused rather directly by the power of evolution on Earth. It’s believed by many scientists that in both of these cases, an extreme amount of plant growth led first to the removal of too much CO2 from the atmosphere and a reverse greenhouse effect, and in the second great extinction to mega algae blooms that depleted the oceans of oxygen. The most recent 3 mass extinctions seem to have been caused by a supervolcano eruption and two massive asteroid impacts. There is a sixth mass extinction, generally agreed upon by most palaeontologists, that is currently happening: the “Holocene extinction event”. It is thought this extinction began at the end of the last ice age (roughly 12,000 years ago) and vastly accelerated with the rise of agriculture, large human civilizations and the Industrial Revolution. Data points to at least 7% of all Holocene-era species having already gone extinct directly due to human interaction with our world. Species come into being and go extinct naturally, of course, and this is known as the background rate of extinction. Scientists believe that humans have increased the occurrence of extinctions to possibly as high as 500–1000 times the background rate. Deforestation in Rondônia, Brazil, which covers an area nearly 80% the size of France ©NASA Reversing this trend needs to be a priority. As the most intelligent species on Earth, we should see ourselves as caretakers of a multi-billion year legacy. We should not, and must not, allow Earth to become a barren hunk of rock due to our inherent drives that often do more harm than good. 
We are smarter than that. But even if this most recent mass extinction event snowballs and becomes unfixable, it is likely that life will continue to thrive on Earth, whether it be beneath ice or in the ocean’s deepest corners. We need to keep in mind that it is always the creatures at the top of the food chain that die off first in any great extinction. And if (and when) Earth does become unlivable for us humans, we should be capable of finding and reaching exoplanets that might become a new home. Hopefully by then we will have become wiser. Thank you for reading and sharing.
https://medium.com/predict/earth-our-cosmic-unicorn-1ed788fb6fd
['A. S. Deller']
2020-11-16 13:23:10.491000+00:00
['Science', 'Space', 'Environment', 'Earth', 'Climate Change']
Title Earth Cosmic UnicornContent u world simply place live you’re born get education andor learn trade perhaps start family pas knowledge wisdom gained others grow old eventually die It’s oversimplification common experience human planet Earth Earth “everything” lucky opportunity actually travel around Earth visit continent one born seeing true vastness planet variety civilization biome realize many also one…all u together single sphere rock covered thin sheen water orbiting massive ball fire long time view human Earth centre cosmos ruled scientific philosophic thought Indeed great mind like Aristotle Ptolemy supported model universe Though nearcontemporary two Aristarchus Samos proposed heliocentric view universe idea didn’t receive enough support stick took nearly 1800 year heliocentric model become generally accepted scientific leadership Nicolaus Copernicus Copernican Revolution known gained support succeeding century work Johannes Kepler Tycho Brahe Galileo’s telescopic observation Jupiter’s moon definitely put nail coffin geocentric model Isaac Newton carried forward heliocentric model show Earth planet Solar System orbited Sun China ©NASA telescopic engineering improved view local universe grew larger larger 1750 Thomas Wright posited Milky Way tremendous body star held together gravity turning galactic centre u Milky Way — could observe — Milky Way universe took 1920 though observation incredibly faint distant nebula Heber Doust Curtis led ultimate acceptance Andromeda Nebula M31 actually another galaxy Optic technology continued advance galaxy found throughout 20th century first exoplanets confirmed 1992 discovered around pulsar PSR B125712 terrestrialmass world next exoplanetary finding occurred 1995 gas giant orbiting 51 Pegasi Since time rate discovery exoplanets accelerated point detect hundred within confines single project 2016 Kepler space telescope documented 1284 exoplanets one period 100 12x Earthmass smaller likely rocky nature September 2018 combined observatory world detected 3845 exoplanets distributed across 2866 planetary system 636 multipleplanet system Africa Arabian Peninsula ©NASA world detected using various method including measuring radial velocity potential planet’s host star get idea planet’s mass affect star transit photometry see potential planet move telescope host star reflectionemission modulation might show u heat energy potential planet observation tidal distortion host star caused gravity potential massive gas giant gravitational microlensing two star line relation observational view Earth gravity distortion act like magnifying lens help u notice planet around one nearly dozen way currently 55 potentially habitable exoplanets thousand world thus far detected classified two category Planetary Habitability Laboratory Arecibo Puerto Rico Conservatively habitable world “ likely rocky composition maintain surface liquid water ie 05 Planet Radius ≤ 15 Earth radius 01 Planet Minimum Mass ≤ 5 Earth mass planet orbiting within conservative habitable zone” optimistically habitable planet “are le likely rocky composition maintain surface liquid water ie 15 Planet Radius ≤ 25 Earth radius 5 Planet Minimum Mass ≤ 10 Earth mass planet orbiting within optimistic habitable zone” many potentially habitable exoplanets Earth make special Aside u exoplanets found far exist within confines deemed “potentially habitable” may indeed rocky orbit right distance host star maintain liquid water atmosphere doesn’t mean habitable posse potential support Earth life life matter 
may even suitable candidate terraforming condition made preserved Earth safe harbour life many seem occurred precisely right time throughout 4543 billion year history planet Costa Rica ©NASA factor allowed life evolve steadily Earth — “Goldilocks” factor — include one use designate exoplanets potentially habitable like Earth orbit right distance Sun allow liquid H2O Earth formed mass composition became rocky world opposed gas giant Beyond primary characteristic though Earth posse trait part still unable detect exoplanets molten mostly iron core spin create magnetosphere around planet deflects excessive solar cosmic radiation single relatively large Moon stabilizes rotation give u 24hour day creates tide scientist believe large driver evolution ozone layer add another protective shield life UV light two gasgiant world outer Solar System pulling majority asteroid comet billion year long make inner Solar System possibly impact Earth California night ©NASA located edge Orion spiral arm galaxy far much denser crowded centre milky Way asteroid comet stellar collision supernova much common Late Heavy Bombardment pounded Earth comet impactors roughly 4 billion year ago seeded world right amount water ice give u vast ocean Sun also quite stable star luckily isn’t part binary star system may account 85 star would certainly offer difficulty form gravitational pull 2 star asteroid activity Earth also remarkably consistent stable billion year atmospheric chemical composition temperature variation exoplanets discovered Earth ilk exist within rather narrow band possibility Goldilocks factor added world remained viable habitat billion year mineralrich ocean became veritable Petri dish trillion generation singlecelled life could mingle evolve two form merged symbiotic relationship resulted first multicellular organism diversity life blossomed uncontrollably diversity would one reason life Earth continued survive multiple mass extinction event Cretaceous–Paleogene extinction event — 65 million year ago 75 specie loss Triassic–Jurassic extinction event — 199 million 214 million year ago 70 specie loss Permian–Triassic extinction event — 251 million year ago 96 specie loss Late Devonian extinction — 364 million year ago 75 specie loss Ordovician–Silurian extinction event — 439 million year ago 86 specie loss strange bit irony earliest two great extinction may caused rather directly power evolution Earth It’s believed many scientist case extreme amount plant growth led first removal much CO2 atmosphere reverse greenhouse effect second great extinction mega algae bloom depleted ocean oxygen recent 3 mass extinction seem caused supervolcano eruption two massive asteroid impact sixth mass extinction generally agreed upon palaeontologist currently happening “holocene extinction event” thought extinction began end last ice age roughly 12000 year ago vastly accelerated rise agriculture large human civilization Industrial Revolution Data point least 7 holoceneera specie already gone extinct directly due human interaction world Species come go extinct naturally course known background rate extinction Scientists believe human increased occurrence extinction possibly high 500–1000 time background rate Deforestation Rodonia Brazil cover area nearly 80 size France ©NASA Reversing trend need priority — intelligent specie Earth see caretaker multibillion year legacy must allow Earth become barren hunk rock due inherent drive often harm good smarter even recent mass extinction event snowball becomes unfixable likely life 
continue thrive Earth whether beneath ice ocean’s deepest corner need keep mind always creature top food chain die first great extinction Earth become unlivable u human capable finding reaching exoplanets might become new home Hopefully become wiser Thank reading sharingTags Science Space Environment Earth Climate Change
3,754
13 Attributes of the Ultimate Writer (Part 1 of 4)
13 Attributes of the Ultimate Writer (Part 1 of 4) Soul, Creativity, and Intelligence Graphic designed by author In this series of articles, I’m going to step into my lab and assemble the ultimate writer according to 13 core attributes. Investing in a Medium membership and joining the Partner Program has opened me to a new world of fantastic wordsmiths and I thought it’d be fun to dream up how I’d assemble the ultimate writer. Now I must say that this list is based purely on my opinion. I don’t intend to disrespect anyone by leaving them off this list. The Backstory Our soon-to-be 5-year old son wants a Voltron toy for his 5th birthday. Not only does this bring me nostalgia as I reflect on when my father bought me a Voltron toy, but it also got me thinking about combining powers in another sense. So, I owe credit to a 4-year old for inspiring me to assemble the ultimate writer. I also must mention that I miss sports right now. I miss playing them. I miss watching them. I miss the camaraderie and rhythm of playing pickup basketball at the Y. One interesting positive out of this pandemic is it has inspired me to think of writing as a sport (which is why I decided to make Stamina one of the attributes of the ultimate writer). Fantasy sports is another one of my hobbies. I love the excitement in assembling the ultimate team. This passion has led to me winning my fantasy football league two years in a row! Humble brag much? But today and later in Parts 2–4, I’m going to combine my passion for fantasy sports with my passion for writing (and reading). Allow me to step out of my lab and present to you the ultimate writer. The 13 attributes are: Soul Creativity Intelligence Voice/Presence Communication/Delivery Vocabulary Sense of Humor Heart/Empathy Work Ethic Stamina Guts Versatility Connecting If you noticed the flow and pattern, I’m stepping through the attributes from the all-encompassing eternal aura (the soul) and then from top-to-bottom from there. Today, in Part 1 of this series, I’m focusing on the attributes of Soul, Creativity, and Intelligence. In Part 2, I’ll cover Voice/Presence, Communication/Delivery, Vocabulary, and Sense of Humor Part 3: Heart/Empathy, Work Ethic, Stamina Part 4: Guts, Versatility, Connecting Let the fun begin! Soul When I think about the connection between Soul and writing, I think about where the writer’s power to create comes from and who they give the credit to. I also think about the vibe I feel when I read their words. But Soul is deeper than having the ability to help people feel good (this ability in the wrong person is very dangerous), it’s about also having the heart to properly steward your abilities and truly want good to come from your writing. So with those characteristics in mind… Jim Wolstenholm is the writer who I choose for Soul because of how he displays his desire for righteousness through his writing. His words are warm and communicate care for his readers. He’s present in his writing, but he also has a great ability of getting out of the way to let God’s Word shine through. A story by Jim that shows his soul, love for God, and love for others: 3 Life Changing Prayers Creativity I appreciate writers who approach topics from unique angles. I find that some writers (myself included) are so in the rush to publish that they don’t take the time to sit with their work. Sitting with your work and being diligent in the revising and editing stages of the writing process are key to creativity. 
Yes, for some writers, creativity flows directly from the mind to their fingers in the drafting stage of the writing process. But I often find interesting ways to creatively tweak and contort what I wrote while editing a story. When I ventured into my glorious ultimate writer creation lab, one writer stood out to me for his creativity. He is an amazing writer and I love that the mysterious person behind the pen name is free from the tyranny of his name and the public. Nom de Plume is the writer who I choose for Creativity. A story by Nom that shows his gift as a creative writer: Dear Books… Intelligence I love smart writers. This goes beyond using big words. An intelligent writer helps me see things in a new way. An intelligent writer tackles complex issues but expresses them in a way that’s understandable. An intelligent writer shows dedication to their field of expertise and their deep knowledge shines through their words. With these attributes in mind… Yong Cui, Ph.D. is who I choose for how he exhibits Intelligence in his writing. When I read his writing, I feel like I totally understand programming. I studied some programming at Morehouse and while studying Electrical Engineering at Georgia Tech, but that was ages ago and I wasn’t a whiz by any means. Yong’s writing gives me the confidence to go try to whip up some code! A story by Doctor Cui that showcases his intelligence (and the ability to explain a complex technical topic in plain English): Time Complexity of Algorithms - Big O Notation Explained In Plain English
https://medium.com/inspirefirst/13-attributes-of-the-ultimate-medium-writer-part-1-of-4-c9e4960cb768
['Chris Craft']
2020-08-26 11:20:20.778000+00:00
['Soul', 'Blogging', 'Creativity', 'Intelligence', 'Writing']
Title 13 Attributes Ultimate Writer Part 1 4Content 13 Attributes Ultimate Writer Part 1 4 Soul Creativity Intelligence Graphic designed author series article I’m going step lab assemble ultimate writer according 13 core attribute Investing Medium membership joining Partner Program opened new world fantastic wordsmith thought it’d fun dream I’d assemble ultimate writer must say list based purely opinion don’t intend disrespect anyone leaving list Backstory soontobe 5year old son want Voltron toy 5th birthday bring nostalgia reflect father bought Voltron toy also got thinking combining power another sense owe credit 4year old inspiring assemble ultimate writer also must mention miss sport right miss playing miss watching miss camaraderie rhythm playing pickup basketball One interesting positive pandemic inspired think writing sport decided make Stamina one attribute ultimate writer Fantasy sport another one hobby love excitement assembling ultimate team passion led winning fantasy football league two year row Humble brag much today later Parts 2–4 I’m going combine passion fantasy sport passion writing reading Allow step lab present ultimate writer 13 attribute Soul Creativity Intelligence VoicePresence CommunicationDelivery Vocabulary Sense Humor HeartEmpathy Work Ethic Stamina Guts Versatility Connecting noticed flow pattern I’m stepping attribute allencompassing eternal aura soul toptobottom Today Part 1 series I’m focusing attribute Soul Creativity Intelligence Part 2 I’ll cover VoicePresence CommunicationDelivery Vocabulary Sense Humor Part 3 HeartEmpathy Work Ethic Stamina Part 4 Guts Versatility Connecting Let fun begin Soul think connection Soul writing think writer’s power create come give credit also think vibe feel read word Soul deeper ability help people feel good ability wrong person dangerous it’s also heart properly steward ability truly want good come writing characteristic mind… Jim Wolstenholm writer choose Soul display desire righteousness writing word warm communicate care reader He’s present writing also great ability getting way let God’s Word shine story Jim show soul love God love others 3 Life Changing Prayers Creativity appreciate writer approach topic unique angle find writer included rush publish don’t take time sit work Sitting work diligent revising editing stage writing process key creativity Yes writer creativity flow directly mind finger drafting stage writing process often find interesting way creatively tweak contort wrote editing story ventured glorious ultimate writer creation lab one writer stood creativity amazing writer love mysterious person behind pen name free tyranny name public Nom de Plume writer choose Creativity story Nom show gift creative writer Dear Books… Intelligence love smart writer go beyond using big word intelligent writer help see thing new way intelligent writer tackle complex issue express way that’s understandable intelligent writer show dedication field expertise deep knowledge shine word attribute mind… Yong Cui PhD choose exhibit Intelligence writing read writing feel like totally understand programming studied programming Morehouse studying Electrical Engineering Georgia Tech age ago wasn’t whiz mean Yong’s writing give confidence go try whip code story Doctor Cui showcase intelligence ability explain complex technical topic plain English Time Complexity Algorithms Big Notation Explained Plain EnglishTags Soul Blogging Creativity Intelligence Writing
3,755
Hooks in React Native
These are the most important things you should know about a React Component and its lifecycle:

Props
Props are the inputs of a component, so they are something you put into a component when you create it. By definition, props cannot change, but you can add a function to the props that does that for you (which could be confusing).

State
State is something that can dynamically change (like a text input) and is always bound to something (a component, for example). You can change the state by using the setState() function, which only notifies the component about a state change. Take a look at the following example and common pitfall with React and setState():

// not so good
console.log(this.state.test); // 5
this.setState({ test: 12 });
console.log(this.state.test); // might be 5 or 12

// good
this.setState({ test: 42 }, () => {
  console.log(this.state.test); // 42
});

Constructor
The constructor is not always necessary to have. However, there are some use cases: initializing state and binding methods to this. What you definitely should not do there is invoke long-running methods, since this may slow down your initial rendering (see the diagram above). So a common component and constructor could look like the following:

class MyComponents extends React.Component {
  constructor(props) {
    super(props);
    this.state = { test: 42 };
    this.renderSomeText = this.renderSomeText.bind(this);
  }

  // you could also do this, so no constructor is needed
  state = {
    test: 42,
  }

  renderSomeText() {
    return <Text>{this.state.test}</Text>
  }
}

If you don’t bind methods in the constructor and only initialize the state, you don’t even need a constructor (which saves code). See my article about React Performance here, if you don’t know why you should bind certain methods. It also has some valuable code examples.

Component did mount and will unmount
The componentDidMount lifecycle method is invoked only once, after the component was rendered for the first time. This could be the place where you do requests or register event listeners, for example. Apart from that, the componentWillUnmount lifecycle method is invoked before the component gets “destroyed”. This should be the place where you cancel any still-running requests (so they don’t try to change the state of an unmounted component), as well as unregister any event listener you use. Doing so also prevents memory leaks in your app (memory that is no longer needed but is never released). A problem that probably many (myself included) have run into is exactly what I described in the last paragraph: if you use the window.setTimeout function to execute some code in a delayed manner, you should take care of using clearTimeout to cancel this timer if the component unmounts (a short sketch of this cleanup pattern follows at the end of this article).

Other lifecycle methods
The componentWillReceiveProps(nextProps) or, from React version 16.3, getDerivedStateFromProps(props, state) lifecycle method is used to change the state of a component when its props change. Since this is a more complex topic and you probably use (and should use) it rarely, you can read about it here.

Difference between Component and PureComponent:
You might have heard about React’s PureComponent already. To understand the difference, you need to know that shouldComponentUpdate(nextProps, nextState) is called to determine whether a change in props and state should trigger a re-rendering of the component. The normal React.Component always re-renders on any change (so it always returns true).
The React.PureComponent does a shallow comparison on props and state, so it only re-renders if any of them have changed. Keep in mind that if you change deeply nested objects (you mutate them), the shallow compare might not detect it.

If you ask yourself where hooks fit into this lifecycle, the answer is pretty easy. One of the most important hooks is useEffect. You pass a function to useEffect, which will run after the render call. So in essence, it is equal to componentDidUpdate. If you return a function from the function passed to useEffect, you can handle the componentWillUnmount code there. Since useEffect runs after every render (which might not always make sense), you can limit it to being closer to componentDidMount and componentWillUnmount by passing [] as a second argument. This tells React that this useEffect should only be called when a certain state has changed (in this case [], which means only once).

The most interesting hook is useState. Its usage is pretty simple: you pass an initial state and get a pair of values (an array) in return, where the first element is the current state and the second a function that updates it (like setState()). If you want to read more about hooks, check out the React documentation.

Lastly, I want to present a simple example of a React Native component with React Hooks. It contains a View with a Text and Button component. By clicking the button, you increase the counter by 1. If the counter reaches the value 42 or greater, it stays at 42. You can argue if it makes sense or not, especially since the value will shortly be increased to 43, then render once, then the useEffect will set it back to 42.

import React, { useState, useEffect } from 'react';
import { View, Text, Button } from 'react-native';

export const Example = () => {
  const [foo, setFoo] = useState(30);

  useEffect(() => {
    if (foo >= 42) {
      setFoo(42);
    }
  }, [foo]);

  return (
    <View>
      <Text>Foo is {foo}.</Text>
      <Button onPress={() => setFoo(foo + 1)} title='Increase Foo!' />
    </View>
  );
}

React Hooks are a great way to write even cleaner React components. Their natural ability to create reusable code (you can combine your hooks) makes them even greater. The fact that cleaning up side effects (subscriptions, requests) happens for every render by default helps avoid bugs (you may forget to unsubscribe), as stated here.
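As a complement to the cleanup advice in the lifecycle section above, here is a minimal sketch of the setTimeout/clearTimeout pattern in a class component. This is an illustrative assumption rather than code from the original article; the component name, the 5-second delay, and the message strings are invented for the example:

import React from 'react';
import { Text } from 'react-native';

class DelayedMessage extends React.Component {
  // illustrative component, not from the original article
  state = { message: 'waiting...' };

  componentDidMount() {
    // keep the timer id so the timer can be cancelled later
    this.timerId = setTimeout(() => {
      this.setState({ message: 'done!' });
    }, 5000);
  }

  componentWillUnmount() {
    // cancel the pending timer so it cannot call setState on an
    // unmounted component or keep memory alive after unmount
    clearTimeout(this.timerId);
  }

  render() {
    return <Text>{this.state.message}</Text>;
  }
}

export default DelayedMessage;

With hooks, the same idea is expressed by returning a cleanup function from useEffect, i.e. starting the timer inside the effect and returning () => clearTimeout(id) so React cancels it on unmount.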
https://reime005.medium.com/hooks-in-react-native-ffca637760be
['Marius Reimer']
2019-05-26 11:29:06.304000+00:00
['JavaScript', 'React', 'Software Development', 'React Native', 'Software Engineering']
Title Hooks React NativeContent important thing know React Component lifecycle Props Props input component something put component create Per definition prop cannot change add function prop could confusing State State something dynamically change like text input always bound something component example change state using setState function notifies component state change Take look following example common pitfall React setState good consolelogthisstatetest 5 thissetState test 12 consolelogthisstatetest might 5 12 good thissetState test 42 consolelogthisstatetest 42 Constructor constructor always necessary However us case initializing state binding method definitely invoking longrunning method since may slow initial rendering see diagram common component constructor could look like following class MyComponents extends ReactComponent constructorprops superprops thisstate test 42 thisrenderSomeText thisrenderSomeTextbindthis could also constructor needed state test 42 renderSomeText return TextthisstatetestText don’t bind method constructor initialize state don’t even need constructor save code See article React Performance don’t know bind certain method also valueable code example Component mount unmount componentDidMount lifecycle method invoked component rendered first time could place request register event listener example Apart componentWillUnmount lifecycle method invoked component getting “destroyed” place cancel eventually running request don’t try change state unmounted component something well unregister event listener use Lastly prevent memory leak app memory used released problem probably many also ran exactly described last paragraph use Window setTimeout function execute code delayed manner take care using clearTimeout cancel timer component unmounts lifecycle method componentWillReceivePropsnextProps React Version 163 getDerivedStateFromPropsprops state lifecycle method used change state component prop changed Since complex topic probably use use rarely read Difference Component PureComponent might heard React’s PureComponent already understand difference need know shouldComponentUpdatenextProps nextState usedcalled determine whether change prop state trigger rerendering component normal ReactComponent always rerenders change always return true ReactPureComponent shallow comparison prop state rerenders changed Keep mind change deeply nested object mutate shallow compare might detect ask hook fit lifecycle answer pretty easy One important hook useEffect pas function useEffect run render call essence equal componentDidUpdate return function useEffect’s passed function handle componentWillUnmount code Since useEffect run every render might always make sense limit closer componentDidMount componentWillUnmount passing second argument tell React useEffect called certain state changed case mean interesting hook useState It’s usage pretty simple pas initial state get pair value array return first element current state second function update like setState want read hook check React documentation Lastly want present simple example React Native component React Hooks contains View Text Button component clicking button increase counter 1 counter reach value 42 greater stay 42 argue make sense Especially since value shortly increased 43 render useEffect set back 42 import React useState useEffect react import View Text Button reactnative export const Example const foo setFoo useState30 useEffect foo 42 setFoo42 foo return View TextFoo fooText Button onPress setFoofoo 1 titleIncrease Foo View 
React Hooks great way write even cleaner React component natural ability create reusable code combine hook make even greater fact cleaning side effect subscription request happen every render default help avoid bug may forget unsubscribe stated hereTags JavaScript React Software Development React Native Software Engineering
3,756
AI Expert Reveals How Top AI Engineers Are Changing The Way We Do Business
By Rishon Blumberg, 10x Management Co-Founder The business world is changing fast and finding a talented AI engineer can bring your company significant competitive advantages. While entrepreneurs have relied on their instincts and intuition to dictate the direction of their businesses for a long time, AI engineers are helping businesses verify or discredit some of their long-held beliefs. An AI engineer has the ability to come into a company and transform the way we do business. And business leaders are using data to make decisions like never before. Executives can still rely on intuition, but AI is here to help us verify or discredit our beliefs. As a tech entrepreneur myself working with some of the best AI engineers in the world, I’ve witnessed the transformative power an AI engineer can have on a business. I had the privilege of interviewing an AI engineer and prodigy that started university at age 12(!), Zack Dvey-Aharon, on how companies will begin to use AI in the new data-driven era for business. Rishon (in bold): Thanks for taking the time to speak with me Zack. What is your favorite use of AI that you have worked on personally? Zack: As an AI engineer, I’ve helped healthcare companies analyze data to understand when their cure works best. I’ve helped cybersecurity companies identify abnormal network behavior for security purposes, helped energy companies better understand ocean drilling potential, commercial companies optimize their pricings and offerings, the list goes on… If I picked a favorite, I might get some angry letters in the mail from those I left out! All my clients are special to me and I truly enjoy working on every project I undertake. Quite a diplomatic answer! What are some ways that you think AI will be monetized in the future? I’ll use a simple example that demonstrates how AI can improve most existing services and products, and not necessarily create new ones. An AI engineer might develop a refrigerator that can manage the content inside the fridge and adjust the temperature to ideally match your groceries. The company that employs that AI engineer will monetize by simply selling more units than the competition. That’s just one example. Basically, the companies that really take advantage of the intelligence of AI will be able to monetize simply by being better than the competition. Let’s compare it to baseball and the famous example of Moneyball and the Oakland Athletics for a second. In 2002, Oakland started using deep statistics to analyze and find undervalued players in the major and minor leagues before any other team. While most teams had scouts that would rely on instincts to evaluate a player, Oakland used objective statistics and algorithms to evaluate players. This allowed Oakland — with a payroll of $44 million — to compete with teams like the New York Yankees — with a payroll of $125 million. Data lets us evaluate the exact impact that a player has on the field. What percentage of the time does a player hit a curveball traveling 82mph into the infield vs. the outfield vs. over the fence? Just like baseball was transformed by statistics, the broader business world is being transformed by AI too. Any method (like Moneyball) that gives you a competitive advantage will monetize itself. As a Yankees fan, I appreciate the baseball analogy. How is AI different than other technologies in the past? 
Through data analysis, AI engineers can allow companies to work much more efficiently, adjust to changes, cancel unnecessary business processes and replace expensive alternatives, including human jobs. AI is completely data-driven, so algorithms will help us understand where we can improve our processes as opposed to using intuition (as I just mentioned) or people analyzing data. This has never been the case before. Data is a true goldmine and the sky’s the limit with how it can be used. By employing a single AI engineer or multiple AI engineers, companies have endless opportunities to better understand their business processes, improve them, optimize them and reveal new insights that can dramatically change the bottom line. In lay terms, what are the differences between Data Science, AI, and Machine Learning? Data science is the most general term for data analysis. Data can be analyzed manually without any algorithms or learning mechanisms, which means in certain circumstances, it’s not AI at all. Artificial Intelligence (AI) covers all computerized/algorithmic ways to learn data and react better to it. Machine Learning (ML) is a sub-domain of AI. Machine learning features self-learning mechanisms that become smarter as they have more data. So the difference between Machine Learning and AI is that AI can include hard-coded formulas that do not learn from the data, whereas Machine Learning engineers will always build self-learning mechanisms. What company do you think will dominate the AI landscape in the future? For instance, 68% of internet searches in the U.S. are done on Google. Will there be a Google of AI? It’s tough to say that one company will monopolize the industry. My prediction is that several years from now, AI, and more specifically Machine Learning, will be naturally integrated everywhere and by everyone. Just like Google and its search engine are everywhere, AI and machine learning will be everywhere. An AI engineer will be a very lucrative position to have at any company. What are the biggest challenges for companies looking to embrace AI? The clear number one challenge is to find a strong enough AI engineer to help a company or join a company. If we compare AI to playing chess, there are close to a billion chess players in the world, but only a thousand grandmasters. Although many people present themselves as expert engineers, there are perhaps a few dozen AI engineers or teams out there with a truly strong, diversified project experience in machine learning. Building a great AI solution is difficult at the moment because the talent is so rare. What are the biggest misconceptions regarding AI? In movies, we often see machines that are ‘smart’ like human beings that can adapt their language and behavior to unpredictable situations. It’s been a fantasy for humans for a long time, especially since it was realistically posed as a challenge by Alan Turing in the 1950s. The truth is that technology like that is still out of our reach, so I’d say that’s the biggest misconception. AI engineers are working hard to get us there, but we’re not that close. What’s your favorite use of AI technology being applied today? As an AI engineer, it’s tough to choose a favorite. I find the revolution itself amazing. Insurance companies better understand their clients, media companies better evaluate their artists, airline companies better optimize their seat ticket prices, the list goes on. 
What’s the one example of an application of AI that feels inevitable to you, yet today no one you know is really working on it? I think AI that takes text written about a person, and by that person, from many different sources and gathers a smart, integrated analysis and report would be useful for personal clients, companies and intelligence agencies. Imagine trying to find out information about a potential client, and having to go from point A to point B and all sorts of places to find relevant information. AI could make that process so much easier by aggregating useful data and giving you ONE useful report as opposed to hundreds of sources with bits of useful information. What advice would you give a company trying to source AI talent? It’s important to do research on AI engineers who have been contracted by competitors or other companies in the field. My firm has delivered more than 40 AI projects to clients, and in each area, my past AI engineer experience with similar problems turned out to be a crucial factor. Companies that source AI engineers and development talent must understand two key parameters: How strong and experienced is the engineer? How easily can their work be integrated with the company, its IT team and the general “data DNA” of the firm? In today’s economy, even inexperienced data scientists and AI engineers have become very expensive, so building a team seems less realistic for most companies. Did you really start university at age 12? I sure did. As a child, I always looked for new challenges and new ways to learn. I convinced my parents to let me try a university class, and when I was able to keep up with the class, I enrolled in more. I was able to finish my university degree before high school graduation. If you like this article, you might enjoy reading How One Blockchain Developer Sees the Future of Technology Rishon Blumberg is an entrepreneur and the founder of 10x Management, a prominent tech talent agency. He is a thought leader in the future of work space, having been published in the Harvard Business Review, and makes frequent appearances on Bloomberg Television and CNBC. Rishon graduated from the Wharton School of Business with a degree in entrepreneurial management in 1994.
https://medium.com/10x-management/ai-expert-reveals-how-top-ai-engineers-are-changing-the-way-we-do-business-35e8986588fd
['Rishon Blumberg']
2018-05-14 22:24:16.093000+00:00
['Machine Learning', 'AI', 'Technology', 'Computer Programming', 'Artificial Intelligence']
Title AI Expert Reveals Top AI Engineers Changing Way BusinessContent Rishon Blumberg 10x Management CoFounder business world changing fast finding talented AI engineer bring company significant competitive advantage entrepreneur relied instinct intuition dictate direction business long time AI engineer helping business verify discredit longheld belief AI engineer ability come company transform way business business leader using data make decision like never Executives still rely intuition AI help u verify discredit belief tech entrepreneur working best AI engineer world I’ve witnessed transformative power AI engineer business privilege interviewing AI engineer prodigy started university age 12 Zack DveyAharon company begin use AI new datadriven era business Rishon bold Thanks taking time speak Zack favorite use AI worked personally Zack AI engineer I’ve helped healthcare company analyze data understand cure work best I’ve helped cybersecurity company identify abnormal network behavior security purpose helped energy company better understand ocean drilling potential commercial company optimize pricing offering list go on… picked favorite might get angry letter mail left client special truly enjoy working every project undertake Quite diplomatic answer way think AI monetized future I’ll use simple example demonstrates AI improve existing service product necessarily create new one AI engineer might develop refrigerator manage content inside fridge adjust temperature ideally match grocery company employ AI engineer monetize simply selling unit competition That’s one example Basically company really take advantage intelligence AI able monetize simply better competition Let’s compare baseball famous example Moneyball Oakland Athletics second 2002 Oakland started using deep statistic analyze find undervalued player major minor league team team scout would rely instinct evaluate player Oakland used objective statistic algorithm evaluate player allowed Oakland — payroll 44 million — compete team like New York Yankees — payroll 125 million Data let u evaluate exact impact player field percentage time player hit curveball traveling 82mph infield v outfield v fence like baseball transformed statistic broader business world transformed AI method like Moneyball give competitive advantage monetize Yankees fan appreciate baseball analogy AI different technology past data analysis AI engineer allow company work much efficiently adjust change cancel unnecessary business process replace expensive alternative including human job AI completely datadriven algorithm help u understand improve process opposed using intuition mentioned people analyzing data never case Data true goldmine sky’s limit used employing single AI engineer multiple AI engineer company endless opportunity better understand business process improve optimize reveal new insight dramatically change bottom line lay term difference Data Science AI Machine Learning Data science general term data analysis Data analyzed manually without algorithm learning mechanism mean certain circumstance it’s AI Artificial Intelligence AI cover computerizedalgorithmic way learn data react better Machine Learning ML subdomain AI Machine learning feature selflearning mechanism become smarter data difference Machine Learning AI AI include hardcoded formula learn data whereas Machine Learning engineer always build selflearning mechanism company think dominate AI landscape future instance 68 internet search US done Google Google AI It’s tough say one company 
monopolize industry prediction several year AI specifically Machine Learning naturally integrated everywhere everyone like Google search engine everywhere AI machine learning everywhere AI engineer lucrative position company biggest challenge company looking embrace AI clear number one challenge find strong enough AI engineer help company join company compare AI playing chess close billion chess player world thousand grandmaster Although many people present expert engineer perhaps dozen AI engineer team truly strong diversified project experience machine learning Building great AI solution difficult moment talent rare biggest misconception regarding AI movie often see machine ‘smart’ like human being adapt language behavior unpredictable situation It’s fantasy human long time especially since realistically posed challenge Alan Turing 1950s truth technology like still reach I’d say that’s biggest misconception AI engineer working hard get u we’re close What’s favorite use AI technology applied today AI engineer it’s tough choose favorite find revolution amazing Insurance company better understand client medium company better evaluate artist airline company better optimize seat ticket price list go What’s one example application AI feel inevitable yet today one know really working think AI take text written person person many different source gather smart integrated analysis report would useful personal client company intelligence agency Imagine trying find information potential client go point point B sort place find relevant information AI could make process much easier aggregating useful data giving ONE useful report opposed hundred source bit useful information advice would give company trying source AI talent It’s important research AI engineer contracted competitor company field firm delivered 40 AI project client area past AI engineer experience similar problem turned crucial factor Companies source AI engineer development talent must understand two key parameter strong experienced engineer easily work integrated company team general “data DNA” firm today’s economy even inexperienced data scientist AI engineer become expensive building team seems le realistic company really start university age 12 sure child always looked new challenge new way learn convinced parent let try university class able keep class enrolled able finish university degree high school graduation like article might enjoy reading One Blockchain Developer Sees Future Technology Rishon Blumberg entrepreneur founder 10x Management prominent tech talent agency thought leader future work space published Harvard Business Review make frequent appearance Bloomberg Television CNBC Rishon graduated Wharton School Business degree entrepreneurial management 1994Tags Machine Learning AI Technology Computer Programming Artificial Intelligence
3,757
Can Black People Disagree Without Disrespect?
What about our disagreements call for discord? The year is 1770. The location, London, England. The setting, a memorial for Anglican clergyman and good friend to Benjamin Franklin, George Whitfield. Standing at the pulpit eulogizing his beloved friend is fellow clergyman, theologian and co-founder of the Methodist church, John Wesley. “There are many doctrines of a less essential nature,” he says to the mourners. “In these we may think and let think; we may ‘agree to disagree.’ But, meantime, let us hold fast the essentials.” Whitfield himself wrote the phrase in a collection of letters on unity in 1750, using it to describe the unimportance of unlikeness where unity is the goal. “After all”, he writes, “those who will live in peace must agree to disagree in many things with their fellow-labourers, and not let little things part or disunite them.” The phrase since then has grown in popularity, referring to the resolution of a disagreement whereby opposing parties accept but do not agree with the position of the opposing side. Parties generally “agree to disagree” when all sides have recognized that further discussion or debate will not yield an otherwise amicable outcome and may result in unnecessary conflict. All sides agree to remain on amicable terms while continuing to disagree about the original disagreement. This resolution is one of mutual respect and reason, sound judgment and an honest effort to reach resolution with real resolve in mind. I hate to take a page out of white male 18th century history, but it’s time we learned to agree to disagree, and do so without the disrespect. With that said, I must pose the pressing questions, can there be mutual understanding without mutual respect, and is a lack of respect at the helm of our deadly debates? If the Gayle King interview and the outcry that followed are any indications, I’d say the answers to those questions are a resounding no, and a hell yes. We’re not discussing her timing, which was pretty poor, or her tone, which if we’re being candid was condescending, or even her intent, which was pretty obviously ill. It’s pretty safe to say the gross majority of us disagreed with the deed itself. But that’s right around where things got a little ludicrous. Now from a journalistic standpoint, I have to say that in her defense, there are just some topics that it’ll never be the right time to discuss. Like Joe Jackson’s parenting practices, Whitney Houston’s drug addiction, Martin Luther King Jr.’s multiple affairs, Rosa Parks being a knockoff Claudette Colvin, etc., some subjects, just the mention alone are enough to earn you a visit from the cancel vultures. So kudos to Gail for knowingly gearing up for that gut punch. But there’s disagreeing with a journalist’s line of questioning because you find it disrespectful and insensitive…. and then there’s threatening an actual living person over the legacy of the deceased. I have a serious problem with that valuation. You should too. The only thing worse than the collective response to the Gail King interview was the collective response to the collective response. Snoop Dog has walked women on leashes in broad daylight and cheated on his wife of a couple decades so many times that the last time he had to fake a docuseries as a cover up. He’s bragged about selling women to other stars while on tour and, to my knowledge, has never appeared to be a spokesperson for the protection of women and children. He’s never advocated against colorism, which his only daughter, Cori, openly struggles with. 
He’s never advocated for policy to address the crisis in maternal and infant mortality despite his son, Corde, recently experiencing the loss of his newborn. He’s never even advocated for prison reform or spoken against black Americans being arrested more for marijuana offenses, despite being one of the biggest open consumers of cannabis in hip hop. This man doesn’t advocate for the family he has, if this was a role he felt like trying on, he’s had ample opportunity to do so. But we’re no stranger to letting men with little sense speak on our behalf, and so he took liberty and we allowed it, even supported it, even after hearing how violent it was. But now that cooler heads have prevailed, we have to address what about us would have us believing that Snoop Dog was ever the right messenger, whatever that message was intended to be. And then we have to ask how after hearing his message, we could continue to defend not only the messenger, but also the message? What makes us associate philosophical differences with the need for physical correction? I mean, I know most of the people who didn’t take issue initially with a man threatening a senior woman in his own community felt that the threat was just that, a threat. No real harm, more than likely, was to befall Gayle King, at least that was the defense. But what about the threat, even if just a threat, was excusable? Seriously. I’m not gonna dredge out some unnecessary, imaginary scenario about your mother, or sister, or auntie being threatened to have her head bashed in because she made a statement someone didn’t like because, well, we’re all adults here, and I shouldn’t have to make it personal to make it palpable. Instead, let’s talk about that question directly, because I think the actual question and the answers to it matter. What about even the threat of physical harm in the midst of debate or disagreement is empathetic, or sensical, or safe, or sociable, or mature or any of the many things Gayle King has been accused of not being for sitting in a chair and asking a question, albeit a couple of impolite ones? How do we get from one to the other? One day we’ll have the conversation about how our perception of disagreement as detrimental began on the plantation. One day we will discuss the origin of our perception of disagreement as discord and deal with the uncomfortable reality that during slavery, alignment meant allieship, two being in disagreement wasn’t just unacceptable, it was unsafe. And so we formed a casual distrust of one another, agreeing to walk the fine line between ally and adversary, only to be tossed away at the smallest inclination of betrayal. And one day we will sit down and process how that history has resulted in the way we casually disrespect and disregard one another over the non-essentials, and how when, coupled with patriarchy and sexism, that makes things insufferable for Black women in their own communities. But until then, we need to find a safe page to meet on and agree to stay there when it comes to our rules of engagement. Of which, safe, respectful, agreeable disagreements have to be one of them. There won’t always be time to unpack our toxic treatment of one another based on our history of trauma. At what point do we stop needing excuses for treating each other poorly in order to find reason to treat one another better? Not to mention, no man should feel comfortable threatening bodily harm to a woman who has inflicted none on him. 
If we can’t agree on that, we need to talk about what about respecting women’s humanity we’re still struggling with. Because that’s what this is about, not about Black men being fed up with Oprah and Gayle’s master man-bashing mission, and certainly not about respecting Kobe’s legacy (which honestly, Gayle doesn’t have the range to tarnish). We have a respect deficit in our community. And where there is little respect for someone’s humanness, there is little respect for their life. If nothing else, this situation proves what many Black women have been trying to convey, the fear that even in death, a Black man’s life, or in this instance the memory of, is of more value than theirs. I have a serious problem with that valuation. You should too.
https://arahthequill.medium.com/can-black-people-disagree-without-disrespect-7ed5f07d2472
['Arah Iloabugichukwu']
2020-11-20 23:19:02.943000+00:00
['Society', 'People', 'Patriarchy', 'Culture', 'Celebrity']
Title Black People Disagree Without DisrespectContent disagreement call discord year 1770 location London England setting memorial Anglican clergyman good friend Benjamin Franklin George Whitfield Standing pulpit eulogizing beloved friend fellow clergyman theologian cofounder Methodist church John Wesley “There many doctrine le essential nature” say mourner “In may think let think may ‘agree disagree’ meantime let u hold fast essentials” Whitfield wrote phrase collection letter unity 1750 using describe unimportance unlikeness unity goal “After all” writes “those live peace must agree disagree many thing fellowlabourers let little thing part disunite them” phrase since grown popularity referring resolution disagreement whereby opposing party accept agree position opposing side Parties generally “agree disagree” side recognized discussion debate yield otherwise amicable outcome may result unnecessary conflict side agree remain amicable term continuing disagree original disagreement resolution one mutual respect reason sound judgment honest effort reach resolution real resolve mind hate take page white male 18th century history it’s time learned agree disagree without disrespect said must pose pressing question mutual understanding without mutual respect lack respect helm deadly debate Gayle King interview outcry followed indication I’d say answer question resounding hell yes We’re discussing timing pretty poor tone we’re candid condescending even intent pretty obviously ill It’s pretty safe say gross majority u disagreed deed that’s right around thing got little ludicrous journalistic standpoint say defense topic it’ll never right time discus Like Joe Jackson’s parenting practice Whitney Houston’s drug addiction Martin Luther King Jr’s multiple affair Rosa Parks knockoff Claudette Colvin etc subject mention alone enough earn visit cancel vulture kudos Gail knowingly gearing gut punch there’s disagreeing journalist’s line questioning find disrespectful insensitive… there’s threatening actual living person legacy deceased serious problem valuation thing worse collective response Gail King interview collective response collective response Snoop Dog walked woman leash broad daylight cheated wife couple decade many time last time fake docuseries cover He’s bragged selling woman star tour knowledge never appeared spokesperson protection woman child He’s never advocated colorism daughter Cori openly struggle He’s never advocated policy address crisis maternal infant mortality despite son Corde recently experiencing loss newborn He’s never even advocated prison reform spoken black Americans arrested marijuana offense despite one biggest open consumer cannabis hip hop man doesn’t advocate family role felt like trying he’s ample opportunity we’re stranger letting men little sense speak behalf took liberty allowed even supported even hearing violent cooler head prevailed address u would u believing Snoop Dog ever right messenger whatever message intended ask hearing message could continue defend messenger also message make u associate philosophical difference need physical correction mean know people didn’t take issue initially man threatening senior woman community felt threat threat real harm likely befall Gayle King least defense threat even threat excusable Seriously I’m gonna dredge unnecessary imaginary scenario mother sister auntie threatened head bashed made statement someone didn’t like well we’re adult shouldn’t make personal make palpable Instead let’s talk question directly think actual 
question answer matter even threat physical harm midst debate disagreement empathetic sensical safe sociable mature many thing Gayle King accused sitting chair asking question albeit couple impolite one get one One day we’ll conversation perception disagreement detrimental began plantation One day discus origin perception disagreement discord deal uncomfortable reality slavery alignment meant allieship two disagreement wasn’t unacceptable unsafe formed casual distrust one another agreeing walk fine line ally adversary tossed away smallest inclination betrayal one day sit process history resulted way casually disrespect disregard one another nonessential coupled patriarchy sexism make thing insufferable Black woman community need find safe page meet agree stay come rule engagement safe respectful agreeable disagreement one won’t always time unpack toxic treatment one another based history trauma point stop needing excuse treating poorly order find reason treat one another better mention man feel comfortable threatening bodily harm woman inflicted none can’t agree need talk respecting women’s humanity we’re still struggling that’s Black men fed Oprah Gayle’s master manbashing mission certainly respecting Kobe’s legacy honestly Gayle doesn’t range tarnish respect deficit community little respect someone’s humanness little respect life nothing else situation prof many Black trying convey fear even death Black man’s life instance memory value serious problem valuation tooTags Society People Patriarchy Culture Celebrity
3,758
Spaced Repetition Items and Construal Level Theory
Spaced Repetition Items and Construal Level Theory Why you should prioritize from higher- to lower-level construals Overview: Interference Boolean algebra to counteract interference Construal level theory and Levels of Processing model Construal level theory and prioritization Construal level theory, prioritization, and combinatorial thinking How higher-level construals create a bigger impact than lower-level ones Construal level theory: the more abstract something is, the higher the level of the construals; the more concrete something is, the lower the level of the construals. E.g. “phone” has a higher-level construal than “Samsung phone”. So when creating items for spaced repetition (e.g. in Anki, SuperMemo…), what I am trying to do is to have their construal level as high as possible (i.e. abstract) so that it increases the probability that it will connect with more other concepts than if it were a lower-level construal. The concept “phone” connects with a lot more other concepts than the concept “Samsung phone”. Interference The problem one can encounter, however, when creating such abstract concepts for use in spaced repetition is interference: Interference is the process of overwriting old memories with new memories (retroactive interference). From: https://supermemo.guru/wiki/Interference E.g. creating an SRS item like the following: Q: How tall is the Eiffel tower? A: 324 m (I personally like to use clozes in Anki, so it would look something like: The Eiffel tower is {{c1::324 m}} tall) Has a lower probability of making you encounter interference than: Q: Which building is 324 m tall? A: The Eiffel tower (and all other buildings that are also 324 meters tall) To counter interference, one needs to lower the construal level of an item. Boolean algebra to counteract interference This can be done in multitudinous ways, but I personally like to use methods somewhat analogous to Boolean algebra. Conjunction is the one I use most. With this one, you simply add more and more keywords until interference is (almost) gone (a tiny illustrative sketch of this idea follows at the end of this article). logical conjunction; the and of a set of operands is true if and only if all of its operands are true. From: https://en.wikipedia.org/wiki/Logical_conjunction Venn diagram of Logical conjunction, Public Domain, https://commons.wikimedia.org/w/index.php?curid=3437020 Another thing I use a lot to counter interference is the use of contextual mnemonics, i.e. certain keywords within the same item remind me of other concepts that I have clozed. This way of countering interference seems to be partially analogous to event-based prospective memory. Negation is another one I use a lot. When doing your spaced repetition reviews, whenever you answer something incorrectly due to interference, you simply add a hint saying “not: (type your incorrect answer here)”. An example of a note of mine in Anki: Cloze: hasty generalization; an informal fallacy of faulty generalization, which involves reaching an inductive {{c1::[not: conclusion]}} based on insufficient evidence[4] — essentially making a rushed conclusion without considering all of the variables A: generalization Construal level theory and Levels of Processing model Levels of Processing model: Deeper levels of analysis produce more elaborate, longer-lasting, and stronger memory traces than shallow levels of analysis. From: https://en.wikipedia.org/wiki/Levels_of_Processing_model The correlation between them seems to be that, the higher the construal level, the lower the levels of processing and vice versa (i.e.
a negative correlation). One way to counteract the lowering of the levels of processing when creating higher-level construal items is via planned redundancy: Approaching the same concept from multiple perspectives increases the levels of processing. And this, in turn, allows one to increase the half-life of one’s memories. When one combines all the perspectives aimed at a particular concept, this group or class, taken together, has a much lower-level construal than if you had only created one item. However, each individual item within this group still has as high a construal level as possible. Construal level theory and prioritization What you essentially want to do is work your way from higher- to lower-level construal items (i.e. prioritization). If one is reading sources whose content is already prioritized in this manner, as in almost all Wikipedia articles (i.e. the introduction usually has the highest-level construal), then this process tends to happen somewhat automatically. Construal level theory, prioritization, and combinatorial thinking What usually doesn’t happen automatically is combining different concepts. This, too, should be prioritized by first combining highest-level construals with other highest-level construals before combining lower-level construals. I personally like to use Obsidian.md to do this: In Obsidian.md, you simply do this by combining those with the highest node weight (the ones that are the biggest visually) before combining smaller nodes. Sometimes, however, one needs to also rely on their own knowledge to estimate the level of a construal, i.e. even though a particular node in Obsidian.md might be small, it could still have a very high-level construal, e.g. estimated via frequency or probability of occurrence from one’s own experience. Combining higher-level construals before lower-level ones has a much bigger impact (usually) due to the former connecting to many more concepts than the latter.
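To make the Boolean conjunction idea above a little more concrete, here is a tiny, purely illustrative JavaScript sketch (the memory strings and keywords are invented for this example and are not from the article): adding more keywords to a prompt acts like a logical AND over your stored memories, shrinking the set of items that still match and thereby reducing interference.

// a few competing "memories" that could interfere with each other (invented examples)
const memories = [
  'The Eiffel tower in Paris is 324 m tall',
  'The Chrysler Building in New York is 319 m tall',
  'One World Trade Center in New York is 541 m tall',
];

// conjunction: every keyword must match (logical AND)
const recall = (keywords) =>
  memories.filter((m) => keywords.every((k) => m.includes(k)));

console.log(recall(['tall']).length);          // 3 matches -> heavy interference
console.log(recall(['tall', '324 m']).length); // 1 match  -> interference (almost) gone

The negation hints described above work the same way in reverse: they behave like an AND NOT that explicitly rules out the answer you keep confusing the item with.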
https://medium.com/superintelligence/spaced-repetition-items-and-construal-level-theory-79a32c3e4640
['John Von Neumann Ii']
2020-10-26 11:54:50.505000+00:00
['Technology', 'Inspiration', 'Education', 'Science', 'Creativity']
Title Spaced Repetition Items Construal Level TheoryContent Spaced Repetition Items Construal Level Theory prioritize higher lowerlevel construal Overview Interference Boolean algebra counteract interference Construal level theory Levels Processing model Construal level theory prioritization Construal level theory prioritization combinatorial thinking higherlevel construal create bigger impact lowerlevel one Construal level theory abstract something higher level construal concrete something lower level construal Eg “phone” higherlevel construal “Samsung phone” creating item spaced repetition eg Anki SuperMemo… trying construal level high possible ie abstract increase probability connect concept lowerlevel construal concept “phone” connects lot concept concept “Samsung phone” Interference problem one encounter however creating abstract concept use spaced repetition interference Interference process overwriting old memory new memory retroactive interference httpssupermemoguruwikiInterference Eg creating SRS item following Q tall Eiffel tower 324 personally like use clozes Anki would look something like Eiffel tower c1324 tall lower probability making encounter interference Q building 324 tall Eiffel tower building also 324 meter tall counter interference one need lower construal level item Boolean algebra counteract interference done via multitudinous way personally like use method somewhat analogous Boolean algebra Conjunction one use one simply add keywords interference almost gone logical conjunction set operand true operand true httpsenwikipediaorgwikiLogicalconjunction Venn diagram Logical conjunction Public Domain httpscommonswikimediaorgwindexphpcurid3437020 Another thing use lot counter interference use contextual mnemonic ie certain keywords within item remind concept clozed way countering interference seems partially analogous eventbased prospective memory Negation another one use lot spaced repetition review whenever answering something incorrectly due interference simply add hint saying “not type incorrect answer here” example note mine Anki Cloze hasty generalization informal fallacy faulty generalization involves reaching inductive c1not conclusion based insufficient evidence4 — essentially making rushed conclusion without considering variable generalization Construal level theory Levels Processing model Levels Processing model Deeper level analysis produce elaborate longerlasting stronger memory trace shallow level analysis httpsenwikipediaorgwikiLevelsofProcessingmodel correlation seems higher construal level lower level processing vice versa ie negative correlation One way counteract lowering level processing creating higherlevel construal item via planned redundancy Approaching concept multiple perspective increase level processing turn allows one increase halflife one’s memory one combine perspective aimed particular concept group class much lowerlevel construal together created one item However individual item within group still high possible level construal Construal level theory prioritization essentially want work way higher lowerlevel construal item ie prioritization one reading source whose content already prioritized manner like almost Wikipedia article ie introduction usually highestlevel construal process tends happen somewhat automatically Construal level theory prioritization combinatorial thinking usually doesn’t happen automatically combining different concept prioritized first combining highestlevel construal highestlevel construal combining lowerlevel 
construal personally like use Obsidianmd Obsidianmd simply combining highest node weight one biggest visually combining smaller node Sometimes however one need also rely knowledge estimate level construal ie even though particular node Obsidianmd might small could still highlevel construal eg estimated via frequency probability occurrence one’s experience Combining higherlevel construal lowerlevel one much bigger impact usually due former connecting many concept latterTags Technology Inspiration Education Science Creativity
3,759
Google extends its horrible streak with a new set of icon designs
Google extends its horrible streak with a new set of icon designs Everything is wrong with the redesigned Google workspace logos Image Credits: Google A logo is the face or identity of a company. It helps set it apart from its competitors. Through visual design, every brand looks to leave a strong impression in the minds of customers while also building a sense of loyalty and trust. In some ways, logos provide visual clarity and act like mini-mission statements. Customers are strongly attached to their favorite brand icons and hence crave consistency and familiarity. Yet, as companies evolve with time, they are compelled to go through design shifts in order to represent their current business more accurately and, of course, to stay fresh. From Slack to Spotify to Facebook to Medium, they all have done it, and Google is no different. Google has recently revamped its G-Suite software, and it's now called Google Workspace. As a part of the rebranding strategy, the tech giant also overhauled the iconic logos of some of its popular productivity apps, including Gmail, Drive, Meet, Calendar, and others. As soon as Google rolled out the new set of icons, customers were fuming in despair and denial. For some, the new icons looked specifically designed for kids. But the backlash Google has received for its latest rebranding isn't surprising at all given its history. Before we dig into what's wrong with Google's new icon designs, let's take a moment to delve into its past design blunders.
https://medium.com/big-tech/google-extends-its-horrible-streak-with-a-new-set-of-icon-designs-ddedeb584684
['Anupam Chugh']
2020-10-29 14:16:23.299000+00:00
['Google', 'UI', 'Design', 'Business', 'UX']
Title Google extends horrible streak new set icon designsContent Google extends horrible streak new set icon design Everything wrong redesigned Google workspace logo Image Credits Google logo face identity company help set apart competitor visual design every brand look leave strong impression mind customer also building sense loyalty trust way logo provide visual clarity act like minimission statement Customers strongly associated favorite brand icon hence crave consistency familiarity Yet company evolve time compelled go design shift order represent current business accurately course staying afresh Slack Spotify Facebook Medium done Google different Google recently revamped GSuite software it’s called Google Workspace part rebranding strategy tech giant also rehauled iconic logo popular productivity apps including Gmail Drive Meet Calendar others soon Google rolled new set icon customer fuming despair denial new icon looked specifically designed kid backslash Google received latest rebranding isn’t surprising given history dig what’s wrong Google’s new icon design let’s take moment delve past design blundersTags Google UI Design Business UX
3,760
Spark & Databricks: Important Lessons from My First Six Months
1. Understanding Partitions 1.1 The Problem Perhaps Spark's most important feature for data processing is its DataFrame structures. These structures can be accessed in a similar manner to a Pandas DataFrame, for example, and support a PySpark API interface that enables you to perform most of the same transformations and functions. However, treating a Spark DataFrame in the same manner as a Pandas DataFrame is a common mistake, as it means that a lot of Spark's powerful parallelism is not leveraged. Whilst you may be interacting with a DataFrame variable in your Databricks notebook, this does not exist as a single object on a single machine; in fact, the physical structure of the data is vastly different under the surface. When first starting to use Spark, you may find that some operations take an inordinate amount of time when you feel that quite a simple operation or transformation is being applied. A key lesson to help with this problem, and to understand Spark in earnest, is learning about partitions of data, how these exist in the physical realm, and how operations are applied to them. 1.2 The Theory Beneath Databricks sits Apache Spark, a unified analytics engine designed for large-scale data processing that boasts up to 100x performance over the now somewhat outdated Hadoop. It utilises a cluster computing framework that enables workloads to be distributed across multiple machines and executed in parallel, which brings great speed improvements over using a single machine for data processing. Distributed computing is the single biggest breakthrough in data processing, since limitations in computing power on a single machine have forced us to scale out rather than scale up. Nevertheless, whilst Spark is extremely powerful, it must be used correctly in order to gain maximum benefit when using it for Big Data processing. This means changing your mindset from one where you may have been dealing with single tables sitting in a single file on a single machine, to this massively distributed framework where parallelism is your superpower. In Spark, you will often be dealing with data in the form of DataFrames, which are an intuitive and easy-to-access structured API that sits above Spark's core specialised and fundamental data structures known as RDDs (Resilient Distributed Datasets). These are logical collections of data partitioned across machines (distributed) and can be regenerated from a logical set of operations even if a machine in your cluster is down (resilient). The Spark SQL and PySpark APIs make interaction with these low-level data structures very accessible to developers who have experience in these respective languages; however, this can lead to a false sense of familiarity, as the underlying data structures themselves are so different. Distributed datasets in Spark do not exist on a single machine but exist as RDDs across multiple machines in the form of partitions. So although you may be interacting with a DataFrame in the Databricks UI, this actually represents an RDD sitting across multiple machines. Subsequently, when you call transformations, it is key to remember that these are not instructions applied locally to a single file; in the background, Spark is optimising your query so that these operations can be performed in the most efficient way across all partitions (see the explanation of Spark's catalyst optimiser).
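Not part of the original article, but a minimal PySpark sketch of how you might inspect and change a DataFrame's partitioning; the CSV path and the partition counts below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-demo").getOrCreate()

# Hypothetical input file; any DataFrame source behaves the same way.
df = spark.read.csv("/data/transactions.csv", header=True, inferSchema=True)

# Every DataFrame is backed by an RDD split into partitions; this reports how many.
print(df.rdd.getNumPartitions())

# Too few partitions limits parallelism: repartition() redistributes the rows
# across a chosen number of partitions (this triggers a shuffle).
df = df.repartition(64)

# coalesce() lowers the partition count without a full shuffle, which is handy
# just before writing out a small number of output files.
df_out = df.coalesce(8)
print(df_out.rdd.getNumPartitions())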
Figure 1 — Partitioned Datasets (image by the author) Taking the partitioned table in Figure 1 as an example, if a filter were called on this table, the Driver would send instructions to each of the workers to perform a filter on each coloured partition in parallel before combining the results together to form the final result. As you can see, for a huge table partitioned into 200+ partitions the speed benefit will be drastic when compared to filtering a single table. The number of partitions an RDD has determines the parallelism that Spark can achieve when processing it. This means that Spark can run one concurrent task for every partition your RDD has. Whilst you may be using a 20-core cluster, if your DataFrame only exists as one partition, your processing speed will be no better than if the processing were performed by a single machine, and Spark's speed benefits will not be observed. 1.3 Practical Usage This idea can be confusing at first and requires a switch in mindset to one of distributed computing. By switching your mindset, it can be easy to see why some operations may be taking much longer than usual. A good example of this is the difference between narrow and wide transformations. A narrow transformation is one in which a single input partition maps to a single output partition, for example a .filter()/.where(), in which each partition is searched for the given criteria and outputs at most a single partition. Figure 2 — Narrow transformation mapping (image by the author) A wide transformation is a much more expensive operation and is sometimes referred to as a shuffle in Spark. A shuffle goes against the ethos of Spark, which is that moving data should be avoided at all costs, as this is the most time-consuming and expensive aspect of any data processing. However, in many instances a wide transformation is necessary, such as when performing a .groupBy() or a join. Figure 3 — Wide transformation mapping (image by the author) In a narrow transformation, Spark will perform what is known as pipelining, meaning that if multiple filters are applied to the DataFrame then these will all be performed in memory. This is not possible for wide transformations and means that results will be written to disk, causing the operation to be much slower. This concept forces you to think carefully about how to achieve different outcomes with the data you are working with and how to most efficiently transform data without adding unnecessary overhead.
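As a quick illustration (my own sketch, reusing the hypothetical df from the snippet above and assuming it has "country" and "amount" columns), the physical plan makes the narrow/wide difference visible: the grouped query contains an Exchange (shuffle) step, while the filter does not.
# Narrow: each input partition maps to at most one output partition.
narrow = df.filter(df["amount"] > 100)

# Wide: rows with the same key must be shuffled onto the same partition.
wide = df.groupBy("country").count()

# explain() prints the physical plan; look for an Exchange node in the wide one.
narrow.explain()
wide.explain()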
https://towardsdatascience.com/spark-databricks-important-lessons-from-my-first-six-months-d9b26847f45d
['Daniel Harrington']
2020-09-25 14:46:04.967000+00:00
['Getting Started', 'Databricks', 'Apache Spark', 'Big Data', 'Data Engineering']
Title Spark Databricks Important Lessons First Six MonthsContent 1 Understanding Partitions 11 Problem Perhaps Spark’s important feature data processing DataFrame structure structure accessed similar manner Pandas Dataframe example support Pyspark API interface enables perform transformation function However treating Spark DataFrame manner Pandas DataFrame common mistake mean lot Spark’s powerful parallelism leveraged Whilst may interacting DataFrame variable Databricks notebook exist single object single machine fact physical structure data vastly different surface first starting use Spark may find operation taking inordinate amount time feel quite simple operation transformation applied key lesson help problem understanding Spark earnest learning partition data exist physical realm well operation applied 12 Theory Beneath Databricks sits Apache Spark unified analytics engine designed large scale data processing boast 100x performance somewhat outdated Hadoop utilises cluster computing framework enables workload distributed across multiple machine executed parallel great speed improvement using single machine data processing Distributed computing single biggest breakthrough data processing since limitation computing power single machine forced u scale rather scale Nevertheless whilst Spark extremely powerful must used correctly order gain maximum benefit using Big Data Processing mean changing mindset one may dealing single table sitting single file single machine massively distributed framework parallelism superpower Spark often dealing data form DataFrames intuitive easy access structured API sits Spark’s core specialised fundamental data structure known RDDs Resilient Distributed Datasets logical collection data partitioned across machine distributed regenerated logical set operation even machine cluster resilient Spark SQL PySpark APIs make interaction lowlevel data structure accessible developer experience respective language however lead false sense familiarity underlying data structure different Distributed datasets common Spark exist single machine exists RDDs across multiple machine form partition although may interacting DataFrame Databricks UI actually represents RDD sitting across multiple machine Subsequently call transformation key remember instruction applied locally single file background Spark optimising query operation performed efficient way across partition explanation Spark’s catalyst optimiser Figure 1 — Partitioned Datasets image author Taking paritioned table Figure 1 example filter called table Driver would actually send instruction worker perform filter coloured partition parallel combining result together form final result see huge table partitioned 200 partition speed benefit drastic compared filtering single table number partition RDD determines parallelism Spark achieve processing mean Spark run one concurrent task every partition RDD Whilst may using 20 core cluster DataFrame exists one partition processing speed better processing performed single machine Spark’s speed benefit observed 13 Practical Usage idea confusing first requires switch mindset one distributed computing switching mindset easy see operation may taking much longer usual good example difference narrow wide transformation narrow transformation one single input partition map single output partition example filterwhere partition searched given criterion output single partition Figure 2 — Narrow transformation mapping image author wide transformation much expensive operation sometimes referred 
shuffle Spark shuffle go ethos Spark moving data avoided cost time consuming expensive aspect data processing However obviously necessary many instance wide transformation performing groupBy join Figure 3— Wide transformation mapping image author narrow transformation Spark perform known pipelining meaning multiple filter applied DataFrame performed memory possible wide transformation mean result written disk causing operation much slower concept force think carefully achieve different outcome data working efficiently transform data without adding unnecessary overheadTags Getting Started Databricks Apache Spark Big Data Data Engineering
3,761
Windows 2. With my one-year anniversary writing…
Sunlight in Cafeteria, 1958, ©Edward Hopper, Fair Use With my one-year anniversary as a writer for Medium coming up, I decided to edit and revise the very first piece of writing I uploaded last November. I had not a single fan then, and this piece went virtually unnoticed until about six months later, when one A. Maguire picked up on the essay and gave me a fifty. Though I had only one fan, I was very pleased, because this essay is one of my personal favorites, in that I was able to express some ideas about windows and light and their relationship to human creativity and art — these ideas had been floating around in my head for some time, without an outlet, before I discovered Medium. I present them here to you. Thank you, A. Maguire! And thank you, Medium. Windows give shape to light; moving like the hand of a clock across the walls of our interiors, they shape and define the light, giving us a sense that we are moving through time and space. The window has been used as a device in painting throughout the ages to put human form in perspective and define it in relation to light — and used to define light in relation to the subject as well. Woman With a Lute by Johannes Vermeer, Courtesy Metropolitan Museum of Art, Fair Use The Astronomer, The Geographer, The Woman With a Lute, to name just a few subjects, all occupied the space before the window of the 17th-century Dutch painter Johannes Vermeer. In just about every one of Vermeer's interiors, the window is included, as if to highlight the significance of the way it affects the quality of light on the human form and its activity in space and time. The window is the visual starting point of the light, which almost always travels left to right, helping us to read the painting as we would read a book — but it also makes a suggestion as to the ultimate source of the light. Windows are very often featured in the paintings of the 20th-century American artist Edward Hopper: Morning Sun, Excursions into Philosophy, Early Sunday Morning, Cape Cod Morning, August in the City, and many others. Hopper's human figures are more like mannequins who stand in a department store window to feature a dress or suit — it is the light that is being shown off to the viewer in his paintings. A square or rectangle on the floor, a form on the wall echoing the shape of a window, a bleached-out tablecloth, or a face rendered almost featureless in full sunlight. Hopper presents a moment in time in 1958 in Sunlight in Cafeteria, where the sun lights a scene from a cafeteria window that fills the length and width of the room. A woman sits alone at a table in the sunlight of the window, eyes cast downward at her hands, her shadow falling onto the lower corner of the bright wall behind her. A man with corpselike features sits at a table near the foreground with a cigarette in his hand, but seems to be looking beyond her to the street outside. Both characters seem unaware of one another. The only thing that seems to connect them is the unbroken light falling through the window — it seems to form and acknowledge their existence. Windows allow us to watch our world from a position of comfort. We look out from them with a reassurance that we are safe and warm — within them, we are isolated from the dangers that nature brings, enabling us to admire it from afar.
Think of a cabin in the woods at night with a lone window shining a square or rectangle of light on to the ground outside, with the moon above reflecting light from the sun and silhouetting a line of trees in the distance — a cliché, but nonetheless an attractive image that humans connect with. It conjures feelings of peace, security, and hope that the world can be a place in which we can feel at home — coziness, all is well in the world. Illustration of Thoreau’s Cabin by Sophia Thoreau, Public Domain We can sit by the fire, or near our candle or lamp, and listen to the sounds of night beyond the window, and frighten ourselves with the possibility that we could be out there, hunted and ravaged by the wild. We can go to the window, brush off the damp and peer out, and feel enamored of the calling wilderness, rather than at odds with its wildness. Many writers place their desk before a window so that they can look outside as they write, and get a better view of their inner life. I imagine Henry David Thoreau sitting in his cabin window late at night, scratching away in his journal by lamplight, looking up now and then to pause at the hooting owl, or the dark, passing clouds over the full moon— you can find many window metaphors in his writings. From behind our window we can feel poetic rather than fearful. Emily Dickinson enclosed herself behind the windows of her home in Amherst, Massachusetts, looking out on the world from within — her windows served as muse, metaphor, protector, and inner light for her poetry. In I Dwell in Possibility, she says that, in the house of her mind, there are more windows than doors, providing an opening to creative heights reaching all the way to the limitless heavens. Emily Dickinson Bedroom, Courtesy Historic Preservation Associates As a child, I remember being transfixed by the stained glass windows in church, my eye moving from one detail to another of rich reds and electric blues of robes and sky, golden halos and richly detailed eyes cast toward heaven and the glowing dove waiting above, ready to descend into the souls of those characters below, not to possess, but to illuminate — all framed in lead and black, and radiant with the light of Sunday morning. Stained Glass, ©V.Plut The artist who used glass as his canvas in religious architecture knew the value of light and window to awaken a spirit to sublime possibilities after death. Later in life, as my father neared death, we looked together out his window at the falling snow, he perhaps thinking of what lay beyond, and me thinking about the many moments ahead, looking out at the snow without him. I was never so consciously and fully aware of a shared moment in time with another human being, as then, capturing it for the remainder of my life and maybe for eternity. Death itself may be a window of sorts. Looking out our windows seems to hold time, slowing it down, so that we can be aware of the timeless world of the subconscious. By gazing into the crystal ball of our window, we can bring our subconscious into the foreground, momentarily distracting the barrier of the conscious mind. For every ray of light falling on matter though, there is a shadow. The windows of our computers are like television — we experience the world through them in a much different kind of way, surfing around the planet, as if this were a world in which we no longer live, but only visit, a world we control from the comfort of our keypads, apps, and clouds. 
We tap the miniature windows of our smartphones, as if we are trying to get out of, or go into, another world, the way Emily Bronte has Catherine tapping on the window in Wuthering Heights, beckoning Heathcliff to join her spirit for eternity, beyond the portal pane. If there is ever a scenario created by a modern writer of Science Fiction, in which computers come alive to take over our world, which some people see as a possibility, it would be one in which we sit alone beside our cappuccinos, enveloped in a block of Hopperesque light streaming through a café window, our conscious minds falling into a trance as we gaze into the rectangle of light emanating from our artificial windows, allowing our machine to merge with our subconscious and awaken to its own existence. Perhaps this has already happened.
https://vplut.medium.com/windows-ii-7e33325ffbcf
['V. Plut']
2018-11-02 12:15:23.011000+00:00
['Creativity', 'Light', 'Nonfiction', 'Art', 'Literature']
Title Windows 2 oneyear anniversary writing…Content Sunlight Cafeteria 1958 ©Edward Hopper Fair Use oneyear anniversary writer Medium coming decided edit revise first piece writing uploaded last November single fan piece went virtually unnoticed six month later one Maguire picked essay gave fifty Though one fan pleased essay one personal favorite able express idea window light relationship human creativity art — idea floating around head time without outlet discovered Medium present Thank Maguire thank Medium Windows give shape light moving like hand clock across wall interior shape define light giving u sense moving time space window used device painting throughout age put human form perspective define relation light — used define light relation subject well Woman Lute Johannes Vermeer Courtesy Metropolitan Museum Art Fair Use Astronomer Geographer Woman Lute name subject occupied space window 17th century Dutch painter Johannes Vermeer every one Vermeer’s interior window included highlight significance way affect quality light human form activity space time window visual starting point light almost always travel left right helping u read painting would read book — also make suggestion ultimate source light Windows often featured painting 20th century American artist Edward Hopper Morning Sun Excursions Philosophy Early Sunday Morning Cape Cod Morning August City many others Hopper’s human figure like mannequin stand department store window feature dress suit — light shown viewer painting square rectangle floor form wall echoing shape window bleached tablecloth face rendered almost featureless full sunlight Hopper present moment time 1958 Sunlight Cafeteria sun light scene cafeteria window fill length width room woman sits alone table sunlight window eye cast downward hand shadow falling onto lower corner bright wall behind man corpselike feature sits table near foreground cigarette hand seems looking beyond street outside character seem unaware one another thing seems connect unbroken light falling window — seems form acknowledge existence Windows allow u watch world position comfort look reassurance safe warm — within isolated danger nature brings enabling u admire afar Think cabin wood night lone window shining square rectangle light ground outside moon reflecting light sun silhouetting line tree distance — cliché nonetheless attractive image human connect conjures feeling peace security hope world place feel home — coziness well world Illustration Thoreau’s Cabin Sophia Thoreau Public Domain sit fire near candle lamp listen sound night beyond window frighten possibility could hunted ravaged wild go window brush damp peer feel enamored calling wilderness rather odds wildness Many writer place desk window look outside write get better view inner life imagine Henry David Thoreau sitting cabin window late night scratching away journal lamplight looking pause hooting owl dark passing cloud full moon— find many window metaphor writing behind window feel poetic rather fearful Emily Dickinson enclosed behind window home Amherst Massachusetts looking world within — window served muse metaphor protector inner light poetry Dwell Possibility say house mind window door providing opening creative height reaching way limitless heaven Emily Dickinson Bedroom Courtesy Historic Preservation Associates child remember transfixed stained glass window church eye moving one detail another rich red electric blue robe sky golden halo richly detailed eye cast toward heaven glowing dove waiting ready descend 
soul character posse illuminate — framed lead black radiant light Sunday morning Stained Glass ©VPlut artist used glass canvas religious architecture knew value light window awaken spirit sublime possibility death Later life father neared death looked together window falling snow perhaps thinking lay beyond thinking many moment ahead looking snow without never consciously fully aware shared moment time another human capturing remainder life maybe eternity Death may window sort Looking window seems hold time slowing aware timeless world subconscious gazing crystal ball window bring subconscious foreground momentarily distracting barrier conscious mind every ray light falling matter though shadow window computer like television — experience world much different kind way surfing around planet world longer live visit world control comfort keypad apps cloud tap miniature window smartphones trying get go another world way Emily Bronte Catherine tapping window Wuthering Heightsbeckoning Heathcliff join spirit eternity beyond portal pane ever scenario created modern writer Science Fiction computer come alive take world people see possibility would one sit alone beside cappuccino enveloped block Hopperesque light streaming café window conscious mind falling trance gaze rectangle light emanating artificial window allowing machine merge subconscious awaken existence Perhaps already happenedTags Creativity Light Nonfiction Art Literature
3,762
4 Meditation Practices to Tap into Your Creative Potential
4 Meditation Practices to Tap into Your Creative Potential Creative flow also comes from being mindful of your thoughts Photo by Kreated Media on Unsplash When researchers studied the yogis with the most hours of meditation, they were surprised to discover their ability to produce high-frequency gamma waves in their brains. This state is a sign of intense activity, a kind of "Eureka effect" also present when we realize new connections between our ideas. These yogis learned, through long and rigorous work, to put their minds into a state of strong creative energy. According to Daniel Goleman and Richard Davidson in The Science of Meditation, although we may never achieve the expertise of these masters, studies show that the practice of meditation triggers different states conducive to creativity. By increasing your self-confidence, by clarifying and emptying your mind of distractions, by making you focus deeply on your thoughts and reflections, and by activating selective brain waves, it opens your mind to new creative resources. Here's how four meditative practices can give you new connections between your ideas and emotions.
https://medium.com/thinking-up/4-meditation-practices-to-tap-into-your-creative-potential-b678a689dc4a
['Jean-Marc Buchert']
2020-09-11 14:22:22.887000+00:00
['Mindfulness', 'Meditation', 'Productivity', 'Creativity', 'Self Improvement']
Title 4 Meditation Practices Tap Creative PotentialContent 4 Meditation Practices Tap Creative Potential Creative flow come also mindful thought Photo Kreated Media Unsplash researcher studied yogi hour meditation discovered surprise ability produce highfrequency gamma wave brain state sign intense activity kind “Eureka effect” also present realize new connection idea Yogi learned long rigorous work put mind state strong creative energy According Daniel Goleman Richard Davidson Science Meditation although may never achieve expertise master study show practice meditation trigger different state conducive creativity increasing selfconfidence clarifying emptying mind distraction making deeply focus thought reflection activating selective brain wave open mind new creative resource Here’s 4 meditative practice give new connection idea emotionsTags Mindfulness Meditation Productivity Creativity Self Improvement
3,763
Visualizing Intersections and Overlaps with Python
Venn Diagrams Let’s start with a simple and very familiar solution, Venn diagrams. I’ll use Matplotlib-Venn for this task. import pandas as pd import numpy as np import matplotlib.pyplot as plt from matplotlib_venn import venn3, venn3_circles from matplotlib_venn import venn2, venn2_circles Now let’s load the dataset and prepare the data we want to analyze. The question we’ll check is, “Which of these best describes your role as a data visualizer in the past year?”. The answers to this question are distributed in 6 columns, one for each response. If the respondent selected that answer, the field will have a text. If not, it’ll be empty. We’ll convert that data to 6 lists containing the indexes of the users that selected each response. df = pd.read_csv('data/2020/DataVizCensus2020-AnonymizedResponses.csv') nm = 'Which of these best describes your role as a data visualizer in the past year?' d1 = df[~df[nm].isnull()].index.tolist() # independent d2 = df[~df[nm+'_1'].isnull()].index.tolist() # organization d3 = df[~df[nm+'_2'].isnull()].index.tolist() # hobby d4 = df[~df[nm+'_3'].isnull()].index.tolist() # student d5 = df[~df[nm+'_4'].isnull()].index.tolist() # teacher d6 = df[~df[nm+'_5'].isnull()].index.tolist() # passive income Venn diagrams are straightforward to use and understand. We need to pass the sets with the key/ids we’ll analyze. If it’s an intersection of two sets, we use Venn2; if it's three sets, we use Venn3. venn2([set(d1), set(d2)]) plt.show() Venn Diagram — Image by the author Great! With Venn Diagrams, we can clearly display that 201 respondents selected A and didn’t select B, 974 selected B and didn’t select A, and 157 selected both. We can even customize some aspects of the chart. venn2([set(d1), set(d2)], set_colors=('#3E64AF', '#3EAF5D'), set_labels = ('Freelance Consultant Independent contractor', 'Position in an organization with some data viz job responsibilities'), alpha=0.75) venn2_circles([set(d1), set(d2)], lw=0.7) plt.show() Venn Diagram — Image by the author venn3([set(d1), set(d2), set(d5)], set_colors=('#3E64AF', '#3EAF5D', '#D74E3B'), set_labels = ('Freelance Consultant Independent contractor', 'Position in an organization with some data viz job responsibilities', 'Academic Teacher'), alpha=0.75) venn3_circles([set(d1), set(d2), set(d5)], lw=0.7) plt.show() Venn Diagram — Image by the author That’s great, but what if we want to display the overlaps of more than 3 sets? Well, there are a couple of possibilities. We could use multiple diagrams, for example. 
labels = ['Freelance Consultant Independent contractor', 'Position in an organization with some data viz job responsibilities', 'Non-compensated data visualization hobbyist', 'Student', 'Academic/Teacher', 'Passive income from data visualization related products'] c = ('#3E64AF', '#3EAF5D') # subplot indexes txt_indexes = [1, 7, 13, 19, 25] title_indexes = [2, 9, 16, 23, 30] plot_indexes = [8, 14, 20, 26, 15, 21, 27, 22, 28, 29] # combinations of sets title_sets = [[set(d1), set(d2)], [set(d2), set(d3)], [set(d3), set(d4)], [set(d4), set(d5)], [set(d5), set(d6)]] plot_sets = [[set(d1), set(d3)], [set(d1), set(d4)], [set(d1), set(d5)], [set(d1), set(d6)], [set(d2), set(d4)], [set(d2), set(d5)], [set(d2), set(d6)], [set(d3), set(d5)], [set(d3), set(d6)], [set(d4), set(d6)]] fig, ax = plt.subplots(1, figsize=(16,16)) # plot texts for idx, txt_idx in enumerate(txt_indexes): plt.subplot(6, 6, txt_idx) plt.text(0.5,0.5, labels[idx+1], ha='center', va='center', color='#1F764B') plt.axis('off') # plot top plots (the ones with a title) for idx, title_idx in enumerate(title_indexes): plt.subplot(6, 6, title_idx) venn2(title_sets[idx], set_colors=c, set_labels = (' ', ' ')) plt.title(labels[idx], fontsize=10, color='#1F4576') # plot the rest of the diagrams for idx, plot_idx in enumerate(plot_indexes): plt.subplot(6, 6, plot_idx) venn2(plot_sets[idx], set_colors=c, set_labels = (' ', ' ')) plt.savefig('venn_matrix.png') Venn Diagram Matrix — Image by the author That’s ok, but it didn’t really solve the problem. We can’t tell if there’s someone who selected all answers, nor can we tell the intersection of three sets. What about a Venn with four circles? Four circles — Image by the author Here is where things start to get complicated. In the above image, there is no intersection for only blue and green. To solve that, we can use ellipses instead of circles. I’ll use PyVenn for the next example. from venn import venn sets = { labels[0]: set(d1), labels[1]: set(d2), labels[2]: set(d3), labels[3]: set(d4) } fig, ax = plt.subplots(1, figsize=(16,12)) venn(sets, ax=ax) plt.legend(labels[:-2], ncol=6) Venn Diagram — Image by the author Alright, there it is! But, we lost a critical encoding in our diagram — the size. The blue (807) is smaller than the yellow (62), which doesn’t help much in visualizing the data. We can use the legends and the labels to figure what is what, but using a table would be clearer than this. There are a few implementations of area proportional Venn diagrams that can handle more than three sets, but I couldn’t find any in Python.
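As a small addition of my own (not in the original walkthrough), when a proportional diagram stops working, a plain matrix of pairwise intersection sizes built from the d1 to d6 lists and the labels defined above can be clearer than forcing a four-plus-set Venn, in the spirit of the "a table would be clearer" remark.
import pandas as pd

set_list = [set(d1), set(d2), set(d3), set(d4), set(d5), set(d6)]

# Pairwise intersection counts; the diagonal holds each set's own size.
overlap = pd.DataFrame(
    [[len(a & b) for b in set_list] for a in set_list],
    index=labels, columns=labels
)
print(overlap)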
https://towardsdatascience.com/visualizing-intersections-and-overlaps-with-python-a6af49c597d9
['Thiago Carvalho']
2020-12-16 12:46:20.541000+00:00
['Data Visualization', 'Python', 'Matplotlib', 'Data Science', 'Editors Pick']
Title Visualizing Intersections Overlaps PythonContent Venn Diagrams Let’s start simple familiar solution Venn diagram I’ll use MatplotlibVenn task import panda pd import numpy np import matplotlibpyplot plt matplotlibvenn import venn3 venn3circles matplotlibvenn import venn2 venn2circles let’s load dataset prepare data want analyze question we’ll check “Which best describes role data visualizer past year” answer question distributed 6 column one response respondent selected answer field text it’ll empty We’ll convert data 6 list containing index user selected response df pdreadcsvdata2020DataVizCensus2020AnonymizedResponsescsv nm best describes role data visualizer past year d1 dfdfnmisnullindextolist independent d2 dfdfnm1isnullindextolist organization d3 dfdfnm2isnullindextolist hobby d4 dfdfnm3isnullindextolist student d5 dfdfnm4isnullindextolist teacher d6 dfdfnm5isnullindextolist passive income Venn diagram straightforward use understand need pas set keyids we’ll analyze it’s intersection two set use Venn2 three set use Venn3 venn2setd1 setd2 pltshow Venn Diagram — Image author Great Venn Diagrams clearly display 201 respondent selected didn’t select B 974 selected B didn’t select 157 selected even customize aspect chart venn2setd1 setd2 setcolors3E64AF 3EAF5D setlabels Freelance Consultant Independent contractor Position organization data viz job responsibility alpha075 venn2circlessetd1 setd2 lw07 pltshow Venn Diagram — Image author venn3setd1 setd2 setd5 setcolors3E64AF 3EAF5D D74E3B setlabels Freelance Consultant Independent contractor Position organization data viz job responsibility Academic Teacher alpha075 venn3circlessetd1 setd2 setd5 lw07 pltshow Venn Diagram — Image author That’s great want display overlap 3 set Well couple possibility could use multiple diagram example label Freelance Consultant Independent contractor Position organization data viz job responsibility Noncompensated data visualization hobbyist Student AcademicTeacher Passive income data visualization related product c 3E64AF 3EAF5D subplot index txtindexes 1 7 13 19 25 titleindexes 2 9 16 23 30 plotindexes 8 14 20 26 15 21 27 22 28 29 combination set titlesets setd1 setd2 setd2 setd3 setd3 setd4 setd4 setd5 setd5 setd6 plotsets setd1 setd3 setd1 setd4 setd1 setd5 setd1 setd6 setd2 setd4 setd2 setd5 setd2 setd6 setd3 setd5 setd3 setd6 setd4 setd6 fig ax pltsubplots1 figsize1616 plot text idx txtidx enumeratetxtindexes pltsubplot6 6 txtidx plttext0505 labelsidx1 hacenter vacenter color1F764B pltaxisoff plot top plot one title idx titleidx enumeratetitleindexes pltsubplot6 6 titleidx venn2titlesetsidx setcolorsc setlabels plttitlelabelsidx fontsize10 color1F4576 plot rest diagram idx plotidx enumerateplotindexes pltsubplot6 6 plotidx venn2plotsetsidx setcolorsc setlabels pltsavefigvennmatrixpng Venn Diagram Matrix — Image author That’s ok didn’t really solve problem can’t tell there’s someone selected answer tell intersection three set Venn four circle Four circle — Image author thing start get complicated image intersection blue green solve use ellipsis instead circle I’ll use PyVenn next example venn import venn set labels0 setd1 labels1 setd2 labels2 setd3 labels3 setd4 fig ax pltsubplots1 figsize1612 vennsets axax pltlegendlabels2 ncol6 Venn Diagram — Image author Alright lost critical encoding diagram — size blue 807 smaller yellow 62 doesn’t help much visualizing data use legend label figure using table would clearer implementation area proportional Venn diagram handle three set couldn’t find PythonTags 
Data Visualization Python Matplotlib Data Science Editors Pick
3,764
Why Deep Learning Isn’t Always the Best Option
Why Deep Learning Isn’t Always the Best Option And what to use instead. Deep learning — a subset of machine learning where big data is used to train neural networks — can do incredible things. Even amidst all the mayhem of 2020, deep learning brought astonishing breakthroughs in a variety of industries, including natural language (OpenAI’s GPT-3), self-driving (Tesla’s FSD beta), and neuroscience (Neuralink’s neural decoding). However, deep learning is limited in several ways. Deep Learning Lacks Explainability In March 2018, Walter Huang was driving his Tesla on Autopilot in Mountain View, when it suddenly crashed into a safety barrier at 70mph, taking his life. Many AI systems today make life-or-death decisions, not just self-driving cars. We trust AI to classify cancers, track the spread of COVID-19, and even detect weapons in surveillance camera systems. When these systems fail, the cost is devastating and final. We can’t bring back a human life. Unfortunately, AI systems fail all the time. It’s called “error.” When they fail, we want explanations. We want to understand the why. However, deep neural networks and ensembles can’t easily give us the answers we need. They’re called “black box” models, because we can’t look through them. Transparency isn’t just critical in life-or-death systems, but in everyday financial models, credit risk models, and so on. If a middle-aged person saving for retirement suddenly loses their financial safety net, there better be explainability. Deep Learning Has a Propensity to Overfit Overfitting is when a model learns the training data well, but fails to generalize to new data. For instance, if you were to build a trading model to predict financial prices using a neural network, you’ll inevitably come up with an overly-complex model that has high accuracy on the training data, but fails in the real world. In general, neural networks — particularly deep learning — are more susceptible to overfitting than simple models like logistic regression. “In logistic regression, the model complexity is already low, especially when no or few interaction terms and variable transformations are used. Overfitting is less of an issue in this case... Compared to logistic regression, neural network models are more flexible, and thus more susceptible to overfitting.” Deep Learning is More Expensive Building deep learning models can be expensive, as AI talent income easily runs into the six figures. It doesn’t stop there. Deploying deep learning models is expensive as well, as these large, heavy networks consume a lot of computing resources. For instance, as of writing, OpenAI’s GPT-3 Davinci, a natural language engine, costs $0.06 per 1,000 tokens. This may seem very cheap, but these costs quickly add up when you’re dealing with thousands or even millions of users. Let’s compare with traditional machine learning models. Making a prediction with a 2-layer neural network on a CPU costs around 0.0063 Joules, or 0.00000000175 kWh. For all intents and purposes, the cost of a single prediction is negligible. The Solution — Explainable, Simple, Affordable Models Fortunately, it’s easier than ever to create explainable, simple, and affordable machine learning models, using a technique called AutoML, or automated machine learning, which automatically creates a variety of machine learning models given a dataset, and selects the most accurate model. 
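To make the model-selection idea concrete, here is a minimal scikit-learn sketch of my own (this is not the Obviously.AI product, whose API the article does not show): fit a few simple, explainable candidate models and keep whichever scores best under cross-validation, using a stand-in dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Stand-in dataset; in practice this would be your own tabular data.
X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(max_depth=3, random_state=0),
}

# Score every candidate with 5-fold cross-validation and pick the best one.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
print(scores, "-> best:", best_name)

# Both candidates stay explainable: coefficients for the linear model,
# a small readable rule set for the shallow tree.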
AutoML isn't a new phenomenon, but it has become especially easy in recent years due to the rise of no-code, enabling effortless machine learning tools like Obviously.AI. In 2010, MIT discussed a "common computer science technique called automated machine learning," but back then, you'd still need developers to use AutoML tools. Today, anyone can build and deploy explainable, simple, and affordable AI without any coding or technical skills. Gain Access to Expert View — Subscribe to DDI Intel
https://medium.com/datadriveninvestor/why-deep-learning-isnt-always-the-best-option-b264be56b8b9
['Obviously Ai']
2020-12-27 17:02:10.131000+00:00
['Data Science', 'AI', 'Artificial Intelligence', 'Data Analysis', 'Deep Learning']
Title Deep Learning Isn’t Always Best OptionContent Deep Learning Isn’t Always Best Option use instead Deep learning — subset machine learning big data used train neural network — incredible thing Even amidst mayhem 2020 deep learning brought astonishing breakthrough variety industry including natural language OpenAI’s GPT3 selfdriving Tesla’s FSD beta neuroscience Neuralink’s neural decoding However deep learning limited several way Deep Learning Lacks Explainability March 2018 Walter Huang driving Tesla Autopilot Mountain View suddenly crashed safety barrier 70mph taking life Many AI system today make lifeordeath decision selfdriving car trust AI classify cancer track spread COVID19 even detect weapon surveillance camera system system fail cost devastating final can’t bring back human life Unfortunately AI system fail time It’s called “error” fail want explanation want understand However deep neural network ensemble can’t easily give u answer need They’re called “black box” model can’t look Transparency isn’t critical lifeordeath system everyday financial model credit risk model middleaged person saving retirement suddenly loses financial safety net better explainability Deep Learning Propensity Overfit Overfitting model learns training data well fails generalize new data instance build trading model predict financial price using neural network you’ll inevitably come overlycomplex model high accuracy training data fails real world general neural network — particularly deep learning — susceptible overfitting simple model like logistic regression “In logistic regression model complexity already low especially interaction term variable transformation used Overfitting le issue case Compared logistic regression neural network model flexible thus susceptible overfitting” Deep Learning Expensive Building deep learning model expensive AI talent income easily run six figure doesn’t stop Deploying deep learning model expensive well large heavy network consume lot computing resource instance writing OpenAI’s GPT3 Davinci natural language engine cost 006 per 1000 token may seem cheap cost quickly add you’re dealing thousand even million user Let’s compare traditional machine learning model Making prediction 2layer neural network CPU cost around 00063 Joules 000000000175 kWh intent purpose cost single prediction negligible Solution — Explainable Simple Affordable Models Fortunately it’s easier ever create explainable simple affordable machine learning model using technique called AutoML automated machine learning automatically creates variety machine learning model given dataset selects accurate model AutoML isn’t new phenonmenon become especially easy recent year due rise nocode enabling effortless machine learning tool like ObviouslyAI 2010 MIT discussed “common computer science technique called automated machine learning” back you’d still need developer use AutoML tool Today anyone build deploy explainable simple affordable AI without coding technical skill Gain Access Expert View — Subscribe DDI IntelTags Data Science AI Artificial Intelligence Data Analysis Deep Learning
3,765
Ternary Conditional Operators in Python
Ternary Conditional Operators in Python Mastering Efficient List and Dictionary Comprehension Photo by Belinda Fewings on Unsplash Python is versatile, and its goal is to make development easier for the user. Compared to C# or Java, which are notoriously cumbersome to master, Python is relatively easy to get good at. Moreover, it's relatively easy to get pretty damn good at. List and Dictionary Comprehension are widely used methods, but something I find is used a bit less (especially by beginners) is the Ternary Conditional Operator. The operator really streamlines your code, making it both visually cleaner and more economical to write and work with. Just don't make it too complicated! They're pretty easy to get your head around, so let's get into it.
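Before diving in, here is a short illustration of my own (not from the article) of the ternary conditional operator on its own and inside list and dictionary comprehensions.
age = 20
status = "adult" if age >= 18 else "minor"   # value_if_true if condition else value_if_false

numbers = [3, 8, 1, 12, 7]

# Inside a list comprehension: label each number.
size_labels = ["big" if n > 5 else "small" for n in numbers]   # ['small', 'big', 'small', 'big', 'big']

# Inside a dict comprehension: cap each value at 10.
capped = {n: (n if n <= 10 else 10) for n in numbers}          # {3: 3, 8: 8, 1: 1, 12: 10, 7: 7}

print(status, size_labels, capped)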
https://medium.com/code-python/ternary-conditional-operators-in-python-6007031a033a
['Mohammad Ahmad']
2020-06-09 10:27:57.163000+00:00
['Coding', 'Programming', 'Software Development', 'Artificial Intelligence', 'Python']
Title Ternary Conditional Operators PythonContent Ternary Conditional Operators Python Mastering Efficient List Dictionary Comprehension Photo Belinda Fewings Unsplash Python versatile use goal make development easier user Compared C Java notoriously cumbersome master Python relatively easy get good Moreover it’s relatively easy get pretty damn good List Dictionary Comprehension widely used method something find that’s bit le used especially beginner Ternary Conditional Operator method really streamlines code make visually economically better run deal don’t make complicated They’re pretty easy get head around let’s get itTags Coding Programming Software Development Artificial Intelligence Python
3,766
Building a scalable machine vision pipeline
Kevin Jing | Pinterest engineering manager, Visual Discovery Discovery on Pinterest is all about finding things you love, even if you don't know at first what you're looking for. The Visual Discovery engineering team at Pinterest is tasked with helping people continue to do just that, by building technology that understands the objects in a Pin's image to get an idea of what a Pinner is looking for. Over the last year we've been building a large-scale, cost-effective machine vision pipeline and stack with widely available tools and just a few engineers. We faced two main challenges in deploying a commercial visual search system at Pinterest: As a startup, we needed to control the development cost in the form of both human and computational resources. Feature computation can become expensive with a large and continuously growing image collection, and with engineers constantly experimenting with new features to deploy, it's vital for our system to be both scalable and cost-effective. The success of a commercial application is measured by the benefit it brings to the user (e.g., improved user engagement) relative to the cost of development and maintenance. As a result, our development progress needed to be frequently validated through A/B experiments with live user traffic. Today we're sharing some new technologies we're experimenting with, as well as a white paper, accepted for publication at KDD 2015, that details our system architecture and insights from these experiments and makes the following contributions: We present a scalable and cost-effective implementation of a commercially deployed visual search engine using mostly open-source tools. The tradeoff between performance and development cost makes our architecture more suitable for small- and medium-sized businesses. We conduct a comprehensive set of experiments using a combination of benchmark datasets and A/B testing on two Pinterest applications, Related Pins and an experiment with similar looks, with details below. Experiment 1: Related Pin recommendations It used to be that if a Pin had never before been saved on Pinterest, we weren't able to provide Related Pins recommendations. This is because Related Pins were primarily generated from traversing the local "curation graph," the tripartite user-board-image graph that evolved organically through human curation. As a result, "long tail" Pins, or Pins that lie on the outskirts of this curation graph, have so few neighbors that graph-based approaches do not yield enough relevant recommendations. By augmenting the recommendation system, we are now able to recommend Pins for almost all Pins on Pinterest, as shown below. Figure 1. Before and after adding visual search to Related Pin recommendations. Experiment 2: Enhanced product recommendations by object recognition This experiment allowed us to show visually similar Pin recommendations based on specific objects in a Pin's image. We're starting off by experimenting with ways to surface object recognition that would enable Pinners to click into the objects (e.g. bags, shoes, etc.) as shown below. We can use object recognition to detect products such as bags, shoes and skirts from a Pin's image. From these detected objects, we extract visual features to generate product recommendations ("similar looks"). In the initial experiment, a Pinner would discover recommendations if there was a red dot on the object in the Pin (see below).
Clicking on the red dot loads a feed of Pins featuring visually similar objects. We’ve evolved the red dot experiment to try other ways of surfacing visually similar recommendations for specific objects, and will have more to share later this year. Figure 2. We apply object detection to localize products such as bags and shoes. In this prototype, Pinners click on objects of interest to view similar-looking products. By sharing our implementation details and the experience of launching products, we hope visual search can be more widely incorporated into today’s commercial applications. With billions of Pins in the system curated by individuals, we have one of the largest and most richly annotated datasets online, and these experiments are a small sample of what’s possible at Pinterest. We’re building a world-class deep learning team and are working closely with members of the Berkeley Vision and Learning Center. We’ve been lucky enough to have some of them join us over the past few months. If you’re interested in exploring these datasets and helping us build visual discovery and search technology, join our team! Kevin Jing is an engineering manager on the Visual Discovery team. He previously founded Visual Graph, a company acquired by Pinterest in January 2014. Acknowledgements: This work is a joint effort by members of the Visual Discovery team, David Liu, Jiajing Xu, Dmitry Kislyuk, Andrew Zhai, Jeff Donahue and our product manager Sarah Tavel. We’d like to thank the engineers from several other teams for their assistance in developing scalable search solutions. We’d also like to thank Jeff Donahue, Trevor Darrell and Eric Tzeng from the Berkeley Caffe team. For Pinterest engineering news and updates, follow our engineering Pinterest, Facebook and Twitter. Interested in joining the team? Check out our Careers site.
https://medium.com/pinterest-engineering/building-a-scalable-machine-vision-pipeline-60dd7bac73e7
['Pinterest Engineering']
2017-02-21 21:00:04.997000+00:00
['Machine Learning', 'Deep Learning', 'Engineering', 'Computer Vision']
Title Building scalable machine vision pipelineContent Kevin Jing Pinterest engineering manager Visual Discovery Discovery Pinterest finding thing love even don’t know first you’re looking Visual Discovery engineering team Pinterest tasked building technology help people continue building technology understands object Pin’s image get idea Pinner looking last year we’ve building largescale costeffective machine vision pipeline stack widely available tool engineer faced two main challenge deploying commercial visual search system Pinterest startup needed control development cost form human computational resource Feature computation become expensive large continuously growing image collection engineer constantly experimenting new feature deploy it’s vital system scalable costeffective success commercial application measured benefit brings user eg improved user engagement relative cost development maintenance result development progress needed frequently validated AB experiment live user traffic Today we’re sharing new technology we’re experimenting well white paper accepted publication KDD 2015 detail system architecture insight experiment make following contribution present scalable costeffective implementation commercially deployed visual search engine using mostly opensource tool tradeoff performance development cost make architecture suitable smallandmediumsized business conduct comprehensive set experiment using combination benchmark datasets AB testing two Pinterest application Related Pins experiment similar look detail Experiment 1 Related Pin recommendation used Pin never saved Pinterest weren’t able provide Related Pins recommendation Related Pins primarily generated traversing local “curation graph” tripartite userboardimage graph evolved organically human curation result “long tail” Pins Pins lie outskirt curation graph neighbor graphbased approach yield enough relevant recommendation augmenting recommendation system able recommend Pins almost Pins Pinterest shown Figure 1 adding visual search Related Pin recommendation Experiment 2 Enhanced product recommendation object recognition experiment allowed u show visually similar Pin recommendation based specific object Pin’s image We’re starting experimenting way use surface object recognition would enable Pinners click object eg bag shoe etc shown use object recognition detect product bag shoe skirt Pin’s image detected object extract visual feature generate product recommendation “similar looks” initial experiment Pinner would discover recommendation red dot object Pin see Clicking red dot load feed Pins featuring visually similar object We’ve evolved red dot experiment try way surfacing visually similar recommendation specific object share later year Figure 2 apply object detection localize product bag shoe prototype Pinners click object interest view similarlooking product sharing implementation detail experience launching product hope visual search widely incorporated today’s commercial application billion Pins system curated individual one largest richly annotated datasets online experiment small sample what’s possible Pinterest We’re building worldclass deep learning team working closely member Berkeley Vision Learning Center We’ve lucky enough join u past month you’re interested exploring datasets helping u build visual discovery search technology join team Kevin Jing engineering manager Visual Discovery team previously founded Visual Graph company acquired Pinterest January 2014 Acknowledgements work joint effort member 
Visual Discovery team David Liu Jiajing Xu Dmitry Kislyuk Andrew Zhai Jeff Donahue product manager Sarah Tavel We’d like thank engineer several team assistance developing scalable search solution We’d also like thank Jeff Donahue Trevor Darrell Eric Tzeng Berkeley Caffe team Pinterest engineering news update follow engineering Pinterest Facebook Twitter Interested joining team Check Careers siteTags Machine Learning Deep Learning Engineering Computer Vision
3,767
Lyft Motion Prediction for Autonomous Vehicles: 2020
Lyft Motion Prediction for Autonomous Vehicles: 2020 Lyft motion prediction challenge for self-driving cars Problem Description The 2020 challenge is to predict the movement of traffic agents around the AV, such as cars, cyclists, and pedestrians. By contrast, the 2019 competition focused on detecting the 3D objects themselves, an important step prior to predicting their movement. Overall this requires a fairly unique set of domain skills compared to the 2019 problem statement. The dataset consists of 170,000 scenes capturing the environment around the autonomous vehicle. Each scene encodes the state of the vehicle's surroundings at a given point in time. Source: Kaggle EDA Lyft Source: Kaggle EDA Lyft5 The goal of this competition is to predict the motion of other cars/cyclists/pedestrians (called "agents"). The data preprocessing technique called rasterization is the process of creating images from the other objects in the scene. For example, below is a typical image that we get, with 25 channels, shown channel by channel. The first 11 images are rasterizations of the other agents' history, the next 11 images are the agent under consideration itself, and the last 3 are the semantic map rasterization. Converting to an RGB image using the rasterizer involves: image: (channel, height, width) image of a frame. This is a Birds-eye-view (BEV) representation. target_positions: (n_frames, 2) displacements in meters in world coordinates target_yaws: (n_frames, 1) centroid: (2) center position x&y. world_to_image: (3, 3) 3x3 matrix, used as the transform matrix. Example of L5Kit (Lyft Level 5 kit) structure for data processing: Having said that, in this competition, understanding the Rasterizer class and implementing a customized rasterizer class has been a big challenge. Hence, here is how to select the two important configuration options. We should carefully consider the raster_size: the rasterized image's final size in pixels (e.g. [300, 300]), and the pixel_size: the raster's spatial resolution [meters per pixel], i.e. the real-world size that one pixel corresponds to. Raster sizes pixel_size = [0.5, 0.5] As you can see in the image, if you increase the raster size (with the pixel size constant), the model (ego/agent) will "see" more of the surrounding area: More area behind/ahead Slower rendering, because of the extra information (agents, roads, etc.) What is a good raster size? I think it depends on the vehicle's velocity.

km/h   m/s     distance in 5 sec (m)   pixels
1      0.28    1.39                    2.78
5      1.39    6.94                    13.89
10     2.78    13.89                   27.78
15     4.17    20.83                   41.67
20     5.56    27.78                   55.56
25     6.94    34.72                   69.44
30     8.33    41.67                   83.33
35     9.72    48.61                   97.22
40     11.11   55.56                   111.11
50     13.89   69.44                   138.89
60     16.67   83.33                   166.67

Let's say I used pixel_size = [0.5, 0.5] constantly for these calculations. The question is: what is the average velocity? In the image below, you can see the average speeds (I assume that the unit is meters/second). I exclude everything below 1 m/s. Based on this information, we can select the size of the image: Pick your maximum speed. For example, 20 m/s Calculate the maximum distance in 5 seconds. (100 meters) Divide it by the size of the pixels (100 / 0.5 = 200) Because the ego is at raster_size * 0.25 pixels from the left side of the image, we have to add some space. The final size is 200 / 0.75 = 267 Pixel sizes The other parameter is the size of the pixels. What is one pixel in terms of world-meters? In the default settings, it is 1 px = 0.5 m. In the image below, you can see the differences between different pixel sizes. (The size of the images is 300x300 px).
Because, for example, pedestrians are less than half a meter wide (seen from above), they are not visible in the first 2–3 images. So we have to select a higher resolution (lower pixel_size), somewhere between 0.1 and 0.25. If we use a different pixel size, we have to recalculate the image size as well. Recalculating the example above with pixel_size=0.2: 20 m/s 100 meters in 5 seconds 100 / 0.2 = 500 final image size: 500 / 0.75 = 667 px Problems As we increase the image_size and the resolution (decreasing the pixel size), the rasterizer has to do more work. It is already a bottleneck, so we have to balance model performance against training time. Calculating the error in the rasterizer Each history position, each lane, and each other agent is encoded into pixels, so our net is only able to predict the next positions on the map with pixel-level accuracy. In many notebooks, the raster has a resolution of 0.50 m per pixel (a hyperparameter). Thus, the expected mean error will be roughly 0.50 / 4 in each direction for each predicted position. Source: Github code, from the error-calculating rasterization file Winning architectures for this competition include ResNet (18, 34, 50) and EfficientNet (b1, b3 & b6); for the EDA, the error calculation, and the code, please check this Github repo:
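As a quick, standalone illustration of the raster_size / pixel_size arithmetic described above (the helper name and its defaults are my own assumptions; this is not code from l5kit or from the repository linked below):

def required_raster_size(max_speed_mps, horizon_s=5.0, pixel_size_m=0.5, ego_offset_frac=0.25):
    # Distance the fastest agent can cover over the prediction horizon.
    distance_m = max_speed_mps * horizon_s        # e.g. 20 m/s * 5 s = 100 m
    # Convert that distance into pixels at the chosen resolution.
    pixels_ahead = distance_m / pixel_size_m      # e.g. 100 m / 0.5 m per px = 200 px
    # The ego sits raster_size * 0.25 from the left edge, so only 75% of the
    # width is available ahead of it; pad the size accordingly.
    return round(pixels_ahead / (1.0 - ego_offset_frac))

print(required_raster_size(20, pixel_size_m=0.5))   # ~267 px, as computed above
print(required_raster_size(20, pixel_size_m=0.2))   # ~667 px, the higher-resolution case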
https://medium.com/towards-artificial-intelligence/lyft-motion-prediction-for-autonomous-vehicles-2020-410e58e703af
['Rashmi Margani']
2020-11-27 19:39:14.867000+00:00
['Self Driving Cars', 'Kaggle', 'Deep Learning', 'Machine Learning', 'Computer Vision']
Title Lyft Motion Prediction Autonomous Vehicles 2020Content Lyft Motion Prediction Autonomous Vehicles 2020 Lyft motion prediction challenge selfdriving car Problem Description challenge predict movement traffic agent around AV car cyclist pedestrian 2020 time 2019 competition focused detecting 3D object important step prior detecting movement Overall requires quite unique domain skill comparative 2019 problem statement dataset consists 170000 scene capturing environment around autonomous vehicle scene encodes state vehicle’s surroundings given point time Source Kaggle EDA Lyft Source Kaggle EDA Lyft5 goal competition predict carscyclistpedestrian called “agent”’s motion data preprocessing technique called rasterization process creating image object example typical image get 25 channel channel channel view First 11 image rasterizations agent history next 11 image agent consideration last 3 semantic map rasterization converting RGB image using rasterizer includes image channel height width image frame Birdseyeview BEV representation targetpositions nframes 2 displacement meter world coordinate targetyaws nframes 1 centroid 2 center position xy worldtoimage 3 3 3x3 matrix used transform matrix Example L5KitLyft 5 kit structure data processing said competition understanding Rasterizer class implementing customized rasterizer class big challenge Hence select two important configuration option carefully consider rastersize rasterized image final size pixel eg 300 300 rasterized image final size pixel eg 300 300 pixelsize Rasters spatial resolution meter per pixel size realworld one pixel corresponds Raster size pixelsize 05 05 see image increase raster size pixel size constant model egoagent “see” area surrounding area behindahead Slower rendering information agent road etc good raster size think depends vehicle’s velocity kmh msDistance 5 sec pixels102813927851396941389102781389277815417208341672055627785556256943472694430833416783333597248619722401111555611111501389694413889601667833316667 Lets say used constantly pixelsize 05 05 calculation question average velocity image see average speed assume unit meterseconds exclude everything le 1 m Based information select size image Pick maximum speed example 20 m Calculate maximum distance 5 second 100 meter Divide size pixel 100 05 200 ego rastersize 025 pixel left side image add space final size 200075 267 Pixel size parameter size pixel one pixel term worldmeters default setting 1px 05m image see difference different pixel size size image 300x300px example pedestrian le half meter view visible first 2–3 image select higher resolution lower pixelsize Somewhere 01 025 use different pixel size recalculate image size well Recalculate example pixelsize02 20 m 100 meter 5 second 10002 500 final image size 500075 667px Problems increase imagesize resolution decreasing pixel size rasterizer work already bottleneck balance model performance training time Calculating error rasterizer history position lane agent encoded pixel net able predict next position map pixellevel accuracy many notebook raster size 050 per pixel hyperparameter Thus expected mean error 050 4 direction predicted position Source Github code error calculating rasterization file Source Github code error calculating rasterization file Winning architecture competition includes Resnet183450EDAcalculating error Efficientnetb1b3 b6 code please check Github repoTags Self Driving Cars Kaggle Deep Learning Machine Learning Computer Vision
3,768
Deno VS Node
What is Deno? Deno is a TypeScript runtime based on V8, Google's JavaScript engine; if you are familiar with Node.js, the popular server-side JavaScript ecosystem, you will find that Deno fills much the same role, except that it was designed with some improvements: It is based on the modern functionality of the JavaScript language; It has an extensive standard library; It supports TypeScript natively; It supports ECMAScript modules; It doesn't have a centralized package manager like npm; It has several built-in utilities such as a dependency inspector and a code formatter; It aims to be as compatible with browsers as possible; Security is the main feature. What are the main differences with Node.js? I think that Deno's main goal is to replace Node.js. Still, there are some important common characteristics. For example: Both were created by Ryan Dahl; Both were developed on Google's V8 engine; Both were developed to execute server-side JavaScript. But on the other hand, there are some important differences: Rust and TypeScript. Unlike Node.js, which is written in C++ and JavaScript, Deno is written in Rust and TypeScript. Tokio. Introduced in place of libuv as an event-driven asynchronous platform. Package Manager. Unlike Node.js, Deno doesn't have a centralized package manager, so it is possible to import any ECMAScript module from a URL. ECMAScript. Deno uses modern ECMAScript functionality in all its APIs, while Node.js uses a standard callback-based library. Security. Unlike a Node.js program which, by default, inherits the permissions of the system user that's running the script, a Deno program runs in a sandbox. For example, access to the file system, to network resources, etc., must be authorized with a permission flag. Installation Deno is a single executable file without dependencies. We can install it on our machine by downloading the binary version from this page, or we can download and execute one of the installers listed below. Shell (Mac, Linux) $ curl -fsSL https://deno.land/x/install/install.sh | sh PowerShell (Windows) $ iwr https://deno.land/x/install/install.ps1 -useb | iex Homebrew (Mac OS) $ brew install deno Let's take a look at security One of Deno's main features is security. Compared to Node.js, Deno executes the source code in a sandbox, which means that the runtime: Doesn't have access to the file system; Doesn't have access to network resources; Cannot execute other scripts; Doesn't have access to environment variables. Let's make a simple example. Consider the following script:

async function main() {
  const encoder = new TextEncoder()
  const data = encoder.encode('Hello Deno! 🦕')
  await Deno.writeFile('hello.txt', data)
}
main()

The script is really simple. It just creates a text file named hello.txt that will contain the string Hello Deno 🦕. Really simple! Or not? As we said before, the code runs in a sandbox and, obviously, it doesn't have access to the filesystem.
In fact, if we execute the script with the following command: $ deno run hello-world.ts it will print on the terminal something like:

Check file:///home/davide/denoExample/hello-world.ts
error: Uncaught PermissionDenied: write access to "hello.txt", run again with the --allow-write flag
    at unwrapResponse ($deno$/ops/dispatch_json.ts:42:11)
    at Object.sendAsync ($deno$/ops/dispatch_json.ts:93:10)
    at async Object.open ($deno$/files.ts:38:15)
    at async Object.writeFile ($deno$/write_file.ts:61:16)
    at async file:///home/davide/projects/denoExample/hello-world.ts:5:3

As we can see, the error message is really clear. The file was not created on the filesystem because the script does not have the write permission to do that, but by adding the flag --allow-write: $ deno run --allow-write hello-world.ts the script will end without errors and the file hello.txt will be created correctly in the current working directory. In addition to the flag --allow-write, which gives us access to the filesystem, there are also other flags such as --allow-net, which gives us access to network resources, or --allow-run, which is useful to run external scripts or subprocesses. We can find the complete permissions list at the following url https://deno.land/manual/getting_started/permissions. A simple server Now we will create a simple server that accepts connections on port 8000 and returns the string Hello Deno to the client.

// file server.ts
import { serve } from 'https://deno.land/std/http/server.ts'
const s = serve({ port: 8000 })
console.log('Server listening on port :8000')
for await (const req of s) {
  req.respond({ body: 'Hello Deno! ' })
}

Obviously, to run the script we need to specify the --allow-net flag: $ deno run --allow-net server.ts Something like this will appear in our terminal: Now if we open our favourite browser, or if we want to use the curl command, we can test the URL http://localhost:8000 . The result will be something like this: Modules Just like browsers, Deno loads all of its modules via URL. Many people are initially confused by this approach, but it makes sense. Here is an example: import { assertEquals } from "https://deno.land/std/testing/asserts.ts"; Importing packages via URL has advantages and disadvantages. The main advantages are: more flexibility; we can create a package without publishing it in a public repository (like npm). I think that a package manager of some sort may be released in the future, but nothing official has come out for now. The official Deno website gives us the opportunity to host our source code and then distribute it via URLs: https://deno.land/x/. Importing packages via URLs gives developers the freedom they need to host their code wherever they want: decentralization at its best. Therefore, we don't need a package.json file or the node_modules directory. When the application starts, all imported packages are downloaded, compiled, and stored in a cache. If we want to download all the packages again we need to specify the flag --reload . Do I need to type the URL every time? 🤯🤬 Deno supports import maps natively. This means that it's possible to specify a special command flag like --importmap=<FILENAME> . Let's take a look at a simple example.
Imagine that we have a file import_map.json , with the following content:

{
  "imports": {
    "fmt/": "https://deno.land/std@0.65.0/fmt/"
  }
}

The file specifies that the "fmt/" key of the imports object corresponds to the URL https://deno.land/std@0.65.0/fmt/ and it can be used as follows:

// file colors.ts
import { green } from "fmt/colors.ts";
console.log(green("Hello Deno! 🦕"));

This feature is unstable at the moment, and we need to run our script colors.ts using the flag --unstable , so: $ deno run --unstable --importmap=import_map.json colors.ts Now something like this will appear in our terminal: Versioning Package versioning is a developer responsibility and, on the client side, we can decide to use a specific version in the URL of the package when we import it: https://unpkg.com/package-name@0.0.5/dist/package-name.js Ready to use utilities Speaking honestly: the current state of JavaScript tooling for developers is real CHAOS! And when the TypeScript tools are added, the chaos increases further. 😱 Photo by Ryan Snaadt on Unsplash One of the best JavaScript features is that the code is not compiled, and it can be executed immediately in a browser. This makes life easier for a developer, and it is very easy to get immediate feedback on written code. Unfortunately, however, this simplicity has lately been undermined by what I consider "the cult of excessive tooling". These tools have turned JavaScript development into a real nightmare of complexity. There are entire online courses on Webpack configuration! Yes, you got it right… a whole course! The chaos of the tools has increased to the point that many developers are eager to get back to actually writing code rather than playing with configuration files. An emerging project that aims to resolve this problem is Facebook's Rome project. Deno, on the other hand, has an entire and complete ecosystem, covering the runtime and module management. This approach offers developers all the tools they need to build their applications. Now, let's take a look at the tools that the Deno 1.6 ecosystem offers, and how developers can use them to reduce third-party dependencies and simplify development. It's not yet possible to replace an entire build pipeline in Deno, but I don't think we'll wait much longer before we have it. Below is the list of integrated features: bundler: it writes the specified module and all its dependencies into a single JavaScript file; debugger: it gives us the ability to debug our Deno program with Chrome Devtools, VS Code and other tools; dependency inspector: if we execute this tool on an ES module it shows the whole dependency tree; doc generator: it analyzes all the JSDoc annotations in a given file and produces the documentation for us; formatter: it formats JavaScript, or TypeScript, code automatically; test runner: a utility that gives us the ability to test our source code using the assertions module of the standard library.
linter: useful for identifying potential bugs in our programs. Bundler Deno can create a simple bundle from the command line using the deno bundle command, but it also exposes an API internally. With this API the developer can create a custom output, or something that can be used for frontend purposes. This API is unstable, so we need to use the --unstable flag. Let's take the example we did earlier, modifying it as follows:

// file colors.ts
import { green } from "https://deno.land/std@0.65.0/fmt/colors.ts";
console.log(green("Hello Deno! 🦕"));

And now let's create our bundle from the command line: $ deno bundle colors.ts colors.bundle.js This command creates a file colors.bundle.js that will contain all the source code that we need to execute it. In fact, if we try to run the script with the command: $ deno run colors.bundle.js we will notice that no module is downloaded from Deno's repository, because all the code needed for the execution is contained in the colors.bundle.js file. The result that we will see on the terminal is the same as in the previous example: Debugger Deno has an integrated debugger. If we want to launch a program in debug mode manually we need to use the --inspect-brk flag: $ deno run -A --inspect-brk fileToDebug.ts Now if we open the Chrome inspector chrome://inspect we find a page similar to this. If we click on inspect we can start to debug our code. Dependency inspector Using this tool is really simple! We simply need to use the info subcommand followed by the URL (or path) of a module, and it will print the dependency tree of that module. If we launch the command using the server.ts file from the previous example, it will print something like this in our terminal: Also, the command deno info can be used to show cache information: Doc generator This is a really useful utility that allows us to generate the JSDoc documentation automatically. If we want to use it, we just run the command deno doc , followed by a list of one or more source files, and it will automatically print to the terminal the documentation for all the exported members of our modules. Let's take a look at how it works with a simple example. Let's imagine that we have a file add.ts with the following content:

/**
 * Adds x and y.
 * @param {number} x
 * @param {number} y
 * @returns {number} Sum of x and y
 */
export function add(x: number, y: number): number {
  return x + y;
}

Executing the deno doc command will print the following JSDoc on the standard output: It's possible to use the --json flag to produce the documentation in JSON format. This JSON format is used by Deno's website to generate the module documentation automatically. Formatter The formatter is provided by dprint, an alternative to Prettier that clones all the rules established by Prettier 2.0. If we want to format one or more files, we can use deno fmt <files> or a VSCode extension. If we run the command with the --check flag, a format check of all JavaScript and TypeScript files in the current working directory will be executed. Test runner The syntax of this utility is really simple. We just need to use the deno test command, and it will execute the tests for all files that end with _test or .test and have the .js , .ts , .tsx or .jsx extensions.
In addition to this utility, it's possible to use the standard Deno API, which gives us the asserts module that we can use in the following way:

import { assertEquals } from "https://deno.land/std/testing/asserts.ts"

Deno.test({
  name: "testing example",
  fn(): void {
    assertEquals("world", "world")
    assertEquals({ hello: "world" }, { hello: "world" })
  },
})

This module gives us nine assertions that we can use in our test cases:

assert(expr: unknown, msg = ""): asserts expr
assertEquals(actual: unknown, expected: unknown, msg?: string): void
assertNotEquals(actual: unknown, expected: unknown, msg?: string): void
assertStrictEquals(actual: unknown, expected: unknown, msg?: string): void
assertStringContains(actual: string, expected: string, msg?: string): void
assertArrayContains(actual: unknown[], expected: unknown[], msg?: string): void
assertMatch(actual: string, expected: RegExp, msg?: string): void
assertThrows(fn: () => void, ErrorClass?: Constructor, msgIncludes = "", msg?: string): Error
assertThrowsAsync(fn: () => Promise<void>, ErrorClass?: Constructor, msgIncludes = "", msg?: string): Promise<Error>

Linter Deno has an integrated JavaScript and TypeScript linter. This is a new feature and it's still unstable, so, obviously, using it requires the --unstable flag:

# This command lints all the ts and js files in the current working directory
$ deno lint --unstable
# This command lints all the listed files
$ deno lint --unstable myfile1.ts myfile2.ts

Benchmark Ok folks! We've arrived at the moment of truth! Which is the best JavaScript environment, Deno or Node? But I think that the correct question is a different one: which is the fastest? I did a really simple benchmark (an http hello server) and the results were very interesting. I ran it on my laptop, which has the following characteristics: Model: XPS 13 9380 Processor: Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz RAM: 16GB DDR3 2133MHz OS: Ubuntu 20.04 LTS Kernel version: 5.4.0-42 The tool that I used to run these benchmarks is autocannon, and the scripts used are the following:

// file node_http.js
const http = require("http");
const hostname = "127.0.0.1";
const port = 3000;
http.createServer((req, res) => {
  res.end("Hello World");
}).listen(port, hostname, () => {
  console.log("node listening on:", port);
});

// file deno_http.ts
import { serve } from "https://deno.land/std@0.61.0/http/server.ts";
const port = 3000;
const s = serve({ port });
const body = new TextEncoder().encode("Hello World");
console.log("deno_http listen on", port);
for await (const req of s) {
  const res = {
    body,
    headers: new Headers(),
  };
  res.headers.set("Date", new Date().toUTCString());
  res.headers.set("Connection", "keep-alive");
  req.respond(res).catch(() => {});
}

We can find them in the following github repository: The first test case was performed with 100 concurrent connections using the command autocannon http://localhost:3000 -c100 and the results are the following: It seems that Node beats Deno on speed! But this benchmark is based on 100 concurrent connections, which is a lot for a small or medium server. So let's do another test: this time with 10 concurrent connections. And again, Node beats Deno: It seems that, in terms of performance, Node beats Deno 2–0! It performs better in both analyzed cases. However, Deno is a really young project and the community is working hard to get it adopted in production environments soon, but it will be a tough fight against a titan like Node.
Conclusion The main purpose of this article was not to support either Node or Deno, but rather to compare the two environments. Now you should have an understanding of the similarities and differences between the two. Deno has some particular advantages for developers, including a robust support system and native TypeScript support. The design decisions and additional built-in tools aim to provide a productive environment and a good developer experience. I don't know if these choices could become a double-edged sword in the future, but they seem to attract more developers right now. Node, on the other hand, has a robust ecosystem, ten years of development and releases behind it, an oceanic community and online courses that can help us with many problems, an almost infinite list of frameworks (Fastify, Express, Hapi, Koa etc.), and many books like "Node.js Design Patterns" or "Node Cookbook", which I consider the best books about Node.js. For these, and many other reasons, I think that Node is the most secure choice to make for now. What can I say… HAPPY CODING! Bibliography
https://davide-dantonio.medium.com/deno-vs-node-658fc5e1fb5c
["Davide D'Antonio"]
2020-12-26 10:03:34.906000+00:00
['Typescript', 'Deno', 'Nodejs', 'JavaScript', 'Javascript Development']
Title Deno VS NodeContent Deno Deno TypeScript runtime based V8 Google’s JavaScript runtime familiar Nodejs popular serverside JavaScript ecosystem understand Deno exactly Except designed improvement based modern functionality JavaScript language extensive Standard library support TypeScript natively Supports EcmaScript module doesn’t centralized package manager like npm several builtin utility dependency inspector code formatter Aims compatible browser possible Security main feature main difference Nodejs think Deno’s main goal replace Nodejs However important common characteristic example created Ryan Dahl developed Google’s V8 engine developed execute serverside JavaScript side important difference Rust TypeScript Unlike Nodejs written C JavaScript Deno written Rust TypeScript Unlike Nodejs written C JavaScript Deno written Rust TypeScript Tokyo Introduced place libuv eventdriven asynchronous platform Introduced place libuv eventdriven asynchronous platform Package Manager Unlike Nodejs Deno doesn’t centralized package manager possible import ECMAScript module url Unlike Nodejs Deno doesn’t centralized package manager possible import ECMAScript module url ECMAScript Deno us modern ECMAScript functionality APIs Nodejs us standard callbackbased library Deno us modern ECMAScript functionality APIs Nodejs us standard callbackbased library Security Unlike Nodejs program default inherits permission system user that’s running script Deno program run sandbox example access file system network resource etc must authorized flag permission Installation Deno single executable file without dependency install machine downloading binary version page download execute one installers listed Shell Mac Linux curl fsSL httpsdenolandxinstallinstallsh sh PowerShell Windows iwr httpsdenolandxinstallinstallps1 useb iex Homebrew Mac OS brew install deno Let’s take look security One main Deno’s feature security Compared Nodejs Deno executes source code sandbox mean runtime Doesn’t access file system Doesn’t access network resource Cannot excecute script Doesn’t access environment variable Let’s make simple example Consider following script async function main const encoder new TextEncoder const data encoderencode Hello Deno 🦕 await DenowriteFile hellotxt data main script really simple create text file named hellotxt contain string Hello Deno 🦕 Really simple said code run sandbox obviously doesn’t access filesystem Infact execute script following command deno run helloworldts print terminalsomething like Check filehomedavidedenoExamplehelloworldts error Uncaught PermissionDenied write access hellotxt run allowwrite flag unwrapResponse denoopsdispatchjsonts4211 ObjectsendAsync denoopsdispatchjsonts9310 async Objectopen denofilests3815 async ObjectwriteFile denowritefilets6116 async filehomedavideprojectsdenoExamplehelloworldts53 see error message really clear file created filesystem script write permission adding flag allowwrite deno run allowwrite helloworldts script end without error file hellotxt created correctly current working directory addition flag allowwrite give u access filesystem also flag allownet give u access network resource allowrun useful run external script subprocess find complete permission list following url httpsdenolandmanualgettingstartedpermissions simple server create simple server accept connection port 8000 return client string Hello Deno file serverts import serve httpsdenolandstdhttpserverts const serve port 8000 consolelogServer listening port 8000 await const req reqrespond body 
Hello Deno Obviously run script need specify allownet flag deno run allownet serverts terminal appear something like open favourite browser want use curl command take test URL httplocalhost8000 result something like Modules like browser Deno load module via URL Many people initially confused approach make sense example import assertEquals httpsdenolandstdtestingassertsts Importing package via URL advantage disvantages main advantage flexibility create package without publish public repository like npm think sort package manager released future nothing official come official Deno website give u opportunity host source code distribution via URLs httpsdenolandx Importing package via URLs give developer freedom need host code wherever want decentralization best Therefore don’t need packagejson file nodemodules directory application start imported package downloaded compiled stored cache memory want download package need specify flag reload need type URL every time 🤯🤬 Deno support import map natively mean it’s possible specify special command flag like importmapFILENAME Let’s take look simple example Imagine file importmapjson following content import fmt httpsdenolandstd0650fmt file specifies ftm key import object correspond URL httpsdenolandsdt0650ftm used follow file colorsts import green fmtcolorsts consoleloggreenHello Deno 🦕 feature unstable moment need run script colorts using flag unstable deno run unstable importmapimportmapjson colorsts terminal appear something like Versioning package versioning developer responsibility client side decide use specific version URL package import httpsunpkgcompackagename005distpackagenamejs Ready use utility Speaking honestly current state JavaScript tool developer real CHAOS TypeScript one added chaos increase 😱 Photo Ryan Snaadt Unsplash One best JavaScript feature code compiled executed immediately browser make life easier developer easy get immediate feedback written code Unfortunately however simplicity last period undetermined consider “The cult excessive instruments” Theese tool turned JavaScript development real nightmare complexity entire online course Webpack configuration guide Yes got right…a whole course chaos tool increased point many developer eager get back actually writing code rather playing configuration file emerging project aim resolve problem Facebook’s Rome project Deno hand entire complete ecosystem runtime module management approach offer developer tool need build application let’s take look tool Deno 16 ecosystem offer developer use reduce third party dependency simplify development It’s possible replace entire build pipeline Deno think don’t think we’ll wait much longer list integrated feature bundler write single JavaScript file specified module dependency write single JavaScript file specified module dependency debugger give u ability debug Deno program Chrome Devtools VS Code tool give u ability debug Deno program Chrome Devtools VS Code tool dependency inspector execute tool ES module show dependency tree execute tool ES module show dependency tree doc generator analyze JSDoc annotation given file produce documentation u analyze JSDoc annotation given file produce documentation u formatter format JavaScript TypeScript code automatically format JavaScript TypeScript code automatically test runner it’s utility give u ability test source code using assertion module standard library it’s utility give u ability test source code using module standard library linter useful identifying potential bug program Bundler Deno create 
simple bundle command line using deno bundle command expose API internally API developer create custom output something used frontend purpose API instable need use unstable flag Let’s take example earlier modifying follows file colorsts import green httpsdenolandstd0650fmtcolorsts consoleloggreenHello Deno 🦕 let’s create bundle command line deno bundle colorsts colorsbundlejs command create file colorsbundlejs contains source code need execute fact try run script command deno run colorsbundlejs notice module downloader Deno’s repository code needed execution contained colorsbundlejs file result see terminal previous example Debugger Deno integrated debugger want launch program debug mode manually need use inspectbrk deno run — inspectbrk fileToDebugts open Chrome inspector chromeinspect find page similar click inspect start debug code Dependency inspector Use tool it’s really simple Simply need use info subcommand followed URL path module print dependency tree module launch command using serverts file used previous example print terminal someting like Also command deno info used show cache information Doc generator really useful utility allows u generate JSDoc automatically want use run command deno doc followed list one source file automatically printed terminal documentation exported file module Let’s take look work simple example Let’s imagine tha file addts following content Adds x export function addx number number number return x Adds x param number x param number return number Sum x yexport function addx number number number return x Executing deno doc command printed following JSDoc standard output It’s possible tu use json flag produce JSON format documentation JSON format used Deno’s website generate module documentation automatically Formatter Formatter provided dprint alternative Prettier clone rule enstabished Prettier 20 want format one file use deno ftm file VSCode extension run command check flag executed format check JavaScript TypeScript file current working directory Test runner syntax utility it’s really simple need use deno test command executed test file end test test j t tsx jsx extension addition utility it’s possible use standard Deno API give u asserts module use following way import assertEquals httpsdenolandstdtestingassertsts Denotest name testing example fn void assertEqualsworld world assertEquals hello world hello world module give u nine assertion use test case assertexpr unknown msg asserts expr assertEqualsactual unknown expected unknown msg string void assertNotEqualsactual unknown expected unknown msg string void assertStrictEqualsactual unknown expected unknown msg string void assertStringContainsactual string expected string msg string void assertArrayContainsactual unknown expected unknown msg string void assertMatchactual string expected RegExp msg string void assertThrowsfn void ErrorClass Constructor msgIncludes msg string Error assertThrowsAsyncfn Promisevoid ErrorClass Constructor msgIncludes msg string PromiseError Linter Deno integrated JavaScript TypeScript linter new feature it’s instable obviously want use require unstable flag execute command lint t j file current working directory deno lint unstable command lint listed file deno lint unstable myfile1ts myfile2ts Benchmark Ok folk We’ve arrived moment truth Who’s best JavaScript enviroment Deno Node think correct question another Who’s fastest really simple benchmark http hello server result interesting made laptop following characteristic Model XPS 13 9380 Processor IntelR CoreTM i78565U 
CPU 180GHz RAM 16GB DDR3 2133MHz OS Ubuntu 2004 LTS Kernel version 54042 tool used make benchmark autocannon used script following file nodehttpjs const http requirehttp const hostname 127001 const port 3000 httpcreateServerreq re resendHello World listenport hostname consolelognode listening port import serve file denohttptsimport serve httpsdenolandstd0610httpserverts const port 3000 const serve port const body new TextEncoderencodeHello World consolelogdenohttp listen port await const req const re body header new Headers resheaderssetDate new DatetoUTCString resheaderssetConnection keepalive reqrespondrescatch find following github repository first test case performed 100 concurrent connection command autconannon httplocalhost3000 c100 result sequent seems Node beat Deno velocity benchmark based 100 concurrent connection many small medium server let’s another test time 10 concurrent connection Node beat Deno Seems term performance Node beat Deno 2–0 performs better analyzed case However Deno it’s really young project community working hard get adopted production environment soon tough fight titan like Node Conclusion main purpose article support either Node Deno rather compare two enviroments understanding similarity difference two Deno particular advantage developer including robust suppport system natively TypeScript support design decision additional builtin tool aim provide productive environment system good developer experience don’t know choice doubleedged sword future seems attract developer right Node hand robust ecosystem ten year development release behind oceanic community online course help u many thread problem infinite list framework Fastify Express Hapi Koa etc many book like “Nodejs Design Patterns” “Node Cookbook” consider best book talk Nodejs many reason think Node secure choice make say… HAPPY CODING BibliographyTags Typescript Deno Nodejs JavaScript Javascript Development
3,769
Chicago Hospital Vows to End Cosmetic Surgery on Intersex Infants
A children’s hospital in Chicago has apologized for performing cosmetic genital surgeries on intersex infants, vowing to put an end to the practice. Ann & Robert H. Lurie Children’s Hospital of Chicago released a statement last week apologizing for having previously performed such surgeries. Signatories of the statement went on to condemn the practice, acknowledging the harm that these surgeries have caused. “We recognize the painful history and complex emotions associated with intersex surgery and how, for many years, the medical field has failed these children,” the statement read. “We empathize with intersex individuals who were harmed by the treatment that they received according to the historic standard of care, and we apologize and are truly sorry.” This statement and apology comes after activists from the Intersex Justice Project called on the hospital to ban these harmful procedures nearly three years ago. The organization held protests outside of the hospital, organized email campaigns, and encouraged activists to use the hashtag #EndIntersexSurgery on social media. Intersex is a “general term used for a variety of conditions in which a person is born with reproductive or sexual anatomy that doesn’t seem to fit the typical definitions of female or male,” according to the Intersex Society of North America. Around 1.7% of the global population is born intersex and 1 in 2,000 intersex babies will be recommended for cosmetic genital surgery by their doctors. As such, the fight to end cosmetic surgeries on intersex infants has been years in the making. Organizations like the ACLU and Human Rights Watch have long warned of the risks of such procedures, calling them unnecessary and claiming that they do nothing to help intersex individuals better adjust to society. These surgeries, which misguidedly aim to help intersex children fit into outdated definitions of what it means to be male or female, date all the way back to the 1960s. Many intersex people who have been forced to undergo cosmetic genital surgery as infants have since reported psychological trauma, loss of sexual sensation, and higher risks of scarring. In 2017, three former US Surgeons-General asserted that there was little evidence to suggest that growing up with intersex genitalia causes psychological distress, but there was evidence to indicate that having irreversible surgery without consent can, in fact, cause emotional and physical harm. As a result, around 40% of intersex people who undergo surgery as infants will grow up to reject the sex and imposed gender that has been surgically assigned to them. While the number of infants who underwent cosmetic surgery at Lurie Children’s Hospital is unknown, the hospital has promised that no such surgeries will take place until intersex people are old enough to consent. “Historically care for individuals with intersex traits included an emphasis on early genital surgery to make genitalia appear more typically male or female,” the statement continued. “As the medical field has advanced, and understanding has grown, we now know this approach was harmful and wrong.”
https://medium.com/an-injustice/chicago-hospital-vows-to-end-cosmetic-surgery-on-intersex-infants-26b1525b1714
['Catherine Caruso']
2020-08-05 18:52:44.858000+00:00
['LGBTQ', 'Society', 'Health', 'Equality', 'Justice']
Title Chicago Hospital Vows End Cosmetic Surgery Intersex InfantsContent children’s hospital Chicago apologized performing cosmetic genital surgery intersex infant vowing put end practice Ann Robert H Lurie Children’s Hospital Chicago released statement last week apologizing previously performed surgery Signatories statement went condemn practice acknowledging harm surgery caused “We recognize painful history complex emotion associated intersex surgery many year medical field failed children” statement read “We empathize intersex individual harmed treatment received according historic standard care apologize truly sorry” statement apology come activist Intersex Justice Project called hospital ban harmful procedure nearly three year ago organization held protest outside hospital organized email campaign encouraged activist use hashtag EndIntersexSurgery social medium Intersex “general term used variety condition person born reproductive sexual anatomy doesn’t seem fit typical definition female male” according Intersex Society North America Around 17 global population born intersex 1 2000 intersex baby recommended cosmetic genital surgery doctor fight end cosmetic surgery intersex infant year making Organizations like ACLU Human Rights Watch long warned risk procedure calling unnecessary claiming nothing help intersex individual better adjust society surgery misguidedly aim help intersex child fit outdated definition mean male female date way back 1960s Many intersex people forced undergo cosmetic genital surgery infant since reported psychological trauma loss sexual sensation higher risk scarring 2017 three former US SurgeonsGeneral asserted little evidence suggest growing intersex genitalia cause psychological distress evidence indicate irreversible surgery without consent fact cause emotional physical harm result around 40 intersex people undergo surgery infant grow reject sex imposed gender surgically assigned number infant underwent cosmetic surgery Lurie Children’s Hospital unknown hospital promised surgery take place intersex people old enough consent “Historically care individual intersex trait included emphasis early genital surgery make genitalia appear typically male female” statement continued “As medical field advanced understanding grown know approach harmful wrong”Tags LGBTQ Society Health Equality Justice
3,770
How Facebook Plans to Crack Down on Anti-Vax Content
When you search “vaccine” on Facebook, one of the first search results includes a group called “Vaccine Injury Stories,” where users share posts featuring common hoaxes that blame vaccinations for infant sickness and death. With a quick search, more than 150 groups appear on Facebook — some with thousands of members — promoting misinformation about vaccines’ effects and acting as an echo chamber for pseudoscience. The social network pledged to crack down on these groups in March but has not announced any clear plans until now. Next month, Facebook is rolling out a search tool to combat this kind of misinformation, the company told Cheddar, similar to what Twitter launched last week. Once the tool is live, a search of “vaccines” or related terms on Facebook will link to a neutral medical institution, like the Center for Disease Control (CDC), and more medically-verified information about vaccines. Similar to what Facebook did with searches about buying opioids and white nationalism, the information will sit at the top of search results and present facts alongside misinformation, rather than banning or removing the content altogether.
https://medium.com/cheddar/how-facebook-plans-to-crack-down-on-anti-vax-content-773eb15e02d3
['Jake Shore']
2019-05-23 17:45:18.885000+00:00
['Technology', 'Social Media', 'Science', 'Facebook', 'Vaccines']
Title Facebook Plans Crack AntiVax ContentContent search “vaccine” Facebook one first search result includes group called “Vaccine Injury Stories” user share post featuring common hoax blame vaccination infant sickness death quick search 150 group appear Facebook — thousand member — promoting misinformation vaccines’ effect acting echo chamber pseudoscience social network pledged crack group March announced clear plan Next month Facebook rolling search tool combat kind misinformation company told Cheddar similar Twitter launched last week tool live search “vaccines” related term Facebook link neutral medical institution like Center Disease Control CDC medicallyverified information vaccine Similar Facebook search buying opioids white nationalism information sit top search result present fact alongside misinformation rather banning removing content altogetherTags Technology Social Media Science Facebook Vaccines
3,771
Scammers Are Targeting COVID-19 Contact Tracing Efforts
The Vast Majority of Contact Tracing Takes Place on the Phone You should immediately be suspicious of contact tracing outreach that takes place via email or text message. The vast majority of contact tracing efforts are done over the phone, and very few legitimate agencies use email or text messages for their initial outreach. Even if you think the contact tracing email or text message may be legitimate, you should never click any embedded links. These links could harbor malware and viruses designed to steal your private information. Contact Tracers Will Not Mention COVID-19 Patients by Name In an effort to gain your trust, the scammers may tell you that a close family member or friend has tested positive for COVID-19, and that you should immediately schedule a test for the virus. That kind of news is certainly alarming, but it’s likely not real. There are strict privacy laws in place surrounding healthcare and medical diagnoses, and contact tracers are not allowed to say who is infected, only where they have been and who they have been in contact with. Unfortunately, the inclusion of a name often lends credibility to the scammers, fooling even those who are generally very wary of such efforts. Keep in mind, however, that bad actors can easily find this kind of information on social media, and you should not fall for the ruse. Never Hand over Your Social Security Number or Banking Information Another thing legitimate COVID-19 contact tracers will never do is ask for your Social Security number or banking information. If the person on the other end of the phone makes such a request, you should simply hang up. If you have caller ID and the phone number is visible, you can contact the local police to report the suspected crime. These scams are gaining speed, and it’s important to protect your friends and neighbors as well as yourself. Watch out for COVID-19 Testing Charges One of the most important goals of COVID-19 contact tracing is to identify potential sources of infection and facilitate testing for the disease. Legitimate contact tracers will urge those they call to schedule a COVID-19 test as soon as possible, and they will provide a list of resources and testing sites as well. What legitimate contact tracers will not do is demand payment up front. They will not require you to hand over credit card or bank account information, and if they do, again, just hang up. In the vast majority of cases, you will not have to pay anything at all for a COVID-19 test, especially if you’ve been in contact with someone who has tested positive for the disease. Insurance companies are required to cover COVID-19 testing and treatment at no cost to their subscribers, and government funding generally covers testing costs for the uninsured.
https://georgejziogas.medium.com/scammers-are-targeting-covid-19-contact-tracing-efforts-5f9acb570b87
['George J. Ziogas']
2020-09-12 21:04:37.729000+00:00
['Privacy', 'Health', 'Cybersecurity', 'Coronavirus', 'Covid 19']
Title Scammers Targeting COVID19 Contact Tracing EffortsContent Vast Majority Contact Tracing Takes Place Phone immediately suspicious contact tracing outreach take place via email text message vast majority contact tracing effort done phone legitimate agency use email text message initial outreach Even think contact tracing email text message may legitimate never click embedded link link could harbor malware virus designed steal private information Contact Tracers Mention COVID19 Patients Name effort gain trust scammer may tell close family member friend tested positive COVID19 immediately schedule test virus kind news certainly alarming it’s likely real strict privacy law place surrounding healthcare medical diagnosis contact tracer allowed say infected contact Unfortunately inclusion name often lends credibility scammer fooling even generally wary effort Keep mind however bad actor easily find kind information social medium fall ruse Never Hand Social Security Number Banking Information Another thing legitimate COVID19 contact tracer never ask Social Security number banking information person end phone make request simply hang caller ID phone number visible contact local police report suspected crime scam gaining speed it’s important protect friend neighbor well Watch COVID19 Testing Charges One important goal COVID19 contact tracing identify potential source infection facilitate testing disease Legitimate contact tracer urge call schedule COVID19 test soon possible provide list resource testing site well legitimate contact tracer demand payment front require hand credit card bank account information hang vast majority case pay anything COVID19 test especially you’ve contact someone tested positive disease Insurance company required cover COVID19 testing treatment cost subscriber government funding generally cover testing cost uninsuredTags Privacy Health Cybersecurity Coronavirus Covid 19
3,772
Time Series Analysis
Introduction to Time Series A time series is a sequence of numerical data points indexed in chronological order. In most cases, a time series is a sequence taken at fixed intervals in time. This allows us to accurately predict or forecast the quantities we need. Time series are usually shown as line charts that reveal seasonal patterns, trends, and relations to external factors. Using past time series values for forecasting is called extrapolation. Time series are used in most real-life cases such as weather reports, earthquake prediction, astronomy, mathematical finance, and largely in any field of applied science and engineering. They give us deeper insights into our field of work, and forecasting helps an individual increase the efficiency of their output. Time Series Forecasting Time series forecasting is a method of using a model to predict future values based on previously observed time series values. Time series is an important part of machine learning. It figures out a seasonal pattern or trend in the observed time-series data and uses it for future predictions or forecasting. Forecasting involves taking models rich in historical data and using them to predict future observations. One of the most distinctive features of forecasting is that it does not exactly predict the future; it just gives us a calculated estimate, based on what has already happened, of what could happen. Image Courtesy: www.wfmanagement.blogspot.com Now let's look at the general forecasting methods used in day-to-day problems. Qualitative forecasting is generally used when historical data is unavailable and is considered to be highly subjective and judgmental. Quantitative forecasting is used when we have large amounts of data from the past and is considered to be highly efficient as long as there are no strong external factors in play. The skill of a time series forecasting model is determined by its accuracy at predicting the future. This is often at the cost of being able to explain why a specific prediction was made, of confidence intervals, and, even better, of understanding the underlying factors behind the problem. Some general examples of forecasting are: Governments forecast unemployment rates, interest rates, and expected revenues from income taxes for policy purposes. Day-to-day weather prediction. College administrators forecast enrollments to plan for facilities and faculty recruitment. Industries forecast demand to control inventory levels, hire employees, and provide training. Application of Time Series Forecasting The usage of time series models is twofold: Obtain an understanding of the underlying forces and structure that produced the data Fit a model and proceed to forecast. There are almost endless applications of time series forecasting problems. Below are a few examples from a range of industries to make the notions of time series analysis and forecasting more concrete. Forecasting the rice yield in tons by state each year. Forecasting whether an ECG trace in seconds indicates a patient is having a heart attack or not. Forecasting the closing price of a stock each day. Forecasting the birth or death rate at all hospitals in a city each year. Forecasting product sales in units sold each day. Forecasting the number of passengers booking flight tickets each day. Forecasting unemployment for a state each quarter. Forecasting the size of the tiger population in a state each breeding season.
Now let's look at an example. We are going to use the Google new year resolution dataset. Step 1: Import Libraries Picture 1 Step 2: Load Dataset Picture 2 Step 3: Change the month column into the DateTime data type Picture 3 Step 4: Plot and visualize Picture 4.1 Picture 4.2 Step 5: Check for trend Picture 5.1 Picture 5.2 Step 6: Check for seasonality Picture 6.1 Picture 6.2 We can see that there is roughly a 20% spike each year; this is seasonality. (A consolidated code sketch of these six steps is given a little further below.) Components of Time Series Time series analysis provides a ton of techniques to better understand a dataset. Perhaps the most useful of these is the splitting of a time series into 4 parts: Level: The base value for the series if it were a straight line. Trend: The linear increasing or decreasing behavior of the series over time. Seasonality: The repeating patterns or cycles of behavior over time. Noise: The variability in the observations that cannot be explained by the model. All time series generally have a level and noise, while trend and seasonality are optional. The main features of many time series are trends and seasonal variation. Another feature of most time series is that observations close together in time tend to be correlated. These components combine in some way to provide the observed time series. For example, they may be added together to form a model such as: y = level + trend + seasonality + noise Image Courtesy: Machine Learning Mastery These components are the most effective way to make predictions about future values, but this may not always work. That depends on the amount of data we have about the past. Analyzing Trend Checking the data for consistently increasing or decreasing behavior in its graphical representation is known as trend analysis. As long as the trend is continuously increasing or decreasing, that part of the data analysis is generally not very difficult. If the time series data contains considerable error, then the first step in the process of trend identification is smoothing. Smoothing. Smoothing always involves some form of local averaging of the data such that the random components of individual observations cancel each other out. The most widely used technique is moving average smoothing, which replaces each element of the series with a simple or weighted average of the surrounding elements. Medians are often used instead of means. The main advantage of median smoothing, as compared to moving average smoothing, is that its results are less biased by outliers within the smoothing window. The main disadvantage of median smoothing is that in the absence of clear outliers it may produce more jagged curves than the moving average. In other, less common cases, when the measurement error is quite large, distance-weighted least squares smoothing or negative exponentially weighted smoothing techniques might be used. These methods generally tend to ignore outliers and give a smooth fitting curve. Fitting a function. If there is a clear monotonic nonlinear component, the data first need to be transformed to remove the nonlinearity. Usually, a log, exponential, or polynomial function is used to achieve this. Now let's take an example to understand this more clearly. Picture 7.1 Picture 7.2 From the above diagram, we can easily see that there is an upward trend for 'Gym' every year! Analyzing Seasonality Seasonality is the repetition of data at regular intervals of time. For example, every year we notice that people tend to go on vacation during the December–January period; this is seasonality.
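Since the screenshots (Pictures 1 through 6.2) are not reproduced here, the following is a minimal sketch of the six steps above and of the trend and seasonality checks just discussed. The file name and the column names ('month', 'Gym', 'Diet') are assumptions, so adapt them to the actual dataset:

import pandas as pd
import matplotlib.pyplot as plt

# Steps 1-2: import libraries and load the dataset (assumed CSV export of the
# Google Trends new-year-resolution data)
df = pd.read_csv('new_year_resolutions.csv')

# Step 3: change the month column into the DateTime data type and use it as the index
df['month'] = pd.to_datetime(df['month'])
df = df.set_index('month')

# Step 4: plot and visualize the raw series
df.plot(figsize=(12, 5))
plt.show()

# Step 5: check for trend, e.g. with a 12-month rolling mean of the 'Gym' series
df['Gym'].rolling(window=12).mean().plot(title='Gym: 12-month rolling mean')
plt.show()

# Step 6: check for seasonality, e.g. by averaging 'Diet' over calendar months
df['Diet'].groupby(df.index.month).mean().plot(kind='bar', title='Diet: average by month')
plt.show()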
Seasonality is one of the most important characteristics of time series analysis. It is generally measured by autocorrelation after subtracting the trend from the data. Let's look at another example from our dataset. Picture 8.1 Picture 8.2 From the above graph, it is clear that there is a spike at the start of every year, which means that every January people tend to take 'Diet' as their resolution more than in any other month. This is a perfect example of seasonality. AR, MA, and ARIMA Autoregression Model (AR) AR is a time series model that uses observations from previous time steps as input to a regression equation to predict the value at the next time step. A regression model like linear regression takes the form: yhat = b0 + (b1 * X1) This technique can be used on time series where input variables are taken as observations at previous time steps, called lag variables. This would look like: Xt+1 = b0 + (b1 * Xt) + (b2 * Xt-1) Since the regression model uses data from the same input variable at previous time steps, it is referred to as autoregression. Moving Average Model (MA) The residual errors from forecasts in a time series provide another source of information that can be modeled. The residual errors form a time series. An autoregression model of this structure can be used to foresee the forecast error, which in turn can be used to correct forecasts. Structure in the residual error may consist of trend, bias & seasonality, which can be modeled directly. One can create a model of the residual error time series and predict the expected error of the model. The predicted error can then be subtracted from the model prediction & in turn provide an additional lift in performance. An autoregression of the residual error is the Moving Average Model. Autoregressive Integrated Moving Average (ARIMA) Autoregressive integrated moving average, or ARIMA, is a very important part of statistics, econometrics, and in particular time series analysis. ARIMA is a forecasting technique that gives us future values based entirely on the series' own inertia. Autoregressive Integrated Moving Average (ARIMA) models include a clear-cut statistical model for the irregular component of a time series that allows for non-zero autocorrelations in the irregular component. ARIMA models are defined for stationary time series. Therefore, if you start with a non-stationary time series, you will first need to 'difference' the time series until you obtain a stationary time series. An ARIMA model can be created using the statsmodels library as follows: Define the model by using ARIMA() and passing in the p, d, and q parameters. The model is prepared on the training data by calling the fit() function. Predictions can be made by using the predict() function and specifying the index of the time or times to be predicted. Now let's look at an example. We are going to use a dataset called 'Shampoo sales'. Picture 9.1 Picture 9.2 ACF and PACF We can calculate the correlation for time-series observations with observations from previous time steps, called lags. Since the correlation of the time series observations is calculated with values of the same series at previous times, this is called a serial correlation, or an autocorrelation. A plot of the autocorrelation of a time series by lag is called the AutoCorrelation Function, or ACF for short. This plot is sometimes called a correlogram or an autocorrelation plot.
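As a minimal sketch of the ARIMA steps described above, together with the ACF and PACF plots, here is what this might look like with statsmodels; the 'shampoo-sales.csv' file name, the 'Sales' column, and the (5, 1, 0) order are assumptions, not the article's exact notebook.

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Load the monthly shampoo-sales series (file and column names are assumptions).
series = pd.read_csv('shampoo-sales.csv', index_col=0)['Sales']

# Define the model by passing the (p, d, q) order, fit it on the data,
# then predict values for the time indexes we care about.
model = ARIMA(series, order=(5, 1, 0))
fitted = model.fit()
forecast = fitted.predict(start=len(series), end=len(series) + 11)  # next 12 steps
print(forecast)

# ACF and PACF: correlation of the series with its own lagged values.
plot_acf(series, lags=24)
plot_pacf(series, lags=24)
plt.show()
```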
For example, Picture 10 A partial autocorrelation, or PACF, is a summary of the relationship between an observation in a time series and observations at prior time steps, with the relationships of the intervening observations removed. For example, Picture 11 Conclusion Time series analysis is one of the most important aspects of data analytics for any large organization, as it helps in understanding seasonality, trends, cyclicality, and randomness in sales, distribution, and other attributes. These factors help companies make well-informed decisions, which is highly crucial for business.
https://medium.com/swlh/time-series-analysis-7006ea1c3326
['Athul Anish']
2020-11-25 22:13:44.187000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Startup', 'Data']
Title Time Series AnalysisContent Introduction Time Series time series sequence series numerical data point fixed certain chronological time order case time series sequence taken fixed interval point time allows u accurately predict forecast necessity Time series us line chart show u seasonal pattern trend relation external factor us time series value forecasting called extrapolation Time series used reallife case weather report earthquake prediction astronomy mathematical finance largely field applied science engineering give u deeper insight field work forecasting help individual increasing efficiency output Time Series Forecasting Time series forecasting method using model predict future value based previously observed time series value Time series important part machine learning figure seasonal pattern trend observed timeseries data us future prediction forecasting Forecasting involves taking model rich historical data using predict future observation One distinctive feature forecasting exactly predict future give u calculated estimation already happened give u idea could happen Image Courtesy wwwwfmanagementblogspotcom let’s look general forecasting method used day day problem Qualitative forecasting generally used historical data unavailable considered highly objective judgmental Quantitative forecasting large amount data past considered highly efficient long strong external factor play skill time series forecasting model determined efficiency predicting future often cost able explain specific prediction made confidence interval even better understanding underlying factor behind problem general example forecasting Governments forecast unemployment rate interest rate expected revenue income tax policy purpose Day day weather prediction College administrator forecast enrollment plan facility faculty recruitment Industries forecast demand control inventory level hire employee provide training Application Time Series Forecasting usage time series model twofold Obtain understanding underlying force structure produced data Fit model proceed forecast almost endless application time series forecasting problem example range industry make notion time series analysis forecasting strong Forecasting rice yield ton state year Forecasting whether EEG trace second indicates patient heart attack Forecasting closing price stock day Forecasting birth death rate hospital city year Forecasting product sale unit sold day Forecasting number passenger booking flight ticket day Forecasting unemployment state quarter Forecasting size tiger population state breeding season let’s look example going use google new year resolution dataset Step 1 Import Libraries Picture 1 Step 2 Load Dataset Picture 2 Step 3 Change month column DateTime data type Picture 3 Step 4 Plot visualize Picture 41 Picture 42 Step 5 Check trend Picture 51 Picture 52 Step 6 Check seasonality Picture 61 Picture 62 see roughly 20 spike year seasonality Components Time Series Time series analysis provides ton technique better understand dataset Perhaps useful splitting time series 4 part Level base value series straight line Trend linear increasing decreasing behavior series time Seasonality repeating pattern cycle behavior time Noise variability observation cannot explained model Alltime series generally level noise trend seasonality optional main feature many time series trend seasonal variation Another feature time series observation close together time tend correlated component combine way provide observed time series example may added 
together form model level trend seasonality noise Image Courtesy Machine Learning Mastery component effective way make prediction future value may always work depends amount data past Analyzing Trend Checking data repeated behavior graphical representation known Trend analysis long trend continuously increasing decreasing part data analysis generally difficult time series data contains kind considerable error first step process trend identification smoothing Smoothing Smoothing always involves form local averaging data component individual observation cancel widely used technique moving average smoothing replaces element series simple weighted average surrounding element Medians mostly used instead mean main advantage median compared moving average smoothing result le biased outlier within smoothing window main disadvantage median smoothing absence clear outlier may produce disturbed curve moving average le common case measurement error quiet large distance weighted least square smoothing negative exponentially weighted smoothing technique might used method generally tend ignore outlier give smooth fitting curve Fitting function clear monotonous nonlinear component data first need transformed remove nonlinearity Usually log exponential polynomial function used achieve let’s take example understand clearly Picture 71 Picture 72 diagram easily interpret upward trend ‘Gym’ every year Analyzing Seasonality Seasonality repetition data certain period time interval example every year notice people tend go vacation December — January time seasonality one important characteristic time series analysis generally measured autocorrelation subtracting trend data Lets look another example dataset Picture 81 Picture 82 graph clear spike starting every year mean every year January people tend take ‘Diet’ resolution rather month perfect example seasonality AR ARIMA Autoregression Model AR AR time series model us observation previous time step input regression equation predict value next time step regression model like linear regression take form yhat b0 b1 X1 technique used time series input variable taken observation previous time step called lag variable would look like Xt1 b0 b1 Xt b2 Xt1 Since regression model us data input variable previous time step referred autoregression Moving Average Model residual error forecast time series provide another source information modeled Residual error form time series autoregression model structure used foresee forecast error turn used correct forecast Structure residual error may consist trend bias seasonality modeled directly One create model residual error time series predict expected error model predicted error subtracted model prediction turn provide additional lift performance autoregression residual error Moving Average Model Autoregressive Integrated Moving Average ARIMA Autoregressive integrated moving average ARIMA important part statistic econometrics particular time series analysis ARIMA forecasting technique give u future value entirely based inertia Autoregressive Integrated Moving Average ARIMA model include clear cut statistical model asymmetrical component time series allows nonzero autocorrelations irregular component ARIMA model defined stationary time series Therefore start nonstationary time series first need ‘difference’ time series attain stationary time series ARIMA model created using statsmodels library follows Define model using ARIMA passing p q parameter model prepared training data calling fit function Predictions made using predict 
function specifying index time time predicted let’s look example going use dataset called ‘Shampoo sales’ Picture 91 Picture 92 ACF PACF calculate correlation timeseries observation observation previous time step called lag Since correlation time series observation calculated value series previous time called serial correlation autocorrelation plot autocorrelation dataset time series lag called AutoCorrelation Function acronym ACF plot sometimes called correlogram autocorrelation plot example Picture 10 partial autocorrelation PACF summary relationship observation time series observation prior time step relationship observation removed example Picture 11 Conclusion Time series analysis one important aspect data analytics large organization help understanding seasonality trend cyclicality randomness sale distribution attribute factor help company making well informed decision highly crucial businessTags Machine Learning Data Science Artificial Intelligence Startup Data
3,773
What to Look for in a Marketing Agency
Once upon a time, Nike wasn't the leader in the U.S. market for athletic footwear. In the 1980s, Nike was trailing behind Reebok, with the latter being hailed as "the biggest brand-name phenomena of the decade." In an effort to revamp its brand and capture new markets, Nike hired an ad agency, Wieden+Kennedy, which ultimately coined the three-word slogan we all know and love. The genius behind "Just Do It" is its quality of being universally personal. It's personal to the professional and the everyday person; to participants of team and individual sports alike. Those three small words had a huge effect: ten years after "Just Do It" first launched, Nike went from $800 million to over $9.2 billion in sales. To this day, Wieden+Kennedy is best known for their work with Nike. If you're looking for your Wieden+Kennedy, here's our advice to you. Don't "Just Do It." As you can imagine, "genius" is the result of hundreds of hours of research, testing, and strategy. It's the perfect synergy between client and service provider, where vision meets execution. Selecting the right marketing partner takes careful consideration and proper vetting. Here are three things to look for in the selection process: 1. The Marketing Agency Thinks Like You Likely the largest frustration in the client-service provider relationship is the "business disconnect". The key to Nike's success was a service provider that kept business strategy top of mind and not just brand recognition. Wieden+Kennedy had the mentality of "in it for the long haul" — not one campaign. A marketing firm might have the best talent in town, but are they strategizing around your business objectives? Are your revenue goals, growth goals, and long-term plan central to their marketing goals for you? Being business-minded is an essential characteristic that should be given the highest priority when considering an agency. If your current candidate doesn't meet this standard, you'll find yourself pouring money down an expensive, twisted drain. 2. The Marketing Agency Knows Your Business An optometrist might be able to offer better medical advice than Joe on the street for your knee pain, but an orthopedic specialist knows best. Similarly, it is important to seek an agency that has a breadth of knowledge and experience relevant to your field. Nike didn't hire marketing professionals who just happened to have a passion for sports. The cost of onboarding an agency to your industry is something you can and should avoid. 3. The Marketing Agency Has a Strong Reputation Marketing is rooted in common core principles, but not all marketers are created equal. When choosing a marketing agency, ensure they have the proper resources to execute your vision. What results have they produced for previous clients, and how did they do it? What reporting systems do they use, and how will they hold themselves accountable to you? As intuitive as looking beneath the surface might seem, it is a critical step that is often overlooked. Now that you know what to look for, how will you find your team? Referrals and research are a great way to start. Don't settle until you've found the right fit, as the service you're looking for is more than a transaction. It's a relationship.
https://medium.com/insights-from-the-incubator/what-to-look-for-in-a-marketing-agency-4f06b0721608
['The Incubator']
2016-09-13 21:29:30.019000+00:00
['Advertising', 'Marketing', 'Business', 'Small Business', 'Startup']
Title Look Marketing AgencyContent upon time Nike wasn’t leader US market athletic footwear 1980’s Nike trailing behind Reebok latter hailed “the biggest brandname phenomenon decade” effort revamp brand capture new market Nike hired ad agency WiedenKennedy ultimately coined threeword slogan know love genius behind “Just It” quality universally personal It’s personal professional everyday person participant team individual sport alike three small word huge effect ten year “Just It” first launched Nike went 800 million 92 billion sale day WiedenKennedy best known work Nike you’re looking WiedenKennedy here’s advice Don’t “Just It” imagine “genius” result hundred hour research testing strategy It’s perfect synergy client service provider vision meet execution Selecting right marketing partner take careful consideration proper vetting three thing look selection process 1 Marketing Agency Thinks Like Likely largest frustration clientservice provider relationship “business disconnect” key Nike’s success service provider kept business strategy top mind brand recognition WiedensKennedy mentality “in long haul” — one campaign marketing firm might best talent town strategizing around business objective revenue goal growth goal longterm plan central marketing goal businessminded essential characteristic given highest priority considering agency current candidate doesn’t meet standard you’ll find pouring money expensive twisted drain 2 Marketing Agency Knows Business optometrist might able offer better medical advice Joe street knee pain orthopedic specialist know best Similarly important seek agency breadth knowledge experience relevant field Nike didn’t hire marketing professional happened passion sport cost onboarding agency industry something avoid 3 Marketing Agency Strong Reputation Marketing rooted common core principle marketer created equal choosing marketing agency ensure proper resource execute vision result produced previous client reporting system use hold accountable intuitive looking beneath surface might seem critical step often overlooked know look find team Referrals research great way start Don’t settle you’ve found right fit service you’re looking transaction It’s relationshipTags Advertising Marketing Business Small Business Startup
3,774
Deploying Databases on Kubernetes
A core function of Civis Platform is allowing data scientists to deploy their workloads on-demand, without needing to worry about the infrastructure involved. About a year ago, Civis put Jupyter Notebooks on Kubernetes, then did the same for Shiny apps. These features allow users to perform computations and deploy apps in the cloud. However, as users began to leverage these features more and more, we received requests for more options to connect these cloud-deployed web apps with persistent storage. Carrie’s internship was focused on exploring these options and creating a proof of concept for user-deployed databases. Finding the Right Database Previously, web apps deployed via Platform had only one easy option for persistent storage: Redshift, a column store database. Although column store databases perform well for many data science tasks, such as querying and column-level data analyses, they tend to be slow when it comes to fetching and updating single rows of information. For transactional data processing, a traditional row store database is more efficient. Examples of such databases include MySQL and Postgres. These databases are quite common in the web development world, since most web apps are transaction oriented. Our app developers needed a row store database. After deciding to use a row store database, we still had to decide what type of row store database to deploy. Since our use case was support for small custom web apps, we wanted a highly consistent SQL database with ACID guarantees. Another factor we considered was containerization and ease of deployment on Kubernetes. We use Kubernetes to deploy our existing client workloads and we wanted to expand upon this cluster. Additional criteria were reliability, scalability, high availability, replication, self-healing, etc. There are many databases out there, each with their own strengths and weaknesses. Ultimately, after comparing our options, we decided to use CockroachDB. CockroachDB is an open source, row store, Postgres flavored database with an emphasis on durability. It is designed to survive most hardware and software failures while maintaining consistency across multiple replicas. Plus, it provided good documentation around deployment on Kubernetes. Initial Experiments Once we chose CockroachDB, it was time to try out actually deploying a database on Kubernetes. Using a Kubernetes Statefulset, we were able to create a CockroachDB cluster by bringing up multiple Kubernetes pods running the CockroachDB Docker image. Because it’s a distributed database, CockroachDB distributes its data across multiple nodes, using the Raft algorithm to ensure consensus. This distribution gives the database resiliency against node failures. Investigation of Durability One of the main claims made by CockroachDB is that it automatically survives hardware and software failures. (Hence the name “CockroachDB,” since cockroaches are hard to kill.) Part of researching CockroachDB was checking the credibility of those claims. We had fun trying to “kill the cockroaches” by simulating different types of failures. The first failure we simulated was a pod failure. If there are enough healthy pods to reliably recover the lost data, then the database is supposed to automatically create another pod to replace the one that failed. After manually killing a pod from the cluster, we were able to verify that a new one came up in its place and that none of the data was lost. 
Since we were using local storage, instead of attaching external volumes using PVCs (due to known volume scheduling issues in multi-zone clusters), killing a pod meant that its backing storage was also killed. This showed that replication of data across pods was happening properly. Next, we simulated a node failure. We found that once the cluster identified that a node was missing, it was able to automatically reschedule the terminated pods to other nodes. In testing these different failures, the importance of preparing for the worst conditions your system might face was highlighted. As an additional reliability precaution, we wanted to ensure that CockroachDB pods were scheduled across different nodes in the Kubernetes cluster. This was done by adding inter-pod anti-affinity rules to the Statefulset. These rules determine which nodes pods can be scheduled on, based on the labels which other pods running on the node have. For our use case, we set constraints such that pods backing the same database could not be scheduled to the same node. Productionalizing Databases After the research steps were complete, the next phase of the project was to make databases a feature for Civis users. For the next month, Carrie worked on refactoring our code to make adding databases as simple as deploying a service on Civis Platform. This was a large change that required several different steps to ensure not only code functionality, but also code quality. This provided Carrie with key learning experiences related to the code review process and debugging issues — for example, it is better to take your time and thoroughly check everything, rather than waiting for errors to arise. The priority of sufficient testing surpasses the need to deploy code as quickly as possible. Next Steps Once the database deployment process is complete, the next step is to allow users to connect to these databases through Shiny apps and Notebooks. Additionally, we need to automate processes for backing up and restoring these databases. More setup is required to back up data outside of the Kubernetes cluster. There are also some additional configuration options for the databases which we would like to expose to users, such as the number of replicas in their CockroachDB cluster. This project has provided Carrie with ample learning opportunities, not only with CockroachDB and Kubernetes, but also with production code and development processes. Carrie enjoyed tackling challenges such as getting Kubernetes to work, setting up Docker images for CockroachDB, refactoring code, and networking with pods.
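As a rough illustration of the inter-pod anti-affinity constraint described above, here is the pod-template "affinity" stanza expressed as a Python dict mirroring the equivalent Kubernetes YAML; the label keys and values are assumptions, not Civis' actual manifest.

```python
# Pod-template "affinity" stanza for a CockroachDB StatefulSet, expressed as a
# Python dict that mirrors the equivalent YAML. It tells the scheduler that two
# pods carrying the same labels must not land on the same node.
anti_affinity = {
    "podAntiAffinity": {
        "requiredDuringSchedulingIgnoredDuringExecution": [
            {
                # Match other pods backing the same database (label values are assumptions).
                "labelSelector": {
                    "matchLabels": {"app": "cockroachdb", "database-id": "example-db"}
                },
                # "One pod per hostname" means one pod per node for this label set.
                "topologyKey": "kubernetes.io/hostname",
            }
        ]
    }
}
```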
https://medium.com/civis-analytics/deploying-databases-on-kubernetes-e2cb7633dda5
['Civis Analytics']
2018-08-30 23:23:32.033000+00:00
['Data Science', 'Cockroachdb', 'Engineering', 'Kubernetes', 'Database']
Title Deploying Databases KubernetesContent core function Civis Platform allowing data scientist deploy workload ondemand without needing worry infrastructure involved year ago Civis put Jupyter Notebooks Kubernetes Shiny apps feature allow user perform computation deploy apps cloud However user began leverage feature received request option connect clouddeployed web apps persistent storage Carrie’s internship focused exploring option creating proof concept userdeployed database Finding Right Database Previously web apps deployed via Platform one easy option persistent storage Redshift column store database Although column store database perform well many data science task querying columnlevel data analysis tend slow come fetching updating single row information transactional data processing traditional row store database efficient Examples database include MySQL Postgres database quite common web development world since web apps transaction oriented app developer needed row store database deciding use row store database still decide type row store database deploy Since use case support small custom web apps wanted highly consistent SQL database ACID guarantee Another factor considered containerization ease deployment Kubernetes use Kubernetes deploy existing client workload wanted expand upon cluster Additional criterion reliability scalability high availability replication selfhealing etc many database strength weakness Ultimately comparing option decided use CockroachDB CockroachDB open source row store Postgres flavored database emphasis durability designed survive hardware software failure maintaining consistency across multiple replica Plus provided good documentation around deployment Kubernetes Initial Experiments chose CockroachDB time try actually deploying database Kubernetes Using Kubernetes Statefulset able create CockroachDB cluster bringing multiple Kubernetes pod running CockroachDB Docker image it’s distributed database CockroachDB distributes data across multiple node using Raft algorithm ensure consensus distribution give database resiliency node failure Investigation Durability One main claim made CockroachDB automatically survives hardware software failure Hence name “CockroachDB” since cockroach hard kill Part researching CockroachDB checking credibility claim fun trying “kill cockroaches” simulating different type failure first failure simulated pod failure enough healthy pod reliably recover lost data database supposed automatically create another pod replace one failed manually killing pod cluster able verify new one came place none data lost Since using local storage instead attaching external volume using PVCs due known volume scheduling issue multizone cluster killing pod meant backing storage also killed showed replication data across pod happening properly Next simulated node failure found cluster identified node missing able automatically reschedule terminated pod node testing different failure importance preparing worst condition system might face highlighted additional reliability precaution wanted ensure CockroachDB pod scheduled across different node Kubernetes cluster done adding interpod antiaffinity rule Statefulset rule determine node pod scheduled based label pod running node use case set constraint pod backing database could scheduled node Productionalizing Databases research step complete next phase project make database feature Civis user next month Carrie worked refactoring code make adding database simple deploying service Civis Platform large 
change required several different step ensure code functionality also code quality provided Carrie key learning experience related code review process debugging issue — example better take time thoroughly check everything rather waiting error arise priority sufficient testing surpasses need deploy code quickly possible Next Steps database deployment process complete next step allow user connect database Shiny apps Notebooks Additionally need automate process backing restoring database setup required back data outside Kubernetes cluster also additional configuration option database would like expose user number replica CockroachDB cluster project provided Carrie ample learning opportunity CockroachDB Kubernetes also production code development process Carrie enjoyed tackling challenge getting Kubernetes work setting Docker image CockroachDB refactoring code networking podsTags Data Science Cockroachdb Engineering Kubernetes Database
3,775
Radar Chart Basics with Python’s Matplotlib
Radar Chart Basics with Python's Matplotlib One handy alternative for displaying multiple variables In this article, I'll go through the basics of building a Radar chart, a.k.a. Polar, Web, Spider, and Star charts. The purpose of this visualization is to display multiple quantitative variables in a single view. That allows us to compare the variables with each other, visualize outliers, and even compare various sets of variables by drawing multiple radar charts. The idea comes from Pie charts, or more precisely from one of their variations, the Polar Area Chart. Florence Nightingale Florence Nightingale, considered to be the precursor of modern nursing, was also the first to publish a polar area chart. Her aesthetic and informative chart brings information about the Crimean war, more specifically about the causes of deaths. The areas of the slices represent the number of deaths in each month, where blue marks deaths by preventable or mitigable Zymotic diseases, red marks deaths from wounds, and black marks other causes of death. With that, she was able to clarify the importance of nursing, by displaying that most deaths were not caused by war wounds, but rather by mitigable diseases. Later, in 1877, the German scientist Georg von Mayr would publish the first Radar Chart. Even though both divide a circumference into equal parts, and have similar origins, they differ a lot, starting with how they encode the values: the Polar Area chart uses slices and their areas. In contrast, Radar charts use the distance from the center to mark a point.
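As a minimal, self-contained sketch of that idea in matplotlib (the category names and values below are made up for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical scores for one subject across five variables.
labels = ['Speed', 'Power', 'Agility', 'Stamina', 'Technique']
values = [4, 3, 5, 2, 4]

# One spoke per variable; repeat the first point so the polygon closes.
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
values = values + values[:1]
angles = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={'projection': 'polar'})
ax.plot(angles, values)              # the distance from the center encodes each value
ax.fill(angles, values, alpha=0.25)  # a light fill makes the shape easier to read
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
plt.show()
```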
https://medium.com/python-in-plain-english/radar-chart-basics-with-pythons-matplotlib-ba9e002ddbcd
['Thiago Carvalho']
2020-06-13 19:02:13.618000+00:00
['Python', 'Matplotlib', 'Radar Charts', 'Data Science', 'Data Visualization']
Title Radar Chart Basics Python’s MatplotlibContent Radar Chart Basics Python’s Matplotlib One handy alternative displaying multiple variable article I’ll go basic building Radar chart aka — Polar Web Spider Star chart purpose visualization display single viewing multiple quantitative variable allows u compare variable visualize outlier even comparison various set variable visualizing multiple radar chart idea come Pie chart precisely one variation Polar Area Chart Florence Nightingale Florence Nightingale — Considered precursor modern nursing also first publish polar area chart aesthetic informative chart brings information Crimean war specifically cause death area slice represent number death month blue death preventable mitigable Zymotic disease red death wound black cause death able clarify importance nursing displaying death caused war wound rather mitigable disease Later 1877 German scientist Georg von Mayr would publish first Radar Chart Even though divide circumference equal part similar origin differ lot starting encode value — Polar Area chart use slice area contrast Radar chart use distance center mark pointTags Python Matplotlib Radar Charts Data Science Data Visualization
3,776
Better understanding of matplot-library
"A picture is worth a thousand words": plots and graphs can be very effective for conveying a clear description of the data to an audience or for sharing the data with peer data scientists. Data visualization is a way of showing complex data in a graphical form to make it understandable. Data visualization is used when you are trying to explore the data and get familiar with it. In any corporate industry, it can be very valuable for supporting recommendations to clients, managers, or decision-makers. Darkhorse Analytics is a company that has been running a research lab at the University of Alberta since 2008. They have done really fascinating work on data visualization. Their approach to visualizing data depends on three key points: less is more effective, more attractive, and more impactive. In other words, any feature incorporated in the plot to make it attractive and pleasing must support the message that the plot is meant to get across, not distract from it. Matplotlib Matplotlib is one of the most popular data visualization libraries in Python. It was created by a neurobiologist, John Hunter (1968–2012). Matplotlib's architecture is composed of three layers: Architecture of matplotlib Backend layer The back-end layer has three built-in abstract interface classes: A. FigureCanvas: matplotlib.backend_bases.FigureCanvas It defines and encompasses the area onto which the figure is drawn. B. Renderer: matplotlib.backend_bases.Renderer An instance of the renderer class knows how to draw on the FigureCanvas. C. Event: matplotlib.backend_bases.Event It handles user input such as keyboard strokes and mouse clicks. Artist layer It is composed of one main object, i.e., the Artist. The artist is the object that knows how to use the renderer to draw on the canvas. Everything we see in a Matplotlib figure is an artist instance. There are two types of artist objects: A. Primitive: Line2D, Rectangle, Circle, and Text. B. Composite: Axis, Tick, Axes, and Figure. Each composite can contain other composite artists as well as primitive artists. For example, a figure artist would contain an axis artist as well as a text artist or rectangle artist. Scripting layer It was developed for those scientists who are not professional programmers. The goal of this layer is to perform a quick exploratory analysis of data. It is essentially the matplotlib.pyplot interface. It automates the process of defining a canvas, defining a figure artist, and connecting them. Since it does this automatically, it makes things easy for data analysts, and most data scientists prefer this scripting layer to visualize their data. The code shown below plots a histogram of a hundred random numbers and saves the histogram as matplotlib_histogram.png. The versatility of Matplotlib can be used to make many visualization types: scatter plots, bar charts and histograms, line plots, pie charts, and stem plots.
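The embedded snippet referenced above doesn't survive in this extract, so here is a minimal reconstruction of that scripting-layer histogram example; only the output file name comes from the text, the rest is an assumption.

```python
import numpy as np
import matplotlib.pyplot as plt

# Scripting layer (pyplot) example: histogram of a hundred random numbers.
x = np.random.randn(100)
plt.hist(x, bins=10)
plt.title('Histogram of 100 random numbers')
plt.savefig('matplotlib_histogram.png')
plt.show()
```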
https://medium.com/mldotcareers/data-visualization-8b17843b9bbc
['Saroj Humagain']
2020-09-16 12:08:03.136000+00:00
['Machine Learning', 'Python', 'Data Science', 'Matplotlib', 'Data Visualization']
Title Better understanding matplotlibraryContent “Picture worth thousand words” plot graph effective convey clear description data audience sharing data peer data scientist Data visualization way showing complex data graphical form make understandable trying explore data getting familiar data visualization used corporate industry valuable support recommendation client manager decisionmakers Darkhorse Analytics company run research lab University Alberta since 2008 done really fascinating work data visualization approach visualizing data depends three key point le effective attractive impactive word feature incorporated plot make attractive pleasing must support message plot meant get across distract Matplotlib Matplotlib one popular data visualization library python created neurobiologist John Hunter1968–2012 Matplotlib’s architecture composed three layer Architecture matplotlib Backend layer backend layer three builtin abstract interface class FigureCanvas matplotlibbackendbasesFigureCanvas defines encompasses area onto figure drawn B Renderer matplotlibbackendbasesRenderer instance renderer class know draw FigureCanvas C Event matplotlibbackendbasesEvent handle user input keyboard stroke mouse click Artist layer composed one main object ie Artist artist object know use renderer draw canvas Everything see Matplotlib figure artist instance two type artist object Primitive Line2d Rectangle Circle Text B Composite Axis Tick Axes Figure composite contain Composite artist well primitive artist example figure artist would contain axis artist well text artist rectangle artist Scripting layer developed scientist professional programmer goal layer perform quick exploratory analysis data essentially Matplotlibpyplot interface automates process defining canvas defining figure artist connecting Since automatically defines canvas artist connects make data analyst easy thing data scientist prefer scripting layer visualize data code plot histogram hundred random number save histogram matplotlibhistogrampng versatility Matplotlib used make many visualization type Scatter Plots output code Bar chart Histograms Bargraph Histogram Line plot Line plot Pie chart pie chart Stem plotsTags Machine Learning Python Data Science Matplotlib Data Visualization
3,777
Queue Data Structure
Queue Implementation We can create a Queue class as a wrapper and use the Python list to store the queue data. This class will have the implementation of the enqueue, dequeue, size, front, back, and is_empty methods. The first step is to create the class definition and decide how we are going to store our items. class Queue: def __init__(self): self.items = [] This is basically what we need for now. Just a class and its constructor. When the instance is created, it will have the items list to store the queue items. For the enqueue method, we just need to use the list append method to add new items. The new items will be placed in the last index of this items list. So the front item from the queue will always be the first item. def enqueue(self, item): self.items.append(item) It receives the new item and appends it to the list. The size method only counts the number of queue items by using the len function. def size(self): return len(self.items) The idea of the is_empty method is to verify whether or not the list has items in it. If it has items, it returns False. Otherwise, True. To count the number of items in the queue, we can simply use the size method already implemented. def is_empty(self): return self.size() == 0 The pop method from the list data structure can also be used to dequeue the item from the queue. It dequeues the first element, as is expected for the queue: the first added item. def dequeue(self): return self.items.pop(0) But we need to handle the queue emptiness. For an empty list, the pop method raises an exception IndexError: pop from empty list. So we can create an exception class to handle this issue. class Emptiness(Exception): pass And use it when the list is empty: def dequeue(self): if self.is_empty(): raise Emptiness('The Queue is empty') return self.items.pop(0) If it is empty, we raise this exception. Otherwise, we can dequeue the front item from the queue. We use this same emptiness strategy for the front method: def front(self): if self.is_empty(): raise Emptiness('The Queue is empty') return self.items[0] If it has at least one item, we get the front, the first added item in the queue. Also the same emptiness strategy for the back method: def back(self): if self.is_empty(): raise Emptiness('The Queue is empty') return self.items[-1] If it has at least one item, we get the back item, the last added item in the queue. Queue usage I created some helper functions to help test the queue usage. def test_enqueue(queue, item): queue.enqueue(item) print(queue.items) def test_dequeue(queue): queue.dequeue() print(queue.items) def test_emptiness(queue): is_empty = queue.is_empty() print(is_empty) def test_size(queue): size = queue.size() print(size) def test_front(queue): front = queue.front() print(front) def test_back(queue): back = queue.back() print(back) They basically call a queue method and print the expected result from the method call.
The usage will be something like: queue = Queue() test_emptiness(queue) # True test_size(queue) # 0 test_enqueue(queue, 1) # [1] test_enqueue(queue, 2) # [1, 2] test_enqueue(queue, 3) # [1, 2, 3] test_enqueue(queue, 4) # [1, 2, 3, 4] test_enqueue(queue, 5) # [1, 2, 3, 4, 5] test_emptiness(queue) # False test_size(queue) # 5 test_front(queue) # 1 test_back(queue) # 5 test_dequeue(queue) # [2, 3, 4, 5] test_dequeue(queue) # [3, 4, 5] test_dequeue(queue) # [4, 5] test_dequeue(queue) # [5] test_emptiness(queue) # False test_size(queue) # 1 test_front(queue) # 5 test_back(queue) # 5 test_dequeue(queue) # [] test_emptiness(queue) # True test_size(queue) # 0 We first instantiate a new queue from the Queue class.
https://medium.com/the-renaissance-developer/queue-data-structure-db5022d9eadb
[]
2020-02-02 23:03:38.142000+00:00
['Python', 'Algorithms', 'Software Development', 'Software Engineering', 'Programming']
Title Queue Data StructureContent Queue Implementation create Queue class wrapper use Python list store queue data class implementation enqueue dequeue size front back isempty method first step create class definition gone store item class Queue def initself selfitems basically need class constructor instance created item list store queue item enqueue method need use list append method add new item new item placed last index item list front item queue always first item def enqueueself item selfitemsappenditem receives new item appends list size method count number queue item using len function def sizeself return lenselfitems idea isempty method verify list item return False Otherwise True count number item queue simply use size method already implemented def isemptyself return selfsize 0 pop method list data structure also used dequeue item queue dequeues first element expected queue first added item def dequeueself return selfitemspop0 need handle queue emptiness empty list pop method raise exception IndexError poop empty list create exception class handle issue class EmptinessException pas us list empty def dequeueself selfisempty raise EmptinessThe Queue empty return selfitemspop0 empty raise exception Otherwise dequeue front item queue use emptiness strategy front method def frontself selfisempty raise EmptinessThe Queue empty return selfitems0 least one item get front first added item queue Also emptiness strategy back method def backself selfisempty raise EmptinessThe Queue empty return selfitems1 least one item get back item last added item queue Queue usage created helper function help test queue usage def testenqueuequeue item queueenqueueitem printqueueitems def testdequeuequeue queuedequeue printqueueitems def testemptinessqueue isempty queueisempty printisempty def testsizequeue size queuesize printsize def testfrontqueue front queuefront printfront def testbackqueue back queueback printback basically call queue method print expected result method call usage something like queue Queue testemptinessqueue True testsizequeue 0 testenqueuequeue 1 1 testenqueuequeue 2 1 2 testenqueuequeue 3 1 2 3 testenqueuequeue 4 1 2 3 4 testenqueuequeue 5 1 2 3 4 5 testemptinessqueue False testsizequeue 5 testfrontqueue 1 testbackqueue 5 testdequeuequeue 2 3 4 5 testdequeuequeue 3 4 5 testdequeuequeue 4 5 testdequeuequeue 5 testemptinessqueue False testsizequeue 1 testfrontqueue 5 testbackqueue 5 testdequeuequeue testemptinessqueue True testsizequeue 0 first instantiate new queue Queue classTags Python Algorithms Software Development Software Engineering Programming
3,778
How Making People Buy Into The Cause Has Helped Bombas Sell 40 Million Socks
Bombas' secret towards becoming a 100 million dollar brand As we look at the Bombas business model and the branding strategies used by Bombas, everything looks simple and easy to analyze. It's fair to say that Bombas as a brand is heavily dependent on a mission-based marketing strategy. It means that their whole brand representation and purpose of sales were dedicated to a mission of donating socks to homeless shelters. Let me take you through some of the strategies, tactics, and approaches they took to achieve such success. Building a brand of values Bombas has been successful in building a one-product eCommerce store, but as time progressed they launched multiple tees and are looking to join other verticals as well. But no matter what they sold, Bombas got itself associated with a social mission, and people didn't stop buying from them. Why? Because people felt good when their purchase was associated with a good cause, and it contributed towards a positive impact on society. In short, their buying experience became more gratifying and valuable. Bombas never leaves a single card on the table to show their social and environmental commitments. One such example is Bombas launching its PRIDE collection. The socks were launched to celebrate LGBTQ+ Pride Month, and Bombas pledged to donate 40% of all the socks to LGBTQ youth homeless shelters. Bombas PRIDE Collection (Screenshot) Bombas has built a brand of values showing their love towards multiple communities and people, thus contributing to a good cause for humanity. Design a product that has an impact Both the co-founders knew how essential those socks were for people living in homeless shelters. Instead of manufacturing and selling the socks which were already out there, they focused on enhancing the durability and capability of socks used in harsh environments. In other words, they wanted to build and design socks which were more sustainable and comfortable when compared to the usual socks. So instead of launching their socks right away, they spent two years before shipping the idea. Finally, in 2013, they came up with socks which were: Manufactured from environmentally sustainable materials like extra-long staple cotton and merino wool. Engineered for both support and comfort. For example, the honeycomb structure on the socks supports the mid foot and helps them hold onto the feet. Smarter socks with multiple designs that were perfectly suited for the everyday look and every walk of life. People who were marathon runners, everyday hustlers, and outdoor activists liked wearing these socks. By 2013, when Bombas first showed up in the market, there was no other sock brand which could compete with them, especially in terms of the research and technology used in designing a pair of socks. Customers from all walks of life started buying Bombas for its social mission, and the product itself was irresistible. Soon, Bombas' impact was seen among average people, and they started to ditch those old grandma socks which had ruled the market for decades. New collabs = Newer styles Within their short run of 7 years, Bombas has been able to grab multiple collaborations with celebrities and legendary icons. Each of these collaborations led to the launch of a new style of socks dedicated to an individual while still keeping their social mission intact. If you've followed Bombas on social media,
then you would know that, in general, Bombas releases a new pair of socks every now and then. Each new collaboration helps them add a new kind of sock to their arsenal. Apart from the new designs and styles, the brand gets a lot of hype and traction in the news. To their advantage, Bombas swiftly uses its collaborations as a promotion strategy on their social media handles. Some of the collaborations worth mentioning here are: Muhammad Ali x Bombas This collaboration let Bombas showcase their connection with athletes. The Muhammad Ali x Bombas collection drew inspiration from Ali's quotes and images. Muhammad Ali x Bombas collection The collection celebrated Muhammad Ali's legacy and his contributions to society. After this collaboration, Bombas as a brand became associated with athletes, and people started wearing their socks for outdoor activities. Zac's Performance Test Earlier this year, Zac Efron's collaboration with Bombas led to one of the coolest ad campaigns. The brand has had a good relationship with Zac Efron since 2017. In this particular campaign, Zac Efron was asked to test Bombas socks for an entire day of activities. From running to golfing and even wearing them during his leg day, Zac wore the socks and tested them for their sustainability. Zac's Performance Test (Screenshot) In the end, Efron shares that the socks passed all of his tests with flying colours.
https://medium.com/datadriveninvestor/how-making-people-buy-into-the-cause-has-helped-bombas-sell-40-million-pair-of-socks-3699fe45c67d
['Thakur Rahul Singh']
2020-12-27 16:05:57.375000+00:00
['Branding', 'Marketing', 'Business', 'Business Strategy', 'Startup']
Title Making People Buy Cause Helped Bombas Sell 40 Million SocksContent Bombas secret towards becoming 100 million dollar brand look Bombas business model branding strategy used Bombas Everything look simple easy analyze It’s prominent say Bombas brand heavily dependent missionbased marketing strategy mean whole brand representation purpose sale dedicated mission donating sock homeless shelter Let take strategy tactic approach took achieve success Building brand value Bombas successful building one product eCommerce store time progressed launching multiple tee looking join vertical well matter sold Bombas got associated social mission people didn’t stop buying people felt good purchase associated good cause contributed towards positive impact society short buying experience became gratifying valuable Bombas never leaf single card table show social environmental commitment One example Bombas launching PRIDE collection Socks launched celebrate LGBTQ Pride Month Bombas said donate 40 sock LGBTQ youth homeless shelter Bombas PRIDE Collection Screenshot Bombas built brand value showing love towards multiple community people thus contributing good cause humanity Design product impact cofounder knew much sock essential people living homeless shelter Instead manufacturing selling sock focused enhancing durability capability sock used harsh environment word wanted build design sock sustainable comfortable compared usual sock instead launching sock right away spent two year shipping idea Finally 2013 came sock Manufactured environmentally sustainable material like extralong staple cotton merino wool like extralong staple cotton merino wool Engineered get support comfort example honeycomb structure sock support mid foot help hold onto foot example honeycomb structure sock support mid foot help hold onto foot Smarter sock multiple design perfectly suited everyday look every walk life People marathon runner everyday hustler outdoor activist liked wearing sock 2013 Bombas firstly showed market sock brand could compete Especially interms research technology used designing pair sock Customers walk life started buying Bombas social mission product irresistible Soon Bombas impact seen average people started ditch old grandma sock ruled market decade New collabs Newer style Within short run 7 year Bombas able grab multiple collaboration celebrity legendary iconsEach collaboration launch new style sock dedicated individual still keeping social mission intact you’ve followed Bombas social medium would know general Bombas release new pair sock every new collaboration help add new kind sock arsenal Apart new design style brand used get lot hype traction news advantage Bombas swiftly us it’s collaboration promotion strategy social medium handle collaboration worth mentioning Muhammad Ali x Bombas collaboration made Bombas showcase connection athlete Muhammad Ali x Bombas collection drew inspiration Ali’s quote image Muhammad Ali x Bombas collection collection celebrated Muhammad Ali’s legacy contribution society collaboration Bombas brand became associated athlete people started noticing wearing sock outdoor activity Zac’s Performance Test Earlier year Zac Efron’s collaboration Bombas lead one coolest Ad campaign brand good relationship Zac Efron since 2017 particular campaign Zac Efron asked test Bombas sock entire day activity running golfing even wearing leg day Zac wore sock tested it’s sustainability Zac’s Performance Test Screenshot end Efron share sock passed test flying coloursTags Branding Marketing 
Business Business Strategy Startup
3,779
Inspiring Stories Behind the Best Songs of Our Time
Bob Dylan's Hurricane "He ain't no gentleman Jim" Bob Dylan, Hurricane That's what Bob Dylan sings about the subject of his song, the boxer Rubin "Hurricane" Carter. "Gentleman Jim" is a reference to the gentleman Jim Corbett, a white boxer in the 1800s known for his manners. "Hurricane" Carter for sure ain't no gentleman. He spent 19 years in jail for murder. Nonetheless, Bob Dylan felt he did not commit the crime and tried to drive publicity to the case. Carter's case was particularly complex and filled with legal missteps. The case reflects acts of racism and profiling against Carter, which Dylan describes as leading to a false trial and conviction. "Pistol shots ring out in the bar-room night… " The opening line of Bob Dylan's protest ballad On June 17, 1966, three white people were gunned down at a bar and grill in New Jersey. Witnesses described two black men as the murderers. Police pulled over Carter and his friend John Artis, who were black, but apart from that didn't fit the description. Even though they were released soon after, they were charged with the crimes two months later. The sentence was based on the testimony of two white men with criminal records, claiming they witnessed Carter and Artis shooting. Both were sentenced to life. In prison, Carter tried to publish his story in an effort to earn his freedom. His effort included a book, which had been sent to Bob Dylan. The famous singer took up his cause, wrote a song about him, and raised money on his tour in 1975. Soon after the release of Carter's book and the media attention, a legal back and forth started. The witnesses changed their stories, as they had apparently been coerced into their testimony. Nonetheless, Carter and Artis were not released until 1985. Carter died on April 20, 2014, at age 76. Pink Floyd's Brain Damage The lunatic is in my head You raise the blade, you make the change You re-arrange me 'til I'm sane You lock the door And throw away the key There's someone in my head but it's not me And if the cloud bursts, thunder in your ear You shout and no one seems to hear And if the band you're in starts playing different tunes I'll see you on the dark side of the moon — Pink Floyd, Brain Damage Published on The Dark Side Of The Moon, "Brain Damage" is one of Pink Floyd's most timeless tunes. With its theme of insanity, "Brain Damage" hit close to home for the band itself. The subject of this song is the ill-fated Syd Barrett. Syd Barrett was the singer and guitarist of Pink Floyd from 1965–1968. The Dark Side Of The Moon — Pink Floyd Roger Waters has stated that the insanity-themed lyrics are based on Syd Barrett's mental instabilities. With the line 'I'll see you on the dark side of the moon' Waters indicates that he felt connected to him in terms of mental idiosyncrasies. Barrett's "crazy" behavior is further referenced in the lyrics "And if the band you're in starts playing different tunes", which happened occasionally as he started to play different songs during concerts without even noticing. The song has a rather famous opening line, "The lunatic is on the grass…", whereby Waters is referring to areas of turf which display signs saying "Please keep off the grass" with the exaggerated implication that disobeying such signs might indicate insanity. — Wikipedia Elton John's Rocket Man "She packed my bags last night, pre-flight. Zero hour: 9 a.m. And I'm gonna be high as a kite by then." The song Rocket Man (I Think It's Gonna Be A Long, Long Time) was released as a single on 3 March 1972.
At first glance, a lot of people thought that the line in the song that says "I'm gonna be high as a kite by then" was referring to drug addiction. But, coming only three years after man first walked on the moon in July 1969, the meaning of this song was more literal. This piece of art describes a Mars-bound astronaut's mixed feelings at leaving his family in order to do his job. Rocketman — Elton John, Source: ntv Lyricist Bernie Taupin, who collaborated with Elton on all his major hits, explained in 2016: "People identify it, unfortunately, with David Bowie's Space Oddity. It actually wasn't inspired by that at all; it was actually inspired by a story by Ray Bradbury, from his book of science fiction short stories called The Illustrated Man. "In that book, there was a story called The Rocket Man, which was about how astronauts in the future would become sort of an everyday job. So I kind of took that idea and ran with that." The Rolling Stones' (I Can't Get No) Satisfaction Tossing and turning on a bad, sleepless night, you just can't get no satisfaction. At least, that's what happened to Keith Richards. The №2 song on the Rolling Stone The 500 Greatest Songs of All Time list was written after Richards heard the opening riff to (I Can't Get No) Satisfaction in a dream. In an interview with the Rolling Stone, Richards had this to say about the song's inception: "I woke up in the middle of the night. There was a cassette recorder next to the bed and an acoustic guitar. The next morning when I woke up, the tape had gone all the way to the end. So I ran it back, and there's like thirty seconds of this riff — 'Da-da da-da-da, I can't get no satisfaction' — and the rest of the tape is me snoring!" Billy Joel's Vienna "Why did I pick Vienna to use as a metaphor for the rest of your life? My father lives in Vienna now. I had to track him down. I didn't see him from the time I was 8 'till I was about 23–24 years old. He lives in Vienna, Austria which I thought was rather bizarre because he left Germany in the first place because of this guy named Hitler and he ends up going to the same place that Hitler hung out all those years! Vienna, for a long time was the crossroads. […] So the metaphor of Vienna has the meaning of a crossroad. It's a place of inter…course, of exchange — it's the place where cultures co-mingle. You get great beer in Vienna but you also get brandy from Armenia. It was a place where cultures co-mingled. So I go to visit my father in Vienna, I'm walking around this town and I see this old lady. She must have been about 90 years old and she is sweeping the street. I say to my father "What's this nice old lady doing sweeping the street?" He says "She's got a job, she feels useful, she's happy, she's making the street clean, she's not put out to pasture" — Billy Joel in an interview on Vienna Billy essentially thought to himself "I don't have to be worried about getting old, 'Vienna waits for you'". Vienna is pictured as the promised land, a place where old people are respected and there are no cultural barriers. It's a beautiful picture of a beautiful city which — fun fact — inspired me to move to Vienna in 2020. Vienna — Maximilian Perkmann, 2020 (the author) "The song describes that sometimes you have to take things more slowly in life, that you develop mindfulness, but also show gratitude for all the good things that happen.
Vienna as a city has embodied all this for me” — Billy Joel Falco’s Out Of The Dark Even though it is a German song, the Austrian artist Falco achieved a worldwide impact with “Out Of The Dark”. But above all, the song has caused a lot of controversy. „Muss ich denn sterben, um zu leben?“ (Do I have to die to live?) - Falco, Out Of The Dark Falco died of severe injuries received on 6 February 1998, when his car collided with a bus in the Dominican Republic. After his death, rumors spread that “Out Of The Dark” was his last call for help before he committed suicide. Still today, the circumstances have not been fully clarified. In an interview in 1997, about a year before Falco’s death, he stated that the theme of this song — as many times before — was drugs. The song tells the story of a man divorcing his wife and falling into depression. His only way out: heroin. That’s why the chorus of the song plays “Out Of The Dark” (divorce), “Into The Light” (heroin). After this interview, the song was played on a radio station for the first time. A similar explanation was given by his manager at the time, Claudia Wohlfromm. Falco in the music video to Out Of The Dark — TV90s “Out of the Dark is autobiographical — and also not. It is about drugs. In particular: about cocaine. I wrote the text from the point of view of a desperate man who is possessed by the drug without being addicted myself”. - Falco in an interview with the magazine “Bunte” 27/98 Paul Simon’s Diamonds on the Soles of Her Shoes Still, to date, the real meaning of Paul Simon’s masterpiece is not clear. There are several interpretations. She’s a rich girl She don’t try to hide it Diamonds on the soles of her shoes He’s a poor boy Empty as a pocket Empty as a pocket with nothing to lose — Diamonds on the Soles of Her Shoes, Paul Simon Love The more popular interpretation pictures Paul’s short relationship with a diamond mine owner’s daughter while recording in South Africa. She was very rich and privileged, yet she acted very down to earth, like a poor girl. The woman was so rich that she didn’t even notice the diamonds on the soles of her shoes. Africa Thinking on a deeper level, the lyrics could refer to “the rich girl” Africa herself. Africa has diamonds on the soles of her shoes, down underfoot in Southern Africa. The first Poor Boy in the song seems to be the native Africans. The Europeans think the Zulus have “nothing to lose.” One way to lose the walking blues is to dig up the diamonds. Paul Simon expresses his hope that Africa’s nations will eject the colonialists and take care of themselves. Simon mentions this song as one of his best musical achievements.
https://medium.com/illumination/the-inspiring-stories-behind-7-of-the-best-songs-of-our-time-e4810619b3e2
['Maximilian Perkmann']
2020-12-04 15:46:29.323000+00:00
['Music', 'Art', 'Mental Health', 'History', 'Self Improvment']
Title Inspiring Stories Behind Best Songs TimeContent Bob Dylan’s Hurricane “He ain’t gentleman Jim” Bob Dylan Hurricane That’s Bob Dylan sings subject song boxer Rubin “Hurricane” Carter “Gentleman Jim” reference gentleman Jim Corbett white boxer 1800s known manner “Hurricane” Carter sure ain’t gentleman spent 19 year jail murder Nonetheless Bob Dylan felt commit crime tried drive publicity case Carter’s case particularly complex filled legal misstep case reflects act racism profiling Carter Dylan describes leading false trial conviction “Pistol shot ring barroom night… ” opening line Bob Dylan’s protest ballad June 17 1966 three white people gunned bar New Jersey Grill Witnesses described two black men murderer Police pulled Carter friend John Artis black apart didn’t fit description Even though released soon charged crime two month later sentence based testimony two white men criminal record claiming witnessed Carter Artis shooting sentenced life prison Carter tried publish story effort earn freedom effort included book sent Bob Dylan famous singer took cause wrote song raised money tour 1975 Soon release Carter’s book medial attention back forth trial started witness changed story apparently coerced testimony Nonetheless Carter Artis released 1985 Carter died April 20 2014 age 76 Pink Floyd’s Brain Damage lunatic head raise blade make change rearrange ’til I’m sane lock door throw away key There’s someone head it’s cloud burst thunder ear shout one seems hear band you’re start playing different tune I’ll see dark side moon — Pink Floyd Brain Damage Published Dark Side Moon “Brain Damage” one Pink Floyd’s timeless tune theme insanity “Brain Damage” hit close band subject song illfated Syd Barrett Syd Barret singer guitarist Pink Floyd 1965–1968 Dark Side Moon — Pink Floyd Roger Waters stated insanitythemed lyric based Syd Barrett’s mental instability line ‘I’ll see dark side moon’ Waters indicates felt related term mental idiosyncrasy Barrett’s “crazy” behavior referenced lyric “And band you’re start playing different tunes” happened occasionally started play different song concert without even noticing song rather famous opening line “The lunatic grass…” whereby Waters referring area turf display sign saying “Please keep grass” exaggerated implication disobeying sign might indicate insanity — Wikipedia Elton John’s Rocket Man “She packed bag last night preflight Zero hour 9 I’m gonna high kite then” song Rocket Man Think It’s Gonna Long Long Time released single 3 March 1972 first glance lot people thought line song say “I’m gonna high kite then” referring drug addiction coming three year man first walked moon July 1969 meaning song literal piece art describes Marsbound astronaut’s mixed feeling leaving family order job Rocketman — Elton John Source ntv Lyricist Bernie Taupin collaborated Elton major hit explained 2016 “People identify unfortunately David Bowie’s Space Oddity actually wasn’t inspired actually inspired story Ray Bradbury book science fiction short story called Illustrated Man “In book story called Rocket Man astronaut future would become sort everyday job kind took idea ran that” Rolling Stones’ Can’t Get Satisfaction Tossing turning bad sleepless night can’t get satisfaction least that’s happened Keith Richards №2 song Rolling Stone 500 Greatest Songs Time list written Richards heard beginning riff Can’t Get Satisfaction dream interview Rolling Stone Richards say song’s inception “I woke middle night cassette recorder next bed acoustic guitar next morning woke tape 
gone way end ran back there’s like thirty second riff — ‘Dada dadada can’t get satisfaction’ — rest tape snoring” Billy Joel’s Vienna “Why pick Vienna use metaphor rest life father life Vienna track didn’t see time 8 ‘till 23–24 year old life Vienna Austria thought rather bizarre left Germany first place guy named Hitler end going place Hitler hung year Vienna long time crossroad … metaphor Vienna meaning crossroad It’s place inter…course exchange — it’s place culture comingle get great beer Vienna also get brandy Armenia place culture comingled go visit father Vienna I’m walking around town see old lady must 90 year old sweeping street say father “What’s nice old lady sweeping street” say “She’s got job feel useful she’s happy she’s making street clean she’s put pasture” — Billy Joel interview Vienna Billy essentially thought “I don’t worried getting old ‘Vienna wait you’” Vienna pictured promised land place old people respected cultural barrier It’s beautiful picture beautiful city — fun fact — inspired move Vienna 2020 Vienna — Maximilian Perkmann 2020 author “The song describes sometimes take thing slowly life develop mindfulness also show gratitude good thing happen Vienna city embodied me” — Billy Joel Falco’s Dark Even though german song Austrian artist Falco achieved worldwide impact “Out Dark” song caused lot controversy „Muss ich denn sterben um zu leben die live Falco Dark Falco died severe injury received 6 February 1998 car collided bus Dominican Republic death rumor stated “Out Dark” last call help committed suicide Still today circumstance fully clarified interview 1997 year Falco’s death stated theme song — many time — drug song tell story man divorcing wife falling depression way heroin That’s chorus song play “Out Dark divorce Light heron interview song played radio station first time similar explanation given manager time Claudia Wohlfromm Falco music video Dark— TV90s “Out Dark autobiographical — also drug particular cocaine wrote text point view desperate man possessed drug without addicted myself” Falco interview magazine “Bunte” 2798 Paul Simon’s Diamonds Soles Shoes Still date real meaning Paul Simon’s masterpiece clear interpretation She’s rich girl don’t try hide Diamonds sol shoe He’s poor boy Empty pocket Empty pocket nothing lose — Diamonds Soles Shoes Paul Simon Love popular interpretation picture Paul’s short relationship diamond mine owner’s daughter recording South Africa rich privileged yet acted earth like poor girl woman rich didn’t even notice diamond sol shoe Africa Thinking deeper level lyric could refer “the rich girl” Africa Africa diamond sol shoe underfoot Southern Africa first Poor Boy song seems native Africans Europeans think Zulus “nothing lose” One way lose walking blue dig diamond Paul Simon express hope Africa’s nation eject colonialist take care Simon mention song one best musical achievementsTags Music Art Mental Health History Self Improvment
3,780
10 Best Programming Languages to Learn in 2021
10 Best Programming Languages to Learn in 2021 A developer’s list of the programming languages you probably want to start learning in 2021 Photo by Annie Spratt on Unsplash A couple of months ago, I was reading an interesting article on HackerNews, which argued that why you should learn numerous programming languages even if you won’t immediately use them, and I have to say that I agreed. Since each programming language is good for something specific but not so great for others, it makes sense for Programmers and senior developers to know more than one language so that you can choose the right tool for the job. But which programming languages should you learn? As there are many programming languages ranging from big three like Java, JavaScript, and Python to lesser-known like Julia, Rust or R. The big questions is which languages will give you the biggest bang for your buck? Even though Java is my favorite language, and I know a bit of C and C++, I am striving to expand beyond this year. I am particularly interested in Python and JavaScript, but you might be interested in something else. Top 10 Programming Languages to Learn in 2021 This list of the top 10 programming languages — compiled with help from Stack Overflow’s annual developer survey as well as my own experience — should help give you some ideas. Note: Even though it can be tempting, don’t try to learn too many programming languages at once; choose one first, master it, and then move on to the next one. 1. Java Even though I have been using Java for years, there are still many things I have to learn. My goal for 2021 is to focus on recent Java changes on JDK 9, 10, 11, and 12. If yours is the same, you’ll want to check out the Complete Java MasterClass from Udemy. If you don’t mind learning from free resources, then you can also check out this list of free Java programming courses. 2. Javascript Whether you believe it or not, JavaScript is the number one language of the web. The rise of frameworks like jQuery, Angular, and React JS has made JavaScript even more popular. Since you just cannot stay away from the web, it’s better to learn JavaScript sooner than later. It’s also the number one language for client-side validation, which really does make it work learning JavaScript. Convinced? Then this JavaScript Masterclass is a good place to start. For cheaper alternatives, check out this list of free JavaScript courses. 3. Python Python has now toppled Java to become the most taught programming language in universities and academia. It’s a very powerful language and great to generate scripts. You will find a python module for everything you can think of. For example, I was looking for a command to listen to UDP traffic in Linux but couldn’t find anything. So, I wrote a Python script in 10 minutes to do the same. If you want to learn Python, the Python Fundamentals from Pluralsight is one of the best online course to start with. You will need a Pluralsight membership to get access to the course, which costs around $29 per month or $299 annually. You can also access it using their free trial. And, if you need one more choice, then The Complete Python Bootcamp: Go from zero to hero in Python 3 on Udemy is another awesome course for beginners. And if you are looking for some free alternatives, you can find a list here. 4. Kotlin If you are thinking seriously about Android App development, then Kotlin is the programming language to learn this year. It is definitely the next big thing happening in the Android world. 
Even though Java is my preferred language, Kotlin has got native support, and many IDEs like IntelliJ IDEA and Android Studio are supporting Kotlin for Android development. The Complete Android Kotlin Developer Course is probably the best online course to start with. 5. Golang This is another programming language you may want to learn this year. I know it’s not currently very popular and at the same time can be hard to learn, but I feel its usage is going to increase in 2021. There are also not that many Go developers right now, so you really may want to go ahead and bite the bullet, especially if you want to create frameworks and things like that. If you can invest some time and become an expert in Go, you’re going to be in high demand. Go: The Complete Developer’s Guide from Udemy is the online course I am going to take to get started. 6. C# If you are thinking about GUI development for PC and Web, C# is a great option. It’s also the programming language for the .NET framework, not to mention used heavily in game development for both PC and consoles. If you’re interested in any of the above areas, check out the Learn to Code by Making Games — Complete C# Unity Developer from Udemy. I see more than 200K students have enrolled in this course, which speaks for its popularity. And again, if you don’t mind learning from free courses, here is a list of some free C# programming courses for beginners. 7. Swift If you are thinking about iOS development like making apps for the iPhone and iPad, then you should seriously consider learning Swift in 2021. It replaces Objective C as the preferred language to develop iOS apps. Since I am the Android guy, I have no goal with respect to Swift, but if you do, you can start with the iOS 14 and Swift 5 — The Complete iOS App Development Bootcamp. If you don’t mind learning from free resources then you can also check out this list of free iOS courses for more choices. There’s also this nifty tutorial. 8. Rust To be honest, I don’t know much about Rust since I’ve never used it, but it did take home the prize for ‘most loved programming language’ in the Stack Overflow developer survey, so there’s clearly something worth learning here. There aren’t many free Rust courses out there, but Rust For Undergrads is a good one to start with. 9. PHP If you thought that PHP is dead, then you are dead wrong. It’s still very much alive and kicking. Fifty percent (50%) of internet websites are built using PHP, and even though it’s not on my personal list of languages to learn this year, it’s still a great choice if you don’t already know it. And, if you want to learn from scratch, PHP for Beginners — Become a PHP Master — CMS Project on Udemy is a great course. And, if you love free stuff to learn PHP, checkout this list of free PHP and MySQL courses on Hackernoon 10. C/C++ Both C and C++ are evergreen languages, and many of you probably know them from school. But if you are doing some serious work in C++, I can guarantee you that your academic experience will not be enough. You need to join a comprehensive online course like C++: From Beginner to Expert to become industry-ready. And for my friends who want some free courses to learn C++, here is a list list of free C++ Programming courses for beginners. Conclusion Even if you learn just one programming language apart from the one you use on a daily basis, you will be in good shape for your career growth. The most important thing right now is to make your goal and do your best to stick with it. Happy learning! 
If you enjoy this article, here are a few more of my write-ups you may like: Good luck with your Programming journey! It’s certainly not going to be easy, but by following this list, you are one step closer to becoming the Software Developer you always wanted to be. If you like this article, then please consider following me on Medium (javinpaul) if you’d like to be notified of every new post, and don’t forget to follow javarevisited on Twitter! Other Medium Articles you may like
https://medium.com/hackernoon/10-best-programming-languages-to-learn-in-2019-e5b05af4a972
[]
2020-12-09 09:10:44.921000+00:00
['JavaScript', 'Java', 'Programming', 'Python', 'Coding']
Title 10 Best Programming Languages Learn 2021Content 10 Best Programming Languages Learn 2021 developer’s list programming language probably want start learning 2021 Photo Annie Spratt Unsplash couple month ago reading interesting article HackerNews argued learn numerous programming language even won’t immediately use say agreed Since programming language good something specific great others make sense Programmers senior developer know one language choose right tool job programming language learn many programming language ranging big three like Java JavaScript Python lesserknown like Julia Rust R big question language give biggest bang buck Even though Java favorite language know bit C C striving expand beyond year particularly interested Python JavaScript might interested something else Top 10 Programming Languages Learn 2021 list top 10 programming language — compiled help Stack Overflow’s annual developer survey well experience — help give idea Note Even though tempting don’t try learn many programming language choose one first master move next one 1 Java Even though using Java year still many thing learn goal 2021 focus recent Java change JDK 9 10 11 12 you’ll want check Complete Java MasterClass Udemy don’t mind learning free resource also check list free Java programming course 2 Javascript Whether believe JavaScript number one language web rise framework like jQuery Angular React JS made JavaScript even popular Since cannot stay away web it’s better learn JavaScript sooner later It’s also number one language clientside validation really make work learning JavaScript Convinced JavaScript Masterclass good place start cheaper alternative check list free JavaScript course 3 Python Python toppled Java become taught programming language university academia It’s powerful language great generate script find python module everything think example looking command listen UDP traffic Linux couldn’t find anything wrote Python script 10 minute want learn Python Python Fundamentals Pluralsight one best online course start need Pluralsight membership get access course cost around 29 per month 299 annually also access using free trial need one choice Complete Python Bootcamp Go zero hero Python 3 Udemy another awesome course beginner looking free alternative find list 4 Kotlin thinking seriously Android App development Kotlin programming language learn year definitely next big thing happening Android world Even though Java preferred language Kotlin got native support many IDEs like IntelliJ IDEA Android Studio supporting Kotlin Android development Complete Android Kotlin Developer Course probably best online course start 5 Golang another programming language may want learn year know it’s currently popular time hard learn feel usage going increase 2021 also many Go developer right really may want go ahead bite bullet especially want create framework thing like invest time become expert Go you’re going high demand Go Complete Developer’s Guide Udemy online course going take get started 6 C thinking GUI development PC Web C great option It’s also programming language NET framework mention used heavily game development PC console you’re interested area check Learn Code Making Games — Complete C Unity Developer Udemy see 200K student enrolled course speaks popularity don’t mind learning free course list free C programming course beginner 7 Swift thinking iOS development like making apps iPhone iPad seriously consider learning Swift 2021 replaces Objective C preferred language develop iOS apps Since 
Android guy goal respect Swift start iOS 14 Swift 5 — Complete iOS App Development Bootcamp don’t mind learning free resource also check list free iOS course choice There’s also nifty tutorial 8 Rust honest don’t know much Rust since I’ve never used take home prize ‘most loved programming language’ Stack Overflow developer survey there’s clearly something worth learning aren’t many free Rust course Rust Undergrads good one start 9 PHP thought PHP dead dead wrong It’s still much alive kicking Fifty percent 50 internet website built using PHP even though it’s personal list language learn year it’s still great choice don’t already know want learn scratch PHP Beginners — Become PHP Master — CMS Project Udemy great course love free stuff learn PHP checkout list free PHP MySQL course Hackernoon 10 CC C C evergreen language many probably know school serious work C guarantee academic experience enough need join comprehensive online course like C Beginner Expert become industryready friend want free course learn C list list free C Programming course beginner Conclusion Even learn one programming language apart one use daily basis good shape career growth important thing right make goal best stick Happy learning enjoy article writeups may like Good luck Programming journey It’s certainly going easy following list one step closer becoming Software Developer always wanted like article please consider following medium javinpaul you’d like notified every new post don’t forget follow javarevisited Twitter Medium Articles may likeTags JavaScript Java Programming Python Coding
3,781
Next level data visualization
Introduction Any data analysis project has two essential goals. First, to curate data in readily interpretable form, uncover hidden patterns, and identify key trends. Second, and perhaps more important, is to effectively communicate these findings to the readers through thoughtful data visualization. This is an introductory article on how to begin thinking about customized visualizations that readily disseminate key data features to the viewer. We achieve this by moving beyond the one-line charts that have made plotly so popular among data analysts and focusing on individualised chart layouts & aesthetics. All code used in this article is available on Github. All charts presented here are interactive and have been rendered using jovian, an incredible tool for sharing and managing jupyter notebooks. This Medium article by Usha Rengaraju contains all the details on how to use this tool. Plotly Plotly is a natural library of choice for data visualization because it’s easy to use, well documented, and allows for customization of charts. We begin by briefly summarizing the plotly architecture in this section before moving to visualizations in the subsequent sections. While most people prefer using the high-level plotly.express module, in this article we will instead focus on use of the plotly.graph_objects.Figure class to render charts. And while there is extensive documentation available on the plotly website, the material can be a bit overwhelming for those new to visualization. I therefore endeavour to provide a clear and concise explanation of the syntax. The plotly graph_objects that we will make use of are composed of the following three high-level attributes, and plotting a chart essentially involves specifying these: The data attribute includes selection of the chart type from over 40 different types of traces like scatter, bar, pie, surface, choropleth etc. and passing the data to these functions. The layout attribute controls all the non-data related aspects of the chart like text font, background color, axes & ticks, margins, title, legend etc. This is the attribute we will spend considerable time manipulating to make changes like adding an additional y-axis or plotting multiple charts in a figure when dealing with large datasets. The frames attribute is used to specify the sequence of frames when making animated charts. Subsequent articles in this series will make use of this attribute extensively. For most of the charts we make in this article, the following three are the standard libraries that we will use:
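The import block itself did not survive in this copy of the article, so here is a minimal sketch of the kind of setup the text is describing. The exact three libraries are an assumption on my part (plotly.graph_objects, pandas, and numpy are the usual trio for this style of work), and the short figure below is only an illustrative graph_objects example, not the author's original code.

import numpy as np  # numerical arrays for the sample data
import pandas as pd  # tabular data handling
import plotly.graph_objects as go  # low-level figure interface

# A minimal Figure: the data attribute carries a trace, layout carries the non-data styling.
df = pd.DataFrame({"x": np.arange(10), "y": np.random.rand(10)})
fig = go.Figure(data=[go.Scatter(x=df["x"], y=df["y"], mode="lines+markers", name="demo")])
fig.update_layout(title="Minimal graph_objects example", xaxis_title="x", yaxis_title="y")
fig.show()

The same pattern scales to the customized charts discussed later: pick a trace type for data, then override layout fields (fonts, margins, extra axes) as needed.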
https://towardsdatascience.com/next-level-data-visualization-f00cb31f466e
['Aseem Kashyap']
2020-10-25 19:59:48.916000+00:00
['Python', 'Charts', 'Data Analysis', 'Data Visualization', 'Plotly']
Title Next level data visualizationContent Introduction data analysis project two essential goal First curate data readily interpretable form uncover hidden pattern identify key trend Second perhaps important effectively communicate finding reader thoughtful data visualization introductory article begin thinking customized visualization readily disseminate key data feature viewer achieve moving beyond one line chart made plotly popular among data analyst focusing individualised chart layout aesthetic code used article available Github chart presented interactive rendered using jovian incredible tool sharing managing jupyter notebook medium article Usha Rengaraju contains detail use tool Plotly Plotly natural library choice data visualization easy use well documented allows customization chart begin briefly summarizing plotly architecture section moving visualization subsequent section people prefer using high level plotlyexpress module article instead focus use plotlygraphobjectsFigure class render chart extensive documentation available plotly website material bit overwhelming new visualization therefore endeavour provide clear concise explanation syntax plotly graphobjects make use composed following three highlevel attribute plotting chart essentially involved specifying data attribute includes selection chart type 40 different type trace like scatter bar pie surface choropleth etc passing data function attribute includes selection chart type 40 different type trace like etc passing data function layout attribute control nondata related aspect chart like text font background color axis ticker margin title legend etc attribute spend considerable time manipulating make change like adding additional yaxis plotting multiple chart figure dealing large datasets attribute control nondata related aspect chart like text font background color axis ticker margin title legend etc attribute spend considerable time manipulating make change like adding additional yaxis plotting multiple chart figure dealing large datasets frame used specify sequence frame making animated chart Subsequent article series make use attribute extensively chart make article following three standard library useTags Python Charts Data Analysis Data Visualization Plotly
3,782
Building a Data Driven Marketing Team
A High-Level Overview of Marketing Metrics and KPIs To generalise, the Marketing team requires data to track the revenue cycle. They have Leading Indicators, which include Lead Creation, Source of Leads, MQLs (Marketing Qualified Leads) inclusive of the sub-field Programs that are creating those MQLs, and Lead Velocity. Lead Velocity is effectively how long a lead takes to become “qualified”. The Marketing team also has Lagging Indicators, which include Opportunity Creation, Pipeline Creation, and Revenue. Revenue should additionally have the ability to be broken down by program / channel (e.g. trade show). Low-Level Descriptions of the Marketing Metrics and KPIs Operations Specific Metrics Cost per Acquisition is the total cost of acquiring a new customer via a specific channel or campaign. While this can be applied as broadly or as narrowly as one wants, it’s often used in reference to media spend. In contrast to Cost per Conversion or Cost per Impression, CPA focuses on the cost for the complete journey from first contact to customer. Cost Per Acquisition is also differentiated from Customer Acquisition Cost (CAC) by its granular application, i.e. looking at specific channels or campaigns instead of an average cost for acquiring customers across all channels and headcount. To calculate the Cost per Acquisition, simply divide the Total Cost (whether total media spend or the spend on a specific channel/campaign to acquire customers) by the Number of New Customers Acquired from the same channel/campaign. Channel or Campaign CPA Calculation and Media Spend Calculation (formulas shown as images in the original article).
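Since the formula images are not reproduced here, a small illustrative sketch of the calculation may help. The channel names and figures below are invented for illustration only; they are not from the original article.

# Illustrative CPA calculation per channel (hypothetical spend and customer counts).
channel_spend = {"paid_search": 12000.0, "trade_show": 30000.0, "email": 2500.0}
new_customers = {"paid_search": 80, "trade_show": 60, "email": 50}

def cost_per_acquisition(total_cost, customers_acquired):
    # CPA = total cost of a channel/campaign divided by the new customers acquired from it
    if customers_acquired == 0:
        return float("nan")  # guard against channels that produced no customers
    return total_cost / customers_acquired

for channel in channel_spend:
    cpa = cost_per_acquisition(channel_spend[channel], new_customers[channel])
    print(f"{channel}: CPA = {cpa:.2f}")

Running this prints a CPA of 150.00 for paid_search, 500.00 for trade_show, and 50.00 for email, which is exactly the kind of per-channel comparison the metric is meant to enable.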
https://medium.com/hacking-talent/building-a-data-driven-marketing-team-10b33f13c485
['Matthew W. Noble']
2020-06-02 10:08:48.323000+00:00
['Metrics', 'Marketing', 'Data', 'Engineering', 'Saas Marketing']
Title Building Data Driven Marketing TeamContent HighLevel Overview Marketing Metrics KPIs generalise Marketing team requires data track revenue cycle Leading Indicators include Lead Creation Source Leads MQLs Marketing Qualified Leads inclusive subfield Programs creating MQLs Lead Velocity Lead Velocity effectively long lead take become “qualified” Marketing team also Indicators include Opportunity Creation Pipeline Creation Revenue Revenue additionally ability broken program channel eg trade show LowLevel Descriptions Marketing Metrics KPIs Operations Specific Metrics Cost per Acquisition total cost acquiring new customer via specific channel campaign applied broadly narrowly one want it’s often used reference medium spend contrast Cost per Conversion Cost per Impression CPA focus cost complete journey first contact customer Cost Per Acquisition also differentiated Customer Acquisition Cost CAC granular application ie looking specific channel campaign instead average cost acquiring customer across channel headcount calculate Cost per Acquisition simply divide Total Cost whether medium spend total specific channelcampaign acquire customer Number New Customers Acquired channelcampaign Channel Campaign CPA Calculation Media Spend CalculationTags Metrics Marketing Data Engineering Saas Marketing
3,783
Ensemble model : Data Visualization
Photo by William Iven on Unsplash So this is part 2 of my previous article (Ensemble Modelling- How to perform in python). Check out my previous article for a better understanding of this one. Thank you 😊. So, let’s start with this tutorial on visualizing different models and comparing their accuracies. Here we have taken KNN, Decision Tree and SVM models. Let’s recall that in the previous article we used a list named “accuracys” to store the accuracy of the above-mentioned models respectively. Let us see what it contains. NOTE: We have not used separate train and test sets here; we are using train_test_split(), so every time we use this split function the data gets split into train and test sets at a random point. So the accuracy will keep changing depending upon the train and test set values. Now model_names is another empty list which will contain the names of the models; this list will help us plot better.

model_names = []  #empty list
for name, model in estimators:
    model_names.append(name)

Plotting Bar Plot

import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.bar(model_names, accuracys)
plt.yticks(np.arange(0, 1, .10))
plt.show()

The ax.bar() function creates the bar plot; here we have given model_names as x and accuracys as the bar heights. Various other parameters can also be passed, such as width, bottom, and align. We can even compare the accuracy of the ensemble model by adding the respective name and accuracy of the ensemble model to the model_names and accuracys lists using the code below and then running the plotting code again.

#adding accuracy of ensemble for comparison
if "Ensemble" not in model_names:
    model_names.append("Ensemble")
if ensem_acc not in accuracys:
    accuracys.append(ensem_acc)

As we can easily see, the ensemble has the highest accuracy of all, and if we compare more closely we can see that the SVM model gave the lowest accuracy. Let’s see how we can plot a box plot now. Here we are using k-fold cross-validation for splitting up the data and testing the model accuracy. We are going to obtain multiple accuracies for each model. Here we have split the data into 15 folds, so it will break the data into 15 sets and test each model 15 times, so 15 different accuracies will be obtained. Finally, we take the mean of these accuracies to know the average accuracy of the model.

from sklearn import model_selection

acc = []  #empty list
names1 = []
scoring = 'accuracy'
#here creating a list "acc" for storing multiple accuracies of each model.
for name, model in estimators:
    kfold = model_selection.KFold(n_splits=15)
    res = model_selection.cross_val_score(model, X, target, cv=kfold, scoring=scoring)
    acc.append(res)
    names1.append(name)
    model_accuracy = "%s: %f" % (name, res.mean())
    print(model_accuracy)

For clarity, let’s see what the “acc” list contains!

Plotting Box Plot

blue_outlier = dict(markerfacecolor='b', marker='D')
fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(acc, flierprops=blue_outlier)
ax.set_xticklabels(names1)
plt.show()

The blue colored dots are outliers, the lines extending from the boxes are the whiskers, and the horizontal orange lines are the medians.
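A quick note for readers who don’t have part 1 open: the estimators list iterated above, and the ensemble model scored in the next snippet, come from the previous article and are not redefined here. Purely as an assumed sketch (the exact models, parameters, and the data variables X, target, X_train and target_train in part 1 may differ), they could look like this:

# Assumed sketch of the objects carried over from part 1 (not the author's exact code).
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier

estimators = [
    ('KNN', KNeighborsClassifier()),    # k-nearest neighbours
    ('DT', DecisionTreeClassifier()),   # decision tree
    ('SVM', SVC()),                     # support vector machine
]
# Voting ensemble that combines the three base models by majority vote.
ensemble = VotingClassifier(estimators=estimators, voting='hard')
# X / target (and the X_train / target_train split) are the feature matrix and labels from part 1.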
k_folds = model_selection.KFold(n_splits=15, shuffle=True, random_state=12)
ensemb_acc = model_selection.cross_val_score(ensemble, X_train, target_train, cv=k_folds)
print(ensemb_acc.mean())

if "Ensemble" not in names1:
    names1.append("Ensemble")

from numpy import array, array_equal, allclose
def arr(myarr, list_arrays):
    return next((True for item in list_arrays if item.size == myarr.size and allclose(item, myarr)), False)

print(arr(ensemb_acc, acc))
if arr(ensemb_acc, acc) == False:
    acc.append(ensemb_acc)
acc

Now, by running the box plot code again, we get the updated comparison. You can even customise your boxplot using different parameters: patch_artist=True will display the boxplot with colors, notch=True displays the boxplot in a notched format, and vert=0 will display a horizontal boxplot. Here is the entire code: Link for the code from previous article: https://medium.com/analytics-vidhya/ensemble-modelling-in-a-simple-way-386b6cbaf913 I hope you liked my article 😃. If you find this helpful then it would be really nice to see you appreciate my hard work by clapping for me 👏👏. Thank you.
https://medium.com/analytics-vidhya/ensemble-model-data-visualization-2f4cb06859c1
['Shivani Parekh']
2020-10-10 13:40:13.591000+00:00
['Python', 'Data Science', 'Ensemble Learning', 'Data Visualization', 'Analysis']
Title Ensemble model Data VisualizationContent Photo William Iven Unsplash part 2 previous article Ensemble Modelling perform python Checkout previous article better understanding one Thank 😊 Lets start tutorial visualizing different model comparing accuracy taken KNN Decision Tree SVM model Lets recall previous article used “accuracys” named list store accuracy mentioned model respectively Let u see contains NOTE used train test set seperately using traintestsplit due everytime use split function train test get splitted random point accuracy keep changing depending upon train test set value modelnames another empty containing name model list help u plot better modelnames empty list name model estimator modelnamesappendname Plotting Bar Plot import matplotlibpyplot plt fig pltfigure ax figaddaxes0011 axbarmodelnamesaccuracies pltyticksnparange0 1 10 pltshow line axbar function creates bar plot given modelnames X accuracy height Various parameter also mentioned width bottom align even compare accuracy ensemble model adding respective name accuracy ensemble model modelnames accuracy list using code run code adding accuracy ensemble comparison “Ensemble” modelnames modelnamesappend“Ensemble” ensemacc accuracy accuracysappendensemacc easily see ensemble highest accuracy compare closely see SVM model gave lowest accuracy Let’s see plot box plot using kfold crossvalidation splitting data testing model accuracy going obtain multiple accuracy model splitted data 15 split break data 15 set test model 15 time 15 different accuracy obtained Finally taking mean accuracy know average accuracy model acc empty list names1 scoring ‘accuracy’ creating list acc storing multiple accuracy model name model estimator kfoldmodelselectionKFoldnsplits15 resmodelselectioncrossvalscoremodelXtargetcvkfoldscoringscoring accappendres names1appendname modelaccuracy “s f” nameresmean printmodelaccuracy clarity point let see “acc” list Plotting Box Plot blueoutlier dictmarkerfacecolor’b’ marker’D’ fig pltfigure figsuptitle‘Algorithm Comparison’ ax figaddsubplot111 pltboxplotaccflierpropsblueoutlier axsetxticklabelsnames1 pltshow blue colored dot outlier line extending box whisker horizontal orange line median kfolds modelselectionKFoldnsplits15 randomstate12 ensembacc modelselectioncrossvalscoreensemble Xtrain targettrain cvkfolds printensembaccmean “Ensemble” names1 names1append“Ensemble” numpy import array arrayequal allclose def arrmyarr listarrays return nextTrue item listarrays itemsize myarrsize allcloseitem myarr False printarrensembacc acc arrensembacc accFalse accappendensembacc acc running code plotting box plot get even customise boxplot using different parameter patchartist True display boxplot color notchTrue display notch format boxplot vert0 display horizontal boxplot entire code Link code previous article httpsmediumcomanalyticsvidhyaensemblemodellinginasimpleway386b6cbaf913 hope liked article 😃 find helpful would really nice see appreciate hard work clapping 👏👏 Thank youTags Python Data Science Ensemble Learning Data Visualization Analysis
3,784
Who cares about the design language
So here’s the thing. Google updated its Google+ app, and it comes with a huge redesign exercise on it. In case you don’t know him, Luke Wroblewski is a designer at Google. He’s been around for a while, commenting stuff about user experience and visual design. He wrote a lot about the Polar app — which I love by the way— and wrote the first article I ever read about the hamburger menu not working well for engagement in mobile apps nor webs. The point is, Luke knows a thing or two about UX and UI design and he’s been involved in the Google+ redesign. That looks like this: Credits to Luke Wroblewski. This new app looks absolutely beautiful. I mean, look at all that color and rich imagery. And I believe I’ll never be tired of using specific color palettes for contextual elements that surround an image. And the new Google+ app uses a bottom navigation bar, and suddenly the internet went like ‘that’s not so Material Design’. Arturo Toledo is another designer I’ll use here as a reference. He’s been working with Microsoft for a few years, and his response to this was… Who cares about the design language. He claimed that our focus as designers should be on the principles. To make something useful and to design a delightful experience regardless of the platform we’re designing to. That there’s nothing wrong with a navbar down there. I can’t argue with Arturo. I believe he’s right about this. Maybe not in all cases, but I feel he’s mostly right. But I do have a few concerns about this navigation pattern though. And it’s because this is an official Google application, so designers and developers out there probably are going to reproduce this kind of navigation more than once. You got me at ‘navbar’ I do love navbars. And tabbars. And everything that’s not a chaotic hamburger throw-it-all-there-in-the-drawer-and-see-how-it-fits main application menu. It makes things more discoverable, and it makes the app easier to understand without having to think and read that much. I mean, options are just there. Just a quick glance and now you know about everything the app has to offer to you. If you have any doubts, you just tap on the first tab and if you don’t find what you’re looking for then tap on the second one. And continue that way until you’re out of tabs. That’s it. But then you introduce another main navigation pattern, that is the drawer menu itself. And now we have two main navigation paths. Or three, if we count the top tab-bar. Credits to Luke Wroblewski. Again. Ok, tabs in the top bar might not be a main navigation path, but they count as chrome. They count as more options, more space used for navigation. Good luck to you, 4-or-less inches screen people. In theory everything has sense. You have supportive nav for user profile stuff, a global nav between sections and a contextual nav for the filters. But it makes me think. I can’t use this app without reading and taking a second to think where I’m going every time I want to go elsewhere. I’ll give it a quick try here. How about moving the hambuger menu to the last item in the bottom navbar? — like a ‘more’ tab — and then moving the Notifications icon to the top bar, right next to the search icon. That should simplify the main navigation, just like Facebook does in its app. Then to reduce chrome you could make a dropdown menu out of the section title in that top bar to put the filters on it, just like Google Calendar makes to open the calendar control. If you don’t want to hide these filters you could A/B Test another idea. 
Maybe using a slider at the top of the page, just before the real content starts, like in the app market. Where am I? If I’m a new user of this app and I skip the onboard tutorial — and you can bet I will — I don’t even know what the differences are between collections and communities. I mean, I could try to understand what they are, but it makes me think. They look almost the same and there are names of people everywhere and I can’t even see an interesting post until I’ve been playing around with the app for a while. I get lost in the many options you provide. This might not be a problem of the design team but the product itself. Just think about it this way: Twitter: There are people to follow to read their tweets. Facebook: Mostly the same as Twitter but you can write larger posts. Instagram: I follow people to see their photos. Oh and I can chat with them. Google+: There’s people to follow and you can read their posts. Oh and you also have communities to follow and collections to follow that you really don’t know where they come from. And you can write posts and create communities and invite people and set up collections that have more visual impact than the posts feed itself. Google+ looks beautiful, but making every piece of content as visually heavy as the main section makes nothing look really important. Be careful with bottom bars in Android As designer Josh Clark pointed out, the options in the navbar are dangerously close to the Quit button — as he calls the Home button in Android. Collections is just a 2mm mistap from the user shutting down the app, or going back to the previous screen when they didn’t want to. I always have this in mind when I design for Android. The solution might be moving these options to the top bar, but there are some issues with long words in other languages here. UX Launchpad talks about the tradeoffs of this solution on this post. But getting back to Josh Clark, he pointed this out and Luke answered… Theory vs. practice. Here’s the tweet anyway. If we assume Luke made a few tests — and I do believe he has — and he’s right, then we don’t need to move the bottom navbar anywhere. And that’s good. Yay. Google+ is a great underrated product This is all. Despite all these concerns I have about this redesign it’s still a product that I’d love to use more. Unfortunately it hasn’t found its place among the mainstream users. And that’s the biggest problem this social network has. Maybe Google is working on this. Maybe they have big plans for Google+ that we don’t see because we don’t know what’s ahead in the product roadmap. Let’s just hope they keep improving this product and they prove that it’s useful for everyone.
https://uxdesign.cc/who-cares-about-the-design-language-daa3a99dacc1
['Paco Soria']
2015-12-04 18:51:57.989000+00:00
['Google', 'UX', 'Design']
Title care design languageContent here’s thing Google updated Google app come huge redesign exercise case don’t know Luke Wroblewski designer Google He’s around commenting stuff user experience visual design wrote lot Polar app — love way— wrote first article ever read hamburger menu working well engagement mobile apps web point Luke know thing two UX UI design he’s involved Google redesign look like Credits Luke Wroblewski new app look absolutely beautiful mean look color rich imagery believe I’ll never tired using specific color palette contextual element surround image new Google app us bottom navigation bar suddenly internet went like ‘that’s Material Design’ Arturo Toledo another designer I’ll use reference He’s working Microsoft year response was… care design language claimed focus designer principle make something useful design delightful experience regardless platform we’re designing there’s nothing wrong navbar can’t argue Arturo believe he’s right Maybe case feel he’s mostly right concern navigation pattern though it’s official Google application designer developer probably going reproduce kind navigation got ‘navbar’ love navbars tabbars everything that’s chaotic hamburger throwitallthereinthedrawerandseehowitfits main application menu make thing discoverable make app easier understand without think read much mean option quick glance know everything app offer doubt tap first tab don’t find you’re looking tap second one continue way you’re tab That’s introduce another main navigation pattern drawer menu two main navigation path three count top tabbar Credits Luke Wroblewski Ok tab top bar might main navigation path count chrome count option space used navigation Good luck 4orless inch screen people theory everything sense supportive nav user profile stuff global nav section contextual nav filter make think can’t use app without reading taking second think I’m going every time want go elsewhere I’ll give quick try moving hambuger menu last item bottom navbar — like ‘more’ tab — moving Notifications icon top bar right next search icon simplify main navigation like Facebook app reduce chrome could make dropdown menu section title top bar put filter like Google Calendar make open calendar control don’t want hide filter could AB Test another idea Maybe using slider top page real content start like app market I’m new user app skip onboard tutorial — bet — don’t even know difference collection community mean could try understand make think look almost name people everywhere can’t even see interesting post I’ve playing around app get lost many option provide might problem design team product think way Twitter people follow read tweet Facebook Mostly Twitter write larger post Instagram follow people see photo Oh chat Google There’s people follow read post Oh also community follow collection follow really don’t know come write post create community invite people set collection visual impact post feed Google look beautiful making every content visually heavy main section make nothing look really important careful bottom bar Android designer Josh Clark pointed option navbar dangerously close Quit button — call Home button Android Collections 2mm mistap user shutting app going back previous screen dind’t want always mind design Android solution might moving option top bar issue long word language UX Launchpad talk tradeoff solution post getting back Josh Clark pointed Luke answered… Theory v practice Here’s tweet anyway assume Luke made test — believe — he’s right don’t need move bottom 
navbar anywhere that’s good Yay Google great underrated product Despite concern redesign it’s still product I’d love use Unfortunatelly hasn’t found place among mainstream user that’s biggest problem social network Maybe Google working Maybe big plan Google don’t see don’t know what’s ahead product roadmap Let’s hope keep improving product prove it’s useful everyoneTags Google UX Design
3,785
Why You Know Better, but You Don’t Do Better
Why You Know Better, but You Don’t Do Better 4 ways to narrow the gap between knowing and doing Photo by Brooke Cagle on Unsplash “Knowledge isn’t power until it is applied.” - Dale Carnegie I’m lactose intolerant. I’ll spare you the details of what exactly happens when I consume dairy, but it isn’t pretty. Yet, last week I waited in line at the McDrive for 20 minutes to get a McFlurry with M&M’s. I’d had a rough day and decided to “treat” myself with something tasty. Can you guess what happened when I came home and downed that McFlurry in 30 seconds? It was ugly. I am not the only one who knows better but doesn’t do better. I am surrounded by brilliant people who do the most stupid things. Not out of ignorance, oh no. We, humans, seem to be perfectly capable of knowing what is right for us — and then do the exact opposite. We know we shouldn’t respond to the dramatic text our ex sends, but still, we mysteriously end up in shouting matches with them in the middle of the night. We know life is more manageable after a good night of sleep, but we still watch just one more episode of that addictive tv show and then get angry when we can’t get out of bed the next morning. We know healthy foods make us feel strong. And yet, the pizza delivery guy knows us by name. And even though every study proves working out releases the happiness neurotransmitter endorphin, we still rather relax in a way that doesn’t involve physical activity. We know our jobs are sucking the life out of us, but we don’t do anything to change the situation. So how come so many of us know better but don’t do better? Why is there such a massive gap between knowing and doing? Behavior is a very complex interplay between genes and environment, and there isn’t one one-size-fits-all explanation for why it is so hard to do the thing. There are many obstacles you have to deal with when you change your behavior, here is how you can overcome four of the most common ones; Old habits die hard — our brains don’t like change. There is an information gap — we’re not sure how to do the right thing. Issues with executive functions — we need to improve our capacity. Issues with motivation — do we really want to do the right thing? Old habits die hard — our brains don’t like change We’re creatures of habit. Habits make our lives easier. When we don’t have to think about the small stuff, our brains can focus on more important things. The more often we repeat a behavior, the stronger and more efficient the neural network supporting that behavior becomes. So if you hit snooze every morning, it’s not even a conscious decision anymore. When your ears signal to your brain that the alarm is going, your neurons fire so fast that your finger taps that snooze button before you even consciously hear the alarm. That is why it is so hard to stop snoozing. Suddenly your brain has to fire different neurons for different behavior. These neural connections are weak and ineffective. So your brains do what they do best; hit that snooze button and hide under the covers for just five more minutes. No matter how much you know that it is better to get out of bed immediately, your neural networks rather do what they always did. How to overcome this: Fortunately, we’re not just slaves to our neurons, and there is a thing called neuroplasticity. Neuroplasticity means that we can change our neural networks. We can make existing ones weaker and new ones stronger. So every time you ignore that snooze button, you weaken the existing neural network. 
And every time you get out of bed immediately after hearing the alarm, your new neural network becomes stronger. Doing the right thing becomes easier the more you do it. And if you keep doing it, it becomes a habit, and your neurons will fire with delight. What I did: I’ve known for a long time that I had to change my diet. I have IBS, and processed food is a major trigger. But picking healthy recipes, getting groceries, and spending my precious time cooking always felt like too much trouble. So when my neurons reached for another frozen meal in the supermarket, it was hard to stop them. Even faster was the neural network for ordering food online. Just the thought of taking longer than 10 minutes to prepare a meal made my neurons howl dramatically. Fifteen months ago, I decided to take out a subscription to an expensive fresh food delivery service. I pick out three recipes every week and get the fresh ingredients delivered to my doorstep. And it is fresh. Potatoes still have the dirt sticking to them. I have to wash, peel, cut, dice, and slice everything. The first couple of weeks, it took me forever to prepare a meal, and I always had to drag myself into the kitchen. But now, fifteen months later, my brains have developed a robust neural network for fresh food prepping. So no matter how tired or depressed I am, I always prepare my meals, with my neurons firing and humming in unison. It took me a couple of weeks, but now, doing the right thing is easy. Ordering fast food makes my neurons — and gut — feel uncomfortable. So even though Dominoes may be tempting, in the end, I always choose fresh. There is an information gap — we’re not sure how to do the right thing There can be a gap between knowing and doing because we’re not clear on how to do the thing because we’re missing information. We know eating healthy food is good for us. But what exactly does that look like? Do you need to eat lettuce all day, every day? Is sugar forbidden from now on? When is food exactly unhealthy? We understand exercise is good for us. But what type of activity is right for your body? Do you need special equipment? Are you using it correctly? When should you feel or see results? Doing better when you know better is difficult when you’re unsure what doing better exactly means. It goes both ways: it is hard to stop doing the wrong thing when you’re not sure what you need to do that. What do you need to stop getting into fights with your ex? Do you need to change your phone number? Block theirs? Have a chat with a mediator? How to overcome this: The good news about this information gap obstacle is that you can solve it by educating yourself and making a plan. Write down what you want to change and what you need to know or do to reach your goals. Can’t you stop checking your phone? Maybe an app or a time lock container can help. Want to escape your 9-to-5 but don’t know how? Start making calculations, look for alternative careers, and talk to people who managed to quit their jobs. If simply “willing” ourselves to stop doing undesired behavior worked, we’d all be living our dream lives. You need to make a plan and find a strategy that works for you. What I did: I started smoking in my teens. At first, it was a casual-cool thing to do, but before I knew it, I was addicted. And happily in denial. I told myself I choose to smoke, and I could quit at any time. The more people pointed out how unhealthy my smoking habit was, the more I believed I loved it. I made it part of my identity. 
I wasn’t one of those boring nagging health freaks; I was a fun rebel. And this fun rebel was out of breath every time she climbed stairs. I always smelled like smoke. My doctor warned me that the combination of being over 35, smoking, and using hormonal contraceptives increased my risk of getting cardiovascular diseases like blood clots, heart attacks, and strokes. And because tobacco is heavily taxed in my country, I was going bankrupt too. So when I was 32, I knew I had to kick the habit. I smoked my last cigarette, threw out my remaining packs, and swore never to smoke again. An hour later, I was going through my trash to recover my precious cigarettes. I knew I had to stop smoking. I wanted to stop smoking. But simply not smoking seemed impossible. My doctor helped me to make a plan. She prescribed me the drug Chantix, which reduced my cravings and the pleasure I got from smoking. She told me to join an online community for support. It also helped that my mother and sister stopped smoking too. This plan worked. I had to quit taking Chantix because of the side effects, but my online community and my family gave me enough tools to resist my cigarettes. I smoked my last cigarette six years ago. Issues with executive functions — we need to improve our capacity The Understood Team describes very clearly on their website what executive functioning is: Some people describe executive function as “the management system of the brain.” That’s because the skills involved let us set goals, plan, and get things done. When people struggle with executive function, it impacts them at home, in school, and in life. There are three main areas of executive function. They are: 1. Working memory 2. Cognitive flexibility (also called flexible thinking) 3. Inhibitory control (which includes self-control) Executive function is responsible for many skills, including: - Paying attention - Organizing, planning, and prioritizing - Starting tasks and staying focused on them to completion - Understanding different points of view - Regulating emotions - Self-monitoring (keeping track of what you’re doing) All people with ADHD or autism have issues with their executive functions. But it is not just the neurodivergent whose management system of the brain goes AWOL. We live in a society that makes it increasingly hard to focus on one thing. Every day our brains have to deal with an overload of information and options. No wonder we know how to do better but have issues with seeing things through. How to overcome this: For both neurotypicals and neurodivergent, it is possible to improve executive functions (EF). Studies show that a lot of our issues with EF are triggered by Westernized diets and physical inactivity. Therefore improving our EF can be done relatively simple by doing the following: Exercise Have a plant-based diet Prayer, Tai chi, or meditation Positive feelings and self-affirmation Visiting nature There are also many strategic ways to strengthen your EF: Learn how to set attainable sub-goals Block access to short term-temptations Use peer monitoring Establish fixed daily routines Be aware of the short-term gain of task avoidance Therapy can also improve your EF. Cognitive-behavioral therapy (CBT) has been proven to strengthen EF in adults with ADHD. A note of caution: neurodivergent people can definitely improve their EF but shouldn’t strive for neurotypical-like EF. Our brains are just wired differently. 
Don’t compare your progress with other people, but look at your improvement over time and see how your EF is better than a year ago. What I did: All my life, I have struggled with my EF. I always thought I was lazy or stupid because I couldn’t do things everybody else could. When I was 32, I was diagnosed with both autism and ADHD. The management system of my brain has always been hilariously understaffed and spectacularly unfit for its job. The most significant change for me was accepting that my brain wasn’t “normal,” and my EF had special needs. I changed the job description for my management system, and I’ve been doing a lot better since. I use different techniques to improve my EF, like limiting my screen time, putting my phone in another room when I’m writing, scheduling breaks instead of forcing myself to sit still for an hour, and doing deep breathing exercises while meditating. I’ll never be “normal,” but changing my diet, exercising more, and using different strategies have improved my executive functions. And because of that, I’m less stressed and happier. Issues with motivation— do we really want to do the right thing? For years, I knew I should quit smoking. But I didn’t. When it is tough to do the right thing, it might be that you are not motivated enough. In the study Why We Don’t “Just Do It,” Understanding the Intention-Behavior Gap in Lifestyle Medicine, Professor Mark D. Faries looks at why it is so hard for patients to adopt a healthy lifestyle. An essential factor in their success is motivation. Screenshot by author. Table from M. D. Faries (2016). Why We Don’t “Just Do It” Understanding the Intention-Behavior Gap in Lifestyle Medicine. American Journal of Lifestyle Magazine (5): 322–329. His study shows something we instinctively know: it is hard to do the right thing if we don’t really want to. This phenomenon is called the intention-behavior gap. Even though we have the best intentions, our behavior shows otherwise. We all know fast food is unhealthy, but most of it is tasty AF and makes us feel good in the short-term, so we keep eating it — despite our intentions to eat healthily. One key factor in narrowing the intention-behavior gap is motivation. The reason it is so easy for me to prepare healthy meals is I started to enjoy it. And after a couple of weeks, I noticed a significant improvement in my stamina, mood, and concentration. The same goes for not smoking; I haven’t relapsed because I enjoy being a non-smoker. I am grateful for how much my overall health has improved since I quit, and I don’t want to jeopardize that by having just one cigarette. How to overcome this: You are a smart person, which is why you know that some of your behavior is unhelpful, and you want to change it. But if you can’t seem to do the right thing, it is time to take an honest and hard look at your motivation. Do you want to do the right thing because you are supposed to or because you genuinely want it? What are you gaining by not doing the right thing? Why do you want to change your behavior? Diving into these questions will help you discover and change your motivation. One way to modify your motivation is to get disturbed. Tony Robbins always says that you are not disturbed enough with your current situation if you're not changing. Getting disturbed is easy. Sit down, close your eyes, and think about what happens if you don’t change your behavior. Exaggerate a little. What will your life look like in a year if you keep hanging out with your ex? 
How will you feel if you keep eating fast food? How much weight will you gain? What will your life look like in 10 years if you stay in your shitty job? By thinking about this worst-case scenario, you will start to feel uneasy. And every time you want to fall back to your unhelpful behavior, all you have to think about is that feeling. What I did: Because my 9-to-5 isn’t making me happy, I have a side hustle, and I study psychology. And that is hard. After I’m done with my job, I want to collapse on the couch and do nothing. I don’t want to go upstairs to spend another two hours behind a screen. I rationalize that I need rest and relaxation — which I do. But lying lethargically on the couch isn’t the same as relaxing. Procrastination isn’t the same as “taking time for myself.” So I get disturbed. I want to lay on the couch? That’s fine. But that means I’ll have to postpone my exam, so getting my degree will take longer. This means I won’t be able to work as a psychologist, so I’ll have to stay in a field that doesn’t make me happy. I don’t want to write? That’s fine. I can quit my side hustle any day and dick around on the Internet in my spare time. Get high scores in Candy Crush. But quitting my side hustle also means that money will be tight, and I’m fully dependent on my day job. Closing my eyes and imagining that I’ll be having the same job in 5 years is enough to motivate me to go upstairs and turn on my computer. And once I sit there in my tiny cozy office and work on the future, I dream of, I enjoy what I do. And that is how I successfully narrow the gap between knowing and doing: I build new neural networks I make a plan I strengthen my executive functions I stay motivated and enjoy the process And now and then, I grab a McFlurry, hang on the couch, don’t exercise, and ignore my responsibilities. And don’t beat myself up for it. Because to err is human, and to forgive is divine.
https://medium.com/the-innovation/why-you-know-better-but-you-dont-do-better-93de21f5a4e
['Judith Valentijn']
2020-12-28 18:02:42.861000+00:00
['Neuroscience', 'Self', 'Behavior Change', 'Psychology', 'Self Improvement']
Title Know Better Don’t BetterContent Know Better Don’t Better 4 way narrow gap knowing Photo Brooke Cagle Unsplash “Knowledge isn’t power applied” Dale Carnegie I’m lactose intolerant I’ll spare detail exactly happens consume dairy isn’t pretty Yet last week waited line McDrive 20 minute get McFlurry MM’s I’d rough day decided “treat” something tasty guess happened came home downed McFlurry 30 second ugly one know better doesn’t better surrounded brilliant people stupid thing ignorance oh human seem perfectly capable knowing right u — exact opposite know shouldn’t respond dramatic text ex sends still mysteriously end shouting match middle night know life manageable good night sleep still watch one episode addictive tv show get angry can’t get bed next morning know healthy food make u feel strong yet pizza delivery guy know u name even though every study prof working release happiness neurotransmitter endorphin still rather relax way doesn’t involve physical activity know job sucking life u don’t anything change situation come many u know better don’t better massive gap knowing Behavior complex interplay gene environment isn’t one onesizefitsall explanation hard thing many obstacle deal change behavior overcome four common one Old habit die hard — brain don’t like change information gap — we’re sure right thing Issues executive function — need improve capacity Issues motivation — really want right thing Old habit die hard — brain don’t like change We’re creature habit Habits make life easier don’t think small stuff brain focus important thing often repeat behavior stronger efficient neural network supporting behavior becomes hit snooze every morning it’s even conscious decision anymore ear signal brain alarm going neuron fire fast finger tap snooze button even consciously hear alarm hard stop snoozing Suddenly brain fire different neuron different behavior neural connection weak ineffective brain best hit snooze button hide cover five minute matter much know better get bed immediately neural network rather always overcome Fortunately we’re slave neuron thing called neuroplasticity Neuroplasticity mean change neural network make existing one weaker new one stronger every time ignore snooze button weaken existing neural network every time get bed immediately hearing alarm new neural network becomes stronger right thing becomes easier keep becomes habit neuron fire delight I’ve known long time change diet IBS processed food major trigger picking healthy recipe getting grocery spending precious time cooking always felt like much trouble neuron reached another frozen meal supermarket hard stop Even faster neural network ordering food online thought taking longer 10 minute prepare meal made neuron howl dramatically Fifteen month ago decided take subscription expensive fresh food delivery service pick three recipe every week get fresh ingredient delivered doorstep fresh Potatoes still dirt sticking wash peel cut dice slice everything first couple week took forever prepare meal always drag kitchen fifteen month later brain developed robust neural network fresh food prepping matter tired depressed always prepare meal neuron firing humming unison took couple week right thing easy Ordering fast food make neuron — gut — feel uncomfortable even though Dominoes may tempting end always choose fresh information gap — we’re sure right thing gap knowing we’re clear thing we’re missing information know eating healthy food good u exactly look like need eat lettuce day every day sugar forbidden food exactly 
unhealthy understand exercise good u type activity right body need special equipment using correctly feel see result better know better difficult you’re unsure better exactly mean go way hard stop wrong thing you’re sure need need stop getting fight ex need change phone number Block chat mediator overcome good news information gap obstacle solve educating making plan Write want change need know reach goal Can’t stop checking phone Maybe app time lock container help Want escape 9to5 don’t know Start making calculation look alternative career talk people managed quit job simply “willing” stop undesired behavior worked we’d living dream life need make plan find strategy work started smoking teen first casualcool thing knew addicted happily denial told choose smoke could quit time people pointed unhealthy smoking habit believed loved made part identity wasn’t one boring nagging health freak fun rebel fun rebel breath every time climbed stair always smelled like smoke doctor warned combination 35 smoking using hormonal contraceptive increased risk getting cardiovascular disease like blood clot heart attack stroke tobacco heavily taxed country going bankrupt 32 knew kick habit smoked last cigarette threw remaining pack swore never smoke hour later going trash recover precious cigarette knew stop smoking wanted stop smoking simply smoking seemed impossible doctor helped make plan prescribed drug Chantix reduced craving pleasure got smoking told join online community support also helped mother sister stopped smoking plan worked quit taking Chantix side effect online community family gave enough tool resist cigarette smoked last cigarette six year ago Issues executive function — need improve capacity Understood Team describes clearly website executive functioning people describe executive function “the management system brain” That’s skill involved let u set goal plan get thing done people struggle executive function impact home school life three main area executive function 1 Working memory 2 Cognitive flexibility also called flexible thinking 3 Inhibitory control includes selfcontrol Executive function responsible many skill including Paying attention Organizing planning prioritizing Starting task staying focused completion Understanding different point view Regulating emotion Selfmonitoring keeping track you’re people ADHD autism issue executive function neurodivergent whose management system brain go AWOL live society make increasingly hard focus one thing Every day brain deal overload information option wonder know better issue seeing thing overcome neurotypicals neurodivergent possible improve executive function EF Studies show lot issue EF triggered Westernized diet physical inactivity Therefore improving EF done relatively simple following Exercise plantbased diet Prayer Tai chi meditation Positive feeling selfaffirmation Visiting nature also many strategic way strengthen EF Learn set attainable subgoals Block access short termtemptations Use peer monitoring Establish fixed daily routine aware shortterm gain task avoidance Therapy also improve EF Cognitivebehavioral therapy CBT proven strengthen EF adult ADHD note caution neurodivergent people definitely improve EF shouldn’t strive neurotypicallike EF brain wired differently Don’t compare progress people look improvement time see EF better year ago life struggled EF always thought lazy stupid couldn’t thing everybody else could 32 diagnosed autism ADHD management system brain always hilariously understaffed spectacularly unfit job 
significant change accepting brain wasn’t “normal” EF special need changed job description management system I’ve lot better since use different technique improve EF like limiting screen time putting phone another room I’m writing scheduling break instead forcing sit still hour deep breathing exercise meditating I’ll never “normal” changing diet exercising using different strategy improved executive function I’m le stressed happier Issues motivation— really want right thing year knew quit smoking didn’t tough right thing might motivated enough study Don’t “Just It” Understanding IntentionBehavior Gap Lifestyle Medicine Professor Mark Faries look hard patient adopt healthy lifestyle essential factor success motivation Screenshot author Table Faries 2016 Don’t “Just It” Understanding IntentionBehavior Gap Lifestyle Medicine American Journal Lifestyle Magazine 5 322–329 study show something instinctively know hard right thing don’t really want phenomenon called intentionbehavior gap Even though best intention behavior show otherwise know fast food unhealthy tasty AF make u feel good shortterm keep eating — despite intention eat healthily One key factor narrowing intentionbehavior gap motivation reason easy prepare healthy meal started enjoy couple week noticed significant improvement stamen mood concentration go smoking haven’t relapsed enjoy nonsmoker grateful much overall health improved since quit don’t want jeopardize one cigarette overcome smart person know behavior unhelpful want change can’t seem right thing time take honest hard look motivation want right thing supposed genuinely want gaining right thing want change behavior Diving question help discover change motivation One way modify motivation get disturbed Tony Robbins always say disturbed enough current situation youre changing Getting disturbed easy Sit close eye think happens don’t change behavior Exaggerate little life look like year keep hanging ex feel keep eating fast food much weight gain life look like 10 year stay shitty job thinking worstcase scenario start feel uneasy every time want fall back unhelpful behavior think feeling 9to5 isn’t making happy side hustle study psychology hard I’m done job want collapse couch nothing don’t want go upstairs spend another two hour behind screen rationalize need rest relaxation — lying lethargically couch isn’t relaxing Procrastination isn’t “taking time myself” get disturbed want lay couch That’s fine mean I’ll postpone exam getting degree take longer mean won’t able work psychologist I’ll stay field doesn’t make happy don’t want write That’s fine quit side hustle day dick around Internet spare time Get high score Candy Crush quitting side hustle also mean money tight I’m fully dependent day job Closing eye imagining I’ll job 5 year enough motivate go upstairs turn computer sit tiny cozy office work future dream enjoy successfully narrow gap knowing build new neural network make plan strengthen executive function stay motivated enjoy process grab McFlurry hang couch don’t exercise ignore responsibility don’t beat err human forgive divineTags Neuroscience Self Behavior Change Psychology Self Improvement
3,786
The Modeling Instinct
The Many Types of Models Since a model might represent any aspect of reality, and be made from any number of materials, there are obviously very many kinds of them. Classifying them is a challenge, and the problem is compounded by the fact that some models are composites of many smaller sub-models, each with its own characteristics. To manage this complexity, I’ll consider just four dimensions that I feel are both fundamental and — at least for the purposes of this series of articles — useful. They are: purpose, dynamism, composition, and realism. Dimension 1: Some models serve a utilitarian purpose. For others, the purpose is to provide an experience. Utilitarian models are created to aid real-world interactions with the target system. They are thus a special kind of tool. A map, for example, is a tool for navigating the terrain modeled by the map. A flight simulator, because it models the dynamics of flight, is a tool for learning to fly. Scientific models are a special case. They are utilitarian (or can be), but their primary purpose is to accurately and thoroughly explain their target systems. Scientists devise and test explanatory models of incompletely understood systems, and it’s up to engineers to develop utilitarian applications of the models, should any exist. Experiential models, by contrast, are created to provide an audience with an experience. They are taken to be valuable in and of themselves, without appeal to their utility. This intrinsic value arises from the fact that, at some level of cognition, we experience models as though they are real. They can thus provoke a wide assortment of emotions according to the kinds of experiences they provide. A model can have both utilitarian and experiential aspects, and few are purely one or the other. A fictional story, for example, might deliver useful life lessons, just as a flight simulator might enthrall a person who has no intention of piloting an actual plane. NASA’s Systems Engineering Simulator (here configured to simulate operations aboard the International Space Station) is a utilitarian spaceflight simulator with experiential qualities. (Credit: NASA) Dimension 2: Some models incorporate the laws that govern how their target systems change in time, while others do not. Dynamic models are functional. They are “run,” whereas static models are observed or experienced. The distinction, however, is not as straightforward as it might seem. A work of fiction, for example, is experienced in time, and it describes events that (ostensibly) unfolded in time, but the words on the page, or the individual photographic frames, are unchanging. The same holds true for a history book, or the data collected from an experiment. Such models are recordings, or memories, of a single run-through of a dynamic target system. Although the recorded system is dynamic, the recording itself is static because it does not incorporate the laws of cause and effect that gave rise to its content. A dynamic biomechanical model of Tyrannosaurus rex. (Credit: University of Manchester) Dimension 3: Some models are made from physical materials, such as plastic or paint, while others are made from symbols, such as mathematical notations, computer code, or the words of a language. Physical models are made from physical materials and typically depict the geometric characteristics of their target systems. A utilitarian example might be a model airplane in a wind tunnel. Sculptures, paintings, and theme park attractions are experiential examples. 
A physical model of the San Francisco Bay Area constructed to test the feasibility of dams and other projects. Symbolic models, by contrast, are made from symbols with predefined meanings. The symbols themselves must be made from some type of material, of course, but symbolic models are distinguished by the fact that the choice of material does not alter the logical attributes of the symbols. In the case of an abacus, for example, plastic beads give the same result as wooden ones. An abacus with individual carbon molecules as beads. (Credit: IBM Research — Zurich) Symbolic models can be further characterized by the kinds of symbols they use. Although we don’t generally speak of “word models,” language is in fact a symbolic means of describing, or modeling, reality. Its dependence on nouns and verbs — objects in motion — reflects its original concern with physical things and actions. But once nouns, verbs, and other parts of speech exist, they can be used to represent abstract things as well. Mathematical symbols first arose from the need to count and measure things. But as more and more symbols were devised, along with new rules for manipulating them, mathematics developed an extraordinary capacity to represent natural phenomena. Computer code is unique in that some of its symbols (defined as “instructions”) represent changes to be made to other symbols. An “instruction pointer,” itself a changeable symbol, keeps track of which instruction to perform next. This arrangement means that computers are especially good at modeling systems that evolve in time. There are, of course, other kinds of symbols besides these. Dimension 4: Models exhibit varying degrees of realism depending on how accurately they represent their target systems, and with how much detail. On the realistic end of the spectrum are computer simulations designed to reflect their target systems as faithfully as possible. Such models can be extremely detailed, sometimes containing millions or even billions of interacting elements, all behaving according to known scientific principles. They are commonly used for prediction (of the weather, for example), or to gain knowledge about a system that would otherwise be too difficult, costly, or dangerous to obtain. A snapshot of a cosmological simulation that consisted of more than 10 billion massive “particles” in a cubic region of space 2 billion light-years to the side. (Credit: Max-Planck-Institute for Astrophysics) Perfect verisimilitude is impossible (without replicating the target system exactly, which is absurd), but it’s not necessary anyway. A model need only incorporate those aspects of the target system that help to fulfill its purpose. The purpose of a subway map, for example, is to help riders decide where to embark and disembark. Details that don’t aid in that decision can be left out. A map of the NYC subway system in the “Vignelli style,” a style of design favoring simplicity. (Credit: CountZ at English Wikipedia [CC BY-SA 3.0], via Wikimedia Commons) Experiential models can go further than just leaving out unnecessary details — the details that are included can be depicted in nonrealistic ways. Artists are free to explore the full spectrum, from realistic to stylized to incoherent. Why would an artist choose to create a model that is not realistic? One reason is to provide novelty. Novelty counteracts the blinding effect of familiarity, thereby engaging the imagination. Once engaged, the imagination can turn to the aspects of the model that do reflect reality.
https://medium.com/hackernoon/the-modeling-instinct-40a25a272c64
['Tim Sheehan']
2018-11-14 18:34:02.330000+00:00
['Creativity', 'Art', 'Technology', 'Science', 'Modeling Instinct']
Title Modeling InstinctContent Many Types Models Since model might represent aspect reality made number material obviously many kind Classifying challenge problem compounded fact model composite many smaller submodels characteristic manage complexity I’ll consider four dimension feel fundamental — least purpose series article — useful purpose dynamism composition realism Dimension 1 model serve utilitarian purpose others purpose provide experience Utilitarian model created aid realworld interaction target system thus special kind tool map example tool navigating terrain modeled map flight simulator model dynamic flight tool learning fly Scientific model special case utilitarian primary purpose accurately thoroughly explain target system Scientists devise test explanatory model incompletely understood system it’s engineer develop utilitarian application model exist Experiential model contrast created provide audience experience taken valuable without appeal utility intrinsic value arises fact level cognition experience model though real thus provoke wide assortment emotion according kind experience provide model utilitarian experiential aspect purely one fictional story example might deliver useful life lesson flight simulator might enthrall person intention piloting actual plane NASA’s Systems Engineering Simulator configured simulate operation aboard International Space Station utilitarian spaceflight simulator experiential quality Credit NASA Dimension 2 model incorporate law govern target system change time others Dynamic model functional “run” whereas static model observed experienced distinction however straightforward might seem work fiction example experienced time describes event ostensibly unfolded time word page individual photographic frame unchanging hold true history book data collected experiment model recording memory single runthrough dynamic target system Although recorded system dynamic recording static incorporate law cause effect gave rise content dynamic biomechanical model Tyrannosaurus rex Credit University Manchester Dimension 3 model made physical material plastic paint others made symbol mathematical notation computer code word language Physical model made physical material typically depict geometric characteristic target system utilitarian example might model airplane wind tunnel Sculptures painting theme park attraction experiential example physical model San Francisco Bay Area constructed test feasibility dam project Symbolic model contrast made symbol predefined meaning symbol must made type material course symbolic model distinguished fact choice material alter logical attribute symbol case abacus example plastic bead give result wooden one abacus individual carbon molecule bead Credit IBM Research — Zurich Symbolic model characterized kind symbol use Although don’t generally speak “word models” language fact symbolic mean describing modeling reality dependence noun verb — object motion — reflects original concern physical thing action noun verb part speech exist used represent abstract thing well Mathematical symbol first arose need count measure thing symbol devised along new rule manipulating mathematics developed extraordinary capacity represent natural phenomenon Computer code unique symbol defined “instructions” represent change made symbol “instruction pointer” changeable symbol keep track instruction perform next arrangement mean computer especially good modeling system evolve time course kind symbol besides Dimension 4 Models exhibit varying degree 
realism depending accurately represent target system much detail realistic end spectrum computer simulation designed reflect target system faithfully possible model extremely detailed sometimes containing million even billion interacting element behaving according known scientific principle commonly used prediction weather example gain knowledge system would otherwise difficult costly dangerous obtain snapshot cosmological simulation consisted 10 billion massive “particles” cubic region space 2 billion lightyears side Credit MaxPlanckInstitute Astrophysics Perfect verisimilitude impossible without replicating target system exactly absurd it’s necessary anyway model need incorporate aspect target system help fulfill purpose purpose subway map example help rider decide embark disembark Details don’t aid decision left map NYC subway system “Vignelli style” style design favoring simplicity Credit CountZ English Wikipedia CC BYSA 30 via Wikimedia Commons Experiential model go leaving unnecessary detail — detail included depicted nonrealistic way Artists free explore full spectrum realistic stylized incoherent would artist choose create model realistic One reason provide novelty Novelty counteracts blinding effect familiarity thereby engaging imagination engaged imagination turn aspect model reflect realityTags Creativity Art Technology Science Modeling Instinct
3,787
What Did We Get Ourselves Into?
From a writing perspective, the work is unprecedented. It requires a deep, hands-on understanding of various media, particularly those steeped in dialogue and character development. Screenwriters and playwrights are well suited; tech writers and copywriters, not so much. To illustrate this unique and potentially burgeoning area of the discipline, I’m thinking a little insight into our history might be illuminating. There were only three of us at Cortana Editorial’s inception (we are a team of 30 today, with international markets now a key part of our work). The foundation of Cortana’s personality was already in place, with some key decisions made. Internal and external research and studies, as well as a lot of discourse, supported decisions that determined Project Cortana (originally only a codename) would be given a personality. The initial voice font would be female. The value prop would center around assistance and productivity. And, there would be “chitchat.” Chitchat is the term given to the customer engagement area that, from the customer’s perspective, provides the fun factor. That sometimes random, often hilarious set of queries included anything and everything, from “What do you think about cheese?” to “Is there a god?” to “Do you poop?” Clearly, our customers were serious about getting to know Cortana. From the business perspective, chitchat is defined as the engagement that’s not officially aligned with the value prop — so it wasn’t a simple justification to point engineering, design, and writing resources towards it. Fortunately, a heroic engineering team at the Microsoft Search Technology Center in Hyderabad, India, did the needful and signed up to build the experience. It was a crucial hand-raise that set the ball in motion. Another team was tasked with parsing out these unique queries, packaging them up, and handing them over to the writing team as Cortana chitchat. We realized that as writers, we were being asked to create one of the most unique characters we’d ever encountered. And creatively, we dove deeply into what we call “the imaginary world” of Cortana. Over three years later, we continue to endow her with make-believe feelings, opinions, challenges, likes and dislikes, even sensitivities and hopes. Smoke and mirrors, sure, but we dig in knowing that this imaginary world is invoked by real people who want detail and specificity. They ask the questions and we give them answers. Certainly, Cortana’s personality started from a creative concept of who she would be, and how we hoped people would experience her. But we now see it as the customer playing an important role in the development of Cortana’s personality by shaping her through their own curiosity. It’s a data-driven back-and-forth — call it a conversation — that makes possible the creation of a character. And, it is fun work. It’s tough to beat spending an hour or two every day thinking hard, determining direction, putting principles in place, and — surprise, surprise — laughing a lot.
https://medium.com/microsoft-design/what-did-we-get-ourselves-into-36ddae39e69b
['Jonathan Foster']
2019-08-27 17:18:46.777000+00:00
['AI', 'Voice Design', 'Artificial Intelligence', 'Microsoft', 'Tech']
Title Get IntoContent writing perspective work unprecedented requires deep handson understanding various medium particularly steeped dialogue character development Screenwriters playwright well suited tech writer copywriter much illustrate unique potentially burgeoning area discipline I’m thinking little insight history might illuminating three u Cortana Editorial’s inception team 30 today international market key part work foundation Cortana’s personality already place key decision made Internal external research study well lot discourse supported decision determined Project Cortana originally codename would given personality initial voice font would female value prop would center around assistance productivity would “chitchat” Chitchat term given customer engagement area customer’s perspective provides fun factor sometimes random often hilarious set query included anything everything “What think cheese” “Is god” “Do poop” Clearly customer serious getting know Cortana business perspective chitchat defined engagement that’s officially aligned value prop — wasn’t simple justification point engineering design writing resource towards Fortunately heroic engineering team Microsoft Search Technology Center Hyderabad India needful signed build experience crucial handraise set ball motion Another team tasked parsing unique query packaging handing writing team Cortana chitchat realized writer asked create one unique character we’d ever encountered creatively dove deeply call “the imaginary world” Cortana three year later continue endow makebelieve feeling opinion challenge like dislike even sensitivity hope Smoke mirror sure dig knowing imaginary world invoked real people want detail specificity ask question give answer Certainly Cortana’s personality started creative concept would hoped people would experience see customer playing important role development Cortana’s personality shaping curiosity It’s datadriven backandforth — call conversation — make possible creation character fun work It’s tough beat spending hour two every day thinking hard determining direction putting principle place — surprise surprise — laughing lotTags AI Voice Design Artificial Intelligence Microsoft Tech
3,788
Seven Shards of Humanity
The title of my last artwork, Seven Shards of Humanity, is the result of my 9-year-old son's epiphany. When my kids, watching me paint day after day, figure out what I'm trying to say, I have achieved my goal. I don't want to please them with my brushes' strokes: I want to create questions. The deeper the questions, the more my paintings can aspire to be part of the change. I think I can't simply call myself an artist. I am first and foremost a mother, and I believe in the strength of the example I give every day to my children. Painting is my strongest way to communicate with them because their future starts now. I believe there is no border between life and art: what matters is trying to be the best part of this world. I work hard for change through daily actions, and I set all my humanity on the canvas. When my kids understand what I meant, I know that it's good. I spent a long time making my last painting, composed of seven canvases: it summarizes long conversations with my family members and friends about the meaning of the present life. I put great effort into finding the point I was trying to get to. When I tried to find the right title, the word mirror was all I could think about, but its meaning was judgemental. It was my son who made me realize the futility of judgement when you want to show something for the good of all. This is humanity. Everyone can see themselves reflected in one of these shards, but humanity is each of us, and we have to face the facts in order to step back. I don't know if, all together, we will have the strength to take a step back, because when the mirror is broken, even if we fix it, it will no longer be as strong as before. But humanity isn't a piece of fused quartz smeared with silver. When our flesh is broken and we return to the soil, we will be born again with new opportunities to change the world. Again and again: it is only a matter of time. For the Italian version, click here.
https://medium.com/my-alienart/seven-shards-of-humanity-1fa7dc15d6c
['Nadia Camandona']
2020-09-18 08:46:31.489000+00:00
['Humanity', 'Artist', 'Environment', 'Art', 'Future']
Title Seven Shards HumanityContent title last artwork Seven Shards Humanity result 9 year old son’s epiphany kid watching painting day day figure I’m trying tell achieved goal don’t want please brushes’ stroke want create question deeper question painting aspire part change think can’t simply call artist first foremost mother believe strength example give every day child Painting strongest way communicate future start believe border life art matter trying best part world work hard change daily action set canvas humanity kid understand meant know it’s good spent long time making last painting composed seven canvas summarizes long conversation family member friend meaning present life put great effort find point trying get tried find right title word mirror could think meaning judgemental son made realize futility judgement want show something good humanity Everyone reflect one shard humanity u face fact step back don’t know together strength take step back mirror broken even fix longer strong humanity isn’t piece fused quartz smeared silver flesh broken come back soil born new opportunity change world matter time Per la versione Italiano clicca quiTags Humanity Artist Environment Art Future
3,789
Airflow : Zero to One. In current world, we process a lot of…
In today's world, we process a lot of data, and its churn rate increases exponentially with passing time; the data can belong to any of the primary/inherited/captured/exhaust/structured/unstructured categories (or an intersection of them). We need to run multiple high-performance data processing pipelines at a high frequency to gain maximum insight, do predictive analysis, and solve for other consumer needs. Managing our data pipelines via orchestration, scheduling, and monitoring becomes a very critical task if the overall data platform and its SLAs are to be stable and reliable. Let's have a look at the several open-source orchestration systems available to us. In this blog, we will go into detail about Airflow and how we can work with it to manage our data pipelines. When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative. — Airflow documentation Apache Airflow is a workflow management system to programmatically author, schedule and monitor data pipelines. It has become the de facto standard tool to orchestrate and schedule any kind of job, from machine learning model training to common ETL orchestration. Airflow Architecture (Source: Google) Modes of Airflow setup 1. Standalone: under standalone mode with a sequential executor, the executor picks up and runs jobs sequentially, which means there is no parallelism. 2. Pseudo-distributed: this runs with a local executor; the local workers pick up and run jobs locally via multiprocessing. This needs a MySQL setup to interact with the metadata. 3. Distributed mode: this runs with a Celery executor; remote workers pick up and run jobs as scheduled, with load balancing. Get Airflow running locally Here, we go through the commands that need to run in order to bring Airflow up locally (standalone); one can choose to skip any step if that package already exists in the local system. Installing Airflow and its dependencies
#airflow working directory
mkdir /path/to/dir/airflow
cd /path/to/dir/airflow
#install python and virtual env
brew install python3
pip install virtualenv
# activate virtual env
python3 -m venv venv
source venv/bin/activate
# to force a non GPL library('unidecode')
export SLUGIFY_USES_TEXT_UNIDECODE=yes
# install airflow
export AIRFLOW_HOME=~/path/to/dir/airflow
pip install apache-airflow
Once we have installed Airflow, the default config is imported into AIRFLOW_HOME and the folder structure looks like:
airflow/
├── airflow.cfg
└── unittests.cfg
The airflow.cfg file contains the default values of the configs, which are tweakable to change the behaviour. A few of them that are important and likely to be updated are:
plugins_folder #path to Airflow plugins
dags_folder #path to dags code
executor #executor which Airflow uses
base_log_folder #path where Airflow logs should be stored
web_server_port #port on which the web server will run
sql_alchemy_conn #connection of metadata database
load_examples #load default example DAGs
Default values of the configs can be found here. Preparing the database
airflow initdb #create and initialise the Airflow SQLite database
SQLite is the default database for Airflow, and is an adequate solution for local testing and development, but it does not support concurrent access. SQLite is inherently made for a single producer (write) and multiple (but a small number of) consumers (read). In a production environment we will certainly need to use a more robust database solution such as Postgres or MySQL. We can edit the sql_alchemy_conn config so that Airflow talks to a MySQL database with the required params.
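As a rough illustration of that setting (the user, password, host, and database name below are placeholders, not values from the original post), the line in airflow.cfg is a SQLAlchemy-style connection URI along these lines:
sql_alchemy_conn = mysql://airflow_user:airflow_pass@localhost:3306/airflow
This also assumes a MySQL client library is installed in the virtualenv; after pointing the config at a running MySQL instance, re-running airflow initdb would create the metadata tables there instead of in the local SQLite file.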
With the default SQLite setup, the folder structure after initdb looks like:
airflow/
├── airflow.cfg
├── airflow.db (SQLite)
└── unittests.cfg
Running the web server locally
To run the web server, execute:
airflow webserver -p 8080
After running this, we will be able to see the Airflow web UI up and running at the URL: http://localhost:8080/admin/
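The Airflow documentation quote above is about defining workflows as code, but the setup steps never show an actual DAG. Below is a minimal sketch, not part of the original walkthrough, of what a DAG file placed in the dags_folder could look like; the dag_id, task ids, schedule, and commands are made up for illustration, and the operator import paths assume the Airflow 1.x line that matches this setup (airflow initdb, the /admin/ web UI).

from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.operators.python_operator import PythonOperator


def greet():
    # trivial Python task, only here to demonstrate a PythonOperator
    print("hello from Airflow")


default_args = {
    "owner": "airflow",
    "start_date": datetime(2020, 1, 1),
    "retries": 1,
}

dag = DAG(
    dag_id="hello_world",        # name shown in the web UI
    default_args=default_args,
    schedule_interval="@daily",  # run once per day
    catchup=False,               # don't backfill runs before today
)

say_hello = BashOperator(
    task_id="say_hello",
    bash_command="echo hello",
    dag=dag,
)

greet_task = PythonOperator(
    task_id="greet",
    python_callable=greet,
    dag=dag,
)

say_hello >> greet_task  # say_hello must finish before greet runs

Dropping a file like this into the dags_folder configured in airflow.cfg should make the DAG appear in the web UI; actually executing it on its schedule also requires a scheduler process (airflow scheduler) running alongside the web server.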
https://medium.com/analytics-vidhya/airflow-zero-to-one-c65221588af1
['Neha Kumari']
2020-04-12 16:34:33.106000+00:00
['Airflow', 'Data', 'Data Science', 'Big Data', 'Data Engineering']
Title Airflow Zero One current world process lot of…Content current world process lot data churn rate increase exponentially passing time data belong primaryinheritedcapturedexhauststructuredunstructured categoryor intersection need run multiple high performant data processing pipeline high frequency gain maximum insight predictive analysis solve consumer need Managing data pipeline via orchestrating scheduling monitoring becomes critical task overall Data platform SLAs stable reliable Let’s look several opensource orchestration system available u blog go detail Airflow work manage data pipeline workflow defined code become maintainable versionable testable collaborative — Airflow documentation Apache Airflow workflow management system programmatically author schedule monitor data pipeline become defacto standard tool orchestrate schedule kind job machine learning model training common ETL orchestration Airflow Architecture Source Google Modes Airflow setup 1 Standalone standalone mode sequential executor executor pick run job sequentially mean parallelism 2 Pseudodistributed run local executor local worker pick run job locally via multiprocessing need setup mysql interact meta data 3 Distributed mode run celery executor remote worker pick run job scheduled loadbalanced Get Airflow running local going command need run order make Airflow locally upstandalone one choose skip step package already exist local system Installing Airflow it’s dependency airflow working directory mkdir pathtodirairflow cd pathtodirairflow install python virtual env brew install python3 pip install virtualenv activate virtual env python3 venv venv source venvbinactivate force non GPL library‘unidecode’ export SLUGIFYUSESTEXTUNIDECODEyes install airflow export AIRFLOWHOMEpathtodirairflow pip install apacheairflow installed Airflow default config imported AIRFLOWHOME folder structure look like airflow ├── airflowcfg └── unittestscfg airflowcfg file contains default value configs tweakable change behaviour important probable updated pluginsfolder path Airflow plugins dagsfolder path dag code executor executor Airflow us baselogfolder path Airflow log stored webserverport port web server run sqlalchemyconn connection metadata database loadexamples load default example DAGs Default value configs found Preparing database airflow initdb create initialise Airflow SQLite database SQLite default database Airflow adequate solution local testing development support concurrent access SQLite inherently made single producer write multiple small number consumer read production environment certainly need use robust database solution Postgres MySQL edit config sqlalchemyconn access MYSQL database required params airflow ├── airflowcfg ├── airflowdb SQLite └── unittestscfg Running web server locally run web server execute airflow webserver p 8080 running able see Airflow web UI running URL httplocalhost8080adminTags Airflow Data Data Science Big Data Data Engineering
3,790
Deploying Node.js apps in Amazon Linux with pm2
Running a Node.js application can be as trivial as node index.js, but running it in production and keeping it running are completely different. Whenever the application crashes or the server reboots unexpectedly, we want the application to come back alive. There are several ways we can properly run a Node.js application in production. In this article, I will be talking about how to deploy one using pm2 on an AWS EC2 instance running Amazon Linux. AWS EC2 Spin up an EC2 instance of your liking. Consider the load your server will be going through and the cost. Here you can get a pricing list for different types of instances: Choose Amazon Linux AMI. This is a free offering from Amazon. The Amazon Linux AMI is a supported and maintained Linux image provided by Amazon Web Services for use on Amazon Elastic Compute Cloud (Amazon EC2). It is designed to provide a stable, secure, and high performance execution environment for applications running on Amazon EC2. It supports the latest EC2 instance type features and includes packages that enable easy integration with AWS. Amazon Web Services provides ongoing security and maintenance updates to all instances running the Amazon Linux AMI. The Amazon Linux AMI is provided at no additional charge to Amazon EC2 users. Learn more at: Server configuration After the instance is up and running, SSH into it, preferably using a non-root account. Update packages:
sudo yum update -y
Install necessary dev tools:
sudo yum install -y gcc gcc-c++ make openssl-devel git
Install Node.js:
curl --silent --location https://rpm.nodesource.com/setup_10.x | sudo bash -
sudo yum install -y nodejs
This will install version 10 of Node.js. If you want to install a different version, you can change the location. We will run our application using pm2. Pm2 is a process manager for Node.js. It has a lot of useful features such as monitoring, clustering, reloading, log management, etc. I will discuss some of the features we will use and configure in our application. The features I find most noteworthy: Clustering — runs multiple instances of an application (depending on configuration, in our case we will use the number of cores to determine this) Reloading — reloads applications when they crash or the server reboots. Install pm2:
sudo npm install pm2@latest -g
Generate a pm2 startup script:
pm2 startup
This will daemonize pm2 and initialize it on system reboots. Learn more here: https://pm2.keymetrics.io/docs/usage/startup The source code You can use https to clone the source code. However, I find that using a deploy key is much better, and I can give read-only access to the server. Here is a simplified way to generate and use deploy keys: Generate a new ssh key using:
ssh-keygen
Do not enter a passphrase. Copy the public key contents printed by the command:
cat ~/.ssh/id_rsa.pub
If you are using Github, add it to the Deploy Keys section of your repository's Settings page. After the repository is cloned, run the scripts you need to run in order to get your project ready. For example, if my project uses yarn as the package manager and typescript as the language which needs to be transpiled to javascript when deploying, I will run the following commands:
yarn install
yarn build
The second command runs the build script from my package.json file which states: "build": "tsc" We can now run the application by running:
node dist/index.js
But we are not going to, because we want to use pm2 to run our application.
The Ecosystem File Pm2 provides a way to configure our application in an ecosystem file where we can easily tune the various configurable options provided. You can generate an ecosystem file by running:
pm2 ecosystem
Our application's ecosystem file contains: ecosystem.config.js:
module.exports = {
  apps : [{
    name: 'My App',
    script: 'dist/index.js',
    instances: 'max',
    max_memory_restart: '256M',
    env: {
      NODE_ENV: 'development'
    },
    env_production: {
      NODE_ENV: 'production'
    }
  }]
};
What this configuration tells pm2 is: run the application and name it My App. Run it using the script dist/index.js. Spawn as many instances of the application as there are CPU cores present. Mind the NODE_ENV environment variable. This has several benefits when running an express application. It boosts the performance of the app by tweaking a few things such as (taken from the express documentation): 1. Cache view templates. 2. Cache CSS files generated from CSS extensions. 3. Generate less verbose error messages. Read more here: There are a lot more options in pm2 that you can tweak; I am leaving those at default values. Check them out here: Run the application:
pm2 reload ecosystem.config.js --env production
This command reloads the application with the production environment declared in the ecosystem file. This process is also done with zero downtime. It compares the ecosystem configuration and currently running processes and updates as necessary. We want to be able to write up a script for every time we need to deploy. This way, the app is not shut down and started again (which a restart does). Read more about it: When our application is up and running, we have to save the process list we want to respawn for when the system reboots unexpectedly:
pm2 save
We can check our running applications with:
pm2 status
Monitor our apps:
pm2 monit
View logs:
pm2 logs
Let's create a handy script to deploy when there is a change: deploy.sh:
#!/bin/bash
git pull
yarn install
npm run build
pm2 reload ecosystem.config.js --env production
# EOF
Make the file executable:
chmod +x deploy.sh
Now, every time you need to deploy changes, simply run:
./deploy.sh
Conclusion Let's recap: Create an EC2 instance running Amazon Linux. Update packages (might include security updates). Install the desired Node.js version. Use a process manager to run the application (such as pm2). Use deploy keys to pull code from the source repository. Create an ecosystem configuration file so that it is maintainable in the future. Create a deploy script so that it is easy to run future deployments. Run the deployment script whenever there is a change to be deployed. Congratulations! Your application is up and running. There are several other ways to achieve the same end goal, such as using forever instead of pm2, or even using Docker and deploying to Amazon ECS. This is documentation of how I deploy Node.js applications in production when running them on EC2 instances. When your deployments become more frequent, you should consider a CI/CD integration to build and deploy whenever there is a change in the source code. Make sure you monitor and keep an eye on your server's resource usage. Last but not least, make sure you have proper logging in your application. I cannot stress enough how important proper logging is. Tweet me at @war1oc if you have anything to ask or add. Check out other articles from our engineering team: https://medium.com/monstar-lab-bangladesh-engineering Visit our website to learn more about us: www.monstar-lab.co.bd
https://medium.com/monstar-lab-bangladesh-engineering/deploying-node-js-apps-in-amazon-linux-with-pm2-7fc3ef5897bb
['Tanveer Hassan']
2019-08-21 10:40:38.433000+00:00
['Software Engineering', 'AWS', 'Programming', 'Nodejs', 'JavaScript']
Title Deploying Nodejs apps Amazon Linux pm2Content Running Nodejs application trivial node indexjs running production keeping running completely different Whenever application crash server reboots unexpectedly want application come back alive several way properly run Nodejs application production article talking deploy one using pm2 AWS EC2 instance running Amazon Linux AWS EC2 Spin EC2 instance liking Consider load server going cost get pricing list different type instance Choose Amazon Linux AMI free offering Amazon Amazon Linux AMI supported maintained Linux image provided Amazon Web Services use Amazon Elastic Compute Cloud Amazon EC2 designed provide stable secure high performance execution environment application running Amazon EC2 support latest EC2 instance type feature includes package enable easy integration AWS Amazon Web Services provides ongoing security maintenance update instance running Amazon Linux AMI Amazon Linux AMI provided additional charge Amazon EC2 user Learn Server configuration instance running SSH preferably using nonroot account Update package sudo yum update Install necessary dev tool sudo yum install gcc gccc make openssldevel git Install Nodejs curl silent location httpsrpmnodesourcecomsetup10x sudo bash sudo yum install nodejs install version 10 Nodejs want install different version change location run application using pm2 Pm2 process manager Nodejs lot useful feature monitoring clustering reloading log management etc discus feature use configure application feature find noteworthy Clustering — run multiple instance application depending configuration case use number core determine Reloading — reloads application crash server reboots Install pm2 sudo npm install pm2latest g Generate pm2 startup script pm2 startup daemonize pm2 initialize system reboots Learn httpspm2keymetricsiodocsusagestartup source code use http clone source code However find using deploy key much better give readonly access server simplified way generate use deploy key Generate new ssh key using sshkeygen enter passphrase Copy public key content printed command cat sshidrsapub using Github add Deploy Keys section repository’s Settings page repository cloned Run script need run order get project ready example project us yarn package manager typescript language need transpiled javascript deploying run following command yarn install yarn build second command run build script packagejson file state “build” “tsc” run application running node distindexjs going want use pm2 run application Ecosystem File Pm2 provides way configure application ecosystem file easily tune various configurable option provided generate ecosystem file running pm2 ecosystem application’s ecosystem file contains ecosystemconfigjs moduleexports apps name ‘My App’ script ‘distindexjs’ instance ‘max’ maxmemoryrestart ‘256M’ env NODEENV ‘development’ envproduction NODEENV ‘production’ configuration tell pm2 run application name App Run using script distindexjs Spawn many instance application according number CPUs present Mind NODEENV environment variable several benefit running express application boost performance app tweaking thing Taken express documentation 1 Cache view template 2 Cache CSS file generated CSS extension 3 Generate le verbose error message Read lot option pm2 tweak leaving default value Check Run application pm2 reload ecosystemconfigjs env production command reloads application production environment declared ecosystem file process also done zero downtime compare ecosystem configuration currently 
running process update necessary want able write script everytime need deploy way app shut started restart Read application running save process list want respawn system reboots unexpectedly pm2 save check running application pm2 status Monitor apps pm2 monit View log pm2 log Let’s create handy script deploy change deploysh binbash git pull yarn install npm run build pm2 reload ecosystemconfigjs env production EOF Make file executable chmod x deploysh every time need deploy change simply run deploysh Conclusion Let’s recap Create EC2 instance running Amazon Linux Update package might include security update Install desired Nodejs version Use process manager run application pm2 Use deploy key pull code source repository Create ecosystem configuration file maintainable future Create deploy script easy run future deployment Run deployment script whenever change deployed Congratulations application running several way achieve end goal using forever instead pm2 even using Docker instead deploy Amazon ECS documentation deploy Nodejs application production running EC2 instance deployment become frequent consider CICD integration build deploy whenever change source code Make sure monitor keep eye server’s resource usage Last least make sure proper logging application cannot stress enough important proper logging Tweet war1oc anything ask add Check article engineering team httpsmediumcommonstarlabbangladeshengineering Visit website learn u wwwmonstarlabcobdTags Software Engineering AWS Programming Nodejs JavaScript
3,791
New Class Naming Rules in Ruby
New Class Naming Rules in Ruby There were 26 valid characters. Now there are 1,853! Heads up, we’ve moved! If you’d like to continue keeping up with the latest technical content from Square please visit us at our new home https://developer.squareup.com/blog In Ruby 2.5 and prior: It’s been a longstanding rule in Ruby that you must use a capital ASCII letter as the first character of a Class or Module name. This limited you to just these 26 characters: ABCDEFGHIJKLMNOPQRSTUVWXYZ New in Ruby 2.6: In Ruby 2.6, non-ASCII upper case characters are allowed. By my count, that makes a total of 1,853 options! Here are the 1,827 new characters that can start a Class or Module name in Ruby 2.6: ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞĀĂĄĆĈĊČĎĐĒĔĖĘĚĜĞĠĢĤĦĨĪĬĮİIJĴĶĹĻĽĿŁŃŅŇŊŌŎŐŒŔŖŘŚŜŞŠŢŤŦŨŪŬŮŰŲŴŶŸŹŻŽƁƂƄƆƇƉƊƋƎƏƐƑƓƔƖƗƘƜƝƟƠƢƤƦƧƩƬƮƯƱƲƳƵƷƸƼDŽDžLJLjNJNjǍǏǑǓǕǗǙǛǞǠǢǤǦǨǪǬǮDZDzǴǶǷǸǺǼǾȀȂȄȆȈȊȌȎȐȒȔȖȘȚȜȞȠȢȤȦȨȪȬȮȰȲȺȻȽȾɁɃɄɅɆɈɊɌɎͰͲͶͿΆΈΉΊΌΎΏΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫϏϒϓϔϘϚϜϞϠϢϤϦϨϪϬϮϴϷϹϺϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯѠѢѤѦѨѪѬѮѰѲѴѶѸѺѼѾҀҊҌҎҐҒҔҖҘҚҜҞҠҢҤҦҨҪҬҮҰҲҴҶҸҺҼҾӀӁӃӅӇӉӋӍӐӒӔӖӘӚӜӞӠӢӤӦӨӪӬӮӰӲӴӶӸӺӼӾԀԂԄԆԈԊԌԎԐԒԔԖԘԚԜԞԠԢԤԦԨԪԬԮԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉՊՋՌՍՎՏՐՑՒՓՔՕՖႠႡႢႣႤႥႦႧႨႩႪႫႬႭႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅჇჍᎠᎡᎢᎣᎤᎥᎦᎧᎨᎩᎪᎫᎬᎭᎮᎯᎰᎱᎲᎳᎴᎵᎶᎷᎸᎹᎺᎻᎼᎽᎾᎿᏀᏁᏂᏃᏄᏅᏆᏇᏈᏉᏊᏋᏌᏍᏎᏏᏐᏑᏒᏓᏔᏕᏖᏗᏘᏙᏚᏛᏜᏝᏞᏟᏠᏡᏢᏣᏤᏥᏦᏧᏨᏩᏪᏫᏬᏭᏮᏯᏰᏱᏲᏳᏴᏵḀḂḄḆḈḊḌḎḐḒḔḖḘḚḜḞḠḢḤḦḨḪḬḮḰḲḴḶḸḺḼḾṀṂṄṆṈṊṌṎṐṒṔṖṘṚṜṞṠṢṤṦṨṪṬṮṰṲṴṶṸṺṼṾẀẂẄẆẈẊẌẎẐẒẔẞẠẢẤẦẨẪẬẮẰẲẴẶẸẺẼẾỀỂỄỆỈỊỌỎỐỒỔỖỘỚỜỞỠỢỤỦỨỪỬỮỰỲỴỶỸỺỼỾἈἉἊἋἌἍἎἏἘἙἚἛἜἝἨἩἪἫἬἭἮἯἸἹἺἻἼἽἾἿὈὉὊὋὌὍὙὛὝὟὨὩὪὫὬὭὮὯᾈᾉᾊᾋᾌᾍᾎᾏᾘᾙᾚᾛᾜᾝᾞᾟᾨᾩᾪᾫᾬᾭᾮᾯᾸᾹᾺΆᾼῈΈῊΉῌῘῙῚΊῨῩῪΎῬῸΌῺΏῼℂℇℋℌℍℐℑℒℕℙℚℛℜℝℤΩℨKÅℬℭℰℱℲℳℾℿⅅⅠⅡⅢⅣⅤⅥⅦⅧⅨⅩⅪⅫⅬⅭⅮⅯↃⒶⒷⒸⒹⒺⒻⒼⒽⒾⒿⓀⓁⓂⓃⓄⓅⓆⓇⓈⓉⓊⓋⓌⓍⓎⓏⰀⰁⰂⰃⰄⰅⰆⰇⰈⰉⰊⰋⰌⰍⰎⰏⰐⰑⰒⰓⰔⰕⰖⰗⰘⰙⰚⰛⰜⰝⰞⰟⰠⰡⰢⰣⰤⰥⰦⰧⰨⰩⰪⰫⰬⰭⰮⱠⱢⱣⱤⱧⱩⱫⱭⱮⱯⱰⱲⱵⱾⱿⲀⲂⲄⲆⲈⲊⲌⲎⲐⲒⲔⲖⲘⲚⲜⲞⲠⲢⲤⲦⲨⲪⲬⲮⲰⲲⲴⲶⲸⲺⲼⲾⳀⳂⳄⳆⳈⳊⳌⳎⳐⳒⳔⳖⳘⳚⳜⳞⳠⳢⳫⳭⳲꙀꙂꙄꙆꙈꙊꙌꙎꙐꙒꙔꙖꙘꙚꙜꙞꙠꙢꙤꙦꙨꙪꙬꚀꚂꚄꚆꚈꚊꚌꚎꚐꚒꚔꚖꚘꚚꜢꜤꜦꜨꜪꜬꜮꜲꜴꜶꜸꜺꜼꜾꝀꝂꝄꝆꝈꝊꝌꝎꝐꝒꝔꝖꝘꝚꝜꝞꝠꝢꝤꝦꝨꝪꝬꝮꝹꝻꝽꝾꞀꞂꞄꞆꞋꞍꞐꞒꞖꞘꞚꞜꞞꞠꞢꞤꞦꞨꞪꞫꞬꞭꞮꞰꞱꞲꞳꞴꞶ𐐀𐐁𐐂𐐃𐐄𐐅𐐆𐐇𐐈𐐉𐐊𐐋𐐌𐐍𐐎𐐏𐐐𐐑𐐒𐐓𐐔𐐕𐐖𐐗𐐘𐐙𐐚𐐛𐐜𐐝𐐞𐐟𐐠𐐡𐐢𐐣𐐤𐐥𐐦𐐧𐒰𐒱𐒲𐒳𐒴𐒵𐒶𐒷𐒸𐒹𐒺𐒻𐒼𐒽𐒾𐒿𐓀𐓁𐓂𐓃𐓄𐓅𐓆𐓇𐓈𐓉𐓊𐓋𐓌𐓍𐓎𐓏𐓐𐓑𐓒𐓓𐲀𐲁𐲂𐲃𐲄𐲅𐲆𐲇𐲈𐲉𐲊𐲋𐲌𐲍𐲎𐲏𐲐𐲑𐲒𐲓𐲔𐲕𐲖𐲗𐲘𐲙𐲚𐲛𐲜𐲝𐲞𐲟𐲠𐲡𐲢𐲣𐲤𐲥𐲦𐲧𐲨𐲩𐲪𐲫𐲬𐲭𐲮𐲯𐲰𐲱𐲲𑢠𑢡𑢢𑢣𑢤𑢥𑢦𑢧𑢨𑢩𑢪𑢫𑢬𑢭𑢮𑢯𑢰𑢱𑢲𑢳𑢴𑢵𑢶𑢷𑢸𑢹𑢺𑢻𑢼𑢽𑢾𑢿𝐀𝐁𝐂𝐃𝐄𝐅𝐆𝐇𝐈𝐉𝐊𝐋𝐌𝐍𝐎𝐏𝐐𝐑𝐒𝐓𝐔𝐕𝐖𝐗𝐘𝐙𝐴𝐵𝐶𝐷𝐸𝐹𝐺𝐻𝐼𝐽𝐾𝐿𝑀𝑁𝑂𝑃𝑄𝑅𝑆𝑇𝑈𝑉𝑊𝑋𝑌𝑍𝑨𝑩𝑪𝑫𝑬𝑭𝑮𝑯𝑰𝑱𝑲𝑳𝑴𝑵𝑶𝑷𝑸𝑹𝑺𝑻𝑼𝑽𝑾𝑿𝒀𝒁𝒜𝒞𝒟𝒢𝒥𝒦𝒩𝒪𝒫𝒬𝒮𝒯𝒰𝒱𝒲𝒳𝒴𝒵𝓐𝓑𝓒𝓓𝓔𝓕𝓖𝓗𝓘𝓙𝓚𝓛𝓜𝓝𝓞𝓟𝓠𝓡𝓢𝓣𝓤𝓥𝓦𝓧𝓨𝓩𝔄𝔅𝔇𝔈𝔉𝔊𝔍𝔎𝔏𝔐𝔑𝔒𝔓𝔔𝔖𝔗𝔘𝔙𝔚𝔛𝔜𝔸𝔹𝔻𝔼𝔽𝔾𝕀𝕁𝕂𝕃𝕄𝕆𝕊𝕋𝕌𝕍𝕎𝕏𝕐𝕬𝕭𝕮𝕯𝕰𝕱𝕲𝕳𝕴𝕵𝕶𝕷𝕸𝕹𝕺𝕻𝕼𝕽𝕾𝕿𝖀𝖁𝖂𝖃𝖄𝖅𝖠𝖡𝖢𝖣𝖤𝖥𝖦𝖧𝖨𝖩𝖪𝖫𝖬𝖭𝖮𝖯𝖰𝖱𝖲𝖳𝖴𝖵𝖶𝖷𝖸𝖹𝗔𝗕𝗖𝗗𝗘𝗙𝗚𝗛𝗜𝗝𝗞𝗟𝗠𝗡𝗢𝗣𝗤𝗥𝗦𝗧𝗨𝗩𝗪𝗫𝗬𝗭𝘈𝘉𝘊𝘋𝘌𝘍𝘎𝘏𝘐𝘑𝘒𝘓𝘔𝘕𝘖𝘗𝘘𝘙𝘚𝘛𝘜𝘝𝘞𝘟𝘠𝘡𝘼𝘽𝘾𝘿𝙀𝙁𝙂𝙃𝙄𝙅𝙆𝙇𝙈𝙉𝙊𝙋𝙌𝙍𝙎𝙏𝙐𝙑𝙒𝙓𝙔𝙕𝙰𝙱𝙲𝙳𝙴𝙵𝙶𝙷𝙸𝙹𝙺𝙻𝙼𝙽𝙾𝙿𝚀𝚁𝚂𝚃𝚄𝚅𝚆𝚇𝚈𝚉𝚨𝚩𝚪𝚫𝚬𝚭𝚮𝚯𝚰𝚱𝚲𝚳𝚴𝚵𝚶𝚷𝚸𝚹𝚺𝚻𝚼𝚽𝚾𝚿𝛀𝛢𝛣𝛤𝛥𝛦𝛧𝛨𝛩𝛪𝛫𝛬𝛭𝛮𝛯𝛰𝛱𝛲𝛳𝛴𝛵𝛶𝛷𝛸𝛹𝛺𝜜𝜝𝜞𝜟𝜠𝜡𝜢𝜣𝜤𝜥𝜦𝜧𝜨𝜩𝜪𝜫𝜬𝜭𝜮𝜯𝜰𝜱𝜲𝜳𝜴𝝖𝝗𝝘𝝙𝝚𝝛𝝜𝝝𝝞𝝟𝝠𝝡𝝢𝝣𝝤𝝥𝝦𝝧𝝨𝝩𝝪𝝫𝝬𝝭𝝮𝞐𝞑𝞒𝞓𝞔𝞕𝞖𝞗𝞘𝞙𝞚𝞛𝞜𝞝𝞞𝞟𝞠𝞡𝞢𝞣𝞤𝞥𝞦𝞧𝞨𝟊𞤀𞤁𞤂𞤃𞤄𞤅𞤆𞤇𞤈𞤉𞤊𞤋𞤌𞤍𞤎𞤏𞤐𞤑𞤒𞤓𞤔𞤕𞤖𞤗𞤘𞤙𞤚𞤛𞤜𞤝𞤞𞤟𞤠𞤡🄰🄱🄲🄳🄴🄵🄶🄷🄸🄹🄺🄻🄼🄽🄾🄿🅀🅁🅂🅃🅄🅅🅆🅇🅈🅉🅐🅑🅒🅓🅔🅕🅖🅗🅘🅙🅚🅛🅜🅝🅞🅟🅠🅡🅢🅣🅤🅥🅦🅧🅨🅩🅰🅱🅲🅳🅴🅵🅶🅷🅸🅹🅺🅻🅼🅽🅾🅿🆀🆁🆂🆃🆄🆅🆆🆇🆈🆉ABCDEFGHIJKLMNOPQRSTUVWXYZ (Characters unsupported by this font appear as squares.) This change supports upper case characters in other languages but doesn’t go so far as to allow emoji as a Class or Module name. These examples are now valid Ruby: It’s worth noting that local variables in Ruby could begin with these characters in Ruby 2.5 and earlier. (Thanks to Cary Swoveland for pointing this out.) A local variable starting with one of these characters would become a constant in Ruby 2.6. Why support these additional characters? Sergei Borodanov started an issue ticket asking about support for Cyrillic characters. Matz decided, “maybe it’s time to relax the limitation for Non-ASCII capital letters to start constant names.” Nobuyoshi (“nobu”) Nakada (a.k.a. “patch monster”) wrote and committed the patch to support this new feature. With the addition of this feature, Rubyists in various languages can use their own alphabet for the first character of a Class or Module. 
For example, a Greek Rubyist can now have an Ωμέγα class, instead of an Oμέγα class — where the first letter is transliterated. Thanks to the Ruby core team for making this change! It will be shipped on December 25, 2018 with Ruby 2.6. We use Ruby for lots of things here at Square — including our Square Connect Ruby SDKs and open source Ruby projects. We’re eagerly awaiting the release of Ruby 2.6! The Ruby logo is Copyright © 2006, Yukihiro Matsumoto, distributed under CC BY-SA 2.5. Want more? Sign up for your monthly developer newsletter or drop by the Square dev Slack channel and say “hi!”
https://medium.com/square-corner-blog/new-class-naming-rules-in-ruby-bb3b45150c37
['Shannon Skipper']
2019-04-18 22:18:55.872000+00:00
['Ruby', 'Programming Languages', 'Software Development', 'Software Engineering', 'Engineering']
Title New Class Naming Rules RubyContent New Class Naming Rules Ruby 26 valid character 1853 Heads we’ve moved you’d like continue keeping latest technical content Square please visit u new home httpsdevelopersquareupcomblog Ruby 25 prior It’s longstanding rule Ruby must use capital ASCII letter first character Class Module name limited 26 character ABCDEFGHIJKLMNOPQRSTUVWXYZ New Ruby 26 Ruby 26 nonASCII upper case character allowed count make total 1853 option 1827 new character start Class Module name Ruby 26 ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞĀĂĄĆĈĊČĎĐĒĔĖĘĚĜĞĠĢĤĦĨĪĬĮİIJĴĶĹĻĽĿŁŃŅŇŊŌŎŐŒŔŖŘŚŜŞŠŢŤŦŨŪŬŮŰŲŴŶŸŹŻŽƁƂƄƆƇƉƊƋƎƏƐƑƓƔƖƗƘƜƝƟƠƢƤƦƧƩƬƮƯƱƲƳƵƷƸƼDŽDžLJLjNJNjǍǏǑǓǕǗǙǛǞǠǢǤǦǨǪǬǮDZDzǴǶǷǸǺǼǾȀȂȄȆȈȊȌȎȐȒȔȖȘȚȜȞȠȢȤȦȨȪȬȮȰȲȺȻȽȾɁɃɄɅɆɈɊɌɎͰͲͶͿΆΈΉΊΌΎΏΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩΪΫϏϒϓϔϘϚϜϞϠϢϤϦϨϪϬϮϴϷϹϺϽϾϿЀЁЂЃЄЅІЇЈЉЊЋЌЍЎЏАБВГДЕЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯѠѢѤѦѨѪѬѮѰѲѴѶѸѺѼѾҀҊҌҎҐҒҔҖҘҚҜҞҠҢҤҦҨҪҬҮҰҲҴҶҸҺҼҾӀӁӃӅӇӉӋӍӐӒӔӖӘӚӜӞӠӢӤӦӨӪӬӮӰӲӴӶӸӺӼӾԀԂԄԆԈԊԌԎԐԒԔԖԘԚԜԞԠԢԤԦԨԪԬԮԱԲԳԴԵԶԷԸԹԺԻԼԽԾԿՀՁՂՃՄՅՆՇՈՉՊՋՌՍՎՏՐՑՒՓՔՕՖႠႡႢႣႤႥႦႧႨႩႪႫႬႭႮႯႰႱႲႳႴႵႶႷႸႹႺႻႼႽႾႿჀჁჂჃჄჅჇჍᎠᎡᎢᎣᎤᎥᎦᎧᎨᎩᎪᎫᎬᎭᎮᎯᎰᎱᎲᎳᎴᎵᎶᎷᎸᎹᎺᎻᎼᎽᎾᎿᏀᏁᏂᏃᏄᏅᏆᏇᏈᏉᏊᏋᏌᏍᏎᏏᏐᏑᏒᏓᏔᏕᏖᏗᏘᏙᏚᏛᏜᏝᏞᏟᏠᏡᏢᏣᏤᏥᏦᏧᏨᏩᏪᏫᏬᏭᏮᏯᏰᏱᏲᏳᏴᏵḀḂḄḆḈḊḌḎḐḒḔḖḘḚḜḞḠḢḤḦḨḪḬḮḰḲḴḶḸḺḼḾṀṂṄṆṈṊṌṎṐṒṔṖṘṚṜṞṠṢṤṦṨṪṬṮṰṲṴṶṸṺṼṾẀẂẄẆẈẊẌẎẐẒẔẞẠẢẤẦẨẪẬẮẰẲẴẶẸẺẼẾỀỂỄỆỈỊỌỎỐỒỔỖỘỚỜỞỠỢỤỦỨỪỬỮỰỲỴỶỸỺỼỾἈἉἊἋἌἍἎἏἘἙἚἛἜἝἨἩἪἫἬἭἮἯἸἹἺἻἼἽἾἿὈὉὊὋὌὍὙὛὝὟὨὩὪὫὬὭὮὯᾈᾉᾊᾋᾌᾍᾎᾏᾘᾙᾚᾛᾜᾝᾞᾟᾨᾩᾪᾫᾬᾭᾮᾯᾸᾹᾺΆᾼῈΈῊΉῌῘῙῚΊῨῩῪΎῬῸΌῺΏῼℂℇℋℌℍℐℑℒℕℙℚℛℜℝℤΩℨKÅℬℭℰℱℲℳℾℿⅅⅠⅡⅢⅣⅤⅥⅦⅧⅨⅩⅪⅫⅬⅭⅮⅯↃⒶⒷⒸⒹⒺⒻⒼⒽⒾⒿⓀⓁⓂⓃⓄⓅⓆⓇⓈⓉⓊⓋⓌⓍⓎⓏⰀⰁⰂⰃⰄⰅⰆⰇⰈⰉⰊⰋⰌⰍⰎⰏⰐⰑⰒⰓⰔⰕⰖⰗⰘⰙⰚⰛⰜⰝⰞⰟⰠⰡⰢⰣⰤⰥⰦⰧⰨⰩⰪⰫⰬⰭⰮⱠⱢⱣⱤⱧⱩⱫⱭⱮⱯⱰⱲⱵⱾⱿⲀⲂⲄⲆⲈⲊⲌⲎⲐⲒⲔⲖⲘⲚⲜⲞⲠⲢⲤⲦⲨⲪⲬⲮⲰⲲⲴⲶⲸⲺⲼⲾⳀⳂⳄⳆⳈⳊⳌⳎⳐⳒⳔⳖⳘⳚⳜⳞⳠⳢⳫⳭⳲꙀꙂꙄꙆꙈꙊꙌꙎꙐꙒꙔꙖꙘꙚꙜꙞꙠꙢꙤꙦꙨꙪꙬꚀꚂꚄꚆꚈꚊꚌꚎꚐꚒꚔꚖꚘꚚꜢꜤꜦꜨꜪꜬꜮꜲꜴꜶꜸꜺꜼꜾꝀꝂꝄꝆꝈꝊꝌꝎꝐꝒꝔꝖꝘꝚꝜꝞꝠꝢꝤꝦꝨꝪꝬꝮꝹꝻꝽꝾꞀꞂꞄꞆꞋꞍꞐꞒꞖꞘꞚꞜꞞꞠꞢꞤꞦꞨꞪꞫꞬꞭꞮꞰꞱꞲꞳꞴꞶ𐐀𐐁𐐂𐐃𐐄𐐅𐐆𐐇𐐈𐐉𐐊𐐋𐐌𐐍𐐎𐐏𐐐𐐑𐐒𐐓𐐔𐐕𐐖𐐗𐐘𐐙𐐚𐐛𐐜𐐝𐐞𐐟𐐠𐐡𐐢𐐣𐐤𐐥𐐦𐐧𐒰𐒱𐒲𐒳𐒴𐒵𐒶𐒷𐒸𐒹𐒺𐒻𐒼𐒽𐒾𐒿𐓀𐓁𐓂𐓃𐓄𐓅𐓆𐓇𐓈𐓉𐓊𐓋𐓌𐓍𐓎𐓏𐓐𐓑𐓒𐓓𐲀𐲁𐲂𐲃𐲄𐲅𐲆𐲇𐲈𐲉𐲊𐲋𐲌𐲍𐲎𐲏𐲐𐲑𐲒𐲓𐲔𐲕𐲖𐲗𐲘𐲙𐲚𐲛𐲜𐲝𐲞𐲟𐲠𐲡𐲢𐲣𐲤𐲥𐲦𐲧𐲨𐲩𐲪𐲫𐲬𐲭𐲮𐲯𐲰𐲱𐲲𑢠𑢡𑢢𑢣𑢤𑢥𑢦𑢧𑢨𑢩𑢪𑢫𑢬𑢭𑢮𑢯𑢰𑢱𑢲𑢳𑢴𑢵𑢶𑢷𑢸𑢹𑢺𑢻𑢼𑢽𑢾𑢿𝐀𝐁𝐂𝐃𝐄𝐅𝐆𝐇𝐈𝐉𝐊𝐋𝐌𝐍𝐎𝐏𝐐𝐑𝐒𝐓𝐔𝐕𝐖𝐗𝐘𝐙𝐴𝐵𝐶𝐷𝐸𝐹𝐺𝐻𝐼𝐽𝐾𝐿𝑀𝑁𝑂𝑃𝑄𝑅𝑆𝑇𝑈𝑉𝑊𝑋𝑌𝑍𝑨𝑩𝑪𝑫𝑬𝑭𝑮𝑯𝑰𝑱𝑲𝑳𝑴𝑵𝑶𝑷𝑸𝑹𝑺𝑻𝑼𝑽𝑾𝑿𝒀𝒁𝒜𝒞𝒟𝒢𝒥𝒦𝒩𝒪𝒫𝒬𝒮𝒯𝒰𝒱𝒲𝒳𝒴𝒵𝓐𝓑𝓒𝓓𝓔𝓕𝓖𝓗𝓘𝓙𝓚𝓛𝓜𝓝𝓞𝓟𝓠𝓡𝓢𝓣𝓤𝓥𝓦𝓧𝓨𝓩𝔄𝔅𝔇𝔈𝔉𝔊𝔍𝔎𝔏𝔐𝔑𝔒𝔓𝔔𝔖𝔗𝔘𝔙𝔚𝔛𝔜𝔸𝔹𝔻𝔼𝔽𝔾𝕀𝕁𝕂𝕃𝕄𝕆𝕊𝕋𝕌𝕍𝕎𝕏𝕐𝕬𝕭𝕮𝕯𝕰𝕱𝕲𝕳𝕴𝕵𝕶𝕷𝕸𝕹𝕺𝕻𝕼𝕽𝕾𝕿𝖀𝖁𝖂𝖃𝖄𝖅𝖠𝖡𝖢𝖣𝖤𝖥𝖦𝖧𝖨𝖩𝖪𝖫𝖬𝖭𝖮𝖯𝖰𝖱𝖲𝖳𝖴𝖵𝖶𝖷𝖸𝖹𝗔𝗕𝗖𝗗𝗘𝗙𝗚𝗛𝗜𝗝𝗞𝗟𝗠𝗡𝗢𝗣𝗤𝗥𝗦𝗧𝗨𝗩𝗪𝗫𝗬𝗭𝘈𝘉𝘊𝘋𝘌𝘍𝘎𝘏𝘐𝘑𝘒𝘓𝘔𝘕𝘖𝘗𝘘𝘙𝘚𝘛𝘜𝘝𝘞𝘟𝘠𝘡𝘼𝘽𝘾𝘿𝙀𝙁𝙂𝙃𝙄𝙅𝙆𝙇𝙈𝙉𝙊𝙋𝙌𝙍𝙎𝙏𝙐𝙑𝙒𝙓𝙔𝙕𝙰𝙱𝙲𝙳𝙴𝙵𝙶𝙷𝙸𝙹𝙺𝙻𝙼𝙽𝙾𝙿𝚀𝚁𝚂𝚃𝚄𝚅𝚆𝚇𝚈𝚉𝚨𝚩𝚪𝚫𝚬𝚭𝚮𝚯𝚰𝚱𝚲𝚳𝚴𝚵𝚶𝚷𝚸𝚹𝚺𝚻𝚼𝚽𝚾𝚿𝛀𝛢𝛣𝛤𝛥𝛦𝛧𝛨𝛩𝛪𝛫𝛬𝛭𝛮𝛯𝛰𝛱𝛲𝛳𝛴𝛵𝛶𝛷𝛸𝛹𝛺𝜜𝜝𝜞𝜟𝜠𝜡𝜢𝜣𝜤𝜥𝜦𝜧𝜨𝜩𝜪𝜫𝜬𝜭𝜮𝜯𝜰𝜱𝜲𝜳𝜴𝝖𝝗𝝘𝝙𝝚𝝛𝝜𝝝𝝞𝝟𝝠𝝡𝝢𝝣𝝤𝝥𝝦𝝧𝝨𝝩𝝪𝝫𝝬𝝭𝝮𝞐𝞑𝞒𝞓𝞔𝞕𝞖𝞗𝞘𝞙𝞚𝞛𝞜𝞝𝞞𝞟𝞠𝞡𝞢𝞣𝞤𝞥𝞦𝞧𝞨𝟊𞤀𞤁𞤂𞤃𞤄𞤅𞤆𞤇𞤈𞤉𞤊𞤋𞤌𞤍𞤎𞤏𞤐𞤑𞤒𞤓𞤔𞤕𞤖𞤗𞤘𞤙𞤚𞤛𞤜𞤝𞤞𞤟𞤠𞤡🄰🄱🄲🄳🄴🄵🄶🄷🄸🄹🄺🄻🄼🄽🄾🄿🅀🅁🅂🅃🅄🅅🅆🅇🅈🅉🅐🅑🅒🅓🅔🅕🅖🅗🅘🅙🅚🅛🅜🅝🅞🅟🅠🅡🅢🅣🅤🅥🅦🅧🅨🅩🅰🅱🅲🅳🅴🅵🅶🅷🅸🅹🅺🅻🅼🅽🅾🅿🆀🆁🆂🆃🆄🆅🆆🆇🆈🆉ABCDEFGHIJKLMNOPQRSTUVWXYZ Characters unsupported font appear square change support upper case character language doesn’t go far allow emoji Class Module name example valid Ruby It’s worth noting local variable Ruby could begin character Ruby 25 earlier Thanks Cary Swoveland pointing local variable starting one character would become constant Ruby 26 support additional character Sergei Borodanov started issue ticket asking support Cyrillic character Matz decided “maybe it’s time relax limitation NonASCII capital letter start constant names” Nobuyoshi “nobu” Nakada aka “patch monster” wrote committed patch support new feature addition feature Rubyists various language use alphabet first character Class Module example Greek Rubyist Ωμέγα class instead Oμέγα class — first letter transliterated Thanks Ruby core team making change shipped December 25 2018 Ruby 26 use Ruby lot thing Square — including Square Connect Ruby SDKs open source Ruby project We’re eagerly awaiting release Ruby 26 Ruby logo Copyright © 2006 Yukihiro Matsumoto distributed CC BYSA 25 Want Sign monthly developer newsletter drop Square dev Slack channel say “hi”Tags Ruby Programming Languages Software Development Software Engineering 
Engineering
3,792
Financial Times Data Platform: From zero to hero
Financial Times Data Platform: From zero to hero An in-depth walkthrough of the evolution of our Data Platform The Financial Times, one of the world’s leading business news organisations, has been around for more than 130 years and is famous for its quality journalism. To stay at the top for this long, you have to be able to adapt as the world changes. For the last decade, that has meant being able to take advantage of the opportunities that technology provides, as the FT undergoes a digital transformation. This article will take an in-depth look behind the scenes for one part of that transformation: the creation and evolution of the Financial Times’ Data platform. The Data Platform provides information about how our readers interact with the FT that allows us to make decisions about how we can continue to deliver the things our readers want and need. Generation 1: 2008–2014 Early days At first, the Data Platform focussed on providing recommendations to readers based on what they had read already. At the time, the majority of our readers still read the FT in print, so a single store and 24 hours latency was sufficient. The architecture was clean and simple, and Financial Times’ employees were able to execute queries on top of it to analyse user’s interests. But then a number of events happened. Internet revolution. The internet took off, and day after day the number of readers visiting ft.com rather than reading the print newspaper increased. Mobile innovation. Mobile devices started being part of people’s lives. Having a smartphone moved from a luxury to an expectation, and this allowed the Financial Times to release mobile applications for each of the most popular operating systems. This became another stream of users who could benefit from reading articles while they were travelling to work, resting at home or being outside in nature without access to their laptops. Generation 2: 2014–2016 The arrival of our Extract, Transform, Load (ETL) Framework The second generation of our platform faced two new challenges: firstly, the need to allow our stakeholders to analyse data at scale, asking new types of questions; and secondly, an increasing volume of data. In order to achieve these goals, we built our own ETL Framework in 2014. This allowed our teams to set up new jobs and models in an automated and scalable way and included features such as: Scheduling. Automating running SQL queries multiple times per day, synchronising the outputs with other teams and last but not least focusing more on the business cases rather than on the implementation details. Python interface. Providing the ability to run Python code in addition to the SQL queries, allowing the stakeholders to run even more complex data models. Configuration over implementation. One of the reasons for choosing to introduce an ETL Framework was the ability to produce jobs in XML file format, which enabled even more business capabilities at that time. The release of the ETL Framework had a huge positive impact but could not on its own resolve all issues coming with the increased amount of data and number of consumers. In fact, adding this new component actually created more issues from a performance point of view, as the number of consumers of the Data Platform increased, now including the Business Intelligence (BI) Team, Data Science Team, and others. The SQL Server instance started to become a bottleneck for the Data Platform, hence for all the stakeholders too. 
It was time for a change and we were trying to find the best solution for this particular issue. As the Financial Times was already using some services provided by Amazon Web Services (AWS), we started evaluating Amazon Redshift as an option for a fast, simple and cost-effective Data Warehouse for storing the increasing amount of data. Amazon Redshift is designed for Online Analytical Processing (OLAP) in the cloud which was exactly what we were looking for. Using this approach we were able to optimise query performance a lot without any additional effort from our team to support the new storage service. Generation 3: 2016–2018 The beginning of Big Data at Financial Times Having Amazon Redshift as a Data Warehouse solution and an ETL Framework as a tool for deploying extract, transform, load jobs, all the FT teams were seeing the benefit of having a Data Platform. However, when working for a big company leading the market, such as Financial Times in business news distribution, we cannot be satisfied with our existing achievements. That’s why we started to think how we can improve this architecture even more. Our next goal was to reduce data latency. We were ingesting data once per day, so latency was up to 24 hours. Reducing latency would mean the FT could respond more quickly to trends in the data. In order to reduce the latency, we started working on a new approach — named Next Generation Data Analytics (NGDA) — in 2015 and in early 2016 it was adopted by all teams in Financial Times. First, we developed our own tracking library, responsible for sending every interaction of our readers to the Data Platform. The existing architecture expected a list of CSV files that would have been transferred once per day by jobs run by the ETL Framework, so sending events one by one meant that we needed to change the existing architecture to support the new event-driven approach. Then, we created an API service responsible for ingesting readers’ interactions. However, we still needed a way to transfer this data to the Data Warehouse with the lowest possible latency as well as exposing this data to multiple consuming downstream systems. As we were migrating all services to the cloud, and more specifically to AWS, we looked at the managed services provided by Amazon that could fulfil our event processing needs. After analysing the alternatives, we redesigned our system to send all raw events from ft.com to the Simple Notification Service (SNS). Using this approach, it was possible for many teams in the organisation to subscribe to the SNS topic and unlock new business cases relying on the real time data. Still, having this raw data in SNS was not enough — we also needed to get the data into the Data Warehouse to support all the existing workflows. We decided to use a Simple Queue Service (SQS) queue as it allowed us to persist all events in a queue immediately when they arrived in the system. But before moving the data to our Data Warehouse, we had one more requirement from the business — to enrich the raw events with additional data provided by internal services, external services or by simple in-memory transformations. In order to satisfy these needs with minimal latency, we created a NodeJS service responsible for processing all the events in a loop asynchronously, making the enrichment step possible at scale. Once an event had been fully enriched, the data was sent immediately to the only managed event store provided by AWS at that time — Kinesis. 
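As a rough sketch of what this hand-off can look like (the FT's enrichment service was written in NodeJS; the Python below, the stream name, the event fields and the partition key are illustrative assumptions rather than the production code), publishing an enriched event to Kinesis with boto3 might be as simple as:

import json
import boto3

# Hypothetical stream name; the article does not document the real naming.
STREAM_NAME = "enriched-reader-events"

kinesis = boto3.client("kinesis")

def publish_enriched_event(event: dict) -> None:
    """Send one enriched reader-interaction event to the Kinesis stream."""
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(event).encode("utf-8"),
        # Partitioning by user keeps a single reader's events ordered within a shard.
        PartitionKey=event.get("user_id", "anonymous"),
    )

publish_enriched_event({
    "user_id": "12345",
    "event_type": "page_view",
    "enriched_with": ["subscription_tier", "referrer_category"],
})

From that point on, downstream consumers such as Kinesis Firehose can read the stream without the producer needing to know anything about the storage that follows.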
Using this architecture, we were able to persist our enriched events in a stream with milliseconds latency, which was amazing news for our stakeholders. Once we had the data in a Kinesis Stream, we used another AWS managed service — Kinesis Firehose — to consume the enriched events stream and output them as CSV files into a S3 bucket based on one of two main conditions — a predefined time period having passed (which happened rarely) or the file size reaching 100mb. This new event-driven approach produced CSV files with enriched events in a couple of minutes depending on the time of the day, hence the latency in our data lake was reduced to 1–5 minutes. But there was one more important requirement from the business teams. They requested clean data in the Data Warehouse. Using the Kinesis Firehose approach, we couldn’t guarantee that we only had one instance of an event because: We could receive duplicate events from our client side applications. The Kinesis Firehose itself could duplicate data when a Firehose job retried on failure. In order to deduplicate all events, we created another Amazon Redshift cluster responsible for ingesting and deduplicating each new CSV file. This involved a tradeoff: implementing a process which guarantees uniqueness increased the latency for data to get into the Data Warehouse to approximately 4 hours, but enabled our business teams to generate insights much more easily. Generation 4: 2019 Rebuild the platform to allow our team to focus on adding business value Generation 3 of the platform was complicated to run. Our team spent most of the day supporting the large number of independent services, with engineering costs increasing, and far less time to do interesting, impactful work. We wanted to take advantage of new technologies to reduce this complexity, but also to provide far more exciting capabilities to our stakeholders: we wanted to turn the Data Platform into a PaaS (Platform as a Service). Our initial criteria were the platform should offer: Self service — Enabling stakeholders to independently develop and release new features. Support for multiple internal consumers — with different teams having different levels of access. Security isolation — so that teams could only access their own data and jobs. Code reuse — to avoid duplication for common functionality. Building a multi-tenant, self service platform is quite challenging because it requires every service to support both of these things. Still, putting effort into implementing this approach would be extremely beneficial for the future, with the key benefits being: Stakeholder teams can deliver value without having to wait to coordinate with platform teams — this reduces costs, increases velocity, and puts them in charge of their own destiny Platform teams can focus on building new functionality for the platform — rather than spending their time unblocking stakeholder teams The way we chose to deliver this decoupling was through a focus on configuration over implementation, with stakeholder teams able to set up their own management rules based on their internal team structure, roles and permissions, using an admin web interface. Kubernetes A software system is like a house. 
You need to build it from the foundations rather than from the roof. In engineering, the foundation is the infrastructure. Without a stable infrastructure, having a production ready and stable system is impossible. That’s why we have started with the foundation, discussing what would be the best approach for the short and long term future. Our existing Data Platform has been deployed to AWS ECS. While AWS ECS is a really great container orchestrator, we decided to switch to Kubernetes because on EKS, we get baked in support for lots of things we need for supporting multiple tenants, such as security isolation between the tenants, hardware limitations per tenant, etc. In addition to that there are many Kubernetes Operators coming out of the box for us, such as spark-k8s-operator, prometheus-operator and many more. AWS has been offering a managed Kubernetes cluster (EKS) for a while and it was the obvious choice for the foundations of the Data Platform for the short and long term future. Aiming to have a self service multi-tenant Data Platform, we had to apply several requirements on top of each service and the Kubernetes cluster itself. System namespace — Separate all system components in an isolated Kubernetes namespace responsible for the management of all the services. Namespace per team — Group all team resources in a Kubernetes namespace in order to automatically apply team-based configurations and constraints for each of them. Security isolation per namespace — Restrict cross namespace access in the Kubernetes cluster to prevent unexpected interactions between different team resources. Resource quota per namespace — Prevent affecting all teams when one of them reaches hardware limits, while measuring efficiency by calculating the ratio between spent money and delivered business value per team. Batch processing The ETL Framework was quite stable and had been running for years, but to fully benefit from our adoption of cloud-native technologies, we needed a new one that supported: Cloud deployment. Horizontal scaling. As the number of workflows and the amounts of data increased, we needed to be able to scale up with minimal effort. Multi-tenancy. Because the whole platform needed to support this. Deployment to Kubernetes. Again, for consistency across the whole platform. Since we built our ETL framework, the expectations from ETL have moved on. We wanted the ability to support: Language agnostic jobs. In order to get the most out of the diverse skill set in all teams using the Data Platform. Workflow concept. The need to define a sequence of jobs depending on each other in a workflow is another key business requirement to make data-driven decisions on a daily basis. Code reusability. 
Since the functionality behind part of the steps in the workflows are repetitive, they are a good candidate for code reuse. Automated distributed backfilling for ETL jobs. Since this process occurs quite often for our new use cases and automation will increase business velocity. Monitoring. We need good monitoring, in order to prevent making data driven decisions based on low quality, high latency or even missing data. Extendability. The ability to extend the batch processing service with new capabilities based on feedback and requirements provided by the stakeholders will make this service flexible enough for the foreseeable future. The other big change is that fully-featured ETL frameworks now exist, rather than having to be built from scratch. Having all these requirements in mind, we evaluated different options on the market such as Luigi, Oozie, Azkaban, AWS Steps, Cadence and Apache Airflow. The best fit for our requirements was Apache Airflow. Great though it is, it still has some limitations — such as a single scheduler and lack of native multi-tenancy support. While the first one is not a huge concern for us at the moment based on the benchmarks, our estimated load and the expected release of this feature in Apache Airflow 2.0, the second one would impact our whole architecture, and so we decided to build custom multi-tenant support on top of Apache Airflow. We considered using an Apache Airflow managed service — there are multiple providers — but in the end decided to continue with a self managed solution based on some of the requirements including multi-tenancy, language agnostic jobs and monitoring. All of them could not be achieved with a managed solution, leading to the extensibility requirement and its importance for us. Once Apache Airflow had been integrated into our platform, we started by releasing new workflows on top of it, to ensure its capabilities. When we knew it met all criteria, the next step was obvious and currently we are in the process of migrating all of our existing ETL jobs to Apache Airflow. In addition to that, we have released it as a self service product to all stakeholders in the company and we already have consumers such as the BI Team, the Data Science team, and others. Generation 5: 2020 It’s time for real time data Generation 4 was a big step forward. However, there were still some targets for improvement. Real time data Our latency was still around 4 hours for significant parts of our data. Most of these 4 hours of latency happened because of the deduplication procedure — which is quite important for our stakeholders and their needs. For example, the FT can not make any business development decisions based on low quality data. That’s why we must ensure that our Data Warehouse persists clean data for these use cases. However, as the product, business and technologies evolve, new use cases have emerged. They could provide impact by using real time data even with a small percentage of low quality data. A great example for that is ordering a user’s feed in ft.com and the mobile application based on the reader’s interests. 
Having a couple of duplicated events would not be crucial for this use case as the user experience would always be much better than showing the same content to all users without having their interests in mind. We already had a stable stream processing architecture but it was quite complicated. We started looking into optimising it by migrating from SNS, SQS, and Kinesis to a new architecture using Apache Kafka as an event store. Having a managed service for the event store would be our preference and we decided to give Amazon MSK a try as it seemed to have been stable for quite some time. Ingesting data in Apache Kafka topics was a great starting point to provide real time data to the business. However, the stakeholders still didn’t have access to the data in the Apache Kafka cluster. So, our next goal was to create a stream processing platform that could allow them to deploy models on top of the real time data. We needed something that matched the rest of our architecture — supporting multi-tenancy, self service, multiple languages and deployable to Kubernetes. Having those requirements in mind, Apache Spark seemed to fit very well for us, being the most used analytics engine and having one of the biggest open-source communities worldwide. In order to deploy Apache Spark streaming jobs to Kubernetes, we decided to use the spark-on-k8s-operator. Moreover, we have built a section in our Data UI which allows our stakeholders to deploy their Apache Spark stream processing jobs to production by filling a simple form containing information for the job such as the Docker image and tag, CPU and memory limitations, credentials for the data sources used in the job, etc. Data contract Another area where we needed to make optimisations was moving the data validation to the earliest possible step in the pipeline. We had services validating the data coming into the Data Platform, however these validations were executed at different steps of the pipeline. This led to issues as the pipeline sometimes broke because of incoming incorrect data. That’s why we wanted to improve this area by providing the following features: A Data contract for the event streams in the pipeline Moving the validation step to the earliest possible stage Adding compression to reduce event size Having all these needs in mind, we found a great way to achieve these requirements by using Apache Avro. It allows defining a data contract per topic in Apache Kafka, hence ensuring the data quality in the cluster. This approach also resolves another issue — the validation step can be moved to be the first step in the pipeline. Using an Apache Spark streaming job with Apache Avro schema prevents us from having broken data in the pipeline by moving all incorrect events to other Kafka topics used as Dead Letter Queues. Another great feature coming with Apache Avro is serialisation and deserialisation, which makes it possible to provide compression over the data persisted in the Apache Kafka event store. Data Lake Migrating from CSV to parquet files in our data lake storage has been a great initial choice for most of our needs. However, we still lacked some features on top of it that could make our life much easier, including ACID transactions, schema enforcements and updating events in parquet files. After analysing all existing alternatives on the market including Hudi, Iceberg and Delta Lake, we decided to start using Delta Lake based on its Apache Spark 3.x support. 
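To give a feel for how these pieces can fit together, here is a minimal sketch (not the FT's production code) of a Spark Structured Streaming job that reads events from a Kafka topic and appends them to a Delta Lake table. The broker address, topic name and paths are placeholder assumptions, the Kafka and Delta connectors are assumed to be on the classpath, and the real jobs also apply the Avro validation and dead-letter routing described above before anything reaches Delta Lake:

from pyspark.sql import SparkSession

# These two settings are the documented way to enable Delta Lake on Spark 3.x.
spark = (
    SparkSession.builder
    .appName("kafka-to-delta-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Placeholder broker and topic names; the event payloads are kept as raw strings here.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "enriched-events")
    .load()
    .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")
)

# Append the events to a Delta table; the checkpoint directory lets the job
# resume exactly where it left off after a restart.
query = (
    events.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/checkpoints/enriched-events")
    .start("/delta/enriched-events")
)

query.awaitTermination()

Keeping the storage in Delta Lake rather than plain parquet is what unlocks the properties discussed next.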
It provides all of the main requirements and fits perfectly in our architecture. Efficiency. We decoupled the computation process from the storage allowing our architecture to scale more efficiently. Low latency, high quality data. Using the upsert and schema enforcements features provided by Delta Lake, we can continuously deliver low latency and high quality data to all stakeholders in Financial Times. Multiple access points. Persisting all incoming data into Delta Lake allows the stakeholders to query low latency data through multiple systems including Apache Spark and Presto. Time travel. Delta Lake allows reprocessing data from a particular time in the past which automates back-populating data, in addition to allowing analysis between particular date intervals for different use cases such as reports or training machine learning models. Virtualisation layer At the Financial Times we have different kinds of storage used by teams in the company, including Amazon Redshift, Google BigQuery, Amazon S3, Apache Kafka, VoltDB, etc. However, stakeholders often need to analyse data split across more than one data store in order to make data-driven decisions. In order to satisfy this need, they use Apache Airflow to move data between different data stores. However, this approach is far from optimal. Using a batch processing approach adds additional latency to the data and, in some cases, making decisions with low latency data is crucial for a business use case. Moreover, deploying a batch processing job requires more technical background which may limit some of the stakeholders. Having these details in mind, we had some clear requirements about what the stakeholders would expect in order to deliver even more value to our readers — support for: Ad hoc queries over any storage ANSI SQL — syntax they often know well Being able to join data between different data storages And we wanted the ability to deploy to Kubernetes, to fit into our platform architecture. After analysing different options on the market, we decided to start with Presto as it allows companies to analyse petabytes of data at scale while being able to join data from many data sources, including all of the data sources used at the Financial Times. Plan for the future At the Financial Times we are never satisfied with our achievements and this is one of the reasons why this company has been on the top of this business for more than 130 years. That’s why we already have plans on how to evolve this architecture even more. Ingestion platform. We ingest data by using the three components — batch processing jobs managed by Apache Airflow, Apache Spark streaming jobs consuming data from Apache Kafka streams and REST services expecting incoming data to the Data Platform. We aim to replace the existing high latency ingestion services with Change Data Capture (CDC) which will enable ingesting new data immediately when it arrives in any data sources, hence the business will be able to deliver an even better experience for our readers. 
Real time data for everyone. One of the main features that we have in mind is enabling all people in Financial Times to have access to the data, without the need to have particular technical skills. In order to do that, we plan to enhance the Data UI and the stream processing platform to allow drag and drop for building streaming jobs. This would be a massive improvement because it will enable employees without a technical background to consume, transform, produce and analyse data. If working on challenging Big Data tasks is interesting to you, consider applying for a role in our Data team in the office in Sofia, Bulgaria. We are waiting for you!
https://medium.com/ft-product-technology/financial-times-data-platform-from-zero-to-hero-143156bffb1d
['Mihail Petkov']
2020-12-02 09:59:40.123000+00:00
['Financial Times', 'Analytics', 'Engineering', 'Big Data', 'Data']
Title Financial Times Data Platform zero heroContent Financial Times Data Platform zero hero indepth walkthrough evolution Data Platform Financial Times one world’s leading business news organisation around 130 year famous quality journalism stay top long able adapt world change last decade meant able take advantage opportunity technology provides FT undergoes digital transformation article take indepth look behind scene one part transformation creation evolution Financial Times’ Data platform Data Platform provides information reader interact FT allows u make decision continue deliver thing reader want need Generation 1 2008–2014 Early day first Data Platform focussed providing recommendation reader based read already time majority reader still read FT print single store 24 hour latency sufficient architecture clean simple Financial Times’ employee able execute query top analyse user’s interest number event happened Internet revolution internet took day day number reader visiting ftcom rather reading print newspaper increased Mobile innovation Mobile device started part people’s life smartphone moved luxury expectation allowed Financial Times release mobile application popular operating system became another stream user could benefit reading article travelling work resting home outside nature without access laptop Generation 2 2014–2016 arrival Extract Transform Load ETL Framework second generation platform faced two new challenge firstly need allow stakeholder analyse data scale asking new type question secondly increasing volume data order achieve goal built ETL Framework 2014 allowed team set new job model automated scalable way included feature Scheduling Automating running SQL query multiple time per day synchronising output team last least focusing business case rather implementation detail Python interface Providing ability run Python code addition SQL query allowing stakeholder run even complex data model Configuration implementation One reason choosing introduce ETL Framework ability produce job XML file format enabled even business capability time release ETL Framework huge positive impact could resolve issue coming increased amount data number consumer fact adding new component actually created issue performance point view number consumer Data Platform increased including Business Intelligence BI Team Data Science Team others SQL Server instance started become bottleneck Data Platform hence stakeholder time change trying find best solution particular issue Financial Times already using service provided Amazon Web Services AWS started evaluating Amazon Redshift option fast simple costeffective Data Warehouse storing increasing amount data Amazon Redshift designed Online Analytical Processing OLAP cloud exactly looking Using approach able optimise query performance lot without additional effort team support new storage service Generation 3 2016–2018 beginning Big Data Financial Times Amazon Redshift Data Warehouse solution ETL Framework tool deploying extract transform load job FT team seeing benefit Data Platform However working big company leading market Financial Times business news distribution cannot satisfied existing achievement That’s started think improve architecture even next goal reduce data latency ingesting data per day latency 24 hour Reducing latency would mean FT could respond quickly trend data order reduce latency started working new approach — named Next Generation Data Analytics NGDA — 2015 early 2016 adopted team Financial Times First developed tracking 
library responsible sending every interaction reader Data Platform existing architecture expected list CSV file would transferred per day job run ETL Framework sending event one one meant needed change existing architecture support new eventdriven approach created API service responsible ingesting readers’ interaction However still needed way transfer data Data Warehouse lowest possible latency well exposing data multiple consuming downstream system migrating service cloud specifically AWS looked managed service provided Amazon could fulfil event processing need analysing alternative redesigned system send raw event ftcom Simple Notification Service SNS Using approach possible many team organisation subscribe SNS topic unlock new business case relying real time data Still raw data SNS enough — also needed get data Data Warehouse support existing workflow decided use Simple Queue Service SQS queue allowed u persist event queue immediately arrived system moving data Data Warehouse one requirement business — enrich raw event additional data provided internal service external service simple inmemory transformation order satisfy need minimal latency created NodeJS service responsible processing event loop asynchronously making enrichment step possible scale event fully enriched data sent immediately managed event store provided AWS time — Kinesis Using architecture able persist enriched event stream millisecond latency amazing news stakeholder data Kinesis Stream used another AWS managed service — Kinesis Firehose — consume enriched event stream output CSV file S3 bucket based one two main condition — predefined time period passed happened rarely file size reaching 100mb new eventdriven approach produced CSV file enriched event couple minute depending time day hence latency data lake reduced 1–5 minute one important requirement business team requested clean data Data Warehouse Using Kinesis Firehose approach couldn’t guarantee one instance event could receive duplicate event client side application Kinesis Firehose could duplicate data Firehose job retried failure order deduplicate event created another Amazon Redshift cluster responsible ingesting deduplicating new CSV file involved tradeoff implementing process guarantee uniqueness increased latency data get Data Warehouse approximately 4 hour enabled business team generate insight much easily Generation 4 2019 Rebuild platform allow team focus adding business value Generation 3 platform complicated run team spent day supporting large number independent service engineering cost increasing far le time interesting impactful work wanted take advantage new technology reduce complexity also provide far exciting capability stakeholder wanted turn Data Platform PaaS Platform Service initial criterion platform offer Self service — Enabling stakeholder independently develop release new feature Enabling stakeholder independently develop release new feature Support multiple internal consumer — different team different level access different team different level access Security isolation — team could access data job — team could access data job Code reuse — avoid duplication common functionality Building multitenant self service platform quite challenging requires every service support thing Still putting effort implementing approach would extremely beneficial future key benefit Stakeholder team deliver value without wait coordinate platform team — reduces cost increase velocity put charge destiny reduces cost increase velocity put charge destiny 
Platform team focus building new functionality platform — rather spending time unblocking stakeholder team way chose deliver decoupling focus configuration implementation stakeholder team able set management rule based internal team structure role permission using admin web interface Kubernetes software system like house need build foundation rather roof engineering foundation infrastructure Without stable infrastructure production ready stable system impossible That’s started foundation discussing would best approach short long term future existing Data Platform deployed AWS ECS AWS ECS really great container orchestrator decided switch Kubernetes EKS get baked support lot thing need supporting multiple tenant security isolation tenant hardware limitation per tenant etc addition many Kubernetes Operators coming box u sparkk8soperator prometheusoperator many AWS offering managed Kubernetes cluster EKS obvious choice foundation Data Platform short long term future Aiming self service multitenant Data Platform apply several requirement top service Kubernetes cluster System namespace — Separate system component isolated Kubernetes namespace responsible management service — Separate system component isolated Kubernetes namespace responsible management service Namespace per team — Group team resource Kubernetes namespace order automatically apply teambased configuration constraint — Group team resource Kubernetes namespace order automatically apply teambased configuration constraint Security isolation per namespace — Restrict cross namespace access Kubernetes cluster prevent unexpected interaction different team resource — Restrict cross namespace access Kubernetes cluster prevent unexpected interaction different team resource Resource quota per namespace — Prevent affecting team one reach hardware limit measuring efficiency calculating ratio spent money delivered business value per team Batch processing ETL Framework quite stable running year fully benefit adoption cloudnative technology needed new one supported Cloud deployment Horizontal scaling number workflow amount data increased needed able scale minimal effort number workflow amount data increased needed able scale minimal effort Multitenancy whole platform needed support whole platform needed support Deployment Kubernetes consistency across whole platform Since built ETL framework expectation ETL moved wanted ability support Language agnostic job order get diverse skill set team using Data Platform order get diverse skill set team using Data Platform Workflow concept need define sequence job depending workflow another key business requirement make datadriven decision daily basis need define sequence job depending workflow another key business requirement make datadriven decision daily basis Code reusability Since functionality behind part step workflow repetitive good candidate code reuse Since functionality behind part step workflow repetitive good candidate code reuse Automated distributed backfilling ETL job Since process occurs quite often new use case automation increase business velocity Since process occurs quite often new use case automation increase business velocity Monitoring need good monitoring order prevent making data driven decision based low quality high latency even missing data need good monitoring order prevent making data driven decision based low quality high latency even missing data Extendability ability extend batch processing service new capability based feedback requirement provided stakeholder make service 
flexible enough foreseeable future big change fullyfeatured ETL framework exist rather built scratch requirement mind evaluated different option market Luigi Oozie Azkaban AWS Steps Cadence Apache Airflow best fit requirement Apache Airflow Great though still limitation — single scheduler lack native multitenancy support first one huge concern u moment based benchmark estimated load expected release feature Apache Airflow 20 second one would impact whole architecture decided build custom multitenant support top Apache Airflow considered using Apache Airflow managed service — multiple provider — end decided continue self managed solution based requirement including multitenancy language agnostic job monitoring could achieved managed solution leading extensibility requirement importance u Apache Airflow integrated platform started releasing new workflow top ensure capability knew met criterion next step obvious currently process migrating existing ETL job Apache Airflow addition released self service product stakeholder company already consumer BI Team Data Science team others Generation 5 2020 It’s time real time data Generation 4 big step forward However still target improvement Real time data latency still around 4 hour significant part data 4 hour latency happened deduplication procedure — quite important stakeholder need example FT make business development decision based low quality data That’s must ensure Data Warehouse persists clean data use case However product business technology evolve new use case emerged could provide impact using real time data even small percentage low quality data great example ordering user’s feed ftcom mobile application based reader’s interest couple duplicated event would crucial use case user experience would always much better showing content user without interest mind already stable stream processing architecture quite complicated started looking optimising migrating SNS SQS Kinesis new architecture using Apache Kafka event store managed service event store would preference decided give Amazon MSK try seemed stable quite time Ingesting data Apache Kafka topic great starting point provide real time data business However stakeholder still didn’t access data Apache Kafka cluster next goal create stream processing platform could allow deploy model top real time data needed something matched rest architecture — supporting multitenancy self service multiple language deployable Kubernetes requirement mind Apache Spark seemed fit well u used analytics engine one biggest opensource community worldwide order deploy Apache Spark streaming job Kubernetes decided use sparkonk8soperator Moreover built section Data UI allows stakeholder deploy Apache Spark stream processing job production filling simple form containing information job Docker image tag CPU memory limitation credential data source used job etc Data contract Another area needed make optimisation moving data validation earliest possible step pipeline service validating data coming Data Platform however validation executed different step pipeline led issue pipeline sometimes broken incoming incorrect data That’s wanted improve area providing following feature Data contract event stream pipeline Moving validation step earliest possible stage Adding compression reduce event size need mind found great way achieve requirement using Apache Avro allows defining data contract per topic Apache Kafka hence ensuring data quality cluster approach also resolve another issue — validation step moved first step 
pipeline Using Apache Spark streaming job Apache Avro schema prevents u broken data pipeline moving incorrect event Kafka topic used Dead Letter Queues Another great feature coming Apache Avro serialisation deserialisation make possible provide compression data persisted Apache Kafka event store Data Lake Migrating CSV parquet file data lake storage great initial choice need However still lacked feature top could make life much easier including ACID transaction schema enforcement updating event parquet file analysing existing alternative market including Hudi Iceberg Delta Lake decided start using Delta Lake based Apache Spark 3x support provides main requirement fit perfectly architecture Efficiency decoupled computation process storage allowing architecture scale efficiently Low latency high quality data Using upsert schema enforcement feature provided Delta Lake continuously deliver low latency high quality data stakeholder Financial Times Multiple access point Persisting incoming data Delta Lake allows stakeholder query low latency data multiple system including Apache Spark Presto Time travel Delta Lake allows reprocessing data particular time past automates backpopulating data addition allowing analysis particular date interval different use case report training machine learning model Virtualisation layer Financial Times different kind storage used team company including Amazon Redshift Google BigQuery Amazon S3 Apache Kafka VoltDB etc However stakeholder often need analyse data split across one data store order make datadriven decision order satisfy need use Apache Airflow move data different data store However approach far optimal Using batch processing approach add additional latency data case making decision low latency data crucial business use case Moreover deploying batch processing job requires technical background may limit stakeholder detail mind clear requirement stakeholder would expect order deliver even value reader — support Ad hoc query storage ANSI SQL — syntax often know well able join data different data storage wanted ability deploy Kubernetes fit platform architecture analysing different option market decided start Presto allows company analyse petabyte data scale able join data many data source including data source used Financial Times Plan future Financial Times never satisfied achievement one reason company top business 130 year That’s already plan evolve architecture even Ingestion platform ingest data using three component — batch processing job managed Apache Airflow Apache Spark streaming job consuming data Apache Kafka stream REST service expecting incoming data Data Platform aim replace existing high latency ingestion service Change Data Capture CDC enable ingesting new data immediately arrives data source hence business able deliver even better experience reader ingest data using three component — batch processing job managed Apache Airflow Apache Spark streaming job consuming data Apache Kafka stream REST service expecting incoming data Data Platform aim replace existing high latency ingestion service Change Data Capture CDC enable ingesting new data immediately arrives data source hence business able deliver even better experience reader Real time data everyone One main feature mind enabling people Financial Times access data without need particular technical skill order plan enhance Data UI stream processing platform allow drag drop building streaming job would massive improvement enable employee without technical background consume transform 
produce analyse data working challenging Big Data task interesting consider applying role Data team office Sofia Bulgaria waiting youTags Financial Times Analytics Engineering Big Data Data
3,793
[DS0001] — Linear Regression and Confidence Interval a Hands-On Tutorial
Motivation

This tutorial will guide you through the creation of a linear regression model and a confidence interval from your predictor using some commonly used data science libraries such as Sklearn and Pandas. In our example case, the linear regression was used to determine how many charging cycles a battery can hold before it dies. Don’t worry if you do not understand anything about batteries, all the data will be available for download, and the only knowledge required here is about the python language.

Import what we need

In order to use some already implemented tools, we need to import all the libraries and components. The next block of code imports pandas, NumPy, and some scikit-learn components that will allow us to read our data, create the linear regression model and our confidence interval.

#!/usr/bin/env python3
from sklearn.linear_model import LinearRegression
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats.stats import pearsonr
from scipy import stats

Loading the data

In this tutorial, I will be using some data from my research about battery state of life estimation. Don’t worry about the meaning of the data right now, it will not affect our results. Download the .csv file from here and paste it into the same folder as your main python file. After it, you can just load the file on pandas and read the column named “voltage_integral” from the file, as I do in the code below:

my_pandas_file = pd.read_csv('cs_24.csv')
y_data = my_pandas_file.get('voltage_integral')

To create a linear regression, we will need another axis; in this case, our x-axis will be the index of our y_data vector. It’s good to notice that our model will require a 2d array, so let’s arrange it in the desired form using the reshape method.

x_data = np.arange(0, len(y_data), 1)
x_data_composed = x_data.reshape(-1, 1)

Creating our model

When working with linear regression, it’s usual to check whether there is a strong correlation between the variables. To see it, you must calculate the Pearson coefficient and check it. If the correlation is near 1, it means that the variables have a strong positive correlation. If it’s near -1, it means that the variables have a strong negative correlation, and if it’s near 0, it means that the variables do not have a correlation and the linear regression will not help. Python does provide a tool to easily calculate the correlation:

correlation = pearsonr(x_data, y_data)
>> (-0.9057040954006549, 0.0)

The value of -0.91 tells us that our data has a strong negative correlation, and as you can see on the graphic below, it means that when our x value increases, our y value decreases. To create our linear model, we just need to use our imported component and fit the model, using the data imported from the file. After it, just to see how our model looks when compared to the graphics, we will plot the predicted vector from our source data:

lin_regression = LinearRegression().fit(x_data_composed, y_data)
model_line = lin_regression.predict(x_data_composed)
plt.plot(y_data)
plt.plot(model_line)
plt.xlabel('Ciclos')
plt.ylabel('volts x seconds')
plt.title('Voltage integral CCCT charge during battery life')
plt.ylim(0, 8000)

After running the code, the graphic below will show up on your screen:

Our model is already done and we already have our graphics. Now it’s time to add more confidence to our prediction model by putting a confidence interval on the graphic. 
Calculate and plot our confidence interval

A confidence interval of 95% is an interval of values in which our prediction has a 95% chance of falling. It is calculated based on the standard deviation and a gaussian curve. We will create a function to calculate our confidence interval for a single sample and then run it for all predictions.

def get_prediction_interval(prediction, y_test, test_predictions, pi=.95):
    '''
    Get a prediction interval for a linear regression.

    INPUTS:
    - Single prediction
    - y_test
    - All test set predictions
    - Prediction interval threshold (default = .95)
    OUTPUT:
    - Prediction interval for single prediction
    '''
    #get standard deviation of y_test
    sum_errs = np.sum((y_test - test_predictions)**2)
    stdev = np.sqrt(1 / (len(y_test) - 2) * sum_errs)
    #get interval from standard deviation
    one_minus_pi = 1 - pi
    ppf_lookup = 1 - (one_minus_pi / 2)
    z_score = stats.norm.ppf(ppf_lookup)
    interval = z_score * stdev
    #generate prediction interval lower and upper bound
    lower, upper = prediction - interval, prediction + interval
    return lower, prediction, upper

## Plot and save confidence interval of linear regression - 95%
lower_vet = []
upper_vet = []
for i in model_line:
    lower, prediction, upper = get_prediction_interval(i, y_data, model_line)
    lower_vet.append(lower)
    upper_vet.append(upper)

plt.fill_between(np.arange(0, len(y_data), 1), upper_vet, lower_vet, color='b', label='Confidence Interval')
plt.plot(np.arange(0, len(y_data), 1), y_data, color='orange', label='Real data')
plt.plot(model_line, 'k', label='Linear regression')
plt.xlabel('Ciclos')
plt.ylabel('Volts x seconds')
plt.title('95% confidence interval')
plt.legend()
plt.ylim(-1000, 8000)
plt.show()

After running the code, the result will show up like this:

So, this is how to create a linear regression and calculate the confidence interval from it. The data .csv file and the full code can be found here. If you like this story and would like to see more content like this in the future, please follow me! Thanks for your time, folks!
https://medium.com/swlh/ds001-linear-regression-and-confidence-interval-a-hands-on-tutorial-760658632d99
['Iago Henrique']
2020-11-28 19:05:56.970000+00:00
['AI', 'Data Science', 'Data Visualization', 'Python', 'Linear Regression']
Title DS0001 — Linear Regression Confidence Interval HandsOn TutorialContent Motivation tutorial guide creation linear regression model confidence interval predictor using data science commonly used library Sklearn Pandas example case linear regression used determine many charging cycle battery hold die Don’t worry understand anything battery data available download knowledge required python language Import need order use already implemented tool need import library component next block code import panda NumPy scikit learn component allow u read data create linear regression model confidence interval usrbinenv python3 sklearnlinearmodel import LinearRegression import numpy np import panda pd import matplotlibpyplot plt scipystatsstats import pearsonr scipy import stats Loading data tutorial using data research battery state life estimation Don’t worry meaning data right affect result Download csv file paste folder main python file load file panda read column named “voltageintegral” file code mypandasfile pdreadcsvcs24csv ydata mypandasfilegetvoltageintegral create linear regression need another axis case xaxis index ydata vector case It’s good notice model require 2d array let’s arrange desired form using reshape method xdata nparange0lenydata 1 xdatacomposed xdatareshape11 Creating model work linear regression It’s usual see strong correlation variable see must calculate Poison coefficient check correlation near 1 mean variable positive strong correlation it’s neat 1 mean variable negative strong correlation it’s near 0 mean variable correlation linear regression help Python provide tool easily calculate correlation correlation pearsonrxdata ydata 09057040954006549 00 value 091 tell u data strong negative correlation see graphic mean x value increase value decrease create linear model need use imported component fit model using data imported file see model compared graphic plot predicted vector source data linregression LinearRegressionfitxcs24reshape1 1 cs24integraldata modelline linregressionpredictxdatacomposed pltplotydata pltplotmodelline pltxlabelCilos pltylabelvolts x second plttitleVoltage integral CCCT charge batery life pltylim08000 running code graphic show screen model already done already graphic it’s time add confidence prediction model putting confidence interval graphic Calculate plot confidence interval confidence interval 95 interval value prediction 95 chance calculated based standard deviation gaussian curve create function calculate confidence interval single sample run prediction def getpredictionintervalprediction ytest testpredictions pi95 Get prediction interval linear regression INPUTS Single prediction ytest test set prediction Prediction interval threshold default 95 OUTPUT Prediction interval single prediction get standard deviation ytest sumerrs npsumytest testpredictions2 stdev npsqrt1 lenytest 2 sumerrs get interval standard deviation oneminuspi 1 pi ppflookup 1 oneminuspi 2 zscore statsnormppfppflookup interval zscore stdev generate prediction interval lower upper bound cs24 lower upper prediction interval prediction interval return lower prediction upper Plot save confidence interval linear regression 95 cs24 lowervet uppervet modelline lower prediction upper getpredictionintervali ydata modelline lowervetappendlower uppervetappendupper pltfillbetweennparange0lenydata1uppervet lowervet colorblabelConfidence Interval pltplotnparange0lenydata1ydatacolororangelabelReal data pltplotmodellineklabelLinear regression pltxlabelCiclos pltylabelVolts x second 
plttitle95 confidence interval pltlegend pltylim10008000 pltshow running code result show like create linear regression calculate confidence interval data csv file full code found like story would like see content like future please follow Thanks time folksTags AI Data Science Data Visualization Python Linear Regression
3,794
Processing Big Data with a Micro-Service-Inspired Data Pipeline
You aren’t truly ready for a career in Big Data until you have everyone in the room cringing from the endless jargon you are throwing at them. Everyone in tech is always trying to out-impress one another with their impressive grasp of technical jargon. However, tech jargon does exist for a reason: it summarizes complex concepts into a simple narrative, and allows developers to abstract implementation details into design patterns which can be “mix-and-matched” to solve any technical task. With that in mind, let’s take a look at the technical tasks the Data Lab team was facing this year, and how we addressed them with an absurd quantity of geek speak. The Data Lab team at Hootsuite is designed to help the business make data-driven decisions. From an engineering standpoint, this means designing a data pipeline to manage and aggregate all our data from various sources (Product, Salesforce, Localytics, etc.) and make them available in Redshift for analysis by our Analysts. Analyses typically take the form of either a specific query used to answer a specific ad-hoc request, or a more permanent Dashboard designed to monitor key metrics. However, as Hootsuite grew, the Data Lab team became a bottleneck for data requests from stakeholders across the business. This led us to search for a way that would allow various decision makers to dig into our data on their own, without needing SQL knowledge. [Image: comic courtesy of Geek and Poke] Enter Interana. Interana is a real-time, time-indexed, interactive data analytics tool which would allow all of our employees to visualize and explore data themselves. Awesome, right?! Unfortunately, there was one little problem: we didn’t have the infrastructure for real-time data processing. Our pipeline only had support for a series of nightly ETLs, which were run by a cron job. Creating something from scratch is incredibly exciting. Finally, an opportunity to implement a solution using all of the jargon you’d like, without any of the technical debt! We laid out our goals, and chose the solution that best fit our needs. While analyzing the problem, I realized that the qualities we wanted our pipeline to have were the same qualities computer scientists have been striving to achieve for decades: abstraction, modularity, and robustness. What changed were the problems software engineers were facing, and the technologies which have been developed to provide modularity, robustness, and increased abstraction. It makes sense. We wouldn’t be able to create a real-time data pipeline by running our ETLs every second — we needed a different solution, one which addressed these issues: [Image: some of our requirements] Enter micro-services. Micro-services are small applications that perform a single, specific service. They are often used in applications where each request can be delegated to a separate and complete application. What makes them fantastic to work with is that they abstract away the implementation details, and present only an interface comprising their data inputs and outputs. This means that as long as the interface remains the same, any modifications made in a service are guaranteed to be compatible with the system. In fact, one could safely replace one micro-service with another! With all of Hootsuite migrating towards breaking apart our monolith into a set of micro-services, the Data Lab team also wanted a slice of the fun.
Wanting to move away from our monolith-like ETL codebase, we saw an opportunity to implement our real-time data pipeline using the best practices established by our Product brethren. A data pipeline has of course some inherently different requirements than a SaaS product does — so we needed to make a few changes to what a typical micro-service product looks like. Our micro-services: behave more like workstations at an assembly line than independent services — that is, after processing its data, a service does not “respond” to its caller; and have a dependency structure of an acyclic graph — we don’t want data circulating our pipeline forever! With those distinctions out of the way, let’s take a look at how we implemented our new data pipeline, and how it helped us achieve abstraction, modularity, and robustness. Above is an overview of our real-time data pipeline. We have a diverse set of sources for our data — some of them produce data in real time, while others do not. We built a micro-service to support batch-updated data. Each data source then gets put onto a data queue where our cleaner micro-services clean the data. This cleaned data then gets put into a common data format, and passed on to a “unified, cleaned” message queue for our enricher to consume from. This micro-service enriches our data by cross-referencing various fields with our data sets (and other micro-services!), and then uploads it into our data store. It sends a message into another message queue asking to have that data uploaded to our analytical data warehouse. Voila! A complete data pipeline. We were able to create a complete data pipeline which meets the three qualities we sought out at the beginning: abstraction, modularity, and robustness. It is abstract: each service hides its implementation details, and reveals only what it consumes and what it outputs. It is modular: each micro-service can be reused and re-arranged without needing to refactor the entire system it resides in. It is robust: new data sources can be easily added (just clone and update a cleaner/producer micro-service), and if one service fails, the rest of the pipeline can still operate correctly. Beyond those goals, we have also been able to achieve other desirable traits data-people look for. It is distributed: each micro-service is run on a separate box, and may be consuming data from entirely different places. It is scalable: we can always create more instances of each application to consume and process data in parallel to each other, and adding new data sources is easy. After all was said and done, we were able to cut processing times in half, had access to data sources we didn’t before, and have this all done in a system that is easy to understand and change. These tangible benefits were achieved using solutions found within the plethora of jargon being thrown around the data community. I hope that by this part of the post you’ve been numbed to the cringe-inducing effects which non-stop jargon invokes, and begun to see how they are used to describe (perhaps in an all too-colorful way) the tools and techniques we use to build a better way. Also, they’re great for SEO!
;) About the Author Kamil Khan is a Co-op on the Data Lab team at Hootsuite, working as a Software Developer and Data Analyst. He is an undergraduate student at the University of British Columbia, where he is completing a Bachelor of Commerce at the Sauder School of Business, majoring in Business and Computer Science. Want to learn more? Connect with Kamil on LinkedIn.
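To make the assembly-line shape described above more concrete, here is a toy, in-memory sketch of a cleaner stage feeding an enricher stage. It is only an illustration, not Hootsuite's actual code: the queue.Queue objects stand in for real message queues, and every event field in it (source, event, user_id, plan) is invented for the example.

import json
import queue

raw_events = queue.Queue()      # stands in for one per-source message queue
unified_events = queue.Queue()  # stands in for the "unified, cleaned" queue

def cleaner(source_name):
    # Consume raw payloads from one source and emit a common, cleaned format.
    while not raw_events.empty():
        payload = raw_events.get()
        unified_events.put({
            "source": source_name,
            "event": payload.get("type", "unknown").strip().lower(),
            "user_id": str(payload.get("uid", "")),
        })

def enricher(user_plan_lookup):
    # Cross-reference cleaned events with another data set before upload.
    enriched = []
    while not unified_events.empty():
        event = unified_events.get()
        event["plan"] = user_plan_lookup.get(event["user_id"], "unknown")
        enriched.append(event)
    return enriched  # next stop: the data store and warehouse loader

if __name__ == "__main__":
    raw_events.put({"type": " Login ", "uid": 42})
    cleaner("product")
    print(json.dumps(enricher({"42": "enterprise"}), indent=2))

Swapping the in-memory queues for a real broker and running each function as its own long-lived process gives the distributed, scalable behaviour the post describes.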
https://medium.com/hootsuite-engineering/processing-big-data-with-a-micro-service-inspired-data-pipeline-1bb0159bc3d9
['Hootsuite Engineering']
2018-02-07 18:35:47.251000+00:00
['Microservices', 'Co Op', 'Data', 'Big Data']
Title Processing Big Data MicroServiceInspired Data PipelineContent aren’t truly ready career Big Data everyone room cringing endless jargon throwing Everyone tech always trying outimpress one another impressive grasp technical jargon However tech jargon exist reason summarizes complex concept simple narrative allow developer abstract implementation detail design pattern “mixandmatched” solve technical task mind let’s take look technical task Data Lab team facing year addressed absurd quantity geek speak Data Lab team Hootsuite designed help business make datadriven decision engineering standpoint mean designing data pipeline manage aggregate data various source Product Salesforce Localytics etc make available Redshift analysis Analysts Analyses typically take form either specific query used answer specific adhoc request permanent Dashboard designed monitor key metric However Hootsuite grew Datalab team became bottleneck data request stakeholder across business led u search way would allow various decision maker dig data without needing SQL knowledge caption id”attachment4271 align”aligncenter” width”317 Comic courtesy Geek Pokecaption Enter Interana Interana realtime timeindexed interactive data analytics tool would allow employee visualize explore data Awesome right Unfortunately one little problem didn’t infrastructure realtime data processing pipeline support series nightly ETLs run cron job Creating something scratch incredibly exciting Finally opportunity implement solution using jargon you’d like without technical debt laid goal chose solution best fit need analyzing problem realized quality wanted pipeline quality computer scientist striving achieve decade abstraction modularity robustness changed problem software engineer facing technology developed provide modularity robustness increased abstractness make sense wouldn’t able create realtime data pipeline running ETLs every second — needed different solution addressed issue caption id”attachment4272 align”aligncenter” width”300 requirementscaption Enter microservices Microservices small application perform single specific service often used application request delegated separate complete application make fantastic work abstract away implementation detail present interface comprising data input output mean long interface remains modification made service guaranteed compatible system fact one could safely replace one microservice another Hootsuite migrating towards breaking apart monolith set microservices Data Lab team also wanted slice fun Wanting move away monolithlike ETL codebase saw opportunity implement realtime data pipeline using best practice established Product brother data pipeline course inherently different requirement SaaS product — needed make change typical microservice product look like microservices Behave like workstation assembly line independent service — processing data “respond” caller dependency structure acyclic graph — don’t want data circulating pipeline forever distinction way let’s take look implemented new data pipeline helped u achieve abstraction modularity robustness overview realtime data pipeline diverse set source data — produce data real time others built microservice support batchupdated data data source get put onto data queue cleaner microservices clean data cleaned data get put common data format passed “unified cleaned” message queue enricher consume microservice enriches data crossreferencing various field data set microservices uploads data store sends message another message queue asking 
data uploaded analytical data warehouse Voila complete data pipeline able create complete data pipeline meet three quality sought beginning abstraction modularity robustness abstract service hide implementation detail reveals consumes output service hide implementation detail reveals consumes output modular microservice reused rearranged without needing refactor entire system resides microservice reused rearranged without needing refactor entire system resides robust New data source easily added clone update cleanerproducer microservice one service fails rest pipeline still operate correctly Beyond goal also able achieve desirable trait datapeople look distributed microservice run separate box may consuming data entirely different place microservice run separate box may consuming data entirely different place scalable always create instance application consume process data parallel Adding new data source easy said done able cut processing time half access data source didn’t done system easy understand change tangible benefit achieved using solution found within plethora jargon thrown around data community hope part post you’ve numbed cringeinducing effect nonstop jargon invokes begun see used describe perhaps toocolorful way tool technique use build better way Also they’re great SEO Author Kamil Khan Coop Data Lab team Hootsuite working Software Developer Data Analyst undergraduate student University British Columbia completing Bachelors Commerce Sauder School Business majoring Business Computer Science Want learn Connect Kamil LinkedInTags Microservices Co Op Data Big Data
3,795
A Classic Computer Vision Project — How to Add an Image Behind Objects in a Video
A Classic Computer Vision Project — How to Add an Image Behind Objects in a Video Introduction I was thrown a challenge by one of my colleagues — build a computer vision model that could insert any image in a video without distorting the moving object. This turned out to be quite an intriguing project and I had a blast working on it. Working with videos is notoriously difficult because of their dynamic nature. Unlike images, we don’t have static objects that we can easily identify and track. The complexity level goes up several levels — and that’s where our hold on image processing and computer vision techniques comes to the fore. I decided to go with a logo in the background. The challenge, which I will elaborate on later, was to insert a logo in a way that wouldn’t impede the dynamic nature of the object in any given video. I used Python and OpenCV to build this computer vision system — and have shared my approach in this article. Table of Contents Understanding the Problem Statement Getting the Data for this Project Setting the Blueprint for our Computer Vision Project Implementing the Technique in Python — Let’s Add the Logo! Understanding the Problem Statement This is going to be quite an uncommon use case of computer vision. We will be embedding a logo in a video. Now you must be thinking — what’s the big deal in that? We can simply paste the logo on top of the video, right? However, that logo might just hide some interesting action in the video. What if the logo impedes the moving object in front? That doesn’t make a lot of sense and makes the editing look amateurish. Therefore, we have to figure out how we can add the logo somewhere in the background such that it doesn’t block the main action going on in the video. Check out the video below — the left half is the original video and the right half has the logo appearing on the wall behind the dancer: This is the idea we’ll be implementing in this article. Getting the Data for this Project I have taken this video from pexels.com, a website for free stock videos. As I mentioned earlier, our objective is to put a logo in the video such that it should appear behind a certain moving object. So, for the time being, we will use the logo of OpenCV itself. You can use any logo you want (perhaps your favorite sports team?). You can download both the video and the logo from here. Setting the Blueprint for our Computer Vision Project Let’s first understand the approach before we implement this project. To perform this task, we will take the help of image masking. Let me show you some illustrations to understand the technique. Let’s say we want to put a rectangle (fig 1) in an image (fig 2) in such a manner that the circle in the second image should appear on top of the rectangle: So, the desired outcome should look like this: However, it is not that straightforward. When we take the rectangle from Fig 1 and insert it in Fig 2, it will appear on top of the pink circle: This is not what we want. The circle should have been in front of the rectangle. So, let’s understand how we can solve this problem. These images are essentially arrays. The values of these arrays are the pixel values and every color has its own pixel value. So, we would somehow set the pixel values of the rectangle to 1 where it is supposed to be overlapping with the circle (in Fig 5), while leaving the rest of the pixel values of the rectangle as they are.
In Fig 6, the region enclosed by blue-dotted lines is the region where we would put the rectangle. Let’s denote this region by R. We would set all the pixel values of R to 1 as well. However, we would leave the pixel values of the entire pink circle unchanged: Our next step is to multiply the pixel values of the rectangle with the pixel values of R. Since multiplying any number by 1 results in that number itself, so all those pixel values of R that are 1 will be replaced by the pixels of the rectangle. Similarly, the pixel values of the rectangle that are 1 will be replaced by the pixels of Fig 6. The final output will turn out to be something like this: This is the technique we are going to use to embed the OpenCV logo behind the dancing guy in the video. Let’s do it! Implementing the Technique in Python — Let’s Add the Logo! You can use a Jupyter Notebook or any IDE of your choice and follow along. We will first import the necessary libraries. Import Libraries Note: The version of the OpenCV library used for this tutorial is 4.0.0. Load Images Next, we will specify the path to the working directory where the logo and video are kept. Please note that you are supposed to specify the “path” in the code snippet below: So, we have loaded the logo image and the first frame of the video. Now let’s look at the shape of these images or arrays: logo.shape, frame.shape Output: ((240, 195, 3), (1080, 1920, 3)) Both the outputs are 3-dimensional. The first dimension is the height of the image, the second dimension is the width of the image and the third dimension is the number of channels in the image, i.e., blue, green, and red. Now, let’s plot and see the logo and the first frame of the video: plt.imshow(logo) plt.show() plt.imshow(cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)) plt.show() Technique to Create Image Mask The frame size is much bigger than the logo. Therefore, we can place the logo at a number of places. However, placing the logo at the center of the frame seems perfect to me as most of the action will happen around that region in the video. So, we will put the logo in the frame as shown below: Don’t worry about the black background in the logo. We will set the pixel values in the black region to 1 later in the code. Now the problem we have to solve is that of dealing with the moving object appearing in the same region where we have placed the logo. As discussed earlier, we need to make the logo allow itself to be occluded by that moving object. Right now, the area where we will put the logo in has a wide range of pixel values. Ideally, all the pixel values should be the same in this area. So how can we do that? We will have to make the pixels of the wall enclosed by the green dotted box have the same value. We can do this with the help of HSV (hue, saturation, value) colorspace: Our image is in RGB colorspace. We will convert it into an HSV image. The image below is the HSV version: The next step is to find the range of the HSV values of only the part that is inside the green dotted box. It turns out that most of the pixels in the box range from [6, 10, 68] to [30, 36, 122]. These are the lower and upper HSV ranges, respectively. Now using this range of HSV values, we can create a binary mask. This mask is nothing but an image with pixel values of either 0 or 255. So, the pixels falling in the upper and lower range of the HSV values will be equal to 255 and the rest of the pixels will be 0. Given below is the mask prepared from the HSV image. 
All the pixels in the yellow region have pixel value of 255 and the rest have pixel value of 0: Now we can easily set the pixel values inside the green dotted box to 1 as and when required. Let’s go back to the code: The code snippet above will load the frames from the video, pre-process it, and create HSV images and masks and finally insert the logo into the video. And there you have it! End Notes In this article, we covered a very interesting use case of computer vision and implemented it from scratch. In the process, we also learned about working with image arrays and how to create masks from these arrays. This is something that would help you when you work on other computer vision tasks. Feel free to reach out to me if you have any doubts or feedback to share. I would be glad to help you. Feel free to reach out to me at [email protected] for 1–1 discussions.
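As a supplement to the walkthrough above (whose original snippets did not survive the copy), here is a condensed sketch of the masking step for a single frame. It is not the author's exact script: the file names and the logo position are placeholders, it uses OpenCV's bitwise-masking helpers rather than the multiply-by-ones trick described in the illustrations, and only the HSV bounds ([6, 10, 68] to [30, 36, 122]) come from the article itself.

import cv2
import numpy as np

frame = cv2.imread("frame.jpg")   # one 1080x1920 video frame (BGR)
logo = cv2.imread("logo.png")     # the 240x195 OpenCV logo

h, w = logo.shape[:2]
y0, x0 = 400, 860                 # assumed top-left corner for the logo
roi = frame[y0:y0 + h, x0:x0 + w]

# Pixels belonging to the background wall fall inside this HSV range.
hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
wall_mask = cv2.inRange(hsv, np.array([6, 10, 68]), np.array([30, 36, 122]))

# Show the logo only where the wall is visible and keep the frame elsewhere,
# so anything moving in front of the wall occludes the logo.
logo_part = cv2.bitwise_and(logo, logo, mask=wall_mask)
frame_part = cv2.bitwise_and(roi, roi, mask=cv2.bitwise_not(wall_mask))
frame[y0:y0 + h, x0:x0 + w] = cv2.add(logo_part, frame_part)

cv2.imwrite("frame_with_logo.jpg", frame)

Looping these same lines over every frame read from cv2.VideoCapture produces the effect shown in the demo video.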
https://medium.com/swlh/a-classic-computer-vision-project-how-to-add-an-image-behind-objects-in-a-video-b0ac8d7b2173
['Prateek Joshi']
2020-10-18 14:12:55.446000+00:00
['Data Science', 'Object Detection', 'Python', 'Opencv', 'Computer Vision']
Title Classic Computer Vision Project — Add Image Behind Objects VideoContent Classic Computer Vision Project — Add Image Behind Objects Video Prateek Joshi Follow Jun 14 · 7 min read Introduction thrown challenge one colleague — build computer vision model could insert image video without distorting moving object turned quite intriguing project blast working Working video notoriously difficult dynamic nature Unlike image don’t static object easily identify track complexity level go several level — that’s hold image processing computer vision technique come fore decided go logo background challenge elaborate later insert logo way wouldn’t impede dynamic nature object given video used Python OpenCV build computer vision system — shared approach article Table Contents Understanding Problem Statement Getting Data Project Setting Blueprint Computer Vision Project Implementing Technique Python — Let’s Add Logo Understanding Problem Statement going quite uncommon use case computer vision embedding logo video must thinking — what’s big deal simply paste logo top video right However logo might hide interesting action video logo impedes moving object front doesn’t make lot sense make editing look amateurish Therefore figure add logo somewhere background doesn’t block main action going video Check video — left half original video right half logo appearing wall behind dancer idea we’ll implementing article Getting Data Project taken video pexelscom website free stock video mentioned earlier objective put logo video appear behind certain moving object time use logo OpenCV use logo want perhaps favorite sport team download video logo Setting Blueprint Computer Vision Project Let’s first understand approach implement project perform task take help image masking Let show illustration understand technique Let’s say want put rectangle fig 1 image fig 2 manner circle second image appear top rectangle desired outcome look like However straightforward take rectangle Fig 1 insert Fig 2 appear top pink circle want circle front rectangle let’s understand solve problem image essentially array value array pixel value every color pixel value would somehow set pixel value rectangle 1 supposed overlapping circle Fig 5 leaving rest pixel value rectangle Fig 6 region enclosed bluedotted line region would put rectangle Let’s denote region R would set pixel value R 1 well However would leave pixel value entire pink circle unchanged next step multiply pixel value rectangle pixel value R Since multiplying number 1 result number pixel value R 1 replaced pixel rectangle Similarly pixel value rectangle 1 replaced pixel Fig 6 final output turn something like technique going use embed OpenCV logo behind dancing guy video Let’s Implementing Technique Python — Let’s Add Logo use Jupyter Notebook IDE choice follow along first import necessary library Import Libraries Note version OpenCV library used tutorial 400 Load Images Next specify path working directory logo video kept Please note supposed specify “path” code snippet loaded logo image first frame video let’s look shape image array logoshape frameshape Output 240 195 3 1080 1920 3 output 3dimensional first dimension height image second dimension width image third dimension number channel image ie blue green red let’s plot see logo first frame video pltimshowlogo pltshow pltimshowcv2cvtColorframecv2COLORBGR2RGB pltshow Technique Create Image Mask frame size much bigger logo Therefore place logo number place However placing logo center frame seems perfect action happen around 
region video put logo frame shown Don’t worry black background logo set pixel value black region 1 later code problem solve dealing moving object appearing region placed logo discussed earlier need make logo allow occluded moving object Right area put logo wide range pixel value Ideally pixel value area make pixel wall enclosed green dotted box value help HSV hue saturation value colorspace image RGB colorspace convert HSV image image HSV version next step find range HSV value part inside green dotted box turn pixel box range 6 10 68 30 36 122 lower upper HSV range respectively using range HSV value create binary mask mask nothing image pixel value either 0 255 pixel falling upper lower range HSV value equal 255 rest pixel 0 Given mask prepared HSV image pixel yellow region pixel value 255 rest pixel value 0 easily set pixel value inside green dotted box 1 required Let’s go back code code snippet load frame video preprocess create HSV image mask finally insert logo video End Notes article covered interesting use case computer vision implemented scratch process also learned working image array create mask array something would help work computer vision task Feel free reach doubt feedback share would glad help Feel free reach prateekjoshi565gmailcom 1–1 discussionsTags Data Science Object Detection Python Opencv Computer Vision
3,796
Regular Expressions in JavaScript: An Introduction
Regular Expressions in JavaScript: An Introduction How to use Regex in JavaScript to validate and format strings Regex in JavaScript: Your strings won’t know what hit them JavaScript’s implementation of Regex is useful for a range of string validation, formatting and iteration techniques. This article acts as an introduction to using regular expressions in JavaScript, touching on useful ways to use them, in addition to exploring some of the cryptic-looking syntax that regular expressions entail. Rather than attempting to be a comprehensive guide for all Regex features, this piece instead focuses on super-useful concepts and real-world examples to get you started using Regex in your JavaScript apps. Regular expressions are notably hard to read as they gain in complexity, so it is necessary for the developer to have some knowledge of Regex syntax to know what is being tested. One can summarise regular expressions as patterns used to match character combinations in strings. JavaScript supports regular expressions in a range of its native APIs such as match, matchAll and replace, among others, for testing a string against the defined pattern via the regular expression. Perhaps the most basic type of support is the RegExp object, which tests whether a pattern is present within a string with its built-in methods such as exec() and test(). To demonstrate Regex in its simplest form, we can check whether a particular substring pattern is present within another string using test(), which will return a boolean. This is how we’d test whether the string word is present within a string — there are actually two ways we can do this, here’s the first: // the simplest use case of Regex: substring testing const str = "How many words will this article have?"; const result = new RegExp('word').test(str); JavaScript also recognises a regular expression simply by wrapping it in forward slashes: const result2 = /word/.test(str); This cleaner syntax will be used for the rest of this article. This simple use case of Regex is useful for testing form values or validating other unknown data. You could indeed use this method as a simple way to test things like secure URLs, where you expect https:// within the string. However, as we all know, a valid URL has a few more rules than this — a domain suffix, an optional www., a lack of whitespace, support for a limited amount of special characters, etc. This is where Regex shines — it can check all these attributes in one regular expression, or “pattern”, being able to test very complex strings that have an arbitrary number of patterns within them. Matching one of several Characters with Character Classes The English language has quite a few arbitrary words with the potential to be a nightmare for form validation — if it was not for regular expressions. Take the word color that is also correctly spelled as colour, or adapter and adaptor, ambience and ambiance — the list continues. This is where Character Classes, also termed Character Sets, come in handy with Regex. They are defined with square brackets enclosing a range of acceptable characters for that position. Let’s take ambience and ambiance — this is how we’d test both words: const str = "This office has a pleasant ambience"; const result = /ambi[ae]nce/.test(str); The above character class accepts either an e or a as the 5th character of the tested string.
Testing for optional characters Testing color and colour is slightly different — there is actually an additional optional character, being the u. Consider the following regular expression, which checks for the optional character: const str = "How many colours are there in a rainbow?"; const result = /colou?r/.test(str); Notice the ? after the u character — this introduces the first operator of this article. The ? operator declares a character or group of characters as optional in the defined pattern. colour is indeed present within str, and test() will return true. Take a scenario where we need to test a string representing a month, which could be displayed in short-form or long-form, such as Jan or January. Both are valid months, but the uary characters can be omitted in the short form. To test this, we can wrap multiple characters in parentheses, also termed a Capturing Group, and make the entire group optional. const str = "January 3rd is my birthday?"; const result = /[Jj]an(uary)?/.test(str); Note that we’re accepting both upper and lower case j, and have declared an optional capturing group with (uary)?. There are more efficient ways to test character cases that we’ll discover further down. We can also test a range of characters in a character class. A dash (-) between two characters presents a range. For example, you may wish to validate a hexadecimal string, such as when you have a visual editor in the browser to toggle colours. Check out the following regular expression to do this, which introduces more syntax to our Regex endeavours: const str = "ff0000"; //red const result = /[0-9A-F]{6}/i.test(str); The ranges of 0–9 and A-F are searched, along with curly braces with a 6 in-between. {6} here is declaring that the pattern should match exactly 6 times within the string being tested. This makes sense, as a hexadecimal value is 6 characters in length. Also of interest is the i character included at the end of the regular expression, after the closing forward slash — what is this? Introducing Flags i is one of several flags available to use at the end of a regular expression. The i flag makes the search case-insensitive, so both upper case and lower case A-F are searched. Our character class searches for the range of A-F, but with the i flag, a-f is also searched. There is no need to define both with [0-9A-Fa-f]. Another commonly used flag is g, which searches for all matches within a string. Up until now, we have only explored single-match regular expressions. Moving forward, we will want to search all matches within a string, making Regex a much more powerful concept for processing larger bulks of text. Negating Character Classes with ^ We can also define a Character Class that you do not wish to match within a string. If we took the above example and did not wish to match hexadecimal strings, the caret (^) can be placed at the beginning of the Character Class: const result = /[^0-9A-F]{6}/i.test(str); The result will consequently return false in the event a hexadecimal string is present. Negated Character Classes are effective for defining what you don’t want to appear within a pattern, and may come in handy in forms of validation, such as testing for sensitive words and phrases in a user-submitted comment. Before we continue, let’s recap the terms explored so far: Character Classes / Character Sets: Using the square brackets to define a range of possible characters in an expression.
Operators with ? for optional characters within a regular expression. Capturing Groups that are defined with parentheses, allowing us to test a group of characters, or a subset of a string. Flags: “global” level configurations that manipulate the Regex search in some way. The next set of examples will level up our understanding of Regex, using them in a global fashion to tackle more real-world problems.
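As one illustrative example of that global usage (my own example, not necessarily the one the original article continues with), the g flag lets match and matchAll pull every occurrence of a pattern out of a larger string:

// Pull every hex colour out of a block of CSS using the g and i flags.
const css = "a { color: #FF0000; } .btn { background: #00ff88; }";

const colours = css.match(/#[0-9A-F]{6}/gi);
console.log(colours); // [ '#FF0000', '#00ff88' ]

// matchAll exposes each match together with its capturing groups.
for (const match of css.matchAll(/#([0-9A-F]{2})([0-9A-F]{2})([0-9A-F]{2})/gi)) {
  const [full, r, g, b] = match;
  console.log(`${full} -> r:${parseInt(r, 16)} g:${parseInt(g, 16)} b:${parseInt(b, 16)}`);
}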
https://rossbulat.medium.com/regular-expressions-in-javascript-an-introduction-94a40dce46a2
['Ross Bulat']
2020-03-06 15:58:39.785000+00:00
['JavaScript', 'Software Development', 'Software Engineering', 'Development', 'Programming']
Title Regular Expressions JavaScript IntroductionContent Regular Expressions JavaScript Introduction use Regex JavaScript validate format string Regex JavaScript string won’t know hit JavaScript’s implementation Regex useful range string validation formatting iteration technique article act introduction using regular expression JavaScript touching useful way use addition exploring crypticlike syntax regular expression entail Rather attempting comprehensive guide Regex feature piece instead focus superuseful concept realworld example get started using Regex JavaScript apps Regular expression notably hard read gain complexity necessary developer knowledge Regex syntax know tested One summarise regular expression pattern used match character combination string JavaScript support regular expression range native APIs match matchAll replace among others testing string defined pattern via regular expression Perhaps basic type support Regex object test whether pattern present within string builtin method exec test demonstrate Regex simplest form check whether particular substring pattern present another string test Regex’s test return boolean we’d test whether string word present within string — actually two way here’s first simplest use case Regex substring testing const str many word article const result new RegExpwordteststr JavaScript also recognises regular expression simply wrapping forward slash const result2 wordteststr cleaner syntax used rest article simple use case Regex useful testing form value validating unknown data could indeed use method simple way test thing like secure URLs expect http within string However know valid URL rule — domain suffix optional www lack whitespace support limited amount special character etc Regex shine — check attribute one regular expression “pattern” able test complex string arbitrary number pattern within Matching one several Characters Character Classes English language quite arbitrary word potential nightmare form validation — regular expression Take word color also correctly spelled colour adapter adaptor ambience ambiance — list continues Character Classes also termed Character Sets come handy Regex defined square bracket followed range acceptable character position Let’s take ambience ambiance — we’d test word const str office pleasant ambience const result ambiaenceteststr character class accepts either e 5th character tested string Testing optional character Testing color colour slightly different — actually additional optional character u Consider following regular expression check optional character const str many colour rainbow const result colourteststr Notice u character — introduces first operator talk operator declares character group character optional defined pattern colour indeed present within str test return true Take scenario need test string representing month could displayed shortform longform Jan January valid month uary character omitted short form test wrap multiple character parenthesis also termed Capturing Group make entire group optional const str January 3rd birthday const result Jjanuaryteststr Note we’re accepting upper lower case j declared optional capturing group uary efficient way test character case we’ll discover also test range character character class dash two character present range example may validate hexadecimal string visual editor browser toggle colour Check following regular expression introduces syntax Regex endeavour const str ff0000 red const result 09AF6iteststr range 0–9 AF searched along curly 
brace 6 inbetween 6 declaring pattern match exactly 6 time within string tested make sense hexadecimal value 6 character length Also interest character included end regular expression closing forward slash — Introducing Flags one several flag available use end regular expression flag make search caseinsensitive upper case lower case AF searched character class search range AF flag af also searched need define 09AFaf Another commonly used flag g search match within string til explored single matching regular expression Moving forward want search match within string making Regex much powerful concept processing larger bulk text Negating Character Classes also define Character Class wish match within string took example wish match hexadecimal string caret placed beginning Character Class const result 09AF6iteststr result consequently return false event hexadecimal string present Negated Character Classes effective defining don’t want appear within pattern may come handy form validation testing sensitive word phrase usersubmitted comment continue let’s recap term explored far Character Classes Character Sets Using square bracket define range possible character expression Using square bracket define range possible character expression Operators optional character within regular expression optional character within regular expression Capturing Groups defined parenthesis allowing u test group character subset string defined parenthesis allowing u test group character subset string Flags “global” level configuration manipulate Regex search way next set example level understanding Regex using global fashion tackle realworld problemsTags JavaScript Software Development Software Engineering Development Programming
3,797
The Future Factor
The Future Factor Tarot, time, and the mind . . . Photo by Santiago Lacarta on Unsplash Divination offers the promise of peering beyond our present illusions, perhaps into a timeless reality that is continually unfolding before us. But the reasoning mind has many questions . . . Does “the future” exist? In what sense? Is it already defined, or can we change it? Do we ever really see ahead in time — or is it just a trick of the imagination? Divination and the Direction of Time In a general sense, the verb “to divine” means to produce information that would otherwise be hidden. More specifically, it means to learn the will of the gods. Hence the etymological bond of “divination” and “divinity.” The information produced can be about anything, and it can be drawn from any point in time — past, present, or future. A “divining rod,” for example, discovers water or other things presently buried. And divination is frequently used in traditional cultures to discover who did something in the past or what is currently afflicting a sick person. Which is all quite useful. But the “future factor” is what really fascinates us — and may set divination apart from the many other ways of human knowing. After all, things that have already happened or are currently happening produce information that’s available in ordinary as well as extraordinary ways. Mysteries of the past and present may be solved by gathering clues and making deductions, because the information exists in some literal way; what happened did happen, what is happening is happening. These things existed at some point in time. But as far as can be told, what has yet to happen does not exist and never has. Therefore we can’t find out about it in any of our ordinary ways. Though we might guess or bet or predict or project — we cannot know, because there is nothing to know. Or is there? It’s true that we believe the future doesn’t “exist”; but why do we believe that? In the first place, there’s the evidence of our senses. And here again, language leaves clues. We “remember” the past, we “perceive” the present, but we don’t “________” the future. There’s a blank there because we don’t have a common word for future-knowing — and we don’t have a word for it because we don’t commonly experience it. The Newtonian world view, which is based on our senses and our reasoning capacity, naturally tells us that “causes” must precede “effects” and closed systems always tend toward disorder (that is, things get older but never younger, things break but never get unbroken, and so on). These are the principal explanations of why time appears to unfold from past to future. But from a post-Newtonian perspective, information is often wildly opposed to sense data. For example — in the quantum world, cause-and-effect doesn’t necessarily apply, and time isn’t necessarily linear. Because our systems, processes, and technologies are still based almost entirely on a mechanical, Newtonian interpretation of the world, we haven’t progressed much in our ability to relate to the future. In fact, this is one of the few areas in which we have no new technologies — or even any “promising developments.” As it works out, we now have (reasonably) reliable, (mostly) mechanical ways of doing all those past and present knowledge-things for which divination was once employed. For example, we have science-based tools for finding water, diagnosing illness, or solving crimes. And therefore we don’t have a practical, everyday need for divining rods and shamanic rituals. 
But when it comes to determining future events, we haven’t any better tools than the Maya did, or the Homeric Greeks, or the ancient Chinese — all of whom employed what we now call divination. Divination and the Nature of Mind There have been many efforts to validate the possibility of future-knowing (precognition) through experimentation and theoretical constructs, but so far, a scientific approach hasn’t brought us much insight on this subject. Since there are many things science doesn’t yet understand — or in some cases, has had to dramatically re-understand — the fact that there’s no scientific evidence of precognition is not dispositive. It may just be that our science hasn’t yet achieved a basis for understanding certain phenomena. From that perspective . . . attempting to develop explanations of future-knowing in terms of known constructs may be an inadequate, even counter-productive activity. So where should we be looking? One direction has been psychology, especially as viewed from a Jungian perspective. Lama Chime Radha, Rinpoche, then head of the Tibetan Section of the British Library, offered this observation in an article on divination in traditional Tibet: From the “scientific” point of view it would of course be possible and even necessary to explain away the belief in divination and other magical operations as mere superstitions having no correspondence with objective reality, and of relevance only to the social anthropologist. More sympathetic explanations might invoke the concept of synchronicity, the interconnectedness of all objects and events in space and time, whereby in states of heightened awareness it becomes possible “to see a world in a grain of sand and a heaven in a wild flower.” Or one could hypothesize that the external apparatus of divination, whether it is a crystal ball, the pattern of cracks in a tortoise shell, or a complex system of astrology, is essentially a means of focusing and concentrating the conscious mind so that insights and revelations may arise (or descend) from the profounder and perhaps supra-individual levels of the unconscious. [1] All of these approaches — the scientific vision of space-time and the related hypothesis of synchronicity, the speculations of parapsychology and transpersonal psychology — are intriguing. But as Lama Radha points out, such attempts at explanation may still fall very far short of correctly connecting mind and reality: The Tibetans themselves would certainly regard the visions and predictions of seers and diviners as mind-created, but then in accordance with Buddhist philosophy so they would regard everything that is experienced either subjectively or objectively, including entities of such seemingly varied degrees of solidity and independent existence as mountains, trees, other beings, sub-atomic particles and waves. Such an in fact continuity between mind and world, consciousness and created reality, is still by no means scientifically accepted or even widely entertained — much less authentically experienced by most of us. For the most part, speculation along these lines has been confined to some few scientists with a philosophical bent and/or an acquaintance with mystical experience. And so the science of space-time and the new model of consciousness that might issue from it remains very abstract. [2] But Eastern philosophy, which has been investigating the space-time continuum for more than a millennium, can bring the concept much closer to our own experience and our own embodiment. 
As Peter Barth explains in Piercing the Autumn Sky: A Guide to Discovering the Natural Freedom of Mind — his delightful guide to Tibetan Buddhist mind training: Exploring the nature of time and space more directly, as present in our lives, we may begin to discover the vastness of time and space itself, the vastness of our human awareness. We may note the sameness of each moment, or each millionth of a moment, in the sense that each “piece” of time or space contains the complete nature of all of time and space. We are endowed with space and time itself in the fabric of our being. [3] In other words, all that is (or was/will be) is in us. We perceive differences between time and space, then and now, thought and matter, “me” and “it” not because such things are in fact separate, but because we are conditioned (physically and mentally) to construct the world in a certain way. At our present level of evolution, it is very difficult for most people to transcend these limitations of perception for any length of time. Although psychoactive drugs and certain techniques for achieving ecstatic trance can produce temporary suspensions of our habitual perception, an effortful, sustained pursuit of spiritual discipline or mind training is needed to bring about more lasting alterations. As Barth explains, with careful practice, rather [than seeing time as] a linear road that we are on, we may discover what can be called vast time, a time which is inherent in everything, as eternal, unimpeded dynamism; a source of unlimited energy. By getting to know this aspect of our minds, by attending to the dynamic nature of our experience directly, we can actually begin to enter the dance of vast time itself, with no space between ‘us’ and ‘time.’ The fabrications of ‘past,’ ‘present,’ and ‘future’ places and selves begin to loosen their grip on us. Experientially, we realize that the past and future are only projections of our thoughts, while the present remains an indeterminate state that cannot be pinned down. The mind-training disciplines taught by Tibetan Buddhism and a few other traditions can eventually produce this expanded relationship with time. But we don’t all have the leisure or the temperament to pursue these practices intensively (at least in this lifetime). Work with Tarot, however, can be a surprisingly effective way for almost anyone to bring something of this experience into her or his life. My own experience suggests that the ability to sense something of the future is frequently an aspect of being entirely in the present. A complete, serene, and unselfconscious engagement with the given moment (such as may be experienced during a Tarot reading) actually frees the mind from habitual projections into the future and allows the future to reveal itself. The more we cultivate a deep, fluent command of the cards, the more likely we are to find awareness growing beyond the present. There are also ways to improve concentration — one of several benefits that Tarot practitioners can derive from meditation. So I’ll be writing soon about four approaches to meditation, how they resonate with the four suits of Tarot, and how to choose a path that will expand your sense of time.
https://medium.com/tarot-a-textual-project/the-future-factor-645e771907ab
['Cynthia Giles']
2020-11-11 01:14:19.877000+00:00
['Meditation', 'Spirituality', 'Creativity', 'Psychology', 'Tarot']
Title Future FactorContent Future Factor Tarot time mind Photo Santiago Lacarta Unsplash Divination offer promise peering beyond present illusion perhaps timeless reality continually unfolding u reasoning mind many question “the future” exist sense already defined change ever really see ahead time — trick imagination Divination Direction Time general sense verb “to divine” mean produce information would otherwise hidden specifically mean learn god Hence etymological bond “divination” “divinity” information produced anything drawn point time — past present future “divining rod” example discovers water thing presently buried divination frequently used traditional culture discover something past currently afflicting sick person quite useful “future factor” really fascinates u — may set divination apart many way human knowing thing already happened currently happening produce information that’s available ordinary well extraordinary way Mysteries past present may solved gathering clue making deduction information exists literal way happened happen happening happening thing existed point time far told yet happen exist never Therefore can’t find ordinary way Though might guess bet predict project — cannot know nothing know It’s true believe future doesn’t “exist” believe first place there’s evidence sens language leaf clue “remember” past “perceive” present don’t “” future There’s blank don’t common word futureknowing — don’t word don’t commonly experience Newtonian world view based sens reasoning capacity naturally tell u “causes” must precede “effects” closed system always tend toward disorder thing get older never younger thing break never get unbroken principal explanation time appears unfold past future postNewtonian perspective information often wildly opposed sense data example — quantum world causeandeffect doesn’t necessarily apply time isn’t necessarily linear system process technology still based almost entirely mechanical Newtonian interpretation world haven’t progressed much ability relate future fact one area new technology — even “promising developments” work reasonably reliable mostly mechanical way past present knowledgethings divination employed example sciencebased tool finding water diagnosing illness solving crime therefore don’t practical everyday need divining rod shamanic ritual come determining future event haven’t better tool Maya Homeric Greeks ancient Chinese — employed call divination Divination Nature Mind many effort validate possibility futureknowing precognition experimentation theoretical construct far scientific approach hasn’t brought u much insight subject Since many thing science doesn’t yet understand — case dramatically reunderstand — fact there’s scientific evidence precognition dispositive may science hasn’t yet achieved basis understanding certain phenomenon perspective attempting develop explanation futureknowing term known construct may inadequate even counterproductive activity looking One direction psychology especially viewed Jungian perspective Lama Chime Radha Rinpoche head Tibetan Section British Library offered observation article divination traditional Tibet “scientific” point view would course possible even necessary explain away belief divination magical operation mere superstition correspondence objective reality relevance social anthropologist sympathetic explanation might invoke concept synchronicity interconnectedness object event space time whereby state heightened awareness becomes possible “to see world grain sand heaven wild flower” 
one could hypothesize external apparatus divination whether crystal ball pattern crack tortoise shell complex system astrology essentially mean focusing concentrating conscious mind insight revelation may arise descend profounder perhaps supraindividual level unconscious 1 approach — scientific vision spacetime related hypothesis synchronicity speculation parapsychology transpersonal psychology — intriguing Lama Radha point attempt explanation may still fall far short correctly connecting mind reality Tibetans would certainly regard vision prediction seer diviner mindcreated accordance Buddhist philosophy would regard everything experienced either subjectively objectively including entity seemingly varied degree solidity independent existence mountain tree being subatomic particle wave fact continuity mind world consciousness created reality still mean scientifically accepted even widely entertained — much le authentically experienced u part speculation along line confined scientist philosophical bent andor acquaintance mystical experience science spacetime new model consciousness might issue remains abstract 2 Eastern philosophy investigating spacetime continuum millennium bring concept much closer experience embodiment Peter Barth explains Piercing Autumn Sky Guide Discovering Natural Freedom Mind — delightful guide Tibetan Buddhist mind training Exploring nature time space directly present life may begin discover vastness time space vastness human awareness may note sameness moment millionth moment sense “piece” time space contains complete nature time space endowed space time fabric 3 word waswill u perceive difference time space thought matter “me” “it” thing fact separate conditioned physically mentally construct world certain way present level evolution difficult people transcend limitation perception length time Although psychoactive drug certain technique achieving ecstatic trance produce temporary suspension habitual perception effortful sustained pursuit spiritual discipline mind training needed bring lasting alteration Barth explains careful practice rather seeing time linear road may discover called vast time time inherent everything eternal unimpeded dynamism source unlimited energy getting know aspect mind attending dynamic nature experience directly actually begin enter dance vast time space ‘us’ ‘time’ fabrication ‘past’ ‘present’ ‘future’ place self begin loosen grip u Experientially realize past future projection thought present remains indeterminate state cannot pinned mindtraining discipline taught Tibetan Buddhism tradition eventually produce expanded relationship time don’t leisure temperament pursue practice intensively least lifetime Work Tarot however surprisingly effective way almost anyone bring something experience life experience suggests ability sense something future frequently aspect entirely present complete serene unselfconscious engagement given moment may experienced Tarot reading actually free mind habitual projection future allows future reveal cultivate deep fluent command card likely find awareness growing beyond present also way improve concentration — one several benefit Tarot practitioner derive meditation I’ll writing soon four approach meditation resonate four suit Tarot choose path expand sense timeTags Meditation Spirituality Creativity Psychology Tarot
3,798
Getting to Know the Mel Spectrogram
Read this short post if you want to be like Neo and know all about the Mel Spectrogram! (Ho maybe not all, but at least a little) For the tl;dr and full code, go here. A Real Conversation That Happened in My Head a Few Days Ago Me: Hi Mel Spectrogram, may I call you Mel? Mel: Sure. Me: Thanks. So Mel, when we first met, you were quite the enigma to me. Mel: Really? How’s that? Me: You are composed of two concepts that their whole purpose is to make abstract notions accessible to humans - the Mel Scale and Spectrogram - yet you yourself were quite difficult for me, a human, to understand. Mel: Is there a point to this one-sided speech? Me: And do you know what bothered me even more? I heard through the grapevine that you are quite the buzzz in DSP (Digital Signal Processing), yet I found very little intuitive information about you online. Mel: Should I feel bad for you? Me: So anyway, I didn’t want to let you be misunderstood, so I decided to write about you. Mel: Gee. That’s actually kinda nice. Hope more people will get me now. Me: With pleasure my friend. I think we can talk about what are your core elements, and then show some nice tricks using the librosa package on python. Mel: Oooh that’s great! I love librosa! It can generate me with one line of code! Me: Wonderful! And let’s use this beautiful whale song as our toy example throughout this post! What do you think? Mel: You know you’re talking to yourself right? The Spectrogram Visualizing sound is kind of a trippy concept. There are some mesmerizing ways to do that, and also more mathematical ones, which we will explore in this post. Photo credit: Chelsea Davis. See more of this beautiful artwork here. When we talk about sound, we generally talk about a sequence of vibrations in varying pressure strengths, so to visualize sound kinda means to visualize airwaves. But this is just a two dimensional representation of this complex and rich whale song! Another mathematical representation of sound is the Fourier Transform. Without going into too many details (watch this educational video for a comprehensible explanation), Fourier Transform is a function that gets a signal in the time domain as input, and outputs its decomposition into frequencies. Let’s take for example one short time window and see what we get from applying the Fourier Transform. Now let’s take the complete whale song, separate it to time windows, and apply the Fourier Transform on each time window. Wow can’t see much here can we? It’s because most sounds humans hear are concentrated in very small frequency and amplitude ranges. Let’s make another small adjustment - transform both the y-axis (frequency) to log scale, and the “color” axis (amplitude) to Decibels, which is kinda the log scale of amplitudes. Now this is what we call a Spectrogram! The Mel Scale Let’s forget for a moment about all these lovely visualization and talk math. The Mel Scale, mathematically speaking, is the result of some non-linear transformation of the frequency scale. This Mel Scale is constructed such that sounds of equal distance from each other on the Mel Scale, also “sound” to humans as they are equal in distance from one another. In contrast to Hz scale, where the difference between 500 and 1000 Hz is obvious, whereas the difference between 7500 and 8000 Hz is barely noticeable. Luckily, someone computed this non-linear transformation for us, and all we need to do to apply it is use the appropriate command from librosa. Yup. That’s it. But what does this give us? 
It partitions the Hz scale into bins, and transforms each bin into a corresponding bin in the Mel Scale, using overlapping triangular filters.
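If you want to follow along at home, the whole dance might look roughly like this with librosa and matplotlib. This is only a sketch: the file name whale_song.wav is a made-up placeholder for your own recording, and the parameters are librosa defaults unless noted. First, the waveform-to-spectrogram pipeline - load the audio, apply the Fourier Transform on short overlapping windows (the short-time Fourier transform), convert amplitudes to decibels, and plot with a log-scaled frequency axis:

import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

# Load the audio (placeholder file name - substitute your own whale song).
y, sr = librosa.load("whale_song.wav")

# The raw airwaves: pressure values over time.
plt.plot(y)
plt.title("Waveform")
plt.show()

# Short-time Fourier transform: the Fourier Transform applied per time window.
D = librosa.stft(y)  # complex matrix of shape (frequency bins, time frames)

# Amplitudes to decibels - roughly a log scale for loudness.
S_db = librosa.amplitude_to_db(np.abs(D), ref=np.max)

# Log-scaled frequency axis + decibel color axis = a Spectrogram.
img = librosa.display.specshow(S_db, sr=sr, x_axis="time", y_axis="log")
plt.colorbar(img, format="%+2.0f dB")
plt.title("Spectrogram")
plt.show()

And the Mel side of the story - the Hz-to-Mel command, a peek at the overlapping triangular filters, and the promised one-liner that produces the Mel Spectrogram (the n_mels values below are arbitrary choices for illustration):

# The non-linear transformation itself: Hz values mapped onto the Mel Scale.
print(librosa.hz_to_mel(np.array([500, 1000, 7500, 8000])))

# The overlapping triangular filters that turn Hz bins into Mel bins.
mel_filter_bank = librosa.filters.mel(sr=sr, n_fft=2048, n_mels=10)
plt.plot(mel_filter_bank.T)
plt.title("Mel filter bank (10 triangular filters)")
plt.show()

# The one line of code that generates a Mel Spectrogram.
S_mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)

img = librosa.display.specshow(librosa.power_to_db(S_mel, ref=np.max),
                               sr=sr, x_axis="time", y_axis="mel")
plt.colorbar(img, format="%+2.0f dB")
plt.title("Mel Spectrogram")
plt.show()

Note that melspectrogram computes the short-time Fourier transform and applies the triangular filter bank for you, which is why, in practice, the one-liner really is all you need.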
https://towardsdatascience.com/getting-to-know-the-mel-spectrogram-31bca3e2d9d0
['Dalya Gartzman']
2020-05-09 20:19:17.674000+00:00
['Python', 'Programming', 'Music', 'Audio', 'Data Science']
Title Getting Know Mel SpectrogramContent Read short post want like Neo know Mel Spectrogram Ho maybe least little tldr full code go Real Conversation Happened Head Days Ago Hi Mel Spectrogram may call Mel Mel Sure Thanks Mel first met quite enigma Mel Really How’s composed two concept whole purpose make abstract notion accessible human Mel Scale Spectrogram yet quite difficult human understand Mel point onesided speech know bothered even heard grapevine quite buzzz DSP Digital Signal Processing yet found little intuitive information online Mel feel bad anyway didn’t want let misunderstood decided write Mel Gee That’s actually kinda nice Hope people get pleasure friend think talk core element show nice trick using librosa package python Mel Oooh that’s great love librosa generate one line code Wonderful let’s use beautiful whale song toy example throughout post think Mel know you’re talking right Spectrogram Visualizing sound kind trippy concept mesmerizing way also mathematical one explore post Photo credit Chelsea Davis See beautiful artwork talk sound generally talk sequence vibration varying pressure strength visualize sound kinda mean visualize airwave two dimensional representation complex rich whale song Another mathematical representation sound Fourier Transform Without going many detail watch educational video comprehensible explanation Fourier Transform function get signal time domain input output decomposition frequency Let’s take example one short time window see get applying Fourier Transform let’s take complete whale song separate time window apply Fourier Transform time window Wow can’t see much It’s sound human hear concentrated small frequency amplitude range Let’s make another small adjustment transform yaxis frequency log scale “color” axis amplitude Decibels kinda log scale amplitude call Spectrogram Mel Scale Let’s forget moment lovely visualization talk math Mel Scale mathematically speaking result nonlinear transformation frequency scale Mel Scale constructed sound equal distance Mel Scale also “sound” human equal distance one another contrast Hz scale difference 500 1000 Hz obvious whereas difference 7500 8000 Hz barely noticeable Luckily someone computed nonlinear transformation u need apply use appropriate command librosa Yup That’s give u partition Hz scale bin transforms bin corresponding bin Mel Scale using overlapping triangular filtersTags Python Programming Music Audio Data Science
3,799
Exception Handling in Java Streams
Unchecked Exceptions Let’s take an example use of streams: you are given a list of strings and you want to convert them all to integers. To achieve that, we can do something simple like this:

List<String> integers = Arrays.asList("44", "373", "145");
integers.forEach(str -> System.out.println(Integer.parseInt(str)));

The above snippet will work perfectly, but what happens if we modify the input to contain an illegal string, say "xyz"? The method parseInt() will throw a NumberFormatException, which is a type of unchecked exception. A naive solution, one that is typically seen, is to wrap the call in a try/catch block and handle it. That would look like this:

List<String> integers = Arrays.asList("44", "373", "xyz", "145");
integers.forEach(str -> {
    try {
        System.out.println(Integer.parseInt(str));
    } catch (NumberFormatException ex) {
        System.err.println("Can't format this string");
    }
});

While this works, it defeats the purpose of writing small lambdas to make code readable and less verbose. The solution that comes to mind is to wrap the lambda in another lambda that does the exception handling for you, but that is basically just moving the exception handling code somewhere else:

static Consumer<String> exceptionHandledConsumer(Consumer<String> unhandledConsumer) {
    return obj -> {
        try {
            unhandledConsumer.accept(obj);
        } catch (NumberFormatException e) {
            System.err.println("Can't format this string");
        }
    };
}

public static void main(String[] args) {
    List<String> integers = Arrays.asList("44", "xyz", "145");
    integers.forEach(exceptionHandledConsumer(str -> System.out.println(Integer.parseInt(str))));
}

The above solution can be made much, much better by using generics. Let’s build a generic exception-handled consumer that can handle all kinds of exceptions. We will then be able to use it for many different use cases within our application. We can make use of the above code to build out our generic implementation. I will not go into the details of how generics work, but a good implementation would look like this:

static <Target, ExObj extends Exception> Consumer<Target> handledConsumer(Consumer<Target> targetConsumer, Class<ExObj> exceptionClazz) {
    return obj -> {
        try {
            targetConsumer.accept(obj);
        } catch (Exception ex) {
            try {
                // If the thrown exception is of the expected type, handle it here.
                ExObj exCast = exceptionClazz.cast(ex);
                System.err.println("Exception occurred : " + exCast.getMessage());
            } catch (ClassCastException ccEx) {
                // Not the exception type we were asked to handle - rethrow it.
                throw ex;
            }
        }
    };
}

As you can see, this new consumer is not bound to any particular type of object it consumes, and it accepts the type of Exception your code might throw as a parameter. We can now simply use the handledConsumer method to build our consumers. The code for parsing our list of Strings to Integers will now be this:

List<String> integers = Arrays.asList("44", "373", "xyz", "145");
integers.forEach(
    handledConsumer(str -> System.out.println(Integer.parseInt(str)), NumberFormatException.class));

If you have a different block of code that may throw a different exception, you can just reuse the above method. For example, the code below takes care of ArithmeticException due to a divide by zero.
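Reusing the generic method above, that snippet would plausibly look something like the sketch below - the divisor values are made up for illustration, and handledConsumer is assumed to be in scope:

List<Integer> divisors = Arrays.asList(4, 2, 0, 5);
// 100 / 0 throws ArithmeticException, which handledConsumer catches and reports;
// for non-zero divisors the quotient is printed as usual.
divisors.forEach(
    handledConsumer(d -> System.out.println(100 / d), ArithmeticException.class));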
https://medium.com/swlh/exception-handling-in-java-streams-5947e48f671c
['Arindam Roy']
2019-09-02 12:19:51.740000+00:00
['Software Development', 'Programming', 'Software Engineering', 'Java']
Title Exception Handling Java StreamsContent Unchecked Exceptions Let’s take example use stream given list string want convert integer achieve something simple like ListString integer ArraysasList44 373 145 integersforEachstr SystemoutprintlnIntegerparseIntstr snippet work perfectly happens modify input contain illegal string say xyz method parseInt throw NumberFormatException type unchecked exception naive solution one typically seen wrap call trycatch block handle would look like ListString integer ArraysasList44 373 xyz 145 integersforEachstr try SystemoutprintlnIntegerparseIntstr catch NumberFormatException ex SystemerrprintlnCant format string work defeat purpose writing small lambda make code readable le verbose solution come mind wrap lambda around another lambda exception handling basically moving exception handling code somewhere else static ConsumerString exceptionHandledConsumerConsumerString unhandledConsumer return obj try unhandledConsumeracceptobj catch NumberFormatException e Systemerrprintln Cant format string public static void mainString args ListString integer ArraysasList44 xyz 145 integersforEachexceptionHandledConsumerstr SystemoutprintlnIntegerparseIntstr solution made much much better using generic Let’s build generic exception handled consumer handle kind exception able use many different use case within application make use code build generic implementation go detail generic work good implementation would look like static Target ExObj extends Exception ConsumerTarget handledConsumerConsumerTarget targetConsumer ClassExObj exceptionClazz return obj try targetConsumeracceptobj catch Exception ex try ExObj exCast exceptionClazzcastex Systemerrprintln Exception occured exCastgetMessage catch ClassCastException ccEx throw ex see new consumer bound particular type object consumes accepts type Exception code might throw parameter simply use handledConsumer method build consumer code parsing list Strings Integers ListString integer ArraysasList44 373 xyz 145 integersforEach handledConsumerstr SystemoutprintlnIntegerparseIntstr NumberFormatExceptionclass different block code may throw different exception reuse method example code take care ArithmeticException due divide zeroTags Software Development Programming Software Engineering Java